Int J Comput Vis (2014) 109:74–93 DOI 10.1007/s11263-014-0696-6
Generalized Transfer Subspace Learning Through Low-Rank Constraint
Ming Shao · Dmitry Kit · Yun Fu
Received: 31 March 2013 / Accepted: 2 January 2014 / Published online: 31 January 2014
© Springer Science+Business Media New York 2014
Abstract It is expensive to obtain labeled real-world visual data for use in training supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of the source and target domains is preserved. This approach has three benefits. First, good alignment between the domains is ensured, because only relevant data in some subspace of the source domain are used to reconstruct the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information is filtered out during knowledge transfer. Extensive experiments on synthetic data and on important computer vision problems, such as face recognition and visual domain adaptation for object recognition, demonstrate the superiority of the proposed approach over existing, well-established methods.
Keywords Transfer learning · Domain adaptation · Low-rank constraint · Subspace learning
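The core step described in the abstract, representing each projected target sample as a combination of source samples under a low-rank constraint on the coefficient matrix, can be posed as a nuclear-norm-relaxed program of the form min_Z ||Z||_* s.t. X_t = X_s Z, once a shared projection has been fixed. The sketch below is only an illustration of that reconstruction step, not the paper's full algorithm (which also learns the projection jointly): it solves the relaxed problem with a standard ADMM / singular-value-thresholding scheme, and the names Xs, Xt, Z, and low_rank_transfer are our own.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_rank_transfer(Xs, Xt, mu=1.0, rho=1.2, mu_max=1e6, tol=1e-6, max_iter=500):
    """Solve  min_Z ||Z||_*  s.t.  Xt = Xs @ Z  by ADMM (inexact ALM).

    Xs: d x ns matrix of (projected) source samples, one per column.
    Xt: d x nt matrix of (projected) target samples, one per column.
    Returns the low-rank coefficient matrix Z (ns x nt): each target
    sample is reconstructed as a combination of source samples.
    """
    ns, nt = Xs.shape[1], Xt.shape[1]
    Z = np.zeros((ns, nt))
    J = np.zeros((ns, nt))      # auxiliary variable, Z = J at convergence
    Y1 = np.zeros_like(Xt)      # multiplier for Xs @ Z = Xt
    Y2 = np.zeros((ns, nt))     # multiplier for Z = J
    XstXs = Xs.T @ Xs
    I = np.eye(ns)
    for _ in range(max_iter):
        # J-step: nuclear-norm prox enforces the low-rank structure
        J = svt(Z + Y2 / mu, 1.0 / mu)
        # Z-step: closed-form ridge-type linear system
        rhs = Xs.T @ (Xt - Y1 / mu) + J - Y2 / mu
        Z = np.linalg.solve(XstXs + I, rhs)
        # dual ascent on both constraints
        r1 = Xs @ Z - Xt
        r2 = Z - J
        Y1 += mu * r1
        Y2 += mu * r2
        mu = min(rho * mu, mu_max)
        if max(np.abs(r1).max(), np.abs(r2).max()) < tol:
            break
    return Z
```

On synthetic data with a shared low-dimensional structure, the recovered coefficients exhibit the expected low rank:

```python
rng = np.random.default_rng(0)
basis = rng.standard_normal((20, 3))        # shared 3-dim structure
Xs = basis @ rng.standard_normal((3, 40))   # source samples
Xt = basis @ rng.standard_normal((3, 15))   # target samples
Z = low_rank_transfer(Xs, Xt)
s = np.linalg.svd(Z, compute_uv=False)
print(s[:5])  # a sharp drop after the 3rd value reveals the low-rank structure
```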
1 Introduction
Visual classification tasks often suffer from insufficient labeled data, because such data are either too costly to obtain or too expensive to hand-label. For that reason, researchers use labeled, yet relevant, data from different databases to facilitate the learning process. A common assumption is that consistency exists between training and test data, meaning that the training and test data should have similar distributions or shared subspaces. This assumption is often wrong, especially in complex applications. Below are a few examples:
In image annotation, due to high labor cost, we expect to reuse images that have already been annotated. However, test images from the target domain are either obtained under different...