
NCJRS Virtual Library


Dual Low-Rank Decompositions for Robust Cross-View Learning

NCJ Number
304130
Author(s)
Zhengming Ding; Yun Fu
Date Published
January 2019
Length
11 pages
Annotation

To address the limitations and divergence inherent in cross-view data, this project proposes a robust cross-view learning framework that seeks a view-invariant low-dimensional space; central to the framework is a dual low-rank decomposition technique that disentangles the intertwined manifold structures from one another in the learned space.


Abstract

Cross-view data are now commonplace, as different viewpoints or sensors represent data richly in multiple views; however, such data exhibit significant divergence: cross-view samples from the same category can be less similar than samples from different categories within the same view. Each cross-view sample can be considered as drawn from two intertwined manifold structures, i.e., a class manifold and a view manifold. The proposed dual low-rank decomposition disentangles these structures, and two discriminative graphs are designed to constrain the dual low-rank decompositions by fully exploiting prior knowledge. The resulting algorithm captures more within-class knowledge and mitigates the view divergence, yielding a more effective view-invariant feature extractor. Furthermore, the proposed method flexibly handles the challenging cross-view learning scenario in which view information is available only for the training data and unknown for the evaluation data. Experiments on face and object benchmarks demonstrate the effective performance of the designed model over state-of-the-art algorithms. (publisher abstract modified)
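The core idea of splitting data into two low-rank components can be illustrated with a minimal sketch. This is not the paper's algorithm (which learns a projection under discriminative graph constraints); it only shows the generic pattern of decomposing a data matrix into two low-rank parts plus a sparse residual by alternating singular-value thresholding. All names and parameters here (`svt`, `dual_low_rank`, `tau1`, `tau2`, `lam`) are illustrative assumptions, not from the source.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the proximal operator of the
    nuclear norm, which shrinks singular values toward zero and
    thereby encourages a low-rank result."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def dual_low_rank(X, tau1=1.0, tau2=1.0, lam=0.1, n_iter=50):
    """Toy alternating scheme: X is approximated as L1 + L2 + E,
    where L1 and L2 are two low-rank components (stand-ins for the
    class and view structures) and E is a sparse residual."""
    L1 = np.zeros_like(X)
    L2 = np.zeros_like(X)
    E = np.zeros_like(X)
    for _ in range(n_iter):
        # Update each low-rank component with the others held fixed.
        L1 = svt(X - L2 - E, tau1)
        L2 = svt(X - L1 - E, tau2)
        # Sparse residual via elementwise soft-thresholding.
        R = X - L1 - L2
        E = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L1, L2, E
```

In this toy setting the two thresholds `tau1` and `tau2` play the role of separating criteria; the actual paper instead relies on the two discriminative graphs to tell the class manifold apart from the view manifold.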