Inter-modality Relationship Constrained Multi-Task Feature Selection for AD/MCI Classification

Feng Liu1,2, Chong-Yaw Wee2, Huafu Chen1, and Dinggang Shen2

1 Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Sichuan, China
2 Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA
Abstract. In conventional multi-modality classification frameworks, feature selection is typically performed separately for each modality, ignoring the potentially strong inter-modality relationships within the same subject. To exploit these relationships, an L2,1-norm-based multi-task learning approach can be used to jointly select common features across different modalities. Unfortunately, this approach overlooks the distinct yet complementary information conveyed by each modality. To address this issue, we propose a novel multi-task feature selection method that effectively preserves the complementary information between modalities, thereby improving brain disease classification accuracy. Specifically, by treating the feature selection procedure of each modality as a separate task, we introduce a new constraint that preserves the inter-modality relationship: the distances between feature vectors from different modalities are preserved after projection into the low-dimensional feature space. We evaluated our method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and obtained significant improvements in Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI) classification compared to state-of-the-art methods.

Keywords: Alzheimer's Disease, Multi-task learning, Sparse representation, Multi-modality, Multi-kernel support vector machine

LNCS 8149, p. 308 ff. lncs@springer.com
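To make the joint-selection idea concrete, the sketch below shows a minimal proximal-gradient solver for the baseline L2,1-norm multi-task formulation the abstract builds on (without the paper's additional inter-modality distance constraint). All names here are illustrative, not from the paper; each modality is one task, columns of W are per-task weight vectors, and the row-wise shrinkage zeroes a feature jointly across all modalities.

```python
import numpy as np

def prox_l21(W, t):
    """Proximal operator of t * ||W||_{2,1}: row-wise soft-thresholding.
    Rows with small L2 norm are set to zero, so a feature is either
    selected for all tasks (modalities) or discarded for all of them."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale

def multitask_feature_select(Xs, ys, lam=0.1, n_iter=1000):
    """Illustrative proximal-gradient solver for
        min_W  sum_t 0.5 * ||X_t w_t - y_t||^2  +  lam * ||W||_{2,1}.
    Xs: list of (n_samples, n_features) arrays, one per modality/task.
    ys: list of (n_samples,) target vectors.
    Returns W of shape (n_features, n_tasks); nonzero rows mark the
    jointly selected features."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    # step size 1/L from a bound on the Lipschitz constant of the smooth part
    L = max(np.linalg.norm(X, 2) ** 2 for X in Xs)
    for _ in range(n_iter):
        G = np.column_stack([X.T @ (X @ W[:, t] - y)
                             for t, (X, y) in enumerate(zip(Xs, ys))])
        W = prox_l21(W - G / L, lam / L)
    return W
```

On synthetic data where two modalities share the same sparse support, the rows of W corresponding to the true support retain large norms while the remaining rows shrink to zero, which is exactly the joint-sparsity behavior the L2,1 penalty is chosen for. The paper's proposed method additionally constrains the projections so that cross-modality distances between a subject's feature vectors are preserved, which this baseline sketch does not include.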