A DEEPGAIT FEATURE EXTRACTION VIA MAXIMUM ACTIVATED CHANNEL LOCALIZATION AND ANALYTICAL STUDY ON MULTI-VIEW LARGE POPULATION GAIT DATASETS / SALISU MUHAMMED; SUPERVISOR: ASSOC. PROF. DR. ERBUĞ ÇELEBI
Language: English
Year: 2021
Description: 103 sheets; 31 cm. Includes CD.
Content type:
- text
- unmediated
- volume
| Material type | Current library | Collection | Call number | Status | Notes | Due date | Barcode | Item holds |
|---|---|---|---|---|---|---|---|---|
| Thesis | CIU LIBRARY Thesis Collection | Thesis Collection | D 267 M94 2021 | Available | Computer Engineering Department | | T2535 | |
| Suppl. CD | CIU LIBRARY Audio-Visual | | D 267 M94 2021 | Available | Computer Engineering Department | | CDT2535 | |
Thesis (PhD) - Cyprus International University, Institute of Graduate Studies and Research, Computer Engineering Department
Includes bibliography (sheets 99-103)
ABSTRACT
In this study, a novel maximum activated channel localization framework was created for extracting DeepGait features. In addition, because models with fewer operations help realize the performance of intelligent computing systems, a Channel-Activated Mapping Network (CAMNet) was proposed for DeepGait feature extraction with fewer operations and without dimension decomposition. More explicitly, CAMNet is composed of an improved GEINet (three progressive triplets of convolution, batch normalization, and ReLU layers, with two internal max-pooling layers) followed by an external max-pooling layer that captures the spatio-temporal information of multiple frames in one gait period. We conducted experiments to validate the effectiveness of the proposed algorithm for cross-view gait recognition in both cooperative and uncooperative settings using the state-of-the-art Osaka University Multi-View Large Population (OU-MVLP) dataset, which includes 10,307 subjects. As a result, we confirmed that CAMNet+KNN significantly outperformed state-of-the-art approaches on the same dataset at the rear view angles of 180°, 195°, 210°, and 225°, in both cooperative and uncooperative settings. The study also gives a comprehensive insight into the natural adversaries found in a multi-view large population dataset. Based on the analyses carried out on the OU-MVLP dataset, we found that capturing gait frames at a view angle of 45° most often yields an equal number of frames across multiple sequences, with 30° being the second view angle that also does so. In terms of age groups, the 9-12 group was found to have the highest percentage of subjects with an equal number of frames between the two sequences.
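The abstract only sketches the CAMNet architecture, so the following is a minimal, hedged sketch of a CAMNet-style feature extractor in PyTorch. The channel counts, kernel sizes, and input resolution are illustrative assumptions (loosely modeled on GEINet) rather than values taken from the thesis, and the class name `CAMNetSketch` is hypothetical.

```python
import torch
import torch.nn as nn


class CAMNetSketch(nn.Module):
    """Hypothetical sketch of a CAMNet-style extractor (assumed layer sizes)."""

    def __init__(self):
        super().__init__()
        # Three progressive conv -> batch-norm -> ReLU triplets with two
        # internal max-pooling layers, loosely following the improved GEINet
        # described in the abstract (filter counts are assumptions).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 18, kernel_size=7), nn.BatchNorm2d(18), nn.ReLU(),
            nn.MaxPool2d(2),                         # first internal max-pool
            nn.Conv2d(18, 45, kernel_size=5), nn.BatchNorm2d(45), nn.ReLU(),
            nn.MaxPool2d(2),                         # second internal max-pool
            nn.Conv2d(45, 64, kernel_size=3), nn.BatchNorm2d(64), nn.ReLU(),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, 1, H, W) silhouette frames from one gait period.
        per_frame = self.backbone(frames)            # (T, C, h, w)
        # External max-pooling over the T frames aggregates the
        # spatio-temporal information of the gait period into one feature map.
        pooled, _ = per_frame.max(dim=0)             # (C, h, w)
        return pooled.flatten()                      # DeepGait feature vector


# Example: extract a feature vector from 30 frames of 128x88 silhouettes.
if __name__ == "__main__":
    model = CAMNetSketch().eval()
    with torch.no_grad():
        feature = model(torch.rand(30, 1, 128, 88))
    print(feature.shape)
```

In the thesis, the extracted features are matched with a K-nearest-neighbour classifier (CAMNet+KNN); any off-the-shelf KNN, for example scikit-learn's `KNeighborsClassifier`, could play that role in this sketch.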