Show simple item record

dc.contributor.author: Başaran, Emrah
dc.contributor.author: Gökmen, Muhittin
dc.contributor.author: Kamasak, Mustafa E.
dc.date.accessioned: 2020-08-07T04:42:16Z
dc.date.available: 2020-08-07T04:42:16Z
dc.date.issued: 2020
dc.identifier.citation: Başaran, E., Gökmen, M., & Kamasak, M. E. (2020). An efficient framework for visible-infrared cross modality person re-identification. Signal Processing: Image Communication, 87, 1-11.
dc.identifier.issn: 0923-5965
dc.identifier.issn: 1879-2677
dc.identifier.uri: https://doi.org/10.1016/j.image.2020.115933
dc.identifier.uri: https://hdl.handle.net/20.500.11779/1346
dc.description.abstract: Visible-infrared cross-modality person re-identification (VI-ReId) is an essential task for video surveillance in poorly illuminated or dark environments. Despite many recent studies on person re-identification in the visible domain (ReId), there are few studies dealing specifically with VI-ReId. Besides challenges that are common to both ReId and VI-ReId, such as pose/illumination variations, background clutter, and occlusion, VI-ReId has additional challenges because color information is not available in infrared images. As a result, the performance of VI-ReId systems is typically lower than that of ReId systems. In this work, we propose a four-stream framework to improve VI-ReId performance. We train a separate deep convolutional neural network in each stream using different representations of the input images, expecting that different and complementary features will be learned from each stream. In our framework, grayscale and infrared input images are used to train the ResNet in the first stream. In the second stream, RGB and three-channel infrared images (created by repeating the infrared channel) are used. In the remaining two streams, we use local pattern maps as input images. These maps are generated using the local Zernike moments transformation: from grayscale and infrared images in the third stream, and from RGB and three-channel infrared images in the last stream. We further improve the performance of the proposed framework by employing a re-ranking algorithm for post-processing. Our results indicate that the proposed framework outperforms the current state of the art by a large margin, improving Rank-1/mAP by 29.79%/30.91% on the SYSU-MM01 dataset and by 9.73%/16.36% on the RegDB dataset.
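The input preparation described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the function name, array shapes, and grayscale weights are assumptions, and the local Zernike moments transform used by streams three and four is omitted. Only the three-channel infrared construction (repeating the single infrared channel) is stated explicitly in the abstract.

```python
import numpy as np

def prepare_stream_inputs(rgb, ir):
    """Build the first two per-stream input pairs described in the abstract.

    rgb: (H, W, 3) uint8 visible-light image
    ir:  (H, W)    uint8 single-channel infrared image
    """
    # Stream 1: grayscale visible image paired with the raw infrared channel.
    # (Standard luminance weights; the paper's exact conversion is not given here.)
    gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)
    # Stream 2: RGB image paired with a three-channel infrared image,
    # created by repeating the infrared channel (as stated in the abstract).
    ir3 = np.repeat(ir[..., None], 3, axis=-1)
    # Streams 3 and 4 would apply the local Zernike moments transform
    # to these pairs; that transform is not sketched here.
    return {"stream1": (gray, ir), "stream2": (rgb, ir3)}

# Usage with random stand-in data
rgb = np.random.randint(0, 256, (256, 128, 3), dtype=np.uint8)
ir = np.random.randint(0, 256, (256, 128), dtype=np.uint8)
inputs = prepare_stream_inputs(rgb, ir)
print(inputs["stream2"][1].shape)  # (256, 128, 3)
```

Each pair would then feed a separate ResNet, with the streams' features expected to be complementary.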
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation.ispartof: Signal Processing: Image Communication
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Person re-identification
dc.subject: Cross modality person re-identification
dc.subject: Local Zernike moments
dc.title: An efficient framework for visible-infrared cross modality person re-identification
dc.type: article
dc.department: Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü (Faculty of Engineering, Department of Computer Engineering)
dc.authorid: Muhittin Gökmen / 0000-0001-7290-199X
dc.identifier.volume: 87
dc.identifier.startpage: 1
dc.identifier.endpage: 11
dc.relation.publicationcategory: Article - International Refereed Journal - Institutional Faculty Member
dc.description.wosid: WOS:000551127300017
dc.description.scopusid: 2-s2.0-85087420174
dc.contributor.institutionauthor: Gökmen, Muhittin
dc.description.woscitationindex: Science Citation Index Expanded
dc.identifier.wosquality: Q2
dc.description.WoSDocumentType: Article
dc.description.WoSInternationalCollaboration: No international collaboration - NO
dc.description.WoSPublishedMonth: September
dc.description.WoSIndexDate: 2020
dc.description.WoSYOKperiod: YÖK - 2020-21
dc.identifier.doi: 10.1016/j.image.2020.115933
dc.identifier.scopusquality: Q1

