Unsupervised deep learning of foreground objects from low-rank and sparse dataset
http://hdl.handle.net/10069/0002000571
| Name / File | License | Action |
|---|---|---|
| [CVIU240_103939.pdf](https://nagasaki-u.repo.nii.ac.jp/record/2000571/files/CVIU240_103939.pdf) (2.4 MB, open access) | | |
| Field | Value |
|---|---|
| Item type | Journal Article (学術雑誌論文) |
| Publication date | 2024-02-13 |
| Title (en) | Unsupervised deep learning of foreground objects from low-rank and sparse dataset |
| Language | eng |
| Keywords (en; subject scheme: Other) | Nuclear loss; Dual frame U-Net; Low-rank and sparse model; Background subtraction |
| Resource type | journal article (http://purl.org/coar/resource_type/c_6501) |
| Authors | Takeda, Keita; Sakai, Tomoya |
| Abstract (en) | Foreground object identification can be considered as anomaly detection in a redundant background. This paper proposes unsupervised deep learning of foreground objects on the basis of prior knowledge about the spatio-temporal sparseness and low-rankness of foreground objects and background scenes. The proposed framework trains a U-Net model to encode and decode the sparse foreground objects in batches of input images with low-rank backgrounds, by minimizing a combination of nuclear and ℓ1 norms as a loss function. This approach is similar to background subtraction based on robust principal component analysis (RPCA): an iterative method that detects sparse foreground objects as outliers while learning the principal components of the linearly dependent background. In contrast, the proposed method is advantageous over RPCA in that once the U-Net model has learned enough features common to the foreground objects, it can robustly detect them from any single image regardless of the low-rankness and sparseness. The U-Net also enables online object segmentation with much less computational expense than that of RPCA. These advantages are illustrated with background subtraction in video surveillance. It is also shown that the proposed method can build up a well-generalized cell segmentation model from only a few dozen unannotated training images. |
| Bibliographic information | Computer Vision and Image Understanding, vol. 240, art. no. 103939 (issued 2024-01-23) |
| Publisher (en) | Elsevier Inc |
| ISSN | 1077-3142 |
| DOI | 10.1016/j.cviu.2024.103939 (relation type: isIdenticalTo) |
| Rights (en) | © 2024 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). |
| Version | VoR (http://purl.org/coar/version/c_970fb48d4fbd8a85) |
| Citation (en) | Computer Vision and Image Understanding, 240, art. no. 103939; 2024 |
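The loss described in the abstract combines a nuclear norm on the reconstructed background (encouraging low-rankness across the batch) with an ℓ1 norm on the estimated foreground (encouraging sparseness). A minimal numpy sketch of that objective follows; the function name, the weighting parameter `lam`, and the toy data are illustrative assumptions, not the authors' implementation (which trains a U-Net against this objective):

```python
import numpy as np

def low_rank_sparse_loss(X, S, lam=0.1):
    """Nuclear norm of the background X - S plus lam times the l1 norm of S."""
    B = X - S                                # background = input minus foreground
    nuclear = np.linalg.norm(B, ord="nuc")   # sum of singular values (low-rankness)
    sparse = np.abs(S).sum()                 # l1 norm (spatio-temporal sparseness)
    return nuclear + lam * sparse

# Toy batch: four identical background "frames" plus one bright foreground pixel.
X = np.outer(np.ones(4), np.array([1.0, 2.0, 3.0]))  # rank-1 background batch
F = np.zeros_like(X)
F[0, 0] = 10.0                                       # sparse foreground object
X_obs = X + F

# The correct foreground estimate leaves a rank-1 background, so it scores a
# lower loss than declaring the foreground empty.
good = low_rank_sparse_loss(X_obs, F)                # small nuclear term + l1 penalty
bad = low_rank_sparse_loss(X_obs, np.zeros_like(F))  # larger nuclear term, no penalty
```

Minimizing this quantity over batches is what lets the network trade off background redundancy against foreground sparseness without any annotations, which is the sense in which the training is unsupervised.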