Filzmoser, P., Brodinova, S., Ortner, T., Breiteneder, C., & Rohm, M. (2020). Robust and sparse k-means clustering in high dimension. Seminarvortrag an der JKU Linz, Linz, Austria. http://hdl.handle.net/20.500.12708/123091
Organisational units:
E105-06 - Research Unit Computational Statistics
E193-02 - Research Unit Computer Graphics
E193-06 - Research Unit Interactive Media Systems
-
Date (published):
2020
-
Event name:
Seminarvortrag an der JKU Linz (seminar talk at JKU Linz)
-
Event date:
23-Jan-2020 - 24-Jan-2020
-
Event place:
Linz, Austria
-
Abstract:
We introduce a robust k-means-based clustering method for high-dimensional data in which not only outliers but also a large number of noise variables are likely to be present [4]. Although Kondo et al. [2] have already addressed such a scenario, our approach goes further. First, the method is designed to identify clusters, informative variables, and outliers simultaneously. Second, it also optimizes the required parameters, e.g. the number of clusters, which is a great advantage over most existing methods. Robustness is achieved through a robust initialization [3] and a proposed weighting function based on the Local Outlier Factor [1]; this weighting function also quantifies the outlyingness of each observation for subsequent outlier detection. To reveal both clusters and informative variables, the approach uses a lasso-type penalty [5]. The method has been thoroughly tested on simulated as well as real high-dimensional datasets, and the experiments demonstrate its ability to identify clusters, outliers, and informative variables.