CRTI - Centre de Recherche en Technologie de l'Information
Disciplines :
Computer science; Mathematics
Author, co-author :
De Handschutter, Pierre ; Université de Mons > Recherche > Service ERC Unit - Matrix Theory and Optimization
Gillis, Nicolas ; Université de Mons > Faculté Polytechnique > Service de Mathématique et Recherche opérationnelle
Siebert, Xavier ; Université de Mons > Faculté Polytechnique > Service de Mathématique et Recherche opérationnelle
Language :
English
Title :
A Survey on Deep Matrix Factorizations
Publication date :
10 August 2021
Journal title :
Computer Science Review
ISSN :
1574-0137
Publisher :
Elsevier, Netherlands
Volume :
42
Peer reviewed :
Peer reviewed
Research unit :
F151 - Mathématique et Recherche opérationnelle
Research institute :
R300 - Institut de Recherche en Technologies de l'Information et Sciences de l'Informatique
R450 - Institut NUMEDIART pour les Technologies des Arts Numériques
Udell, M., Horn, C., Zadeh, R., Boyd, S., Generalized low rank models. Found. Trends Mach. Learn. 9:1 (2016), 1–118.
Udell, M., Townsend, A., Why are big data matrices approximately low rank?. SIAM J. Math. Data Sci. 1:1 (2019), 144–160.
Wold, S., Esbensen, K., Geladi, P., Principal component analysis. Chemometr. Intell. Lab. Syst. 2:1–3 (1987), 37–52.
Golub, G.H., Reinsch, C., Singular value decomposition and least squares solutions. Linear Algebra, 1971, Springer, 134–151.
Papyan, V., Romano, Y., Sulam, J., Elad, M., Theoretical foundations of deep learning via sparse representations: A multilayer sparse model and its connection to convolutional neural networks. IEEE Signal Process. Mag. 35:4 (2018), 72–89.
Georgiev, P., Theis, F., Cichocki, A., Sparse component analysis and blind source separation of underdetermined mixtures. IEEE Trans. Neural Netw. 16:4 (2005), 992–996.
Lee, D.D., Seung, H.S., Learning the parts of objects by non-negative matrix factorization. Nature 401:6755 (1999), 788–791.
Marcus, G., Deep learning: A critical appraisal. 2018 arXiv preprint arXiv:1801.00631.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y., Generative adversarial nets. Advances in Neural Information Processing Systems, 2014, 2672–2680.
Trigeorgis, G., Bousmalis, K., Zafeiriou, S., Schuller, B., A deep matrix factorization method for learning attribute representations. IEEE Trans. Pattern Anal. Mach. Intell. 39:3 (2016), 417–429.
Gillis, N., Vavasis, S.A., Fast and robust recursive algorithms for separable nonnegative matrix factorization. IEEE Trans. Pattern Anal. Mach. Intell. 36:4 (2013), 698–714.
Févotte, C., Idier, J., Algorithms for nonnegative matrix factorization with the β-divergence. Neural Comput. 23:9 (2011), 2421–2456.
Wang, Y.-X., Zhang, Y.-J., Nonnegative matrix factorization: A comprehensive review. IEEE Trans. Knowl. Data Eng. 25:6 (2012), 1336–1353.
Kim, J., He, Y., Park, H., Algorithms for nonnegative matrix and tensor factorizations: A unified view based on block coordinate descent framework. J. Global Optim. 58:2 (2014), 285–319.
Fu, X., Huang, K., Sidiropoulos, N.D., Ma, W., Nonnegative matrix factorization for signal and data analytics: Identifiability, algorithms, and applications. IEEE Signal Process. Mag. 36:2 (2019), 59–80.
Gillis, N., The why and how of nonnegative matrix factorization. Regularization, Optimization, Kernels, and Support Vector Machines, 2014, Chapman and Hall/CRC, 257–291.
Cichocki, A., Zdunek, R., Phan, A.H., Amari, S., Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation. 2009, John Wiley & Sons.
Abdolali, M., Gillis, N., Simplex-structured matrix factorization: Sparsity-based identifiability and provably correct algorithms. 2020 arXiv preprint arXiv:2007.11446.
Miao, L., Qi, H., Endmember extraction from highly mixed data using minimum volume constrained nonnegative matrix factorization. IEEE Trans. Geosci. Remote Sens. 45:3 (2007), 765–777.
Ang, A.M.S., Gillis, N., Volume regularized non-negative matrix factorizations. 2018 9th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2018, IEEE, 1–5.
Hoyer, P.O., Non-negative sparse coding. Neural Networks for Signal Processing, 2002. Proceedings of the 2002 12th IEEE Workshop on, 2002, IEEE, 557–565.
Mørup, M., Hansen, L.K., Archetypal analysis for machine learning and data mining. Neurocomputing 80 (2012), 54–63.
De Handschutter, P., Gillis, N., Vandaele, A., Siebert, X., Near-convex archetypal analysis. IEEE Signal Process. Lett. 27 (2019), 81–85.
Javadi, H., Montanari, A., Nonnegative matrix factorization via archetypal analysis. J. Amer. Statist. Assoc., 2019, 1–22.
Vavasis, S.A., On the complexity of nonnegative matrix factorization. SIAM J. Opt. 20:3 (2009), 1364–1377.
Cichocki, A., Zdunek, R., Multilayer nonnegative matrix factorisation. Electron. Lett. 42:16 (2006), 947–948.
Cichocki, A., Zdunek, R., Multilayer nonnegative matrix factorization using projected gradient approaches. Int. J. Neural Syst. 17:06 (2007), 431–446.
Trigeorgis, G., Bousmalis, K., Zafeiriou, S., Schuller, B., A deep semi-NMF model for learning hidden representations. International Conference on Machine Learning, 2014, 1692–1700.
Yu, J., Zhou, G., Cichocki, A., Xie, S., Learning the hierarchical parts of objects by deep non-smooth nonnegative matrix factorization. 2018 arXiv preprint arXiv:1803.07226.
Dikmen, O., Yang, Z., Oja, E., Learning the information divergence. IEEE Trans. Pattern Anal. Mach. Intell. 37:7 (2014), 1442–1454.
Févotte, C., Bertin, N., Durrieu, J.-L., Nonnegative matrix factorization with the Itakura-Saito divergence: With application to music analysis. Neural Comput. 21:3 (2009), 793–830.
Leplat, V., Gillis, N., Ang, A.M.S., Blind audio source separation with minimum-volume beta-divergence NMF. IEEE Trans. Signal Process., 2020, 3400–3410.
C.H. Ding, T. Li, W. Peng, H. Park, Orthogonal nonnegative matrix t-factorizations for clustering, in: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006, pp. 126–135.
Pompili, F., Gillis, N., Absil, P.-A., Glineur, F., Two algorithms for orthogonal nonnegative matrix factorization with application to clustering. Neurocomputing 141 (2014), 15–25.
Li, B., Zhou, G., Cichocki, A., Two efficient algorithms for approximately orthogonal nonnegative matrix factorization. IEEE Signal Process. Lett. 22:7 (2014), 843–846.
Lyu, B., Xie, K., Sun, W., A deep orthogonal non-negative matrix factorization method for learning attribute representations. International Conference on Neural Information Processing, 2017, Springer, 443–452.
Qiu, Y., Zhou, G., Xie, K., Deep approximately orthogonal nonnegative matrix factorization for clustering. 2017 arXiv preprint arXiv:1711.07437.
Eggert, J., Korner, E., Sparse coding and NMF. 2004 IEEE International Joint Conference on Neural Networks, 4, 2004, IEEE, 2529–2533.
Kim, J., Park, H., Sparse Nonnegative Matrix Factorization for Clustering: Technical Report, 2008, Georgia Institute of Technology.
Gribonval, R., Jenatton, R., Bach, F., Sparse and spurious: dictionary learning with noise and outliers. IEEE Trans. Inform. Theory 61:11 (2015), 6298–6319.
Cohen, J.E., Gillis, N., Nonnegative low-rank sparse component analysis. ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, IEEE, 8226–8230.
Guo, Z., Zhang, S., Sparse deep nonnegative matrix factorization. Big Data Min. Anal. 3:1 (2019), 13–28.
Beck, A., Teboulle, M., A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2:1 (2009), 183–202.
Gillis, N., Sparse and unique nonnegative matrix factorization through data preprocessing. J. Mach. Learn. Res. 13:Nov (2012), 3349–3386.
Lyu, S., Wang, X., On algorithms for sparse multi-factor NMF. Advances in Neural Information Processing Systems, 2013, 602–610.
Peharz, R., Pernkopf, F., Sparse nonnegative matrix factorization with l0-constraints. Neurocomputing 80 (2012), 38–46.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R., Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15:1 (2014), 1929–1958.
J. Cavazza, P. Morerio, B. Haeffele, C. Lane, V. Murino, R. Vidal, Dropout as a low-rank regularizer for matrix factorization, in: International Conference on Artificial Intelligence and Statistics, 2018, pp. 435–444.
Pascual-Montano, A., Carazo, J.M., Kochi, K., Lehmann, D., Pascual-Marqui, R.D., Nonsmooth nonnegative matrix factorization (nsNMF). IEEE Trans. Pattern Anal. Mach. Intell. 28:3 (2006), 403–415.
Song, H.A., Lee, S.-Y., Hierarchical representation using NMF. International Conference on Neural Information Processing, 2013, Springer, 466–473.
Sharma, P., Abrol, V., Thakur, A., ASE: Acoustic scene embedding using deep archetypal analysis and GMM. Interspeech, 2018, 3299–3303.
Keller, S.M., Samarin, M., Wieser, M., Roth, V., Deep archetypal analysis. German Conference on Pattern Recognition, 2019, Springer, 171–185.
Alemi, A., Fischer, I., Dillon, J., Murphy, K., Deep variational information bottleneck. ICLR, 2017.
Li, X., Zhao, C., Shu, Z., Wang, Q., Multilayer concept factorization for data representation. 2015 10th International Conference on Computer Science & Education (ICCSE), 2015, IEEE, 486–491.
Zhang, Y., Zhang, Z., Zhang, Z., Zhao, M., Zhang, L., Zha, Z., Wang, M., Deep self-representative concept factorization network for representation learning. Proceedings of the 2020 SIAM International Conference on Data Mining, 2020, SIAM, 361–369.
Zhang, Z., Zhang, Y., Xu, M., Zhang, L., Yang, Y., Yan, S., A survey on concept factorization: From shallow to deep representation learning. Inf. Process. Manage., 58(3), 2021, 102534.
Meng, Y., Shang, R., Shang, F., Jiao, L., Yang, S., Stolkin, R., Semi-supervised graph regularized deep NMF with bi-orthogonal constraints for data representation. IEEE Trans. Neural Netw. Learn. Syst., 2019.
Sidiropoulos, N.D., De Lathauwer, L., Fu, X., Huang, K., Papalexakis, E.E., Faloutsos, C., Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 65:13 (2017), 3551–3582.
Bi, X., Qu, A., Shen, X., Multilayer tensor factorization with applications to recommender systems. Ann. Statist. 46:6B (2018), 3308–3333.
Casebeer, J., Colomb, M., Smaragdis, P., Deep tensor factorization for spatially-aware scene decomposition. 2019 IEEE Workshop on Applications of Signal Processing To Audio and Acoustics (WASPAA), 2019, IEEE, 180–184.
Smaragdis, P., Venkataramani, S., A neural network alternative to non-negative audio models. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, IEEE, 86–90.
Jia, C., Shao, M., Fu, Y., Sparse canonical temporal alignment with deep tensor decomposition for action recognition. IEEE Trans. Image Process. 26:2 (2016), 738–750.
Oymak, S., Soltanolkotabi, M., End-to-end learning of a convolutional neural network via deep tensor decomposition. 2018 arXiv preprint arXiv:1805.06523.
Domanov, I., De Lathauwer, L., Generic uniqueness conditions for the canonical polyadic decomposition and INDSCAL. SIAM J. Matrix Anal. Appl. 36:4 (2015), 1567–1589.
Maggu, J., Majumdar, A., Unsupervised deep transform learning. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, IEEE, 6782–6786.
Gillis, N., Successive nonnegative projection algorithm for robust nonnegative blind source separation. SIAM J. Imaging Sci. 7:2 (2014), 1420–1450.
Lee, D.D., Seung, H.S., Algorithms for non-negative matrix factorization. Advances in Neural Information Processing Systems, 2001, 556–562.
Ahn, J., Choi, S., Oh, J., A multiplicative up-propagation algorithm. Proceedings of the Twenty-First International Conference on Machine Learning, 2004, ACM, 3.
Nesterov, Y.E., A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR, 269, 1983, 543–547.
Huang, K., Sidiropoulos, N.D., Liavas, A.P., A flexible and efficient algorithmic framework for constrained matrix and tensor factorization. IEEE Trans. Signal Process. 64:19 (2016), 5052–5065.
Zhou, Y., Xu, L., A deep structure-enforced nonnegative matrix factorization for data representation. Chinese Conference on Pattern Recognition and Computer Vision (PRCV), 2018, Springer, 340–350.
Arora, S., Cohen, N., Hu, W., Luo, Y., Implicit regularization in deep matrix factorization. Advances in Neural Information Processing Systems, 2019, 7411–7422.
Fan, J., Cheng, J., Matrix completion by deep matrix factorization. Neural Netw. 98 (2018), 34–41.
Q. Wang, M. Sun, L. Zhan, P. Thompson, S. Ji, J. Zhou, Multi-modality disease modeling via collective deep matrix factorization, in: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 1155–1164.
Le Roux, J., Hershey, J.R., Weninger, F., Deep NMF for speech separation. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, IEEE, 66–70.
Koren, Y., Bell, R., Volinsky, C., Matrix factorization techniques for recommender systems. Computer 42:8 (2009), 30–37.
Bioucas-Dias, J.M., Plaza, A., Dobigeon, N., Parente, M., Du, Q., Gader, P., Chanussot, J., Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 5:2 (2012), 354–379.
Ma, W.-K., Bioucas-Dias, J.M., Chan, T.-H., Gillis, N., Gader, P., Plaza, A.J., Ambikapathi, A., Chi, C.-Y., A signal processing perspective on hyperspectral unmixing: Insights from remote sensing. IEEE Signal Process. Mag. 31:1 (2013), 67–81.
Data - RSLab, https://rslab.ut.ac.ir/data (Accessed on 09/09/2020).
Mongia, A., Jhamb, N., Chouzenoux, E., Majumdar, A., Deep latent factor model for collaborative filtering. Signal Process., 169, 2020, 107366.
Xue, H., Dai, X., Zhang, J., Huang, S., Chen, J., Deep matrix factorization models for recommender systems. IJCAI, 2017, 3203–3209.
Yi, B., Shen, X., Liu, H., Zhang, Z., Zhang, W., Liu, S., Xiong, N., Deep matrix factorization with implicit feedback embedding for recommendation system. IEEE Trans. Ind. Inf., 2019.
Yang, Y., Wang, H., Multi-view clustering: A survey. Big Data Min. Anal. 1:2 (2018), 83–107.
H. Zhao, Z. Ding, Y. Fu, Multi-view clustering via deep matrix factorization, in: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017, pp. 2921–2927.
B. Cui, H. Yu, T. Zhang, S. Li, Self-weighted multi-view clustering with deep matrix factorization, in: Asian Conference on Machine Learning, 2019, pp. 567–582.
Wei, S., Wang, J., Yu, G., Domeniconi, C., Zhang, X., Multi-view multiple clusterings using deep matrix factorization. AAAI, 2020, 6348–6355.
Xu, C., Guan, Z., Zhao, W., Niu, Y., Wang, Q., Wang, Z., Deep multi-view concept learning. IJCAI, 2018, 2898–2904.
Huang, S., Kang, Z., Xu, Z., Auto-weighted multi-view clustering via deep matrix decomposition. Pattern Recognit., 97, 2020, 107015.
Xiong, Y., Xu, Y., Shu, X., Cross-view hashing via supervised deep discrete matrix factorization. Pattern Recognit., 103, 2020, 107270.
J. Yang, J. Leskovec, Overlapping community detection at scale: a nonnegative matrix factorization approach, in: Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, 2013, pp. 587–596.
Ye, F., Chen, C., Zheng, Z., Deep autoencoder-like nonnegative matrix factorization for community detection. Proceedings of the 27th ACM International Conference on Information and Knowledge Management, 2018, ACM, 1393–1402.
Rajabi, R., Ghassemian, H., Spectral unmixing of hyperspectral imagery using multilayer NMF. IEEE Geosci. Remote Sens. Lett. 12:1 (2014), 38–42.
Tong, L., Yu, J., Xiao, C., Qian, B., Hyperspectral unmixing via deep matrix factorization. Int. J. Wavelets Multiresolut. Inf. Process., 15(06), 2017, 1750058.
Feng, X., Li, H., Li, J., Du, Q., Plaza, A., Emery, W.J., Hyperspectral unmixing using sparsity-constrained deep nonnegative matrix factorization with total variation. IEEE Trans. Geosci. Remote Sens. 56:10 (2018), 6245–6257.
Rudin, L.I., Osher, S., Fatemi, E., Nonlinear total variation based noise removal algorithms. Physica D 60:1–4 (1992), 259–268.
Zhao, G., Zhao, C., Jia, X., Multilayer unmixing for hyperspectral imagery with fast kernel archetypal analysis. IEEE Geosci. Remote Sens. Lett. 13:10 (2016), 1532–1536.
Gao, F., Liu, X., Dong, J., Zhong, G., Jian, M., Change detection in SAR images based on deep semi-NMF and SVD networks. Remote Sens., 9(5), 2017, 435.
Li, H., Yang, G., Yang, W., Du, Q., Emery, W.J., Deep nonsmooth nonnegative matrix factorization network with semi-supervised learning for SAR image change detection. ISPRS J. Photogramm. Remote Sens. 160 (2020), 167–179.
Sharma, P., Abrol, V., Sao, A.K., Deep sparse representation based features for speech recognition. IEEE/ACM Trans. Audio Speech Lang. Process. 25:11 (2017), 2162–2175.
Davis, S., Mermelstein, P., Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans. Acoust. Speech Signal Process. 28:4 (1980), 357–366.
C. Hsu, J. Chien, T. Chi, Layered nonnegative matrix factorization for speech separation, in: 16th Annual Conference of the International Speech Communication Association (Interspeech 2015), Vols 1-5, 2015, pp. 628–632.
Thakur, A., Abrol, V., Sharma, P., Rajan, P., Deep convex representations: Feature representations for bioacoustics classification. Interspeech, 2018, 2127–2131.
Thakur, A., Rajan, P., Deep archetypal analysis based intermediate matching kernel for bioacoustic classification. IEEE J. Sel. Top. Sign. Proces. 13:2 (2019), 298–309.
Ding, C., Li, T., Peng, W., On the equivalence between non-negative matrix factorization and probabilistic latent semantic indexing. Comput. Statist. Data Anal. 52:8 (2008), 3913–3927.
S. Arora, R. Ge, Y. Halpern, D. Mimno, A. Moitra, D. Sontag, Y. Wu, M. Zhu, A practical algorithm for topic modeling with provable guarantees, in: International Conference on Machine Learning, 2013, pp. 280–288.
Dobigeon, N., Tourneret, J., Richard, C., Bermudez, J.C.M., McLaughlin, S., Hero, A.O., Nonlinear unmixing of hyperspectral images: Models and algorithms. IEEE Signal Process. Mag. 31:1 (2013), 82–94.
Sainath, T.N., Kingsbury, B., Sindhwani, V., Arisoy, E., Ramabhadran, B., Low-rank matrix factorization for deep neural network training with high-dimensional output targets. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013, IEEE, 6655–6659.
Zhang, Y., Chuangsuwanich, E., Glass, J., Extracting deep neural network bottleneck features using low-rank matrix factorization. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, IEEE, 185–189.
Kang, T.G., Kwon, K., Shin, J.W., Kim, N.S., NMF-based target source separation using deep neural network. IEEE Signal Process. Lett. 22:2 (2014), 229–233.
Ng, A., Sparse autoencoder. CS294A Lecture Notes 72:2011 (2011), 1–19.
Lemme, A., Reinhart, R.F., Steil, J.J., Online learning and generalization of parts-based image representations by non-negative sparse autoencoders. Neural Netw. 33 (2012), 194–203.
Hosseini-Asl, E., Zurada, J.M., Nasraoui, O., Deep learning of part-based representation of data using sparse autoencoders with nonnegativity constraints. IEEE Trans. Neural Netw. Learn. Syst. 27:12 (2016), 2486–2498.
Flenner, J., Hunter, B., A deep non-negative matrix factorization neural network. Semant. Sch., 2017.
Tariyal, S., Majumdar, A., Singh, R., Vatsa, M., Deep dictionary learning. IEEE Access 4 (2016), 10096–10109.
van Dijk, D., Burkhardt, D.B., Amodio, M., Tong, A., Wolf, G., Krishnaswamy, S., Finding archetypal spaces using neural networks. 2019 IEEE International Conference on Big Data, 2019, IEEE, 2634–2643.
C. Bauckhage, K. Kersting, F. Hoppe, C. Thurau, Archetypal analysis as an autoencoder, in: Workshop New Challenges in Neural Computation, 2015, p. 8.
Razaviyayn, M., Hong, M., Luo, Z.-Q., A unified convergence analysis of block successive minimization methods for nonsmooth optimization. SIAM J. Optim. 23:2 (2013), 1126–1153.
Sun, R., Li, D., Liang, S., Ding, T., Srikant, R., The global landscape of neural networks: An overview. IEEE Signal Process. Mag. 37:5 (2020), 95–108.
Laurent, T., von Brecht, J., Deep linear networks with arbitrary loss: All local minima are global. International Conference on Machine Learning, 2018, PMLR, 2902–2907.
S. Arora, N. Golowich, N. Cohen, W. Hu, A convergence analysis of gradient descent for deep linear neural networks, in: 7th International Conference on Learning Representations, ICLR 2019, 2019.
Bartlett, P.L., Helmbold, D.P., Long, P.M., Gradient descent with identity initialization efficiently learns positive-definite linear transformations by deep residual networks. Neural Comput. 31:3 (2019), 477–502.
S. Arora, N. Cohen, E. Hazan, On the optimization of deep networks: Implicit acceleration by overparameterization, in: International Conference on Machine Learning, 2018, pp. 244–253.
S. Du, W. Hu, Width provably matters in optimization for deep linear neural networks, in: International Conference on Machine Learning, 2019, pp. 1655–1664.
O. Shamir, Exponential convergence time of gradient descent for one-dimensional deep linear neural networks, in: Conference on Learning Theory, 2019, pp. 2691–2713.
Gunasekar, S., Woodworth, B.E., Bhojanapalli, S., Neyshabur, B., Srebro, N., Implicit regularization in matrix factorization. Advances in Neural Information Processing Systems, 2017, 6151–6159.
Huang, K., Sidiropoulos, N.D., Swami, A., Non-negative matrix factorization revisited: Uniqueness and algorithm for symmetric decomposition. IEEE Trans. Signal Process. 62:1 (2013), 211–224.
Malgouyres, F., Landsberg, J., On the identifiability and stable recovery of deep/multi-layer structured matrix factorization. 2016 IEEE Information Theory Workshop (ITW), 2016, IEEE, 315–319.
Malgouyres, F., Landsberg, J., Multilinear compressive sensing and an application to convolutional linear networks. SIAM J. Math. Data Sci. 1:3 (2019), 446–475.
O. Seddati, S. Dupont, S. Mahmoudi, M. Parian, Towards good practices for image retrieval based on CNN features, in: Proceedings of the IEEE International Conference on Computer Vision Workshops, 2017, pp. 1246–1255.
Smaragdis, P., Non-negative matrix factor deconvolution; extraction of multiple sound sources from monophonic inputs. International Conference on Independent Component Analysis and Signal Separation, 2004, Springer, 494–499.