File(s) not publicly available
TRF: Learning Kernels with Tuned Random Features
Version 2 2023-10-06, 04:27
Version 1 2023-02-21, 04:53
journal contribution
posted on 2023-10-06, 04:27 authored by Alistair Shilton, Sunil Gupta, Santu Rana, Arun Kumar Venkatesh, Svetha Venkatesh

Random Fourier features (RFF) are a popular set of tools for constructing low-dimensional approximations of translation-invariant kernels, allowing kernel methods to be scaled to big data. Apart from their computational advantages, by working in the spectral domain, random Fourier features expose the translation-invariant kernel as a density function that may, in principle, be manipulated directly to tune the kernel. In this paper we propose selecting the density function from a reproducing kernel Hilbert space, allowing us to search the space of all translation-invariant kernels. Our approach, which we call tuned random features (TRF), achieves this by approximating the density function as the RKHS-norm regularised least-squares best fit to an unknown "true" optimal density function, resulting in an RFF formulation where kernel selection is reduced to regularised risk minimisation with a novel regulariser. We derive bounds on the Rademacher complexity for our method, showing that our random features approximation converges to optimal kernel selection in the large N, D limit. Finally, we present experimental results for a variety of real-world learning problems, demonstrating the performance of our approach relative to comparable methods.
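The abstract above builds on the standard random Fourier features construction, in which frequencies are sampled from the kernel's spectral density. As context, here is a minimal sketch of plain (untuned) RFF approximating an RBF kernel; the function name and parameters are illustrative, not taken from the paper, and TRF itself additionally learns the sampling density rather than fixing it:

```python
import numpy as np

def rff_features(X, D, gamma, rng):
    """Map X (n x d) to D random Fourier features for the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2).

    By Bochner's theorem, this kernel's spectral density is a Gaussian
    with per-dimension variance 2 * gamma, so we sample frequencies W
    from that density and apply a random phase shift b.
    """
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))

# Inner products of the feature maps approximate the exact kernel matrix.
Z = rff_features(X, D=2000, gamma=0.5, rng=rng)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
max_err = np.abs(K_approx - K_exact).max()
```

The approximation error shrinks as the number of features D grows, which is the large-D regime referenced in the abstract's convergence result.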
History
Journal
Proceedings of the AAAI Conference on Artificial Intelligence
Volume
36
Pagination
8286-8294
ISSN
2159-5399
eISSN
2374-3468
Issue
8
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)