Title:
Improved Bounds for Learning Symmetric Juntas

Author(s)
Lipton, Richard J.
Markakis, Evangelos
Advisor(s)
Mehta, Aranyak
Abstract
We consider a fundamental problem in computational learning theory: learning in the presence of irrelevant information. In particular, we are interested in learning an arbitrary boolean function of n variables that depends on only k of them. The problem has many interesting applications in artificial intelligence, neural networks, and machine learning theory. It was first proposed by Blum [1] and Blum and Langley [2]. The first nontrivial algorithm was given by [4]; it runs in time n^{0.7k} poly(log 1/δ, 2^k, n) for general juntas and n^{(2/3)k} poly(log 1/δ, 2^k, n) for symmetric juntas. We give an algorithm for symmetric juntas that runs in time n^{(k/3)(1+o(1))} poly(log 1/δ, 2^k, n). We further show that when k is larger than some sufficiently large constant, the algorithm runs in time n^{0.18k} poly(log 1/δ, 2^k, n). To our knowledge, this is the best known upper bound for learning symmetric juntas under the uniform distribution. The same algorithm has also been proposed by [5], where it was shown that the running time is bounded by n^{(k/2)(1+o(1))} poly(log 1/δ, 2^k, n), and that under a certain number-theoretic assumption the running time is n^{k/3} poly(log 1/δ, 2^k, n).
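To make the problem setting concrete, here is a minimal sketch (not the paper's algorithm) of what a symmetric k-junta is: a boolean function of n variables whose value depends only on the Hamming weight of its k relevant variables. The function `make_symmetric_junta` and its parameters are hypothetical names chosen for illustration.

```python
# Hypothetical illustration of a symmetric k-junta: a boolean function of
# n variables that depends only on the Hamming weight (number of ones)
# among k "relevant" variables; the remaining n - k variables are ignored.

def make_symmetric_junta(relevant, weight_table):
    """relevant: indices of the k relevant variables;
    weight_table: output value for each possible weight 0..k."""
    def f(x):
        weight = sum(x[i] for i in relevant)  # only the relevant bits matter
        return weight_table[weight]
    return f

# Example with n = 5, k = 2: f(x) = 1 iff exactly one of x[1], x[3] is set.
f = make_symmetric_junta([1, 3], [False, True, False])

assert f([0, 1, 0, 0, 1]) == True   # weight of relevant bits is 1
assert f([1, 1, 1, 1, 0]) == False  # weight of relevant bits is 2
```

The learning task is then: given uniformly random labeled examples (x, f(x)), identify the k relevant variables among all n, which is where the n^{ck}-type running times in the abstract come from.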
Date Issued
2003
Extent
175213 bytes
Resource Type
Text
Resource Subtype
Technical Report