Abstract:
Selective transfer has been proposed as a way to transfer fragments of knowledge between tasks. Previous work showed that transferring selectively from a group of source hypotheses speeds up learning on a target task. Similarly, existing hypotheses could benefit from selective backward transfer of recently acquired knowledge. This setting applies to supervised machine learning systems that observe a sequence of related tasks. We propose a novel scheme for bi-directional transfer between hypotheses learned sequentially using Support Vector Machines. Transfer occurs in two directions: forward and backward. During forward transfer, a new binary classification task is to be learned; existing knowledge is used to reinforce the importance of subspaces of the target training data that are related to source support vectors. While the target task is learned, subspaces of knowledge shared between each source hypothesis and the target hypothesis are identified. Representations of these subspaces are learned and used to refine the sources via backward transfer. Although fundamental, the problem of hypothesis refinement has received very limited attention; we define this problem and propose a solution. Our experiments show that a learning system using our scheme can gain up to 5.5 points in mean classification accuracy over tasks learned sequentially, within 26.6% of the number of iterations required to learn these tasks from scratch.
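
To make the two directions concrete, below is a minimal sketch of one possible reading of the forward and backward steps using scikit-learn SVMs. The similarity-based sample-weighting rule (weights_fwd, weights_bwd), the synthetic tasks, and all parameter values are illustrative assumptions, not the paper's learned subspace representations.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Two related binary tasks: the target's decision boundary is a slightly
# rotated version of the source's, so their support vectors overlap.
def make_task(angle, n=200):
    X = rng.normal(size=(n, 2))
    w = np.array([np.cos(angle), np.sin(angle)])
    y = (X @ w > 0).astype(int)
    return X, y

X_src, y_src = make_task(0.0)
X_tgt, y_tgt = make_task(0.3)

# Source hypothesis learned first.
source = SVC(kernel="rbf", gamma=1.0).fit(X_src, y_src)
sv_src = source.support_vectors_

# Forward transfer: up-weight target points that lie near source support
# vectors, so regions the source found informative get reinforced when
# training the target hypothesis (hypothetical weighting rule).
sim = rbf_kernel(X_tgt, sv_src, gamma=1.0).max(axis=1)
weights_fwd = 1.0 + sim
target = SVC(kernel="rbf", gamma=1.0).fit(X_tgt, y_tgt,
                                          sample_weight=weights_fwd)

# Backward transfer: treat source points near the target's support vectors
# as the shared subspace, and up-weight them when refining the source.
sv_tgt = target.support_vectors_
shared = rbf_kernel(X_src, sv_tgt, gamma=1.0).max(axis=1)
weights_bwd = 1.0 + shared
source_refined = SVC(kernel="rbf", gamma=1.0).fit(X_src, y_src,
                                                  sample_weight=weights_bwd)

print("source accuracy before refinement:", source.score(X_src, y_src))
print("source accuracy after refinement: ", source_refined.score(X_src, y_src))
```

Up-weighting points near the other hypothesis's support vectors is only one proxy for "reinforcing related subspaces"; in the proposed scheme, learned representations of the shared subspaces would take the place of this heuristic.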