Method for Improving Scalability of Multiple Kernel Learning (MKL)

IP.com Disclosure Number: IPCOM000236115D
Publication Date: 2014-Apr-07
Document File: 5 page(s) / 128K

Publishing Venue

The IP.com Prior Art Database

Related People

Avishek Saha: INVENTOR (and 4 others)

Abstract

A method is disclosed for interpreting Multiple Kernel Learning (MKL) as searching for a kernel that optimizes a minimum kernel distance between two convex polytopes. The method includes providing a geometric formulation that casts the MKL problem as a Quadratically Constrained Quadratic Program (QCQP). The QCQP is solved using a variant of the Matrix Multiplicative Weights Update (MMWU) method.

Description

Feature generation is a difficult and arguably one of the most important problems in the machine learning community.  One particularly popular feature generation approach is Multiple Kernel Learning (MKL), which has seen great demand with the advent of kernelized machine learning algorithms.  Unfortunately, most existing MKL methods do not scale to large amounts of data.

Disclosed is a method for interpreting Multiple Kernel Learning (MKL) as searching for a kernel that optimizes a minimum kernel distance between two convex polytopes.  The method includes providing a geometric formulation that casts the MKL problem as a Quadratically Constrained Quadratic Program (QCQP).  The QCQP is solved using a variant of the Matrix Multiplicative Weights Update (MMWU) method.

In accordance with the method, a scalable algorithm for MKL is provided.  The algorithm is a primal-dual combinatorial algorithm for solving Semidefinite Programs (SDPs) and QCQPs.
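The disclosure's specific reduction of the MKL QCQP to the MMWU framework appears later in the text; as background, the generic MMWU primitive itself can be sketched.  The algorithm maintains a density matrix equal to the normalized matrix exponential of the negated, accumulated loss matrices.  A minimal Python illustration (function names and the toy loss sequence below are hypothetical, not from the disclosure):

```python
import numpy as np

def matrix_exp(S):
    """Exponential of a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def mmwu(loss_matrices, eta=0.1):
    """Generic Matrix Multiplicative Weights Update sketch.

    After observing symmetric loss matrices M_1..M_t, play the density matrix
    P_t = exp(-eta * sum_s M_s) / trace(exp(-eta * sum_s M_s)).
    """
    S = np.zeros_like(loss_matrices[0])
    for M in loss_matrices:
        S = S + M                 # accumulate losses
        E = matrix_exp(-eta * S)  # exponentially down-weight lossy directions
        yield E / np.trace(E)     # normalize to unit trace
```

Directions that repeatedly incur loss are exponentially down-weighted: feeding fifty copies of the loss `diag(1, 0)` drives the density matrix toward `diag(0, 1)` while its trace stays 1.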

It is assumed that $X \in \mathbb{R}^{n \times d}$ is a collection of $n$ training samples in a $d$-dimensional vector space.  Additionally, $y \in \{-1, +1\}^n$ are binary class labels for the data points in $X$.  Further, it is assumed that $X^+$ denotes the rows of $X$ corresponding to positive entries of $y$, and $X^-$ corresponds to negative entries.

From standard duality, the maximum margin Support Vector Machine (SVM) is equivalent to finding the shortest distance between the convex hulls of $X^+$ and $X^-$.  This shortest distance is realized between two points on the respective hulls, as illustrated in the Figure.

Figure

The points may be expressed as convex combinations of the rows of $X^+$ and $X^-$, respectively.  If $p^+$ is the closest point on the positive hull, then $p^+$ is expressed as $p^+ = X^{+\top}\alpha^+$, wherein $\alpha^+_i \ge 0$ and $\sum_i \alpha^+_i = 1$.  This is expressed as,

$$\min_{\alpha^+ \in \Delta^+,\; \alpha^- \in \Delta^-} \left\| X^{+\top}\alpha^+ - X^{-\top}\alpha^- \right\|^2,$$

where $\Delta^+$ and $\Delta^-$ denote the probability simplices over the positive and negative points.

When all $\alpha$ terms are collected together by defining $\alpha = (\alpha^+, \alpha^-)$ and expanding the distance term $\|p^+ - p^-\|^2$, the above expression is termed as,

$$\min_{\alpha \in \Delta} \; \alpha^\top Y X X^\top Y \alpha,$$

where $Y = \mathrm{diag}(y)$ and

$$\Delta = \Big\{ \alpha : \alpha_i \ge 0,\; \sum_{i:\, y_i = +1} \alpha_i = 1,\; \sum_{i:\, y_i = -1} \alpha_i = 1 \Big\}.$$
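As a concrete illustration of this hull-distance formulation, the minimization over the two simplices can be approximated with a classical Frank-Wolfe (Gilbert-style) iteration.  The sketch below is a generic solver for the stated quadratic program, not the disclosure's MMWU-based algorithm; all names are illustrative:

```python
import numpy as np

def hull_distance(Xp, Xn, iters=2000):
    """Frank-Wolfe sketch: shortest distance between the convex hulls
    of the rows of Xp (positive class) and Xn (negative class)."""
    ap = np.ones(len(Xp)) / len(Xp)   # convex weights alpha+ on positive hull
    an = np.ones(len(Xn)) / len(Xn)   # convex weights alpha- on negative hull
    for t in range(iters):
        w = Xp.T @ ap - Xn.T @ an     # current difference p+ - p-
        # linear minimization oracle: the objective's gradient is linear in
        # (ap, an), so the best feasible direction is a single hull vertex
        sp = np.zeros_like(ap); sp[np.argmin(Xp @ w)] = 1.0
        an_scores = Xn @ w
        sn = np.zeros_like(an); sn[np.argmax(an_scores)] = 1.0
        gamma = 2.0 / (t + 2)         # standard diminishing step size
        ap = (1 - gamma) * ap + gamma * sp
        an = (1 - gamma) * an + gamma * sn
    return np.linalg.norm(Xp.T @ ap - Xn.T @ an), ap, an
```

For example, for positive points $(1,0), (2,1)$ and negative points $(-1,0), (-2,1)$, the closest hull points are $(1,0)$ and $(-1,0)$, at distance 2.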

The geometric interpretation of the dual does not change when the examples are mapped into a Reproducing Kernel Hilbert Space (RKHS).  The Euclidean norm of the base vector space in the objective is substituted with the RKHS norm:

$$\min_{\alpha \in \Delta} \; \Big\| \sum_{i:\, y_i = +1} \alpha_i \,\phi(x_i) - \sum_{j:\, y_j = -1} \alpha_j \,\phi(x_j) \Big\|_{\mathcal{H}}^2,$$

where the kernel function is $\kappa(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}}$; this distance is a kernel distance or maximum mean discrepancy.  The dual formulation is changed with the covariance term $XX^\top$ being replaced by the kernel matrix $K$, $K_{ij} = \kappa(x_i, x_j)$:

$$\min_{\alpha \in \Delta} \; \alpha^\top Y K Y \alpha.$$
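The substitution of the Euclidean norm by the RKHS norm is concrete via the kernel trick: the squared distance between the two convex combinations expands entirely in terms of kernel evaluations.  A small Python sketch (the RBF kernel and all names are illustrative choices, not mandated by the disclosure):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def rkhs_hull_distance_sq(Xp, Xn, ap, an, kernel=rbf):
    """Squared RKHS distance ||p+ - p-||_H^2 between convex combinations
    of the mapped points, expanded via kernel evaluations only.
    For uniform weights this is the squared maximum mean discrepancy."""
    return (ap @ kernel(Xp, Xp) @ ap
            - 2 * ap @ kernel(Xp, Xn) @ an
            + an @ kernel(Xn, Xn) @ an)
```

With uniform weights the distance between a set and itself is zero, while two well-separated sets yield a strictly positive value.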

Multiple kernel learning is the SVM problem with the additional complication that the kernel function is unknown.  It is assumed that the kernel function is a convex combination of other kernel functions, $\kappa = \sum_m \mu_m \kappa_m$ with $\mu_m \ge 0$ and $\sum_m \mu_m = 1$.  The dual problem is expressed as,

$$\max_{\mu} \; \min_{\alpha \in \Delta} \; \alpha^\top Y \Big( \sum_m \mu_m K_m \Big) Y \alpha,$$

where $\mu \in \big\{ \mu : \mu_m \ge 0,\; \sum_m \mu_m = 1 \big\}$ and $K_m$ is the kernel matrix of $\kappa_m$.
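To make the search over kernel weights concrete, the following Python sketch grid-searches a single mixing weight $\mu$ between two base kernels, solving the inner minimization with a generic Frank-Wolfe iteration over the two simplices.  This is an illustrative brute-force stand-in, not the disclosure's MMWU-based solver; the base kernels and all names are hypothetical:

```python
import numpy as np

def lin_kernel(A, B):             # one illustrative base kernel
    return A @ B.T

def rbf_kernel(A, B, gamma=1.0):  # the other illustrative base kernel
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def min_dist_sq(Kpp, Kpn, Knn, iters=500):
    """Frank-Wolfe sketch of the inner problem: minimum squared RKHS
    hull distance for a fixed kernel, given its three kernel blocks."""
    ap = np.ones(Kpp.shape[0]) / Kpp.shape[0]
    an = np.ones(Knn.shape[0]) / Knn.shape[0]
    for t in range(iters):
        gp = 2 * (Kpp @ ap - Kpn @ an)    # gradient w.r.t. alpha+
        gn = 2 * (Knn @ an - Kpn.T @ ap)  # gradient w.r.t. alpha-
        sp = np.zeros_like(ap); sp[np.argmin(gp)] = 1.0
        sn = np.zeros_like(an); sn[np.argmin(gn)] = 1.0
        g = 2.0 / (t + 2)
        ap = (1 - g) * ap + g * sp
        an = (1 - g) * an + g * sn
    return ap @ Kpp @ ap - 2 * ap @ Kpn @ an + an @ Knn @ an

def mkl_grid(Xp, Xn, grid=11):
    """Keep the mu whose mixed kernel K(mu) = mu*K_lin + (1-mu)*K_rbf
    maximizes the minimum hull distance (i.e., the margin)."""
    best = (None, -np.inf)
    for mu in np.linspace(0.0, 1.0, grid):
        K = lambda A, B: mu * lin_kernel(A, B) + (1 - mu) * rbf_kernel(A, B)
        d = min_dist_sq(K(Xp, Xp), K(Xp, Xn), K(Xn, Xn))
        if d > best[1]:
            best = (mu, d)
    return best
```

On the toy data used earlier (positives $(1,0),(2,1)$, negatives $(-1,0),(-2,1)$) the linear kernel separates the hulls by squared distance 4, which dominates any RBF mixture, so the grid search selects $\mu = 1$.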

Thereafter, it is known that...