
Method to prevent unfair classifications in machine learning

IP.com Disclosure Number: IPCOM000248937D
Publication Date: 2017-Jan-23
Document File: 2 page(s) / 30K

Publishing Venue

The IP.com Prior Art Database

Abstract

It is proposed to build an extension to a machine learning mechanism that ensures the result of a classification performed on an underrepresented feature vector is marked as possibly unfair and accompanied by a list of insights.



Method to prevent unfair classifications in machine learning

Machine learning methods are widely used to classify individuals by matching them to categories. The quality of a classification depends heavily on the quality of the training set used to train the classifier; in particular, if a given combination of feature values was underrepresented in the training set, the chance of an unfair classification may be significant. These classifications are performed as part of internal processes in various institutions (banks, hospitals, etc.). Even though the results are later accepted by a human, the fact that a machine learning classification took place is not communicated to the individual, who therefore has no chance to object to the results. We therefore need to establish some form of protection against unfair classification caused by the possibility that the individual belongs to an underrepresented group.

We propose the introduction of a new machine learning system that consists of:
- a statistics-of-the-learning-set module, so that for each value in a feature vector we can tell whether any value of any feature is underrepresented;
- a classification module (this is the already existing classifier; an important point is that the proposal can be applied to any existing system and does not require reworking existing classifiers);
- a fairness module, which deduces whether the data may be underrepresented and tells how this underrepresentation influ...
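The disclosure does not give an implementation, so the following is only a minimal sketch of how such a wrapper around an existing classifier might look. The class and method names, the frequency-based notion of underrepresentation, the `min_support` threshold, and the use of scikit-learn are all assumptions made for illustration.

```python
# Illustrative sketch only: names, threshold, and library choice are assumptions.
from collections import Counter

import numpy as np
from sklearn.tree import DecisionTreeClassifier


class FairnessAwareClassifier:
    """Wraps an existing classifier and flags predictions made on feature
    values that were underrepresented in the training set."""

    def __init__(self, base_classifier, min_support=0.05):
        self.base = base_classifier          # existing classifier, used unchanged
        self.min_support = min_support       # assumed underrepresentation threshold
        self.value_freqs = []                # per-feature value frequencies

    def fit(self, X, y):
        X = np.asarray(X)
        n = len(X)
        # Statistics-of-the-learning-set module: relative frequency of each
        # value of each feature in the training data.
        self.value_freqs = [
            {v: c / n for v, c in Counter(X[:, j]).items()}
            for j in range(X.shape[1])
        ]
        self.base.fit(X, y)
        return self

    def predict_with_fairness(self, x):
        x = np.asarray(x)
        # Fairness module: collect insights about underrepresented feature values.
        insights = [
            f"feature {j}: value {x[j]!r} has training support "
            f"{self.value_freqs[j].get(x[j], 0.0):.2%}"
            for j in range(len(x))
            if self.value_freqs[j].get(x[j], 0.0) < self.min_support
        ]
        label = self.base.predict(x.reshape(1, -1))[0]
        return {
            "label": label,
            "possibly_unfair": bool(insights),
            "insights": insights,
        }


# Example usage on a toy dataset with categorical values encoded as integers.
if __name__ == "__main__":
    X_train = [[0, 1], [0, 1], [0, 2], [1, 1], [0, 2], [0, 1]]
    y_train = [0, 0, 1, 1, 1, 0]
    clf = FairnessAwareClassifier(DecisionTreeClassifier(), min_support=0.2)
    clf.fit(X_train, y_train)
    print(clf.predict_with_fairness([1, 2]))  # value 1 of feature 0 is rare
```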