System, Method and Apparatus for Detecting Leakage of Custom Secret Data via Correlation Analysis
Publication Date: 2014-Sep-22
The IP.com Prior Art Database
Sources of private information include well-known APIs, such as those which return the device's unique identifier. While such sources always return private data, some sources ('custom sources') do not: for example, a user-input API returns private data only if the user enters sensitive information (e.g., his/her SSN). This invention proposes a method for automatically accounting for custom sources of private information. The idea is to track information flow also from candidate privacy sources, and to examine the correlations between such flows and flows emanating from well-known privacy sources as a way of filtering out irrelevant information.
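The filtering idea above can be sketched in a few lines. In this minimal sketch (all names hypothetical, not from the disclosure), each observed flow is a (source, sink) pair; a candidate source is flagged as suspicious only if a sufficient fraction of its sinks overlap the sinks reached by well-known privacy sources:

```python
# Hypothetical sketch of the correlation idea: a candidate source is
# retained only if its flows co-occur (share release points) with flows
# from well-known privacy sources; otherwise it is filtered out as noise.
from collections import defaultdict

KNOWN_SOURCES = {"getDeviceId", "getContacts"}  # well-known privacy APIs

def correlate(flows, threshold=0.5):
    """flows: list of (source, sink) pairs observed by the tracker.
    Returns candidate sources whose sinks overlap known-source sinks."""
    sinks_by_source = defaultdict(set)
    for src, sink in flows:
        sinks_by_source[src].add(sink)

    # Collect all release points reached by well-known sources.
    known_sinks = set()
    for src in KNOWN_SOURCES:
        known_sinks |= sinks_by_source.get(src, set())

    suspicious = []
    for src, sinks in sinks_by_source.items():
        if src in KNOWN_SOURCES or not sinks:
            continue
        overlap = len(sinks & known_sinks) / len(sinks)
        if overlap >= threshold:
            suspicious.append(src)
    return suspicious
```

For example, a text-input flow reaching the same advertising endpoint as the device-ID flow would be flagged, while a text-input flow reaching only a local log file would be filtered out.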
Background
In the mobile era, users are becoming increasingly exposed to confidentiality threats. Notable examples include communicating the user's physical location and/or the device ID (known as the IMEI) to an advertising or analytics website.
There are several reasons for this emerging trend. First, mobile applications are downloaded and installed on the user's physical device, unlike Web applications, and so they gain access to sensitive on-device information. Second, mobile applications have access to contextual information, such as the user's location. Third, the Android security (or permission) model is limited and coarse-grained, operating at the granularity of system resources such as the internet and the file system; this forces the user to either grant an app blanket permission to access the internet or refuse to install the app.
In response to the growing need to enforce user privacy, multiple tools have recently been developed to detect - either at compile time [PiOS] or at runtime [TaintDroid] - potential privacy threats. These tools are mostly based on information-flow tracking, checking whether data originating at a privacy source (e.g., a statement reading the device ID or the user's contacts list) may flow into a release point (e.g., an advertising website).
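The information-flow tracking that these tools perform can be illustrated with a toy example. In this sketch (all names hypothetical, loosely modeled on runtime taint tracking in the TaintDroid style), values read from a privacy source carry a taint label that travels with the data and is checked at release points:

```python
# Hypothetical illustration of source-to-sink taint tracking: data read
# from a privacy source is labeled, and the label is inspected when the
# data reaches a release point (e.g., a network send).

class Tainted(str):
    """A string value labeled with the privacy source it came from."""
    def __new__(cls, value, source):
        obj = super().__new__(cls, value)
        obj.source = source
        return obj

def read_device_id():
    # Simulated privacy source returning a fake IMEI.
    return Tainted("359881030314356", source="getDeviceId")

def send_to_network(url, data):
    # Simulated release point: report a leak if tainted data flows out.
    if isinstance(data, Tainted):
        return f"LEAK: {data.source} -> {url}"
    return "ok"
```

A real tracker propagates labels through string operations and inter-procedural flows; the sketch only shows the source-label-sink skeleton.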
A common theme across all of the existing privacy enforcement systems is that they are based on an expert specification of what the privacy sources and sinks are. The specification provides the list of source and sink statements. While such specifications capture the common, general criteria for defining a (potential) leakage problem, there are other sources of private information that are specific to the application at hand.
An example is a standard text input where the user is asked to enter a social security number (SSN) or credit card number. In general, considering all text inputs as sources introduces a huge amount of noise: in most cases, user input is neither secret nor confidential. There are, however, cases (such as the SSN) where the user enters sensitive information.
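This is exactly the distinction the invention exploits. In the following sketch (hypothetical names), flows from well-known sources are reported outright, while flows from candidate sources such as text inputs are only recorded for later correlation analysis rather than reported immediately:

```python
# Hypothetical sketch: well-known sources yield immediate reports;
# candidate (custom) sources are deferred to correlation analysis,
# so that ordinary, non-sensitive text inputs do not generate noise.
KNOWN_SOURCES = {"getDeviceId", "getLastKnownLocation"}
CANDIDATE_SOURCES = {"textInput"}

def classify_flow(source):
    if source in KNOWN_SOURCES:
        return "report"                    # definite privacy source
    if source in CANDIDATE_SOURCES:
        return "record-for-correlation"    # decide later via correlation
    return "ignore"
```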
Existing privacy enforcement tools require thi...