
Method for Trusted Web Site URLs Communication Disclosure Number: IPCOM000211731D
Publication Date: 2011-Oct-14
Document File: 5 page(s) / 80K

Publishing Venue

The Prior Art Database


Disclosed is a novel approach for securing content provided by web applications: a whitelist manifest that enables client-side enforcement. With this invention, browsers check content and script sources against this list of valid sources before loading, linking to, or executing anything provided by the page.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 34% of the total text.


Method for Trusted Web Site URLs Communication

When a network client (e.g., a web browser) requests a resource (e.g., a web page) from a server, the server currently responds with the markup that represents the requested resource, plus some metadata in the transport headers.
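One way such a whitelist manifest could travel with the response is in a transport header alongside the markup. The following is a minimal sketch of that idea; the header name `X-Trusted-Sources` and the comma-separated encoding are illustrative assumptions, not part of the disclosure:

```python
# Sketch: a server response carrying the site owner's whitelist as a
# transport header. "X-Trusted-Sources" is a hypothetical header name
# chosen purely for illustration.

def build_response(body: str, trusted_sources: list[str]) -> dict:
    """Attach the site owner's whitelist to the response metadata."""
    return {
        "headers": {
            "Content-Type": "text/html",
            "X-Trusted-Sources": ", ".join(trusted_sources),
        },
        "body": body,
    }

response = build_response(
    "<html>...</html>",
    ["https://cdn.example.com/", "https://static.example.com/js/"],
)
print(response["headers"]["X-Trusted-Sources"])
```

A client receiving this response could parse the header back into a list of valid sources before interpreting the body.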

Network clients interpret and render this markup, including executing any code it contains. When this data is supplied by users and has not been properly sanitized by the application, the consequences include vulnerabilities such as cross-site scripting, linking to inappropriate sites, phishing, malware exploits, and spamming.

A known way to infect computers with malware is to lure users to sites that host malicious code. This can be accomplished through man-in-the-middle attacks (where the content is modified in transit) or by exploiting vulnerabilities in the application that allow an attacker to inject malicious content or scripts into an otherwise legitimate site. The hosting site can be a malicious site explicitly created with the intent to deceive, or a legitimate site whose content has been compromised through techniques like cross-site scripting, where an attacker adds a script to a good site that pulls in malicious content or scripts. The payoff varies: pointing to bogus sites (spam), directing users to malicious sites (infection if they follow the link), or loading inappropriate scripts or content (immediate infection).
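The injection scenario above hinges on user input reaching the page without sanitization. A minimal sketch of the difference, using the standard library's `html.escape` (the function names and payload are illustrative):

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable: user input is placed into the markup verbatim,
    # so an injected <script> tag will be executed by the browser.
    return "<div class='comment'>" + comment + "</div>"

def render_comment_safe(comment: str) -> str:
    # Sanitized: special characters are escaped, so injected markup
    # is rendered as inert text instead of being executed.
    return "<div class='comment'>" + html.escape(comment) + "</div>"

payload = "<script src='https://evil.example/x.js'></script>"
print(render_comment_unsafe(payload))  # script tag survives intact
print(render_comment_safe(payload))    # escaped to &lt;script ...&gt;
```

Even with escaping in place application-wide, the disclosure's point stands: one variable handled incorrectly reopens the hole, which motivates an independent client-side check.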

While mature application development builds security and input sanitization into the development process, mishandling a single variable is enough to create a risk. Server-side tools to mitigate this risk include firewalls and intrusion prevention systems (IPS). Client-side tools include antivirus software that scans for known attacks (a blacklist) provided by third parties.

The concept of scanning web traffic to detect malicious content is not new. Third-party companies maintain lists of known bad sites and give visual cues to prevent users from visiting them. Likewise, intrusion prevention systems (IPS) work by scanning network traffic and detecting/filtering known bad payloads. These methods rely on third parties knowing which sites or traffic patterns are malicious and actively maintaining a list of known malicious sites (i.e., blacklisting).

The following represents related work and enabling art:

• Methods and systems for improving security when accessing a Uniform Resource Locator (URL), such as a Web site. [1]

• Method and system for preventing fraudulent activities. [2]

• URL-based secure download system and method. [3]

• Safe browsing protocols. [4-7]

The novelty of this proposal is a method in which the site owner (as opposed to a third-party company) explicitly vets the URLs or resources to which the site will link (i.e., whitelisting). The site owner can build this list manually or dynamically, and the list can contain specific URLs or patterns such as wildcards or regular expressions. The approach relies on the application to publish a...
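The client-side check described here can be sketched as matching each requested resource against the owner-published list before loading it. This is an illustrative sketch only; the `regex:` prefix convention and function names are assumptions, not part of the disclosure:

```python
import re

def build_whitelist(entries):
    """Compile the site owner's whitelist. Entries may be specific URL
    prefixes or regular-expression patterns; the 'regex:' prefix is a
    hypothetical convention used here to distinguish the two."""
    exact, patterns = [], []
    for entry in entries:
        if entry.startswith("regex:"):
            patterns.append(re.compile(entry[len("regex:"):]))
        else:
            exact.append(entry)
    return exact, patterns

def is_trusted(url, whitelist):
    """Check a browser could run before loading, linking to, or
    executing a resource referenced by the page."""
    exact, patterns = whitelist
    if any(url.startswith(prefix) for prefix in exact):
        return True
    return any(p.match(url) for p in patterns)

wl = build_whitelist([
    "https://cdn.example.com/",
    r"regex:https://static\d+\.example\.com/.*",
])
print(is_trusted("https://cdn.example.com/app.js", wl))      # trusted
print(is_trusted("https://static2.example.com/lib.js", wl))  # trusted
print(is_trusted("https://evil.example/x.js", wl))           # rejected
```

A resource that fails the check would simply not be loaded, linked, or executed, regardless of how it came to be referenced by the page.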