Nov 24, 2012

Is XSS Solved?

In academic research a problem is solved when it is fully understood and a solution is shown to work in a practical setting. If we define "XSS solved" as every instance of XSS eradicated from earth, we will probably not see a solution in our lifetime. So, from a research perspective, is XSS already solved?

Geeks in a Castle

In early October I attended a weeklong seminar on web application security at castle Dagstuhl in southwestern Germany. An awesome opportunity to socialize and discuss with leading experts in web appsec academia.

Group photo outside the castle.


"XSS Is Solved"

One of the break-out sessions was on XSS. The day before, someone had voiced the opinion that XSS is already solved. The break-out session took the claim seriously and hashed it out.

From a standpoint of principle, which is the typical standpoint of academic research, a problem like XSS is solved when a) we fully understand the problem and its underpinnings, and b) we have a proof-of-concept solution that is practical enough to be rolled out and has the potential to solve the problem fully.

Do we fully understand XSS and its underpinnings?


Important Papers on XSS

Looking at recent publications we arrived at the following short list that we felt summarizes how academia understands XSS today:
  • Context-Sensitive Auto-Sanitization in Web Templating Languages Using Type Qualifiers [pdf]
  • ScriptGard: Preventing Script Injection Attacks in Legacy Web Applications with Automatic Sanitization [pdf]
  • A Symbolic Execution Framework for JavaScript [pdf]
  • Gatekeeper: Mostly Static Enforcement of Security and Reliability Policies for JavaScript Code [pdf]
  • Scriptless Attacks – Stealing the Pie Without Touching the Sill [pdf]
The conclusion was that yes, we think the understanding of XSS is fairly good. But we lack a definition of XSS that would summarize this understanding and allow new attack forms to be deemed XSS or Not XSS.


Current Definitions of XSS

Can you believe that? We still don't have a reasonable definition of XSS.

Wikipedia says "XSS enables attackers to inject client-side script into web pages viewed by other users". 

But that definition is easily shot down. Do we need "web pages" to have XSS? Does an attack have to be "viewed by other users" to be XSS? More importantly, the Wikipedia definition doesn't say whether the attackers' scripts have to be executed or not, or in what context. With a default CSP in place you can still inject a script into a page, right? With sandboxed JavaScript you can both inject and execute without causing an XSS attack. And what about these "attackers"? Can they be compromised trusted third parties, legitimate users of the system, or even clumsy business partners?
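
To make the injection-versus-execution distinction concrete, here is a minimal sketch in plain Node.js (my own illustration, not something from the seminar) of a page that happily accepts injected markup while a restrictive CSP keeps the browser from executing it:

// Minimal sketch: injection without execution under CSP.
// The echo endpoint is deliberately vulnerable for illustration.
const http = require('http');
const url = require('url');

http.createServer((req, res) => {
  const name = url.parse(req.url, true).query.name || 'world';
  // Disallow inline scripts; only same-origin resources may load.
  res.setHeader('Content-Security-Policy', "default-src 'self'");
  res.setHeader('Content-Type', 'text/html');
  // Attacker input such as ?name=<script>alert(1)</script>
  // lands verbatim in the page markup...
  res.end('<html><body>Hello ' + name + '</body></html>');
  // ...but a CSP-enforcing browser refuses to run the inline script.
  // The injection succeeded; the execution did not. Is that XSS?
}).listen(3000);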

OWASP says "Cross-Site Scripting attacks are a type of injection problem, in which malicious scripts are injected into the otherwise benign and trusted web sites. Cross-site scripting (XSS) attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user."

Again "web sites" seem to be a prerequisite, but are they? Here the injected scripts have to be "malicious", but do they? And does the target web site have to be "benign and trusted"? OWASP just like Wikipedia fails to state that the injected script has to be executed. Then OWASP changes its mind and says XSS happens when an attacker "uses a web application to send malicious code". Clearly, this widens the scope beyond JavaScript. But look at that sentence and imagine Alice using gmail.com to send an email to Bob containing a malicious code sample. Alice has done XSS since she used a web application to send malicious code.

I know I'm nit-picking here. Neither Wikipedia nor OWASP has proposed an academic definition of XSS. They're trying to be pedagogical and reach out to non-appsec people.

But we still need a (more) formal definition. To be clear, we need a definition of XSS that allows us to say whether a certain vulnerability or attack is XSS or not. Without such a definition we cannot know whether countermeasures such as CSP "solve XSS" or not.

Also, Dave Wichers brought up an interesting detail at this year's OWASP AppSec Research conference in Athens. We need to redefine reflected XSS, stored XSS, and DOM-based XSS as server-side XSS (reflected and stored) and client-side XSS (reflected and stored).

Current, insufficient categorization of XSS.

Proposed new categorization of XSS.


A New Candidate Definition of XSS

To get the juices flowing at the castle we came up with a candidate definition of XSS that the rest of the participants could shoot down.

Candidate definition of XSS: An XSS attack occurs when a script from an untrusted source is executed in rendering a page.

It was shot down thoroughly, in part by yours truly :). 

Terms more or less undefined in the candidate definition:
  • Script. JavaScript, any web-enabled script language, or any character sequence that sort of executes in the browser?
  • Untrusted. What does trusting and not trusting a script mean? Who expresses this trust or distrust?
  • Source. Is it a domain, a server, a legal entity such as Google, or the attacker multiple steps away in the request chain?
  • Executed. Relates to "Script" above. Does it mean running on the JavaScript engine, triggering a browser event, issuing an HTTP request, or what?
  • Rendering. Does rendering have to happen for an attack to be categorized as XSS?
  • Page. Is a page a prerequisite for XSS? Can XSS happen without a page existing?


So Is XSS Solved?

Back to the original question. The feeling at Dagstuhl was that CSP is the mechanism we're all betting on to solve XSS. Not that it's done in version 1.0, not even in 1.1. But it's a workhorse that we can use to beat XSS in the long run.

What we need right now is a satisfactory definition of XSS. That way we can find the gaps in current countermeasures (including CSP) and get to work on filling them. Don't be surprised if the gaps turn out to be fairly few and academic researchers start saying "XSS is solved" within a year. Hey, they need to work on the application security problems of tomorrow, not the XSS plague in all the legacy web apps out there.

Please chip in by commenting below. If you can give a good definition of XSS, even better!

Nov 7, 2012

The Rugged Developer

I took part in the intense, weeklong Rugged Summit this spring. Rugged as in Rugged Software. Rugged Software as in secure and robust software. The major outcome of the summit and the homework afterwards was a strawman of The Rugged Handbook. It's free, available here (as docx).

My main contribution to the handbook was being lead author of The Rugged Developer chapter. I'd like to share it with you as a stand-alone blog post below. Hopefully you can give me and the other authors some feedback!

The Rugged Developer
As a Rugged Developer, I want my software to be secure against attacks, interference, corruption, random events, and more.

To achieve my goals I have come to value...

  • Software Quality over Security Products
  • Defensive Code over Patching
  • Ruggedizing Your Own Systems over Waiting To Be Hacked

Your Mission as a Rugged Developer
Good news – you're already Rugged ... in part. We developers do all sorts of things to ensure our code is robust and maintainable. To become a fully rugged developer you only need to add security to the quality goals you try to achieve.

The interesting part of security is that you're protecting your code against intelligent adversaries, not just random things. So in addition to being robust against chaotic users and errors in other systems, your code also has to withstand attacks from people who really know the nuts and bolts of your programming languages, your frameworks, and your deployment platform.

Being a Rugged developer means you have a key role in your project’s security story. The story should tell you what security defenses are available, when they are to be used, and how they are to be used.  Your job is to ensure these things happen. You should also strive to integrate security tests into your development life cycle, and even try to hack your own systems. Better you than a “security” guy or bad guy, right?

Ideas for Being an Effective Rugged Developer
Add Security Unit Tests. Perhaps you've been to security training or read a blog post on a new attack form. Make it a habit to add unit tests for attack input you come across. They will add to your negative testing and make your application more robust. A certain escape character, for instance ', may be usable in a nifty security exploit but can just as well produce numerous errors for benign users. A user registration with the name Olivia O'Hara should not fizzle your SQL statement execution. You as a developer have the deepest knowledge of how the system is designed and implemented, and thus you are in the best position to implement and test security.
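
As a sketch of what such a test could look like (registerUser is a toy stand-in of my own, not code from the handbook):

// Minimal sketch of a security unit test. registerUser() is a
// hypothetical system under test that must store names verbatim
// instead of interpreting them.
const assert = require('assert');

const db = [];
function registerUser(name) { // toy stand-in for real registration code
  db.push({ name });          // think parameterized query, not string concat
  return { name };
}

// Benign input with a classic escape character must just work.
assert.strictEqual(registerUser("Olivia O'Hara").name, "Olivia O'Hara");

// Attack-shaped input must be stored verbatim, not interpreted.
const evil = "Robert'); DROP TABLE users;--";
assert.strictEqual(registerUser(evil).name, evil);

console.log('security unit tests passed');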

Model Your Data Instead of Using Strings. Almost nothing is just a string. Strings are super convenient for representing input, but they are also capable of transmitting source code, SQL statements, escape characters, null values, markup etc. Write wrappers around your strings and narrow down what they can contain. Even if you don't spend time on draconian regular expressions or checks for syntax and semantics, a simple input restriction to Unicode letters + digits + simple punctuation may prove extremely powerful against attacks. Attackers love string input. Rugged developers deny them the pleasure.
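
A minimal sketch of such a wrapper (the class name and the exact allowed character set are my assumptions; tune them to your domain):

// Minimal sketch: a wrapper type that narrows what a "name" may contain.
// \p{L} matches any Unicode letter and \p{N} any digit (requires the
// regex u flag in modern JavaScript).
class PersonName {
  constructor(raw) {
    if (typeof raw !== 'string' || raw.length === 0 || raw.length > 100) {
      throw new Error('Invalid name length');
    }
    if (!/^[\p{L}\p{N} '\-.]+$/u.test(raw)) {
      throw new Error('Name contains disallowed characters');
    }
    this.value = raw;
  }
  toString() { return this.value; }
}

new PersonName("Olivia O'Hara");             // ok
new PersonName('<script>alert(1)</script>'); // throws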

Hack Your Own Systems. Even more fun, do it with your team. If management has a problem, tell them it's better you do it than someone on the outside.

Get Educated. There are many materials available to help you learn secure coding, including websites, commercial secure coding training, and vulnerable applications like WebGoat. Also, although top lists can be lame, the OWASP Top 10 and CWE Top 25 are great places to start. As luck would have it, most of the issues in these lists are concrete and you can take action in code today. There are a lot more good materials available at both OWASP and MITRE.  

Make Sure You Patch Your Application Frameworks and Libraries. Know which frameworks and libraries you use (Struts, Spring, .NET MVC, jQuery, etc.) and their versions. Make sure your regression test suite allows you to upgrade frameworks quickly. Make sure you get those patch alerts. A framework with a security bug can quickly open up several or all of your applications to attacks. All the major web frameworks have been found to have severe security bugs in the last two years, so the problem is very much a reality today.
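
As a small sketch of the "know your versions" part (Node.js and its package.json layout are my illustrative assumptions):

// Minimal sketch: list every declared dependency and its version so the
// list can be checked against patch alerts and security advisories.
const fs = require('fs');

const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
const deps = { ...pkg.dependencies, ...pkg.devDependencies };

for (const [name, version] of Object.entries(deps)) {
  console.log(name + ' ' + version); // compare against advisory feeds
}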

Metrics
The Rugged Developer should evaluate success based on how well their code stands up to both internal and external stresses. How many weaknesses are discovered after the code is released for testing? How often are mistakes repeated? How long does it take to remediate a vulnerability? How many security-related test cases are associated with a project?

Mar 23, 2012

Rugged Summit Summary

I spent the last week in Washington DC as an invited expert to the Rugged Summit, part of the Rugged Software initiative.

The very minute I announced I'd be participating I got several messages on Twitter saying Rugged is a failure and I shouldn't go. Those messages were sent by people I like and trust. Sure, I was skeptical of a manifesto written for developers by security experts. Also, I hadn't heard much since the Rugged announcement in 2010.

But shouldn't I try to bring my view---a developer's view---to the table? Of course I should!

At the summit I got to work with some amazing people. Ken van Wyk, Joshua Corman, Nick Coblentz, Jeff Williams, Chris Wysopal, John Pavone, Gene Kim, Jason Li, and Justin Berman. Four very intense days. And still no silver bullet :).

Rugged Software In Short
My take on rugged is defensible software free from well-known bug types. A rugged application should be able to withstand a real attack as long as the attack doesn't exploit unknown bugs in the platform or unknown bug categories in the app. If the rugged application is breached the developers and operations should be able to recover gracefully.

Rugged also applies to operations and there's an ongoing Rugged DevOps initiative.

Why Should Organizations Become Rugged?
We first focused on *why* organizations should produce or require rugged software. Does software security also enhance software quality? Should we try to measure return on investment, reduced cost, reduced risk or what? What would make a CIO/CTO decide to go rugged?

Fundamentally we believe rugged software is part of engineering excellence. And we all need to do better. Software enhances our lives and revolutionizes almost everything mankind does. We want software to be good enough to enable further revolution.

Software security is currently in a state of vulnerability management. That's a negative approach and it hasn't made frequent breaches go away. Rugged is a more positive approach where you're not supposed to find a bunch of vulnerabilities in pentesting.

Here are three examples of motives for Rugged that we worked on.

"Telling your security story" could be a competitive advantage. Look at http://www.box.com/enterprise/security-and-architecture/ and put it in the context of Dropbox's recent security failures. Imagine the whole chain of people involved in a system being built to chip in to produce evidence of why their product or service is secure.

Another idea is to define tests that prove you're actually more secure after becoming Rugged than before. We believe executives feel security is a black art, and a pentest+patch doesn't show whether the organization is 90 % done or 1 % done. HDMoore's Law could be such a test (it works without Rugged too, of course). How to actually test against Metasploit will have to be figured out.

Third, if buyers of software started demanding more secure software, that would drive producers to adopt something like Rugged. So we worked on a Buyers' Bill of Rights and a Buyer's Guide. Buyer empowerment, if you will.

The Rugged Software Table of Contents
The rest of the summit was spent on various aspects of how we think software and security can meet successfully. Our strawman results will be published later on, and there will be plenty of chances to help make it the right thing.

But the table of contents may give you an impression of where we're headed:

  1. Why Rugged?
  2. This is Rugged
  3. The Rugged Tech Executive
  4. The Rugged Architect
  5. The Rugged Developer (I will write the first version of this section)
  6. The Rugged Tester
  7. The Rugged Analyst
  8. Becoming a Rugged Organization
  9. Proving Ruggedosity
  10. Telling the Rugged Story (internally and publicly)
  11. How Rugged Fits in with Existing Work
  12. Success Case Study

Feb 18, 2012

We Need a Free Browser, Not Just an Open Source Browser

The security community was shocked when Trustwave came clean and revoked a subordinate root certificate it had sold to a third party that explicitly said it would use the certificate to introspect SSL traffic.

The news of Trustwave's severe malpractice sparked demands for removing the Trustwave root certificates from the Mozilla trust stores (Firefox, Thunderbird, SeaMonkey). The demand was filed as a bug in Bugzilla and the issue has also gotten a fair amount of attention on the Mozilla-dev-security-policy mailing list.

We experienced the same process – Bugzilla + mailing list outbursts – during the recent DigiNotar and Comodo scandals.

Kill or Not to Kill, That's the Question
According to Trustwave they had to sell the man-in-the-middle certificate since other CAs do it. That in itself is extremely worrying. These are the bastards who've been charging us $$$ for maintaining trust on the Internet. They've not only been negligent in their security operations but have also done business selling out the trust built into all browsers.

So, should Mozilla kill the Trustwave root because of their misconduct? Tricky question.

On the one hand I feel Trustwave's CA business deserves nothing less than the ditch. They did the wrong thing with open eyes.

On the other hand we probably have a large-scale problem on our hands – CAs worldwide have been issuing subCA certs that allow employers, governments, and agencies to intercept the traffic we all thought was authenticated, encrypted, and integrity checked. Killing the Trustwave root doesn't fix that.

Think about it. The whole trust model crumbles. Can customers now claim someone else must have manipulated their buy order for the stock that later plummeted? Can payment providers who leak credit cards now claim somebody must have MITMed them? Will the increase in online shopping continue once mainstream media understands and writes about this issue?

Whichever path we take it has to lead to reestablished trust in the CA model. The alternatives such as building on DNSSEC or Moxie's excellent Convergence are nowhere near mainstream roll-out.

But you know what? Democracy and openness seem to work. Mozilla has made the right decision.

CAs world-wide have until April 27 to come clean. Mozilla says the following on its security blog:

"Earlier today we sent an email to all certificate authorities in the Mozilla root program to clarify our expectations around certificate issuance. In particular, we made it clear that the issuance of subordinate CA certificates for the purposes of SSL man-in-the-middle interception or traffic management is unacceptable. We made it clear that this practice remains unacceptable even when the intended deployment of such a certificate is restricted to a closed network.


In addition to this clarification, we have made several requests. We have requested that any such certificates be revoked, and their HSMs destroyed. We have requested the serial numbers of those certificates and fingerprints of their signing roots so that we, and other relying parties, can detect and distrust these subCA certificates if encountered. We have requested that any CAs who have issued subCA certificates fulfill these requests no later than April 27, 2012."

Where else did you see such a clear message to the CAs who have abused our trust? Mozilla makes me proud.

We Need a Free Browser, Not Only an Open Source Browser
The handling of Comodo, DigiNotar and Trustwave tells me we truly need Mozilla and Firefox. Nowhere else in the web community have I seen such openness, freedom of speech, and focus on regular users' interests. Hey, even internet trolls get their say on the mailing list :).

Sure, I love my Chrome, I know Google is a high-paying partner to Mozilla, and I know Firefox has been lagging behind in performance and developer tools. But there's something really great and important in a free alternative.

Speaking of lagging behind ... JavaScript performance and Chrome's V8 have been the industry standard for a few years. But when I run the SunSpider 0.9.1 benchmark on my new MacBook Air I get (lower is better):

  • Chrome v17.0.963.56: 280.8 ms
  • Firefox v10.0.2: 233.0 ms

Therefore I would like to urge the Mozilla Foundation to get us tab sandboxing and silent auto-upgrades in Firefox so I can go all-in!

We need a free browser, not just an open source browser.

Jan 6, 2012

Stateless CSRF Protection

In the era of RESTful services and rich internet applications it's important to find security solutions that don't impose unnecessary state or computation on servers. I previously wrote a post on stateless session ids. Let's have a look at how we can protect against cross-site request forgeries (CSRF) without server-side state.

CSRF Basics
Forged requests are nasty attacks. They rely on the fact that your browser automatically adds cookies to HTTP requests if it has cookies associated with the target domain and path. That includes session cookies.

Let's say you're currently authenticated to twitter.com. If you visit another site on another domain, that site can issue requests to twitter.com, and your Twitter session cookie will be added to those requests.



How can domain B issue requests to domain A, effectively making a cross-site HTTP request? Well, there are some obvious cases – images, JavaScript, and CSS.

<img src="whatever.domain.org/path/logo.png" />

… is allowed from any site, which means a malicious site can contain tags like …

<img src=”https://secure.bank.com/checkAccounts" height=0 width=0 />

Such a tag will issue an HTTP GET to secure.bank.com/checkAccounts including the victim's session cookie for *.bank.com should he or she be logged in. The browser doesn't know if there's an image on that URL or not. It just fires the request. And by setting the image size to 0x0 the victim will see nothing.

CSRF With POST
Most sensitive operations require an HTTP POST, since a GET should be idempotent and not change any state server-side. So can a malicious page issue an HTTP POST to any domain? Yes.



The CSRF code from the image above ($ is jQuery):

<form id="target" method="POST" 
 action="https://1-liner.org/form">
  <input type="text" value="I hate OWASP!" name="oneLiner"/>
<input type="submit"
 value="POST"/>
</form>


<script>
  $(document).ready(function() {
     $('#target').submit();
  });
</script>

CSRF Against RESTful Services
But maybe you've left HTML forms behind and go with rich clients, a RESTful backend and communication via JSON? Can a malicious page issue an HTTP POST targeting such services? Yes.

You can change the encoding of HTML forms to text/plain and do some tricks to produce parseable JSON in the request body. Here's an example that I got working with a Java JAX-RS backend:

<form id="target" method="POST" 

action="https://vulnerable.1-liner.org:
8444/ws/oneliners" 

style="visibility:hidden"
 enctype="text/plain">
  <input type="text"

   name='{"id": 0, "nickName": "John",
          "oneLiner": "I hate OWASP!",

          "timestamp": "20111006"}//'
   value="dummy" />
  <input type="submit" value="Go" />

</form>

Notice the enctype and that the JSON is in the input name, not the value. The above form produces a request body looking like this:

{"id": 0, "nickName": "John","oneLiner": "I hate OWASP!","timestamp": "20111006"}//=dummy

… which is accepted by, for instance, the Jackson parser.

CSRF Protection With Double Submit
Traditional anti-CSRF techniques use tokens issued by the server that the client has to post back. The server validates the request by comparing the incoming token with its copy. But that small word "copy" means server-side state. Not good.

Double submit is a variation of the token scheme where the client is required to submit the token both as a request parameter and as a cookie.



A malicious page on another domain cannot read the anti-CSRF cookie before its request and thus cannot include it as a request parameter.



Two Misconceptions About Double Submit
There are two common misconceptions about the double submit CSRF protection.

First, it has been suggested that the session cookie should be used for this purpose. Since you have to use JavaScript to pick up the cookie value and add it as a request parameter, the cookie cannot have the HttpOnly attribute. And you want HttpOnly on your session cookie to prevent session hijacking via cross-site scripting.

But you should not use the session cookie as the anti-CSRF cookie. Instead, add a specific anti-CSRF cookie which does not have the HttpOnly attribute, and keep your session cookie protected.
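
As a sketch of that cookie setup (plain Node.js; the cookie names and endpoint are assumptions for illustration):

// Minimal sketch: the session cookie stays HttpOnly, while a separate
// anti-CSRF cookie is deliberately readable from JavaScript.
const http = require('http');
const crypto = require('crypto');

http.createServer((req, res) => {
  res.setHeader('Set-Cookie', [
    // Scripts cannot read this one; it identifies the session.
    'SESSIONID=' + crypto.randomBytes(16).toString('hex') +
      '; Path=/; Secure; HttpOnly',
    // Scripts CAN read this one, so it can be echoed back as a parameter.
    'CSRFTOKEN=' + crypto.randomBytes(16).toString('hex') +
      '; Path=/; Secure'
  ]);
  res.end('logged in');
}).listen(3000);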

Second, people have stuck with server-generated, stateful anti-CSRF cookies. But double submit cookies can be generated client-side and don't have to be saved by the server at all.

Stateless CSRF Protection with Double Submit
The protective measure of double submit lies in the fact that a malicious site cannot read the cookie and include it as a request parameter. That condition still holds if the cookie is generated by the client and never saved by the server.

So let the client generate the anti-CSRF value, and let the server only compare the cookie with the request parameter and check their format. Ergo: stateless CSRF protection!
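
Here's a minimal sketch of the whole scheme. All names, the custom header, and the modern browser APIs (fetch, crypto.randomUUID) are my assumptions, not a prescribed implementation:

// --- Browser side: generate the value, set the cookie, echo it back ---
function csrfPost(url, body) {
  const token = crypto.randomUUID(); // client-generated, per request
  document.cookie = 'CSRFTOKEN=' + token + '; path=/; secure';
  return fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'X-CSRF-Token': token },
    credentials: 'include',
    body: JSON.stringify(body)
  });
}

// --- Server side: compare cookie and header, check format, store nothing ---
function csrfOk(req) {
  const match = /(?:^|; )CSRFTOKEN=([^;]+)/.exec(req.headers.cookie || '');
  const fromCookie = match && match[1];
  const fromHeader = req.headers['x-csrf-token'];
  return !!fromCookie && fromCookie === fromHeader &&
    /^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$/.test(fromCookie);
}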



Hardening the Double Submit Protection
Double submit protection breaks down if the attacker somehow can read or set the anti-CSRF value. We can harden double submit against malicious reads.

First of all we make the client change the anti-CSRF value upon every request. This is typically done by centralizing backend calls in a custom AJAX proxy, possibly inherited.

Second, we zero the anti-CSRF cookie directly after each backend call. This allows for accurate server-side detection of forged requests. A zeroed double submit cookie is a clear signal of either a client-side bug or a forged request. With zeroed anti-CSRF cookies the attacker has to time his/her attack to exactly when the cookie is set by the client.
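
A sketch of both hardening steps on top of the csrfPost helper above (again purely illustrative):

// Hardened client helper: a fresh value per request, and the cookie is
// zeroed as soon as the call completes so a lingering token cannot be
// replayed. The server treats a zeroed cookie as a bug or an attack.
async function hardenedCsrfPost(url, body) {
  const token = crypto.randomUUID();
  document.cookie = 'CSRFTOKEN=' + token + '; path=/; secure';
  try {
    return await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-CSRF-Token': token },
      credentials: 'include',
      body: JSON.stringify(body)
    });
  } finally {
    document.cookie = 'CSRFTOKEN=0; path=/; secure';
  }
}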

Drawbacks of Double Submit
You typically hear two drawbacks of the double submit protection – its reliance on JavaScript to add the cookie value as a request parameter, and the possibility of reading the anti-CSRF cookie via cross-site scripting.

The issue with JavaScript is diminishing as JavaScript is becoming a requirement for more and more sites anyway.

The cross-site scripting critique is invalid. If you can script the site you already own all of it and can set up your own AJAX proxy, read any tokens in the DOM, etc.