Mar 1, 2011

A Client-Oriented OWASP

Right now there are tons of thoughts, ideas, and discussions about where OWASP should go. I'm beginning to see an image of a Client-Oriented OWASP (thanks, Dinis, for coining the term). In that image there are great initiatives as well as a few things we have to set straight.

Current New Initiatives
A few samples of the thoughts going around: Jeff Williams sent his OWASP 4.0 email, I wrote about the gap between appsec and developers, Mark Curphey wrote about OWASP reaching a tipping point, and Michael Coates wrote about a vision for OWASP.

Since then Michael has started the Defenders Community and I have started the Developer Outreach Initiative, which will shortly become the Builders Community.

Client-Oriented Part 1 – Builders and Defenders
If you combine the Defenders and Builders initiatives, a new, more client-oriented OWASP emerges. Client-oriented in the sense that we put more effort into understanding and helping the part of the IT industry that builds, operates, and maintains web applications. On the less technical side we're doing this already with processes and guides – great!

But on the more technical side, OWASP needs to complement its pentesting and appsec tooling with guidance on how to defend and how to build secure webapps. For me that means Builders and Defenders projects, but also gearing our conferences and chapters more towards builders and defenders.

I'm not saying we should cut down on pentesting or scanning tools. I love pentesters and ethical hackers. Heck, I read and retweet your blogs daily. I'm also very interested in static analysis tools, proven by my publications in the area in 2002 and 2005. I'm just saying we need to address a larger crowd and get more balance into our efforts.

Client-Oriented Part 2 – Dealing With Independence
"Our freedom from commercial pressures allows us to provide unbiased, practical, cost-effective information about application security." – About OWASP

Being independent or unbiased has two parts, in my opinion:
  1. OWASP should be independent in the statements it publishes, all the way from chapters to the board.
  2. OWASP should not avoid certain projects, results, or discussions only because some individual/corporate member or sponsoring organization will be upset.
Right now I think OWASP is doing fine on number one. I hear no bashing or promoting of brands or vendors except for publicly thanking them for their support. Thank you, supporters!

But number two is worrying me. At AppSec NYC 2008 there was a talk on comparing static analysis tools called "NIST and SAMATE Static Analysis Tool Exposition" (video). Some well-known brands were in the study. But the speaker refused to show figures for individual tools. There seemed to be a consensus in the community that we should not present anything that could be interpreted as negative for certain vendors, not even if the test setup was made totally transparent. That's a violation of point two above, in my opinion.

2.5 years have gone by and we're still not using our independence to compare and test appsec tools. Why? John Steven, an OWASP leader with immense experience in static analysis, has written about serious obstacles to comparing static analysis tools, but they all boil down to "Just don't make your tool choices based on a general comparison", which is good advice. We should tell people that. But we still need to start putting appsec tools to the test.

Creating the Client-Oriented OWASP means we'll have to start doing independent, client-oriented research. And if OWASP has been implicitly silenced before, I will not take it anymore. Here's a list of ideas:
  • Commit to Stefano Di Paola's brand new OWASP Myth Breakers Project.
  • Create a space for customer comments on appsec tools (free as well as commercial). Something like AppStore reviews, good and bad.
  • Start to compare blackbox and whitebox scanning tools. I suggest we go for a synthesized testbed (i.e. a controlled environment) and invite tool vendors/builders to take part. They get a workday to configure their tools and then we go. The testbed, configurations, and versions will all be published along with due reservations such as John Steven's (see the scoring sketch after this list).
  • Start to organize free pentests and design reviews of open platforms. In the best case we cooperate, in the worst case we make our information public in an ethical way to help clients make the right choices.
  • Sign the Open Letter to WebAppSec Tool and Services Vendors: Release Your Schemas and Allow Automation
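
To make the testbed idea concrete, here is a minimal sketch of the scoring step I have in mind: the testbed publishes the IDs of the vulnerabilities it seeds, each tool's findings are mapped to those same IDs, and precision and recall fall out of simple set arithmetic. All class names and IDs below are made up for illustration; this is not an existing OWASP project.

    import java.util.HashSet;
    import java.util.Set;

    // Minimal sketch: score one tool's findings against a published testbed.
    public class TestbedScore {
        public static void main(String[] args) {
            // Published ground truth: IDs of the vulnerabilities seeded in the testbed.
            Set<String> groundTruth = Set.of("XSS-001", "SQLI-002", "CSRF-003", "XSS-004");

            // One tool's reported findings, mapped to the same IDs by a reviewer.
            Set<String> reported = Set.of("XSS-001", "SQLI-002", "XSS-999");

            Set<String> truePositives = new HashSet<>(reported);
            truePositives.retainAll(groundTruth);

            Set<String> falsePositives = new HashSet<>(reported);
            falsePositives.removeAll(groundTruth);

            Set<String> missed = new HashSet<>(groundTruth);
            missed.removeAll(reported);

            double recall = (double) truePositives.size() / groundTruth.size();
            double precision = (double) truePositives.size() / reported.size();

            System.out.printf("True positives: %d, false positives: %d, missed: %d%n",
                    truePositives.size(), falsePositives.size(), missed.size());
            System.out.printf("Recall: %.2f, precision: %.2f%n", recall, precision);
        }
    }

The arithmetic is trivial on purpose. The value lies in publishing the ground truth, the mapped findings, the tool configurations, and the versions, so that anyone can redo the calculation.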

21 comments:

  1. Re SAMATE:

    Are you sure that you are complaining about the right people? Has SAMATE ever released their results?

    Maybe it's an OWASP problem - but - I've never seen those results anywhere so I'd suspect that it's not OWASP that's holding them back in this case.

    Otherwise...

    I think your ideas have some merit - although I'm not sure any of them are someplace OWASP should go.

    First, testing black-box and white-box testers is really one of those problems where you think it is a fly, but it turns out to be an elephant in the distance... Does anyone have the time and money to actually do the rigorous testing that would be valuable? Or would this be, "We tried point and shoot with 10 demo versions without any training"?

    If you want to do this - I'd recommend spec.org as a place to look first.

    Next,

    "Crowd-sourcing" opinions from the opinionated is, IMO, actually worse than doing nothing. Unless you are willing to require real names, proven identities, and full disclosure of affiliations, it'll basically be a popularity/spam contest. It's not hard to see this devolving into a libel-fest.

    Finally,

    I see zero value in OWASP becoming a pentest team. There are already plenty of people and organizations doing pentesting. Unless you can put together some sort of rationale for why _OWASP_ should be involved in this, this idea should be left at the idea stage.

    OWASP has significant value as an awareness organization. Some of these ideas, IMO, risk the reputation of the organization without any obvious benefit.

    Pissing off vendors (who are, incidentally, the ones who pay the bills - there are only 1200 individual paying members) needlessly doesn't seem like a good organizational plan - nor does it seem like a step that helps the goal of spreading awareness.

    Just because "someone" should do it - it doesn't necessarily follow that that someone should be OWASP.

    ReplyDelete
  2. For what it's worth - Aspect participated in the SATE on the condition that the full results would be released. I was shocked that NIST was pressured into suppressing them. I complained bitterly and extensively about the censorship and bad science. --Jeff

    ReplyDelete
  3. @Dan:

    Regarding the SATE study, I'm complaining about the community, including OWASP, accepting this silencing. We're all passively supporting an opaque appsec market, and OWASP can help change that.

    Regarding tool testing I have at least two scientific publications in that area and a third is under way. Comparative studies can be made and are always beneficial if all test cases and configurations are made public.

    Crowd-sourcing opinions has problems for sure. But I'm not willing to kill the idea yet. Moderation and peer pressure may well work.

    The reason why OWASP should organize free pentests of open platforms is both community outreach (we help and build important bridges) and fixing real-world problems in a manner suitable for a charity such as ours.

    I would never piss off vendors for the sake of it. On the contrary, I brought the issue up a couple of times, which eventually led to this blog post. But I still think OWASP's independence has to mean we don't avoid certain projects only because they would upset a vendor. Or a consultancy firm, for that matter.

    For me, "someone" is OWASP. This is the only appsec community turf I know. I trust the people and the organization behind it too.

    ReplyDelete
  4. @Jeff:

    Good to hear you complained "bitterly and extensively"! I did not have the guts at the time. I was just a very disappointed attendee.

    ReplyDelete
  5. I think John's ideas definitely have a place at OWASP and I really like that these are the typical activities that force 'hard problems' to be solved.

    Of course there will be vendors on all sides that will have a problem with OWASP doing this, and if we lose a couple of OWASP members because of it, then that will be a very small price to pay, especially when we should be able to get a LOT more members from the 'non-vendor' community.

    The big elephant in the room is that OWASP is ALREADY very vendor influenced. The simple fact that we are still at the stage of having this debate shows it very clearly :)

    We should have a guideline that says: "When in doubt, take the side of the client's best interests, vs the side of the vendor's interests"

    What I find absolutely ironic is that the AppSec vendors (in most cases) have still not realised that having this data openly available and helping clients consume what they produce will actually INCREASE the market and their potential sales/profits :)

    Now, doing this type of analysis will be very hard, but having an OWASP 'explorers' group that is going in that direction is VERY important (the key is finding a way to crowd-source this)

    ReplyDelete
  6. I am seeing a new place for me to play at OWASP (that may not be what you think). I am more interested in building software, which is why I am signing up to build an app to manage local chapter meetings in a more democratic way. I can probably sign up to build an app to allow for feedback on tools as well. I am passionate about social software and developing web tech, and I love OWASP, so this might be a real sweet spot for me. Build the platform to run the OWASP 4.0 Community on. More when we speak tonight.

    ReplyDelete
  7. The new initiative, especially the builders/defenders thing is counterproductive. I dislike that you are fronting this and involved Michael Coates. Regardless, I am willing to let you learn what OWASP and the appsec industry is about before you attempt to ruin it. If you turn into another Jeremiah Grossman, you will surely make a lifelong industry enemy out of me. This is a threat and I will continue to threaten you for life. Be careful.

    OWASP has always been about defense. The problem is that Jeff Williams allowed Tom Brennan to gain power and influence, who then joined WhiteHat Security and got in bed with Jeremiah Grossman, Robert Hansen, and other idiots. Now Tom Brennan works for Trustwave, which is twice the devil as WhiteHat Security. I would like to see both of these companies go down.

    Nobody in OWASP or at OWASP events that I've attended ever mentions pentesting or scanning (or static analysis) tools to the degree that you are making it out to be. If you want to make it that and have a conversation -- find me and I'll explain it to you for as long as you can listen. I can go for years.

    I will give general comparisons of commercial security-focused static analysis tools. They all suck. I gave a talk at Toorcamp called "Why AppSec Tools Suck". Never buy a commercial security-focused static analysis tool if an open-source or free one can do better. CAT.NET is 30 lines of code -- a pathetic AST parser which relies on a trivial source-sink database. HP Fortify SCA and IBM AppScan Source Edition are no different -- Fortify supports more languages and AppScan SE is more newbie-friendly. Neither does anything useful, nor does Armorize CodeSecure or Checkmarx CxSuite. The interesting ones are Coverity, which implements abstract interpretation, and Klocwork, which is Hoare logic capable (see Hacking Exposed Linux 3E for more info on both techniques). All of the others are AST parsers that can theoretically be 30 lines of code. Lame tools!
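
    To be clear about what I mean by a "trivial source-sink database", here is roughly what such a checker boils down to (a made-up sketch, not anyone's actual product code): a list of source patterns, a list of sink patterns, and a loop that flags lines where both appear. A real tool would at least walk an AST and track flow, but the database itself is just this small.

        import java.util.List;

        // Made-up sketch of a naive "source-sink database" checker: string patterns
        // for sources and sinks, and a flag when both show up on the same line.
        public class NaiveSourceSinkCheck {

            private static final List<String> SOURCES = List.of(
                    "request.getParameter", "request.getHeader", "Console.ReadLine");
            private static final List<String> SINKS = List.of(
                    "executeQuery", "executeUpdate", "Runtime.getRuntime().exec");

            public static void main(String[] args) {
                // Stand-in for source code under review.
                List<String> lines = List.of(
                        "String q = request.getParameter(\"q\");",
                        "stmt.executeQuery(\"SELECT * FROM t WHERE c = '\" + request.getParameter(\"q\") + \"'\");",
                        "logger.info(\"hello\");");

                for (int i = 0; i < lines.size(); i++) {
                    String line = lines.get(i);
                    boolean hasSink = SINKS.stream().anyMatch(line::contains);
                    boolean hasSource = SOURCES.stream().anyMatch(line::contains);
                    if (hasSink && hasSource) {
                        System.out.println("Possible injection on line " + (i + 1) + ": " + line);
                    }
                }
            }
        }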

    The issue with commercial security-focused static analysis tools is cost-effectiveness. For appsec consulting companies (and for their largest and most important customers), this is their bread and butter for making what they do cheaper. For someone who just wants to scan their source code and get results (who isn't in that category) -- they are simply out of luck. Too bad. Go home. Fortify SCA is something like 60k per year per auditor (don't even THINK about doing it per-IDE or per-build-server, use Burp Pro licenses instead for these models). Ounce used to be $3k per app per 2-week assessment, which is really the same price as Fortify. It's good when you are constantly pen-testing apps and want to see results all year round. It's bad if you have a handful of apps or fewer that you want to "see how well they do".

    My first suggestion to you is to stop thinking about tools and what is available to you and start thinking about the concepts. You can accomplish a lot with Burp Pro or Fiddler. You can accomplish a lot with RIPS or CAT.NET. You can accomplish a lot with graudit or whatever else -- as long as you INVEST time in customization and risk analysis. This is why consultants are the ultimate appsec success. You'll get it eventually -- I'm giving you some time to figure it out.

    All of my research is client oriented and independent. I do not promote vendors. I do not promote anything except for affordable, verifiable strategy consulting.

    If I were you, I'd hate Jeremiah Grossman and Robert Hansen. I'd hate Trustwave and WhiteHat Security. I'd hate FishNet Security. I'd hate Accuvant. I'd hate all of the commercial appsec tool vendors. You'll get to the point where I'm at in no time and then real people who do real things will listen to you and care. And you won't sound like a shill or an idiot.

    ReplyDelete
  8. >Regarding tool testing I have at least two
    >scientific publications in that area and a
    >third is under way.

    One seems to be a comparison between 4 simple tools for 1 specific class of vulnerabilities - the other seems to be a review of a single approach/tool - I kind of skimmed - maybe I missed something. Not to belittle what was obviously a significant effort on your part - but the problem space is a LOT more complicated in the real world than either of these cases (which of course was the right choice for the purpose of your papers).

    >Comparative studies can be made and are always
    >beneficial if all test cases and
    >configurations are made public.

    If you're not familiar with spec.org (performance benchmarking) - take a look at their process. If OWASP took the role of spec.org relating to testing tools - that would be great. Let the vendors test their tools "best foot forward" with full disclosure against some OWASP-designed and maintained test cases... IMO, it gets you all of the benefits with none of the associated risk.

    >Moderation and peer pressure may just as well
    >work.

    Against anonymous or unverified identities? Moderation probably works for the "Suks" or "rulZ" cases - but for the FUDdy "Hard to use", "Seemed buggy", etc. it won't. (FYI - my wife and her 100 Facebook friends think NetSparker is buggy and hard to use...). :)

    >The reason why OWASP should organize free pentests

    Maybe we could just lurk on full-disclosure and send out public fixes instead? I think we can come up with better public relations and outreach campaigns than this... At least consider limiting yourself to doing this on projects that invite you to do so.

    >But I still think OWASP's independence has to
    >mean we don't avoid certain projects only
    >because it would upset a vendor. Or a
    >consultancy firm for that sake.

    I agree - and I was using some odd definition of "vendor" that mentally included consultancies. IMO, there are enough other reasons to avoid these projects, and OWASP doesn't need to be everything to everyone - focus is good. :)

    ReplyDelete
  9. I agree that Netsparker is buggy and hard to use. I think that Burp Pro is easy to use, but that HTTP/TLS is complex and once you understand either, then you'll be on your way.

    Check this guy's stuff out while you continue to debate (he also talks about static analysis tools) -- http://andrewpetukhov.blogspot.com/2011/01/web-application-scanner-comparison.html

    ReplyDelete
  10. Just a quick note on SATE. We've participated all 3 years. I am pretty certain they did release full data, at least for 2009 and 2008. They may still be working on their final summary for the 2010 study. The main problem with the study is that not all major vendors choose to participate, which defeats the purpose.

    ReplyDelete
  11. Clarification... by "we" I mean Veracode (this is Chris Eng).

    ReplyDelete
  12. You're right, 08 and 09 are out there - carefully encoded with a thick layer of tedium (tediography?) :)

    Multi-hundred-megabyte gzipped tar files with gigabytes of results in their own XML format. Nothing consumer friendly.

    If you're bored - some of the comments in the analysis sections are fun.

    09 came out in June of 10 - so, maybe 10 will be out here in a couple months...

    ReplyDelete
  13. @dre:

    Hate is a strong word and I feel no hate in the appsec community. Some of the people you say you hate are my friends or colleagues in appsec. I disagree they do OWASP harm.

    However, the history of the appsec business and some of the companies you mention may very well explain why OWASP has such a strong breaker/vendor/consultancy drive today, and not so much of a client-oriented drive. My take on that is to make the Builders and Defenders initiatives measure up. And while I was at it I thought I'd bring up two sensitive things -- security people's bad attitude towards developers, and OWASP avoiding certain projects because of bias.

    Your experiences of static analysis tools are interesting. If we could avoid hate and flame wars I'd be happy to get some input on how OWASP can make some kind of a comparison happen.

    When you say I should stop thinking about tools and start thinking about concepts, you're forgetting I'm a developer, not a pentester, app tester, or such. I like using HTTP proxies and even fuzzers for general app testing as well as security testing. But I'm not at the level of a professional pentester. So good comparison points for me would be how well the tools work in various IDEs, how they integrate with build servers, how easily I can turn bug reports into GUI or unit tests, etc.

    ReplyDelete
  14. @Chris & @Dan:

    Actually, SATE interests me less than the fact that we as a community accept this silence. No matter how hard it is and how much we'll get beaten up by upset vendors I still think we should try. For the sake of clients and for the sake of appsec.

    I completely understand vendors' worries about having their tools measured in an unfair way. That's why we should do this in cooperation. I'm not voting for discrediting any vendor or inventor. I'm just voting against a biased silence.

    @Dan:

    In the first study, from 2002 (written in 2001), those were the tools available. Yes, they were simple compared to today's state of the art.

    In the second study we present a novel approach based on program slicing, or so-called System Dependency Graphs. We pattern match both good and bad security practice in code, and we also address the problem of prioritizing reported problems by exploiting the graph dual (e.g. no input validation point at all is worse than an input validation point that has a narrowing typecast).
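
    As a toy illustration of that ranking (made-up code, not an example from the paper): both calls below pass untrusted input to the same sink, but the second one at least has a validation point (the narrowing cast bounds the value), so an analysis over the dependence graph can rank it as less urgent than the first.

        // Toy illustration: two ways the same untrusted integer can reach a sink.
        public class ValidationPointExample {

            public static void main(String[] args) {
                String untrusted = "70000"; // imagine this came in as a request parameter

                // (a) No input validation point at all: the raw value flows to the sink.
                int raw = Integer.parseInt(untrusted);
                updateQuantity(raw);

                // (b) A weak validation point: the narrowing cast bounds the value to
                //     -32768..32767. Still questionable, but the path now contains a
                //     validation node, so it can be ranked below (a).
                short bounded = (short) Integer.parseInt(untrusted);
                updateQuantity(bounded);
            }

            private static void updateQuantity(int quantity) {
                // Stand-in for a sensitive sink (database update, stock adjustment, ...).
                System.out.println("Updating quantity to " + quantity);
            }
        }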

    I don't believe in anonymous comments either. So maybe established but not public identities?

    Free pentesting, I think, would be more fun than lurking on full-disclosure and fixing stuff. Finding bugs is what drives pentesters, right? Therefore I think we have a better chance of success there. And success in this regard means new connections, better impact, and glory for OWASP.

    ReplyDelete
  15. Sad to see some of these comments. Really sad. Jon, I may not agree with the direction you are taking but it's a valiant effort, and please don't let angry voices stop you from a noble effort.

    BTW DRE, if you think CAT.NET could be built in 30 LOC then you clearly don't understand what it does or how it is built! Not going to get engaged in comment wars but wanted to clarify that point. Read Livshits' original paper and then show me in 30 LOC how you build anything remotely similar and I will send you a very nice bottle of wine to celebrate your technical prowess.

    ReplyDelete
  16. For the record - I am totally not an "angry voice" - I'd go with "strenuous objector". :)

    ReplyDelete
  17. Meh.

    We've all created some good and bad boundaries in the community. A lot of vendors, consultants, and glory-seekers lie about their capabilities.

    I have found nowhere to open up these issues for discussion. OWASP is clearly not the place for them.

    I intentionally tried to make people angry by what I said because sometimes it takes the discussion to the next level. There are few people actually passionate enough to set boundaries about what is allowed or not allowed to continue to go on in our industry. Take this whole patent war for example.

    ReplyDelete
  18. @ Mark Curphey:

    I didn't hear it directly from Matt Miller or anyone inside Microsoft, but the "core" is likely to be less complicated than most people make it out to be. CAT.NET is also a free tool, and it's written in C# .NET itself, which means it's reversible in a decompiled state, correct? So really anyone can take a look inside very easily. See for yourself!

    Innovation in appsec tools is a major issue facing us today. From my perspective, it is not OWASP that is past its prime -- but instead the dominant appsec vendors (i.e. anyone on the Gartner Magic Quadrants). Vendors provide features/services for the customers. The customers' voice is your average Fortune 500 CIO. The features/services they request are trivial compared to the problems that actually need to be solved by the vendors.

    I brought up CAT.NET to solidify a point that I was trying to make: that the tools are terrible and they are not going to get better, so we are stuck with what we have. Many years from now, university level research will potentially grow into a better Fortify or Coverity. And that is our one hope, assuming that patent laws or other industry issues don't squash the research.

    ReplyDelete
  19. John - Awesome idea and awesome post! Same for Michael Coates' post. The Elite Security Professional "you developers did this wrong" attitude needs to be kiboshed. (You'll note I said the same things about the SANS/CWE Top 25 initial release)

    I am eager to participate. Dan summed up my few concerns eloquently: Crowdsourced opinions are a BAD idea, for OWASP, for the community, and for vendors, for many reasons. (discuss offline)

    One important note - I started the OWASP tools benchmarking project and ran several iterations of it, in addition to doing the same for several books. I say this only to follow with - useful and effective benchmarking is darn hard.

    Now that I work on an "appsec tool" I routinely turn down benchmark opportunities. It is not because we have anything to be afraid of - we openly encourage prospects to bake us off against anything/anyone. The reason we do not participate is that so far virtually all scanner benchmarking efforts are extremely poorly executed, due to lack of time or lack of knowledge/experience. Most analysts massively underestimate the time to do a moderate job, let alone a thorough job.

    Now here are benchmarks I would love to participate in, if OWASP would set this up:

    1) There are a few million web applications out there. Let's pick a small sample, say 2,000. This is similar to the number of web apps many large enterprises have (many have many more). Now let's see which tools, vendors, and consultants can test the most apps in say 6 months, with the most accurate results and fewest false positives.

    2) For round two take those 2,000 apps and see who can provide the most code coverage and retesting capabilities each time the code changes, over the course of say 1 year. Who covers the most, in depth and frequency, in 12 months. Excited to know?

    3) Final test - let's see who can make that mountain of data actionable - which tools, consultants, or platforms result in the greatest reduction of risk exposure from those vulns the most quickly. Mitigation, remediation, or educated risk analysis and acceptance (e.g. insurance). Who moves the bar the most and/or the fastest?

    Now - THAT would be an awesome, useful benchmark for the enterprises and business owners who own and have to deal with all this insecure software.

    It is myopic wankery to run another HackMe2 single-app hacking test with tricks built in to derail tools and consultants, tricks that rarely appear in the real world. While those tests make for a fun read, and are awesomely amusing over a beer, they do not help businesses move the bar.

    In conclusion - I would like to dispel a vicious rumor circling these premises:

    I know for a fact that Tom Brennan did not go to bed with Jeremiah Grossman. Jeremiah is ardently and boringly monogamous, and last I checked Tom Brennan is still into women, despite my best advances.

    Cheers!

    ps - Michael's graphic should be the front-page structure for OWASP. Really.

    Arian J. Evans
    Software Security Scanning Sophist

    ReplyDelete
  20. @DRE - CAT.NET was totally re-written since Matt Miller built the original version way back when. Can't really compare the tools. We removed the dependencies on FUGE and many other things, including moving the framework to Phoenix and building a new graph representation engine and node navigation algorithm to optimize performance. It ain't 30 LOC in anyone's currency... We never positioned it as a silver bullet but as a tool that was good in a specific set of circumstances on a specific set of vulns, i.e. where the science could map to the problem space reasonably well.

    ReplyDelete
  21. Well, obviously I disagree with Arian (about everything) and that's ok and happens.

    As for Microsoft and their awesomeness, it's true that they develop amazing things. I just wish the other vendors would recognize their greatness and simplicity. I'm pretty sure Richard Johnson worked with Matt Miller on the Phoenix engine, too, but Microsoft has a lot of employees and I don't ever keep track of any of them.

    ReplyDelete