TL;DR Nope, I didn't find a major breach, just an interesting detail in reCAPTCHA's design.
CAPTCHA ain't no good for CSRF
I was told on Twitter that CAPTCHA mitigates CSRF (and, wow, it is "officially" in the OWASP Challenge-Response section). Here is my opinion: it does not, it will not, it should not. Feel free to wipe that section from the document.
CAPTCHA in a nutshell
http://en.wikipedia.org/wiki/CAPTCHA
"challenge" is literally a key to solution (not necessary random, solution can be encrypted inside - state less).
"response" is an attempt to recognize the given image.
Client is a website that hates automated actions, e.g. registrations.
Provider generates new challenges and verifies attempts.
User — a human being or a bot promoting viagra online no prescription las vegas. Nobody knows exactly until User solves the CAPTCHA.
Ideal flow
1. User comes to Client.
2. Client securely gets a new challenge from Provider.
3. Client sends the challenge value with the corresponding image back to User.
4. User submits the form along with challenge and response. Client verifies them by sending over to Provider.
5. If Provider responds successfully, User is likely a pink human; otherwise:
if amount_of_attempts > 3
  BAN FOR A WHILE
else
  GOTO 2
end
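Step 5, spelled out as a minimal sketch (illustrative only; provider, session and the names below are not from any real library):

def verify_captcha(challenge, response)
  if provider.verify(challenge, response)      # Client asks Provider directly
    session[:captcha_fails] = 0
    :human                                     # likely a pink human
  elsif (session[:captcha_fails] = session[:captcha_fails].to_i + 1) > 3
    :ban_for_a_while
  else
    :goto_2   # Client fetches a fresh challenge itself, keeping issuance and attempts coupled
  end
end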
As you can see, User talks only to Client; he has no idea about Provider.
Only one challenge per form attempt is allowed, and three fails in a row = BAN. It does not matter how hard the challenges are: User must try to solve them.
reCAPTCHA is easy to install, free, secure and very popular.
The reCAPTCHA flow
1. Client obtains public key and private key at https://developers.google.com/recaptcha/intro
2. Client adds some JS containing his public key to the HTML form.
3. User opens the page, User's browser makes a request to Provider and gets a new challenge.
4. User solves it and sends the challenge with his response to Client.
5. Client verifies them by calling Provider's API (CSRF Tool template). In case of a failed attempt, User is required to reload the Provider's iframe to get a new challenge: GOTO 3.
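On the Client side, step 5 looks roughly like this (based on the reCAPTCHA verify API as documented at the time; treat the endpoint and parameter names as assumptions and check the current docs):

require 'net/http'
require 'uri'

RECAPTCHA_PRIVATE_KEY = 'your-private-key'  # from step 1

# Returns true if Provider accepts the challenge/response pair.
def recaptcha_valid?(challenge, response, remote_ip)
  res = Net::HTTP.post_form(
    URI('http://www.google.com/recaptcha/api/verify'),
    'privatekey' => RECAPTCHA_PRIVATE_KEY,
    'remoteip'   => remote_ip,
    'challenge'  => challenge,
    'response'   => response
  )
  res.body.lines.first.to_s.strip == 'true'  # body is "true" or "false\nerror-code"
end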
The reCAPTCHA problem
Client knows how many wrong attempts you made (because verification is server-side) but doesn't know how many challenges you actually received (because User gets the challenge via JS; Client isn't involved). Getting a challenge and verifying a challenge are loosely coupled events.
Let's assume I have a script which recognizes 1 of 1000 reCAPTCHA images. That's quite a shitty script, right?
Wait, I have another script which loads http://www.google.com/recaptcha/api/noscript?k=VICTIM_PUBLIC_KEY (demo link) and parses the challenge out of the src="image?c=CHALLENGE_HERE" attribute in the response.
Out of 100,000 fetched images the script solves (more or less reliably) 100 of them and crafts valid requests to Client by putting the solved challenge/response pairs in them.
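A harvesting loop in the same spirit (a sketch only; the regex is approximate and VICTIM_PUBLIC_KEY stands for whatever key the target site exposes in its HTML):

require 'net/http'
require 'uri'

# Pull fresh challenges straight from Provider; Client never sees this traffic.
def harvest_challenges(public_key, count)
  url = URI("http://www.google.com/recaptcha/api/noscript?k=#{public_key}")
  count.times.map do
    html = Net::HTTP.get(url)
    html[/image\?c=([\w-]+)/, 1]  # the challenge token from the image URL
  end.compact
end

challenges = harvest_challenges('VICTIM_PUBLIC_KEY', 100_000)
# Feed each challenge's image to the weak solver and keep the ~100 it cracks.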
Analogy: User = Student, Client = Exam, Provider = Table with questions.
To pass, Student has to solve at least one problem, and he has only 3 attempts. In the reCAPTCHA world, Student walks to the table and looks over all the questions on it, trying to find the easiest one.
You don't need to solve reCAPTCHAs as soon as you receive them anymore. You don't need to hit Client at all to get challenges. You talk directly to Provider, get some reCAPTCHAs with Client's PUBLIC_KEY, solve the easiest and have fun with different vectors.
There are blackhat APIs like antigate_com which are quite good at solving CAPTCHAs (private scripts and Chinese kids, I guess).
With this trick they can create a special API for reCAPTCHA: you send the victim's PUBLIC_KEY and get back N solved CAPTCHAs which you can use in malicious requests.
Mitigation
I cannot say whether it should be fixed, but website owners must be aware that challenges are out of their control. To fix this, reCAPTCHA could return the number of challenges issued and the number of failed attempts along with the verification response.
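Hypothetically (none of these fields exist in the real API), a verification response enriched with counters would let Client notice harvesting:

# Made-up response shape, NOT the real reCAPTCHA API:
#   { "valid" => false, "challenges_issued" => 100000, "failed_attempts" => 2 }
stats = provider_verify(challenge, response)  # hypothetical helper

# Far more challenges fetched than ever attempted: the harvesting pattern above.
suspicious = stats['challenges_issued'] > 100 * (stats['failed_attempts'] + 1)
reject_as_bot if suspicious  # hypothetical handler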
Questions? I realized this an hour ago, so I may well be mistaken somewhere, or maybe I am not the first to discover it. Point it out please.
Saturday, May 18, 2013
CSRF Tool
I facepalm when I hear about CSRF in popular websites. (I was searching for them in the past but then realized that's a boring waste of time).
tell me whatcha gonna do???
A while ago our friend Nir published a CSRF changing Facebook passwords, and it was the last straw. I can recall at least 5 major CSRF vulnerabilities in Facebook published in the last 6 months. This level of web security is unacceptable nonsense for Facebook.
So, here is a short reminder about mitigation:
Every state-changing (POST) request must contain a random token. The server side must check it before processing the request, comparing it against the value stored in the received cookies: cookies[:token] == params[:token]. If any POST endpoint lacks this check, something is clearly wrong with the implementation.
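In Rails terms, that double-submit check is a one-filter sketch (note that Rails already ships protect_from_forgery, which binds the token to the session and is what you should actually use):

class ApplicationController < ActionController::Base
  # Sketch of the cookies[:token] == params[:token] check from above.
  before_filter :verify_csrf_token, if: -> { !request.get? }

  private

  def verify_csrf_token
    unless cookies[:token].present? && cookies[:token] == params[:token]
      head :forbidden  # refuse to process the state-changing request
    end
  end
end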
To make the world a better place I created a simple and handy CSRF Tool: homakov.github.io
- Copy as cURL from the Web Inspector, paste it into the text field, and get a working template in a few clicks.
- No hassle. Researchers need a playground to demonstrate CSRF; with CSRF Tool you can simply hand over a link with a working template.
- No disclosure. The fragment (the part after #) is not sent to the server side, so I am not able to track the CSRFs you are currently researching (GitHub Pages has no server side anyway). The link to a template contains all the information inside it.
- Auto-submit for more fun; Base64 makes the URL longer but hides the template.
- Add new fields and modify existing ones; change the request method and endpoint path seamlessly.
- Post into an iframe (which is carefully sandboxed) or a new window, try referrer-free submission, and so on.
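The fragment trick is easy to reproduce; here is a sketch of building such a link (the JSON template schema below is invented for illustration, not CSRF Tool's actual format):

require 'base64'
require 'json'

# Invented template shape, just to show the Base64-in-fragment idea.
template = {
  'method' => 'POST',
  'action' => 'http://victim.example/update_password',
  'fields' => { 'password' => 'hacked' }
}

link = 'http://homakov.github.io/#' + Base64.strict_encode64(template.to_json)
# Everything after # stays in the browser and never reaches any server.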
Everything is free but donations are welcome :) PayPal: homakov@gmail.com
Tuesday, May 14, 2013
Two Factor Authentication? Try OAuth!
UPD: no wonder; I missed the fact that OAuth providers use static passwords, so it cannot be a legit 2nd factor. It just makes the 1st factor harder to exploit. Thanks for the feedback, people from reddit!
Disclaimer: I'm a noob in Two Factor Authentication (TFA). I got an idea today which I want to share and get feedback on; your comments are totally welcome.
I don't have a mobile phone. Not only because Russian mobile providers are cheaters (likely the same in your country) but also for many other reasons: traveling (my MasterCard was blocked once in Sofia and I needed an SMS approval code, which I couldn't receive; my mobile was "outside the coverage area" all the time), no daily usage (I have never needed to call someone ASAP in real life; maybe I am such a nerd), VoIP FTW, etc. Who cares, this is not my point.
The thing is, all physical items (mobile phones, YubiKeys, token generators, eye biometrics, fingerprints) are clone-able / steal-able, or just not reliable enough (face/gesture/speech recognition).
Again, as I said in the disclaimer, I don't know whether scientists have already created a universal, reliable physical object for TFA; I just read the wiki article a bit, and it seems they have not.
Why must the second factor be a physical object in our digital century? Is it really any better or safer (it is clearly less convenient) than yet another password or the bunch of cookies our browsers store? I doubt it.
In browser we trust.
OAuth is not supposed to authenticate you, no surprise here. However, an OAuth (or OpenID) provider can be a trusted 3rd party which approves the action you are about to commit.
Trusted 3rd Party Website
- Every normal Internet user has, or can immediately register, a Facebook/Twitter/PayPal/Google account with no "physical" hassle attached.
- Attack surface is added; attack complexity increases dramatically.
example.com surface + twitter surface + facebook surface = a hacker needs an XSS or similar bug in two major social networks plus your example.com password to log in to your example.com account.
Not enough? Add Paypal Connect. Add a force-login option so the attacker will need all of your passwords.
The more guys say "John is a reliable person, I can trust him", the more I believe he really is. And I don't need to look at John's tattoo (a poor analogy for biometrics) which he hates to show!
- Hassle-free. Just stay logged in to FB/Twitter all the time, and a couple of quick OAuth redirects in iframes (no interaction required at all) will make sure that your current FB account is the one attached to your example.com account, and that your current Twitter user equals the one attached to example.com.
It can be simplified and made more secure, because you only need the /me endpoint data; the actual access_token will not be used.
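As a sketch, the check example.com performs boils down to comparing identities (generic pseudocode on top of any OAuth client library; oauth_get and the attribute names are my assumptions):

# Second factor passes only if the live OAuth identity matches the stored one.
def second_factor_passed?(user, provider, access_token)
  me = oauth_get(provider, '/me', access_token)  # hypothetical helper
  # Only identity data is needed; the access_token is never stored or reused.
  me['id'] == user.linked_account_id_for(provider)
end

# With force-login enabled the provider re-asks for the password,
# so the attacker needs every password, not just example.com's.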
I am leaving the post short on purpose and waiting for your ideas; perhaps I missed something huge. Thanks!
Saturday, May 4, 2013
Do not use RJS-like techniques
RJS (Ruby JavaScript) is a straightforward technique where the server side (e.g. a Rails app) responds with JavaScript code and the client side evals it. (Writing JS in Ruby is unrelated; I only consider the respond-and-eval concept!)
Here are my usability and security concerns about this interaction.
Possibly other developers use their own RJS-like techniques — they can find my post helpful too.
(Image from http://slash7.com/assets/2006/10/8/RJS-Demistified_Amy-Hoy-slash7_1.pdf)
- Broken concept & architecture. This feels as weird as the client side sending code that is going to be evaled on the server side... wait... Rails programmers had a similar thing for a while :/ Any RJS-like technique can be painlessly split into client-side listeners and server-side data (see the sketch after this list).
- Escaping user content and putting it into JavaScript can be more painful and have more pitfalls than normal HTML escaping. Even the :javascript section used to be vulnerable in HAML (</script> breaks .to_json in Rails < 4). There are more special characters and XSS vectors you should care about.
- JSONP-like data leaking. If the app accepts GET requests and responds with JavaScript containing private data, an attacker can use fake JS functions to leak data from the response. For example, the response is:
Page.updateData("data")
and attacker crafts such page:
<script>var Page={updateData:function(leak){ ... }}</script>
<script src="http://TARGET/get_data.js?params"></script>
Voila, the joys of RJS. UPD: as pointed out on HN, evaling the response will also mess with your global namespace, and there is no way to jump back into the closure you created the request in.
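The refactoring mentioned above ("client's listeners and server's data") is small. A sketch, assuming a Rails controller and an XHR on the page:

class DataController < ApplicationController
  # Respond with plain data instead of executable JS.
  def get_data
    # A JSON object is inert: a fake Page.updateData on an attacker's page
    # cannot intercept it the way it intercepts an evaled JS response.
    render json: { data: @data }
  end
end

# Client side: the page's own JS fetches this via XHR and updates the DOM
# in its listener; no server-sent code is ever evaled.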
The original RJS (Ruby-generates-JS) was removed by Yehuda Katz 6 years ago; he gave me this link with more details.
But I still see apps in the wild responding with private data in JavaScript. This is a very fragile technique; refactoring will improve both the code quality and the security of your app.
P.S. Cool, GH uses cutting-edge HTML5 security and added CSP headers. My thoughts:
- Rails 4 has built-in default_headers (guess who added it?), which performs better than a before filter.
- The current CSP is easily bypassed with a JSONP trick; just type this in the console on github.com:
document.write('<script src="https://github.com/rails.json?callback=alert(0)//"></script>')
I will fix it when I get spare time; btw: https://github.com/rails/rails/pull/9075
- CSP is very far from ultimate XSS prevention. Really very far, especially given Rails's agility and jquery_ujs. GitHub should consider privileged subdomains: https://github.com/populr/subdomainbox