How one man could have deleted any image on Facebook

We’ve written about insecure direct object references before.

Here’s another one that could have given a bug-hunter called Pouya Darabi the ability to remove other people’s images from Facebook.

Fortunately for the world at large, Darabi told Facebook, who quickly fixed the bug and paid him a $10,000 bug bounty.

Insecure direct object references on websites are where you figure out a way to take a web request that lets you access an item that belongs to you, such as a video, article or image…

…and then deliberately modify the data in the request so that it references an object that belongs to someone else, but in such a way that the server authorises the request anyway, thus implicitly authorising you to access the other person’s data.

In this way, you trick the server into giving you access to something that would usually be blocked or invisible.

As Naked Security’s Mark Stockley very neatly put it in 2016 when describing a long-standing flaw in how domain names were administered in American Samoa (.AS):

Insecure direct object reference[s are] a type of flaw that allows [you] to access or change things that aren’t under [your] control by tweaking things that are.

For example, imagine that there’s an image you can’t access, on a server you want to hack, that’s published via a URL like this:

https://example.net/photos/7746594545.jpg

--- HTTP request generated: ---

GET /photos/7746594545.jpg HTTP/1.1
Host: example.net

Now imagine that after you log in to your own account, you can edit your own private images with a special URL, combined with a session cookie, like this:

https://example.net/api/edit/?image=4857394574.jpg

--- HTTP request generated: ---

GET /api/edit/?image=4857394574.jpg HTTP/1.1
Host: example.net
Cookie: authtoken=HRCALAGJEOWRGTMW

In this made-up example, the authtoken is a session cookie that tells the server that it’s you, and that you’ve already authenticated.

Imagine that the server validates only your authtoken, and doesn’t check the specific image 4857394574 against your account to make sure you really are allowed to edit it.
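
To make the mistake concrete, here’s a minimal sketch, in Python with Flask, of what a vulnerable server-side handler of this sort might look like. (The session store and the names used here are made-up stand-ins for illustration; real code would consult a database.)

from flask import Flask, request, abort

app = Flask(__name__)

# Hypothetical stand-in for a real session database.
SESSIONS = {"HRCALAGJEOWRGTMW": {"user": "you"}}

@app.route("/api/edit/")
def edit_image():
    # Authentication: is the session cookie valid? (This part works.)
    session = SESSIONS.get(request.cookies.get("authtoken"))
    if session is None:
        abort(401)

    # Authorisation: does this image actually belong to this user?
    # That check is missing, so any valid session can edit ANY image.
    image = request.args.get("image")
    return f"editing {image} as {session['user']}\n"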

You may be able to tweak and replay this request with the original, prohibited image filename in it, like this:

https://example.net/api/edit/?image=7746594545.jpg

--- HTTP request generated: ---

GET /api/edit/?image=7746594545.jpg HTTP/1.1
Host: example.net
Cookie: authtoken=HRCALAGJEOWRGTMW
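
Tweaking and replaying the request needs nothing more exotic than a few lines of Python; here’s a hedged sketch using the popular requests module, with the cookie value made up to match the example above:

import requests

# Your own, perfectly valid session cookie (value invented above).
cookies = {"authtoken": "HRCALAGJEOWRGTMW"}

# Ask to edit someone ELSE's image by swapping in its filename.
r = requests.get(
    "https://example.net/api/edit/",
    params={"image": "7746594545.jpg"},
    cookies=cookies,
)
print(r.status_code, r.text)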

In other words, in this hypothetical example, you end up authorised to edit all image files simply by virtue of being authenticated to edit some of them.

That’s a bit like checking into a hotel, getting a key that opens your allocated room, and then stumbling across the fact that it opens all the other rooms on your floor due to a key encoding error.
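
The repair, at least in our made-up example, is one extra authorisation check in the handler sketched above, comparing the requested image against what the logged-in user actually owns. (The OWNERS table is, again, a hypothetical stand-in, and the handler is registered under a second path so the two sketches can coexist in one file.)

# Hypothetical stand-in for a real image-ownership database.
OWNERS = {"4857394574.jpg": "you", "7746594545.jpg": "someone-else"}

@app.route("/api/edit2/")
def edit_image_safely():
    session = SESSIONS.get(request.cookies.get("authtoken"))
    if session is None:
        abort(401)          # not logged in at all

    image = request.args.get("image")
    # Authorisation: refuse unless the image belongs to this user.
    if OWNERS.get(image) != session["user"]:
        abort(403)          # authenticated, but not authorised

    return f"editing {image} as {session['user']}\n"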

Typically, this sort of flaw happens when software is tested to make sure it works when it’s supposed to work, but isn’t tested to make sure it fails when it’s supposed to fail.
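
In testing terms, that means writing negative tests alongside the positive ones; with the hypothetical handler above, a sketch of such a test (runnable with pytest, via Flask’s built-in test client) might be:

def test_cannot_edit_someone_elses_image():
    client = app.test_client()
    headers = {"Cookie": "authtoken=HRCALAGJEOWRGTMW"}

    # Positive test: editing your own image should succeed...
    ok = client.get("/api/edit2/?image=4857394574.jpg", headers=headers)
    assert ok.status_code == 200

    # ...and the negative test: someone else's image must be refused.
    bad = client.get("/api/edit2/?image=7746594545.jpg", headers=headers)
    assert bad.status_code == 403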

The Facebook flaw

Darabi noticed that when he created a Facebook poll with an image attached, he could modify the outgoing HTTP request to refer to other people’s images, not merely his own, by rewriting some of the fields in the relevant HTTP form.

The poll would then show up with someone else’s image in it.

This sort of image substitution isn’t a problem if the substituted image is meant to be public anyway, so this doesn’t feel like much of a bug to start with…

…but when Darabi deleted the poll, which he was allowed to do because he created it, Facebook helpfully deleted the images attached to it, apparently assuming that his authentication to delete the poll extended to the image objects referenced in the poll.

Thus, insecure direct object reference.
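
Facebook hasn’t published the offending code, but in outline the delete path presumably behaved something like this hypothetical Python sketch, where ownership is checked for the poll itself yet merely assumed for the attachments:

# Made-up data: the attacker's poll references the victim's image.
DATABASE = {
    "polls":  {"poll1": {"owner": "attacker", "images": ["7746594545"]}},
    "images": {"7746594545": {"owner": "victim"}},
}

def delete_poll(user, poll_id):
    poll = DATABASE["polls"][poll_id]

    # The poll itself IS checked against the requesting user...
    if poll["owner"] != user:
        raise PermissionError("not your poll")

    # ...but each attached image is deleted on the poll's say-so, even
    # though the attacker chose which image IDs were attached to it.
    for image_id in poll["images"]:
        DATABASE["images"].pop(image_id, None)   # no ownership check!

    DATABASE["polls"].pop(poll_id)

delete_poll("attacker", "poll1")    # the victim's image is now gone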

What to do?

If you’re a Facebook user, you don’t need to do anything.

Thanks to Darabi’s bug report (sweetened for him by that $10,000 payout), this vulnerability has already been patched, so you can no longer rig up a poll that removes other people’s images.

If you’re a programmer, remember to test everything.

Sometimes, “failing soft”, where faulty code causes security to be reduced, is appropriate, such as automatically unlocking the fire escape doors if your security software crashes or the electrical power fails.

At other times, you want to “fail hard”, or “fail closed”, such as not accepting any authentication passwords if you think some of them have been compromised.
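
In code, “failing closed” often comes down to treating any error inside the security decision itself as a denial; here’s a hedged Python sketch of the idea (check_password is a placeholder for your real credential check):

def check_password(user, password):
    ...    # placeholder: consult your real credential store here

def may_log_in(user, password):
    try:
        return check_password(user, password) is True
    except Exception:
        # Fail closed: if the security check itself blows up,
        # deny access rather than waving everyone through.
        return False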

In particular, if there are conditions in your software that the developer assures you “cannot happen”, assume not only that they can but also that they surely will, and test accordingly…