Kim Dotcom's coders hacking on Mega's cryptography even as we speak - true "perpetual beta" style

Filed Under: Cryptography, Featured, Privacy

Kim Dotcom's new file sharing storage venture, Mega, wants to shield itself from accusations of failing to take action against piracy.

It does so by using cryptography to make sure it doesn't see, and indeed cannot tell, what you've uploaded.

That provides privacy for you (other people, including Mega's own staff, can't snoop on your files) and deniability for Mega (other people, including Mega's own staff, can't even tell what your files might be).

But to deliver on that promise, you have to get the crypto right.

As we explained yesterday, early indications were that Mega's coders hadn't done so: we wrote about problems with entropy (randomness), deduplication and the use of poorly-chosen data in Mega's sign-up emails, needlessly making password dictionary attacks possible.

Researchers at hacking group fail0verflow subsequently wrote up yet another cryptographic error: the use of a faulty hash function to authenticate the JavaScript code that makes the whole service work.

I'll try to explain briefly.

You'd expect a secure site to deliver all its content via HTTPS, including (or perhaps especially) the JavaScript code that drives the cryptography.

Mega uses HTTPS, but to save money it serves up most of its JavaScript from "cheap" servers in distributed content delivery networks that use 1024-bit RSA keys.

→ Shorter RSA keys require less CPU power, so you get more performance for a given outlay. But distributed networks mean more servers and require widespread key distribution, so you increase the risk of a compromise.

To raise overall security, Mega also serves up, from centralised servers that use 2048-bit keys, a list of cryptographic hashes for the content hosted in the "cheap seats". These centralised servers also serve up the cryptographic code needed to verify those hashes.

That means that you would have to compromise the 2048-bit-protected servers and the 1024-bit-protected servers in order to serve unauthorised scripts from the lower-security part of the network.

Lo! A neat way to have your security cake but pay less to deliver it.

Yesterday, however, fail0verflow documented the crypto code served up from the higher-security servers.

The hashing code used to authenticate the JavaScript coming from lower-security servers was the function h() below:

function h(s) {
    var a = [0,0,0,0];
    var aes = new sjcl.cipher.aes([111111,222222,333333,444444]);
    s += Array(16).join('X');

    for (var i = s.length & -16; i--; ) {
        a[(i>>2)&3] ^= s.charCodeAt(i)<<((7-i&3)<<3);
        if (!(i&15)) a = aes.encrypt(a);
    }
    return a;
}

Don't worry if you don't speak JavaScript. All you need to know is this:

  • The function h() is an algorithm known as a keyed CBC-MAC.
  • CBC-MACs aren't secure unless the key is secure. You can create a forgery if you know the key.
  • The key used in h() is hard-wired and insecure. It's right there: the JavaScript array [111111,222222,333333,444444].

Ouch. You can't use CBC-MACs like that. You need to use a proper cryptographic hash instead.

SHA-3 or SHA-256 would probably be a good idea. As fail0verflow pointed out, SHA-1 would be OK and even MD5 would be better than the code above, despite MD5's known cryptographic weakness.

The silver lining, of course, is that cloud services can issue updates nearly instantaneously. Just serve the new code the next time a user visits your site.

(Yes, you sidestep traditional change control, because the user doesn't get to choose if or when to update. But you also avoid change control sluggishness, because the user doesn't get to choose if or when to update.)

And, sure enough, by this afternoon, Mega had taken fail0verflow's advice.

The 2048-bit-protected crypto code that protects the 1024-bit-protected crypto code has been updated to:

function sha256(d) {
    h = new sjcl.hash.sha256();
    for (var i = 0; i < d.length; i += 131072) {
        h = h.update(d.substr(i,131072));
    }
    return h.finalize();
}

Now, SHA-256 is used instead of CBC-MAC with a hard-wired key.

Good work by Mega's coders to get the fixes out quickly. But if you're relying on Mega for your own privacy, you probably want to ask yourself why they didn't get the basics of their crypto implementation right up front.

Commenters on our previous article offered the reasonable suggestion that Mega's main cryptographic need is self-preservation: ensuring its own deniability in order to avoid getting taken down along with its users' data. Your privacy and security comes in second place.

There are two really simple solutions if that's true. You can choose either or both: use a different provider, or encrypt the data yourself before you let Mega encrypt it.

In fact, why not encrypt your own data yourself, always and anyway? It's your data, after all.


10 Responses to Kim Dotcom's coders hacking on Mega's cryptography even as we speak - true "perpetual beta" style

  1. Jeremy · 988 days ago

The best idea is always to encrypt your data yourself using AES-256 or stronger and then upload it to Mega if you need someone else to access it securely. I think they have done a great job with security compared to Apple, Oracle and Windows. (A patch in a few hours is pretty good)

    • Paul Ducklin · 988 days ago

Actually, this wasn't really a patch, so what Mega has done here can't be compared to the sort of patches published by the companies you mention.

      Aside from lifting out the h() function and dropping in sha256(), all that was really needed was to re-generate the list of checksums for the files in their JavaScript source set.

And Mega's content delivery system would already need an automated way to rebuild that data when any of the checksummed files changed anyway.

      • Jeremy · 987 days ago

I realise it is not a patch, since it doesn't need to be installed on consumers' computers, but in any case it's the time it took them to make the final fix, as well as the fact that they have embraced the open source idea. As a Facebook comment also notes, there is too much negativity at Sophos in regard to Mega.

        • Paul Ducklin · 987 days ago

          You mean the negativity which caused me to write, "Good work by Mega's coders to get the fixes out quickly"?

          And I've written precisely two articles about Mega (the second IMO really just a follow-up to the first), in one of which I offered some overt praise. Might be a *bit* harsh to accuse me of "too much negativity", wouldn't you say?

          • Jeremy · 987 days ago

            The negativity hasn't been as strong as previously but it's been present. I'm not the only person stating this by any means.

  2. Gary · 987 days ago

    Am I missing something?

If you send something to their server encrypted with the h() function and try to read it with the sha256() function you won't be able to, so the old h() function must still be in the code and usable?

    • Johan · 987 days ago

      This isn't about files submitted by users.

      This is about their own JavaScript code used for the MEGA website, which they download via an insecure connection and then verify using a piece of JavaScript that IS hosted on a secure connection.

    • Carlos · 987 days ago

I believe the h() and sha256() functions only deal with checking their own JavaScript code. Sort of self-signing code.

      • Paul Ducklin · 987 days ago

        That's right. The idea is that if you manage to compromise any of the code on the low-security (read: cheap-to-operate but faster) servers, you also have to compromise the code-checking-code on the high-security servers so the checksums still match.

        Ergo, the high-security servers don't need to have as much bandwidth (and don't need their content and crypto keys replicated so widely across the net), because they aren't under the sort of load they would be if Mega served all the JavaScript from the high-security servers...


About the author

Paul Ducklin is a passionate security proselytiser. (That's like an evangelist, but more so!) He lives and breathes computer security, and would be happy for you to do so, too. Paul won the inaugural AusCERT Director's Award for Individual Excellence in Computer Security in 2009. Follow him on Twitter: @duckblog