Naked Security readers will be well aware of the great TrueCrypt mystery.
TrueCrypt is, or was, a long-running software project that claimed to provide strong encryption that you could use for free on Windows, Linux and OS X.
Over the years, it became popular for many reasons, notably that it was free, cross-platform and apparently untainted by association with governments or commercialism.
It also had a feature called plausible deniability that gave it cachet amongst cypherpunks and privacy activists, even if they weren’t planning to use this feature themselves.
Plausible deniability works by letting you create an encrypted file with two passwords.
One password decrypts the content you really want to keep secret, while the other cunningly decrypts a bunch of innocent data to fool anyone who forces you, by fair means or foul, to reveal your password.
→ Be careful. This sort of feature can be a double-edged sword. Firstly, it’s harder than it sounds to maintain your fake data so that it actually looks plausible when you decrypt it. Secondly, if someone is determined to extract your data under duress, they’ll just ignore the first password you give them and keep squeezing you until you cough up the second.
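The two-password idea can be sketched in a few lines of Python. This is emphatically not TrueCrypt's on-disk format — just a toy container with a hand-rolled keystream and made-up passwords (real volume encryption uses a proper cipher mode and key derivation function) — but it shows how each password can decrypt its own, self-consistent region:

```python
import hashlib

def keystream(password: bytes, salt: bytes, n: int) -> bytes:
    """Toy keystream from repeated SHA-256 hashing.
    (Illustration only; real systems use a real cipher and KDF.)"""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(password + salt + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# Two 16-byte payloads: the decoy you are willing to reveal,
# and the data you actually want to protect.
decoy  = b"holiday.photos.."
secret = b"the.real.papers."

# The container is simply both regions, each encrypted under its own
# password; without a password, either region looks like random noise.
container = (xor(decoy,  keystream(b"password-i-reveal", b"outer", 16)) +
             xor(secret, keystream(b"password-i-keep",   b"inner", 16)))

# Either password "works": it yields plausible plaintext for its own
# region and says nothing about whether another region even exists.
revealed = xor(container[:16], keystream(b"password-i-reveal", b"outer", 16))
hidden   = xor(container[16:], keystream(b"password-i-keep",   b"inner", 16))
print(revealed)  # → b'holiday.photos..'
print(hidden)    # → b'the.real.papers.'
```

The decoy region is what you hand over under duress; the point of the design is that nothing in the container proves the second region is anything other than free space.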
You could even get the source code for TrueCrypt, as a sort of implicit guarantee that there were no shabby secrets or backdoors hidden in there.
But it wasn’t truly open source, since you couldn’t then do what you liked with that source code.
Furthermore, the developers were anonymous and the development process closed: you couldn’t go to conferences, for example, and openly meet the coders and ask them what was coming next in the product, and when.
So, despite (or, perhaps, because of) the apparent anti-commercialism of the software, there were certain commercial challenges in using it, not least that you couldn’t tell who you were dealing with, or what might happen to it next.
That’s a bit of a risk with any software product, especially one with the primary purpose of turning your precious data into shredded cabbage with the promise that you will be able to unshred it later.
Fair enough, of course: the coders provided it for free, and if you really wanted, you could use their source code to help you write your own replacement, but it wouldn’t be TrueCrypt and you couldn’t use the name to suggest it was.
What that meant was that the standard open source practice of forking was impossible.
Forking a software product gets its name from the Unix system call fork(), by which new processes are created.
When a process is forked, the new process, called the child, is a clone of the parent: it has the same code and data in memory, so it inherits access to all resources that the parent had.
Open file handles, network sockets, indeed all the data structures that the process currently has in memory, are duplicated so that the child process starts as a replica of the parent.
Thereafter, of course, the child process can, and does, diverge.
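Python exposes the same system call as os.fork(), so the clone-then-diverge behaviour can be demonstrated directly (Unix only; the variable names here are illustrative):

```python
import os

shared = ["same", "code", "and", "data"]  # exists before the fork

pid = os.fork()  # clone the current process

if pid == 0:
    # Child process: fork() returned 0. The child starts with a copy
    # of the parent's memory, open file descriptors and other resources.
    shared.append("child-only")   # modifies only the child's copy
    os._exit(0)                   # leave without running parent cleanup
else:
    # Parent process: fork() returned the child's pid.
    os.waitpid(pid, 0)            # wait for the child to finish
    # The parent's list is untouched -- once forked, the two processes
    # diverge, just as forked software projects do.
    print(shared)                 # → ['same', 'code', 'and', 'data']
```

The child's append never shows up in the parent: after the fork, each process owns an independent copy of the data, which is exactly the property that makes the word apt for software projects.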
So when you fork a software project, you start off with an exact copy of the original, with the freedom to implement and experiment with changes that the original project wasn’t willing, or didn’t have time, to try.
If you’re a programmer, you’ll also be familiar with the word branch, which is an offshoot of a software project created in exactly the same way.
Technically, the road metaphor of a fork and the railway metaphor of a branch refer to the same process.
Generally speaking, however, branches are code variants that the coders on a software project start and maintain themselves, perhaps merging the changes back into the mainline later.
Forks, on the other hand, are usually more serious splits intended to deliver two independent versions of the project.
If programming were a religion [It isn’t? Ed.], you wouldn’t call it a fork, you’d probably call it a schism.
If the fork is better, it will take over; if it isn’t, it will wither away; if it caters to a different crowd, both new and old versions will continue, growing into similar but different products used more widely overall.
Anyway, as you will no doubt have heard, the developers of the don’t-care-about-commercialism TrueCrypt product recently decided to call it a day.
They shut down the project abruptly, declared it insecure, and published one last version that could only decrypt, as a final way for you to unshred your cabbage.
Conspiracy theories abounded:
- The NSA made them shut it down, because the product was too secure!
- Hackers got into their website and stole their code signing key, then set about destroying the product to push users onto tainted alternatives!
- Malicious actors forced them to introduce covert backdoors, and this was the way of telling us without actually saying so!
- It was all a bit of a hoax to raise awareness of encryption, so keep calm and carry on!
Well, the mystery is now solved.
The coders have called “game over” on the project, and they’ve decided to take their ball home, too, by refusing permission for a fork.
It seems they were planning to retire the codebase anyway, even if the project had continued, and they intend to stick to that decision:
I am sorry, but I think what you're asking for here is impossible. I don't feel that forking truecrypt would be a good idea, a complete rewrite was something we wanted to do for a while. I believe that starting from scratch wouldn't require much more work than actually learning and understanding all of truecrypts current codebase.
And that would seem to be that.
Solution is the new mystery
Except, of course, that an anonymous message on Pastebin can never be considered definitive.
So the mystery, far from being solved, might now be deeper than ever:
- The NSA made them say it!
- Hackers did it! [Posting on Pastebin is not “a hack”. Ed.]
- It’s still a hoax, you’ll see!
What’s your theory?
For further information
We’ve also put together an information page (yes, we’re suggesting a commercial replacement!) at http://sophos.com/truecrypt.