Anyone here old enough to remember MS-DOS?
In those days, memory protection meant putting the lid back on your computer properly, process separation meant having two computers, and the term “sneakernet” was a tautology.
Code and data were gloriously undifferentiated, to the point that deliberately interleaving machine instructions and data variables in your programs was perfectly normal.
Indeed, many programs started with a JMP instruction that caused the CPU to hop forwards in memory, skipping over things like error messages, menu screens and other data tables, to land in the executable part.
The unused memory starting at the end of the executable part was used for temporary storage needed while the program ran.
In fact, when your application loaded, its so-called uninitialised variables ended up “initialised”, often rather interestingly, with whatever was left over in memory from the previous program.
(Programs were always full-blown “applications” back then. The newfangled diminutive “app” didn’t exist, which is ironic when you consider that modern apps are thousands of times, sometimes even millions of times, larger than old-school applications.)
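To make that concrete, here’s a minimal C simulation of the effect (the names, sizes and memory layout are invented purely for illustration, not taken from any real program): nothing wiped conventional memory between one program exiting and the next one loading, so an “uninitialised” buffer simply inherited whatever the previous program had left behind.

```c
/* A minimal simulation (not real MS-DOS code) of why "uninitialised"
 * variables often contained leftovers: conventional memory was not
 * cleared between program loads. All names here are invented. */
#include <stdio.h>
#include <string.h>

static unsigned char ram[1024];        /* stand-in for conventional memory */

static void run_first_program(void) {
    /* The first program scribbles something into memory... */
    strcpy((char *)&ram[100], "TOP SECRET PAYROLL DATA");
    /* ...and exits without clearing anything up. */
}

static void run_second_program(void) {
    /* The second program's "uninitialised" buffer happens to land on
     * the same spot in RAM, so it starts life full of leftovers. */
    const char *uninitialised_buffer = (const char *)&ram[100];
    printf("Leftovers found: %s\n", uninitialised_buffer);
}

int main(void) {
    run_first_program();
    run_second_program();              /* prints the first program's data */
    return 0;
}
```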
If anything went wrong with your MS-DOS program – a buffer overflow, for example, or a corrupted return pointer, or just a wrongly directed jump caused by some other sort of bug – then the results were almost always catastrophic, at least for your data.
When a crash involved a memory access that went wrong, the destination address would often be the RAM in your video card.
This gave wild results, because every even byte denoted the character to display and every odd byte denoted the colour combination to use, so a stray write splattered the screen with absurdly abstract art.
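If you’re curious how that two-byte-per-cell layout worked, here’s a rough sketch in C. It simulates the screen buffer in an ordinary array rather than poking the real video RAM at segment 0xB800, and the helper name and sample text are made up for illustration.

```c
/* Sketch of the 80x25 text-mode layout: each screen cell is two bytes,
 * the character at the even offset and the colour attribute at the odd
 * offset. Real DOS code wrote straight into video RAM at 0xB800:0000;
 * here we just simulate the buffer so the example runs anywhere. */
#include <stdio.h>

#define COLS 80
#define ROWS 25

static unsigned char video[ROWS * COLS * 2];   /* stand-in for video RAM */

/* Put one character at (row, col) with the given colour attribute. */
static void put_char(int row, int col, char ch, unsigned char attr) {
    int offset = (row * COLS + col) * 2;
    video[offset]     = (unsigned char)ch;     /* even byte: character */
    video[offset + 1] = attr;                  /* odd byte: colours    */
}

int main(void) {
    const char *msg = "Hello, DOS";
    for (int i = 0; msg[i] != '\0'; i++) {
        put_char(0, i, msg[i], 0x1F);          /* white text on blue */
    }
    /* A stray write that lands in this buffer scrambles characters and
     * colours alike -- hence the "abstract art" effect. */
    printf("First cell: char=%c attr=0x%02X\n", video[0], video[1]);
    return 0;
}
```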
And these catastrophes weren’t just occasional annoyances: a busy user might expect to reboot several times a day.
There were no arguments back then about whether you should leave desktop PCs turned on overnight.
First, computers used a lot of power in those days, so you saved serious money by turning them off; second, the chance of your computer running correctly through the night was pretty low; and third, you’d reboot in the morning anyway, just to ensure that you had a fresh start.
Malicious video problems
It was with all of this in my mind that I read a recent story on 9to5mac with a dramatic headline: “There’s another malicious link floating around that will cause any iOS device to freeze.”
Simply put, it’s a video that somehow consumes enough resources, or perhaps even triggers what might turn out to be a dangerous vulnerability…
…that you end up with an entirely unresponsive device.
The video eats so much of your iPhone’s lunch, in fact, that you have to reboot by holding the power button for a few seconds to access the iOS shutdown slider so you can restart.
Things can get so bad that the power button alone isn’t enough – after all, the shutdown slider is itself a software control.
If your device is frozen so solid that you can’t even slide the shutdown button, you can do a force restart by holding the power and home buttons down at the same time for 10 seconds. (On an iPhone 7, use power and volume down.)
The bottom line
All said, this does constitute a security risk, even if it’s only a Denial of Service (DoS) in which someone crashes your phone by enticing you to view a booby-trapped video.
At any rate, there will probably be some sort of security fix in a forthcoming iOS update.
However, we think that the bottom line of this story is good news…
…when you think how far we have come in the past 20 or 30 years.
We’ve evolved from the crashtastic ecosystems of MS-DOS and early Macs to a world in which a video that doesn’t play properly is considered cause for security concern, and where an unexpected reboot is rightly written up as something malicious.
If that’s not a sliver of good security news for the Black Friday weekend, we don’t know what is.
Reader comments

You must have run some really flaky DOS applications. My recollection is rock-solid computing: VisiCalc, PC-Write, even Prodigy.
The parity-protected memory in those early computers guarded you against flaky RAM and the instabilities it caused.
Contrast that with today’s applications, particularly those running interpreted languages. Even when you’re not doing anything, you can open Task Manager and watch the memory leak away. If someone had the backbone to write a browser that reported those leaks to the user, the quality of JavaScript would improve immensely.
Parity memory is irrelevant to the article. I am talking about the fact that any part of any program could write anywhere in memory at any time. In other words, almost every buffer overflow or corrupted pointer immediately wrecked the integrity of the system and usually caused it to crash, either immediately or shortly afterwards.
You didn’t really get “memory leaks” in DOS because there wasn’t really a memory manager (and many programs deliberately messed with the layout of already-allocated memory as a “performance hack”).
I’m in Larry’s camp on this one. Sure, there were glitches, especially if you wantonly toyed with every bit of freebie software you could get hold of. But by and large, considering the way things worked (as you described), personal computing in those days was amazingly reliable. Lotus 1-2-3 was an amazing product which fulfilled an incredible number of business needs. And there *were* memory managers – Quarterdeck’s QEMM was awesome – though MS-DOS itself didn’t have much to offer in that department.
Indeed, there were ways to corrupt data and/or code, but the problems generally arose from sloppy programming; any technological weakness in the OS was simply due to a lack of control over that sloppy programming. You wouldn’t need a warden if your prisoners were all well-behaved.
I know this from experience: in the late 80s through early 90s, my wife and I ran a multi-line, multi-user BBS which offered essentially what you find on the web these days as social media, forums, downloads, email, chat rooms, etc… but all (or nearly all) in text form. It all ran under MS-DOS on a 16MHz 386 initially, before our later upgrade to a 33MHz 386. A whopping 4 megs of RAM, woohoo! Lots of room for errant memory movement, no? Well, I was learning the C language “on the job”, essentially by trial and error. I hacked up a modification to the source code to send an email to the SysOp whenever a new user signed up. A very useful feature, but not a great way to learn the significance of memory management. The end result was that under certain circumstances that email would go to some random user. These days, that could have landed my name on the pages of Naked Security in a very negative light!
Anyway, it was a complex system doing many, many things seemingly all at the same time, but it was quite stable except for the problems that *I* induced through my own ignorance. It all comes back to something you yourself keep reminding us, Paul… that programmers play an extremely important role in our security – as well as our sanity! 😉