If you’re interested in artificial intelligence (AI) and how it can be used in cybersecurity…
…here’s a DEF CON presentation you’ll like, coming up this weekend!
DEF CON is perhaps the ultimate “come one/come all” hackers’ convention, now in its 28th year, and it famously takes place in Las Vegas each year in a fascinating juxtaposition with Black Hat USA, a corporate cybersecurity event.
Black Hat, where tickets cost thousands of dollars, runs during the week, and then DEF CON, where tickets are just a few hundred dollars, takes over for the weekend that follows, resulting in what can only be described as a Very Massive Week for those who attend both.
At least, that’s how it was last year, and for many years before that.
This year is different, of course – holding a physical conference and running all the many DEF CON Villages would have been impracticable due to coronavirus social distancing regulations, if it would even have been possible at all. (Though you would surely have seen the funkiest facemasks ever!)
The DEF CON Villages are breakout zones at the event where likeminded researchers gather to attend talks and discussions in research fields all the way from Aerospace, Application Security and AI to Social Engineering, Voting Machines and Wireless.
But DEF CON doesn’t give up easily and, like many other events in 2020, has gone virtual, wittily dubbing this year’s event DEF CON 28 SAFE MODE.
Safe Mode is the special, stripped-down mode you use when you boot up your operating system or your mobile phone with a minimal set of drivers and apps – ironically, a mode that ransomware crooks sometimes abuse so they can scramble all your files without the pesky problem of your security and system management software getting in the way.
So, for all that the cancellation of the physical DEF CON event is bad news for those who build it into an annual cybersecurity pilgrimage to Las Vegas…
…the flip side is that you can “attend” this year without travelling at all, and free of charge, too!
So, as we said at the start, if you’re interested in artificial intelligence and machine learning, why not tune in for an AI Village talk that two Sophos researchers are giving on Sunday 2020-08-09 at 09:00 PDT, entitled:
Detecting hand-crafted social engineering emails
with a bleeding-edge neural language model
Why is this interesting? More to the point, why is it important?
Well, one reason is that there is a whole category of cybercrime known as BEC, short for Business Email Compromise, where crooks find a way to pass themselves off as someone important in your organisation such as the CEO or CFO, and send out emails giving false instructions.
Typically, those emails don’t try to trick anyone into clicking links or opening booby-trapped attachments – they often just issue bogus corporate orders such as, “Please use a different bank account number from now on”, or, “Urgent! Please remit this money now but don’t talk about it to any colleagues because it’s an acquisition and we are under a strict non-disclosure rule until later this week”.
In other words, most of the telltale signs that are so useful in trapping conventional spams and scams are missing – in particular, BEC emails rarely include clickable web links or attached files that stand out as suspicious and can be analysed for signs of danger.
Worse still, if the crooks have compromised the email account completely, they have access to the legitimate owner’s own outbox, typically going back months or even years, so they can study the language, company jargon and style that the person would usually use in their own correspondence.
Indeed, the crooks can copy and paste boilerplate text such as greetings, common turns of phrase and sign-off lines so that their fraudulent emails have just the sort of opening and closing remarks you’d expect. (For example, if your CEO would always write, “Dear Paul” and wouldn’t dream of an informal “Hi there, Duck” – or vice versa – then the crooks will know.)
But copying someone’s overall writing style exactly is hard, especially when you are writing things that are the opposite of what the real sender would actually say.
After all, machine learning models are immune to blandishments, threats, flattery and the other tricks that social engineers use when communicating with humans, so they can’t be manipulated into overlooking or excusing the inconsistencies that committing the fraud unavoidably introduces.
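To make the underlying idea concrete, here is a deliberately simple sketch – not the neural language model from the talk, and not any Sophos technology, just a classic stylometry baseline in plain Python. It builds a character-trigram “style profile” from a sender’s past writing and compares a new message against it with cosine similarity: an email whose style diverges sharply from the sender’s history is worth a second look. The function names and sample texts below are hypothetical, invented purely for illustration.

```python
# Toy stylometry sketch (hypothetical, illustration only): compare a new
# email's character-trigram frequency profile against the sender's history.
from collections import Counter
from math import sqrt

def trigram_profile(text):
    """Normalised character-trigram frequency vector for a piece of text."""
    text = text.lower()
    grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
    total = sum(grams.values())
    return {g: n / total for g, n in grams.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(p[g] * q.get(g, 0.0) for g in p)
    norm_p = sqrt(sum(v * v for v in p.values()))
    norm_q = sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

# Invented sample data: "history" stands in for the real sender's outbox;
# the two candidates mimic a genuine follow-up and a fraudulent BEC email.
history = ("Dear Paul, please find the quarterly figures attached. "
           "Kind regards, and do let me know if anything is unclear.")
genuine = ("Dear Paul, please find the revised figures attached. "
           "Kind regards, and do shout if anything is unclear.")
fraud   = ("URGENT!! Wire the funds now to the new account. "
           "Do not discuss this with colleagues. NDA applies!!!")

base = trigram_profile(history)
print(cosine_similarity(base, trigram_profile(genuine)))  # high: style matches
print(cosine_similarity(base, trigram_profile(fraud)))    # lower: style diverges
```

A real system, of course, faces far subtler forgeries than this – which is exactly why the talk’s neural language model, trained on much richer features than raw trigram counts, is the interesting part.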
Viewing the presentation
The live stream has already happened, but you can watch the recorded talk on YouTube:
You might also like…
Here’s a recent Naked Security Live video in which we discuss the human defences you can muster against Business Email Compromise crooks: