In my day job as a tester of anti-malware solutions, I often get asked the same question: how do I plan to test against Advanced Persistent Threats, aka APTs? These threats differ greatly from everyday malware, and testing protection against them turns out to be a very different kind of task.
Just about every security company publishes some sort of prevalence data – the little bar charts and top tens showing the most widespread and significant threats. The raw data behind these easy-to-consume representations can be very useful to security experts and testers alike.
It sometimes seems as though anyone with a computer feels qualified to carry out comparative anti-virus testing. But there are many pitfalls that routinely trip up unwary would-be testers and lead to wonky data and odd conclusions. So how do you know which tests are any good?
Anti-virus tests are a bit of a minefield. Why do their results all differ? How do you know whom to believe? What makes one test better than another – or are they all equally brilliant/useless/biased/random? John Hawes takes a look.
The latest anti-malware tests performed by Dennis Technology Labs show that comparative testing can be a strong indicator of how well today's security offerings protect users.