Anti-virus or, as we say now in the industry, anti-malware testing has been around for years.
These tests and comparatives are the consumer reports of the IT security industry, aimed at educating both the anti-malware developer and the consumer on how a product performs.
There’s been a fair bit of activity in the anti-malware testing world lately – both AV-Test and AV-Comparatives released major reports last week, and at Virus Bulletin we’re putting the finishing touches to our latest comparative on Windows XP, due out in the next week or so.
As usual at this time of year I’ve been getting a lot of people asking me, why are they all different? How do I know who to believe? What makes one test better than another, or are they all equally brilliant/useless/biased/random?
They’re never easy questions to answer.
Testing anti-malware products is a complex and difficult process, and ‘reading’ tests – judging their quality, significance and relevance to one’s own personal requirements – can be equally taxing.
So, I thought it might help to put together some simple points about how to spot a quality test, and how to judge the relevance of its findings.
Certifications vs. comparatives
To start with, traditional testing falls into two main types: certifications and comparatives.
Comparatives pit a bunch of products against each other, hopefully on a playing field that is at least fairly level, and report which does best, often combining a number of different metrics.
Certification schemes, which I’m going to focus on today, tend not to rank products in order. They instead set a fixed standard and award a badge to all products reaching that standard.
In reality, many testers who have traditionally run comparative testing have moved to a kind of hybrid model, providing some sort of certification awards alongside their comparative figures (this has been Virus Bulletin’s approach for many years now).
How certifications work
The ‘pure’ certification schemes tend to provide little or no information on what goes on behind the scenes. They work with vendors to ensure the required standard is reached, and once it is, the certification is awarded, usually with ongoing testing and consulting to ensure the standard is maintained.
Hybrid approaches are usually a little more open, publishing details of how all participants performed and calling out those which did badly.
While in some cases this may become a form of bullying (pressuring developers to stump up for consultancy fees to solve the problems reported in a test), in most cases honest and legitimate testers will provide participating vendors with ample information to confirm, diagnose and fix any issues they spot.
So, the first thing to consider when a product wins a badge in a test is, what does that mean? What is the standard required to earn the award?
This causes quite some confusion in itself. A surprising number of people assume that holding a ‘certified’ badge is a mark of extreme brilliance, that only the very best and brightest could possibly attain such dizzy heights, but that is rarely the case.
Certainly in the case of the VB100 award (which I operate), the certification itself is meant as a mark of basic competence, showing a product is reasonably well put-together and maintained – but not necessarily more than that.
For example, if you’re in an elevator and you see a safety certification mark such as TÜV, you know it means the elevator has been tested for conformity to safety standards. You don’t assume that it must also have extra-shiny walls, super speeds, or one of those little fold-down chairs for when you get tired.
This is how most certifications should usually be understood. Passing is a sign of quality in the sense of quality assurance, not necessarily an indicator of surpassing innovation.
Finding out exactly how the baseline for a given certification is set is not necessarily an easy task. This in itself should be something of a warning signal – one of the most basic requirements for judging the quality of a test is access to a complete and detailed methodology, and if the methodology is inadequately described, there’s no real way for the consumer to tell if the test is worthwhile.
The Anti-Malware Testing Standards Organisation (AMTSO), a cross-industry group working to improve testing, included in its ‘Fundamental Principles of Testing’ document the words “Testing should be reasonably open and transparent”, and this seems like a pretty obvious thing if the end-users of tests are supposed to understand and trust their findings. (Full disclosure – I’m on the board of directors of AMTSO, and helped draft several of its documents.)
At Virus Bulletin, our current requirements are quite simple – products must detect (in standard static scanning and on access) 100% of samples from the most recent ‘WildList’ (a small but carefully selected set of malware samples which are independently validated and proven to be affecting real-world users). At the same time there must be no false positives in our in-house set of clean files.
We can keep it this basic because our reports also contain a wide range of other metrics, including several other detection measures and a number of speed and performance measurements, as well as a recently added stability rating system.
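To make that baseline concrete, here is a minimal sketch in Python of the kind of pass/fail logic it boils down to; the function name and sample data are hypothetical illustrations, not Virus Bulletin's actual test harness.

```python
# Minimal sketch of a VB100-style baseline check.
# The names and data structures here are hypothetical illustrations,
# not Virus Bulletin's actual tooling.

def meets_baseline(wildlist_detections, clean_set_false_positives):
    """Pass only if every WildList sample was detected (static and on-access)
    and there were no false positives against the clean set."""
    detected_everything = all(wildlist_detections.values())
    no_false_positives = clean_set_false_positives == 0
    return detected_everything and no_false_positives

# A single missed WildList sample, or a single false positive, means no award.
results = {"wildlist_sample_01": True, "wildlist_sample_02": False}
print(meets_baseline(results, clean_set_false_positives=0))  # False
```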
Other certifications will likely include a wider range of requirements – for example, ICSA Labs, one of the biggest players in general security certification, has requirements which include logging and administration features to ensure products are usable and provide reliable reporting as well as quality protection.
Both AV-Test and AV-Comparatives base their awards on a combination of factors from their overall test suite; AV-Test scores each product on a range of areas and sets a passing score which their combined total must reach, while AV-Comparatives has a multi-level award system giving higher value awards to the top performers.
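As a rough illustration of the difference between the two approaches, here is a hedged Python sketch; the weights, thresholds and tier labels are invented for the example and are not the labs' real scoring rules.

```python
# Illustrative only: the numbers and tier labels below are invented,
# not AV-Test's or AV-Comparatives' actual scoring rules.

def combined_total(protection, performance, usability):
    """AV-Test-style approach: score each area, then sum into one total."""
    return protection + performance + usability

def certified(total, passing_total=10.0):
    """Award the certification if the combined total reaches the passing score."""
    return total >= passing_total

def tiered_award(detection_rate):
    """AV-Comparatives-style approach: higher-value awards for top performers."""
    if detection_rate >= 0.99:
        return "top tier"
    if detection_rate >= 0.95:
        return "middle tier"
    return "entry tier"

print(certified(combined_total(5.5, 4.0, 5.5)))  # True (15.0 >= 10.0)
print(tiered_award(0.97))                        # "middle tier"
```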
Summary
There can be value in certification tests. But before you even think of relying on a badge or award to judge the relative quality of one solution over another, dig a little deeper into its background and find out what exactly is required to earn that badge.
I was going to include links in here to some of the major test labs’ methodologies, but maybe it’s better to let you find them for yourselves – go to your favourite tester’s website, and dig out their description of how they do their tests and how they grant their awards. Or, just Google the lab name and the word ‘methodology’.
If you can’t find out the full details of how a test works, ask yourself, why not? What’s the big secret? Why should I just take it as read that this test is being done right?
If you do manage to find it, read it through. Does it tell you all you need to know? Do you come away from it with a clear idea of what is being tested, and how?
Most importantly in the case of certifications, does it clearly state what the baseline requirements are for earning the award?
If so, then you’re in luck – this basic understanding is the first step on the road to being able to properly follow and apply the results of a test, compare it with others, and perhaps even find out which really is the best product for your particular needs.
There’s still some way to go though – in my next article, we’ll look a bit more closely at the various group tests and comparatives.
I have heard numerous times that AV-Test, um, encourages financial consideration in return for positive reviews. Any truth to that? I'm sincerely curious as I've heard it many times.
"Consumer Reports" magazine evaluates AV programs at least once a year for the last several years or more. I guess they may not be "leading experts" but at least they're impartial since they don't accept advertising and their tests consist of having the AV programs being tested search for known viruses as well as new ones CR makes up for the tests.
What I find interesting is that even their best rated programs are rarely excellent on detection, and that they usually recommend AV freeware is usually inferior to the mediocre premium versions they evaluate. I don't know how they decide on which AVs to test but they seem to focus on big "Brand Names" such as Norton, McAfee, MSE, Kaspersky etc.
Correct, but with a small caveat: I don't think Consumer Reports evaluates antivirus programs for Macs. They seem to evaluate programs meant for PCs only.
And I have heard that Antivirus companies write viruses themselves. Any truth to that? I'm sincerely curious as I've heard it many times.
Ludicrous to say the least. Conspiracy nuts love to push that one.
Of course employees of the AV companies write viruses and malware — they are always exploring those characteristics that must exist in order for malware to propagate; these essential characteristics become the recognition criteria for their detection algorithms.
These virus authors serve the same function as the scout team for a football club. These guys are the most versatile players on the team.
Lance ==)——————-
If memory serves right, it was McAfee or Norton that got caught doing that, as they signed it with their own digital signature (one of the larger AV companies of the time, anyway).
Oh, and nice article – it was a good read.
Thanks, John, for this very informative article. I am often asked what the various certifications mean, and it is sometimes difficult to provide an accurate but brief answer. I will direct anyone with these questions to this very useful blog post.
Thanks again and I am looking forward to your next related article.