Cyber Security DE:CODED – Cheating in security testing

“If they chose the best products by rolling a dice then they should say so”


Show notes for series 2, episode 8

If we’ve given the impression that we’re at the heart of the security world, working with the organisations that spend billions on security – and with the companies that make billions by selling security products – you’d be right. And that puts us in an awkward position. Because we want to make security better for everyone. And sometimes that means speaking some uncomfortable truths.

This episode is the uncomfortable truth episode.

Cheating in security testing

Stay with us as we explore how testers behave, or misbehave. And the different ways, some more honest than others, that security vendors engage with testing. Buckle up.

Please subscribe and join the discussions. Use one of the ‘Listen On’ links above to subscribe using your favourite podcast platform.

Topics
  • Introduction
  • What do security reports tell you about products?
  • Which reviews are worth reading?
  • How to spot a useful review (and warning signs of bad ones)
  • A quick guide to YouTube anti-virus reviews
  • How do security vendors use (and abuse) security reports?
  • Define cheating…
  • Marketing teams (mis)interpreting test reports
Other resources
Transcription

(Generated automatically)

Simon Edwards 0:01
Welcome to DE:CODED, providing in-depth insight into cybersecurity. Can you trust security tests? What do their results actually mean? And how honest are the testers and the security companies that they test? We answer all of these questions and more with our special guest, Richard Ford, from Praetorian. Show notes, including any links mentioned in the show, are available at DecodedCyber.com.

Throughout this series, we’ve covered a lot of subjects related to computer security. We’ve explored which security platforms are the most secure and trustworthy. We’ve dug into the murky world of firewall performance. And we’ve spent a lot of time considering email threats and how to stop them. Cloud security, cryptocurrency and even mental health have all been on our menu since we started series two of Cyber Security DE:CODED at the start of 2022. But we are security testers at heart, and the thing we’re most passionate about is checking that all of the security products, services and approaches we’ve talked about actually work. They cost you loads of money; you should be aware of the strengths and weaknesses of these things.

If we’ve given the impression that we’re at the heart of the security world, working with the organizations that spend billions on security, and with the companies that make billions by selling security products, well, you’d be right. And that puts us in an awkward position, because we want to make security better for everyone. And sometimes that means speaking some uncomfortable truths. This episode is the uncomfortable truth episode. Stay with us as we explore how testers behave, or misbehave, and the different ways, some more honest than others, that security vendors engage with testing. Buckle up.

What do respected security reports tell you about security products? Let’s start by looking at security reports you might download from our website, and the websites belonging to our competitors. These reports usually start off with a list of products, each of which won impressive-looking awards. But have you considered what those fabulous awards mean? How come there aren’t any massive losers in the list? And how hard is the security test anyway? Most people outside of the security industry really just want a shortlist of products to choose from. They aren’t interested in the nerdy details. But there are lots of ways that you can test products, and this is really important when you’re using reports to help you choose which to buy. Because some are useful, and some are confusing at best, if not just plain wrong. In all cases, you need to know how they tested to know how to interpret their results.

Now let’s move away from security just for a minute. You could prod a teddy bear and say, well, that looks good enough. Or you could take it to pieces and analyze every component forensically for build and functional quality. This toy looks safe: its parts are large, soft and non-toxic, and we can’t burn it easily. Plus, it’s got big cute eyes. Now, this could be a baseline for cuddly toys: safe, with cuteness as an extra bonus. But what about security products? What’s a baseline for antivirus, for example? For anti-malware products, we have to consider a few different things, including the following. First, is it really an anti-malware product? Is it at least basically functional? Second, can it detect a good quantity of common malware without blocking lots of useful software? And finally, can it stop the malware, as well as simply detecting it?

How hard do you want your security testing to be? We could take a product, scan a real virus or the harmless EICAR test file, and record that it detected a threat. Is that good enough? It’s good enough to answer the first question: it is at least basically functional. And it should reassure you that you’ve installed the software correctly.
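To make that first check concrete, here’s a minimal sketch in Python. The file name and wait time are illustrative assumptions; the EICAR string itself is the standard industry test pattern.

```python
import time
from pathlib import Path

# The EICAR string is the industry-standard, harmless anti-malware test
# pattern; any basically functional scanner should detect it.
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

test_file = Path("eicar_test.com")  # illustrative location
test_file.write_text(EICAR)  # a vigilant product may even block this write

time.sleep(10)  # give on-access protection a moment to react

if test_file.exists():
    print("Test file still present: check the product's log, or run an on-demand scan.")
else:
    print("Test file removed: basic on-access protection appears to be working.")
```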
But you can’t tell if it’s better than other anti-malware products, because they should all react in the same way. You also can’t tell if the product has extra functionality capable of detecting and stopping other types of threats, of which there are many. Scanning files is a basic way to test anti-malware, and in most cases it’s too basic.

Let’s turn up the dial and throw a wider range of attacks at the products. We could use malware that bad guys use to attack everyone, every day. We call these threats commodity attacks, because they’re all over the place, indiscriminately damaging computers across the globe. As testers, our hope is that all mainstream anti-malware products will detect each of these threats. To us, that’s the baseline. If a product can’t detect all common malware, there’s something very wrong going on.

We go further than that, though. By using forensics, we can tell not only how many threats a product can recognize, but how well it can protect against them, too. An antivirus might say, “I see a virus and I’ve blocked it.” But we don’t trust that claim. We check that it’s really done what it said it did, as sketched below. And we could go harder still and use targeted attacks, designed to evade anti-malware. So we do, and that’s the targeted attacks part of the SE Labs endpoint tests, for example. It goes beyond the baseline and helps show which innovative products are capable of the best protection. This stuff isn’t secret. We go into loads of detail, even including tutorials about threat chains and protection layers, in each of our reports. Make sure that any reports you use are similarly detailed, or ask the tester some detailed questions, like: why don’t they explain what they’ve done? The AA or AAA awards that products achieve in our SE Labs reports show that they go well beyond basic functionality. They show that those products can handle both common and customized threats without blocking the software you need to run on your computer. When you read reports from other testers, check out what they’re giving scores for. Is it detection? Protection? And from what? Saying something is good when you’ve not tested it properly isn’t very helpful.

So we’ve looked at what the baseline of a product is: basically functional, and capable of detecting the most common threats. Now let’s look at the baseline of a security report. What should it include to be credible? How should the reporters themselves behave? And why does any of this matter?
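Here’s a minimal sketch of that forensic double-check. The indicator paths are purely illustrative, and a real test harness would also examine running processes, persistence mechanisms and network traffic, not just dropped files.

```python
from pathlib import Path

def attack_succeeded(dropped_files: list[Path]) -> bool:
    """Did the attack leave artefacts behind, despite the 'blocked' claim?"""
    # A fuller harness would also check processes, registry persistence
    # and network beacons; file indicators are just the simplest case.
    return any(path.exists() for path in dropped_files)

# Indicators a *successful* attack would create (hypothetical paths):
indicators = [Path(r"C:\Users\Public\payload.exe"),
              Path(r"C:\Users\Public\exfil_staging")]

print("Compromised" if attack_succeeded(indicators) else "Protected")
```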

Which reviews are worth using? Given that you might want to look further than SE Labs for advice on antivirus, where else should you go? The safest option is to visit websites belonging to well-known professional antivirus testers. There are only a handful, and most are members of the Anti-Malware Testing Standards Organization (AMTSO). AMTSO tracks tests from members and notable non-members. If you go to its website, you’ll find a list of testers.

Not every antivirus test is worth your attention, though. While we applaud enthusiastic amateurs getting involved in the security world, sometimes homemade tests are unintentionally biased, which means they can give you the wrong information. It’s like trusting your friends down the pub, or on social media, to give sound advice on vaccine technology or investing in the stock market. You might listen to them, and they might even be right some of the time. But hopefully you won’t make life-changing decisions based entirely on their opinions. While established professional testers don’t always agree with each other, they use scientific methods to check that antivirus software works properly. If they all agree that certain products are strong, you can be fairly confident in choosing them. But less rigorous reviews on the internet can be very misleading.

How can you find out which reviews are worth your time, and which are so naive, or just made up, that you should ignore them completely? When you search for “best antivirus” on Google and YouTube, you’ll see plenty of reviews. Who are these people? How do they decide which antivirus programs are best? And can you trust their opinions much, or even at all?

Some reviews are created by enthusiasts or journalists. I myself was a journalist for many years, and I tested antivirus software as part of my job. Now, while those tests stand up pretty well against the standards of today, I’m not sure how accurate our tests were when we gave opinions on other kinds of things, like sound cards and large monitors or printers. There are some basic specifications to look at, and opinions to form about ergonomics and price. But there’s also a lot of marketing nonsense to filter through, which is kind of your job as a journalist reviewer. I remember once arguing with a marketing representative for Dolby that the product she was describing, which she claimed to be coming soon and wanted me to report on, defied physics as we know it today. She stuck to the script. But 15 years later, I’m still waiting for this magic device to appear. Sadly, journalists for other magazines did report on its imminent arrival, in the same way that for years they repeatedly reported that we’d be getting broadband through our electricity cables, which is something I first heard about in the late 90s. It never happened in any significant way. So with the best will and respect in the world, I don’t think I’d go to a journalist for recommendations of complex security products, unless they report using data from more technical sources, in which case, great.

Other reviews are created by business people who make money promoting antivirus products. They earn commission when you buy them. Some reviewing organizations are even run by security companies that sell antivirus products. This is a different kettle of fish to the unknowing enthusiast giving an opinion. You might worry about taking financial advice from someone who earns commission on the pension plans or energy deals that they recommend.
The same applies here: the reviewer is biased and making recommendations for their benefit, not necessarily yours. They might recommend a few products that are well trusted and avoid the very worst, but you’re not getting the full picture. Everyone has a right to their opinions, but we all need to be aware of their agendas before we decide to trust them.

Here’s an example. The Safety Detectives website makes recommendations about security software. This site is owned by a company that makes security software, and we know this because Safety Detectives openly admits it. A link at the top of the webpage called “Ownership” produces a pop-up that notes that the site is owned by Kape Technologies PLC, which in turn owns ExpressVPN, CyberGhost, ZenMate, Private Internet Access and the Mac antivirus product Intego. And yes, Intego is listed as one of the best antivirus products, along with a common set of others. Now, we welcome this unusual level of transparency, because it helps us understand the agenda and trust the recommendations accordingly.

Many of the reviews that we’ve seen from different sources agree with each other very closely. They consistently recommend products from the same four or five antivirus companies. Could it be that each of these reviewers has managed to reach the same conclusion independently, based on good testing? Well, it’s possible, and it’s nice to think the best of people. But as the reviews give no detail about how they test, we can only guess. Another possibility is that they’ve all identified the antivirus companies that run well-paying affiliate marketing schemes. It is possible that they’re recommending the products that will bring them the most money. It’s not just possible, is it? It’s likely. Earning money is not illegal, and maybe these four frequently recommended products are really good. So could these reviews still be useful? Well, yes, if the reviewers were clear about how they decided which antivirus products to recommend. If they explained how they tested, that would help. If they claimed to follow sensible guidelines, they could avoid accusations of bias; we’d know that they weren’t just recommending certain products in order to earn the best commissions. If a review declared that it recommends products because of the commissions, at least customers could understand what’s going on and decide whether or not to trust it.

Here’s a top tip to help you spot a useful security review. Look out for these three things, which are promising signs that a report is useful. One, it explains how the reviewer tested the products. Two, it claims to follow some security industry guidelines. Three, it complies with the AMTSO standard. There are also some warning signs to be aware of: reports published by businesses that make money selling the recommended products. And especially be careful if you see more opinion and intuition than facts.

Ideally, reviewers would follow the one accepted standard for testing antivirus. Then we could have confidence in their results, or at least enough understanding to be able to judge the merits of the review process. A good standard, like the AMTSO standard, requires testers to explain how they tested. For example, if they chose the best products by rolling a dice, well, then they should say so, and we, the customers, can decide if that’s an OK way to make a decision. If they don’t tell us how they reached their decisions, we can’t possibly know how seriously to take them. At the time of recording, I was the co-chair of AMTSO.
So I do have an agenda to disclose here, but that agenda is a pretty unselfish one: I want you to prioritize reports that are transparent. I’m not saying you should care only about SE Labs reports. I’d like all of the major testers to follow the standard, for their own good and for yours, but few do. It’s up to you to draw your own conclusions from that, and to ask them to follow the standard. It’s not hard work for them to do this.

Cybernews 14:46
Hello everyone! Today I’m going to be talking about the best antivirus options for 2022. So imagine if malware got into your PC and encrypted all your sensitive files. Oh yeah, that is nightmare fuel. So let’s avoid that and make sure you have the best antivirus installed.

Simon Edwards 15:06
Testers within the security industry tend to publish PDF reports, and maybe have some clever dynamic comparison tables on their websites, or something. But we as a group are not known to embrace modern media. We’re mostly not on YouTube, and I think this DE:CODED podcast is the only one of its type. So what do YouTube reviews of antivirus look like? We’ve picked the first YouTube antivirus review that appeared on Google when we used the search term “best antivirus”. This was published by Cybernews. Cybernews makes money when you buy antivirus through its reviews website. It claims to link to and evaluate products and services “because of their quality, and not because of the compensation we receive”. Here’s a clip from the review.

Cybernews 15:51
[Bitdefender] has loads of amazing features. I felt so secure with Bitdefender. Look, I had real-time protection, Advanced Threat Defense, web attack prevention, anti-spam and anti-phishing filters. They also have a little feature called Safepay, which essentially makes sure your online payments are extra secure, which happens a lot, let’s just say I’m a little materialistic. And you don’t even have to worry about snooping cybercriminals, because Bitdefender antivirus also has microphone and webcam protections. So with all these features, I really felt like the viruses were never getting close with Bitdefender…

Simon Edwards 16:28
These “best antivirus” reviews that we’ve read and watched tend to focus on feelings of security rather than facts. Sometimes reviewers run very basic tests that neither stress the products nor give them a chance to show off the strength of their features. Feelings are driving the reviewer’s opinion here. This opinion is based on the existence of features; the reviewer largely assumes that these features actually work. Although he did run a very small anti-malware test. He says he planted ten malicious files onto the PC. We don’t know how he did this. What does “planted” mean? Did he copy the files from a USB drive, giving the antivirus programs a chance to detect them as they arrived? Or download them from a malicious website? Or send them via email? All of these are realistic ways to bring malware onto a system. Or did he disable the antivirus, copy the files onto the system and then run a basic scan, in a very limited and unrealistic test? We don’t know, because he doesn’t say.

And you know, as I say this, I realize I sound like one of the grumpy old guys who told me 20 years ago that my antivirus testing was all wrong. Does anyone outside of the security industry really care about details like this? I’m not having an existential crisis here, but I do worry that sometimes we hold ourselves up to such a high standard, and maybe very few people care. But I think, as a security testing organization, we have to keep working to the highest standards that we can.

Anyway, this YouTube reviewer’s performance testing is also extremely vague. He says that the product might slow down an older device. Well, it might, or it might not. Did he test it with a number of PCs to find out? And how were those PCs set up? Do they contain lots of files, or a plain Windows installation? We don’t know, because he doesn’t say. Now remember, Cybernews claims that its reviewers link to and evaluate products and services because of their quality, and not because of the compensation they receive. You deserve to know how reviews such as this evaluate quality. I hope YouTube reviewers hear this episode and improve the transparency of their reporting, even if it means putting some technical details in the notes of their channels, if they don’t want to start talking about details they feel their viewers won’t care about.

The best way to choose an antivirus product is to check the reviews from well-known scientific testing organizations. Then consider the products that performed the best over a period of time, and which have features you personally care about. And price is obviously going to be a major consideration, too. Avoid excitable reviews that prattle on about how awesome an antivirus program is without providing any detail about what “awesome” means. Use affiliate sites if they can provide you with savings on antivirus products, but we respectfully suggest that you don’t trust their lists of best products without double-checking with reports such as those we provide for free on our website.

Now we know what “baseline” can mean for a security product, and we can identify some red flags when we see a questionable security report. So finally, let’s think about how the security vendors, the companies that make security products, behave with tests. What do they want out of testing, and how do they achieve their goals? Dr Richard Ford was an early editor of Virus Bulletin magazine, and one of the first ever antivirus testers. He now works at advanced cybersecurity consultancy Praetorian.
Richard, when you were testing anti-malware products, did you ever encounter vendors cheating?

Richard Ford 20:18
Well, I think, Simon, this comes down to a couple of things. First of all, let’s be clear on what we mean by cheating, because I think there’s a lot of gray in the middle, right. And I think that’s one of the things that we need to recognize: something that you as a tester might feel is cheating might feel completely justifiable to the anti-malware, antivirus company. So to me, cheating is when you’re blatantly doing things that are designed to only help when you’re testing. So a good example: I’m scanning a directory, and I see 30 different types of malware in there, and the system immediately kicks itself into its maximum heuristics mode. That situation is never really going to occur in real use of the machine, right? So that “let me crank up the heuristics”, or, even more easily, “let me just say everything in this directory is infected with something”. I would put that in the cheating bucket, because it has no real user benefit. It only exists to make you do better in tests. Right? So did we encounter that? You betcha.
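As a hypothetical sketch of the cheat Richard describes, imagine scanner logic along these lines. Every name and threshold here is invented for illustration; no real product’s code is being quoted.

```python
from pathlib import Path

TEST_SUSPICION_THRESHOLD = 30  # many detections in one place smells like a test set

def signature_match(path: Path) -> bool:
    # Stand-in for a genuine signature/heuristic engine.
    return b"EICAR" in path.read_bytes()

def scan_directory(directory: Path) -> dict[Path, bool]:
    files = [f for f in directory.iterdir() if f.is_file()]
    detections = {f: signature_match(f) for f in files}
    if sum(detections.values()) >= TEST_SUSPICION_THRESHOLD:
        # "We're probably being tested": flag everything, regardless of
        # content. This never helps a real user; it exists to win tests.
        return {f: True for f in files}
    return detections
```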

Simon Edwards 21:31
In your example, you’re talking about an endpoint scanner. But these days, security products are rarely an island, working independently to protect your systems. With some rare exceptions, like super-high-security facilities, vendors can exercise control over their products via the cloud. So does that make it easier for vendors to cheat?

Richard Ford 21:52
You know, a lot of AV now is cloud-assisted. So do we all get the same service in the cloud from a vendor? We kind of wish we did, but no, there are definitely tiers of users. So for example, if you have a very troubled user base, or a large high-value customer who is unhappy with the service they’re receiving, you may choose to give them more filtering, or more automated analysis, or more manual analysis in the cloud. While we wish the cloud was, you know, this sort of AI-automated thing, there’s an awful lot of mechanical Turking that goes on in the cloud and in cloud-based security products.

Simon Edwards 22:41
What Richard means by “mechanical Turking” is the idea that the team on the vendor side could sit there and watch results flowing in from systems that we’re testing. That team could identify that we’re focusing on certain kinds of threats, for example, and tweak the product to be better at detecting those for the time being. And when we switch to using legitimate software in the test, they notice that and change the configuration again, to get optimum results. If you are a large customer, this might actually be realistic. Pay enough money, and security vendors will put a security operations center, a SOC, on call, supporting the software you’ve chosen. But for most organizations, that’s not what happens. And if vendors behave like this in a live test, it makes the results misleading.

Richard Ford 23:32
So I think, you know, the question for you is: hey, if I do that because I know it’s Simon when he’s testing me, is that cheating? Because I’d also potentially give a customer more visibility, or, you know, a higher level of diligence, if they were an unhappy customer. Because customers aren’t created equal.

Simon Edwards 23:54
Yeah, I think you’ve hit the nail on the head when you talked about the real-world benefits. What is the purpose of a test? If the test’s purpose is to assess a product and find some kind of average, general baseline of effectiveness, then you would hope that the vendor wouldn’t be working away very hard in the background, monitoring everything that you’re doing and adjusting things. But in the real world they might, like you say, have a very high-value client, and they’re essentially taking on some of the role of the SOC, in which case they would be doing all of those things. So I think it’s down to the tester to set the scope of the work and to say: we’re doing this test because we want to see what your product or service does when you get a new customer in and you’re not doing anything different. So treat us like a regular customer. And that should be good enough to get the results that we’re looking for. And I guess the cheating would come when the vendor says, “Yes, fine, we’ll leave you to it,” but then they don’t, and they secretly go behind the scenes and have whole teams of people. And it’s interesting, because we have definitely seen that as well. One slightly sneaky thing you can do to aggravate them, and to foil their plan, is to have a schedule for testing and then not quite keep to it. So they can have huge teams of people sat there waiting, and nothing’s happening, so they’re spending money. And then, after the test seems to have finished, you actually continue. Then you compare what happened at the beginning of the test to the end, and if you see massive, dramatic differences in how the product performed, you can have your suspicions that they’re up to something.
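To make that schedule trick concrete, here’s a minimal sketch of the before/after comparison. The data shapes and the 10% tolerance are illustrative assumptions, not a description of any real harness.

```python
from datetime import datetime

# Per-test-case records of (timestamp, was_the_threat_blocked).
Results = list[tuple[datetime, bool]]

def protection_rate(results: Results, start: datetime, end: datetime) -> float:
    window = [blocked for when, blocked in results if start <= when < end]
    return sum(window) / len(window) if window else float("nan")

def looks_gamed(results: Results, test_start: datetime,
                announced_end: datetime, quiet_end: datetime,
                tolerance: float = 0.10) -> bool:
    during = protection_rate(results, test_start, announced_end)
    after = protection_rate(results, announced_end, quiet_end)
    # A dramatic drop once the vendor believes testing is over is a red flag.
    return (during - after) > tolerance
```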

Richard Ford 25:27
Yeah, that’s really interesting, Simon. I think one of the tricks here, though, is: let’s say that I am a security vendor, and I want to make certain, which would be a good business choice, that when I onboard a new customer, their initial experience is really good. So I’m going to do a bunch of hand-tuning. I will look at their traffic; I’m going to make certain things are running well. But maybe after 30 days they drop into sort of my customer pool, and then, you know, they don’t get that level of tuning, let’s call it that. So if I treat you like a regular customer, it’s still not a representative test, right? It’s going to be a representative test of what your first 30 days might look like, and things might drop off quite badly until you start complaining.

Simon Edwards 26:10
And that’s a great point, especially now that we’re seeing more machine learning involved in certain types of products, with vendors requiring, like, a baselining period at the beginning of a test. This is a totally realistic thing. This isn’t just for testers; this is for anyone, any regular customer, where they’ll deploy the product and watch for 30 days, say, to learn what normal looks like on the network, and then configure the product accordingly. Or the product will configure itself, because it’s using machine learning to do so.

Richard Ford 26:39
Yeah, and I think it makes it very difficult, right, for you as the tester.

Simon Edwards 26:43
Yeah, it is hard. But we’re focusing on the gray areas here, aren’t we? What about when vendors just go for it and try to get artificially good results?

Richard Ford 26:53
Evil is obvious when you run into it. We can focus on what I would think about as more blatant cheating versus, you know, some of these sort of gray areas where there is customer benefit, and where it’s actually a reasonable way to conduct business.

Simon Edwards 27:10
Right. So I’ll give you an example of some blatant cheating that I’ve heard about, where the tester had done some work, and the guy that did the work was excluded from the conversation where the senior members of staff got together and basically changed the results. Now, to me, that’s a very non-technical way of cheating, but it is a way of cheating.

Richard Ford 27:32
Yeah, absolutely. And I think the real measure is around how much protection can I expect to get from this product. If you install WonderScan on your home machine, and you install sort of GrottyScan on your other home machine, over a period of time, which one is going to get impacted, or penetrated, or breached, or whatever, first? And that’s really the most important thing.

Simon Edwards 27:58
Well, that brings up another interesting point, which is a general principle. You might have a shop in a very high-risk area with lots of security, and it might be being broken into and stolen from all the time. And you could have another shop, with almost no security, in a low-risk area, and it’s not getting any issues at all in terms of security. So you can’t just judge the performance of the security measures. Because in one case they’ve got really good security, but the risk is so high, and the attackers are so persistent, that they’re going to get through maybe 1% of the time. So the security is good; it’s just not perfect, because it never is. Whereas the shopkeeper at the village shop that never gets raided, or whatever, could claim they’ve got the best security in the world, but they have none. In fact, they’re just in a better place.

Richard Ford 28:44
That’s why the clinical trials model is such a nice model, because clinical trials, sort of by design, help you think about that, because they use a large cohort. Let’s do something topical, right? COVID-19 vaccination. We’re not getting into the politics of it, whether I like it or I don’t, or whatever. We can look at the efficacy for different types of people. So we know, in one demographic and one age group, what the efficacy is, what the change is in terms of your likelihood to be hospitalized, say, with COVID. And so those models work really well, and they do give somebody a way to go: well, I’m a, you know, 50-year-old sort of guy, average risk, relatively good health; what results can I expect, looking at a cohort of people like me? The problem is that that’s financially not very viable for testing security products. And so we have to come up with more elegant ways, which, by definition, can introduce some bias. And I think our job is to figure out: how do we minimize that bias, and how do we not tilt the board accidentally towards those vendors who are maybe not as squeaky clean as some of the other vendors who play it fair? The challenge with testing, as you well know, is it’s such a high-stakes game. Because how does a company know how good a product is after three tests?

Simon Edwards 30:20
Yeah, because the alternative is cybersecurity marketing claims, which we all know are not the most accurate things to go by.

Richard Ford 30:28
You’re kidding me, right? Yeah, I can’t believe those things.

Simon Edwards 30:31
But listen, Richard, I’ve just thought of something. I’m talking about cheating from a tester’s perspective. So I’m thinking: these evil vendors are trying to make a mockery of the work that I’m doing and come out better. But what about the testers who aren’t doing a very good job? Is it a valid thing to try and cheat in a test which is so bonkers and bad that you want the good result, but you’re not going to change your product for it? Because if you change your product to win a test like that, it’s going to affect your users, and that’s not a useful way to change it. Is it a valid thing to cheat in that test, just to get the result you need, and then move on to doing good things in the world?

Richard Ford 31:11
Well, Simon, you didn’t warn me that we were going to have a discussion around Kant and ethics! Because this is an ethical question that’s quite tricky, because you’re saying there’s a greater good that requires me to essentially lie. And that’s a really difficult question, you know, one that’s been debated for many, many years.

Simon Edwards 31:34
Well, maybe this podcast is where we solve it once and for all.

Richard Ford 31:37
Looking forward to it, and the book that we shall write together!

I think the best approach would be to annihilate all the bad tests out there. Right? And I know that you sort of agree with that, because part of what AMTSO was all about was giving more credence and visibility to tests that are actually compliant and sensible, and getting rid of these terrible tests that encourage vendors to do terrible things, or put vendors in a bind, where the best products have to make a really difficult decision: should I let my product look bad in this test, or should I just game the bad test? I think we have to find a way of weeding those tests out and shining a light on the fact that you really shouldn’t pay too much attention to this test, because it’s not a scientifically valid way of measuring something. I further believe that security is such an important thing that we have to, I don’t want to say regulate these tests in some way, but as an industry we have to come together and go: these are bad tests; we can’t just get on board for another sticker on the box. We have to come out en masse and say, this test really isn’t representative, and we’re not going to play.

Simon Edwards

Yes, so some vendors see testing as a way to get a badge, as a purely marketing-led project. And actually, we’ve even been asked to run fake tests in the past, where the vendor expected to send us money and receive an award in return, even when the product didn’t really do anything. Obviously, we declined this corrupt approach. But the types of security vendors that work with us understand that our testing is extremely challenging, and there are two likely outcomes. Either the product is really strong, in which case the award is deserved, customers can buy with confidence and the marketing department is very happy. Or the product failed at some parts of the test, in which case the engineers descend and work out how to improve the product. At the end of the day, the products are stronger, and everyone is better protected. But don’t just take our word for it. Let’s give the final word to CrowdStrike CTO Mike Sentonas. His company spent a long time checking out which tests were worth engaging in and which were just box-ticking marketing exercises.

Mike Sentonas 34:04
The big thing for us is transparency: when everybody knows what the rules are, when it’s not sort of a, you know, pay-to-play sort of test that’s behind the curtain, and then you just put out a result. You know, we want to work with testers. We use the tests to build better products. At the end of the day, I’d rather be finding holes in the products that we provide. I’d rather find the gaps, I’d rather find the areas that we need to improve, before an adversary finds them and uses them in a way that impacts one of our customers.

Simon Edwards 34:39
Now, just before we finish, it’s Security Life Hack time. At the end of each episode we give a special security tip that works for real people in the real world, for work and in personal lives. This episode’s life hacker is email security expert and malware nemesis Sigurður Stefnisson, or Siggi, as we know and love him.

Siggi 34:59
Give dishonest security answers. What I found is, when I’m setting up accounts online, especially being an Icelander, companies are asking me to put in, for instance, your mother’s real maiden name and stuff like that. First of all, it’s effectively unusable for me, but I also don’t think they need to know this. So, being an Icelander, for years now I’ve actually been putting in fake information there. Because if I gave that out, well, when you get married in Iceland you never change your name, so any of that information is too easy to get to. So what I do is I generate a random password and use that in there.
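Siggi’s tip is easy to automate. Here’s a minimal sketch using Python’s standard secrets module; the length is arbitrary, and the idea is to store the question and the random answer together in a password manager.

```python
import secrets
import string

def fake_security_answer(length: int = 24) -> str:
    """A random string to use instead of a real 'mother's maiden name'."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Record the question/answer pair somewhere safe, e.g. a password manager:
print(("mother's maiden name", fake_security_answer()))
```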

Simon Edwards 35:42
Please subscribe. And if you enjoyed this episode, please send a link to just one of your close colleagues. We also have a free email newsletter: sign up on our website, where you’ll also find this episode’s show notes and bonus episodes featuring full-length interviews with our guests. Just visit DecodedCyber.com. And that’s it. Thank you for listening, and we hope to see you again soon.

Feedback

Please send your comments, questions and concerns to info@decodedcyber.com.
