Hacking has its good guys, but very few business owners are aware of them.
We explore what ethical hacking is and how it could benefit your business, potentially saving you from a costly cybersecurity breach that could bring it to its knees.
Ethical hacking, also known as ‘white hat hacking’ in a nod to old cowboy movie morality, is the positive side of hacking the public doesn’t hear much about. Unlike the black hats, who grab almost all the attention, these men and women have always done important work securing computers, usually with little gratitude or acknowledgement.
But what defines a white hat or ethical hacker? Answering that is harder than it might seem, because the term covers everything from amateur bug hunters and sleuths to paid professionals working for security companies in some capacity. What’s not in doubt is that white hat hackers have become important people without whom the cybersecurity industry and its customers would struggle. What matters for business owners is that this wealth of expertise, once out of reach and unaffordable, is now much easier to tap into.
Why Ethical Hacking Is Needed
Defenders are always at a fundamental disadvantage to attackers. At the heart of this is an asymmetry: attackers know which weaknesses they are targeting – software vulnerabilities, gaps in protection, human error – while defenders are left to guess in advance what those might be. The standard defence is to apply good security precautions such as patching, layered authentication, least privilege, careful account management, and user education to reduce the number of weaknesses to an absolute minimum.
As the number of security failures in the real world demonstrates, this model doesn’t work, or at least fails often enough for that to be noticeable. Why? Because every now and again even well-run organisations make mistakes, fall victim to novel software vulnerabilities, or find their networks have become so complex that they can’t keep up with the number of weaknesses that arise. Sometimes the flaw lies with someone in the supply chain, or a partner or employee doing something they’ve been told not to do. This list of known unknowns grows longer with every year and every new technology.
Penetration (Pen) Testing
This is where the idea of penetration testing comes in. The principle behind it is simple: how secure a network looks on paper and how secure it is under real-world conditions are not the same thing. To bridge this reality gap, a penetration test simulates how an attacker sees the network, using any techniques or tools available to beat its defences, probing not only the technology but also the security processes around it. At the end of the exercise, the defenders are handed a report that tells them the sort of ugly truths they need to hear while there is still time to act on them.
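To make this concrete, here is the kind of simple external probe a tester might automate early in an engagement: checking which common ports on a host will accept a connection. This is a minimal illustrative sketch in Python, the target address is a placeholder from a reserved documentation range, and probing of this sort should only ever be run against systems you own or have written permission to test.

    # Illustrative only: check which common ports on a host accept connections.
    # The target address is a placeholder from a reserved documentation range.
    import socket

    TARGET = "203.0.113.10"
    COMMON_PORTS = [21, 22, 25, 80, 443, 3389, 8080]

    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            if s.connect_ex((TARGET, port)) == 0:
                print(f"Port {port} appears to be open")

A real test goes far beyond a script like this, chaining discovered weaknesses together and testing people and processes as well as machines.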
Pen testing comes in different forms, including white box tests, where an agreed set of tests is run against specified systems, and black box tests, which probe for weaknesses in an open-ended way. This kind of testing has even made it into formal assessments, for example Cyber Essentials Plus certification, which requires an external vulnerability assessment that checks a network’s resistance to common attacks, weaknesses, and software vulnerabilities.
Public Vulnerability Reporting
A less appreciated but increasingly vital way organisations can gain intelligence about their security is by setting up a channel for members of the public to report flaws. For years, only a small minority of organisations, usually tech companies, took this seriously, but the profile of public vulnerability reporting has risen as the volume of possible security issues has grown.
At the SME end of the scale, this usually involves amateur researchers who use public search tools such as Shodan to spot exposed networks, misconfigured equipment, or public-facing websites and plug-ins with unpatched security flaws. Common examples include printers and VPN equipment, although even complex applications can be involved. In other cases, researchers hear of exposed credentials circulating on encrypted chat systems or dark markets. A growing problem is popular apps or services inadvertently exposing sensitive data that is then indexed by search engines without employees realising.
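As a rough illustration of how little effort this takes, the sketch below uses Shodan’s official Python library to list anything the service has indexed against a given organisation name. The organisation name and the API key are placeholders, not details from any real engagement.

    # Illustrative sketch using the official 'shodan' Python package.
    # The organisation name and SHODAN_API_KEY value are placeholders.
    import os
    import shodan

    api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

    # List services Shodan has indexed as belonging to the organisation.
    results = api.search('org:"Example Widgets Ltd"')

    for match in results["matches"]:
        print(match["ip_str"], match["port"], match.get("product", "unknown service"))

Anything that turns up in results like these is, by definition, just as visible to attackers.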
Essentially, this is free intelligence, a way for external sources to tell an organisation that something is wrong, possibly a short-lived exposure the IT team (or managed services company) hasn’t noticed. Unfortunately, it’s rare for SMEs to offer a way (e.g. a web form or email address) for researchers to report these issues, an own goal given how simple it is to publish an email link. Even when they do, a common problem is that the job of monitoring and responding to that contact point is not clearly assigned to a named member of the IT team.
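One lightweight option, assuming your website lives at example.com (a placeholder), is to publish a security.txt file, a small plain-text file served at /.well-known/security.txt as described in RFC 9116, for example:

    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:59:59.000Z
    Preferred-Languages: en

Whichever mailbox or form the Contact line points at, someone needs to own it; an unread security inbox is little better than none at all.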
In some cases, anecdotal reports suggest, organisations even become hostile when they receive reports of weaknesses in their systems, interpreting them as an attempt to embarrass them or extort money. It’s a strange situation: on the one hand, reports tend to be ignored; on the other, honest researchers concerned for the public good receive threatening legal letters for telling organisations they have a security problem they definitely need to know about.
It bears repeating: if a researcher can find a security weakness, then so can the hundreds of thousands of professional hackers who use identical tools to trawl for prey. Ignorance should never be a defence, and the messengers are not the problem. What SMEs don’t know about will hurt them eventually. Whether the truth comes from an ethical pen test or a member of the public, knowing it is always preferable to a large ransom demand and the threat of public disclosure.