For a long time, artificial intelligence existed only in our imaginations, perhaps most vividly in the Terminator movies. Over the past decade, we’ve seen it evolve rapidly from simple to complex systems. By 2018, researchers had used AI systems developed by Google, Amazon, and Apple to open and navigate websites, and even place phone calls. But while Google Assistant, Alexa, and Siri rank among the most widely used AI programs in operation, they are far from the only ones.
It’s not unimaginable for a bad actor to target a financial institution’s fledgling AI software, or for competitors to attack another company’s algorithms. In fact, cybersecurity professionals expect attackers to use AI against the companies they defend. For those responsible for corporate or enterprise security, from CIOs to CISOs, AI presents two forms of risk with the potential to change the nature of their jobs. The first is an attacker’s use of AI to exploit defensive vulnerabilities; the second is the threat that state actors, criminals, or unscrupulous competitors pose to nascent AI programs. Simply put, we are all in a cybersecurity arms race.
A recent report noted that attackers have easy access to inexpensive malware, identity-theft kits, and other tools on the dark web. AI-enabled attack kits are on their way as well, and are expected to be available at commodity prices in the coming years. Yet despite the inherent risks of artificial intelligence, part of the answer lies in the power of AI itself: companies now tap it both to protect their AI-driven initiatives and to upgrade their cybersecurity capabilities. There are three levels to this.
Protection and Prevention
The future of cybersecurity is likely to benefit most from AI-enabled protection and prevention systems, that is, the use of advanced machine learning to bolster defenses. Over time, these systems can improve further, allowing for more flexible human interaction with algorithmic decision-making.
AI can surface potential threats from monitoring software and from external and internal sensors that evaluate network traffic, for example via deep packet inspection. However, it is important to note that for most companies, AI-based detection and automation require careful oversight and policy design to comply with data-use laws and regulations.
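To make the detection idea concrete, here is a minimal sketch of the statistical core of such a system: learn a baseline from historical traffic features (here, bytes per flow), then flag new flows that deviate sharply from it. The feature choice, the z-score test, and the threshold are all illustrative assumptions, not a description of any particular product.

```python
import statistics

def fit_baseline(history):
    """Learn a simple per-feature baseline (mean, stdev) from past traffic.

    `history` is a list of byte counts observed for normal flows; a real
    detector would learn many features, but one is enough to illustrate.
    """
    return statistics.mean(history), statistics.pstdev(history)

def is_anomalous(byte_count, baseline, threshold=3.0):
    """Flag a flow whose byte volume deviates from the baseline.

    The z-score test and the 3-sigma threshold are illustrative choices.
    """
    mean, stdev = baseline
    if stdev == 0:
        return byte_count != mean
    return abs(byte_count - mean) / stdev > threshold

# Baseline learned from ordinary flows, roughly 1 KB each.
baseline = fit_baseline([1_200, 950, 1_100, 1_050, 980, 1_020, 990, 1_060])
print(is_anomalous(1_010, baseline))    # False: ordinary flow
print(is_anomalous(250_000, baseline))  # True: exfiltration-sized outlier
```

Production systems replace this z-score with learned models over many features, but the shape is the same: a baseline fitted on known-good traffic, and a scoring rule applied to new traffic.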
AI-driven response systems are dynamic: they can segment networks to redirect attackers away from vulnerabilities, or move valuable data to isolated, safer locations. For analysts, such systems also drastically improve efficiency, letting them focus on high-probability signals rather than spend time searching for them.
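The triage logic behind that efficiency gain can be sketched in a few lines: alerts above a high score trigger an automatic response, mid-range scores go to the analyst queue sorted by priority, and the rest are only logged. The host names, scores, and thresholds below are hypothetical, and real systems would act on richer context than a single score.

```python
def triage(alerts, isolate_above=0.9, review_above=0.6):
    """Split scored alerts into automatic-response and analyst-review queues.

    `alerts` is a list of (host, score) pairs with score in [0, 1].
    Hosts scoring at or above `isolate_above` are quarantined automatically;
    those at or above `review_above` go to the analyst, highest score first;
    everything else is logged only. Thresholds are illustrative.
    """
    quarantine = [h for h, s in alerts if s >= isolate_above]
    review = sorted(
        ((s, h) for h, s in alerts if review_above <= s < isolate_above),
        reverse=True,
    )
    return quarantine, [h for _, h in review]

alerts = [("db-01", 0.95), ("web-03", 0.72), ("mail-02", 0.15), ("app-07", 0.81)]
quarantined, queue = triage(alerts)
print(quarantined)  # ['db-01']
print(queue)        # ['app-07', 'web-03']
```

The design choice here is the two-tier split: only the highest-confidence signals trigger automated action, keeping humans in the loop for the ambiguous middle band, which is the oversight the regulations mentioned above tend to require.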
We need to understand that AI’s evolution cuts both ways: it also increases an attacker’s speed, resilience, and odds of success. Because the technology can learn and become smarter, hackers can now automate attack methodologies; the underlying algorithms are often open source, public, and increasingly easy to use. AI can already help malware evade detection, for example.
The intelligence is artificial, perhaps, but the risks are all too real. This reemphasizes that cybersecurity is an arms race, and no matter the cost, it’s best we stay ahead of the pack.