NAIROBI, KENYA: Popular visions of artificial intelligence (AI) often focus on robots and the dystopian future they will create for humanity, but to understand the true impact of AI, its skeptics and detractors should look at the future of cybersecurity.
The reason is simple: if we have any hope of winning the war on cybercrime, we have no choice but to rely on AI to supplement our human skills and experience.
With the ranks and sophistication of cybercriminals continuing to grow, the technology industry has started to address this challenge through the use of AI. As with many new technologies, however, the good that AI can do is threatened by the misconceptions and hyperbole that surround it.
For this reason, the technology industry must address these popular perceptions, and that starts with redefining AI as what it truly is: augmented intelligence.
This might seem to be simple semantics, but I contend this redefinition is critical to the future understanding and acceptance of AI, and our ability to apply it in areas that are of such critical importance to society, from education to healthcare to the environment.
When it comes to cybersecurity, AI is emerging as our most powerful ally, especially as it has become clear that relying primarily on humans to fight this war is a losing battle plan.
Cybercriminals have created one of the largest illegal economies in the world, generating $445 billion in annual profits and stealing more than a billion records of personal information, such as credit card numbers and health records, every year.
The most concerning fact, though, is that 80 percent of cyberattacks are driven by highly organized crime rings that freely exchange data, tools and tricks of the trade. Cybersecurity experts just can’t keep up, and the situation will remain challenging, with a projected 1.5 million security positions expected to go unfilled between now and the end of the decade.
Cybersecurity experts can’t wait for these jobs to be filled. They need technology that augments their abilities by filling gaps in monitoring and identifying threats.
The good news is that there is a growing understanding among security experts of the benefits of cognitive security. A recent survey by the IBM Institute for Business Value found that nearly 60 percent of security professionals believe cognitive security solutions can significantly slow down cybercriminals.
The same survey revealed there will be a three-fold increase in the percentage of companies implementing cognitive-enabled security solutions in the next two to three years, from 7 to 21 percent. This won’t alleviate the need to hire additional cybersecurity experts, because the fight against cybercrime will require a closer alliance between human and machine.
Even if all the open cybersecurity jobs were filled, we would still face a crisis due to the staggering volume of data that humans alone simply can’t consume. The average organization sees over 200,000 pieces of security event data per day with enterprises spending $1.3 million a year dealing with false positives alone, equaling nearly 21,000 wasted hours.
Couple this with 10,000 security research papers published each year and over 60,000 security blogs published each month, and security analysts are severely challenged to move with informed speed.
AI will help security professionals by sorting through all this data, using natural language processing to understand the imprecise human language contained in blogs, articles, videos, reports, alerts and other “unstructured data,” connecting obscure data points humans couldn’t possibly spot, and making recommendations on remediation strategies based on those connections and insights.
Without AI, unstructured data will continue to be the Achilles heel of cyber-defense because it represents a huge blind spot, comprising more than 80 percent of all data.
Augmenting the expertise of cyber professionals, AI systems are learning how to monitor unstructured data to detect risks before they emerge. As they continue to learn, AI systems will become more adept at distinguishing between a computer glitch and a malicious attack, sparing security analysts from wasting valuable time on wild goose chases.
Once an attack is identified, security analysts often turn to the Internet for the latest ways to address it, generating thousands of pages of results that may or may not contain the solution. It’s a process that is neither fast nor accurate. In this stage of the fight, AI can play an important role by analyzing reams of information, including unstructured data, to identify the most probable fixes – and do so orders of magnitude faster than any human.
While we are just at the dawn of the “cognitive” era of security, progress is well underway in making this vision a reality. Cognitive tools such as IBM’s Watson are currently being trained to ingest and understand vast amounts of security data and research created for human consumption. Dozens of organizations are already working with this technology and helping discover new ways Watson can be used in the fight against cybercrime.
In the future, bots will seek out network vulnerabilities, diagnose them and recommend ways to patch them – all while working seamlessly with cybersecurity experts who will be even more valuable in the fight against cybercrime because they have been trained in the use of augmented intelligence.
Today, the often automatic reaction to any mention of systems gaining intelligence is that the robots have come to take our jobs. In the war on cybercrime, reality could not be further from this view. AI will enable humans to deal with ever-increasing threats by augmenting our expertise – but it’s critical for people to first understand and accept the true definition of AI.
The writer is Chief Technology Officer, IBM Security