How this AI-human partnership takes cybersecurity to a new level

A program designed by MIT researchers to battle hackers is an example of effective collaboration between artificial intelligence and humans.

AP Photo/Lee Jin-man
South Korean professional Go player Lee Sedol places the first stone against Google's artificial intelligence program, AlphaGo, which went on to beat him 4-1. MIT scientists are now working on AI programs that can detect cyberattacks.

In the ongoing battle against cyberattacks, a man-machine collaboration could offer a new path to security.

To keep up with cyber threats, the cybersecurity industry has turned to assistance from unsupervised artificial intelligence systems that operate independently from human analysts.

But the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology in Cambridge, Mass., in partnership with the machine-learning startup PatternEx, is offering a fresh approach. Their new program, AI2, draws on what humans and machines each do best: It allows human analysts to build upon the large-scale pattern recognition and learning capabilities of artificial intelligence.

The industry standard right now is unsupervised machine learning, CSAIL research scientist Kalyan Veeramachaneni, who helped develop the program, says in a phone interview with The Christian Science Monitor. 

Cybersecurity firms send out AI programs that autonomously identify patterns in data and flag any outliers as possible attacks.

"There's a recognition that the volume of data that has to be analyzed is going up exponentially," says Alan Brill, senior managing director of cybersecurity firm Kroll, in a phone interview with the Monitor. "But the human resources that are both trained and available to do this are not going up at that rate. And that starts to leave an analytic gap. That has to be handled by some form of machine intelligence that can look at the log files and transactions and the data and make some intelligent decisions."

The problem: "Every day the data changes, you change something on the website and the behaviors may change, and ultimately outliers are very subjective," Dr. Veeramachaneni says. 

With AI2, the team of researchers asked this question: "Is it possible to have an AI flag a possible attack and then have an analyst report whether or not it was a threat?"

The result is a feedback loop: a new, partially supervised machine-learning program that has produced strong results.

AI2 flags anomalies in the data like any other unsupervised program, but then reports a small sample of its findings to analysts. Analysts look for false positives (data incorrectly flagged as a threat) and tell the AI program. The feedback is plugged into the learning equation and refines AI2 for its next search.

In other words, as the AI does the laborious work of searching through millions of lines of data, the humans confirm and refine its findings, including labeling the type of attack found (brute force, trojan, etc.) and identifying new combinations.
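The cycle described above can be sketched in a few lines of code. This is a minimal illustration, not the AI2 implementation: the z-score outlier detector, the analyst stub, and the daily budget are all stand-ins for the real components.

```python
import statistics

def flag_outliers(events, budget):
    """Unsupervised step: score each event by how many standard deviations
    it sits from the mean of each feature, and return the indices of the
    `budget` most anomalous events."""
    n_features = len(events[0])
    means = [statistics.fmean(e[j] for e in events) for j in range(n_features)]
    sds = [statistics.pstdev([e[j] for e in events]) or 1.0
           for j in range(n_features)]
    def score(e):
        return sum(abs((e[j] - means[j]) / sds[j]) for j in range(n_features))
    ranked = sorted(range(len(events)), key=lambda i: score(events[i]),
                    reverse=True)
    return ranked[:budget]

def daily_cycle(events, labeled, analyst, budget=200):
    """One day's loop: flag outliers, ask the analyst about the top few,
    and fold the confirmed labels back into the training set that a
    supervised model would learn from."""
    for i in flag_outliers(events, budget):
        # analyst() stands in for the human: 1 = attack, 0 = benign
        labeled.append((events[i], analyst(events[i])))
    return labeled
```

In a production system the labeled set would retrain a supervised model each day, so tomorrow's ranking reflects today's analyst feedback.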

Veeramachaneni's study, released Monday, shows that the MIT program now has an 85 percent detection rate and as low as a 5 percent false positive rate. The false positive rate is the impressive number, according to cybersecurity experts.

"If you use unsupervised learning alone, to get 85 percent detection it will end up with 20-25 percent false positive rate," Veeramachaneni says. "There are hundreds of thousands of events, if you’re showing analysts 25 percent, that is huge. People can say they have a program with 99 percent detection rate, but you have to ask how many false positives."

Why haven't other AI cybersecurity programs learned from analyst feedback? It's a common AI practice, but it has been extremely hard to adapt to cybersecurity.

For example, researchers have used feedback from people to help an AI program identify objects in images. A group of willing participants could look through millions of images and flag the ones that have lamps in them. That data set would then be used to help teach an AI program to identify lamps.

While simple objects are easily identifiable, it's harder to pick out a cyberattack in lines of data or code. And the experts who can are already swamped with having to look through millions of lines.

The AI2 team developed a system that makes teaching the program relatively easy. The program presents only a small portion of its findings, and that portion is continually refined down. On Day 1 it might present an analyst with 200 anomalies; later it may present only 100.

Successfully finding a way to incorporate such feedback has opened the door to a more reliable, adaptable defense against cyberattacks and could be a major boon for businesses.

"The number and sophistication of cyber attacks is a disruptor for traditional industries," says Brill at Kroll. What AI
does "is the kind of thing we need to keep up with the hackers," he adds.

For now, PatternEx is bringing AI2 to Fortune 500 companies, but the hope is to have the program available for businesses of all sizes.

"As we build more and more of these around different companies, the model are transferable," Veeramachaneni says. "For a small company, that doesn’t have a budget for a security team, we could transfer the models from other companies for them."
