How explainability will improve the human and AI relationship

Darktrace

By Tony Jarvis, Director Enterprise Security
Thursday, 09 June, 2022


Cybersecurity is not a problem that can be solved at human scale: digital environments, and the requirements to secure them, are too complex. Yet human involvement remains essential to solving it.

Unfortunately, modern organisations hold and ingest too much data for humans to anticipate and respond to every cyber attack. Manually sorting through an organisation’s security logs and writing static detections is simply too large a task, so cybersecurity practitioners need augmentation.

Cybersecurity professionals are a sceptical bunch: they need to work with and understand a system before they can trust it. AI and humans must work together to defend against attackers armed with increasingly sophisticated technology. While breakthroughs in AI can help security teams perform better, those teams cannot simply rely on mathematical algorithms. Human beings must retain control of these systems and understand how artificial intelligence affects them.

The result? An increased focus on explainable artificial intelligence (XAI). In cybersecurity, a ‘black box’ is a system viewed only through its inputs and outputs, with no knowledge of its internal workings. Its outputs arrive without explanation, and when security is paramount, teams need to be able to convey the expected impacts and actions of their AI.

Understanding the ‘why’

XAI inverts the situation, ensuring that cybersecurity professionals can see ‘under the hood’ and understand why the technology makes the choices it does. To build trust, humans must stay in control and be able to follow an AI engine’s decision-making. That doesn’t mean people understanding and critiquing every decision the AI makes; rather, it means giving them the ability to drill down and explore the decision-making when required. This is crucial when investigating cyber incidents and evaluating how to respond.

Merely knowing that something has occurred isn’t enough: security teams need to understand the how and why of an incident. You can’t identify a vulnerability or the cause of a serious exploitation without understanding how an attacker got through, or why the AI was able to contain the threat.

So, how can cybersecurity professionals use XAI? The answer lies in granting humans access to inputs at multiple levels and explaining what has been done throughout the process. XAI delivers understandable answers by making the underlying decision-making data available to the human team, where it is possible and safe to do so, presented in plain language alongside visualisations and other tools. These processes and methods allow users to comprehend and trust the results that machine learning produces, putting that data at the forefront of security operations centres (SOCs).

XAI provides insight into every level of AI decision-making, from layers close to the output down to the low levels of abstraction. By programming AI to explain the micro-decisions it makes all the time, human teams are empowered to make macro-decisions that affect the whole business.
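To make this concrete, the sketch below is a hypothetical illustration, not any vendor's implementation: the feature names, traffic values and simple z-score heuristic are assumptions. It shows how a SOC tool might explain one such micro-decision, where an unsupervised anomaly detector flags a network flow and the features that deviate most from the learned baseline are reported in plain language.

# Hypothetical sketch: explaining why an anomaly detector flagged a network flow.
# Feature names, values and the z-score heuristic are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
features = ["bytes_out", "bytes_in", "unique_dest_ports", "failed_logins"]

# Baseline of "normal" flows the model learns from.
baseline = rng.normal(loc=[5_000, 20_000, 3, 0.2],
                      scale=[1_000, 4_000, 1, 0.4],
                      size=(1_000, 4))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A suspicious flow: large outbound transfer, many destination ports probed.
flow = np.array([[48_000, 21_000, 45, 6]])

if model.predict(flow)[0] == -1:  # -1 means the model considers it anomalous
    score = model.decision_function(flow)[0]
    # Plain-language explanation: which features deviate most from the baseline?
    z = (flow[0] - baseline.mean(axis=0)) / baseline.std(axis=0)
    top = sorted(zip(features, z), key=lambda kv: abs(kv[1]), reverse=True)[:3]
    print(f"Flow flagged as anomalous (score {score:.3f}). Main contributors:")
    for name, dev in top:
        print(f"  - {name}: {dev:+.1f} standard deviations from baseline")

The specific heuristic matters less than the principle: the analyst sees which observations drove the verdict, not just an opaque score.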

Take the use of natural language processing (NLP) in threat data analysis as an example. When NLP is combined with sophisticated AI threat detection and response, it assists in making sense of the data and can autonomously ‘write’ reports. These reports explain the attack in its entirety, from the earliest stages to the remediation actions needed and taken. NLP can also be applied to existing frameworks, like the MITRE ATT&CK framework, to help communicate the findings in a way that adds value to security analyst workflows.

NLP could even convey the ‘how’ of a cyber attack, in addition to the ‘what’. Not only does this break down the actions of the threat detection and response in a digestible way, it could also inform teams on how to stop these threats from happening again.
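As a rough illustration of what such machine-written reporting could look like (a hypothetical sketch: the detection labels, mapping table and summary template are assumptions, although the MITRE ATT&CK technique IDs shown are real and should be checked against the current framework version), detections can be mapped to ATT&CK techniques and assembled into a plain-English timeline for analyst review.

# Hypothetical sketch: turning raw detections into a plain-language incident
# report aligned to MITRE ATT&CK. Mapping table and template are illustrative.
from dataclasses import dataclass

ATTACK_MAP = {
    "spearphishing_link": ("T1566.002", "Phishing: Spearphishing Link"),
    "credential_dumping": ("T1003", "OS Credential Dumping"),
    "lateral_movement_smb": ("T1021.002", "Remote Services: SMB/Windows Admin Shares"),
}

@dataclass
class Detection:
    timestamp: str
    host: str
    label: str          # internal detection label
    action_taken: str   # what the autonomous response did

def write_report(detections: list[Detection]) -> str:
    lines = ["Incident summary (auto-generated draft for analyst review):"]
    for d in sorted(detections, key=lambda d: d.timestamp):
        tid, tname = ATTACK_MAP.get(d.label, ("unknown", d.label))
        lines.append(
            f"- {d.timestamp}: {tname} ({tid}) observed on {d.host}; "
            f"response: {d.action_taken}."
        )
    return "\n".join(lines)

print(write_report([
    Detection("2022-06-01T09:14Z", "workstation-17", "spearphishing_link",
              "blocked outbound connection to the phishing domain"),
    Detection("2022-06-01T09:41Z", "workstation-17", "credential_dumping",
              "quarantined the offending process"),
]))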

Recognised by regulators

It isn’t just security leaders who recognise the importance of XAI; regulators can also see that the way AI is trained can pose risks. AI is usually trained on massive datasets containing sensitive data, which may be shared across whole teams, organisations or regions, creating a complicated regulation and compliance picture. To make life easier for both organisations and regulators, XAI should be implemented wholesale in the interest of transparency, objectivity and building AI resilience.

Organisations must also take the measures required to ensure AI benefits human teams, making them stronger and more efficient. As biases or inaccuracies emerge in algorithms, organisations must rely on XAI to identify where they formed, understand the decision-making processes that led there and work out how to mitigate them.

With this identification and optimisation, AI can become a true force for good, helping to eliminate rather than compound the challenges human teams already face. For AI algorithms to truly bolster security defences, the humans behind the AI need to understand its decisions through explainability.
