How AI-powered risk management builds trust

BSI Group

By Mark Brown, Global Managing Director, Digital Trust Consulting, BSI
Monday, 23 October, 2023


Journeys worth taking often carry an element of uncertainty — and the digital journey is no exception, throwing up a number of cybersecurity considerations around personal data and who has access to information. As AI accelerates the road to digital transformation and Society 5.0, shaping how we work, rest and play, it has the potential to be a force for good for those who are well prepared and focused on building digital trust before setting out.

You could be forgiven for thinking that AI was new on the scene, given the extent of global coverage of developments such as ChatGPT. Analysis using the Signal monitoring tool suggests coverage of AI in top media titles rose 286% in the first half of 2023 compared with the preceding six months*. In fact, it’s been revving up for years and has long been a regular discussion point at major global events such as the G7, G20 and the World Economic Forum (WEF), as shown by the latter’s 2018 Future of Jobs report.

What’s new is that AI is now crossing over from small, contained environments into mainstream technology at a consumer level, as McKinsey’s research shows. This brings added risk, but, importantly, it also offers added opportunity to drive progress across society for those who know how to unlock it. AI has the potential to positively shape our future, including when it comes to making us more cybersecure.

Bridging the societal gap

Clearly, AI is here to stay. Many organisations around the world already use it daily, and it is firmly embedded within business operations and everyday consumer interactions, such as the targeted information and recommendations served up by the likes of Google and Amazon. The speed of this shift means there could be a gap between the pace of change and the public’s understanding of it.

Awareness is key here: people need to understand that when they put data into systems and use certain technology, that data is likely to be used by AI, usually with the intent of benefiting the consumer, but potentially for other purposes too, such as selling them something.

It’s worth noting that whilst figures suggest the number of data breaches is decreasing, the number of records being breached is increasing as big tech firms — with huge volumes of aggregated consumer data — increasingly find themselves being targeted. Now is the moment to upskill the public — and fast. Mass societal education is critical to ensure AI can be a positive force for society.

AI as the cybersecurity gatekeeper

Major enterprise organisations already use security information and event management (SIEM) tools to monitor activity and remain alert to threats. At BSI alone, in line with other organisations of our size, we average around 150 million security log events per month (events related to security, such as login attempts, object access and file deletion)**. For humans this is very much needle-in-a-haystack territory, and that is the security opportunity for AI: to provide real-time analysis of this data against a set of algorithms and rules predetermined by the controls that we operate as an organisation.
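
To make that concrete, here is a minimal sketch in Python of the kind of predetermined rule such a system might apply to a stream of login events. The event fields, threshold and window size are illustrative assumptions for this article, not BSI’s actual controls or any particular SIEM product’s interface.

```python
# Hypothetical rule: flag an account once it racks up several failed logins
# within a short sliding window. Field names and thresholds are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN_THRESHOLD = 5        # assumed control: 5+ failures...
WINDOW = timedelta(minutes=10)    # ...within a 10-minute window

def flag_suspicious_logins(events):
    """Return (user, timestamp) pairs where the failed-login rule is breached.

    `events` is an iterable of dicts such as:
    {"timestamp": datetime, "user": "alice", "action": "login_failed"}
    """
    recent_failures = defaultdict(list)
    alerts = []
    for event in sorted(events, key=lambda e: e["timestamp"]):
        if event["action"] != "login_failed":
            continue
        window = recent_failures[event["user"]]
        window.append(event["timestamp"])
        # Discard failures that have aged out of the sliding window
        while window and event["timestamp"] - window[0] > WINDOW:
            window.pop(0)
        if len(window) >= FAILED_LOGIN_THRESHOLD:
            alerts.append((event["user"], event["timestamp"]))
    return alerts

if __name__ == "__main__":
    start = datetime(2023, 10, 1, 9, 0)
    sample = [{"timestamp": start + timedelta(seconds=30 * i),
               "user": "alice", "action": "login_failed"} for i in range(6)]
    print(flag_suspicious_logins(sample))  # alerts once the 5th failure lands
```

In practice a SIEM evaluates many such rules across millions of events; the point of the sketch is simply that the rules are predetermined by the organisation’s controls, while the machine does the tireless matching.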

The potential improvement is clear: shifting these tasks to AI could allow issues to be identified far more quickly and without taking up employee time and energy. With the right tools in place, AI analysis presents an opportunity to take remedial action before an incident even becomes significant. This could be advantageous to organisations in all sectors.

There is, of course, a critical role for humans here. Trust remains a crucial factor when it comes to AI, and many people will need to trust it completely before they are comfortable handing it responsibility for everyday tasks. In the future, the ideal will be cybersecurity managed as a partnership between people and AI.

Assessing the risk appetite

Against this backdrop, organisations can assess their risk appetite: do they view security as a bare-minimum cost? Or, as set out by McKinsey, do they see it as a route to earning the trust of employees, partners, consumers and institutional investors, thereby creating a competitive advantage through digital trust?

Organisations that take the long-term view may well see that, with additional investment, AI can enable them to strengthen their cybersecurity, privacy and digital risk posture, acting as a proactive as well as preventive tool in their armoury. The average time to detect a breach is more than six months, according to IBM, yet the company’s annual Cost of a Data Breach report found that extensive use of security AI and automation shortened breach lifecycles by 108 days. In other words, AI can be a game changer.

Not all data sets are equal

As we seek to better understand the opportunity around AI, acknowledging that there are different types of datasets can be key to unlocking their true potential.

One question to consider is: is this data complete, or is it evolving? Identifying whether you are dealing with a fixed dataset (one that is not updated automatically, such as ChatGPT’s training data), a generative dataset (one that learns on the go, such as those behind Google or Amazon) or a transient dataset (one that is relevant for a limited period, such as BBC Sport live scores) means we can assess it on its merits.

Generative data, for example, may well include missing or incorrect information. The concern is that misinformation seeps through the system and gets picked up by generative AI tools, so it’s important that someone takes the time to validate it. There’s an opportunity for AI here to act as a filter, helping to exclude missing or incorrect information and driving positive progress through the dissemination of accurate information.
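
As a rough illustration, the sketch below (again in Python) shows how a simple validation filter might screen records before they feed a downstream AI tool, rejecting entries with missing fields or transient data past an assumed shelf life. The field names, rules and cut-off are hypothetical, not a description of any particular dataset or product.

```python
# Hypothetical pre-ingestion filter: drop incomplete records and transient
# records that have outlived their relevance. All rules here are assumptions.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"source", "published_at", "claim"}
MAX_AGE = timedelta(days=30)   # assumed shelf life for transient data

def is_valid(record, now=None):
    """Return True only for complete, still-relevant records."""
    now = now or datetime.now(timezone.utc)
    if not REQUIRED_FIELDS.issubset(record):
        return False                              # incomplete record
    if record.get("transient") and now - record["published_at"] > MAX_AGE:
        return False                              # past its shelf life
    return True

def filter_records(records):
    """Keep records that pass validation; everything else needs human review."""
    return [r for r in records if is_valid(r)]

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    records = [
        {"source": "newswire", "published_at": now, "claim": "..."},
        {"source": "scoreboard", "published_at": now - timedelta(days=90),
         "claim": "...", "transient": True},
        {"source": "blog"},  # missing fields
    ]
    print(len(filter_records(records)))  # 1: only the complete, current record
```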

Organisations that put building greater digital trust at the heart of their strategy are ideally positioned to thrive on the rapidly evolving digital journey. As we accelerate towards Society 5.0, AI can play a central role in tackling cyber risks — acting as a force for good by making us safer and more secure as individuals, organisations and society.

*Figure based on a Signal search of articles where artificial intelligence was detected as a topic in publications identified as 50 of the most influential, comparing H2 2022 with H1 2023. In H2 2022 there were 2,743 such articles; in H1 2023 there were 10,594.

**Figures from internal BSI data for year to date 2023.
