Guidance on generative AI risk: ISACA
A new white paper from ISACA examines the benefits and risks associated with generative AI use, including recommended protocols and practices for AI security.
The white paper, "The Promise and Peril of the AI Revolution: Managing Risk", explores the rapidly evolving risk landscape and the steps that risk professionals should take to keep up.
While excitement around generative artificial intelligence applications like OpenAI’s ChatGPT and Google’s Bard has grown, so have the notes of caution from many in the industry, who point to a range of potential problems.
The paper examines several types of risk that enterprises could face with generative AI, including unclear ownership, weak internal permission structures, threats to data integrity, impacts on cybersecurity and resiliency, and broader societal risk.
As AI will likely affect businesses in every industry, organisations must take four important steps to maximise AI value while installing appropriate and effective guardrails, as part of a continuous risk management approach:
- Identify AI benefits.
- Identify AI risk.
- Adopt a continuous risk management approach.
- Implement appropriate AI security protocols.
Following these steps will help leaders strike the right balance between risk and reward as AI-enabled tools and processes are adopted across their enterprises. In addition to breaking down the four steps above, the ISACA paper outlines eight protocols and practices for building an AI security program under the fourth step, including:
- Trust but verify.
- Design acceptable use policies.
- Designate an AI lead.
- Perform a cost analysis.
“While some leaders may prefer to wait to adopt AI tools, delaying the implementation of proper security and risk management plans can itself put your organisation at risk; managing AI risk isn’t just a precaution, it’s a necessity,” said Jason Lau, Chief Information Security Officer of Crypto.com and ISACA Board Director.
“It is imperative that leaders prioritise establishing the correct infrastructure and governance processes for AI in their organisations, ensuring they align with core ethics, sooner rather than later.”
The paper is available for download from the ISACA website.