Innovation and security: the challenges of generative AI for enterprises
By John Hopping, Senior Manager, Sales Engineering Asia Pacific, Cradlepoint
Monday, 01 April, 2024
Despite what some might think, no, bots probably won’t come looking for your job, but they may well attack your intellectual property. Generative AI (GenAI) has become a transformative technology for businesses in more ways than one. In 2024, its widespread adoption will undoubtedly have an impact on organisations across all industries, resulting in increased productivity and efficiency. However, GenAI can be a double-edged sword. Organisations need to tread carefully when assessing organisational security risks, especially when it comes to data protection.
Research conducted by Harvard Business School in 2023 showed that implementing generative AI could increase employee productivity by up to 40%, while introducing new data security challenges. One main concern is that employees who use GenAI for daily work tasks may unintentionally expose sensitive data to the large language models (LLMs) behind these tools. Today, in addition to the many other security issues that organisations need to be aware of, they need to protect themselves from the potential threats posed by this powerful and prolific technology.
The benefits of generative AI in business
Generative AI stands out for its remarkable ability to generate content, automate software development, improve customer interactions through chatbots, and optimise support operations. According to Gartner, an overwhelming majority of companies have already begun to integrate this technology into their processes, demonstrating its transformative potential. Gartner predicts that within two years, more than 80% of companies will likely use generative AI APIs (application programming interfaces) and models, or deploy dedicated GenAI applications in their production environments, up from less than 5% last year.
The risks associated with generative AI
However, the integration of generative AI into business practices is not without significant concerns. The ease with which employees can access and use these tools increases the risk of accidentally exposing confidential information, a concern exacerbated by the ability of these systems to process vast amounts of data. In addition, the training of these systems on data available online raises legitimate questions around copyright and intellectual property. Biases present in the training data can also lead to questionable outputs, highlighting the need for an ethical and critical approach in the deployment of generative AI.
Security solutions for generative AI
The rapid increase in the use of these technologies in the enterprise underscores the urgency of developing security solutions that keep up with this growth. The implementation of data loss prevention solutions based on Zero Trust technology offers a promising approach, enabling the secure use of generative AI. With solutions such as generative AI data loss prevention, organisations can securely enable the use of generative AI across the enterprise without risking their data and integrity.
Zero Trust architecture, which air-gaps use of GenAI apps in secure, isolated cloud containers, provides true protection. Organisations can easily implement this clientless solution to set access policies for users rather than blocking GenAI sites outright. For example, organisations can block users from entering personally identifiable information (PII) or prevent them from using the copy/paste function, which risks sensitive corporate data flowing into LLMs and being shared outside the organisation. In addition, GenAI isolation also protects users’ devices and corporate networks from any malware generated by a GenAI tool or transmitted from a malicious source.
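To make the PII-blocking policy above concrete, here is a minimal sketch of how a pattern-based filter might screen text before it is submitted to a GenAI tool. The patterns, function names and categories below are illustrative assumptions for this article, not Cradlepoint's implementation; production DLP engines use far richer detectors and contextual analysis.

```python
import re

# Illustrative PII patterns only; a real DLP engine would detect many more
# categories (names, addresses, document numbers, API keys, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "au_tfn": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),  # Australian Tax File Number layout
}

def check_prompt(text: str) -> list[str]:
    """Return the PII categories detected in text destined for a GenAI tool."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def enforce_policy(text: str) -> bool:
    """Block the submission if any PII category matches; allow it otherwise."""
    violations = check_prompt(text)
    if violations:
        print(f"Blocked: prompt contains {', '.join(violations)}")
        return False
    return True
```

A clientless, isolation-based solution would apply this kind of check in the cloud container that brokers the GenAI session, so no software needs to run on the user's device.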
Generative AI represents a unique opportunity for business innovation, but it also requires special attention to the security risks it can generate. By adopting advanced security strategies, organisations can harness the full potential of this technology while ensuring the protection of their most valuable assets. For Australian organisations, the balance between innovation and security will become the key to successfully navigating the digital future.