Getting practical with generative AI

By Richard Oldham, Head of Solutions, Asia Pacific at Amelia
Wednesday, 12 July, 2023

In today’s rapidly evolving landscape of generative AI and large language models (LLMs), there is a flood of information and excitement, but little practical guidance for enterprise executives seeking to capture tangible, measurable business value from this transformative technology.

This is because beneath all the hype, excitement and heightened concerns, few parties are well-positioned enough to lay out a clear direction to move in when it comes to generative AI.

Uncertainty and mixed messages are in some ways understandable — these are formative times for generative AI. There’s no supporting corpus or body of knowledge for enthusiasts to lean on for the purposes of their own research and decision-making. This makes it difficult for observers to stay on top of developments and to separate the signal from the noise.

For this very reason, there is a growing demand for trusted advisors with expertise in both LLMs and AI to help cut through the confusion and provide practical advice. These advisors can help organisations understand how to harness generative AI to unlock phenomenal business value, especially when coupled with existing cognitive technologies like conversational AI, which understands and responds to natural human language.

Having been in this space for many years now and using LLMs — albeit smaller versions — within our platform, the team at Amelia has a perspective on this to share, as well as practical guidance for enterprise executives who want to learn how their workforce can leverage this technology to drive meaningful business value. The truth is, generative AI does not signal the end of human innovation. The true value of generative AI lies in its ability to accelerate our creativity, our problem-solving skills and our ability to help others.

With our expertise and guidance, we can help navigate this transformative technology landscape and unleash the full potential of generative AI — raised to the power of cognitive AI — to drive real, measurable business outcomes.

The first practical step is to do ‘something’

To do nothing, even at this early juncture, would be a mistake. Generative AI is going to impact all our lives and fundamentally change many business functions.

The temptation to hold off may be due to an over-inflated sense of risk or a lack of experienced resources. An early narrative that has formed around ChatGPT and Google Bard, for example, is their potential to produce “hallucinations” — false citations, references, quotes, answers to questions and so on. But it is equally dangerous for businesses to dismiss generative AI as a technology that will fail them, or as carrying too much inherent, unmitigated risk to experiment with at all.

The underlying technology is formative and evolving rapidly; it will become more accurate. Additionally, the problem may not be entirely with the model. Mistakes in the output are cause for internal reflection as well. Part of the issue may be in the way the question was asked. This is known as “prompt engineering”, and training staff can lead to better phraseology and better outputs being generated. Organisations should also explore — or at least know about — fine-tuning the pre-trained models. Getting your training wheels on with generative AI should be the first step.
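The difference prompt engineering makes can be sketched in code. The following is an illustrative example only — the helper names and the specific prompt tactics (role framing, grounding context, an explicit refusal instruction) are assumptions about common practice, not a prescribed template, and the actual model call is omitted:

```python
# A minimal sketch of "prompt engineering": the same question asked two ways.
# The model call itself is omitted; the point is the prompt's structure.

def naive_prompt(question: str) -> str:
    """The unstructured phrasing that often invites hallucinated answers."""
    return question

def engineered_prompt(question: str, context: str) -> str:
    """Adds a role, grounding context and an explicit refusal instruction --
    common tactics for reducing fabricated output."""
    return (
        "You are a support assistant for our company.\n"
        "Answer ONLY using the context below. If the answer is not in the "
        "context, reply 'I don't know'.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    q = "What is our refund window?"
    ctx = "Refunds are accepted within 30 days of purchase with a receipt."
    print(engineered_prompt(q, ctx))
```

Training staff to supply context and constraints like this, rather than bare questions, is the kind of phraseology improvement the paragraph above describes.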

By engaging in practical experimentation, individuals and organisations can gain a deeper understanding of the capabilities and limitations of the technology. It opens the door to generating ideas for how generative AI can be applied to solve business problems and unlock tangible value. Initial experimentations could focus on enabling employees on sales and marketing teams to leverage generative AI to create first drafts of content such as blogs, white papers, presentations or customer emails. Another use case could involve enabling software engineers to use generative AI as a co-pilot for writing and debugging code.

It’s important to note, however, that generative AI should not be used blindly for creating and publishing content or code. Instead, employees must collaborate with this technology to complete first drafts in minutes instead of hours, and then focus on editing until the output is of the highest quality. Employees must still have a vision for what they need to accomplish — generative AI is simply there to help bring that vision to life. In essence, the technology does not replace innovation, it accelerates it.

Beginning experimentation here yields productivity gains, but does not capture outcomes that truly impact the bottom line. This is why it’s important to take a technology like generative AI and combine it with existing cognitive technologies to create intelligent virtual assistants (IVAs) that drive cost savings and increased customer and employee satisfaction.

The second practical step is to decide what it is you want to do

As companies gain experience working with generative AI, they realise that while this technology is transformational, it still needs guardrails. An enterprise cannot just take a tool like ChatGPT and deploy it for customer service without risking serious ramifications.

This is why it’s important for business leaders to develop policies on the appropriate use of generative AI within their organisations. For example, an enterprise that works with sensitive or private data should implement a policy that no data or source code should be shared with an LLM.
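One way such a policy becomes enforceable rather than aspirational is a technical check before any prompt leaves the organisation. The sketch below is purely illustrative — the patterns, labels and the policy itself are assumptions, and a real deployment would use a proper data-loss-prevention or PII-detection service rather than two regexes:

```python
import re

# Illustrative sketch of a "no sensitive data is shared with an LLM" policy:
# redact obvious identifiers before a prompt is sent to an external API.
# The patterns here are deliberately simplistic placeholders.

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive-data pattern with a labelled tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    print(redact("Contact jane@example.com about card 4111 1111 1111 1111"))
```

Routing every outbound prompt through a gate like this gives the policy a single, auditable enforcement point.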

After defining policies around the use of generative AI, enterprises must decide where and how they will adopt the technology. For example, empowering employees with this technology is a great way to improve productivity. Leveraging the power of generative AI, reinforced by the focus, guardrails and security of enterprise conversational AI, creates an opportunity for organisations to drive meaningful business value through increased operational efficiency.

The combined impact of these two technologies also helps companies reimagine customer experience. For a moment, imagine a world where you chat or call into a contact centre and describe your query in your own natural language, and in response, an IVA immediately resolves your request in a manner that is contextualised and personalised to you. Only a few years ago this would have sounded like fiction — but today it is made possible by leveraging the collaborative power of generative AI and conversational AI tools.

Conversational AI provides foundational conversational flows, system and data integrations, security and privacy, and response guardrails, while generative AI’s outstanding language understanding and generation capabilities enable tasks to be done faster, and at a higher quality and scale. Together, these technologies resolve simple FAQs, as well as accomplish action-oriented tasks that require both conversational and operational understanding.

To determine high-value use cases, businesses can begin by analysing high-volume requests and their average handle time, response complexity and the potential business impact of automating the response workflow. Use cases will vary by industry vertical and horizontal function, by the priorities of a business, and by whether the implementation is employee- or customer-facing.
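The prioritisation criteria above can be turned into a simple back-of-the-envelope ranking. The figures, use-case names and the single feasibility score below are invented for illustration — real analysis would draw volumes and handle times from contact-centre data:

```python
# Hypothetical scoring sketch for prioritising automation use cases by
# request volume, average handle time (AHT) and estimated automation
# feasibility (0-1). All numbers are placeholders.

def monthly_minutes_saved(volume: int, aht_minutes: float,
                          feasibility: float) -> float:
    """Rough monthly agent-minutes recoverable if the workflow is automated."""
    return volume * aht_minutes * feasibility

use_cases = [
    # (name, requests/month, AHT in minutes, feasibility)
    ("Password reset", 12000, 4.0, 0.9),
    ("Invoice copy request", 3000, 6.0, 0.8),
    ("Complex billing dispute", 800, 18.0, 0.2),
]

ranked = sorted(use_cases, key=lambda u: monthly_minutes_saved(*u[1:]),
                reverse=True)

for name, *args in ranked:
    print(f"{name}: {monthly_minutes_saved(*args):,.0f} agent-min/month")
```

Even this crude model makes the trade-off visible: a high-volume, simple request often outranks a complex one despite a far longer handle time.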

Regardless of industry, there is tremendous value to be captured from these technologies, and having a trusted AI advisor empowers organisations to drastically accelerate their AI journeys.

The third practical step is to not limit experiments to one LLM or chat tool

OpenAI, with ChatGPT, is the current leader from a market buzz perspective when it comes to generative AI. As a result, many businesses are focusing on OpenAI’s platform to explore the capabilities of the technology.

While this may be a reasonable place to start, ensure that you are also looking at multiple options in your R&D processes. In much the same way the cloud space evolved to have different platforms with different strengths and weaknesses, generative AI is an expansive space, with a choice of many models and options. Various models outperform each other for different use cases. Examples of this include language understanding, summarisation and explanation, language translation, generating text, idea generation, technical tasks like writing Excel functions or code, debugging or reviewing code, question answering, and more.

When selecting an LLM, it’s crucial to consider various factors beyond functional capabilities. Each model has an associated cost and can easily become cost prohibitive to implement with enterprise volumes. Additionally, each model has varying performance, risk and flexibility depending on the provider.
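How quickly per-token pricing compounds at enterprise volumes is easy to demonstrate. The prices and volumes below are placeholders, not any vendor's actual rates:

```python
# Back-of-the-envelope sketch of how per-token pricing scales at enterprise
# volumes. Prices and request figures are hypothetical placeholders.

def monthly_cost(requests_per_month: int, tokens_per_request: int,
                 price_per_1k_tokens: float) -> float:
    """Estimated monthly spend for a given model's per-1K-token price."""
    return requests_per_month * tokens_per_request / 1000 * price_per_1k_tokens

# 1M customer-service requests/month, ~1,500 tokens each (prompt + response):
small_model = monthly_cost(1_000_000, 1500, 0.002)  # hypothetical $0.002/1K
large_model = monthly_cost(1_000_000, 1500, 0.06)   # hypothetical $0.06/1K

print(f"small model: ${small_model:,.0f}/month")
print(f"large model: ${large_model:,.0f}/month")
```

A thirty-fold difference in per-token price becomes a thirty-fold difference in monthly spend, which is why a cheaper model that is merely adequate for a use case can beat the most capable one.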

Enterprises should also explore the differences between using a cloud API versus self-hosting LLMs, as well as understand their associated implications. All of this is to say that there are numerous criteria to consider when evaluating and choosing which LLMs to use, and sometimes, LLMs are not even the ideal choice.

Relying solely on one technology or vendor can therefore lead to limitations and missed opportunities. Instead, building your generative roadmap in a way that allows flexibility and integration with different technologies and vendors is paramount. This approach ensures that you can adapt and leverage the strengths of different LLMs based on the specific requirements and challenges of each use case.

While it may be challenging to stay abreast of what other providers are out there and the relative strengths of each, a trusted advisor can provide considerable assistance.

The fourth practical step is to consider the total user experience

When considering the practical steps for implementing generative AI, it is crucial to take a holistic approach to the user experience. While taking a broad lens to the space is best practice, leveraging different generative AI models independently — implemented by different teams across an organisation — can produce a fragmented user experience and decentralise risk controls, making for a more complex set of solutions.

To address this challenge, organisations should explore the adoption of an Enterprise AI Platform that has a library of conversational and analytical models. This provides a single point of entry and a consistent experience for all experiments being run across the organisation. Ultimately, organisations should take a flexible approach in which models can be augmented and changed over time, to ensure the organisation achieves maximum business benefit.

An Enterprise AI Platform enables consistency, centralisation and reduced risk when organisations work with generative AI and encourages creativity for how the technology can be implemented. We are at the early stages of working with these technologies, and the true extent of how we can leverage these tools to accelerate innovation has yet to be fully realised.

By avoiding rigid boundaries and embracing a mindset of exploration and experimentation, organisations can remain at the forefront of innovation. The highest-value use cases for generative AI are undoubtedly still waiting to be conceived, and by staying open to new possibilities, businesses can position themselves to harness this technology’s transformative power.

Image credit: iStock.com/Robert Way
