Responsible AI: work without limiting innovation


By Ray Greenwood, Customer Advisory Lead, Public Sector, SAS
Thursday, 19 October, 2023


A new AI-driven era is emerging, and it’s important that organisations explore the technology’s potential. Understanding the need for appropriate controls is essential to building a responsible approach to its future deployment.

The fundamentals of data and analytics have evolved rapidly over the past 25 years, but never as fast as they have in the last year. The rise of OpenAI’s ChatGPT, Google’s Bard and other generative tools has drawn interest far beyond enterprise business circles. This rapid evolution of AI tools has captured the broader public’s imagination, revealing immense potential... and risk.

The big questions many are asking right now about ethical AI — such as whether the interests of people like themselves are adequately protected — are important for ensuring that the future of AI is just for everyone... but they’re not new questions.

Professionals working in data and analytics have grappled with bias, representation, model drift and fairness throughout their careers. The public, on the other hand, is only now starting to see these issues and the clear impact that tools like ChatGPT can have on the world. These issues demand rapid-response thinking from government, educational institutions and industry, yet many are simply not equipped to solve such problems in short timeframes.

Responsible automation as we enter the era of Generative AI

Technology often develops faster than regulation, and history shows that governments struggle to put the tech genie back in the bottle. With GenAI, there is a need to quickly assess the potential impact of these tools on people across the social and economic divide.

For example, what damage could be done in education if students use GenAI to auto-generate answers without building comprehension and understanding? At the same time, AI has the potential to help students learn complex concepts through conversational dialogue. It brings to mind “The Young Lady’s Illustrated Primer” from Neal Stephenson’s 1995 sci-fi novel, The Diamond Age, which suggests that harnessing the power of AI for more than just an elite few, and directing it in the right way, could have an enormously positive impact on education.

Working with organisations in domains including law enforcement, we know what it means to manage data where lives and reputations are on the line. Applying the right analytics to the right datasets, with the right governance, is essential in every mission-critical environment: it gives teams the tools they need to support their work without introducing unexpected errors into their processes.

Balancing capability with appropriately governed, scalable deployment of analytic assets has always been a critical element of analytics services. Analysis of unstructured data, whether powered by natural language processing (NLP), deep learning, reinforcement learning (RL) or other techniques, can now incorporate GenAI, extending the reach of such tools well beyond data scientists and statisticians.

However, where specialists have long operated with an understanding of bias and the other factors that must be considered when applying AI to decision-making, business leaders must now ensure that broader use is supported by the right training and by clarity around implementation, so that it leads to ethical outcomes.

The potential for bias, inaccuracy and misinformation in GenAI reflects our human inputs. These problems cannot be fully removed from a system but must be carefully managed by the humans who control inputs and outputs, making it essential to keep human decision-making in the governance loop for every AI-powered business process. Curation, management, governance and transparency must become essential parts of the GenAI process if we are to maximise its potential and minimise risks.
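As a rough illustration of what keeping human decision-making in the governance loop can look like in practice, the Python sketch below routes generated outputs through a simple gate: anything that trips an illustrative policy list or falls below a confidence threshold is held for a human reviewer, and every decision is written to an audit log. The policy terms, threshold and function names are hypothetical placeholders for this example, not a reference to any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical governance gate: GenAI outputs are only acted on automatically
# when they pass simple checks; everything else is queued for a human reviewer,
# and every decision is logged so the process stays transparent and auditable.

@dataclass
class Decision:
    prompt: str
    model_output: str
    confidence: float
    status: str = "pending"          # "auto_approved", "needs_review", "approved", "rejected"
    reviewed_by: str | None = None
    logged_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

BLOCKED_TERMS = {"guaranteed outcome", "medical diagnosis"}  # illustrative policy list
CONFIDENCE_THRESHOLD = 0.85                                  # illustrative threshold

audit_log: list[Decision] = []

def governance_gate(prompt: str, model_output: str, confidence: float) -> Decision:
    """Route a generated output: auto-approve only when checks pass, else hold for a human."""
    decision = Decision(prompt, model_output, confidence)
    policy_hit = any(term in model_output.lower() for term in BLOCKED_TERMS)
    if policy_hit or confidence < CONFIDENCE_THRESHOLD:
        decision.status = "needs_review"     # a person stays in the loop
    else:
        decision.status = "auto_approved"
    audit_log.append(decision)               # transparency: every output is recorded
    return decision

def human_review(decision: Decision, reviewer: str, approve: bool) -> Decision:
    """Record the human reviewer's call on a held output."""
    decision.status = "approved" if approve else "rejected"
    decision.reviewed_by = reviewer
    return decision

if __name__ == "__main__":
    d = governance_gate("Summarise the policy", "This summary is a guaranteed outcome.", 0.92)
    print(d.status)                                               # needs_review
    print(human_review(d, reviewer="analyst_01", approve=False).status)  # rejected
```

The point of a sketch like this is not the specific checks, which will differ by organisation, but the pattern: outputs are curated before they are used, low-confidence or policy-flagged cases go to a person, and the whole trail is recorded.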
