Is ethical AI no longer important?

Dataiku
By Grant Case, Regional Vice President, Sales Engineering - APJ at Dataiku
Thursday, 22 December, 2022


In early November, as part of a wave of layoffs at Twitter, Elon Musk let go of the company’s ethical artificial intelligence (AI) team. This team was Twitter’s strongest internal watchdog, responsible for making the platform’s algorithms transparent, fair and inclusive.

With tech companies around the globe laying off staff in the face of the economic downturn, this might inspire many leaders to take a leaf out of Twitter’s book and cut headcount in similar teams. Has the time come to re-evaluate the importance of ethical AI?

The answer is no. In fact, ethical AI matters more than ever.

According to research by IDC, commissioned by Dataiku, 39% of organisations in APAC have already invested in AI. Among those looking to increase AI spending, 75% expect to spend over 10% of their technology budget on AI alone.

There’s no doubt that AI and machine learning (ML) technologies are developing rapidly, and companies are using them in new and sometimes sensitive contexts. This means an increase in the potential for unintended or unanticipated outcomes that could have detrimental impacts on people’s lives.

Several governments have recognised this, including Australia, where the government released eight principles for safe, secure and reliable AI. These principles aim to encourage businesses and other governments to adhere to the highest ethical standards throughout the AI design and deployment process.

For companies looking to implement ethical AI standards, the big question lies in the ‘how’. As a starting point, businesses should set out to create AI that is fair, unbiased and accountable, with the ultimate goal of steering clear of unintended or harmful actions. Once the goal has been made clear and communicated to teams, it’s time to get into the nitty-gritty.

AI governance

A strong starting point is for organisations to outline their AI governance approach, beginning with a framework that articulates organisational priorities and values. From there, rules, processes and requirements can be set up to shape behaviour.

Several teams across the organisation often have a hand in creating AI models, each using different technologies and datasets. Once models are deployed, monitoring by individual owners can lead to a misalignment of models’ aims, statuses and impacts.

AI governance requires projects to be centralised for easy monitoring and deployment. In addition, a diversity of views from different stakeholder groups at each point in the modelling process is key to making ethical AI a shared responsibility. Once centralised and evaluated by stakeholders, each AI project can be qualified, reviewed and deployed. Following a consistent and formal set of steps ensures AI projects are more likely to meet organisational and AI governance standards.
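As an illustration, the qualify, review and deploy sequence described above could be enforced in a centralised project registry. The sketch below is hypothetical — the class names, stages and the two-sign-off threshold are assumptions for illustration, not a description of any particular product:

```python
from enum import Enum, auto


class Stage(Enum):
    QUALIFIED = auto()
    REVIEWED = auto()
    DEPLOYED = auto()


class AIProject:
    """Illustrative registry entry that enforces a consistent, formal
    qualify -> review -> deploy order, with sign-off from multiple
    stakeholders before deployment (all names here are hypothetical)."""

    def __init__(self, name):
        self.name = name
        self.stage = None
        self.sign_offs = set()  # stakeholders who have reviewed the project

    def qualify(self):
        self.stage = Stage.QUALIFIED

    def review(self, stakeholder):
        # Reviews are only valid once a project has been qualified.
        if self.stage not in (Stage.QUALIFIED, Stage.REVIEWED):
            raise RuntimeError("project must be qualified before review")
        self.sign_offs.add(stakeholder)
        self.stage = Stage.REVIEWED

    def deploy(self, min_sign_offs=2):
        # Deployment requires review by a diversity of stakeholders,
        # not a single individual owner.
        if self.stage is not Stage.REVIEWED or len(self.sign_offs) < min_sign_offs:
            raise RuntimeError("needs review by multiple stakeholders first")
        self.stage = Stage.DEPLOYED


project = AIProject("credit-scoring-model")
project.qualify()
project.review("data governance lead")
project.review("legal counsel")
project.deploy()  # succeeds only after multiple stakeholder reviews
```

The point of the sketch is the ordering constraint: a model cannot reach deployment without passing through qualification and multi-stakeholder review, which is what makes ethical AI a shared responsibility rather than an individual one.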

Establishing an ethical AI framework

Establishing an ethical AI framework does not require a large, complex services engagement from a consultancy, but can start with a simple checklist. An ethical AI checklist, applied early in a project, provides a list of questions for employees across various functions to assess the ethics of an AI project and informs more thoughtful decision-making. Creating an ethical AI checklist fosters open discussions about the ethical implications of AI projects and raises awareness of the topic.

However, it is up to each organisation to create a checklist that best suits its needs and norms, as common practice for one organisation may go against the principles of another.

While outcomes and questions will differ between organisations, most checklists will include a few common questions:
• What are the project’s objectives?
• What measure will denote project success?
• Who will the project impact?
• What potential biases live in the data?
• What have we done to address the potential ethical concerns of the project?
• Who will monitor these concerns after a project deploys?
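One way to make such a checklist operational is to record each common question alongside its answer and the stakeholder who signed off on it, per project. This is an illustrative sketch only — the class and field names are assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field


@dataclass
class ChecklistItem:
    question: str
    answer: str = ""    # filled in by the project team
    reviewer: str = ""  # stakeholder who signs off on the answer

    @property
    def complete(self) -> bool:
        # An item counts as complete only when answered and reviewed.
        return bool(self.answer.strip()) and bool(self.reviewer.strip())


@dataclass
class EthicalAIChecklist:
    project: str
    items: list = field(default_factory=lambda: [
        ChecklistItem("What are the project's objectives?"),
        ChecklistItem("What measure will denote project success?"),
        ChecklistItem("Who will the project impact?"),
        ChecklistItem("What potential biases live in the data?"),
        ChecklistItem("What have we done to address potential ethical concerns?"),
        ChecklistItem("Who will monitor these concerns after deployment?"),
    ])

    def outstanding(self) -> list:
        """Return the questions that still need an answer or a reviewer."""
        return [i.question for i in self.items if not i.complete]

    def ready_for_review(self) -> bool:
        return not self.outstanding()


checklist = EthicalAIChecklist(project="churn-model")
checklist.items[0].answer = "Reduce customer churn by 5%"
checklist.items[0].reviewer = "Data governance lead"
print(len(checklist.outstanding()))  # → 5
```

Applied early in a project, even a structure this simple surfaces which ethical questions remain open before a model moves forward.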

An ethical AI checklist is just the first step towards a holistic ethical AI framework. As an organisation matures, the framework should widen to include ethics guidelines, a process to audit the impact and outcomes of an AI model, a redress system for when AI model failures occur, and agreed-upon metrics for fairness, explainability and other elements. Ultimately, organisations should continue to revisit their checklist to consider changing societal contexts and organisational principles, ensuring ethical lapses do not occur because of a lack of oversight.

Ethical AI begins with diversity

A diversity of backgrounds and an education about AI are as important as governance and frameworks in achieving ethical AI. Organisations must establish teams that consist of a diverse group of individuals and educate all of them about the importance and implementation of ethical AI. Without diversity and education, an echo chamber can form and lead an organisation astray in its AI use, with no-one recognising the blind spots until it is too late.

Following these steps will help an organisation create AI projects with ethics and governance at the core, catering to the many nuances of the global community.



