GenAI anxiety is warranted — if you value privacy

By Enterprise Analyst John Donegan, ManageEngine
Tuesday, 31 October, 2023


Generative AI is everywhere these days, with stories around the use, misuse and potential impact of this burgeoning technology published daily.

As we grapple with the implications of AI’s implementation, the Albanese government has focused on the safe and responsible use of the technology, and Australia has become one of the first countries to adopt AI ethics principles.

Of course, generative AI itself is not new. Its roots can be traced back to the 1950s, when machine learning was introduced and researchers began to explore the idea of using algorithms to generate new data.

New generative AI iterations, like ChatGPT, have some impressive capabilities, including natural-language understanding. At a high level, there are two subtopics within natural-language processing: natural-language generation (NLG) and natural-language understanding (NLU). As the names imply, NLG describes a computer’s ability to write, while NLU describes a computer’s reading comprehension skills.
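
To make the distinction concrete, here is a minimal sketch using the open-source Hugging Face Transformers library. The library is an assumption for illustration, not a tool named in this article; commercial systems like ChatGPT expose comparable capabilities behind proprietary APIs. The first pipeline reads and classifies text (NLU); the second writes new text from a prompt (NLG).

# NLU vs NLG, sketched with Hugging Face Transformers (pip install transformers).
from transformers import pipeline

# NLU: reading comprehension -- classify the sentiment of a sentence.
nlu = pipeline("sentiment-analysis")
print(nlu("Generative AI raises serious privacy questions."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.98}]

# NLG: writing -- continue a prompt with newly generated text.
nlg = pipeline("text-generation", model="gpt2")
print(nlg("Generative AI is", max_length=20))
# e.g. [{'generated_text': 'Generative AI is ...'}]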

OpenAI’s GPT-3 has 175 billion parameters, and the multimodal GPT-4 can process both text and images, in multiple languages.

However, the recent rapid emergence and uptake of generative AI tools like ChatGPT — it garnered more than 100 million users within two months of release — has raised questions and instilled anxiety among some commentators.

This anxiety is warranted, and not just because of job displacement, algorithmic biases, AI-powered cybersecurity attacks and the propensity for disinformation and misinformation to spread at scale. While all these concerns are valid, generative AI is particularly troublesome when viewed from the perspective of privacy.

It’s worthwhile noting that nearly every new technology throughout recorded history — books, electricity, radio, television, video games, smartphones and social media — has instilled panic in a portion of the population. That said, in certain circumstances it is necessary to invoke regulatory and legislative efforts to rein in emerging technology, and this is one of those occasions.

Legislative efforts to regulate generative AI

In June, the Australian Federal Government released a Safe and responsible AI discussion paper, which examines the existing regulatory and governance responses in Australia and around the world. In addition, the National Science and Technology Council released the Rapid Response Report: Generative AI, which outlines the potential risks and opportunities of emerging AI technologies, with a particular focus on providing a scientific basis for discussions about the way forward.

Data privacy concerns and the need to generate revenue

Privacy concerns are valid, especially considering that millions of websites are scraped for the training data that feeds applications like ChatGPT. Among these web pages are hundreds of billions of words, much of this material copyrighted, proprietary or comprising the personal data of individuals. Even where this data is publicly available (a phone number in a digital CV, for example), there is still the issue of contextual integrity: an increasingly important privacy benchmark holding that an individual’s personal data shouldn’t be revealed outside the context for which it was originally given.
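
As an illustration of why scraped data is so hard to sanitise, consider the sort of redaction pass a scraping pipeline could in principle apply before text enters a training corpus. This is a hypothetical sketch with deliberately crude patterns, not a description of how any named vendor actually filters its data.

import re

# Hypothetical pre-training redaction pass. Real PII filtering is far
# harder than two regular expressions.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")  # crude phone-number pattern

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

scraped = "Contact Jane on +61 2 9876 5432 or jane.doe@example.com."
print(redact(scraped))  # Contact Jane on [PHONE] or [EMAIL].

Even a scrubbing step like this only treats the surface of the problem: contextual integrity concerns where data is allowed to flow, not merely what it looks like.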

OpenAI originally incorporated data from user prompts into ChatGPT’s training corpus by default. However, following the backlash, it no longer uses data submitted through its API for model training, although users can still opt in to provide their data to OpenAI if they wish.
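
The change OpenAI made is, at bottom, a change of default: from collecting prompts unless users objected to collecting them only when users volunteer. Below is a hedged sketch of what such an opt-in guard might look like on the client side; every name here is hypothetical, and this is not OpenAI’s actual API.

from dataclasses import dataclass

@dataclass
class UserSettings:
    share_for_training: bool = False  # privacy-preserving default: off

training_corpus: list[str] = []  # stand-in for a vendor's data store

def call_model(prompt: str) -> str:
    return f"(model response to: {prompt!r})"  # stand-in for a real endpoint

def submit_prompt(prompt: str, settings: UserSettings) -> str:
    response = call_model(prompt)
    if settings.share_for_training:  # prompts are kept only on explicit opt-in
        training_corpus.append(prompt)
    return response

submit_prompt("Summarise my CV", UserSettings())  # not stored
submit_prompt("Summarise my CV", UserSettings(share_for_training=True))  # stored
print(training_corpus)  # ['Summarise my CV']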

Although companies like OpenAI have privacy policies designed to reassure us, they are for-profit entities. OpenAI and other organisations like it will eventually need to make money, which may come from selling user data to third parties.

Companies racing to be first to market with generative AI have not shown much concern for user data privacy. In the text-generation space alone, players include Google (Bard), Meta (Llama), Baidu (Ernie), DeepMind (Sparrow) and OpenAI (ChatGPT); make what you will of that cast of characters.

Of course, generative AI technology is not inherently bad. While it can be used for nefarious purposes, generative AI (whether an AI-powered chatbot, a synthetic video or a deepfake audio file) can also be used for positive initiatives. For example, researchers are exploring the use of neural networks and synthetic audio to help restore the speech of ALS patients.

There’s little doubt the generative AI race will, in some way, infringe on privacy. At the very least, it poses threats deserving of further discussion as this emerging technology evolves.

Image credit: iStock.com/NanoStockk
