AHRC highlights impact of AI on human rights


Tuesday, 24 July, 2018

The Australian Human Rights Commission (AHRC) has launched a major project to protect human rights in the era of artificial intelligence (AI).

Artificial intelligence, facial recognition, global data markets and other technological developments pose unprecedented challenges to privacy, freedom of expression and equality.

The Human Rights and Technology Issues Paper questions how Australian law should protect human rights in the development and use of new technologies. It asks what protections are needed when AI is used in decisions that affect our basic rights — in areas as diverse as insurance, social media and the criminal justice system. It also invites ideas on how we can make technology more inclusive of a diverse community.

This is the start of the conversation the commission will have with industry, government, academia and civil society over the coming months. A discussion paper will be published in early 2019 and a final report and recommendations will be delivered in late 2019.

“Working collaboratively with industry and government, we will develop a practical roadmap for reform in Australia,” said Human Rights Commissioner Edward Santow.

“Human rights must shape the future these incredible innovations have made possible. We must seize the opportunities technology presents but also guard against threats to our rights and the potential for entrenched inequality and disadvantage.”

Keynote addresses at the project's launch conference will be delivered by Dr Alan Finkel, Australia's Chief Scientist; Kathy Baxter, Research Architect at Salesforce; Steve Crown, Vice President and Deputy General Counsel at Microsoft Corporation; and Aza Raskin, Co-Founder of the US Center for Humane Technology.

“I’m calling it: from self-driving cars to facial recognition, 2018 is the year when artificial intelligence has moved from science fiction to everyday living. But how do we build a future where AI are not only our creations, but also our trusted partners and friends?” said Finkel.

“We need to learn how to weave human values and AI capabilities together, so our next generations — both biological and electronic — can play nice and get along.”

Salesforce Research Architect Kathy Baxter said, “AI can enable the automated, yet invisible imposition of its creator’s values on society at a global scale. It is everyone’s responsibility to ensure that the technology created protects human rights first and foremost.”

More information about the project and the issues paper is available from tech.humanrights.gov.au/consultation.
