Regulation and Transparency Key to Mitigating Bias in Emerging Technology

A silhouetted figure stands before a digitally illustrated map of Australia, symbolizing the intersection of technology and regulation.

Australia’s human rights commissioner, Lorraine Finlay, has cautioned the federal government over its pursuit of productivity gains from artificial intelligence (AI), warning that the technology could entrench racism and sexism if it is not properly regulated. Her comments come as Labor senator Michelle Ananda-Rajah breaks ranks to call for all Australian data to be “freed” to tech companies so that AI tools do not perpetuate overseas biases.

Finlay emphasizes that the lack of transparency around the datasets used to train AI tools makes potential biases difficult to identify, which can lead to unfair decisions. She stresses that algorithmic bias, combined with automation bias (the human tendency to defer to a machine’s judgment), risks entrenching discrimination that may go entirely unrecognized. This is particularly concerning in areas such as recruitment, where AI tools are increasingly used to screen and select job candidates.

Finlay points to a study published in May which found that job candidates interviewed by AI recruiters risked discrimination if they spoke with an accent or were living with a disability. She notes that this is just one of the many ways AI can perpetuate existing biases and inequalities.

The Australian Human Rights Commission has long advocated for a dedicated AI act to bolster existing legislation, including the Privacy Act, alongside rigorous testing of AI tools for bias. Finlay urges the government to establish new legislative guardrails, including bias testing and auditing, backed by proper human oversight and review.

Meanwhile, Labor senator Michelle Ananda-Rajah, a former medical doctor and AI researcher, argues that AI tools must be trained on Australian data to avoid perpetuating overseas biases. She warns that not opening up domestic data would mean Australia would be “forever renting” AI models from tech behemoths overseas, with no oversight or insight into their models or platforms.

Ananda-Rajah points to examples such as AI-based skin cancer screening, where tools have been shown to exhibit algorithmic bias. She believes that training models on diverse Australian data, with appropriate protections for sensitive information, is key to overcoming bias and discrimination. This approach would also enable Australian researchers and developers to build AI tools tailored to the country’s particular needs and demographics.

Judith Bishop, an AI expert at La Trobe University and a former data researcher at an AI company, agrees that freeing up more Australian data could help train AI tools more appropriately, but warns that models built on international data may not reflect the needs of Australians. She stresses that careful scrutiny is needed to ensure systems developed in other contexts are applicable to the Australian population.

The eSafety commissioner, Julie Inman Grant, also expresses concern over the lack of transparency around the data used to train AI tools. She urges tech companies to be transparent about their training data, to develop reporting tools, and to use diverse, accurate, and representative data in their products. Inman Grant notes that this is particularly important in areas such as content moderation, where AI tools increasingly decide what content to allow or remove.

The issue of AI regulation and transparency has become increasingly pressing as the technology continues to advance and integrate into various aspects of Australian life. As the government prepares to discuss productivity gains from AI at the federal economic summit next week, it remains to be seen how these concerns will be addressed.
