A report from the Senate Select Committee on Adopting Artificial Intelligence (AI) was tabled today (November 27), and while it runs to 222 pages – including dissenting opinions from two Liberal National Party senators and additional comments from the Australian Greens and independent senator David Pocock – it can be boiled down to the 13 main recommendations presented by the committee.
Regulating AI in Australia
The committee’s report initially focuses on what it calls “high-risk uses of AI,” particularly as they relate to deepfakes, data privacy and security, and bias and discrimination.
As Dr Darcy Allen, Professor Chris Berg, and Dr Aaron Lane noted in their submission, “biases in generative AI models are, in part, a reflection of the biases inherent in humans.”
“These models are trained on large datasets… Not surprisingly, biases from the datasets are built into the models. It is [in the nature of an AI system to] capture the dominant trends, preferences and biases of the data it was trained on,” they said.
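To make that point concrete, here is a minimal, hypothetical sketch of the mechanism the submission describes: a model that learns nothing but frequencies from its training text will reproduce whatever associations dominate that text. The toy corpus and the bigram “model” below are invented purely for illustration.

```python
# A toy bigram "language model": count which word follows each word
# in the training text, then predict the most frequent continuation.
# The corpus is invented for demonstration only.
from collections import Counter, defaultdict

corpus = (
    "the engineer fixed the server . "
    "the engineer debugged the code . "
    "the nurse helped the patient . "
).split()

# "Training": record the frequency of every word pair.
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def most_likely_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    return bigrams[word].most_common(1)[0][0]

# "engineer" appears after "the" twice as often as any other word,
# so the model's completion mirrors the imbalance in the data,
# not any ground truth about the world.
print(most_likely_next("the"))  # -> 'engineer'
```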
The committee makes three recommendations in this area, and we will quote them verbatim for clarity:
1. That the Australian Government introduce new dedicated economy-wide legislation to regulate high-risk uses of AI, in line with Option 3 set out in the Government’s proposals paper, Introducing mandatory guardrails for AI in high-risk settings.
2. That, as part of dedicated AI legislation, the Australian Government adopt a principles-based approach to defining high-risk uses of AI, supplemented by a non-exhaustive list of explicitly defined high-risk AI uses.
3. That the Australian Government ensure that the non-exhaustive list of high-risk AI uses explicitly includes general-purpose AI models, such as large language models (LLMs).
Developing a local AI industry
The report states: “The key challenge for Australia is to grow its AI industry through policies that maximise the widespread opportunities offered by AI technologies, while ensuring appropriate protections are in place.”
According to the committee, AI is a transformative technology being developed by organisations large and small, in both the private and public sectors. Sovereign AI capability was also identified as a key focus of many submissions to the committee.
There is only one broad but inclusive recommendation in this area:
4. That the Australian Government continue to increase the financial and non-financial support it provides in support of Australia’s sovereign AI capability, focusing on Australia’s existing areas of comparative advantage and unique First Nations perspectives.
Impact of AI on workers and industry
Unsurprisingly, this is where the bulk of the committee’s recommendations sit, covering the benefits and risks of AI for employers and employees, as well as for the industry as a whole.
The committee notes that creative industries are particularly at risk, while the healthcare sector could see both immense benefits and “very serious risks” from the growing adoption of AI.
Overall productivity has been identified as an area that could see considerable improvement. According to a statement from Microsoft Vice President Steven Worrall: “Australia has an incredible foundation to build on. Forecasts predict that AI could create 200,000 new jobs and contribute up to $115 billion annually to our economy.”
The committee makes six recommendations in this area:
5. That the Australian Government ensure that the final definition of high-risk AI clearly includes uses of AI that impact people’s rights at work, regardless of whether a principles-based or list-based approach to the definition is adopted.
6. That the Australian Government expand and apply the existing workplace health and safety legislative framework to workplace risks posed by the adoption of AI.
7. That the Australian Government ensure that workers, workers’ organisations, employers and employers’ organisations are comprehensively consulted on the need for, and best approach to, new regulatory responses to address the impact of AI on work and workplaces.
8. That the Australian Government continue to consult with creative workers, rights holders and their representative organisations through the Copyright and Artificial Intelligence Reference Group (CAIRG) on appropriate solutions to the unprecedented theft of their works by multinational technology companies operating in Australia.
9. That the Australian Government require developers of AI products to be transparent about the use of copyrighted works in their training datasets, and that the use of such works be appropriately licensed and paid for.
10. That the Australian Government urgently undertake further consultation with the creative industry to consider an appropriate mechanism to ensure that fair remuneration is paid to creators for commercial AI-generated outputs based on copyrighted material used to train AI systems.
Automated decision-making
AI is increasingly being used in automated decision-making, or ADM. This brings significant benefits and efficiencies, but it also carries corresponding risks around transparency and accountability.
The Law Council of Australia noted in its submission that “transparency is essential to the responsible use of ADM by Australian organisations, both in the public and private sectors”.
The biases inherent in any AI-based decision-making process are also a concern.
“AI draws conclusions from patterns in existing data,” the ARC Centre of Excellence for Automated Decision-Making and Society said in its submission to the committee. “When biases are built into the data used to train models, the models tend to perpetuate those biases…”
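As a hypothetical illustration of that mechanism, consider a trivial decision “model” that simply learns the most common historical outcome for each group of applicants. Everything below – the data, the groups, the rule – is invented for demonstration; it is a sketch of the failure mode, not of any real system.

```python
# A minimal illustration of bias perpetuation in automated decisions:
# if historical approvals were skewed against one group, a model that
# learns from those decisions reproduces the skew. All data invented.
from collections import Counter, defaultdict

# Invented historical decisions: group A was usually approved,
# group B was usually rejected.
history = (
    [("A", "approve")] * 80 + [("A", "reject")] * 20
    + [("B", "approve")] * 30 + [("B", "reject")] * 70
)

# "Training": tally the historical outcomes per group.
outcomes = defaultdict(Counter)
for group, decision in history:
    outcomes[group][decision] += 1

def decide(group: str) -> str:
    """Return the historically dominant decision for this group."""
    return outcomes[group].most_common(1)[0][0]

# The automated decision simply perpetuates the historical pattern.
print(decide("A"))  # -> 'approve'
print(decide("B"))  # -> 'reject'
```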
Here are the committee’s recommendations:
11. That the Australian Government implement the recommendations relating to automated decision-making from the Privacy Act Review, including Proposal 19.3 to introduce a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made.
12. That the Australian Government implement recommendations 17.1 and 17.2 of the Robodebt Royal Commission relating to the establishment of a consistent legal framework covering ADM in government services and a body to oversee such decisions. This process should draw on the consultation currently being undertaken by the Attorney-General’s Department and be consistent with the safeguards for high-risk uses of AI being developed by the Department of Industry, Science and Resources.
Environmental impacts
We already know that the data centers used to power generative AI come at a huge environmental cost, and this was a topic discussed in many submissions to the committee.
Dr Catherine Foley, Australia’s chief scientist, said: “[training] a model like GPT-3… [is estimated to] use about 1½ thousand megawatt hours… [which is] the equivalent of watching about 1½ million hours of Netflix.”
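Those figures imply roughly one kilowatt-hour per hour of streaming – an assumption read off the quote itself, not an official estimate. A quick back-of-the-envelope check in Python:

```python
# Sanity-check the comparison above. The per-hour streaming figure
# is an assumption implied by the quote, not an official estimate.
TRAINING_ENERGY_MWH = 1_500        # "about 1½ thousand megawatt hours"
KWH_PER_STREAMING_HOUR = 1.0       # assumed energy per hour of Netflix

training_energy_kwh = TRAINING_ENERGY_MWH * 1_000   # 1 MWh = 1,000 kWh
streaming_hours = training_energy_kwh / KWH_PER_STREAMING_HOUR

print(f"{streaming_hours:,.0f} hours")  # 1,500,000 - about 1½ million
```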
Another submission, this time from the Department of Industry, Science and Resources, stated that “a single data center can consume energy equivalent to heating 50,000 homes for a year”.
The committee’s final recommendation aims to make AI growth sustainable:
13. That the Australian Government take a coordinated and holistic approach to managing the growth of Australia’s AI infrastructure to ensure growth is sustainable, delivers value for Australians and is in the national interest.
You can find an HTML version of the full report – and it’s worth reading – here.