An Australian Senate select committee has released a report strongly critical of big tech companies, including OpenAI, Meta, and Google, and called for the large language model products they produce to be classified as “high-risk” under a new Australian AI law. The findings follow an eight-month inquiry into the country’s adoption of AI.
The Senate Select Committee on Adopting Artificial Intelligence was tasked with examining the opportunities and challenges AI presents for Australia. Its inquiry covered a wide range of topics, from the productivity benefits of AI to concerns about bias and environmental impact.
In its final report, the committee found that international tech companies had used Australian data to train their models but were reluctant to disclose details about their LLMs. Among its recommendations, the committee called for the introduction of dedicated AI legislation and for companies to consult with employees when AI is used in the workplace.
Big tech companies and their AI models lack transparency, report finds
The committee said in its report that a significant amount of time was dedicated to discussing the structure, development, and impact of the world’s “general-purpose AI models”, including the LLMs produced by large global tech companies such as OpenAI, Amazon, Meta, and Google.
The committee noted a lack of transparency regarding the models, the market power these businesses hold in their respective fields, their “track record of aversion to transparency and regulatory compliance”, and the “overt and obvious theft of copyrighted data from Australian rights holders”.
The committee also listed “the non-consensual scraping of personal and private information”, the potential scale and scope of the models’ applications in the Australian context, and “the disappointing evasion of this committee’s questions on these topics” as areas of concern.
According to the report, “the committee believes that these problems warrant a regulatory response that explicitly defines general-purpose AI models as high-risk. In doing so, these developers will be held to higher testing, transparency, and accountability requirements than many lower-risk, lower-impact uses of AI.”
Report outlines additional AI-related concerns, including job loss due to automation
The committee acknowledged the high likelihood of job losses from automation, even as it recognized that AI would improve economic productivity. These losses may disproportionately affect roles requiring lower levels of education and training, as well as vulnerable groups such as women and people from lower socioeconomic backgrounds.
The committee also expressed concern about the evidence it received on AI use cases such as workforce planning, management, and workplace surveillance.
The report states that “the committee notes that such systems are already being implemented in workplaces,” and that they are frequently developed by large multinational corporations to increase profitability by maximizing employee productivity.
SEE: Dovetail CEO advocates for a balanced approach to AI innovation and regulation
The report found a significant risk that these intrusive and dehumanizing uses of AI at work will seriously undermine workers’ rights and conditions.
What should IT leaders take from the committee’s recommendations?
The committee recommended the Australian government:
- Ensure that applications affecting workers’ rights are included in the final definition of high-risk AI.
- Expand the current legislative framework for workplace health and safety to include the risks posed by AI adoption at work.
- Ensure that “workers and employers are thoroughly consulted on the need for, and best approach to, further regulatory responses to address the impact of AI on work and workplaces.”
SEE: Why businesses should be using AI to improve their resilience and sustainability
The Australian government is not required to act on the committee’s report. However, its findings should encourage local IT leaders to keep weighing all aspects of the AI tools and technologies used in their organizations as they pursue the anticipated productivity gains.
Firstly, many businesses have already considered the legal and reputational implications of adopting particular LLMs, based on the training data used to create them. IT leaders should keep the underlying training data in mind when applying any LLM in their organization, as illustrated in the sketch below.
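To make that diligence concrete, here is a minimal, hypothetical sketch of the kind of pre-deployment check an IT team might script. The ModelCard fields and the review_model helper are illustrative assumptions, not any vendor’s actual schema or API; adapt them to whatever disclosures your LLM provider publishes.

```python
# A minimal, hypothetical sketch of an LLM due-diligence check.
# The "model card" fields below are illustrative assumptions, not a
# standard schema; map them to the disclosures your vendor provides.

from dataclasses import dataclass, field


@dataclass
class ModelCard:
    name: str
    vendor: str
    training_data_disclosed: bool    # did the vendor describe its training data?
    copyright_position_stated: bool  # has the vendor addressed rights-holder claims?
    known_issues: list[str] = field(default_factory=list)


def review_model(card: ModelCard) -> list[str]:
    """Return governance flags an IT leader might review before
    approving the model for organizational use."""
    flags = []
    if not card.training_data_disclosed:
        flags.append("No training-data disclosure: possible legal/reputational risk.")
    if not card.copyright_position_stated:
        flags.append("Vendor silent on copyright: review before customer-facing use.")
    flags.extend(f"Known issue: {issue}" for issue in card.known_issues)
    return flags


if __name__ == "__main__":
    # Placeholder model and vendor names for illustration only.
    card = ModelCard(
        name="example-llm-v1",
        vendor="ExampleVendor",
        training_data_disclosed=False,
        copyright_position_stated=True,
    )
    for flag in review_model(card):
        print(flag)
```

Even a lightweight check like this creates an auditable record of what was known about a model’s provenance at the time it was approved, which is the kind of transparency the committee’s report calls for.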
AI is expected to significantly impact workforces, and IT will be instrumental in rolling it out. IT leaders could promote “employee voice” initiatives in the development and rollout of AI, which would help foster employee engagement as well as the adoption of AI tools and technologies.