A new report from Deloitte reveals that data privacy concerns around generative AI (GenAI) have risen sharply: 72% of professionals ranked it among their top three concerns this year, up from just 22% last year.
Transparency and data provenance were the next most-cited ethical concerns around GenAI, with 47% and 40% of professionals placing them in their top three this year. Meanwhile, only 16% expressed concern over job displacement.
Staff are increasingly concerned about how sensitive information is handled as AI systems grow more complex. Nearly half of security professionals believe AI is risky, according to a September study by HackerOne, with many citing leaked training data as a threat.
Similarly, 78% of business leaders ranked “safe and secure” as one of their top three ethical technology principles, a 37% increase from 2023, further demonstrating how security is top of mind.
The findings come from Deloitte’s 2024 “State of Ethics and Trust in Technology” report, which surveyed over 1,800 business and technical professionals worldwide about the ethical standards they apply to emerging technologies, particularly GenAI.
High-profile AI security incidents may be drawing more attention
Just over half of respondents to both this year’s and last year’s reports said that cognitive technologies such as AI and GenAI pose the greatest ethical risks compared with other emerging technologies, including digital reality, quantum computing, autonomous vehicles, and robotics.
This heightened focus may reflect wider awareness of the importance of data security following well-publicised incidents, such as when a bug in OpenAI’s ChatGPT exposed the personal data of around 1.2% of ChatGPT Plus subscribers, including names, email addresses, and partial payment details.
News that hackers had breached an online forum used by OpenAI employees and stolen sensitive information about the company’s AI systems likely eroded trust in the chatbot further.
SEE: Artificial Intelligence Ethics Policy
In a press release, Beena Ammanath, Global Deloitte AI Institute and Trustworthy AI leader, said: “Widespread availability and adoption of GenAI may have raised respondents’ familiarity and confidence in the technology, boosting optimism about its potential for good.”
At the same time, the continued cautionary sentiment around GenAI’s apparent risks highlights the need for specific, evolved ethical frameworks that enable positive impact.
AI legislation is affecting how organizations operate worldwide
Unsurprisingly, more professionals are using GenAI at work than last year, with the percentage reporting that they use it internally rising by 20% between Deloitte’s year-over-year reports.
A significant 94% of respondents said their businesses have incorporated it into processes in some way. However, most indicated it is still in the pilot phase or that use is limited, with only 12% saying it is in widespread use. This aligns with recent Gartner research, which found that the majority of GenAI projects don’t make it past the proof-of-concept stage.
SEE: IBM: While Enterprise Adoption of Artificial Intelligence Increases, Barriers are Limiting Its Usage
Regardless of how pervasive it is, decision-makers want to ensure that their use of AI does not land them in trouble, particularly when it comes to legislation. Compliance was cited by 34% of respondents as the top reason for adopting ethical technology policies and guidelines, while regulatory penalties were among the top three concerns cited over failing to follow these standards.
The E.U. AI Act came into force on Aug. 1 and imposes strict requirements on high-risk AI systems to ensure safety, transparency, and ethical use. Non-compliance can result in fines of up to €35 million ($38 million USD) or 7% of global turnover for the most serious violations, down to €7.5 million ($8.1 million USD) or 1.5% of turnover for lesser ones.
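For readers weighing exposure, a minimal sketch of how such turnover-based penalty ceilings are commonly computed is shown below. It assumes the "whichever is higher" rule between the fixed cap and the turnover percentage; the figures are the Act's published maximums, but the company turnover is hypothetical and the sketch is illustrative, not legal guidance.

```python
def max_fine_eur(annual_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Illustrative penalty ceiling: the higher of a fixed amount and a
    share of worldwide annual turnover. A simplified sketch of the E.U.
    AI Act's fine tiers; actual fines depend on the specific violation."""
    return max(fixed_cap_eur, annual_turnover_eur * turnover_pct)

# Hypothetical company with €2 billion in worldwide annual turnover:
turnover = 2_000_000_000

# Most serious violations: €35M or 7% of turnover, whichever is higher.
print(max_fine_eur(turnover, 35_000_000, 0.07))   # 140000000.0 -> €140M ceiling

# Lesser violations: €7.5M or 1.5% of turnover.
print(max_fine_eur(turnover, 7_500_000, 0.015))   # 30000000.0 -> €30M ceiling
```

For a large enterprise, the percentage term dominates, which is why turnover-based caps scale penalties with company size rather than applying a flat fee.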
Over a hundred companies, including Amazon, Google, Microsoft, and OpenAI, have already signed the E.U. AI Pact, volunteering to start implementing the Act’s requirements ahead of the legal deadlines. This both helps them avoid future legal challenges and demonstrates to the public their commitment to responsible AI deployment.
Similarly, in October 2023, the U.S. unveiled an AI Executive Order featuring wide-ranging guidance on maintaining safety, civil rights, and privacy within government agencies while promoting AI innovation and competition throughout the country. Even though it isn’t legislation, many U.S. businesses may make policy changes in response, to meet evolving federal requirements and public expectations for AI safety.
SEE: G7 Countries Establish Voluntary AI Code of Conduct
Reflecting the E.U. AI Act’s influence, 34% of European respondents reported that their organizations had changed how they use AI over the past year. But the impact is more widespread: 26% of respondents in South Asia and 16% in North and South America also made changes because of the Act’s introduction.
Furthermore, 20% of U.S.-based respondents said they had made changes at their organizations in response to the executive order, as did a quarter of respondents in South Asia, 21% in South America, and 12% in Europe.
According to the report’s authors, “Cognitive technologies like AI are recognized as having the highest potential to benefit society and the highest risk of misuse.”
“The accelerated adoption of GenAI may be outpacing organizations’ capacity to govern the technology,” they wrote, adding that companies should prioritize both the development of ethical standards for GenAI and the appropriate selection of use cases for GenAI tools.