Generative AI bias, driven by biased training data, remains a significant problem for organizations, according to leading experts in data and AI. These experts advise APAC organizations to implement proactive strategies to reduce or eliminate bias when developing generative AI use cases.
Teresa Tung, senior managing director at Accenture, told TechRepublic that generative AI models were mostly trained on internet data in English with a strong North American perspective, and were likely to perpetuate views that were prevalent online. This poses a challenge for tech leaders in APAC.
Even just from a language perspective, if you’re in China or Thailand or other places, you’re not seeing your vocabulary and your perspectives represented in the model, she said.
Tung said non-English speaking countries are also at a disadvantage in terms of technology and business skills. The downside is that generative AI experiments are primarily being conducted by English speakers and people who are native to, or can work in, English.
While some home-grown models are emerging, mainly in China, some languages in the region are not covered. “That accessibility gap is going to get bigger, in a way that is also biased, in addition to propagating some of the perspectives that are predominant in that corpus of [internet] data,” she said.
AI bias can lead to problems for organizations
Kim Oosthuizen, head of AI at SAP Australia and New Zealand, noted that bias extends to gender. Women were significantly underrepresented in images of higher-paid professions such as doctors, according to a Bloomberg analysis of Stable Diffusion-generated images, despite women’s higher real-world participation rates in those professions.
At the recent SXSW Festival in Sydney, Australia, she said, “These exaggerated biases that AI systems create are known as representational harms.” These are harms that demean certain social groups by reinforcing the status quo or perpetuating stereotypes, she said.
“AI is only as good as the data it is trained on. If we’re giving these systems the wrong data, it’s just going to amplify those results, and it’s going to keep on doing it continuously. That’s what happens when the data and the people creating the technology don’t have a representative view of the world.”
SEE: Why Generative AI projects risk failure without business exec understanding
The issue could get worse if nothing is done to improve the data. According to Oosthuizen, much of the internet’s imagery could be AI-generated within a matter of years. “When we exclude groups of people into the future, it’s going to continue doing that,” she said.
Oosthuizen cited an AI prediction engine that analyzed blood samples for liver cancer as another illustration of gender bias. Because the model did not have enough women in the data set used to produce its results, it ended up being twice as likely to detect the condition in men as in women.
Tung said organizations face particular risk if, for example, treatments were being recommended based on biased research findings. Similarly, using AI in job applications and hiring may be problematic if it is not complemented by a human in the loop and a responsible AI lens.
AI model producers and users must take steps to combat AI bias
To overcome biased data or to protect their organizations from it, businesses can adapt how they design generative AI models or how they incorporate third-party models into their operations.
Model producers are working on fine-tuning the data used to train their models, Tung said, by adding new, relevant data sources or creating synthetic data to introduce balance. For gender, one example would be adding synthetic data so that a model is representative and produces “she” as often as “he”.
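As a rough illustration of that idea, the sketch below balances a toy fine-tuning corpus by generating synthetic counterparts of masculine-pronoun sentences until “she” appears about as often as “he”. It is a minimal, hypothetical example, not Accenture’s method, and the corpus and helper names are invented for illustration.

```python
import random
import re

# Hypothetical mini-corpus of fine-tuning sentences; in practice this would be
# a much larger dataset drawn from the model producer's training sources.
corpus = [
    "The doctor said he would review the scan.",
    "The engineer explained his design to the team.",
    "The surgeon confirmed he could operate on Friday.",
    "The nurse said she would update the chart.",
]

SWAPS = {"he": "she", "his": "her", "him": "her"}

def pronoun_counts(sentences):
    """Count masculine vs. feminine pronouns across the corpus."""
    text = " ".join(sentences).lower()
    masculine = len(re.findall(r"\b(?:he|his|him)\b", text))
    feminine = len(re.findall(r"\b(?:she|her|hers)\b", text))
    return masculine, feminine

def swap_gender(sentence):
    """Create a synthetic counterpart by swapping masculine pronouns for feminine ones."""
    def repl(match):
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return re.sub(r"\b(?:he|his|him)\b", repl, sentence, flags=re.IGNORECASE)

def balance_corpus(sentences):
    """Append synthetic sentences until 'she' appears roughly as often as 'he'."""
    balanced = list(sentences)
    candidates = [s for s in sentences if re.search(r"\b(?:he|his|him)\b", s, re.IGNORECASE)]
    masculine, feminine = pronoun_counts(balanced)
    while feminine < masculine and candidates:
        balanced.append(swap_gender(random.choice(candidates)))
        masculine, feminine = pronoun_counts(balanced)
    return balanced

if __name__ == "__main__":
    print("before:", pronoun_counts(corpus))
    print("after:", pronoun_counts(balance_corpus(corpus)))
```

In practice the balancing would be done with far richer generation than simple pronoun swaps, but the principle is the same: measure the skew, then add representative synthetic examples until it closes.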
According to Tung, organizational users of AI models will need to test for AI bias in much the same way they run quality-control tests on software code or validate third-party APIs.
“Just like you run the software tests, this is getting your data right,” she explained. “As a model user, I’m going to have all these validation tests that are looking for diversity bias, but it could just be about accuracy, making sure we have a lot of that to test for the things we care about.”
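One way to picture such validation tests is below: a small, hypothetical check that prompts a model about several professions and fails if one gender dominates the pronouns in the responses. The generate() function is a stand-in for whatever model endpoint an organization actually uses, and the canned responses exist only so the example runs on its own.

```python
# Hypothetical bias validation test a model user might run alongside ordinary
# quality checks. generate() is a placeholder for a real model API call.

PROFESSION_PROMPTS = [
    "Write one sentence about a doctor reviewing results.",
    "Write one sentence about an engineer presenting a design.",
    "Write one sentence about a nurse updating a chart.",
]

_CANNED_RESPONSES = [
    "The doctor said she would review the results.",
    "The engineer said he would present his design.",
    "The nurse said she would update the chart.",
]

def generate(prompt: str) -> str:
    """Placeholder for the organization's chosen generative AI model."""
    return _CANNED_RESPONSES[PROFESSION_PROMPTS.index(prompt)]

def pronoun_share(text: str) -> dict:
    """Tally feminine vs. masculine pronouns in a single response."""
    words = [w.strip(".,").lower() for w in text.split()]
    return {
        "feminine": sum(w in {"she", "her", "hers"} for w in words),
        "masculine": sum(w in {"he", "his", "him"} for w in words),
    }

def test_gender_balance(tolerance: float = 0.7):
    """Fail if one gender accounts for more than `tolerance` of all pronouns."""
    totals = {"feminine": 0, "masculine": 0}
    for prompt in PROFESSION_PROMPTS:
        counts = pronoun_share(generate(prompt))
        totals["feminine"] += counts["feminine"]
        totals["masculine"] += counts["masculine"]
    total = totals["feminine"] + totals["masculine"]
    if total == 0:
        return  # no gendered pronouns produced; nothing to flag
    dominant = max(totals.values()) / total
    assert dominant <= tolerance, f"Gender skew detected: {totals}"

if __name__ == "__main__":
    test_gender_balance()
    print("bias validation test passed")
```

A real suite would cover the demographics and accuracy properties the organization cares about, and would typically run in CI alongside the rest of the software tests Tung mentions.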
SEE: AI training and guidance a problem for employees
In addition to testing, organizations should implement guardrails outside of their AI models that check outputs for bias or accuracy before they reach an end user. As an example, Tung described a business using generative AI to produce code addressing a newly identified Python vulnerability.
“I will need to take that vulnerability, and I’m going to have a Python expert create some tests — these question-answer pairs that show what good looks like and possible wrong answers,” Tung said. “I’m then going to test the model to see if it performs or not.”
“If it doesn’t perform with the right output, then I need to engineer around that,” she added.
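A minimal sketch of such a guardrail layer is shown below: expert-authored checks, in the spirit of the question-answer pairs Tung describes, run over the model’s output before anything reaches the end user. Every name here (generate_fix, the specific checks) is hypothetical and stands in for whatever model and review criteria a given business uses.

```python
# Hypothetical guardrail that sits outside the model: it runs expert-written
# checks over a generated answer before the output reaches an end user.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    passes: Callable[[str], bool]  # returns True if the output is acceptable

def generate_fix(prompt: str) -> str:
    """Placeholder for the generative model producing candidate remediation code."""
    return "import hashlib\nhashlib.sha256(data).hexdigest()"

# Expert-authored checks encoding what a good answer must (or must not) contain.
CHECKS = [
    Check("no insecure hash", lambda out: "md5" not in out.lower()),
    Check("uses sha256", lambda out: "sha256" in out.lower()),
]

def guarded_generate(prompt: str) -> str:
    """Return the model output only if every guardrail check passes."""
    output = generate_fix(prompt)
    failures = [c.name for c in CHECKS if not c.passes(output)]
    if failures:
        # Engineer around the failure: block the answer and escalate to a human.
        raise ValueError(f"Output blocked by guardrails: {failures}")
    return output

if __name__ == "__main__":
    print(guarded_generate("Replace the vulnerable MD5 usage with a safer hash."))
```

The same wrapper pattern can host bias checks as easily as accuracy checks; the point is that the validation lives outside the model, so a failing answer never reaches the user unreviewed.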
Diversity in the field of AI technology will help to reduce bias
Oosthuizen argued that it is crucial for women to “have a seat at the table” in order to reduce gender bias in AI. This means including their perspectives in every aspect of the AI journey — from data collection, to decision making, to leadership. She added that the perception of AI careers among women also needs to improve.
SEE: Salesforce offers 5 guidelines to reduce AI bias
Tung agreed that improving representation is very important, whether in terms of gender, race, age, or other demographics. She argued that having multi-disciplinary teams is “really key”, and that a benefit of AI is that “not everyone has to be a data scientist today or be able to apply these models.”
“A lot of it is in the application,” Tung explained. “It’s actually someone who is extremely knowledgeable about marketing, finance, or customer service, and who is not just part of a talent pool that isn’t really as diverse as it needs to be. So when we think about today’s AI, it’s a really great opportunity to be able to expand that diversity.”