
    AI Bias: Accenture, SAP Leaders on Diversity Problems and Solutions

October 23, 2024 | Tech

Generative AI bias, driven by model training data, remains a significant problem for organisations, according to leading experts in data and AI. These experts advise APAC organisations to adopt proactive strategies to reduce or eliminate bias when building generative AI use cases.

Teresa Tung, senior managing director at Accenture, told TechRepublic that generative AI models were mostly trained on internet data in English with a strong North American perspective, and were likely to perpetuate the views that were dominant online. This poses challenges for tech leaders in APAC.

“Just from a language perspective, if you’re based in China or Thailand or other places, you’re not seeing your language and perspectives represented in the model,” she said.

Tung said non-English-speaking countries are also at a disadvantage on technology and business talent, because generative AI experiments are being conducted primarily by “English speakers and people who are native or can work with English.”

While some home-grown models are emerging, mainly in China, some languages in the region are not covered. “That accessibility gap is going to get big, in a way that is also biased, in addition to propagating some of the perspectives that are predominant in that corpus of [internet] data,” she said.

AI bias can lead to problems for organisations.

Kim Oosthuizen, head of AI at SAP Australia and New Zealand, noted that bias extends to gender. Women were significantly underrepresented in images generated for higher-paid professions such as doctors, according to a Bloomberg analysis of Stable Diffusion-generated images, despite women’s higher actual participation rates in those professions.

“These exaggerated biases that AI systems create are known as representational harms,” she said at the recent SXSW Festival in Sydney, Australia. These harms demean certain social groups by reinforcing the status quo or perpetuating stereotypes, she said.

“AI is only as good as the data it is trained on. If we’re giving these systems the wrong data, it’s just going to amplify those results, and it’s going to just keep on doing it continuously,” she said. “That’s what happens when the data and those creating the technology don’t have a representative view of the world.”

    SEE: Why Generative AI projects risk failure without business exec understanding

The problem could get worse if nothing is done to improve the data. According to Oosthuizen, a large share of the internet’s images could be artificially generated within a matter of years. “When we exclude groups of people going into the future, it’s going to continue doing that,” she said.

As another illustration of gender bias, Oosthuizen cited an AI prediction engine that analysed blood samples for liver cancer. Because the model did not have enough women in the data set used to produce its results, the AI ended up twice as likely to identify the condition in men as in women.
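
In practice, a disparity like this shows up as a gap in per-group recall. The following is a minimal sketch of such a check; the labels, predictions, and groups are invented for illustration, not the study’s actual data:

```python
# Minimal sketch: compare detection rates (recall) across demographic groups.
# All data below is invented for illustration.

def recall_by_group(y_true, y_pred, groups):
    """Of the actual positive cases in each group, what fraction did
    the model catch? Returns {group: recall}."""
    counts = {}  # group -> [true positives, actual positives]
    for label, pred, group in zip(y_true, y_pred, groups):
        tp_pos = counts.setdefault(group, [0, 0])
        if label == 1:            # an actual case of the disease
            tp_pos[1] += 1
            if pred == 1:         # the model identified it
                tp_pos[0] += 1
    return {g: tp / pos for g, (tp, pos) in counts.items() if pos}

# Ten hypothetical patients who all have the condition:
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]   # what the model flagged
sex    = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]

print(recall_by_group(y_true, y_pred, sex))
# {'M': 0.8, 'F': 0.4} -- twice as likely to identify the condition in men,
# mirroring the disparity described above.
```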

Tung noted that such models put organisations at particular risk, because there could be serious consequences if treatments were recommended based on biased findings. Similarly, the use of AI in job applications and hiring can be problematic if not complemented by a human in the loop and a responsible AI lens.


AI model designers and users must work to combat AI bias.

To overcome biased data, or to protect their organisations from it, businesses will need to adapt both how they design generative AI models and how they incorporate third-party models into their operations.

Model producers are working on fine-tuning the data used to train their models, Tung said, by adding new, relevant data sources or creating synthetic data to introduce balance. For gender, a good example would be adding synthetic data so that a model is representative and produces “she” as often as “he.”
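
As a deliberately simplified sketch of that balancing idea, one common technique is counterfactual augmentation: generating a pronoun-swapped copy of each training sentence. The swap table and corpus below are illustrative assumptions, not Accenture’s actual pipeline:

```python
# Simplified sketch of counterfactual data augmentation: each training
# sentence gets a synthetic twin with gendered pronouns flipped, so the
# corpus produces "she" as often as "he". A production pipeline would need
# part-of-speech tagging and punctuation handling ("her" can map to "him"
# or "his"); this naive word-level swap is for illustration only.

PRONOUN_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "his": "her",
    "her": "him",
}

def swap_gender(sentence: str) -> str:
    """Return a counterfactual copy of the sentence with pronouns flipped."""
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in sentence.split())

corpus = ["The doctor said he would review the results"]
balanced = corpus + [swap_gender(s) for s in corpus]
print(balanced[1])  # "The doctor said she would review the results"
```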

According to Tung, organisations using AI models will need to test for AI bias in the same way they run quality tests on their own software code or on third-party APIs they consume.

“Just like you run software tests, this is getting your data right,” she explained. “As a model user, I’m going to have all these validation tests that are looking for diversity bias, but it could also just be about accuracy; making sure we have a lot of tests for the things we care about.”
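
A hedged sketch of what one such validation test might look like, written pytest-style; `generate` is a hypothetical placeholder for whatever model call an organisation actually uses, and the skew threshold is an assumption:

```python
# Sketch of a bias "unit test" run before a model ships, in the spirit of
# the software tests Tung describes. `generate` is a hypothetical stand-in.

def generate(prompt: str) -> str:
    raise NotImplementedError("wire this to your model's generation call")

PROFESSIONS = ["doctor", "engineer", "nurse", "teacher"]

def test_profession_pronoun_balance():
    """Fail the build if completions skew heavily toward one pronoun."""
    he = she = 0
    for job in PROFESSIONS:
        for _ in range(50):  # sample repeatedly; generation is stochastic
            text = " " + generate(f"The {job} said that").lower() + " "
            he += text.count(" he ")
            she += text.count(" she ")
    ratio = she / max(he, 1)
    assert 0.5 <= ratio <= 2.0, f"pronoun skew: she/he ratio = {ratio:.2f}"
```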

    SEE: AI training and guidance a problem for employees

In addition to testing, organisations should implement guardrails outside their AI models that check outputs for bias or accuracy before they are delivered to an end user. As an example, Tung described a business that used generative AI to generate code in response to a new Python vulnerability.
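
Tung’s Python example below covers the testing side; the guardrail pattern itself might look like the minimal sketch here, where a checker screens model output before it reaches the user. The keyword filter is a toy stand-in for a real bias or accuracy classifier, and all names are illustrative:

```python
# Minimal sketch of an output guardrail that sits outside the model.
# The phrase blocklist is a toy stand-in for a real bias/accuracy checker.

BLOCKED_PHRASES = ("men are naturally better", "women cannot")  # illustrative

def passes_checks(text: str) -> bool:
    """Screen a model response before it is shown to the end user."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

def guarded_generate(model, prompt: str, max_retries: int = 2) -> str:
    """Regenerate on a failed check; fall back rather than ship bad output."""
    for _ in range(max_retries + 1):
        answer = model(prompt)
        if passes_checks(answer):
            return answer
    return "I can't give a reliable answer to that."  # safe fallback
```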

“I will need to take that vulnerability, and I’m going to have a Python expert create some tests: question-and-answer pairs that show what good looks like and potential wrong answers,” Tung said. “I’m then going to test the model to see whether it performs or not.”

“If it doesn’t perform with the right output, then I need to engineer around that,” she added.
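
A sketch of what those expert-written question-answer tests might look like in code; the QA pairs and the pass criterion are invented for illustration, not Accenture’s actual harness:

```python
# Sketch of an expert-curated QA test harness: each pair encodes what a good
# answer looks like and a known-bad pattern. The pairs are invented examples.

QA_PAIRS = [
    {
        "question": "How should user input be included in a SQL query?",
        "must_mention": ["parameter"],      # good answers use parameterisation
        "must_not_mention": ["string concatenation is fine"],  # known-bad
    },
    # ...a Python expert would add many more pairs covering the vulnerability
]

def evaluate(model, qa_pairs) -> float:
    """Return the fraction of QA checks the model passes."""
    passed = 0
    for pair in qa_pairs:
        answer = model(pair["question"]).lower()
        good = all(term in answer for term in pair["must_mention"])
        bad = any(term in answer for term in pair["must_not_mention"])
        passed += good and not bad
    return passed / len(qa_pairs)

# If evaluate(my_model, QA_PAIRS) comes back below 1.0, "engineer around
# that": tighten the prompt, add context, or swap models, then re-test.
```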

Diversity in the AI field will help to reduce bias.

Oosthuizen argued that it is crucial for women to “have a seat at the table” in order to reduce gender bias in AI. This means including their perspectives in every aspect of the AI journey, from data collection to decision making to leadership. The perception of AI careers among women would also need to improve, she said.

    SEE: Salesforce offers 5 guidelines to reduce AI bias

Tung agreed that improving representation is important, whether by gender, race, age, or other demographics. She argued that having multi-disciplinary teams “is really key,” and that one benefit of AI is that “not everyone has to be a data scientist today” to be able to apply these models.

“A lot of it is in the application,” Tung explained. “It’s actually somebody who is really knowledgeable about marketing, or finance, or customer service, not just part of a talent pool that isn’t really as diverse as it needs to be. So when we think about today’s AI, it’s a really great opportunity to be able to expand that diversity.”
