Keeping customers’ trust can be difficult given generative AI’s volatile reputation, whether you are creating or customizing an AI strategy or reevaluating how your business approaches trust. We spoke with Michael Bondar, principal and enterprise trust leader at Deloitte, and Shardul Vikram, head of data and AI at SAP Industries and CX, about how businesses can sustain trust in an era of AI.
Organizations benefit from trust
First, Bondar said, each company should define trust in terms of how it applies to its specific needs and customers. Deloitte offers tools to do this, such as the “trust domain” system found in some of Deloitte’s downloadable frameworks.
People involved in discussions of trust often mean different things by it, he said, but organizations want to be trusted by their customers. According to Deloitte, companies that are trusted produce stronger financial results, better stock performance, and greater customer loyalty.
“And we’ve seen that nearly 80% of employees feel motivated to work for a trusted employer,” Bondar said.
Vikram defined trust as believing the organization will act in the customers’ best interests.
When thinking about trust, customers will ask themselves, “What is the uptime of those services?” Vikram said. “Are those services secure? Can I rely on that specific partner to protect my data and ensure compliance with local and international laws?”
According to Deloitte, trust “begins with a combination of competence and intent,” which is the assurance that the organization is both capable of fulfilling its promises and reliable in doing so, Bondar said. “But also the rationale, the motivation, the why behind those actions is aligned with the values (and) expectations of the various stakeholders, and the humanity and transparency are embedded in those actions.”
Why might businesses struggle to increase trust? Bondar attributed it to “geopolitical unrest,” “socio-economic pressures” and “apprehension” around new technologies.
Generative AI can erode trust if customers aren’t informed about its use
Generative AI is top of mind when it comes to new technologies. According to Bondar, generative AI use needs to be robust and reliable in order not to undermine trust.
“Privacy is key,” he said. “Consumer privacy must be respected, and customer data must be used within, and only within, its intended purpose.”
That includes every step of using AI, from the initial data gathering when training large language models to allowing consumers to opt out of their data being used by AI in any way.
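To make that opt-out concrete, here is a minimal sketch in Python of consent-gated data collection. The `CustomerRecord` shape and the `consented_to_ai_training` flag are hypothetical illustrations, not taken from Deloitte, SAP or any specific product; the point is simply that consent is checked before a record ever reaches a training corpus.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    text: str
    consented_to_ai_training: bool  # hypothetical per-record consent flag

def build_training_corpus(records: list[CustomerRecord]) -> list[str]:
    """Keep only records whose owners opted in to AI training."""
    # Consent is enforced at the data-gathering step, not downstream.
    return [r.text for r in records if r.consented_to_ai_training]

# Usage: only the consented record reaches the corpus.
records = [
    CustomerRecord("c1", "support ticket text", consented_to_ai_training=True),
    CustomerRecord("c2", "chat transcript", consented_to_ai_training=False),
]
print(build_training_corpus(records))  # ['support ticket text']
```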
In fact, Vikram said, training generative AI and seeing where it goes wrong can be a good opportunity to remove outdated or irrelevant data.
SEE: Microsoft Delayed Its AI Recall Feature’s Launch, Seeking More Community Feedback
He suggested the following ways to ensure customer trust when using AI:
- Give employees training on how to safely use AI. Focus on war-gaming exercises and media literacy. Keep in mind your own organization’s notions of data trustworthiness.
- When creating or using a generative AI model, ensure data consent and/or IP compliance.
- When possible, train employees to recognize AI metadata and watermark AI content (see the metadata-check sketch after this list).
- Be open about your use of AI in all its forms and capabilities rather than keeping it a secret.
- Create a trust center. A trust center is a “digital-visual connective layer between an organization and its customers where you’re teaching, (and) you’re sharing the latest threats, latest practices (and) latest use cases that are coming about that we have seen work wonders when done the right way,” Bondar said.
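As a rough illustration of the metadata point above, the sketch below inspects a PNG’s text chunks for AI-provenance labels. The key names in `AI_METADATA_KEYS` are assumptions for the example; real provenance schemes such as C2PA/Content Credentials carry far richer, signed metadata, and a missing label proves nothing about how an image was made.

```python
from PIL import Image  # Pillow

# Assumed label names for the example; real-world checks would target
# a standard such as C2PA rather than ad-hoc keys.
AI_METADATA_KEYS = {"ai_generated", "c2pa", "provenance"}

def looks_ai_labeled(path: str) -> bool:
    """Return True if a PNG carries any of the assumed AI-provenance text chunks."""
    img = Image.open(path)
    text_chunks = getattr(img, "text", {})  # PNG tEXt/iTXt chunks, if present
    return any(key.lower() in AI_METADATA_KEYS for key in text_chunks)

# Usage: flag files for human review rather than auto-deciding.
# print(looks_ai_labeled("incoming/image.png"))
```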
Companies are likely already adhering to regulations that may affect how they use customer data and artificial intelligence, such as the California Privacy Rights Act, the General Data Protection Regulation, and the SEC’s cyber disclosure rules.
How SAP establishes trust in generative AI products
“At SAP, we have our DevOps team, the infrastructure teams, the security team, the compliance team embedded deep within each and every product team,” Vikram said. “This allows us to consider trust from day one, and not as an afterthought, when making product decisions or architectural decisions.”
By establishing these connections between teams and by developing and adhering to the company’s ethics policy, SAP operationalizes trust.
“We have a policy that we cannot actually ship anything without the approval of the ethics committee,” Vikram said. “It’s approved by the quality gates … It’s approved by the security counterparts. This adds a layer of process to operational tasks, and the combination of them actually makes us more efficient and effective at operationalizing trust.”
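SAP hasn’t published how those gates are wired together, so the following is only a minimal sketch of the layering Vikram describes, with invented gate names: a release is blocked unless every required sign-off, ethics included, is present.

```python
# Assumed gate names for illustration; not SAP's actual process tooling.
REQUIRED_APPROVALS = {"ethics_committee", "quality_gate", "security_review"}

def can_ship(approvals: set[str]) -> bool:
    """Allow a release only when every required gate has signed off."""
    missing = REQUIRED_APPROVALS - approvals
    if missing:
        print(f"Release blocked; missing approvals: {sorted(missing)}")
        return False
    return True

# A build without the ethics sign-off cannot ship.
print(can_ship({"quality_gate", "security_review"}))  # False
print(can_ship(REQUIRED_APPROVALS))                   # True
```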
When SAP rolls out its own generative AI products, those same policies apply.
SAP has released the CX AI Toolkit for CRM, a generative AI product that can write and rewrite content, automate some tasks, and analyze enterprise data. The CX AI Toolkit will always show its sources when you ask it for information, Vikram said; this is one of the ways SAP tries to win over customers who use its AI products.
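SAP hasn’t detailed the toolkit’s internals, so the snippet below is only a generic sketch of the “always show its sources” pattern: answers are built from retrieved documents and carry those documents’ identifiers, and the system declines to answer rather than respond without a source. The document store and the keyword matching are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SourcedAnswer:
    text: str
    sources: list[str] = field(default_factory=list)

# Hypothetical mini knowledge base: document id -> content.
DOCS = {
    "kb/returns-policy": "Customers may return items within 30 days.",
    "kb/shipping": "Standard shipping takes 3 to 5 business days.",
}

def answer_with_sources(question: str) -> SourcedAnswer:
    """Answer only from retrieved documents and always attach their ids."""
    words = question.lower().split()
    hits = [doc_id for doc_id, body in DOCS.items()
            if any(word in body.lower() for word in words)]
    if not hits:  # no source, no answer
        return SourcedAnswer("No supporting document found; declining to answer.")
    summary = " ".join(DOCS[doc_id] for doc_id in hits)
    return SourcedAnswer(summary, sources=hits)

print(answer_with_sources("How long does standard shipping take?"))
```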
How to implement generative AI ethically within the organization
With generative AI in play, companies must build trustworthiness broadly into their KPIs.
“With AI in the mix, and particularly with generative AI, there are additional KPIs or metrics that customers are looking for, such as: How do we build trust, transparency and auditability into the outcomes we receive from the generative AI system?” Vikram said. “The systems, by default or by definition, are non-deterministic to a high fidelity.
“And now, in order to use those particular capabilities in my enterprise applications, in my revenue centers, I need to have the basic level of trust. What are we doing at least to reduce hallucinations or provide the necessary insights?”
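One way teams make those auditability expectations measurable, sketched below with assumed field names and file location rather than any vendor’s actual tooling, is to log every generation with its prompt, model and cited sources, so outcomes can be traced later and hallucination rates sampled from the record.

```python
import json
import time
import uuid

AUDIT_LOG = "genai_audit.jsonl"  # assumed location for the audit trail

def log_generation(prompt: str, response: str, model: str, sources: list[str]) -> str:
    """Append one auditable record per generation and return its id."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model": model,
        "prompt": prompt,
        "response": response,
        "sources": sources,  # which documents the answer cited, if any
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```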
C-suite decision-makers are eager to try out AI, Vikram said, but they want to start with a few specific use cases at a time. That desire for a measured approach can conflict with the speed at which new AI products are released. Concerns about hallucinations or low-quality content are common; generative AI used for legal tasks, for example, has shown “pervasive” instances of mistakes.
However, businesses are eager to try AI, Vikram said. “I have been building AI applications for the past 15 years, and it has never been like this. The appetite has never been this strong, and it is not just an appetite to learn more about it, but to do more with it.”