It has been said repeatedly in recent years that generative AI can take on more challenging tasks, even as it often fails to deliver on them. Given that, how much time does generative AI really save, and how much does it actually boost employee performance at work?
In its 2024 Future of Professionals report, Thomson Reuters, a provider of professional services and technology in the fields of law, tax, compliance, and more, examined how professionals are using AI. Ahead of the report’s release, we spoke in an exclusive interview with David Wong, chief product officer at Thomson Reuters, about generative AI in the workplace.
Thomson Reuters surveyed 2,205 professionals in legal, tax, and risk and compliance roles across the globe. The report didn’t specifically define generative artificial intelligence when asking about AI, but the capabilities it discusses most often relate to generative AI. In our conversation with Wong, we used AI as an umbrella term for generative models that produce text or images.
The percentage of professionals who think AI will be “transformative” increased by 10%
The report was generally positive about AI overall, with respondents anticipating time savings from the technology. And 77% of respondents said they believed AI would “have a significant or transformative impact on their work over the next five years,” an increase of 10% from the previous year’s report.
You could have predicted that the hype cycle would be at its peak, and that people would be most excited, when ChatGPT and GPT-4 hit the scene last year, Wong said; instead, he was a little surprised that the perceived significance of AI has increased since then.
Notably, the increased interest in the transformative impact of AI came from almost all of the sectors Thomson Reuters serves, not primarily from law firms. Wong suggested that the higher figures may reflect that wider range of industries rather than a surge of interest from law firms alone.
The gap between those who are cautious and those who are ambitious about AI is stark
Wong noted that there is a fascinating disconnect between professionals who are cautious about generative AI and those who are ambitious about it. It stems from answers to a question Thomson Reuters asked in the report: “In the future, roughly what portion of the work that your team produces will be [performed by individuals or AI]?” To find out whether professionals were cautious or ambitious about using AI technology for work, the survey offered four possible responses, covering a range from AI-led to human-led work. It found 13% of professionals fell into the “cautious” category, believing a low percentage of work will be done by AI assistants even in five years’ time. At the other extreme was the “ambitious” category, in which 19% of professionals predicted AI would be doing a significant portion of their work five years from now.
“Many professionals have come to understand what the practical application, the reality, of a lot of the technology is,” Wong said. “And based on the experimentation that took place over the past 12 months or so, we are now starting to see those professionals put it into practice.”
What tasks can and can’t AI accomplish?
According to Gartner, expectations for generative AI were at their peak in 2023, but they are likely to decline before reaching a plateau.
For legal professionals and the other professions covered in the Thomson Reuters report, “AI solutions are extremely good at any type of job where you can offer, frankly, a really good set of instructions,” said Wong.
That kind of task includes research, summarizing documents, or, as one respondent to the report put it, “researching high level concepts that don’t require specific legal citations.”
What AI can’t do is make decisions by itself. AI companies want it to eventually be able to do so; in fact, carrying out actions independently on a person’s behalf is stage 3 of 5 on OpenAI’s recent AI capability rankings. However, no AI solution can do this yet, and Wong pointed out that for Thomson Reuters’ industries, both the degree of trust people place in the technology and the technology’s capabilities are factors here.
SEE: To thrive, a modern enterprise data business needs the right human team members.
“I think that AI has really never been able to get to a place, in terms of at least trust, to be able to make decisions by itself,” Wong said.
In many cases, Wong said, AI systems “don’t perform as well as human reviewers, except in the most simple things.”
According to the report, 83% of legal professionals, 43% of risk, fraud and compliance professionals, and 35% of tax professionals think “using AI to provide advice or strategic recommendations” is “ethically … a step too far.”
Most respondents (95% of legal and tax respondents) think “allowing AI to represent clients in court or make final decisions on complex legal, tax, risk, fraud and compliance matters” would be “a step too far.”
“If you ask the question, ‘How likely do you think AI would be to make the right choice, or as good a decision as a human would?’ I believe the response could actually be different from the response to ‘Is it ethical?’” Wong said.
Will everyone have an AI assistant in the next five years?
Despite these reservations, Thomson Reuters made a bold claim in the report: “every professional will have a genAI assistant within five years.” According to the report, that assistant will act like a human team member and carry out challenging tasks.
Wong remarked that some of the optimism comes from developments already in motion. In the last two years, more than a hundred companies have introduced AI products, including the major smartphone manufacturers.
“Pretty much everybody that has an iPhone 15 and above and iOS 18 is gonna have an AI assistant in their pocket,” said Wong. “And I’m confident that in a few more years, you’ll be able to access that assistant on every new version of an Apple device. Microsoft has also been steadily rolling out Copilot. I think it’ll be, in a few years, pretty hard to have a version of Microsoft 365 which doesn’t have Copilot.”
SEE: Everything you need to know about Microsoft Copilot, in this TechRepublic cheat sheet.
Organizations are considering how their products or production processes might change as a result of using AI to create, analyze, or summarize content. According to the report, the majority of C-suite respondents believe AI will have a significant impact on their operational strategy (59%) and their product/service strategy (53%).
“I think that’s what pretty much every single company is looking at right now, which is that the operations of a business have a lot of routine, rote work that you could describe with an instruction manual,” Wong said.
These repetitive tasks are a natural fit for AI. In the legal field, he said, AI could change businesses’ processes for submitting regulatory or statutory filings.
What responsible, ‘professional-grade’ AI looks like
Respondents to the report expressed a variety of viewpoints on what constitutes responsible AI use in the workplace. Many thought data security was a crucial component of ethical AI use. Others valued:
- Data security at every step of the query process.
- Compulsory review of outputs by a human professional.
- Care in deciding which tasks AI-powered technologies can be used for.
- Transparency about the sources of the answers that are gathered.
“If anyone says that [generative AI is] perfect, hallucination free, with no errors, then they are either deluded or the claim should be highly, highly scrutinized,” Wong said. “What you want, though, is you want to have transparency into the performance.”
Responsible AI systems used in business settings should have validated content, be measurable, and be able to cite their sources, he said. They should also be built with security, reliability, and privacy in mind.
Because it doesn’t fulfill those requirements, Wong said, ChatGPT is “the worst poster child for a generative AI solution for professionals.” However, it is possible to build a ChatGPT-like tool that is secure, respects confidentiality, and doesn’t train on your data. Those are system design choices; they are not inherent to the AI itself.