According to a recent report from technology consultancy Thoughtworks, AI tools and techniques are quickly becoming more prevalent in the industry as businesses strive to turn large language models into real-world applications. However, inappropriate use of these tools can also present challenges for companies.
In the consultancy’s latest Technology Radar, 40% of the 105 tools, techniques, platforms, languages, and frameworks identified as “interesting” were AI-related.
Sarah Taraporewalla leads Thoughtworks Australia’s Enterprise Modernisation, Platforms, and Cloud (EMPC) practice. She stated in an exclusive interview with TechRepublic that AI techniques and tools are proving themselves in the market beyond the current artificial intelligence hype.
“To get onto the Technology Radar, our own teams have to be using it, so we can have an opinion on whether it’s going to be effective or not,” she explained. “We’re seeing across the globe, in all of our projects, that we’ve been able to generate about 40% of these things we’re talking about from work that’s actually happening.”
New AI tools and techniques are quickly entering production
The consultancy’s global Technology Advisory Board produces the Technology Radar to track interesting developments emerging in the international software engineering landscape. The report also assigns each a rating that indicates to technology buyers whether to “adopt”, “trial”, “assess”, or “hold” these tools or practices.
According to the report:
- Adopt: “Blips” that companies should strongly consider.
- Trial: Tools or techniques that Thoughtworks considers ready to use, but not as proven as those in the adopt category.
- Assess: Things to look at closely, but not necessarily trial yet.
- Hold: Proceed with caution.
The report designated retrieval-augmented generation as “the preferred pattern for our teams to improve the quality of responses generated by a large language model.” Meanwhile, “using LLM as a judge”, a technique that leverages one LLM to evaluate the responses of another LLM and requires careful set-up and validation, was given a “trial” rating.
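As a rough illustration of the LLM-as-judge technique, the sketch below asks one model to grade another model’s answer. The `complete` callable is a hypothetical stand-in for whatever LLM API a team uses, and the prompt and scoring scale are illustrative assumptions, not details from the report.

```python
# Minimal sketch of the "LLM as a judge" pattern: one model grades the
# output of another. `complete(prompt) -> str` is a hypothetical helper
# wrapping whatever LLM API is in use; it is not a real library call.
from typing import Callable

JUDGE_PROMPT = """You are a strict evaluator. Score the ANSWER to the
QUESTION from 1 (unusable) to 5 (excellent), then justify briefly.
Respond exactly as:
SCORE: <number>
REASON: <one sentence>

QUESTION: {question}
ANSWER: {answer}"""

def judge_answer(question: str, answer: str,
                 complete: Callable[[str], str]) -> tuple[int, str]:
    """Ask a second model to grade another model's answer."""
    raw = complete(JUDGE_PROMPT.format(question=question, answer=answer))
    score_part, _, reason = raw.partition("REASON:")
    score = int(score_part.split("SCORE:")[1].strip())
    return score, reason.strip()
```

The “careful set-up and validation” the report mentions would live around code like this: calibrating the judge prompt against human-labelled examples and checking the judge’s scores for bias before trusting them.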
Though AI agents are still nascent, the GCP Vertex AI Agent Builder, which allows organisations to create AI agents using a natural language or code-first approach, was also given a “trial” rating.
Tools or techniques must already have been used in the field before being recommended for “trial” status, according to Taraporewalla, meaning they represent success in real, practical use cases.
“So when we’re talking about this Cambrian explosion in AI tools and techniques, we’re actually seeing those within our teams themselves,” she said. That, she added, is representative of what clients are exploring, and of how ready organisations in APAC are to break through the hype and examine the application of these tools and techniques.
SEE: Will Power Availability Derail the AI Revolution? (TechRepublic Premium)
Rapid AI tool adoption is causing concerning antipatterns
According to the report, the rapid adoption of AI tools is beginning to produce antipatterns, or bad practices, in the industry that are causing subpar outcomes for businesses. In the case of coding-assistance tools, a key antipattern that has emerged is over-reliance on the code suggestions these tools generate.
“One of the patterns we’re seeing is relying on the answer that’s being spat out,” Taraporewalla said. “So while a copilot will assist us in writing the code, if you don’t have that expert skill and the human in the loop to evaluate the response that’s coming out, there is a risk of overbloating our systems.”
The Technology Radar raised concerns about the rapid growth of codebases and the quality of generated code. The report stated that “the code quality issues in particular highlight an area of continued diligence by developers and architects to ensure they don’t drown in ‘working-but-terrible’ code.”
The report gave this antipattern a “hold” rating, with Thoughtworks noting that the aim is to ensure AI is helping rather than encumbering codebases with complexity.
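One way teams can operationalise that diligence is with automated quality gates in continuous integration. Below is a minimal sketch, using only Python’s standard library, that flags oversized functions before generated code is merged; the 50-line threshold is an illustrative assumption, not a figure from the report.

```python
# Hedged sketch of a code-quality guardrail: a CI check that flags
# oversized functions, a common symptom of "working-but-terrible"
# generated code. The threshold is an illustrative choice.
import ast
import pathlib
import sys

MAX_FUNCTION_LINES = 50  # illustrative threshold

def oversized_functions(path: pathlib.Path) -> list[str]:
    """Return a finding for each function longer than the threshold."""
    tree = ast.parse(path.read_text())
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                findings.append(f"{path}:{node.lineno} {node.name} is {length} lines")
    return findings

if __name__ == "__main__":
    problems = [finding for arg in sys.argv[1:]
                for finding in oversized_functions(pathlib.Path(arg))]
    print("\n".join(problems) or "all functions within limit")
    sys.exit(1 if problems else 0)  # non-zero exit fails the CI step
```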
Taraporewalla warned that over-reliance on the answers the tools spit out undermines the practices Thoughtworks has long championed: clean code, clean design, and testing, all of which help lower the overall total cost of ownership of the codebase.
She continued: “Teams just need to double down on those good engineering practices that we’ve always talked about, such as unit testing, fitness functions from an architectural perspective, and validation techniques, to make sure that the code is being released in the right way.”
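As a rough illustration of the kind of architectural fitness function she describes, the sketch below is a unit test that fails the build when a hypothetical domain layer imports from the web layer. The package names and project layout are assumptions for the example, not details from the report.

```python
# Hedged sketch of an architectural fitness function written as a unit
# test: it fails (under pytest) if the domain layer imports the web
# layer. DOMAIN_DIR and FORBIDDEN_PREFIX are illustrative assumptions.
import ast
import pathlib

DOMAIN_DIR = pathlib.Path("src/domain")  # illustrative project layout
FORBIDDEN_PREFIX = "web"                 # illustrative layer name

def test_domain_layer_does_not_import_web_layer():
    for source in DOMAIN_DIR.rglob("*.py"):
        tree = ast.parse(source.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imported = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                imported = [node.module]
            else:
                continue
            for name in imported:
                assert not name.startswith(FORBIDDEN_PREFIX), \
                    f"{source}: domain layer imports {name}"
```

Run as part of the regular test suite, a check like this catches architectural drift regardless of whether the offending code was written by a human or suggested by a copilot.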
How can businesses navigate the fast-changing AI tool landscape?
For organizations, the key to adopting the right tools and techniques without being swept up in the hype is to focus on the problem rather than the technology solution.
“The advice we frequently give is to work out what problem you’re trying to solve, and then to look for solutions or tools to help you solve that problem,” Taraporewalla said.
AI governance will also need to be a constant and ongoing process. Organizations can benefit from having a team that can help define their AI governance standards, assist with employee education, and keep track of changes to the AI ecosystem and regulatory environment.
“Having a group and a team dedicated to doing just that is a great way to scale it across the organisation,” Taraporewalla said. “You not only ensure that the guardrails are put in place the right way, but you also give teams the opportunity to experiment and see how to use them.”
Additionally, businesses can create AI platforms with integrated governance features.
“You could incorporate your policies into an MLOps platform and use that as the foundation for the teams to build on,” Taraporewalla said. “That way, you’ve then constrained the experimentation, and you know what parts of that platform need to evolve and change over time.”
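A minimal sketch of what policies baked into a platform could look like appears below. The model record, policy list, and deployment helper are all hypothetical illustrations, not features of any particular MLOps product.

```python
# Hedged sketch of platform-level governance: a promotion helper that
# refuses to deploy a model unless every policy check passes. All names
# here (ModelRecord, POLICIES, deploy_model) are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    eval_score: float      # offline evaluation result, 0.0 to 1.0
    pii_reviewed: bool     # data-privacy sign-off recorded
    card_published: bool   # model card documentation exists

# Each policy pairs a human-readable name with a predicate over the model.
POLICIES = [
    ("minimum eval score of 0.8", lambda m: m.eval_score >= 0.8),
    ("PII review complete",       lambda m: m.pii_reviewed),
    ("model card published",      lambda m: m.card_published),
]

def deploy_model(model: ModelRecord) -> None:
    """Promote a model only if every governance policy passes."""
    failures = [name for name, check in POLICIES if not check(model)]
    if failures:
        raise PermissionError(f"{model.name} blocked by policy: {', '.join(failures)}")
    print(f"deploying {model.name}")  # the platform's deploy call would go here

if __name__ == "__main__":
    # Example: this record fails the PII policy, so deployment is refused.
    candidate = ModelRecord("churn-predictor", eval_score=0.91,
                            pii_reviewed=False, card_published=True)
    try:
        deploy_model(candidate)
    except PermissionError as err:
        print(err)
```

Because the policies live in one place, teams can experiment freely on top of the platform while the guardrails evolve centrally, which is the constraint-plus-freedom balance Taraporewalla describes.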
Experimenting with AI tools and techniques may pay off
Organizations experimenting with AI tools and techniques may have to adjust course as the landscape shifts, but they will also be developing their platforms and capabilities over time, according to Thoughtworks.
“I think when it comes to return on investment, we’re looking at what are the elements that we’ll continue to just build on our platform as we move forward, as our foundation,” Taraporewalla said.
She noted that organizations might benefit from a more holistic approach to AI experiments.
“I think the return on investment will pay off in the long run,” she said, “if they can continue to look at it from the perspective of: what parts are we going to bring to a more common platform, and what are we learning from a foundation’s perspective that we can make into a positive flywheel?”