
Thoughtworks reports rapid growth in artificial intelligence tools for software development

Artificial intelligence tools and techniques are rapidly proliferating in software development as organizations look to optimize large language models for practical applications, according to a recent report from technology consultancy Thoughtworks. However, using these tools incorrectly can still cause problems for companies.

In the company’s latest Technology Radar, 40% of the 105 tools, techniques, platforms, languages and frameworks flagged as “interesting” were related to AI.

Sarah Taraporewalla leads Thoughtworks Australia’s Enterprise Modernization, Platforms and Cloud (EMPC) practice. In an exclusive interview with TechRepublic, she explained that the AI tools and techniques on the Radar go beyond the AI hype that exists in the market.

Sarah Taraporewalla, Director, Enterprise Modernization, Platforms and Cloud, Thoughtworks Australia.

“To get onto the Technology Radar, our own teams have to be using it so we can have an opinion on whether it will be effective or not,” she explained. “Across all of our projects around the world, about 40% of the items we are talking about have come out of work that is actually happening.”

New artificial intelligence tools and techniques are rapidly reaching production

Thoughtworks’ Technology Radar is designed to track “interesting things” discovered by the consulting firm’s global technology advisory board that are emerging in the global software development space. The report also assigns each a rating, which tells technology buyers whether to “adopt,” “trial,” “assess,” or “hold” the tools or techniques.

According to the report:

  • Adopt: “Blips” that Thoughtworks believes companies should strongly consider using.
  • Trial: Tools or techniques that Thoughtworks believes are ready for use, but are not as proven as those in the “adopt” category.
  • Assess: Worth watching closely, but not necessarily ready to trial yet.
  • Hold: Proceed with caution.

In the report, retrieval-augmented generation (RAG) received “adopt” status as “the preferred pattern for our teams to improve the quality of responses generated by a large language model.” Meanwhile, techniques such as “LLM as a judge,” which uses one LLM to score the responses of another and requires careful setup and calibration, received “trial” status.
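
To make the “LLM as a judge” technique concrete, here is a minimal sketch in Python, assuming the OpenAI SDK; the model names, rubric and `judge_score` helper are illustrative assumptions, not details from the report.

```python
# Minimal sketch of the "LLM as a judge" pattern: one model answers,
# a second model grades the answer against a fixed rubric.
# Model names and the rubric are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_RUBRIC = (
    "You are a strict evaluator. Score the ANSWER to the QUESTION from "
    "1 (unusable) to 5 (excellent) for accuracy and completeness. "
    "Reply with the integer score only."
)

def generate_answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical "worker" model
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def judge_score(question: str, answer: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical "judge" model
        messages=[
            {"role": "system", "content": JUDGE_RUBRIC},
            {"role": "user", "content": f"QUESTION: {question}\nANSWER: {answer}"},
        ],
    )
    return int(resp.choices[0].message.content.strip())

question = "What does the 'trial' ring on the Thoughtworks Technology Radar mean?"
answer = generate_answer(question)
print(answer)
print("Judge score:", judge_score(question, answer))
```

The report’s caveat applies directly to a sketch like this: the judge’s rubric and score scale need to be calibrated against human-labeled examples before its scores can be trusted.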

While AI agents are new, the GCP Vertex AI Agent Builder, which allows organizations to build AI agents using natural language or a code-first approach, has also received trial status.

Taraporewalla said tools or techniques must already be in production to be recommended for “trial” status, meaning they have demonstrated success in real-world use cases.

“So when we talk about the Cambrian explosion of artificial intelligence tools and techniques, we’re actually seeing that within our own teams,” she said. “For Asia Pacific, it’s an indication of what we’re seeing from clients in terms of their expectations and how willing they are to cut through the hype and look at the reality of these tools and techniques.”

WATCH: Will electricity availability hinder the AI revolution? (TechRepublic Premium)

Rapid adoption of artificial intelligence tools is causing anti-patterns to emerge

According to the report, the rapid adoption of artificial intelligence tools is beginning to create anti-patterns, or bad practices, across the industry that lead to poor outcomes for organizations. In the case of coding assistants, a key anti-pattern that has emerged is over-reliance on the code suggestions these tools produce.

“One of the anti-patterns we see is relying on the output response,” Taraporewalla said. “So while the copilot will help us generate the code, if you don’t have that expert skill and someone in the loop to evaluate the response that comes out, we risk overloading our systems.”

The Technology Radar noted concerns about the quality of generated code and the rapid growth of codebases. “Code quality issues in particular highlight an area of constant diligence among developers and architects to ensure they are not drowning in ‘works-but-awful’ code,” the report said.

The report advises against replacing pair programming with artificial intelligence, with Thoughtworks noting that the aim is to ensure AI helps, rather than complicates, the maintenance of codebases.

“We strongly advocate clean code, clean design and testing, which help reduce the total cost of ownership of the codebase; when we over-rely on the answers that tools provide… it will not help the longevity of the codebase,” Taraporewalla warned.

She added, “Teams just need to double down on those engineering best practices that we’ve always talked about, like unit testing, fitness functions from an architectural perspective, and validation approaches, just to make sure that it’s the right code that’s coming out.”
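
An architectural fitness function of the kind Taraporewalla mentions can be as simple as a unit test that fails the build when a design rule is broken. The sketch below is a hypothetical example, assuming a pytest setup and a `myapp/domain` package layout that do not come from the article.

```python
# Sketch of an architectural fitness function written as a plain unit test:
# fail the build if the domain layer grows a dependency on the web layer.
# The package layout ("myapp/domain", "myapp.web") is a hypothetical example.
import pathlib

def test_domain_layer_does_not_import_web_layer():
    domain_dir = pathlib.Path("myapp/domain")
    offenders = [
        str(source_file)
        for source_file in domain_dir.rglob("*.py")
        if "myapp.web" in source_file.read_text()
    ]
    assert not offenders, f"Domain code depends on the web layer: {offenders}"
```

Run under pytest alongside ordinary unit tests, a check like this turns an architectural intention, keeping the domain layer independent of the web layer, into an automated guardrail over AI-generated code.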

How can organizations navigate changes in AI tooling?

Focusing primarily on the problem, rather than the technological solution, is key to ensuring organizations implement the right tools and techniques without getting caught up in the hype.

“The advice we often give is to identify what problem you’re trying to solve and then figure out what might be out there in terms of solutions or tools that will help you solve that problem,” Taraporewalla said.

Governing AI must also be an ongoing process. Organizations can benefit from building a team that defines AI governance standards, helps train employees, and continually monitors changes in the AI ecosystem and regulatory environment.

“Having a group or team dedicated to just that is a great way to scale this across the organization,” Taraporewalla said. “That way you get the guardrails right, but you also allow teams to experiment and see how they can use these tools.”

Companies can also build artificial intelligence platforms with governance functions built in.

“You can codify your policies in the MLOps platform and use that as a foundation for teams to build on,” Taraporewalla added. “That way, you put guardrails around the experimentation, and you know which parts of the platform need to evolve and change over time.”
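
As a rough illustration of codifying policy into a platform, the sketch below implements a pre-deployment gate in plain Python; the metadata fields and thresholds are assumptions made for the example, not Thoughtworks guidance.

```python
# Sketch of a codified governance gate for an MLOps pipeline: block any
# model deployment whose recorded metadata violates the written policy.
# The metadata fields and thresholds are hypothetical examples.
POLICY = {
    "min_eval_accuracy": 0.85,  # minimum evaluation quality bar
    "required_fields": ("owner", "training_data_version", "eval_accuracy"),
    "allowed_stages": {"staging", "production"},
}

def check_deployment(metadata: dict) -> list[str]:
    """Return a list of policy violations; an empty list means the model may ship."""
    violations = []
    for field in POLICY["required_fields"]:
        if field not in metadata:
            violations.append(f"missing required field: {field}")
    if metadata.get("eval_accuracy", 0.0) < POLICY["min_eval_accuracy"]:
        violations.append("evaluation accuracy below policy threshold")
    if metadata.get("stage") not in POLICY["allowed_stages"]:
        violations.append(f"stage {metadata.get('stage')!r} is not allowed")
    return violations

candidate = {
    "owner": "ml-team",
    "training_data_version": "v3",
    "eval_accuracy": 0.91,
    "stage": "staging",
}
problems = check_deployment(candidate)
print("OK to deploy" if not problems else problems)
```

In a real platform, a gate like this would run automatically in the deployment pipeline, so every model release is evaluated against the same codified rules rather than ad hoc review.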

Experimenting with AI tools and techniques can pay off

According to Thoughtworks, organizations that experiment with AI tools and techniques may have to change what they use, but they will also build out their platform and capabilities over time.

“I think when it comes to ROI… if we take an experimentation approach, we’re not only using these tools to get the job done, but we’re also looking at what elements we’ll continue to build into our platform as we move forward, as our foundation,” Taraporewalla said.

She noted that this approach could allow organizations to realize greater returns from experimenting with AI over time.

“I think the return on investment will pay off over the long term, if they can keep looking at it in terms of: what pieces are we going to bring into the broader platform, and what are we learning from a foundational perspective that we can turn into a positive flywheel?”