Time for a reality check on AI in software testing

Gartner predicts that by 2027, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain. That’s a huge shift from just 15% in 2023, but what does this mean for software testers?

Before test automation and AI, programming was an essential skill for software testers. With the rise of AI/ML and no-code testing tools, that is no longer the case: AI skills, not programming, are becoming the preferred skills for testers.

But it won’t happen overnight. Gartner has already said that by 2028, most companies will be using AI tools to code their software. Many companies aren’t yet using AI specifically for testing, but the huge increase in code volume each year, in part because of AI, will make no-code testing more important. It will also put pressure on IT leaders to make informed decisions about which AI-enhanced tools to invest in.

To do this, they must understand the differences between them.


Human testers and AI-enhanced testing

Let’s take a look at what AI can and can’t do for testing. We’re still a long way from testing without human input. AI-enhanced tools can help testers with things like test generation and maintenance, but you still need human validation and oversight to ensure accurate testing.

That said, 52% of IT leaders expect to use GenAI to build software, according to Practitest. That number will only grow in the coming years, alongside a dramatic acceleration in software production. That means they’ll need to test the results that AI generates, potentially with AI.

But how will AI accelerate software development? The benefits of AI are essentially the same as what we see from automation – i.e. quality at speed. By automating test generation and maintenance, AI-enhanced testing tools can accelerate development, enabling faster test cycles and faster adaptation to market changes and customer needs – ultimately improving market responsiveness for software.
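To give automated test generation a concrete flavor, here is a minimal sketch of one of the simplest techniques such tools apply at scale: deriving boundary-value test cases from a declared input range. Both the parameter spec and the function under test are hypothetical examples, not any particular tool's API.

```python
# Minimal sketch: derive boundary-value test cases from a parameter range.
# The range and the function under test are hypothetical examples.

def boundary_cases(lo, hi):
    """Classic boundary-value analysis: the edges and their neighbors."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_quantity(qty, lo=1, hi=100):
    """Hypothetical function under test: accepts quantities in [lo, hi]."""
    return lo <= qty <= hi

# Generate the derived cases and record each outcome.
cases = boundary_cases(1, 100)
results = {qty: is_valid_quantity(qty) for qty in cases}
```

An AI-augmented tool does the same kind of derivation across an entire application, from user stories or recorded behavior rather than a single declared range.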

Some AI tools can also process massive amounts of data, identifying patterns in complex applications much more thoroughly than human testers. This comprehensive process means superior test coverage, less chance of overlooked edge cases and missed bugs, and generally improved software quality.

Machine learning also enables AI tools to analyze historical defect data and test execution logs and predict potential defects. This allows AI to refine and optimize test cases, resulting in more robust tests that are less prone to flakiness or false positives.
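The simplest version of this idea can be sketched without any ML library at all: rank tests by their historical failure rate so that historically flaky or failing tests run first. The log format below is invented for illustration; real tools learn from far richer signals (code churn, defect history, coverage).

```python
from collections import defaultdict

# Hypothetical execution history: (test_name, passed) tuples from past runs.
history = [
    ("test_login", False), ("test_login", True), ("test_login", False),
    ("test_checkout", True), ("test_checkout", True),
    ("test_search", True), ("test_search", False),
]

def failure_rates(runs):
    """Per-test failure rate computed from historical results."""
    totals, fails = defaultdict(int), defaultdict(int)
    for name, passed in runs:
        totals[name] += 1
        if not passed:
            fails[name] += 1
    return {name: fails[name] / totals[name] for name in totals}

def prioritize(runs):
    """Order tests so the historically riskiest run first."""
    rates = failure_rates(runs)
    return sorted(rates, key=rates.get, reverse=True)
```

A production tool replaces the failure-rate heuristic with a trained model, but the workflow — learn from past executions, then reorder or refine the suite — is the same.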

Practically speaking…

While AI-enabled testing tools are still in their early stages, the 2024 State of Testing Report highlights that organizations are already using them for test case creation (25%), test case optimization (23%), and test planning (20%).

But it’s not one size fits all. The AI-augmented testing technologies on the market today aren’t all that advanced. And none of them are a magic wand. Each business needs to set the right expectations for what each tool can do. The areas each specializes in will determine the value of the tool to your business.

Even if you can technically generate a lot of test cases with AI, are they really quality test cases? Do those test cases require so much human editing and validation that it outweighs the time you save by generating them?

One organization might prioritize streamlining the early stages of test planning with automated test generation, which might mean using AI to derive test cases and scenarios from user stories. On the other hand, a company dealing with sensitive data or privacy concerns might prioritize synthetic data creation, using AI to generate data that mimics production environments while addressing test reliability and confidentiality concerns.
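For the synthetic-data case, a minimal sketch looks like this: generate records that match a production-like schema and value distribution without containing any real customer data. The field names and ranges are hypothetical, and real tools shape the statistics to mirror actual production profiles far more closely.

```python
import random
import string

# Sketch: synthetic records matching a production-like schema, with no
# real PII. Schema and value ranges are hypothetical examples.
random.seed(42)  # deterministic output for repeatable test runs

def synthetic_user(user_id):
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {
        "id": user_id,
        "email": f"{name}@example.test",  # reserved test domain, never real
        "age": random.randint(18, 90),
        "balance_cents": random.randint(0, 1_000_000),
    }

dataset = [synthetic_user(i) for i in range(100)]
```

Because nothing in the dataset originates from production, it can flow through CI pipelines and third-party tools without triggering the confidentiality concerns that real data would.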

These are things IT leaders should evaluate when researching tools.

The roadblocks

Before AI can transform software testing, there are still some hurdles to overcome. The most important is finding IT leaders and testers whose skills keep pace with how these tools are evolving.

One of the most important skill sets IT leaders should look for in testers is AI/ML expertise. Demand for AI/ML skills is set to increase from 7% in 2023 to 21% in 2024, according to the 2024 State of Testing Report. Meanwhile, the perceived importance of traditional programming skills in testing is set to decline from 50% in 2023 to 31% in 2024. That shift in skills will inevitably transform the testing tools themselves, moving the industry away from entirely code-based automation approaches and toward much greater adoption of no-code, AI-powered tools.

Since AI-augmented testing tools depend on the data used to train their underlying models, IT leaders will also be more accountable for the security and privacy of that data. Compliance with regulations such as GDPR is essential, and robust data governance practices must be implemented to limit the risk of data breaches or unauthorized access. Algorithmic bias introduced by skewed or unrepresentative training data must also be addressed to minimize bias within AI-augmented testing.

But perhaps we’re getting ahead of ourselves here. Because even with the continued evolution of AI and the increasing normalization of autonomous testing, we still need human assistance and validation. The interpretation of AI-generated results and the ability to make informed decisions based on those results remains the responsibility of testers.

AI will change software testing for the better. But don’t treat any tool that uses AI as a straight-up upgrade. They all have different benefits within the software development lifecycle. It’s about being aware of what your organization actually needs, not what’s hot in the market. And no matter how important those AI/ML skills become… you still need people.


This article was produced as part of TechRadar Pro's Expert Insights channel, where we showcase the best and brightest minds in the technology sector today. The views expressed here are those of the author and do not necessarily represent those of TechRadar Pro or Future plc. If you're interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
