This article is inspired by a talk that Nikolay Advolodkin from Sauce Labs shared at our Browser Conference.
Software testing is undergoing a seismic shift. Before AI came into the picture, manual testing and basic automation scripts were enough to catch stray bugs and validate that your code was working. But that approach came with its own challenges.
For instance, Katalon’s State of Quality report found that 48% of engineers struggle with a lack of time to ensure code meets quality standards, while 34% of engineers say they deal with frequent changes in requirements and a lack of skilled resources to do this well.
This is where AI can bridge the gap. We have several tools now that can do the heavy lifting for us. Instead of spending hours generating tests or weeks learning how to do it, you can do this in minutes with AI. But how do you really use it?
In this article, we’ll explore:
- How AI tools generate and optimise test cases—improving testing automation.
- The impact of AI on improving test coverage and identifying code flaws.
- How you can integrate AI tools into your development workflow to improve testing processes.
How can AI help improve automated testing?
AI-driven testing goes beyond traditional automated testing. It uses artificial intelligence and machine learning algorithms to generate relevant tests or identify issues as you go. These algorithms have already been trained on code data, so they can learn from past tests and be used for different use cases.
Here are some of its core capabilities:
- Predictive analytics: It analyses historical data and code changes to tell you where potential issues could happen—giving you more context while creating tests.
- Natural Language Processing (NLP): It can use natural language inputs to understand what you're saying, generate test code, brainstorm testing scenarios, or help you with other support tasks.
- Self-healing tests: AI testing tools can also change testing code in real-time as the application is being tested, reducing the testing burden on developers over time.
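The self-healing idea can be sketched in plain JavaScript: a lookup helper tries a primary selector and, when the markup changes, falls back to alternative locators learned for the same element. The `findElement` helper and the mock `page` object below are hypothetical and only illustrate the concept; real tools learn the fallback selectors automatically from previous runs.

```javascript
// Minimal sketch of a "self-healing" locator. The page object is a
// stand-in for a real DOM: it maps selectors to elements.
function findElement(page, selectors) {
  for (const selector of selectors) {
    const element = page.query(selector);
    if (element) return { element, usedSelector: selector };
  }
  throw new Error(`No selector matched: ${selectors.join(", ")}`);
}

// Mock page whose markup changed: the old id is gone, but the
// data-testid learned in an earlier run still works.
const page = {
  elements: { '[data-testid="checkout"]': { text: "Checkout" } },
  query(selector) {
    return this.elements[selector] ?? null;
  },
};

const result = findElement(page, ["#checkout-btn", '[data-testid="checkout"]']);
console.log(result.usedSelector); // the test "healed" onto the fallback selector
```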
How does AI benefit different types of tests?
The benefits of AI vary depending on the type of test you’re running, so let’s take a quick look:
Unit testing
Unit testing focuses on the individual components of the code—and tests each component in isolation. The problem with that is:
- It requires complete test coverage, and generating those tests can be cumbersome
- You’ll have to maintain these as code changes
- As you test in isolation, you’ll miss how the components interact with each other.
In this case, you're better off using AI to generate the individual tests. AI also looks at the code block as a whole, giving it more context for the test generation process.
Functional testing
Functional testing verifies that the software functions as intended by testing the application against specified requirements. For example, if you’re testing an ecommerce website’s checkout functionality, you might want to generate tests to see if the “Add to cart” and “Checkout” buttons work properly.
Here’s an example:
However, there are two issues with this: You'll have to dream up every possible scenario, and one function can have multiple use cases. It's hard to test the functionality without going through this process, so use AI to brainstorm these aspects and take it from there.
Non-functional testing
Non-functional testing focuses on the other side of the coin: specific behaviours like performance or scalability. For example, if you want to see how your website performs (load times, latency, etc.), you'll run a Google Lighthouse test. You could automate it with performance APIs, but even then, you'll need to check if these tests actually work.
Here's an example: say you want to make sure multiple users can access the site at the same time.
But the problem is that you'll need specialised tools to do this (like Lighthouse) and simulate multiple real-world scenarios. AI would be a great help for the latter.
Visual testing
Visual testing does exactly what it says. It looks at the visual components of code—and verifies if the application’s user interface (UI) works properly on different devices and screens.
Testing can get quite complicated. You need to:
- Create high-quality visual baselines
- Manage false positives due to UI changes
- Account for device/size variations
Doing all that is resource-intensive, to say the least. So, use AI to get there faster.
Let’s say you want to see if a new feature impacts the UI. The AI tool can:
- Capture screenshots for the baseline state
- Capture new screenshots after code changes
- Analyse the screenshots for potential defects
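That comparison step boils down to diffing pixels between the baseline and the new screenshot. Here is a toy version, assuming the screenshots are already decoded into flat greyscale pixel arrays; real visual-testing tools work on full images and are much smarter about anti-aliasing and expected dynamic regions.

```javascript
// Returns the fraction of pixels that differ beyond a tolerance.
function diffRatio(baseline, current, tolerance = 10) {
  if (baseline.length !== current.length) {
    throw new Error("Screenshots have different dimensions");
  }
  let changed = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (Math.abs(baseline[i] - current[i]) > tolerance) changed++;
  }
  return changed / baseline.length;
}

// Baseline vs. a screenshot where 1 of 4 pixels shifted noticeably.
const baseline = Uint8Array.from([200, 200, 200, 200]);
const current = Uint8Array.from([200, 205, 200, 120]);

const ratio = diffRatio(baseline, current);
console.log(ratio); // 0.25: flag the build if this exceeds a threshold
```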
Use cases for AI-driven testing platforms
As of now, these are the most popular use cases for AI testing tools:
- Intelligent test case generation: Use AI to analyse your entire codebase and suggest test cases for different scenarios. For example, if you're creating a new feature for your SaaS application but are not sure how many testing scenarios to emulate, you could generate them by giving the tool more context on your use case.
- Optimising test coverage: Use AI to analyse the code and identify parts of the application that are not adequately tested. For instance, run your current test code blocks and application code and ask the tool to write tests for areas that have yet to be tested.
- Code analysis and bug detection: You can also have AI review your code and suggest areas where potential flaws might exist. Eventually, it’ll start catching issues before they become problematic in production.
- Automated code reviews with AI: You can also start automating code reviews. For example, create your own GPT bot using ChatGPT and your internal quality guidelines. Let it keep running through your code and flagging code blocks that don’t comply.
Top AI tools for automating testing workflows
ChatGPT
ChatGPT, developed by OpenAI, is a versatile AI tool for generating test cases from natural language descriptions. Its ability to understand and respond to detailed prompts makes it a powerful asset for creating customised test cases.
Example: Suppose you have a simple React component and want to write tests to ensure it renders correctly. Here's how ChatGPT might generate a test case:
Prompt: "Generate a test case for a React component called MyButton
that checks if the button renders with the correct label."
Generated test code:
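A response along these lines, assuming a Jest and React Testing Library setup (the exact output varies between runs, and MyButton’s `label` prop is an assumption about the component’s API):

```javascript
import { render, screen } from "@testing-library/react";
import MyButton from "./MyButton";

test("renders the button with the correct label", () => {
  render(<MyButton label="Click Me" />);
  expect(screen.getByText("Click Me")).toBeInTheDocument();
});
```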
In this example, ChatGPT provides a straightforward test case that checks if the MyButton component renders with the label "Click Me". This test ensures the button's text content is correctly rendered on the screen.
Amazon CodeWhisperer
Amazon's CodeWhisperer assists developers by providing real-time code suggestions and helping them create test cases. It integrates seamlessly with development environments and offers suggestions for test case creation based on the code context.
Here’s an example of a unit test generated using the platform:
GitHub Copilot
GitHub Copilot, powered by OpenAI's Codex, assists developers by suggesting entire lines or blocks of code based on the current context. It excels in generating boilerplate code for tests and suggesting comprehensive testing strategies.
How to start integrating AI tools in your testing workflows
Now that you know how you can use AI and which tools to experiment with, let’s look at how to get started:
1. Evaluate your current testing workflows first
Before you start experimenting with AI, identify bottlenecks in your testing workflow. Map the workflow and see which areas could use more automation or help. Maybe you need to create more variations of a unit test, or you can't test a specific part of the application. Find that point and then find AI tools for those use cases.
Also, look for key integration points. You don't want to switch between applications, and if the AI tool integrates with your IDE, it'll also be much easier to adopt. For example, you could use GitHub Copilot to automate the generation of test cases during the coding phase.
2. Choose the right AI tools for the job
Choose tools that support the languages and frameworks your project uses. For instance, GitHub Copilot works well with JavaScript, Python, and other popular languages. But Amazon CodeWhisperer also supports Kotlin, Rust, and SQL.
Also, look for the right integration capabilities like IDE plugins or CI/CD pipeline integrations.
3. Adopt AI incrementally
You don't want to start integrating these tools and then realise they don't work for your use case or hallucinate more often than you'd like. Instead, experiment in a sandbox environment first. Start a pilot project or just experiment on your own time.
Only once you have a clear idea of what a tool can and cannot do should you integrate it into your workflows. After the pilot phase, expand the use of AI tools to other parts of the project.
4. Create guidelines for AI usage internally
Maybe your peers don't want to adopt AI tools yet, or there are no guardrails for implementation. Address this internal resistance by explaining your experience with them and realistically evaluating their capabilities.
Also, ensure your team has access to resources and support as they adapt to the new tools. Encourage your team to use AI tools as a supplement, not a replacement. And create AI guidelines to make sure everything's done according to "code," so to speak.
5. Implement a continuous improvement mentality
The AI space is moving at breakneck speed. You'll probably see a new update almost every week, so keep your tools up to date.
Irrespective of how you use AI, ensure your code always meets internal standards. It's easy to let AI do its thing while you focus elsewhere. But if it hallucinates or doesn't create the right tests, it's pointless to use it.
This is also why you need to create a solid internal feedback loop and adjust your AI strategy to keep getting value from it.
AI-driven testing is the future
AI is already changing how developers work with code every day. You can use it to:
- Hasten your development cycles
- Get more creative with your testing process
- Tighten up testing workflows
- Improve test coverage with ease
- Improve code quality over time
That said, AI testing is still in its infancy, which makes this the best time to start learning how to leverage it in your workflows. As new tools and improved capabilities come into play, you'll be further along than your peers and able to deliver high-quality software faster.