The Rise of AI and Machine Learning in Software Testing

Eoghan Phelan
6 min read · Jan 4, 2024


Introduction

In software development, the continuous pursuit of high-quality products has driven the evolution of testing methods. To stay competitive in the digital landscape, software products require a constant stream of innovative features as well as continuous maintenance. This, in turn, is the QA team's ongoing mission: products require constant upkeep, and it is essential to verify that all functionality is working before each release. When there are hundreds of tests in place monitoring multiple features, managing them can become a tedious task without appropriate tooling.

In the early 2000s, the introduction of the Agile Manifesto and Continuous Delivery practices brought with it the growing adoption of test automation. Faster product delivery demanded more efficient testing processes, leading to the increased use of test automation tools. Tools such as Selenium gained popularity for web application automation, and many still employ record-and-playback features to identify web elements for tests.

As an increasing number of devices and technologies are introduced, testing has grown in complexity, demanding a broader range of scenarios and use cases for robust application testing. Test automation tools and the establishment of device farms have facilitated this complexity, enabling testers to remotely test their applications across multiple devices.

Nevertheless, maintaining test cases, test data and test frameworks can pose a significant challenge to testing teams, and in some cases may impede overall product delivery speed. A remedy to these challenges has emerged in the form of AI and machine learning capabilities.

The Emergence of AI in Software Testing

Artificial Intelligence is allowing testers to evolve their capabilities and improve their performance. AI in software testing draws on a variety of technologies, including machine learning, pattern recognition and natural language processing. New tools harness these technologies to identify patterns in testing deficiencies and make more informed decisions that lead to a more efficient testing process. While AI in software testing has yet to reach its full potential, the following implementations are in current use:

Automated Test Script Generation

AI can enable the automatic creation of test scripts by analysing the application's behaviour and understanding its underlying code. A non-AI precursor is the record-and-playback functionality found in various test automation tools: a tester navigates through a test scenario while the tool records each step, collects the application's element identifiers and generates code for the tester to integrate into their testing framework. Testing tools such as AskUI, Virtuoso and Qyrus have now taken this function to the next level. These tools offer a low-code feature where you load an application's URL and navigate through a user scenario. As you interact with the application's components, the tool automatically captures each component's locators and logs an automated test step. AskUI, for example, can use AI to find any visible element on the device screen, which means long XPath selectors are no longer required in code. This alone removes a significant amount of maintenance from a test suite. Qyrus's TestPilot tool can identify the functionality and domain of an app from its URL using an AI plugin, then create tailor-made test scenarios for optimal coverage.
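To make the record-and-playback idea concrete, here is a minimal sketch, in plain Python, of how a recorder might log user interactions and emit Selenium-style test steps. The `Recorder` class and the emitted pseudo-code are illustrative assumptions, not the implementation of any specific tool:

```python
from dataclasses import dataclass, field


@dataclass
class RecordedStep:
    action: str      # e.g. "click" or "type"
    locator: str     # element identifier captured during recording
    value: str = ""  # input text, if any


@dataclass
class Recorder:
    steps: list = field(default_factory=list)

    def capture(self, action: str, locator: str, value: str = "") -> None:
        # Each user interaction is logged as a test step.
        self.steps.append(RecordedStep(action, locator, value))

    def generate_script(self) -> str:
        # Emit Selenium-style pseudo-code for the recorded scenario.
        lines = []
        for s in self.steps:
            if s.action == "click":
                lines.append(f'driver.find_element("{s.locator}").click()')
            elif s.action == "type":
                lines.append(
                    f'driver.find_element("{s.locator}").send_keys("{s.value}")'
                )
        return "\n".join(lines)


rec = Recorder()
rec.capture("type", "id=username", "alice")
rec.capture("click", "id=login-button")
print(rec.generate_script())
```

The AI-driven tools described above go further by resolving elements visually at run time instead of baking brittle locators into the generated script.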

I consider this feature more of an optimizer than a total replacement for test script generation. As is common with AI, there may be potential biases in the algorithms. Tests still require a human touch to refine the data and results generated, ensuring that they thoroughly assess an application to achieve the highest quality.

The more accurate the data we provide to the AI, the better the results we can expect in return. An example of this is NOVA, an AI tool in Qyrus. NOVA reads JIRA tickets and analyzes the requirements they describe to automatically generate functional test scenarios and ensure the delivered functionality matches expectations. The quality of these test scenarios depends on the quality of the user stories and descriptions in the JIRA ticket, again demonstrating the importance of human input in maximizing the full potential of AI.
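As a toy illustration of the requirements-to-scenarios idea, the sketch below turns acceptance-criteria bullets from a ticket description into test scenario stubs. This is a deliberately naive assumption of the workflow; tools like NOVA use natural language processing rather than simple line parsing:

```python
def scenarios_from_ticket(description: str) -> list[str]:
    # Illustrative only: convert each acceptance-criteria bullet
    # into a test scenario stub.
    scenarios = []
    for line in description.splitlines():
        line = line.strip()
        if line.startswith("- "):
            scenarios.append(f"Verify that {line[2:].rstrip('.')}")
    return scenarios


ticket = """As a user, I want to reset my password.
Acceptance criteria:
- the reset link expires after 24 hours
- an email is sent to the registered address
"""
for scenario in scenarios_from_ticket(ticket):
    print(scenario)
```

Note how vague or missing acceptance criteria would produce vague or missing scenarios, which is exactly why the quality of the ticket bounds the quality of the generated tests.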

AI-Driven Visual Testing

It happens to us all: tester tunnel vision. A component has not changed in an application for some time, test automation is continuously passing and no issues are being flagged. Suddenly, a UI bug on the component goes unnoticed by the tester, due to some underlying system or component library change. Some testing tools have attempted to fill this gap, and AI within these tools has greatly improved their effectiveness. Applitools has an excellent visual AI testing tool. As tests run, Applitools compares screenshots from a previous build against the current screen under test and flags any variations found between them. You can configure the strictness of the AI's match-level comparison. Unlike stricter visual testing tools, Applitools compares differences as the human eye would, ignoring platform-dependent pixel variations caused by rendering software or hardware components. This is particularly beneficial for screens where data appears ad hoc, such as variations in table data or popups.
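The core idea of tolerance-based visual comparison can be sketched in a few lines. The example below compares two tiny grayscale "screenshots" (2-D lists of 0-255 values) and flags only differences that exceed a tolerance, so sub-perceptual rendering noise is ignored. This is a simplified assumption of how a lenient match level behaves, not Applitools' actual algorithm:

```python
def visual_diff(baseline, current, tolerance=10):
    # Compare two grayscale screenshots pixel by pixel. Small
    # differences from anti-aliasing or rendering hardware fall
    # inside the tolerance and are ignored; only perceptible
    # changes are reported as (x, y) coordinates.
    diffs = []
    for y, (row_a, row_b) in enumerate(zip(baseline, current)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > tolerance:
                diffs.append((x, y))
    return diffs


baseline = [[120, 120], [200, 200]]
rendered = [[123, 118], [200, 90]]   # tiny noise, plus one real change
print(visual_diff(baseline, rendered))  # → [(1, 1)]
```

A strict pixel-equality comparison would have flagged all three changed pixels; the tolerance keeps only the change a human would notice.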

This is a significant advancement for testers. For users, the UX and UI are the driving factors that influence their decision to continue interacting with an application. Any visual flaws may lead them to use another application.

Real-time Monitoring

As mentioned above, a major pain point in test automation is maintenance and monitoring of your testing infrastructure to ensure all of your tests are:

  1. Working as initially designed
  2. Still testing the right things

If a test fails due to a small change in a web element identifier, the tester has to spend time maintaining the test, fetching the new identifier and re-running it. AI can remedy this pain point. Self-healing test tools can now automatically repair tests that fail due to a test framework error, in microseconds. The tool captures XPath and HTML information and transforms it into a representation that identifies the position of each element on the page, capturing how the element looks and its distance from other elements. Upon failure, the tool looks for the element that matches this representation, using a machine learning technique similar to the nearest neighbour algorithm. The healer understands the app's element structure and relocates the element of interest at the moment of failure, then the test re-runs without any human interaction. Tools such as Qyrus and mabl use this technology.
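The nearest-neighbour matching described above can be sketched as follows. Each element is reduced to a comparable representation (tag, visible text, on-page position), and when a locator breaks, the candidate with the lowest combined mismatch-plus-distance score is chosen. The scoring weights and element dictionaries are illustrative assumptions, not any vendor's actual model:

```python
import math


def heal_locator(broken, candidates):
    # Nearest-neighbour style matching: pick the candidate whose
    # appearance and on-page position best match the element the
    # test last saw before its locator broke.
    def score(cand):
        tag_mismatch = 0 if cand["tag"] == broken["tag"] else 100
        text_mismatch = 0 if cand["text"] == broken["text"] else 50
        distance = math.hypot(cand["x"] - broken["x"], cand["y"] - broken["y"])
        return tag_mismatch + text_mismatch + distance

    return min(candidates, key=score)


# The 'submit-btn' id changed between builds, so the old locator fails.
last_seen = {"tag": "button", "text": "Submit", "x": 300, "y": 420}
page_now = [
    {"id": "cancel-btn", "tag": "button", "text": "Cancel", "x": 180, "y": 420},
    {"id": "submit-button-v2", "tag": "button", "text": "Submit", "x": 305, "y": 425},
]
print(heal_locator(last_seen, page_now)["id"])  # → submit-button-v2
```

Because the match is based on what the element looks like and where it sits, a renamed id or reshuffled DOM no longer breaks the test.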

Monitoring test runs extends beyond the tests themselves; it also involves using AI to observe the resources on which the tests are executed. Analyzing usage patterns with AI reveals the most frequently used operating systems and browsers. In addition, valuable information can be reported to software engineers, providing insights into how your application performs on different browsers and operating systems.
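A minimal sketch of this kind of aggregation, assuming test-run records shaped like the hypothetical dictionaries below, might look like this:

```python
from collections import Counter

# Hypothetical test-run records, as a monitoring tool might collect them.
test_runs = [
    {"os": "Windows 11", "browser": "Chrome", "passed": True},
    {"os": "macOS 14",   "browser": "Safari", "passed": True},
    {"os": "Windows 11", "browser": "Chrome", "passed": False},
    {"os": "Windows 11", "browser": "Edge",   "passed": True},
]

# Which environments are exercised most often?
env_usage = Counter((r["os"], r["browser"]) for r in test_runs)
print(env_usage.most_common(1))  # → [(('Windows 11', 'Chrome'), 2)]

# Pass rate per browser: useful feedback for the engineering team.
for browser in sorted({r["browser"] for r in test_runs}):
    runs = [r for r in test_runs if r["browser"] == browser]
    rate = sum(r["passed"] for r in runs) / len(runs)
    print(f"{browser}: {rate:.0%} pass rate")
```

Real monitoring tools layer anomaly detection and trend analysis on top, but the raw signal is exactly this kind of per-environment breakdown.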

In terms of test creation optimization, Testim will recognise repeated patterns across test cases and automatically suggest shared groups as replacements for similar steps, speeding up test case creation. In turn, this creates a more robust and maintainable testing suite.

Conclusion

On a previous Test Guild podcast, a perspective on AI in software was expressed as "I look at AI and ML more as an accelerator", which resonates with me.

There are many other applications of AI in software testing that I did not discuss above, such as data generation. However, I feel the topics discussed here represent significant enhancements to how a software tester operates. They should not be looked upon as replacements or substitutes for any work you do, but rather as a valuable sidekick in your own work.
