As recalled in Part 1 of this article, AI stands for “Artificial Intelligence” and refers to the capability of a machine to demonstrate intelligence: any device that perceives its surrounding environment and takes actions to achieve an objective can be considered intelligent. ML stands for “Machine Learning”, the field of Artificial Intelligence that relies heavily on algorithms and statistical techniques to give computer systems the ability to “learn” from data without being explicitly programmed.
AI and ML are taking hold in our world as a whole, but they are also taking hold in our world as software developers and testers. One of the first challenges encountered with AI is not related to the technology itself but to the identification of specific applications for it and, according to the State of Testing Survey 2017 and the World Quality Report 2018/2019 (WQR), one of the possible ways to apply AI is in the Quality Assurance field.
The use of AI systems in testing can bring advantages across the whole QA lifecycle and help in several areas.
Furthermore, a correct application of AI-driven tools can expose underlying truths that humans would take much longer to discover, leading to risk reduction, cost optimisation and higher customer satisfaction.
However, the third wave of tools does not bring only advantages; it also reveals some challenges.
Since we are just at the beginning of this wave, the time spent creating automated tests is still considerably high, and volatile systems demand a great deal of maintenance on those tests. Applications that change constantly, test data and environment availability are the main obstacles to achieving the desired level of test automation, and this is where Machine Learning and Artificial Intelligence could make the greatest difference.
Creating algorithms aligned with existing automated testing techniques (or new ones still to be discovered), with the ability to change, adapt and learn as the system under test changes, would aim to solve the most difficult tasks of automated testing using one of the most recent and ground-breaking technologies.
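One way such adaptive behaviour is often pictured is a “self-healing” element locator: when the selector recorded at authoring time no longer matches after a UI change, the test falls back to scoring candidate elements by how similar their attributes still are, instead of failing outright. The sketch below is purely illustrative; the data structures, attribute scheme and threshold are assumptions, not any specific tool’s API.

```python
# Illustrative sketch of a "self-healing" locator: if the recorded CSS selector
# no longer matches after a UI change, score the page's elements by how many
# known attributes (id, tag, text, classes...) they still share with the element
# recorded at authoring time. All names and the threshold are assumptions.

def similarity(recorded: dict, candidate: dict) -> float:
    """Fraction of recorded attributes that still match on the candidate."""
    keys = recorded.keys()
    matches = sum(1 for k in keys if candidate.get(k) == recorded[k])
    return matches / len(keys) if keys else 0.0

def find_element(page_elements: list, recorded: dict, threshold: float = 0.6):
    """Return the element matching the recorded locator, or the closest candidate."""
    # Happy path: the original locator still works.
    for el in page_elements:
        if el.get("css") == recorded.get("css"):
            return el
    # Self-healing path: pick the most similar element, if it is similar enough.
    best = max(page_elements, key=lambda el: similarity(recorded, el), default=None)
    if best and similarity(recorded, best) >= threshold:
        return best  # the test adapts to the change instead of failing
    return None      # genuinely missing: report a failure
```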
Another major challenge that follows from the previous ones is the possible emergence of new testing roles requiring different skills, such as development, data science, mathematics, algorithmic knowledge and, of course, Machine Learning.
Still, the tester’s job will continue to be needed, although with different outlines. Ideally, AI and tester should work together to deliver the best possible software quality. We can hand over to the AI the tasks of a repetitive nature and keep for humans the difficult ones, such as exploratory testing (which requires a critical sense that machines still do not have), monitoring the machine to validate and correct the anomalies it detects and the decisions it makes and, most importantly to us, the communication and knowledge transfer within and between teams.
In addition to these, some other challenges can arise when setting up: identifying where AI can be applied; integrating AI with the existing applications; and a lack of development and testing knowledge.
From the State of Testing Survey 2017 we can take away that the greatest challenge in today’s testing is the frequency with which applications change. Automation tools and automated test development struggle to keep up with these changes, forcing testers to constantly maintain and modify scripts and test cases.
Additionally, we cannot test everything due to time and budget constraints, and one of the most challenging tasks of testing a product is determining the acceptable number of tests needed to cover its most critical components.
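One toy way to frame that trade-off is as a budgeted selection problem: given an estimated execution cost per test and the critical components it covers, pick the subset that fits the available time. The greedy heuristic below is only a sketch under invented inputs, not a prescribed method.

```python
# Toy sketch: select tests under a time budget, favouring those that cover
# the most not-yet-covered critical components per minute of execution.
# The test data and scoring scheme are invented for illustration.

tests = [
    # (name, execution_minutes, critical components covered)
    ("checkout_happy_path", 8, {"payment", "cart", "catalogue"}),
    ("payment_declined",    5, {"payment"}),
    ("search_filters",      6, {"catalogue", "search"}),
    ("profile_update",      4, {"account"}),
]

def select_tests(tests, budget_minutes):
    covered, chosen, remaining = set(), [], budget_minutes
    candidates = list(tests)
    while candidates:
        # Value = new components covered per minute of runtime.
        best = max(candidates, key=lambda t: len(t[2] - covered) / t[1])
        candidates.remove(best)
        name, cost, components = best
        if cost <= remaining and components - covered:
            chosen.append(name)
            covered |= components
            remaining -= cost
    return chosen, covered

print(select_tests(tests, budget_minutes=15))
# picks "checkout_happy_path" and "profile_update" within the 15-minute budget
```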
Naturally, these facts trigger the thought: can we use ML to approach these problems?
An ML program can be designed to navigate through the product being tested (let’s say a website) without knowing anything about it, using a process called neuroevolution.
To do this, it must use inputs; in this case the inputs are the elements of the page’s HTML document. This is what the program “sees”: a simplified view of the body, divs, forms, labels, buttons, and so on.
By trial and error, it builds a neural network between those inputs and its outputs. A neuron is created when the program decides to select an element of type button on the HTML page, performs a left mouse click on it, and the page refreshes: the program learns by cause and effect that if it left-clicks that element type, something happens and it moves to another page. That is how a neuron is created. By running several simulations, the program can create several neurons that connect inputs to outputs, or even to other neurons.
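A minimal way to picture this trial-and-error wiring is a tiny table of connection strengths from element types (inputs) to actions (outputs), where connections whose actions actually changed the page are reinforced. The snippet below is a deliberately simplified stand-in for a real neuroevolution setup; the element types, actions and “page changed” signal are all invented for illustration.

```python
import random

# Simplified sketch of the cause-and-effect learning described above: the
# "network" is a table of connection strengths from element types to actions.
# Connections that make the page change are strengthened, mimicking how a
# useful "neuron" gets created and kept. All names here are illustrative.

ELEMENT_TYPES = ["button", "a", "input", "div", "label"]
ACTIONS = ["left_click", "type_text", "ignore"]

# Connection strengths, initially random: weights[(element_type, action)] -> float
weights = {(e, a): random.random() for e in ELEMENT_TYPES for a in ACTIONS}

def page_changed(element_type: str, action: str) -> bool:
    """Stand-in for the real environment: did the HTML page change after the action?"""
    return element_type in ("button", "a") and action == "left_click"

def run_simulation(steps: int = 100) -> None:
    for _ in range(steps):
        element = random.choice(ELEMENT_TYPES)  # what the program "sees"
        # Pick the currently strongest action, with some noise to keep exploring.
        action = max(ACTIONS, key=lambda a: weights[(element, a)] + random.gauss(0, 0.3))
        if page_changed(element, action):
            weights[(element, action)] += 0.1   # reinforce: a useful connection
        else:
            weights[(element, action)] -= 0.02  # weaken: nothing happened

run_simulation()
# After enough trials, (button, left_click) and (a, left_click) end up with the
# strongest connections: the program has "learned" cause and effect.
```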
With enough computational power, a neural network can come close to creating every simulation we could obtain from every combinatorial test that could be designed. With it, we could also identify every simulation that ended in an error page and concentrate efforts on finding more errors in that specific part of the product, or, for a given test scenario, identify all the different simulations that started from the same point and ended in the same expected result.
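For instance, once each simulation is recorded as the sequence of pages it visited, both of those analyses reduce to simple filtering; the sample data below is invented purely to show the idea.

```python
# Sketch: mining recorded simulations (sequences of visited pages). Sample data invented.
simulations = [
    ["catalogue", "product", "checkout", "error_500"],
    ["catalogue", "product", "checkout", "payment", "confirmation"],
    ["catalogue", "search", "product", "checkout", "payment", "confirmation"],
]

# Simulations that ended on an error page: concentrate testing effort there.
failing = [s for s in simulations if s[-1].startswith("error")]

# Simulations sharing the same starting point and the same expected end result.
same_journey = [s for s in simulations if s[0] == "catalogue" and s[-1] == "confirmation"]

print(failing, same_journey, sep="\n")
```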
So how can we take this further? Can we use it to create test scenarios? Yes, through evolution.
As an example of a Level 4 AI, we can make an AI program measure progress by how far it gets along a check-out flow. It would have to go through the catalogue page, product page, check-out page, delivery page, contract page, payment page and confirmation page. We would award incremental progress points for each consecutive HTML page the simulation reaches. Simulations where the program goes backwards or simply abandons the check-out flow score lower as a result of bad decisions and are discarded, while the simulations with the best decisions (of the new neural network), which went further and faster along the check-out flow, are incorporated into the new test suite and reused for the next generation of simulations. By doing so, the program evolves towards making the best decisions and reaching the final page. In the end we would have a suite of test cases with a few variants, should the tester find that one or more simulations are worth including in the permanent test suite.
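The selection step of that evolutionary loop can be sketched very compactly: score each simulation by how far it progressed along the check-out flow and carry only the best scorers into the next generation. The page names, scoring and population below are illustrative assumptions, not a real tool’s output.

```python
# Sketch of the fitness-and-selection step described above. Page names,
# scoring and the sample population are invented for illustration.

CHECKOUT_FLOW = ["catalogue", "product", "checkout", "delivery",
                 "contract", "payment", "confirmation"]

def fitness(visited_pages: list) -> int:
    """Incremental points for each consecutive flow page the simulation reached."""
    score = 0
    for expected, actual in zip(CHECKOUT_FLOW, visited_pages):
        if expected != actual:
            break          # went backwards or wandered off: stop awarding points
        score += 1
    return score

def next_generation(simulations: list, keep: int = 2) -> list:
    """Discard low scorers; the fittest simulations seed the next round of tests."""
    ranked = sorted(simulations, key=fitness, reverse=True)
    return ranked[:keep]

population = [
    ["catalogue", "product", "checkout", "delivery", "payment"],   # skipped a page
    ["catalogue", "product", "checkout", "delivery", "contract", "payment", "confirmation"],
    ["catalogue", "product", "catalogue"],                         # went backwards
]
print([fitness(s) for s in population])   # -> [4, 7, 2]
survivors = next_generation(population)   # the full check-out run is kept and reused
```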
AI tools are only at the beginning of their evolution in most companies, but it seems they are here to stay and to change our world into an “AI-first world”.
They are not here to replace testers – they are here to help them. We can program an AI to perform their repetitive and time-consuming tasks, so that testers save time and can spend it on different tasks. Additionally, there is always a range of tasks that should stay with testers, for example those related to interpersonal skills, such as communication. Even though Level 5 AI-driven testing tools are supposed to interact with the different team members and stakeholders, that is not quite the same as communication between individuals.
One of testers’ most common difficulties is test suite optimisation, which is exactly one of the strengths of AI. Tools of this nature can help avoid test case duplication, save automation effort, calculate test coverage, reduce tester fatigue and improve productivity. With enough computational power, a neural network can come close to creating every simulation we could obtain from every combinatorial test that could be designed, and so optimise the test suite.
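As a final illustration of the duplication point, near-duplicate test cases can be flagged simply by comparing their steps; the test cases and the similarity threshold below are invented for this sketch.

```python
# Sketch: flagging near-duplicate test cases by the overlap of their steps.
# The test cases and the 0.6 threshold are invented for illustration only.
from itertools import combinations

test_cases = {
    "TC-001": {"open catalogue", "add product to cart", "open checkout", "pay by card"},
    "TC-002": {"open catalogue", "add product to cart", "open checkout", "pay by voucher"},
    "TC-003": {"open catalogue", "search product", "open product page"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two step sets: 1.0 means identical, 0.0 means disjoint."""
    return len(a & b) / len(a | b)

duplicates = [
    (x, y) for x, y in combinations(test_cases, 2)
    if jaccard(test_cases[x], test_cases[y]) >= 0.6
]
print(duplicates)   # -> [('TC-001', 'TC-002')]: they differ only in the payment step
```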