To keep a finger on the pulse of developments in the software world, we need only look at the usual suspects: CIOs. What are CIOs looking at when it comes to real-time risk assessment across the stages of software development and delivery cycles? It turns out that CIOs the world over are exploring how Artificial Intelligence (AI) can help them achieve their digital transformation goals.
Software development is getting more complex by the day, with ever-shrinking delivery timelines. This means testers need to provide feedback and evaluations to development teams almost instantly. Releases that once took a month now take only days, with a release every other day, or even multiple releases within a single day!
AI can hold the key here, but the question is: how do we use AI to streamline the testing process?
What is the status of AI currently?
There are multiple stages at which AI can help move the software testing lifecycle forward. As a hypothetical example, let us look at testing web pages.
The likely stages in the adoption of AI in the hypothetical testing of web pages
Stage 1: AI checks the pages and verifies that the values on them match the expected ones. This helps you understand the difference between the actual page and the expected baseline.
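This first stage can be sketched as a plain baseline comparison. The sketch below is a minimal, invented model: pages are represented as 2-D lists of pixel values, where a real tool would decode rendered screenshots instead.

```python
# A minimal sketch of the exact-comparison stage: diff a rendered page
# against an expected baseline, value by value. Pages are modeled as 2-D
# lists of pixel values; a real tool would decode screenshots instead.

def diff_pages(actual, baseline):
    """Return the (x, y) coordinates where actual differs from baseline."""
    diffs = []
    for y, (row_actual, row_base) in enumerate(zip(actual, baseline)):
        for x, (a, b) in enumerate(zip(row_actual, row_base)):
            if a != b:
                diffs.append((x, y))
    return diffs

baseline = [[0, 0, 0], [0, 1, 0]]
actual   = [[0, 0, 0], [0, 1, 1]]  # one pixel changed
print(diff_pages(actual, baseline))  # [(2, 1)]
```

Every mismatch is reported individually here, which is exactly the limitation the next stage addresses.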
Stage 2: AI provides a higher level of understanding. If the same change appears on every page, AI recognizes it as one change and presents it to the human as such. AI can look at a page and differentiate between content changes and layout changes; when we use it to test responsive websites, the layout may shift slightly while the content should remain the same. At this stage, AI can help analyze a change, but cannot judge whether a page is correct or not.
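The content-versus-layout distinction described above can be illustrated with a toy classifier. Everything in this sketch is invented for illustration: elements carry a text string and a bounding box, and a change is labeled by which of the two differs.

```python
# A toy illustration of the second stage: classifying each element's change
# as 'content', 'layout', or 'unchanged'. The element names and bounding-box
# structure are invented for this sketch.

def classify_change(before, after):
    """Label each element's change by comparing its text and bounding box."""
    labels = {}
    for name, (text_before, box_before) in before.items():
        text_after, box_after = after[name]
        if text_after != text_before:
            labels[name] = "content"
        elif box_after != box_before:
            labels[name] = "layout"
        else:
            labels[name] = "unchanged"
    return labels

before = {"headline": ("Sale!", (0, 0, 100, 20)), "body": ("Hello", (0, 30, 100, 60))}
after  = {"headline": ("Sale!", (0, 0, 200, 20)), "body": ("Hi",    (0, 30, 100, 60))}
print(classify_change(before, after))
# {'headline': 'layout', 'body': 'content'}
```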
Stage 3: AI applies machine learning techniques to the page. It can look at the visual aspects of the page and figure out if the design is off, based on standard rules of design — viz., alignment rules, whitespace use, color and font usage, and layout rules. So far, AI only runs checks automatically; humans are still driving the test and clicking the links.
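One of the "standard rules of design" mentioned above — alignment — can be checked with a few lines of code. This is a rough sketch under invented assumptions: elements are (name, left-edge) pairs, and the tolerance threshold is arbitrary.

```python
# A rough sketch of one design rule from the third stage: elements in the
# same column should share a left edge within a small tolerance. The element
# data and the tolerance value are invented for illustration.

def misaligned(elements, tolerance=2):
    """Return names of elements whose left edge strays from the dominant one."""
    lefts = [left for _, left in elements]
    anchor = max(set(lefts), key=lefts.count)  # the most common left edge
    return [name for name, left in elements if abs(left - anchor) > tolerance]

elements = [("logo", 16), ("nav", 16), ("headline", 17), ("footer", 42)]
print(misaligned(elements))  # ['footer']
```

A production tool would learn such thresholds from data rather than hard-code them, but the principle — flag outliers against a learned or dominant norm — is the same.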
Stage 4: We are not here yet, though this stage is not far off. Here, AI will learn by watching real users drive the application. AI at this level can write the tests, and write the checks that verify the pages. However, it will still need to observe humans at work and optimize accordingly.
Today it is possible to use machine learning to collate and analyze the humongous amounts of data generated by way of incident management reports, server logs, system logs, database logs, application logs, and test logs. This data can be transformed into actionable intelligence that feeds into future testing, and the entire process can be automated to implement a self-healing testing ecosystem that creates automated test scripts and continuously refines and improves them. Advanced machine learning algorithms can go a step further, predicting root causes and prioritizing defect triage to enable faster incident resolution. This translates into significant quality gains at the application level, and vastly superior customer experiences at the end-user level.
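To make the log-to-triage idea concrete, here is a stripped-down sketch: a bag-of-words nearest-centroid classifier that routes a new log line to the team whose past incidents it most resembles. The teams, log lines, and the word-overlap scoring are all invented for illustration; a real pipeline would use a proper ML library and far more data.

```python
# A toy sketch of turning log text into triage intelligence: route a new
# incident to the team whose historical incident vocabulary overlaps most.
# Teams, logs, and scoring are invented; real systems use trained models.

from collections import Counter

history = {
    "database": ["deadlock detected on orders table", "connection pool exhausted"],
    "frontend": ["uncaught TypeError in checkout page", "layout broken on mobile"],
}

def centroid(texts):
    """Combine a team's past incidents into one bag-of-words profile."""
    total = Counter()
    for text in texts:
        total.update(text.lower().split())
    return total

centroids = {team: centroid(texts) for team, texts in history.items()}

def triage(log_line):
    """Route a new log line to the team with the most overlapping vocabulary."""
    words = Counter(log_line.lower().split())
    def overlap(team):
        return sum((words & centroids[team]).values())
    return max(centroids, key=overlap)

print(triage("deadlock detected during nightly batch"))  # database
```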
Broadly speaking, the use of AI in enterprise software testing today sits at the partial automation stage and is fighting hard to move into conditional automation. We can expect high levels of automation to become the de facto standard sooner rather than later.
When it comes to testing games and gaming applications, AI can play a huge role. You can use machine learning to test games such as Candy Crush, for instance, which are random in nature and where anything can happen.
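For games "where anything can happen", one workable approach is invariant-based fuzz testing: drive the game with random moves and assert properties that must hold no matter what happens. The toy game below is entirely invented for the sketch.

```python
# A minimal sketch of fuzz-style testing for randomized games: feed a toy
# game random moves and check its invariants after every step. The game
# model and its rules are invented for illustration.

import random

class ToyMatchGame:
    """A trivial stand-in for a match-3 style game."""
    def __init__(self):
        self.score = 0
        self.moves_left = 30

    def play(self, move):
        if self.moves_left <= 0:
            return  # game over: further moves are ignored
        self.moves_left -= 1
        if move == "match":
            self.score += 10

def fuzz(seed, steps=100):
    """Drive the game with seeded random moves, checking invariants throughout."""
    rng = random.Random(seed)
    game = ToyMatchGame()
    for _ in range(steps):
        game.play(rng.choice(["match", "swap", "shuffle"]))
        # Invariants that must hold regardless of the random move sequence.
        assert game.score >= 0
        assert 0 <= game.moves_left <= 30
    return game.score

fuzz(seed=42)
```

Seeding the random generator matters: when a run finds a bug, the same seed reproduces the exact move sequence.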
In essence, AI can easily be used to automate the most routine tasks and would be the way to go in scenarios where standard testing techniques are hard to implement. This would empower testers to avoid menial jobs and spend more time on tasks that require creative thinking, problem-solving, and decision-making skills.
Application of AI in Testing: SHABD and HORUS
With the increasing adoption of Amazon’s Alexa, Google Assistant, and Apple’s Siri, the number of users who prefer to use voice assistants for their everyday work is expected to increase significantly.
Aligned with these user expectations, Zuci came up with SHABD, a voice assistant for Testing. SHABD assists testers in designing test cases using voice technology and creates mind maps for any application by taking voice inputs. SHABD is available for everyone’s use — check out the source code at https://github.com/zucisystems/shabd or view the demo at https://www.youtube.com/watch?v=GPZCRHat8u0
HORUS is an end-to-end digital reimagination platform that enables enterprises to smoothly transition through their digital transformation journey. HORUS delivers the next level of empowerment to engineering teams across levels, right from CxO and Architects to Developers and Engineering Managers.
While most solutions focus on code and development processes, HORUS takes a giant leap forward by adding the missing element of context, and delves into orthogonal aspects of development.
Using the concept of “cohorts” (buckets or classes of hundreds of metrics), it creates context-sensitive, real-time views of every aspect of the software lifecycle — including static code objects (SCOs), processes, releases, production/runtime, business, accounting, and overall test coverage. These insights are then translated into meaningful and relevant action items that streamline software development and drive critical business decisions.
Zuci is revolutionizing the way software platforms are engineered with the help of patented AI and deep learning models. Learn more about Zuci at www.zucisystems.com
About the author
Sujatha is Head of Quality Engineering at Zuci Systems. She has managed quality across a variety of product lifecycles, domains, and technology stacks. She specializes in test automation and performance testing, and leads the Quality Engineering team through end-to-end delivery. She loves to talk about any technology that can improve quality. Find her at Sujatha.