I’m Gem Wright-Ingle and I’m a process automation engineer on the Nintex K2 DevOps team. Our labs engineering team was struggling to manage test cases sustainably, so we built an in-house test automation solution that leverages Selenium, and I’m excited to share details with other engineering professionals looking to automate manual test cases.
A few years ago, the labs team set out on a precarious journey: automating our exceptionally large set of manual test cases (the conditions that determine whether a system is functioning correctly), all of which we were required to retest every release. The result was complex, hair-pulling regression cycles that lasted for weeks and sometimes months.
At first, the idea of automating all these test cases was met with apprehension; we simply didn't have the time or resources to fully commit to it. What changed the development managers' minds was a demo of the tool, showing what it could do for us and proving how much time it could save.
Testing Nintex K2 with Selenium
Our first approach was to see whether existing software or technology could help us automate these test cases. We evaluated many different tools, each with its own strengths and weaknesses, but only Selenium worked reasonably well, and we determined that a custom tool would be required to take full advantage of this endeavor. Thus a solution was born, built on the Selenium OpenQA framework.
What is Selenium?
If you’ve never heard of Selenium, it is an open-source record-and-playback test automation tool for web applications. It records tests exactly as a user or tester would use the app, saves them in a readable format, and makes them available to rerun at a later stage of your software development lifecycle. It’s easy to install, and you don’t need coding experience to use it.
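To give a feel for what a recorded test boils down to once played back, here is a minimal sketch: find elements, act on them, assert on the result. The `FakeDriver` and the locators below are hypothetical stand-ins for a real WebDriver session and a real login page, so the flow can be shown without launching a browser.

```python
class FakeDriver:
    """Hypothetical stand-in for a real Selenium WebDriver session."""
    def __init__(self):
        self.fields = {}
        self.title = "Login"

    def find_element(self, locator):
        return FakeElement(self, locator)


class FakeElement:
    """Stand-in for a located page element."""
    def __init__(self, driver, locator):
        self.driver = driver
        self.locator = locator

    def send_keys(self, text):
        # Record the text typed into this field.
        self.driver.fields[self.locator] = text

    def click(self):
        # Pretend a successful login navigates to the dashboard.
        if self.driver.fields.get("#username") and self.driver.fields.get("#password"):
            self.driver.title = "Dashboard"


def login_test(driver):
    """Playback of a recorded login scenario: type, click, assert."""
    driver.find_element("#username").send_keys("tester")
    driver.find_element("#password").send_keys("secret")
    driver.find_element("#submit").click()
    assert driver.title == "Dashboard"


login_test(FakeDriver())
```

With a real driver, only the `FakeDriver` line would change; the recorded steps themselves keep the same shape.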
There are some limitations to Selenium. For instance, when we first started this project, it was only available for Firefox; no other browsers were supported. We also couldn’t automate some of our most popular capabilities, such as drag-and-drop, because the necessary steps weren’t available or weren’t supported by our product code.
The Selenium OpenQA framework
We decided to build an in-house testing solution that leverages the Selenium framework but adds extra support for our product-specific needs, including the ability to use the tool with all browsers. This also gave us the ability to add features to the tool if and when we needed to. Having a custom in-house tool came with its own challenges, but also a lot of success. One of the most immediate changes was that our regression cycles became less stressful. We finally had more time to use our product and automate things we would never have thought of testing before.
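One way a wrapper framework can add cross-browser support on top of Selenium is a driver factory keyed by browser name, so tests never construct a driver directly. This is a hedged sketch, not our actual framework; the launcher functions return placeholder strings where a real framework would call the corresponding WebDriver constructors.

```python
# Registry mapping a browser name to a function that launches it.
DRIVER_REGISTRY = {}


def register_browser(name):
    """Decorator that registers a launcher under a browser name."""
    def wrap(launcher):
        DRIVER_REGISTRY[name.lower()] = launcher
        return launcher
    return wrap


@register_browser("firefox")
def launch_firefox():
    return "firefox-session"   # placeholder for webdriver.Firefox()


@register_browser("chrome")
def launch_chrome():
    return "chrome-session"    # placeholder for webdriver.Chrome()


def create_driver(browser):
    """Look up and launch the requested browser, case-insensitively."""
    try:
        return DRIVER_REGISTRY[browser.lower()]()
    except KeyError:
        raise ValueError(f"Unsupported browser: {browser}") from None
```

Adding a new browser then means registering one more launcher, with no changes to the tests themselves.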
Our in-house testing solution
Our biggest challenges were the flakiness of our tests and the turnaround time of our test runs. Working through failing tests to identify the cause took an exceptionally large amount of our time, almost on par with manual regression. It became a full-time focus for one of our team members to work through the test results daily and determine whether tests were failing due to product issues or were just flaky, random failures.
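A common way to automate part of that daily triage is to rerun a failing test a few times and classify the outcome: a test that fails once but passes on retry is flagged flaky, while one that fails every attempt is a candidate product issue worth a human look. The sketch below assumes tests signal failure by raising `AssertionError`; all names are illustrative.

```python
def classify_test(test_fn, retries=3):
    """Run a test up to `retries` times and classify the result.

    Returns "stable" (passed first try), "flaky" (passed after at
    least one failure), or "product-issue" (failed every attempt).
    """
    failures = []
    for attempt in range(retries):
        try:
            test_fn()
            # Passed: flaky if it failed on an earlier attempt.
            return "flaky" if failures else "stable"
        except AssertionError as exc:
            failures.append(exc)
    return "product-issue"
```

A triage script can then route "product-issue" results to the team and park "flaky" ones for separate stabilization work, instead of someone eyeballing every red test.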
After sifting through thousands of failing tests over the last few years, we identified a few key areas where our tests were not robust. The results gave us a shift in focus: we decided to do a proof of concept in which our test build-up would be done in code. Mixing coded tests with UI tests would reduce the number of things the UI had to do.
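The split can be sketched with two common patterns: test build-up happens through an API client rather than by driving the UI, and the remaining UI steps sit behind a Page Object so the test reads as intent. The `ApiClient`, page, and locators below are hypothetical, not taken from the K2 product.

```python
class ApiClient:
    """Hypothetical API client used for coded (non-UI) test build-up."""
    def __init__(self):
        self.users = set()

    def create_user(self, name):
        self.users.add(name)   # setup done in code, not via clicks


class FakeDriver:
    """Recording stand-in for a real WebDriver session."""
    def __init__(self):
        self.clicks = []

    def click(self, locator):
        self.clicks.append(locator)


class WorkflowPage:
    """Page Object: the only layer that knows about UI locators."""
    def __init__(self, driver):
        self.driver = driver

    def open_workflow(self, name):
        self.driver.click(f"a[title='{name}']")

    def deploy(self):
        self.driver.click("#deploy-button")


def deploy_workflow_test(api, page):
    api.create_user("tester")            # coded build-up: no UI involved
    page.open_workflow("Leave Request")  # UI steps kept to the essentials
    page.deploy()
```

Because only the Page Object knows the locators, a UI change means editing one class rather than every test that touches that screen.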
These coded UI tests were superior in many ways. They integrated seamlessly with our release pipelines and gave us a direct link into our test results, which we were able to feed into our automated Power BI reporting. This made our release reporting far more accessible and collaborative, and a single person was no longer required to give the go-ahead for a release.
The future of test automation
Going forward, there are some very exciting test automation initiatives that use machine learning and AI to support the test automation effort. One of the biggest focuses right now is self-healing tests, which automatically adjust to product changes so we spend less time maintaining the same tests. We are hoping to incorporate this, and many other improvements, into our UI test automation framework soon.
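The core of the self-healing idea can be sketched without any ML: keep an ordered list of candidate locators per element, and when the primary locator stops matching, fall back to the next candidate and record the heal so the test data can be updated. This is a simplified illustration, with a plain dict standing in for the page.

```python
def find_with_healing(page, locators, healed_log):
    """Try candidate locators in order; log any fallback as a heal.

    `page` is a dict mapping locators to elements (a stand-in for a
    live DOM); `healed_log` collects (old, new) locator pairs.
    """
    for i, locator in enumerate(locators):
        if locator in page:          # locator still matches the page
            if i > 0:                # a fallback was needed: self-heal
                healed_log.append((locators[0], locator))
            return page[locator]
    raise LookupError("No candidate locator matched the page")
```

Production self-healing tools add the clever part, ranking candidate locators by similarity to the original element, but the maintenance payoff comes from this same fallback-and-report loop.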
Interested in exploring more engineering posts from our team here at Nintex? Click here to discover more.