It’s been a while since I’ve posted some testing automation stuff. Not because I haven’t been busy automating; just not a lot has been new with it. I’m in maintenance and refactoring mode. (Also, for a while we weren’t calculating maintenance hours into sprint planning, and I got a little behind.) As our group matures, my goal in testing is advancing. This is no longer just about best practices or covering my changes; it’s about getting meaningful information out of our tests. JUnit Rules are going to help me accomplish that.
First Thoughts on Test Reporting
So when I first got the directive of measuring tests, I looked into the SauceLabs REST API. It didn’t take long for me to realize that wasn’t going to be a very good option. I’d have to poll the data periodically, since there are no webhooks. There’s also no good way to query for a specific test run. I’d have to pull the most recent jobs, and iterate through them to see if I’d already recorded them or not before acting on the data. Woof.
Ever wonder why you start with a crazy and/or bad solution instead of an easy and/or good solution? Me too. This is a classic example of that. I may not use SauceLabs forever, so designing a solution around it is impractical. I will, however, use JUnit (or at least some other testing framework) to run tests for as long as I run tests. Let’s have JUnit help me out! But how can that happen?
Step 1: Annotate
One of my issues in test maintenance is that, on occasion, I write a lazy test name. Shocking. I have a nice written plan in test management in ServiceNow, but it’s not carried over to my Java project. While I still don’t have a way to tie the ServiceNow test into my Java project (yet), I do have a beginning toward that end: a new annotation, ServiceNowTest.
A ServiceNowTest annotation would look like this, at least for now:
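The original snippet didn’t survive here, so this is a minimal sketch of what such an annotation could look like. The field names (`testId`, `stories`) are my guesses, not the post’s exact definition:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Sketch only: field names are assumptions.
// RUNTIME retention matters -- the rule below reads this via reflection,
// so the annotation has to survive past compile time.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ServiceNowTest {
    String testId();                // the test record's identifier in ServiceNow
    String[] stories() default {};  // story numbers related to this test
}
```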
Great, how do we use it?
Step 2: Write a Rule
The TestWatcher class has several callback methods that run at various points before or after a test. For this effort, we only care about two: failed and succeeded.
First, we use the Reflection API to look up the annotation, if it exists. That way, tests that aren’t annotated keep working exactly as before. Next, we pull the annotation’s data and use the ServiceNow REST API to post it to the tm_test_instance table. You can see I’ve modified the table a little bit: I’ve added a flag for the test being automated, and a list for stories related to a test.
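The steps above can be sketched roughly like this, assuming JUnit 4’s TestWatcher, Java 11’s built-in HttpClient, and made-up annotation fields, endpoint URL, and JSON field names (none of these are the post’s actual values):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

// Stand-in for the post's annotation; field names are assumptions.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ServiceNowTest {
    String testId();
    String[] stories() default {};
}

class ServiceNowReportRule extends TestWatcher {

    @Override
    protected void succeeded(Description description) {
        report(description, "passed");
    }

    @Override
    protected void failed(Throwable e, Description description) {
        report(description, "failed");
    }

    private void report(Description description, String status) {
        // Reflection step: bail out quietly if the test isn't annotated,
        // so unannotated tests don't blow up the run.
        ServiceNowTest ann = description.getAnnotation(ServiceNowTest.class);
        if (ann == null) {
            return;
        }
        // POST the result to the (modified) tm_test_instance table.
        // The instance URL here is a placeholder.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.service-now.com/api/now/table/tm_test_instance"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(buildPayload(ann, status)))
                .build();
        try {
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.discarding());
        } catch (Exception ex) {
            // A reporting hiccup shouldn't fail the test itself.
            ex.printStackTrace();
        }
    }

    // Separated out so the payload can be inspected without hitting the network.
    static String buildPayload(ServiceNowTest ann, String status) {
        return String.format(
                "{\"test\":\"%s\",\"status\":\"%s\",\"automated\":true,\"stories\":\"%s\"}",
                ann.testId(), status, String.join(",", ann.stories()));
    }
}
```

A test class would then attach this with `@Rule public ServiceNowReportRule reporter = new ServiceNowReportRule();` so every annotated test reports itself when it passes or fails.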
Step 3: Profit
With this up and running in the repo, our tests can now tell ServiceNow what happened after each run.
Next up is creating an annotation to flag which tests can run in which environments. This is exciting, because it enables the idea of production-safe tests (tests that aren’t manipulating data). Currently, we don’t use our automated testing on production, even for releases. But with this added functionality, I’m hoping to push for certain tests to run in prod daily to keep an eye on things.
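Since that annotation doesn’t exist yet, here’s only a hypothetical sketch of the idea: an environment flag plus a check that could back a rule which skips tests not marked safe for the current environment. All names here (`RunsIn`, `Environment`, `EnvironmentGate`) are mine, not anything from the post:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

// Hypothetical environment list; adjust to your actual environments.
enum Environment { DEV, QA, PROD }

// Flags the environments a test is safe to run in.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface RunsIn {
    Environment[] value();
}

class EnvironmentGate {
    // Returns true when the test is flagged for the current environment.
    // Unannotated tests are treated as NOT production safe, so nothing
    // touches prod data unless explicitly opted in.
    static boolean allowedIn(Environment current, RunsIn ann) {
        if (ann == null) {
            return current != Environment.PROD;
        }
        return Arrays.asList(ann.value()).contains(current);
    }
}
```

A rule could read this the same way the ServiceNowTest annotation is read, and skip the test (e.g. via JUnit’s Assume) when the check returns false.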