With our upgrade from NAV 2009 R2 to NAV 2016, many new things became available to us at The Learning Network, formerly known as Van Dijk Educatie. Technical improvements like the performance of the service tier, which was one of the major reasons for wanting to leave NAV 2009 behind. But of course, next to a vast number of functional features, also the availability of PowerShell cmdlets, upgrade codeunits and test automation. Those who have been following me one way or another know that the latter subject has my special attention: testing and test automation.
- My series on Test-Driven Development and the testability framework
- The presentations and webinars on the Test Automation Suite for NAV Skills (July 13, 2016 and May 2, 2017), QBS (July 7, 2016), NAVUG (May 10, 2016 and April 6, 2017) and the Dutch Dynamics Community (January 12, 2016 and June 13, 2017)
- The Q&A Jan Hoek initiated
- The upcoming workshops (Getting Microsoft automated tests working on your solution and Writing your own automated test) and session at NAV TechDays 2017
In today's post I would like to make a start at sharing our approach and findings in using the Test Toolkit that MS has released on each NAV product DVD since NAV 2016. For that I have set up a kind of framework below that allows me to refer back to some of its parts when elaborating on them in posts to come.
So here we go, fasten your seatbelts and stay tuned. And … do not be afraid to ask.
Goal
Our primary goal for this exercise was to set up test collateral, based on the standard MS application tests, to be used as a regression test suite, by
- Getting as many standard application tests as possible running on our "solution"
- Running these successful tests against the latest version of our stable code on a daily basis (see the sketch after this list)
- Noting whether any of the formerly successful tests have failed (preferably none)
- Analyzing failures and fixing them in either app or test code
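To give an impression of what such a daily run could look like: a minimal sketch in PowerShell, assuming the NAV 2016 administration shell is loaded; the server instance name and the list of test codeunit IDs are placeholders for your own environment, not our actual setup.

```powershell
# Hypothetical daily regression driver: run each selected test
# codeunit on the service tier and record success or failure.
$testCodeunits = 134000, 134327, 137295   # example IDs; use your own selection

$results = foreach ($id in $testCodeunits) {
    try {
        Invoke-NAVCodeunit -ServerInstance NAV2016 `
            -CompanyName 'CRONUS Nederland BV' -CodeunitId $id
        [pscustomobject]@{ Codeunit = $id; Result = 'Success' }
    } catch {
        [pscustomobject]@{ Codeunit = $id; Result = "Failure: $($_.Exception.Message)" }
    }
}

# Persist the outcome so day-over-day regressions stand out
$results | Export-Csv -Path "TestRun_$(Get-Date -Format yyyyMMdd).csv" -NoTypeInformation
```

In practice you would hook such a script into a scheduled task or build server, but the principle stays the same: run, record, compare with yesterday.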
Basic Plan
The basic plan to achieve this was to:
- Run standard tests against our stable code (read more)
- Select all successful tests (steps 2 to 4 are sketched after this list)
- Export these as one suite
- Import as a new, separate suite
- Run this suite against our stable code
- Get more standard tests running successfully to increase coverage
- Repeat steps 1 to 6, until all standard tests, or at least a reasonable subset, run successfully
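Steps 2 to 4 can be scripted on top of the results file from the previous sketch. Again a sketch only, with assumed file names; how you materialize the new suite in the Test Tool afterwards (manually, via RapidStart, or otherwise) is your own choice.

```powershell
# Take yesterday's run, keep the codeunits that succeeded, and save
# them as the definition of a new, separate suite.
$yesterday = "TestRun_$((Get-Date).AddDays(-1).ToString('yyyyMMdd')).csv"

Import-Csv -Path $yesterday |
    Where-Object { $_.Result -eq 'Success' } |
    Select-Object -ExpandProperty Codeunit |
    Set-Content -Path 'StableSuite.txt'
```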
Assumptions
With any endeavor there are always a number of assumptions. Or should I say loads?
Well, basically we had these two, being that all MS tests …
- … run successfully in CRONUS
Proved to be nearly true. Approx. 1 ‰ failed; well, automated tests are software too
- … are independent, so any test function in any codeunit can be run independently of the other tests in the same test codeunit (probed with the sketch below)
Proved to be true for most of the test functions, but not for all (unfortunately)
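To probe the second assumption yourself, you can call a single test function directly from the administration shell. A minimal sketch; server instance, codeunit ID and function name are examples, not actuals.

```powershell
# Run one test function in isolation, outside its codeunit's
# normal execution order.
Invoke-NAVCodeunit -ServerInstance NAV2016 `
    -CompanyName 'CRONUS Nederland BV' `
    -CodeunitId 137295 `
    -MethodName 'SomeTestFunction'
# If this fails while the full codeunit run passes, the function most
# likely depends on data set up by an earlier test in the same codeunit.
```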
Basic Environment
This is the world we live in at The Learning Network:
- App: NAV 2016 NL – RTM (technically CU11)
- Database: CRONUS Nederland BV
- Tests: NAV 2016 NL – RTM, 16,128 tests
- Customization: ca. 400 standard / ca. 630 own objects
To be continued …