I want to speed up my integration testing by making tests in the following way:
1/ Setup: create records in the database with fixed data, even fixing (freezing) the current time, so the inserted_at values are the same on each test run.
2/ Do some actions (load an LV or non-LV page, click a link, or trigger an LV action).
3/ KEY PART: instead of multiple assertions on phrases or HTML tags/attributes, I call my CustomTesting.assert(html, call_identifier) function, which compares the HTML output (the whole webpage) to the output stored during the previous successful test run for this particular assertion call (identified by call_identifier).
-
During the first run the HTML output is sent to the browser so I can visually inspect that everything is fine. Then I click a “Store” button at the very bottom to store this HTML output for future test runs.
-
When such an assertion fails, it will open two tabs in the browser showing a diff (like kdiff3 or git diff). The first tab will load the HTML from the test run, with each difference surrounded by a div tag with a thick red border; in the second tab the same happens for the expected HTML, with green borders. Below there will be a “Store” button to replace the expected HTML with the newly produced HTML if I want to.
4/ I can continue with more actions and CustomTesting.assert() calls.
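The fixed-time setup from step 1 might look roughly like this — a sketch only, where `Post`, `MyApp.Repo`, and the timestamp are illustrative names, and which relies on Ecto only autogenerating timestamps that are nil:

```elixir
# Hypothetical test setup: pin inserted_at/updated_at so every run
# produces byte-identical HTML (schema and repo names are made up).
@fixed_time ~N[2024-01-01 12:00:00]

setup do
  post =
    %Post{title: "Hello", inserted_at: @fixed_time, updated_at: @fixed_time}
    |> MyApp.Repo.insert!()

  %{post: post}
end
```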
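The core of CustomTesting.assert/2 could be sketched like this — hypothetical module, directory, and env flag, with the browser interaction stubbed out:

```elixir
defmodule CustomTesting do
  @moduledoc """
  Sketch of the snapshot-style assert described above (not a real library).
  Snapshots are stored as files under test/snapshots/, one per call_identifier.
  """
  @snapshot_dir "test/snapshots"

  def assert(html, call_identifier) do
    path = Path.join(@snapshot_dir, "#{call_identifier}.html")

    cond do
      # First run, or bulk-accept mode: store the new output.
      not File.exists?(path) or System.get_env("SNAPSHOT_MODE") == "accept" ->
        File.mkdir_p!(@snapshot_dir)
        File.write!(path, html)
        open_in_browser(html)

      File.read!(path) == html ->
        :ok

      true ->
        # Mismatch: render both versions with differences highlighted.
        open_diff_in_browser(html, File.read!(path))
        raise ExUnit.AssertionError,
          message: "snapshot #{call_identifier} changed"
    end
  end

  # Placeholders for the red/green diff UI described above.
  defp open_in_browser(_html), do: :ok
  defp open_diff_in_browser(_got, _expected), do: :ok
end
```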
Advantages I see over using several (sometimes more than 10) small assertions to check that the key elements of the webpage are OK:
1/ I don’t have to write these mini assertions, and I don’t have to update them manually when my main code changes. I don’t have to decide what is important to assert and what isn’t, or worry about details like whether the sidebar could contain the same phrase I’m asserting on (now or in a future version of my app).
2/ The full-HTML assertions are easy to manage even though they are brittle (easy to break). If I change my footer, for example, all the tests will fail, but CustomTesting can have a function that re-runs all the tests and replaces all the stored HTML output (because I know the footer change is the only change and won’t break my actual code, so I don’t have to inspect each test manually).
3/ Manual inspection of failed tests happens in the browser, with red/green regions guiding me to the differences. I can fix my main code, fix my test setup/actions, or click the “Store” button at the bottom to fix the test assertion.
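The bulk-accept function from advantage 2/ could be as small as a Mix task that sets an env flag telling the assert function to overwrite every stored snapshot instead of comparing — again a sketch, with `SNAPSHOT_MODE` and the task name being made-up conventions:

```elixir
# Hypothetical "accept all snapshots" task: mix snapshots.accept_all
defmodule Mix.Tasks.Snapshots.AcceptAll do
  use Mix.Task

  @shortdoc "Re-runs the test suite, replacing all stored HTML snapshots"
  def run(_args) do
    # The assert function would check this flag and write instead of compare.
    System.put_env("SNAPSHOT_MODE", "accept")
    Mix.Task.run("test")
  end
end
```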
What bothers me:
- I can’t find a similar approach or testing tool being used in any programming language (maybe there is one). If there isn’t, there is probably a good reason why? The only similar thing I found is snapshot testing (an overkill in my opinion), in its visual-regression form: it takes screenshots of the page (like .jpg images) and then compares the images.
Can you think of reasons why this approach is a bad idea, and stop me before I start implementing it?