Gilded Rose (Approval Testing)
Learn how Approval testing can help you when dealing with legacy code
Objectives
Learn a practice that will help you be quickly productive in an unfamiliar environment
Use Approval Testing to deal with legacy code
Before we start
Clone the repository here
Make sure you can build it
Connection
On a sticky note, write a question you want answered about Approval Testing
Concepts
What is Approval Testing?
Approval Tests are also called Snapshot Tests or Golden Master.
Approval tests work by creating an output that needs human approval / verification.
Once the initial output has been defined and “approved”, the test will keep passing as long as the program produces the same output.
As soon as the output differs from the approved one, the test fails. The developer then has two choices:
If the change in the output was unintended then fix the bug that’s causing the change in the output.
“Approve” the new output as the baseline for future tests.
Output can be anything you want, as long as it can be compared to another copy in a consistent manner.
The difference with Assert-based tests
Unit testing asserts can be difficult to use. Approval tests simplify this by taking a snapshot of the results, and confirming that they have not changed.
In normal unit testing, you say assertEquals(5, person.getAge()). Approval tests allow you to do this when the thing that you want to assert is no longer a primitive but a complex object. For example, you can say Approvals.verify(person).
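The mechanism can be sketched without any library. Below is a minimal, hand-rolled illustration of the snapshot idea (the file names and the Person string are made up for the example):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class MiniApproval {
    // Minimal illustration of the approval mechanism: compare the actual
    // output with a previously approved snapshot stored on disk.
    static void verify(String actual, Path approvedFile) throws Exception {
        String approved = Files.exists(approvedFile) ? Files.readString(approvedFile) : "";
        if (!actual.equals(approved)) {
            // On mismatch, write a "received" file for a human to inspect
            Files.writeString(approvedFile.resolveSibling("mini.received.txt"), actual);
            throw new AssertionError("output differs from the approved snapshot");
        }
    }

    public static void main(String[] args) throws Exception {
        Path approved = Files.createTempFile("mini", ".approved.txt");
        Files.writeString(approved, "Person{name=Alice, age=5}");
        verify("Person{name=Alice, age=5}", approved); // same output -> passes
        System.out.println("snapshot matches");
    }
}
```

A real library adds diff-tool integration and naming conventions on top of this, but the core loop is exactly this comparison.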
Main characteristics
Test cases check actual program output against a previously approved value, and any difference will fail the test.
Normally, a human inspects and approves some actual program output when creating a test case.
Raw program output may be processed into a more convenient format before being used for approval and comparison.
Design a Printer to display complex objects, instead of many assertions.
If actual program output is not yet available, the approved value may be a manual sketch of the expected output (useful when you do TDD).
Approved values are stored separately from the source code of the test case, although in the same VCS repository.
When a test case fails, you can use a tool to inspect the differences and easily update the approved value.
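The "Printer" idea mentioned above can be sketched like this; the Item record is a stand-in for the kata's class, and the output format is an arbitrary choice:

```java
public class ItemPrinter {
    // Stand-in for the kata's Item (name, sellIn, quality) -- an assumption
    record Item(String name, int sellIn, int quality) {}

    // The "Printer": renders a complex object as a stable, human-readable
    // string that can be approved and diffed as a whole, instead of
    // writing one assertion per field.
    static String print(Item item) {
        return item.name() + ", " + item.sellIn() + ", " + item.quality();
    }

    public static void main(String[] args) {
        System.out.println(print(new Item("a common item", 0, 0)));
        // prints: a common item, 0, 0
    }
}
```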
How to write Approval Tests?
When you start working on a new feature:
Tools
There are many tools, depending on the language you use.
The most common one is ApprovalTests, available for many languages (from JavaScript to C#, through C++, ...)
Concrete Practice
Read the specifications in the Readme
Look at the code
Individually:
What do you think about this code?
If you had to add a new type of item, what would be your strategy?
Think about testing
How many tests would you write before being confident enough to refactor the code?
Which ones?
Exercise
We have recently signed a supplier of conjured items. This requires an update to our system:
"Conjured" items degrade in Quality twice as fast as normal items
Draw a diagram representing different paths of the updateQuality
Add a first test
Based on the specifications, write a first test using JUnit 5 (the dependency is already in your pom):
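A possible first test might look as follows. It assumes the kata's usual shape: an Item(name, sellIn, quality) constructor, a GildedRose(Item[]) constructor with public fields, and "a common item" as an arbitrary name:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class GildedRoseTests {
    @Test
    void updateQuality_degradesACommonItem() {
        Item[] items = new Item[] { new Item("a common item", 10, 20) };
        GildedRose app = new GildedRose(items);

        app.updateQuality();

        // Per the specifications: both sellIn and quality decrease by one
        assertEquals(9, app.items[0].sellIn);
        assertEquals(19, app.items[0].quality);
    }
}
```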
Let's use an approval test
Add the ApprovalTests dependency in your pom.xml:
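A sketch of the dependency declaration (com.approvaltests / approvaltests are the published Maven coordinates; the version below is a placeholder, check Maven Central for the current one):

```xml
<!-- version is a placeholder: check Maven Central for the latest release -->
<dependency>
    <groupId>com.approvaltests</groupId>
    <artifactId>approvaltests</artifactId>
    <version>22.3.3</version>
    <scope>test</scope>
</dependency>
```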
Refactor the existing test using ApprovalTests:
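The refactored test might look like this (assuming the kata's public Item fields; the string format passed to Approvals.verify is an arbitrary choice):

```java
import org.approvaltests.Approvals;
import org.junit.jupiter.api.Test;

class GildedRoseTests {
    @Test
    void updateQuality() {
        Item[] items = new Item[] { new Item("a common item", 0, 0) };
        new GildedRose(items).updateQuality();

        // The whole observable state is rendered as a string and
        // compared against the .approved.txt snapshot
        Approvals.verify(items[0].name + ", " + items[0].sellIn + ", " + items[0].quality);
    }
}
```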
Run the test
The ApprovalTests library compares two files:
GildedRoseTests.updateQuality.received.txt, generated from what is inside the verify method call
GildedRoseTests.updateQuality.approved.txt, the content that has already been approved
In this case we have not approved anything yet, so our approved file is empty.
The current implementation is functionally correct, so we must approve what is currently generated / calculated by the system.
Approve the content of the file:
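Approving can be as simple as copying the received file over the approved one. A sketch (file names follow the class/method naming shown above; the first line only simulates the received file a real test run would generate):

```shell
# Simulate the received file (in practice the failing test run creates it)
echo "a common item, -1, 0" > GildedRoseTests.updateQuality.received.txt

# Approving = the received output becomes the new Golden Master
cp GildedRoseTests.updateQuality.received.txt \
   GildedRoseTests.updateQuality.approved.txt
```

Most ApprovalTests reporters can also do this for you from the diff tool they open on failure.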
If you run the test again, it should be green now
When you work with Approval Tests, never commit the received files produced by test runs; commit only the approved files (the Golden Master or Snapshot).
What about coverage ?
Before refactoring, a good practice is to be confident that the tests cover the code you want to refactor. To do so, run your tests with Coverage:
Because the code is full of hardcoded strings and branching paths, there is room for improvement in our code coverage.
If you use IntelliJ IDEA
By default the coverage information is limited, but you can get more detail by editing the "Run/Debug" configurations and enabling the Tracing option.
If you run the test again, you should now have more information and new colors on the screen:
Use Code Coverage to increase our confidence
When you use Code Coverage or design tests, I recommend always keeping your Subject Under Test in front of you: split your screen vertically.
Use CombinationApprovals
CombinationApprovals allows you to combine many inputs in the same approval test.
We provide the function under test as the first parameter, followed by the input parameters.
Refactor the test with CombinationApprovals
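A sketch of such a refactoring. The item names and value ranges here are arbitrary choices, and updateQualityFor is a hypothetical helper; verifyAllCombinations takes the function first, then one array per parameter:

```java
import org.approvaltests.combinations.CombinationApprovals;
import org.junit.jupiter.api.Test;

class GildedRoseTests {
    @Test
    void updateQuality() {
        CombinationApprovals.verifyAllCombinations(
                this::updateQualityFor,
                new String[] {"a common item", "Aged Brie", "Sulfuras, Hand of Ragnaros"},
                new Integer[] {-1, 0, 5, 11},   // sellIn values
                new Integer[] {0, 1, 49, 50});  // quality values
    }

    // Hypothetical helper: runs one update and prints the resulting item
    private String updateQualityFor(String name, Integer sellIn, Integer quality) {
        Item[] items = new Item[] { new Item(name, sellIn, quality) };
        new GildedRose(items).updateQuality();
        return items[0].name + ", " + items[0].sellIn + ", " + items[0].quality;
    }
}
```

The snapshot then contains one line per combination, which makes adding an input value (e.g. a new sellIn boundary) a one-character change.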
Note that the received version has changed: when you use CombinationApprovals, a description of the input combination is added for each line:
[a common item, 0, 0] => a common item, -1, 0
Cover new lines of code
By discovering them with the Code Coverage tool:
In the end you should have 100% code coverage, with a test looking like this: