cantest: avoid regression in HTML5 canvas rendering code
In a project we are working on, we have quite a lot of code that eventually draws on an HTML5 canvas. Since we work in a team and have multiple layers of code involved in generating the final result, there’s always a chance for regression (as in any other codebase). This is applicable to any codebase that generates visual elements on an HTML5 canvas.
As an initial step, we needed a very basic tool that can simply run a canvas drawing function and compare the result to a predefined expectation PNG. This way, we can create a set of tests that protect us against regressions. In the future we might extend this tool to support testing regression of dynamic scenes as well.
node-cantest is a simple library/tool that runs test functions (exported by a node.js module) and compares the rendered result to a pre-existing expectation PNG file. If the results differ, it optionally pops up a browser window with the two PNGs.
Let’s run through a simple example.
Create a drawing function
Create a directory for our hello world example:
$ mkdir hello-cantest
$ cd hello-cantest
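Next comes the rendering module itself. The original post showed the hello.js source at this point; a minimal sketch of what such a module might look like (the colors, sizes and coordinates below are illustrative assumptions, not the original values):

// hello.js - the rendering module for this walkthrough.
// It exports a single function(ctx) that draws a few nested
// rectangles on whatever 2d context it is given (browser or node-canvas).
module.exports = function (ctx) {
  ctx.fillStyle = 'red';
  ctx.fillRect(10, 10, 180, 130);

  ctx.fillStyle = 'green';
  ctx.fillRect(30, 30, 140, 90);

  // the inner blue rectangle (we will change its height later)
  ctx.fillStyle = 'blue';
  ctx.fillRect(50, 50, 100, 50);
};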
Create a test
First, install node-canvas so that the test can create a canvas to draw on:
$ npm install canvas
Now, let’s create the test file, hello-test.js.
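The original post listed the test source here. The exact contract for test modules is documented in the cantest README; this sketch assumes the exported test function creates a node-canvas, hands its 2d context to the rendering function and returns the canvas for comparison:

// hello-test.js - the test module that cantest will run.
var Canvas = require('canvas');   // node-canvas
var hello = require('./hello');   // the rendering function from above

module.exports = function () {
  var canvas = new Canvas(200, 150);  // size is an assumption
  hello(canvas.getContext('2d'));     // draw on the canvas
  return canvas;                      // cantest compares the result
};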
Run for the first time to generate output
First, install node-cantest globally (might require sudo):
$ npm install -g cantest
Now, run it:
$ cantest hello-test.js
[hello-test.js] hello-test.png created
This test basically failed (returned exit code 1). It failed because it could not find hello-test.png, which is the default name for an expectation file. Since this is a common use case, cantest simply created this file for you.
Let’s verify that the result is what we wanted. Open hello-test.png and it should look like this:
This looks exactly like we expected, so I’m feeling okay keeping it as the expectation file. In case this is not what you wanted, this is where you iterate on your rendering code and make it look awesome.
If we run cantest again, the test will succeed:
$ cantest hello-test.js --verbose
[hello-test.js] Test passed
So our project now consists of the following files:
- hello.js - That’s our rendering function. All it does is export a function(ctx) which, when called, draws those nasty looking rectangles on the canvas. You can use onejs or node-browserify to bundle CommonJS modules for client side consumption, so exporting a function for client use makes a lot of sense for us.
- hello-test.js - That’s the test. Basically, it calls the rendering function, passing in a node-canvas context.
- hello-test.png - That’s the expectation file. It was conveniently generated by node-cantest and should be kept alongside hello-test.js so that cantest can compare the rendering result to it.
The next step will be to make some change in our hello.js function and rerun the test to make sure we didn’t break anything.
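The original post showed the modified code here; continuing the sketch from above, the change might be as small as bumping the height of the inner blue rectangle (the numbers are, again, illustrative):

// In hello.js, change the inner blue rectangle's height
// from 50 to 70 -- this should make the test fail.
ctx.fillStyle = 'blue';
ctx.fillRect(50, 50, 100, 70);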
…and run cantest:
$ cantest hello-test.js
[hello-test.js] Test failed. See actual output in .actual.hello-test.png
Your browser window should pop up and show something like this (--no-browse will disable this behavior):
Oh, yeah… Now we can see that the inner blue rectangle changed its height. Obviously this is a stupid example, but you get the point. In more complex rendering systems there are multiple layers of rendering, and if something changes in a lower layer you want to know whether it has any effect on the higher layers.
At this point we could:
- Decide that this is a good output and update our expectation file by overwriting hello-test.png with .actual.hello-test.png (this file is created by node-cantest and is not deleted when a test fails).
- Figure out that we found a bug and fix it by editing hello.js.
Static only for now
In this initial version node-cantest is only good for static rendering. That’s good enough for us for now, but we understand that we might need to add support for changing scenes as well. This would basically mean recording a sequence of rendered results and storing them as the expectation.
Fonts on multiple platforms
If you run the tests on multiple platforms (e.g. locally on your Mac and on a build server like travis-ci), you should be aware that text is rendered differently on different platforms. The set of available fonts varies, and the results are not always the same.
This might cause your tests to break because the resulting PNG will differ from the expectation PNG. To work around this, use common fonts that exist on most platforms (or make sure you run your tests on a single platform).
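One mitigation, sketched below, is to stick to a generic font family when drawing text; note that even generic families can map to different faces across platforms, so pinning your tests to a single platform remains the more reliable option:

// Prefer a generic family over a platform-specific font such as
// 'Helvetica Neue', which may not exist on a Linux build server.
ctx.font = '16px sans-serif';
ctx.fillText('hello', 20, 40);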
See the cantest README file for more details on the command line interface and the library API.
cantest is open source under the MIT license. Contributions/issues/pull requests/feedback/remarks are very welcome and appreciated.
Elad Ben-Israel @emeshbi