Testing the DOM in JavaScript

This article was initially published at the Fortnox developer blog and republished here after Jonas left Fortnox in 2015. Some enhancements, such as adding code blocks around some inline code, and corrections have been made to this version.

Here at Fortnox we use QUnit for testing our JavaScript. It is simple enough to get around and allows for easy integration with third party libraries, such as Sinon for mocking. I recently went to Valtech’s JS mingle in Stockholm and listened to a talk about testing JavaScript.

The speaker, Janko Luin, talked a bit about mitigating problems when testing JavaScript that integrates tightly with the DOM. His point was good: basically, simplify the interactions and bind locally. He wanted to minimize the mocking during testing by simplifying the actual code. While I wholeheartedly agree that you should follow everything he suggested and simplify your code and interactions as far as possible, there are limits.

Sometimes the use case is complicated and requires a complicated solution. We should try to avoid it, and we should argue against requirements that force us to build complicated and brittle software. But in the end we have to follow the money, so sometimes we do end up where we don’t want to be. Just think of inheriting legacy code that you have to write tests against …

This is my take on how to handle this. It is a technique that I developed on my own over time, though it has probably been discovered independently by many. The inspiration is drawn from several years of studying code quality and refactoring. I hope you can get some use of it too.

The code to test

As I said: normally simplifying works fine, but sometimes you get into test cases where you need to mock some DOM to perform your tests. Let’s look at some code:

pageIsVisibleInViewport: function( $viewport, $page ){
  var viewportTop, viewportBottom, pageTop, pageBottom,
    pageTopAboveViewportBottom, pageBottomBelowViewportTop,
    pageVisibleInViewPort;

  viewportTop = $viewport.scrollTop();
  viewportBottom = viewportTop + $viewport.height();
  pageTop = viewportTop + $page.position().top;
  pageBottom = pageTop + $page.height();

  pageTopAboveViewportBottom = pageTop < viewportBottom;
  pageBottomBelowViewportTop = pageBottom > viewportTop;
  pageVisibleInViewPort = pageTopAboveViewportBottom && pageBottomBelowViewportTop;

  return pageVisibleInViewPort;
}

The function takes two arguments, a viewport and a page, both as jQuery selections. The viewport is basically a div with fixed size and overflow hidden, so it can have more content than is visible. The page, or pages actually, is what is potentially visible at the moment in the viewport. All the function does is take various dimensions and measurements and check whether the page is actually, perhaps only partially, visible in the viewport.


To create tests for this we need to inject some HTML into the fixture element provided by QUnit:

function createViewportFixture(){
  $('#qunit-fixture').html(
    '<style>' +
    '  .viewport{ height:120px; width:120px; overflow:hidden; background:lightgrey; }' +
    '  .page{ height:100px; width:100px; margin:10px; }' +
    '  .page1{ background:red; }' +
    '  .page2{ background:orange; }' +
    '  .page3{ background:green; }' +
    '</style>' +
    '<div class="viewport">' +
    '  <div class="page page1"></div>' +
    '  <div class="page page2"></div>' +
    '  <div class="page page3"></div>' +
    '</div>'
  );

  return {
    viewport: $('#qunit-fixture .viewport'),
    page1: $('#qunit-fixture .page1'),
    page2: $('#qunit-fixture .page2'),
    page3: $('#qunit-fixture .page3')
  };
}

This function injects some fixture HTML and CSS to create our mock DOM elements, and it also returns a fixture object with access to these elements. The reason you want to do it like this is that you can later change the mocked content without having to change the tests; the tests don’t know about the actual DOM.

What this gives us is something like the image to the left. The top part, in full opacity, is the viewport and the visible content. The semi-transparent part, delimited by the darker border, is the rest of the content of the viewport that is currently not in view.

The tests

The actual tests look something like this:

QUnit.test( 'Only the first page is visible in viewport without scrolling.', function( assert ){
  var documentView, fixture, result;

  assert.expect( 3 );
  documentView = new pdf.DocumentView({ model: new pdf.Document() });
  fixture = createViewportFixture();

  result = documentView.pageIsVisibleInViewport( fixture.viewport, fixture.page1 );
  assert.equal( result, true, 'Page1 is visible in the viewport' );
  result = documentView.pageIsVisibleInViewport( fixture.viewport, fixture.page2 );
  assert.equal( result, false, 'Page2 is not visible in the viewport' );
  result = documentView.pageIsVisibleInViewport( fixture.viewport, fixture.page3 );
  assert.equal( result, false, 'Page3 is not visible in the viewport' );
});

It creates a new document view, the module under test, and sets up the fixture from above. Since the fixture by default is scrolled all the way to the top, only the first of the three pages should be visible, so we assert exactly that.

The problems

I can see a couple of problems with what we have so far:

- We are passing jQuery elements directly to our internal logic. This creates coupling between our business logic and an external framework. Now, it is not that I think we will get rid of jQuery any time soon, but still, it is a bad idea and we should fight it.
- The fixture is rather large, and that fixture is a gross simplification of our reality. It may work as expected, but we can never be sure that it accurately represents the real DOM in a way that lets us test for all potential bugs in our code. If the mocked DOM is an oversimplification we will miss edge cases, and we won’t know it!

The solution

The problem with tight coupling to an external framework has a ready-made solution: the adapter pattern. What it means is that instead of having your object, O, directly dependent on the external framework, $, like so: O -> $, you introduce an adapter, A, in between and let that hold the tight coupling, like so: O -> A -> $.

While you still have an object that is tightly bound to the external library, you can now create an internal API from the adapter that the rest of the code can use. In my case I’m not going to create an entire object to abstract jQuery away, but rather a small, dumb piece of code that holds the DOM coupling and no logic, and another that owns all the logic and no DOM coupling. This is what I refactored it to:

getGeometryFromViewportAndPage: function( $viewport, $page ){
  return {
    viewport: {
      scrollTop: $viewport.scrollTop(),
      height: $viewport.height()
    },
    page: {
      top: $page.position().top,
      height: $page.height()
    }
  };
}

checkPageVisibilityInViewport: function( geometry ){
  var viewportBottom, pageTop, pageBottom, pageTopAboveViewportBottom,
    pageBottomBelowViewportTop;

  viewportBottom = geometry.viewport.scrollTop + geometry.viewport.height;
  pageTop = geometry.viewport.scrollTop + geometry.page.top;
  pageBottom = pageTop + geometry.page.height;

  pageTopAboveViewportBottom = pageTop < viewportBottom;
  pageBottomBelowViewportTop = pageBottom > geometry.viewport.scrollTop;

  return pageTopAboveViewportBottom && pageBottomBelowViewportTop;
}

pageIsVisibleInViewport: function( $viewport, $page ){
  var geometry, pageVisibleInViewPort;

  geometry = this.getGeometryFromViewportAndPage( $viewport, $page );
  pageVisibleInViewPort = this.checkPageVisibilityInViewport( geometry );

  return pageVisibleInViewPort;
}

Instead of the original, single method we now have three. The original is now in essence a composed method that delegates its work to two other extracted methods.

The new getGeometryFromViewportAndPage method takes the jQuery selections and extracts the few values we needed from them into a geometry object. The geometry object is fed into the second new method, checkPageVisibilityInViewport, that contains the actual business logic.

Step 3, profit!

The new architecture gives us a nice separation of data collection and processing, which helps us preserve command-query separation. We adhere better to the single responsibility principle, since each new method has a cleaner area of responsibility. We introduce explaining variables and method names to name our concepts. We reduce method length, which increases readability. And we make our code easier to test, since we can now send a mocked geometry object to checkPageVisibilityInViewport when testing and won’t have to depend on the DOM to test our business rules.
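To make that payoff concrete, here is a sketch of what such a DOM-free test could look like, using plain Node-style output instead of QUnit. The geometry values are hand-built to mirror the fixture CSS above (a 120px viewport scrolled to the top, 100px pages with 10px margins); the function body repeats the extracted business logic as a standalone function so the snippet runs on its own:

```javascript
// The extracted business logic, free of any jQuery/DOM dependency.
function checkPageVisibilityInViewport( geometry ){
  var viewportBottom = geometry.viewport.scrollTop + geometry.viewport.height;
  var pageTop = geometry.viewport.scrollTop + geometry.page.top;
  var pageBottom = pageTop + geometry.page.height;

  var pageTopAboveViewportBottom = pageTop < viewportBottom;
  var pageBottomBelowViewportTop = pageBottom > geometry.viewport.scrollTop;

  return pageTopAboveViewportBottom && pageBottomBelowViewportTop;
}

// Hand-built geometry objects -- no DOM, no fixture, no jQuery.
var geometryPage1 = { viewport: { scrollTop: 0, height: 120 },
                      page: { top: 10, height: 100 } };
var geometryPage2 = { viewport: { scrollTop: 0, height: 120 },
                      page: { top: 130, height: 100 } };

console.log( checkPageVisibilityInViewport( geometryPage1 ) ); // true
console.log( checkPageVisibilityInViewport( geometryPage2 ) ); // false
```

Exercising edge cases (a page exactly at the viewport edge, a half-scrolled page) is now just a matter of writing another plain object.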

This methodology, inserting small adapter methods to abstract away the DOM, is something I have been using more and more. Partly it is for the nice wins listed above, but there is another, less tangible win as well. Your code just feels cleaner.

In a larger component I wrote, I refactored early to introduce an adapter layer and then delegated all the behaviours to business logic objects. The amount of code that deals with trivial crap relating to the DOM and event handling shrank drastically. You cleanly bind and extract everything in one object and delegate all the event handling to methods on the business logic objects.

You can even intercept the events by binding the handlers to a method on the adapter, extracting the relevant information from the event object and sending just that to the business logic objects. This lets you test all of your business logic, including event handling, cleanly, without mocking or triggering events or any such silly shenanigans.
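As a sketch of that interception (the object and method names here are hypothetical, not taken from the original component): the adapter owns the event, pulls out just the relevant fields, and hands a plain object to the logic, so a test can substitute a plain object for the event:

```javascript
// Hypothetical business-logic object: plain data in, no DOM knowledge.
var pageLogic = {
  handlePageClick: function( clickInfo ){
    return 'clicked page ' + clickInfo.pageNumber +
           ' at ' + clickInfo.x + ',' + clickInfo.y;
  }
};

// Hypothetical adapter method: owns the event coupling and nothing else.
var pageAdapter = {
  onPageClick: function( event ){
    // Extract only the relevant information from the event object...
    var clickInfo = {
      x: event.pageX,
      y: event.pageY,
      pageNumber: Number( event.target.dataset.pageNumber )
    };
    // ...and delegate to the business logic with plain data.
    return pageLogic.handlePageClick( clickInfo );
  }
};

// In a test, no real event is needed -- a plain object stands in:
var fakeEvent = { pageX: 12, pageY: 34,
                  target: { dataset: { pageNumber: '2' } } };
console.log( pageAdapter.onPageClick( fakeEvent ) );
// -> "clicked page 2 at 12,34"
```

In production code you would bind the adapter method as the actual event handler, e.g. `$( '.page' ).on( 'click', pageAdapter.onPageClick )`, while the tests never touch the DOM at all.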

It is a better life to live, you should try it :)