Building a self-testing application

Andrew Gibson
6 min read · Mar 28, 2021

This is a continuation of a series of articles about Self Testing Applications. Here’s the first one: Self Testing Applications

Without diving into the technicalities, here’s a walkthrough of a self-testing application. It’s not a big example — it’s the smallest possible thing to get a feel for the main concepts.

Start with a goal

The goal of this app is to replicate the feel of Doogie Howser’s diary:

So I start by capturing it in the following feature files. Ideally, I’d do this in collaboration with a diehard Doogie Howser fan, but I’ll have to make do with my own best efforts:
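Roughly speaking, the feature files start out as little more than named features. Here's an illustrative sketch of their shape (the descriptions are placeholders, and the real files are plain .feature files rather than inline strings):

```javascript
// Illustrative only: the three features, with no scenarios yet. In the real
// project these live in .feature files; they're inlined here for brevity.
export const featureFiles = {
  branding: `
Feature: Branding
  The app looks and feels like Doogie's diary.
`,
  reading: `
Feature: Reading
  Earlier diary entries can be read back.
`,
  writing: `
Feature: Writing
  A new diary entry can be written and saved.
`,
};
```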

Get it running

So now that we have our features defined, I'm placing them into a running application. I'm not going to go into the details here (I'll share some of that in the next few articles), but in summary it has the following parts, all implemented in JavaScript (a rough sketch of the third part follows the list):

  1. A web application (with a simulated in-memory database)
  2. Some libraries for parsing feature files, and an API exposing them to the browser
  3. A module which executes our feature files when the application loads, and another which listens for the status of tests and outputs them to the console
  4. A module which intercepts our requests to the server and specifies whether or not we’re running a test
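As a rough sketch of that third part (every name below is illustrative rather than the project's actual module layout), the bootstrap might look something like this:

```javascript
// Illustrative bootstrap: parse and run the feature files when the app
// loads, and dispatch a status event for every result. All module names
// here are placeholders.
import { parseFeatures } from "./self-test/parse-features.js";
import { runFeatures } from "./self-test/run-features.js";

// A minimal console logger: listen for status events and print them.
document.addEventListener("test-status", (event) => {
  const { feature, scenario, status } = event.detail;
  console.log(`[self-test] ${feature} / ${scenario ?? "(no scenarios)"}: ${status}`);
});

// Kick off the self-test once the page has loaded.
window.addEventListener("load", async () => {
  const response = await fetch("/api/features");
  const features = parseFeatures(await response.json());
  await runFeatures(features, {
    onStatus: (detail) =>
      document.dispatchEvent(new CustomEvent("test-status", { detail })),
  });
});
```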

When I load the home page, it's blank, but I see some initial output in the console, which shows me that our app is trying to test itself:

A functioning dashboard

Obviously, we could now start fleshing out the scenarios for our features, but let's start as we mean to continue by adding a dashboard where the user can see the status of the application's health. I'll do this by dropping in a Web Component which listens for the test output and displays it in a dashboard. In this basic implementation, we'll just add a link which displays the dashboard in a dialog box:
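A bare-bones version of that component might be shaped something like this (the tag name, event name and markup are all illustrative, not the real implementation):

```javascript
// Illustrative dashboard: a Web Component that listens for test status
// events and shows them in a <dialog>. Names are placeholders.
class SelfTestDashboard extends HTMLElement {
  connectedCallback() {
    this.innerHTML = `
      <a href="#" id="open">App health</a>
      <dialog><ul id="results"></ul></dialog>
    `;
    const dialog = this.querySelector("dialog");
    this.querySelector("#open").addEventListener("click", (e) => {
      e.preventDefault();
      dialog.showModal();
    });
    // Assumes the test runner dispatches "test-status" events on document.
    document.addEventListener("test-status", (event) => {
      const { feature, status } = event.detail;
      const li = document.createElement("li");
      li.textContent = `${feature}: ${status}`;
      this.querySelector("#results").append(li);
    });
  }
}

customElements.define("self-test-dashboard", SelfTestDashboard);
```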

And, clicking the link:

So we can see that the application is testing itself, but so far it isn't making any claims about its functionality — the dashboard only shows us placeholders for features which don't yet exist.

However, if I click the Detail toggle, I can already see a bit of information about what the application will do:

At this point, because I’m actively developing the application, I’ll keep the console logger turned on, but when we publish the app to production, the dashboard is probably sufficient for our needs.
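One way to arrange that (again, purely illustrative) is to wire up the console listener only in development:

```javascript
// Purely illustrative: only register the verbose console listener outside
// production. The hostname check stands in for whatever flag your build uses.
const isDevelopment = window.location.hostname === "localhost";

if (isDevelopment) {
  document.addEventListener("test-status", (event) => {
    console.log("[self-test]", event.detail);
  });
}
// The dashboard component is registered in every environment, so production
// users still get the health view.
```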

Note: if you’re curious about integration with a build pipeline, fear not — there’s a logger for that… stay tuned.

Our first feature

Let’s start by adding scenarios for our first “branding” feature. The feature file now looks like this:
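As a sketch of the shape (the scenario wording is a placeholder, though the first step matches the one discussed below):

```javascript
// Illustrative content for the branding feature after adding a scenario.
export const brandingFeature = `
Feature: Branding
  The app looks and feels like Doogie's diary.

  Scenario: The diary shows its branding
    Given I have loaded the app
    Then I should see the diary title
`;
```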

And, if I refresh the web app and open the dashboard, I now see this:

The dashboard shows an amber light for the feature which we’ve defined
The dashboard’s detail mode highlights the section for which we need to implement tests

So, as expected, our application has tested itself and found a problem. If we click the “Detail mode” button we can see that the application is saying that we need to specify how to carry out the first part of the scenario — “Given I have loaded the app”.

Again, I'm going to skip over the implementation to focus on the usage pattern here, but rest assured I'll dive into more detail in an upcoming article.
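That said, the general shape of a step definition is worth a glance. Something along these lines, where registerStep stands in for whatever API the runner actually exposes:

```javascript
// Illustrative step definition for "Given I have loaded the app".
// registerStep is a placeholder for the runner's real API.
import { registerStep } from "./self-test/steps.js";

registerStep("I have loaded the app", async () => {
  // The tests run inside the live application, so "loading the app" can be
  // as simple as waiting for the page to finish loading.
  if (document.readyState !== "complete") {
    await new Promise((resolve) =>
      window.addEventListener("load", resolve, { once: true })
    );
  }
});
```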

Once we implement that testing behaviour in our application, the next line of the scenario shows as pending too. And once we implement that one as well, our application knows that our branding isn't working as specified, so we get a red light on our dashboard:

We get a flashing red light in the Dashboard mode, and a red highlighted section in Details mode
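The second step can be sketched in the same spirit (the selector and error message are placeholders):

```javascript
// Illustrative step definition for the scenario's assertion line. It fails,
// and the dashboard shows red, until the app actually renders the branding.
import { registerStep } from "./self-test/steps.js";

registerStep("I should see the diary title", async () => {
  const title = document.querySelector(".diary-title");
  if (!title || title.textContent.trim() === "") {
    throw new Error("Expected the diary title to be visible");
  }
});
```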

To get it working, I implement the functionality and the application now looks like this:

The rest of the branding functionality gets implemented until our app looks like this:

A test involving data

For the next feature — the Reading feature — we’re going to define it like this:

You can see that it requires data to exist on the server for the test to pass. However, it's important that we don't corrupt real data created by the user. When we implement this test, we need to ensure a few things:

  1. The server creates data in the database for us as part of the setup process
  2. The test data is rendered in the application during the test phase (but only during the test phase)
  3. We verify that the expected output matches what the server did during the setup process (in this case, the diary entry date was generated on the server)

Again, I’m skipping the implementation here — there are various ways to do it and I’ll explore one variant in upcoming articles.
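Still, to make the second point above a little more concrete, one option is to tag every request made while a scenario is running, so the server can create and serve test data without touching the user's real entries. Something like this (the header name and wrapper are illustrative):

```javascript
// Illustrative fetch interceptor: while a scenario is running, tag every
// request so the server knows to serve (and create) test data only.
const testContext = { active: false };

const realFetch = window.fetch.bind(window);
window.fetch = (input, init = {}) => {
  if (!testContext.active) return realFetch(input, init);
  const headers = new Headers(init.headers || {});
  headers.set("X-Self-Test", "true");
  return realFetch(input, { ...init, headers });
};

// The test runner flips the flag around each scenario.
export function startTestPhase() { testContext.active = true; }
export function endTestPhase() { testContext.active = false; }
```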

When I implement this set of tests, initially I tell the application that the setup needs to happen on the server. This causes the application to inform me that the server doesn’t know how to do that yet:

At this point I’ll implement the logic on the server to create the specified test data.

Note: I'm not showing the implementation here, but this is just another piece of our application logic, sharing the database access code with the rest of the application rather than living in a separate test project.
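For a feel of the shape, the setup endpoint might reuse the same in-memory database module as the rest of the server. This is an Express-style sketch where the route, module and field names are all illustrative:

```javascript
// Illustrative server-side setup handler, reusing the same in-memory
// database module as the real application code. Assumes express.json()
// body parsing is applied upstream.
import express from "express";
import { db } from "./data/in-memory-db.js";

export const selfTestRouter = express.Router();

selfTestRouter.post("/self-test/setup/diary-entry", (req, res) => {
  // Create the entry the scenario expects; the date is generated here on
  // the server so the test can later verify that exact value in the UI.
  const entry = db.diaryEntries.insert({
    text: req.body.text,
    date: new Date().toISOString(),
    isTestData: true, // keeps test rows separate from the user's real entries
  });
  res.json(entry);
});
```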

Once we have implemented the setup logic and implemented the next pending test, our application reports a failing status again:

And now we can implement the logic. The actual tests run without the user seeing them, so there's no screenshot of the diary entry from the tests to show you.

However, if I temporarily insert an entry into the database, we can see it rendered on the screen like this:

Completing our MVP

I'll finish up by implementing the Writing feature, which results in an application that works like this:
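For reference, the Writing feature follows the same pattern as the others. An illustrative sketch of its shape (the wording is a placeholder):

```javascript
// Illustrative content for the writing feature.
export const writingFeature = `
Feature: Writing
  A new diary entry can be written and saved.

  Scenario: Saving a new entry
    Given I have loaded the app
    When I write a new entry and save it
    Then I should see it in my diary
`;
```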

Coming next

Next time, before diving into some implementation specifics, we’ll look at two refinements to this pattern:

  1. Customising application status
  2. Using our tests as an aid to Observability
