Principles of Writing Automated Tests

From my work on test automation in several projects, I've found that static analyzers and code formatters alone are not enough to produce effective tests. The team still has to come to a consensus on how tests should be written.

If each team member writes tests according to their own ideal vision, the result is chaos. Some patterns for Java or Python unit tests do not work for JavaScript integration tests, and vice versa. Tests of a given type should be consistent and adhere to the project's agreed rules.

The principles listed here are based on years of experience and have been applied successfully on real projects. They are best suited to JavaScript/TypeScript end-to-end tests (API and UI) with frameworks such as Mocha, Jest, WebdriverIO, and Playwright. Some principles overlap, clash, or are even debatable; common sense should decide how to apply them, depending on the circumstances of the testing project.

No tests without assertions

We all know that tests without assertions aren’t testing much. After all, they may pass even if the behavior isn’t as expected. The question then is: If we all know tests should have assertions, why do we see tests without them?

Maybe the simplest reason is that the developer wanted the test to break if the production code threw an exception. Sure, this works: if no exception is thrown, the test passes; if an exception happens, the test framework fails the test.

In such cases, I always ask myself: "Isn’t there something to be asserted in case the behavior is as expected?" Think of some batch job that summarizes the billing of customers with a due date of today. Can’t the test assert that the invoices were generated correctly?

Tests that just expect the production code not to break are often weak. We want the opposite: tests with strong assertions that would capture any slight deviation from the expected behavior.

Now, I fear the most common reason for tests without assertions is a lack of observability. It is so hard to observe the output of the program under test that developers simply can't write assertions, even if they want to.

Think of the batch job example again. To ensure that the invoices are generated correctly, the developer needs to make two or three HTTP calls to different services. Or maybe the batch job simply writes files on the server, and there’s no easy way to read them.

Bad observability tends to happen more often in integration and system tests, where multiple components and external parties work together. I'll repeat my advice from the Larger Tests chapter of my book: you should invest heavily in test infrastructure.

For this particular batch job example, you may create an API that hides all the complexity of interacting with the many web services. From the test code, all the developer then needs to do is call a method, say, getGeneratedInvoices(), and the generated invoices are collected from the various services.
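
A minimal sketch of such a helper, assuming hypothetical billing and archive service URLs and a simplified Invoice shape:

interface Invoice {
  customerId: string;
  amount: number;
  dueDate: string;
}

// Hypothetical test-infrastructure helper: hides the HTTP calls needed
// to collect the invoices the batch job generated across services.
async function getGeneratedInvoices(customerId: string): Promise<Invoice[]> {
  const [billing, archive] = await Promise.all([
    fetch(`https://billing.example.com/invoices?customer=${customerId}`),
    fetch(`https://archive.example.com/invoices?customer=${customerId}`),
  ]);
  const recent = (await billing.json()) as Invoice[];
  const archived = (await archive.json()) as Invoice[];
  return [...recent, ...archived];
}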

You may spend some time building it, but once such an infrastructure is in place, writing such tests will be much easier. And more importantly, your tests will have proper, strong assertions.

To sum up: The lack of observability may cause developers to write tests without assertions. Good test infrastructure is key to solving the issue.

No test steps without checks

Do not write test steps like this:

test("Should open menu", async () => {

  await page.locator('.button').click();

});

Each test step has to have an assertion:

test("Should open menu", async () => {

  await page.locator('.button').click();

  const locator = await page.locator('.dropdown-menu');

  await expect(locator).toBeVisible();

});

No assertions in before or after hooks

Assertions should not be used in beforeAll, beforeEach, afterAll, or afterEach hooks. Preconditions and postconditions should contain only pure actions (for example, authorization). Checks belong in the tests themselves.

Use try...catch and/or throw errors if you still need to check something in the preconditions or postconditions.
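
For example, a minimal sketch of failing fast in a hook by throwing instead of asserting (the login URL is a placeholder):

beforeEach(async () => {
  // Precondition: the login page must be reachable before any test runs.
  const response = await page.goto('https://example.com/login');
  if (!response || !response.ok()) {
    throw new Error(`Login page is unavailable: status ${response?.status()}`);
  }
});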

No actions without expectations

Every test action, including clicks, hovers, and gotos, should be followed by an expectation that verifies the action actually took effect.

Example 1:

test("Should open menu", async () => {
await page.locator('.button').click();
await page.locator('.dropdown-menu').waitFor({ state: 'visible' });
});                                           

Example 2:

test("Should open menu", async () => {
await page.locator('.button').click();
const locator = await page.locator('.dropdown-menu');
await expect(locator).toBeVisible();
});

The second example works because expect(locator).toBeVisible() is a web-first assertion: it waits for the element to become visible rather than checking only once.

No unconditional waits

Do not add pauses or fixed N-second timeouts between an action and an assertion to prevent flakiness; they only slow the tests down.

Instead of an unconditional pause:

it('Should open menu', async () => {
  const button = await $('.button');
  await button.click();
  await browser.pause(3000);
  const menu = await $('.dropdown-menu');
  await menu.isDisplayedInViewport();
});

Wait for a specific element state instead:

it('Should open menu', async () => {
  const button = await $('.button');
  await button.click();
  const menu = await $('.dropdown-menu');
  await menu.waitForExist({ timeout: 3000 });
  await expect(menu).toBeDisplayedInViewport();
});

The second example runs faster when the test passes and, when it fails, fails for an obvious and unambiguous reason.

No commented-out tests

If a test needs to be turned off, it should be skipped via the test framework's feature (skip), not by commenting out the code.

Instead of:

// test("Should have a menu", async () => {

//  const locator = await page.locator('.dropdown-menu');

//  await expect(locator).toBeVisible();

// });

Do:

test.skip("Should have a menu", async () => {

  const locator = await page.locator('.dropdown-menu');

  await expect(locator).toBeVisible();

});

The number of skipped tests will be shown in the test report.

If a test is outdated or no longer needed, it should be deleted without regret.

No hanging locators

Tests should not contain lines of code with "hanging" locators that no action or assertion ever uses:

test("Should do something", async () => {

  await page.locator('.button');

  …The code in the tests has to do something: perform actions and/or assertions.

One expect for each test step

Each test step should check only one thing, and test steps should be brief.

One test step should not contain more than one or two assertions.

Avoid trying to perform all actions and checks in one go.

The more atomic the test steps, the more understandable the test logs and results will be.
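
One way to keep steps atomic in Playwright is test.step; a minimal sketch, assuming hypothetical .avatar, .profile, and .profile-name elements:

test("Should show user profile", async () => {
  await test.step("Open profile", async () => {
    await page.locator('.avatar').click();
    await expect(page.locator('.profile')).toBeVisible();
  });
  await test.step("Check user name", async () => {
    // One focused assertion per step keeps logs and reports readable.
    await expect(page.locator('.profile-name')).toHaveText('Alice');
  });
});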

Do not put await inside expect

Nesting one operation inside another complicates the code.

Instead of:

test("Should have title on the button", async () => {
expect(await page.locator('.button')).toHaveText(/Menu/);
});

Do:

test("Should have title on the button", async () => {
const button = await page.locator('.button');
expect(button).toHaveText(/Menu/);
});

It is more verbose, but there is less chance of forgetting an await.

Do not reload the page, reopen it

Refreshing the page with a standard command (page.reload() in Playwright or browser.refresh() in WebdriverIO) is not a good idea; it makes the test flaky.

Instead of:

test("Should have something after reload", async () => {
await page.reload();

});

Get the current page URL and just open it:

test("Should have something after reload", async () => {
const uri = await page.url();
await page.goto(uri);

});

This makes tests robust.

This pattern also applies to the goBack() and goForward() methods, but unfortunately it does not fit SPA web applications, where the state of the page can differ from what the URL implies.

Do not check URLs through includes

Do not use String.prototype.includes() for string comparisons in assertions, because includes() returns only true or false. When the check fails, the report will say that false is not true, with no further details.

Instead of:

test("Should have corresponding URL", async () => {
const uri = await page.url();

await expect(uri.includes('example')).toBeTruthy();
});

Use the appropriate assertion:

test("Should have corresponding URL", async () => {
const uri = await page.url();
await expect(uri).toHaveURL(/example/);
});

Or use built-in matchers for unusual checks:

test("Should have corresponding URL", async () => {
const uri = await page.url();
await expect(uri).toEqual(expect.stringContaining('example'));
});

This pattern applies to checking any strings and improves the readability and clarity of test reports.

Avoid regexp in checks

Regular-expression checks make tests overly sensitive: they do not significantly increase test reliability, but they do make test failures harder to analyze.

Two exceptions exist:

regexp for URL verification;

date and time regexp.

Both of these kinds of data are a good fit for regexp checks.

Regexp checks are also acceptable if your testing project involves domain-specific IDs that follow a recognizable pattern.
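
A short sketch of the acceptable cases (the selectors and patterns are hypothetical):

// URL verification: the order ID is dynamic, so a pattern is appropriate.
await expect(page).toHaveURL(/\/orders\/\d+$/);

// Date and time: the timestamp changes on every run.
await expect(page.locator('.created-at')).toHaveText(/^\d{4}-\d{2}-\d{2}/);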

Wrap clicks and expectations into a promise

Instead of:

await page.locator('.button').click();
const response = await page.waitForResponse('https://example.com/');
expect(response.ok()).toBe(true);

Do:

const [response] = await Promise.all([
  page.waitForResponse('https://example.com/'),
  page.locator('.button').click(),
]);
expect(response.ok()).toBe(true);

Promise.all starts waiting for the response before the click fires, which prevents a race condition between clicking and waiting. The first example is likely to be extremely flaky, because the response may arrive before waitForResponse() is even called.

Do not use global variables for page object methods

Isolate tests and steps from each other. Do not use global variables that are shared and rewritten by multiple test steps within a single test suite.

Instead of:

const myPageObject = new MyPageObject(page);

test('Should do something', async () => {
  await myPageObject.doSomething();
});

test('Should have something', async () => {
  await myPageObject.haveSomething();
});

Do:

test('Should do something', async () => {
  const myPageObject = new MyPageObject(page);
  await myPageObject.doSomething();
});

test('Should have something', async () => {
  const myPageObject = new MyPageObject(page);
  await myPageObject.haveSomething();
});

When variables are not rewritten, there is less chance of them being overwritten incorrectly or asynchronously, which increases the overall stability of the tests.

Do not scatter test cases

The same functionality should be checked in the same way everywhere.

For instance, do not check a banner with expect A in test-1.spec.ts and the same banner (perhaps on a different page) with expect B in test-2.spec.ts. Instead, check the banner with both expects (A and B) in each of the tests.
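
One way to keep the checks together is a shared helper that both spec files import; a minimal sketch, assuming a hypothetical .banner element:

import { expect, Page } from '@playwright/test';

// Both test-1.spec.ts and test-2.spec.ts call this helper, so the banner
// is always verified with the same pair of checks.
export async function checkBanner(page: Page): Promise<void> {
  await expect(page.locator('.banner')).toBeVisible();      // expect A
  await expect(page.locator('.banner')).toHaveText(/Sale/); // expect B
}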

Do not mix different kinds of tests

If you want to check both the API and the UI for a single user activity, write two tests: an API test and a UI test.

If you want to validate UI behavior and the layout at the same time, write two tests: a UI test and a screenshot test.

If you want to test both JSON schemas and end-to-end API scenarios, write two separate API tests, each performing its own specific checks.
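
For example, a minimal sketch of splitting one user activity into an API test and a UI test, using Playwright fixtures (the endpoint and selectors are hypothetical):

// API test: verifies the contract.
test("Should create order via API", async ({ request }) => {
  const response = await request.post('https://example.com/api/orders', {
    data: { item: 'book' },
  });
  expect(response.ok()).toBe(true);
});

// UI test: verifies the user-facing flow.
test("Should create order via UI", async ({ page }) => {
  await page.goto('https://example.com/orders');
  await page.locator('.new-order').click();
  await expect(page.locator('.order-created')).toBeVisible();
});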

Use linters and formatters from the parent project

Whether the tests live in a directory inside the project under test, sit in a separate repository, or are written by dedicated test-automation engineers or developers, they should inherit the linter and formatter rules of the parent project.

If the ESLint and Prettier rules are the same, the test code will stay closer to the production code (and test-automation engineers will stay closer to developers).
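
A minimal sketch of inheriting the parent project's rules, assuming the ESLint config lives at the repository root and the tests in a tests/ subdirectory:

// tests/.eslintrc.js: extend the root config instead of defining new rules.
module.exports = {
  extends: ['../.eslintrc.js'],
};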