
Cover Replay: Getting started with Linux, macOS and MinGW

Introduction

Cover Replay is an optional, advanced feature for the Cover CLI. It allows you to record an arbitrary execution of your application and then extract unit tests that exercise your application in the same way that your execution did. This feature is available for Diffblue Cover Enterprise Edition users only.

For instance, if you have a web app, you can record its execution while you interact with it on a web browser. You can then extract unit tests that call your methods in exactly the same fashion as your recorded execution. You could also record the execution of some end-to-end tests and extract unit tests that “shift left” your testing.

A diagram of Cover Replay is shown below:

Cover Replay has two significant advantages:

  • The extracted tests exercise your code using real-world scenarios.
  • No matter how complex your code is, if you can record an execution that covers a method, then you can extract a test for that method.

When your recorded execution doesn’t cover a method you want to test, you can still get tests for it (we will use the default test-creation engine). Replay does not require any changes to your code. We record your application using a Java agent attached to your Java Virtual Machine (JVM). The agent is configurable. You can use Cover Replay locally on your machine or in a CI pipeline.

Getting Started

This section shows how to use Cover Replay to record and extract tests for a demo project on Linux, macOS, and MinGW. You can also follow the instructions for Windows.

Before you start, please get the Cover CLI and unpack the release ZIP in ~/cover/. If you unpack it elsewhere, please adapt steps 3 and 4 (below) accordingly.

1. Get the project under test

Download the CoreBanking demo project and unzip it. Then compile it:

cd CoreBanking-replay-demo/
mvn compile
mvn dependency:copy-dependencies # necessary for steps 2 & 3

2. Run the application

The demo project has a batch application that processes bank transactions stored in .txt files. Run it like this:

java -cp 'target/classes:target/dependency/*' \
   io.diffblue.corebanking.ui.batch.BatchProcessor src/test/resources/batch/*.txt

This is an example from one of the input files (batch1.txt):

CLIENT|James Brown
ACCOUNT|1234|James Brown|10
...
CASH|120000|2020-02-04|1234
CASH|-125|2020-02-05|1234
CASH|120000|2020-02-28|1234
...
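
For illustration only, the pipe-separated fields of a line like those above could be split apart as sketched below. This is a hypothetical sketch, not the demo project's actual parsing code; the field meanings are inferred from the sample data:

```java
import java.util.Arrays;

// Hypothetical sketch: splitting one pipe-separated batch line into fields.
// From the sample file, a CASH line appears to be: TYPE|amount|date|account.
public class BatchLineSketch {
    static String[] fields(String line) {
        return line.split("\\|"); // '|' is a regex metacharacter, so escape it
    }

    public static void main(String[] args) {
        String[] f = fields("CASH|120000|2020-02-04|1234");
        System.out.println(Arrays.toString(f)); // [CASH, 120000, 2020-02-04, 1234]
    }
}
```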

We know that this execution calls the methods we want to test (see step 4 below). To check this on your own project, see Ensuring that your execution covers the code you want to test.

3. Record the application

We now record the execution of this application using the Cover Replay agent. To do this, just execute the following script (if Cover is not unpacked in ~/cover/, please amend line 4 of the script):

./record.sh

The recording will be saved to a new folder within ~/.diffblue/rec/. Learn more about recording by checking the agent documentation or reading the comments inside the script.

4. Extract tests from the recording

Extract tests for one package:

$HOME/cover/dcover create --trace --verbose io.diffblue.corebanking.compliance.rules

The --trace option tells Cover to extract tests from the last recording. Learn more about test extraction.

5. Inspect the created tests

Search for files named *DiffblueTest.java in src/test/java/. For instance, the test shown below (from src/test/java/io/diffblue/corebanking/compliance/rules/ComplianceRuleLargeCashDepositsDiffblueTest.java) was extracted from your recording. Note the references to James Brown and the CashTransaction with the amount 120000L, which we saw in step 2:

@Test
public void testValidateAccountCompliance2() throws AccountException {
  // Arrange
  Account account = new Account(1234L, new Client("James Brown"), 10L);
  account.addTransaction(new CashTransaction(15L, new Date(1577836800000L), account));
  account.addTransaction(new CashTransaction(120000L, new Date(1580774400000L), account));
  account.addTransaction(new CashTransaction(-125L, new Date(1580860800000L), account));
  account.addTransaction(new CashTransaction(120000L, new Date(1582848000000L), account));
  ComplianceRuleLargeCashDeposits complianceRuleLargeCashDeposits =
     new ComplianceRuleLargeCashDeposits();

  // Act
  complianceRuleLargeCashDeposits.validateAccountCompliance(account);

  // Assert
  assertEquals(1, complianceRuleLargeCashDeposits.getCompliantAccounts().size());
  assertFalse(complianceRuleLargeCashDeposits.hasNonCompliantAccounts());
}

Selecting an execution to record

The first step to use Cover Replay is to record an execution of the code you want to test. When selecting what to record, please consider this:

  • The execution must call the methods that you want to test.
  • The better the execution covers the code to test, the better the extracted tests will cover it.
  • The tests will contain the method inputs (strings, numbers…) that were used in the recorded execution.

The execution will most likely be a high-level integration test. It could also be an existing main class, a batch application, or a main class that you write specifically for this purpose. The application or test that you run may live in a different Maven/Gradle module from the one containing the code to test.

  • Web services: you could use some end-to-end integration tests, or assemble a script that makes requests to the main web service using curl.
  • Libraries: think of other applications or projects that use the library. You could record integration tests or other executions of such applications.
  • Parsers: you might already have dozens of valid and malformed documents to parse. You could write an ad-hoc main class that calls the parser in a loop with such documents.
  • Demo projects: many applications, especially libraries, come with an extra module that demonstrates how to use the application. Demo projects are usually an ideal candidate to record, although their coverage might not be very high.
  • Multi-module projects: the existing (integration) tests of some module A might actually exercise the code of a different module B quite well. Recording the tests of A could therefore be appropriate to cover the code in B.
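
As an example of the ad-hoc main class suggested for parsers above, a driver could loop over sample documents and feed each one to the parser. Everything here is a hypothetical sketch: `Parser.parse` is a made-up stand-in for your own parser's entry point.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

// Hypothetical driver for recording a parser under Cover Replay: it loops
// over sample documents and feeds each one to the parser. `Parser.parse`
// is a made-up stand-in; substitute your own parser's entry point.
public class RecordParserMain {

    // Stand-in for the real parser you want to record.
    static class Parser {
        static void parse(String document) {
            if (document.isBlank()) throw new IllegalArgumentException("empty document");
            // ... real parsing logic lives here ...
        }
    }

    // Parses every file in `samples`; returns how many parsed successfully.
    // Malformed documents are useful too: they exercise error-handling paths.
    static int parseAll(Path samples) throws IOException {
        int ok = 0;
        try (Stream<Path> docs = Files.list(samples).sorted()) {
            for (Path doc : (Iterable<Path>) docs::iterator) {
                try {
                    Parser.parse(Files.readString(doc));
                    ok++;
                } catch (Exception e) {
                    System.err.println(doc + ": " + e.getMessage());
                }
            }
        }
        return ok;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(parseAll(Paths.get(args[0])) + " documents parsed");
    }
}
```

Running this main class under the Replay agent (see step 3 of Getting started) would record one parser invocation per sample document.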

Once you have decided what execution to record, you will need to check the coverage it produces on the code to test. If you think you should get better coverage from that execution, try to change the application inputs: input files, settings, HTTP requests…

When you are happy with the coverage, you will need to record the execution. Check step 3 of Getting started for an example how to record a main class, or the guidance on recording Maven/Gradle tests. You might also want to check Agent configuration.

Ensuring that your execution covers the code you want to test

Replay will only extract tests for methods that are called in the execution that you record. If the methods you want to test are not called in the execution, Replay will not extract tests for them (the default test-creation engine will, instead).

The best way to determine which methods are called in the execution you record is to measure the coverage of that execution, for example with a code coverage tool.

Recording (integration) tests

If you decided to record an (integration) test, here is how you can configure Maven/Gradle to do it:

Maven

We recommend that you add a new replay profile to your pom.xml, where you run the selected tests (Test1 and Test2 in the snippet below) with the Replay agent, and configure the agent to record your target package (com.example below). Like this:

<profiles>
  <profile>
    <id>replay</id>
    <build>
      <pluginManagement>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
              <environmentVariables>
                <COVER_TARGET>com.example.</COVER_TARGET>
              </environmentVariables>
              <argLine>-javaagent:${env.AGENT}</argLine>
              <test>Test1,Test2</test>
            </configuration>
          </plugin>
        </plugins>
      </pluginManagement>
    </build>
  </profile>
</profiles>

You then need to make the environment variable AGENT available to Maven when you use the replay profile:

export AGENT=path/to/cover-replay-agent.jar
mvn test -P replay

If you need to record a test run by the Maven Failsafe plugin instead of the Surefire plugin, then please adapt the pom.xml snippet above for the Failsafe plugin.

Gradle

We recommend that you modify the test task in your build.gradle in the following way:

test {
  if (project.hasProperty('replay')) {
    String agent = System.getenv('AGENT')
    if (agent == null)
      throw new GradleException("Please set environment variable 'AGENT' to the path of the Cover Replay agent .jar")

    environment('COVER_TARGET', 'com.example.')
    jvmArgs("-javaagent:$agent")
    include('**/Test1.class')
    include('**/Test2.class')
  }
}

With the above, whenever you set the replay property, Gradle will run only the selected tests (Test1 and Test2) with the Replay agent attached, and will set the agent's target package to com.example.

Next, you need to make the environment variable AGENT available to Gradle and set the replay property, like this:

export AGENT=path/to/cover-replay-agent.jar
gradle test -P replay

If you need to record an integration test rather than a regular unit test, then you might have to adapt the snippet above to your setup.

Recording agent

Cover Replay uses a Java agent to record your application.

A Java agent is a .jar file that you attach to your JVM using the -javaagent option of the java command. Java agents can modify the bytecode of classes loaded by the JVM they attach to. Our agent uses this to intercept method calls and get access to their arguments.

By default the Replay agent saves new recordings to the directory ~/.diffblue/rec/ on Linux/macOS and %userprofile%\.diffblue\rec\ on Windows (but see Agent configuration). All recordings you make, regardless of which application or project you record, are saved as sub-folders of this directory. The names of the sub-folders are timestamps of the form 2021-11-08_20-13-21.573.

The size of a recording can vary from a few KB up to 200-300 MB. For instance, the recording we made in the Getting started guide is around 600 KB. Various factors affect the size of a recording, but the two most important ones are:

  • How many different methods are called in the execution you record.
  • How many methods match the COVER_TARGET agent setting (see below).

Your application will run slower and use more memory when you record it. With default agent settings you should expect it to run 2-5x slower and use 2-8x more memory; higher slowdowns are possible. It is often possible to tweak the agent settings to reduce the time overhead. It might also be necessary to increase the maximum heap size of the JVM (option -Xmx). See Agent configuration or contact us for details on how to do this.

Agent configuration

The Cover Replay agent can be configured via environment variables or Java system properties. The following shell snippet provides example values using environment variables:

export COVER_TARGET='com.example.'
export COVER_MAX_IPM='5'
export COVER_MOCKING_ENABLED='y'
export COVER_EXCLUDE='com.example.Person{getName; set*;}, com.example.*Provider'
export COVER_INCLUDE='com.example.PersonProvider'
export COVER_REC_DIR='path/to/dir'
export COVER_LOG_LEVEL='none'

The only mandatory setting is COVER_TARGET (see below); all other settings automatically get safe default values.

The list below explains the meaning and accepted values of every option:

COVER_TARGET
  System property: com.diffblue.cover.target
  Valid values: prefix of a Java package name. Default: "".
  Mandatory. Defines which methods you want to extract tests for, i.e. the methods that may be called in the “act” section of an extracted test. The fewer methods that match this prefix, the smaller the agent overhead and recording size will be.

COVER_MAX_IPM
  System property: com.diffblue.cover.max-ipm
  Valid values: non-negative integer. Default: 5.
  Defines the maximum number of tests that you may extract for any method that lives in the COVER_TARGET. The smaller the number, the smaller the agent overhead and recording size will be.

COVER_MOCKING_ENABLED
  System property: com.diffblue.cover.mocking-enabled
  Valid values: y or n. Default: y.
  When enabled, the agent saves extra information to construct mocks in the extracted tests. Disable this if you believe that no extracted test will need to mock any object; disabling reduces agent overhead and recording size.

COVER_EXCLUDE
  System property: com.diffblue.cover.exclude
  Valid values: method pattern. Default: "".
  The agent will not track any method that matches this pattern (and consequently it will run faster). Use this option to provide a list of classes/methods that you would never want to see in the “arrange” or “act” section of a test. See Including and excluding methods in the recording.

COVER_INCLUDE
  System property: com.diffblue.cover.include
  Valid values: method pattern. Default: "".
  The agent will track methods that match this pattern, even when they were excluded via COVER_EXCLUDE or internally excluded by the agent. See Including and excluding methods in the recording.

COVER_REC_DIR
  System property: com.diffblue.cover.rec-dir
  Valid values: path to a directory. Default: ~/.diffblue/rec on Linux/macOS, %userprofile%\.diffblue\rec\ on Windows.
  The directory where the agent saves new recordings. New recordings become subdirectories of this directory.

COVER_LOG_LEVEL
  System property: com.diffblue.cover.log-level
  Valid values: none, debug, trace. Default: none.
  Defines how much information the agent writes in the agent log. No log is created when set to none. Log location: /tmp/agent/agent.log on Linux, $TMPDIR/agent/agent.log on macOS, %TEMP%\agent\agent.log on Windows.

Including and excluding methods in the recording

The Cover Replay agent intercepts method calls to inspect their arguments. In general, tracking calls to every method in the JVM is unnecessary, and the fewer methods the agent tracks, the faster your application runs.

By default the agent excludes methods in certain well-known libraries from tracking, such as parts of the Spring framework or the Hibernate library. But you can configure exactly which methods are tracked via the agent settings COVER_INCLUDE and COVER_EXCLUDE.

For every method, the agent will decide whether to track calls to it using the following procedure:

  1. If the method matches the COVER_INCLUDE setting, then do track invocations.
  2. Otherwise, if the method matches the COVER_EXCLUDE setting, then do not track invocations.
  3. Otherwise, if the method matches the internal exclusion patterns, then do not track invocations.
  4. Otherwise, do track invocations.
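
The four-step procedure above can be sketched as follows. This is a simplified model using plain Java regular expressions as stand-ins; the agent's real pattern syntax is richer (see Method patterns below):

```java
import java.util.List;
import java.util.regex.Pattern;

// Simplified model of the agent's tracking decision. Plain regexes stand in
// for real Cover method patterns; this is an illustration, not agent code.
public class TrackingDecision {
    static boolean matchesAny(String method, List<Pattern> patterns) {
        return patterns.stream().anyMatch(p -> p.matcher(method).matches());
    }

    // Mirrors the documented order: include wins over exclude, which wins
    // over the internal exclusions; anything else is tracked by default.
    static boolean shouldTrack(String method,
                               List<Pattern> include,
                               List<Pattern> exclude,
                               List<Pattern> internal) {
        if (matchesAny(method, include)) return true;   // 1. COVER_INCLUDE
        if (matchesAny(method, exclude)) return false;  // 2. COVER_EXCLUDE
        if (matchesAny(method, internal)) return false; // 3. internal exclusions
        return true;                                    // 4. track by default
    }

    public static void main(String[] args) {
        List<Pattern> include = List.of(Pattern.compile("com\\.example\\.PersonProvider\\..*"));
        List<Pattern> exclude = List.of(Pattern.compile("com\\.example\\..*Provider\\..*"));
        List<Pattern> internal = List.of(Pattern.compile("org\\.hibernate\\..*"));

        // Included explicitly, even though the exclude pattern also matches:
        System.out.println(shouldTrack("com.example.PersonProvider.get", include, exclude, internal)); // true
        // Excluded by COVER_EXCLUDE, and by an internal exclusion, respectively:
        System.out.println(shouldTrack("com.example.DataProvider.get", include, exclude, internal));   // false
        System.out.println(shouldTrack("org.hibernate.Session.load", include, exclude, internal));     // false
    }
}
```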

The internal exclusion patterns match methods of the following classes:

  • Most classes in org.springframework.
  • Classes in org.aspectj.
  • Classes in sun.security.
  • Classes in org.hibernate.

Method patterns

Agent settings COVER_INCLUDE and COVER_EXCLUDE accept a method pattern. A method pattern is a succinct expression that represents a set of Java methods: those that match the pattern. Here are some examples, together with the methods they match:

  • Matches the method asList and any other method of the class java.util.Arrays whose name starts with s:

    java.util.Arrays { asList; s*; }
    
  • Matches all methods of all classes that live in the package com.example (but not its subpackages) and whose name starts with Ma and ends with n:

    com.example.Ma*n
    
  • Matches the method getName in any class that lives in com.example or its subpackages and whose name ends with n:

    com.example.**n { getName; }
    
  • Matches all methods matched by method patterns P1 or P2:

    P1, P2
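
The class-name part of the examples above can be modeled with two wildcard rules: `*` matches within one package segment, while `**` may also cross package boundaries. The sketch below translates such a class pattern into a Java regex; it is an illustration of the two wildcard rules only, not the agent's actual matcher:

```java
import java.util.regex.Pattern;

// Rough model of the class-name wildcards in Cover method patterns:
// `*` matches any run of characters within one package segment, while
// `**` may also cross package boundaries. Illustration only; this is
// not the agent's actual pattern matcher.
public class ClassPatternSketch {
    static Pattern toRegex(String classPattern) {
        StringBuilder re = new StringBuilder();
        for (int i = 0; i < classPattern.length(); i++) {
            char c = classPattern.charAt(i);
            if (c == '*') {
                boolean doubleStar = i + 1 < classPattern.length()
                        && classPattern.charAt(i + 1) == '*';
                re.append(doubleStar ? ".*" : "[^.]*"); // `**` may cross dots
                if (doubleStar) i++;                    // skip the second '*'
            } else if (c == '.') {
                re.append("\\.");                       // literal package dot
            } else {
                re.append(c);
            }
        }
        return Pattern.compile(re.toString());
    }

    public static void main(String[] args) {
        Pattern single = toRegex("com.example.Ma*n"); // no subpackages
        Pattern crossing = toRegex("com.example.**n"); // subpackages allowed
        System.out.println(single.matcher("com.example.Main").matches());      // true
        System.out.println(single.matcher("com.example.sub.Main").matches());  // false
        System.out.println(crossing.matcher("com.example.sub.Martin").matches()); // true
    }
}
```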
    

Extracting tests from the recording

To extract tests from a recording please use Cover’s option --trace. For instance, in the Getting Started guide we used:

dcover create --trace --verbose

  • If you don’t provide --trace, Cover will write tests using its default test-creation engine.
  • If you provide --trace, Cover will write tests using both its default engine and the execution recording.
  • If you want Cover to only use the execution recording to extract tests (no default engine), please contact us. This is possible, and can be useful to understand what coverage or test inputs you get from the recording alone.

Currently, the --trace option only extracts tests from your last recording.

Using Cover Replay in your project

You can use Cover Replay on your local machine during manual, interactive testing, or in a CI pipeline to create and update tests automatically.

Using Cover Replay on your local machine

You might want to use Cover Replay on your machine:

  • During a one-off testing campaign, to increase the overall coverage of your project.
  • While you evaluate Replay on a new project.
  • While preparing to use Replay in your CI pipeline.

To use Replay on your machine we strongly recommend that you write a small script which:

  1. Runs the application that you want to record, using the same inputs every time.
  2. Calls the dcover command to extract tests.

Automating the execution of the application you want to record is necessary because you want to easily and deterministically run the same application, with the same inputs, every time you create tests. Remember that the Replay-extracted unit tests mimic how methods were called during the recorded execution: recording with different application inputs may cause methods to be called with different arguments, and consequently Replay would extract different tests.

Here is how to do it for the CoreBanking demo project used in the Getting started guide. We created a new script named cover.sh that essentially looks like this:

#!/bin/bash

DCOVER=path/to/dcover
./record.sh
$DCOVER create --trace --verbose

This script calls record.sh (see Getting Started) to record the application, and then calls dcover to extract the tests. If you want to run this example on your local machine, do this:

  1. Get the Cover CLI and unzip it:

    cd ~
    mkdir cover
    cd cover
    unzip path/to/diffblue-cover-cli-<version>-<os>.zip
    
  2. Get the source code of the demo project, compile it, and run Replay on it:

    cd ..
    wget https://github.com/diffblue/CoreBanking/archive/refs/heads/replay-demo.zip
    unzip replay-demo.zip
    cd CoreBanking-replay-demo/
    mvn compile
    mvn dependency:copy-dependencies
    ./cover.sh # record the application and extract tests
    

Using Cover Replay in your CI pipeline

Cover Replay, much like Cover, can be integrated into a CI pipeline so that tests are created and updated by CI. The CI pipeline that you run on every pull request will need to:

  1. Check out the code of the ‘feature’ branch associated with the pull request.
  2. Compile the code, including the application that you are going to record.
  3. Run and record the application.
  4. Call dcover to extract tests from the recording.

Steps 1 and 2 are standard, and your CI pipeline most likely already performs them. To implement steps 3 and 4, please follow the instructions for running Cover Replay locally.