LLM Configuration (Beta)
Diffblue Cover can make use of Large Language Models (LLMs) as a beta feature in order to provide more context-aware inputs to tests. This page describes how to configure LLM usage.
Diffblue Cover supports remote AI/LLM integration to enhance features such as object creation, string generation, and automated test completion. Configuration options for LLMs are managed using cover.properties, environment variables, and system properties, with the same precedence and formatting rules as other Diffblue Cover properties.
Configuring LLMs in cover.properties
The initial use case for Cover using LLMs is to produce more interesting String inputs. Multiple named LLM connections can be defined in ~/.diffblue/cover.properties so that experiments switching between various configurations can be performed with minimal fuss. The following example configuration defines a connection named "external" and configures String creation to make use of it:
cover.llm.strings=external
cover.llm.api.external.provider=OpenAI
cover.llm.api.external.model=gpt-4
cover.llm.api.external.key=sk-proj-ABCDEFGHIJKLM

This configuration tells Cover to enable sourcing strings from the "external" LLM connection, which is then configured to use OpenAI's gpt-4 model with a specific key.
Some organisations have a custom LLM installation with embedded domain-specific knowledge, accessible at some internal URL. To begin comparisons with this you could define a new "internal" connection using these new settings:
cover.llm.api.internal.provider=Anthropic
cover.llm.api.internal.model=claude-sonnet-4-5
cover.llm.api.internal.key=sk-ant-NOPQRSTUVWXYZ
cover.llm.api.internal.url=https://claude.internal.example.com

Now all that remains is to switch to use this new connection for String inputs:
cover.llm.strings=internal

Environment Checks
During the environment checks phase the detected configuration will be logged (and shown in the console when running in --verbose mode), indicating which named configuration was picked up:
INFO Detected LLM 'internal' configuration

Some minimal validation is performed as part of the environment checks to ensure that Cover will have enough information in order to connect later on. If you've mistyped a provider so that it's not supported, provided a non-numeric token limit, or Cover has not been able to identify a default URL to use, then the environment checks will fail with an error along the following lines:
ERROR E158: LLM misconfigured
The provided configuration for calling out to an LLM is invalid.
- Postprocessing should be true or false but found: yes
special:
- Unknown provider: anthropomorphic
- Provider should be one of Anthropic, AzureML, Copilot, Llama, OpenAI
- Missing url
- Invalid prompt token limit: 6000x
provider: anthropomorphic
model: claude-sonnet-4-5
key: *******
url:
deployment:
preferred:
- Invalid prompt token limit: 6000x
provider: OpenAI
model: gpt-5
key: *******
url: http://openai.internal.example.com/
deployment: internal
Please refer to configuration documentation.
Note that these checks do not currently include any active communication with the provider, and so invalid URLs, models, and keys may still cause failures later in the process.
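Since the checks don't reach out to the provider, it can be worth verifying a key and endpoint by hand before a long run. As a rough sketch for an OpenAI connection, a key (placeholder shown) can be checked against OpenAI's model listing endpoint; for a connection with a custom url, substitute your internal endpoint:

curl -s https://api.openai.com/v1/models -H "Authorization: Bearer sk-proj-ABCDEFGHIJKLM"

A successful response lists the models available to that key, while an HTTP 401 suggests the same key would also fail when Cover connects later.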
Configuration Properties
cover.llm.strings
Enables LLM string creation; set to the name of the LLM connection to use for String inputs (e.g. external).
cover.llm.postprocessing
Whether to post-process LLM results (default: true, set to false to disable).
cover.llm.api.<name>.provider
Name of the LLM provider for this connection; one of Anthropic, AzureML, Copilot, Llama, OpenAI.
cover.llm.api.<name>.model
Name of the LLM model to use for this connection.
cover.llm.api.<name>.key
API key for this connection's provider.
cover.llm.api.<name>.url
Custom API endpoint, if applicable.
cover.llm.api.<name>.deployment
Optional deployment specifier.
cover.llm.api.<name>.tokens.prompt
Token limit for LLM prompts (optional, default depends on provider/model).
cover.llm.api.<name>.tokens.response
Token limit for LLM responses (optional, default depends on provider/model).
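Taken together, a fully specified connection might look like the following sketch, here an illustrative AzureML-hosted deployment with explicit token limits (all values are placeholders to adapt, not a recommended configuration):

cover.llm.strings=azure
cover.llm.postprocessing=true
cover.llm.api.azure.provider=AzureML
cover.llm.api.azure.model=gpt-4
cover.llm.api.azure.key=azure-key-placeholder
cover.llm.api.azure.url=https://ml.internal.example.com
cover.llm.api.azure.deployment=gpt-4-production
cover.llm.api.azure.tokens.prompt=6000
cover.llm.api.azure.tokens.response=1000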
Applying Changes
When adjusting cover.properties, ensure Cover is restarted (and IntelliJ, if using the Cover Plugin) so that the new settings are loaded by the JVM.
In time it is expected that Cover will automatically identify when to use LLM inputs based on various heuristics. For now the @InTestsUseLLM annotation is needed to identify the package / class / method context where LLM inputs should be provided. Please refer to LLM Input Annotations (Beta) for guidance on using the annotation.
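For illustration, a minimal sketch of the annotation placed on a class under test, assuming the annotation lives in Diffblue's usual annotations package (verify the exact import against the LLM Input Annotations (Beta) page):

// Import path assumed; check the annotations documentation for the exact coordinates
import com.diffblue.cover.annotations.InTestsUseLLM;

// Asks Cover to provide LLM-sourced inputs when writing tests for this class
@InTestsUseLLM
public class GreetingFormatter {
    public String format(String salutation, String name) {
        return salutation + ", " + name + "!";
    }
}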
OpenAI Example
To enable string creation with a connection named "default" using OpenAI's gpt-4 model, use the following in ~/.diffblue/cover.properties:
cover.llm.strings=default
cover.llm.api.default.provider=OpenAI
cover.llm.api.default.model=gpt-4
cover.llm.api.default.key=sk-proj-ABCDEFGHIJKLM

Anthropic / Claude Example
Alternatively, to do the same but using Anthropic's claude-sonnet-4-5 model, use the following:
cover.llm.strings=default
cover.llm.api.default.provider=Anthropic
cover.llm.api.default.model=claude-sonnet-4-5
cover.llm.api.default.key=sk-ant-NOPQRSTUVWXYZ

This structure ensures LLM configuration is managed consistently with all other Diffblue Cover properties and can be overridden at runtime as necessary for flexibility in various execution and CI environments.
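For example, a connection choice committed to cover.properties could be switched for a single run via a Java system property, as with other Cover properties. A sketch, assuming Cover's JVM is invoked directly and that system properties take precedence over the file, as is conventional (the jar name and class are illustrative):

java -Dcover.llm.strings=internal -jar dcover.jar create com.example.ExampleService

Here the "internal" connection is used for that invocation only, without editing ~/.diffblue/cover.properties.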