frontier.utils.quantumbenchmark
Classes

Benchmark – Abstract base class for all quantum device benchmarks.
- class Benchmark(number_of_qubits: int, sample_size: int = 10, *, emitter_options: QasmEmitterOptions | None = None, format: str | None = None, target_sdk: str | None = None, shots: int | None = 1, benchmark_metadata: Dict[str, Any] | None = None, print_defaults: bool = False, workdir: str | Path | None = None, benchmark_id: str | None = None, auto_save: bool = True)[source]
Bases: ABC

Abstract base class for all quantum device benchmarks.
This class provides shared configuration and functionality for concrete benchmark implementations.
- Responsibilities:
- Store common configuration such as:
number_of_qubits
sample_size
emitter_options (QasmEmitterOptions)
- Provide a common API:
create_benchmark() – generate benchmark samples.
to_json_dict() – produce a JSON-serialisable payload.
save_json() – write the benchmark to disk.
load_json() / from_json_dict() – reconstruct from JSON.
Provide helpers for attaching and evaluating experimental results.
- Subclasses must implement:
_create_single_sample()
evaluate_benchmark()
- Subclasses may override:
_compute_number_of_measurements()
- BENCHMARK_NAME
Logical name of the benchmark, written to JSON and used when reloading. Subclasses are expected to override this constant.
- Type:
ClassVar[str]
- DEFAULT_WORKDIR
Default directory where benchmarks are stored if no explicit path is given.
- Type:
ClassVar[pathlib.Path]
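The responsibilities above can be sketched as a minimal subclass. The code below uses a stripped-down stand-in for the Benchmark ABC so it runs on its own; the real base class lives in frontier.utils.quantumbenchmark and carries far more configuration (emitter options, workdir handling, JSON I/O). The GhzBenchmark subclass is hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

# Minimal stand-in for the real Benchmark ABC (illustration only).
class Benchmark(ABC):
    BENCHMARK_NAME = "benchmark"

    def __init__(self, number_of_qubits: int, sample_size: int = 10):
        self.number_of_qubits = number_of_qubits
        self.sample_size = sample_size
        self.samples: List[Dict[str, Any]] | None = None

    @abstractmethod
    def _create_single_sample(self, sample_id: int) -> Dict[str, Any]: ...

    @abstractmethod
    def evaluate_benchmark(self) -> Any: ...

    def create_benchmark(self) -> List[Dict[str, Any]]:
        # One call to _create_single_sample() per sample, as documented.
        self.samples = [self._create_single_sample(i) for i in range(self.sample_size)]
        return self.samples

# Hypothetical concrete benchmark: GHZ-state preparation circuits.
class GhzBenchmark(Benchmark):
    BENCHMARK_NAME = "ghz"

    def _create_single_sample(self, sample_id: int) -> Dict[str, Any]:
        return {
            "sample_id": sample_id,
            "sample_metadata": {},
            "circuits": [
                {
                    "circuit_id": f"{sample_id}_ghz_0",
                    "observable": None,
                    "qasm": "OPENQASM 3.0; // circuit body elided",
                    "metadata": {},
                }
            ],
        }

    def evaluate_benchmark(self) -> Dict[str, Any]:
        # A real implementation would read the attached experimental results.
        return {"benchmark": self.BENCHMARK_NAME, "n_samples": len(self.samples or [])}
```

Only the two abstract methods must be supplied; everything else is inherited.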
- add_experimental_results(counts_data: Dict[str, Dict[str, int]] | List[Dict[str, int]], *, experiment_id: str | None = None, platform: str = 'Unknown', experiment_metadata: Dict[str, Any] | None = None, auto_save: bool | None = None, save_to: str | Path | None = None) None[source]
Attach experimental results (counts) to this benchmark.
This populates experimental_results in a shape that matches the “experimental_results” block of BENCHMARK_JSON_SCHEMA.
The counts data can be provided in two forms:
Dict keyed by circuit_id (recommended):
counts_data = {
    "0_stab_0": {"000": 512, "111": 488},
    "0_destab_0": {"000": 500, "111": 500},
    ...
}
List in the same order as the circuits appear in self.samples:
counts_data = [
    {"000": 512, "111": 488},  # for first circuit
    {"000": 500, "111": 500},  # for second circuit
    ...
]
- Parameters:
counts_data – Either a mapping circuit_id -> {state: count}, or a list of {state: count} dictionaries ordered by circuit traversal.
experiment_id – Identifier for this particular experimental run. If None, benchmark_id is used.
platform – Name of the platform / backend (for example, “ibm”, “ionq”, “simulator”).
experiment_metadata – Optional additional metadata about the run.
auto_save – Optional override for the instance-level auto_save setting before saving the updated benchmark JSON.
save_to – Optional file path or directory where the updated JSON file should be stored. See save_json() for resolution rules.
- Raises:
ValueError – If samples have not been generated or if the number of provided count entries does not match the number of circuits.
TypeError – If counts for a circuit are not provided as a dict mapping bitstring to count, or if counts cannot be converted to integers.
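The validation described above can be sketched as a standalone helper (hypothetical; not the library's code) that accepts either input form and normalises it to a dict keyed by circuit_id:

```python
from typing import Dict, List, Union

def normalize_counts(
    counts_data: Union[Dict[str, Dict[str, int]], List[Dict[str, int]]],
    circuit_ids: List[str],
) -> Dict[str, Dict[str, int]]:
    # Accept either the dict form (keyed by circuit_id) or the list form
    # (ordered like the circuits); return a dict keyed by circuit_id.
    if isinstance(counts_data, dict):
        missing = [cid for cid in circuit_ids if cid not in counts_data]
        if missing:
            raise ValueError(f"missing counts for circuits: {missing}")
        ordered = [counts_data[cid] for cid in circuit_ids]
    else:
        ordered = list(counts_data)
        if len(ordered) != len(circuit_ids):
            raise ValueError(
                f"got {len(ordered)} count entries for {len(circuit_ids)} circuits"
            )
    result = {}
    for cid, counts in zip(circuit_ids, ordered):
        if not isinstance(counts, dict):
            raise TypeError(f"counts for {cid!r} must be a dict of state -> count")
        result[cid] = {state: int(n) for state, n in counts.items()}
    return result
```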
- create_benchmark(sample_size: int | None = None, auto_save: bool | None = None, save_to: str | Path | None = None) List[Dict[str, Any]][source]
Create and store benchmark samples in memory.
This method calls _create_single_sample() once per sample and stores the resulting list on samples. Optionally, it will also serialize the benchmark to JSON on disk immediately after generation.
- Parameters:
sample_size – Optional override for the number of samples to generate. If provided, this value replaces the existing sample_size.
auto_save – Optional override for the instance-level auto_save setting. If not None, the instance’s auto_save flag is updated before any saving occurs.
save_to – Optional explicit file path or directory to write the JSON to. If omitted and auto_save is True, the default filename ({benchmark_id}.json under workdir) is used. If save_to refers to a directory, the benchmark is saved inside that directory using the default filename.
- Returns:
A list of sample dictionaries. Each sample is expected to conform to the JSON schema for “sample” objects, for example:
{
    "sample_id": int,
    "sample_metadata": {...},
    "circuits": [
        {
            "circuit_id": str,
            "observable": str | null,
            "qasm": str,
            "metadata": {...},
        },
        ...
    ],
}
- Return type:
list[dict[str, Any]]
- abstractmethod evaluate_benchmark() Any[source]
Evaluate this benchmark using attached experimental results.
Subclasses must implement this method.
- Typical responsibilities:
Read samples and experimental_results.
Compute benchmark-specific quantities (for example, expectation values, fidelities, success probabilities, volumes).
Optionally store derived quantities back into experimental_results or additional attributes.
- Returns:
A benchmark-specific evaluation result, for example, a dict of metrics. The exact shape is defined by the subclass.
- Return type:
Any
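As an illustration of a benchmark-specific quantity an evaluate_benchmark() implementation might compute, the sketch below (hypothetical) derives a success probability from a single circuit's counts, taking the all-zeros and all-ones outcomes as the ideal results of a GHZ-state preparation:

```python
def success_probability(counts: dict[str, int]) -> float:
    # Fraction of shots in the all-zeros or all-ones state: the ideal
    # outcomes for a GHZ-state preparation circuit (assumed metric).
    total = sum(counts.values())
    if total == 0:
        raise ValueError("total shot count is zero")
    n = len(next(iter(counts)))  # number of measured bits
    return (counts.get("0" * n, 0) + counts.get("1" * n, 0)) / total
```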
- expected_value(counts: dict[str, int], pauli: str, *, little_endian: bool = False) float[source]
Compute the expectation value of a Pauli operator from counts.
The input is a classical bitstring distribution and a tensor-product Pauli label. The bitstrings are assumed to correspond to Z-basis measurements on each qubit.
- Parameters:
counts – Mapping from bitstring (for example, “010”) to counts.
pauli – Pauli label, for example, “XYZ”, “+XZI”, or “-XYI”. The first character may be ‘+’ or ‘-’; if omitted, ‘+’ is assumed.
little_endian – If True, interpret bitstrings Qiskit-style: the rightmost bit corresponds to qubit 0 (least significant).
- Returns:
Expectation value in the interval [-1, 1].
- Return type:
float
- Raises:
ValueError – If counts is empty, the Pauli length does not match the bitstring length, or the total shot count is zero.
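A standalone sketch of the parity computation described above (not the library's code): each non-identity Pauli position contributes a factor of (-1)**bit, assuming any basis rotations were already applied in the circuit before the Z-basis measurement.

```python
def expected_value(counts: dict[str, int], pauli: str, *, little_endian: bool = False) -> float:
    # Compute <P> from a bitstring distribution and a Pauli label such as
    # "ZZ", "+XZI", or "-XYI" (leading '+'/'-' is an optional sign).
    if not counts:
        raise ValueError("counts is empty")
    sign = 1.0
    if pauli[0] in "+-":
        sign = -1.0 if pauli[0] == "-" else 1.0
        pauli = pauli[1:]
    total = sum(counts.values())
    if total == 0:
        raise ValueError("total shot count is zero")
    acc = 0
    for bitstring, n in counts.items():
        if len(bitstring) != len(pauli):
            raise ValueError("Pauli length does not match bitstring length")
        # Qiskit-style little-endian: reverse so bit i aligns with pauli[i].
        bits = bitstring[::-1] if little_endian else bitstring
        parity = sum(int(b) for b, p in zip(bits, pauli) if p != "I")
        acc += (-1) ** parity * n
    return sign * acc / total
```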
- classmethod from_json_dict(data: Dict[str, Any], *, validate_schema: bool = True, strict_benchmark_name: bool = True) B[source]
Construct an instance of cls from a JSON dictionary.
This is the in-memory counterpart of to_json_dict(). It assumes that data conforms to BENCHMARK_JSON_SCHEMA and expects (at minimum) the keys “schema_version”, “number_of_qubits” and “sample_size”.
- Parameters:
data – Parsed JSON object (for example, returned by json.load()).
validate_schema – If True, validates data against BENCHMARK_JSON_SCHEMA using jsonschema.
strict_benchmark_name – If True, checks that the “benchmark_name” field (if present) matches BENCHMARK_NAME of cls. If the field is present and does not match, a ValueError is raised.
- Returns:
An instance of cls with samples, benchmark_metadata and configuration loaded from data.
- Return type:
B
- Raises:
ValueError – If the schema version is missing or incompatible, if benchmark names do not match (when strict_benchmark_name is True), or if the JSON fails validation against BENCHMARK_JSON_SCHEMA.
- get_all_circuit_ids() List[str][source]
Return a flat list of all circuit IDs in traversal order.
The order matches the internal sample/circuit ordering, which is the same order used when counts are provided as a flat list.
- Returns:
All circuit IDs in the order they appear in samples.
- Return type:
list[str]
- Raises:
ValueError – If samples have not been generated or loaded.
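The traversal order is a plain nested flattening. The sketch below uses a hypothetical samples structure shaped like the documented schema; the resulting flat list is the order in which get_all_circuit_ids() is documented to return IDs (and in which list-form counts are matched up):

```python
# Hypothetical samples structure (schema fields trimmed for brevity).
samples = [
    {"sample_id": 0, "circuits": [{"circuit_id": "0_stab_0"}, {"circuit_id": "0_destab_0"}]},
    {"sample_id": 1, "circuits": [{"circuit_id": "1_stab_0"}]},
]

# Flatten: all circuits of sample 0, then all circuits of sample 1, ...
circuit_ids = [c["circuit_id"] for s in samples for c in s["circuits"]]
# circuit_ids == ["0_stab_0", "0_destab_0", "1_stab_0"]
```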
- get_all_circuits() List[str][source]
Return a flat list of QASM strings in canonical order.
The order matches the benchmark’s internal sample/circuit ordering:
samples[0].circuits[0].qasm
samples[0].circuits[1].qasm
...
samples[1].circuits[0].qasm
...
- Returns:
One QASM string per circuit.
- Return type:
list[str]
- Raises:
ValueError – If samples have not been generated or loaded.
- classmethod load_json(filepath: str | Path, *, validate_schema: bool = True, strict_benchmark_name: bool = True) B[source]
Load benchmark JSON from a file and construct an instance.
This is a convenience wrapper around from_json_dict(). After loading, the instance’s workdir is set to the directory containing the JSON file.
- Parameters:
filepath – Path to the JSON file to load.
validate_schema – If True, validates the parsed JSON object against BENCHMARK_JSON_SCHEMA.
strict_benchmark_name – If True, checks that the “benchmark_name” field (if present) matches BENCHMARK_NAME of cls. See from_json_dict() for details.
- Returns:
An instance of cls constructed from the JSON file.
- Return type:
B
- Raises:
ValueError – If the file content fails schema or benchmark-name checks in from_json_dict().
OSError – If the file cannot be opened or read.
json.JSONDecodeError – If the file does not contain valid JSON.
- save_json(filepath: str | Path | None = None, *, global_metadata: Dict[str, Any] | None = None, indent: int = 2) Path[source]
Serialize the benchmark to a JSON file on disk.
The target path is resolved according to the following rules:
If filepath is None: the benchmark is stored under workdir with an auto-generated filename (see _default_filename()).
If filepath is an existing directory: the benchmark is stored inside that directory with an auto-generated filename, and workdir is updated to that directory.
If filepath refers to a non-existing path with no suffix (for example, "results"): it is treated as a directory. The directory is created if needed, the benchmark is stored inside it with an auto-generated filename, and workdir is updated to that directory.
If filepath is a bare filename (no directory component): the file is stored under workdir with that name.
If filepath contains a directory component (absolute or relative), for example "results/foo.json" or "/tmp/foo.json": the file is stored exactly at that path, workdir is updated to the parent directory, and any missing parent directories are created.
- Parameters:
filepath – Optional target path. See rules above.
global_metadata – Optional global metadata to include in the JSON payload. If omitted, benchmark_metadata is used.
indent – Indentation level passed to json.dump().
- Returns:
The final path where the JSON file was saved.
- Return type:
pathlib.Path
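The resolution rules above can be sketched with pathlib. This is a hypothetical helper mirroring the documented behaviour, not the library's implementation (it returns the final file path and omits the workdir bookkeeping):

```python
from pathlib import Path

def resolve_save_path(filepath, workdir, default_filename):
    workdir = Path(workdir)
    if filepath is None:
        # Rule 1: no path given; use workdir and the auto-generated name.
        return workdir / default_filename
    p = Path(filepath)
    if p.is_dir():
        # Rule 2: existing directory; store inside it under the default name.
        return p / default_filename
    if not p.exists() and p.suffix == "":
        # Rule 3: non-existing path with no suffix; treat it as a directory.
        p.mkdir(parents=True, exist_ok=True)
        return p / default_filename
    if p.parent == Path("."):
        # Rule 4: bare filename; store under workdir.
        return workdir / p.name
    # Rule 5: explicit directory component; store exactly there.
    p.parent.mkdir(parents=True, exist_ok=True)
    return p
```

Note that the no-suffix check must run before the bare-filename check, since a bare name like "results" matches both.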
- to_json_dict(*, global_metadata: Dict[str, Any] | None = None) Dict[str, Any][source]
Build a JSON-serialisable dictionary representing the benchmark.
The returned dictionary is designed to conform to BENCHMARK_JSON_SCHEMA.
- Parameters:
global_metadata – Optional metadata to include at the top level of the JSON payload. If omitted, benchmark_metadata is used.
- Returns:
A JSON-serialisable dictionary ready to be passed to json.dump().
- Return type:
dict[str, Any]
- Raises:
ValueError – If samples is None (that is, the benchmark data has not been generated yet).
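A to_json_dict() payload is meant to survive a round trip through json.dump()/json.load() and back through from_json_dict(). The sketch below builds a hypothetical payload shaped like the documented schema (field names beyond "schema_version", "number_of_qubits", and "sample_size" are assumptions) and checks the minimum keys from_json_dict() is documented to require:

```python
import json

# Hypothetical payload shaped like the documented schema.
payload = {
    "schema_version": "1.0",
    "benchmark_name": "ghz",
    "number_of_qubits": 3,
    "sample_size": 1,
    "benchmark_metadata": {},
    "samples": [
        {
            "sample_id": 0,
            "sample_metadata": {},
            "circuits": [
                {"circuit_id": "0_ghz_0", "observable": None,
                 "qasm": "OPENQASM 3.0; // body elided", "metadata": {}},
            ],
        }
    ],
}

# Round-trip through JSON, then check the documented minimum keys.
restored = json.loads(json.dumps(payload, indent=2))
required = ("schema_version", "number_of_qubits", "sample_size")
missing = [k for k in required if k not in restored]
```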