frontier.freefermionvolumebenchmark
Free fermion volumetric benchmark implementation.
- class FreeFermionVolumeBenchmark(number_of_qubits: int, sample_size: int = 10, **kwargs: Any)
Bases: Benchmark
Free-fermion volumetric benchmark based on a random SO(2N) Gaussian unitary.
For each sample:
- Draw a random O ∈ SO(2N) (N = number_of_qubits).
- Decompose it into Givens rotations plus a diagonal ±1 matrix using so_decomposition.
- Build a free-fermion circuit from these rotations.
- Compute a Pauli correction M from the diagonal ±1 data.
- Construct measurement circuits for each Majorana operator.
- Export each measurement circuit as QASM plus a Pauli observable string.
The per-sample dictionary matches the generic benchmark schema:
{
  "sample_id": int,
  "sample_metadata": {...},
  "circuits": [
    {
      "circuit_id": str,
      "observable": str,
      "qasm": str,
      "metadata": {...}
    },
    ...
  ]
}
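The first step above, drawing a Haar-random O ∈ SO(2N), can be sketched in plain NumPy via the QR decomposition of a Gaussian matrix. The helper `random_so` below is hypothetical, for illustration only; the sampler actually used by the class is not specified here:

```python
import numpy as np

def random_so(n: int, rng: np.random.Generator) -> np.ndarray:
    """Draw a Haar-random orthogonal matrix with determinant +1 (SO(n))."""
    A = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(A)
    # Rescale columns so the diagonal of R is positive; this makes the
    # QR-based draw Haar-distributed over O(n).
    Q = Q * np.sign(np.diag(R))
    # Flip one column if needed to land in SO(n) (det = +1).
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

N = 3                                  # stand-in for number_of_qubits
O = random_so(2 * N, np.random.default_rng(0))
```

The resulting O satisfies O @ O.T ≈ I and det(O) ≈ 1, the two properties the decomposition step relies on.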
- evaluate_benchmark(*, auto_save: bool | None = None, save_to: str | Path | None = None) → Dict[str, List[float]]
Evaluate the Free-Fermion benchmark using experimental results.
For each sample, the expectation values from its measurement circuits are combined into two benchmark metrics:
parallel_value = dot( O[state_index, indices], EVs )
orthogonal_value = dot( O[random_row, indices], EVs )
- where:
O is the SO(2N) orthogonal matrix for the sample,
state_index is the prepared Majorana index,
indices = self._compute_measurement_indices(O, state_index),
EVs is the vector of expectation values (one per circuit),
random_row is any j ≠ state_index.
The first quantity should be close to 1 for well-performing hardware (or an ideal simulator), while the second should be close to 0.
Each circuit’s expectation value and standard error are stored in-place inside self.experimental_results["results"][circuit_id].
- Returns:
Dictionary with keys:
parallel_values — projected signal values
orthogonal_values — projected null values
- Return type:
dict[str, list[float]]
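The two metrics can be illustrated in isolation with NumPy. As a simplifying assumption for this sketch, the measured indices are taken to cover all 2N columns and the expectation values are taken to be the ideal, noise-free row O[state_index] itself; orthogonality of O then forces parallel_value = 1 and orthogonal_value = 0:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 4

# Stand-in for the sample's SO(2N) matrix: QR of a Gaussian matrix,
# with column signs fixed so that det(O) = +1.
A = rng.standard_normal((2 * N, 2 * N))
O, R = np.linalg.qr(A)
O = O * np.sign(np.diag(R))
if np.linalg.det(O) < 0:
    O[:, 0] = -O[:, 0]

state_index = 0
indices = np.arange(2 * N)        # assumed: every Majorana index is measured
EVs = O[state_index, indices]     # assumed: ideal expectation values

parallel_value = O[state_index, indices] @ EVs    # unit row norm -> 1
random_row = 3                                    # any j != state_index
orthogonal_value = O[random_row, indices] @ EVs   # orthogonal rows -> 0
```

On real hardware the EVs are estimated from measurement data, so parallel_value drifts below 1 and orthogonal_value fluctuates around 0; the gap between them is the benchmark signal.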
- plot_all_expectation_values() → None
Plot parallel and orthogonal projection values across all samples.
Plots projection values (with approximate standard error bars) across the entire benchmark, with separate markers for:
Parallel values (stabilizer-like).
Orthogonal values (destabilizer-like).
Requires evaluate_benchmark() to have been run so that self.experimental_results["evaluation"] is populated.
- Raises:
ValueError – If experimental results or evaluation entries are missing, or if shots is not a positive integer.
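The error bars are described as approximate. A common binomial-style estimate for the standard error of a ±1-valued Pauli expectation estimated from `shots` repetitions is sqrt((1 − EV²)/shots); the helper below is an assumption about the estimator for illustration, not necessarily the one this class uses:

```python
import numpy as np

def approx_std_err(ev: float, shots: int) -> float:
    """Approximate standard error of a +/-1-valued expectation estimate."""
    # Mirrors the documented check: shots must be a positive integer.
    if not (isinstance(shots, int) and shots > 0):
        raise ValueError("shots must be a positive integer")
    return float(np.sqrt((1.0 - ev**2) / shots))
```

Note the error vanishes as |EV| → 1 and is largest at EV = 0, so orthogonal-value error bars are typically wider than parallel-value ones.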
- plot_expectation_histograms(bins: int = 20) → None
Plot histograms of parallel and orthogonal projection values.
This is useful for understanding the distribution / quality of the projection values across the entire benchmark.
Requires evaluate_benchmark() to have been run so that self.experimental_results["evaluation"] is populated.
- Parameters:
bins – Number of histogram bins to use.
- Raises:
ValueError – If experimental results or evaluation entries are missing.
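A minimal sketch of the binning this method performs, using np.histogram directly on stand-in evaluation output (the synthetic values below are assumptions; the plotting itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in evaluation output: parallel values clustered near 1,
# orthogonal values fluctuating around 0.
parallel_values = 1.0 - np.abs(rng.normal(0.0, 0.05, size=200))
orthogonal_values = rng.normal(0.0, 0.05, size=200)

bins = 20  # matches the method's default
p_counts, p_edges = np.histogram(parallel_values, bins=bins)
o_counts, o_edges = np.histogram(orthogonal_values, bins=bins)
```

Well-performing hardware shows two well-separated histograms; overlap between them indicates noise washing out the benchmark signal.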