3. qopt API Documentation
3.1. amplitude_function module
This class is designed to express a functional relationship between the optimization parameters, which can be directly controlled, and the control amplitudes, which appear as factors in the Hamiltonian.
If the Hamiltonian is given as the sum of a drift Hamiltonian and a control Hamiltonian, whose operators are multiplied with time dependent control amplitudes,
\[H(t) = H_{drift}(t) + \sum_k u_k(t) H_k ,\]
then this class describes the control amplitudes u as functions of the optimization parameters x:
\[u_k(t) = u_k(x(t))\]
The AmplitudeFunction class is used as attribute of the Solver class.
3.1.1. Classes
AmplitudeFunction
Abstract base class of the amplitude function.
IdentityAmpFunc
The transferred optimization parameters are the control amplitudes.
UnaryAnalyticAmpFunc
An amplitude function which can be given by a unary function.
CustomAmpFunc
Applies function handles specified by the user at initialization.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class AmplitudeFunction[source]
Bases:
ABC
Abstract base class of the amplitude function.
- abstract derivative_by_chain_rule(deriv_by_ctrl_amps, x)[source]
Calculates the derivatives of some function f with respect to the optimization parameters x, given x and the derivatives with respect to the control amplitudes. The calculation is performed using the chain rule: df/dx = df/du * du/dx.
- Parameters
deriv_by_ctrl_amps (np.array, shape (num_t, num_f, num_ctrl)) – The gradients of num_f functions by num_ctrl different pulses at num_t different time steps, i.e. the derivatives df/du.
x (np.array) – Optimization parameters of shape (num_t, num_par), where num_t is the number of time slices and num_par the number of different optimization parameters.
- Returns
deriv_by_opt_par – The derivatives by the optimization parameters.
- Return type
np.array, shape: (num_t, num_f, num_par)
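As an illustration, the chain rule with the shapes documented above can be sketched with plain numpy (the array names here are hypothetical and not part of the qopt API):

```python
import numpy as np

num_t, num_f, num_ctrl, num_par = 4, 2, 3, 5

# df/du: gradients of num_f functions by the control amplitudes
deriv_by_ctrl_amps = np.random.rand(num_t, num_f, num_ctrl)
# du/dx: derivatives of the control amplitudes by the optimization
# parameters, shape (num_t, num_par, num_ctrl) as in CustomAmpFunc
du_dx = np.random.rand(num_t, num_par, num_ctrl)

# chain rule df/dx = df/du * du/dx, contracted over the control index
deriv_by_opt_par = np.einsum('tfc,tpc->tfp', deriv_by_ctrl_amps, du_dx)
print(deriv_by_opt_par.shape)  # (4, 2, 5)
```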
- class CustomAmpFunc(value_function: Callable[[ndarray], ndarray], derivative_function: Callable[[ndarray], ndarray])[source]
Bases:
AmplitudeFunction
A general amplitude function which is applied to the amplitude values.
- Parameters
value_function (Callable array to array) – This function expresses the functional dependency of the control amplitudes on the optimization parameters. The function receives the optimization parameters x as an array of shape (num_t, num_par) and must return the control amplitudes u as an array of shape (num_t, num_ctrl), where num_t is the number of time slices, num_par the number of optimization parameters and num_ctrl the number of control operators in the Hamiltonian.
derivative_function (Callable array to array) – This function describes the derivative of the control amplitudes by the optimization parameters. The function receives the optimization parameters x as an array of shape (num_t, num_par) and must return the derivatives of the control amplitudes by the optimization parameters as an array of shape (num_t, num_par, num_ctrl).
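A hypothetical value_function/derivative_function pair, here for the element-wise map u = x**2 with num_par equal to num_ctrl, could look as follows (a numpy sketch; such a pair could then be passed to CustomAmpFunc):

```python
import numpy as np

def value_function(x):
    # control amplitudes u = x**2, shape (num_t, num_ctrl);
    # here num_par == num_ctrl so the shapes coincide
    return x ** 2

def derivative_function(x):
    num_t, num_par = x.shape
    # du/dx has shape (num_t, num_par, num_ctrl); for the element-wise
    # map u_k = x_k**2 it is diagonal in the (par, ctrl) indices
    deriv = np.zeros((num_t, num_par, num_par))
    idx = np.arange(num_par)
    deriv[:, idx, idx] = 2 * x
    return deriv

x = np.linspace(0., 1., 8).reshape(4, 2)    # (num_t=4, num_par=2)
print(value_function(x).shape)       # (4, 2)
print(derivative_function(x).shape)  # (4, 2, 2)
```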
- class IdentityAmpFunc[source]
Bases:
AmplitudeFunction
The control amplitudes are identical with the optimization parameters.
- class UnaryAnalyticAmpFunc(value_function: Callable[[float], float], derivative_function: Callable[[float], float])[source]
Bases:
AmplitudeFunction
A unary analytic amplitude function which is applied to each amplitude value. This class can be used for every application case where all transferred parameters are mapped one-to-one to the control amplitudes by a single unary function.
- Parameters
value_function (Callable float to float) – This scalar function expresses the functional dependency of the control amplitudes on the optimization parameters. The function is vectorized internally.
derivative_function (Callable float to float) – This scalar function describes the derivative of the control amplitudes. The function is vectorized internally.
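For example, passing np.exp for both arguments would realize exponential control amplitudes; since the map is one-to-one, the chain rule reduces to an element-wise product (a numpy sketch, not qopt code):

```python
import numpy as np

# unary scalar map and its derivative, as would be passed to
# UnaryAnalyticAmpFunc(value_function=np.exp, derivative_function=np.exp)
value_function = np.exp
derivative_function = np.exp

x = np.array([[0.0, 0.1], [0.2, 0.3]])   # (num_t, num_par)
u = value_function(x)                    # vectorized element-wise

# for a one-to-one map, the chain rule df/dx = df/du * du/dx
# reduces to an element-wise product
df_du = np.ones_like(u)
df_dx = df_du * derivative_function(x)
print(np.allclose(df_dx, np.exp(x)))  # True
```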
3.2. analyser module
The Analyser class operates on an instance of the DataContainer class and offers various convenience functions for the visualization and analysis of data acquired during the optimizations.
These features include the plotting of cost functions, either for a single optimization run or for multiple runs. Functions are also included for calculating the computational time spent on the various cost functions and, if available, on their gradients.
3.2.1. Classes
Analyser
Holds convenience functions to visualize the optimization.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class Analyser(data: DataContainer)[source]
Bases:
object
Holds convenience functions to visualize the optimization.
The Analyser class can be used to make plots of the optimization data stored in an instance of the DataContainer class. This can be useful to judge the performance of optimization algorithms and investigate how fast the convergence is and whether the algorithm has fully converged.
- absolute_costs() ndarray [source]
Calculates for each optimization run and each iteration in the optimization algorithm the sum of the costs.
- Returns
costs – The sum of the costs.
- Return type
numpy array, shape (n_runs, n_iter)
- integral_cost_fkt_times(n: int = 0) ndarray [source]
Sum of the time required for the evaluation of the cost function.
- Parameters
n (int, optional) – Number of the optimization run. Defaults to 0.
- Returns
integral_times – Integrated time required for the cost function evaluation.
- Return type
np.array
- integral_grad_fkt_times(n: int = 0)[source]
Sum of the time required for the evaluation of the cost function gradients.
- Parameters
n (int, optional) – Number of the optimization run. Defaults to 0.
- Returns
integral_times – Integrated time required for the cost function gradient evaluation.
- Return type
np.array
- property n_least_square: int
Returns the number of the optimization run which yields the smallest total costs.
The total cost is measured as squared sum of the final cost function values.
- Returns
n_least_square – Number of optimization run with smallest final costs.
- Return type
int
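The selection rule can be sketched in numpy (hypothetical cost data):

```python
import numpy as np

# final cost function values of 3 runs with 2 cost functions each
# (hypothetical data; rows correspond to optimization runs)
final_costs = np.array([[0.3, 0.1],
                        [0.05, 0.02],
                        [0.2, 0.4]])

# the run with the smallest squared sum of final costs
n_least_square = int(np.argmin(np.sum(final_costs ** 2, axis=1)))
print(n_least_square)  # 1
```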
- opt_times()[source]
Total optimization times.
- Returns
total_times – Time required per optimization run.
- Return type
np.array
- plot_absolute_costs() -> (matplotlib.figure.Figure, matplotlib.axes.Axes)[source]
Plots the absolute costs.
- plot_costs(n=0, log_y=True, ax=None) None [source]
Plots the absolute cost values as function of optimization iteration.
- Parameters
n (int, optional) – Number of the optimization run. Defaults to 0.
log_y (bool, optional) – If True then the costs are plotted logarithmically. Defaults to True.
ax (matplotlib.pyplot.axes) – Axes element in which the data is plotted. If not specified, a new one will be created.
- Returns
ax – Axes with the plot.
- Return type
matplotlib.pyplot.axes
- Time share of the cost function evaluation.
- Time share of the cost function gradient calculation.
3.3. cost_functions module
Cost functions which can be minimised as figure of merit in the control optimization.
Each CostFunction calculates a commonly used quantity for errors occurring during a quantum gate or algorithm, including state and gate fidelities or leakages. These are also implemented in variations for the description of noise, such as the averaging in a Monte Carlo method or the compatibility with a linearized master equation in Lindblad form. One cost function interfaces to the estimation of infidelities by generalized filter functions.
To support gradient based optimization algorithms such as quasi-Newton type algorithms, the classes also calculate the gradients of the cost functions (Jacobians in the case of vector valued cost functions).
3.3.1. Classes
CostFunction
Abstract base class of the fidelity computer.
OperatorMatrixNorm
Calculates the cost as matrix norm of the difference between the actual evolution and the target.
OperationInfidelity
Calculates the cost as operation infidelity of a propagator.
OperationNoiseInfidelity
Like OperationInfidelity but averaged over noise traces.
OperatorFilterFunctionInfidelity
Estimates infidelities with filter functions.
LeakageError
Estimates the leakage of quantum gates.
3.3.2. Functions
state_fidelity()
The quantum state fidelity.
angle_axis_representation()
Calculates the representation of a 2x2 unitary matrix as rotation axis and angle.
entanglement_fidelity()
Calculates the entanglement fidelity between a unitary target evolution and a simulated unitary evolution.
deriv_entanglement_fid_sup_op_with_du()
Calculates the derivatives of the entanglement fidelity with respect to the control amplitudes.
entanglement_fidelity_super_operator()
Calculates the entanglement fidelity between two propagators in the super operator formalism.
derivative_entanglement_fidelity_with_du()
Calculates the derivatives of the entanglement fidelity in the super operator formalism with respect to the control amplitudes.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class CostFunction(solver: Solver, label: Optional[List[str]] = None)[source]
Bases:
ABC
Abstract base class of the fidelity computer.
- solver
Object that computes the forward/backward evolution and the propagators.
- Type
Solver
- label
The label serves as internal name of the cost function values. The DataContainer class uses the label to distinguish between cost functions when storing the data.
- Type
list of str
- abstract costs() Union[float, ndarray] [source]
Evaluates the cost function.
- Returns
costs – Result of the cost function’s evaluation.
- Return type
np.array or float
- abstract grad() ndarray [source]
Calculates the gradient or Jacobian of the cost function.
- Returns
gradient – The derivatives of the cost function by the control amplitudes. Shape: (num_t, num_ctrl, num_f), where num_t is the number of time slices, num_ctrl the number of control parameters and num_f the number of values returned by the cost function.
- Return type
np.array
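The interface can be illustrated with a local stand-in (the base class below is a sketch, not qopt's actual CostFunction, and the quadratic cost and its names are hypothetical):

```python
import numpy as np
from abc import ABC, abstractmethod

class CostFunctionSketch(ABC):
    """Local stand-in for the CostFunction interface (not qopt's class)."""
    @abstractmethod
    def costs(self): ...
    @abstractmethod
    def grad(self): ...

class QuadraticCost(CostFunctionSketch):
    """Toy cost: sum of squared control amplitudes (a fluence penalty)."""
    def __init__(self, amplitudes):
        self.amplitudes = amplitudes        # shape (num_t, num_ctrl)

    def costs(self):
        return float(np.sum(self.amplitudes ** 2))

    def grad(self):
        # gradient with shape (num_t, num_ctrl, num_f), here num_f = 1
        return (2 * self.amplitudes)[:, :, np.newaxis]

cost = QuadraticCost(np.array([[0.5, -0.5], [1.0, 0.0]]))
print(cost.costs())        # 1.5
print(cost.grad().shape)   # (2, 2, 1)
```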
- class IncoherentLeakageError(solver: SchroedingerSMonteCarlo, computational_states: List[int], label: Optional[List[str]] = None)[source]
Bases:
CostFunction
This class measures leakage as quantum operation error.
The resulting infidelity is measured by truncating the leakage states of the propagator U, yielding the propagator V on the computational basis. The infidelity is then given as the distance from unitarity: infid = 1 - trace(V^dag V) / d
- Parameters
solver (TimeSlotComputer) – The time slot computer computing the propagation of the system.
computational_states (list of int) – List of indices marking the computational states of the propagator. These are all but the leakage states.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
TODO –
adjust docstring
- class LeakageError(solver: Solver, computational_states: List[int], label: Optional[List[str]] = None)[source]
Bases:
CostFunction
This class measures leakage as quantum operation error.
The resulting infidelity is measured by truncating the leakage states of the propagator U, yielding the propagator V on the computational basis. The infidelity is then given as the distance from unitarity: infid = 1 - trace(V^dag V) / d
- Parameters
solver (TimeSlotComputer) – The time slot computer computing the propagation of the system.
computational_states (list of int) – List of indices marking the computational states of the propagator. These are all but the leakage states.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
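The truncation and the distance from unitarity can be sketched in numpy (a hypothetical 3-level example, not the qopt implementation):

```python
import numpy as np

d = 2  # dimension of the computational subspace
computational_states = [0, 1]

# a 3-level propagator that mixes the second computational state
# with the leakage state |2> (hypothetical example)
theta = 0.2
U = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta), np.cos(theta)]], dtype=complex)

# truncate U to the computational basis ...
V = U[np.ix_(computational_states, computational_states)]
# ... and measure the distance from unitarity
infid = 1 - np.real(np.trace(V.conj().T @ V)) / d
print(infid)
```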
- class LeakageLiouville(solver: Solver, computational_states: List[int], label: Optional[List[str]] = None, verbose: int = 0, input_unitary: bool = False, monte_carlo: bool = False)[source]
Bases:
CostFunction
This class measures leakage in Liouville space.
The leakage is calculated in Liouville space as a matrix element. In pseudocode:
L = < projector leakage space | Propagator | projector comp. space >
- Parameters
solver (TimeSlotComputer) – The time slot computer computing the propagation of the system.
computational_states (list of int) – List of indices marking the computational states of the propagator. These are all but the leakage states.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
verbose (int) – Additional printed output for debugging.
input_unitary (bool) – If True, then the input is assumed to be formulated in the standard Hilbert space and thus expressed as unitary propagator. This propagator is then expressed as superoperator.
monte_carlo (bool) – If True, then we make a monte carlo simulation and average over the propagators.
- class LiouvilleMonteCarloEntanglementInfidelity(solver: SchroedingerSMonteCarlo, target: Optional[OperatorMatrix], label: Optional[List[str]] = None, computational_states: Optional[List[int]] = None)[source]
Bases:
CostFunction
Entanglement infidelity for a combination of Monte Carlo and Liouville description.
The propagators are first mapped to the super operator formalism in Liouville space. Next, they are averaged and finally we calculate the entanglement infidelity.
Systematic errors cannot be neglected in the current formulation.
- Parameters
solver (Solver) – The time slot computer simulating the systems dynamics.
target (ControlMatrix) – Unitary target evolution.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
computational_states (list of int, optional) – If set, the chosen fidelity is only calculated for the specified subspace.
- class OperationInfidelity(solver: Solver, target: OperatorMatrix, fidelity_measure: str = 'entanglement', super_operator_formalism: bool = False, label: Optional[List[str]] = None, computational_states: Optional[List[int]] = None, map_to_closest_unitary: bool = False)[source]
Bases:
CostFunction
Calculates the infidelity of a quantum channel.
The infidelity is calculated for a quantum channel described either by a unitary evolution or by a propagator in the master equation formalism.
- Parameters
solver (Solver) – The time slot computer simulating the systems dynamics.
target (ControlMatrix) – Unitary target evolution.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
fidelity_measure (string, optional) – If ‘entanglement’: the entanglement fidelity is calculated. Otherwise an error is raised.
super_operator_formalism (bool, optional) – If true, the time slot computer is expected to return a propagator in the super operator formalism, while the target unitary is not given as super operator. If false, no super operators are assumed.
computational_states (list of int, optional) – If set, the chosen fidelity is only calculated for the specified subspace.
map_to_closest_unitary (bool, optional) – If True, then the final propagator is mapped to the closest unitary before the infidelity is evaluated.
- solver
The time slot computer simulating the systems dynamics.
- Type
TimeSlotComputer
- target
Unitary target evolution.
- Type
ControlMatrix
- fidelity_measure
If ‘entanglement’: the entanglement fidelity is calculated. Otherwise an error is raised.
- Type
string
- super_operator_formalism
If true, the time slot computer is expected to return a propagator in the super operator formalism, while the target unitary is not given as super operator. If false, no super operators are assumed.
- Type
bool
- Raises
NotImplementedError – If the fidelity measure is not ‘entanglement’.
Todo: –
add the average fidelity? or remove the fidelity_measure.
gradient does not truncate to the subspace.
- class OperationNoiseInfidelity(solver: SchroedingerSMonteCarlo, target: Optional[OperatorMatrix], label: Optional[List[str]] = None, fidelity_measure: str = 'entanglement', computational_states: Optional[List[int]] = None, map_to_closest_unitary: bool = False, neglect_systematic_errors: bool = True)[source]
Bases:
CostFunction
Averages the operator fidelity over noise traces.
- Parameters
solver (Solver) – The time slot computer simulating the systems dynamics.
target (ControlMatrix) – Unitary target evolution.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
fidelity_measure (string, optional) – If ‘entanglement’: the entanglement fidelity is calculated. Otherwise an error is raised.
computational_states (list of int, optional) – If set, the chosen fidelity is only calculated for the specified subspace.
map_to_closest_unitary (bool, optional) – If True, then the final propagator is mapped to the closest unitary before the infidelity is evaluated.
neglect_systematic_errors (bool) – If true, the mean operator fidelity is calculated with respect to the simulated propagator without statistical noise. Otherwise the mean operator fidelity is calculated with respect to the target propagator.
- neglect_systematic_errors
If true, the standard deviation of the operator fidelity is measured. Otherwise the mean operator fidelity is calculated with respect to the target propagator.
- Type
bool
- class OperatorFilterFunctionInfidelity(solver: Solver, noise_power_spec_density: Callable, omega: Sequence[float], label: Optional[List[str]] = None)[source]
Bases:
CostFunction
Calculates the infidelity with the filter function formalism.
- Parameters
solver (Solver) – The time slot computer simulating the systems dynamics.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
noise_power_spec_density (Callable) – The noise power spectral density in units of inverse frequencies that returns an array of shape (n_omega,) or (n_nops, n_omega). In the first case, the same spectrum is taken for all noise operators, in the second, it is assumed that there are no correlations between different noise sources and thus there is one spectrum for each noise operator. The order of the noise terms must correspond to the order defined in the solver by filter_function_h_n.
omega (Sequence[float]) – The frequencies at which the integration is to be carried out.
- costs() Union[float, ndarray] [source]
The infidelity is calculated with the filter function package. See its documentation for more information.
- Returns
costs – The infidelity.
- Return type
Union[float, np.ndarray]
- grad()[source]
The gradient of the infidelity is calculated with the filter function package. See its documentation for more information.
- Raises
NotImplementedError – This method has not been implemented yet.
- property omega
- class OperatorMatrixNorm(solver: Solver, target: OperatorMatrix, mode: str = 'scalar', label: Optional[List[str]] = None)[source]
Bases:
CostFunction
Computes the fidelity as difference between the propagator and a target.
A global phase difference is ignored. The result can be returned as an absolute value or as a vector. If the result is returned as an absolute value, it is calculated using a matrix norm.
- Parameters
solver (TimeSlotComputer) – Computes the evolution of the system.
target (ControlMatrix) – The ideal evolution.
mode (string) – The type of calculation. ‘scalar’: The difference is returned as scalar. ‘vector’: The difference of the individual elements is returned as vector. ‘rotation_axis’: For unitary evolutions only. The evolution is described by its rotation axis and a rotation angle. The first element of the rotation axis is multiplied by the angle to save one return argument.
- mode
Type of calculation
- Type
string
- costs() Union[ndarray, float] [source]
The costs or infidelity of the quantum channel.
These costs are given as the difference between a simulated unitary evolution and the unitary target evolution, depending on the mode. (See the class documentation.)
- Returns
costs – The costs or infidelity.
- Return type
Union[np.ndarray, float]
- class StateInfidelity(solver: Solver, target: OperatorMatrix, label: Optional[List[str]] = None, computational_states: Optional[List[int]] = None, rescale_propagated_state: bool = False)[source]
Bases:
CostFunction
Quantum state infidelity.
- class StateInfidelitySubspace(solver: Solver, target: OperatorMatrix, dims: List[int], remove: List[int], label: Optional[List[str]] = None)[source]
Bases:
CostFunction
Quantum state infidelity on a subspace.
Assume that the simulated system operates on a product space and the target state is described only on a subspace. This class then calculates the partial trace over the neglected subspace.
- Parameters
dims (list of int,) – The dimensions of the subspaces. (Compare to the ptrace function of the MatrixOperator class.)
remove (list of int,) – The indices of the dims list corresponding to the subspaces that are to be eliminated. (Compare to the ptrace function of the MatrixOperator class.)
TODO –
support super operator formalism
handle leakage states?
Docstring
- class StateNoiseInfidelity(solver: SchroedingerSMonteCarlo, target: OperatorMatrix, label: Optional[List[str]] = None, computational_states: Optional[List[int]] = None, rescale_propagated_state: bool = False, neglect_systematic_errors: bool = True)[source]
Bases:
CostFunction
Averages the state infidelity over noise traces.
- angle_axis_representation(u: numpy.ndarray) -> (float, numpy.ndarray)[source]
Calculates the representation of a 2x2 unitary matrix by a rotational axis and a rotation angle.
- Parameters
u (np.ndarray) – A unitary 2x2 matrix.
- Returns
beta, n – beta is the angle of the rotation and n the rotational axis.
- Return type
float, np.ndarray
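The decomposition u = exp(i*phi) * (cos(beta/2) I - i sin(beta/2) n.sigma) can be sketched as follows (a numpy sketch of the same representation, not the qopt code; note that the branch chosen when removing the global phase may flip beta to 2*pi - beta together with the sign of the axis):

```python
import numpy as np

def angle_axis(u):
    """Rotation angle and axis of a 2x2 unitary, global phase ignored."""
    # remove the global phase so that det(u) = 1
    u = u / np.sqrt(complex(np.linalg.det(u)))
    # u = cos(b/2) I - i sin(b/2) n.sigma
    beta = 2 * np.arccos(np.clip(np.real(np.trace(u)) / 2, -1.0, 1.0))
    s = np.sin(beta / 2)
    if np.isclose(s, 0.0):          # no rotation: axis is arbitrary
        return 0.0, np.array([0.0, 0.0, 1.0])
    n = np.array([-np.imag(u[0, 1] + u[1, 0]),
                  np.real(u[1, 0] - u[0, 1]),
                  -np.imag(u[0, 0] - u[1, 1])]) / (2 * s)
    return beta, n

theta = 0.7   # an x rotation by angle theta
rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
               [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
beta, n = angle_axis(rx)
print(beta, n)  # angle theta, axis (1, 0, 0)
```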
- averge_gate_fidelity(unitary: OperatorMatrix, target_unitary: OperatorMatrix)[source]
Average gate fidelity.
- Parameters
unitary (ControlMatrix) – The evolution matrix of the system.
target_unitary (ControlMatrix) – The target unitary to which the evolution is compared.
- Returns
fidelity – The average gate fidelity.
- Return type
float
- default_set_orthorgonal(dim: int) List[OperatorMatrix] [source]
Set of orthogonal matrices for the calculation of the average gate fidelity.
Currently implemented only for two-dimensional systems.
- Parameters
dim (int) – The systems dimension.
- Returns
orthogonal_operators – Orthogonal control matrices.
- Return type
List[ControlMatrix]
- deriv_entanglement_fid_sup_op_with_du(target: OperatorMatrix, forward_propagators: List[OperatorMatrix], unitary_derivatives: List[List[OperatorMatrix]], reversed_propagators: List[OperatorMatrix], computational_states: Optional[List[int]] = None)[source]
Derivative of the entanglement fidelity of a super operator.
Calculates the derivatives of the entanglement fidelity between a target unitary of dimension d x d and a propagator of dimension d^2 x d^2 with respect to the control amplitudes.
- Parameters
forward_propagators (List[ControlMatrix], len: num_t +1) – The forward propagators calculated in the systems simulation. forward_propagators[i] is the ordered product of the propagators i..0 in descending order.
unitary_derivatives (List[List[ControlMatrix]],) – shape: [[] * num_t] * num_ctrl Frechet derivatives of the propagators by the control amplitudes.
target (ControlMatrix) – The target unitary evolution.
reversed_propagators (List[ControlMatrix]) – The reversed propagators calculated in the systems simulation. reversed_propagators[i] is the ordered product of the propagators n-i..n in ascending order where n is the total number of time steps.
computational_states (Optional[List[int]]) – If set, the entanglement fidelity is only calculated for the specified subspace.
- Returns
derivative_fidelity – The derivatives of the entanglement fidelity.
- Return type
np.ndarray, shape: (num_t, num_ctrl)
- derivative_average_gate_fid_with_du(propagators, propagators_past, unitary_derivatives, target_unitary)[source]
- derivative_average_gate_fidelity(control_hamiltonians, propagators, propagators_past, delta_t, target_unitary)[source]
The derivative of the average gate fidelity.
- derivative_entanglement_fidelity(control_hamiltonians: List[OperatorMatrix], forward_propagators: List[OperatorMatrix], reversed_propagators: List[OperatorMatrix], delta_t: List[float], target_unitary: OperatorMatrix) ndarray [source]
Derivative of the entanglement fidelity using the grape approximation.
dU / du = -i delta_t H_ctrl U
- Parameters
control_hamiltonians (List[ControlMatrix], len: num_ctrl) – The control hamiltonians of the simulation.
forward_propagators (List[ControlMatrix], len: num_t +1) – The forward propagators calculated in the systems simulation.
reversed_propagators (List[ControlMatrix]) – The reversed propagators calculated in the systems simulation.
delta_t (List[float], len: num_t) – The durations of the time steps.
target_unitary (ControlMatrix) – The target unitary evolution.
- Returns
derivative_fidelity – The derivatives of the entanglement fidelity.
- Return type
np.ndarray, shape: (num_t, num_ctrl)
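The GRAPE expression above can be checked against a finite difference; with the drift chosen as zero the expression is exact (a numpy sketch under these assumptions, not qopt code):

```python
import numpy as np

def expm_hermitian(h, dt):
    """exp(-1j * dt * h) for a Hermitian h via eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return (v * np.exp(-1j * dt * w)) @ v.conj().T

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
dt, u_amp = 0.1, 0.5

# single time step with H = u * H_ctrl (drift chosen as zero, so the
# GRAPE expression dU/du = -1j * dt * H_ctrl * U holds exactly)
prop = expm_hermitian(u_amp * sigma_x, dt)
grape_deriv = -1j * dt * sigma_x @ prop

# finite-difference check of the derivative
eps = 1e-6
fd = (expm_hermitian((u_amp + eps) * sigma_x, dt)
      - expm_hermitian((u_amp - eps) * sigma_x, dt)) / (2 * eps)
print(np.allclose(grape_deriv, fd, atol=1e-8))  # True
```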
- derivative_entanglement_fidelity_with_du(target: OperatorMatrix, forward_propagators: List[OperatorMatrix], propagator_derivatives: List[List[OperatorMatrix]], reversed_propagators: List[OperatorMatrix], computational_states: Optional[List[int]] = None, map_to_closest_unitary: bool = False) ndarray [source]
Derivative of the entanglement fidelity using the derivatives of the propagators.
- Parameters
forward_propagators (List[ControlMatrix], len: num_t +1) – The forward propagators calculated in the systems simulation. forward_propagators[i] is the ordered product of the propagators i..0 in descending order.
propagator_derivatives (List[List[ControlMatrix]],) – shape: [[] * num_t] * num_ctrl Frechet derivatives of the propagators by the control amplitudes.
target (ControlMatrix) – The target propagator.
reversed_propagators (List[ControlMatrix]) – The reversed propagators calculated in the systems simulation. reversed_propagators[i] is the ordered product of the propagators n-i..n in ascending order where n is the total number of time steps.
computational_states (Optional[List[int]]) – If set, the entanglement fidelity is only calculated for the specified subspace.
map_to_closest_unitary (bool) – If True, then the final propagator is mapped to the closest unitary before the infidelity is evaluated.
- Returns
derivative_fidelity – The derivatives of the entanglement fidelity.
- Return type
np.ndarray, shape: (num_t, num_ctrl)
- derivative_state_fidelity(target: OperatorMatrix, forward_propagators: List[OperatorMatrix], propagator_derivatives: List[List[OperatorMatrix]], reversed_propagators: List[OperatorMatrix], computational_states: Optional[List[int]] = None, rescale_propagated_state: bool = False) ndarray [source]
Derivative of the state fidelity.
Leakage states can be defined and the propagator is projected onto the computational states.
- Parameters
target (OperatorMatrix) – The target state as bra vector.
forward_propagators (list of OperatorMatrix) – Forward propagated initial state.
propagator_derivatives (list of OperatorMatrix) – Frechet derivatives of the matrix exponential used to calculate the propagators.
reversed_propagators (list of OperatorMatrix) – Backward passed propagators.
computational_states (list of int) – States used for the qubit implementation. If this is not None, then all other indices are eliminated by projection onto the computational space.
rescale_propagated_state (bool) – If set to True, then the propagated state is rescaled after leakage states are eliminated.
- Returns
Derivative – The derivatives of the state fidelity by the control amplitudes.
- Return type
numpy array, shape: (num_time_steps, num_ctrls)
- derivative_state_fidelity_subspace(target: OperatorMatrix, forward_propagators: List[OperatorMatrix], propagator_derivatives: List[List[OperatorMatrix]], reversed_propagators: List[OperatorMatrix], dims: List[int], remove: List[int]) ndarray [source]
Derivative of the state fidelity on a subspace.
The unused subspace is traced out.
- Parameters
target (OperatorMatrix) – The target state as bra vector.
forward_propagators (list of OperatorMatrix) – Forward propagated initial state.
propagator_derivatives (list of OperatorMatrix) – Frechet derivatives of the matrix exponential used to calculate the propagators.
reversed_propagators (list of OperatorMatrix) – Backward passed propagators.
dims (list of int,) – The dimensions of the subspaces. (Compare to the ptrace function of the MatrixOperator class.)
remove (list of int,) – The indices of the dims list corresponding to the subspaces that are to be eliminated. (Compare to the ptrace function of the MatrixOperator class.)
- Returns
Derivative – The derivatives of the state fidelity by the control amplitudes.
- Return type
numpy array, shape: (num_time_steps, num_ctrls)
- entanglement_fidelity(target: Union[ndarray, OperatorMatrix], propagator: Union[ndarray, OperatorMatrix], computational_states: Optional[List[int]] = None, map_to_closest_unitary: bool = False) float64 [source]
The entanglement fidelity between a simulated propagator and a target propagator.
- Parameters
propagator (Union[np.ndarray, ControlMatrix]) – The simulated propagator.
target (Union[np.ndarray, ControlMatrix]) – The target unitary evolution.
computational_states (Optional[List[int]]) – If set, the entanglement fidelity is only calculated for the specified subspace.
map_to_closest_unitary (bool) – If True, then the final propagator is mapped to the closest unitary before the infidelity is evaluated.
- Returns
fidelity – The entanglement fidelity of target_unitary.dag * unitary.
- Return type
float
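Assuming the common definition F = |tr(target^dag U)|^2 / d^2, the quantity can be sketched in numpy (an illustrative stand-in, not the qopt implementation):

```python
import numpy as np

def ent_fidelity(target, propagator):
    # entanglement fidelity |tr(target^dag @ U)|^2 / d^2
    d = target.shape[0]
    return np.abs(np.trace(target.conj().T @ propagator)) ** 2 / d ** 2

hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
identity = np.eye(2, dtype=complex)

print(ent_fidelity(hadamard, hadamard))  # perfect gate: 1.0
print(ent_fidelity(identity, hadamard))  # orthogonal case: 0.0
```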
- entanglement_fidelity_super_operator(target: Union[ndarray, OperatorMatrix], propagator: Union[ndarray, OperatorMatrix], computational_states: Optional[List[int]] = None) float64 [source]
The entanglement fidelity between a simulated Propagator and target propagator in the super operator formalism.
The entanglement fidelity between a propagator in the super operator formalism of dimension d^2 x d^2 and a target unitary operator of dimension d x d. If the system incorporates leakage states, the propagator is projected onto the computational space [1].
- Parameters
propagator (Union[np.ndarray, ControlMatrix]) – The simulated evolution propagator in the super operator formalism.
target (Union[np.ndarray, ControlMatrix]) – The target unitary evolution. (NOT as super operator.)
computational_states (Optional[List[int]]) – If set, the entanglement fidelity is only calculated for the specified subspace.
- Returns
fidelity – The entanglement fidelity of target_unitary.dag * unitary.
- Return type
float
Notes
[1] C. J. Wood and J. M. Gambetta: “Quantification and characterization of leakage errors”, Phys. Rev. A 97, 032306 (2018).
- state_fidelity(target: Union[ndarray, OperatorMatrix], propagated_state: Union[ndarray, OperatorMatrix], computational_states: Optional[List[int]] = None, rescale_propagated_state: bool = False) float64 [source]
Quantum state fidelity.
The quantum state fidelity between two quantum states is calculated as the square norm of the wave function overlap
\[F = \vert \langle \psi_1 \vert \psi_2 \rangle \vert^2\]
- Parameters
target (numpy array or operator matrix of shape (1, d)) – The target state is assumed to be given as bra-vector.
propagated_state (numpy array or operator matrix of shape (d, 1)) – The propagated state is assumed to be given as ket-vector.
computational_states (Optional[List[int]]) – If set, the entanglement fidelity is only calculated for the specified subspace.
rescale_propagated_state (bool) – If True, then the propagated state vector is rescaled to a norm of 1.
- Returns
quantum_state_fidelity (float) – The quantum state fidelity between the propagated and the target state.
TODO –
functions should not change type of input arrays
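With the bra/ket conventions of the signature above, the formula can be sketched in plain numpy (an illustration, not the qopt source):

```python
import numpy as np

def state_fidelity_sketch(target_bra: np.ndarray, propagated_ket: np.ndarray) -> float:
    """F = |<psi_1|psi_2>|^2, with the target given as bra of shape (1, d)
    and the propagated state as ket of shape (d, 1)."""
    overlap = (target_bra @ propagated_ket).item()
    return float(np.abs(overlap) ** 2)

plus_bra = np.array([[1.0, 1.0]]) / np.sqrt(2)  # <+|
zero_ket = np.array([[1.0], [0.0]])             # |0>
state_fidelity_sketch(plus_bra, zero_ket)       # → 0.5
```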
- state_fidelity_subspace(target: Union[ndarray, OperatorMatrix], propagated_state: Union[ndarray, OperatorMatrix], dims: List[int], remove: List[int]) float64 [source]
Quantum state fidelity on a subspace.
We assume that the target state is defined only on a subspace of the total simulated Hilbert space. Hence, we calculate the partial trace over the simulated state, rendering it into the density matrix of a potentially mixed state.
The quantum state fidelity between a pure state \(\psi\) and a mixed quantum state \(\rho\) is calculated as
\[F = \langle \psi \vert \rho \vert \psi \rangle\]- Parameters
target (numpy array or operator matrix of shape (1, d)) – The target state is assumed to be given as bra-vector.
propagated_state (numpy array or operator matrix) – The propagated state is assumed to be given as density matrix of shape (d, d) or shape (d^2, 1), or as ket-vector of shape (d, 1).
dims (list of int,) – The dimensions of the subspaces. (Compare to the ptrace function of the MatrixOperator class.)
remove (list of int,) – The indices of the dims list corresponding to the subspaces that are to be eliminated. (Compare to the ptrace function of the MatrixOperator class.)
- Returns
quantum_state_fidelity (float) – The quantum state fidelity between the propagated and the target state.
TODO –
functions should not change type of input arrays
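The partial-trace-then-overlap recipe described above can be sketched in plain numpy for a ket on a multipartite space (a hedged illustration, not the qopt source; the helper name is hypothetical):

```python
import numpy as np

def state_fidelity_subspace_sketch(target_bra, propagated_ket, dims, remove):
    """Trace out the subsystems listed in `remove` from a ket, then evaluate
    F = <psi| rho |psi> on the remaining subspace (illustrative sketch)."""
    keep = [i for i in range(len(dims)) if i not in remove]
    psi = np.asarray(propagated_ket).reshape(dims)   # amplitude tensor
    psi = np.transpose(psi, keep + list(remove))     # kept axes first
    d_keep = int(np.prod([dims[i] for i in keep]))
    m = psi.reshape(d_keep, -1)
    rho = m @ m.conj().T                             # reduced density matrix
    return float(np.real(target_bra @ rho @ np.conj(target_bra).T).item())

# Bell state on two qubits; tracing out the second qubit gives rho = I/2,
# so the fidelity with |0> is 0.5.
bell = np.array([[1.0], [0.0], [0.0], [1.0]]) / np.sqrt(2)
state_fidelity_subspace_sketch(np.array([[1.0, 0.0]]), bell, dims=[2, 2], remove=[1])  # → 0.5
```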
3.4. data_container module
Implements data storage.
The DataContainer class stores the content of multiple Result class instances. Each ‘Result’ class instance holds the information gathered in an optimization run.
The DataContainer interfaces with the Analyser class, which visualizes the stored data. It also provides the functionality for writing data to and loading it from the hard drive.
3.4.1. Classes
DataContainer
Data storage class.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class DataContainer(storage_path: Optional[str] = None, file_name: str = 'Temp File', indices: Optional[List[str]] = None, final_costs: Optional[List] = None, init_parameters: Optional[List] = None, final_parameters: Optional[List] = None, costs: Optional[List[List]] = None, parameters: Optional[List[List]] = None, status: Optional[List] = None, optimization_stats: Optional[List] = None, append_time_to_path=True)[source]
Bases:
object
Stores data of the optimization.
This class gathers the information stored in multiple objects of the class OptimResult.
- Parameters
storage_path (string) – The path where this instance of DataContainer is to be stored or from where it shall be loaded.
file_name (string) – The file name will be appended to the path for storage and loading. The default value is an empty string assuming that the storage path already contains the file name.
indices (list of str) – The indices of the costs.
final_costs (list) – The final values of the cost function.
init_parameters (list) – The initial optimization parameters.
final_parameters (list) – The final optimization parameters.
costs (list of list) – All values of the cost functions during the optimization.
parameters (list of list) – All parameters for which the cost functions were evaluated during the optimization.
status (list of None or int) – The termination_reason as integer. Like in scipy.OptimizeResult None if the optimization has not started. -1: improper input parameters status 0: the maximum number of function evaluations is exceeded. 1: gradient norm termination condition is satisfied. 2: cost function termination condition is satisfied. 3: minimal step size termination condition is satisfied. 4: Both 2 and 3 termination conditions are satisfied. 5: Wall time exceeded.
optimization_stats (list) – Optimization statistics, which have been appended to the data.
append_time_to_path (bool) – If True, the current time is appended to the file name.
- append_optim_result(optim_result: OptimizationResult)[source]
Appends an instance of OptimizationResult to the stored data.
The information gained in an optimization run is extracted and appended to the various lists of the DataContainer.
- Parameters
optim_result (OptimizationResult) – Result of an optimization run.
- classmethod from_pickle(filename)[source]
Read class from pickled file.
- Parameters
filename (str) – The name of the file which is loaded.
- property index
Indices of the cost functions.
3.5. energy_spectrum module
This file serves to plot energy spectra of a Hamiltonian.
The convenience functions implemented in this module can be used to plot the eigenvalues and eigenvectors of a Hamiltonian as a function of a parameter. This is especially useful to analyse the theoretical properties of a system.
3.5.1. Functions
vector_color_map()
Maps eigenvectors to a coloring.
plot_energy_spectrum()
Plot the energy spectrum of a Hamiltonian.
- plot_energy_spectrum(hamiltonian: List[OperatorMatrix], x_val: array, x_label: str, ax=None, use_spectral_decomposition=True, **scatter_kwargs)[source]
Calculates and plots the energy spectra of Hamilton operators.
The colors indicate the contributions of the individual basis vectors.
- Parameters
hamiltonian (list of OperatorMatrix) – The Hamiltonians which shall provide the energy spectra. They need to be Hermitian.
x_val (array of float, shape (n, )) – The x-values at which the eigenvalues are plotted.
x_label (str) – Label of the x-axis.
ax (matplotlib pyplot axes) – Instance of axes to plot the data in. Defaults to None.
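The data this function visualizes can be sketched without plotting (a hedged numpy-only illustration; the two-level Hamiltonian H(eps) = eps·sigma_z + delta·sigma_x is an example of our own, not part of qopt):

```python
import numpy as np

# Eigenvalues of a parameter-dependent two-level Hamiltonian, the kind of
# data plot_energy_spectrum would draw against x_val.
sigma_z = np.diag([1.0, -1.0])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
delta = 0.5
x_val = np.linspace(-2.0, 2.0, 101)
energies = np.array([np.linalg.eigvalsh(eps * sigma_z + delta * sigma_x)
                     for eps in x_val])
# energies has shape (101, 2); at eps = 0 the avoided crossing has gap 2 * delta.
```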
- vector_color_map(vectors: array)[source]
Maps eigenvectors to a coloring, encoding the contributions.
- Parameters
vectors (array) – Array of eigenvectors. The eigenvectors are given as columns. There may be no more than 7.
- Returns
color_values – The coloring is given as array. Each column signifies one tuple of RGB color values.
- Return type
array
3.6. matrix module
This file contains an operator / matrix class which encapsulates the numeric operations.
The operators can be stored as dense matrices; a sparse representation is planned. The most frequently used and computationally expensive functions are the matrix exponential and its derivative. These operations are required to calculate the analytic solution of the Schroedinger and Lindblad master equations.
3.6.1. Classes
OperatorMatrix
Abstract base class.
OperatorDense
Dense control matrices, which are based on numpy arrays.
OperatorSparse
To be implemented
3.6.2. Functions
convert_unitary_to_super_operator()
Converts a unitary propagator into a super operator in the Lindblad formalism.
closest_unitary()
Calculates the closest unitary propagator for a square matrix.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class DenseOperator(obj: Union[Qobj, ndarray, csr_matrix, DenseOperator])[source]
Bases:
OperatorMatrix
Dense control matrix.
The data is stored as numpy array and uses the implementations of the numpy package.
- Parameters
obj (Qobj or numpy array or scipy csr_matrix) – The matrix to be stored and handled as dense matrix.
- data
The data stored in a two dimensional numpy array
- Type
numpy array
- conj(do_copy: bool = True) Optional[DenseOperator] [source]
See base class.
- dag(do_copy: bool = True) Optional[DenseOperator] [source]
See base class.
- dexp(direction: DenseOperator, tau: complex = 1, compute_expm: bool = False, method: str = 'spectral', is_skew_hermitian: bool = False, epsilon: float = 1e-10) Union[DenseOperator, Tuple[DenseOperator]] [source]
Frechet derivative of the matrix exponential.
- Parameters
direction (DenseOperator) – Direction in which the Frechet derivative is calculated. Must be of the same shape as self.
tau (complex) – The matrix is multiplied by tau before calculating the exponential.
compute_expm (bool) – If true, then the matrix exponential is calculated and returned as well.
method (string) – Numerical method used for the calculation of the matrix exponential. Currently the following are implemented: - ‘Frechet’: Uses the scipy linalg function expm_frechet for simultaneous calculation of the matrix exponential and its Frechet derivative. - ‘approx’: Approximates the derivative by finite differences. - ‘first_order’: First order Taylor approximation. - ‘second_order’: Second order Taylor approximation. - ‘third_order’: Third order Taylor approximation. - ‘spectral’: Uses the self-implemented spectral decomposition.
is_skew_hermitian (bool) – Only required for the method ‘spectral’. If set to True, the matrix is assumed to be skew Hermitian in the spectral decomposition.
epsilon (float) – Width of the finite difference. Only relevant for the method ‘approx’.
- Returns
prop (DenseOperator) – The matrix exponential. Only returned if compute_expm is True!
prop_grad (DenseOperator) – The Frechet derivative d exp(Ax + B)/dx at x=0, where A is the direction and B is the matrix stored in self.
- Raises
NotImplementedError: – If the method given as parameter is not implemented.
- exp(tau: complex = 1, method: str = 'spectral', is_skew_hermitian: bool = False) DenseOperator [source]
Matrix exponential.
- Parameters
tau (complex) – The matrix is multiplied by tau before calculating the exponential.
method (string) – Numerical method used for the calculation of the matrix exponential. Currently the following are implemented: - ‘approx’, ‘Frechet’: Use the scipy linalg matrix exponential. - ‘first_order’: First order Taylor approximation. - ‘second_order’: Second order Taylor approximation. - ‘third_order’: Third order Taylor approximation. - ‘spectral’: Uses the self-implemented spectral decomposition.
is_skew_hermitian (bool) – Only important for the method ‘spectral’. If set to True, the matrix is assumed to be skew Hermitian in the spectral decomposition.
- Returns
prop – The matrix exponential.
- Return type
- Raises
NotImplementedError: – If the method given as parameter is not implemented.
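The ‘spectral’ method for a skew Hermitian generator can be sketched in a few lines (a hedged illustration, not the qopt source): for A = -iH with H Hermitian, diagonalize H and exponentiate the eigenvalues.

```python
import numpy as np
from scipy.linalg import expm

def exp_skew_hermitian(a: np.ndarray, tau: complex = 1) -> np.ndarray:
    """Spectral-decomposition sketch of exp(tau * A) for skew Hermitian A = -iH."""
    h = 1j * a                                    # recover the Hermitian generator H
    eig_vals, eig_vecs = np.linalg.eigh(h)
    return eig_vecs @ np.diag(np.exp(-1j * tau * eig_vals)) @ eig_vecs.conj().T

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
a = -1j * sigma_x
np.allclose(exp_skew_hermitian(a, tau=0.3), expm(0.3 * a))  # → True
```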
- identity_like() DenseOperator [source]
See base class.
- kron(other: DenseOperator) DenseOperator [source]
See base class.
- norm(ord: Union[str, None, int] = 'fro') float64 [source]
Calculates the norm of the matrix.
Uses the implementation of numpy.linalg.norm.
- Parameters
ord (string) – Defines the norm which is calculated. Defaults to the Frobenius norm ‘fro’.
- Returns
norm – Norm of the Matrix.
- Return type
float
- prop(tau: complex = 1) DenseOperator [source]
See base class.
- ptrace(dims: Sequence[int], remove: Sequence[int], do_copy: bool = True) DenseOperator [source]
Partial trace of the matrix.
If the matrix describes a ket, the corresponding density matrix is calculated and used for the partial trace.
This implementation closely follows that of QuTiP’s qobj._ptrace_dense.
- Parameters
dims (list of int) – Dimensions of the subspaces making up the total space on which the matrix operates. The product of elements in ‘dims’ must be equal to the matrix’ dimension.
remove (list of int) – The selected subspaces as indices over which the partial trace is formed. The given indices correspond to the ordering of subspaces specified in the ‘dims’ argument.
do_copy (bool, optional) – If false, the operation is executed inplace. Otherwise returns a new instance. Defaults to True.
- Returns
pmat – The partially traced OperatorMatrix.
- Return type
- Raises
AssertionError: – If matrix dimension does not match specified dimensions.
Examples
>>> ghz_ket = DenseOperator(np.array([[1,0,0,0,0,0,0,1]]).T) / np.sqrt(2)
>>> ghz_rho = ghz_ket * ghz_ket.dag()
>>> ghz_rho.ptrace(dims=[2,2,2], remove=[0,2])
DenseOperator with data:
array([[0.5+0.j, 0. +0.j],
       [0. +0.j, 0.5+0.j]])
- transpose(do_copy: bool = True) Optional[DenseOperator] [source]
See base class.
- truncate_to_subspace(subspace_indices: Optional[Sequence[int]], map_to_closest_unitary: bool = False) DenseOperator [source]
See base class.
- class OperatorMatrix[source]
Bases:
ABC
The abstract base class of the operator matrix for the qopt control package.
It offers an identical interface for sparse and dense matrices and hence has the limitations of both representations in terms of usability.
- data
The stored data. Its type is defined in subclasses.
- abstract conj(do_copy: bool = True) Optional[OperatorMatrix] [source]
Complex conjugate of the matrix.
- Parameters
do_copy (bool, optional) – If false, then the operation is executed inplace. Otherwise returns a new instance. Defaults to True.
- Returns
out – If do_copy is true, then a new instance otherwise self.
- Return type
- conjugate(do_copy: bool = True) Optional[OperatorMatrix] [source]
Alias for conj.
- abstract dag(do_copy: bool = True) Optional[OperatorMatrix] [source]
Adjoint (dagger) of the matrix.
- Parameters
do_copy (bool, optional) – If false, then the operation is executed inplace. Otherwise returns a new instance. Defaults to True.
- Returns
out – If do_copy is true, then a new instance otherwise self.
- Return type
- abstract dexp(direction: OperatorMatrix, tau: complex = 1, compute_expm: bool = False, method: Optional[str] = None, is_skew_hermitian: bool = False) Union[OperatorMatrix, Tuple[OperatorMatrix]] [source]
The Frechet derivative of the exponential in the given direction
- Parameters
direction (OperatorMatrix) – The direction in which the Frechet derivative is to be calculated.
tau (complex) – A scalar by which the matrix is multiplied before exponentiation. This can be, e.g., the length of a time segment if a propagator is calculated.
compute_expm (bool) – If set to false only the derivative is calculated and returned.
method (Optional[string]) – The method by which the exponential is calculated.
is_skew_hermitian (bool) – If set to true, the matrix is expected to be skew Hermitian, which allows to speed up the spectral decomposition.
- Returns
prop (OperatorMatrix) – The matrix exponential: exp(self*tau) (Optional, if compute_expm)
derivative_prop (OperatorMatrix) – The frechet derivative of the matrix exponential: (exp((self+direction*dt)*tau)-exp(self*tau)) / dt
- abstract exp(tau: complex = 1, method: Optional[str] = None, is_skew_hermitian: bool = False) OperatorMatrix [source]
The matrix exponential.
- Parameters
tau (complex, optional) – A scalar by which the matrix is multiplied before calculating the exponential.
method (string, optional) – The method by which the matrix exponential is calculated.
is_skew_hermitian (bool) – If set to true, the matrix is expected to be skew Hermitian, which allows to speed up the spectral decomposition.
- Returns
exponential – exponential = exp(A * tau) where A is the stored matrix.
- Return type
- abstract flatten() ndarray [source]
Flattens the matrix.
- Returns
out – The flattened control matrix as one dimensional numpy array.
- Return type
np.array
- abstract identity_like() OperatorMatrix [source]
For square matrices, the identity of same dimension is returned.
- abstract kron(other: OperatorMatrix) OperatorMatrix [source]
Computes the Kronecker matrix product with another matrix.
- Parameters
other (OperatorMatrix or np.ndarray) – Second factor of the Kronecker product.
- Returns
out – Operator matrix of the same type containing the product.
- Return type
- Raises
ValueError: – If the operation is not defined for the input type.
- abstract norm(ord: str) float64 [source]
Calculates the norm of the matrix.
- Parameters
ord (string) – Defines the norm which is calculated.
- Returns
norm – Norm of the Matrix.
- Return type
float
- abstract ptrace(dims: Sequence[int], remove: Sequence[int], do_copy: bool = True) OperatorMatrix [source]
Partial trace of the matrix.
If the matrix describes a ket, the corresponding density matrix is calculated and used for the partial trace.
- Parameters
dims (list of int) – Dimensions of the subspaces making up the total space on which the matrix operates. The product of elements in ‘dims’ must be equal to the matrix’ dimension.
remove (list of int) – The selected subspaces over which the partial trace is formed. The given indices correspond to the ordering of subspaces specified via the ‘dims’ argument.
do_copy (bool, optional) – If false, the operation is executed inplace. Otherwise returns a new instance. Defaults to True.
- Returns
pmat – The partially traced OperatorMatrix.
- Return type
- property shape: Tuple
Returns the shape of the matrix.
- abstract spectral_decomposition(hermitian: bool = False)[source]
Calculates the eigenvalues and eigenvectors of a square matrix.
- Parameters
hermitian (bool) – If True, the matrix is assumed to be hermitian.
- Returns
eig_vals (array of shape (n, )) – Eigenvalues
eig_vecs (array of shape (n, n)) – Right eigenvectors. The normalized eigenvector eig_vecs[:, i] corresponds to the eigenvalue eig_vals[i].
- abstract tr() complex [source]
Trace of the matrix.
- Returns
trace – Trace of the matrix.
- Return type
complex
- abstract transpose(do_copy: bool = True) Optional[OperatorMatrix] [source]
Transpose of the matrix.
- Parameters
do_copy (bool, optional) – If false, then the operation is executed inplace. Otherwise returns a new instance. Defaults to True.
- Returns
out – If do_copy is true, then a new instance otherwise self.
- Return type
- abstract truncate_to_subspace(subspace_indices: Optional[Sequence[int]], map_to_closest_unitary: bool = False) OperatorMatrix [source]
Convenience function to truncate a control matrix to a subspace.
- Parameters
subspace_indices (list of int, optional) – Indices of the subspace to which the control matrix shall be truncated. If None, then a reference to the original matrix will be returned.
map_to_closest_unitary (bool) – If True, then the final propagator is mapped to the closest unitary before the infidelity is evaluated.
- Returns
truncated_matrix – The truncated operator matrix.
- Return type
‘OperatorMatrix’
- class SparseOperator[source]
Bases:
OperatorMatrix
- closest_unitary(matrix: OperatorMatrix)[source]
Calculate the unitary matrix U that is closest with respect to the operator norm distance to the general matrix A.
- Parameters
matrix (OperatorMatrix) – The matrix which shall be mapped to the closest unitary.
- Returns
unitary – The closest unitary to the propagator.
- Return type
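The standard construction for this task uses the singular value decomposition: for A = V S W^dag, the product U = V W^dag is the closest unitary to A in the operator norm. This is a hedged sketch of that construction, not necessarily the qopt implementation:

```python
import numpy as np

def closest_unitary_sketch(matrix: np.ndarray) -> np.ndarray:
    """Polar/SVD construction: drop the singular values of A = V S W^dag
    and return U = V W^dag (illustrative sketch)."""
    v, _, w_dag = np.linalg.svd(matrix)
    return v @ w_dag

almost_unitary = np.array([[1.05, 0.02], [0.0, 0.95]])
u = closest_unitary_sketch(almost_unitary)
np.allclose(u @ u.conj().T, np.eye(2))  # → True
```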
- convert_ket_vectorized_density_matrix_to_square(vectorized_density_matrix: Union[OperatorMatrix, array])[source]
Bring vectorized density matrix back into square form.
- Parameters
vectorized_density_matrix (OperatorMatrix or numpy array) – The density matrix as ket vector.
- Returns
The density matrix in square form.
- Return type
density_matrix
- Raises
ValueError: – If the operation is not defined for the input type.
ValueError: – If the density matrix is not given as ket vector.
- convert_unitary_to_super_operator(unitary: Union[OperatorMatrix, array])[source]
We assume that the unitary U shall be used to propagate a density matrix m like
\[U m U^\dagger\]which is equivalent to
\[( U^\ast \otimes U) \vec{m}\]- Parameters
unitary (OperatorMatrix or numpy array) – The unitary propagator.
- Returns
The unitary propagator in the Lindblad formalism. Same type as input.
- Return type
unitary_super_operator
- Raises
ValueError: – If the operation is not defined for the input type.
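With column-wise vectorization, the formula above corresponds to np.kron(U.conj(), U); the identity vec(A X B) = (B^T ⊗ A) vec(X) makes this checkable in a few lines (a numpy sketch, not the qopt source):

```python
import numpy as np

def unitary_to_super_sketch(u: np.ndarray) -> np.ndarray:
    """U rho U^dag acting on a column-wise vectorized rho becomes
    (U^* otimes U) vec(rho), as in the formula above."""
    return np.kron(u.conj(), u)

# Consistency check against the direct conjugation U rho U^dag:
u = np.array([[0, 1], [1, 0]], dtype=complex)            # sigma_x
rho = np.array([[0.75, 0.1], [0.1, 0.25]], dtype=complex)
vec = rho.reshape(-1, 1, order='F')                      # column-wise vectorization
lhs = (u @ rho @ u.conj().T).reshape(-1, 1, order='F')
rhs = unitary_to_super_sketch(u) @ vec
np.allclose(lhs, rhs)  # → True
```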
- ket_vectorize_density_matrix(density_matrix: Union[OperatorMatrix, array])[source]
Vectorizes a density matrix column-wise as ket vector.
- Parameters
density_matrix (OperatorMatrix or numpy array) – The density matrix.
- Returns
The density matrix as ket vector for the Liouville formalism.
- Return type
density_ket_vector
- Raises
ValueError: – If the operation is not defined for the input type.
ValueError: – If the density matrix is not given in square shape.
3.7. noise module
This file contains classes and helper functions for the sampling of noise distributions.
The NoiseTraceGenerator class and its children are used by the Solver class to generate noise samples. They use various helper functions for the respective sampling methods. In the current version, the sampling of a quasi static distribution and the spectral noise density of fast noise is supported.
3.7.1. Classes
NoiseTraceGenerator
Abstract base class defining the interface of the noise trace generators.
NTGQuasiStatic
Generates noise traces for quasi static noise.
NTGColoredNoise
Generates noise traces of arbitrary colored spectra.
3.7.2. Functions
bell_curve_1dim()
One dimensional bell curve.
sample_1dim_gaussian_distribution()
Draw samples from the one dimensional bell curve.
bell_curve_2dim()
Two dimensional bell curve.
sample_2dim_gaussian_distribution()
Draw samples from the two dimensional bell curve.
fast_colored_noise()
Samples an arbitrary colored noise spectrum.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class NTGColoredNoise(n_samples_per_trace: int, noise_spectral_density: Callable, dt: float, n_traces: int = 1, n_noise_operators: int = 1, always_redraw_samples: bool = True, low_frequency_extension_ratio: int = 1)[source]
Bases:
NoiseTraceGenerator
This class draws noise samples from noises of arbitrary colored spectra.
- Parameters
n_samples_per_trace (int) – Number of noise samples per trace.
n_traces (int, optional) – Number of noise traces. Default is 1.
n_noise_operators (int, optional) – Number of noise operators. Default is 1.
always_redraw_samples (bool) – If true, the samples are always redrawn upon request and the stored samples are not returned.
noise_spectral_density (function) – The one-sided noise spectral density as function of frequency.
dt (float) – Time distance between two adjacent samples.
low_frequency_extension_ratio (int, optional) – When creating the time samples, the total time is multiplied with this factor. This allows taking frequencies into account which are lower than 1 / total time. Defaults to 1.
- noise_spectral_density
The noise spectral density as function of frequency.
- Type
function
- dt
Time distance between two adjacent samples.
- Type
float
- _sample_noise: None
Samples noise from an arbitrary colored spectrum.
See also
noise.NoiseTraceGenerator
Abstract Base Class
- plot_periodogram(n_average: int, scaling: str = 'density', log_plot: Optional[str] = None, draw_plot=True)[source]
Creates noise samples and plots the corresponding periodogram.
- Parameters
n_average (int) – Number of Periodograms which are averaged.
scaling ({'density', 'spectrum'}, optional) – If ‘density’, the power spectral density in units of V**2/Hz is plotted. If ‘spectrum’, the power spectrum in units of V**2 is plotted. Defaults to ‘density’.
log_plot ({None, 'semilogy', 'semilogx', 'loglog'}, optional) – If None, both axes are scaled linearly. If ‘semilogy’, only the y-axis is plotted logarithmically; if ‘semilogx’, only the x-axis; if ‘loglog’, both axes. Defaults to None.
draw_plot (bool, optional) – If true, then the periodogram is plotted. Defaults to True.
- Returns
deviation_norm – The vector norm of the deviation between the actual power spectral density and the power spectral density found in the periodogram.
- Return type
float
- class NTGQuasiStatic(standard_deviation: List[float], n_samples_per_trace: int, n_traces: int = 1, noise_samples: Optional[ndarray] = None, always_redraw_samples: bool = True, correct_std_for_discrete_sampling: bool = True, sampling_mode: str = 'uncorrelated_deterministic')[source]
Bases:
NoiseTraceGenerator
This class draws noise traces of quasistatic noise.
The noise distribution is assumed to be normal. It would not make sense to use the attribute always_redraw_samples if the samples are deterministic and therefore always the same. If multiple noise operators are given, the noise is sampled for each one separately.
- Parameters
standard_deviation (List[float], len (n_noise_operators)) – Standard deviations of the noise assumed on the noise operators.
n_samples_per_trace (int) – Number of noise samples per trace.
n_traces (int, optional) – Number of noise traces. Default is 1.
noise_samples (None or np.ndarray, optional) – shape (n_noise_operators, n_traces, n_samples_per_trace) Precalculated noise samples. Defaults to None.
sampling_mode ({'uncorrelated_deterministic', 'monte_carlo'}, optional) – The method by which the quasi static noise samples are drawn. The following are implemented: ‘uncorrelated_deterministic’: No correlations are assumed. Each noise operator is sampled n_traces times deterministically. ‘monte_carlo’: The noise is assumed to be correlated. Samples are drawn pseudo-randomly. Defaults to ‘uncorrelated_deterministic’.
- standard_deviation
Standard deviations of the noise assumed on the noise operators.
- Type
List[float], len (n_noise_operators)
See also
noise.NoiseTraceGenerator
Abstract Base Class
- property n_traces: int
Number of traces.
The number of requested traces must be multiplied by the number of standard deviations, because each standard deviation is sampled separately.
- class NoiseTraceGenerator(n_samples_per_trace: int, always_redraw_samples: bool, n_traces: int = 1, n_noise_operators: int = 1, noise_samples: Optional[ndarray] = None)[source]
Bases:
ABC
Abstract base class defining the interface of the noise trace generators.
- Parameters
n_samples_per_trace (int) – Number of noise samples per trace.
n_traces (int, optional) – Number of noise traces. Default is 1.
n_noise_operators (int, optional) – Number of noise operators. Default is 1.
noise_samples (None or np.ndarray, optional,) – shape (n_noise_operators, n_traces, n_samples_per_trace) Precalculated noise samples. Defaults to None.
always_redraw_samples (bool) – If true, the samples are always redrawn upon request and the stored samples are not returned.
- always_redraw_samples
If true, the samples are always redrawn upon request and the stored samples are not returned.
- Type
bool
- property n_noise_operators: int
Number of noise operators.
- property n_samples_per_trace: int
Number of samples per trace.
- property n_traces: int
Number of traces.
- property noise_samples: ndarray
- bell_curve_1dim(x: Union[ndarray, float], stdx: float) Union[ndarray, float] [source]
One dimensional Bell curve.
- Parameters
x (np.ndarray or float) – Point at which the bell curve is evaluated.
stdx (float) – Standard deviation of the bell curve.
- Returns
out – Function values.
- Return type
np.ndarray or scalar
- bell_curve_2dim(x: Union[ndarray, float], stdx: float, y: Union[ndarray, float], stdy: float) Union[ndarray, float] [source]
Two dimensional Bell curve.
- Parameters
x (np.ndarray or float) – First dimension value at which the bell curve is evaluated.
stdx (float) – Standard deviation of the bell curve in the x dimension.
y (np.ndarray or float) – Second dimension value at which the bell curve is evaluated.
stdy (float) – Standard deviation of the bell curve in the y dimension.
- Returns
out – Function values.
- Return type
np.ndarray or scalar
- fast_colored_noise(spectral_density: Callable, dt: float, n_samples: int, output_shape: Tuple, r_power_of_two=False) ndarray [source]
Generates noise traces of arbitrary colored noise.
Use this code for validation:
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
>>> traces = fast_colored_noise(spectral_density, dt, n_samples,
...                             output_shape)
>>> f_max = 1 / dt
>>> f, S = signal.welch(traces, f_max, axis=-1)
>>> plt.loglog(f, spectral_density(f))
>>> plt.loglog(f, S.mean(axis=0))
- Parameters
spectral_density (Callable) – The one sided spectral density as function of frequency.
dt (float) – Time distance between two samples.
n_samples (int) – Number of samples.
output_shape (tuple of int) – Shape of the noise traces to be returned.
r_power_of_two (bool) – If true, then n_samples is rounded downwards to the next power of 2 for an efficient fast Fourier transform.
- Returns
delta_colored – Where actual_n_samples is n_samples or the largest power of 2 smaller than n_samples if r_power_of_two is true.
- Return type
np.ndarray, shape(output_shape, actual_n_samples)
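The frequency-domain recipe behind such generators can be sketched as follows (a hedged illustration, not the qopt source; the overall scaling depends on the PSD normalization convention and is omitted here): shape complex white Gaussian noise with sqrt(S(f)) and transform back to the time domain.

```python
import numpy as np

def fast_colored_noise_sketch(spectral_density, dt, n_samples, rng=None):
    """Generate one colored-noise trace by filtering white noise in the
    frequency domain (illustrative sketch; scaling convention omitted)."""
    rng = np.random.default_rng() if rng is None else rng
    f = np.fft.rfftfreq(n_samples, dt)
    amplitudes = np.sqrt(spectral_density(f))
    amplitudes[0] = 0.0                                  # drop the DC component
    white = rng.normal(size=f.size) + 1j * rng.normal(size=f.size)
    return np.fft.irfft(amplitudes * white, n=n_samples)

# Example with a flat (white) spectrum:
trace = fast_colored_noise_sketch(lambda f: np.ones_like(f), dt=0.1, n_samples=256)
```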
- inverse_cumulative_gaussian_distribution_function(z: Union[float, array], std: float, mean: float)[source]
Calculates the inverse cumulative function for the gaussian distribution.
- Parameters
z (Union[float, np.array]) – Function value.
std (float) – Standard deviation of the bell curve.
mean (float) – Mean value of the gaussian distribution. Defaults to 0.
- Returns
selected_x – Noise samples.
- Return type
list of float
- sample_1dim_gaussian_distribution(std: float, n_samples: int, mean: float = 0) array [source]
Returns ‘n_samples’ samples from the one dimensional bell curve.
The samples are chosen such that the integral over the bell curve between two adjacent samples is always the same. The samples reproduce the correct standard deviation only in the limit n_samples -> inf due to the discreteness of the approximation. The error scales, to good approximation, as 1/n_samples.
- Parameters
std (float) – Standard deviation of the bell curve.
n_samples (int) – Number of samples returned.
mean (float) – Mean value of the gaussian distribution. Defaults to 0.
- Returns
selected_x – Noise samples.
- Return type
numpy array of shape:(n_samples, )
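The equal-probability-mass scheme described above amounts to placing one sample at the center of each of n_samples equally probable slices of the Gaussian CDF. A hedged stdlib-only sketch (using statistics.NormalDist, not the qopt implementation):

```python
from statistics import NormalDist

def sample_1dim_gaussian_sketch(std, n_samples, mean=0.0):
    """Deterministic sampling via the inverse CDF: sample i sits at the
    quantile (i + 0.5) / n_samples (illustrative sketch)."""
    dist = NormalDist(mu=mean, sigma=std)
    return [dist.inv_cdf((i + 0.5) / n_samples) for i in range(n_samples)]

samples = sample_1dim_gaussian_sketch(std=1.0, n_samples=11)
# Symmetric around the mean by construction; the middle sample is the mean.
```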
- sample_2dim_gaussian_distribution(std1: float, std2: float, n_samples: int)[source]
Returns ‘n_samples’ samples from the two dimensional bell curve.
The samples are chosen such that the integral over the bell curve between two adjacent samples is always the same.
- Parameters
std1 (float) – Standard deviation of the bell curve in the first dimension.
std2 (float) – Standard deviation of the bell curve in the second dimension.
n_samples (int) – Number of samples returned.
- Returns
selected_x (np.ndarray,) – X values of the noise samples.
selected_y (np.ndarray,) – Y values of the noise samples.
3.8. optimization_data module
This module stores information about the optimization and its result.
The OptimizationResult is generated with the final properties and initial optimization parameter values of each optimization run. The OptimizationSummary is only created when requested and stores the properties of each step in the optimization algorithm. This information is valuable for the choice of the best optimization algorithm.
3.8.1. Classes
OptimizationResult
Describes the information gained by an optimization run.
OptimizationSummary
Describes the full information gained during an optimization run.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class OptimizationResult(final_cost=None, indices=None, final_parameters=None, final_grad_norm=None, init_parameters=None, num_iter=None, termination_reason='not started yet', status=None, optimization_stats=None, optimizer=None, optim_summary=None)[source]
Bases:
object
Resulting data of the optimization.
An instance of this class is returned by the Optimizer after the optimization has terminated. It holds the results of the optimization and can also contain an instance of OptimizationSummary to describe the optimization run itself, for example its convergence. The parameters of the initialization method are all optional. This class is intended to be initialized empty or loaded from a dictionary by the class method
from_dict().
- termination_reason
Reason for the termination as string.
- Type
string
- status
The termination_reason as integer, analogous to scipy.OptimizeResult. None if the optimization has not started. -1: improper input parameters. 0: the maximum number of function evaluations is exceeded. 1: gradient norm termination condition is satisfied. 2: cost function termination condition is satisfied. 3: minimal step size termination condition is satisfied. 4: both termination conditions 2 and 3 are satisfied.
- Type
None or int
- final_cost
Value of the cost functions after the optimization.
- Type
float
- final_grad_norm
Norm of the gradient after the optimization.
- Type
float
- num_iter
Number of iterations in the optimization algorithm.
- Type
integer
- init_parameters
The amplitudes at the start of the optimisation, where n_t is the number of time steps simulated and n_par the number of optimization parameters.
- Type
array, shape: (n_t, n_par)
- final_parameters
The optimization parameters at the end of the optimisation, where n_t is the number of time steps simulated and n_par the number of optimization parameters.
- Type
array, shape: (n_t, n_par)
- optimizer
Instance of the Optimizer used to generate the result
- Type
Optimizer
- optim_summary
None if no intermediary results are saved. Otherwise it holds the costs recorded during the optimization.
- Type
OptimizationSummary
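The status codes listed above can be translated into human-readable messages with a small lookup. The helper below is hypothetical and not part of the qopt API; the mapping merely restates the documented codes.

```python
# Hypothetical helper mirroring the documented status codes of
# OptimizationResult; not part of the qopt API.
STATUS_MESSAGES = {
    None: "optimization has not started",
    -1: "improper input parameters",
    0: "maximum number of function evaluations exceeded",
    1: "gradient norm termination condition satisfied",
    2: "cost function termination condition satisfied",
    3: "minimal step size termination condition satisfied",
    4: "cost function and minimal step size conditions satisfied",
}

def describe_status(status):
    """Return a readable message for an OptimizationResult status code."""
    return STATUS_MESSAGES.get(status, "unknown status")
```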
- class OptimizationSummary(indices=None, iter_num=0, costs=None, gradients=None, parameters=None)[source]
Bases:
object
A summary of an optimization run.
This class saves the state of the optimization for each iteration. All parameters of the initialization method are optional. The class is intended to be initialized empty.
- iter_num
Number of iterations stored. Serves as checksum to verify that full data has been stored.
- Type
int
- costs
Evaluation results of the cost functions. The dictionary is sorted by cost function indices. The lists hold one entry for each evaluation.
- Type
List[float]
- indices
The indices of the cost functions.
- Type
List[str]
- gradients
Gradients of the cost functions. The dictionary is again sorted by cost function indices and the lists hold one entry per evaluation.
- Type
List[array]
- parameters
Optimization parameters during the optimization.
- Type
List[array]
3.9. optimize module
This module contains interfaces to optimization algorithms or the algorithms themselves.
Currently it supports interfacing to least squares and minimization algorithms of the scipy package. Conditionally, simulated annealing is supported if the simanneal package is included in the environment. (See installation instructions on https://github.com/qutech/qopt.) The classes for simulated annealing are in an experimental state and not well tested!
The Optimizer class uses the Simulator as interface to the cost functions evaluated after the simulation.
3.9.1. Classes
Optimizer
Base class optimizer.
LeastSquaresOptimizer
An interface to scipy’s least squares optimizer.
ScalarMinimizingOptimizer
An interface to scipy’s minimize functions.
WallTimeExceeded
Exception for exceeding the optimization’s time limit.
PulseAnnealer
Helper class for SimulatedAnnealing.
SimulatedAnnealing
Simulated annealing as optimization.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class LeastSquaresOptimizer(system_simulator: Optional[Simulator] = None, termination_cond: Optional[Dict] = None, save_intermediary_steps: bool = True, method: str = 'trf', bounds: Optional[Union[ndarray, List]] = None, use_jacobian_function=True, cost_func_weights: Optional[Sequence[float]] = None, store_optimizer: bool = False)[source]
Bases:
Optimizer
Uses the scipy least squares method for optimization.
- Parameters
system_simulator (Simulator) – The systems simulator.
termination_cond (dict) – Termination conditions.
save_intermediary_steps (bool, optional) – If False, only the final simulation result is stored. Defaults to True.
method (str, optional) – The optimization method used. Currently implemented are: - ‘trf’: A trust region optimization algorithm. This is the default.
bounds (list of array-like, optional) – Attention: the boundary format can vary between optimizers! The boundary conditions for the pulse optimization. If none are given, the pulse is assumed to take any real value. The boundaries are given as a list of two arrays: the first array specifies the upper bounds and the second the lower bounds. Single parameters can be exempted by using np.inf with the appropriate sign.
- run_optimization(initial_control_amplitudes: array, verbose: int = 0) OptimizationResult [source]
See base class.
- class Optimizer(system_simulator: Optional[Simulator] = None, termination_cond: Optional[Dict] = None, save_intermediary_steps: bool = True, cost_func_weights: Optional[Sequence[float]] = None, use_jacobian_function=True, store_optimizer: bool = False)[source]
Bases:
ABC
Abstract base class for the optimizer.
- Parameters
system_simulator (Simulator) – The simulator is the interface to the simulation.
termination_cond (dict, optional) – The termination conditions of the optimization.
save_intermediary_steps (bool, optional) – If True, then the results from intermediary steps are stored. Defaults to True.
cost_func_weights (list of float, optional) – The cost functions are multiplied with these weights during the optimisation.
use_jacobian_function (bool, optional) – If set to true, then the jacobians are calculated analytically. Defaults to True.
store_optimizer (bool, optional) – If True, then the optimizer stores itself in the result class. Defaults to False
- pulse_shape
The shape of the control amplitudes is saved and used for the cost functions, while the optimization routine might need them flattened.
- Type
Tuple of int
- cost_func_weights
The cost functions are multiplied with these weights during the optimisation.
- Type
list of float, optional
- use_jacobian_function
If set to true, then the jacobians are calculated analytically.
- Type
bool, optional
- store_optimizer
If True, then the optimizer stores itself in the result class. Defaults to False
- Type
bool, optional
- cost_func_wrapper(optimization_parameters)[source]
Wraps the cost function given by the simulator class.
The relevant information for the analysis is saved.
- Parameters
optimization_parameters (np.array) – Raw optimization parameters in a linear array.
- Returns
costs – Cost values.
- Return type
np.array, shape (n_fun)
- cost_jacobian_wrapper(optimization_parameters)[source]
Wraps the cost Jacobian function given by the simulator class.
The relevant information for the analysis is saved.
- Parameters
optimization_parameters (np.array) – Raw optimization parameters in a linear array.
- Returns
jacobian – Jacobian of the cost functions.
- Return type
np.array, shape (num_func, num_t * num_amp)
- prepare_optimization(initial_optimization_parameters: ndarray)[source]
Prepare for the next optimization.
- Parameters
initial_optimization_parameters (array) – shape (num_t, num_ctrl). Data stored in this class might be overwritten.
- abstract run_optimization(initial_control_amplitudes: ndarray, verbose) OptimizationResult [source]
Runs the optimization of the control amplitudes.
- Parameters
initial_control_amplitudes (array) – shape (num_t, num_ctrl)
verbose – Verbosity of the run. Depends on which optimizer is used.
- Returns
optimization_result – The resulting data of the simulation.
- Return type
OptimizationResult
- class ScalarMinimizingOptimizer(system_simulator: Optional[Simulator] = None, termination_cond: Optional[Dict] = None, save_intermediary_steps: bool = True, method: str = 'L-BFGS-B', bounds: Optional[Union[ndarray, List]] = None, use_jacobian_function=True, cost_func_weights: Optional[Sequence[float]] = None, store_optimizer: bool = False)[source]
Bases:
Optimizer
Interfaces to the minimize functions of the optimization package in scipy.
- Parameters
method (string) – Takes methods implemented by scipy.optimize.minimize.
bounds (sequence, optional) – Attention: The boundary format can vary between optimizers! The boundary conditions for the pulse optimizations. If none are given then the pulse is assumed to take any real value. The boundaries are given as a sequence of (min, max) pairs for each element. Defaults to None.
- cost_func_wrapper(optimization_parameters)[source]
Evaluates the cost function.
The total cost function is defined as the sum of cost functions.
- cost_jacobian_wrapper(optimization_parameters)[source]
The Jacobian reduced to the gradient.
The gradient is calculated by summation over the Jacobian along the function axis, because the total cost function is defined as the sum of cost functions.
- Returns
gradient – The gradient of the total cost function.
- Return type
numpy array, shape (num_t * num_amp)
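The reduction from Jacobian to gradient described above amounts to a single summation along the function axis. A minimal numpy sketch with illustrative values:

```python
import numpy as np

# Jacobian of shape (num_func, num_t * num_amp), as returned by
# cost_jacobian_wrapper; the values here are purely illustrative.
jacobian = np.array([[1.0, 2.0, 3.0],
                     [0.5, -1.0, 0.0]])

# Since the total cost is defined as the sum of the individual cost
# functions, its gradient is the sum of the rows of the Jacobian.
gradient = np.sum(jacobian, axis=0)
```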
- run_optimization(initial_control_amplitudes: array, verbose: bool = False) OptimizationResult [source]
Runs the optimization of the control amplitudes.
- Parameters
initial_control_amplitudes (array) – shape (num_t, num_ctrl)
verbose – Verbosity of the run. Depends on which optimizer is used.
- Returns
optimization_result – The resulting data of the simulation.
- Return type
OptimizationResult
- class SimulatedAnnealing(system_simulator: Optional[Simulator], bounds: Optional[ndarray], store_optimizer: bool = False, initial_temperature: float = 1.0, final_temperature: float = 1e-06, step_size: int = 1, step_ratio: float = 1.0, steps: Optional[int] = 1000, updates: int = 100)[source]
Bases:
Optimizer
This class uses simulated annealing for discrete optimization.
The use of this class requires the installation of the simanneal package from pypi.
- Parameters
bounds (array of boundaries, shape: (2, num_t, num_ctrl)) – The boundary conditions for the pulse optimizations. bounds[0] should be the lower bounds, and bounds[1] the upper ones.
initial_temperature (float) – Initial or maximum temperature for the annealing algorithm. The initial temperature should be about as large as the initial infidelity. If pulses with random initial values are optimized, an initial temperature of 1 is suggested. Defaults to 1.
final_temperature (float) – Final or minimal temperature. The final temperature should be at least two orders of magnitude smaller than the final infidelity you expect, such that the annealer does not jump away from a good solution shortly before completion.
step_size (int) – The annealer optimizes by taking steps of any integer size whose absolute value is smaller than or equal to step_size, in any direction. Defaults to 1 for the optimization of integer values.
step_ratio (float) – The step ratio determines how many of the time segments are optimized at once. Defaults to 1.
steps (int) – Number of optimization steps before the pulse terminates. Defaults to 1000.
updates (int) – Number of updates. At each update the program calls the update function to document how many steps were accepted since the last update and how many of them improved the energy or cost function. Defaults to 100.
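The annealing procedure can be sketched as follows. This is not the simanneal-based qopt implementation; the function, its parameters, and the toy cost function are illustrative stand-ins that mirror the documented bounds, step_size, and temperature parameters.

```python
import math
import random

def anneal(cost, pulse, bounds, t0=1.0, t1=1e-6, steps=1000, step_size=1, seed=0):
    """Minimal simulated-annealing sketch for integer-valued pulses."""
    rng = random.Random(seed)
    lower, upper = bounds
    current, e_cur = list(pulse), cost(pulse)
    best, e_best = list(current), e_cur
    for k in range(steps):
        # Exponential cooling from the initial to the final temperature.
        temp = t0 * (t1 / t0) ** (k / max(steps - 1, 1))
        i = rng.randrange(len(current))
        candidate = list(current)
        # Steps are integers with absolute value up to step_size.
        candidate[i] += rng.choice(
            [s for s in range(-step_size, step_size + 1) if s != 0])
        candidate[i] = min(max(candidate[i], lower[i]), upper[i])
        e_new = cost(candidate)
        # Accept improvements always; uphill moves with Boltzmann probability.
        if e_new <= e_cur or rng.random() < math.exp((e_cur - e_new) / temp):
            current, e_cur = candidate, e_new
            if e_cur < e_best:
                best, e_best = list(current), e_cur
    return best, e_best

# Toy discrete optimization: drive a two-segment pulse towards (2, 3).
target_cost = lambda p: (p[0] - 2) ** 2 + (p[1] - 3) ** 2
best, e_best = anneal(target_cost, [0, 0], ([0, 0], [5, 5]))
```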
3.10. qopt.parallel module
This module contains functions for the support of multiprocessing.
The function run_optimization_parallel can be used to perform the optimization for multiple initial conditions in parallel.
Caution! The solver class SchroedingerSMonteCarlo offers functionality for the parallel execution of the simulation for various noise samples. These features are not compatible: the program can only be parallelized once.
3.10.1. Functions
run_optimization()
Executes the run_optimization method of an optimizer.
run_optimization_parallel()
Parallel execution of the run_optimization Method of the Optimizer.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- run_optimization(optimizer, initial_pulse)[source]
Executes the run_optimization method of an optimizer.
- Parameters
optimizer (Optimizer) – The Optimizer.
initial_pulse (numpy array, shape (num_t, num_ctrl)) – The initial pulse.
- Returns
result – The result of the optimization.
- Return type
OptimizationResult
- run_optimization_parallel(optimizer, initial_pulses, processes=None)[source]
Parallel execution of the run_optimization Method of the Optimizer.
- Parameters
optimizer (Optimizer) – The Optimizer.
initial_pulses (numpy array, shape (num_init, num_t, num_ctrl)) – The initial pulses, where num_init is the number of initial pulses.
processes (int, optional) – If an integer is given, then the propagation is calculated in this number of parallel processes. If 1 then no parallel computing is applied. If None then cpu_count() is called to use all cores available. Defaults to None.
- Returns
data – A DataContainer in which the OptimizationResults are saved.
- Return type
DataContainer
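The parallelization pattern is a map of run_optimization over the initial pulses. The sketch below uses a thread pool and a toy optimizer to stay self-contained; qopt distributes the work over a process pool with cpu_count() workers by default, and the stand-in functions here are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def run_optimization(optimizer, initial_pulse):
    """Stand-in for the module-level helper: runs one optimization."""
    return optimizer(initial_pulse)

def toy_optimizer(pulse):
    # Hypothetical "optimizer" that just halves every amplitude.
    return [0.5 * p for p in pulse]

initial_pulses = [[1.0, 2.0], [4.0, 6.0], [8.0, 10.0]]
# One optimization per initial pulse, executed concurrently.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(partial(run_optimization, toy_optimizer),
                            initial_pulses))
```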
3.11. qopt.performance_statistics module
Statistics of the use of computational resources.
The class PerformanceStatistics gathers information about the wall time spent for the calculation of each cost function and its gradient. It can be used to evaluate the consumption of computational resources.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class PerformanceStatistics[source]
Bases:
object
Stores performance statistics.
- start_t_opt
Time of the optimization's start. None if it has not been set yet.
- Type
float or None
- end_t_opt
Time of the optimization's end. None if it has not been set yet.
- Type
float or None
- indices
The indices of the cost functions.
- Type
List[str]
- cost_func_eval_times
List of durations of the evaluation of the cost functions.
- Type
list of float
- grad_func_eval_times
List of durations of the evaluation of the gradients.
- Type
list of float
3.12. qopt.plotting module
Plotting functions.
The function plot_bloch_vector_evolution can be used to plot the evolution under a series of propagators on the Bloch sphere. It uses QuTiP and is only available if QuTiP is installed in the environment. (See installation instructions on https://github.com/qutech/qopt.)
3.12.1. Functions
plot_bloch_vector_evolution()
Plots the evolution of the forward propagators of the initial state on the Bloch sphere.
Notes
The implementation was adapted from the filter_functions package.
- plot_bloch_vector_evolution(forward_propagators: Sequence[OperatorMatrix], initial_state: OperatorMatrix, return_bloch: bool = False, **bloch_kwargs)[source]
Plots the evolution of the forward propagators of the initial state on the Bloch sphere.
- Parameters
forward_propagators (list of DenseOperators) – The forward propagators whose evolution shall be plotted on the Bloch sphere.
initial_state (DenseOperator) – The initial state aka. beginning point of the plotting.
return_bloch (bool, optional) – If True, the Bloch sphere is returned as object.
bloch_kwargs (dict, optional) – Plotting parameters for the Bloch sphere.
- Returns
Only returned if return_bloch is set to true.
- Return type
bloch_sphere
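The quantity being plotted is the Bloch vector of the propagated state. A self-contained sketch without QuTiP, using hypothetical x-rotations as stand-ins for the DenseOperator forward propagators of a Solver:

```python
import numpy as np

# Pauli matrices; the Bloch vector of a pure qubit state is <psi|sigma_i|psi>.
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def bloch_vector(state):
    state = state / np.linalg.norm(state)
    return np.real([state.conj() @ s @ state for s in PAULIS])

def rx(theta):
    """Rotation about the x-axis, a stand-in for a forward propagator."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

psi0 = np.array([1.0, 0.0], dtype=complex)  # |0>, Bloch vector (0, 0, 1)
# Apply each propagator to the initial state and record the Bloch vectors.
points = [bloch_vector(rx(theta) @ psi0) for theta in np.linspace(0, np.pi, 5)]
```

plot_bloch_vector_evolution performs this bookkeeping for the forward propagators and hands the points to QuTiP's Bloch sphere for rendering.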
3.13. simulator module
The Simulator class provides the interface between the optimizer and the actual simulation.
The clean interface between the simulation and the optimization algorithm allows qopt to interface with a wide class of optimization algorithms. A special focus is set on gradient based optimization algorithms and analytic gradients are also wrapped by the Simulator class.
The correct setup of the entire simulation might require the implementation of functions and their derivatives. This is a common source of mistakes, and therefore the Simulator class offers convenience functions to check the integrated calculation of gradients based on analytic results against finite differences.
3.13.1. Classes
Simulator
Base class.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class Simulator(solvers: Optional[Sequence[Solver]], cost_funcs: Optional[Sequence[CostFunction]], optimization_parameters=None, num_ctrl=None, times=None, num_times=None, record_performance_statistics: bool = True, numeric_jacobian: bool = False)[source]
Bases:
object
The Simulator class provides the interface for the Optimizer class.
It wraps the cost functions and optionally the gradient of the infidelity.
- Parameters
solvers (Solver) – This object calculates the evolution of the system under consideration.
cost_funcs (List[FidelityComputer]) – The cost functions to be evaluated and optimized.
optimization_parameters (numpy array, optional) – The initial pulse of shape (N_t, N_c) where N_t is the number of time steps and N_c the number of controlled parameters.
num_ctrl (int, optional) – The number of controlled parameters N_c.
times (numpy array or list, optional) – A one dimensional numpy array of the discrete time steps.
num_times (int, optional) – The number of time steps N_t. Mainly for consistency checks.
record_performance_statistics (bool) – If True, then the evaluation times of the cost functions and their gradients are stored.
- solvers
Instances of the time slot computers used by the cost functions.
- Type
list of Solver
- cost_funcs
Instances of the cost functions which are to be optimized.
- Type
list of CostFunction
- stats
Performance statistics.
- Type
Stats
- TODO
properly implement check method as parser
flags controlling how much data is saved
is the pulse attribute useful?
- check attributes for duplication: should times, num_ctrl and
num_times be saved at this level?
- compare_numeric_to_analytic_gradient(pulse: Optional[ndarray] = None, delta_eps: float = 1e-08, symmetric: bool = False)[source]
This function compares the numerical to the analytical gradient in order to serve as a consistency check.
- Parameters
pulse (array) – The pulse at which the gradient is evaluated.
delta_eps (float) – The finite difference.
symmetric (bool) – If True, then the finite differences are evaluated symmetrically around the pulse. Otherwise by forward finite differences.
- Returns
gradient_difference_norm (float) – The matrix norm of the difference between the numeric and analytic gradient.
gradient_difference_relative (float) – The relation of the aforementioned norm of the difference matrix and the average norm of the numeric and analytic gradient.
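The consistency check boils down to comparing finite differences against the analytic gradient. A self-contained sketch with a scalar toy cost function (the function and names are illustrative, not the qopt implementation):

```python
import numpy as np

def finite_difference_gradient(cost, pulse, delta_eps=1e-8, symmetric=False):
    """Gradient of a scalar cost by finite differences, either forward
    or symmetric around the pulse (sketch of the check described above)."""
    pulse = np.asarray(pulse, dtype=float)
    grad = np.zeros_like(pulse)
    f0 = cost(pulse)
    for i in range(pulse.size):
        shift = np.zeros_like(pulse)
        shift.flat[i] = delta_eps
        if symmetric:
            grad.flat[i] = (cost(pulse + shift)
                            - cost(pulse - shift)) / (2 * delta_eps)
        else:
            grad.flat[i] = (cost(pulse + shift) - f0) / delta_eps
    return grad

cost = lambda u: float(np.sum(u ** 2))     # analytic gradient: 2 * u
pulse = np.array([1.0, -2.0, 0.5])
numeric = finite_difference_gradient(cost, pulse, delta_eps=1e-6, symmetric=True)
analytic = 2 * pulse
# Norm of the difference relative to the average gradient norm,
# analogous to gradient_difference_relative.
relative_difference = (np.linalg.norm(numeric - analytic)
                       / np.linalg.norm(0.5 * (numeric + analytic)))
```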
- property cost_indices
Indices of cost functions.
- numeric_gradient(pulse: Optional[ndarray] = None, delta_eps: float = 1e-08, symmetric: bool = False) ndarray [source]
This function calculates the gradient numerically. It can be used as a consistency check for analytically calculated gradients.
- Parameters
pulse (array) – The pulse at which the gradient is evaluated.
delta_eps (float) – The finite difference.
symmetric (bool) – If True, then the finite differences are evaluated symmetrically around the pulse. Otherwise by forward finite differences.
- Returns
gradients – The gradients as numpy array of shape (n_time, n_func, n_opers).
- Return type
array
- property pulse
Optimization parameters.
- wrapped_cost_functions(pulse=None)[source]
Wraps the cost functions of the fidelity computer.
This function coordinates the complete simulation including the application of the transfer function, the execution of the time slot computer and the evaluation of the actual cost functions.
- Parameters
pulse (numpy array optional) – If no pulse is specified the cost function is evaluated for the attribute pulse.
- Returns
costs (numpy array, shape (n_fun)) – Array of costs (i.e. infidelities).
costs_indices (list of str) – Names of the costs.
- wrapped_jac_function(pulse=None)[source]
Wraps the gradient calculation functions of the fidelity computer.
- Parameters
pulse (numpy array, optional) – shape: (num_t, num_ctrl) If no pulse is specified the cost function is evaluated for the attribute pulse.
- Returns
jac – Array of gradients of shape (num_t, num_func, num_amp).
- Return type
numpy array
3.14. solver_algorithms module
Implements the algorithms to solve differential equations like Schroedinger’s equation or a master equation.
The Solver class is the central piece of the actual simulation. It calculates propagators from the differential equations describing the quantum dynamics. The abstract base class inherits among other things an interface to the PulseSequence class of the filter_functions package.
The Solver classes can have an amplitude and a transfer function as attribute and automate their use. The Monte Carlo solvers also hold an instance of a noise trace generator.
If requested, also derivatives of the propagators by the control amplitudes are calculated or approximated.
3.14.1. Classes
Solver
Abstract base class of the time slot computers.
SchroedingerSolver
Solver for the unperturbed Schroedinger equation.
SchroedingerSMonteCarlo
Solver for the Schroedinger equation under the influence of noise.
SchroedingerSMCControlNoise
Solver for the Schroedinger equation under the influence of noise affecting the control terms.
LindbladSolver
Solves the master equation in Lindblad form.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class LindbladSControlNoise(h_drift, h_ctrl, initial_state, tau, ctrl_amps, transfer_function=None, calculate_unitary_derivatives=True, filter_function_h_n=None, exponential_method=None, lindblad_operators=None, constant_lindblad_operators=False, noise_psd=1)[source]
Bases:
LindbladSolver
Special case of the Lindblad master equation considering white noise on the control parameters. The same functionality should be implementable with the parent class, but less conveniently.
- class LindbladSolver(h_drift: List[OperatorMatrix], h_ctrl: List[OperatorMatrix], tau: array, initial_state: Optional[OperatorMatrix] = None, ctrl_amps: Optional[array] = None, calculate_unitary_derivatives: bool = False, filter_function_h_n: Optional[Union[Callable, List[List]]] = None, filter_function_basis: Optional[Basis] = None, filter_function_n_coeffs_deriv: Optional[Callable[[ndarray], ndarray]] = None, exponential_method: Optional[str] = None, frechet_deriv_approx_method: Optional[str] = None, initial_diss_super_op: Optional[List[OperatorMatrix]] = None, lindblad_operators: Optional[List[OperatorMatrix]] = None, prefactor_function: Optional[Callable[[array, array], array]] = None, prefactor_derivative_function: Optional[Callable[[array, array], array]] = None, super_operator_function: Optional[Callable[[array, array], List[OperatorMatrix]]] = None, super_operator_derivative_function: Optional[Callable[[array, array], List[List[OperatorMatrix]]]] = None, is_skew_hermitian: bool = False, transfer_function: Optional[TransferFunction] = None, amplitude_function: Optional[AmplitudeFunction] = None)[source]
Bases:
SchroedingerSolver
Solves a master equation for an open quantum system in the Markov approximation using the Lindblad super operator formalism.
The master equation to be solved is
\[d \rho / dt = i [\rho, H] + \sum_k (L_k \rho L_k^\dagger - .5 L_k^\dagger L_k \rho - .5 \rho L_k^\dagger L_k)\]with the Lindblad operators L_k. The solution is calculated as
\[\rho(t) = exp[(-i \mathcal{H} + \mathcal{G})t] \rho(0)\]with the dissipative super operator
\[\mathcal{G} = \sum_k D(L_k)\]\[D(L) = L^\ast \otimes L - .5 I \otimes (L^\dagger L) - .5 (L^T L^\ast) \otimes I\]The dissipation super operator can be given in three different ways.
1. A nested list of dissipation super operators D(L_k) as control matrices.
2. A nested list of Lindblad operators L as control matrices.
3. A function handle receiving the control amplitudes as sole argument and returning a dissipation super operator as list of control matrices.
Optionally a prefactor function can be given for 1. and 2. This function receives the control parameters and returns an array of the shape num_t x num_l where num_t is the number of time steps in the control and num_l is the number of Lindblad operators or dissipation super operators.
If multiple construction arguments are given, the implementation prioritises the function (3.) over the Lindblad operators (2.) over the dissipation super operator (1.).
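The vectorized solution above can be reproduced directly with numpy and scipy. The sketch below builds the dissipation super operator D(L) in the column-stacking convention and evolves a qubit under amplitude damping; the function name and the example system are illustrative, not the qopt API.

```python
import numpy as np
from scipy.linalg import expm

def dissipation_super_op(L):
    """Column-stacking vectorization of the dissipator
    D(L) = L* (x) L - 0.5 I (x) (L^dag L) - 0.5 (L^T L*) (x) I."""
    eye = np.eye(L.shape[0])
    return (np.kron(L.conj(), L)
            - 0.5 * np.kron(eye, L.conj().T @ L)
            - 0.5 * np.kron(L.T @ L.conj(), eye))

# Amplitude damping of a qubit: L = sqrt(gamma) |0><1|, no Hamiltonian.
gamma, t = 0.5, 2.0
L = np.sqrt(gamma) * np.array([[0.0, 1.0], [0.0, 0.0]])
G = dissipation_super_op(L)
rho0 = np.array([[0.0, 0.0], [0.0, 1.0]])              # excited state |1><1|
# rho(t) = exp(G t) rho(0) in the vectorized picture (column stacking).
rho_t = (expm(G * t) @ rho0.flatten(order="F")).reshape(2, 2, order="F")
# The trace is preserved and the excited population decays as exp(-gamma t).
```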
- Parameters
initial_diss_super_op (List[ControlMatrix], len num_t) – Initial dissipation super operator; num_l is the number of Lindbladians. Set if you want to use (1.) (See documentation above!). The control matrices are expected to be of shape (dim, dim) where dim is the dimension of the system.
lindblad_operators (List[ControlMatrix], len num_l) – Lindblad operators; num_l is the number of Lindbladians. Set if you want to use (2.) (See documentation above!). The Lindblad operators are assumed to be of shape (dim, dim) where dim is the dimension of the system.
prefactor_function (Callable[[np.array, np.array], np.array]) –
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and the transferred optimization parameters (as numpy array of shape (num_t, num_opt)) and returns prefactors as numpy array of shape (num_t, num_l). The prefactors a_k are used as weights in the sum of the total dissipation operator.
\[\mathcal{G} = \sum_k a_k * D(L_k)\]If the Lindblad operator is for example given by a complex number b_k times a constant (in time) matrix C_k.
\[L_k = b_k * C_k\]Then the prefactor is the squared absolute value of this number:
\[a_k = |b_k|^2\]Set if you want to use method (1.) or (2.). (See class documentation.)
prefactor_derivative_function (Callable[[np.array, np.array], np.array]) –
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and the transferred optimization parameters (as numpy array of shape (num_t, num_opt)) and returns the derivatives of the prefactors as numpy array of shape (num_t, num_ctrl, num_l). The derivatives d_k are used as weights in the sum of the derivative of the total dissipation operator.
\[d \mathcal{G} / d u_k = \sum_k d_k * D(L_k)\]If the Lindblad operator is for example given by a complex number b_k times a constant (in time) matrix C_k. And this number depends on the control amplitudes u_k
\[L_k = b_k (u_k) * C_k\]Then the derivative of the prefactor is the derivative of the squared absolute value of this number:
\[d_k = d |b_k|^2 / d u_k\]Set if you want to use method (1.) or (2.). (See class documentation.)
super_operator_function (Callable[[np.array, np.array], List[ControlMatrix]]) – Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and the transferred optimization parameters (as numpy array of shape (num_t, num_opt)) and returns the total dissipation operators as list of length num_t. Set if you want to use method (3.). (See class documentation.)
super_operator_derivative_function (Callable[[np.array, np.array], List[List[ControlMatrix]]]) – Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and the transferred optimization parameters (as numpy array of shape (num_t, num_opt)) and returns the derivatives of the total dissipation operators as nested list of shape [[] * num_ctrl] * num_t. Set if you want to use method (3.). (See class documentation.)
is_skew_hermitian (bool) – If True, then the total dynamics generator is assumed to be skew hermitian.
- _diss_sup_op
Total dissipation super operator.
- Type
List[ControlMatrix], len num_t
- _diss_sup_op_deriv
shape [[] * num_ctrl] * num_t Derivative of the total dissipation operator with respect to the control amplitudes.
- Type
List[List[ControlMatrix]],
- _initial_diss_super_op
Initial dissipation super operator; num_l is the number of Lindbladians.
- Type
List[ControlMatrix], len num_l
- _lindblad_operators
Lindblad operators; num_l is the number of Lindbladians.
- Type
List[ControlMatrix], len num_l
- _prefactor_function
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and returns prefactors as numpy array of shape (num_t, num_l). The prefactors a_k are used as weights in the sum of the total dissipation operator.
\[\mathcal{G} = \sum_k a_k * D(L_k)\]If the Lindblad operator is for example given by a complex number b_k times a constant (in time) matrix C_k.
\[L_k = b_k * C_k\]Then the prefactor is the squared absolute value of this number:
\[a_k = |b_k|^2\]Set if you want to use method (1.) or (2.). (See class documentation.)
- Type
Callable[[np.array], np.array]
- _prefactor_deriv_function
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and returns the derivatives of the prefactors as numpy array of shape (num_t, num_ctrl, num_l). The derivatives d_k are used as weights in the sum of the derivative of the total dissipation operator.
\[d \mathcal{G} / d u_k = \sum_k d_k * D(L_k)\]If the Lindblad operator is for example given by a complex number b_k times a constant (in time) matrix C_k. And this number depends on the control amplitudes u_k
\[L_k = b_k (u_k) * C_k\]Then the derivative of the prefactor is the derivative of the squared absolute value of this number:
\[d_k = d |b_k|^2 / d u_k\]- Type
Callable[[np.array], np.array]
- _sup_op_func
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and returns the total dissipation operators as list of length num_t.
- Type
Callable[[np.array], List[ControlMatrix]]
- _sup_op_deriv_func
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and returns the derivatives of the total dissipation operators as nested list of shape [[] * num_ctrl] * num_t.
- Type
Callable[[np.array], List[List[ControlMatrix]]]
- _parse_dissipative_super_operator: None
- _calc_diss_sup_op: List[ControlMatrix]
Calculates the total dissipation super operator.
- _calc_diss_sup_op_deriv: Optional[List[List[ControlMatrix]]]
Calculates the derivatives of the total dissipation super operators with respect to the control amplitudes.
- `Todo`
Write parser
- class SchroedingerSMCControlNoise(h_drift: List[OperatorMatrix], h_ctrl: List[OperatorMatrix], tau: array, noise_trace_generator: Optional[NoiseTraceGenerator], initial_state: Optional[OperatorMatrix] = None, ctrl_amps: Optional[array] = None, calculate_propagator_derivatives: bool = False, processes: Optional[int] = 1, filter_function_h_n: Optional[Union[Callable, List[List]]] = None, filter_function_basis: Optional[Basis] = None, filter_function_n_coeffs_deriv: Optional[Callable[[ndarray], ndarray]] = None, exponential_method: Optional[str] = None, frechet_deriv_approx_method: Optional[str] = None, is_skew_hermitian: bool = True, transfer_function: Optional[TransferFunction] = None, amplitude_function: Optional[AmplitudeFunction] = None)[source]
Bases:
SchroedingerSMonteCarlo
Convenience class like SchroedingerSMonteCarlo but with noise on the optimization parameters.
This time slot computer solves the Schroedinger equation explicitly for concrete control noise realizations. It assumes that the noise is sampled on the time scale of the already transferred optimization parameters. The control Hamiltonians are also used as noise Hamiltonians, and the noise amplitude function adds the noise samples to the unperturbed transferred optimization parameters and applies the amplitude function of the control amplitudes.
- class SchroedingerSMonteCarlo(h_drift: List[OperatorMatrix], h_ctrl: List[OperatorMatrix], tau: array, h_noise: List[OperatorMatrix], noise_trace_generator: Optional[NoiseTraceGenerator], initial_state: Optional[OperatorMatrix] = None, ctrl_amps: Optional[array] = None, calculate_propagator_derivatives: bool = False, processes: Optional[int] = 1, filter_function_h_n: Optional[Union[Callable, List[List]]] = None, filter_function_basis: Optional[Basis] = None, filter_function_n_coeffs_deriv: Optional[Callable[[ndarray], ndarray]] = None, exponential_method: Optional[str] = None, frechet_deriv_approx_method: Optional[str] = None, is_skew_hermitian: bool = True, transfer_function: Optional[TransferFunction] = None, amplitude_function: Optional[AmplitudeFunction] = None, noise_amplitude_function: Optional[Callable[[array, array, array, array], array]] = None)[source]
Bases:
SchroedingerSolver
Solves Schroedinger’s equation for explicit noise realizations as a Monte Carlo experiment.
This time slot computer solves the Schroedinger equation explicitly for concrete noise realizations. The noise traces are generated by an instance of the Noise Trace Generator class. They can then be processed by the noise amplitude function before they are multiplied by the noise Hamiltonians.
- Parameters
h_noise (List[ControlMatrix], len num_noise_operators) – List of noise operators occurring in the Hamiltonian.
noise_trace_generator (noise.NoiseTraceGenerator) – Noise trace generator object.
processes (int, optional) – If an integer is given, then the propagation is calculated in this number of parallel processes. If 1 then no parallel computing is applied. If None then cpu_count() is called to use all cores available. Defaults to 1.
noise_amplitude_function (Callable[[noise_samples: np.array,) – optimization_parameters: np.array, transferred_parameters: np.array, control_amplitudes: np.array], np.array] The noise amplitude function calculates the noisy control amplitudes corresponding to the noise samples. It receives four arguments, in this order: the noise samples, the optimization parameters, the transferred optimization parameters and the control amplitudes. The noise samples are given with the shape (n_samples_per_trace, n_traces, n_noise_operators), the optimization parameters (num_x, num_ctrl), the transferred parameters (num_t, num_ctrl) and the control amplitudes (num_t, num_ctrl). The returned noise amplitudes should be of the shape (num_t, n_traces, n_noise_operators).
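As an illustration, a minimal noise amplitude function that simply adds the noise samples onto the control amplitudes could look as follows. This is a sketch with a hypothetical helper name, assuming one noise operator per control; it is not part of the qopt API:

```python
import numpy as np

def additive_noise_amplitudes(noise_samples, opt_pars, transferred_pars,
                              ctrl_amps):
    """Adds each noise trace directly onto the control amplitudes.

    noise_samples: shape (n_samples_per_trace, n_traces, n_noise_operators)
    ctrl_amps: shape (num_t, num_ctrl)
    Returns noise amplitudes of shape (num_t, n_traces, n_noise_operators).
    """
    # Broadcast the unperturbed amplitudes over the trace dimension and
    # add the noise samples (assumes one noise operator per control).
    return ctrl_amps[:, np.newaxis, :] + noise_samples

num_t, n_traces, n_ops = 4, 3, 2
noise = 0.01 * np.ones((num_t, n_traces, n_ops))
amps = np.arange(num_t * n_ops, dtype=float).reshape(num_t, n_ops)
noisy = additive_noise_amplitudes(noise, None, None, amps)
print(noisy.shape)  # (4, 3, 2)
```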
- h_noise
List of noise operators occurring in the Hamiltonian.
- Type
List[ControlMatrix], len num_noise_operators
- noise_trace_generator
Noise trace generator object.
- _dyn_gen_noise
shape [[] * num_t] * num_noise_traces Dynamics generators for the individual noise traces.
- Type
List[List[ControlMatrix]],
- _prop_noise
shape [[] * num_t] * num_noise_traces Propagators for the individual noise traces.
- Type
List[List[ControlMatrix]],
- _fwd_prop_noise
shape [[] * (num_t + 1)] * num_noise_traces Cumulation of the propagators for the individual noise traces. They describe the forward propagation of the system's state.
- Type
List[List[ControlMatrix]],
- _reversed_prop_noise
shape [[] * (num_t + 1)] * num_noise_traces Cumulation of propagators in reversed order for the individual noise traces.
- Type
List[List[ControlMatrix]],
- _derivative_prop_noise
shape [[[] * num_t] * num_ctrl] * num_noise_traces Frechet derivatives of the propagators by the control amplitudes for the individual noise traces.
- Type
List[List[List[ControlMatrix]]],
- propagators_noise: List[List[ControlMatrix]],
shape [[] * num_t] * num_noise_traces Propagators for the individual noise traces.
- forward_propagators_noise: List[List[ControlMatrix]],
shape [[] * (num_t + 1)] * num_noise_traces Cumulation of the propagators for the individual noise traces. They describe the forward propagation of the system's state.
- reversed_propagators_noise: List[List[ControlMatrix]],
shape [[] * (num_t + 1)] * num_noise_traces Cumulation of propagators in reversed order for the individual noise traces.
- frechet_deriv_propagators_noise: List[List[List[ControlMatrix]]],
shape [[[] * num_t] * num_ctrl] * num_noise_traces Frechet derivatives of the propagators by the control amplitudes for the individual noise traces.
- create_ff_h_n(self): List[List[np.ndarray, list, str]],
shape [[]]*num_noise_operators Creates the noise Hamiltonian of the filter function formalism.
- property create_ff_h_n: list
Creates the noise Hamiltonian of the filter function formalism.
If filter_function_h_n is None, then the filter function noise Hamiltonian is created from the Monte Carlo noise Hamiltonian by directly using the Operators and assuming all noise susceptibilities equal 1.
- Returns
create_ff_h_n – Noise Hamiltonian of the filter function formalism.
- Return type
nested list
- property forward_propagators_noise: List[List[OperatorMatrix]]
Returns the forward propagation of the initial state for every time slice and every noise trace and calculates it if necessary. If the initial state is the identity matrix, then the cumulative propagators are given. The element forward_propagators_noise[k][i] propagates a state by the first i time steps under the kth noise trace, if the initial state is the identity matrix.
- Returns
forward_propagation (List[List[ControlMatrix]],)
shape [[] * (num_t + 1)] * num_noise_traces – Propagation of the initial state of the system. fwd[0] gives the initial state itself.
- property frechet_deriv_propagators_noise: List[List[List[OperatorMatrix]]]
Returns the frechet derivatives of the propagators with respect to the control amplitudes for each noise trace.
- Returns
derivative_prop_noise (List[List[List[ControlMatrix]]],)
shape [[[] * num_t] * num_ctrl] * num_noise_traces – Frechet derivatives of the propagators by the control amplitudes.
- property propagators_noise: List[List[OperatorMatrix]]
Returns the propagators of the system for each noise trace and calculates them if necessary.
- Returns
propagators_noise (List[List[ControlMatrix]],)
shape [[] * num_t] * num_noise_traces – Propagators of the system for each noise trace.
- property reversed_propagators_noise: List[List[OperatorMatrix]]
Returns the reversed propagation of the initial state for every noise trace and calculates it if necessary. If the initial state is the identity matrix, then the reversed cumulative propagators are given. The element reversed_propagators_noise[k][i] propagates a state by the last i time steps under the kth noise trace, if the initial state is the identity matrix.
- Returns
reversed_propagation_noise (List[List[ControlMatrix]],)
shape [[] * (num_t + 1)] * num_noise_traces – Propagation of the initial state of the system. reversed[k][0] gives the initial state itself.
- class SchroedingerSolver(h_drift: List[OperatorMatrix], h_ctrl: List[OperatorMatrix], tau: array, initial_state: Optional[OperatorMatrix] = None, ctrl_amps: Optional[array] = None, calculate_propagator_derivatives: bool = True, filter_function_h_n: Optional[Union[Callable, List[List]]] = None, filter_function_basis: Optional[Basis] = None, filter_function_n_coeffs_deriv: Optional[Callable[[ndarray], ndarray]] = None, exponential_method: Optional[str] = None, frechet_deriv_approx_method: Optional[str] = None, is_skew_hermitian: bool = True, transfer_function: Optional[TransferFunction] = None, amplitude_function: Optional[AmplitudeFunction] = None)[source]
Bases:
Solver
This time slot computer solves the unperturbed Schroedinger equation.
All intermediary propagators are calculated and cached. It also takes the input parameters of the base class.
- Parameters
calculate_propagator_derivatives (bool) – If true, the derivatives of the propagators by the control amplitudes are always calculated. Otherwise only on demand.
frechet_deriv_approx_method (Optional[str]) – Method for the approximation of the derivatives of the propagators, if they are not calculated analytically. Note that this method is never used if calculate_propagator_derivatives is set to True! Methods: None: The derivatives are not approximated but calculated analytically by the control matrix class. ‘grape’: use the approximation given in the original GRAPE paper.
- _dyn_gen
The generators of the systems dynamics
- Type
List[ControlMatrix], len num_t
- calculate_propagator_derivatives
If true, the derivatives of the propagators by the control amplitudes are always calculated. Otherwise only on demand.
- Type
bool
- frechet_deriv_approx_method
Method for the approximation of the derivatives of the propagators, if they are not calculated analytically. Note that this method is never used if calculate_propagator_derivatives is set to True! Methods: ‘grape’: use the approximation given in the original grape paper.
- Type
Optional[str]
- _compute_derivative_directions: List[List[q_mat.ControlMatrix]],
- shape [[] * num_ctrl] * num_t
Computes the directions of change with respect to the control parameters.
- _compute_dyn_gen: List[ControlMatrix], len num_t
Computes the dynamics generators.
- `Todo`
- raise a warning if an approximation method is set although the gradient is always calculated analytically.
- raise a warning if the grape approximation is chosen but its requirement of small time steps is not met.
- class Solver(h_ctrl: List[OperatorMatrix], h_drift: List[OperatorMatrix], tau: array, initial_state: Optional[OperatorMatrix] = None, opt_pars: Optional[array] = None, ctrl_amps: Optional[array] = None, filter_function_h_n: Optional[Union[Callable, List[List]]] = None, filter_function_basis: Optional[Basis] = None, filter_function_n_coeffs_deriv: Optional[Callable[[ndarray], ndarray]] = None, exponential_method: Optional[str] = None, is_skew_hermitian: bool = True, transfer_function: Optional[TransferFunction] = None, amplitude_function: Optional[AmplitudeFunction] = None, paranoia_level: int = 2)[source]
Bases:
ABC
Abstract base class for Solvers.
- Parameters
h_ctrl (List[ControlMatrix], len num_ctrl) – Control operators in the Hamiltonian as nested list of shape n_t, num_ctrl.
h_drift (List[ControlMatrix], len num_t or 1) – Drift operators in the Hamiltonian. You can either give a single element or one for each transferred time step.
initial_state (ControlMatrix) – Initial state of the system as state vector. Can also be set to the identity matrix. Then the forward propagation gives the total propagator of the system.
tau (array of float, shape (num_t, )) – Durations of the time slices.
opt_pars (np.array, shape (num_y, num_par), optional) – Raw optimization parameters.
ctrl_amps (np.array, shape (num_t, num_ctrl), optional) – The initial control amplitudes.
filter_function_h_n (List[List[np.array]] or List[List[Qobj]] or callable) –
Nested list of noise Operators. Used in the filter function formalism. _filter_function_h_n should look something like this:
>>> H = [[n_oper1, n_coeff1, n_oper_identifier1],
>>>      [n_oper2, n_coeff2, n_oper_identifier2], ...]
The operators may be given either as NumPy arrays or QuTiP Qobjs, and each coefficient array should have the same number of elements as dt and be given in units of \(\hbar\). Alternatively, the argument can be a callable. The callable should take three input arguments: the optimization parameters, the transferred parameters and the control amplitudes. It should return a nested list of the form given above.
filter_function_basis (Basis, shape (d**2, d, d), optional) – The operator basis in which to calculate. If a Generalized Gell-Mann basis (see ggm()) is chosen, some calculations will be faster for large dimensions due to a simpler basis expansion. However, when extending the pulse sequence to larger qubit registers, cached filter functions cannot be retained since the GGM basis does not factor into tensor products. In this case a Pauli basis is preferable.
filter_function_n_coeffs_deriv (Callable numpy array to numpy array) – This function calculates the derivatives of the noise susceptibility in the filter function formalism. It receives the optimization parameters as array of shape (num_opt, num_t) and returns the derivatives as array of shape (num_noise_op, n_ctrl, num_t). The order of the noise operators must correspond to the order specified by filter_function_h_n.
exponential_method (string, optional) – Method used by the ControlMatrix class for the calculation of the matrix exponential. The default is ‘Frechet’. See also the Docstring of the file ‘qopt.matrix’.
is_skew_hermitian (bool) – Only important for the exponential_method ‘spectral’. If set to true, the dynamical generator is assumed to be skew hermitian during the spectral decomposition.
transfer_function (TransferFunction) – The transfer function for reshaping the optimization parameters.
amplitude_function (AmplitudeFunction) – The amplitude function connecting the transferred optimization parameters to the control amplitudes.
paranoia_level (int) – The paranoia_level determines how many checks are conducted. 0: No tests. 1: Some tests. 2: Exhaustive tests, dimension checks.
- h_ctrl
Control operators in the Hamiltonian as list of length num_ctrl.
- Type
List[ControlMatrix], len num_ctrl
- h_drift
Drift operators in the Hamiltonian.
- Type
List[ControlMatrix], len num_t
- initial_state
Initial state of the system as state vector. Can also be set to the identity matrix. Then the forward propagation gives the total propagator of the system.
- Type
ControlMatrix
- transferred_time
Durations of the time slices.
- Type
List[float]
- filter_function_h_n
Nested list of noise Operators. Used in the filter function formalism.
- Type
List[List[np.array]] or List[List[Qobj]]
- filter_function_basis
The filter function pulse sequence will be expressed in this basis. See documentation of the filter function package.
- Type
Basis
- exponential_method
Method used by the ControlMatrix class for the calculation of the matrix exponential. The default is ‘Frechet’. See also the Docstring of the file ‘qopt.matrix’.
- Type
string, optional
- transfer_function
The transfer function for reshaping the optimization parameters.
- Type
TransferFunction
- amplitude_function
The amplitude function connecting the transferred optimization parameters to the control amplitudes.
- Type
AmplitudeFunction
- _prop
Propagators of the system.
- Type
List[ControlMatrix], len num_t
- _fwd_prop
Ordered product of the propagators. They describe the forward propagation of the system's state.
- Type
List[ControlMatrix], len num_t + 1
- _reversed_prop
Ordered product of propagators in reversed order.
- Type
List[ControlMatrix], len num_t + 1
- _derivative_prop
Frechet derivatives of the propagators by the control amplitudes.
- Type
List[List[ControlMatrix]], shape [[] * num_t] * num_ctrl
- propagators: List[ControlMatrix], len num_t
Returns the propagators of the system.
- forward_propagators: List[ControlMatrix], len num_t + 1
Returns the forward propagation of the initial state. The element forward_propagators[i] propagates a state by the first i time steps, if the initial state is the identity matrix.
- frechet_deriv_propagators: List[List[ControlMatrix]],
shape [[] * num_t] * num_ctrl Returns the frechet derivatives of the propagators by the control amplitudes.
- reversed_propagators: List[ControlMatrix], len num_t + 1
Returns the reversed propagation of the initial state. The element reversed_propagators[i] propagates a state by the last i time steps, if the initial state is the identity matrix.
- _compute_propagation: abstract method
Computes the propagators.
- _compute_forward_propagation()[source]
Compute the forward propagation of the initial state / system.
- _compute_reversed_propagation()[source]
Compute the reversed propagation of the initial state / system.
- _compute_propagation_derivatives: abstract method
Compute the derivatives of the propagators by the control amplitudes.
- create_pulse_sequence(new_amps): PulseSequence
Creates a pulse sequence instance corresponding to the current control amplitudes.
- `Todo`
- Write parser
- setter for new Hamiltonians
- make Hamiltonians private
- also for the initial state
- extend constant drift Hamiltonian
- Implement the drift operator with an amplitude. Right now, the operator is already multiplied with the amplitude, which is not coherent with the pulse sequence interface. Alternatively amplitude=1?
- transferred_time should be taken from the transfer function
- Use own plotting for the plotting
- Consistent try/except blocks for the computation of the matrix exponential
- consistency_checks(paranoia_level: int)[source]
Checks attributes for inner consistency.
- Parameters
paranoia_level (int) – The paranoia_level determines how many checks are conducted. 0: No tests. 1: Some tests. 2: Exhaustive tests, dimension checks.
- property create_ff_h_n: list
Creates the noise Hamiltonian of the filter function formalism.
- Returns
create_ff_h_n – Noise Hamiltonian of the filter function formalism.
- Return type
nested list
- create_pulse_sequence(new_amps: Optional[array] = None, ff_basis: Optional[Basis] = None) PulseSequence [source]
Create a pulse sequence of the filter function package written by Tobias Hangleiter.
See the documentation of the filter function package.
- Parameters
new_amps (np.array, shape (num_t, num_ctrl), optional) – New control amplitudes can be set before the pulse sequence is initialized.
ff_basis (Basis) – The pulse sequence will be expanded in this basis. See documentation of the filter function package.
- Returns
pulse_sequence – The pulse sequence corresponding to the control model and the control amplitudes set.
- Return type
filter_functions.pulse_sequence.PulseSequence
- property filter_function_n_coeffs_deriv_vals: Optional[ndarray]
Calculates the derivatives of the noise susceptibilities from the filter function formalism.
- Returns
n_coeffs_deriv – Derivatives of the noise susceptibilities by the control amplitudes.
- Return type
numpy array of shape (num_noise_op, n_ctrl, num_t)
- property forward_propagators: List[OperatorMatrix]
Returns the forward propagation of the initial state for every time slice and calculates it if necessary. If the initial state is the identity matrix, then the cumulative propagators are given. The element forward_propagators[i] propagates a state by the first i time steps, if the initial state is the identity matrix.
- Returns
forward_propagation – Propagation of the initial state of the system. fwd[0] gives the initial state itself.
- Return type
List[ControlMatrix], len num_t + 1
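The cumulation described above can be sketched in plain numpy (an illustration, not qopt code): fwd[0] is the initial state, and each further element multiplies the next per-step propagator from the left.

```python
import numpy as np

# Per-time-step propagators P_i; plain matrices suffice to show the ordering.
props = [np.array([[0., 1.], [1., 0.]]),   # P_1: bit flip
         np.array([[1., 0.], [0., -1.]])]  # P_2: sign flip

# fwd[0] is the initial state (here the identity); fwd[i] = P_i @ fwd[i-1].
fwd = [np.eye(2)]
for p in props:
    fwd.append(p @ fwd[-1])

print(fwd[2])  # total propagator P_2 @ P_1 = [[0, 1], [-1, 0]]
```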
- property frechet_deriv_propagators: List[List[OperatorMatrix]]
Returns the frechet derivatives of the propagators.
- Returns
derivative_prop – shape [[] * num_t] * num_ctrl Frechet derivatives of the propagators by the control amplitudes
- Return type
List[List[ControlMatrix]],
- plot_bloch_sphere(new_amps=None, return_Bloch: bool = False) None [source]
Uses the pulse sequence to plot the system's evolution on the Bloch sphere.
Only available for two dimensional systems.
- Parameters
new_amps (np.array, shape (num_t, num_ctrl), optional) – New control amplitudes can be set before the pulse sequence is initialized.
return_Bloch (bool) – If True, then QuTiP's Bloch object is returned.
- Returns
b – QuTiP's Bloch object. Only returned if return_Bloch is set to True.
- Return type
Bloch
- property propagators: List[OperatorMatrix]
Returns the propagators of the system and calculates them if necessary.
- Returns
propagators – Propagators of the system.
- Return type
List[ControlMatrix], len num_t
- property reversed_propagators: List[OperatorMatrix]
Returns the reversed propagation of the initial state for every time slice and calculates it if necessary. If the initial state is the identity matrix, then the reversed cumulative propagators are given. The element reversed_propagators[i] propagates a state by the last i time steps, if the initial state is the identity matrix.
- Returns
reversed_propagation – Propagation of the initial state of the system. reversed[0] gives the initial state itself.
- Return type
List[ControlMatrix], len num_t + 1
3.15. transfer_function module
Models the response function of control electronics and pulse smoothing.
Due to the imperfection of the control electronics, the generated control pulse which is received by the physical qubit is not exactly the pulse which has been implemented at the control level.
If for example a voltage is changed from one value to another at a single point in time at the control level, then the control electronics might need some time to physically reach the new voltage.
Another example would be an amplifier, which has a non-linearity in the amplification of a control pulse.
There are two kinds of transfer functions: those that are based on transfer matrices and those that are not. In principle, every transfer function can be expressed as a matrix multiplication because it is linear by definition, but the transfer matrix will have n_time ** 2 entries, which can be a huge number if you consider a control pulse with a large number n_time of time steps. Therefore it is advantageous to use the matrix based transfer functions only for short pulses with a lot of correlations.
The matrix based transfer functions have their own classes for concatenation and parallel application.
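The matrix picture can be sketched in plain numpy. The shapes follow the transfer_matrix property documented below, (num_u, num_x, num_ctrl); the example builds oversampling by a factor of 2 as an explicit matrix and shows that the gradient chain rule is a multiplication with the transposed matrix (an illustration, not qopt code):

```python
import numpy as np

num_x, num_u, num_ctrl = 3, 6, 1
x = np.linspace(0., 1., num_x).reshape(num_x, num_ctrl)

# Oversampling by 2 expressed as an explicit transfer matrix:
# each optimization time step is repeated twice in the transferred pulse.
T = np.zeros((num_u, num_x, num_ctrl))
for i in range(num_x):
    T[2 * i:2 * i + 2, i, 0] = 1.

# Applying the (linear) transfer function is a matrix multiplication...
u = np.einsum('uxc,xc->uc', T, x)

# ...and the gradient chain rule is multiplication with the transpose.
df_du = np.ones((num_u, 1, num_ctrl))   # derivatives of one cost function
df_dx = np.einsum('uxc,ufc->xfc', T, df_du)
print(u[:, 0])         # [0. 0. 0.5 0.5 1. 1.]
print(df_dx[:, 0, 0])  # each x_i influences two u-steps -> [2. 2. 2.]
```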
See also
Motzoi, F. et al., Phys. Rev. A 84, 022307 (2011). https://doi.org/10.1103/PhysRevA.84.022307
Classes
TransferFunction
Abstract base class.
IdentityTF
Optimization variables are the amplitudes of the control fields.
OversamplingTF
Oversamples the pulse and adds boundary conditions.
GaussianConvolution
Applies a Gaussian convolution.
ConcatenateTF
Concatenation of two transfer functions.
ParallelTF
Uses two transfer functions for two sets of parameters in parallel.
MatrixTF
Abstract base class for transfer functions as matrices.
OversamplingMTF
Matrix version of OversamplingTF.
CustomMTF
Transfer function which receives an explicitly constructed constant transfer matrix.
ConcatenateMTF
Matrix version of ConcatenateTF.
ParallelMTF
Matrix version of ParallelTF.
ExponentialMTF
The amplitudes are smoothed by exponential saturation functions.
Functions
exp_saturation()
Exponential saturation function.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- class ConcatenateMTF(tf1: MatrixTF, tf2: MatrixTF)[source]
Bases:
MatrixTF
Concatenates two matrix transfer functions.
Matrix version of ConcatenateTF.
- Parameters
tf1 (MatrixTF) – First matrix transfer function. This function operates directly on the optimization variables.
tf2 (MatrixTF) – Second matrix transfer function. This function operates on the output of the first transfer function.
- set_times(y_times: ndarray) None [source]
Sets x_times on the first transfer function and sets the resulting x_times on the second transfer function.
- Parameters
y_times (Optional[np.array, list]) – Time durations of the constant control steps of the raw optimization parameters.
- property transfer_matrix
The total transfer matrix is the product of the individual ones.
- class ConcatenateTF(tf1: TransferFunction, tf2: TransferFunction)[source]
Bases:
TransferFunction
Concatenates transfer functions.
This class can be used if there are two transfer functions which are to be applied one after another. For example if first the pulse generation and subsequently a pulse amplification shall be modeled.
- Parameters
tf1 (TransferFunction) – First matrix transfer function. This function operates directly on the optimization variables.
tf2 (TransferFunction) – Second matrix transfer function. This function operates on the output of the first transfer function.
- class ConvolutionTF(kernel: ndarray, mode: str = 'nearest', num_ctrls: int = 1, cval: float = 0.0)[source]
Bases:
TransferFunction
A convolution as filter function.
This implementation uses the function scipy.ndimage.convolve. For oversampling and boundaries use this TransferFunction in combination with EfficientOversamplingTF and ConcatenateTF.
- gradient_chain_rule(deriv_by_transferred_par: array) array [source]
See base class.
The application of the chain rule is another convolution.
- Parameters
deriv_by_transferred_par (np.array, shape (num_x, num_f, num_par)) – The gradients of num_f functions by num_par optimization parameters at num_x different time steps.
- Returns
deriv_by_opt_par – The derivatives by the optimization parameters at num_y time steps.
- Return type
np.array, shape: (num_y, num_f, num_par)
- property transfer_matrix: array
Overrides the base class method.
- class CustomMTF(transfer_matrix: array, x_times: Optional[array] = None, bound_type: Optional[Tuple] = None, oversampling: int = 1, offset: Optional[float] = None, num_ctrls: int = 1)[source]
Bases:
MatrixTF
This class implements a linear transfer function.
The action is fully described by the transfer function and a constant offset, given at the initialization of the instance.
- Parameters
transfer_matrix (np.array, shape (num_x, num_y, num_par)) – Constant transfer matrix.
offset (float) – Constant offset.
x_times (np.array, shape (num_x)) – Time slices of the control amplitudes. If they are not explicitly given, they are constructed from oversampling and bound_type.
bound_type (see base class) – If no bound_type is specified, the program assumes that there is no padding.
oversampling (int) – If the oversampling is not explicitly given, it is constructed from the bound_type and the transfer matrix.
- Todo
- does it make sense to set the utimes explicitly? This breaks the usual workflow.
- property transfer_matrix: ndarray
See base class.
- class ExponentialMTF(awg_rise_time: float, oversampling: int = 1, bound_type: Tuple = ('x', 0), offset: Optional[float] = None, num_ctrls: int = 1)[source]
Bases:
MatrixTF
This transfer function model smooths the control amplitudes by exponential saturation.
The functionality is meant to model the finite rise time of voltage sources.
- Todo
add initial and final level. Currently fixed at 0 (or the offset)
- plot_pulse_old(x: ndarray) None [source]
Plot the control amplitudes corresponding to the given optimisation variables.
- reverse_state(amplitudes=None, times=None, targetfunc=None)[source]
Assumed to be applied only to pulses generated by self.__call__(x). If times is None, either num_x or the oversampling must be known; for now, self.num_x is assumed to be valid for the input data. :param amplitudes: :param times: :param targetfunc: :return:
- property transfer_matrix: ndarray
See base class.
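The kind of smoothing performed by ExponentialMTF can be illustrated with a simple step response. The exact functional form used by qopt may differ, so this is only a sketch assuming an exponential approach to the new set point:

```python
import numpy as np

def exp_saturation_step(t, rise_time, u_old, u_new):
    """Step response from u_old to u_new with a finite rise time.

    Models a voltage source that approaches the new set point
    exponentially: v(t) = u_new + (u_old - u_new) * exp(-t / rise_time).
    (Assumed form for illustration.)
    """
    return u_new + (u_old - u_new) * np.exp(-t / rise_time)

t = np.linspace(0., 5., 6)
v = exp_saturation_step(t, rise_time=1., u_old=0., u_new=1.)
print(np.round(v, 3))  # rises monotonically from 0 towards 1
```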
- class GaussianConvolution(sigma: Union[float, Sequence[float]], num_ctrls=1, order: int = 0, mode: str = 'nearest', truncate: float = 4.0)[source]
Bases:
TransferFunction
A gaussian convolution is applied as filter function.
For oversampling and boundaries use this TransferFunction in combination with EfficientOversamplingTF and ConcatenateTF. The implementation makes use of the gaussian filter function in the scipy.ndimage package.
- Parameters
sigma (float or sequence of float) – standard deviation for Gaussian kernel
order (int, optional) – An order of 0 corresponds to convolution with a Gaussian kernel. A positive order corresponds to convolution with that derivative of a Gaussian.
mode ({‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional) –
The mode parameter determines how the input array is extended beyond its boundaries. Behavior for each valid value is as follows:
‘reflect’ (d c b a | a b c d | d c b a) The input is extended by reflecting about the edge of the last pixel.
‘constant’ (k k k k | a b c d | k k k k) The input is extended by filling all values beyond the edge with the same constant value, defined by the cval parameter.
‘nearest’ (a a a a | a b c d | d d d d) The input is extended by replicating the last pixel.
‘mirror’ (d c b | a b c d | c b a) The input is extended by reflecting about the center of the last pixel.
‘wrap’ (a b c d | a b c d | a b c d) The input is extended by wrapping around to the opposite edge
Default is ‘nearest’.
truncate (float, optional) – Truncate the filter at this many standard deviations. Default is 4.0.
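A minimal usage sketch of the underlying scipy routine (scipy.ndimage.gaussian_filter1d, which this class is documented to build on):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# A pulse with a single spike; the convolution smooths it out.
pulse = np.zeros(21)
pulse[10] = 1.
smoothed = gaussian_filter1d(pulse, sigma=2., mode='nearest')

print(smoothed.shape)  # (21,): the number of time steps is unchanged
# The spike is spread over its neighbours while the area is preserved.
```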
- gradient_chain_rule(deriv_by_transferred_par: array) array [source]
See base class.
The application of the chain rule is another Gaussian filter.
- Parameters
deriv_by_transferred_par (np.array, shape (num_x, num_f, num_par)) – The gradients of num_f functions by num_par optimization parameters at num_x different time steps.
- Returns
deriv_by_opt_par – The derivatives by the optimization parameters at num_y time steps.
- Return type
np.array, shape: (num_y, num_f, num_par)
- property transfer_matrix: array
Overrides the base class method.
- class GaussianMTF(omega=1, over_sample_rate=5, start=0.0, end=0.0, bound_type=('w', 2))[source]
Bases:
MatrixTF
Deprecated! - Might be reimplemented upon reasonable request.
Represents a square function filtered through a Gaussian filter.
Cannot be used in conjunction with the concatenated transfer functions.
- set_times(times)[source]
See base class.
Times/transferred_time correspond to the timeslot before the interpolation.
- property transfer_matrix
See base class.
- class IdentityTF(num_ctrls=1)[source]
Bases:
TransferFunction
Numerically efficient identity transfer function which changes neither the pulse nor the time steps.
The base class functions __call__ and gradient_chain_rule are reimplemented in order to avoid setting a transfer matrix.
- class MatrixTF(num_ctrls: int = 1, bound_type: Optional[Tuple[str, int]] = None, oversampling: int = 1, offset: Optional[float] = None)[source]
Bases:
TransferFunction
- property transfer_matrix: array
If necessary, calculates the transfer matrix. Then returns it.
- Returns
T – Transfer matrix (the linearization of the control amplitudes).
- Return type
ndarray, shape (num_u, num_x, num_ctrl)
- class OversamplingMTF(oversampling: int = 1, bound_type: Optional[Tuple] = None, num_ctrls: int = 1, offset: float = 0)[source]
Bases:
MatrixTF
Oversamples and applies boundary conditions.
- class ParallelMTF(tf1: MatrixTF, tf2: MatrixTF)[source]
Bases:
MatrixTF
This transfer function parallelizes two transfer functions, such that they are applied to different control terms, stacking along the third dimension of the transfer matrix.
- Parameters
tf1 (MatrixTF) – First transfer function. This function operates on the first tf1.num_ctrls control pulses.
tf2 (MatrixTF) – Second transfer function. This function operates on the next tf2._num_ctrls control pulses.
- class ParallelTF(tf1: TransferFunction, tf2: TransferFunction)[source]
Bases:
TransferFunction
This transfer function parallelizes two transfer functions, such that they are applied to different control terms, stacking along the third dimension of the transfer matrix.
- Parameters
tf1 (TransferFunction) – First transfer function. This function operates on the first tf1.num_ctrls control pulses.
tf2 (TransferFunction) – Second transfer function. This function operates on the next tf2._num_ctrls number of control pulses.
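The stacking along the control dimension can be sketched with explicit transfer matrices (an illustration with hypothetical matrices, not qopt code):

```python
import numpy as np

# Two per-control transfer matrices of the same (num_u, num_x) shape,
# e.g. tf1 acting on control 0 and tf2 on control 1.
T1 = np.eye(3)[:, :, np.newaxis]         # identity for control 0
T2 = 2. * np.eye(3)[:, :, np.newaxis]    # scaling for control 1

# Parallel application stacks along the control (third) dimension.
T_par = np.concatenate([T1, T2], axis=2)  # shape (3, 3, 2)

x = np.ones((3, 2))
u = np.einsum('uxc,xc->uc', T_par, x)
print(u)  # control 0 unchanged, control 1 doubled
```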
- class TransferFunction(num_ctrls: int = 1, bound_type: Optional[Tuple[str, int]] = None, oversampling: int = 1, offset: Optional[float] = None)[source]
Bases:
ABC
A class for representing transfer functions between the optimization variables of the optimization algorithm and the amplitudes of the control fields.
The intended workflow is to initialise the transfer function object first and subsequently set the x_times, which is the time scale of the optimization variables. Then the transfer function is called to calculate control amplitudes and gradients.
Examples
Example work flow with the abstract base class:
>>> x_times = np.ones(5)
>>> optimization_variables = np.random.rand(5)
>>> gradient_fidelity_by_control_amplitudes = np.random.rand(30, 5)
>>> transfer_function = TransferFunction(oversampling=5,
>>>                                      bound_type=('x', 1))
>>> transfer_function.set_times(x_times)
>>> control_amplitudes = transfer_function(optimization_variables)
>>> gradient_fidelity_by_optimization_variables = (
>>>     transfer_function.gradient_chain_rule(
>>>         gradient_fidelity_by_control_amplitudes))
- Parameters
num_ctrls (int) – Number of controlled amplitudes.
oversampling (int) – Each time step of the optimization variables is sliced into a number of time steps of the control amplitudes.
bound_type ((str, int)) –
The pulse can be padded with zeros (before the offset is added) to avoid bleedthrough, i.e. pulses overlapping slightly and thereby influencing each other.
The string states whether you want to pad before or after the oversampling, or only at the end of the pulse.
The integer specifies the number of padding elements.
If not all time steps have the same length, let dt denote the first (respectively last) time duration when you are padding to the beginning (end) of the sequence.
- string options:
“n”: n extra slices of dt/overSampleRate
“x”: n extra slices of dt (default with n=1)
“right_n”: n extra slices of dt/overSampleRate on the right side
offset (float) – Constant offset which is added to the optimization parameters.
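The interplay of oversampling and zero padding can be sketched independently of the qopt implementation: each raw optimization value is repeated `oversampling` times, and padding elements are appended at both ends, mimicking a bound_type of ('n', n). The helper name below is purely illustrative, not part of the qopt API.

```python
import numpy as np

def oversample_and_pad(y, oversampling=1, n_pad=0):
    # Illustrative sketch, not qopt code: slice each raw time step
    # into `oversampling` steps of the control amplitudes ...
    x = np.repeat(y, oversampling)
    # ... and pad with zeros (before any offset is added), as a
    # bound_type of ('n', n_pad) would.
    return np.pad(x, n_pad)

pulse = oversample_and_pad(np.array([1.0, 2.0]), oversampling=3, n_pad=2)
# pulse: [0. 0. 1. 1. 1. 2. 2. 2. 0. 0.]
```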
- num_ctrls
Number of controlled amplitudes.
- Type
int
- oversampling
Each time step of the optimization variables is sliced into a number of time steps of the control amplitudes.
- Type
int
- bound_type
The pulse can be padded with zeros (before the offset is added) to avoid bleedthrough, i.e. pulses overlapping slightly and thereby influencing each other.
The string states whether you want to pad before or after the oversampling, or only at the end of the pulse.
The integer specifies the number of padding elements.
If not all time steps have the same length, let dt denote the first (respectively last) time duration when you are padding to the beginning (end) of the sequence.
- string options:
“n”: n extra slices of dt/overSampleRate
“x”: n extra slices of dt (default with n=1)
“right_n”: n extra slices of dt/overSampleRate on the right side
- Type
(str, int) or None
- offset
Constant offset which is added to the optimization parameters.
- Type
float
- num_x
Number of time slices of the transferred optimization variables.
- Type
int
- x_times
Time values for the transferred optimization parameters. These describe the length of the time slices.
- Type
array, shape (num_x)
- _num_y
Number of time slices of the raw optimization variables.
- Type
int
- _y_times
Time values for the raw control variables. These describe the length of the time slices.
- Type
array, shape (num_y)
- _absolute_y_times
Absolute times of the raw optimization variables. The values describe the point in time where a time slice ends and the next one begins.
- Type
array_like, shape (num_y + 1)
- __call__(y):
Application of the transfer function.
- transfer_matrix: property, returns array, shape (num_x, num_y, num_par)
Returns the transfer matrix.
- num_padding_elements: property, returns list
Two elements list with the number of elements padded to the beginning and end, as specified by the bound type.
- set_times(times):
Set the times of the raw optimization variables and calculate the times of the transferred optimization variables.
- set_absolute_times(absolute_y_times):
Set the absolute times (time points of beginning and ending a time step) of the optimization variables.
- plot_pulse(y):
For the raw optimisation variables (y), plot the resulting pulse.
- `Todo`
bound_type seems to be buggy; test with exp_transfer.
parse bound_type to raise exceptions only in one function.
add the exception to the docstring.
- abstract gradient_chain_rule(deriv_by_transferred_par: array) array [source]
Obtain the derivatives of a quantity f, i.e. df/dy, by the raw optimization variables y, given the derivatives by the amplitudes of the control fields.
The chain rule applies: df/dy = df/dx * dx/dy.
- Parameters
deriv_by_transferred_par (np.array, shape (num_x, num_f, num_par)) – The gradients of num_f functions by num_par optimization parameters at num_x different time steps.
- Returns
deriv_by_opt_par – The derivatives by the optimization parameters at num_y time steps.
- Return type
np.array, shape: (num_y, num_f, num_par)
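For a linear transfer function, x = T y, the chain rule above reduces to a contraction of the transfer matrix (shape (num_x, num_y, num_par)) with df/dx; a numpy sketch of this bookkeeping (not qopt code):

```python
import numpy as np

num_x, num_y, num_f, num_par = 6, 3, 2, 1
transfer_matrix = np.random.rand(num_x, num_y, num_par)            # dx/dy
deriv_by_transferred_par = np.random.rand(num_x, num_f, num_par)   # df/dx

# df/dy = sum over x of df/dx * dx/dy, separately for each parameter
deriv_by_opt_par = np.einsum('xyp,xfp->yfp',
                             transfer_matrix, deriv_by_transferred_par)
assert deriv_by_opt_par.shape == (num_y, num_f, num_par)
```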
- property num_padding_elements: (int, int)
Convenience function. Returns the number of elements padded to the beginning and the end of the control amplitude times.
- Returns
num_padding_elements – (elements padded to the beginning, elements padded to the end)
- Return type
(int, int)
- plot_pulse(y: array) None [source]
Plot the control amplitudes corresponding to the given optimisation variables.
- Parameters
y (array, shape (num_y, num_par)) – Raw optimization parameters.
- set_absolute_times(absolute_y_times: ndarray) None [source]
Generate the time slot duration array ‘transferred_time’ (here: x_times).
These time slices depend on the oversampling of the control variables and the boundary conditions. The differences of the absolute times give the time steps x_times.
- Parameters
absolute_y_times (Union[np.ndarray, list]) – Absolute times of the start / end of each time segment for the raw optimization parameters.
- set_times(y_times: array) None [source]
Generate the time_slot duration array ‘transferred_time’ (here: x_times).
The time slices depend on the oversampling of the control variables and the boundary conditions. For the intended use cases, the times are set only once.
- Parameters
y_times (Union[np.ndarray, list], shape (num_y)) – The time steps / durations of constant optimization variables. num_y is the number of time steps for the raw optimization variables.
3.16. util module
Utility functions for the optimal control package.
3.16.1. Functions
deprecated()
Decorator marking functions and methods which are deprecated.
needs_refactoring()
Decorator marking objects which need refactoring.
timeit()
Decorator measuring the run time of a function evaluation.
closest_unitary()
Calculates the closest unitary matrix to a square matrix.
Notes
The implementation was inspired by the optimal control package of QuTiP [1] (Quantum Toolbox in Python)
References
- 1
J. R. Johansson, P. D. Nation, and F. Nori: “QuTiP 2: A Python framework for the dynamics of open quantum systems.”, Comp. Phys. Comm. 184, 1234 (2013) [DOI: 10.1016/j.cpc.2012.11.019].
- closest_unitary(A)[source]
Closest unitary to given square matrix.
Calculate the unitary matrix U that is closest with respect to the operator norm distance to the general matrix A.
- Returns
U – Closest unitary.
- Return type
np.array
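This is the standard SVD construction: for A = W Σ V†, the closest unitary in operator norm is U = W V†. A self-contained numpy sketch (the actual qopt signature may differ):

```python
import numpy as np

def closest_unitary_sketch(a):
    # SVD a = w @ diag(s) @ vh; dropping the singular values
    # yields the closest unitary in operator norm.
    w, _, vh = np.linalg.svd(a)
    return w @ vh

a = np.array([[1.0, 0.1], [0.0, 0.9]])
u = closest_unitary_sketch(a)
# u @ u.conj().T is the identity up to floating point error
```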
- deprecated(func)[source]
This is a decorator which can be used to mark functions as deprecated. It will result in a warning being emitted when the function is used.
- needs_refactoring(func)[source]
This is a decorator which can be used to mark functions which need to be refactored. It will result in a warning being emitted when the function is used.
- timeit(function)[source]
Convenience function to measure the run time of a function.
This function can be applied as decorator to get a function that evaluates the input function and measures the run time.
- Parameters
function (Callable) – The function of which the run time is measured.
- Returns
timed – Timed function.
- Return type
Callable
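A timing decorator of this kind can be sketched with time.perf_counter; the return convention (result plus run time) is an assumption for illustration:

```python
import functools
import time

def timeit_sketch(function):
    # Wrap `function` so each call also reports its wall-clock
    # run time in seconds.
    @functools.wraps(function)
    def timed(*args, **kwargs):
        start = time.perf_counter()
        result = function(*args, **kwargs)
        return result, time.perf_counter() - start
    return timed

@timeit_sketch
def square(x):
    return x * x

value, run_time = square(4)  # value == 16, run_time >= 0
```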
3.17. Module contents
Hardware adapted quantum simulation and optimal control
- class Analyser(data: DataContainer)[source]
Bases:
object
Holds convenience functions to visualize the optimization.
The Analyser class can be used to make plots of the optimization data stored in an instance of the DataContainer class. This can be useful to judge the performance of optimization algorithms and investigate how fast the convergence is and whether the algorithm has fully converged.
- absolute_costs() ndarray [source]
Calculates for each optimization run and each iteration in the optimization algorithm the sum of the costs.
- Returns
costs – The sum of the costs.
- Return type
numpy array, shape (n_runs, n_iter)
- integral_cost_fkt_times(n: int = 0) ndarray [source]
Sum of the time required for the evaluation of the cost function.
- Parameters
n (int, optional) – Number of the optimization run. Defaults to 0.
- Returns
integral_times – Integrated time required for the cost function evaluation.
- Return type
np.array
- integral_grad_fkt_times(n: int = 0)[source]
Sum of the time required for the evaluation of the cost function gradients.
- Parameters
n (int, optional) – Number of the optimization run. Defaults to 0.
- Returns
integral_times – Integrated time required for the cost function gradient evaluation.
- Return type
np.array
- property n_least_square: int
Returns the number of the optimization run which yields the smallest total costs.
The total cost is measured as squared sum of the final cost function values.
- Returns
n_least_square – Number of optimization run with smallest final costs.
- Return type
int
- opt_times()[source]
Total optimization times.
- Returns
total_times – Time required per optimization run.
- Return type
np.array
- plot_absolute_costs() -> (matplotlib.figure.Figure, matplotlib.axes.Axes)[source]
Plots the absolute costs.
- plot_costs(n=0, log_y=True, ax=None) None [source]
Plots the absolute cost values as function of optimization iteration.
- Parameters
n (int, optional) – Number of the optimization run. Defaults to 0.
log_y (bool, optional) – If True then the costs are plotted logarithmically. Defaults to True.
ax (matplotlib.pyplot.axes) – Axes element in which the data is plotted. If not specified, a new one will be created.
- Returns
ax – Axes with the plot.
- Return type
matplotlib.pyplot.axes
Time share of the cost function evaluation.
Time share of the cost function gradient calculation.
- class ConcatenateMTF(tf1: MatrixTF, tf2: MatrixTF)[source]
Bases:
MatrixTF
Concatenates two matrix transfer functions.
Matrix version of ConcatenateTF.
- Parameters
tf1 (MatrixTF) – First matrix transfer function. This function operates directly on the optimization variables.
tf2 (MatrixTF) – Second matrix transfer function. This function operates on the output of the first transfer function.
- set_times(y_times: ndarray) None [source]
Sets x_times on the first transfer function and sets the resulting x_times on the second transfer function.
- Parameters
y_times (Optional[np.array, list]) – Time durations of the constant control steps of the raw optimization parameters.
- property transfer_matrix
The total transfer matrix is the product of the individual ones.
- class ConcatenateTF(tf1: TransferFunction, tf2: TransferFunction)[source]
Bases:
TransferFunction
Concatenates transfer functions.
This class can be used if there are two transfer functions which are to be applied one after another, for example when first the pulse generation and subsequently a pulse amplification shall be modeled.
- Parameters
tf1 (TransferFunction) – First transfer function. This function operates directly on the optimization variables.
tf2 (TransferFunction) – Second transfer function. This function operates on the output of the first transfer function.
- class ConvolutionTF(kernel: ndarray, mode: str = 'nearest', num_ctrls: int = 1, cval: float = 0.0)[source]
Bases:
TransferFunction
A convolution as filter function.
This implementation uses the function scipy.ndimage.convolve. For oversampling and boundaries use this TransferFunction in combination with EfficientOversamplingTF and ConcatenateTF.
- gradient_chain_rule(deriv_by_transferred_par: array) array [source]
See base class.
The application of the chain rule is another convolution.
- Parameters
deriv_by_transferred_par (np.array, shape (num_x, num_f, num_par)) – The gradients of num_f functions by num_par optimization parameters at num_x different time steps.
- Returns
deriv_by_opt_par – The derivatives by the optimization parameters at num_y time steps.
- Return type
np.array, shape: (num_y, num_f, num_par)
- property transfer_matrix: array
Overrides the base class method.
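The effect (and the linearity that makes the chain rule another convolution) can be previewed with scipy.ndimage.convolve directly; the kernel here is a hypothetical three-point smoother:

```python
import numpy as np
from scipy.ndimage import convolve

kernel = np.array([0.25, 0.5, 0.25])             # hypothetical smoothing kernel
pulse = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])

# Boundary handling 'nearest' matches the class default.
smoothed = convolve(pulse, kernel, mode='nearest')
# smoothed: [0. 0.25 0.75 0.75 0.25 0.] -- the pulse area is preserved
```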
- class CustomAmpFunc(value_function: Callable[[ndarray], ndarray], derivative_function: Callable[[ndarray], ndarray])[source]
Bases:
AmplitudeFunction
A general amplitude function which is applied to the amplitude values.
- Parameters
value_function (Callable array to array) – This function expresses the functional dependency of the control amplitudes on the optimization parameters. The function receives the optimization parameters x as array of the shape (num_t, num_par) and must return the control amplitudes u as array of the shape (num_t, num_ctrl). Where num_t is the number of time slices, num_par the number of optimization parameters and num_ctrl the number of control operators in the Hamiltonian.
derivative_function (Callable array to array) – This function describes the derivative of the control amplitudes by the optimization parameters. The function receives the optimisation parameters x as array of shape (num_t, num_par) and must return the derivatives of the control amplitudes by the optimization parameters as array of shape (num_t, num_par, num_ctrl).
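For instance, a sine parametrization with one optimization parameter and one control per time step satisfies the shape conventions above (the function names are arbitrary):

```python
import numpy as np

def value_function(x):
    # x: shape (num_t, num_par) -> u: shape (num_t, num_ctrl),
    # here with num_par == num_ctrl == 1.
    return np.sin(x)

def derivative_function(x):
    # du/dx: shape (num_t, num_par, num_ctrl)
    return np.cos(x)[:, :, np.newaxis]

x = np.linspace(0.0, np.pi, 4).reshape(4, 1)   # num_t = 4, num_par = 1
u = value_function(x)                          # shape (4, 1)
du_dx = derivative_function(x)                 # shape (4, 1, 1)
```

These two handles would then be passed to CustomAmpFunc(value_function, derivative_function).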
- class CustomMTF(transfer_matrix: array, x_times: Optional[array] = None, bound_type: Optional[Tuple] = None, oversampling: int = 1, offset: Optional[float] = None, num_ctrls: int = 1)[source]
Bases:
MatrixTF
This class implements a linear transfer function.
The action is fully described by the transfer function and a constant offset, given at the initialization of the instance.
- Parameters
transfer_matrix (np.array, shape (num_x, num_y, num_par)) – Constant transfer matrix.
offset (float) – Constant offset.
x_times (np.array, shape (num_x)) – Time slices of the control amplitudes. If they are not explicitly given, they are constructed from oversampling and bound_type.
bound_type (see base class) – If no bound_type is specified, the program assumes that there is no padding.
oversampling (int) – If the oversampling is not explicitly given, it is constructed from the bound_type and the transfer matrix.
- Todo
- does it make sense to set the x_times explicitly? breaks the usual workflow
- property transfer_matrix: ndarray
See base class.
- class DataContainer(storage_path: Optional[str] = None, file_name: str = 'Temp File', indices: Optional[List[str]] = None, final_costs: Optional[List] = None, init_parameters: Optional[List] = None, final_parameters: Optional[List] = None, costs: Optional[List[List]] = None, parameters: Optional[List[List]] = None, status: Optional[List] = None, optimization_stats: Optional[List] = None, append_time_to_path=True)[source]
Bases:
object
Stores data of the optimization.
This class gathers the information stored in multiple objects of the class OptimResult.
- Parameters
storage_path (string) – The path where this instance of DataContainer is to be stored or from where it shall be loaded.
file_name (string) – The file name will be appended to the path for storage and loading. Defaults to ‘Temp File’.
indices (list of str) – The indices of the costs.
final_costs (list) – The final values of the cost function.
init_parameters (list) – The initial optimization parameters.
final_parameters (list) – The final optimization parameters.
costs (list of list) – All values of the cost functions during the optimization.
parameters (list of list) – All parameters for which the cost functions were evaluated during the optimization.
status (list of None or int) – The termination_reason as integer, like in scipy.OptimizeResult:
None: the optimization has not started.
-1: improper input parameters.
0: the maximum number of function evaluations is exceeded.
1: gradient norm termination condition is satisfied.
2: cost function termination condition is satisfied.
3: minimal step size termination condition is satisfied.
4: both 2 and 3 termination conditions are satisfied.
5: wall time exceeded.
optimization_stats (list) – Optimization statistics, which have been appended to the data.
append_time_to_path (bool) – If True, the current time is appended to the file name.
- append_optim_result(optim_result: OptimizationResult)[source]
Appends an instance of OptimizationResult to the stored data.
The information gained in an optimization run is extracted and appended to the various lists of the DataContainer.
- Parameters
optim_result (OptimizationResult) – Result of an optimization run.
- classmethod from_pickle(filename)[source]
Read class from pickled file.
- Parameters
filename (str) – The name of the file which is loaded.
- property index
Indices of the cost functions.
- class DenseOperator(obj: Union[Qobj, ndarray, csr_matrix, DenseOperator])[source]
Bases:
OperatorMatrix
Dense control matrix.
The data is stored as numpy array and uses the implementations of the numpy package.
- Parameters
obj (Qobj or numpy array or scipy csr_matrix) – The matrix to be stored and handled as dense matrix.
- data
The data stored in a two dimensional numpy array
- Type
numpy array
- conj(do_copy: bool = True) Optional[DenseOperator] [source]
See base class.
- dag(do_copy: bool = True) Optional[DenseOperator] [source]
See base class.
- dexp(direction: DenseOperator, tau: complex = 1, compute_expm: bool = False, method: str = 'spectral', is_skew_hermitian: bool = False, epsilon: float = 1e-10) Union[DenseOperator, Tuple[DenseOperator]] [source]
Frechet derivative of the matrix exponential.
- Parameters
direction (DenseOperator) – Direction in which the frechet derivative is calculated. Must be of the same shape as self.
tau (complex) – The matrix is multiplied by tau before calculating the exponential.
compute_expm (bool) – If true, then the matrix exponential is calculated and returned as well.
method (string) – Numerical method used for the calculation of the matrix exponential. Currently the following are implemented:
‘Frechet’: Uses the scipy.linalg matrix exponential expm_frechet for simultaneous calculation of the exponential and the Frechet derivative.
‘approx’: Approximates the derivative by finite differences.
‘first_order’: First order Taylor approximation.
‘second_order’: Second order Taylor approximation.
‘third_order’: Third order Taylor approximation.
‘spectral’: Uses the self-implemented spectral decomposition.
is_skew_hermitian (bool) – Only required, for the method ‘spectral’. If set to True, then the matrix is assumed to be skew hermitian in the spectral decomposition.
epsilon (float) – Width of the finite difference. Only relevant for the method ‘approx’.
- Returns
prop (DenseOperator) – The matrix exponential. Only returned if compute_expm is True!
prop_grad (DenseOperator) – The frechet derivative d exp(Ax + B)/dx at x=0 where A is the direction and B is the matrix stored in self.
- Raises
NotImplementedError: – If the method given as parameter is not implemented.
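The ‘Frechet’ method corresponds to scipy.linalg.expm_frechet, and the ‘approx’ method to a finite difference; the two can be cross-checked on a small example:

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

a = np.array([[0.0, 1.0], [-1.0, 0.0]])         # the matrix stored in self
direction = np.array([[0.0, 0.0], [1.0, 0.0]])  # derivative direction

# Returns the exponential and the Frechet derivative simultaneously.
prop, prop_grad = expm_frechet(a, direction)

# Finite-difference approximation, analogous to the 'approx' method.
eps = 1e-7
approx = (expm(a + eps * direction) - expm(a)) / eps
assert np.allclose(prop_grad, approx, atol=1e-5)
```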
- exp(tau: complex = 1, method: str = 'spectral', is_skew_hermitian: bool = False) DenseOperator [source]
Matrix exponential.
- Parameters
tau (complex) – The matrix is multiplied by tau before calculating the exponential.
method (string) – Numerical method used for the calculation of the matrix exponential. Currently the following are implemented:
‘approx’, ‘Frechet’: Use the scipy.linalg matrix exponential.
‘first_order’: First order Taylor approximation.
‘second_order’: Second order Taylor approximation.
‘third_order’: Third order Taylor approximation.
‘spectral’: Use the self-implemented spectral decomposition.
is_skew_hermitian (bool) – Only important for the method ‘spectral’. If set to True then the matrix is assumed to be skew hermitian in the spectral decomposition.
- Returns
prop – The matrix exponential.
- Return type
DenseOperator
- Raises
NotImplementedError: – If the method given as parameter is not implemented.
- identity_like() DenseOperator [source]
See base class.
- kron(other: DenseOperator) DenseOperator [source]
See base class.
- norm(ord: Union[str, None, int] = 'fro') float64 [source]
Calculates the norm of the matrix.
Uses the implementation of numpy.linalg.norm.
- Parameters
ord (string) – Defines the norm which is calculated. Defaults to the Frobenius norm ‘fro’.
- Returns
norm – Norm of the Matrix.
- Return type
float
- prop(tau: complex = 1) DenseOperator [source]
See base class.
- ptrace(dims: Sequence[int], remove: Sequence[int], do_copy: bool = True) DenseOperator [source]
Partial trace of the matrix.
If the matrix describes a ket, the corresponding density matrix is calculated and used for the partial trace.
This implementation closely follows that of QuTiP’s qobj._ptrace_dense.
- Parameters
dims (list of int) – Dimensions of the subspaces making up the total space on which the matrix operates. The product of elements in ‘dims’ must be equal to the matrix’ dimension.
remove (list of int) – The selected subspaces as indices over which the partial trace is formed. The given indices correspond to the ordering of subspaces specified in the ‘dim’ argument.
do_copy (bool, optional) – If false, the operation is executed inplace. Otherwise returns a new instance. Defaults to True.
- Returns
pmat – The partially traced OperatorMatrix.
- Return type
DenseOperator
- Raises
AssertionError: – If matrix dimension does not match specified dimensions.
Examples
>>> ghz_ket = DenseOperator(np.array([[1,0,0,0,0,0,0,1]]).T) / np.sqrt(2)
>>> ghz_rho = ghz_ket * ghz_ket.dag()
>>> ghz_rho.ptrace(dims=[2,2,2], remove=[0,2])
DenseOperator with data:
array([[0.5+0.j, 0. +0.j],
       [0. +0.j, 0.5+0.j]])
- transpose(do_copy: bool = True) Optional[DenseOperator] [source]
See base class.
- truncate_to_subspace(subspace_indices: Optional[Sequence[int]], map_to_closest_unitary: bool = False) DenseOperator [source]
See base class.
- class ExponentialMTF(awg_rise_time: float, oversampling: int = 1, bound_type: Tuple = ('x', 0), offset: Optional[float] = None, num_ctrls: int = 1)[source]
Bases:
MatrixTF
This transfer function model smooths the control amplitudes by exponential saturation.
The functionality is meant to model the finite rise time of voltage sources.
- Todo
add initial and final level. Currently fixed at 0 (or the offset)
- plot_pulse_old(x: ndarray) None [source]
Plot the control amplitudes corresponding to the given optimisation variables.
- reverse_state(amplitudes=None, times=None, targetfunc=None)[source]
Intended to be applied only to pulses generated by self.__call__(x). If times is None, either num_x or the oversampling must be known; for now, self.num_x is assumed to be valid for the input data.
- property transfer_matrix: ndarray
See base class.
- class GaussianConvolution(sigma: Union[float, Sequence[float]], num_ctrls=1, order: int = 0, mode: str = 'nearest', truncate: float = 4.0)[source]
Bases:
TransferFunction
A gaussian convolution is applied as filter function.
For oversampling and boundaries use this TransferFunction in combination with EfficientOversamplingTF and ConcatenateTF. The implementation makes use of the gaussian filter function in the scipy.ndimage package.
- Parameters
sigma (float or sequence of float) – standard deviation for Gaussian kernel
order (int, optional) – An order of 0 corresponds to convolution with a Gaussian kernel. A positive order corresponds to convolution with that derivative of a Gaussian.
mode ({‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional) –
The mode parameter determines how the input array is extended beyond its boundaries. Default is ‘reflect’. Behavior for each valid value is as follows:
‘reflect’ (d c b a | a b c d | d c b a) The input is extended by reflecting about the edge of the last pixel.
‘constant’ (k k k k | a b c d | k k k k) The input is extended by filling all values beyond the edge with the same constant value, defined by the cval parameter.
‘nearest’ (a a a a | a b c d | d d d d) The input is extended by replicating the last pixel.
‘mirror’ (d c b | a b c d | c b a) The input is extended by reflecting about the center of the last pixel.
‘wrap’ (a b c d | a b c d | a b c d) The input is extended by wrapping around to the opposite edge
Default is ‘nearest’.
truncate (float, optional) – Truncate the filter at this many standard deviations. Default is 4.0.
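The underlying scipy behaviour can be previewed with gaussian_filter1d, shown here with mode='nearest' as in the class default; this is an illustration, not qopt code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

pulse = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
smoothed = gaussian_filter1d(pulse, sigma=1.0, mode='nearest')
# The normalized kernel smears the rectangular pulse out,
# lowering its peak below 1 while keeping all values non-negative.
```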
- gradient_chain_rule(deriv_by_transferred_par: array) array [source]
See base class.
The application of the chain rule is another Gaussian filter.
- Parameters
deriv_by_transferred_par (np.array, shape (num_x, num_f, num_par)) – The gradients of num_f functions by num_par optimization parameters at num_x different time steps.
- Returns
deriv_by_opt_par – The derivatives by the optimization parameters at num_y time steps.
- Return type
np.array, shape: (num_y, num_f, num_par)
- property transfer_matrix: array
Overrides the base class method.
- class IdentityAmpFunc[source]
Bases:
AmplitudeFunction
The control amplitudes are identical with the optimization parameters.
- class IdentityTF(num_ctrls=1)[source]
Bases:
TransferFunction
Numerically efficient identity transfer function which changes neither the pulse nor the time steps.
The base class functions __call__ and gradient_chain_rule are reimplemented in order to avoid setting a transfer matrix.
- class IncoherentLeakageError(solver: SchroedingerSMonteCarlo, computational_states: List[int], label: Optional[List[str]] = None)[source]
Bases:
CostFunction
This class measures leakage as quantum operation error.
The resulting infidelity is measured by truncating the leakage states of the propagator U yielding the Propagator V on the computational basis. The infidelity is then given as the distance from unitarity: infid = 1 - trace(V^dag V) / d
- Parameters
solver (TimeSlotComputer) – The time slot computer computing the propagation of the system.
computational_states (list of int) – List of indices marking the computational states of the propagator. These are all but the leakage states.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
TODO –
adjust docstring
- class LeakageError(solver: Solver, computational_states: List[int], label: Optional[List[str]] = None)[source]
Bases:
CostFunction
This class measures leakage as quantum operation error.
The resulting infidelity is measured by truncating the leakage states of the propagator U yielding the Propagator V on the computational basis. The infidelity is then given as the distance from unitarity: infid = 1 - trace(V^dag V) / d
- Parameters
solver (TimeSlotComputer) – The time slot computer computing the propagation of the system.
computational_states (list of int) – List of indices marking the computational states of the propagator. These are all but the leakage states.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
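The infidelity formula infid = 1 - trace(V^dag V) / d can be checked on a toy three-level system where a hypothetical rotation by theta leaks population from computational state |1> into leakage state |2>:

```python
import numpy as np

theta = 0.1  # hypothetical leakage rotation angle
u = np.array([
    [1.0, 0.0, 0.0],
    [0.0, np.cos(theta), -np.sin(theta)],
    [0.0, np.sin(theta), np.cos(theta)],
])  # 3-level propagator U

computational_states = [0, 1]
# Truncate U to the computational subspace, yielding V.
v = u[np.ix_(computational_states, computational_states)]
d = len(computational_states)
infid = 1 - np.trace(v.conj().T @ v).real / d
# Here trace(V^dag V) = 1 + cos(theta)^2, so infid = sin(theta)^2 / 2.
```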
- class LeakageLiouville(solver: Solver, computational_states: List[int], label: Optional[List[str]] = None, verbose: int = 0, input_unitary: bool = False, monte_carlo: bool = False)[source]
Bases:
CostFunction
This class measures leakage in Liouville space.
The leakage is calculated in Liouville space as a matrix element. In pseudo code:
L = < projector leakage space | Propagator | projector comp. space >
- Parameters
solver (TimeSlotComputer) – The time slot computer computing the propagation of the system.
computational_states (list of int) – List of indices marking the computational states of the propagator. These are all but the leakage states.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
verbose (int) – Additional printed output for debugging.
input_unitary (bool) – If True, then the input is assumed to be formulated in the standard Hilbert space and thus expressed as unitary propagator. This propagator is then expressed as superoperator.
monte_carlo (bool) – If True, then a Monte Carlo simulation is performed and the average over the propagators is taken.
- class LeastSquaresOptimizer(system_simulator: Optional[Simulator] = None, termination_cond: Optional[Dict] = None, save_intermediary_steps: bool = True, method: str = 'trf', bounds: Optional[Union[ndarray, List]] = None, use_jacobian_function=True, cost_func_weights: Optional[Sequence[float]] = None, store_optimizer: bool = False)[source]
Bases:
Optimizer
Uses the scipy least squares method for optimization.
- Parameters
system_simulator (Simulator) – The systems simulator.
termination_cond (dict) – Termination conditions.
save_intermediary_steps (bool, optional) – If False, only the simulation result is stored. Defaults to True.
method (str, optional) – The optimization method used. Currently implemented are: - ‘trf’: A trust region optimization algorithm. This is the default.
bounds (list of array-like, optional) – Attention: The boundary format can vary between optimizers! The boundary conditions for the pulse optimization. If none are given, then the pulse is assumed to take any real value. The boundaries are given as a list of two arrays: the first array specifies the upper and the second array the lower bounds. Single parameters can be exempted by using np.inf with the appropriate sign.
- run_optimization(initial_control_amplitudes: array, verbose: int = 0) OptimizationResult [source]
See base class.
- class LindbladSolver(h_drift: List[OperatorMatrix], h_ctrl: List[OperatorMatrix], tau: array, initial_state: Optional[OperatorMatrix] = None, ctrl_amps: Optional[array] = None, calculate_unitary_derivatives: bool = False, filter_function_h_n: Optional[Union[Callable, List[List]]] = None, filter_function_basis: Optional[Basis] = None, filter_function_n_coeffs_deriv: Optional[Callable[[ndarray], ndarray]] = None, exponential_method: Optional[str] = None, frechet_deriv_approx_method: Optional[str] = None, initial_diss_super_op: Optional[List[OperatorMatrix]] = None, lindblad_operators: Optional[List[OperatorMatrix]] = None, prefactor_function: Optional[Callable[[array, array], array]] = None, prefactor_derivative_function: Optional[Callable[[array, array], array]] = None, super_operator_function: Optional[Callable[[array, array], List[OperatorMatrix]]] = None, super_operator_derivative_function: Optional[Callable[[array, array], List[List[OperatorMatrix]]]] = None, is_skew_hermitian: bool = False, transfer_function: Optional[TransferFunction] = None, amplitude_function: Optional[AmplitudeFunction] = None)[source]
Bases:
SchroedingerSolver
Solves a master equation for an open quantum system in the Markov approximation using the Lindblad super operator formalism.
The master equation to be solved is
\[d \rho / dt = i [\rho, H] + \sum_k (L_k \rho L_k^\dagger - .5 L_k^\dagger L_k \rho - .5 \rho L_k^\dagger L_k)\]
with the Lindblad operators L_k. The solution is calculated as
\[\rho(t) = exp[(-i \mathcal{H} + \mathcal{G})t] \rho(0)\]
with the dissipative super operator
\[\mathcal{G} = \sum_k D(L_k)\]
\[D(L) = L^\ast \otimes L - .5 I \otimes (L^\dagger L) - .5 (L^T L^\ast) \otimes I\]
The dissipation super operator can be given in three different ways.
1. A nested list of dissipation super operators D(L_k) as control matrices.
2. A nested list of Lindblad operators L as control matrices.
3. A function handle receiving the control amplitudes as sole argument and returning a dissipation super operator as list of control matrices.
Optionally a prefactor function can be given for 1. and 2. This function receives the control parameters and returns an array of the shape num_t x num_l where num_t is the number of time steps in the control and num_l is the number of Lindblad operators or dissipation super operators.
If multiple construction arguments are given, the implementation prioritises the function (3.) over the Lindblad operators (2.) over the dissipation super operator (1.).
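The dissipator definition above translates directly into Kronecker products (column-stacking convention, so that vec(L rho L^dag) = (L* ⊗ L) vec(rho)); a numpy sketch with a hypothetical qubit lowering operator as Lindblad operator:

```python
import numpy as np

def dissipation_super_op(l):
    # D(L) = L* (x) L - 0.5 I (x) (L^dag L) - 0.5 (L^T L*) (x) I
    dim = l.shape[0]
    identity = np.eye(dim)
    l_dag_l = l.conj().T @ l
    return (np.kron(l.conj(), l)
            - 0.5 * np.kron(identity, l_dag_l)
            - 0.5 * np.kron(l.T @ l.conj(), identity))

lowering = np.array([[0.0, 1.0], [0.0, 0.0]])  # qubit lowering operator
d_super = dissipation_super_op(lowering)
# Trace preservation: vec(I) = [1, 0, 0, 1] annihilates D from the left.
```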
- Parameters
initial_diss_super_op (List[ControlMatrix], len num_t) – Initial dissipation super operator; num_l is the number of Lindbladians. Set if you want to use (1.) (See documentation above!). The control matrices are expected to be of shape (dim, dim) where dim is the dimension of the system.
lindblad_operators (List[ControlMatrix], len num_l) – Lindblad operators; num_l is the number of Lindbladians. Set if you want to use (2.) (See documentation above!). The Lindblad operators are assumed to be of shape (dim, dim) where dim is the dimension of the system.
prefactor_function (Callable[[np.array, np.array], np.array]) –
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and the transferred optimization parameters (as numpy array of shape (num_t, num_opt)) and returns prefactors as numpy array of shape (num_t, num_l). The prefactors a_k are used as weights in the sum of the total dissipation operator.
\[\mathcal{G} = \sum_k a_k * D(L_k)\]
If the Lindblad operator is, for example, given by a complex number b_k times a constant (in time) matrix C_k,
\[L_k = b_k * C_k\]
then the prefactor is the squared absolute value of this number:
\[a_k = |b_k|^2\]
Set if you want to use method (1.) or (2.). (See class documentation.)
prefactor_derivative_function (Callable[[np.array, np.array], np.array]) –
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and the transferred optimization parameters (as numpy array of shape (num_t, num_opt)) and returns the derivatives of the prefactors as numpy array of shape (num_t, num_ctrl, num_l). The derivatives d_k are used as weights in the sum of the derivative of the total dissipation operator.
\[d \mathcal{G} / d u_k = \sum_k d_k * D(L_k)\]
If the Lindblad operator is, for example, given by a complex number b_k times a constant (in time) matrix C_k, and this number depends on the control amplitudes u_k,
\[L_k = b_k (u_k) * C_k\]
then the derivative of the prefactor is the derivative of the squared absolute value of this number:
\[d_k = d |b_k|^2 / d u_k\]
Set if you want to use method (1.) or (2.). (See class documentation.)
super_operator_function (Callable[[np.array, np.array], List[ControlMatrix]]) – Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and the transferred optimization parameters (as numpy array of shape (num_t, num_opt)) and returns the total dissipation operators as list of length num_t. Set if you want to use method (3.). (See class documentation.)
super_operator_derivative_function (Callable[[np.array, np.array], List[List[ControlMatrix]]]) – Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and the transferred optimization parameters (as numpy array of shape (num_t, num_opt)) and returns the derivatives of the total dissipation operators as nested list of shape [[] * num_ctrl] * num_t. Set if you want to use method (3.). (See class documentation.)
is_skew_hermitian (bool) – If True, then the total dynamics generator is assumed to be skew hermitian.
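The prefactor mechanism above can be sketched with a matching pair of functions. This is a minimal illustration, assuming a single real control amplitude u and a single Lindblad operator whose strength is b(u) = u, so that a = |u|² and da/du = 2u; the signatures follow the parameter descriptions above, the bodies are hypothetical:

```python
import numpy as np

def prefactor_function(u, transferred_parameters):
    # u has shape (num_t, num_ctrl); returns shape (num_t, num_l), num_l = 1
    return np.abs(u[:, 0:1]) ** 2

def prefactor_derivative_function(u, transferred_parameters):
    # returns shape (num_t, num_ctrl, num_l); d|u|^2 / du = 2u for real u
    return 2 * u[:, :, np.newaxis]
```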
- _diss_sup_op
Total dissipation super operator.
- Type
List[ControlMatrix], len num_t
- _diss_sup_op_deriv
shape [[] * num_ctrl] * num_t Derivative of the total dissipation operator with respect to the control amplitudes.
- Type
List[List[ControlMatrix]],
- _initial_diss_super_op
Initial dissipation super operator; num_l is the number of Lindbladians.
- Type
List[ControlMatrix], len num_l
- _lindblad_operators
Lindblad operators; num_l is the number of Lindbladians.
- Type
List[ControlMatrix], len num_l
- _prefactor_function
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and returns prefactors as numpy array of shape (num_t, num_l). The prefactors a_k are used as weights in the sum of the total dissipation operator.
\[\mathcal{G} = \sum_k a_k * D(L_k)\]
If the Lindblad operator is, for example, given by a complex number b_k times a constant (in time) matrix C_k,
\[L_k = b_k * C_k\]
then the prefactor is the squared absolute value of this number:
\[a_k = |b_k|^2\]
Set if you want to use method (1.) or (2.). (See class documentation.)
- Type
Callable[[np.array], np.array]
- _prefactor_deriv_function
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and returns the derivatives of the prefactors as numpy array of shape (num_t, num_ctrl, num_l). The derivatives d_k are used as weights in the sum of the derivative of the total dissipation operator.
\[d \mathcal{G} / d u_k = \sum_k d_k * D(L_k)\]
If the Lindblad operator is, for example, given by a complex number b_k times a constant (in time) matrix C_k, and this number depends on the control amplitudes u_k,
\[L_k = b_k (u_k) * C_k\]
then the derivative of the prefactor is the derivative of the squared absolute value of this number:
\[d_k = d |b_k|^2 / d u_k\]
- Type
Callable[[np.array], np.array]
- _sup_op_func
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and returns the total dissipation operators as list of length num_t.
- Type
Callable[[np.array], List[ControlMatrix]]
- _sup_op_deriv_func
Receives the control amplitudes u (as numpy array of shape (num_t, num_ctrl)) and returns the derivatives of the total dissipation operators as nested list of shape [[] * num_ctrl] * num_t.
- Type
Callable[[np.array], List[List[ControlMatrix]]]
- _parse_dissipative_super_operator: None
- _calc_diss_sup_op: List[ControlMatrix]
Calculates the total dissipation super operator.
- _calc_diss_sup_op_deriv: Optional[List[List[ControlMatrix]]]
Calculates the derivatives of the total dissipation super operators with respect to the control amplitudes.
- `Todo`
Write parser
- class LiouvilleMonteCarloEntanglementInfidelity(solver: SchroedingerSMonteCarlo, target: Optional[OperatorMatrix], label: Optional[List[str]] = None, computational_states: Optional[List[int]] = None)[source]
Bases:
CostFunction
Entanglement infidelity for a combination of Monte Carlo and Liouville description.
The propagators are first mapped to the super operator formalism in Liouville space. Next, they are averaged and finally we calculate the entanglement infidelity.
Systematic errors cannot be neglected in the current formulation.
- Parameters
solver (Solver) – The time slot computer simulating the system's dynamics.
target (ControlMatrix) – Unitary target evolution.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
computational_states (list of int, optional) – If set, the chosen fidelity is only calculated for the specified subspace.
- class NTGColoredNoise(n_samples_per_trace: int, noise_spectral_density: Callable, dt: float, n_traces: int = 1, n_noise_operators: int = 1, always_redraw_samples: bool = True, low_frequency_extension_ratio: int = 1)[source]
Bases:
NoiseTraceGenerator
This class draws noise samples from noises of arbitrary colored spectra.
- Parameters
n_samples_per_trace (int) – Number of noise samples per trace.
n_traces (int, optional) – Number of noise traces. Default is 1.
n_noise_operators (int, optional) – Number of noise operators. Default is 1.
always_redraw_samples (bool) – If true, the samples are always redrawn upon request and the stored samples are not returned.
noise_spectral_density (function) – The one-sided noise spectral density as function of frequency.
dt (float) – Time distance between two adjacent samples.
low_frequency_extension_ratio (int, optional) – When creating the time samples, the total time is multiplied with this factor. This allows taking frequencies into account which are lower than 1 / total time. Defaults to 1.
- noise_spectral_density
The noise spectral density as function of frequency.
- Type
function
- dt
Time distance between two adjacent samples.
- Type
float
- _sample_noise: None
Samples noise from an arbitrary colored spectrum.
See also
noise.NoiseTraceGenerator
Abstract Base Class
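The core technique — drawing time traces whose periodogram follows a given one-sided spectral density — can be sketched in plain NumPy. This mirrors the idea, not necessarily qopt's exact implementation; the DC and Nyquist bins are handled only approximately:

```python
import numpy as np

def draw_colored_noise(spectral_density, n_samples, dt, n_traces=1, seed=None):
    """Draw real noise traces matching a one-sided spectral density S(f)."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n_samples, d=dt)[1:]  # skip the DC bin
    # per-bin amplitude so each bin contributes variance S(f) * df, df = 1/(n*dt)
    amps = np.sqrt(spectral_density(freqs) / (2 * n_samples * dt))
    phases = rng.uniform(0, 2 * np.pi, size=(n_traces, freqs.size))
    spectrum = np.zeros((n_traces, freqs.size + 1), dtype=complex)
    spectrum[:, 1:] = amps * np.exp(1j * phases)
    # np.fft.irfft normalizes by 1/n, so rescale to keep the intended variance
    return np.fft.irfft(spectrum, n=n_samples) * n_samples
```

For a white spectrum S(f) = S0 the total variance of the traces comes out close to S0 · f_Nyquist = S0 / (2 dt), as expected from integrating the one-sided density.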
- plot_periodogram(n_average: int, scaling: str = 'density', log_plot: Optional[str] = None, draw_plot=True)[source]
Creates noise samples and plots the corresponding periodogram.
- Parameters
n_average (int) – Number of Periodograms which are averaged.
scaling ({'density', 'spectrum'}, optional) – If ‘density’ then the power spectral density in units of V**2/Hz is plotted. If ‘spectrum’ then the power spectrum in units of V**2 is plotted. Defaults to ‘density’.
log_plot ({None, 'semilogy', 'semilogx', 'loglog'}, optional) – If None, the plot is not logarithmic. If ‘semilogy’, only the y-axis is logarithmic; if ‘semilogx’, only the x-axis is logarithmic; if ‘loglog’, both axes are logarithmic. Defaults to None.
draw_plot (bool, optional) – If true, then the periodogram is plotted. Defaults to True.
- Returns
deviation_norm – The vector norm of the deviation between the actual power spectral density and the power spectral density found in the periodogram.
- Return type
float
- class NTGQuasiStatic(standard_deviation: List[float], n_samples_per_trace: int, n_traces: int = 1, noise_samples: Optional[ndarray] = None, always_redraw_samples: bool = True, correct_std_for_discrete_sampling: bool = True, sampling_mode: str = 'uncorrelated_deterministic')[source]
Bases:
NoiseTraceGenerator
This class draws noise traces of quasistatic noise.
The noise distribution is assumed to be normal. It would not make sense to use the attribute always_redraw_samples if the samples are deterministic and therefore always the same. If multiple noise operators are given, then the noise is sampled for each one separately.
- Parameters
standard_deviation (List[float], len (n_noise_operators)) – Standard deviations of the noise assumed on the noise operators.
n_samples_per_trace (int) – Number of noise samples per trace.
n_traces (int, optional) – Number of noise traces. Default is 1.
noise_samples (None or np.ndarray, optional) – shape (n_noise_operators, n_traces, n_samples_per_trace) Precalculated noise samples. Defaults to None.
sampling_mode ({'uncorrelated_deterministic', 'monte_carlo'}, optional) – The method by which the quasi static noise samples are drawn. The following are implemented: ‘uncorrelated_deterministic’: No correlations are assumed. Each noise operator is sampled n_traces times deterministically. ‘monte_carlo’: The noise is assumed to be correlated. Samples are drawn pseudo-randomly. Defaults to ‘uncorrelated_deterministic’.
- standard_deviation
Standard deviations of the noise assumed on the noise operators.
- Type
List[float], len (n_noise_operators)
See also
noise.NoiseTraceGenerator
Abstract Base Class
- property n_traces: int
Number of traces.
The number of requested traces must be multiplied by the number of standard deviations because each standard deviation is sampled separately.
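The ‘uncorrelated_deterministic’ mode described above replaces random draws by deterministic points covering a normal distribution. One plausible reading of that idea, sketched with quantile midpoints (not a verbatim copy of qopt's code; scipy is assumed available):

```python
import numpy as np
from scipy.stats import norm

def deterministic_gaussian_samples(std, n_traces):
    """Place one sample at each quantile midpoint of a normal distribution."""
    quantiles = (np.arange(n_traces) + 0.5) / n_traces
    return std * norm.ppf(quantiles)

samples = deterministic_gaussian_samples(std=0.5, n_traces=10)
```

The samples are symmetric about zero and cover the distribution evenly, which is why redrawing them would be pointless.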
- class OperationInfidelity(solver: Solver, target: OperatorMatrix, fidelity_measure: str = 'entanglement', super_operator_formalism: bool = False, label: Optional[List[str]] = None, computational_states: Optional[List[int]] = None, map_to_closest_unitary: bool = False)[source]
Bases:
CostFunction
Calculates the infidelity of a quantum channel.
The quantum channel is described either by a unitary evolution or by a propagator in the master equation formalism.
- Parameters
solver (Solver) – The time slot computer simulating the system's dynamics.
target (ControlMatrix) – Unitary target evolution.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
fidelity_measure (string, optional) – If ‘entanglement’: the entanglement fidelity is calculated. Otherwise an error is raised.
super_operator_formalism (bool, optional) – If true, the time slot computer is expected to return a propagator in the super operator formalism, while the target unitary is not given as super operator. If false, no super operators are assumed.
computational_states (list of int, optional) – If set, the chosen fidelity is only calculated for the specified subspace.
map_to_closest_unitary (bool, optional) – If True, then the final propagator is mapped to the closest unitary before the infidelity is evaluated.
- solver
The time slot computer simulating the system's dynamics.
- Type
TimeSlotComputer
- target
Unitary target evolution.
- Type
ControlMatrix
- fidelity_measure
If ‘entanglement’: the entanglement fidelity is calculated. Otherwise an error is raised.
- Type
string
- super_operator_formalism
If true, the time slot computer is expected to return a propagator in the super operator formalism, while the target unitary is not given as super operator. If false, no super operators are assumed.
- Type
bool
- Raises
NotImplementedError – If the fidelity measure is not ‘entanglement’.
- `Todo`
- add the average fidelity? or remove the fidelity_measure.
- gradient does not truncate to the subspace.
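For unitary evolutions, the entanglement fidelity reduces to the standard overlap formula F = |Tr(V†U)|² / d². A generic sketch of the corresponding infidelity (not qopt's subspace-aware implementation):

```python
import numpy as np

def entanglement_infidelity(propagator, target):
    """1 - |Tr(target† propagator)|^2 / d^2; insensitive to a global phase."""
    d = target.shape[0]
    overlap = np.trace(target.conj().T @ propagator)
    return 1 - np.abs(overlap) ** 2 / d ** 2
```

A global phase on the propagator leaves the result unchanged, which is why this measure is preferred over naive matrix differences.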
- class OperationNoiseInfidelity(solver: SchroedingerSMonteCarlo, target: Optional[OperatorMatrix], label: Optional[List[str]] = None, fidelity_measure: str = 'entanglement', computational_states: Optional[List[int]] = None, map_to_closest_unitary: bool = False, neglect_systematic_errors: bool = True)[source]
Bases:
CostFunction
Averages the operator fidelity over noise traces.
- Parameters
solver (Solver) – The time slot computer simulating the system's dynamics.
target (ControlMatrix) – Unitary target evolution.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
fidelity_measure (string, optional) – If ‘entanglement’: the entanglement fidelity is calculated. Otherwise an error is raised.
computational_states (list of int, optional) – If set, the chosen fidelity is only calculated for the specified subspace.
map_to_closest_unitary (bool, optional) – If True, then the final propagator is mapped to the closest unitary before the infidelity is evaluated.
neglect_systematic_errors (bool) – If true, the mean operator fidelity is calculated with respect to the simulated propagator without statistical noise. Otherwise the mean operator fidelity is calculated with respect to the target propagator.
- neglect_systematic_errors
If true, the mean operator fidelity is calculated with respect to the simulated propagator without statistical noise. Otherwise the mean operator fidelity is calculated with respect to the target propagator.
- Type
bool
- class OperatorFilterFunctionInfidelity(solver: Solver, noise_power_spec_density: Callable, omega: Sequence[float], label: Optional[List[str]] = None)[source]
Bases:
CostFunction
Calculates the infidelity with the filter function formalism.
- Parameters
solver (Solver) – The time slot computer simulating the system's dynamics.
label (list of str) – Indices of the returned infidelities for distinction in the analysis.
noise_power_spec_density (Callable) – The noise power spectral density in units of inverse frequencies that returns an array of shape (n_omega,) or (n_nops, n_omega). In the first case, the same spectrum is taken for all noise operators, in the second, it is assumed that there are no correlations between different noise sources and thus there is one spectrum for each noise operator. The order of the noise terms must correspond to the order defined in the solver by filter_function_h_n.
omega (Sequence[float]) – The frequencies at which the integration is to be carried out.
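The quantity evaluated here is, up to package conventions, the spectral overlap I ≈ (1/2π) ∫ S(ω) F(ω) dω. A hedged sketch using trapezoidal integration; the exact normalization follows the filter_functions package, so treat the 1/2π factor as an assumption:

```python
import numpy as np

def filter_function_infidelity(spectrum, filter_function, omega):
    """Approximate (1 / 2π) ∫ S(ω) F(ω) dω on a discrete frequency grid."""
    integrand = spectrum(omega) * filter_function
    d_omega = np.diff(omega)
    # trapezoidal rule, written out to avoid NumPy version differences
    integral = np.sum(d_omega * (integrand[1:] + integrand[:-1]) / 2)
    return integral / (2 * np.pi)
```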
- costs() Union[float, ndarray] [source]
The infidelity is calculated with the filter function package. See its documentation for more information.
- Returns
costs – The infidelity.
- Return type
Union[float, np.ndarray]
- grad()[source]
The gradient of the infidelity is calculated with the filter function package. See its documentation for more information.
- Raises
NotImplementedError – This method has not been implemented yet.
- property omega
- class OperatorMatrixNorm(solver: Solver, target: OperatorMatrix, mode: str = 'scalar', label: Optional[List[str]] = None)[source]
Bases:
CostFunction
Computes the fidelity as difference between the propagator and a target.
A global phase difference is ignored. The result can be returned as an absolute value or as a vector. If the result is to be returned as an absolute value, it is calculated with a matrix norm.
- Parameters
solver (TimeSlotComputer) – Computes the evolution of the system.
target (ControlMatrix) – The ideal evolution.
mode (string) – The type of calculation. ‘scalar’: The difference is returned as scalar. ‘vector’: The difference of the individual elements is returned as vector. ‘rotation_axis’: For unitary evolutions only. The evolution is described by its rotation axis and a rotation angle. The first element of the rotation axis is multiplied by the angle to save one return argument.
- mode
Type of calculation
- Type
string
- costs() Union[ndarray, float] [source]
The costs or infidelity of the quantum channel.
These costs are given as the difference between a simulated unitary evolution and the unitary target evolution, depending on the mode. (See class documentation.)
- Returns
costs – The costs or infidelity.
- Return type
Union[np.ndarray, float]
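The ‘scalar’ mode — a matrix-norm difference that ignores a global phase — can be sketched as follows. Fixing the phase via the trace overlap is one possible convention, not necessarily the one used internally:

```python
import numpy as np

def phase_insensitive_difference(propagator, target):
    """Frobenius norm of (propagator - target) after removing a global phase."""
    phase = np.angle(np.trace(target.conj().T @ propagator))
    return np.linalg.norm(propagator * np.exp(-1j * phase) - target)
```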
- class OptimizationResult(final_cost=None, indices=None, final_parameters=None, final_grad_norm=None, init_parameters=None, num_iter=None, termination_reason='not started yet', status=None, optimization_stats=None, optimizer=None, optim_summary=None)[source]
Bases:
object
Resulting data of the optimization.
An instance of this class is returned by the Optimizer after the optimization has terminated. It holds the results of the optimization and can also contain an instance of OptimizationSummary to describe the optimization run itself, for example its convergence. The parameters of the initialization method are all optional. This class is intended to be initialized empty or loaded from a dictionary by the class method
from_dict().
- termination_reason
Reason for the termination as string.
- Type
string
- status
The termination_reason as integer, like in scipy.OptimizeResult. None if the optimization has not started. -1: improper input parameters. 0: the maximum number of function evaluations is exceeded. 1: gradient norm termination condition is satisfied. 2: cost function termination condition is satisfied. 3: minimal step size termination condition is satisfied. 4: both termination conditions 2 and 3 are satisfied.
- Type
None or int
- final_cost
Value of the cost functions after the optimization.
- Type
float
- final_grad_norm
Norm of the gradient after the optimization.
- Type
float
- num_iter
Number of iterations in the optimization algorithm.
- Type
integer
- init_parameters
The amplitudes at the start of the optimisation, where n_t is the number of time steps simulated and n_par the number of optimization parameters.
- Type
array, shape: (n_t, n_par)
- final_parameters
The optimization parameters at the end of the optimisation, where n_t is the number of time steps simulated and n_par the number of optimization parameters.
- Type
array, shape: (n_t, n_par)
- optimizer
Instance of the Optimizer used to generate the result
- Type
Optimizer
- optim_summary
None if no intermediary results are saved. Otherwise the infidelity during the optimization.
- Type
OptimizationSummary
- class OptimizationSummary(indices=None, iter_num=0, costs=None, gradients=None, parameters=None)[source]
Bases:
object
A summary of an optimization run.
This class saves the state of the optimization for each iteration. All parameters of the initialization method are optional. The class is intended to be initialized empty.
- iter_num
Number of iterations stored. Serves as checksum to verify that full data has been stored.
- Type
int
- costs
Evaluation results of the cost functions. The dictionary is sorted by cost function indices. The lists hold one entry for each evaluation.
- Type
List[float]
- indices
The indices of the cost functions.
- Type
List[str]
- gradients
Gradients of the cost functions. The dictionary is again sorted by cost function indices and the lists hold one entry per evaluation.
- Type
List[array]
- parameters
Optimization parameters during the optimization.
- Type
List[array]
- class OversamplingMTF(oversampling: int = 1, bound_type: Optional[Tuple] = None, num_ctrls: int = 1, offset: float = 0)[source]
Bases:
MatrixTF
Oversamples and applies boundary conditions.
- class ParallelMTF(tf1: MatrixTF, tf2: MatrixTF)[source]
Bases:
MatrixTF
This transfer function parallelizes two transfer functions such that they are applied to different control terms, stacking them along the third dimension of the transfer matrix.
- Parameters
tf1 (MatrixTF) – First transfer function. This function operates on the first tf1.num_ctrls control pulses.
tf2 (MatrixTF) – Second transfer function. This function operates on the next tf2._num_ctrls control pulses.
- class ParallelTF(tf1: TransferFunction, tf2: TransferFunction)[source]
Bases:
TransferFunction
This transfer function parallelizes two transfer functions such that they are applied to different control terms, stacking them along the third dimension of the transfer matrix.
- Parameters
tf1 (TransferFunction) – First transfer function. This function operates on the first tf1.num_ctrls control pulses.
tf2 (TransferFunction) – Second transfer function. This function operates on the next tf2._num_ctrls control pulses.
- class PerformanceStatistics[source]
Bases:
object
Stores performance statistics.
- start_t_opt
Time of the optimization's start. None if it has not been set yet.
- Type
float or None
- end_t_opt
Time of the optimization's end. None if it has not been set yet.
- Type
float or None
- indices
The indices of the cost functions.
- Type
List[str]
- cost_func_eval_times
List of durations of the evaluation of the cost functions.
- Type
list of float
- grad_func_eval_times
List of durations of the evaluation of the gradients.
- Type
list of float
- class ScalarMinimizingOptimizer(system_simulator: Optional[Simulator] = None, termination_cond: Optional[Dict] = None, save_intermediary_steps: bool = True, method: str = 'L-BFGS-B', bounds: Optional[Union[ndarray, List]] = None, use_jacobian_function=True, cost_func_weights: Optional[Sequence[float]] = None, store_optimizer: bool = False)[source]
Bases:
Optimizer
Interfaces to the minimize functions of the optimization package in scipy.
- Parameters
method (string) – Takes methods implemented by scipy.optimize.minimize.
bounds (sequence, optional) – Attention: The boundary format can vary between optimizers! The boundary conditions for the pulse optimizations. If none are given then the pulse is assumed to take any real value. The boundaries are given as a sequence of (min, max) pairs for each element. Defaults to None.
- cost_func_wrapper(optimization_parameters)[source]
Evaluates the cost function.
The total cost function is defined as the sum of cost functions.
- cost_jacobian_wrapper(optimization_parameters)[source]
The Jacobian reduced to the gradient.
The gradient is calculated by summation over the Jacobian along the function axis, because the total cost function is defined as the sum of cost functions.
- Returns
gradient – The gradient of the costs in the 2 norm.
- Return type
numpy array, shape (num_t * num_amp)
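The reduction described above can be sketched as follows, assuming the Jacobian arrives with shape (num_t, num_f, num_amp) (an assumption to check against the solver's output):

```python
import numpy as np

def reduce_jacobian_to_gradient(jacobian):
    """Sum the Jacobian along the cost-function axis and flatten,
    because the total cost is the sum of the individual cost functions."""
    return np.sum(jacobian, axis=1).flatten()  # shape (num_t * num_amp,)
```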
- run_optimization(initial_control_amplitudes: array, verbose: bool = False) OptimizationResult [source]
Runs the optimization of the control amplitudes.
- Parameters
initial_control_amplitudes (array) – shape (num_t, num_ctrl)
verbose – Verbosity of the run. Depends on which optimizer is used.
- Returns
optimization_result – The resulting data of the simulation.
- Return type
OptimizationResult
- class SchroedingerSMCControlNoise(h_drift: List[OperatorMatrix], h_ctrl: List[OperatorMatrix], tau: array, noise_trace_generator: Optional[NoiseTraceGenerator], initial_state: Optional[OperatorMatrix] = None, ctrl_amps: Optional[array] = None, calculate_propagator_derivatives: bool = False, processes: Optional[int] = 1, filter_function_h_n: Optional[Union[Callable, List[List]]] = None, filter_function_basis: Optional[Basis] = None, filter_function_n_coeffs_deriv: Optional[Callable[[ndarray], ndarray]] = None, exponential_method: Optional[str] = None, frechet_deriv_approx_method: Optional[str] = None, is_skew_hermitian: bool = True, transfer_function: Optional[TransferFunction] = None, amplitude_function: Optional[AmplitudeFunction] = None)[source]
Bases:
SchroedingerSMonteCarlo
Convenience class like SchroedingerSMonteCarlo but with noise on the optimization parameters.
This time slot computer solves the Schroedinger equation explicitly for concrete control noise realizations. It assumes that the noise is sampled on the time scale of the already transferred optimization parameters. The control Hamiltonians are also used as noise Hamiltonians, and the noise amplitude function adds the noise samples to the unperturbed transferred optimization parameters and applies the amplitude function of the control amplitudes.
- class SchroedingerSMonteCarlo(h_drift: List[OperatorMatrix], h_ctrl: List[OperatorMatrix], tau: array, h_noise: List[OperatorMatrix], noise_trace_generator: Optional[NoiseTraceGenerator], initial_state: Optional[OperatorMatrix] = None, ctrl_amps: Optional[array] = None, calculate_propagator_derivatives: bool = False, processes: Optional[int] = 1, filter_function_h_n: Optional[Union[Callable, List[List]]] = None, filter_function_basis: Optional[Basis] = None, filter_function_n_coeffs_deriv: Optional[Callable[[ndarray], ndarray]] = None, exponential_method: Optional[str] = None, frechet_deriv_approx_method: Optional[str] = None, is_skew_hermitian: bool = True, transfer_function: Optional[TransferFunction] = None, amplitude_function: Optional[AmplitudeFunction] = None, noise_amplitude_function: Optional[Callable[[array, array, array, array], array]] = None)[source]
Bases:
SchroedingerSolver
Solves Schroedinger’s equation for explicit noise realisations as Monte Carlo experiment.
This time slot computer solves the Schroedinger equation explicitly for concrete noise realizations. The noise traces are generated by an instance of the noise trace generator class. They can then be processed by the noise amplitude function before they are multiplied by the noise Hamiltonians.
- Parameters
h_noise (List[ControlMatrix], len num_noise_operators) – List of noise operators occurring in the Hamiltonian.
noise_trace_generator (noise.NoiseTraceGenerator) – Noise trace generator object.
processes (int, optional) – If an integer is given, then the propagation is calculated in this number of parallel processes. If 1 then no parallel computing is applied. If None then cpu_count() is called to use all cores available. Defaults to 1.
noise_amplitude_function (Callable[[np.array, np.array, np.array, np.array], np.array]) – The noise amplitude function calculates the noisy control amplitudes corresponding to the noise samples. It receives four arguments, namely the noise samples, the optimization parameters, the transferred optimization parameters and the control amplitudes, in this order. The noise samples are given with the shape (n_samples_per_trace, n_traces, n_noise_operators), the optimization parameters with (num_x, num_ctrl), the transferred parameters with (num_t, num_ctrl) and the control amplitudes with (num_t, num_ctrl). The returned noise amplitudes should be of shape (num_t, n_traces, n_noise_operators).
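A minimal additive example of such a function, assuming the noise couples to the same channels as the controls (so n_noise_operators equals num_ctrl and n_samples_per_trace equals num_t; the function name is illustrative):

```python
import numpy as np

def additive_noise_amplitude_function(noise_samples, optimization_parameters,
                                      transferred_parameters,
                                      control_amplitudes):
    # noise_samples:      (n_samples_per_trace, n_traces, n_noise_operators)
    # control_amplitudes: (num_t, num_ctrl)
    # returns:            (num_t, n_traces, n_noise_operators)
    return control_amplitudes[:, np.newaxis, :] + noise_samples
```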
- h_noise
List of noise operators occurring in the Hamiltonian.
- Type
List[ControlMatrix], len num_noise_operators
- noise_trace_generator
Noise trace generator object.
- _dyn_gen_noise
shape [[] * num_t] * num_noise_traces Dynamics generators for the individual noise traces.
- Type
List[List[ControlMatrix]],
- _prop_noise
shape [[] * num_t] * num_noise_traces Propagators for the individual noise traces.
- Type
List[List[ControlMatrix]],
- _fwd_prop_noise
shape [[] * (num_t + 1)] * num_noise_traces Cumulation of the propagators for the individual noise traces. They describe the forward propagation of the systems state.
- Type
List[List[ControlMatrix]],
- _reversed_prop_noise
shape [[] * (num_t + 1)] * num_noise_traces Cumulation of propagators in reversed order for the individual noise traces.
- Type
List[List[ControlMatrix]],
- _derivative_prop_noise
shape [[[] * num_t] * num_ctrl] * num_noise_traces Frechet derivatives of the propagators by the control amplitudes for the individual noise traces.
- Type
List[List[List[ControlMatrix]]],
- propagators_noise: List[List[ControlMatrix]],
shape [[] * num_t] * num_noise_traces Propagators for the individual noise traces.
- forward_propagators_noise: List[List[ControlMatrix]],
shape [[] * (num_t + 1)] * num_noise_traces Cumulation of the propagators for the individual noise traces. They describe the forward propagation of the systems state.
- reversed_propagators_noise: List[List[ControlMatrix]],
shape [[] * (num_t + 1)] * num_noise_traces Cumulation of propagators in reversed order for the individual noise traces.
- frechet_deriv_propagators_noise: List[List[List[ControlMatrix]]],
shape [[[] * num_t] * num_ctrl] * num_noise_traces Frechet derivatives of the propagators by the control amplitudes for the individual noise traces.
- create_ff_h_n(self): List[List[np.ndarray, list, str]],
shape [[]]*num_noise_operators Creates the noise hamiltonian of the filter function formalism.
- property create_ff_h_n: list
Creates the noise hamiltonian of the filter function formalism.
If filter_function_h_n is None, then the filter function noise Hamiltonian is created from the Monte Carlo noise Hamiltonian by directly using the Operators and assuming all noise susceptibilities equal 1.
- Returns
create_ff_h_n – Noise Hamiltonian of the filter function formalism.
- Return type
nested list
- property forward_propagators_noise: List[List[OperatorMatrix]]
Returns the forward propagation of the initial state for every time slice and every noise trace and calculates it if necessary. If the initial state is the identity matrix, then the cumulative propagators are given. The element forward_propagators_noise[k][i] propagates a state by the first i time steps under the kth noise trace, if the initial state is the identity matrix.
- Returns
forward_propagation (List[List[ControlMatrix]],)
shape [[] * (num_t + 1)] * num_noise_traces – Propagation of the initial state of the system. fwd[0] gives the initial state itself.
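The cumulation rule stated here can be sketched independently of the class, with propagators as plain arrays:

```python
import numpy as np

def forward_propagation(propagators, initial_state):
    """fwd[0] is the initial state; fwd[i] applies the first i propagators."""
    forward = [initial_state]
    for propagator in propagators:
        forward.append(propagator @ forward[-1])
    return forward
```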
- property frechet_deriv_propagators_noise: List[List[List[OperatorMatrix]]]
Returns the frechet derivatives of the propagators with respect to the control amplitudes for each noise trace.
- Returns
derivative_prop_noise (List[List[List[ControlMatrix]]],)
shape [[[] * num_t] * num_ctrl] * num_noise_traces – Frechet derivatives of the propagators by the control amplitudes.
- property propagators_noise: List[List[OperatorMatrix]]
Returns the propagators of the system for each noise trace and calculates them if necessary.
- Returns
propagators_noise (List[List[ControlMatrix]],)
shape [[] * num_t] * num_noise_traces – Propagators of the system for each noise trace.
- property reversed_propagators_noise: List[List[OperatorMatrix]]
Returns the reversed propagation of the initial state for every noise trace and calculates it if necessary. If the initial state is the identity matrix, then the reversed cumulative propagators are given. The element reversed_propagators_noise[k][i] propagates a state by the last i time steps under the kth noise trace, if the initial state is the identity matrix.
- Returns
reversed_propagation_noise (List[List[ControlMatrix]],)
shape [[] * (num_t + 1)] * num_noise_traces – Propagation of the initial state of the system. reversed[k][0] gives the initial state itself.
- class SchroedingerSolver(h_drift: List[OperatorMatrix], h_ctrl: List[OperatorMatrix], tau: array, initial_state: Optional[OperatorMatrix] = None, ctrl_amps: Optional[array] = None, calculate_propagator_derivatives: bool = True, filter_function_h_n: Optional[Union[Callable, List[List]]] = None, filter_function_basis: Optional[Basis] = None, filter_function_n_coeffs_deriv: Optional[Callable[[ndarray], ndarray]] = None, exponential_method: Optional[str] = None, frechet_deriv_approx_method: Optional[str] = None, is_skew_hermitian: bool = True, transfer_function: Optional[TransferFunction] = None, amplitude_function: Optional[AmplitudeFunction] = None)[source]
Bases:
Solver
This time slot computer solves the unperturbed Schroedinger equation.
All intermediary propagators are calculated and cached. Takes also input parameters of the base class.
- Parameters
calculate_propagator_derivatives (bool) – If true, the derivatives of the propagators by the control amplitudes are always calculated. Otherwise only on demand.
frechet_deriv_approx_method (Optional[str]) – Method for the approximation of the derivatives of the propagators, if they are not calculated analytically. Note that this method is never used if calculate_propagator_derivatives is set to True! Methods: None: The derivatives are not approximated but calculated by the control matrix class. ‘grape’: use the approximation given in the original GRAPE paper.
- _dyn_gen
The generators of the systems dynamics
- Type
List[ControlMatrix], len num_t
- calculate_propagator_derivatives
If true, the derivatives of the propagators by the control amplitudes are always calculated. Otherwise only on demand.
- Type
bool
- frechet_deriv_approx_method
Method for the approximation of the derivatives of the propagators, if they are not calculated analytically. Note that this method is never used if calculate_propagator_derivatives is set to True! Methods: ‘grape’: use the approximation given in the original GRAPE paper.
- Type
Optional[str]
- _compute_derivative_directions: List[List[q_mat.ControlMatrix]], shape [[] * num_ctrl] * num_t
Computes the directions of change with respect to the control parameters.
- _compute_dyn_gen: List[ControlMatrix], len num_t
Computes the dynamics generators.
- `Todo`
- raise a warning if an approximation method is set although the gradient
is always calculated.
- raise a warning if the grape approximation is chosen but its
requirement of small time steps is not met.
- class Simulator(solvers: Optional[Sequence[Solver]], cost_funcs: Optional[Sequence[CostFunction]], optimization_parameters=None, num_ctrl=None, times=None, num_times=None, record_performance_statistics: bool = True, numeric_jacobian: bool = False)[source]
Bases:
object
The Simulator class provides the interface for the Optimizer class.
It wraps the cost functions and optionally the gradient of the infidelity.
- Parameters
solvers (list of Solver) – These objects calculate the evolution of the system under consideration.
cost_funcs (list of CostFunction) – The cost functions which evaluate the quality of the simulated evolution.
optimization_parameters (numpy array, optional) – The initial pulse of shape (N_t, N_c) where N_t is the number of time steps and N_c the number of controlled parameters.
num_ctrl (int, optional) – The number of controlled parameters N_c.
times (numpy array or list, optional) – A one dimensional numpy array of the discrete time steps.
num_times (int, optional) – The number of time steps N_t. Mainly for consistency checks.
record_performance_statistics (bool) – If True, then the evaluation times of the cost functions and their gradients are stored.
- solvers
Instances of the time slot computers used by the cost functions.
- Type
list of Solver
- cost_funcs
Instances of the cost functions which are to be optimized.
- Type
list of CostFunction
- stats
Performance statistics.
- Type
Stats
- TODO
properly implement check method as parser
flags controlling how much data is saved
is the pulse attribute useful?
- check attributes for duplication: should times, num_ctrl and
num_times be saved at this level?
- compare_numeric_to_analytic_gradient(pulse: Optional[ndarray] = None, delta_eps: float = 1e-08, symmetric: bool = False)[source]
This function compares the numerical to the analytical gradient in order to serve as a consistency check.
- Parameters
pulse (array) – The pulse at which the gradient is evaluated.
delta_eps (float) – The finite difference.
symmetric (bool) – If True, then the finite differences are evaluated symmetrically around the pulse. Otherwise by forward finite differences.
- Returns
gradient_difference_norm (float) – The matrix norm of the difference between the numeric and analytic gradient.
gradient_difference_relative (float) – The relation of the aforementioned norm of the difference matrix and the average norm of the numeric and analytic gradient.
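A minimal sketch of such a finite-difference consistency check (illustrative only; the real method operates on the wrapped cost functions of the Simulator, and the names below are assumptions):

```python
import numpy as np

def numeric_gradient(cost, pulse, delta_eps=1e-8, symmetric=False):
    """Finite-difference gradient of a scalar cost w.r.t. each pulse entry."""
    grad = np.zeros_like(pulse, dtype=float)
    for idx in np.ndindex(pulse.shape):
        shifted = pulse.copy()
        shifted[idx] += delta_eps
        if symmetric:
            # Central differences: second-order accurate in delta_eps.
            shifted_m = pulse.copy()
            shifted_m[idx] -= delta_eps
            grad[idx] = (cost(shifted) - cost(shifted_m)) / (2 * delta_eps)
        else:
            # Forward differences: first-order accurate.
            grad[idx] = (cost(shifted) - cost(pulse)) / delta_eps
    return grad

# Consistency check against the analytic gradient of f(u) = sum(u**2): df/du = 2u.
pulse = np.array([[0.5, -1.0], [2.0, 0.25]])
num = numeric_gradient(lambda u: float(np.sum(u ** 2)), pulse, symmetric=True)
ana = 2 * pulse
difference_norm = np.linalg.norm(num - ana)
```

For a well-implemented analytic gradient, the difference norm is dominated by the finite-difference truncation and rounding error.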
- property cost_indices
Indices of cost functions.
- numeric_gradient(pulse: Optional[ndarray] = None, delta_eps: float = 1e-08, symmetric: bool = False) ndarray [source]
This function calculates the gradient numerically in order to serve as a consistency check for the analytically calculated gradient.
- Parameters
pulse (array) – The pulse at which the gradient is evaluated.
delta_eps (float) – The finite difference.
symmetric (bool) – If True, then the finite differences are evaluated symmetrically around the pulse. Otherwise by forward finite differences.
- Returns
gradients – The gradients as numpy array of shape (n_time, n_func, n_opers).
- Return type
array
- property pulse
Optimization parameters.
- wrapped_cost_functions(pulse=None)[source]
Wraps the cost functions of the fidelity computer.
This function coordinates the complete simulation including the application of the transfer function, the execution of the time slot computer and the evaluation of the actual cost functions.
- Parameters
pulse (numpy array, optional) – If no pulse is specified, the cost function is evaluated for the attribute pulse.
- Returns
costs (numpy array, shape (n_fun)) – Array of costs (i.e. infidelities).
costs_indices (list of str) – Names of the costs.
- wrapped_jac_function(pulse=None)[source]
Wraps the gradient calculation functions of the fidelity computer.
- Parameters
pulse (numpy array, optional) – shape: (num_t, num_ctrl) If no pulse is specified the cost function is evaluated for the attribute pulse.
- Returns
jac – Array of gradients of shape (num_t, num_func, num_amp).
- Return type
numpy array
- class StateInfidelity(solver: Solver, target: OperatorMatrix, label: Optional[List[str]] = None, computational_states: Optional[List[int]] = None, rescale_propagated_state: bool = False)[source]
Bases:
CostFunction
Quantum state infidelity.
- class StateInfidelitySubspace(solver: Solver, target: OperatorMatrix, dims: List[int], remove: List[int], label: Optional[List[str]] = None)[source]
Bases:
CostFunction
Quantum state infidelity on a subspace.
Assume that the simulated system operates on a product space and the target state is described only on a subspace. This class then calculates the partial trace over the neglected subspace.
- Parameters
dims (list of int,) – The dimensions of the subspaces. (Compare to the ptrace function of the MatrixOperator class.)
remove (list of int,) – The indices of the dims list corresponding to the subspaces that are to be eliminated. (Compare to the ptrace function of the MatrixOperator class.)
TODO –
support super operator formalism
handle leakage states?
complete the docstring
- class StateNoiseInfidelity(solver: SchroedingerSMonteCarlo, target: OperatorMatrix, label: Optional[List[str]] = None, computational_states: Optional[List[int]] = None, rescale_propagated_state: bool = False, neglect_systematic_errors: bool = True)[source]
Bases:
CostFunction
Averages the state infidelity over noise traces.
- class UnaryAnalyticAmpFunc(value_function: Callable[[float], float], derivative_function: Callable[[float], float])[source]
Bases:
AmplitudeFunction
A unary analytic amplitude function which is applied to each amplitude value. This class can be used for every application case where all transferred parameters are mapped one-to-one to the control amplitudes by a single unary function.
- Parameters
value_function (Callable float to float) – This scalar function expresses the functional dependency of the control amplitudes on the optimization parameters. The function is vectorized internally.
derivative_function (Callable float to float) – This scalar function describes the derivative of the control amplitudes. The function is vectorized internally.
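Conceptually, the class applies the scalar function elementwise and propagates gradients via the chain rule df/dx = df/du * du/dx. A minimal numpy stand-in (the class name and exact broadcasting below are illustrative assumptions, not the qopt source):

```python
import numpy as np

class UnaryAmpFuncSketch:
    """Applies a scalar function elementwise and propagates gradients
    with the chain rule df/dx = df/du * du/dx."""

    def __init__(self, value_function, derivative_function):
        # Vectorize the scalar functions, mirroring the documented behavior.
        self.value_function = np.vectorize(value_function)
        self.derivative_function = np.vectorize(derivative_function)

    def __call__(self, x):
        return self.value_function(x)

    def derivative_by_chain_rule(self, deriv_by_ctrl_amps, x):
        # du/dx has shape (num_t, num_par); broadcast over the num_f axis.
        du_by_dx = self.derivative_function(x)
        return deriv_by_ctrl_amps * du_by_dx[:, np.newaxis, :]

# Example: exponential amplitude function, whose derivative is itself.
amp_func = UnaryAmpFuncSketch(np.exp, np.exp)
x = np.array([[0.0], [1.0]])        # (num_t=2, num_par=1)
df_by_du = np.ones((2, 1, 1))       # (num_t, num_f=1, num_ctrl=1)
df_by_dx = amp_func.derivative_by_chain_rule(df_by_du, x)
```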
- angle_axis_representation(u: np.ndarray) -> Tuple[float, np.ndarray][source]
Calculates the representation of a 2x2 unitary matrix by a rotational axis and a rotation angle.
- Parameters
u (np.ndarray) – A unitary 2x2 matrix.
- Returns
beta, n – beta is the angle of the rotation and n the rotational axis.
- Return type
float, np.ndarray
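A possible implementation sketch, assuming the convention u ∝ exp(-1j * beta/2 * n·sigma) and stripping the global phase via the determinant (this convention and all names are assumptions, not taken from the qopt source):

```python
import numpy as np

def angle_axis_sketch(u):
    """Rotation angle beta and axis n with u proportional to
    exp(-1j * beta/2 * n.sigma); the global phase is removed first."""
    su = u / np.sqrt(np.linalg.det(u))       # map into SU(2)
    cos_half = np.real(np.trace(su)) / 2     # su = cos(b/2) I - i sin(b/2) n.sigma
    beta = 2 * np.arccos(np.clip(cos_half, -1.0, 1.0))
    sigma = [np.array([[0, 1], [1, 0]]),
             np.array([[0, -1j], [1j, 0]]),
             np.array([[1, 0], [0, -1]])]
    # Tr(sigma_k su) = -2i sin(b/2) n_k, so n_k = -Im Tr(sigma_k su) / norm.
    n = np.array([-np.imag(np.trace(s @ su)) for s in sigma])
    norm = np.linalg.norm(n)
    if norm > 1e-12:
        n = n / norm
    return beta, n

# Rotation about z by pi/2: u = exp(-i pi/4 sigma_z).
u_test = np.diag([np.exp(-1j * np.pi / 4), np.exp(1j * np.pi / 4)])
beta, n = angle_axis_sketch(u_test)
```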
- closest_unitary(matrix: OperatorMatrix)[source]
Calculate the unitary matrix U that is closest with respect to the operator norm distance to the general matrix A.
- Parameters
matrix (OperatorMatrix) – The matrix which shall be mapped to the closest unitary.
- Returns
unitary – The closest unitary to the propagator.
- Return type
OperatorMatrix
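The standard construction uses the polar decomposition obtained from an SVD: if A = W S V†, the closest unitary is U = W V†. A numpy sketch (illustrative, not the qopt implementation):

```python
import numpy as np

def closest_unitary_sketch(matrix):
    """Closest unitary in the operator (and Frobenius) norm sense,
    via the polar decomposition: A = W S V^dag  ->  U = W V^dag."""
    w, _, v_dag = np.linalg.svd(matrix)
    return w @ v_dag

# A slightly non-unitary matrix, e.g. from accumulated numerical error.
a = np.array([[1.0, 0.1], [0.0, 0.9]], dtype=complex)
u = closest_unitary_sketch(a)
```

Note that applying the map to a matrix that is already unitary returns the same matrix, since all singular values are 1.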
- convert_ket_vectorized_density_matrix_to_square(vectorized_density_matrix: Union[OperatorMatrix, array])[source]
Bring vectorized density matrix back into square form.
- Parameters
vectorized_density_matrix (OperatorMatrix or numpy array) – The density matrix.
- Returns
The density matrix in square matrix form.
- Return type
density_matrix
- Raises
ValueError: – If the operation is not defined for the input type.
ValueError: – If the density matrix is not given as ket vector.
- convert_unitary_to_super_operator(unitary: Union[OperatorMatrix, array])[source]
We assume that the unitary U shall be used to propagate a density matrix m like
\[U m U^\dagger\]which is equivalent to
\[( U^\ast \otimes U) \vec{m}\]- Parameters
unitary (OperatorMatrix or numpy array) – The unitary propagator.
- Returns
The unitary propagator in the Lindblad formalism. Same type as input.
- Return type
unitary_super_operator
- Raises
ValueError: – If the operation is not defined for the input type.
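The identity can be checked numerically with a column-wise (Fortran-order) vectorization; a small sketch with illustrative helper names:

```python
import numpy as np

def unitary_to_super_operator_sketch(u):
    """(U* ⊗ U), acting on column-wise vectorized density matrices:
    vec(U m U^dag) = (U* ⊗ U) vec(m)."""
    return np.kron(u.conj(), u)

def vec(m):
    """Column-wise (Fortran-order) vectorization as a ket vector."""
    return m.flatten(order='F').reshape(-1, 1)

# Check the identity on a Hadamard gate and a pure-state density matrix.
u = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
rho = np.array([[1, 0], [0, 0]], dtype=complex)
lhs = vec(u @ rho @ u.conj().T)
rhs = unitary_to_super_operator_sketch(u) @ vec(rho)
```

This follows from the general identity vec(ABC) = (C^T ⊗ A) vec(B) for column-stacking vectorization, with A = U and C = U†.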
- entanglement_fidelity(target: Union[ndarray, OperatorMatrix], propagator: Union[ndarray, OperatorMatrix], computational_states: Optional[List[int]] = None, map_to_closest_unitary: bool = False) float64 [source]
The entanglement fidelity between a simulated propagator and a target propagator.
- Parameters
propagator (Union[np.ndarray, ControlMatrix]) – The simulated propagator.
target (Union[np.ndarray, ControlMatrix]) – The target unitary evolution.
computational_states (Optional[List[int]]) – If set, the entanglement fidelity is only calculated for the specified subspace.
map_to_closest_unitary (bool) – If True, then the final propagator is mapped to the closest unitary before the infidelity is evaluated.
- Returns
fidelity – The entanglement fidelity of target_unitary.dag * unitary.
- Return type
float
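For full-space unitaries the entanglement fidelity is commonly computed as F = |Tr(V† U)|² / d²; a numpy sketch under that assumption (the subspace projection and closest-unitary options of the documented function are omitted here):

```python
import numpy as np

def entanglement_fidelity_sketch(target, propagator):
    """F = |Tr(V^dag U)|^2 / d^2 for d-dimensional unitaries V (target), U."""
    d = target.shape[0]
    return np.abs(np.trace(target.conj().T @ propagator)) ** 2 / d ** 2

sx = np.array([[0, 1], [1, 0]], dtype=complex)
perfect = entanglement_fidelity_sketch(sx, sx)            # identical gates
orthogonal = entanglement_fidelity_sketch(sx, np.eye(2))  # Tr(sigma_x) = 0
```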
- entanglement_fidelity_super_operator(target: Union[ndarray, OperatorMatrix], propagator: Union[ndarray, OperatorMatrix], computational_states: Optional[List[int]] = None) float64 [source]
The entanglement fidelity between a simulated propagator and a target propagator in the super operator formalism.
The entanglement fidelity between a propagator in the super operator formalism of dimension d^2 x d^2 and a target unitary operator of dimension d x d. If the system incorporates leakage states, the propagator is projected onto the computational space [1].
- Parameters
propagator (Union[np.ndarray, ControlMatrix]) – The simulated evolution propagator in the super operator formalism.
target (Union[np.ndarray, ControlMatrix]) – The target unitary evolution. (NOT as super operator.)
computational_states (Optional[List[int]]) – If set, the entanglement fidelity is only calculated for the specified subspace.
- Returns
fidelity – The entanglement fidelity of target_unitary.dag * unitary.
- Return type
float
Notes
[1] C. J. Wood and J. M. Gambetta: “Quantification and characterization of leakage errors”, Phys. Rev. A 97, 032306 (2018).
- ket_vectorize_density_matrix(density_matrix: Union[OperatorMatrix, array])[source]
Vectorizes a density matrix column-wise as ket vector.
- Parameters
density_matrix (OperatorMatrix or numpy array) – The density matrix.
- Returns
The density matrix as ket vector for the Liouville formalism.
- Return type
density_ket_vector
- Raises
ValueError: – If the operation is not defined for the input type.
ValueError: – If the density matrix is not given in square shape.
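Column-wise vectorization and its inverse reduce to Fortran-order reshapes in numpy; a round-trip sketch with illustrative names:

```python
import numpy as np

def ket_vectorize_sketch(rho):
    """Stack the columns of a d x d density matrix into a (d^2, 1) ket."""
    return rho.reshape(-1, 1, order='F')

def devectorize_sketch(rho_vec):
    """Inverse operation: fold a (d^2, 1) ket back into a d x d matrix."""
    d = int(round(np.sqrt(rho_vec.size)))
    return rho_vec.reshape(d, d, order='F')

rho = np.array([[0.5, 0.5j], [-0.5j, 0.5]])
round_trip = devectorize_sketch(ket_vectorize_sketch(rho))
```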
- plot_bloch_vector_evolution(forward_propagators: Sequence[OperatorMatrix], initial_state: OperatorMatrix, return_bloch: bool = False, **bloch_kwargs)[source]
Plots the evolution of the initial state under the given forward propagators on the Bloch sphere.
- Parameters
forward_propagators (list of DenseOperators) – The forward propagators whose evolution shall be plotted on the Bloch sphere.
initial_state (DenseOperator) – The initial state, i.e. the starting point of the plotted trajectory.
return_bloch (bool, optional) – If True, the Bloch sphere is returned as object.
bloch_kwargs (dict, optional) – Plotting parameters for the Bloch sphere.
- Returns
Only returned if return_bloch is set to true.
- Return type
bloch_sphere
- plot_energy_spectrum(hamiltonian: List[OperatorMatrix], x_val: array, x_label: str, ax=None, use_spectral_decomposition=True, **scatter_kwargs)[source]
Calculates and plots the energy spectra of Hamiltonians.
The colors demonstrate the contribution of individual base vectors.
- Parameters
hamiltonian (list of OperatorMatrix) – The Hamiltonians which shall provide the energy spectra. They need to be hermitian.
x_val (array of float, shape (n, )) – The x-values over which the eigenvalues are plotted.
x_label (str) – Label of the x-axis.
ax (matplotlib pyplot axes) – Instance of axes to plot the data in. Defaults to None.
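The underlying spectra are simply the sorted eigenvalues of each Hermitian operator; a plotting-free numpy sketch (all names here are illustrative):

```python
import numpy as np

def energy_spectra_sketch(hamiltonians):
    """Sorted eigenvalues of each Hermitian Hamiltonian, shape (n, d)."""
    return np.array([np.linalg.eigvalsh(h) for h in hamiltonians])

# Avoided crossing of a driven two-level system: H(x) = x*sz + 0.1*sx.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
x_val = np.linspace(-1, 1, 5)
spectra = energy_spectra_sketch([x * sz + 0.1 * sx for x in x_val])
```

At x = 0 the two levels are split only by the off-diagonal coupling, giving eigenvalues of -0.1 and +0.1.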
- run_optimization_parallel(optimizer, initial_pulses, processes=None)[source]
Parallel execution of the run_optimization Method of the Optimizer.
- Parameters
optimizer (Optimizer) – The Optimizer.
initial_pulses (numpy array, shape (num_init, num_t, num_ctrl)) – The initial pulse. Where num_init is the number of initial pulses.
processes (int, optional) – If an integer is given, then the propagation is calculated in this number of parallel processes. If 1 then no parallel computing is applied. If None then cpu_count() is called to use all cores available. Defaults to None.
- Returns
data – A DataContainer in which the OptimizationResults are saved.
- Return type
DataContainer
- state_fidelity(target: Union[ndarray, OperatorMatrix], propagated_state: Union[ndarray, OperatorMatrix], computational_states: Optional[List[int]] = None, rescale_propagated_state: bool = False) float64 [source]
Quantum state fidelity.
The quantum state fidelity between two pure quantum states is calculated as the absolute square of the wave function overlap
\[F = \vert \langle \psi_1 \vert \psi_2 \rangle \vert^2\]- Parameters
target (numpy array or operator matrix of shape (1, d)) – The target state is assumed to be given as bra-vector.
propagated_state (numpy array or operator matrix of shape (d, 1)) – The propagated state is assumed to be given as ket-vector.
computational_states (Optional[List[int]]) – If set, the entanglement fidelity is only calculated for the specified subspace.
rescale_propagated_state (bool) – If True, then the propagated state vector is rescaled to a norm of 1.
- Returns
quantum_state_fidelity (float) – The quantum state fidelity between the propagated and the target state.
TODO –
functions should not change type of input arrays
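A minimal numpy sketch of the fidelity formula above (illustrative; the subspace selection option is omitted, and the names are assumptions):

```python
import numpy as np

def state_fidelity_sketch(target_bra, propagated_ket, rescale=False):
    """|<psi_1|psi_2>|^2 for a bra of shape (1, d) and a ket of shape (d, 1)."""
    if rescale:
        # Rescale the propagated state to unit norm if requested.
        propagated_ket = propagated_ket / np.linalg.norm(propagated_ket)
    return float(np.abs(target_bra @ propagated_ket) ** 2)

bra = np.array([[1.0, 0.0]])                  # <0|
ket = np.array([[1.0], [1.0]]) / np.sqrt(2)   # |+>
fidelity = state_fidelity_sketch(bra, ket)
```

The overlap of |0> with |+> is 1/sqrt(2), so the fidelity is 0.5.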
- state_fidelity_subspace(target: Union[ndarray, OperatorMatrix], propagated_state: Union[ndarray, OperatorMatrix], dims: List[int], remove: List[int]) float64 [source]
Quantum state fidelity on a subspace.
We assume that the target state is defined only on a subspace of the total simulated Hilbert space. Thus we calculate the partial trace over our simulated state, rendering it into the density matrix of a potentially mixed state.
The quantum state fidelity between a pure state $\psi$ and a mixed quantum state $\rho$ is calculated as
\[F = \langle \psi \vert \rho \vert \psi \rangle\]- Parameters
target (numpy array or operator matrix of shape (1, d)) – The target state is assumed to be given as bra-vector.
propagated_state (numpy array or operator matrix) – The target state is assumed to be given as density matrix of shape(d, d) or shape (d^2, 1), or as ket-vector of shape (d, 1).
dims (list of int,) – The dimensions of the subspaces. (Compare to the ptrace function of the MatrixOperator class.)
remove (list of int,) – The indices of the dims list corresponding to the subspaces that are to be eliminated. (Compare to the ptrace function of the MatrixOperator class.)
- Returns
quantum_state_fidelity (float) – The quantum state fidelity between the propagated and the target state.
TODO –
functions should not change type of input arrays
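A numpy sketch of this subspace fidelity for the special case of tracing out the second qubit of a two-qubit system (the helper names and the fixed 2x2 partition are illustrative assumptions):

```python
import numpy as np

def ptrace_second_qubit(rho):
    """Partial trace over the second qubit of a two-qubit density matrix."""
    # Index rho as rho[(i,j),(k,l)] and sum over the second-qubit indices j = l.
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

def state_fidelity_subspace_sketch(target_bra, full_state_ket):
    """F = <psi| rho |psi> with rho the reduced state of the first qubit."""
    rho_full = full_state_ket @ full_state_ket.conj().T
    rho_sub = ptrace_second_qubit(rho_full)
    return float(np.real(target_bra @ rho_sub @ target_bra.conj().T))

# |psi> = |0> ⊗ |+>: the first qubit is exactly |0> after tracing out the second.
ket = np.kron(np.array([[1.0], [0.0]]),
              np.array([[1.0], [1.0]]) / np.sqrt(2))
fidelity = state_fidelity_subspace_sketch(np.array([[1.0, 0.0]]), ket)
```

Since the example state is a product state, tracing out the second qubit leaves the pure state |0> and the fidelity with the target |0> is exactly 1.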