3. Method
This page discusses how the code addresses the problems described in the Introduction.
Recall that the task is, in essence, to find the maximum of a function \(G(u)\) of a set of parameters \(u_1, \dots, u_M\). This task may be accomplished with any of the many optimization algorithms that have been invented, a large number of which require the gradient \(\nabla G(u)\).
We will therefore present, in Sections 3.1 through 3.5, the equations that are used to compute that gradient.
Then, we will briefly discuss the possible parametrizations, and how the optimization is done (essentially, by delegating the job to the nlopt library).
3.1. Closed systems, Hilbert state propagations
We first consider a closed quantum system, a situation modeled by:
and we wish to find the optimal control parameters \(u\) that maximize a functional of the system trajectory:
that is, the maximization of function
The key problem that qocttools is designed to solve is the computation of the gradient \(\nabla G(u)\) and, using this gradient, the maximization of \(G\) with an optimization algorithm. The expression for the gradient that qocttools uses is:
where the costate \(\vert\chi_u(t)\rangle\) is given by:
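The gradient and costate formulas referenced here typically take the following standard QOCT form; this is a sketch with assumed conventions (Hamiltonian \(H(u,t)\), \(\hbar = 1\)), not necessarily the exact expressions used by qocttools:

```latex
\begin{align*}
i\,\frac{\mathrm{d}}{\mathrm{d}t}\vert\psi_u(t)\rangle &= H(u,t)\,\vert\psi_u(t)\rangle,
  & \vert\psi_u(0)\rangle &= \vert\psi_0\rangle,\\
\frac{\partial G}{\partial u_m}
  &= 2\,\mathrm{Im}\int_0^T \!\mathrm{d}t\;
     \langle \chi_u(t)\vert\,\partial_{u_m}H(u,t)\,\vert \psi_u(t)\rangle,\\
i\,\frac{\mathrm{d}}{\mathrm{d}t}\vert\chi_u(t)\rangle &= H(u,t)\,\vert\chi_u(t)\rangle,
  & &\text{propagated backwards from } t=T.
\end{align*}
```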
One may define arbitrary expressions for \(F\), although a few typical choices cover most use cases; see the documentation of the target.Target class for the various predefined options. For example, a very common target is the expectation value of some operator \(O\):
In this case, Eq. (7) takes the simple form:
In any case, if the predefined options are not suitable, the user may define their own function \(F\) and pass it to qocttools.
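For the expectation-value target above, the standard relations are (a sketch, under the usual QOCT conventions):

```latex
\begin{align*}
F(\psi) &= \langle \psi(T)\vert\, O\, \vert \psi(T)\rangle,\\
\vert \chi_u(T)\rangle &= O\,\vert \psi_u(T)\rangle
  \quad\text{(boundary condition for the backward propagation).}
\end{align*}
```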
The code also admits the option of defining several initial states, and a target functional defined in terms of them. The model is in this case a set of equations:
The target definition would then involve all systems:
and the gradient expression would be a simple generalization of the previous gradient expression:
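As a sketch of that generalization, for \(Q\) initial states \(\psi^{(q)}\) (the label \(Q\) is assumed here), the standard multi-state expressions read:

```latex
\begin{align*}
G(u) &= F\big(\psi^{(1)}_u(T),\dots,\psi^{(Q)}_u(T)\big),\\
\frac{\partial G}{\partial u_m}
  &= \sum_{q=1}^{Q} 2\,\mathrm{Im}\int_0^T\!\mathrm{d}t\;
     \langle \chi^{(q)}_u(t)\vert\,\partial_{u_m}H(u,t)\,\vert \psi^{(q)}_u(t)\rangle .
\end{align*}
```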
3.2. Closed systems, evolution operator propagations
Sometimes, the problem is posed in terms of the action of a given time-dependent perturbation on the whole Hilbert space. In other words, the goal is not to optimize the trajectory of a system starting from one initial state, but to design the behaviour of all states simultaneously. This calls for a formulation of the problem in terms of the evolution operator:
The target is then defined in terms of this operator:
that is, the maximization of function
The expression for the gradient of this function implemented in the code is:
Here, we are using the Frobenius dot product for operators: \(A\cdot B = {\rm Tr}\,A^\dagger B\).
The costate \(B_u(t)\) is now defined by:
For example, if the goal is to achieve a certain evolution operator (a quantum gate), we may define a target in the form:
If this is the case, Eq. (20) becomes:
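A common concrete choice for such a gate target is the normalized gate fidelity, given here as a sketch (with \(U_{\rm target}\) the desired gate and \(N\) the Hilbert-space dimension, both names assumed):

```latex
F(U) = \frac{1}{N^2}\,
  \Big\vert \mathrm{Tr}\,\big[\, U_{\mathrm{target}}^{\dagger}\, U(T) \,\big] \Big\vert^{2},
\qquad 0 \le F(U) \le 1 .
```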
Once again, the previous setup can be extended to multiple evolution operators driven by different Hamiltonians.
3.3. Open systems, density matrix propagations
Now we consider the problem of optimizing the evolution of an open quantum system. qocttools only considers one possible model for an open quantum system: Lindblad’s equation:
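For reference, the standard form of Lindblad's equation, written as a sketch with jump operators \(V_{ij}\) (which reappear in the costate discussion below) and rates \(\gamma_{ij}\) (an assumed label):

```latex
\dot{\rho}(t) = -i\,\big[\, H(u,t),\, \rho(t) \,\big]
  + \sum_{ij}\gamma_{ij}\Big( V_{ij}\,\rho(t)\,V_{ij}^{\dagger}
  - \tfrac{1}{2}\big\{ V_{ij}^{\dagger}V_{ij},\, \rho(t) \big\} \Big) .
```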
The target function is now a functional of \(\rho(T)\):
The gradient is given by:
where \(\rho_u\) is the solution density matrix for the \(u\) control parameters, and the costate \(\sigma_u\) density matrix is determined by:
Note that the dissipative terms for the costate \(\sigma\) differ from those for \(\rho\) in two ways: first, they have the opposite sign, and second, the \(V_{ij}\rho(t)V_{ij}^\dagger\) term is replaced by \(V_{ij}^\dagger\sigma(t)V_{ij}\).
3.4. Floquet optimizations: closed systems
Let us recall that the objective in this case is to optimize a function
of the pseudoenergies \(\varepsilon\) and Floquet modes \(\phi\). In fact, at the moment qocttools only accepts functions of the pseudoenergies: \(f=f(\varepsilon)\). Function \(G\) is then defined as \(G(u) = f(\varepsilon(u))\). There are two possibilities for the computation of the gradient of this function.
3.4.1. QOCT formula for the evolution operator
The first option consists of using the evolution operator in a way similar to the one described in Section 3.2. One may define a target functional:
and then, \(G(u) = F(U_u)\). The gradient formula would in this case be:
The costate \(B_u(t)\) is defined by:
Getting the boundary condition at time \(t=T\) through (34) is not trivial, because \(F\) is not defined directly in terms of the evolution operator, but indirectly through the pseudoenergies. Currently, the solution to this problem is implemented in qocttools in a simple but probably not very efficient manner: by computing these derivatives with finite-difference formulas; for example:
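The kind of central finite-difference formula mentioned here can be sketched as follows; `fd_gradient` is a hypothetical helper written for illustration, not part of the qocttools API:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Central finite-difference gradient of a scalar function f at point x:
    df/dx_m ~ [f(x + h e_m) - f(x - h e_m)] / (2 h)."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for m in range(x.size):
        e = np.zeros_like(x)
        e[m] = h
        grad[m] = (f(x + e) - f(x - e)) / (2.0 * h)
    return grad

# Example: f(x) = x0^2 + 3*x1 has gradient (2*x0, 3); at (1, 2) that is (2, 3).
g = fd_gradient(lambda x: x[0]**2 + 3.0 * x[1], [1.0, 2.0])
print(np.round(g, 6))  # → [2. 3.]
```

The central difference is exact for quadratic functions up to floating-point noise; for the matrix-valued derivatives discussed in the text, the same stencil is applied component by component.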
3.4.2. Formula based on perturbation theory
There is an alternative to the method described previously for optimizing properties of a periodically driven system through the Floquet pseudoenergies.
The gradient of \(G(u) = f(\varepsilon(u))\) is:
The user, who supplies \(f(\varepsilon)\) (the function that is to be maximized), should also supply functions for the derivatives \(\frac{\partial f}{\partial \varepsilon_\alpha}\) (some default options are included in the code). The problem then lies in the computation of \(\frac{\partial \varepsilon_\alpha}{\partial u_m}\).
In order to obtain those derivatives, one may work in Floquet-Hilbert space. Modes and pseudoenergies are defined there through an eigenvalue equation:
One may then use perturbation theory in Floquet-Hilbert space and arrive at the following expression:
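The standard first-order perturbation-theory (Hellmann-Feynman) result in Floquet-Hilbert space reads, as a sketch (with \(\mathcal{H}_F\) denoting the Floquet Hamiltonian and \(\langle\langle\cdot\vert\cdot\rangle\rangle\) the extended-space inner product, both names assumed):

```latex
\frac{\partial \varepsilon_\alpha}{\partial u_m}
  = \Big\langle\!\Big\langle \phi_\alpha \,\Big\vert\,
      \frac{\partial \mathcal{H}_F}{\partial u_m}
    \,\Big\vert\, \phi_\alpha \Big\rangle\!\Big\rangle .
```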
3.5. Floquet optimizations: non-equilibrium steady states
Let us now consider an open quantum system, driven by a periodic perturbation. In general, the system will decay to a state that shares the same periodicity as the perturbation itself, a state called a non-equilibrium steady state (NESS).
The problem addressed by qocttools is finding the periodic perturbation whose associated NESS maximizes the (time-averaged) value of some observable. We briefly summarize here how this is done.
Let us start by rewriting Lindblad's equation, (23), in a simpler way:
The NESSs are the periodic solutions to this equation:
We assume that, for each choice of control parameters \(u\), there is one and only one NESS. If we now consider an observable \(O\), the goal is the maximization of function
The tasks that qocttools must perform are to compute the NESS \(\rho_u\) and the gradient \(\frac{\partial \rho}{\partial u_m}\), so that the gradient of \(G(u)\) can be computed as:
qocttools starts by considering the Fourier series of the elements of the density matrix and of the Lindbladian operator:
Note that \(\rho_\alpha(t)\) denotes the \(\alpha\) component of the (time-evolving) density matrix in vectorized form. Likewise, \(\mathcal{L}_{\alpha\beta}(u, t)\) is the \(\alpha\beta\) component of the Liouvillian operator (we will drop the dependence on \(u\) for clarity). Lindblad's equation can then be rewritten as:
defining
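A sketch of how this Fourier-space rewriting typically looks, assuming driving frequency \(\omega\) and the component notation introduced above:

```latex
ik\omega\,\rho^{(k)}_{\alpha}
  = \sum_{k'}\sum_{\beta}
    \mathcal{L}^{(k-k')}_{\alpha\beta}\,\rho^{(k')}_{\beta},
\qquad k \in \mathbb{Z},
```

that is, one large linear system for the stacked vector of Fourier components \(\rho^{(k)}_\alpha\).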
Equation (45) is used by qocttools to compute the NESS. Then, by taking variations with respect to \(u\), one can compute the gradient of the NESS density matrix:
There are some difficulties related to these equations, since the linear operator is in fact not invertible; see [CS23] for details.
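To illustrate the invertibility issue, a minimum-norm least-squares solve is one standard way of handling a singular linear system; the matrices below are a toy example, not qocttools' actual operators:

```python
import numpy as np

# Toy singular system A x = b: A has a zero eigenvalue, mimicking a
# non-invertible linear operator.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])   # singular: the last row is zero
b = np.array([4.0, 3.0, 0.0])     # consistent right-hand side

# np.linalg.solve would raise LinAlgError here; lstsq instead returns the
# minimum-norm solution among all solutions.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(rank)  # → 2
print(x)     # → [2. 3. 0.]
```

In the actual NESS problem, the missing piece of information is typically supplied by a physical normalization such as \({\rm Tr}\,\rho = 1\), which restores uniqueness; see [CS23] for how this is handled.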
3.6. Parametrization and Optimization
The control parameters are used to parametrize the pulses; see pulses.pulse for the precise definition of each pulse.
In order to optimize function \(G(u)\), one may then use …
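As a minimal sketch of this final optimization step, here is a plain gradient-ascent loop on a toy function. qocttools itself delegates this step to nlopt's gradient-based algorithms; the code below is an illustration only, and `maximize` is a hypothetical helper:

```python
import numpy as np

def maximize(G, gradG, u0, lr=0.1, steps=200):
    """Plain gradient ascent on G: a toy stand-in for the nlopt
    algorithms that qocttools actually uses."""
    u = np.asarray(u0, dtype=float)
    for _ in range(steps):
        u = u + lr * gradG(u)   # step uphill along the gradient
    return u

# Toy target: G(u) = -(u0 - 1)^2 - (u1 + 2)^2, maximized at (1, -2).
G = lambda u: -(u[0] - 1.0)**2 - (u[1] + 2.0)**2
gradG = lambda u: np.array([-2.0 * (u[0] - 1.0), -2.0 * (u[1] + 2.0)])
u_opt = maximize(G, gradG, [0.0, 0.0])
print(np.round(u_opt, 4))  # → [ 1. -2.]
```

In practice one would replace the fixed step size and iteration count with a proper line search and convergence criterion, which is precisely what a library such as nlopt provides.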