3. Method

On this page we discuss how the code solves the problems described in the Introduction.

Recall that the task is, in essence, to find the maximum of a function \(G(u)\) of a set of parameters \(u_1, \dots, u_M\). This task may be accomplished with any of the many optimization algorithms that have been developed; many of them require the gradient \(\nabla G(u)\).

Sections 3.1 through 3.5 present the equations that are used to compute that gradient.

Then, we briefly discuss the possible parametrizations, and how the optimization is performed (essentially, by delegating the job to the nlopt library).
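To make the calling pattern concrete, here is a minimal sketch of gradient-based maximization. This is not qocttools or nlopt code: the toy objective and all names are hypothetical, and scipy.optimize is used in place of nlopt purely for illustration (the structure — an objective that returns the value and its gradient, consumed by a quasi-Newton optimizer — is the same).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy objective standing in for G(u):
# G(u) = -|u - u_star|^2, whose maximum is at u_star = (1, 2).
u_star = np.array([1.0, 2.0])

def minus_G_and_grad(u):
    """Return (-G(u), -grad G(u)); scipy minimizes, so both are negated."""
    diff = u - u_star
    G = -np.dot(diff, diff)
    grad = -2.0 * diff
    return -G, -grad

# L-BFGS consumes the analytic gradient via jac=True
res = minimize(minus_G_and_grad, x0=np.zeros(2), jac=True, method="L-BFGS-B")
u_opt = res.x
```

In nlopt the same pattern appears as an objective callback that returns \(G(u)\) and fills in the gradient array in place.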

3.1. Closed systems, Hilbert state propagations

We consider a closed quantum system, modeled by:

(1)\[i\frac{\partial}{\partial t}\vert\phi(t)\rangle = H(u, t)\vert\phi(t)\rangle\]
(2)\[\vert\phi(0)\rangle = \vert\phi_0\rangle\]

and we wish to find the optimal control parameters \(u\) that maximize a functional of the system trajectory:

(3)\[F = F(\phi(T), u)\]

that is, the maximization of function

(4)\[G(u) = F(\phi_u(T), u)\]

The key problem that qocttools is designed to solve is the computation of the gradient \(\nabla G(u)\) and, using this gradient, the search for the maximal value of \(G\) with an optimization algorithm. The expression for the gradient that qocttools uses is:

(5)\[\frac{\partial G}{\partial u_m}(u) = \frac{\partial F}{\partial u_m}(\psi_u(T), u) + 2 {\rm Im} \int_0^T\!\!{\rm d}t\; \langle \chi_u(t) \vert \frac{\partial H}{\partial u_m}(u) \vert \psi_u(t)\rangle\]

where the costate \(\vert\chi_u(t)\rangle\) is given by:

(6)\[i\frac{\partial}{\partial t}\vert\chi_u(t)\rangle = H^{\dagger}(u, t)\vert\chi_u(t)\rangle\]
(7)\[\vert\chi_u(T)\rangle = \frac{\partial F}{\partial \psi^*_u(T)}(\psi_u(T), u)\]

One may define arbitrary expressions for \(F\); some typical choices, however, are predefined. See the documentation of the target.Target class for the various predefined options. For example, a very common target is the expectation value of some operator \(O\):

(8)\[F(\psi(T), u) = \langle\psi(T)\vert O \vert\psi(T)\rangle\]

In this case, Eq. (7) takes the simple form:

(9)\[\vert\chi_u(T)\rangle = O \vert\psi_u(T)\rangle\]
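The gradient recipe of Eqs. (5)–(9) can be sketched numerically. The following is a minimal illustration, not qocttools code: a hypothetical two-level model with a single, time-independent control parameter, \(H(u) = H_0 + u V\), target \(F = \langle\psi(T)\vert O\vert\psi(T)\rangle\). The state is propagated forward, the costate backward from the boundary condition of Eq. (9), and the integral in Eq. (5) is evaluated with the trapezoidal rule.

```python
import numpy as np
from scipy.linalg import expm

# Toy model (all names hypothetical): H(u) = H0 + u*V, single control u.
H0 = np.diag([0.0, 1.0]).astype(complex)
V = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # dH/du
O = np.diag([0.0, 1.0]).astype(complex)                  # observable of Eq. (8)
psi0 = np.array([1.0, 0.0], dtype=complex)
T, nsteps = 1.0, 2000
dt = T / nsteps

def G_and_grad(u):
    H = H0 + u * V
    Uf, Ub = expm(-1j * H * dt), expm(1j * H * dt)
    # forward propagation of psi, Eq. (1)
    psis = [psi0]
    for _ in range(nsteps):
        psis.append(Uf @ psis[-1])
    G = np.vdot(psis[-1], O @ psis[-1]).real
    # backward propagation of the costate from Eq. (9); H is Hermitian here
    chis = [O @ psis[-1]]
    for _ in range(nsteps):
        chis.append(Ub @ chis[-1])
    chis.reverse()
    # trapezoidal rule for the integral of Eq. (5); F has no explicit u-dependence
    vals = np.array([np.vdot(chis[k], V @ psis[k]) for k in range(nsteps + 1)])
    integral = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return G, 2.0 * integral.imag
```

The analytic gradient can be validated against a central finite difference of \(G(u)\); with this step size the two agree to within the trapezoidal discretization error.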

In any case, if the predefined options are not suitable, the user may define their own function \(F\) and pass it to qocttools.

The code also admits the option of defining several initial states, and a target functional defined in terms of them. In this case, the model is a set of equations:

(10)\[i\frac{\partial}{\partial t}\vert\phi^i(t)\rangle = H^i(u, t)\vert\phi^i(t)\rangle\]
(11)\[\vert\phi^i(0)\rangle = \vert\phi^i_0\rangle\]

The target definition would then involve all systems:

(12)\[F = F(\lbrace\phi^i(T)\rbrace_i, u)\]

and the gradient expression would be a simple generalization of the previous gradient expression:

(13)\[\frac{\partial G}{\partial u_m}(u) = \frac{\partial F}{\partial u_m}(\lbrace\psi^i_u(T)\rbrace_i, u) + \sum_i 2 {\rm Im} \int_0^T\!\!{\rm d}t\; \langle \chi^i_u(t) \vert \frac{\partial H^i}{\partial u_m}(u) \vert \psi^i_u(t)\rangle\]

3.2. Closed systems, evolution operator propagations

Sometimes, the problem is posed in terms of the action of a given time-dependent perturbation on the whole Hilbert space. In other words, the problem is not optimizing the trajectory of a system starting from one initial state, but designing simultaneously the behaviour of all states. This calls for a formulation of the problem in terms of the evolution operator:

(14)\[i\frac{\partial}{\partial t}U(t) = H(u, t)U(t)\]
(15)\[U(0) = I\]

The target is then defined in terms of this operator:

(16)\[F = F(U(T), u)\]

that is, the maximization of function

(17)\[G(u) = F(U_u(T), u)\]

The expression for the gradient of this function implemented in the code is:

(18)\[\frac{\partial G}{\partial u_m}(u) = \frac{\partial F}{\partial u}(U_u(T), u) + 2 {\rm Im} \int_0^T\!\!{\rm d}t\; B_u(t) \cdot \frac{\partial H}{\partial u_m}(u) U_u(t)\]

Here, we are using the Frobenius dot product for operators: \(A\cdot B = {\rm Tr}\, A^\dagger B\).

The costate \(B_u(t)\) is now defined by:

(19)\[i\frac{\partial}{\partial t}B_u(t) = H^\dagger(u, t)B_u(t)\]
(20)\[B_u(T) = \frac{\partial F}{\partial U^*_u(T)}(U_u(T), u)\]

For example, if the goal is to achieve a certain evolution operator (a quantum gate), we may define a target in the form:

(21)\[F(U(T), u) = \vert U(T)\cdot U_{\rm target}\vert^2\]

If this is the case, Eq. (20) is:

(22)\[B_u(T) = (U_{\rm target}\cdot U(T)) U_{\rm target}\]
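The gate target of Eq. (21) and its costate boundary condition, Eq. (22), are short enough to write down directly. The following sketch (illustrative only, not the qocttools implementation) defines the Frobenius product, the fidelity-like target, and the boundary costate:

```python
import numpy as np

rng = np.random.default_rng(0)

def frob(A, B):
    """Frobenius product A.B = Tr(A^dagger B)."""
    return np.trace(A.conj().T @ B)

def F_gate(U, Utarget):
    """Eq. (21): F(U) = |U . Utarget|^2."""
    return abs(frob(U, Utarget))**2

def B_final(U, Utarget):
    """Eq. (22): B(T) = (Utarget . U) Utarget, i.e. dF/dU*."""
    return frob(Utarget, U) * Utarget
```

One can check B_final against finite differences of F_gate through the Wirtinger relation \(\partial F/\partial U^*_{ij} = \frac{1}{2}(\partial F/\partial\,{\rm Re}U_{ij} + i\,\partial F/\partial\,{\rm Im}U_{ij})\), which holds for any real-valued \(F\).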

Once again, the previous setup can be extended to multiple evolution operators driven by different Hamiltonians.

3.3. Open systems, density matrix propagations

Now we consider the problem of optimizing the evolution of an open quantum system. qocttools implements only one model for an open quantum system: Lindblad's equation:

(23)\[\dot{\rho}(t) = -i\left[H(u, t), \rho(t)\right] + \sum_{ij} \gamma_{ij} \left( V_{ij}\rho(t)V^\dagger_{ij} - \frac{1}{2} \lbrace V_{ij}^\dagger V_{ij}, \rho(t) \rbrace\right)\]
(24)\[\rho(0) = \rho_0\]

The target function is now a functional of \(\rho(T)\):

(25)\[F = F(\rho(T), u)\]
(26)\[G(u) = F(\rho_u(T), u)\]

The gradient is given by:

(27)\[\frac{\partial G}{\partial u_m}(u) = \frac{\partial F}{\partial u_m}(\rho_u(T),u) + 2{\rm Im} \int_0^T\!{\rm d}t\; {\rm Tr}\left( \sigma^\dagger_u(t) \left[ \frac{\partial H}{\partial u_m}(u, t), \rho_u(t) \right] \right) \,.\]

where \(\rho_u\) is the solution density matrix for the \(u\) control parameters, and the costate \(\sigma_u\) density matrix is determined by:

(28)\[\dot{\sigma}_u(t) = -i \left[ H(u, t), \sigma_u(t)\right] -\sum_{ij}\gamma_{ij} \left( V^\dagger_{ij}\sigma_u(t)V_{ij} - \frac{1}{2}\lbrace V^\dagger_{ij} V_{ij}, \sigma_u(t)\rbrace\right)\]
(29)\[\sigma_u(T) = \frac{\partial F}{\partial \rho^*}(\rho_u(T), u)\,.\]

Note that the dissipative terms for the costate \(\sigma\) differ from those for \(\rho\) in two ways: first, they have the opposite sign, and second, the \(V_{ij}\rho(t)V_{ij}^\dagger\) term is replaced by \(V_{ij}^\dagger\sigma(t)V_{ij}\).
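These two changes mean that \(\sigma\) evolves under the Hilbert–Schmidt adjoint of the Lindbladian, which is precisely what makes the pairing \({\rm Tr}\,\sigma^\dagger(t)\rho(t)\) constant in time — the property the gradient formula relies on. The following sketch (a toy two-level model with hypothetical parameters, not qocttools code) integrates Eqs. (23) and (28) side by side with RK4 and checks that the pairing is conserved:

```python
import numpy as np

# Toy two-level open system (hypothetical parameters); one jump operator V = sigma_minus.
H = np.array([[0.0, 0.5], [0.5, 1.0]], dtype=complex)
Vj = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
gamma = 0.3

def Lrho(rho):
    """Right-hand side of Eq. (23)."""
    comm = H @ rho - rho @ H
    diss = Vj @ rho @ Vj.conj().T \
         - 0.5 * (Vj.conj().T @ Vj @ rho + rho @ Vj.conj().T @ Vj)
    return -1j * comm + gamma * diss

def Lsigma(sig):
    """Right-hand side of Eq. (28): transposed jump term, opposite dissipator sign."""
    comm = H @ sig - sig @ H
    diss = Vj.conj().T @ sig @ Vj \
         - 0.5 * (Vj.conj().T @ Vj @ sig + sig @ Vj.conj().T @ Vj)
    return -1j * comm - gamma * diss

def rk4(f, x, dt):
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]], dtype=complex)
sigma = np.array([[0.1, 0.4], [0.3, -0.2]], dtype=complex)  # arbitrary costate seed
pairing0 = np.trace(sigma.conj().T @ rho)
dt = 1e-3
for _ in range(1000):
    rho = rk4(Lrho, rho, dt)
    sigma = rk4(Lsigma, sigma, dt)
pairing1 = np.trace(sigma.conj().T @ rho)   # equal to pairing0 up to RK4 error
```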

3.4. Floquet optimizations: closed systems

Let us recall that the objective in this case is to optimize a function

(30)\[f = f(\varepsilon, \phi)\]

of the pseudoenergies \(\varepsilon\) and Floquet modes \(\phi\). In fact, at the moment qocttools only accepts functions of the pseudoenergies: \(f=f(\varepsilon)\). Function \(G\) is then defined as \(G(u) = f(\varepsilon(u))\). There are two possibilities for the computation of the gradient of this function.

3.4.1. QOCT formula for the evolution operator

The first option consists in using the evolution operator in a way similar to the one described in Section 3.2. One may define a target functional:

(31)\[F(U) = f(\varepsilon(U))\]

and then, \(G(u) = F(U_u)\). The gradient formula would in this case be:

(32)\[\frac{\partial G}{\partial u_m}(u) = 2 {\rm Im} \int_0^T\!\!{\rm d}t\; B_u(t) \cdot \frac{\partial H}{\partial u_m}(u) U_u(t)\]

The costate \(B_u(t)\) is defined by:

(33)\[i\frac{\partial}{\partial t}B_u(t) = H^\dagger(u, t)B_u(t)\]
(34)\[B_u(T) = \frac{\partial F}{\partial U^*_u(T)}(U_u(T), u)\]

Getting the boundary condition at time \(t=T\) through (34) is not trivial, because \(F\) is not defined directly in terms of the evolution operator, but indirectly through the pseudoenergies. Currently, qocttools implements the solution to this problem in a simple but probably not very efficient manner: these derivatives are computed with finite-difference formulas; for example:

(35)\[\frac{\partial F}{\partial {\rm Re}U_{ij}} = \lim_{\Delta \to 0} \Delta^{-1} \left[f(\varepsilon(U + \Delta \delta_{ij})) - f(\varepsilon(U)) \right]\,,\]

where \(\delta_{ij}\) is the matrix whose only nonzero entry is a one at position \(ij\).
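A minimal sketch of this finite-difference evaluation follows (illustrative only; the pseudoenergy extraction and the function f_of_eps are hypothetical stand-ins). The target is chosen symmetric in the eigenvalues so that the unspecified eigenvalue ordering returned by np.linalg.eigvals does not matter:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
T = 1.0

def pseudoenergies(U):
    """Pseudoenergies from the period-T evolution operator: eps = (i/T) log(eigvals)."""
    lam = np.linalg.eigvals(U)
    return (1j / T) * np.log(lam)

def f_of_eps(eps):
    """Hypothetical target, symmetric in the eigenvalues (ordering-independent)."""
    return np.sum(np.real(eps)**2)

def F(U):
    return f_of_eps(pseudoenergies(U))

def dF_dReU(U, delta=1e-6):
    """Eq. (35), evaluated with central differences."""
    n = U.shape[0]
    g = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n)); E[i, j] = 1.0
            g[i, j] = (F(U + delta * E) - F(U - delta * E)) / (2 * delta)
    return g

# a random unitary as a stand-in for U(T)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U = expm(1j * (A + A.conj().T))
```

A derivative with respect to \({\rm Im}U_{ij}\) is obtained analogously by perturbing with \(i\Delta\,\delta_{ij}\).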

3.4.2. Formula based on perturbation theory

There is an alternative to the previously described method for optimizing properties of a periodically driven system using the Floquet pseudoenergies.

The gradient of \(G(u) = f(\varepsilon(u))\) is:

(36)\[\frac{\partial G}{\partial u_m}(u) = \sum_{\alpha} \frac{\partial f}{\partial \varepsilon_\alpha}(\varepsilon(u)) \frac{\partial \varepsilon_\alpha}{\partial u_m}(u)\]

The user, who supplies \(f(\varepsilon)\) (the function that is to be maximized), should also supply functions for the derivatives \(\frac{\partial f}{\partial \varepsilon_\alpha}\) (some default options are included in the code). The problem then lies in the computation of \(\frac{\partial \varepsilon_\alpha}{\partial u_m}\).

In order to get those derivatives, one may work in Floquet-Hilbert space. Modes and pseudoenergies are defined there through an eigenvalue equation:

(37)\[\left[ H(u) -i\frac{\rm d}{{\rm d}t}\right]\vert \phi_\alpha(u)\rangle\rangle = \varepsilon_\alpha(u) \vert \phi_\alpha(u)\rangle\rangle\,.\]

One may then use perturbation theory in Floquet-Hilbert space and arrive at the following expression:

(38)\[\frac{\partial \varepsilon_\alpha}{\partial u_m} = \langle\langle \phi_\alpha(u)\vert \frac{\partial }{\partial u_m} \left[H(u) -i \frac{\rm d}{{\rm d}t} \right]\vert \phi_\alpha(u)\rangle\rangle = \frac{1}{T}\int_0^T\!\!{\rm d}t\; \langle \phi_\alpha(u; t)\vert \frac{\partial H}{\partial u_m}(u; t)\vert \phi_\alpha(u; t)\rangle\]
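In the static limit (a time-independent Hamiltonian), the Floquet modes reduce to ordinary eigenstates and Eq. (38) reduces to the Hellmann–Feynman theorem: \(\partial\varepsilon_\alpha/\partial u_m = \langle\phi_\alpha|\partial H/\partial u_m|\phi_\alpha\rangle\). This limiting case is easy to verify numerically (toy matrices, hypothetical; not qocttools code):

```python
import numpy as np

# Static limit of Eq. (38): H(u) = H0 + u*V, so dH/du = V and the time
# average collapses to a plain expectation value over the eigenvectors.
H0 = np.array([[0.0, 0.3, 0.0],
               [0.3, 1.0, 0.2],
               [0.0, 0.2, 2.0]])
V = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

def eps(u):
    """Eigenvalues (pseudoenergies in the static limit), ascending order."""
    return np.linalg.eigvalsh(H0 + u * V)

def deps_du(u):
    """Eq. (38) in the static limit: <phi_alpha| dH/du |phi_alpha>."""
    w, phi = np.linalg.eigh(H0 + u * V)
    return np.array([np.real(np.vdot(phi[:, a], V @ phi[:, a])) for a in range(3)])
```

The result can be checked against a central finite difference of the eigenvalues themselves.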

3.5. Floquet optimizations: non-equilibrium steady states

Let us now consider an open quantum system, driven by a periodic perturbation. In general, the system will decay to a state that shares the same periodicity as the perturbation itself, a state that is called a non-equilibrium steady state (NESS).

The problem addressed by qocttools is finding the periodic perturbation whose associated NESS maximizes the (time-averaged) value of some observable. We briefly summarize here how this is done.

First, let us start by rewriting Lindblad’s equation, (23), in a simpler way:

(39)\[\dot{\rho}(t) = \mathcal{L}(u, t) \rho(t)\]

The NESSs are the periodic solutions to this equation:

(40)\[\rho_u(t+T) = \rho_u(t)\]

We assume that, for each choice of control parameters \(u\), there is one and only one NESS. If we now consider an observable \(O\), the goal is the maximization of function

(41)\[G(u) = \frac{1}{T}\int_0^T\!\!{\rm d}t\; {\rm Tr}\left[O\rho_u(t)\right]\]

The tasks that qocttools must perform are to compute the NESS \(\rho_u\) and the gradient \(\frac{\partial \rho_u}{\partial u_m}\), in order to be able to compute the gradient of \(G(u)\) as:

(42)\[\frac{\partial G}{\partial u_m} = \frac{1}{T}\int_0^T\!\!{\rm d}t\; {\rm Tr} \left[O\frac{\partial \rho_u}{\partial u_m}(t)\right]\]

qocttools starts by considering the Fourier series of the elements of the density matrix and of the Lindbladian operator:

(43)\[\rho_{\alpha, n} = \frac{1}{T}\int_0^T\!\!{\rm d}t\; e^{-i\omega_n t}\rho_{\alpha}(t)\]
(44)\[\mathcal{L}_{\alpha\beta, n}(u) = \frac{1}{T}\int_0^T\!\!{\rm d}t\; e^{-i\omega_n t}\mathcal{L}_{\alpha\beta}(u, t)\]

Note that \(\rho_\alpha(t)\) denotes the \(\alpha\) component of the (time-evolving) density matrix in vectorized form. Likewise, \(\mathcal{L}_{\alpha\beta}(u, t)\) is the \(\alpha\beta\) component of the Liouvillian operator (for \(\rho\), we drop the dependence on \(u\) for clarity). Lindblad’s equation can then be rewritten as:

(45)\[\sum_\beta \sum_{m=0}^{N-1} \overline{\mathcal{L}}_{\alpha n,\beta m}(u) \rho_{\beta, m} = 0.\]

where we define

(46)\[\overline{\mathcal{L}}_{\alpha n,\beta m}(u) = \mathcal{L}_{\alpha\beta,n-m}(u) -i\delta_{nm}\delta_{\alpha\beta}\omega_m.\]

Equation (45) is used by qocttools to compute the NESS. Then, by taking variations with respect to \(u\), one can compute the gradient of the NESS density matrix:

(47)\[\overline{\mathcal{L}}(u)\frac{\partial \rho_u}{\partial u_m} = - \frac{\partial \overline{\mathcal{L}}}{\partial u_m}(u)\rho_u.\]

There are some difficulties related to these equations, since the linear operator is in fact not invertible; see [CS23] for details.
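The structure of Eq. (45) and the non-invertibility issue can both be seen already in the static limit, where only the \(n=0\) harmonic survives, \(\omega_0 = 0\), and \(\overline{\mathcal{L}}\) reduces to the vectorized Lindbladian itself: the NESS then solves \(\overline{\mathcal{L}}\,{\rm vec}(\rho) = 0\), and the singular direction is fixed by imposing \({\rm Tr}\,\rho = 1\) in place of one equation. A minimal sketch for a driven, damped two-level system with hypothetical parameters (illustrative only, not the qocttools implementation):

```python
import numpy as np

# Static limit of Eq. (45): solve Lbar vec(rho) = 0 with Tr(rho) = 1.
# Hypothetical model: H = (Delta/2) sigma_z + (Omega/2) sigma_x, jump V = sigma_minus.
Delta, Omega, gamma = 1.0, 0.5, 0.4
H = 0.5 * Delta * np.array([[1, 0], [0, -1]], dtype=complex) \
  + 0.5 * Omega * np.array([[0, 1], [1, 0]], dtype=complex)
V = np.array([[0, 1], [0, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

# column-stacking vectorization: vec(A X B) = (B^T kron A) vec(X)
def lindbladian_matrix():
    comm = -1j * (np.kron(I, H) - np.kron(H.T, I))
    jump = np.kron(V.conj(), V)                       # V rho V^dagger
    VdV = V.conj().T @ V
    anti = -0.5 * (np.kron(I, VdV) + np.kron(VdV.T, I))
    return comm + gamma * (jump + anti)

Lbar = lindbladian_matrix()
A = Lbar.copy()
b = np.zeros(4, dtype=complex)
A[0, :] = np.array([1, 0, 0, 1])    # replace one equation with Tr(rho) = 1
b[0] = 1.0
rho_vec = np.linalg.solve(A, b)
rho = rho_vec.reshape(2, 2, order="F")   # undo the column stacking
```

Because the Lindbladian is trace-preserving, the replaced equation is automatically satisfied by the solution, so the full residual \(\overline{\mathcal{L}}\,{\rm vec}(\rho)\) vanishes to machine precision. In the time-periodic case the same idea applies to the full block matrix of Eq. (46).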

3.6. Parametrization and Optimization

The control parameters are used to parametrize the pulses; see pulses.pulse for the precise definition of each pulse.

In order to optimize function \(G(u)\), one may then use …