1. Introduction

The goal of this program is to solve optimal control theory (OCT) problems for quantum systems [GBC+15]. A “typical” OCT problem is formulated as follows: given a quantum system that evolves according to some time-dependent Hamiltonian, whose precise form is determined by a set of control parameters that can be varied, find the values of those parameters for which some function of the evolution of the system takes a maximal value.

Let us state this more precisely. We consider three types of model equations:

  1. Closed quantum systems, described by the Schrödinger equation:

(1)\[i\frac{\partial}{\partial t}\vert\phi(t)\rangle = H(u, t)\vert\phi(t)\rangle\]
  2. Closed quantum systems, described by the Schrödinger equation for the full evolution operator:

(2)\[i\frac{\partial}{\partial t}U(t) = H(u, t)U(t)\]
  3. Open quantum systems described by a (time-dependent) Lindblad equation:

(3)\[\dot{\rho}(t) = -i\left[H(u, t), \rho(t)\right] + \sum_{ij} \gamma_{ij} \left( V_{ij}\rho(t)V^\dagger_{ij} - \frac{1}{2} \lbrace V_{ij}^\dagger V_{ij}, \rho(t) \rbrace\right)\]

These three models differ in the definition of what the state is: a ket in the quantum Hilbert space in the first case, a unitary operator in the second case, and a density matrix in the third case.
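As an illustration only (none of these names belong to the program itself; the sketch merely transcribes the equations with NumPy), the right-hand sides of Eqs. (1)-(3) can be written as:

```python
import numpy as np

def rhs_ket(H, phi):
    # Eq. (1): d|phi>/dt = -i H |phi>
    return -1j * H @ phi

def rhs_propagator(H, U):
    # Eq. (2): dU/dt = -i H U
    return -1j * H @ U

def rhs_lindblad(H, rho, gammas, Vs):
    # Eq. (3); gammas[k] and Vs[k] stand for gamma_ij and V_ij,
    # with the double index flattened into a single one
    drho = -1j * (H @ rho - rho @ H)
    for gamma, V in zip(gammas, Vs):
        Vd = V.conj().T
        drho += gamma * (V @ rho @ Vd - 0.5 * (Vd @ V @ rho + rho @ Vd @ V))
    return drho
```

Note that the Lindblad right-hand side is traceless, so a propagation based on it preserves \({\rm Tr}\,\rho\).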

The variable \(u \in \mathbb{R}^P\) is the set of control parameters mentioned above. These determine the shape of a series of control functions \(g_1(u, t), g_2(u, t),\dots g_n(u, t)\) that, in turn, determine the shape of the Hamiltonian operator \(H(u, t)\):

(4)\[H(u, t) = H(g_1(u, t), \dots, g_n(u, t))\]

Normally (but not necessarily), the Hamiltonian has the form:

(5)\[H(u, t) = H_0 + g_1(u, t)V_1 + \dots + g_n(u, t) V_n\]
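A sketch of how such a Hamiltonian might be assembled in practice (a toy two-level example with our own names; in particular, parametrizing the control function as a truncated Fourier series is just one illustrative choice among many):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H0 = np.diag([0.0, 1.0]).astype(complex)

def g(u, t, omega=1.0):
    # control function g(u, t): a truncated Fourier series whose
    # coefficients are the control parameters u (illustrative choice)
    return sum(u[m] * np.cos((m + 1) * omega * t) for m in range(len(u)))

def hamiltonian(u, t):
    # Eq. (5) with a single control function and perturbation V1 = sigma_x
    return H0 + g(u, t) * sx
```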

The code addresses two different kinds of situations, depending on what solutions to the previous differential equations are considered: (1) the initial states are fixed (initial value problems), or (2) we seek periodic solutions, assuming that the perturbations are also periodic (Floquet problems). Let us consider those two options separately.

1.1. Optimization of initial value problems

In this case, the initial value is fixed. This may be a wave function (\(\phi(t=0)=\phi_0\)), an evolution operator (normally, in this case, \(U(t=0)=I\)), or a density matrix (\(\rho(t=0)=\rho_0\)).

The system then describes a trajectory during a time interval \([0, T]\): \(y = y(t)\), \(t \in [0, T]\), where \(y\) may be a wavefunction, an evolution operator, or a density matrix, as discussed above. The user of the code then defines a function of this trajectory:

(6)\[F = F(y)\]

Of course, the trajectory depends on the choice of the parameters \(u\): \(u \to y_u\), and we may then define a function of the control parameters:

(7)\[G(u) = F(y_u)\]

The OCT problem may then be formulated as: find the set of parameters \(u^0\) that maximizes the function \(G\). In addition, constraints may be imposed on the set of parameters.
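A self-contained toy version of this scheme (not the program's actual interface; all names and choices here are ours) for a two-level system, where \(F\) is the final population of a target state and the two parameters are an amplitude and a frequency:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H0 = np.diag([0.0, 1.0]).astype(complex)
T = 5.0
phi0 = np.array([1.0, 0.0], dtype=complex)    # fixed initial value
target = np.array([0.0, 1.0], dtype=complex)

def propagate(u):
    # trajectory y_u: solve Eq. (1) with H(u, t) = H0 + u0 cos(u1 t) sigma_x
    def rhs(t, phi):
        return -1j * ((H0 + u[0] * np.cos(u[1] * t) * sx) @ phi)
    return solve_ivp(rhs, (0.0, T), phi0, rtol=1e-8, atol=1e-8).y[:, -1]

def G(u):
    # Eq. (7): G(u) = F(y_u), here the final population of the target state
    return abs(np.vdot(target, propagate(u))) ** 2

# maximize G by minimizing -G (gradient-free, for illustration)
res = minimize(lambda u: -G(u), x0=[0.5, 1.0], method="Nelder-Mead",
               options={"maxiter": 80})
```

A real implementation would use more careful propagators and optimization algorithms, but the structure (propagate, evaluate \(F\), hand \(G\) to an optimizer) is the same.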

1.2. Floquet optimizations

We are now only concerned with periodic solutions to the previous equations of motion. When a system is subject to periodic perturbations, the mathematical tool used to describe it is normally Floquet theory, and hence we speak of Floquet optimizations.

In this case, we consider two possible situations: (1) closed systems, in which case the objective is the manipulation of the so-called Floquet pseudoenergies and Floquet modes (or functions of those), and (2) driven open systems, in which case the objective is the manipulation of properties of the non-equilibrium steady states (NESSs).

1.2.1. Floquet optimization for closed systems

Let us recall the key objects of (quantum) Floquet theory.

We consider a Hamiltonian that is periodic in time, \(H(u, t+T) = H(u, t)\). This Hamiltonian determines the evolution operator at all times, and in particular at the period \(T\): \(U(T)\). In turn, this evolution operator defines the Floquet modes and the pseudoenergies (also called quasienergies) as its eigenstates and eigenvalues (or, more precisely, the arguments of its eigenvalues):

(8)\[U(T)\vert \phi^0_\alpha\rangle = e^{-i\varepsilon_\alpha T}\vert \phi^0_\alpha\rangle\,,\]

or, equivalently:

(9)\[U(T) = \sum_\alpha e^{-i\varepsilon_\alpha T}\vert \phi^0_\alpha\rangle\langle \phi^0_\alpha\vert\,.\]

Note that the pseudoenergies are only defined modulo \(\Omega = \frac{2\pi}{T}\), i.e. \(\varepsilon_\alpha + m\Omega \equiv \varepsilon_\alpha\) for any integer \(m\). In the following, as is usually done, we assume \(\varepsilon_\alpha\) to belong to the first Floquet-Brillouin zone \([-\Omega/2, \Omega/2)\).
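Numerically, extracting pseudoenergies and modes from \(U(T)\) via Eq. (8) amounts to a single diagonalization. A minimal sketch (the function name is ours, not the program's):

```python
import numpy as np
from scipy.linalg import expm

def floquet_modes(UT, T):
    # Eq. (8): the eigenvectors of U(T) are the Floquet modes at t = 0,
    # and the quasienergies are the arguments of its eigenvalues
    evals, evecs = np.linalg.eig(UT)
    eps = -np.angle(evals) / T
    Omega = 2.0 * np.pi / T
    # fold into the first Floquet-Brillouin zone [-Omega/2, Omega/2)
    eps = (eps + 0.5 * Omega) % Omega - 0.5 * Omega
    return eps, evecs

# illustrative check: for a time-independent H, U(T) = exp(-i H T) and the
# quasienergies are the eigenvalues of H folded into the zone
H = np.diag([0.1, 0.4])
T = 2.0
eps, modes = floquet_modes(expm(-1j * H * T), T)  # eps ~ {0.1, 0.4}
```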

The eigenstates \(\phi^0_\alpha\) thus defined are time-independent objects. One however normally speaks of the Floquet modes or Floquet states as the periodic time-dependent objects defined by:

(10)\[\vert \phi_\alpha(t)\rangle = e^{i\varepsilon_\alpha t}U(t)\vert \phi^0_\alpha \rangle\,,\]

so that, at \(t=0\):

(11)\[\vert \phi_\alpha(0)\rangle = \vert \phi^0_\alpha \rangle\,.\]

The gist of Floquet theory is the fact that any solution \(\vert\psi(t)\rangle\) of Schrödinger’s equation can be expanded as a linear combination of these modes:

(12)\[\vert\psi(t)\rangle = \sum_\alpha f_\alpha e^{-i\varepsilon_\alpha t}\vert \phi_\alpha(t)\rangle\,.\]

Thus, the Floquet modes constitute a valid basis set for the expansion of the time dependent evolution of the system.

It is clear that modes and pseudoenergies are functions of the evolution operator from which they are computed via Eq. (8): we could write \(\varepsilon_\alpha = \varepsilon_\alpha(U)\) and \(\phi_\alpha = \phi_\alpha(U)\) – hereafter, if the time argument is not specified, \(U \equiv U(T)\). Since, in turn, the evolution operator depends on \(u\) (we could write \(U = U(u)\)), they can also be considered functions of the parameters \(u\); i.e. we could also write \(\varepsilon_\alpha = \varepsilon_\alpha(u)\) and \(\phi_\alpha = \phi_\alpha(u)\).

Any property of the driven system can be expressed in terms of the pseudoenergies and of the Floquet modes just as any property of the static system can be expressed in terms of energies and eigenstates of the Hamiltonian. Let us therefore consider any (real) function of all of these:

(13)\[f = f(\varepsilon, \phi)\,.\]

This includes any possible observable of the system, response property, etc. Without loss of generality, we now assume that the goal is to design a driven material whose \(f\) function is as high as possible.

Since, as we established above, pseudoenergies and modes are functions of the evolution operator, we can consider a function of evolution operators defined as:

(14)\[F(U) = f(\varepsilon(U), \phi(U))\,.\]

And since the evolution operator itself is determined by the parameters \(u\), we can finally define a function

(15)\[G(u) = F(U(u)) = f(\varepsilon(u), \phi(u))\,.\]

The quantum optimal control problem can thus be posed as: find the parameters \(u\) that maximize the function \(G\).
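Schematically, and with all choices ours (a two-level toy in which \(f\) is taken to be the quasienergy gap), the whole chain \(u \to U(u) \to (\varepsilon, \phi) \to G\) reads:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
T = 2.0 * np.pi                 # driving period
Omega = 2.0 * np.pi / T

def U_of_T(u):
    # Eq. (2): propagate the evolution operator over one period,
    # with H(u, t) = 0.5 sigma_z + u0 cos(Omega t) sigma_x
    def rhs(t, y):
        H = 0.5 * sz + u[0] * np.cos(Omega * t) * sx
        return (-1j * H @ y.reshape(2, 2)).ravel()
    y0 = np.eye(2, dtype=complex).ravel()
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-9)
    return sol.y[:, -1].reshape(2, 2)

def G(u):
    # Eq. (15), choosing f = quasienergy gap in the first Brillouin zone
    eps = -np.angle(np.linalg.eigvals(U_of_T(u))) / T
    eps = (eps + 0.5 * Omega) % Omega - 0.5 * Omega
    return abs(eps[0] - eps[1])

res = minimize(lambda u: -G(u), x0=[0.1], method="Nelder-Mead",
               options={"maxiter": 40})
```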

1.2.2. Floquet optimization for NESSs

We start from the Lindblad equation [Eq. (3)]. In the absence of external driving and in the presence of dissipation terms, a system normally decays to the thermal state. If the Hamiltonian is time-dependent, however, with a periodic perturbation, the system instead decays to a non-equilibrium steady state (NESS). Unlike the thermal state, this state is not time-independent: it changes in time with the periodicity of the Hamiltonian.

Let us call \(\rho_u\) the NESS that emerges when the control parameters \(u\) are used. It must verify the periodicity condition:

(16)\[\rho_u(t+T) = \rho_u(t)\]

when propagated using the Lindbladian in Eq. (3).

Let us now consider an observable \(O\) defined on the system, and define the function:

(17)\[G(u) = \frac{1}{T}\int_0^T {\rm d}t\; {\rm Tr}\left[O\rho_u(t)\right]\]

In this case, the optimization problem is defined in terms of the maximization of this function \(G\).
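Once the NESS has been obtained (for instance by propagating Eq. (3) until the periodicity condition (16) holds), evaluating this functional is a simple quadrature. A sketch with our own names, using a \(\rho_u(t)\) given in closed form in place of a real NESS:

```python
import numpy as np
from scipy.integrate import trapezoid

def time_averaged_expectation(O, rho_samples, ts):
    # Eq. (17): (1/T) int_0^T dt Tr[O rho(t)], with rho(t) sampled on a
    # grid ts covering one period, rho_samples[k] = rho(ts[k])
    values = [np.trace(O @ rho).real for rho in rho_samples]
    return trapezoid(values, ts) / (ts[-1] - ts[0])

# illustrative check with a periodic density matrix given in closed form
T = 2.0 * np.pi
ts = np.linspace(0.0, T, 201)
rho_samples = [np.array([[0.5 + 0.25 * np.cos(t), 0.0],
                         [0.0, 0.5 - 0.25 * np.cos(t)]]) for t in ts]
O = np.diag([1.0, -1.0])
avg = time_averaged_expectation(O, rho_samples, ts)  # the cosine averages out
```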

1.3. Multi-target problems

The problems described above can be generalized to the case in which we want to simultaneously optimize the behaviour of several systems, defining a common target functional for them all. In other words, we wish for the control parameters or functions to be such that they optimize a target functional that is defined in terms of the behaviour of several systems.

A typical example of this is the case in which we have a material, and each k-point is treated independently because of symmetry. The target functional is then a function of the behaviour of all those subsystems.
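In schematic form (everything here is hypothetical), the multi-target functional is just a weighted combination of the single-system functionals, all evaluated with the same parameter set:

```python
import numpy as np

def multi_target(u, targets, weights=None):
    # G(u) = sum_k w_k G_k(u): one common parameter set u, several
    # independent subsystems (e.g. k-points), each with its own functional
    if weights is None:
        weights = np.ones(len(targets))
    return sum(w * Gk(u) for w, Gk in zip(weights, targets))

# toy per-subsystem functionals standing in for the per-k-point targets
targets = [lambda u, k=k: -(u[0] - 0.1 * k) ** 2 for k in range(3)]
value = multi_target(np.array([0.1]), targets)   # ~ -0.02
```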