Dealing with ignorance: universal discrimination, learning and quantum correlations
The problem of discriminating the state of a quantum system among a number of hypothetical states is usually addressed under the assumption that one has perfect knowledge of the possible states of the system. In this thesis, I analyze the role of the prior information available in facing such problems, and consider scenarios where the information regarding the possible states is incomplete. When the identity of the possible states is completely unknown, I discuss a quantum "programmable" discrimination machine for qubit states that accepts this information as input programs in a quantum encoding, rather than as a classical description. The optimal performance of these machines is studied for general qubit states when several copies are provided, in the schemes of unambiguous, minimum-error, and error-margin discrimination. Then, this type of automation in discrimination tasks is taken further. By regarding a programmable machine as a device that is trained through quantum information to perform a specific task, I propose a quantum "learning" machine for classifying qubit states that does not require a quantum memory to store the qubit programs and, nevertheless, performs as well as quantum mechanics permits. Such a learning machine thus allows for several optimal uses with no need for retraining. A similar learning scheme is also discussed for coherent states of light. I present it in the context of the readout of a classical memory by means of classically correlated coherent signals, when these are produced by an imperfect source. I show that, in this case, the retrieval of the information stored in the memory can be carried out more accurately when fully general quantum measurements are used. Finally, as a transversal topic, I propose an efficient algorithmic way of decomposing any quantum measurement into convex combinations of simpler (extremal) measurements.
The detection of change points is a pivotal task in statistical analysis. In the quantum realm, it is a new primitive in which one aims to identify the point at which a source that supposedly prepares a sequence of particles in identical quantum states starts preparing a mutated one. We obtain the optimal procedure to identify the change point with certainty---naturally, at the price of a certain probability of getting an inconclusive answer. We obtain the analytical form of the optimal probability of successful identification for any length of the particle sequence. We show that the conditional success probabilities of identifying each possible change point exhibit an unexpected oscillatory behaviour. We also discuss local (online) protocols and compare them with the optimal procedure.
In supervised learning, an inductive learning algorithm extracts general rules from observed training instances, and the rules are then applied to test instances. We show that this splitting of training and application arises naturally, in the classical setting, from a simple independence requirement with a physical interpretation of being non-signalling. Thus, two seemingly different definitions of inductive learning happen to coincide. This equivalence follows from properties of classical information that break down in the quantum setup. We prove a quantum de Finetti theorem for quantum channels, which shows that in the quantum case the equivalence holds in the asymptotic setting, that is, for a large number of test instances. This reveals a natural analogy between classical learning protocols and their quantum counterparts, justifying a similar treatment and allowing one to inquire about standard elements of computational learning theory, such as structural risk minimization and sample complexity.
Sudden changes are ubiquitous in nature. Identifying them is of crucial importance for a number of applications in medicine, biology, geophysics, and social sciences. Here we investigate the problem in the quantum domain, considering a source that emits particles in a default state, until a point where it switches to another state. Given a sequence of particles emitted by the source, the problem is to find out where the change occurred. For large sequences, we obtain an analytical expression for the maximum probability of correctly identifying the change point when joint measurements on the whole sequence are allowed. We also construct strategies that measure the particles individually and provide an online answer as soon as a new particle is emitted by the source. We show that these strategies substantially underperform the optimal strategy, indicating that quantum sudden changes, although happening locally, are better detected globally.
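The gap between local and global strategies can be made concrete with a toy simulation. The sketch below (an illustration only, not the optimal local protocol of the paper) assumes a default state |0>, a mutated state whose squared overlap with |0> is c, and a naive online strategy that measures each qubit in the {|0>, |1>} basis and announces the first '1' outcome as the change point.

```python
import numpy as np

def local_strategy_success(n, c, trials=100_000, rng=None):
    """Monte Carlo estimate of the success probability of a naive local
    (online) strategy for locating a quantum change point.

    A source emits n qubits: positions 1..k-1 in the default state |0>,
    positions k..n in a mutated state with squared overlap c with |0>.
    Each qubit is measured in the {|0>, |1>} basis as it arrives; the
    first '1' outcome is announced as the change point, and if no qubit
    ever clicks we guess the last position.
    """
    rng = rng or np.random.default_rng(0)
    successes = 0
    for _ in range(trials):
        k = rng.integers(1, n + 1)              # change point, uniform prior
        # default qubits never click; mutated qubits click with prob 1 - c
        clicks = rng.random(n - k + 1) < (1 - c)
        first = np.argmax(clicks) + k if clicks.any() else n
        successes += (first == k)
    return successes / trials

# Exact success probability of this strategy: (1 - c) + c / n,
# since the guess is correct iff qubit k clicks, except that for k = n
# the fallback guess also covers the no-click event.
```

For c = 1/2 and n = 10 this gives roughly 0.55, far below the global optimum studied in the paper, in line with the finding that local online strategies substantially underperform joint measurements.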
Among the many facets of quantum correlations, bound entanglement has remained one of the most enigmatic phenomena, despite the fact that it was discovered in the early days of quantum information. Even its detection has proven to be difficult, let alone its precise quantitative characterization. In this work, we present the exact quantification of entanglement for a two-parameter family of highly symmetric two-qutrit mixed states, which contains a sizable portion of bound entangled states. We achieve this by explicitly calculating the convex-roof extensions of the linear entropy as well as the concurrence for every state within the family. Our results provide a benchmark for future quantitative studies of bipartite entanglement in higher-dimensional systems.
Gael Sentís, Christopher Eltschka, Otfried Gühne, Marcus Huber and Jens Siewert
Quantifying entanglement of maximal dimension in bipartite mixed states
Phys. Rev. Lett. 117, 190502 (2016), arXiv:1605.09783
The Schmidt coefficients capture all entanglement properties of a pure bipartite state and therefore determine its usefulness for quantum information processing. While the quantification of the corresponding properties in mixed states is important both from a theoretical and a practical point of view, it is considerably more difficult, and methods beyond estimates for the concurrence are elusive. In particular, this holds for a quantitative assessment of the most valuable resource, the maximum possible Schmidt number of an arbitrary mixed state. We derive a framework for lower bounding the appropriate measure of entanglement, the so-called G-concurrence, through few local measurements. Moreover, we show that these bounds have relevant applications also for multipartite states.
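For pure states the G-concurrence is directly computable. The sketch below assumes the standard definition G = d (λ_1 ⋯ λ_d)^{1/d}, where the λ_i are the squared Schmidt coefficients; it illustrates the quantity itself, not the measurement-based lower bounds derived in the paper.

```python
import numpy as np

def g_concurrence(psi, d):
    """G-concurrence of a pure state of two qudits (dimension d each).

    psi is the length-d*d coefficient vector; the squared Schmidt
    coefficients are the squared singular values of the d x d coefficient
    matrix.  G = d * (product of squared Schmidt coefficients)^(1/d), so
    G vanishes whenever the Schmidt rank is below d: it is sensitive only
    to entanglement of maximal dimension.
    """
    m = np.asarray(psi, dtype=float).reshape(d, d)
    lam = np.linalg.svd(m, compute_uv=False) ** 2
    lam = lam / lam.sum()                    # enforce normalization
    return d * np.prod(lam) ** (1.0 / d)

# maximally entangled two-qutrit state: all Schmidt coefficients equal
phi_plus = np.identity(3).flatten() / np.sqrt(3)
# product state: Schmidt rank 1
product = np.kron([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```

On the maximally entangled qutrit pair this returns 1, and on any state of Schmidt rank below 3 it returns 0, matching the intuition that G certifies the maximum Schmidt number.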
We develop a quantum learning scheme for binary discrimination of coherent states of light. This is a problem of technological relevance for the reading of information stored in a digital memory. In our setting, a coherent light source is used to illuminate a memory cell and retrieve its encoded bit by determining the quantum state of the reflected signal. We consider a situation where the amplitude of the states produced by the source is not fully known, but instead this information is encoded in a large training set comprising many copies of the same coherent state. We show that an optimal global measurement, performed jointly over the signal and the training set, provides higher successful identification rates than any learning strategy based on first estimating the unknown amplitude by means of Gaussian measurements on the training set, followed by an adaptive discrimination procedure on the signal. By considering a simplified variant of the problem, we argue that this is the case even for non-Gaussian estimation measurements. Our results show that, even in the absence of entanglement, collective quantum measurements yield an enhancement in the readout of classical information, which is particularly relevant in the operating regime of low-energy signals.
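When the amplitude is perfectly known, the readout reduces to discriminating |α⟩ from |−α⟩, for which two textbook benchmarks exist: the Helstrom bound and the simple displace-and-detect (Kennedy) receiver. The sketch below computes both (a standard comparison, not the learning protocol of the paper) and shows why the low-energy regime is where measurement choice matters most.

```python
import numpy as np

def helstrom_error(alpha):
    """Minimum error probability for |alpha> vs |-alpha> at equal priors,
    using the overlap |<alpha|-alpha>|^2 = exp(-4 |alpha|^2)."""
    overlap2 = np.exp(-4 * abs(alpha) ** 2)
    return 0.5 * (1 - np.sqrt(1 - overlap2))

def kennedy_error(alpha):
    """Error of a displace-and-detect (Kennedy) receiver: displace the
    signal by +alpha, so the hypotheses become |2 alpha> and |0>, and
    declare the displaced hypothesis on any photon click.  The only
    error is the no-click event of the displaced state, exp(-4|alpha|^2),
    weighted by its 1/2 prior."""
    return 0.5 * np.exp(-4 * abs(alpha) ** 2)
```

For weak signals the Kennedy error stays close to 1/2 while the Helstrom bound drops linearly in |α|, which is the regime where the collective enhancement discussed above is most relevant.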
The problem of optimally discriminating between two completely unknown qubit states is generalized by allowing an error margin. It is visualized as a device---the programmable discriminator---with one data and two program ports, each fed with a number of identically prepared qubits---the data and the programs. The device aims at correctly associating the data state with one of the two program states. This scheme has the unambiguous and the minimum-error schemes as extremal cases, when the error margin is set to zero or is sufficiently large, respectively. Analytical results are given in the two situations where the margin is imposed on the average error probability---weak condition---or imposed separately on the two probabilities of assigning the state of the data to the wrong program---strong condition. It is a general feature of our scheme that the success probability rises sharply as soon as a small error margin is allowed, thus providing a significant gain over the unambiguous scheme while still delivering high-confidence results.
We design an efficient and constructive algorithm to decompose any generalized quantum measurement into a convex combination of extremal measurements. We show that if one allows for a classical post-processing step, only extremal rank-1 POVMs are needed. For a measurement with $N$ elements on a $d$-dimensional space, our algorithm will decompose it into at most $(N-1)d+1$ extremals, whereas the best previously known upper bound scaled as $d^2$. Since the decomposition is not unique, we show how to tailor our algorithm to provide particular types of decompositions that exhibit a desired property.
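The flavor of such a decomposition can be seen on a minimal hand-worked case (a toy illustration, not the algorithm of the paper): a noisy two-outcome qubit POVM with commuting elements splits into extremal measurements, here a projective measurement and the trivial one-outcome measurement, combined with classical relabeling of outcomes.

```python
import numpy as np

# Noisy two-outcome qubit POVM {E, I - E} with E = diag(a, b), a >= b.
a, b = 0.8, 0.3
E = np.diag([a, b])
I = np.eye(2)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

# Convex mixture of extremal strategies; each entry lists the weight and
# the element it assigns to outcome 1 and to outcome 2 (post-processing).
strategies = [
    (a - b, P0, P1),    # projective measurement, outcomes kept as measured
    (b,     I,  0 * I), # trivial measurement, always report outcome 1
    (1 - a, 0 * I, I),  # trivial measurement, always report outcome 2
]
out1 = sum(w * e1 for w, e1, e2 in strategies)
out2 = sum(w * e2 for w, e1, e2 in strategies)

assert np.allclose(out1, E) and np.allclose(out2, I - E)
assert np.isclose(sum(w for w, _, _ in strategies), 1.0)
```

Here three extremal strategies suffice, consistent with the $(N-1)d+1 = 3$ bound for $N = 2$, $d = 2$; the general non-commuting case is what requires the constructive algorithm.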
A quantum learning machine for binary classification of qubit states that does not require quantum memory is introduced and shown to perform with the minimum error rate allowed by quantum mechanics for any size of the training set. This result is shown to be robust under (an arbitrary amount of) noise and under (statistical) variations in the composition of the training set, provided it is large enough. This machine can be used an arbitrary number of times without retraining. Its required classical memory grows only logarithmically with the number of training qubits, while its excess risk decreases as the inverse of this number, and twice as fast as the excess risk of an estimate-and-discriminate machine, which estimates the states of the training qubits and classifies the data qubit with a discrimination protocol tailored to the obtained estimates.
Quantum state discrimination is a fundamental primitive in quantum statistics where one has to correctly identify the state of a system that is in one of two possible known states. A programmable discrimination machine performs this task when the pair of possible states is not a priori known, but instead the two possible states are provided through two respective program ports. We study optimal programmable discrimination machines for general qubit states when several copies of states are available in the data or program ports. Two scenarios are considered: one in which the purity of the possible states is a priori known, and the fully universal one where the machine operates over generic mixed states of unknown purity. We find analytical results for both the unambiguous and the minimum-error discrimination strategies. This allows us to calculate the asymptotic performance of programmable discrimination machines when a large number of copies is provided, and to recover the standard state discrimination and state comparison values as different limiting cases.