Jiangwei Shang and Otfried Gühne
Convex optimization over classes of multiparticle entanglement
A well-known strategy to characterize multiparticle entanglement utilizes the notion of stochastic local operations and classical communication (SLOCC), but characterizing the resulting entanglement classes is difficult. Given a multiparticle quantum state, we first show that Gilbert's algorithm can be adapted to prove separability or membership in a certain entanglement class. We then present two algorithms for convex optimization over SLOCC classes. The first algorithm uses a simple gradient approach, while the other one employs the accelerated projected-gradient method. For demonstration, the algorithms are applied to the likelihood-ratio test using experimental data on bound entanglement of a noisy four-photon Smolin state [Phys. Rev. Lett. 105, 130501 (2010)].
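The adapted Gilbert routine can be sketched in a few lines. The toy version below (not the paper's implementation; all names are illustrative, and the linear subproblem is solved crudely by sampling random product states rather than by proper optimization) upper-bounds the Hilbert-Schmidt distance from a two-qubit state to the separable set:

```python
import numpy as np

def random_product_state(rng, d=2):
    """Haar-random pure product state of a d x d system, as a projector."""
    def ket(dim):
        v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        return v / np.linalg.norm(v)
    ab = np.kron(ket(d), ket(d))
    return np.outer(ab, ab.conj())

def gilbert_distance(rho, iters=400, trials=200, seed=7):
    """Toy Gilbert-type iteration: upper-bound the Hilbert-Schmidt
    distance from rho to the separable set.  The linear subproblem
    (maximize tr[(rho - sigma) P] over product projectors P) is
    approximated by the best of `trials` random product states."""
    rng = np.random.default_rng(seed)
    d2 = rho.shape[0]
    sigma = np.eye(d2, dtype=complex) / d2   # start at the maximally mixed state
    for _ in range(iters):
        grad = rho - sigma
        cands = [random_product_state(rng) for _ in range(trials)]
        best = max(cands, key=lambda P: np.real(np.trace(grad @ P)))
        diff = best - sigma
        denom = np.real(np.trace(diff @ diff))
        if denom < 1e-15:
            break
        # exact line search along sigma + t*(best - sigma), t in [0, 1]
        t = np.clip(np.real(np.trace(grad @ diff)) / denom, 0.0, 1.0)
        sigma = sigma + t * diff             # stays a convex (separable) mixture
    return np.sqrt(np.real(np.trace((rho - sigma) @ (rho - sigma))))
```

A vanishing distance certifies separability; for a Bell state the routine levels off at a markedly nonzero value. The same skeleton applies to other SLOCC classes once the oracle samples from that class instead of the product states.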
Jiangwei Shang, Yi-Lin Seah, Boyu Wang, Hui Khoon Ng, David John Nott, Berthold-Georg Englert
Random samples of quantum states: Online resources
This is the documentation for generating random samples from the quantum state space in accordance with a specified distribution, associated with this webpage: http://tinyurl.com/QSampling. Ready-made samples (each with at least a million points) from various distributions are available for download, or one can generate one's own samples from a chosen distribution using the provided source code. The sampling relies on the Hamiltonian Monte Carlo algorithm as described in New J. Phys. 17, 043018 (2015). The random samples are made available in the hope that they will be useful for a variety of tasks in quantum information and quantum computation. Constructing credible regions for tomographic data, optimizing a function with a complicated landscape over the quantum state space, testing the typicality of entanglement among states of a multipartite quantum system, and computing the average of some quantity of interest over a subset of quantum states are but a few exemplary applications among many.
Jiangwei Shang, Zhengyun Zhang, Hui Khoon Ng
Superfast maximum likelihood reconstruction for quantum tomography
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum likelihood reconstruction that avoids this slow convergence. Our method utilizes an accelerated projected gradient scheme that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for $n$-qubit state tomography. In particular, an 8-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow, and reduces the need for alternative methods that often come with difficult-to-verify assumptions. The same algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
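A minimal sketch of the accelerated projected-gradient idea might look as follows. This is not the paper's code: the step size, the extra projection of the extrapolation point, and all names are illustrative assumptions. The key ingredient is the projection onto density matrices, which amounts to projecting the spectrum onto the probability simplex:

```python
import numpy as np

def project_to_states(H):
    """Project a Hermitian matrix onto the density matrices (unit-trace,
    positive semidefinite) by projecting its spectrum onto the simplex."""
    w, V = np.linalg.eigh(H)
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    k = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    tau = (1.0 - css[k]) / (k + 1)
    return (V * np.maximum(w + tau, 0)) @ V.conj().T

def apg_mle(povm, freqs, iters=600, step=0.05):
    """Projected-gradient sketch with Nesterov-style momentum for
    maximum-likelihood tomography: maximize sum_k f_k log tr(E_k rho)."""
    d = povm[0].shape[0]
    rho = np.eye(d, dtype=complex) / d
    prev, t_prev = rho, 1.0
    for _ in range(iters):
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev**2))
        # extrapolation point, projected back to keep probabilities valid
        y = project_to_states(rho + ((t_prev - 1.0) / t) * (rho - prev))
        probs = np.array([max(np.real(np.trace(E @ y)), 1e-12) for E in povm])
        grad = sum((f / p) * E for f, p, E in zip(freqs, probs, povm))
        prev, t_prev = rho, t
        rho = project_to_states(y + step * grad)
    return rho
```

With an informationally complete POVM and consistent frequencies, the iteration recovers the true state; the paper's algorithm adds refinements (adaptive step sizes, restarts) that this sketch omits.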
Jiangwei Shang, Hui Khoon Ng, Berthold-Georg Englert
Quantum state tomography: Mean squared error matters, bias does not
Because of the constraint that the estimators be bona fide physical states, any quantum state tomography scheme - including the widely used maximum likelihood estimation - yields estimators that may have a bias, although they are consistent estimators. Schwemmer et al. (arXiv:1310.8465 [quant-ph]) illustrate this by observing a systematic underestimation of the fidelity and an overestimation of entanglement in estimators obtained from simulated data. Further, these authors argue that the simple method of linear inversion overcomes this (perceived) problem of bias, and suggest abandoning time-tested estimation procedures in favor of linear inversion. Here, we discuss the pros and cons of using biased and unbiased estimators for quantum state tomography. We conclude that the occasional small benefit of the unbiased linear-inversion estimation does not justify the high price of using unphysical estimators, which that scheme typically produces.
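The unphysicality of linear inversion is easy to see in a toy single-qubit example: a finite sample from a near-pure state can produce an estimated Bloch vector of length greater than one, and the inverted matrix then has a negative eigenvalue (the Bloch numbers below are hypothetical "observed" averages, not data from any experiment):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def linear_inversion(r):
    """Single-qubit linear inversion from an estimated Bloch vector r."""
    return 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

# Hypothetical finite-sample averages for a near-pure state: |r| > 1,
# so the "estimator" below is not a quantum state.
r_hat = np.array([0.2, 0.2, 0.99])
rho_lin = linear_inversion(r_hat)
min_eig = np.linalg.eigvalsh(rho_lin).min()   # negative eigenvalue
```

The matrix has unit trace and is Hermitian, but its smallest eigenvalue is negative, so no physical interpretation (fidelities, entanglement measures) is guaranteed to make sense for it.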
Jiangwei Shang, Kean Loon Lee, Berthold-Georg Englert
SeCQC: An open-source program code for the numerical Search for the classical Capacity of Quantum Channels
SeCQC is an open-source program code which implements a Numerical Search for the classical Capacity of Quantum Channels (SeCQC) by using an iterative method. Given a quantum channel, SeCQC finds the statistical operators and POVM outcomes that maximize the accessible information, and thus determines the classical capacity of the quantum channel. The optimization procedure is realized by using a steepest-ascent method that follows the gradient in the POVM space, and also uses conjugate gradients for speed-up.
Kean Loon Lee, Jiangwei Shang, Wee Kang Chua, Shiang Yong Looi, Berthold-Georg Englert
SOMIM: An open-source program code for the numerical Search for Optimal Measurements by an Iterative Method
SOMIM is an open-source code that implements a Search for Optimal Measurements by using an Iterative Method. For a given set of statistical operators, SOMIM finds the POVMs that maximize the accessed information, and thus determines the accessible information and one or all of the POVMs that retrieve it. The maximization procedure is a steepest-ascent method that follows the gradient in the POVM space, and uses conjugate gradients for speed-up.
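For intuition, the objective that SOMIM maximizes can be evaluated directly. The toy below replaces the conjugate-gradient ascent with a brute-force scan over projective measurements for two equiprobable pure qubit states (all names are illustrative; real use cases need the iterative search because the POVM space is large):

```python
import numpy as np

def mutual_info(priors, states, povm):
    """Mutual information (bits) between the input label and the POVM outcome."""
    joint = np.array([[pj * np.real(np.trace(E @ rho)) for E in povm]
                      for pj, rho in zip(priors, states)])
    pk = joint.sum(axis=0)
    info = 0.0
    for j in range(joint.shape[0]):
        for k in range(joint.shape[1]):
            if joint[j, k] > 1e-15:
                info += joint[j, k] * np.log2(joint[j, k] / (priors[j] * pk[k]))
    return info

def projector(phi):
    v = np.array([np.cos(phi), np.sin(phi)])
    return np.outer(v, v)

# two equiprobable pure states with overlap cos(alpha)
alpha = np.pi / 4
ket0 = np.array([1.0, 0.0])
ket1 = np.array([np.cos(alpha), np.sin(alpha)])
states = [np.outer(ket0, ket0), np.outer(ket1, ket1)]
best = max(mutual_info([0.5, 0.5], states,
                       [projector(p), np.eye(2) - projector(p)])
           for p in np.linspace(0, np.pi, 2001))
```

For two equiprobable pure states the accessible information is known in closed form, 1 - h((1 + sin(alpha))/2) with h the binary entropy, which the scan reproduces; for larger ensembles no such formula exists and the gradient ascent earns its keep.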
Quantum state estimation aims at determining the quantum state from observed data. Estimating the full state can require considerable effort, but one is often only interested in a few properties of the state, such as the fidelity with a target state, or the degree of correlation for a specified bipartite structure. Rather than first estimating the state, one can, and should, estimate those quantities of interest directly from the data. We propose the use of optimal error intervals as a meaningful way of stating the accuracy of the estimated property values. Optimal error intervals are analogs of the optimal error regions for state estimation [New J. Phys. 15, 123026 (2013)]. They are optimal in two ways: They have the largest likelihood for the observed data and the pre-chosen size, and are the smallest for the pre-chosen probability of containing the true value. As in the state situation, such optimal error intervals admit a simple description in terms of the marginal likelihood for the data for the properties of interest. Here, we present the concept and construction of optimal error intervals, report on an iterative algorithm for reliable computation of the marginal likelihood (a quantity difficult to calculate reliably), explain how plausible intervals --- a notion of evidence provided by the data --- are related to our optimal error intervals, and illustrate our methods with single-qubit and two-qubit examples.
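The smallest credible interval has a familiar classical analog. The sketch below finds the highest-posterior-density interval for a coin's success probability from a flat prior; it is illustrative only (it is not the paper's iterative marginal-likelihood algorithm, and all names are assumptions):

```python
import numpy as np

def smallest_credible_interval(k, n, credibility=0.95, grid=20001):
    """Smallest (highest-posterior-density) credible interval for the
    success probability of a coin, given k heads in n flips and a flat
    prior -- a classical analog of the smallest credible region."""
    p = np.linspace(0.0, 1.0, grid)
    logpost = k * np.log(np.clip(p, 1e-300, 1)) \
            + (n - k) * np.log(np.clip(1 - p, 1e-300, 1))
    w = np.exp(logpost - logpost.max())
    w /= w.sum()                        # normalized posterior on the grid
    order = np.argsort(w)[::-1]         # include highest-density points first
    mass = np.cumsum(w[order])
    cut = order[: np.searchsorted(mass, credibility) + 1]
    return p[cut].min(), p[cut].max()   # unimodal posterior: contiguous set
```

By construction the interval has constant posterior density at its two endpoints, the one-dimensional version of the constant-likelihood boundary of the optimal error regions.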
High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the Markov-chain Monte Carlo method known as Hamiltonian Monte Carlo, or hybrid Monte Carlo, can be adapted to this context. It is applicable when an efficient parameterization of the state space is available. The resulting random walk is entirely inside the physical parameter space, and the Hamiltonian dynamics enable us to take big steps, thereby avoiding strong correlations between successive sample points while enjoying a high acceptance rate. We use examples of single and double qubit measurements for illustration.
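The essentials of Hamiltonian Monte Carlo fit in a short routine. This one-dimensional toy with a standard-normal target shows the leapfrog-plus-Metropolis structure; the quantum-state application would replace the target by the posterior on an efficient parameterization of the state space (names and tuning values here are illustrative):

```python
import numpy as np

def hmc_sample(logp, grad_logp, q0=0.0, eps=0.1, steps=20, n=5000, seed=3):
    """Minimal 1-D Hamiltonian Monte Carlo with leapfrog integration."""
    rng = np.random.default_rng(seed)
    q, samples = q0, []
    for _ in range(n):
        p = rng.normal()                       # resample momentum
        qn, pn = q, p
        pn += 0.5 * eps * grad_logp(qn)        # leapfrog: half kick,
        for _ in range(steps - 1):             # alternating drifts and kicks,
            qn += eps * pn
            pn += eps * grad_logp(qn)
        qn += eps * pn
        pn += 0.5 * eps * grad_logp(qn)        # final half kick
        dH = (-logp(q) + 0.5 * p**2) - (-logp(qn) + 0.5 * pn**2)
        if np.log(rng.random()) < dH:          # Metropolis accept/reject
            q = qn
        samples.append(q)
    return np.array(samples)

# toy target: standard normal
draws = hmc_sample(lambda q: -0.5 * q**2, lambda q: -q)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, long trajectories (big steps through the parameter space) are accepted with high probability, which is what decorrelates successive sample points.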
High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the standard strategies of rejection sampling, importance sampling, and Markov-chain sampling can be adapted to this context, where the samples must obey the constraints imposed by the positivity of the statistical operator. For a comparison of these sampling methods, we generate sample points in the probability space for two-qubit states probed with a tomographically incomplete measurement, and then use the sample for the calculation of the size and credibility of the recently-introduced optimal error regions [see New J. Phys. 15 (2013) 123026]. Another illustration is the computation of the fractional volume of separable two-qubit states.
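The fractional-volume computation can be mimicked with a simple Monte Carlo under the Hilbert-Schmidt measure, using the fact that for two qubits positivity of the partial transpose is equivalent to separability (a sketch; the sample size and all names are illustrative):

```python
import numpy as np

def hs_random_state(d, rng):
    """Random density matrix from the Hilbert-Schmidt measure:
    G G^dagger normalized to unit trace, with G a Ginibre matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def is_ppt(rho):
    """Positive partial transpose on the second qubit -- for a
    two-qubit state this is equivalent to separability."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min() >= -1e-12

rng = np.random.default_rng(11)
n = 20000
frac = sum(is_ppt(hs_random_state(4, rng)) for _ in range(n)) / n
```

The estimated separable fraction comes out near a quarter for the Hilbert-Schmidt measure; the same skeleton with importance or rejection weights handles other target distributions.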
Optimal error intervals for quantum parameter estimation
Oberwolfach Rep. 11, 2338 (2014)
Rather than point estimators, states of a quantum system that represent one's best guess for the given data, we consider optimal regions of estimators. As the natural counterpart of the popular maximum-likelihood point estimator, we introduce the maximum-likelihood region---the region of largest likelihood among all regions of the same size. Here, the size of a region is its prior probability. Another concept is the smallest credible region---the smallest region with pre-chosen posterior probability. For both optimization problems, the optimal region has constant likelihood on its boundary. We discuss criteria for assigning prior probabilities to regions, and illustrate the concepts and methods with several examples.
We consider the implementation of a symmetric informationally complete probability-operator measurement (SIC POM) in the Hilbert space of a d-level system by a two-step measurement process: a diagonal-operator measurement with high-rank outcomes, followed by a rank-1 measurement in a basis chosen in accordance with the result of the first measurement. We find that any Heisenberg-Weyl group-covariant SIC POM can be realized by such a sequence where the second measurement is simply a measurement in the Fourier basis, independent of the result of the first measurement. Furthermore, at least for the particular cases studied, of dimensions 2, 3, 4, and 8, this scheme reveals an unexpected operational relation between mutually unbiased bases and SIC POMs; the former are used to construct the latter. As a laboratory application of the two-step measurement process, we propose feasible optical experiments that would realize SIC POMs in various dimensions.
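For d = 2 the Heisenberg-Weyl covariant SIC POM is easy to check numerically. The sketch below builds the orbit of the fiducial state with Bloch vector (1,1,1)/sqrt(3) under the qubit Heisenberg-Weyl group and verifies the equiangular overlaps of 1/3 and the completeness relation (it checks the SIC structure only, not the two-step optical scheme):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Heisenberg-Weyl group for d = 2 (up to phases): I, X, Z, XZ
group = [np.eye(2, dtype=complex), sx, sz, sx @ sz]

# fiducial ket: the +1 eigenvector of (sx + sy + sz)/sqrt(3)
w, V = np.linalg.eigh((sx + sy + sz) / np.sqrt(3))
fid = V[:, np.argmax(w)]

kets = [U @ fid for U in group]
overlaps = [abs(np.vdot(kets[i], kets[j]))**2
            for i in range(4) for j in range(4) if i != j]

# POM elements are |psi><psi| / d; their sum must be the identity
total = sum(np.outer(k, k.conj()) for k in kets) / 2
```

All twelve pairwise overlaps equal 1/(d + 1) = 1/3 and the scaled projectors resolve the identity, which is the defining SIC property that the two-step measurement scheme implements in the laboratory.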
We propose an experiment that realizes a symmetric informationally complete (SIC) probability-operator measurement (POM) in the four-dimensional Hilbert space of a qubit pair. The qubit pair is carried by a single photon as a polarization qubit and a path qubit. The implementation of the SIC POM is accomplished by means of linear optics. The experimental scheme exploits a new approach to SIC POMs that uses a two-step process: a measurement with full-rank outcomes, followed by a projective measurement on a basis that is chosen in accordance with the result of the first measurement. The basis of the first measurement and the four bases of the second measurement are pairwise unbiased --- a hint at a possibly profound link between SIC POMs and mutually unbiased bases.