Research Interests

My research lies approximately, though far from entirely, in the intersection of (quantum) computational complexity, quantum foundations, and (quantum) gravity. In a line, I aim to understand the physical limitations of efficient computation.

There is a more-or-less complete list of my published and unpublished works here, and a similarly complete list of my research talks here. My publications are also listed (albeit incompletely) on INSPIRE, Google Scholar, and ORCID. Below are descriptions of some of the projects that I'm currently working on.

Entanglement and Quantum Advantage

If \(\mathsf{BPP} \neq \mathsf{BQP}\), then why? What is it about quantum mechanics that enables the advantage? An oft-cited reason is entanglement—that quintessentially quantum correlation that seems to underlie every fast quantum algorithm we know. But is entanglement actually necessary for \(\mathsf{BPP} \neq \mathsf{BQP}\)? That is, if there is a language \(L \in \mathsf{BQP} \backslash \mathsf{BPP}\), then does every efficient quantum computer that decides \(L\) necessarily entangle its input?

Of course, this matter is trivial in the context of pure state quantum computation (i.e., quantum computation where at each stage the state of the computer is pure) because without entanglement no correlations can manifest. However, in the context of mixed state quantum computation, things are not so clear. Indeed, it may be that a quantum computer whose state at each stage is mixed and unentangled can still achieve a computational advantage.

From a parameter-counting point of view, this is actually plausible, because separable mixed states require just as many parameters to describe as do general (possibly entangled!) mixed states. This follows from a beautiful theorem by Braunstein et al.: Given an arbitrary \(n\)-qubit state \(\sigma\) (possibly very entangled!), there is \(\epsilon > 0\) such that the state \[ \rho = (1-\epsilon) \frac{I}{2^n} + \epsilon \sigma \] is separable. Therefore, the separable mixed state \(\rho\) requires the same number of parameters for an exact description as does the general state \(\sigma\). Incidentally, this problem is one of Aaronson's "Ten Semi-Grand Challenges for Quantum Computing Theory".
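For intuition in the two-qubit case, here is a toy illustration of my own (not the Braunstein et al. argument itself): one can watch a Bell state become separable as it is mixed toward the maximally mixed state. For two qubits, the Peres-Horodecki (PPT) criterion is necessary and sufficient for separability, and the mixture \(p\,|\Phi^+\rangle\langle\Phi^+| + (1-p)\,I/4\) is separable exactly when \(p \le 1/3\):

```python
import numpy as np

def partial_transpose(rho):
    """Transpose the second qubit's indices of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)            # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def werner(p):
    """Mixture of the Bell state |Phi+> with the maximally mixed state."""
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    return p * np.outer(bell, bell) + (1 - p) * np.eye(4) / 4

# Peres-Horodecki: for two qubits, rho is separable iff its partial
# transpose is positive semidefinite. The threshold here is p = 1/3.
for p in (0.30, 0.40):
    min_eig = np.linalg.eigvalsh(partial_transpose(werner(p))).min()
    print(p, "separable" if min_eig >= 0 else "entangled")
```

At \(p = 0.30\) the state is still separable despite being built from a maximally entangled state, which is the qualitative content of the separable-ball phenomenon.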

Restricted Quantum Circuits

The Gottesman-Knill theorem proves that Clifford circuits are efficiently classically simulable. It is natural to wonder, then, what small tweaks to Clifford circuits suddenly render them hard to simulate classically (or at least plausibly hard, in the sense that if they were efficiently classically simulable, then something extraordinary, like a collapse of the polynomial hierarchy, would occur).
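The bookkeeping behind the Gottesman-Knill theorem is easy to sketch: track how each stabilizer generator, stored as X/Z bit vectors plus a sign, transforms under conjugation by each gate. Below is a minimal sketch of my own (supporting only H and CNOT, which is enough to prepare a Bell state from \(|00\rangle\)); a full simulator would also track destabilizers and handle measurements:

```python
class Pauli:
    """An n-qubit Pauli operator stored as X/Z bit vectors plus a sign."""
    def __init__(self, x, z, sign=1):
        self.x, self.z, self.sign = list(x), list(z), sign

    def h(self, q):
        # Conjugation by H swaps X and Z on qubit q; H Y H = -Y flips the sign.
        if self.x[q] and self.z[q]:
            self.sign *= -1
        self.x[q], self.z[q] = self.z[q], self.x[q]

    def cnot(self, c, t):
        # CNOT propagates X on the control to the target and Z on the
        # target to the control (the Aaronson-Gottesman update rule).
        if self.x[c] and self.z[t] and self.x[t] == self.z[c]:
            self.sign *= -1
        self.x[t] ^= self.x[c]
        self.z[c] ^= self.z[t]

    def __str__(self):
        names = {(0, 0): 'I', (1, 0): 'X', (0, 1): 'Z', (1, 1): 'Y'}
        return ('+' if self.sign == 1 else '-') + \
            ''.join(names[(a, b)] for a, b in zip(self.x, self.z))

# |00> is stabilized by ZI and IZ; conjugate the generators through the circuit.
stabs = [Pauli([0, 0], [1, 0]), Pauli([0, 0], [0, 1])]
for P in stabs:
    P.h(0)         # Hadamard on qubit 0
    P.cnot(0, 1)   # CNOT from qubit 0 to qubit 1
print([str(P) for P in stabs])   # → ['+XX', '+ZZ'], the Bell state stabilizers
```

Since each generator is updated in time polynomial in the number of qubits, a polynomial-size Clifford circuit is simulable in polynomial time, which is exactly what makes the "small tweak" question interesting.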

In this vein, mostly with my good friend and colleague Chai Karamchedu, we are investigating the power of a myriad of different "souped-up" Clifford circuits, including \(U\)-conjugated Clifford circuits and what we are calling \(U\)-infixed Clifford circuits.

At the same time, we are studying whether the proven advantage of shallow quantum circuits is robust to changes in the underlying gate set. Interestingly, known circuit compilation algorithms suggest that it might not be, since they incur logarithmic depth overhead. Last but not least, having long been fascinated by the \(\mathsf{NC}\) hierarchy, we are trying to learn new facts about polylogarithmic depth quantum circuits, their sampling problems, and, ultimately, the \(\mathsf{QNC}\) hierarchy.

Is "Quantum" just Bell?

Quantum mechanics can be construed as a generalization of classical probability theory that is based on the theory of Hilbert spaces. In this view, much of what distinguishes quantum theory from classical theory is the set of correlations that can be achieved in different causal scenarios (which is the physical setting of non-local games).

Indeed, Bell's theorem proves that in the correlation scenario where Alice and Bob are spacelike separated but share a common quantum state, with each system acted on by independent POVMs, the set of correlations they can achieve is strictly larger than in any classical theory. Similar results hold for a panoply of other correlation scenarios. However, by applying a series of "reductions" (or, more formally, projections in the probability simplex), many of the scenarios that exhibit such a "quantum-classical gap" reduce to the Bell correlation scenario. This raises the question of whether the correlation scenarios that exhibit a quantum-classical gap always reduce to the Bell scenario. This is joint work with many folks at the Perimeter Institute.
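The canonical instance of such a gap is the CHSH scenario, where the optimal quantum strategy achieves Tsirelson's bound \(2\sqrt{2}\) against the classical bound of 2. A quick numerical check (a sketch of my own, not part of the project above), using the correlator \(E(\alpha,\beta) = \langle \Phi^+ | A(\alpha) \otimes B(\beta) | \Phi^+ \rangle\):

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def obs(theta):
    """A +/-1-valued observable along angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    """Correlator <A(a) x B(b)> in the Bell state; equals cos(a - b) here."""
    return bell @ np.kron(obs(a), obs(b)) @ bell

# Optimal CHSH measurement angles for this state.
a0, a1 = 0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(S)   # ≈ 2*sqrt(2) ≈ 2.828, beating the classical bound of 2
```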

Generalizations of Tsirelson's Problem

Tsirelson's problem formalizes the confusing fact that in quantum mechanics there are two obvious but ostensibly different ways to talk about measurements on two systems that are spatially separated. Suppose, for example, that Alice and Bob are in different rooms, and that they share the quantum state \(\rho_{AB}\). They generate some set of correlations, \(\{p(a,b \mid x,y)\}\) for some set of measurement outcomes \((a,b)\), which depend on the measurement settings \((x,y)\). Quantumly, then, one can describe \(\{p(a,b \mid x,y)\}\) as either stemming from probabilities like \(\text{tr}(\rho_{AB} M_{a\mid x} N_{b \mid y})\) for commuting POVMs \(\{M_{a\mid x}\}_{ax}\) and \(\{N_{b\mid y}\}_{by}\) on the combined Hilbert space of Alice and Bob, or as stemming from probabilities like \(\text{tr}(\rho_{AB} M_{a\mid x} \otimes N_{b\mid y})\) for some POVM \(\{M_{a\mid x}\}_{ax}\) on Alice's system and some POVM \(\{N_{b\mid y}\}_{by}\) on Bob's system. Tsirelson's problem asks whether these two different ways of talking about the statistics of spatially separated systems are actually different or not. Evidently, this has applications in trying to simulate quantum field theory using a finite number of qubits, since QFT employs the principle of microcausality (observables at spacelike separation commute), whereas in standard discrete quantum information theory, we tend to factorize the Hilbert spaces of spatially separated systems.
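To fix notation (I am adopting the common labels \(C_{qs}\) and \(C_{qc}\), which is a convention of mine here rather than something fixed above), the two sets of correlations can be written as \[ C_{qs} = \left\{ \operatorname{tr}\!\left(\rho_{AB}\, M_{a\mid x} \otimes N_{b\mid y}\right) \right\}, \qquad C_{qc} = \left\{ \operatorname{tr}\!\left(\rho\, M_{a\mid x} N_{b\mid y}\right) : [M_{a\mid x}, N_{b\mid y}] = 0 \right\}, \] where the tensor-product form always satisfies the commutation constraint, so \(C_{qs} \subseteq C_{qc}\). Tsirelson's problem asks, roughly, whether this inclusion is strict (more precisely, whether \(\overline{C_{qs}} = C_{qc}\)).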

In 2020, the now-famous paper titled \(\mathsf{MIP}^* = \mathsf{RE}\) proved that they are in fact different. This raises the question, at least in my head, of whether the same holds true in more general correlation scenarios.

The Complexity of Generalized Probability Theories

As I said above, quantum mechanics can be construed as a generalization of classical probability theory that is based on the theory of Hilbert spaces. However, there is an even more general framework, called generalized probability theory, of which both quantum and classical probability theory are special cases.

Piggybacking off of work by Barrett et al., I am interested in exploring new complexity classes associated with generalized probability theories. In the same way that results about \(\mathsf{BQP}\) have influenced complexity theory, I have a hunch that new results about the so-called class \(\mathsf{BGP}\) will, too. For example, how powerful are polylogarithmic-depth generalized probabilistic circuits?
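As a toy illustration of why GPTs are computationally interesting (my own example, not drawn from the work above): boxworld, the GPT studied by Barrett, admits the Popescu-Rohrlich (PR) box, a valid GPT state whose CHSH value hits the algebraic maximum of 4, beyond anything quantum theory allows:

```python
import itertools

def pr_box(a, b, x, y):
    """PR-box distribution: outputs are locally uniform but satisfy a XOR b = x AND y."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def correlator(x, y):
    """E(x, y) = sum over a, b of (-1)^(a XOR b) * p(a, b | x, y)."""
    return sum((-1) ** (a ^ b) * pr_box(a, b, x, y)
               for a, b in itertools.product((0, 1), repeat=2))

S = correlator(0, 0) + correlator(0, 1) + correlator(1, 0) - correlator(1, 1)
print(S)   # 4.0: above Tsirelson's quantum bound of 2*sqrt(2)
```

A theory whose states include such a box is strictly more correlated than quantum theory, which is one reason I suspect the associated circuit classes will behave differently from \(\mathsf{BQP}\).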

Odd Perfect Numbers and Ramanujan Sums

For some inexplicable reason, the odd perfect numbers have captivated me for years. It is something to do with their profound impenetrability that calls to me.

Given a positive integer \(n\), let \[ \sigma_\ell(n) = \sum_{d \mid n} d^\ell \] be the sum of the \(\ell\)th powers of the divisors \(d\) of \(n\). We say \(n\) is perfect if and only if \(\sigma_1(n) = 2n\). The first four perfect numbers are 6, 28, 496, and 8128 (sequence A000396 in the OEIS). Whether every perfect number is even, equivalently, whether an odd perfect number exists, has been an open question since Euclid.
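As a quick sanity check (a minimal sketch of my own; the trial-division \(\sigma_\ell\) below is just the definition, made fast by pairing each divisor \(d\) with \(n/d\)), one can recover the first four perfect numbers directly:

```python
def sigma(n, ell=1):
    """sigma_ell(n): sum of the ell-th powers of the divisors of n."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d ** ell               # divisor d
            if d != n // d:
                total += (n // d) ** ell    # paired divisor n/d
        d += 1
    return total

perfect = [n for n in range(1, 10000) if sigma(n) == 2 * n]
print(perfect)   # → [6, 28, 496, 8128]
```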

While we don't expect to crack this nut, my colleague and friend Chai Karamchedu and I have a research agenda aimed at uncovering new results about perfect numbers. We have opted for a more analytic point of view. This is made possible by the one and only Srinivasa Ramanujan, who, in 1918, introduced what are today known as Ramanujan sums, given by \[ c_q(n) = \sum_{k \in (\mathbb{Z} / q\mathbb{Z})^\times} e^{2\pi i k n / q}. \] These sums are remarkable because many well-known arithmetic functions can be expressed as linear combinations of Ramanujan sums. For example, the following is an exact equality: \[ \sigma_k(n) = n^k \zeta(k + 1) \sum_{q \geq 1} \frac{c_q(n)}{q^{k + 1}}. \]

Using sums like this, we have developed a theory of continuous extensions of Ramanujan functions, and our hope is to use the tools of complex analysis to gain a better understanding of the distribution of odd perfect numbers. While this is definitely a long shot, the approach is, as far as we know, novel, so why not try?
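The displayed identity can be checked numerically. The sketch below (my own illustration, not part of our agenda) computes \(c_q(n)\) via Hölder's formula \(c_q(n) = \mu(q/g)\,\varphi(q)/\varphi(q/g)\) with \(g = \gcd(q, n)\), rather than from the exponential sum, and truncates the series for \(\sigma_1(12) = 28\):

```python
import math

def sieve_mobius_phi(Q):
    """Sieve the Mobius function mu and Euler's totient phi for 1..Q."""
    mu = [1] * (Q + 1)
    phi = list(range(Q + 1))
    for p in range(2, Q + 1):
        if phi[p] == p:                      # p is prime (phi untouched so far)
            for m in range(p, Q + 1, p):
                phi[m] -= phi[m] // p        # phi *= (1 - 1/p)
                mu[m] *= -1                  # one more prime factor
            for m in range(p * p, Q + 1, p * p):
                mu[m] = 0                    # squareful => mu = 0
    return mu, phi

def c(q, n, mu, phi):
    """Ramanujan sum via Hoelder's formula; the quotient phi(q)/phi(q/g) is an integer."""
    g = math.gcd(q, n)
    return mu[q // g] * (phi[q] // phi[q // g])

Q, n, k = 10**5, 12, 1
mu, phi = sieve_mobius_phi(Q)
zeta2 = math.pi ** 2 / 6                     # zeta(k + 1) for k = 1
approx = n**k * zeta2 * sum(c(q, n, mu, phi) / q**(k + 1) for q in range(1, Q + 1))
print(round(approx, 3))                      # ≈ 28 = sigma_1(12)
```

The series converges only slowly (the tail after \(q = Q\) is \(O(1/Q)\)), which is part of why the analytic point of view takes real work.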

Modified Gravity and Higher Dimensional Black Holes

While I no longer research these areas, I invite you to read this Harvey Mudd College news article about me and my former work.