Automatic Quantum Defect Detection

This post is excerpted from my response to the OpenAI Scholars application, which asked me to describe a challenging software system that I built.

On the Quantum Software Team at Rigetti Computing, I was responsible for building software that measured and calibrated the quantum computer. One of the primary challenges that we faced was maintaining the stability of the quantum computer. Over time, quantum defects would appear that disrupted qubits. Once disrupted, these qubits were no longer available to our users, and once enough defects accumulated, we had to take the entire quantum computer offline. These defects plagued every approach to superconductor-based quantum computing, affecting Rigetti, Google, and IBM.

I decided to focus my attention on understanding and solving the defect problem. In addition to reading papers and talking to researchers at Rigetti, I contacted researchers at Google and in Sweden to understand the latest thinking on the issue. I wanted to enable Rigetti to collect data on how the number, type, and intensity of the defects varied with the fabrication process used to produce the quantum device. Because the defects appeared randomly, and we wanted to understand their short-timescale dynamics, we had to run experiments continuously to detect them and track their evolution. These experiments looked for sharp drops in coherence time and changes in the spectral response. We collected tens of thousands of coherence-time sequences and spectral responses per day, far more than we could analyze manually, so it was necessary to build an automated system.

I built the framework for automatically running experiments, detecting defects, and storing results, as sketched below. Detecting defects required understanding and implementing two different statistical techniques: changepoint detection and nested model selection. Nothing in the existing literature mentioned applying these techniques to defect detection, so this was a novel approach.
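
In outline, the framework was a loop over qubits and experiment types, with detection and storage hanging off each measurement. The sketch below is purely illustrative, not Rigetti's internal code (which is not public); the `run_experiment`, `detect_defect`, and `store` callables and the `Measurement` type are hypothetical names for the pieces described above.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    qubit: int
    kind: str    # e.g. "coherence_time" or "spectral_response"
    data: list   # raw experiment output

def monitoring_loop(qubits, run_experiment, detect_defect, store):
    """Continuously measure each qubit, run the defect detectors on the
    results, and persist both the data and the detector's verdict."""
    while True:
        for qubit in qubits:
            for kind in ("coherence_time", "spectral_response"):
                measurement = Measurement(qubit, kind, run_experiment(qubit, kind))
                store(measurement, defect=detect_defect(measurement))
```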

I used online changepoint detection to catch drops in a qubit's coherence time as soon as they occurred, so that we could quickly reconfigure which qubits were available to customers. I implemented the algorithm from Bayesian Online Changepoint Detection (2007) by Ryan Adams and David MacKay. The hardest part was confirming that the data generated by our experimental system satisfied the statistical assumptions of the algorithm; in particular, I had to verify that the distribution of coherence times fit the model of a piecewise-constant mean with Gaussian noise.
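
For reference, here is a minimal sketch of the run-length recursion from that paper, specialized to the piecewise-constant-mean, known-noise Gaussian model mentioned above. This is not our production code, and the hazard rate and prior parameters (`hazard`, `mu0`, `tau0`, `sigma`) are illustrative placeholders.

```python
import numpy as np
from scipy.stats import norm

def bocd_gaussian(data, hazard=1 / 250, mu0=0.0, tau0=1.0, sigma=1.0):
    """Bayesian online changepoint detection (Adams & MacKay, 2007) for
    piecewise-constant data with Gaussian noise of known scale `sigma`.

    Returns R, where R[t, r] is the posterior probability that the run
    length (time since the last changepoint) equals r after t points.
    """
    T = len(data)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0  # before any data, the run length is 0 with certainty

    # Conjugate Normal posterior over the segment mean, one entry per
    # candidate run length.
    mean = np.array([mu0])
    var = np.array([tau0 ** 2])

    for t, x in enumerate(data, start=1):
        # Predictive probability of x under each run-length hypothesis.
        pred = norm.pdf(x, mean, np.sqrt(var + sigma ** 2))

        # Either the current run grows by one, or a changepoint resets it.
        growth = R[t - 1, :t] * pred * (1 - hazard)
        R[t, 0] = (R[t - 1, :t] * pred * hazard).sum()
        R[t, 1:t + 1] = growth
        R[t] /= R[t].sum()

        # Normal-Normal conjugate update for each grown run length, then
        # prepend the prior for the fresh run-length-0 hypothesis.
        new_var = 1.0 / (1.0 / var + 1.0 / sigma ** 2)
        new_mean = new_var * (mean / var + x / sigma ** 2)
        mean = np.concatenate(([mu0], new_mean))
        var = np.concatenate(([tau0 ** 2], new_var))

    return R
```

A sharp rise in `R[t, 0]` signals a likely changepoint, so thresholding that column is one simple way to trigger the qubit reconfiguration described above.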

I used nested model selection to determine how many Lorentzian peaks appeared in a given spectral response. More than one Lorentzian peak would indicate a defect, and the frequency of each peak would tell us in which frequency bands defects were most likely to occur; knowing the problematic bands would let us design systems to avoid them. I used LMFIT to determine which Lorentzian mixture model minimized the Bayesian Information Criterion (BIC). The hardest part was deciding which measure of model complexity to use: the Akaike Information Criterion (AIC), the BIC, or another criterion. After extensive research, I learned that the BIC was the most appropriate for nested models.
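
As a rough illustration of how such a comparison can be set up with LMFIT's built-in LorentzianModel (its ModelResult exposes both `aic` and `bic`), here is a sketch. The function name, the `max_peaks` cap, and the initial-guess heuristics are illustrative choices, not our production configuration. The BIC (k·ln n − 2·ln L̂ for k parameters and n data points) penalizes each added peak more heavily than the AIC at realistic sample sizes, which is why it tends to be preferred for nested models.

```python
import numpy as np
from lmfit.models import ConstantModel, LorentzianModel

def best_lorentzian_mixture(freqs, response, max_peaks=3):
    """Fit mixtures of 1..max_peaks Lorentzians (plus a constant
    background) and return the fit that minimizes the BIC."""
    span = freqs[-1] - freqs[0]
    best = None
    for n_peaks in range(1, max_peaks + 1):
        model = ConstantModel(prefix='bg_')
        params = model.make_params(c=response.min())
        for i in range(n_peaks):
            peak = LorentzianModel(prefix=f'p{i}_')
            # Crude initial guesses: spread the centers evenly across
            # the scanned band. (lmfit's `amplitude` is the peak area.)
            params.update(peak.make_params(
                center=freqs[0] + (i + 0.5) * span / n_peaks,
                sigma=span / 20,
                amplitude=np.ptp(response),
            ))
            model = model + peak
        result = model.fit(response, params, x=freqs)
        if best is None or result.bic < best.bic:
            best = result
    return best
```

In this sketch, a winning fit with more than one peak component would flag the response as defect-affected, and the fitted `p{i}_center` values would give the defect frequencies.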

Collaboration with Jen-Hao Yeh and Marcus da Silva helped this project succeed. Marcus and I discussed approaches to model selection and their trade-offs. Jen-Hao manually selected 1,600 models to compare against the automated system, verifying that it stayed within acceptable error bounds.