Quantum Computing – The New Stack

We all have our own proven ways to increase productivity and focus. I grew up in a house filled with ’60s music, video games and Cubs baseball broadcasts. So it’s perhaps unsurprising that, over a career that has oscillated between computational physics, app development, data science and engineering but always involved coding, I’ve come to love working with a game on in the background – especially during those Friday afternoon games at Wrigley.

For me, these things just go together. The same goes for quantum computing and machine learning (ML).

Motivating Quantum + Machine Learning

Brian Dellabetta

Brian Dellabetta is a Sr ML-DevOps Engineer at Zapata Computing. His doctorate focused on simulating the electronic behavior of novel topological condensed-matter systems. Prior to joining Zapata, he worked as a full-stack software developer, machine learning engineer, and data engineer in education technology and information security. He currently resides in Chicago, where he occasionally teaches at DePaul University’s College of Computing and Digital Media.

Quantum computing offers a whole new way of thinking about computing. The classical world of computing—that of digital circuits, discrete logic, and instruction sets—lends itself very well to business logic and information systems. This is very unlikely to change, even with a fault-tolerant quantum computer. But the natural world is not based on discrete logic and if/else statements.

Classical computing has been the only way to compute for so long that it can be hard to appreciate that there are fundamentally different approaches, let alone that they can make certain tasks extremely easy on a quantum computer (and others extremely difficult).

Machine learning provides an excellent testing ground, as it falls outside the realm of rule engines or discrete logic, and there are many unaddressed limitations with current techniques. There is no denying that machine learning has flourished and seen massive adoption in the industry, especially over the past 15 years.

However, many models have become so complex and unwieldy that, according to a recent article in IEEE Spectrum, ML researchers could be “nearing the frontier of what their tools can achieve.” Why are our brains so much more efficient and robust at image processing than the latest and greatest deep learning models? Is our approach making a relatively simple calculation very difficult?

We are free to explore but bound by our assumptions. From decision trees to regression, the legacy of classical computing is immediately apparent in machine learning. Even neural networks, whose neurons activate when their input exceeds a certain threshold, resemble the behavior of a transistor. For any new approach, we need to rethink every part of the learning algorithm, from encoding the data, to preparing an architecture, to training against a set of data and decoding the result.
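To make that transistor analogy concrete, here is a minimal sketch in Python (using NumPy; the weights, bias and inputs are purely illustrative) of a single artificial neuron whose hard-threshold activation only “switches on” when the weighted input exceeds a threshold:

```python
import numpy as np

def step_activation(x, threshold=0.0):
    """Fire (1) only when the input exceeds the threshold -- transistor-like behavior."""
    return np.where(x > threshold, 1.0, 0.0)

def perceptron(inputs, weights, bias):
    """A single artificial neuron: weighted sum followed by a hard threshold."""
    return step_activation(np.dot(inputs, weights) + bias)

# Example: a neuron that "switches on" only when both inputs are active (logical AND).
weights = np.array([1.0, 1.0])
bias = -1.5
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), "->", perceptron(np.array([a, b]), weights, bias))
```

Modern networks replace the hard step with smoother activations, but the switch-like lineage is easy to see.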

All of this makes for a very exciting and unique challenge. The silver lining is that there’s actually a lot of overlap between the prerequisites for quantum machine learning (QML) and most modern enterprise data architectures. While the algorithms will change, the way a company manages the lifecycles of its data and models (the business logic side of things) probably won’t change.

Change is constant, in life as in ML (and QML)

It is important to keep in mind the dynamic nature of ML. Research is constantly evolving and improving with new architectures and algorithms, so QML is, in a sense, competing with a moving target. It is our collective responsibility to stay on top of research in both areas to understand the pros and cons of each and come up with hybrid architectures that take advantage of both, while benchmarking against the best classical algorithms.

Being a quantum scientist is not a prerequisite

Just as an ML engineer doesn’t need to know the semiconductor physics that underlie transistor behavior, QML engineers don’t need to know the quantum physics of a particular hardware implementation. Instead, we can focus on the basic types of operations available to us, called gates in classical and quantum computing, to create end-user applications that can be used by anyone.
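As a small illustration of working at the gate level rather than the physics level, here is a sketch that assumes the Qiskit library is installed: it builds a two-qubit circuit from just two gates and inspects the resulting state on a classical simulator, with no reference to how the qubits are physically realized.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Build a two-qubit circuit out of basic gates, with no reference to the
# underlying hardware physics.
qc = QuantumCircuit(2)
qc.h(0)      # Hadamard: put qubit 0 into an equal superposition
qc.cx(0, 1)  # CNOT: entangle qubit 1 with qubit 0

print(qc.draw())

# Inspect the resulting state on a classical simulator.
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # roughly {'00': 0.5, '11': 0.5}
```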

QML applications and methods should be as user-friendly as possible. Consider the classical machine learning libraries PyTorch and TensorFlow, where you don’t necessarily need to know how backpropagation works to use them effectively and translate results into real business insights. The same goes for good QML software.
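In the same spirit, here is a minimal PyTorch sketch (with made-up toy data) in which backpropagation is a single loss.backward() call; the gradient math stays hidden behind the library:

```python
import torch
from torch import nn

# A tiny regression model; loss.backward() handles backpropagation for us.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

X = torch.randn(64, 4)            # toy data, purely illustrative
y = X.sum(dim=1, keepdim=True)    # target: the sum of the features

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()   # backpropagation happens here, no manual calculus needed
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```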

That said, there are certainly varying degrees of complexity depending on the use case and industry. If you think about the work that’s done in computational chemistry or some of the other physical sciences, it’s a very different language, a different vocabulary, and a much deeper scientific knowledge base is required.

It’s the models that matter

Outside of data management, machine learning in business is all about finding the models or architectures that best align with your dataset and the insights you hope to derive from it. These models can be classical, quantum or involve a hybrid approach. In general, however, the process includes the following steps (a rough classical sketch in code follows the list):

  • Start with a set of data and a goal: what do we hope to learn from this project?
  • Exploration: what’s going on in the dataset? How “clean” is it, and how much munging is needed to map everything into a standard format? (The adage that 95% of a data scientist’s job is munging and cleaning data is absolutely true.)
  • Feature engineering: which aspects of the dataset are important? Does the data contain features that reasonably correspond to a desired output or label? Information from other data sources may need to be integrated to form a reliable model.
  • Model building: which algorithms or architectures might be most beneficial? This is where hybrid and QML architectures come into play.
  • Training: many tools are available for hyperparameter tuning to automate this process and ensure that you get the most out of your model.
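For the classical baseline, those steps might look roughly like the following scikit-learn sketch; the file name, column names and parameter grid are all hypothetical placeholders:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Exploration: load the data and get a feel for how clean it is.
df = pd.read_csv("customer_data.csv")   # hypothetical dataset
print(df.describe())
print(df.isna().sum())                  # how much munging is needed?

# Feature engineering: pick columns that plausibly relate to the label.
features = ["age", "tenure_months", "monthly_spend"]   # illustrative names
X = df[features].fillna(df[features].median())
y = df["churned"]

# Model building: the classical model here could later be swapped for a
# hybrid or quantum model without changing the surrounding pipeline.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(random_state=0)),
])

# Training: automated hyperparameter tuning over a small grid.
param_grid = {"clf__n_estimators": [100, 300], "clf__max_depth": [None, 10]}
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("test score:", search.score(X_test, y_test))
```

Keeping the model inside a pipeline object is what makes the later swap to a hybrid or quantum model a local change rather than a rewrite.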

Once identified, these use cases can then be put into production. The model is a living thing, just like the software is a living thing. You’ll always be updating, versioning, and tweaking things for new features, new datasets, or new hyperparameters. A proven and automated MLOps pipeline is essential for this lifecycle.
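What that pipeline looks like varies from team to team, but as one possible shape, here is a sketch using MLflow (a common open source experiment-tracking tool; the run name, parameters and synthetic data are all illustrative) in which each retraining run is logged as a tracked, versioned model:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data so the sketch runs on its own.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each retraining run becomes a tracked, versioned artifact.
with mlflow.start_run(run_name="churn-model-v2"):        # illustrative name
    params = {"C": 0.5, "max_iter": 500}
    model = LogisticRegression(**params).fit(X_train, y_train)

    mlflow.log_params(params)                             # hyperparameters
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, artifact_path="model")  # the versioned model
```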

Cubs and Quantum Computing Have More in Common Than You Think

The novelty and potential of quantum computing, like ML before it, makes it a fascinating field of research. As with the graphics and speed of the video games I played as a teenager, it’s hard to believe how much things can change in such a short time.

When I was in college, quantum computing — and the Cubs winning a World Series — were pipe dreams. I rolled my eyes every time these topics came up and thought they would never happen. Today, the Cubs are only a few years removed from winning the World Series, and there is an entire industry dedicated to quantum computing.

Never before has quantum computing attracted so much attention from experts in machine learning, numerical optimization and quantum simulation.

The impact of QML will grow with our theoretical understanding as much as it will grow with the hardware. We are truly in uncharted territory. Then again, the Cubs were in Game 7 of the 2016 World Series, too, and it worked out pretty well. I hope the same for QML.

To learn more about what quantum can do for ML, check out the podcast I did recently with my colleague Luis Serrano, Ph.D. If terms like “probability distributions” and “generative models” excite you, then you will love this discussion!

Image by Jennifer LaRue from Pixabay.
