
Episode 15: Advanced Quantum Software Engineering

By Brinley Macnamara and Eliza Mace

Show Notes

Want to dive even further into the details of quantum software engineering? In this follow-on to Episode 14, we delve into the principles of superposition, entanglement, and interference – all from the perspective of software engineering. Tune in and then check out guest speaker Richard Preston’s online course on quantum software development at stem.mitre.org/quantum

Transcript


[00:00:00] Brinley Macnamara: Hey, listener, Brinley here. Today we’re going to be doing something a little different. Since we already gave you the basics of quantum software engineering in our last episode, we’re going to delve into more advanced topics in quantum software engineering in this episode. If you’re new to quantum computing and haven’t listened to our previous episode, I’d highly recommend putting this one on pause and listening to that one first. It’s listed as episode 14 in our feed. Thanks for bearing with me through this disclaimer. Let’s dive in.

All right, Richard. So my first question is, on behalf of everyone who’s listening to this podcast who might not be an expert in quantum physics, do you need to be a quantum physicist to work in the field of quantum computing?

[00:00:39] Richard Preston: No, you do not. So historically that has definitely been the case. But for example, my background is mostly in software engineering not quantum physics. I took a modern physics class in college, but that was about the extent of it. So yeah, I learned the principles of quantum computing by approaching it from a software perspective and learning how to program quantum computers. And I think that’s a pretty accessible way for people to get into the field as opposed to having to go out and get a PhD in physics.

[00:01:05] Brinley Macnamara: Hello and welcome to MITRE’s Tech Futures podcast. I’m your host, Brinley Macnamara. At MITRE, we offer a unique vantage point and objective insights that we share in the public interest. And in this podcast series, we showcase emerging technologies that will affect the government and our nation in the future.

Today we’re going to be using tools and concepts from quantum software engineering to teach you about quantum mechanics. Our goal is to prove that you can understand the fundamental principles of quantum mechanics, even if you’re not a quantum physicist. Now, I know this is a tall order; quantum mechanics is so unintuitive that it even spooked Einstein. That said, I think you’ll find that the software engineer’s perspective on quantum mechanics is a powerful lens that will make this esoteric topic seem, well, a whole lot less spooky.

But before we begin, I want to say a huge thank you to Dr. Kris Rosfjord, the Tech Futures innovation area leader in MITRE’s Research and Development program. This episode would not have happened without her support. Now, without further ado, I bring you MITRE’s Tech Futures podcast, episode number 15.

If you turn to a quantum physicist at a bar and ask them, “What is quantum mechanics?” they’ll tell you that quantum mechanics boils down to three fundamental principles: the principle of superposition, the principle of entanglement, and the principle of interference. In advanced physics courses, these concepts are usually taught from a mathematical perspective.

In other words, students are taught that these quantum mechanical phenomena boil down to systems of equations. But if you’re anything like me and systems of equations just aren’t your cup of tea, then you’re in luck. Because in this podcast we’re going to define superposition, entanglement, and interference from the software perspective, but we’re getting a little ahead of ourselves.

What do superposition, entanglement, and interference have to do with software engineering? For the first time in human history, software engineers are able to leverage these quantum mechanical phenomena to solve problems that we humans have never been capable of solving. How are they able to do this exactly? Enter the quantum computer!

[00:03:21] Richard Preston: First of all, it’s useful to define what we mean by classical or conventional computers so you can understand how they differ from quantum computers.

[00:03:29] Brinley Macnamara: That’s Richard Preston talking. You heard him at the start of this podcast. He’s an engineer at MITRE who specializes in quantum software engineering.

He’s also a group leader in MITRE’s network analytics department.

[00:03:40] Richard Preston: So classical computers, these are the types of computers that we use every day. So like your laptop, your smartphone, even like microchips in your car, pretty much any type of computer that you could think of is what we would call a classical computer.

So the thing that characterizes classical computers is that they process information using the principles of digital logic. The idea behind digital logic is that we have discrete values which store information – maybe you’ve heard of bits. So we have these discrete bits, discrete pieces of information, that we can manipulate to perform some calculation.

So that’s classical computing. A quantum computer is something that’s totally different. It does not operate on the principles of digital logic at all. Instead, it operates on the principles of quantum mechanics. So the thing that’s really important to understand about quantum computing, is that it’s not simply a fast version of a conventional or classical computer. It’s something totally new. It’s an entirely new way of performing computation that is capable of solving specific problems more efficiently than the classical computers that we use today.

[00:04:49] Brinley Macnamara:

Alright, you officially know what a quantum computer is. Give yourself a pat on the back. You now have a great topic for your next dinner conversation, but our goal for this podcast is to go deeper. We want to teach you enough so you can hold a conversation about quantum mechanics with your favorite physicist friend. So let’s return to the hypothetical bar where you’re sitting next to the quantum physicist who’s telling you about quantum mechanics.

Most likely they’ll start by explaining the principle of superposition, so that’s what we’ll start with in this podcast as well. Now, in the context of quantum computing, superposition goes hand in hand with the fundamental unit of information in a quantum computer: the quantum bit, also known as a qubit. How does the fundamental unit of information in a quantum computer, the qubit, differ from the fundamental unit of information in a classical computer, the bit?

[00:05:39] Gen Clark: Yeah, so I guess the main difference between the two is really the concept of superposition. So a classical bit can exist in two states, the zero and the one state, which of course gives rise to binary. The qubit, on the other hand, can exist in the zero or the one state, but it can also exist in a linear combination of both zero and one at the same time.

[00:06:01] Brinley Macnamara: That’s Dr. Gen Clark talking. She’s an experimental physicist and a member of the MITRE team that is working to build the world’s first scalable quantum computer,

[00:06:09] Gen Clark: which physically is pretty unintuitive. But one way that people like to visualize this is by thinking of a sphere where one is at the top and zero is at the bottom, and the state of the qubit is a vector from the origin that can point to locations on the surface of the sphere.

For a classical bit, the vector can only point either straight up to one or straight down to zero. For a qubit, on the other hand, the vector can point anywhere on the surface of the sphere. So, for example, it can lie along the equator in an equal superposition between zero and one. And this concept of superposition is one of the things that makes quantum computation offer a speedup over classical computation.

[00:06:47] Richard Preston: So in classical or digital computing, the fundamental unit of information is a binary state called a bit, and bits can take on a discrete value of zero or one, off or on, false or true, et cetera. And in software, we refer to a single bit as a boolean value. Of course, bits can be implemented using a variety of physical systems, for example, as a high or low voltage in an electrical signal.

So in quantum computing, the fundamental unit of information is a quantum state called a qubit. And qubits can be a zero or a one or some combination of both. And this linear combination of zero and one is called a superposition. So this might seem strange, but the way I really like to think about a qubit, putting my software engineering hat on, is as a data structure with two numerical attributes. You might call these attributes the zeroness and the oneness of the qubit.

This is totally different from a bit, which is either a zero or a one. It wouldn’t make sense to talk about how much zero a bit has, right? But a qubit could have some non-zero amount of zeroness and oneness at the same time, which again, we would call a superposition. So this zeroness and oneness, which are called amplitudes, are very closely related to the probability of observing a zero or a one when the qubit is measured. To be specific, the probability of observing a given measurement outcome is equal to the absolute value of its amplitude squared. So for example, if a qubit has a zeroness value, or a zero amplitude, of one over the square root of two, which is about 0.7, then the probability of getting a zero when we measure the qubit is one half, or 50%. So this probabilistic nature of measuring quantum systems in superposition is a fundamental property of the universe. Just like bits, qubits can be implemented using a variety of physical systems, and this is an area of active research and development. Some of the leading approaches are superconducting circuits, trapped ions, neutral atoms, and photons.
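Richard’s data-structure analogy is easy to make concrete. Here’s a minimal sketch in Python – a toy model for illustration only, not how a real quantum SDK represents qubits (the `Qubit` class and its attribute names are invented for this example):

```python
import math

class Qubit:
    """Toy model of a qubit: two amplitudes -- its 'zeroness' and 'oneness'."""

    def __init__(self, zero_amp, one_amp):
        # Amplitudes must be normalized: |a0|^2 + |a1|^2 == 1
        norm = abs(zero_amp) ** 2 + abs(one_amp) ** 2
        assert math.isclose(norm, 1.0), "amplitudes must be normalized"
        self.zero_amp = zero_amp
        self.one_amp = one_amp

    def prob_zero(self):
        # Probability of measuring 0 is the squared magnitude of the 0-amplitude
        return abs(self.zero_amp) ** 2

    def prob_one(self):
        return abs(self.one_amp) ** 2

# Richard's example: a zero-amplitude of 1/sqrt(2) (about 0.7)
# gives a 50% chance of measuring zero
q = Qubit(1 / math.sqrt(2), 1 / math.sqrt(2))
print(q.prob_zero())  # ~0.5
```

Note that the amplitudes have to be normalized so the two probabilities sum to one – that constraint is exactly what the surface-of-a-sphere picture Dr. Clark described captures.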

[00:09:08] Brinley Macnamara: To recap from the software perspective, a qubit is just an object with two values, one that describes its amount of zeroness and another that describes its amount of oneness. In my opinion, this is a really profound insight because thinking about superposition as just an object with two properties is much more intuitive than how it’s described outside of the software context. For instance, whenever I hear that an atom can be in two places at once, my brain starts to feel like it’s in two places at once.

Now, one of the most important implications of superposition is that it enables a quantum computer to encode much more information than a classical computer can ever dream of. If classical computers could dream, that is; I don’t think we’re there yet with AI. Probably a topic for a different podcast. Anyways, if you start to crunch the numbers, this means that a single qubit can represent the same amount of information as two classical bits. Two qubits can represent the same amount of information as four classical bits, three qubits the same amount as eight classical bits, and so on. In fact, as you increase the number of qubits in a quantum computer, the number of classical bits it would take to encode the same amount of information grows exponentially. For instance, to store the amount of information that could be encoded in a quantum computer with just 280 qubits, you’d need two to the 280 classical bits. That’s more classical bits than the number of atoms in our universe.
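You can check that arithmetic yourself in a couple of lines of Python. This is just the counting argument from above, nothing quantum is being simulated; the 10^80 figure is a common rough estimate for the number of atoms in the observable universe:

```python
# Describing the state of n qubits takes 2**n classical values,
# so the classical storage needed grows exponentially with qubit count.
for n in [1, 2, 3, 10, 280]:
    print(f"{n} qubits -> 2**{n} = {2 ** n} classical values")

# Atoms in the observable universe: roughly 10**80
atoms_in_universe = 10 ** 80
print(2 ** 280 > atoms_in_universe)  # True: more values than atoms
```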

Now, the challenge of programming quantum computers isn’t so much that you have to put qubits into states of superposition. In fact, according to Dr. Clark, that’s the easy part. The real challenge lies in doing useful things with qubits in superposition. One reason this is hard is due to a phenomenon that physicists refer to as the measurement problem, which boils down to a qubit in superposition immediately collapsing to a zero or one state as soon as it is measured.

[00:10:52] Richard Preston: So if we have some qubit which is in superposition, it has some amount of zeroness and some amount of oneness. Once we measure it, we said that we’re either gonna measure a zero or a one. Now, this is a little bit more disruptive than it seems, right? Not only do we see a zero or a one, this would actually change the qubit state into a zero or a one, depending on what we measured. So basically, when we measure it, we turn the qubit back into a bit – it loses its superposition, its quantum properties. And so what we want to do is wait until the end of our program to actually measure the qubits, because we can’t measure them while we’re trying to use superposition, entanglement, and interference to do the quantum computation.
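The collapse Richard describes can be imitated classically. Here’s a sketch in Python – the `measure` function is purely illustrative (real quantum hardware never lets you peek at amplitudes like this), but it shows both halves of the measurement problem: the random outcome and the destroyed superposition:

```python
import random

def measure(zero_amp, one_amp):
    """Simulate measuring a qubit: returns the outcome AND the collapsed state."""
    p_zero = abs(zero_amp) ** 2          # Born rule: probability = |amplitude|^2
    if random.random() < p_zero:
        return 0, (1.0, 0.0)             # collapses to the zero state
    return 1, (0.0, 1.0)                 # collapses to the one state

# A qubit with 20% zeroness and 80% oneness
# (the amplitudes are the square roots of those probabilities)
outcomes = [measure(0.2 ** 0.5, 0.8 ** 0.5)[0] for _ in range(100_000)]
print(sum(outcomes) / len(outcomes))  # close to 0.8
```

After the call, the superposition is gone – the returned state is an ordinary bit, which is why a quantum program saves its measurements for the very end.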

[00:11:37] Brinley Macnamara: While not being able to measure a qubit until the end of your program is quite inconvenient, the measurement problem does seem to be more of a feature than a bug in quantum mechanics. In fact, whenever you measure a particle when it’s in a state of superposition, it will always collapse out of its superposition into a single classical state. Now, what state you ultimately measure does depend on the particle’s original superposition. Returning to our qubit that’s in a superposition of one and zero: the probability that you’ll get a one or a zero when you measure the qubit is directly related to its amount of oneness and zeroness. For example, if you had a qubit with 80% oneness and 20% zeroness, you would ultimately measure the one state in 80% of your measurements and the zero state in 20 percent. So how could you write a useful quantum software program when you’re not allowed to measure anything until the end? Here’s Richard.

[00:12:25] Richard Preston: This may be slightly confusing. How could we possibly do useful computation without looking at the bits that we’re doing computation with? But what quantum computing does is use interactions between qubits – this notion of entanglement – that allow us to encode the control logic, any information and any logic that we’re trying to compute with, into the state of the quantum computer, into the superposition of the quantum system.

[00:12:55] Brinley Macnamara: If you’re listening closely, you just heard Richard mention the second core principle of quantum mechanics: entanglement.

Before we define entanglement from the software perspective, it will help to first consider how you would implement the following program on a quantum computer. Let’s say you have two qubits: qubit one, which is in a state of superposition, and qubit two, which is in the zero state. Your job is to flip the second qubit if and only if the first qubit is a one.

On the surface, the answer seems straightforward. Just measure the value of the first qubit, and if it’s a one, flip the second qubit. But this wouldn’t work on a quantum computer. Why? Recall that anytime we measure a qubit, it will immediately collapse out of a superposition state into a discrete one or zero state.

And as Richard said, this could have a litany of unintended downstream effects. For instance, if this conditional qubit flip is part of a larger quantum algorithm that depends on qubit one remaining in a state of superposition, then prematurely measuring would be disastrous. So we need to find a clever way of flipping qubit two if and only if qubit one is in a one state without measuring qubit one.

And this is where entanglement comes into play. 

[00:14:02] Richard Preston: So it’s like superposition, where the word is very spooky, but the actual phenomenon is not that spooky when you start to look at it just as a programming concept. Entanglement – I don’t think it’s as spooky as it seems. All we’re talking about when we talk about entanglement is that we have multiple qubits whose states are dependent on each other, so they cannot be described independently. So when you have multiple qubits, they’re not automatically entangled, right? We could say we have two qubits and they’re both zero. Okay, then they don’t have anything to do with each other, right? So they’re not entangled. However, if I put the first qubit into superposition, then I can apply what’s called a C-NOT gate –

[00:14:44] Brinley Macnamara: By the way, a C-NOT gate is the name of a special type of transformation we apply directly to a pair of qubits. CNOT will flip qubit two if and only if qubit one is in the one state. But very importantly, it applies this transformation without measuring anything.

[00:14:59] Richard Preston: This would entangle the states of the two qubits, so that when I measure the qubits, either they’re both going to be a zero or they’re both going to be a one.

This is how we actually encode the control logic into the quantum computer without measuring. Because like we said before, if we said, all right, I want to say if this qubit is a one, now I want to flip the other qubit – without measuring, we could do that by using this entanglement strategy. The first qubit is in a superposition of a zero and a one, and we say, okay, I want to make sure that the second qubit is a one if and only if the first qubit is a one. This gives a two-qubit state – 50% zero zero and 50% one one – that really does encode that logic. And so then when we get to the end of the program and we measure, we’re always gonna measure either a one one or a zero zero. So the qubits are always gonna have the same value. We’re never gonna measure a zero and a one. That’s what entanglement means.
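The Hadamard-then-CNOT sequence just described can be simulated with a plain four-entry amplitude list. This is a toy state-vector sketch with hand-rolled gate functions, not a real quantum SDK; the amplitudes are ordered over the basis states 00, 01, 10, 11:

```python
import math

# Two-qubit state as amplitudes over the basis states [00, 01, 10, 11]
state = [1.0, 0.0, 0.0, 0.0]  # both qubits start in the zero state

def hadamard_on_first(s):
    """Put the first qubit into an equal superposition of zero and one."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot(s):
    """Flip the second qubit iff the first is one (swap the 10 and 11 amplitudes)."""
    return [s[0], s[1], s[3], s[2]]

state = cnot(hadamard_on_first(state))
print(state)  # [0.707..., 0.0, 0.0, 0.707...]
```

Squaring the amplitudes gives 50% for zero zero, 50% for one one, and exactly 0% for the mixed outcomes – the entangled state Richard describes, produced without a single measurement.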

[00:16:03] Brinley Macnamara:

Now we’re ready to move on to the third and final core principle of quantum mechanics: interference.

[00:16:09] Richard Preston: Interference is a little trickier to explain without going into the math, but essentially, if we have a state of a quantum computer, we typically would describe this in Dirac notation, and what we find is that the probability amplitudes of each term in this superposition – for example, remember I talked about the zeroness and the oneness – can have different phases. So the phases can be positive or negative. And so these terms get added together after we apply some gate or some series of gates.

This would cause us to transform the state of the quantum computer, and it ends up causing you to add some of these states together, some of these terms together. We see that terms with the same phase would constructively interfere, so they would add together, and terms with opposite phases – a positive phase with a negative phase – would destructively interfere, so they would cancel out.

So this is the concept of interference. It’s the same concept you see in audio. A hearing aid would amplify the sounds coming in by playing the sound louder – it gives you the same signal in phase with the sound that you’re hearing, and so that makes it louder. Whereas noise-canceling headphones, for example, would repeat the same sound but out of phase, with the opposite phase, and so that would cancel out the signal. And that’s what we’re talking about with quantum computing, when we have these states, which are a superposition of many different measurement outcomes.

The trick is we need to be able to use this concept of interference to combine the information and combine the result in some way such that when we actually measure, we get something useful at the end of the program.
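The smallest worked example of interference, in the same toy amplitude model as before, is applying a Hadamard gate twice. On the second application, the one-amplitude’s positive and negative phases cancel – destructive interference – while the zero-amplitude’s terms add – constructive interference:

```python
import math

h = 1 / math.sqrt(2)

def hadamard(amps):
    """Mix the 0- and 1-amplitudes; opposite phases can cancel (interfere)."""
    a0, a1 = amps
    return (h * (a0 + a1), h * (a0 - a1))

# Start in the zero state; one Hadamard gives an equal superposition
once = hadamard((1.0, 0.0))
print(once)  # (0.707..., 0.707...)

# Apply it again: the 1-amplitude becomes h*(h - h) = 0 (destructive
# interference), while the 0-amplitude's terms add back up to 1 (constructive)
twice = hadamard(once)
print(twice)  # (1.0, 0.0) up to rounding -- back to zero with certainty
```

A measurement after the first gate would be a coin flip; after the second, it is certain to read zero, even though no measurement happened in between. That cancellation of phases is exactly the resource quantum algorithms exploit.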

[00:17:54] Brinley Macnamara: So is that then the whole motivation for quantum algorithms? Is that the goal?

If anyone’s gonna sit down and take out a pencil and write a quantum algorithm, is their real end goal basically just to leverage interference to ensure that the probability of the right answer is increased over the course of the algorithm?

[00:18:14] Richard Preston: Yes. Generally speaking, the concept that we really leverage to provide some advantage for a quantum computer is that we can use this principle of superposition to, in a sense, compute the result for all possible inputs at once – but now we have the result in a superposition of all the values that we’re looking for.

And so we would need to have some clever way of getting it out – maybe it’s amplifying the amplitude of the state that we’re looking for over multiple iterations, which would be the strategy in something called Grover’s algorithm, or maybe it’s finding some periodic behavior in the superposition, to find some signal in there.

This would be like Shor’s algorithm, which solves prime factorization. And so it’s tricky. You have to be clever.
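Grover’s amplitude-amplification trick that Richard mentions can be sketched on the smallest interesting case: four basis states with one marked item. This is a bare-bones classical simulation for illustration (the `marked` index is arbitrary), not a full Grover implementation:

```python
import math

# Grover sketch on 2 qubits (4 basis states), searching for a "marked" index
N = 4
marked = 2  # the hypothetical item we're searching for

# Step 1: uniform superposition over all inputs
state = [1 / math.sqrt(N)] * N

# Step 2: the oracle flips the phase of the marked state (no measurement!)
state[marked] *= -1

# Step 3: "inversion about the mean" -- interference amplifies the marked
# amplitude and cancels the others
mean = sum(state) / N
state = [2 * mean - a for a in state]

probs = [round(a * a, 6) for a in state]
print(probs)  # the marked index now has probability 1.0
```

For four items, a single oracle-plus-diffusion iteration drives the marked state’s probability all the way to one; for larger search spaces it takes on the order of the square root of N iterations, which is where the quadratic speedup over classical guess-and-check comes from.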

[00:19:05] Brinley Macnamara: At this point we have all the concepts we need to build an intuition for the basic requirements of any quantum algorithm that performs some computations faster than a classical computer.

So what are some examples of quantum algorithms that can solve practical problems? Richard just mentioned two very famous ones: Grover’s and Shor’s. In the last podcast, we discussed Grover’s algorithm, which you could use to search a database or crack a forgotten password on a quantum computer much more quickly than you could on a classical computer.

We also discussed how Grover’s could be used by a bad actor to gain unauthorized access to something like your online bank account. But there’s an algorithm that poses an even bigger threat to online security, and it’s arguably the most famous quantum algorithm that exists today. Can you guess what it is?

[00:19:47] Richard Preston: Shor’s algorithm actually solves prime factorization and also the discrete log problem. If you have two very large prime numbers and you multiply ’em together, then on a classical computer, if you’re trying to find the primes that you multiplied together to get the large number, you’d have to just brute force it – basically guess and check. There’s ways of speeding it up, but you essentially have to brute force it. Whereas on a quantum computer, you can really find those factors directly, essentially.

And so this is in fact a very important problem. Many of today’s public key cryptosystems rely on this – cryptography and internet security rely on this – and so quantum computers are a threat to that. And so we need to come up with new encryption algorithms that would be resistant to that.

[00:20:32] Brinley Macnamara: I wanna reiterate the impact of Shor’s algorithm because it’s really important. In today’s world, the security of the entire internet relies on public key cryptography, which in turn depends on the hardness of finding the prime factors of large numbers. On a classical computer, it is really hard to factorize large numbers – so hard, in fact, that it would take you 300 trillion years to find the prime factors of today’s standard encryption keys.

Since even the most patient of hackers don’t have 300 trillion years of spare time, this has been enough to keep our internet activity safe from eavesdroppers. But recently, Shor’s algorithm has thrown a huge curve ball at our cryptosystem. According to a new study out of Google, a practical quantum computer could break our standard methods of public key cryptography in a mere eight hours.

But there’s an important caveat in every study that looks at Shor’s threat to public key cryptography, which is that you need a quantum computer with thousands, if not millions of qubits to factor the ultra large encryption keys that are used by our standard cryptosystem. But today’s most sophisticated quantum computers only have a hundred or so qubits, so we have a long way to go before we’ll have quantum computers that will break public key cryptography.

And currently there is one big challenge standing in the way of these practical quantum computers. Here is Dr. Clark.

[00:21:46] Gen Clark: So qubits are generally very susceptible to decoherence, or interactions with their environment that can alter or destroy the information that’s carried. So there are several different avenues for mitigating the decoherence. At the hardware level, there are improvements that can help shield the qubits from environmental noise – for example, electrical fluctuations – or cooling to colder temperatures to prevent interactions with thermal lattice vibrations that are known as phonons.

But even for a fixed level of hardware, there are definitely other strategies that researchers can use to try to help mitigate and correct these errors. So error correction is pretty critical in quantum circuits, and especially on the smaller, noisier quantum computers in development today.

It’s become kind of a field in and of itself. It can involve using a number of physical qubits to encode a single logical qubit that carries the information, in such a way that an error in any one of the constituent physical qubits can be corrected by the others and won’t automatically translate into an error or loss of information in the overall logical qubit.

But this does mean that you then need significantly more physical qubits to encode the same number of logical qubits and carry the same kind of information.

[00:23:00] Brinley Macnamara: As an outsider looking in on the quantum computing community, it can be difficult to gauge progress from the popular science headlines. For example, as I was Googling quantum computing, I saw many stories about research groups who claimed to have achieved quantum supremacy. The term sounded pretty important to me, so I asked Richard what I should make of it.

[00:23:19] Richard Preston: The idea of quantum supremacy – it’s really more of a milestone in the field than anything else. The idea is that we have some problem which we demonstrate we can solve on a quantum computer that we couldn’t solve on a classical computer. And the reason why I say it’s more of a milestone than anything else is because you can have a contrived problem, which is basically what Google did – not to undermine their results.

But they had a relatively contrived problem which they were able to run on a quantum computer that they claimed couldn’t be run on a classical computer. Now, with that particular problem, it turned out that you could run it on a classical computer, but it was harder. The point being that we know it’s definitely possible to have a quantum computer that can solve a problem better than a classical computer.

I think that has been established. Now the question is: to what extent will quantum computers provide you an advantage over not having a quantum computer? And this is really what quantum advantage is about, and this is why quantum advantage is something that everybody really cares about. And not just the academics, right?

But industry, this is where everyone is driving toward. We want to be able to solve problems better with quantum computers, and that’s quantum advantage.

[00:24:35] Brinley Macnamara: Even with the noisy and highly expensive quantum computers we have today, it’s clear that quantum computing has already begun to democratize quantum physics. Only a decade ago, this niche field was really only accessible to professional mathematicians and physicists. Today, quantum physics is accessible to a much broader range of people. There’s a very good chance that in your lifetime you’ll be able to access a practical quantum computer to run quantum software that solves a problem that was once considered to be impossible.

The show was written by me. It was produced and edited by myself and my co-host, Eliza Mace, Dr. Kris Rosfjord, Dr. Heath Farris, and Beverly Wood. Our guests were Richard Preston and Dr. Gen Clark. The music in this episode was brought to you by Ooey, Blue Dot Sessions, and Truvio. We’d like to give a special thanks to Dr. Kris Rosfjord, the Technology Futures Innovation area leader, for all her support. We’d also like to give a special thanks to Dr. Gerry Gilbert, a theoretical quantum physicist leading MITRE’s Quantum Moonshot, for his role in making the final version of this podcast the best that it could possibly be.

If you’d like to learn more about quantum software engineering, check out Richard’s course at stem.mitre.org/quantum

[00:25:53] Eliza Mace: Copyright 2023, The MITRE Corporation. Approved for public release; distribution unlimited. Public release case number 23-0268.

MITRE.

[00:26:06] Brinley Macnamara: Solving Problems for a Safer World.

Meet the Guests

Richard Preston

Richard Preston is a Group Lead in the Network Analytics department at The MITRE Corporation. He graduated from Tufts University in 2019 with an MS in Electrical Engineering and a BS in Computer Engineering. Since then, Richard has worked on R&D projects spanning a wide variety of technology areas, including network security, machine learning, cloud & edge computing, IT automation, software engineering, and quantum computing. Currently, he leads a research effort aimed at helping software engineers apply quantum algorithms to real-world problems. Richard has taught quantum software development at MITRE, MIT’s Beaver Works Summer Institute, and IEEE Boston, and is passionate about helping to grow the quantum-capable workforce.

Dr. Gen Clark

Bio coming soon!