HN Academy

The best online courses of Hacker News.

Hacker News Comments on
Quantum Computing. Less Formulas - More Understanding

Coursera · Saint Petersburg State University · 1 HN comments

HN Academy has aggregated all Hacker News stories and comments that mention Coursera's "Quantum Computing. Less Formulas - More Understanding" from Saint Petersburg State University.
Course Description

This is yet one more introductory course on quantum computing. Here I concentrate more on how the mathematical model of quantum computing grows out from physics and experiment, while omitting most of the formulas (when possible) and rigorous proofs.

In the first week I try to explain in simple language (I hope) where the computational power of a quantum computer comes from, and why it is so hard to implement. To understand this week's materials you don't need math above school level.

The second and third weeks cover the mathematical model of quantum computing, and how it is justified experimentally. Some more math is required here: I introduce the notion of a linear vector space, discuss some simple differential equations, and use complex numbers.

The fourth week is dedicated to the mathematical language of quantum mechanics. You might need this if you want to dig deeper into the subject; however, I touch only the tip of the iceberg here.

In week 5 I finally introduce some simple quantum algorithms for cryptography and teleportation.

HN Academy Rankings
  • Ranked #6 this month (dec/jan)
Provider Info
This course is offered by Saint Petersburg State University on the Coursera platform.
HN Academy may receive a referral commission when you make purchases on sites after clicking through links on this page. Most courses are available for free with the option to purchase a completion certificate.
See also: all Reddit discussions that mention this course.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this url.
Just last week I finished a course on Quantum Computing on coursera[1].

I found the idea of true randomness and multiverse really fascinating. However, I could not understand the following and would greatly appreciate if someone could help me out:

-- The need for complex numbers in describing a quantum bit. Why do we have to use complex numbers when the system is very much real? What is the real world interpretation?

-- The fact that an n-qubit quantum computing system is significantly more powerful than a classical system because of 2^n states. But, a classical system with n bits will also have 2^n states, isn't it?


> Why do we have to use complex numbers when the system is very much real

If you don't use complex numbers you end up with a bunch of maths that looks suspiciously like complex numbers. The algebraic properties of complex numbers make it a very natural way to express quantum mechanics.

I believe 3Blue1Brown has some good videos on the topic. You really need to learn about complex numbers separately to see the magic of them (it's largely visual, so it's hard to convey in text).

Edit: The book "Visual Complex Analysis" is very good if you resonate with its approach. I personally simply cannot click with almost any geometry, so it wasn't really for me, although I appreciate that it's really very nice.

> The need for complex numbers in describing a quantum bit. Why do we have to use complex numbers when the system is very much real? What is the real world interpretation?

This isn't an answer to your question but an analogy. Negative numbers probably seemed strange when you learned about them as a child. After all they don't seem to exist in the "real" world. But they're still a useful mathematical tool for turning subtraction into addition, among other things. (And negative numbers do have real-world connotations, like floor numbers below ground level.)

Complex numbers are exactly like that: They are a useful tool for extending the idea of multiplication to "stuff that rotates or behaves like a wave", among other things.

Don't read too much into the names; "complex" and "imaginary" are meaningless labels that simply let us keep track of how multiplication and addition differ from those operations on so-called "real" numbers. Just like "negative" numbers should not be considered "less desirable" or "more pessimistic" than positive numbers, the everyday meaning of the labels doesn't apply in a math context.

Complex numbers are just a mathematical convention. You can't have (-1) apples, so you could say negative numbers are equally weird. Complex numbers are just weird in a different way.

Complex numbers are weird because we start with the square root of a negative number, and then it turns out the same rules you invented for that also represent rotations in a plane.

That Sqrt(-1) is isomorphic with a 90 degree rotation of a unit-length vector in the plane never ceases to amaze me. It's not a trick - all the math works, and it's fairly obvious once one does all the math. But it still feels like a magician pulled a rabbit out of his hat each time I think about it.

This rotation aspect is one of the main reasons why imaginary numbers are used so much in physics.
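As an illustrative sketch (plain Python, nothing course-specific), multiplying by i really does act as a 90-degree rotation of the plane:

```python
# Multiplying by i rotates a point in the complex plane by 90 degrees.
# The point (x, y) is encoded as the complex number x + y*i.
z = complex(1, 0)          # start at the point (1, 0)
for _ in range(4):
    print(z)
    z *= 1j                # each multiplication is a 90-degree rotation
# Four quarter-turns bring us back to the start: i**4 == 1.
assert z == complex(1, 0)
```

The printed points trace out (1, 0) → (0, 1) → (-1, 0) → (0, -1), i.e. a full turn in four steps.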

To your first question, two things. First it could be real, sure, and that's called an analog computer (not to be confused with an analog circuit). That subject has been studied too, but hasn't been terribly fruitful in terms of realizable improvements over classical. Second, complex because that's how physics works (we think). Scott Aaronson wrote up an article about what the alternatives might be.

To your second question, the quantum system can represent all N states simultaneously (until you observe it) because of entanglement. The classical system can only represent one at a time.

Representation of all N (where N = 2^n) states simultaneously is due to superposition, not entanglement.

A quantum computer with real amplitudes is not an analog computer. The two are quite different. Note that a quantum computer with real amplitudes is actually just as powerful as one with complex amplitudes; the complex amplitudes are actually inessential.

Also, your answer to the second question fails to distinguish a quantum computer from a classical probabilistic computer. The key here is really cancellation of amplitudes, not the larger state space.
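A minimal numpy sketch of that cancellation, assuming the standard single-qubit Hadamard matrix: applying H twice to |0> makes the |1> amplitudes cancel exactly, which a classical probabilistic bit could never do:

```python
import numpy as np

# Hadamard gate: takes |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])    # |0>
superpos = H @ ket0            # amplitudes (1/sqrt2, 1/sqrt2)
back = H @ superpos            # second H: the amplitudes interfere

# The |1> amplitude cancels (1/2 - 1/2 = 0), so we return to |0>
# with certainty, instead of staying "random".
print(np.round(back, 10))      # [1. 0.]
```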

3blue1brown's group theory videos explain complex numbers better than i've ever understood before.

I wonder if every quantum programmer will need to be a theoretical physicist to some extent. I'm currently reading "physics from symmetry" which uses group theory intensively. It has a lot of math, but i'm ready to dive in this time. Excited for the special relativity sections.

I've also been watching Institute of Advanced Study videos lately which are way over my head, but from what I gather, the black hole information paradox has a solution using a method called entanglement wedge reconstruction. The model is holographic, and the goal is understanding the mapping of the physics of a quantum holographic surface onto the "bulk"(spacetime), and that mapping turns out to be very similar to a quantum error correcting code.

Complex numbers can represent wave functions. My understanding is that any function can be represented by a sum of wave functions. It turns out that all particles in the universe can be described well by wave functions and various operations performed upon the sums of them.

Classical systems have an n-bit register and a big bank of memory to play with. The register has 2^n different possible states for its n bits.

Quantum systems don’t require memory. Rather, they share one large register of n qubits. Qubits alone don’t do anything more than classical bits. But once quantum operators are applied, allowing these bits to become entangled superpositions, it turns out that you can get 2^n different states from these n bits. With enough qubits, you could efficiently break most widely used public-key encryption.

With QC, instead of writing just functions, you’re working in the space of unitary operators. Unitary operators basically transform functions into different functions to make all sorts of calculations a lot simpler and faster (in theory).

Complex numbers just allow you to add a dimension to real numbers (so think vectors) while allowing handy operations (rotations). So they end up being useful in all sorts of places, although you don’t have to use them.
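To make the entanglement step concrete, here is a small numpy sketch (my own illustration, not from the course) that builds the two-qubit Bell state from |00> with a Hadamard followed by a CNOT:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): apply H to the first qubit of |00>,
# then CNOT with the first qubit as control.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.zeros(4)
ket00[0] = 1.0                            # |00>
bell = CNOT @ np.kron(H, I) @ ket00

print(np.round(bell, 4))                  # [0.7071 0. 0. 0.7071]
```

The resulting state cannot be written as a product of two single-qubit states, which is exactly what "entangled" means.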

An n-qubit quantum computer allows you to “visit” 2^n states but to store only a few. An n-bit classical computer allows you to store 2^n states but to visit only a few (for large n).

> The fact that an n-qubit quantum computing system is significantly more powerful than a classical system because of 2^n states. But, a classical system with n bits will also have 2^n states, isn't it?

Let's consider the execution of an algorithm. In the classical case you start with _one_ of 2^n states and keep evolving that single state. In the case of a quantum computer you can start with a superposition of all of the 2^n states and evolve them simultaneously. This is called quantum parallelism.

This sounds very powerful but comes with a caveat: at the end of the algorithm you will end up with some other superposition of the 2^n states, but you can only read one of them. Which one it is going to be is decided by the squared magnitudes of the complex amplitudes associated with those states. The greater the squared magnitude, the more likely it is that you will end up measuring the state associated with it. A key requirement when a quantum algorithm is designed is that it should yield a useful state at the time of measurement with high probability. This is in general difficult to do.
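A toy numpy illustration of that measurement step, assuming a uniform superposition over 3 qubits: the Born rule turns squared amplitude magnitudes into sampling probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3
# Uniform superposition over all 2^n basis states
# (what applying H to every qubit of |000> produces).
amps = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)

# Born rule: measurement yields basis state k with probability |amps[k]|^2.
probs = np.abs(amps) ** 2
outcome = rng.choice(2**n, p=probs)
print(f"measured |{outcome:0{n}b}>")   # one of the 8 basis states, at random
```

Reading out collapses the whole superposition to that single bitstring, which is why algorithm design is about boosting the amplitude of the useful answer before measuring.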

> In the case of a quantum computer you can start with a superposition of all of the 2^n states and evolve them simultaneously. This is called quantum parallelism.

This is a common misrepresentation of how quantum computers work. We can start in a single state (e.g. |0^n>, the all-zero n-qubit state) and apply the Hadamard gate to each qubit. Now we are in the uniform superposition state (measurement has an equal probability of yielding each n-bit bitstring). Oh no! What happened?

If you are under the belief that quantum computation is 'like classical computation, but in parallel', going from one single state to a superposition is impossible. In other words, it doesn't accurately capture what quantum computation does.
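A quick numpy sketch of exactly this point: starting from the single basis state |000> and applying H to every qubit yields the uniform superposition, which no "parallel classical" picture accounts for.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 3
state = np.zeros(2**n)
state[0] = 1.0                       # |000>, one single basis state

# H on every qubit = the n-fold tensor (Kronecker) product of H.
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

state = Hn @ state
print(np.round(state, 4))            # all 8 amplitudes equal 1/sqrt(8)
```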

Granted, it's an oversimplification but I wouldn't call it a misrepresentation. It's one of the (arguably key) things that make quantum algorithms work.

To quote Nielsen & Chuang (section 1.4.2 on page 30):

> Quantum parallelism is a fundamental feature of many quantum algorithms. Heuristically, and at the risk of over-simplifying, quantum parallelism allows quantum computers to evaluate a function f(x) for many different values of x simultaneously. In this section we explain how quantum parallelism works, and some of its limitations.

Then they go on and explain how e.g. the Deutsch–Jozsa algorithm is a good example of this. But I think it's a key component in Shor's algorithm as well when it comes to the period-finding step.

Complex numbers have a very nice property: they let you express a sum of angles as a multiplication. So anything periodic can take advantage of this property. Even if it's not periodic, if you can take advantage of the Fourier transform you can use complex numbers.

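For a concrete check (plain Python): multiplying two unit complex numbers adds their angles, e^(i·a) · e^(i·b) = e^(i·(a+b)).

```python
import cmath

# Multiplying unit complex numbers adds their angles:
# e^{i*a} * e^{i*b} == e^{i*(a+b)}.
a, b = 0.7, 1.9
lhs = cmath.exp(1j * a) * cmath.exp(1j * b)
rhs = cmath.exp(1j * (a + b))
print(abs(lhs - rhs) < 1e-12)   # True
```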
Here are my two cents on the subject:

-- The need for complex numbers (...)?

Complex numbers are just a mathematical tool. They are used for calculations that involve magnitudes and phases, such as modeling a qubit’s internal state.

-- The fact that an n-qubit quantum computing system is significantly more powerful than a classical system (...)?

Due to the physical phenomenon of entanglement quantum computers allow a coherent superposition of multiple states. Quantum algorithms sort of operate on all superposed states at the same time. Classical computing can only handle one state at a time.

A while ago I wrote a blog post explaining how quantum computing works, from a programming perspective, in case you're interested:


It's not entanglement that allows for coherent superpositions of states but, well, superposition. (Note, however, that entanglement is responsible for decoherence.)
You are correct in that the superposition phenomenon is independent from entanglement!

Nevertheless, in addition to supporting coherent superpositions, what makes qubits powerful for information processing is the ability to couple multiple qubits into entangled, i.e. non-separable, states. This property confers exponential complexity on an N-qubit system, which can then encode and ultimately process 2^N bits of information simultaneously.

> Quantum algorithms sort of operate on all superposed states at the same time.

This is not entirely correct.


Also, from your linked article:

> I will now introduce the Deutsch Oracle algorithm, one of the first examples of a quantum algorithm that is exponentially faster than its classical counterpart on the size of the input.

This is not entirely correct either.


See [1] for an authoritative source.


Hi, thanks for going through my blog post!

Could you elaborate on why you believe my writing is imprecise? I read the link you provided, but couldn't find where it diverges from my text.

Here's the line of thought behind my statements if it's not clear to you:

1) Until a measurement is made, leading to the collapse of the quantum state, the qubits remain (possibly) in a superposed state. When a quantum gate is applied to these qubits it affects all superposed states (from a linear algebra point of view).

2) I took this statement from Wikipedia [1], but forgot to link the source in the article: Simon, Daniel (November 1994). "On the Power of Quantum Computation". Proceedings of the 35th Annual Symposium on Foundations of Computer Science: 116–123.


1. Complex numbers are especially useful for describing systems that involve rotation, or for describing a 2-dimensional space with a single variable: instead of x/y, you can just give x a real and an imaginary component.

The entire qubit state space (the Bloch sphere is a great visual) is the surface of a sphere of radius 1 in three dimensions. Because that space produces a binary output that splits the sphere in half along some plane, two 2-D vectors can describe the entire space, with one 2-D vector corresponding to each measurement outcome. A single 3-D vector could describe the system instead (though it could not capture global phase), but I believe the common dual-vector convention is favored mainly because of the binary outputs: each 2-D vector has a norm (amplitude) related to the probability of that measurement outcome, and a single 3-D vector would require more math to recover those amplitudes.

Alternatively, the real and imaginary components of each 2-D vector could be written out as 4 real values (overkill, since only 3 real values are needed to describe the whole system, unless we are considering global phase), but does that improve anything? We would still need to respect the norm along each vector, and the values would correspond to mutually orthogonal directions. That effectively describes a complex number.

2. A classical system of n bits also has 2^n _possible_ states, and it "occupies" one of those states at any time, based on the concrete value of the classical bits. A quantum system of n qubits has 2^n _possible_ states, and it occupies X states (where 1 <= X <= 2^n). All n qubits could be set to pure states (say |000....0>), and this would mean that the entire system occupies one single state within the 2^n space. Alternatively, every single qubit could be set to a superposition (like |+++...+>), and this means that the system is "occupying" all 2^n states via superposition _simultaneously_. Now, of course, measuring such a system in the 0 1 basis will simply give you a random state from the list of 2^n possibilities. But, _before_ you measure, there are amplitudes corresponding to 2^n states. Manipulating those amplitudes for some computational goal is quantum computing.

A metaphor I've used to help myself understand superposition is flipping a coin. Before flipping, there are two possible outcomes, and I could say that an unflipped coin is "both" Heads and Tails. Once the coin is flipped (even if you haven't looked yet), the coin is either H or T (and _not_ both). Similarly, two unflipped coins have 4 potential outcomes, while 2 flipped coins is again a single state (one of: HH, HT, TH, TT). The space of possible outcomes remains the same, but the flipped coins are only a single state in the space. The key is that before measuring (flipping the coin), the only indication that a qubit has of its future value is stored in the qubit's amplitudes. This is why quantum speedups rely on manipulating interactions between amplitudes, like amplitude amplification used in Grover's Algorithm, or the Quantum Fourier Transform. If this metaphorical explanation isn't terribly useful, I'd encourage you to read others' descriptions of superposition. Unfortunately, I haven't found a great amount of support for describing superposition in the English language.

Alternatively, the dimension of a space is the number of mutually orthogonal vectors the space can contain. One great way to prove orthogonality is the Pythagorean theorem. For a single qubit system, we know that the amplitudes (a and b for simplicity) are related to the probabilities via a^2 = (P of a), and b^2 = (P of b). Because the measurement outcome must be either a or b, we can also say that a^2 + b^2 = 1. This generalizes to multiqubit systems, where summing the squares of all the measurement outcome amplitudes will equal 1. This implies that a superposed 2-qubit system (|++>) is described as a vector in a 4-dimensional space, not just a 1-D vector within that same space. A classical system has no equivalent multidimensional vector structure, just concrete values (corresponding to a 1-D vector pointing along some axis in this visualization).
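A small numpy check of that normalization argument, using |+> and |++> as examples:

```python
import numpy as np

# |+> has amplitudes (1/sqrt2, 1/sqrt2); the squared amplitudes are the
# outcome probabilities and must sum to 1.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
print(np.round(np.sum(np.abs(plus) ** 2), 10))        # 1.0

# |++> = |+> tensor |+>: four amplitudes of 1/2 each, squares summing to 1.
plusplus = np.kron(plus, plus)
print(np.round(plusplus, 4))                          # [0.5 0.5 0.5 0.5]
print(np.round(np.sum(np.abs(plusplus) ** 2), 10))    # 1.0
```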

> The need for complex numbers in describing a quantum bit. Why do we have to use complex numbers when the system is very much real? What is the real world interpretation?

The world isn't "real" (people rightfully find imaginary numbers a bad name, but real is equally bad). Complex numbers show up all over in physics. As to why, who knows? We just model what nature does, and it turns out that complex numbers form a very simple (relatively speaking) accurate model.

> The fact that an n-qubit quantum computing system is significantly powerful than classical system because of 2^n states. But, a classical system with n-bits will also have 2^n states, isn't it?

An n-qubit pure quantum state can be fully described using 2^n complex numbers, and thus has 2^(n+1) real degrees of freedom, which is a vastly bigger space than n bits. While quantum amplitudes are distinctly not probabilities (and are more powerful), you can get a rough feel for the difference in state space by comparing storing one n-bit number with storing an entire probability distribution over all n-bit numbers.

> An n-qubit pure quantum state can be fully described using 2^n complex numbers

Why 2^n? Isn't it 2 numbers per state (real and imaginary parts)? So it would be 2+2+2...+2 (n times) = 2n total numbers

Because the state space of an n-qubit system is the tensor product of n single-qubit state spaces: the dimensions multiply (2·2·…·2 = 2^n) rather than add.

Consider how the number of states are calculated: If you have n qubits, at the time of making the measurements those can be in 2^n distinct (basis) states. This is similar to how in the classical case you can use n bits to express at most 2^n different states.

Which one of the basis states we end up measuring depends on the squared magnitude of the complex amplitude of that state (Born rule). Therefore, if you were to simulate a quantum computer, in the general case you would need to keep track of the complex amplitudes belonging to the states, all 2^n of them. Counting the real and imaginary parts, this amounts to 2*2^n = 2^(n+1) numbers.
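A back-of-the-envelope sketch in Python of what that bookkeeping costs a classical simulator (8-byte floats assumed):

```python
# Simulating n qubits classically means storing 2**n complex amplitudes,
# i.e. 2**(n+1) real numbers. At 8 bytes per float that grows fast:
for n in (10, 20, 30, 40):
    reals = 2 ** (n + 1)
    gib = reals * 8 / 2**30
    print(f"n={n:2d}: {reals:>16,d} real numbers, {gib:,.5f} GiB")
```

Around n = 30 the state vector already needs 16 GiB, and each additional qubit doubles it, which is why brute-force simulation stalls in the low tens of qubits.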

It's not a mystery, but after generations of lecturers teaching something they never learned themselves, physicists have forgotten the history of it.

Complex numbers turn up a lot because exp(2πiθ) is a convenient way to model any kind of rotation or cyclic process.

It turns out that there's lots of cyclic processes in physics, such as atomic orbitals or waves in fields.

Similarly, vector algebra is just terrible at modelling physical spaces, especially 2D or 4D. It just happens to work okay in 3D, and even there it has all sorts of sharp edges, such as gimbal lock.

Geometric algebra is a lot better at modelling physical spaces of arbitrary dimension, and one of its strengths is the ability to take various interesting sub-spaces and use them without having to make changes to formulas.

Both the complex numbers and quaternions are such subspaces of matching geometric algebras, so it's no surprise that they "turn up" a lot in the algebras of geometric spaces.

Interestingly, some of the other subspaces of a GA are isomorphic to the various "complex valued matrices" used in physics, such as the Pauli matrices and the Gell-Mann matrices.

These matrices are introduced to students as essentially scalar-valued black boxes. "Don't worry about it, they just have the 'right' properties." is commonly heard in lectures.

This seems like mathematical magic, but no magic is necessary. A much more consistent and logical theory based around GA could have been used, but wasn't for merely historic reasons.

Unfortunately, lots of people will say that these alternative formulations are all isomorphic to each other, so who cares, just shut up and calculate. However one method yields an intuitive understanding, and the other one yields complex numbers all over the place with no clear picture of their origins...

Although they don't affect models, due to the isomorphism you describe, these historical representational accidents seem to have a powerful effect on our intuition.

I am reminded of the conceptual differences between the Copenhagen and Bohmian interpretations. Using one or the other basically does not change our models or results, but what effect do these different perspectives have on the field?

Bohmian mechanics (pilot wave theory) is not the same as traditional QM; it is a stronger theory from which traditional QM can be derived (of course, we don't yet know whether it's correct).

Examples of interpretations of QM that are mathematically equivalent are Copenhagen and Many Worlds. Examples of theories that are stronger than (i.e., make more predictions than) QM are Objective Collapse theories, de Broglie–Bohm pilot wave theory, and superdeterminism.

HN Academy is an independent project and is not operated by Y Combinator, Coursera, edX, or any of the universities and other institutions providing courses.
~ [email protected]