Jon Irenicus wrote: ↑July 17th, 2017, 5:04 pm
@Clayton, thanks for the explanation. I am not sure I fully grasp it, but I think the worker analogy helps bring to bear their potential. I think what is difficult to understand is the stochastic (vs deterministic) computation that you mention, which in turn requires an understanding of the qubits. Is there any background reading you can recommend on the topic? It's one area I consider to be lacking in my current knowledge base.
I'll see what I can dig up.
I'll give a brief explanation of what makes qubits different from classical bits. I am not a quantum physicist, so this is like notes from a guerrilla-warfare manual on quantum physics. A classical bit always has a definite state - definitely '0' or definitely '1' - even when we're not observing it. When we observe a classical bit, we are measuring this definite state. From a physics perspective, the classical bit has a definite state precisely because it consists of many classical particles, all of which share a common state. This state could be encoded mechanically (up/down), magnetically (north/south), voltaically (positive/negative voltage), and so on. There is always some error in measurement, however, so even a classical bit is best described with a probability distribution where the probability of an erroneous reading is extremely small - in modern CPU integrated circuits, it's probably less than 1 in a trillion, though it will vary significantly from one process technology to another.
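To make the "classical bit with a tiny error probability" idea concrete, here's a minimal Python sketch. The function name `read_bit` and the specific error probability are my own illustrative choices, not anything from a real hardware spec:

```python
import random

def read_bit(true_state, error_prob=1e-12):
    """Measure a classical bit; with probability error_prob the reading is flipped."""
    if random.random() < error_prob:
        return 1 - true_state
    return true_state

# With an error probability this small, repeated reads essentially always
# agree with the bit's definite underlying state.
readings = [read_bit(1) for _ in range(1000)]
print(sum(readings))  # almost certainly 1000
```

The point is that the measurement is a sample from a probability distribution sharply peaked at the true state - for a classical bit, so sharply peaked that we normally ignore the distribution entirely.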
The description of a classical bit is basically the same as that of a single qubit in isolation. When we measure it, its state is either definitely '0' or definitely '1'. The difference is in the error margin, which is many orders of magnitude higher for a qubit than for a classical bit. Partly, this results from the fact that a qubit really is a single quantum particle, whereas a classical bit consists of countless particles, all sharing the same state. It is true that - when we are not measuring it - the isolated qubit can exist in a "superposition" of the '0' state and '1' state, but this only affects the shape of the probability distribution of the particle's measured state over multiple measurements - it does not alter the definiteness of the output state, nor does it alter the fundamental probability of error (measuring a '0' when the qubit's true state was '1', and vice-versa... this is called "measurement error" or "detector error").
In summary - in the case of a single qubit in isolation - there really is no mathematical difference between a qubit and a classical bit, it's just that the error margin in a classical bit is so low that we can ignore the mathematics of measurement errors in classical bits. As far as I understand QM, if we had a very noisy classical bit (high probability of measuring a '0' when the bit was actually '1' and vice-versa), the mathematics describing the behavior of the classical bit over multiple measurements would be
exactly the same as the mathematics of the qubit.
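The claim above - that a very noisy classical bit and a single isolated qubit look the same over repeated measurements - can be sketched as a simulation. This is a toy model, not real quantum simulation: I'm assuming a qubit whose Born-rule probability of reading '1' is some value p1, and a classical bit whose reading flips with the same probability, and just comparing the resulting counting statistics:

```python
import random

def measure_qubit(p1):
    """Sample a single-qubit measurement that reads '1' with probability p1."""
    return 1 if random.random() < p1 else 0

def measure_noisy_bit(true_state, flip_prob):
    """Read a classical bit whose reading flips with probability flip_prob."""
    if random.random() < flip_prob:
        return 1 - true_state
    return true_state

# Over many trials, a qubit with p1 = 0.2 and a noisy classical bit whose true
# state is 0 with a 20% flip probability yield the same distribution of readings.
random.seed(0)
n = 100_000
qubit_ones = sum(measure_qubit(0.2) for _ in range(n))
bit_ones = sum(measure_noisy_bit(0, 0.2) for _ in range(n))
print(qubit_ones / n, bit_ones / n)  # both near 0.2
```

As far as the output statistics of a single isolated system go, the two samplers are indistinguishable; the differences only show up once multiple qubits interact, as described next.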
When you have two or more qubits, things get more complicated. The superposition of multiple qubits gives them a probability distribution over the entire space that the qubits can encode. For example, suppose we have qubit A and qubit B. Their measured states can take on the following values:
A=0; B=0
A=0; B=1
A=1; B=0
A=1; B=1
Let us assign a unique label to each of these measurements:
W = { A=0; B=0 }
X = { A=0; B=1 }
Y = { A=1; B=0 }
Z = { A=1; B=1 }
Superposition allows the qubits to occupy any probability distribution over all four combined states, {W, X, Y, Z}. For example, we could have a distribution that looks like { W=0.5, X=0.25, Y=0.25, Z=0 }. This distribution
is the state of the quantum system - arising from superposition - but the only way to measure it is to measure A and B a bunch of times and count how many times we get W, X, Y and Z, respectively.
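The "measure A and B a bunch of times and count" procedure can be sketched in Python. The distribution below is the example from the post; the sampling itself is just ordinary weighted random choice, standing in for repeated joint measurements of the pair:

```python
import random
from collections import Counter

# Example joint distribution over the four labeled outcomes from the post.
dist = {"W": 0.5, "X": 0.25, "Y": 0.25, "Z": 0.0}
outcomes = {"W": (0, 0), "X": (0, 1), "Y": (1, 0), "Z": (1, 1)}  # (A, B) readings

def measure_pair():
    """Sample one joint measurement of qubits A and B from the distribution."""
    labels, weights = zip(*dist.items())
    return random.choices(labels, weights=weights)[0]

# Estimate the distribution from repeated measurements, as described above.
random.seed(1)
counts = Counter(measure_pair() for _ in range(10_000))
for label in "WXYZ":
    print(label, counts[label] / 10_000)
```

Each run of `measure_pair` yields one definite pair of readings; the underlying distribution is only visible in the frequencies accumulated over many runs.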
Hopefully that didn't muddy things up for you.