### 1. An almost century-old problem

Back to business with the question of quantum mechanical completeness. My take on the story hasn’t changed in more than fifteen years. I have to insist:

**1A.** It *is* a problem

**1B.** It is unsolved (debatable perhaps…)

A problem it is. That should be clear, if only because of the dozen-odd theories trying to solve it: the orthodox, by von Neumann; de Broglie-Bohm's; the transactional; the many-worlds, by Everett; consistent histories (inspired by the former); the "gravitational"… The panel of physics popes having busied themselves with it includes the likes of Weinberg, 't Hooft, Gell-Mann, and Penrose. Although many still shrug their shoulders and say, "what problem?"

The reasons are more than socio-scientific. The traditional picture due to von Neumann for measurements, as opposed to the more fundamental quantum evolution, is that we sometimes have state sums,

$$\left|\psi\right\rangle = c_1\left|\psi_1\right\rangle + c_2\left|\psi_2\right\rangle$$

each term carrying the potentiality of one result among two possible ($a_1$, $a_2$) for a given property $A$. And when we check which result is verified, and the record produces "result $a_1$", we must update the state to,

$$\left|\psi\right\rangle \;\longmapsto\; \frac{c_1}{\sqrt{\left|c_1\right|^{2}}}\left|\psi_1\right\rangle$$

Namely, we must kill the "unregistered amplitude", $c_2$, and resize the outgoing state dividing by the square root of its probability, $\sqrt{\left|c_1\right|^{2}}$. This is necessary in order to update the statistics, but it *violates linearity* (proportionality between the outgoing state and the incoming one). But if we drop this demand,

$$\left|\psi\right\rangle \;\longmapsto\; c_1\left|\psi_1\right\rangle$$

we ruin unitarity (conservation of probability). Thus the question is: either unitarity or linearity; *we can't have both.*
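The dilemma can be made concrete in a few lines of linear algebra. Below is a minimal numerical sketch (a two-component state and numpy; the function name `collapse` is mine, not standard terminology):

```python
import numpy as np

# Incoming superposition |psi> = c1|psi1> + c2|psi2> in a two-dimensional basis.
c1, c2 = 0.6, 0.8
psi = np.array([c1, c2])

def collapse(state):
    """Von Neumann update upon registering result a1."""
    projected = np.array([state[0], 0.0])         # kill the unregistered amplitude c2
    return projected / np.linalg.norm(projected)  # resize by the sqrt of the probability

# Norm is preserved (probability conserved), but linearity fails:
out = collapse(psi)
print(np.linalg.norm(out))                    # 1.0
print(collapse(2 * psi), 2 * collapse(psi))   # [1. 0.] vs [2. 0.] -> not proportional

# Bare projection (dropping the renormalization) is linear but not unitary:
proj = np.array([psi[0], 0.0])
print(np.linalg.norm(proj))                   # 0.6 -> probability leaks away
```

Either branch of the recipe gives up one of the two properties, which is the point of the dichotomy above.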

Looks like a recipe for chicken rather than a physical law…

As to the explanations based on the density matrix, I already said they ignore the question of records.

This is not a matter of words. Nowadays both theory and experiment have developed to a point where it has become possible (1) to erase a quantum measurement and (2) to measure counter-factually. It is thus necessary for both components of the quantum state to survive dynamically. The unregistered amplitudes ($c_2$ in our example) must keep evolving if they are to give rise to the overall wave front in case we conduct a quantum erasure during the subsequent evolution; or if a detector is placed along a trajectory destined to have an "empty" amplitude in it, in a counter-factual measurement like those occurring for an Elitzur-Vaidman bomb tester. These considerations make the following conclusion inescapable:
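The Elitzur-Vaidman scenario is easy to reproduce numerically. A sketch of the standard Mach-Zehnder description (the variable names and the 50/50 beamsplitter convention are my choices):

```python
import numpy as np

# 50/50 beamsplitter acting on the two arms (upper, lower) of a Mach-Zehnder interferometer.
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
photon_in = np.array([1, 0])  # photon enters through the upper port

# Empty interferometer: both amplitudes survive and interfere; port 0 stays dark.
p_dark_empty = abs((B @ B @ photon_in)[0]) ** 2   # 0.0

# A live bomb blocks the lower arm.
a = B @ photon_in
p_explode = abs(a[1]) ** 2       # 0.5: the photon actually hit the bomb
blocked = np.array([a[0], 0])    # the lower amplitude is absorbed, the upper one survives
out = B @ blocked
p_dark_bomb = abs(out[0]) ** 2   # 0.25: the dark port can now fire, revealing the bomb
                                 # without any interaction having taken place
print(p_dark_empty, p_explode, p_dark_bomb)
```

The dark-port click only becomes possible because the "empty" lower amplitude has been dynamically removed; if the unregistered amplitude did not matter physically, the dark port could never fire.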

*The current status of experimentation shows that von Neumann's projection postulate is ruled out, as unregistered amplitudes have observable physical consequences.*

### 2. One doesn’t need a *New York Times* revolution

There exist elements in the theory unequivocally pointing towards the solution. These are the key points:

**2A.** Gauge invariance ⇒ Gauge *indeterminacy*

**2B.** Topological evolution ⇒ Evolution without local degrees of freedom

These elements have been elucidated in the theory for some time now, but nobody has related them to the question of completeness to the best of my knowledge. The explanation, necessarily brief and provisional, is:

**2A:** Only by means of gauge invariance can one explain quantum indeterminism; and only having established this correspondence, and having understood its implications, may one be able to complete quantum mechanics through dynamical variables of a topological character.

Dirac, 1964, *Lectures on Quantum Mechanics*, page 17:

*This provides a difference of the generalized Hamiltonian method from what one is familiar with in elementary dynamics. We have arbitrary functions of the time occurring in the general solution of the equations of motion with given initial conditions. These arbitrary functions of the time must mean that we are using a mathematical framework containing arbitrary features, for example, a coordinate system which we can choose in some arbitrary way, or the gauge in electrodynamics. As a result of this arbitrariness in the mathematical framework, the dynamical variables at future times are not completely determined by the initial dynamical variables, and this shows itself up through arbitrary functions appearing in the general solution.*

**2B:** Topological evolution is evolution without propagation, as the number of constraints exactly equals the number of degrees of freedom (field amplitudes).

And this is all I can read from cards **2A** and **2B.**

### 3. But hasn’t it been proved that it is impossible to complete quantum mechanics?

*No.* Theorems concerning hidden variables seem to imply either a non-local realism or else the traditional non-realism, etc. At the end of the day, they all leave the question untouched. They suffer either from false premises ("whenever I check that $\sigma_z^{(1)}$ is $-1$, I'm also checking that $\sigma_z^{(2)}$ is $+1$", in CHSHB, also known as "Bell's theorem"), or from insufficient conclusions (see **3A**). Here I have to postpone details for an upcoming entry, but suffice it to say for now that the key lies in the context. It is well known that Bell's inequalities are violated by quantum mechanics. This only happens because one assumes that the measurement output of $\sigma_z$ for particle 1 is tantamount to having measured $\sigma_z$ for particle 2. Suppose, though, that at particle 2's location someone is measuring $\sigma_x$ instead of $\sigma_z$. Then such an assertion is no longer true. The result I'm telling you in advance is that, when one takes into account the interaction Hamiltonian acting on particle 2, the expected values for $\sigma_z$ (which, mind you, hasn't been measured) change instantly (in a completely local way) *at 2.* Any experimental verification (like those by Aspect *et al.*) of the validity of quantum probabilities is incapable of telling *what would have happened had I measured something else.* When one includes this purely quantum Hamiltonian description, the result is that the CHSHB inequalities are satisfied, so quantum mechanics does not violate them anymore. Unfortunately I have to postpone that discussion.
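For reference, the standard quantum-mechanical violation of the CHSH bound is a short computation on the singlet state. A sketch (the angle conventions are the usual optimal ones, chosen by me):

```python
import numpy as np

# Spin measurement along an angle t in the x-z plane.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
def spin(t):
    return np.cos(t) * sz + np.sin(t) * sx

# Singlet state (|01> - |10>)/sqrt(2).
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(t1, t2):
    """Expectation of the joint spin measurement on the singlet: equals -cos(t1 - t2)."""
    return np.real(singlet.conj() @ np.kron(spin(t1), spin(t2)) @ singlet)

# CHSH combination with the standard optimal angles.
a, ap, b, bp = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # 2.828... = 2*sqrt(2), beyond the local-realist bound of 2
```

This reproduces the textbook violation; the argument of the post is about what that violation does and does not license one to conclude.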

**3A.** Exact correlations at a distance *are not physical actions, but functional dependences between compatible variables.* The impossibility proofs concerning any completion of quantum mechanics based on exact correlations (GHZ) are thus inconclusive, as they are equivalent to the (trivial) preliminary lemma in Bell's theorem ("spin up" for particle (1) implies "spin down" for particle (2), with zero dispersion for the sum and non-zero dispersion for each of the terms in the sum). This is a functional dependence between compatible variables. They could hardly produce anything other than perfect correlation, as they are respective functions of each other. I already proved this point for GHZ.
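The "zero dispersion for the sum, non-zero for each term" statement can be checked directly on the singlet state; a sketch (operator names are mine, spins in units of $\hbar/2$):

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

Sz1 = np.kron(sz, I2)   # z-spin of particle 1
Sz2 = np.kron(I2, sz)   # z-spin of particle 2
Stot = Sz1 + Sz2        # the compatible sum

def mean(O):
    return np.real(singlet.conj() @ O @ singlet)

def var(O):
    return mean(O @ O) - mean(O) ** 2

print(var(Sz1), var(Sz2))  # 1.0 each: individual outcomes are dispersed
print(var(Stot))           # 0.0: the sum is exactly anticorrelated
```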

**3B.** Non-exact correlations at a distance *are not physical actions either, but functional dependences between non-commuting variables.*

Besides, the demonstrations of impossibility of completion for quantum mechanics based on non-exact correlations (the CHSHB theorem) are inconclusive because they ignore how the *context* (see **3C**) affects the quantum state: they assume that the exact correlations referred to in **3A** are still valid, when they actually no longer hold (information about the value of the variable $\sigma_z$ at point 2, external to the causal cone of point 1, is *no longer valid* at point 1, in the sense of implying that, provided $\sigma_z$ takes value $+1$ at point 2, then it takes value $-1$ at point 1, *if what is being done at point 1 is measuring* $\sigma_x$, *with* $\left[\sigma_x,\sigma_z\right]\neq 0$).

**3C.** The context thus completely changes the nature of the experimental question itself: *The measuring interaction automatically suspends the validity of the correlations at a distance for variables incompatible with those that are being measured.* That is because, provided at point (2) someone is measuring the $x$ component of spin, they are destroying the profile of the quantum state corresponding to the $z$ component (or any other incompatible with $x$). In other words: the physicist who is measuring the $z$ component of spin for particle (1) has no right to assert that the $z$ component of spin for particle (2) is the opposite, if what's going on at (2), out of reach of his causal influence, is a measurement of an incompatible component.

**3D.** The notion that any hidden-variable model is obliged to express the results of measurements as pre-existing properties of the system can seem natural, and it may have been held by Einstein, but it is *ultimately too strong and must be discarded.* When this demand is formulated in general (without appealing to quantum mechanics), it could be named the prejudice of *ad infinitum* separability between system and environment. When it is formulated from the quantum formalism, it is always based on an erroneous notion known as *eigenvalue realism:* eigenvalues are not properties of a system; it is the interaction term that selects them. In this sense, the real eigenvalues of Hermitian operators (observables), which a naive examination of the quantum formalism seems to elevate to the category of *properties of a system* (say, ontological attributes), are really *properties of the interaction between a system and its physical environment,* which can be relevant or not, depending on the evolution of both.

The notion of context first appeared in a paper by Bohr answering the famous EPR paper, and bearing the same title. When I measure, I strongly condition, or even determine, what I'm going to obtain.

**3E.** Theorems of the ontological kind (in particular, the Bell-Kochen-Specker or BKS theorem) "find observables" (or rather prove their existence, as the theorem is not constructive) which, while being mutually compatible, cannot be determined by any pre-existing variables. This line of enquiry is based on the concept of value definiteness, brought up by von Neumann. In actuality, these so-called BKS "observables", although they are Hermitian operators embedded in the linear span of the spin operators, are not themselves spin observables, and as a consequence they are devoid of physical content. The proof is so simple that one cannot help but feel puzzled that nobody, to my knowledge, has appealed to it so far. Those interested can follow the argument below.

Only for experts:

### Theorem of BKS irrelevance

There is a long tradition in physics of *no go* theorems. These consist in demonstrating that a certain theoretical conception is unfeasible, on account of being mathematically impossible. Known instances of these are the Coleman-Mandula theorem, which excludes the mixing of internal and space-time symmetries; or the Weinberg-Witten theorem, which states the impossibility of making up a graviton out of two photons, as it would be dynamically unfeasible. Parallel to this tradition is the one of examining possible loopholes. Because a mathematical theorem cannot be contradicted, such a loophole always consists in scanning through the premises in order to remove or soften some of them. This way of thinking has led to such theories as supersymmetry (to escape the Coleman-Mandula theorem) or gauge-gravity duality (to escape the Weinberg-Witten theorem).

But what if one examines the premises only to find out that, rather than one of them being too strong, it is *inconsistent with what we already know?* In other words: it is only relevant for non-physical instances. That is the case of BKS.

The BKS argument has an overriding fault. Specifically:

*Let* $\mathcal{H}$ *be a Hilbert space of finite dimension, and consider* $S$ *and* $S'$ *Hermitian operators on it such that, also,* $\left[S,S'\right]=0$.

*Let us call a spin operator any projective combination,*

$$S_{\mathbf{n}} = n_x S_x + n_y S_y + n_z S_z, \qquad \left|\mathbf{n}\right|=1$$

*where* $S_x$, $S_y$ *and* $S_z$ *are generators for the Lie algebra of* $SU(2)$:

$$\left[S_i,S_j\right] = i\,\epsilon_{ijk}\,S_k$$

*(where* $i$, $j$, $k$ *take the values* $x$, $y$, $z$ *and all their cyclic permutations).*

*Then, either both are the same spin operator (modulo sign) or at least one of them is not a spin operator.*

Any such operators are unphysical. The reason is that, while the dimension of the space of Hermitian operators on $\mathcal{H}$ can be as high as we want, that of the projective space of physical parametrizations $S_{\mathbf{n}}$, with $\left|\mathbf{n}\right|=1$, remains anchored to dimension 2, irrespective of the spin $s$. Consider, e.g., spin 3/2:

$$\dim\mathcal{H}=4 \quad\Rightarrow\quad \dim\left\{\text{Hermitian operators}\right\}=4^2=16$$

For massive spin 1:

$$\dim\mathcal{H}=3 \quad\Rightarrow\quad \dim\left\{\text{Hermitian operators}\right\}=3^2=9$$

But for spin $1/2$, with $\dim\mathcal{H}=2$, the dimensions are the same:

$$\dim\left\{\text{Hermitian operators}\right\}=4=\dim\left\{a\mathbb{1}+b\,S_{\mathbf{n}}\right\}$$

The reason is that, for spin $1/2$, any observable on the LHS is given by an arbitrary pair $(a,b)$ in,

$$a\mathbb{1}+b\,\mathbf{n}\cdot\mathbf{S}, \qquad a,b\in\mathbb{R},\ \left|\mathbf{n}\right|=1$$

which happens to have the same number of degrees of freedom as,

$$\left\{H : H=H^{\dagger}\right\}$$

so that any of those is a diagonal injective function, $f\left(S_{\mathbf{n}}\right)=a\mathbb{1}+b\,S_{\mathbf{n}}$, of a spin operator for some $\mathbf{n}$. This is peculiar to spin $1/2$, for which there is simply no room for unphysical BKS operators. For spin 1, the space of such operators has dimension 9, while the space of spin operators, as we said, always has dimension 2 (as they are anchored to their Lie algebra and have, as their only parametric freedom, their unit vector). In general:

$$\dim\left\{\text{Hermitian operators}\right\}=\left(2s+1\right)^2 \,>\, \dim\left\{a\mathbb{1}+b\,S_{\mathbf{n}}\right\}=4 \quad\text{for } s\geq 1,$$

whether $s$ is half-odd or integer.
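The spin-1/2 counting can be verified concretely: every Hermitian $2\times 2$ operator decomposes as $a\mathbb{1}+b\,\mathbf{n}\cdot\boldsymbol{\sigma}$, i.e., as an affine function of a spin operator. A sketch (the sample matrix `H` is an arbitrary choice of mine):

```python
import numpy as np

# Pauli basis: any Hermitian 2x2 operator is a*Id + b * (n.sigma), real a, b, unit n.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = np.array([[2.0, 1 - 1j], [1 + 1j, -0.5]])   # an arbitrary Hermitian observable

a = np.real(np.trace(H)) / 2                     # identity component
v = np.real([np.trace(H @ s) / 2 for s in (sx, sy, sz)])
b = np.linalg.norm(v)
n = v / b                                        # unit direction: 4 parameters in total

recon = a * np.eye(2) + b * (n[0] * sx + n[1] * sy + n[2] * sz)
print(np.allclose(recon, H))  # True: (a, b, n) exhaust all spin-1/2 observables
```

For spin 1 or higher the analogous decomposition fails: the Hermitian operators outnumber the affine functions of spin operators, which is where the BKS constructions live.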

The only thing that's left to do is almost a reminder:

*Lemma: If* $S_{\mathbf{n}}$ *and* $S_{\mathbf{n}'}$ *are two compatible angular-momentum operators in any dimension, then* $\mathbf{n}'=\pm\mathbf{n}$.

*Proof:* Let $S_{\mathbf{n}}=\mathbf{n}\cdot\mathbf{S}$, $S_{\mathbf{n}'}=\mathbf{n}'\cdot\mathbf{S}$, and $\left[S_{\mathbf{n}},S_{\mathbf{n}'}\right]=0$. Then:

$$\left[S_{\mathbf{n}},S_{\mathbf{n}'}\right]=i\left(\mathbf{n}\times\mathbf{n}'\right)\cdot\mathbf{S}=0 \quad\Rightarrow\quad \mathbf{n}\times\mathbf{n}'=\mathbf{0}$$

As $\mathbf{n}$ and $\mathbf{n}'$ have to be real, unit 3-vectors, they are the same (mod. sign).
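The commutator identity behind the lemma is easy to verify numerically, e.g. on the spin-1 generators (the helper names `spin_op` and `comm` are mine):

```python
import numpy as np

# Spin-1 generators satisfying [S_i, S_j] = i eps_ijk S_k.
s2 = 1 / np.sqrt(2)
Sx = s2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Sz = np.diag([1, 0, -1]).astype(complex)
S = np.array([Sx, Sy, Sz])

def spin_op(n):
    """n.S for a real 3-vector n."""
    return np.einsum('i,ijk->jk', np.asarray(n, dtype=complex), S)

def comm(A, B):
    return A @ B - B @ A

n1 = np.array([0, 0, 1.0])
n2 = np.array([1.0, 0, 0])

# [n1.S, n2.S] = i (n1 x n2).S  -- vanishes only when n1 and n2 are parallel.
lhs = comm(spin_op(n1), spin_op(n2))
rhs = 1j * spin_op(np.cross(n1, n2))
print(np.allclose(lhs, rhs))                           # True
print(np.allclose(comm(spin_op(n1), spin_op(n1)), 0))  # True: parallel directions commute
```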

It's worth re-phrasing the result to collect its physical implications: even if we manage to (mathematically) build two compatible operators, at least one of them cannot represent an angular-momentum projection with respect to any direction $\mathbf{n}$; and it is as irrelevant as, e.g., a symmetric state of two identical fermions, or a state superposition forbidden by superselection rules. The argument generalises easily to any observables whose dimension is fixed by a Lie algebra, although the dimension of the physically relevant irreps can be arbitrarily high.

In case the previous argument wasn’t enough, the BKS theorem deals with spin 3/2 or higher (massive case), of which:

- None have been discovered
- Soft-boson theorems assure they are inconsistent with quantum field theory

When the particles are massless, the compelling character of the BKS argument is even weaker: transversality conditions reduce the dimension of the state space to 2, which is out of reach for the theorem.
