An important motivation for this blog is to reflect upon well-known topics in theoretical physics, searching for alternatives to the trodden paths, offering speculations, techniques, and anything that can give a new perspective without inventing hidden dimensions or invisible universes, but sticking instead to methods that, so I think, still have some juice left to squeeze.

In analytical mechanics textbooks you’ll find the Lagrange-multipliers technique for solving constrained dynamical systems. There’s also Hamilton’s formulation of mechanics, as well as Poisson brackets. You may remember a limitation of both methods, Hamilton and Poisson, when dealing with constraints. After thinking intermittently but stubbornly about this question, I’ve come up with a way of overcoming that limitation. I show it here and submit it to anybody who cares to make observations, objections, expansions; or hopefully, tell me they’ve found it useful. My ultimate goal is its application in the quantum formalism, but if anybody finds any benefit (or limitation that has escaped me) for systems of mechanical rollers, that’s welcome too.

Orthodoxy says: *there is no method of Lagrange multipliers in Hamilton’s formulation of mechanics.* I will prove that *there is such a method*, besides the one proposed by Dirac in the 1960s. But before that I’ll have to turn the question around a couple of times to see that our ancestors perhaps gave up too soon. I’ll briefly explain what the methods of Lagrange, Hamilton and Poisson are.

## Lagrange’s Method

Generalised coordinates: $q_i(t)$, $i = 1, \dots, n$. They are the set of parameters (functions of time $t$) that specify a configuration (position) of the system.

Action:

$$S[q] = \int_{t_1}^{t_2} L(q_i, \dot q_i)\, dt \tag{A}$$

Lagrange’s formulation of mechanics tells us that the action is stationary (doesn’t change at first order in the variation parameters) under infinitesimal transformations (small arbitrary changes in coordinates and velocities). If one varies (A) under small arbitrary changes $\delta q_i$, $\delta \dot q_i$, taken at fixed time and vanishing at the limits of integration, one finds,

$$\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = 0,$$

named Euler-Lagrange equations, and equivalent to Newton’s. $L$ is called Lagrange’s function or Lagrangian, and for all we care it’s just the kinetic minus the potential energy, $L = T - V$.
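For readers who like to check formulas by machine, here is a minimal sketch (sympy is assumed available; the harmonic oscillator is my choice of example) applying the Euler-Lagrange recipe symbolically:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

# Lagrangian of a harmonic oscillator: kinetic minus potential energy
L = sp.Rational(1, 2) * m * q(t).diff(t)**2 - sp.Rational(1, 2) * k * q(t)**2

# Euler-Lagrange: d/dt(dL/dq') - dL/dq
EL = sp.diff(L, q(t).diff(t)).diff(t) - sp.diff(L, q(t))
print(sp.simplify(EL))  # m*q'' + k*q, i.e. Newton's law for a spring
```

Setting the printed expression to zero gives exactly $m\ddot q = -kq$.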

## Hamilton’s Method

Hamilton’s method is based on a change of variables $(q_i, \dot q_i) \to (q_i, p_i)$, with $p_i = \partial L/\partial \dot q_i$, plus the introduction of $H$, called the system’s Hamiltonian, that starts life as an auxiliary function and ends up claiming center stage in physics:

$$H(q, p) = \sum_i p_i\, \dot q_i - L(q, \dot q)$$

It must be understood that $\dot q_i = \dot q_i(q, p)$ and therefore $H = H(q, p)$. But making these substitutions in order to prove Hamilton’s eqs. is the wrong way. The easy proof can be found, e.g., on Wikipedia and is based on *differentials:*

$$dH = \sum_i \left(\dot q_i\, dp_i + p_i\, d\dot q_i - \frac{\partial L}{\partial q_i}\, dq_i - \frac{\partial L}{\partial \dot q_i}\, d\dot q_i\right) = \sum_i \left(\dot q_i\, dp_i - \dot p_i\, dq_i\right)$$

Hamilton’s equations are thus,

$$\dot q_i = \frac{\partial H}{\partial p_i}, \tag{H.i}$$

$$\dot p_i = -\frac{\partial H}{\partial q_i}. \tag{H.ii}$$
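The Legendre-transformed picture can be checked the same way; a small sketch for the harmonic oscillator, where $H = p^2/2m + kq^2/2$:

```python
import sympy as sp

q, p = sp.symbols('q p')
m, k = sp.symbols('m k', positive=True)

# Hamiltonian of the harmonic oscillator (Legendre transform of T - V)
H = p**2/(2*m) + k*q**2/2

qdot = sp.diff(H, p)    # (H.i):  dq/dt =  dH/dp
pdot = -sp.diff(H, q)   # (H.ii): dp/dt = -dH/dq
print(qdot, pdot)       # p/m, -k*q
```

Eliminating $p$ between the two printed equations recovers $m\ddot q = -kq$, as it must.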

## Poisson’s Bracket

Poisson’s bracket is a refined technique used to express the same with equations that make manifest the symmetry between positions and momenta in mechanics:

$$\{f, g\} = \sum_i \left(\frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i}\right)$$

It has a very profound geometric meaning, which is beautiful, with corollaries such as: for every motion generated by a certain $H(q, p)$ there is a “dual” one generated by the corresponding exchange $q_i \to p_i$, $p_i \to -q_i$. But unfortunately we have to ignore these mathematical-physics *delicatessen.*

Using (H.i) and (H.ii):

$$\frac{df}{dt} = \sum_i \left(\frac{\partial f}{\partial q_i}\,\dot q_i + \frac{\partial f}{\partial p_i}\,\dot p_i\right) = \sum_i \left(\frac{\partial f}{\partial q_i}\frac{\partial H}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial H}{\partial q_i}\right) = \{f, H\}$$

So differentiating a dynamical function (one that doesn’t depend explicitly on time) with respect to time is equivalent to “bracketing” it with the Hamiltonian.
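The bracket itself is a one-liner; a short sketch (the helper name `poisson` is mine) verifying on the oscillator that bracketing with $H$ reproduces the time derivatives:

```python
import sympy as sp

def poisson(f, g, coords, momenta):
    # {f, g} = sum_i (df/dq_i * dg/dp_i - df/dp_i * dg/dq_i)
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(coords, momenta))

q, p = sp.symbols('q p')
m, k = sp.symbols('m k', positive=True)
H = p**2/(2*m) + k*q**2/2

# {q, H} = p/m = qdot  and  {p, H} = -k*q = pdot
print(poisson(q, H, [q], [p]), poisson(p, H, [q], [p]))
```

The two printed brackets are exactly the right-hand sides of (H.i) and (H.ii).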

## Constraints

Constraints are mechanical limitations, conditions that make the coordinates mutually dependent. In the most general instance they are expressed by means of equations, or perhaps inequalities. There are many kinds, with resounding names like holonomic, scleronomic, rheonomic… I’m interested in those that can be written as:

Constraint equations: $\phi_a(q_1, \dots, q_n) = 0$, $a = 1, \dots, m$, however they are named.

### Method of Lagrange multipliers

Constraint equation:

$$\phi(q_1, \dots, q_n) = 0$$

New Lagrangian:

$$L' = L + \lambda\, \phi(q)$$

Euler-Lagrange equations for the system with constraints:

$$\frac{d}{dt}\frac{\partial L'}{\partial \dot q_i} - \frac{\partial L'}{\partial q_i} = 0, \qquad \frac{d}{dt}\frac{\partial L'}{\partial \dot\lambda} - \frac{\partial L'}{\partial \lambda} = 0$$

As $\partial L'/\partial \dot\lambda = 0$ and $\partial L'/\partial \lambda = \phi$, this gives,

$$\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = \lambda\, \frac{\partial \phi}{\partial q_i}, \tag{L.i}$$

$$\phi(q) = 0. \tag{L.ii}$$

What appears on the right of eqs. (L.i), multiplying $\lambda$, are the forces of constraint. The component $\lambda\, \partial\phi/\partial q_i$ is the component of the force of constraint along the direction corresponding to generalised coordinate $q_i$; and eq. (L.ii) is precisely the constraint equation. A more pedestrian method of solving the problem (alternative to the previous one) is to use the constraint equation to reduce the number of variables, making a change of variables that lowers the dimension of the problem:

$$q_i = q_i(Q_1, \dots, Q_{n-1}),$$

and with the new variables $Q_j$, with $j = 1, \dots, n-1$, set up the variational problem and obtain the reduced equations directly:

$$\frac{d}{dt}\frac{\partial \tilde L}{\partial \dot Q_j} - \frac{\partial \tilde L}{\partial Q_j} = 0,$$

where $\tilde L$ is $L$ expressed in the reduced variables $Q_j$.

But the advantage of the method of Lagrange multipliers is that *it allows us to obtain the forces of constraint.* This can be useful in engineering, where forces of constraint are of interest, because materials do not satisfy an equation of constraint indefinitely; on the contrary, they suffer from mechanical fatigue and plastic deformations, so they slowly change their condition. Presumably they are also of interest in quantum mechanics since, provided they have been produced dynamically, the corresponding systems will undergo quantum fluctuations around the condition of constraint.
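As an illustration of the information the multiplier carries, here is a sympy sketch (the pendulum example is my addition, not part of the text above): a pendulum of length $l$ written in Cartesian coordinates, with constraint $\phi = x^2 + y^2 - l^2 = 0$. Solving the $y$ equation at the static equilibrium point reads off $\lambda$, and with it the tension.

```python
import sympy as sp

t, lam = sp.symbols('t lambda')
m, g, l = sp.symbols('m g l', positive=True)
x, y = sp.Function('x'), sp.Function('y')

# Pendulum in Cartesian coordinates; constraint phi = x^2 + y^2 - l^2 = 0
phi = x(t)**2 + y(t)**2 - l**2
L = sp.Rational(1, 2)*m*(x(t).diff(t)**2 + y(t).diff(t)**2) - m*g*y(t) + lam*phi

def euler_lagrange(L, q):
    return sp.diff(L, q(t).diff(t)).diff(t) - sp.diff(L, q(t))

eq_y = euler_lagrange(L, y)   # m*y'' + m*g - 2*lam*y = 0

# Hanging at rest at the bottom of the circle: y = -l, y'' = 0
eq_static = eq_y.subs(y(t).diff(t, 2), 0).subs(y(t), -l)
sol = sp.solve(eq_static, lam)[0]
print(sol)              # -g*m/(2*l)
print(2*sol*(-l))       # force of constraint along y: g*m, i.e. the tension
```

The component $2\lambda y$ of the constraint force along $y$ comes out as $+mg$: the rod pulls up with exactly the weight, as expected at equilibrium.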

## Problems with constraints

When one has constraints, classical treatises go, one cannot use Hamilton’s method. Let’s see why. Describing a constraint forces us to expand the configuration space by including a “variable” $\lambda(t)$, and the reason for the quotation marks is that it’s really a constant.

You have to be a magician if you’re going to describe correctly a system with *fewer degrees of freedom* by introducing more degrees of freedom. Although a constant is no typical degree of freedom, considering it as such, only to the effect of applying infinitesimal variations to it, allows us to deduce the equation of constraint as another Euler-Lagrange equation. It is equation (L.ii), recovering condition $\phi = 0$. The problem with Hamilton is that we need to introduce an associated canonical momentum $p_\lambda$ for the fictitious coordinate $\lambda$ which, being zero by definition, does not allow for infinitesimal variations. (Dirac solved this by introducing condition $p_\lambda = 0$ as a constraint and proceeding to repeatedly use Poisson’s bracket, adding successive Lagrange multipliers while crossing your fingers so that, at a low order of iteration, Poisson-bracketing each constraint with the expanded Hamiltonian gives zero identically!) Methodologically speaking this is next to praying. What we wish is to have a way of introducing this momentum variable only to find later that *it vanishes as a consequence of the evolution equations.* Let’s see how this is possible.

## First Idea (Fail):

The idea is adding a total time derivative of an otherwise arbitrary function of our “dynamical variable” $\lambda$. If we do that,

$$L' = L + \lambda\, \phi(q) + \frac{d}{dt} f(\lambda) = L + \lambda\, \phi(q) + f'(\lambda)\, \dot\lambda,$$

the evolution equations are unchanged:

$$\frac{\delta L'}{\delta \lambda} = \phi + f''(\lambda)\, \dot\lambda - \frac{d}{dt} f'(\lambda) = \phi + f''(\lambda)\, \dot\lambda - f''(\lambda)\, \dot\lambda,$$

which reduces to,

$$\phi(q) = 0.$$

So far, so good. The constraint force appears on the RHS, “connected” to the problem via the constant $\lambda$. The variational equation for $\lambda$ is none other than the constraint equation. The problem is that, if we want to translate this to Hamiltonian language, we have defined a canonical momentum being,

$$p_\lambda = \frac{\partial L'}{\partial \dot\lambda} = f'(\lambda).$$

This does not vanish identically, though there is no doubt that it cannot be considered a variable analytically independent from $\lambda$. In fact, the problem arises even earlier, when we try to express the velocities as functions of the momenta. Remember $H(q, p)$ only makes sense when we can express the velocities in terms of both coordinates and momenta. As $\dot\lambda$ has disappeared from the relation that defines the associated canonical momentum, it is not possible to solve for it. That’s why you sometimes find the observation, without much explanation (see, e.g., Wikipedia), that it is not possible to use a Lagrangian that is linear in $\dot\lambda$ for these auxiliary parameters. Why? That’s why.
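A two-line symbolic check of the failure, with the illustrative choice $f(\lambda) = \lambda^2$ (my assumption; any function of $\lambda$ alone fails the same way): the candidate momentum comes out independent of $\dot\lambda$.

```python
import sympy as sp

t = sp.symbols('t')
lam = sp.Function('lam')

# Total time derivative of f(lam) = lam^2 added to the Lagrangian
dfdt = (lam(t)**2).diff(t)               # 2*lam*lam'

# Candidate canonical momentum for lam
p_lam = sp.diff(dfdt, lam(t).diff(t))    # 2*lam: lam' has dropped out
print(p_lam)
```

Since `p_lam` is a function of $\lambda$ alone, the relation cannot be inverted for $\dot\lambda$, which is exactly the obstruction described above.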

In order to present my method, it is convenient to recall what a *variational derivative* is. There is practically no book in field theory (at least not among the best known) that uses this more general definition of the variational derivative. Although physicists in general are blissfully ignorant of it, I’m sure mathematicians well versed in variational calculus are familiar with it. If a Lagrangian depends on an arbitrarily high order of time derivatives, the variational derivative is:

$$\frac{\delta L}{\delta q} = \sum_{k=0}^{\infty} (-1)^k\, \frac{d^k}{dt^k}\frac{\partial L}{\partial q^{(k)}} = \frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot q} + \frac{d^2}{dt^2}\frac{\partial L}{\partial \ddot q} - \cdots$$

The Euler-Lagrange equations for this case generalise to,

$$\frac{\delta L}{\delta q_i} = 0,$$

which looks simple, but is actually *infinitely* more complicated, and will be important to us only up to order 2.
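The generalised variational derivative is easy to implement symbolically; a minimal sketch (the helper name `variational_derivative` is mine) checking it on two Lagrangians that differ by a total derivative, $\dot q^2/2$ and $-q\ddot q/2$, which must yield the same equation of motion:

```python
import sympy as sp

t = sp.symbols('t')
q = sp.Function('q')

def variational_derivative(L, q, order=2):
    # delta L / delta q = sum_k (-1)^k d^k/dt^k ( dL / d q^(k) )
    return sum((-1)**k * sp.diff(L, sp.diff(q(t), t, k)).diff(t, k)
               for k in range(order + 1))

# L1 and L2 differ by the total derivative d/dt(-q*q'/2),
# so their equations of motion must coincide
L1 = q(t).diff(t)**2 / 2
L2 = -q(t) * q(t).diff(t, 2) / 2
print(sp.simplify(variational_derivative(L1, q)))  # -q''
print(sp.simplify(variational_derivative(L2, q)))  # -q''
```

Note that $L_2$ depends on $\ddot q$, so the ordinary Euler-Lagrange formula would not even apply to it; the order-2 term is what restores the correct equation.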

In our case of interest, applying this machinery runs into several glitches.

### Glitches

1) In a generic system, with both constraints and potential being velocity-dependent, it may be difficult, if not impossible, to express the $\dot q$’s as functions of the $p$’s

2) $f$ cannot be linear in $\dot\lambda$, otherwise the $d/dt$ will kill the dependence we need and we won’t be able to express $\dot\lambda$ as a function of $\pi_\lambda$

3) Should $H$ be independent of $\dot\lambda$?

Glitch 1) we just have to live with, and hope for the best; glitch 2) is solved by observing it and writing an $f$ of at least second order in $\dot\lambda$; and glitch 3) is too pessimistic, or only apparent. In fact, I have included it only to prepare the reader, who might be surprised by a dependence on $\dot\lambda$; which, in reality, is not only consistent, but necessary: it is the equations of motion that should not depend on $f$ (or $\ddot\lambda$); as we will see, the Hamiltonian can depend on $\dot\lambda$ and everything goes through. Actually, *it must depend on* $\dot\lambda$ for the term in $\ddot\lambda$ to *disappear* from the Hamilton equations.

But the real glitch is:

4) The canonical momentum is no longer a linear function of the velocities; it depends on the accelerations! How does *that* work if $H = H(q, p)$?

The solution of this is shown next.

## The idea corrected

These are the steps:

1) Generalise Lagrange’s method by including a total time derivative with the appropriate function $F$. This function must depend at least on the first-order time derivative $\dot\lambda$:

$$L' = L + \lambda\, \phi(q) + \frac{d}{dt} F(\lambda, \dot\lambda)$$

2) Generalise the definition of the variational derivative by $\lambda$ to a dependence on higher orders of time derivation. Order 2 will suffice:

$$\frac{\delta L'}{\delta \lambda} = \frac{\partial L'}{\partial \lambda} - \frac{d}{dt}\frac{\partial L'}{\partial \dot\lambda} + \frac{d^2}{dt^2}\frac{\partial L'}{\partial \ddot\lambda}$$

3) Generalise the definition of the canonical momentum associated to the coordinate $\lambda$ in a way that completely parallels the extension we have practised on the variational derivative. If,

$$p_i = \frac{\partial L'}{\partial \dot q_i},$$

then,

$$\pi_\lambda = \frac{\partial L'}{\partial \dot\lambda} - \frac{d}{dt}\frac{\partial L'}{\partial \ddot\lambda}.$$

Checking that the system so extended satisfies exactly the same Euler-Lagrange equations is easy: as we have only added a total derivative, equations (L.i) and (L.ii) are the same. The interesting part is to prove that the Hamiltonian formalism goes through:

Euler-Lagrange:

$$\frac{\delta L'}{\delta q_i} = 0, \qquad \frac{\delta L'}{\delta \lambda} = 0.$$

Suppose there is no explicit time dependence (only to simplify):

The proof that comes next is somewhat tedious; if you are bored, go directly to the example below to convince yourself that everything really works. The proof completely parallels the deduction of the Hamilton eqs. from the Euler-Lagrange ones given above. From,

it’s not hard to prove,

Equating this to,

we obtain,

That is, Hamilton’s eqs. are satisfied:

The most important point by far in the previous equations is that *the canonical momentum* $\pi_\lambda$ *is not identically zero, nor is it a simple function of the coordinates.* Its vanishing is deduced later, as a consequence of the evolution equations, so that it is an independent variable and the system can be endowed with a Hamiltonian structure.
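To see the mechanism in a concrete case, here is a sympy sketch with the illustrative choice $F = \lambda\,\dot\lambda^2$ (my assumption: just one function satisfying the requirements of step 1, not necessarily the one meant in the text). It checks both key facts: the total derivative drops out of the generalised equation of motion, and the generalised momentum is not identically zero.

```python
import sympy as sp

t = sp.symbols('t')
lam = sp.Function('lam')
ld, ldd = lam(t).diff(t), lam(t).diff(t, 2)

# Illustrative choice F(lam, lam') = lam * lam'^2;
# dF/dt = lam'^3 + 2*lam*lam'*lam'' brings in the acceleration lam''
F = lam(t) * ld**2
dFdt = F.diff(t)

def variational_derivative(L, q, order=2):
    # delta L / delta q = sum_k (-1)^k d^k/dt^k ( dL / d q^(k) )
    return sum((-1)**k * sp.diff(L, sp.diff(q(t), t, k)).diff(t, k)
               for k in range(order + 1))

# The total derivative contributes nothing to the equations of motion
print(sp.simplify(variational_derivative(dFdt, lam)))   # 0

# Generalised canonical momentum: pi = dL/dlam' - d/dt dL/dlam''
pi = sp.diff(dFdt, ld) - sp.diff(dFdt, ldd).diff(t)
print(sp.simplify(pi))   # lam'^2: not identically zero, solvable for lam'
```

Here $\pi_\lambda = \dot\lambda^2$ depends on the velocity $\dot\lambda$, so it can be treated as an independent phase-space variable; its vanishing only follows once the evolution equations force $\dot\lambda = 0$.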

## Example

The Hamiltonian is,

The pair of Hamilton eqs. for the $q_i$’s is of the form (E.i):

coinciding with,

And the eqs. for the $p_i$’s (E.ii):

Those for $\lambda$, next (E.iii):

(E.iv):

Regrouping all Hamilton eqs. for the constrained system, we have,

## What About the Poisson Brackets?

Does all of this work for Poisson brackets? Yes, it does. Let’s see how. Remember that, from the beginning, the equations to be recovered are (E.i)-(E.iv). We omit now the bothersome index $i$ in $q_i$ (both $q$’s satisfy analogous equations):

$\{\pi_\lambda, H\}$ is zero because:

The last equation is,

Consequently:

Conclusion: *Hamilton’s method for systems with constraints can be used. The price to pay is generalising the variational derivative with respect to the Lagrange multiplier to higher orders of time derivation and extending in close analogy the definition of the associated canonical momentum.*
