Classical mechanics is used for describing the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. It produces very accurate results within these domains, and is one of the oldest and largest subjects in science, engineering and technology.
Beyond this core, many related sub-fields exist that deal with gases, liquids, solids, and so on. In physics, classical mechanics is one of the two major sub-fields of study in the science of mechanics, which is concerned with the set of physical laws governing and mathematically describing the motions of bodies and aggregates of bodies.
The term classical mechanics was coined in the early 20th century to describe the system of mathematical physics begun by Isaac Newton and many of his 17th-century contemporaries, building upon the earlier astronomical theories of Johannes Kepler, which in turn were based on the precise observations of Tycho Brahe and Galileo's studies of terrestrial projectile motion, but before the development of quantum physics and relativity. A number of modern sources do, however, include Einstein's mechanics, which in their view represents classical mechanics in its most developed and most accurate form. The initial stage in the development of classical mechanics is often referred to as Newtonian mechanics, and is associated with the physical concepts employed by, and the mathematical methods invented by, Newton himself, in parallel with Leibniz and others. More abstract and general methods include Lagrangian mechanics and Hamiltonian mechanics. Much of the content of classical mechanics was created in the 18th and 19th centuries and extends considerably beyond the work of Newton, particularly in its use of analytical mathematics.
The combination of Newton's laws of motion and gravitation provides the fullest and most accurate description of classical mechanics.
THE BASIC APPROACH: HAMILTONIAN MECHANICS
Molecular dynamics (MD) is employed to study the classical motion of a many-body system and to extract experimental observables from the dynamics. Because MD calculations provide a window into the detailed motion of individual atoms in a system, the microscopic mechanisms of energy and mass transfer can be gleaned from them.
Consider a system consisting of N particles moving under the influence of the internal forces acting between them. The spatial positions of the particles as functions of time will be denoted $\mathbf{r}_1(t), \dots, \mathbf{r}_N(t)$, and their velocities $\mathbf{v}_1(t), \dots, \mathbf{v}_N(t)$. If the forces $\mathbf{F}_1, \dots, \mathbf{F}_N$ on the N particles are specified, then the classical motion of the system is determined by Newton's second law,

$$ m_i \ddot{\mathbf{r}}_i = \mathbf{F}_i, \qquad i = 1, \dots, N, \qquad (3.1) $$

where $m_1, \dots, m_N$ are the masses of the N particles. Since the force on each particle is, in principle, a function of all of the N position variables, $\mathbf{F}_i = \mathbf{F}_i(\mathbf{r}_1, \dots, \mathbf{r}_N)$, Eqs. (3.1) constitute a set of 3N, or more generally dN, where d is the number of spatial dimensions, coupled second-order differential equations. A unique solution to Eqs. (3.1) is obtained by choosing a set of initial conditions, $\{\mathbf{r}_1(0), \dots, \mathbf{r}_N(0), \mathbf{v}_1(0), \dots, \mathbf{v}_N(0)\}$. Newton's equations completely determine the full set of positions and velocities as functions of time and thus specify the classical state of the system at time t. Except in special cases, an analytical solution to the equations of motion, Eqs. (3.1), is not possible. An MD calculation, therefore, employs an iterative numerical procedure, called a numerical integrator or a map, to obtain an approximate solution [6, 21]. The accuracy of the numerical solution is determined by the time discretization, $\Delta t$, also referred to as the time step. In most cases, the forces are sufficiently nonlinear functions of position that, if the true solution could be obtained for a given choice of initial conditions, the numerical solution would bear little resemblance to it after enough iterations of the map. This is largely due to the fact that the initial conditions can only be specified to within a finite precision in a numerical calculation. In large systems with highly nonlinear forces, small differences between two sets of initial conditions lead to a divergence between the trajectories that becomes exponentially large as time increases. However, the numerical solution is statistically equivalent to the true solution within a bounded error, and this is sufficient to ensure that the same physical observables are obtained on average. It is important to note that small systems with closed orbits possess similar statistical equivalences.
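As a concrete illustration of such a map, the following Python sketch implements one velocity Verlet step, a widely used integrator. The function name, array shapes, and the user-supplied force routine `force_fn` are illustrative assumptions rather than details taken from the text above.

```python
import numpy as np

def velocity_verlet_step(r, v, f, masses, dt, force_fn):
    """Advance positions r and velocities v of N particles by one time step dt.
    r, v, f are (N, d) arrays (positions, velocities, forces at time t);
    masses is an (N,) array; force_fn(r) returns the (N, d) force array."""
    a = f / masses[:, None]                    # accelerations at time t
    r_new = r + v * dt + 0.5 * a * dt**2       # positions at t + dt
    f_new = force_fn(r_new)                    # forces at t + dt
    a_new = f_new / masses[:, None]
    v_new = v + 0.5 * (a + a_new) * dt         # velocities at t + dt
    return r_new, v_new, f_new
```

Iterating this map from a chosen set of initial conditions produces the approximate numerical trajectory discussed above.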
In order to demonstrate the conditions required for the statistical equivalence of the numerical and true solutions to the equations of motion, it is first useful to recast Eqs. (3.1) in Hamiltonian form. The Hamiltonian for an N-particle system subject only to interparticle interactions is

$$ H(\mathbf{r}_1, \dots, \mathbf{r}_N, \mathbf{p}_1, \dots, \mathbf{p}_N) = \sum_{i=1}^{N} \frac{\mathbf{p}_i^2}{2 m_i} + U(\mathbf{r}_1, \dots, \mathbf{r}_N), \qquad (3.2) $$
where $\mathbf{p}_i = m_i \mathbf{v}_i$ are the momenta of the particles and $U(\mathbf{r}_1, \dots, \mathbf{r}_N)$ is the interparticle potential, in terms of which the forces are given by

$$ \mathbf{F}_i = -\frac{\partial U}{\partial \mathbf{r}_i}. \qquad (3.3) $$
The equations of motion (3.1) can be derived from Eq. (3.2) according to Hamilton's equations,

$$ \dot{\mathbf{r}}_i = \frac{\partial H}{\partial \mathbf{p}_i} = \frac{\mathbf{p}_i}{m_i}, \qquad \dot{\mathbf{p}}_i = -\frac{\partial H}{\partial \mathbf{r}_i} = \mathbf{F}_i. \qquad (3.4) $$
Taking the time derivative of both sides of the first of Hamilton's equations and substituting it into the second is easily seen to yield Eqs. (3.1). Therefore, the classical state of a system at any instant in time can also be determined by specifying the complete set of particle positions and corresponding momenta. Alternatively, we may collect the full set of positions and momenta into a single vector, $\mathbf{x} = (\mathbf{r}_1, \dots, \mathbf{r}_N, \mathbf{p}_1, \dots, \mathbf{p}_N)$, called the phase space vector, which exists in a 2dN-dimensional phase space. A classical state of the system corresponds to a single point in the phase space. The phase space is thus the union of all possible classical states of a system.
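To make the phase space picture concrete, here is a minimal sketch (the names and array shapes are assumptions introduced only for illustration) that packs all positions and momenta into a single phase space vector and evaluates the right-hand side of Hamilton's equations, Eqs. (3.4), so that a general-purpose ODE solver could propagate the full 2dN-dimensional state.

```python
import numpy as np

def phase_space_vector(r, p):
    """Pack positions r and momenta p, each of shape (N, d), into one
    phase space vector x of length 2*d*N."""
    return np.concatenate([r.ravel(), p.ravel()])

def hamiltonian_rhs(x, masses, force_fn, N, d):
    """Right-hand side of Hamilton's equations, Eqs. (3.4):
    dr_i/dt = p_i / m_i  and  dp_i/dt = F_i."""
    r = x[:N * d].reshape(N, d)
    p = x[N * d:].reshape(N, d)
    drdt = p / masses[:, None]     # velocities from momenta
    dpdt = force_fn(r)             # forces from the potential
    return np.concatenate([drdt.ravel(), dpdt.ravel()])
```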
Two important properties of the equations of motion should be noted. One is that they are time reversible, i.e., they take the same form when the transformation $t \rightarrow -t$ is made. The consequence of time reversal symmetry is that the microscopic physics is independent of the direction of the flow of time. The second important property of the equations of motion is that they conserve the Hamiltonian, Eq. (3.2). This can be easily seen by computing the time derivative of H and substituting in Eqs. (3.4),

$$ \frac{dH}{dt} = \sum_{i=1}^{N}\left[\frac{\partial H}{\partial \mathbf{r}_i}\cdot\dot{\mathbf{r}}_i + \frac{\partial H}{\partial \mathbf{p}_i}\cdot\dot{\mathbf{p}}_i\right] = \sum_{i=1}^{N}\left[\frac{\partial H}{\partial \mathbf{r}_i}\cdot\frac{\partial H}{\partial \mathbf{p}_i} - \frac{\partial H}{\partial \mathbf{p}_i}\cdot\frac{\partial H}{\partial \mathbf{r}_i}\right] = 0. \qquad (3.5) $$
The conservation of the Hamiltonian is equivalent to the conservation of the total energy of the system and provides an important link between molecular dynamics and statistical mechanics. Recall that the latter connects the microscopic details of a system to physical observables such as equilibrium thermodynamic properties, transport coefficients, and spectra. Statistical mechanics is based on the Gibbs ensemble concept: many individual microscopic configurations of a very large system lead to the same macroscopic properties, implying that it is not necessary to know the precise detailed motion of every particle in a system in order to predict its properties. It is sufficient simply to average over a large number of identical systems, each in a different microscopic configuration; i.e., the macroscopic observables of a system are formulated in terms of ensemble averages. Statistical ensembles are usually characterized by fixed values of thermodynamic variables such as energy, E, temperature, T, pressure, P, volume, V, particle number, N, or chemical potential, μ. One fundamental ensemble is called the microcanonical ensemble; it is characterized by constant particle number, N, constant volume, V, and constant total energy, E, and is denoted the NVE ensemble. Other examples include the canonical, or NVT, ensemble, the isothermal-isobaric, or NPT, ensemble, and the grand canonical, or μVT, ensemble. The thermodynamic variables that characterize an ensemble can be regarded as experimental control parameters that specify the conditions under which an experiment is performed.
Now consider a system of N particles occupying a container of volume V and evolving under Hamilton's equations of motion. According to Eq. (3.5), the Hamiltonian will be a constant, E, equal to the total energy of the system. In addition, the number of particles and the volume are assumed to be fixed. Therefore, a dynamical trajectory of this system will generate a series of classical states having constant N, V, and E, corresponding to a microcanonical ensemble. If the dynamics generates all possible states having a fixed N, V, and E, then an average over this trajectory will yield the same result as an average in a microcanonical ensemble. The energy conservation condition, $H(\mathbf{x}) = E$, which imposes a restriction on the classical microscopic states accessible to the system, defines a hypersurface in the phase space called the constant energy surface. A system evolving according to Hamilton's equations of motion will remain on this surface. The supposition that a system, given an infinite amount of time, will cover the entire constant energy hypersurface is known as the ergodic hypothesis. Thus, under the ergodic assumption, averages over a trajectory of a system obeying Hamilton's equations are equivalent to averages over the microcanonical ensemble.
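As an illustrative check (a sketch with assumed function names and array shapes, not a prescription from the text), one can evaluate the Hamiltonian of Eq. (3.2) along a stored numerical trajectory and verify that it stays close to the initial energy E, i.e., that the trajectory remains near the constant energy surface.

```python
import numpy as np

def total_energy(r, p, masses, potential_fn):
    """Hamiltonian of Eq. (3.2): kinetic energy plus the interparticle
    potential U(r_1, ..., r_N), evaluated by the user-supplied potential_fn."""
    kinetic = 0.5 * np.sum(p**2 / masses[:, None])
    return kinetic + potential_fn(r)

def max_energy_drift(trajectory, masses, potential_fn):
    """Largest deviation of H from its initial value along a trajectory,
    given as a list of (r, p) snapshots."""
    energies = [total_energy(r, p, masses, potential_fn) for r, p in trajectory]
    return max(abs(e - energies[0]) for e in energies)
```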
In mathematical terms, if $A(\mathbf{x})$ is a function corresponding to a physical observable, then the microcanonical ensemble average of A is

$$ \langle A \rangle = \frac{M_N}{\Omega(N,V,E)} \int_{D(V)} d^N\mathbf{r} \int d^N\mathbf{p}\; A(\mathbf{x})\,\delta\bigl(H(\mathbf{x}) - E\bigr), \qquad (3.6) $$
where $\Omega(N,V,E)$ is the microcanonical partition function, given by

$$ \Omega(N,V,E) = M_N \int_{D(V)} d^N\mathbf{r} \int d^N\mathbf{p}\; \delta\bigl(H(\mathbf{x}) - E\bigr), \qquad M_N = \frac{1}{N!\,h^{dN}}. \qquad (3.7) $$
Here, h is Planck's constant and N! is a general combinatorial factor accounting for the indistinguishability of identical particles. As the prefactor, M_N, does not affect the analyses presented herein, it will be omitted from expressions subsequently presented in this paper. Equation (3.7) is a device for "counting" the number of microscopic states of a system that obey the condition $H(\mathbf{x}) = E$ for a given number of particles N and container volume V. The integral over the N Cartesian positions is restricted to the spatial domain D(V) defined by the walls of the container, while the momentum integral is unrestricted. The average of an observable, A, over a trajectory spanning a length of time $\tau$ and starting at t = 0 is given by

$$ \bar{A}_{\tau} = \frac{1}{\tau}\int_{0}^{\tau} dt\; A\bigl(\mathbf{x}(t)\bigr). \qquad (3.8) $$

The ergodic hypothesis is equivalent to the statement

$$ \lim_{\tau\to\infty} \bar{A}_{\tau} = \langle A \rangle. \qquad (3.9) $$
(Note that the system need not be mixing or even chaotic in nature to obey Eq. (3.9). A one-dimensional harmonic oscillator is ergodic and samples all the phase space available to it!)
The meaning of the statistical equivalence between a numerical trajectory and the true trajectory of a system is now clear. Although a numerical trajectory may diverge in time from the true trajectory, as long as the numerical trajectory conserves the energy to within a given tolerance, ε, it will generate configurations belonging to constant energy surfaces whose energies are never in error by more than ε. (The existence of bounds on the error of numerical trajectories is discussed further in Sec. 6.) Assuming ergodicity, a numerical trajectory can also be used in Eq. (3.9) to compute the ensemble average of an observable. Note that this is equally true for a regular system with closed orbits as for a chaotic or mixing system.
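The harmonic oscillator case can be made concrete with a short numerical experiment, sketched below under the assumed units m = ω = 1: the time average of x² over a long velocity Verlet trajectory should approach the microcanonical value E/(mω²) on the constant energy surface, in line with Eq. (3.9).

```python
import numpy as np

# Sketch (assumed units m = omega = 1) illustrating Eq. (3.9) for the
# one-dimensional harmonic oscillator: the time average of x**2 along a
# trajectory approaches the microcanonical average E / (m * omega**2).
m, omega, dt = 1.0, 1.0, 1.0e-3
x, v = 1.0, 0.0                                   # initial conditions
E = 0.5 * m * v**2 + 0.5 * m * omega**2 * x**2    # conserved energy, here 0.5

samples = []
n_steps = int(round(100 * 2 * np.pi / dt))        # roughly 100 periods
for _ in range(n_steps):
    a = -omega**2 * x                             # acceleration F/m
    x_new = x + v * dt + 0.5 * a * dt**2          # velocity Verlet position update
    a_new = -omega**2 * x_new
    v = v + 0.5 * (a + a_new) * dt                # velocity Verlet velocity update
    x = x_new
    samples.append(x * x)

print("time average of x^2    :", np.mean(samples))     # close to 0.5
print("ensemble average of x^2:", E / (m * omega**2))   # exactly 0.5
```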
Finally, it should be noted that dynamical properties are also defined through ensemble averages. Time correlation functions are important because of their relation to transport coefficients and spectra via linear response theory [22, 23]. Consider, for example, a time correlation function, $C_{AB}(t)$, between two observables A and B. In order to calculate $C_{AB}(t)$, one can use a set of trajectories generated by Hamilton's equations. Any trajectory is uniquely determined by its initial conditions. Suppose the initial conditions $\mathbf{x}_0$ for each trajectory in the set are sampled from an equilibrium phase space distribution function $f(\mathbf{x}_0)$. The time correlation function is then defined to be

$$ C_{AB}(t) = \langle A(0)\,B(t) \rangle = \int d\mathbf{x}_0\; f(\mathbf{x}_0)\, A(\mathbf{x}_0)\, B\bigl(\mathbf{x}_t(\mathbf{x}_0)\bigr), \qquad (3.10) $$

where $\mathbf{x}_t(\mathbf{x}_0)$ is the phase space point reached at time t by the trajectory initiated at $\mathbf{x}_0$.
Thus, it can be seen that a time correlation function can be calculated by evolving a trajectory in time starting from each set of initial conditions and then averaging the product $A(\mathbf{x}_0)\,B(\mathbf{x}_t(\mathbf{x}_0))$ over the set of trajectories at each instant in time. (In the microcanonical ensemble, $f(\mathbf{x}_0) \propto \delta(H(\mathbf{x}_0) - E)$.) In the thermodynamic limit, all equilibrium ensembles are equivalent, and thus, for very large systems, a single long trajectory can be used to generate a time correlation function, although the convergence of such an approach may be slow. For a detailed treatment of the properties of time correlation functions, the reader is referred to the review by Berne and Harp [23].
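The procedure just described can be sketched in a few lines of Python. The function names, the single-step propagator `propagate`, and the observables `A` and `B` are assumptions introduced only for illustration: each trajectory is evolved from its sampled initial condition, and the product A(x₀)B(x_t) is accumulated at each time step.

```python
import numpy as np

def time_correlation(initial_conditions, propagate, A, B, n_steps):
    """Estimate C_AB(t) of Eq. (3.10) on a grid of n_steps time points.
    initial_conditions: iterable of phase space points x_0 sampled from an
    equilibrium distribution f(x_0); propagate(x) advances a phase space
    point by one time step; A and B map a phase space point to a number."""
    C = np.zeros(n_steps)
    n_traj = 0
    for x0 in initial_conditions:
        x, a0 = x0, A(x0)
        for k in range(n_steps):
            C[k] += a0 * B(x)      # accumulate A(x_0) * B(x_t) at t = k*dt
            x = propagate(x)
        n_traj += 1
    return C / n_traj
```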
Despite the utility of Hamiltonian molecular dynamics, its principal restriction is clear: although, given correct forces, the dynamics is exact in the classical limit, it can only generate equilibrium properties of the NVE ensemble. However, microcanonical (NVE) conditions do not correspond to the conditions under which many experiments are performed, namely constant temperature and pressure or constant temperature and volume. In order to describe the thermodynamic properties of a system under these conditions, it is necessary to generate the corresponding ensemble. One of the more fruitful and interesting approaches to generating alternative ensemble averages is based on the properties of non-Hamiltonian dynamical systems.
The body of photometric and astrometric data on stars in the Milky Way Galaxy has been growing very fast in recent years (Hipparcos/Tycho, OGLE-III, 2MASS, DENIS, UCAC2, SDSS, RAVE, Pan-STARRS, HERMES, …), and at the time of the General Assembly the launch of ESA's Gaia mission, which will measure astrometric data of unprecedented precision for ~ 10^9 stars, will be less than three years away. On account of our position within the Galaxy and the complex observational biases that are built into most catalogues, dynamical models of the Galaxy are a prerequisite for full exploitation of these catalogues. On account of the enormous detail in which we can observe the Galaxy, models of great sophistication are required. Moreover, in addition to models we require algorithms for observing them with the same errors and biases as occur in real observational programs, and statistical algorithms for determining the extent to which a model is compatible with a given body of data.
Progress in our understanding of the Milky Way will be unnecessarily delayed if techniques for constructing dynamical models are not developed to the level required for modeling Gaia data before a preliminary version of the Gaia catalogue is released. Producing models and fitting them to the available data will involve complex software. It is clearly in the interests of the astronomical community that these costly tools be developed collaboratively in modular form. In particular, it is desirable that most tasks can be accomplished by independently developed modules whose performance can be critically compared at different institutions. For this to be possible, standard interfaces between modules will have to be defined.
JD5 will review the status of the different modeling techniques that might be used, and what will be required to extract our science goals from the data that will be on hand when the Gaia Catalogue becomes available. It will also consider the steps that should be taken to identify tasks and define interfaces.
Producing models of the Milky Way that combine dynamics and stellar evolution theory with the immense catalogues that will be available by 2015 is an enormous undertaking that will require the participation of astronomers from all over the world with expertise in almost every area of astronomy. The General Assembly of the IAU provides an opportunity for enthusiasts to get together and think about what can be done, and how to do it. We already have a formidable body of observational data to work with, and just six years remain before a preliminary version of the Gaia Catalogue should appear. So at JD5 we should consider what needs to be done, and set up international collaborations that have the expertise and technical resources to get the job done.
Works Cited
- http://homepages.nyu.edu/~mt33/jpc_feat/node3.html
- http://en.wikipedia.org/wiki/Classical_mechanics
- http://www-thphys.physics.ox.ac.uk/people/JamesBinney/jd5_home.html
- Binney, James. Classical Mechanics (Lagrangian and Hamiltonian formalisms)