Quantum Optics


CHAPTER ONE: PREAMBLE

The Senior Engineer has been working on the conceptual framework for a realistic engineering view of Optical Coherence and Quantum Optics. The Senior Engineer is a graduate in Electrical and Computer Engineering with a master's degree in Plasma Physics. It is therefore natural that the Senior Engineer would concentrate his efforts on the semi-classical version of Optical Coherence and its use in the theories of lasers.

Part One:   Models and 'Reality'

The Academic idea that modelling and simulations can bring us to an understanding of reality is completely false. Certainly, modelling has become an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling. The following was said by John von Neumann:

... the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work—that is, correctly to describe phenomena from a reasonably wide area.

A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful. Building and disputing models is fundamental to the Academic enterprise. A complete and true representation is impossible, but academic debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting.

Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true.

For the scientist, a model is also a way in which the human thought processes can be amplified. For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented.

Part Two:   Prototypes

A prototype is an early physical model of a product built to test a concept or process. It is a term used in a variety of contexts, including semantics, design, electronics, and software programming. A prototype is generally used by system analysts and users to evaluate a new design and enhance precision, and in our case to perform value engineering on the product. Prototyping serves to provide specifications for a real, working system rather than a theoretical one. In some design workflow models, creating a prototype (a process sometimes called materialization) is the step between the formalization and the evaluation of an idea.

Prototypes explore different aspects of an intended design.

Part Three:   Value Engineering

Value Engineering is a systematic analysis of the various components and materials of a system (the system under discussion is a Quantum Computer) in order to improve its performance or functionality. In the case of the Quantum Computer Project, the first round of Value Engineering will consist of an analysis of the optical portions of the system; there are two parts to the optics of the Paul Trap Quantum Computer. Value can be manipulated by either improving the function or reducing the cost. It is a primary tenet of value engineering that basic functions be preserved and not be reduced as a consequence of pursuing value improvements. The term "value management" is sometimes used as a synonym of "value engineering", and both promote the planning and delivery of projects with improved performance.

Value engineering is a key part of all Research and Development Projects within the project management, industrial engineering or architecture body of knowledge as a technique in which the value of a system's outputs is superficially optimized by distorting a mix of performance (function) and costs. It is based on an analysis investigating systems, equipment, facilities, services, and supplies for providing necessary functions at superficially low life cycle cost while meeting the misunderstood requirement targets in performance, reliability, quality, and safety. In most cases this practice identifies and removes necessary functions of value expenditures, thereby decreasing the capabilities of the manufacturer and/or their customers. What this practice disregards in providing necessary functions of value are expenditures such as equipment maintenance and relationships between employee, equipment, and materials. For example, a machinist is unable to complete their quota because the drill press is temporarily inoperable due to lack of maintenance, and the material handler is not completing the daily checklist, tally, log, invoice, and accounting of maintenance and materials that each machinist needs in order to maintain the required productivity and adherence to section 4306.

VE follows a structured thought process that is based exclusively on "function", i.e. what something "does", not what it "is". For example, a screwdriver that is being used to stir a can of paint has a "function" of mixing the contents of a paint can and not the original connotation of securing a screw into a screw-hole. In value engineering, "functions" are always described in a two-word abridgment consisting of an active verb and a measurable noun (what is being done – the verb – and what it is being done to – the noun), and to do so in the most non-descriptive way possible. In the screwdriver and can of paint example, the most basic function would be "blend liquid", which is less descriptive than "stir paint", which can be seen to limit the action (by stirring) and to limit the application (only considering paint).

Value engineering uses rational logic (a unique "how" - "why" questioning technique) and an irrational analysis of function to identify relationships that increase value. It is considered a quantitative method similar to the scientific method, which focuses on hypothesis-conclusion approaches to test relationships, and operations research, which uses model building to identify predictive relationships.

CHAPTER TWO: CLASSICAL MECHANICS

Classical mechanics is a mathematical formulation describing the motion of a point to which the mass of the particle is attached. It was originally formulated by Newton to provide a mathematical model for the dynamics of planetary motion, and the objects of his mechanical model could be expressed as differential equations. It describes the motion of macroscopic objects, from projectiles to parts of machinery, and astronomical objects, such as spacecraft, planets, stars, and galaxies. For objects governed by classical mechanics (there are no objects in the ontological Universe which are governed by the laws of classical mechanics), if the present state is known, it is possible to predict how it will move in the future (determinism) and how it has moved in the past (reversibility).

Section One: Newtonian Mechanics

The earliest development of classical mechanics is often referred to as Newtonian mechanics. It consists of the physical concepts based on the foundational works of Sir Isaac Newton, and the mathematical methods invented by Gottfried Wilhelm Leibniz, Joseph-Louis Lagrange, Leonhard Euler, and others in the 17th and 18th centuries to describe the motion of bodies under the influence of a system of forces. Later, more abstract methods were developed, leading to the reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances, made predominantly in the 18th and 19th centuries, extend substantially beyond earlier works, particularly through their use of analytical mechanics. They are, with some modification, also used in all areas of modern physics.

Classical mechanics provides a mathematical framework for the study of the dynamics of model objects that do not have an extensive number of internal degrees of freedom. When the objects being examined approach the size of an atom, it becomes necessary to introduce the other major sub-field of mechanics: quantum mechanics. The following introduces the basic concepts of classical mechanics. Classical mechanics always models objects as point particles (the particle is treated from its center of mass). The motion of a point particle is characterized by a small number of parameters: its position, mass, energy, momentum, angular momentum, and the forces applied to it.

In reality, the kind of objects that classical mechanics can describe always have a non-zero size. (The physics of very small particles, such as the electron, is more accurately described by quantum mechanics.) Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom, e.g., a baseball can spin while it is moving. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle.

Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that the matter in question has definite, knowable attributes such as position in space and speed, and non-relativistic mechanics further assumes that forces act instantaneously (see also Action at a distance). Classical mechanics models real-world objects as point particles (objects with negligible size); the motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. In reality (ontologically), the kind of objects that classical mechanics can describe always have a non-zero size, and the number of complete solutions is very small.

Newton was the first to mathematically express the relationship between force and momentum. Some physicists interpret Newton's second law of motion as a definition of force and mass, while others consider it a fundamental postulate, a law of nature. Either interpretation has the same mathematical consequences, historically known as "Newton's Second Law":

\begin{equation} \vec{F} = m \vec{a}, \tag{1.1.1} \end{equation}

\begin{equation} \vec{F} = \frac{d\vec{p}}{dt}=\frac{d(m\vec{v})}{dt} \tag{1.1.2} \end{equation}

The quantity $m\vec{v}$ is called the (canonical) momentum. The net force on a particle is thus equal to the rate of change of the momentum of the particle with time. Since the definition of acceleration is   $\vec{a} =\frac{d\vec{v}}{dt}$, the second law can be written in the simplified and more familiar form:   $$\vec{F} =m\frac{d^2\vec{x}}{dt^2}.$$

If the forces acting on our hypothetical particle are known, Newton's second law can be used to describe the motion of the particle. If independent relations for each force acting on the particle are available, they can be substituted into Newton's second law to obtain a set of ordinary differential equations, which are called the equations of motion.
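To make the preceding paragraph concrete, the following is a minimal numerical sketch (in Python, not taken from the text) of solving an equation of motion once the force law is known. The force law, mass, and initial conditions are illustrative assumptions, and the velocity-Verlet stepping scheme is one common choice among many.

import numpy as np

def integrate_motion(force, m, x0, v0, dt, n_steps):
    """Advance a point particle obeying m d2x/dt2 = force(x) with velocity Verlet."""
    x, v = np.asarray(x0, dtype=float), np.asarray(v0, dtype=float)
    positions = [x.copy()]
    a = force(x) / m
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt**2      # update position
        a_new = force(x) / m                  # force evaluated at the new position
        v = v + 0.5 * (a + a_new) * dt        # update velocity with the averaged acceleration
        a = a_new
        positions.append(x.copy())
    return np.array(positions)

# Illustrative use: a projectile of unit mass under uniform gravity (placeholder values).
g = 9.81
path = integrate_motion(lambda x: np.array([0.0, -g]), m=1.0,
                        x0=[0.0, 0.0], v0=[10.0, 10.0], dt=0.01, n_steps=200)
print(path[-1])   # position after 2 s of flight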

Classical Harmonic Oscillator

Within the classical mechanics model a simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force F, which pulls the mass in the direction of the point x = 0 and depends only on the position x of the mass and a constant k. Balance of forces (Newton's second law) for the system is

$$ {\displaystyle F=ma=m{\frac {\mathrm {d} ^{2}x}{\mathrm {d} t^{2}}}=m{\ddot {x}}=-kx.}$$

Solving this differential equation, we find that the motion is described by the function: $$ {\displaystyle x(t)=A\cos(\omega t+\varphi ),}$$ where $ {\displaystyle \omega ={\sqrt {\frac {k}{m}}}.}$ The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude A. In addition to its amplitude, the motion of a simple harmonic oscillator is characterized by its period $ {\displaystyle T=2\pi /\omega }$ , the time for a single oscillation or its frequency $ {\displaystyle f=1/T}$ , the number of cycles per unit time. The position at a given time t also depends on the phase $ {\displaystyle \varphi}$ , which determines the starting point on the sine wave. The period and frequency are determined by the size of the mass m and the force constant k, while the amplitude and phase are determined by the starting position and velocity.

The velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position, but with shifted phases. The velocity is maximal for zero displacement, while the acceleration is in the direction opposite to the displacement.
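As a small worked illustration of the relations just quoted, the short Python sketch below evaluates the analytic solution $x(t)=A\cos(\omega t+\varphi)$ for assumed values of $m$, $k$, $A$, and $\varphi$ (all placeholders), and checks that the speed is largest where the displacement crosses zero.

import numpy as np

m, k = 0.5, 2.0            # mass [kg] and force constant [N/m] (assumed values)
A, phi = 0.1, 0.0          # amplitude [m] and phase [rad] (assumed values)

omega = np.sqrt(k / m)     # angular frequency
T = 2 * np.pi / omega      # period
f = 1.0 / T                # frequency

t = np.linspace(0.0, T, 1001)
x = A * np.cos(omega * t + phi)                 # position
v = -A * omega * np.sin(omega * t + phi)        # velocity, a quarter cycle ahead
a = -A * omega**2 * np.cos(omega * t + phi)     # acceleration, opposite to displacement

print(f"T = {T:.3f} s, f = {f:.3f} Hz")
print("speed near x = 0:", abs(v[np.argmin(abs(x))]), "  maximum speed:", A * omega)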

Section Two: Hamilton's Principle

Hamilton's principle states that the evolution $ {\displaystyle q(t) } $ of a system described by $ {\displaystyle N } $ generalized coordinates $ {\displaystyle q = (q_1, q_2, ..., q_N)} $ between two specified states $ {\displaystyle q_1 = q(t_1)} $ and $ {\displaystyle q_2 = q(t_2)} $ at two specified times $ {\displaystyle t_1} $ and $ {\displaystyle t_2} $ is a stationary point (a point where the variation is zero) of the action functional:

$$ {\displaystyle {\mathcal {S}}[\mathbf {q} ]\ {\stackrel {\mathrm {def} }{=}}\ \int _{t_{1}}^{t_{2}}L(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)\,dt} $$

Where $ {\displaystyle L(\mathbf {q} ,{\dot {\mathbf {q} }},t)} $ is the Lagrangian function for the system. In other words, any first-order perturbation of the true evolution results in (at most) second-order changes in $ {\displaystyle {\mathcal {S}}} $ . The action $ {\displaystyle {\mathcal {S}}} $ is a functional, i.e., something that takes as its input a function and returns a single number, a scalar. In terms of functional analysis, Hamilton's principle states that the true evolution of a physical system is a solution of the functional equation.

$$ {\displaystyle {\frac {\delta {\mathcal {S}}}{\delta \mathbf {q} (t)}}=0.} $$

That is, the system takes a path in configuration space for which the action is stationary, with fixed boundary conditions at the beginning and the end of the path.
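The stationarity of the action can be checked numerically. The Python sketch below (an illustration, not part of the source text) evaluates $ {\mathcal {S}}[q] $ for the harmonic-oscillator Lagrangian on the true path and on perturbed paths that keep the endpoints fixed; the action is stationary (here, a minimum, since the interval is shorter than half a period) at zero perturbation.

import numpy as np

m, k = 1.0, 1.0
omega = np.sqrt(k / m)
t = np.linspace(0.0, 1.0, 2001)          # t1 = 0, t2 = 1 (less than half a period)
q_true = np.cos(omega * t)               # a true solution of m q'' = -k q

def action(q, t):
    qdot = np.gradient(q, t)
    L = 0.5 * m * qdot**2 - 0.5 * k * q**2               # Lagrangian L = T - V
    return np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(t))   # trapezoidal time integral

# Perturbations vanish at both endpoints, so the boundary conditions stay fixed.
for eps in (-0.10, -0.05, 0.0, 0.05, 0.10):
    q = q_true + eps * np.sin(np.pi * t / t[-1])
    print(f"eps = {eps:+.2f}   S = {action(q, t):.6f}")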

Lagrangian Mechanics

In physics, Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle (also known as the principle of least action). It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his 1788 work, Mécanique analytique.

Lagrangian mechanics describes a mechanical system with a pair $ ( M , {\mathcal {L}})$, consisting of a configuration space $ M $, and a smooth function $ {\mathcal {L}} $, called a Lagrangian. By convention, $ {\mathcal {L}} = T − V $, where $ T $ and $ V $ are the kinetic and potential energy of the system, respectively.

The stationary action principle requires that the action functional of the system derived from $ {\mathcal {L}} $, must remain at a stationary point (a maximum, minimum, or saddle) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations.

$$ {\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.}$$
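As an illustration of these equations in use, the following sympy sketch (an assumption of this edit, not the author's code) applies Lagrange's equation to a one-dimensional harmonic oscillator with $T = \tfrac{1}{2} m \dot{q}^2$ and $V = \tfrac{1}{2} k q^2$, recovering the equation of motion m q'' = -k q.

import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)
qdot = sp.diff(q, t)

T_kin = sp.Rational(1, 2) * m * qdot**2     # kinetic energy
V_pot = sp.Rational(1, 2) * k * q**2        # potential energy

lhs = sp.diff(sp.diff(T_kin, qdot), t) - sp.diff(T_kin, q)   # d/dt(dT/dq') - dT/dq
rhs = -sp.diff(V_pot, q)                                     # -dV/dq
print(sp.Eq(lhs, rhs))    # Eq(m*Derivative(q(t), (t, 2)), -k*q(t))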

Hamiltonian Mechanics

Let $( M , {\mathcal {L}})$ be a mechanical system with the configuration space $ M $ and the smooth Lagrangian $ {\mathcal {L}}$ . Select a standard coordinate system $ ({\boldsymbol {q}},{\boldsymbol {\dot {q}}}) $ on $ M $ . The quantities $ {\displaystyle \textstyle p_{i}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)~{\stackrel {\text{def}}{=}}~{\partial {\mathcal {L}}}/{\partial {\dot {q}}^{i}}}$ are called momenta. (Also generalized momenta, conjugate momenta, and canonical momenta). For a time instant $ {\displaystyle t}$, the Legendre transformation of $ {\mathcal {L}} $ is defined as the map $ {\displaystyle ({\boldsymbol {q}},{\boldsymbol {\dot {q}}})\to \left({\boldsymbol {p}},{\boldsymbol {q}}\right)} $ which is assumed to have a smooth inverse $ {\displaystyle ({\boldsymbol {p}},{\boldsymbol {q}})\to ({\boldsymbol {q}},{\boldsymbol {\dot {q}}}).} $ For a system with $ {\displaystyle n} $ degrees of freedom, the Lagrangian mechanics defines the energy function:$${\displaystyle E_{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)\,{\stackrel {\text{def}}{=}}\,\sum _{i=1}^{n}{\dot {q}}^{i}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}-{\mathcal {L}}.} $$

The inverse of the Legendre transform of$ {\mathcal {L}}$ turns $ {\displaystyle E_{\mathcal {L}}}$ into a function $ {\displaystyle {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)}$ known as the Hamiltonian. The Hamiltonian satisfies:

$${\displaystyle {\mathcal {H}}\left({\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {\dot {q}}}}},{\boldsymbol {q}},t\right)=E_{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}$$

which implies that:

$$ {\displaystyle {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)=\sum _{i=1}^{n}p_{i}{\dot {q}}^{i}-{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t),} $$
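A short sympy sketch of this Legendre transformation for an assumed one-dimensional oscillator (placeholder Lagrangian, not from the text) makes the construction explicit: the momentum $p = \partial {\mathcal {L}} / \partial \dot{q}$ is computed, the map is inverted for $\dot{q}$, and $H = p\dot{q} - {\mathcal {L}}$ is formed.

import sympy as sp

q, qdot, p = sp.symbols('q qdot p')
m, k = sp.symbols('m k', positive=True)

L = sp.Rational(1, 2) * m * qdot**2 - sp.Rational(1, 2) * k * q**2   # assumed Lagrangian

p_def = sp.diff(L, qdot)                        # canonical momentum p = dL/dqdot = m*qdot
qdot_of_p = sp.solve(sp.Eq(p, p_def), qdot)[0]  # smooth inverse of the Legendre map: qdot = p/m

H = sp.simplify(p * qdot_of_p - L.subs(qdot, qdot_of_p))   # H(p, q) = p*qdot - L
print(H)    # p**2/(2*m) + k*q**2/2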

Rotational Dynamics of the Earth

Section One: Rotations in three dimensions

A rotating object, whether under the influence of torques or not, may exhibit the behaviours of precession and nutation. The fundamental equation describing the behavior of a rotating solid body is Euler's equation of motion: $$ {\displaystyle {\boldsymbol {\tau }}={\frac {D\mathbf {L} }{Dt}}={\frac {d\mathbf {L} }{dt}}+{\boldsymbol {\omega }}\times \mathbf {L} ={\frac {d(I{\boldsymbol {\omega }})}{dt}}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}=I{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}}$$
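Euler's equation is straightforward to integrate numerically. The Python sketch below (illustrative only; the principal moments of inertia and the initial spin are assumed values) integrates the torque-free case, with the torque set to zero, and checks that the angular-momentum magnitude and the rotational kinetic energy are conserved, which is the signature of free precession.

import numpy as np

I = np.diag([1.0, 2.0, 3.0])        # principal moments of inertia (placeholders)
I_inv = np.linalg.inv(I)

def omega_dot(w):
    # torque-free Euler equation: I dw/dt = -w x (I w)
    return I_inv @ (-np.cross(w, I @ w))

def rk4_step(w, dt):
    k1 = omega_dot(w)
    k2 = omega_dot(w + 0.5 * dt * k1)
    k3 = omega_dot(w + 0.5 * dt * k2)
    k4 = omega_dot(w + dt * k3)
    return w + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

w = np.array([0.1, 2.0, 0.1])       # initial angular velocity (placeholder)
L0, E0 = np.linalg.norm(I @ w), 0.5 * w @ I @ w
for _ in range(5000):
    w = rk4_step(w, 1e-3)

print(np.linalg.norm(I @ w), L0)    # |L| is conserved
print(0.5 * w @ I @ w, E0)          # kinetic energy is conserved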

Coriolis Force

The Coriolis force is an inertial force that acts on objects in motion within a frame of reference that rotates with respect to an inertial frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object. In one with counterclockwise rotation, the force acts to the right. Deflection of an object due to the Coriolis force is called the Coriolis effect. Though recognized previously by others, the mathematical expression for the Coriolis force appeared in an 1835 paper by French scientist Gaspard-Gustave de Coriolis, in connection with the theory of water wheels. Early in the 20th century, the term Coriolis force began to be used in connection with atmospheric physics.

Newton's laws of motion describe the motion of an object in an inertial (non-accelerating) frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis and centrifugal accelerations appear. When applied to objects with masses, the respective forces are proportional to their masses. The magnitude of the Coriolis force is proportional to the rotation rate, and the magnitude of the centrifugal force is proportional to the square of the rotation rate.

The Coriolis force acts in a direction perpendicular to two quantities: the angular velocity of the rotating frame relative to the inertial frame and the velocity of the body relative to the rotating frame, and its magnitude is proportional to the object's speed in the rotating frame (more precisely, to the component of its velocity that is perpendicular to the axis of rotation).

The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces. By introducing these forces to the rotating frame of reference, Newton's laws of motion can be applied to the rotating system as though it were an inertial system; these forces are correction factors that are not required in a non-rotating system.

For our study of Atmospheric Physics, the term "Coriolis effect" will apply to the rotating reference frame attached to the Earth. Because the Earth spins, Earth-bound observers need to account for the Coriolis force to correctly analyze the motion of objects. The Earth completes one rotation for each day/night cycle, so for motions of everyday objects the Coriolis force is usually quite small compared with other forces; its effects generally become noticeable only for motions occurring over large distances and long periods of time, such as large-scale movement of air in the atmosphere or water in the oceans. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right (with respect to the direction of travel) in the Northern Hemisphere and to the left in the Southern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and decreases to zero at the equator. Rather than flowing directly from areas of high pressure to low pressure, as they would in a non-rotating system, winds and currents tend to flow to the right of this direction north of the equator (anticlockwise) and to the left of this direction south of it (clockwise). This effect is responsible for the rotation, and thus the formation, of large-scale cyclones.
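The horizontal deflection described above can be illustrated with a minimal f-plane calculation in Python (assumed latitude, speed, and integration time; not a result quoted in the text): a parcel launched due north in the Northern Hemisphere drifts steadily to the right, i.e. eastward.

import numpy as np

Omega = 7.2921e-5                 # Earth's rotation rate [rad/s]
lat = np.deg2rad(45.0)            # assumed latitude
f = 2.0 * Omega * np.sin(lat)     # Coriolis parameter (local vertical component)

dt, n_steps = 1.0, 6 * 3600       # 1 s steps over six hours
r = np.zeros(2)                   # position: (x east, y north) [m]
v = np.array([0.0, 10.0])         # initial velocity: 10 m/s due north (placeholder)

for _ in range(n_steps):
    a = np.array([f * v[1], -f * v[0]])   # horizontal Coriolis acceleration
    v = v + a * dt
    r = r + v * dt

print(r)    # the eastward component is positive: deflection to the right of the motion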

Continuity Equation

A continuity equation or transport equation is an equation that describes the transport of some quantity. It is particularly simple and powerful when applied to a conserved quantity. Since mass, energy, momentum, electric charge and other natural quantities are conserved under their respective appropriate conditions, a variety of physical phenomena may be described using continuity equations.

Continuity equations can include "source" and "sink" terms, which allow them to describe quantities that are often but not always conserved, such as the density of a molecular species which can be created or destroyed by chemical reactions. Any continuity equation can be expressed in an "integral form" (in terms of a flux integral), which applies to any finite region, or in a "differential form" (in terms of the divergence operator) which applies at a point. Continuity equations underlie more specific transport equations such as the convection–diffusion equation, Boltzmann transport equation, and Navier–Stokes equations.

Definition of flux

The continuity equation is useful when a flux can be defined. To define flux, first there must be a quantity q which can flow or move, such as mass, energy, electric charge, momentum, number of molecules, etc. Let ρ be the volume density of this quantity, that is, the amount of q per unit volume. The way that this quantity q is flowing is described by its flux. The flux of q is a vector field, which we denote as j. The most important properties of the flux follow directly from the continuity equation, written explicitly below.
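Written out explicitly (the standard forms, reproduced here for reference), the differential and integral statements referred to above read:

$$ {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {j} =\sigma ,\qquad \qquad {\frac {d}{dt}}\iiint _{V}\rho \,dV=-\iint _{S}\mathbf {j} \cdot d\mathbf {S} +\iiint _{V}\sigma \,dV,} $$

Where $ {\displaystyle \sigma } $ is the volumetric rate of generation of q (a "source" term, negative for a "sink", and zero for a strictly conserved quantity) and $ {\displaystyle S} $ is the closed surface bounding the fixed region $ {\displaystyle V} $.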

CHAPTER THREE: THERMODYNAMICS, STATISTICAL PHYSICS AND COHERENCE

Section Five:   Thermodynamics

Our interest in cloud thermodynamics has to do with our approach to the physics of water droplet formation via negative ion enhancement in the planetary boundary layer and the transport of these ions to the upper part of the troposphere. The thermodynamic function which we will be mainly interested in is the Gibbs Free Energy.

When a system is at equilibrium under a given set of conditions, it is said to be in a definite thermodynamic state. The state of the system can be described by a number of state quantities that do not depend on the process by which the system arrived at its state. They are called intensive variables or extensive variables according to how they change when the size of the system changes. The properties of the system can be described by an equation of state which specifies the relationship between these variables. State may be thought of as the instantaneous quantitative description of a system with a set number of variables held constant.

A thermodynamic process may be defined as the energetic evolution of a thermodynamic system proceeding from an initial state to a final state. It can be described by process quantities. Typically, each thermodynamic process is distinguished from other processes in energetic character according to which parameters, such as temperature, pressure, or volume, are held fixed; furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair.

Thermodynamic Processes Important in Atmospheric Physics are:

1.    Adiabatic process:    occurs without loss or gain of energy by heat

2.    Isenthalpic process:    occurs at a constant enthalpy

3.    Isentropic process:    occurs at constant entropy

4.    Isobaric process:    occurs at constant pressure

5.    Isothermal process:    occurs at a constant temperature

6.    Steady state process:    occurs without a change in the internal energy

Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure the energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure.

For example, the Gibbs Free energy will be used almost exclusively in our analysis of the thermodynamics of droplet creation and growth in the clouds.

Part One:   Background

To provide a review of the basic laws of thermodynamics which we will be using in this document, we will simply state the two laws of thermodynamics which are pertinent to our discussion of the creation of precipitation with the use of negative ions created within the planetary Boundary Layer.

The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic processes. In general, the conservation law states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but can be neither created nor destroyed.

In a closed system (i.e. there is no transfer of matter into or out of the system), the first law states that the change in internal energy of the system (ΔU system) is equal to the difference between the heat supplied to the system (Q) and the work (W) done by the system on its surroundings.

$${\displaystyle \Delta U_{\rm {system}}=Q-W}$$

The First Law asserts the existence of a state variable usually called the internal energy. This internal energy, $ {\displaystyle U} $, along with the volume, $ {\displaystyle V} $, of the system and the mole numbers, $ {\displaystyle N_i} $, of its chemical constituents, characterizes the macroscopic properties of the system's equilibrium states.

The Second law of thermodynamics

In a reversible or quasi-static, idealized process of transfer of energy as heat to a closed thermodynamic system of interest (which allows the entry or exit of energy, but not transfer of matter) from an auxiliary thermodynamic system, an infinitesimal increment ($ {\displaystyle \mathrm {d} S}$ ) in the entropy of the system of interest is defined to result from an infinitesimal transfer of heat ($ {\displaystyle \delta Q}$ ) to the system of interest, divided by the common thermodynamic temperature ($ {\displaystyle T} $ ) of the system of interest and the auxiliary thermodynamic system:

$${\displaystyle \mathrm {d} S={\frac {\delta Q}{T}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\text{(closed system; idealized, reversible process)}}.}$$

A convenient form of the second law, useful in atmospheric physics, is to state that for any equilibrium system there exists a state function called entropy, $ {\displaystyle S} $ and that this entropy has the property that it is a function of the extensive parameters of a composite system, such that:

$$ {\displaystyle S = S(U, V, N_1 , . . . , N_ r )} $$

Where $ {\displaystyle N_i} $ denotes the mole number of the ith constituent. We further assume that the entropy is additive over the constituent subsystems, that it is continuous and differentiable and a monotonically increasing function of the internal energy, $ {\displaystyle U} $ .

We further assume that in the absence of internal constraints the value of the extensive parameters in equilibrium are those that maximize the entropy.

Part Two:   Fundamental Relations

In thermodynamics, the fundamental thermodynamic relation comprises four fundamental equations which demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities. The relation is generally expressed as an infinitesimal change in internal energy in terms of infinitesimal changes in entropy and volume for a closed system in thermal equilibrium, in the following way.

$$ {\displaystyle \mathrm {d} U=T\,\mathrm {d} S-P\,\mathrm {d} V\,} $$ ,

Where, $ {\displaystyle U} $ is internal energy, $ {\displaystyle T} $ is absolute temperature, $ {\displaystyle S} $ is entropy, $ {\displaystyle P} $ is pressure, and $ {\displaystyle V} $ is volume.

This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the enthalpy as:

$$ {\displaystyle \mathrm {d} H=T\,\mathrm {d} S+V\,\mathrm {d} P\,} $$

In terms of the Helmholtz free energy ($ {\displaystyle F} $ ) as:

$$ {\displaystyle \mathrm {d} F=-S\,\mathrm {d} T-P\,\mathrm {d} V\,} $$ ,

and in terms of the Gibbs free energy ($ {\displaystyle G} $ ) as: $$ {\displaystyle \mathrm {d} G=-S\,\mathrm {d} T+V\,\mathrm {d} P\,} $$ .

Sub-Section One:   Internal Energy

The properties of entropy outlined above, ensure that the entropy function can be used to define the Internal Energy, such that:

$$ {\displaystyle U = U(S, V, N_1 , . . . , N_ r )} $$

This expression is sometimes called the Fundamental Relation in energy form. In specific (per unit mole, or unit mass) form, it can be written as:

$$ {\displaystyle u = u(s, v, n_1 , . . . , n_r ) } $$

Where $ {\displaystyle n_i = {\frac {N_i}{\sum_j N_j}}} $ is the mole fraction.


It then follows that:

$$ {\displaystyle \mathrm{d}U = ( \partial U / \partial S)_{V,N_i} dS + (\partial U / \partial V)_{S,N_i} dV + \sum_i (\partial U / \partial N_i)_{S,V} dN_i } $$
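The partial derivatives appearing in this expansion are the standard intensive parameters of thermodynamics (this identification is the usual one, stated here for reference):

$$ {\displaystyle T=\left({\frac {\partial U}{\partial S}}\right)_{V,N_{i}},\qquad -P=\left({\frac {\partial U}{\partial V}}\right)_{S,N_{i}},\qquad \mu _{j}=\left({\frac {\partial U}{\partial N_{j}}}\right)_{S,V,N_{i\neq j}},} $$

so that the expansion reduces to the fundamental relation quoted earlier, extended to systems with variable composition:

$$ {\displaystyle \mathrm {d} U=T\,\mathrm {d} S-P\,\mathrm {d} V+\sum _{i}\mu _{i}\,\mathrm {d} N_{i}.} $$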

Sub-Section Two:   Entropy

According to the Clausius equality, for a closed homogeneous system, in which only reversible processes take place, $$ {\displaystyle \oint {\frac {\delta Q}{T}}=0.} $$

With $ {\displaystyle T } $ being the uniform temperature of the closed system and $ {\displaystyle \delta Q } $ the incremental reversible transfer of heat energy into that system.

That means the line integral $ {\textstyle \int _{L}{\frac {\delta Q}{T}}} $ is path-independent.

A state function $ {\displaystyle S } $ , called entropy, may be defined which satisfies $$ {\displaystyle \mathrm {d} S={\frac {\delta Q}{T}}.} $$ .

Entropy Measurement:    The thermodynamic state of a uniform closed system is determined by its temperature $ {\displaystyle T } $ and pressure $ {\displaystyle P } $ . A change in entropy can be written as $$ {\displaystyle \mathrm {d} S=\left({\frac {\partial S}{\partial T}}\right)_{P}\mathrm {d} T+\left({\frac {\partial S}{\partial P}}\right)_{T}\mathrm {d} P.} $$

The first contribution depends on the heat capacity at constant pressure $ {\displaystyle C_P } $ through:

$$ {\displaystyle \left({\frac {\partial S}{\partial T}}\right)_{P}={\frac {C_{P}}{T}}.} $$

This is the result of the definition of the heat capacity by $ {\displaystyle \delta Q = C_P\,dT } $ and $ {\displaystyle T\,dS = \delta Q } $ . The second term may be rewritten with one of the Maxwell relations:

$$ {\displaystyle \left({\frac {\partial S}{\partial P}}\right)_{T}=-\left({\frac {\partial V}{\partial T}}\right)_{P}} $$

And the definition of the volumetric thermal-expansion coefficient: $$ {\displaystyle \alpha _{V}={\frac {1}{V}}\left({\frac {\partial V}{\partial T}}\right)_{P}} $$

So that:

$$ {\displaystyle \mathrm {d} S={\frac {C_{P}}{T}}\mathrm {d} T-\alpha _{V}V\mathrm {d} P.} $$

With this expression the entropy $ {\displaystyle S } $ at arbitrary $ {\displaystyle P } $ and $ {\displaystyle T } $ can be related to the entropy $ {\displaystyle S_0 } $ at some reference state at $ {\displaystyle P_0 } $ and $ {\displaystyle T_0 } $ according to:

$$ {\displaystyle S(P,T)=S(P_{0},T_{0})+\int _{T_{0}}^{T}{\frac {C_{P}(P_{0},T^{\prime })}{T^{\prime }}}\mathrm {d} T^{\prime }-\int _{P_{0}}^{P}\alpha _{V}(P^{\prime },T)V(P^{\prime },T)\mathrm {d} P^{\prime }.} $$

In classical thermodynamics, the entropy of the reference state can be put equal to zero at any convenient temperature and pressure.

$ {\displaystyle S(P, T) } $ is determined by following a specific path in the $ {\displaystyle P-T } $ diagram: in the first integral one integrates over $ {\displaystyle T } $ at constant pressure $ {\displaystyle P_0 } $ , so that $ {\displaystyle dP = 0 } $ , and in the second integral one integrates over $ {\displaystyle P } $ at constant temperature $ {\displaystyle T } $ , so that $ {\displaystyle dT = 0 } $ . As the entropy is a function of state, the result is independent of the path.

The above relation shows that the determination of the entropy requires knowledge of the heat capacity and the equation of state. Normally these are complicated functions and numerical integration is needed.
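For a simple case the integration can be carried out directly. The Python sketch below (an illustration with assumed parameters, not a calculation from the text) evaluates both integrals for an ideal gas, for which $ {\displaystyle C_P } $ is constant and $ {\displaystyle \alpha_V V = nR/P } $, and compares the result with the closed-form expression $ {\displaystyle S - S_0 = C_P \ln(T/T_0) - nR\ln(P/P_0) } $.

import numpy as np

R = 8.314                  # gas constant [J/(mol K)]
n = 1.0                    # amount of substance [mol] (placeholder)
C_P = 2.5 * n * R          # constant-pressure heat capacity of a monatomic ideal gas

T0, P0 = 273.15, 1.0e5     # reference state (placeholder values)
T1, P1 = 300.0, 2.0e5      # target state (placeholder values)

# First integral: C_P / T over temperature at constant pressure P0.
T = np.linspace(T0, T1, 10001)
dS_T = np.sum(0.5 * (C_P / T[1:] + C_P / T[:-1]) * np.diff(T))

# Second integral: alpha_V * V = n R / P for an ideal gas, over pressure at constant T1.
P = np.linspace(P0, P1, 10001)
dS_P = -np.sum(0.5 * (n * R / P[1:] + n * R / P[:-1]) * np.diff(P))

analytic = C_P * np.log(T1 / T0) - n * R * np.log(P1 / P0)
print(dS_T + dS_P, analytic)   # the numerical and closed-form results agree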

The entropy of inhomogeneous systems is the sum of the entropies of the various subsystems. The laws of thermodynamics hold rigorously for inhomogeneous systems even though they may be far from internal equilibrium. The only condition is that the thermodynamic parameters of the composing subsystems are (reasonably) well-defined.

Sub-Section Three:   Enthalpy

The enthalpy $ {\displaystyle H} $ of a thermodynamic system is defined as the sum of its internal energy and the product of its pressure and volume:

$$ {\displaystyle H = U + pV} $$

Where $ {\displaystyle U} $ is the internal energy, $ {\displaystyle p} $ is pressure, and $ {\displaystyle V} $ is the volume of the system.

Enthalpy is an extensive property; it is proportional to the size of the system (for homogeneous systems). As intensive properties, the specific enthalpy $ {\displaystyle h = H/m} $ is referenced to a unit of mass $ {\displaystyle m} $ of the system, and the molar enthalpy $ {\displaystyle H_n} $ is $ {\displaystyle H/n} $ , where $ {\displaystyle n} $ is the number of moles. For inhomogeneous systems the enthalpy is the sum of the enthalpies of the component subsystems:

$$ {\displaystyle H=\sum _{k}H_{k},} $$

Where $ {\displaystyle H_{k}} $ is the enthalpy of the $ {\displaystyle k} $-th component subsystem.

A closed system may lie in thermodynamic equilibrium in a static gravitational field, so that its pressure $ {\displaystyle p} $ varies continuously with altitude, while, because of the equilibrium requirement, its temperature $ {\displaystyle T} $ is invariant with altitude. (Correspondingly, the system's gravitational potential energy density also varies with altitude.) Then the enthalpy summation becomes an integral:

$$ {\displaystyle H=\int (\rho_h)\,dV,} $$

Where $ {\displaystyle \rho_h} $ is the enthalpy density, i.e. the enthalpy per unit volume of the element $ {\displaystyle dV} $.

The integral therefore represents the sum of the enthalpies of all the elements of the volume.

The enthalpy of a closed homogeneous system is its energy function $ {\displaystyle H(S,p) } $ , with its entropy $ {\displaystyle S } $ and its pressure $ {\displaystyle p } $ as natural state variables, which provide a differential relation for $ {\displaystyle dH } $ of the simplest form, derived as follows.

We start from the first law of thermodynamics for closed systems for an infinitesimal process:

$$ {\displaystyle dU=\delta Q-\delta W,} $$

Where $ {\displaystyle \delta Q} $ is the infinitesimal amount of heat added to the system and $ {\displaystyle \delta W} $ is the infinitesimal amount of work done by the system on its surroundings.

In a homogeneous system in which only reversible processes or pure heat transfer are considered, the second law of thermodynamics gives $ {\displaystyle \delta Q = T dS } $ , with $ {\displaystyle T } $ the absolute temperature and $ {\displaystyle dS } $ the infinitesimal change in entropy $ {\displaystyle S } $ of the system. Furthermore, if only $ {\displaystyle pV } $ work is done, $ {\displaystyle \delta W = p dV } $ . As a result:

$$ {\displaystyle dU=T\,dS-p\,dV.} $$

Adding $ {\displaystyle d(pV) } $ to both sides of this expression gives:

$$ {\displaystyle dU+d(pV)=T\,dS-p\,dV+d(pV),} $$

Or:

$$ {\displaystyle d(U+pV)=T\,dS+V\,dp.} $$

So:

$$ {\displaystyle dH(S,p)=T\,dS+V\,dp.} $$

And the coefficients of the natural variable differentials $ {\displaystyle dS } $ and $ {\displaystyle dp } $ are just the single variables $ {\displaystyle T } $ and $ {\displaystyle V } $ .

Sub-Section Four:    Helmholtz Free Energy

Definition:    The Helmholtz free energy is defined as:

$$ {\displaystyle F\equiv U-TS,} $$

Where $ {\displaystyle U} $ is the internal energy of the system, $ {\displaystyle T} $ is its absolute temperature, and $ {\displaystyle S} $ is its entropy.

The Helmholtz energy is the Legendre transformation of the internal energy $ {\displaystyle U}$ , in which temperature replaces entropy as the independent variable.

The first law of thermodynamics in a closed system provides:

$$ {\displaystyle \mathrm {d} U=\delta Q\ +\delta W} $$

Where $ {\displaystyle U } $ is the internal energy, $ {\displaystyle \delta Q} $ is the energy added as heat, and $ {\displaystyle \delta W} $ is the work done on the system. The second law of thermodynamics for a reversible process yields $ {\displaystyle \delta Q=T\,\mathrm {d} S} $ . In case of a reversible change, the work done can be expressed as $ {\displaystyle \delta W=-p\,\mathrm {d} V} $

and so:

$$ {\displaystyle \mathrm {d} U=T\,\mathrm {d} S-p\,\mathrm {d} V.} $$

Applying the product rule for differentiation, $ {\displaystyle \mathrm {d} (TS)=T\,\mathrm {d} S+S\,\mathrm {d} T} $ , it follows that:

$$ {\displaystyle \mathrm {d} U=\mathrm {d} (TS)-S\,\mathrm {d} T-p\,\mathrm {d} V,} $$

and $$ {\displaystyle \mathrm {d} (U-TS)=-S\,\mathrm {d} T-p\,\mathrm {d} V.} $$ The definition of $ {\displaystyle F=U-TS} $ enables us to rewrite this as: $$ {\displaystyle \mathrm {d} F=-S\,\mathrm {d} T-p\,\mathrm {d} V.} $$

Because $ {\displaystyle F } $ is a thermodynamic function of state, this relation is also valid for a process (without electrical work or composition change) that is not reversible.

Sub-Section Five:    Gibbs Free Energy

The Gibbs free energy (symbol $ {\displaystyle G} $ ) is a thermodynamic potential that can be used to calculate the maximum amount of work that may be performed by a thermodynamically closed system at constant temperature and pressure. It also provides a necessary condition for processes such as chemical reactions that may occur under these conditions.

The Gibbs free energy change

$$ {\displaystyle \Delta G=\Delta H-T\Delta S} $$

Measured in joules (in SI units), it is the maximum amount of non-expansion work that can be extracted from a closed system (one that can exchange heat and work with its surroundings, but not matter) at fixed temperature and pressure. This maximum can be attained only in a completely reversible process. When a system transforms reversibly from an initial state to a final state under these conditions, the decrease in Gibbs free energy equals the work done by the system on its surroundings, minus the work of the pressure forces.

The Gibbs energy is the thermodynamic potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature when not driven by an applied electrolytic voltage. Its derivative with respect to the reaction coordinate of the system then vanishes at the equilibrium point. As such, a reduction in $ {\displaystyle G} $ is necessary for a reaction to be spontaneous under these conditions.

The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes". If the reactants and products are all in their thermodynamic standard states, then the defining equation is written as $ {\displaystyle \Delta G^{\circ }=\Delta H^{\circ }-T\Delta S^{\circ }} $ , where $ {\displaystyle H} $ is enthalpy, $ {\displaystyle T} $ is absolute temperature, and $ {\displaystyle S} $ is entropy.
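A minimal numerical illustration of the defining relation $ {\displaystyle \Delta G=\Delta H-T\Delta S} $ is given below in Python; the enthalpy and entropy changes are placeholder values chosen to be roughly those of the vaporization of water, so the sign change of $ {\displaystyle \Delta G} $ lands near the boiling point.

import numpy as np

dH = 40.7e3       # assumed enthalpy change [J/mol], endothermic
dS = 109.0        # assumed entropy change [J/(mol K)], entropy increases

T = np.linspace(250.0, 450.0, 2001)    # temperature range [K]
dG = dH - T * dS                       # Gibbs free energy change at each temperature

T_eq = dH / dS                         # temperature at which dG changes sign
print(f"dG < 0 (spontaneous) for T > {T_eq:.1f} K")
print(f"dG at 300 K: {dG[np.argmin(np.abs(T - 300.0))] / 1e3:+.2f} kJ/mol")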

Part One: Thermodynamics

A description of any thermodynamic system employs the four laws of thermodynamics, which form an axiomatic basis. The laws of thermodynamics define a group of physical quantities, such as temperature, energy, and entropy, that characterize thermodynamic systems in thermodynamic equilibrium. The laws also use various parameters for thermodynamic processes, such as thermodynamic work and heat, and establish relationships between them. They state empirical facts that form a basis for precluding the possibility of certain phenomena, such as perpetual motion. In addition to their use in thermodynamics, they have wide applicability in engineering and atmospheric physics. Traditionally, thermodynamics has recognized three fundamental laws, simply named by numerical identification: the first law, the second law, and the third law. The definition of temperature was later incorporated as the zeroth law.

The first and second laws prohibit two kinds of perpetual motion machines, respectively: the perpetual motion machine of the first kind which produces work with no energy input, and the perpetual motion machine of the second kind which spontaneously converts thermal energy into mechanical work.

In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles, whose average motions define its properties, and those properties are in turn related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.

With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. The results of thermodynamics are essential for other fields of physics and for chemistry, chemical engineering, corrosion engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, materials science, and economics, to name a few.

The zeroth law of thermodynamics provides for the foundation of temperature as an empirical parameter in thermodynamic systems and establishes the transitive relation between the temperatures of multiple bodies in thermal equilibrium. The law may be stated in the following form:

If two systems are both in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.

These concepts of temperature and of thermal equilibrium are fundamental to thermodynamics and were clearly stated in the nineteenth century. The name 'zeroth law' was invented by Ralph H. Fowler in the 1930s, long after the first, second, and third laws were widely recognized. The law allows the definition of temperature in a non-circular way without reference to entropy, its conjugate variable.

The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic processes. In general, the conservation law states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but can be neither created nor destroyed.

In a closed system (i.e. there is no transfer of matter into or out of the system), the first law states that the change in internal energy of the system (ΔU system) is equal to the difference between the heat supplied to the system (Q) and the work (W) done by the system on its surroundings.

$${\displaystyle \Delta U_{\rm {system}}=Q-W}$$

This document is focused mainly on classical thermodynamics which primarily studies systems in thermodynamic equilibrium. Non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field.

The Second Law of thermodynamics: The first law of thermodynamics provides the definition of the internal energy of a thermodynamic system, and expresses its change for a closed system in terms of work and heat. The first law can be linked to the law of conservation of energy. The second law is concerned with the direction of natural processes. It asserts that a natural process runs only in one sense, and is not reversible. For example, when a path for conduction and radiation is made available, heat always flows spontaneously from a hotter to a colder body. Such phenomena are accounted for in terms of entropy change. If an isolated system containing distinct subsystems is held initially in internal thermodynamic equilibrium by internal partitioning by impermeable walls between the subsystems, and then some operation makes the walls more permeable, then the system spontaneously evolves to reach a final new internal thermodynamic equilibrium, and its total entropy, $ {\displaystyle S}$ , increases.

In a reversible or quasi-static, idealized process of transfer of energy as heat to a closed thermodynamic system of interest (which allows the entry or exit of energy, but not transfer of matter) from an auxiliary thermodynamic system, an infinitesimal increment ($ {\displaystyle \mathrm {d} S}$ ) in the entropy of the system of interest is defined to result from an infinitesimal transfer of heat ($ {\displaystyle \delta Q}$ ) to the system of interest, divided by the common thermodynamic temperature ($ {\displaystyle T} $ ) of the system of interest and the auxiliary thermodynamic system:

$${\displaystyle \mathrm {d} S={\frac {\delta Q}{T}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\text{(closed system; idealized, reversible process)}}.}$$

The Third Law of thermodynamics

The third law of thermodynamics states, regarding the properties of closed systems in thermodynamic equilibrium:

The entropy of a system approaches a constant value when its temperature approaches absolute zero.

This constant value cannot depend on any other parameters characterizing the closed system, such as pressure or applied magnetic field. At absolute zero (zero kelvins) the system must be in a state with the minimum possible energy. Entropy is related to the number of accessible microstates, and there is typically one unique state (called the ground state) with minimum energy. In such a case, the entropy at absolute zero will be exactly zero. If the system does not have a well-defined order (if its order is glassy, for example), then there may remain some finite entropy as the system is brought to very low temperatures, either because the system becomes locked into a configuration with non-minimal energy or because the minimum energy state is non-unique. The constant value is called the residual entropy of the system. Because entropy is a state function, this residual value is an inherent property of a given configuration of atoms, molecules, or other particles, and it can be determined by measurements near 0 K. The Nernst–Simon statement of the third law of thermodynamics concerns thermodynamic processes at a fixed, low temperature:

The entropy change associated with any condensed system undergoing a reversible isothermal process approaches zero as the temperature at which it is performed approaches 0 K.

Here a condensed system refers to liquids and solids. A classical formulation by Nernst (actually a consequence of the Third Law) is:

It is impossible for any process, no matter how idealized, to reduce the entropy of a system to its absolute-zero value in a finite number of operations.

There also exists a formulation of the third law which approaches the subject by postulating a specific energy behavior:

If the composite of two thermodynamic systems constitutes an isolated system, then any energy exchange in any form between those two systems is bounded.

Part Two: Statistical Thermodynamics

The primary goal of statistical thermodynamics is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material.

Whereas statistical mechanics proper involves dynamics, here the attention is focussed on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium), rather, only that the ensemble is not evolving.

Fundamental postulate: A sufficient (but not necessary) condition for statistical equilibrium of an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.). There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.

A common approach found in many textbooks is to take the equal a priori probability postulate. This postulate states that:

For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.

The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:

Ergodic hypothesis: An ergodic system is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.

Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.

Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).

Other fundamental postulates for statistical mechanics have also been proposed. For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate. One such formalism is based on the fundamental thermodynamic relation together with an additional set of postulates.

Part Three: Coherence in Physics

In Optics, two wave sources are coherent if their frequency and waveform are identical. Coherence is an ideal property of waves that enables stationary (i.e. temporally or spatially constant) interference. It contains several distinct concepts, which are limiting cases that never quite occur in reality but allow an understanding of the physics of waves, and has become a very important concept in quantum physics. More generally, coherence describes all properties of the correlation between physical quantities of a single wave, or between several waves or wave packets.

Interference is the addition, in the mathematical sense, of wave functions. A single wave can interfere with itself, but this is still an addition of two waves (see Young's slits experiment). Constructive or destructive interference are limit cases, and two waves always interfere, even if the result of the addition is complicated or not remarkable. When interfering, two waves can add together to create a wave of greater amplitude than either one (constructive interference) or subtract from each other to create a wave of lesser amplitude than either one (destructive interference), depending on their relative phase. Two waves are said to be coherent if they have a constant relative phase. The amount of coherence can readily be measured by the interference visibility, which looks at the size of the interference fringes relative to the input waves (as the phase offset is varied); a precise mathematical definition of the degree of coherence is given by means of correlation functions.

Spatial coherence describes the correlation (or predictable relationship) between waves at different points in space, either lateral or longitudinal. Temporal coherence describes the correlation between waves observed at different moments in time. Both are observed in the Michelson–Morley experiment and Young's interference experiment. Once the fringes are obtained in the Michelson interferometer, when one of the mirrors is moved away gradually from the beam-splitter, the time for the beam to travel increases and the fringes become dull and finally disappear, showing temporal coherence. Similarly, in a double-slit experiment, if the space between the two slits is increased, the coherence dies gradually and finally the fringes disappear, showing spatial coherence. In both cases, the fringe amplitude slowly disappears, as the path difference increases past the coherence length.

The coherence function between two signals $ {\displaystyle x(t)} $ and $ {\displaystyle y(t)}$ is defined as: $$ {\displaystyle \gamma _{xy}^{2}(f)={\frac {|S_{xy}(f)|^{2}}{S_{xx}(f)S_{yy}(f)}}}$$ where $ {\displaystyle S_{xy}(f)}$ is the cross-spectral density of the signal and $ {\displaystyle S_{xx}(f)}$ and $ {\displaystyle S_{yy}(f)} $ are the power spectral density functions of $ {\displaystyle x(t)} $ and $ {\displaystyle y(t)} $ , respectively. The cross-spectral density and the power spectral density are defined as the Fourier transforms of the cross-correlation and the autocorrelation signals, respectively.

For instance, if the signals are functions of time, the cross-correlation is a measure of the similarity of the two signals as a function of the time lag relative to each other and the autocorrelation is a measure of the similarity of each signal with itself in different instants of time. In this case the coherence is a function of frequency. Analogously, if $ {\displaystyle x(t)} $ and $ {\displaystyle y(t)}$ are functions of space, the cross-correlation measures the similarity of two signals in different points in space and the autocorrelations the similarity of the signal relative to itself for a certain separation distance. In that case, coherence is a function of wavenumber (spatial frequency).

The coherence varies in the interval $ {\displaystyle 0\leq \gamma _{xy}^{2}(f)\leq 1}$ . If $ {\displaystyle \gamma _{xy}^{2}(f)=1}$ the signals are perfectly correlated or linearly related, and if $ {\displaystyle \gamma _{xy}^{2}(f)=0}$ they are totally uncorrelated. If a linear system is being measured, with $ {\displaystyle x(t)} $ being the input and $ {\displaystyle y(t)} $ the output, the coherence function will be unity over the whole spectrum. However, if non-linearities are present in the system, the coherence will vary within the limits given above.
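The coherence function defined above can be estimated directly from sampled signals. The Python sketch below (illustrative; the signals, filter, and noise level are assumptions) uses scipy.signal.coherence, which implements a Welch-type estimate of $ {\displaystyle |S_{xy}|^{2}/(S_{xx}S_{yy})} $: the output y is a low-pass-filtered, noisy copy of the input x, so the coherence is high in the shared passband and falls toward zero outside it.

import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000.0                              # sampling frequency [Hz] (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)

x = rng.standard_normal(t.size)          # broadband input signal
b, a = signal.butter(4, 0.2)             # assumed linear system: 4th-order low-pass filter
y = signal.lfilter(b, a, x) + 0.5 * rng.standard_normal(t.size)   # output plus measurement noise

f, Cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
print(f[np.argmax(Cxy)], Cxy.max())      # coherence is large in the passband, small outside it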

The coherence of two waves expresses how well correlated the waves are, as quantified by the cross-correlation function. Cross-correlation quantifies the ability to predict the phase of the second wave by knowing the phase of the first. As an example, consider two waves perfectly correlated for all times (by using a monochromatic light source). At any time, the phase difference will be constant. If, when combined, they exhibit perfect constructive interference, perfect destructive interference, or something in-between but with constant phase difference, then it follows that they are perfectly coherent. If, on the other hand, a wave is split and recombined such that the two parts arrive at a different time or position, the appropriate measure of correlation is the autocorrelation function (sometimes called self-coherence). The degree of correlation involves correlation functions.

Fluid Dynamics

Fluid mechanics is the branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them.  It has applications in a wide range of disciplines, including mechanical, civil, chemical and biomedical engineering, geophysics, oceanography, meteorology and astrophysics.

It can be divided into fluid statics, the study of fluids at rest; and fluid dynamics, the study of the effect of forces on fluid motion. It is a branch of continuum mechanics, a subject which models matter from a macroscopic viewpoint rather than a microscopic one. Fluid mechanics, especially fluid dynamics, is an active field of research, typically mathematically complex. Many problems are partly or wholly unsolved and are best addressed by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach.

Fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids—liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines and is used extensively in the modelling of weather patterns. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time.

Before the twentieth century, hydrodynamics was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases.

Conservation laws

Three conservation laws are used to solve fluid dynamics problems, and they may be written in integral or differential form. The conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow.

Mass continuity (conservation of mass)

The rate of change of fluid mass inside a control volume must be equal to the net rate of fluid flow into the volume. Physically, this statement requires that mass is neither created nor destroyed in the control volume, and can be translated into the integral form of the continuity equation:

$$ {\displaystyle {\frac {\partial }{\partial t}}\iiint _{V}\rho \,\mathrm {d} V=-\iint _{S}\rho \,\mathbf {u} \cdot \mathrm {d} \mathbf {S} ,} $$ where $ {\displaystyle \rho }$ is the fluid density, $ {\displaystyle \mathbf {u} }$ is the flow velocity, and $ S $ is the closed surface bounding the control volume $ V $ .
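As a minimal numerical illustration of this integral statement, the sketch below advects a one-dimensional density field on a periodic domain with a conservative upwind update and checks that the total mass in the (closed) domain stays constant; the grid, velocity and time step are arbitrary example values, not part of the text above.

```python
# Sketch: total mass is conserved by a conservative (flux-form) upwind update
# on a periodic 1-D domain. All discretization parameters are illustrative.
import numpy as np

nx, L, u, dt = 200, 1.0, 1.0, 0.002
dx = L / nx
x = np.linspace(0, L, nx, endpoint=False)
rho = 1.0 + 0.5 * np.exp(-((x - 0.5) ** 2) / 0.01)    # initial density bump

mass0 = rho.sum() * dx
for _ in range(500):
    flux = u * rho                                    # upwind face flux for u > 0
    rho = rho - dt / dx * (flux - np.roll(flux, 1))   # flux out minus flux in, per cell
mass = rho.sum() * dx

print(f"initial mass = {mass0:.8f}, final mass = {mass:.8f}")   # equal to round-off
```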

CHAPTER FOUR: ELECTROMAGNETIC THEORY

Part One: Electrostatics and Coulomb's Law.

Coulomb's inverse-square law, or simply Coulomb's law, is an experimental law that quantifies the amount of force between two stationary, electrically charged particles. The electric force between charged bodies at rest is conventionally called electrostatic force or Coulomb force. Although the law was known earlier, it was first published in 1785 by French physicist Charles-Augustin de Coulomb, hence the name. Coulomb's law was essential to the development of the theory of electromagnetism because it made it possible to discuss the quantity of electric charge in a meaningful way.

The law states that the magnitude of the electrostatic force of attraction or repulsion between two point charges is directly proportional to the product of the magnitudes of the charges and inversely proportional to the square of the distance between them:

$$ {\displaystyle |F|=k_{\text{e}}{\frac {|q_{1}||q_{2}|}{r^{2}}}}$$

Here, $ {\displaystyle k_{\text{e}}}$ is Coulomb's constant ($ {\displaystyle k_{\text{e}}\approx 8.988\times 10^{9}~\mathrm {N{\cdot }m^{2}{\cdot }C^{-2}}}$), $ {\displaystyle {q_{1}}}$ and $ {\displaystyle {q_{2}}}$ are the signed magnitudes of the charges, and the scalar $ {\displaystyle r}$ is the distance between the charges. The force is along the straight line joining the two charges. If the charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive.

Being an inverse-square law, the law is analogous to Isaac Newton's inverse-square law of universal gravitation, but while gravitational forces are always attractive, the electrostatic forces can be attractive or repulsive. Coulomb's law can be used to derive Gauss's law, and vice versa.
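The formula above is easy to evaluate directly; the snippet below computes the force magnitude for two 1 µC charges separated by 10 cm (the charge and distance values are assumptions made for the example).

```python
# Sketch: magnitude of the Coulomb force between two 1 uC charges 10 cm apart.
k_e = 8.988e9          # N·m^2/C^2, Coulomb's constant
q1 = q2 = 1e-6         # signed charges in coulombs (illustrative values)
r = 0.10               # separation in metres

F = k_e * abs(q1) * abs(q2) / r**2
print(f"|F| = {F:.3f} N")    # about 0.899 N; repulsive, since the signs are equal
```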

Part Two: Gauss's Law.

Gauss's law states that: The net electric flux through any closed surface is equal to $ {\displaystyle {\frac {1}{\varepsilon _{0}}}}$ times the net electric charge within that closed surface. Gauss's law may be expressed as:

$$ {\displaystyle \Phi _{E}={\frac {Q}{\varepsilon _{0}}}}$$ where $ {\displaystyle \Phi _{E}}$ is the electric flux through a closed surface S enclosing any volume V, Q is the total charge enclosed within V, and $ {\displaystyle \varepsilon _{0}}$ is the electric constant. The electric flux $ {\displaystyle \Phi _{E}}$ is defined as a surface integral of the electric field: $$ {\displaystyle \Phi _{E}= \iint _{S} \mathbf {E} \cdot \mathrm {d} \mathbf {A} }$$ where $ {\displaystyle E}$ is the electric field, $ {\displaystyle dA}$ is a vector representing an infinitesimal element of area of the surface, and · represents the dot product of two vectors. Applying the divergence theorem, Gauss's law may equivalently be written in terms of volume integrals: $$ {\displaystyle \iiint _{V}\nabla \cdot \mathbf {E} \,\mathrm {d} V=\iiint _{V}{\frac {\rho }{\varepsilon _{0}}}\,\mathrm {d} V}$$ where $ {\displaystyle \rho }$ is the charge density.
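As a quick numerical sanity check of Gauss's law, the sketch below integrates the Coulomb field of a point charge over a concentric sphere and compares the flux with $ Q/\varepsilon _{0}$ ; the charge and radius are illustrative values, and the result is independent of the radius chosen.

```python
# Sketch: numerically verify that the flux of E through a sphere centred on a
# point charge q equals q / eps0. Parameter values are illustrative.
import numpy as np

eps0 = 8.854e-12       # F/m, electric constant
q = 1e-9               # 1 nC point charge
R = 0.3                # radius of the Gaussian sphere in metres (any value works)

n = 800
theta = (np.arange(n) + 0.5) * np.pi / n       # midpoint samples of the polar angle
dtheta = np.pi / n

E_r = q / (4 * np.pi * eps0 * R**2)            # radial |E| on the sphere (Coulomb field)
# The integrand E . dA = E_r R^2 sin(theta) dtheta dphi has no phi dependence,
# so the phi integration contributes a factor of 2*pi.
flux = 2 * np.pi * np.sum(E_r * R**2 * np.sin(theta)) * dtheta

print(f"numerical flux = {flux:.6e}")
print(f"q / eps0       = {q / eps0:.6e}")
```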

Part Three: Faraday's Law.

Faraday's law of induction (briefly, Faraday's law) is a basic law of electromagnetism predicting how a magnetic field will interact with an electric circuit to produce an electromotive force (emf)—a phenomenon known as electromagnetic induction. It is the fundamental operating principle of transformers, inductors, and many types of electrical motors, generators and solenoids.

The most widespread version of Faraday's law states: The electromotive force around a closed path is equal to the negative of the time rate of change of the magnetic flux enclosed by the path. For a loop of wire in a magnetic field, the magnetic flux $ {\displaystyle \Phi _{B}}$ is defined for any surface ${\Sigma}$ whose boundary is the given loop. Since the wire loop may be moving, we write Σ(t) for the surface. The magnetic flux is the surface integral: $$ {\displaystyle \Phi _{B}=\iint _{\Sigma (t)}\mathbf {B} (t)\cdot \mathrm {d} \mathbf {A} \,,}$$ where dA is an element of surface area of the moving surface Σ(t), B is the magnetic field, and B · dA is a vector dot product representing the element of flux through dA. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic field lines that pass through the loop.
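To illustrate the flux rule numerically, the sketch below takes a fixed circular loop of area A in a spatially uniform field $ B(t)=B_{0}\sin(\omega t)$ , so that $ \Phi _{B}(t)=AB_{0}\sin(\omega t)$ , and compares a finite-difference estimate of $ -d\Phi _{B}/dt$ with the analytic electromotive force; all parameter values are illustrative assumptions.

```python
# Sketch: emf = -dPhi_B/dt for a fixed loop of area A in a uniform field
# B(t) = B0 sin(w t). Parameter values are illustrative.
import numpy as np

A, B0, w = 0.01, 0.2, 2 * np.pi * 50     # loop area (m^2), field amplitude (T), 50 Hz
t = np.linspace(0, 0.04, 2001)           # two periods
Phi = A * B0 * np.sin(w * t)             # magnetic flux through the loop

emf_numeric = -np.gradient(Phi, t)       # -dPhi/dt by finite differences
emf_exact = -A * B0 * w * np.cos(w * t)

print(f"peak emf              = {np.max(np.abs(emf_exact)):.3f} V")
print(f"max |numeric - exact| = {np.max(np.abs(emf_numeric - emf_exact)):.2e} V")
```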

Part Four: Charge Conservation

In electromagnetic theory, the continuity equation is an empirical law expressing (local) charge conservation. Mathematically it is an automatic consequence of Maxwell's equations, although charge conservation is more fundamental than Maxwell's equations. It states that the divergence of the current density J (in amperes per square metre) is equal to the negative rate of change of the charge density ρ (in coulombs per cubic metre), $${\displaystyle \nabla \cdot \mathbf {J} +{\frac {\partial \rho }{\partial t}}= 0}$$

According to the (purely mathematical) Gauss divergence theorem, the electric flux through the boundary surface $ {\displaystyle {\scriptstyle \partial \Omega}}$ can be rewritten as: $$ {\displaystyle {\iint _{\partial \Omega } \mathbf {E} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {E} \,\mathrm {d} V}}$$ The integral version of Gauss's equation can thus be rewritten as $$ {\displaystyle \iiint _{\Omega }\left(\nabla \cdot \mathbf {E} -{\frac {\rho }{\varepsilon _{0}}}\right)\,\mathrm {d} V=0}$$ Since Ω is arbitrary (e.g. an arbitrarily small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. This is the differential-equation formulation of Gauss's law, up to a trivial rearrangement.

Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives $$ {\displaystyle {\iint _{\partial \Omega } \mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {B} \,\mathrm {d} V}= 0}$$ which is satisfied for all $ {\displaystyle \Omega} $ if and only if $ {\displaystyle \nabla \cdot \mathbf {B} =0}$ everywhere.

By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve $ {\displaystyle \partial \Sigma} $ as an integral of the "circulation of the fields" (i.e. their curls) over a surface it bounds, i.e. $$ {\displaystyle \oint _{\partial \Sigma }\mathbf {B} \cdot \mathrm {d} {\boldsymbol {\ell }}=\iint _{\Sigma }(\nabla \times \mathbf {B} )\cdot \mathrm {d} \mathbf {S} .} $$ Hence the modified Ampere law in integral form can be rewritten as $$ {\displaystyle \iint _{\Sigma }\left(\nabla \times \mathbf {B} -\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)\cdot \mathrm {d} \mathbf {S} =0.}$$

Since $ {\displaystyle \Sigma} $ can be chosen arbitrarily, e.g. as an arbitrarily small, arbitrarily oriented, and arbitrarily centered disk, we conclude that the integrand is zero if and only if Ampere's modified law in differential form is satisfied. The equivalence of Faraday's law in differential and integral form follows likewise.
The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field.

Charge conservation

The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the modified Ampere's Law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives: $$ {\displaystyle 0=\nabla \cdot (\nabla \times \mathbf {B} )=\nabla \cdot \left(\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +\varepsilon _{0}{\frac {\partial }{\partial t}}\nabla \cdot \mathbf {E} \right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +{\frac {\partial \rho }{\partial t}}\right)} $$ i.e., $$ {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0.}$$ By the Gauss Divergence Theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary: $$ {\displaystyle {\frac {d}{dt}}Q_{\Omega }={\frac {d}{dt}} \iiint _{\Omega }\rho \mathrm {d} V=- \iint_{ \partial \Omega } \mathbf {J} \cdot {\rm {d}}\mathbf {S} =-I_{\partial \Omega }.} $$ In particular, in an isolated system the total charge is conserved.

Maxwell's Equations

The mathematical equations below are called Maxwell's Equations and form the mathematical underpinning of the Electromagnetic Theory. These equations have a very broad area of application and are used extensively in the design of laser systems.

\[\nabla \cdot \mathbf {E} = \frac{\rho}{\epsilon_0}\]

\[\nabla \cdot \mathbf {B} = 0\]

\[\nabla \times \mathbf {E} = -\frac{\partial \mathbf {B}}{\partial t}\]

\[\nabla \times \mathbf {B} = \mu_0 (\mathbf {J} + \epsilon_0 \frac{\partial \mathbf {E}}{\partial t})\]

In a region with no charges $ {\displaystyle \rho = 0} $ and no currents $ {\displaystyle \mathbf {J} = 0} $ , such as in a vacuum, Maxwell's equations reduce to: $$ {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} &=0,&\nabla \times \mathbf {E} &=-{\frac {\partial \mathbf {B} }{\partial t}},\\\nabla \cdot \mathbf {B} &=0,&\nabla \times \mathbf {B} &=\mu _{0}\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}.\end{aligned}}} $$ The quantity $ {\displaystyle \mu _{0}\varepsilon _{0}}$ has the dimension of $ (time/length)^2 $ . Defining $ {\displaystyle c=(\mu _{0}\varepsilon _{0})^{-1/2}}$ , the equations above have the form of the standard wave equations.

$$ {\displaystyle {\begin{aligned}{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}} $$

Already during Maxwell's lifetime, it was found that the known values for $ {\displaystyle \varepsilon _{0}} $ and $ {\displaystyle \mu _{0}} $ give $ {\displaystyle c\approx 2.998\times 10^{8}~{\text{m/s}}}$ , then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, a proposal since amply confirmed. In the old SI system of units, the values of $ {\displaystyle \mu _{0}=4\pi \times 10^{-7}~{\text{H/m}}}$ and $ {\displaystyle c=299\,792\,458~{\text{m/s}}}$ are defined constants (which means that by definition $ {\displaystyle \varepsilon _{0}=8.854...\times 10^{-12}~{\text{F/m}}}$ ), and these define the ampere and the metre. In the new SI system, only $ {\displaystyle c} $ keeps its defined value, and the electron charge gets a defined value. In materials with relative permittivity, $ {\displaystyle {\varepsilon _{r}}}$ , and relative permeability, $ {\displaystyle {\mu _{r}}}$ , the phase velocity of light becomes:

$$ {\displaystyle v_{\text{p}}={\frac {1}{\sqrt {\mu _{0}\mu _{\text{r}}\varepsilon _{0}\varepsilon _{\text{r}}}}},}$$ which is usually less than $ {\displaystyle c} $ .
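The relations above can be checked with a few lines of arithmetic; the sketch below recovers $ c$ from $ \mu _{0}\varepsilon _{0}$ and evaluates the phase velocity for an assumed glass-like dielectric with $ \varepsilon _{r}=2.25$ and $ \mu _{r}=1$ (illustrative material values).

```python
# Sketch: c = 1/sqrt(mu0*eps0) and the phase velocity in a simple dielectric.
# The relative permittivity/permeability values are illustrative assumptions.
import math

mu0 = 4 * math.pi * 1e-7         # H/m (old-SI defined value)
eps0 = 8.854187817e-12           # F/m

c = 1 / math.sqrt(mu0 * eps0)
print(f"c   = {c:.6e} m/s")      # ~2.998e8 m/s

eps_r, mu_r = 2.25, 1.0          # e.g. a glass-like dielectric
v_p = 1 / math.sqrt(mu0 * mu_r * eps0 * eps_r)
print(f"v_p = {v_p:.6e} m/s")    # ~c/1.5, i.e. less than c
```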

In addition, $ {\displaystyle \mathbf {E}} $ and $ {\displaystyle \mathbf {B}} $ are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's addition to Ampère's law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at the speed $ {\displaystyle c} $ .

Macroscopic Formulation of Maxwell's Equations

The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping. The microscopic version is sometimes called "Maxwell's equations in a vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents. 

"Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself.

In the macroscopic equations, the influence of bound charge $ {\displaystyle Q_{\text{b}}} $ and bound current $ {\displaystyle I_{\text{b}}} $ is incorporated into the displacement field $ {\displaystyle \mathbf {D}} $ and the magnetizing field $ {\displaystyle \mathbf {H}} $ , while the equations depend only on the free charges $ {\displaystyle Q_{\text{f}}} $ and free currents $ {\displaystyle I_{\text{f}}} $ . This reflects a splitting of the total electric charge $ {\displaystyle Q} $ and current $ {\displaystyle I} $ (and their densities $ {\displaystyle \rho} $ and $ {\displaystyle \mathbf {J}} $ ) into free and bound parts: $$ {\displaystyle {\begin{aligned}Q&=Q_{\text{f}}+Q_{\text{b}}=\iiint _{\Omega }\left(\rho _{\text{f}}+\rho _{\text{b}}\right)\,\mathrm {d} V=\iiint _{\Omega }\rho \,\mathrm {d} V,\\I&=I_{\text{f}}+I_{\text{b}}=\iint _{\Sigma }\left(\mathbf {J} _{\text{f}}+\mathbf {J} _{\text{b}}\right)\cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} .\end{aligned}}}$$ The cost of this splitting is that the additional fields D and H need to be determined through phenomenological constitutive equations relating these fields to the electric field E and the magnetic field B, together with the bound charge and current. See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum, and the macroscopic equations, dealing with free charge and current, practical to use within materials.

CHAPTER FIVE: QUANTUM MECHANICS

Quantum Mechanics

Quantum mechanics is a theory that provides several useful mathematical models which are used in the description of physical systems such as atoms, molecules and solids. These models provide the background mathematics for such diverse areas of science as quantum chemistry, quantum optics, quantum technology, and quantum computing.

Quantum mechanics requires that the energy, momentum, angular momentum, and other quantities of a bound system are restricted to discrete values (quantization), objects have characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement (the uncertainty principle).

Each isolated physical system is associated with a complex Hilbert space $ {\displaystyle H}$ with inner product $ {\displaystyle \langle \phi |\psi \rangle}$ . Rays (that is, subspaces of complex dimension 1) in $ {\displaystyle H}$ are associated with quantum states of the system.

Postulate I

The state of an isolated physical system is represented, at a fixed time $ {\displaystyle t}$ , by a state vector $ {\displaystyle |\psi \rangle }$ belonging to a Hilbert space $ {\displaystyle {\mathcal {H}}}$ called the state space. In other words, quantum states can be identified with equivalence classes of vectors of length 1 in $ {\displaystyle {\mathcal {H}}}$ , where two vectors represent the same state if they differ only by a phase factor. Separability is a mathematically convenient hypothesis, with the physical interpretation that countably many observations are enough to uniquely determine the state.
"A quantum mechanical state is a ray in projective Hilbert space, not a vector. Many authors fail to make this distinction, which could be partly a result of the fact that the Schrödinger equation itself involves Hilbert-space "vectors", with the result that the imprecise use of "state vector" rather than ray is very difficult to avoid."
Accompanying Postulate I is the composite system postulate:

The Hilbert space of a composite system is the Hilbert space tensor product of the state spaces associated with the component systems. For a non-relativistic system consisting of a finite number of distinguishable particles, the component systems are the individual particles. In the presence of quantum entanglement, the quantum state of the composite system cannot be factored as a tensor product of states of its local constituents; instead, it is expressed as a sum, or superposition, of tensor products of states of component subsystems. A subsystem in an entangled composite system generally cannot be described by a state vector (or a ray), but instead is described by a density operator; such a quantum state is known as a mixed state. The density operator of a mixed state is a trace-class, nonnegative (positive semi-definite), self-adjoint operator ρ normalized to have trace 1. In turn, any density operator of a mixed state can be represented as a subsystem of a larger composite system in a pure state (see the purification theorem).

In the absence of quantum entanglement, the quantum state of the composite system is called a separable state. The density matrix of a bipartite system in a separable state can be expressed as $$ {\displaystyle \rho =\sum _{k}p_{k}\rho _{1}^{k}\otimes \rho _{2}^{k}}$$ , where $ {\displaystyle \;\sum _{k}p_{k}=1}$ . If there is only a single non-zero $ {\displaystyle p_{k}}$ , then the state can be expressed just as $ {\textstyle \rho =\rho _{1}\otimes \rho _{2},}$ and is called simply separable or product state.
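The separable form above is easy to build explicitly for two qubits; the sketch below constructs a convex mixture of product states with numpy's Kronecker product and confirms that the result is a valid density matrix (the mixing probabilities and component states are arbitrary example choices).

```python
# Sketch: a separable two-qubit state rho = sum_k p_k rho1_k (x) rho2_k,
# built with np.kron, plus basic density-matrix checks. Example values only.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
rho_0 = np.outer(ket0, ket0)          # |0><0|
rho_1 = np.outer(ket1, ket1)          # |1><1|

p = [0.7, 0.3]                        # mixing probabilities (sum to 1)
rho = p[0] * np.kron(rho_0, rho_0) + p[1] * np.kron(rho_1, rho_1)

print(f"trace          = {np.trace(rho):.3f}")                  # 1.0
print(f"min eigenvalue = {np.linalg.eigvalsh(rho).min():.3f}")  # >= 0 (positive semi-definite)
print(f"Hermitian      = {np.allclose(rho, rho.conj().T)}")
```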

Postulate II: Measurements on a System

Description of physical quantities

Physical observables are represented by Hermitian matrices on $ {\displaystyle {\mathcal {H}}}$ . Since these operators are Hermitian, their eigenvalues are always real, and represent the possible outcomes/results from measuring the corresponding observable. If the spectrum of the observable is discrete, then the possible results are quantized.

Postulate II-a

Every measurable physical quantity $ {\displaystyle {\mathcal {A}}}$ is described by a Hermitian operator $ {\displaystyle {\mathcal {A}}}$ acting in the state space $ {\displaystyle {\mathcal {H}}}$ . This operator is an observable, meaning that its eigenvectors form a basis for $ {\displaystyle {\mathcal {H}}}$ . The result of measuring a physical quantity $ {\displaystyle {\mathcal {A}}}$ must be one of the eigenvalues of the corresponding observable $ {\displaystyle {\mathcal {A}}}$ .

Results of measurement: By spectral theory, we can associate a probability measure to the values of $ {\displaystyle {\mathcal {A}}}$ in any state $ {\displaystyle {\mathcal {\psi}}}$ . We can also show that the possible values of the observable $ {\displaystyle {\mathcal {A}}}$ in any state must belong to the spectrum of $ {\displaystyle {\mathcal {A}}}$ . The expectation value (in the sense of probability theory) of the observable $ {\displaystyle {\mathcal {A}}}$ for the system in the state represented by the unit vector $ {\displaystyle |\psi \rangle }$ is $ {\displaystyle \langle \psi |A|\psi \rangle }$ . If we represent the state $ {\displaystyle {\mathcal {\psi}}}$ in the basis formed by the eigenvectors of $ {\displaystyle {\mathcal {A}}}$ , then the square of the modulus of the component attached to a given eigenvector is the probability of observing its corresponding eigenvalue.

Postulate II-b

When the physical quantity $ {\displaystyle {\mathcal {A}}}$ is measured on a system in a normalized state $ {\displaystyle |\psi \rangle }$ , the probability of obtaining an eigenvalue (denoted $ {\displaystyle a_{n}} $ for discrete spectra and $ {\displaystyle \alpha } $ for continuous spectra) of the corresponding observable $ {\displaystyle {\mathcal {A}}}$ is given by the amplitude squared of the appropriate wave function (projection onto corresponding eigenvector).

$$ {\displaystyle {\begin{aligned}\mathbb {P} (a_{n})&=|\langle a_{n}|\psi \rangle |^{2}&{\text{(Discrete, nondegenerate spectrum)}}\\\mathbb {P} (a_{n})&=\sum _{i}^{g_{n}}|\langle a_{n}^{i}|\psi \rangle |^{2}&{\text{(Discrete, degenerate spectrum)}}\\d\mathbb {P} (\alpha )&=|\langle \alpha |\psi \rangle |^{2}d\alpha &{\text{(Continuous, nondegenerate spectrum)}}\end{aligned}}}$$

For a mixed state $ {\displaystyle {\mathcal {\rho}}}$ , the expected value of $ {\displaystyle {\mathcal {A}}}$ in the state $ {\displaystyle {\mathcal {\rho}}}$ is $ {\displaystyle \operatorname {tr} (A\rho )}$ , and the probability of obtaining an eigenvalue $ {\displaystyle a_{n}}$ in a discrete, nondegenerate spectrum of the corresponding observable $ {\displaystyle {\mathcal {A}}}$ is given by $ {\displaystyle \mathbb {P} (a_{n})=\operatorname {tr} (|a_{n}\rangle \langle a_{n}|\rho )=\langle a_{n}|\rho |a_{n}\rangle }$ .
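A minimal sketch of the Born rule for a spin-1/2 observable is shown below: it computes the probabilities of the two eigenvalues of $ \sigma _{z}$ for a pure state and for a mixed state, using the formulas of Postulate II-b and the trace formula just quoted; the particular state vectors are illustrative choices.

```python
# Sketch of the Born rule: eigenvalue probabilities of sigma_z for a pure
# state |psi> and for a mixed state rho. Example states only.
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
evals, evecs = np.linalg.eigh(sigma_z)          # eigenvalues (-1, +1) and eigenvectors

psi = np.array([2, 1j], dtype=complex) / np.sqrt(5)             # a normalized pure state
rho = 0.5 * np.outer(psi, psi.conj()) + 0.5 * np.eye(2) / 2     # an example mixed state

for a, v in zip(evals, evecs.T):
    p_pure = abs(np.vdot(v, psi)) ** 2          # P(a_n) = |<a_n|psi>|^2
    p_mixed = np.vdot(v, rho @ v).real          # P(a_n) = <a_n|rho|a_n>
    print(f"eigenvalue {a:+.0f}: P_pure = {p_pure:.3f}, P_mixed = {p_mixed:.3f}")
```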

If the eigenvalue $ {\displaystyle a_{n}}$ has degenerate, orthonormal eigenvectors $ {\displaystyle \{|a_{n1}\rangle ,|a_{n2}\rangle ,\dots ,|a_{nm}\rangle \}}$ , then the projection operator onto the eigensubspace can be defined as the identity operator in the eigensubspace:

$$ {\displaystyle P_{n}=|a_{n1}\rangle \langle a_{n1}|+|a_{n2}\rangle \langle a_{n2}|+\dots +|a_{nm}\rangle \langle a_{nm}|,}$$ and then $ {\displaystyle \mathbb {P} (a_{n})=\operatorname {tr} (P_{n}\rho )}$ .

Postulates II.a and II.b are collectively known as the Born rule of quantum mechanics.

Effect of measurement on the state: When a measurement is performed, only one result is obtained. This is modeled mathematically as the processing of additional information from the measurement, confining the probabilities of an immediate second measurement of the same observable. In the case of a discrete, non-degenerate spectrum, two sequential measurements of the same observable will always give the same value assuming the second immediately follows the first. Therefore the state vector must change as a result of measurement, and collapse onto the eigensubspace associated with the eigenvalue measured.

Postulate II.c

If the measurement of the physical quantity $ {\displaystyle {\mathcal {A}}}$ on the system in the state $ {\displaystyle |\psi \rangle }$ gives the result $ {\displaystyle a_{n}}$ , then the state of the system immediately after the measurement is the normalized projection of $ {\displaystyle |\psi \rangle } $ onto the eigensubspace associated with $ {\displaystyle a_{n}}$

$$ {\displaystyle \psi \quad {\overset {a_{n}}{\Longrightarrow }}\quad {\frac {P_{n}|\psi \rangle }{\sqrt {\langle \psi |P_{n}|\psi \rangle }}}}$$

For a mixed state ρ, after obtaining an eigenvalue $ {\displaystyle a_{n}} $ in a discrete, nondegenerate spectrum of the corresponding observable $ {\displaystyle A}$ , the updated state is given by $ {\textstyle \rho '={\frac {P_{n}\rho P_{n}^{\dagger }}{\operatorname {tr} (P_{n}\rho P_{n}^{\dagger })}}}$ . If the eigenvalue $ {\displaystyle a_{n}} $ has degenerate, orthonormal eigenvectors $ {\displaystyle \{|a_{n1}\rangle ,|a_{n2}\rangle ,\dots ,|a_{nm}\rangle \}}$ , then the projection operator onto the eigensubspace is $ {\displaystyle P_{n}=|a_{n1}\rangle \langle a_{n1}|+|a_{n2}\rangle \langle a_{n2}|+\dots +|a_{nm}\rangle \langle a_{nm}|}$ .

Postulate II.c is sometimes called the "state update rule" or "collapse rule"; together with the Born rule (Postulates II.a and II.b), they form a complete representation of measurements, and are sometimes collectively called the measurement postulate(s).

Postulate III

Though it is possible to derive the Schrödinger equation, which describes how a state vector evolves in time, most texts assert the equation as a postulate: The time evolution of the state vector $ {\displaystyle |\psi (t)\rangle }$ is governed by the Schrödinger equation, where $ {\displaystyle H(t)}$ is the observable associated with the total energy of the system (called the Hamiltonian).

Using the Dirac Notation

$$ {\displaystyle i\hbar {\frac{\partial}{\partial t}}|\psi (t)\rangle =\hat H (t)|\psi (t)\rangle }$$

Using the Operator Notation

\[i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \hat H \Psi(\mathbf{r},t)\]

If we use the one dimensional Hamiltonian $$\hat H\ = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\ +\ V(x)$$

We then have the following one dimensional, partial differential equation $$i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{x},t) = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\Psi(\mathbf{x},t)\ +\ V(x)\Psi(\mathbf{x},t)$$

Equivalently, the time evolution postulate can be stated as: The time evolution of a closed system is described by a unitary transformation on the initial state.

$$ {\displaystyle |\psi (t)\rangle =U(t;t_{0})|\psi (t_{0})\rangle }$$

For a closed system in a mixed state $ {\displaystyle \rho}$ , the time evolution is $ {\displaystyle \rho (t)=U(t;t_{0})\rho (t_{0})U^{\dagger }(t;t_{0})}$ .

Important Note: The evolution of an open quantum system can be described by quantum operations (in an operator-sum formalism), and this evolution is generally not unitary.
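A short sketch of unitary time evolution for a closed two-level system is given below: the propagator $ U=\exp(-iHt/\hbar )$ is built with a matrix exponential and applied to an initial state, confirming that the norm is preserved (the Hamiltonian, time and units are illustrative; $ \hbar $ is set to 1).

```python
# Sketch: |psi(t)> = exp(-i H t / hbar) |psi(0)> for an illustrative
# two-level Hamiltonian, with hbar = 1 for simplicity.
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)      # Hermitian example Hamiltonian

psi0 = np.array([1.0, 0.0], dtype=complex)      # initial state |0>
t = 2.0
U = expm(-1j * H * t / hbar)                    # time-evolution operator

psi_t = U @ psi0
print(f"norm of psi(t) = {np.vdot(psi_t, psi_t).real:.6f}")          # 1.0 (unitary)
print(f"U unitary?       {np.allclose(U.conj().T @ U, np.eye(2))}")  # True
```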

To the postulates of quantum mechanics one should also add basic statements on the properties of spin and Pauli's exclusion principle.

Spin: In addition to their other properties, all particles possess a quantity called spin, which can be viewed as an intrinsic angular momentum. Despite the name, particles do not literally spin around an axis, and quantum mechanical spin has no correspondence in classical physics. In the position representation, a spinless wavefunction has position r and time t as continuous variables, $ {\displaystyle \psi = \psi (r, t)}$ . For spin wavefunctions the spin is an additional discrete variable: $ {\displaystyle \psi = \psi (r, t,\sigma)}$ , where $ {\displaystyle \sigma}$ takes the values:

$$ {\displaystyle \sigma =-S\hbar ,-(S-1)\hbar ,\dots ,0,\dots ,+(S-1)\hbar ,+S\hbar \,.} $$

That is, the state of a single particle with spin $ {\displaystyle S}$ is represented by a $ {\displaystyle(2S + 1)} $ -component spinor of complex-valued wave functions.

Two classes of particles with very different behaviour are bosons, which have integer spin $ {\displaystyle (S = 0, 1, 2, \dots)} $ , and fermions, which possess half-integer spin $ {\displaystyle (S = 1/2, 3/2, 5/2, \dots)} $ .

Pauli's principle: The property of spin relates to another basic property concerning systems of $ {\displaystyle N}$ identical particles: Pauli's exclusion principle, which is a consequence of the following permutation behaviour of an N-particle wave function; again in the position representation one must postulate that for the transposition of any two of the $ {\displaystyle N}$ particles one always should have: $$ {\displaystyle \psi (\dots ,\,\mathbf {r} _{i},\sigma _{i},\,\dots ,\,\mathbf {r} _{j},\sigma _{j},\,\dots )=(-1)^{2S}\cdot \psi (\dots ,\,\mathbf {r} _{j},\sigma _{j},\,\dots ,\mathbf {r} _{i},\sigma _{i},\,\dots )}$$ i.e., on transposition of the arguments of any two particles the wavefunction should reproduce itself, apart from a prefactor $ {\displaystyle (-1)^{2S}}$ which is +1 for bosons and −1 for fermions. Electrons are fermions with $ {\displaystyle S = 1/2} $ ; quanta of light are bosons with $ {\displaystyle S = 1}$ . In nonrelativistic quantum mechanics all particles are either bosons or fermions.

Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. In particular, many important properties in natural science, e.g. the periodic system of chemistry, are consequences of these two properties.

Measurement in Quantum Mechanics

"Observables" as self-adjoint operators

In quantum mechanics, each physical system is associated with a Hilbert space, each element of which represents a possible state of the physical system. The approach codified by John von Neumann represents a measurement upon a physical system by a self-adjoint operator on that Hilbert space termed an "observable".  These observables play the role of measurable quantities familiar from classical physics: position, momentum, energy, angular momentum and so on.

The dimension of the Hilbert space may be infinite, as it is for the space of square-integrable functions on a line, which is used to define the quantum physics of a continuous degree of freedom. Alternatively, the Hilbert space may be finite-dimensional, as occurs for spin degrees of freedom. Many treatments of the theory focus on the finite-dimensional case, as the mathematics involved is somewhat less demanding. Indeed, introductory physics texts on quantum mechanics often gloss over mathematical technicalities that arise for continuous-valued observables and infinite-dimensional Hilbert spaces, such as the distinction between bounded and unbounded operators and questions of convergence (whether the limit of a sequence of Hilbert-space elements also belongs to the Hilbert space). These issues can be treated rigorously with spectral theory, but this document will avoid them whenever possible.

Projective Measurement

The eigenvectors of a von Neumann observable form an orthonormal basis for the Hilbert space, and each possible outcome of that measurement corresponds to one of the vectors comprising the basis. A density operator is a positive-semidefinite operator on the Hilbert space whose trace is equal to 1. For each measurement that can be defined, the probability distribution over the outcomes of that measurement can be computed from the density operator. The procedure for doing so is the Born rule, which states that: $$ {\displaystyle P(x_{i})=\operatorname {tr} (\Pi _{i}\rho ),}$$ where $ {\displaystyle \rho }$ is the density operator, and $ {\displaystyle \Pi _{i}}$ is the projection operator onto the basis vector corresponding to the measurement outcome $ {\displaystyle x_{i}}$ . The average of the eigenvalues of a von Neumann observable, weighted by the Born-rule probabilities, is the expectation value of that observable. For an observable $ {\displaystyle A}$ , the expectation value given a quantum state $ {\displaystyle \rho }$ is: $$ {\displaystyle \langle A\rangle =\operatorname {tr} (A\rho ).} $$ A density operator that is a rank-1 projection is known as a pure quantum state, and all quantum states that are not pure are designated mixed. Pure states are also known as wavefunctions. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e., $ {\displaystyle P(x)=1}$ for some outcome $ {\displaystyle x}$ ). Any mixed state can be written as a convex combination of pure states, though not in a unique way. The state space of a quantum system is the set of all states, pure and mixed, that can be assigned to it.

The Born rule associates a probability with each unit vector in the Hilbert space, in such a way that these probabilities sum to 1 for any set of unit vectors comprising an orthonormal basis. Moreover, the probability associated with a unit vector is a function of the density operator and the unit vector, and not of additional information like a choice of basis for that vector to be embedded in.

Gleason's theorem establishes the converse: all assignments of probabilities to unit vectors (or, equivalently, to the operators that project onto them) that satisfy these conditions take the form of applying the Born rule to some density operator.

Generalized measurement: In quantum measurement theory, a positive-operator-valued measure (POVM) is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalisation of projection-valued measures (PVMs) and, correspondingly, quantum measurements described by POVMs are a generalisation of quantum measurement described by PVMs. In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system; analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. POVMs are the most general kind of measurement in quantum mechanics. They are extensively used in the field of quantum information.

In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set of positive semi-definite matrices $ {\displaystyle \{F_{i}\}}$ on a Hilbert space $ {\displaystyle {\mathcal {H}}} $ that sum to the identity matrix:  $$ {\displaystyle \sum _{i=1}^{n}F_{i}=\operatorname {I} .}$$ In quantum mechanics, the POVM element $ {\displaystyle F_{i}} $ is associated with the measurement outcome $ {\displaystyle i}$ , such that the probability of obtaining it when making a measurement on the quantum state $ {\displaystyle \rho }$ is given by: $$ {\displaystyle {\text{Prob}}(i)=\operatorname {tr} (\rho F_{i})}$$ , where $ {\displaystyle \operatorname {tr} } $ is the trace operator. When the quantum state being measured is a pure state $ {\displaystyle |\psi \rangle } $ this formula reduces to: $$ {\displaystyle {\text{Prob}}(i)=\operatorname {tr} (|\psi \rangle \langle \psi |F_{i})=\langle \psi |F_{i}|\psi \rangle }$$ .
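The sketch below builds a small three-element qubit POVM, verifies that the elements sum to the identity, and evaluates the outcome probabilities $ \operatorname {tr} (\rho F_{i})$ for an example density matrix; the specific elements and state are illustrative choices, not taken from the text.

```python
# Sketch: outcome probabilities tr(rho F_i) for a simple qubit POVM.
# The POVM elements and the state rho are illustrative examples.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

F1 = 0.5 * np.outer(ket0, ket0.conj())
F2 = 0.5 * np.outer(ket_plus, ket_plus.conj())
F3 = np.eye(2) - F1 - F2                         # completes the POVM (still PSD here)

rho = np.array([[0.75, 0.25],
                [0.25, 0.25]], dtype=complex)    # an example density matrix

print("elements sum to identity:", np.allclose(F1 + F2 + F3, np.eye(2)))
probs = [np.trace(rho @ F).real for F in (F1, F2, F3)]
for i, p in enumerate(probs, start=1):
    print(f"Prob({i}) = {p:.3f}")
print(f"total     = {sum(probs):.3f}")           # probabilities sum to 1
```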

State change due to measurement: A measurement upon a quantum system will generally bring about a change of the quantum state of that system. Writing a POVM does not provide the complete information necessary to describe this state-change process. To remedy this, further information is specified by decomposing each POVM element into a product: $$ {\displaystyle E_{i}=A_{i}^{\dagger }A_{i}.} $$

The Kraus operators $ {\displaystyle A_{i}} $ , named for Karl Kraus, provide a specification of the state-change process. They are not necessarily self-adjoint, but the products $ {\displaystyle A_{i}^{\dagger }A_{i}} $ are. If upon performing the measurement the outcome $ {\displaystyle E_{i}}$ is obtained, then the initial state $ {\displaystyle \rho }$ is updated to:

$$ {\displaystyle \rho \to \rho '={\frac {A_{i}\rho A_{i}^{\dagger }}{\mathrm {Prob} (i)}}={\frac {A_{i}\rho A_{i}^{\dagger }}{\operatorname {tr} (\rho E_{i})}}.}$$

An important special case is the Lüders rule, named for Gerhart Lüders. If the POVM is itself a PVM, then the Kraus operators can be taken to be the projectors onto the eigenspaces of the von Neumann observable:

$$ {\displaystyle \rho \to \rho '={\frac {\Pi _{i}\rho \Pi _{i}}{\operatorname {tr} (\rho \Pi _{i})}}.}$$

If the initial state $ {\displaystyle \rho }$ is pure, and the projectors $ {\displaystyle \Pi _{i}}$ have rank 1, they can be written as projectors onto the vectors $ {\displaystyle |\psi \rangle } $ and $ {\displaystyle |i\rangle }$ , respectively. The formula simplifies thus to:

$$ {\displaystyle \rho =|\psi \rangle \langle \psi |\to \rho '={\frac {|i\rangle \langle i|\psi \rangle \langle \psi |i\rangle \langle i|}{|\langle i|\psi \rangle |^{2}}}=|i\rangle \langle i|.}$$

This has historically been known as the "reduction of the wave packet" or the "collapse of the wavefunction". The pure state $ {\displaystyle |i\rangle } $ implies a probability-one prediction for any von Neumann observable that has $ {\displaystyle |i\rangle }$ as an eigenvector. Introductory texts on quantum theory often express this by saying that if a quantum measurement is repeated in quick succession, the same outcome will occur both times. This is an oversimplification, since the physical implementation of a quantum measurement may involve a process like the absorption of a photon; after the measurement, the photon does not exist to be measured again.

We can define a linear, trace-preserving, completely positive map, by summing over all the possible post-measurement states of a POVM without the normalisation: $$ {\displaystyle \rho \to \sum _{i}A_{i}\rho A_{i}^{\dagger }.}$$

Qubit: An appropriate example of a finite-dimensional Hilbert space is a qubit, a quantum system whose Hilbert space is 2-dimensional. A pure state for a qubit can be written as a linear combination of two orthogonal basis states $ {\displaystyle |0\rangle } $ and $ {\displaystyle |1\rangle } $ with complex coefficients:

$$ {\displaystyle |\psi \rangle =\alpha |0\rangle +\beta |1\rangle }$$

A measurement in the $ {\displaystyle (|0\rangle ,|1\rangle )}$ basis will yield outcome $ {\displaystyle |0\rangle }$ with probability $ {\displaystyle |\alpha |^{2}}$ and outcome $ {\displaystyle |1\rangle }$ with probability $ {\displaystyle |\beta |^{2}} $ , so by normalization,

$$ {\displaystyle |\alpha |^{2}+|\beta |^{2}=1.} $$

An arbitrary state for a qubit can be written as a linear combination of the Pauli matrices, which provide a basis for $ {\displaystyle 2\times 2}$ self-adjoint matrices:  $$ {\displaystyle \rho ={\tfrac {1}{2}}\left(I+r_{x}\sigma _{x}+r_{y}\sigma _{y}+r_{z}\sigma _{z}\right),}$$ where the real numbers $ {\displaystyle (r_{x},r_{y},r_{z})} $ are the coordinates of a point within the unit ball and $$ {\displaystyle \sigma _{x}={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\quad \sigma _{y}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},\quad \sigma _{z}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.}$$

Positive-operator-valued measure (POVM) elements can be represented likewise, though the trace of a POVM element is not fixed to equal 1. The Pauli matrices are traceless and orthogonal to one another with respect to the Hilbert–Schmidt inner product, and so the coordinates $ {\displaystyle (r_{x},r_{y},r_{z})}$ of the state $ {\displaystyle \rho }$ are the expectation values of the three von Neumann measurements defined by the Pauli matrices.  If such a measurement is applied to a qubit, then by the Lüders rule, the state will update to the eigenvector of that Pauli matrix corresponding to the measurement outcome. The eigenvectors of $ {\displaystyle \sigma _{z}}$ are the basis states $ {\displaystyle |0\rangle } $ and $ {\displaystyle |1\rangle } $ , and a measurement of $ {\displaystyle \sigma _{z}}$ is often called a measurement in the "computational basis."  After a measurement in the computational basis, the outcome of a $ {\displaystyle \sigma _{x}} $ or $ {\displaystyle \sigma _{y}} $ measurement is maximally uncertain.
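The Pauli-basis expansion above translates directly into a few lines of numpy: the sketch below builds a qubit density matrix from an assumed Bloch vector and recovers the coordinates as the expectation values $ \operatorname {tr} (\rho \sigma _{i})$ .

```python
# Sketch: qubit state rho = (I + r.sigma)/2 and recovery of the Bloch
# coordinates as Pauli expectation values. The Bloch vector is an example.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

r = np.array([0.3, -0.4, 0.5])                   # |r| <= 1, so rho is a valid state
rho = 0.5 * (I2 + r[0] * sx + r[1] * sy + r[2] * sz)

recovered = [np.trace(rho @ s).real for s in (sx, sy, sz)]
print("recovered Bloch vector:", np.round(recovered, 6))             # matches r
print("trace:", np.trace(rho).real,
      " min eigenvalue:", round(np.linalg.eigvalsh(rho).min(), 6))   # 1.0 and >= 0
```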

A pair of qubits together form a system whose Hilbert space is 4-dimensional. One significant von Neumann measurement on this system is that defined by the Bell basis, a set of four maximally entangled states:

$$ {\displaystyle {\begin{aligned}|\Phi ^{+}\rangle &={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |0\rangle _{B}+|1\rangle _{A}\otimes |1\rangle _{B})\\|\Phi ^{-}\rangle &={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |0\rangle _{B}-|1\rangle _{A}\otimes |1\rangle _{B})\\|\Psi ^{+}\rangle &={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |1\rangle _{B}+|1\rangle _{A}\otimes |0\rangle _{B})\\|\Psi ^{-}\rangle &={\frac {1}{\sqrt {2}}}(|0\rangle _{A}\otimes |1\rangle _{B}-|1\rangle _{A}\otimes |0\rangle _{B})\end{aligned}}}$$

Quantum Coherence

In the Epistemological rendering of quantum mechanics, objects have particle-like and wave-like properties.

For instance, in Young's double-slit experiment electrons or neutrons can be used in the place of light waves. Each electron's wave-function goes through both slits, and hence has two separate split-beams that contribute to the intensity pattern on a screen. According to our understanding of Maxwell's wave theory these two contributions give rise to an intensity pattern of bright bands due to constructive interference, interlaced with dark bands due to destructive interference, on a downstream screen. This ability to interfere and diffract is related to coherence (classical or quantum) of the waves produced at both slits.

When the incident beam is represented by a quantum pure state, the split beams downstream of the two slits are represented as a superposition of the pure states representing each split beam. The quantum description of imperfectly coherent paths is called a mixed state. A perfectly coherent state has a density matrix (also called the "statistical operator") that is a projection onto the pure coherent state and is equivalent to a wave function, while a mixed state is described by a classical probability distribution for the pure states that make up the mixture.

Macroscopic scale quantum coherence leads to novel phenomena, the so-called macroscopic quantum phenomena. For instance, the laser, superconductivity and superfluidity are examples of highly coherent quantum systems whose effects are evident at the macroscopic scale. The macroscopic quantum coherence (off-diagonal long-range order, ODLRO) for superfluidity, and laser light, is related to first-order (1-body) coherence/ODLRO, while superconductivity is related to second-order coherence/ODLRO. (For fermions, such as electrons, only even orders of coherence/ODLRO are possible.) For bosons, a Bose–Einstein condensate is an example of a system exhibiting macroscopic quantum coherence through a multiple occupied single-particle state.

The classical electromagnetic field exhibits macroscopic quantum coherence. The most obvious example is the carrier signal for radio and TV. They satisfy Glauber's quantum description of coherence. Recently M. B. Plenio and co-workers constructed an operational formulation of quantum coherence as a resource theory. They introduced coherence monotones analogous to the entanglement monotones. Quantum coherence has been shown to be equivalent to quantum entanglement in the sense that coherence can be faithfully described as entanglement, and conversely that each entanglement measure corresponds to a coherence measure.

Quantum Harmonic Oscillator

The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. Because an arbitrary smooth potential can usually be approximated as a harmonic potential in the vicinity of a stable equilibrium point, it is one of the most important model systems in quantum mechanics. Furthermore, it is one of the few quantum-mechanical systems for which an exact, analytical solution is known and is therefore a very valuable model to develop. If we use the one dimensional Hamiltonian and substitute $V(x) = \frac{1}{2}kx^2$

We then have the following one dimensional, partial differential equation $$i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{x},t) = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\Psi(\mathbf{x},t)\ +\ \frac{1}{2}kx^2\Psi(\mathbf{x},t)$$

This partial differential equation has the following solution in terms of Hermite Polynomials $$\psi_n(x) = \frac{1}{\sqrt{2^n n!}}{\left(\frac{m \omega}{\pi\hbar}\right)}^{\frac{1}{4}}e^{-\frac{m\omega x^2}{2\hbar}}H_n\left(\sqrt{\frac{m \omega}{\hbar}}\,x\right)$$

The functions $H_n$ are the Hermite polynomials and are shown below in differential form: $$H_n(z) =(-1)^n e^{z^2}\frac{d^n}{dz^n} \left( e^{-z^2}\right)$$

The Hamiltonian of the harmonic Oscillator is:

$$ {\displaystyle {\hat {H}}={\frac {{\hat {p}}^{2}}{2m}}+{\frac {1}{2}}k{\hat {x}}^{2}} $$

Where $ {\textstyle m} $ is the particle's mass, $ {\textstyle k} $ is the force constant, $ {\textstyle \omega ={\sqrt {k/m}}} $ is the angular frequency of the oscillator, $ {\displaystyle {\hat {x}}} $ is the position operator (given by x in the coordinate basis), and $ {\displaystyle {\hat {p}}} $ is the momentum operator (given by $ {\displaystyle {\hat {p}}=-i\hbar \,\partial /\partial x} $ in the coordinate basis). The first term in the Hamiltonian represents the kinetic energy of the particle, and the second term represents its potential energy, as in Hooke's law.

One may write the time-independent Schrödinger equation: $$ {\displaystyle {\hat {H}}\left|\psi \right\rangle =E\left|\psi \right\rangle ~,} $$

Where $ {\displaystyle E} $ denotes a to-be-determined real number that will specify a time-independent energy level, or eigenvalue, and the solution $ {\displaystyle \left|\psi \right\rangle} $ denotes that level's energy eigenstate.

One may solve the differential equation representing this eigenvalue problem in the coordinate basis, for the wave function $ {\displaystyle \langle x |\psi \rangle = \psi(x)} $ , using a spectral method. It turns out that there is a family of solutions. In this basis, they amount to Hermite functions:

$$ {\displaystyle \psi _{n}(x)={\frac {1}{\sqrt {2^{n}\,n!}}}\left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}e^{-{\frac {m\omega x^{2}}{2\hbar }}}H_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),\qquad n=0,1,2,\ldots .} $$

The functions $ {\displaystyle H_{n}(z)} $ are the Hermite polynomials:

$$ {\displaystyle H_{n}(z)=(-1)^{n}~e^{z^{2}}{\frac {d^{n}}{dz^{n}}}\left(e^{-z^{2}}\right).} $$

The corresponding energy levels are:

$$ {\displaystyle E_{n}=\hbar \omega {\bigl (}n+{\tfrac {1}{2}}{\bigr )}=(2n+1){\hbar \over 2}\omega ~.} $$

This energy spectrum is noteworthy for three reasons. First, the energies are quantized, meaning that only discrete energy values (integer-plus-half multiples of ħω) are possible; this is a general feature of quantum-mechanical systems when a particle is confined. Second, these discrete energy levels are equally spaced, unlike in the Bohr model of the atom, or the particle in a box. Third, the lowest achievable energy (the energy of the n = 0 state, called the ground state) is not equal to the minimum of the potential well, but ħω/2 above it; this is called zero-point energy. Because of the zero-point energy, the position and momentum of the oscillator in the ground state are not fixed (as they would be in a classical oscillator), but have a small range of variance, in accordance with the Heisenberg uncertainty principle.

The ground state probability density is concentrated at the origin, which means the particle spends most of its time at the bottom of the potential well, as one would expect for a state with little energy. As the energy increases, the probability density peaks at the classical "turning points", where the state's energy coincides with the potential energy. (See the discussion below of the highly excited states.) This is consistent with the classical harmonic oscillator, in which the particle spends more of its time (and is therefore more likely to be found) near the turning points, where it is moving the slowest. The correspondence principle is thus satisfied. Moreover, special nondispersive wave packets, with minimum uncertainty, called coherent states oscillate very much like classical objects, as illustrated in the figure; they are not eigenstates of the Hamiltonian.
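The closed-form eigenfunctions and energies quoted above are straightforward to evaluate numerically; the sketch below uses the physicists' Hermite polynomials from scipy to build the first few $ \psi _{n}(x)$ , checks their normalization on a grid, and lists the energies $ E_{n}=\hbar \omega (n+{\tfrac {1}{2}})$ (natural units with $ \hbar =m=\omega =1$ are an assumption of the example).

```python
# Sketch: harmonic-oscillator eigenfunctions via Hermite polynomials and
# their energies, in units with hbar = m = omega = 1 (assumed for simplicity).
import numpy as np
from scipy.special import eval_hermite, factorial

hbar = m = omega = 1.0
x = np.linspace(-6, 6, 4001)
dx = x[1] - x[0]

def psi(n, x):
    """Normalized n-th oscillator eigenfunction in the coordinate basis."""
    xi = np.sqrt(m * omega / hbar) * x
    norm = (m * omega / (np.pi * hbar)) ** 0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * np.exp(-xi**2 / 2) * eval_hermite(n, xi)

for n in range(4):
    norm_check = np.sum(psi(n, x) ** 2) * dx               # should be ~1
    E_n = hbar * omega * (n + 0.5)
    print(f"n = {n}: E_n = {E_n:.1f}, <psi_n|psi_n> = {norm_check:.6f}")
```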

The Quantum Mechanical Ladder Operator Method for Harmonic Oscillator:

$$ {\displaystyle {\begin{aligned}\langle n|aa^{\dagger }|n\rangle &=\langle n|\left([a,a^{\dagger }]+a^{\dagger }a\right)|n\rangle =\langle n|(N+1)|n\rangle =n+1\\\Rightarrow a^{\dagger }|n\rangle &={\sqrt {n+1}}|n+1\rangle \\\Rightarrow |n\rangle &={\frac {a^{\dagger }}{\sqrt {n}}}|n-1\rangle ={\frac {(a^{\dagger })^{2}}{\sqrt {n(n-1)}}}|n-2\rangle =\cdots ={\frac {(a^{\dagger })^{n}}{\sqrt {n!}}}|0\rangle .\end{aligned}}} $$

The mathematical equation below is called the Dirac Equation and is the basis of the relativistic form of Quantum Mechanics.

\[\left(\beta mc^{2}+c\left(\sum _{n=1}^{3}\alpha _{n}p_{n}\right)\right)\psi (x,t)=i\hbar {\frac {\partial \psi (x,t)}{\partial t}}\]

The mathematical equation below is called the Mathieu Equation and is very important in understanding the dynamics of an ion in a Paul trap. Such a trapped ion is a computational element used in Quantum Computing.

\[\frac{d^2u}{d\xi^2} +( a_u-2q_u\cos(2\xi))u=0\]

CHAPTER SIX: COMPARISON OF HARMONIC OSCILLATOR MODELS

Part One:    Classical Newtonian Harmonic Oscillator

A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass $ m $ , which experiences a single force $F$ , which pulls the mass in the direction of the point $x = 0 $ and depends only on the position $ x $ of the mass and a constant $ k $ . Balance of forces (Newton's second law) for the system is: $$ {\displaystyle F=ma=m{\frac {\mathrm {d} ^{2}x}{\mathrm {d} t^{2}}}=m{\ddot {x}}=-kx.} $$ Solving this differential equation, we find that the motion is described by the function: $$ {\displaystyle x(t)=A\cos(\omega t+\varphi ),}$$ where: $ {\displaystyle \omega ={\sqrt {\frac {k}{m}}}.}$ The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude $ A $ . In addition to its amplitude, the motion of a simple harmonic oscillator is characterized by its period $ {\displaystyle T=2\pi /\omega }$ , the time for a single oscillation or its frequency $ {\displaystyle f=1/T}$ , the number of cycles per unit time. The position at a given time $ t$ also depends on the phase $ \phi $ , which determines the starting point on the sine wave. The period and frequency are determined by the size of the mass $ m$ and the force constant $ k $ , while the amplitude and phase are determined by the starting position and velocity.
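The closed-form solution above can be checked directly; the sketch below evaluates $ x(t)=A\cos(\omega t+\varphi )$ for assumed values of $ m$ , $ k$ , $ A$ and $ \varphi $ and verifies that it satisfies Newton's second law $ m{\ddot {x}}=-kx$ .

```python
# Sketch: classical simple harmonic oscillator x(t) = A cos(w t + phi),
# with w = sqrt(k/m). Parameter values are illustrative.
import numpy as np

m, k = 0.5, 8.0                     # mass (kg) and spring constant (N/m)
A, phi = 0.1, 0.0                   # amplitude (m) and phase (rad) from initial conditions
w = np.sqrt(k / m)                  # angular frequency
T = 2 * np.pi / w                   # period

t = np.linspace(0, 2 * T, 1000)
x = A * np.cos(w * t + phi)
a = -w**2 * x                       # acceleration, opposite in direction to displacement

print(f"omega = {w:.3f} rad/s, T = {T:.3f} s, f = {1/T:.3f} Hz")
print(f"max |m*a + k*x| = {np.max(np.abs(m * a + k * x)):.2e}")   # ~0: Newton's law holds
```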

The velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position, but with shifted phases. The velocity is maximal for zero displacement, while the acceleration is in the direction opposite to the displacement.

The potential energy stored in a simple harmonic oscillator at position $ x $ is: $ {\displaystyle U={\tfrac {1}{2}}kx^{2}.} $

CHAPTER SEVEN: LASERS AND QUANTUM OPTICS

The problems associated with the interaction between light and matter led to the development of quantum mechanics. However, the subfields of quantum mechanics dealing with matter-light interaction were principally regarded as research into matter rather than into light.

The first device based on amplification by stimulated emission was the "maser", an acronym for "microwave amplification by stimulated emission of radiation". The first maser was built by Charles H. Townes, James P. Gordon, and Herbert J. Zeiger at Columbia University in 1953. Townes, Nikolay Basov and Alexander Prokhorov were awarded the 1964 Nobel Prize in Physics for theoretical work leading to the maser. When similar optical devices were developed they were first known as "optical masers", until "microwave" was replaced by "light" in the acronym.

A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation, a mechanism first articulated by Einstein. The word "laser" is an acronym for "light amplification by stimulated emission of radiation". The first laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories, based on theoretical work by Charles Hard Townes and Arthur Leonard Schawlow.

A laser differs from other sources of light in that it emits light which is coherent. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as laser cutting and lithography. Spatial coherence also allows a laser beam to stay narrow over great distances (collimation), enabling applications such as laser pointers and lidar (light detection and ranging). Lasers can also have high temporal coherence, which allows them to emit light with a very narrow spectrum. Alternatively, temporal coherence can be used to produce ultrashort pulses of light with a broad spectrum but durations as short as a femtosecond.

Research into the principles, design and application of these devices has become an important field. The quantum mechanical processes underlying their operation, coupled with the study of the properties of light itself, came to be known by the now-customary name of quantum optics.

Laser Physics

1.    Stimulated emission:

Stimulated emission can be modelled mathematically by considering an atom that may be in one of two electronic energy states, a lower level state (possibly the ground state) (1) and an excited state (2), with energies $ E_1 $ and $ E_2 $ respectively.

If the atom is in the excited state, it may decay into the lower state by the process of spontaneous emission, releasing the difference in energies between the two states as a photon. The photon will have frequency $ { \nu _{0}} $ and energy ${ h\,\nu _{0}} $ , given by: $$ {\displaystyle E_{2}-E_{1}=h\,\nu _{0}}$$ where $ h $ is Planck's constant.

Alternatively, if the excited-state atom is perturbed by an electric field of frequency ν0, it may emit an additional photon of the same frequency and in phase, thus augmenting the external field, leaving the atom in the lower energy state. This process is known as stimulated emission.

In a group of such atoms, if the number of atoms in the excited state is given by $ N_2 $ , the rate at which stimulated emission occurs is given by: $$ {\displaystyle {\frac {\partial N_{2}}{\partial t}}=-{\frac {\partial N_{1}}{\partial t}}=-B_{21}\,\rho (\nu )\,N_{2}}$$ where the proportionality constant $ B_{21} $ is known as the Einstein $ B $ coefficient for that particular transition, and $ \rho (\nu) $ is the radiation density of the incident field at frequency $ \nu $ . The rate of emission is thus proportional to the number of atoms in the excited state $ N_2 $ , and to the density of incident photons.

At the same time, there will be a process of atomic absorption which removes energy from the field while raising electrons from the lower state to the upper state. Its rate is given by an essentially identical equation: $$ {\displaystyle {\frac {\partial N_{2}}{\partial t}}=-{\frac {\partial N_{1}}{\partial t}}=B_{12}\,\rho (\nu )\,N_{1}.}$$ The rate of absorption is thus proportional to the number of atoms in the lower state, $ N_1 $ . Einstein showed that the coefficient for this transition must be identical to that for stimulated emission:

$$ {\displaystyle B_{12}=B_{21}.}$$

Therefore absorption and stimulated emission are reverse processes proceeding at somewhat different rates.

Another way of viewing this is to look at the net stimulated emission or absorption, treating it as a single process. The net rate of transitions from $ E_2 $ to $ E_1 $ due to this combined process can be found by adding their respective rates, given above:

$$ {\displaystyle {\frac {\partial N_{1}^{\text{net}}}{\partial t}}=-{\frac {\partial N_{2}^{\text{net}}}{\partial t}}=B_{21}\,\rho (\nu )\,(N_{2}-N_{1})=B_{21}\,\rho (\nu )\,\Delta N.}$$

Thus a net power is released into the electric field equal to the photon energy hν times this net transition rate. In order for this to be a positive number, indicating net stimulated emission, there must be more atoms in the excited state than in the lower level: $ {\displaystyle \Delta N>0}$ . Otherwise there is net absorption and the power of the wave is reduced during passage through the medium. The special condition $ {\displaystyle N_{2}>N_{1}}$ is known as a population inversion, a rather unusual condition that must be effected in the gain medium of a laser.

The notable characteristic of stimulated emission compared to everyday light sources (which depend on spontaneous emission) is that the emitted photons have the same frequency, phase, polarization, and direction of propagation as the incident photons. The photons involved are thus mutually coherent. When a population inversion $ {\displaystyle \Delta N>0} $ is present, therefore, optical amplification of incident radiation will take place.

Although energy generated by stimulated emission is always at the exact frequency of the field which has stimulated it, the above rate equation refers only to excitation at the particular optical frequency $ {\displaystyle \nu _{0}}$ corresponding to the energy of the transition. At frequencies offset from $ {\displaystyle \nu _{0}}$ the strength of stimulated (or spontaneous) emission will be decreased according to the so-called line shape. Considering only homogeneous broadening affecting an atomic or molecular resonance, the spectral line shape function is described as a Lorentzian distribution: $$ {\displaystyle g'(\nu )={1 \over \pi }{(\Gamma /2) \over (\nu -\nu _{0})^{2}+(\Gamma /2)^{2}}}$$ where $ {\displaystyle \Gamma} $ is the full width at half maximum.

The peak value of the Lorentzian line shape occurs at the line center, $ {\displaystyle \nu =\nu _{0}}$ . A line shape function can be normalized so that its value at $ {\displaystyle \nu _{0}}$ is unity; in the case of a Lorentzian we obtain: $$ {\displaystyle g(\nu )={g'(\nu ) \over g'(\nu _{0})}={(\Gamma /2)^{2} \over (\nu -\nu _{0})^{2}+(\Gamma /2)^{2}}.}$$ Thus stimulated emission at frequencies away from $ {\displaystyle \nu _{0}}$ is reduced by this factor. In practice there may also be broadening of the line shape due to inhomogeneous broadening, most notably due to the Doppler effect resulting from the distribution of velocities in a gas at a certain temperature. This has a Gaussian shape and reduces the peak strength of the line shape function. In a practical problem the full line shape function can be computed through a convolution of the individual line shape functions involved. Therefore, optical amplification will add power to an incident optical field at frequency $ {\displaystyle \nu }$ at a rate given by: $$ {\displaystyle P=h\nu \,g(\nu )\,B_{21}\,\rho (\nu )\,\Delta N.}$$
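
As a quick numerical illustration of this line-shape factor, the sketch below evaluates the unity-normalized Lorentzian $ g(\nu) $ at a given detuning from line centre. The transition frequency, linewidth and detuning are arbitrary assumptions chosen for illustration, not values taken from the text.

```python
import numpy as np

def lorentzian_normalized(nu, nu0, gamma):
    """Unity-normalized Lorentzian line shape g(nu): equals 1 at line centre nu0.
    gamma is the full width at half maximum (FWHM)."""
    return (gamma / 2.0) ** 2 / ((nu - nu0) ** 2 + (gamma / 2.0) ** 2)

# Assumed example values (not from the text): an optical transition near 474 THz
# with a 6 MHz linewidth, probed 3 MHz off resonance.
nu0 = 474e12          # line-centre frequency, Hz
gamma = 6e6           # FWHM, Hz
detuning = 3e6        # offset from resonance, Hz

g = lorentzian_normalized(nu0 + detuning, nu0, gamma)
print(f"g(nu0 + {detuning:.0e} Hz) = {g:.3f}")   # 0.5 when detuned by half the FWHM
```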

2.    Stimulated emission Cross-Section

The stimulated emission cross section is: $$ {\displaystyle \sigma _{21}(\nu )=A_{21}{\frac {\lambda ^{2}}{8\pi n^{2}}}g'(\nu )} $$ where $ A_{21} $ is the Einstein A coefficient for spontaneous emission from level 2 to level 1, $ \lambda $ is the wavelength of the transition in vacuum, $ n $ is the refractive index of the medium, and $ g'(\nu ) $ is the spectral line shape function defined above.
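
A direct numerical evaluation of this cross section is sketched below, taking $ g'(\nu ) $ to be the area-normalized Lorentzian of the previous section. The wavelength, spontaneous-emission rate, refractive index and linewidth are illustrative, order-of-magnitude assumptions only.

```python
import numpy as np

def lorentzian_area_normalized(nu, nu0, gamma):
    """Area-normalized Lorentzian g'(nu): (1/pi)(gamma/2)/((nu-nu0)^2 + (gamma/2)^2)."""
    return (gamma / (2.0 * np.pi)) / ((nu - nu0) ** 2 + (gamma / 2.0) ** 2)

def sigma_21(nu, nu0, A21, lam, n, gamma):
    """Stimulated-emission cross section sigma_21(nu) = A21 * lam^2 / (8 pi n^2) * g'(nu)."""
    return A21 * lam ** 2 / (8.0 * np.pi * n ** 2) * lorentzian_area_normalized(nu, nu0, gamma)

# Assumed, order-of-magnitude inputs (not from the text):
lam = 1064e-9            # vacuum wavelength, m
nu0 = 2.998e8 / lam      # line-centre frequency, Hz
A21 = 4.3e3              # spontaneous-emission rate, 1/s
n = 1.82                 # refractive index of the host medium
gamma = 120e9            # FWHM of the transition, Hz

print(f"sigma_21 on resonance = {sigma_21(nu0, nu0, A21, lam, n, gamma):.2e} m^2")
```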

3.    Optical Amplification

Stimulated emission can provide a physical mechanism for optical amplification. If an external source of energy stimulates more than 50% of the atoms in the ground state to transition into the excited state, then what is called a population inversion is created. When light of the appropriate frequency passes through the inverted medium, the photons are either absorbed by the atoms that remain in the ground state or they stimulate the excited atoms to emit additional photons of the same frequency, phase, and direction. Since more atoms are in the excited state than in the ground state, an amplification of the input intensity results. The population inversion, in units of atoms per cubic meter, is $$ {\displaystyle \Delta N_{21}=N_{2}-{g_{2} \over g_{1}}N_{1}} $$ where $ g_1$ and $ g_2 $ are the degeneracies of energy levels 1 and 2, respectively.

4.    Small Signal Gain

The intensity (in watts per square meter) of the stimulated emission is governed by the following differential equation: $$ {\displaystyle {dI \over dz}=\sigma _{21}(\nu )\cdot \Delta N_{21}\cdot I(z)} $$ as long as the intensity $ I(z)$ is small enough so that it does not have a significant effect on the magnitude of the population inversion. Grouping the first two factors together, this equation simplifies as: $$ {\displaystyle {dI \over dz}=\gamma _{0}(\nu )\cdot I(z)}$$ where: $$ {\displaystyle \gamma _{0}(\nu )=\sigma _{21}(\nu )\cdot \Delta N_{21}}$$ is the small-signal gain coefficient (in units of radians per meter). We can solve the differential equation using separation of variables: $$ {\displaystyle {dI \over I(z)}=\gamma _{0}(\nu )\cdot dz}$$ Integrating, we find: $$ {\displaystyle \ln \left({I(z) \over I_{in}}\right)=\gamma _{0}(\nu )\cdot z} $$ or $$ {\displaystyle I(z)=I_{in}e^{\gamma _{0}(\nu )z}}$$ where: $ {\displaystyle I_{in}=I(z=0)\,} $ , is the optical intensity of the input signal (in watts per square meter).
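
The exponential small-signal growth can be tabulated directly. In the sketch below the cross section, inversion density, input intensity and length of the gain medium are illustrative assumptions, not values from the text.

```python
import numpy as np

# Minimal sketch of small-signal exponential growth I(z) = I_in * exp(gamma_0 * z).
sigma_21 = 2.8e-23             # assumed stimulated-emission cross section, m^2
delta_N = 1.0e24               # assumed population inversion, atoms per m^3
gamma_0 = sigma_21 * delta_N   # small-signal gain coefficient, 1/m

I_in = 10.0                    # assumed input intensity, W/m^2
z = np.linspace(0.0, 0.1, 6)   # positions along a 10 cm gain medium, m
I = I_in * np.exp(gamma_0 * z)

for zi, Ii in zip(z, I):
    print(f"z = {zi:5.3f} m   I = {Ii:8.2f} W/m^2")
```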

5.    Saturation Intensity

The saturation intensity $ I_S $ is defined as the input intensity at which the gain of the optical amplifier drops to exactly half of the small-signal gain. We can compute the saturation intensity as: $$ {\displaystyle I_{S}={h\nu \over \sigma (\nu )\cdot \tau _{S}}}$$ where $ {\displaystyle h}$ is Planck's constant, $ {\displaystyle \nu } $ is the frequency in Hz, and $ {\displaystyle \tau _{\text{S}}}$ is the saturation time constant, which depends on the spontaneous emission lifetimes of the various transitions between the energy levels related to the amplification.

The minimum value of $ {\displaystyle I_{\text{S}}(\nu )}$ occurs on resonance, where the cross section $ {\displaystyle \sigma (\nu )}$ is the largest. This minimum value is: $$ {\displaystyle I_{\text{sat}}={\frac {\pi }{3}}{hc \over \lambda ^{3}\tau _{S}}} $$ For a simple two-level atom with a natural linewidth $ {\displaystyle \Gamma }$ , the saturation time constant $ {\displaystyle \tau _{\text{S}}=\Gamma ^{-1}}$ .
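
A numerical sketch of the on-resonance saturation intensity follows. The transition wavelength and the excited-state lifetime are illustrative values for a generic two-level transition, not parameters taken from the text.

```python
import math

# Minimal sketch of I_sat = (pi/3) * h * c / (lambda^3 * tau_S).
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light in vacuum, m/s

lam = 780e-9            # assumed transition wavelength, m
tau_S = 26e-9           # assumed saturation time constant (radiative lifetime), s

I_sat = (math.pi / 3.0) * h * c / (lam ** 3 * tau_S)
print(f"I_sat ~ {I_sat:.1f} W/m^2")
```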

6.    General Gain Equation

The general form of the gain equation, which applies regardless of the input intensity, derives from the general differential equation for the intensity $ I $ as a function of position $ z $ in the gain medium: $$ {\displaystyle {dI \over dz}={\gamma _{0}(\nu ) \over 1+{\bar {g}}(\nu ){I(z) \over I_{S}}}\cdot I(z)} $$ where $ {\displaystyle I_{S}} $ is saturation intensity. To solve, we first rearrange the equation in order to separate the variables, intensity $ I$ and position $ z $ : $$ {\displaystyle {dI \over I(z)}\left[1+{\bar {g}}(\nu ){I(z) \over I_{S}}\right]=\gamma _{0}(\nu )\cdot dz} $$

Integrating both sides, we obtain $$ {\displaystyle \ln \left({I(z) \over I_{in}}\right)+{\bar {g}}(\nu ){I(z)-I_{in} \over I_{S}}=\gamma _{0}(\nu )\cdot z} $$ or $$ {\displaystyle \ln \left({I(z) \over I_{in}}\right)+{\bar {g}}(\nu ){I_{in} \over I_{S}}\left({I(z) \over I_{in}}-1\right)=\gamma _{0}(\nu )\cdot z} $$ The gain $ G $ of the amplifier is defined as the optical intensity $ I $ at position $ z $ divided by the input intensity: $$ {\displaystyle G=G(z)={I(z) \over I_{in}}} $$ Substituting this definition into the prior equation, we find the general gain equation: $$ {\displaystyle \ln \left(G\right)+{\bar {g}}(\nu ){I_{in} \over I_{S}}\left(G-1\right)=\gamma _{0}(\nu )\cdot z}$$
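
Because the general gain equation is transcendental in $ G $, it is normally solved numerically. The sketch below solves it by bisection, using the fact that $ G $ lies between 1 and the small-signal gain $ G_0=e^{\gamma _0 z} $. The chosen values of $ \gamma _0 z $, $ {\bar g}(\nu) $ and $ I_{in}/I_S $ are illustrative assumptions.

```python
import math

def gain(gamma0_z, gbar, I_in_over_Is, tol=1e-12):
    """Solve ln(G) + gbar*(I_in/I_S)*(G - 1) = gamma0*z for G by bisection."""
    a = gbar * I_in_over_Is
    f = lambda G: math.log(G) + a * (G - 1.0) - gamma0_z
    lo, hi = 1.0, math.exp(gamma0_z)      # G lies between 1 and the small-signal gain G0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative assumptions: gamma0*z = 2 and gbar = 1.
for ratio in (1e-3, 0.1, 1.0, 10.0):
    print(f"I_in/I_S = {ratio:6g}   G = {gain(2.0, 1.0, ratio):.4f}")
```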

7.    Small Signal Approximation

In the special case where the input signal is small compared to the saturation intensity, in other words $ {\displaystyle I_{in}\ll I_{S}} $, the general gain equation gives the small signal gain as: $$ {\displaystyle \ln(G)=\ln(G_{0})=\gamma _{0}(\nu )\cdot z}$$ or $$ {\displaystyle G=G_{0}=e^{\gamma _{0}(\nu )z}} $$ which is identical to the small signal gain equation (see above).

8.    Large Signal Asymptotic Behavior

For large input signals, where $ {\displaystyle I_{in}\gg I_{S}} $, the gain approaches unity, $ {\displaystyle G\rightarrow 1} $, and the general gain equation approaches a linear asymptote: $$ {\displaystyle I(z)=I_{in}+{\gamma _{0}(\nu )\cdot z \over {\bar {g}}(\nu )}I_{S}} $$

APPENDIX ONE: MATHEMATICAL PRELIMINARIES

This chapter of this webpage provides the reader with the minimum mathematical background required for a basic understanding of Newtonian mechanics, the mechanical theories of Hamilton and Lagrange, the electromagnetic theory of Maxwell, the electrical circuit theory of Kirchhoff, and the quantum theory of Schrödinger. We will be using the Harmonic Oscillator Model as a minimum requirement for the study of Power Systems and Nuclear Reactor Theory. We therefore adopt a practical (Engineering) view of model building, which implies: all models are a lie, but some models are useful! We will be comparing the analysis of the harmonic oscillator in many representations.

Section One: Introduction to Functions

Part One: Definition of Set

A set is the mathematical model for a collection of different objects or things; a set contains elements or members, which can be mathematical objects of any kind: numbers, symbols, points in space, lines, other geometrical shapes, variables, or even other sets. The set with no elements is the empty set; a set with a single element is a singleton. A set may have a finite number of elements or be an infinite set. Two sets are equal if they have precisely the same elements. Sets are basic to modern mathematics and provide a foundation for all of its branches.

Part Two: Introduction to Functions

A function from a set X to a set Y is an assignment of an element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function.

A function, its domain, and its codomain are declared by the notation f: X→Y, and the value of a function f at an element x of X, denoted by f(x), is called the image of x under f, or the value of f applied to the argument x. Functions are also called maps or mappings.

Two functions f and g are equal if their domain and codomain sets are the same and their output values agree on the whole domain. More formally, given f: X → Y and g: X → Y, we have f = g if and only if f(x) = g(x) for all x ∈ X. The range or image of a function is the set of the images of all elements in the domain.

Part Three: Important Functions

1.   Trigonometric Functions:

Trigonometric functions (also called circular functions) are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in Engineering and form the basis for the analysis of circulation models of the atmosphere. They are the simplest periodic functions, and as such are widely used in the study of electrodynamic phenomena through Fourier analysis.

The trigonometric functions most widely used in modern mathematics are the sine, the cosine, and the tangent. Their reciprocals are respectively the cosecant, the secant, and the cotangent, which are less used. Each of these six trigonometric functions has a corresponding inverse function, and an analog among the hyperbolic functions.

The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles. To extend the sine and cosine functions to functions whose domain is the whole real line, geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations. This allows extending the domain of sine and cosine functions to the whole complex plane, and the domain of the other trigonometric functions to the complex plane with some isolated points removed.

$$ {\displaystyle \sin \theta ={\frac {\mathrm {opposite} }{\mathrm {hypotenuse} }}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, {\displaystyle \csc \theta ={\frac {\mathrm {hypotenuse} }{\mathrm {opposite} }}}$$ $$ {\displaystyle \cos \theta ={\frac {\mathrm {adjacent} }{\mathrm {hypotenuse} }}} \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, {\displaystyle \sec \theta ={\frac {\mathrm {hypotenuse} }{\mathrm {adjacent} }}}$$ $$ {\displaystyle \tan \theta ={\frac {\mathrm {opposite} }{\mathrm {adjacent} }}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, {\displaystyle \cot \theta ={\frac {\mathrm {adjacent} }{\mathrm {opposite} }}}$$

2.   The Logarithmic Function:

In mathematics, the logarithm is the inverse function to exponentiation. That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised to produce that number x. In the simplest case, the logarithm counts the number of occurrences of the same factor in repeated multiplication; e.g., since 1000 = 10 × 10 × 10 = $ 10^3 $, the "logarithm base 10" of 1000 is 3, or $ \log_{10}(1000) = 3 $. The logarithm of x to base b is denoted as $ \log_b(x) $.

The logarithm base 10 (that is b = 10) is called the decimal or common logarithm and is commonly used in engineering and Computer Science. The natural logarithm has the number e (that is b ≈ 2.718) as its base; its use is widespread in Engineering, mathematics and physics, because of its simpler integral and derivative. The binary logarithm uses base 2 (that is b = 2) and is frequently used in computer science.

Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations. They were rapidly adopted by navigators, scientists, engineers, surveyors and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition. This is possible because the logarithm of a product is the sum of the logarithms of the factors: $$ {\displaystyle \log _{b}(xy)=\log _{b}x+\log _{b}y,}$$ provided that b, x and y are all positive and b ≠ 1. The slide rule, also based on logarithms, allows quick calculations without tables, but at lower precision. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century, and who also introduced the letter e as the base of natural logarithms.

Logarithmic scales reduce wide-ranging quantities to smaller scopes. For example, the decibel (dB) is a unit used to express ratios as logarithms, mostly for signal power and amplitude (of which sound pressure is a common example). Logarithms are commonplace in Engineering and in measurements of the complexity of algorithms.

3.   The Exponential Function:

The exponential function is a mathematical function denoted by $ {\displaystyle f(x)=\exp(x)}$ or $ {\displaystyle e^{x}}$ (where the argument x is written as an exponent). Unless otherwise specified, the term generally refers to the positive-valued function of a real variable, although it can be extended to the complex numbers or generalized to other mathematical objects like matrices or Lie algebras. The exponential function originated from the notion of exponentiation (repeated multiplication), but modern definitions (there are several equivalent characterizations) allow it to be rigorously extended to all real arguments, including irrational numbers. Its application in Engineering qualifies the exponential function as one of the most important functions in mathematics.

The exponential function satisfies the exponentiation identity: $ {\displaystyle e^{x+y}=e^{x}e^{y}{\text{ for all }}x,y\in \mathbb {R} ,}$ which, along with the definition $ {\displaystyle e=\exp(1)}$ , shows that $ {\displaystyle e^{n}=\underbrace {e\times \cdots \times e} _{n{\text{ factors}}}}$ for positive integers n, and relates the exponential function to the elementary notion of exponentiation. The base of the exponential function, its value at 1,$ {\displaystyle e=\exp(1)}$, is a ubiquitous mathematical constant called Euler's number.

While other continuous nonzero functions $ {\displaystyle f:\mathbb {R} \to \mathbb {R} } $ that satisfy the exponentiation identity are also known as exponential functions, the exponential function exp is the unique real-valued function of a real variable whose derivative is itself and whose value at 0 is 1; that is, $ {\displaystyle \exp '(x)=\exp(x)} $ for all real x, and $ {\displaystyle \exp(0)=1.}$ Thus, exp is sometimes called the natural exponential function to distinguish it from these other exponential functions, which are the functions of the form $ {\displaystyle f(x)=ab^{x},} $ where the base b is a positive real number. The relation $ {\displaystyle b^{x}=e^{x\ln b}}$ for positive b and real or complex x establishes a strong relationship between these functions, which explains this ambiguous terminology.

The real exponential function can also be defined as a power series. This power series definition is readily extended to complex arguments to allow the complex exponential function $ {\displaystyle \exp :\mathbb {C} \to \mathbb {C} }$ to be defined. The complex exponential function takes on all complex values except for 0.

4.   The Dirac Delta Function:

The delta function was introduced by physicist Paul Dirac as a tool for the normalization of state vectors. It also has uses in probability theory and signal processing. Its validity was disputed until Laurent Schwartz developed the theory of distributions where it is defined as a linear form acting on functions. Joseph Fourier presented what is now called the Fourier integral theorem in his treatise "Théorie analytique de la chaleur" in the form: $$ {\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\ \ d\alpha \,f(\alpha )\ \int _{-\infty }^{\infty }dp\ \cos(px-p\alpha )\ ,}$$ which is tantamount to the introduction of the $ {\displaystyle \delta}$ -function in the form: $$ {\displaystyle \delta (x-\alpha )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }dp\ \cos(px-p\alpha )\ .}$$ Augustin Cauchy expressed the theorem using exponentials: $$ {\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\ e^{ipx}\left(\int _{-\infty }^{\infty }e^{-ip\alpha }f(\alpha )\,d\alpha \right)\,dp.} $$ Cauchy pointed out that in some circumstances the order of integration is significant in this result. As justified using the theory of distributions, the Cauchy equation can be rearranged to resemble Fourier's original formulation and expose the $ {\displaystyle \delta}$ -function as: $$ {\displaystyle {\begin{aligned}f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ipx}\left(\int _{-\infty }^{\infty }e^{-ip\alpha }f(\alpha )\,d\alpha \right)\,dp\\[4pt]&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\left(\int _{-\infty }^{\infty }e^{ipx}e^{-ip\alpha }\,dp\right)f(\alpha )\,d\alpha =\int _{-\infty }^{\infty }\delta (x-\alpha )f(\alpha )\,d\alpha ,\end{aligned}}}$$ where the $ {\displaystyle \delta}$ -function is expressed as $$ {\displaystyle \delta (x-\alpha )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ip(x-\alpha )}\,dp\ .}$$
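
The last expression can be made concrete by truncating the $ p $-integral at $ \pm L $, which gives the kernel $ \sin(L(x-\alpha))/(\pi (x-\alpha)) $. The sketch below, using an assumed Gaussian test function, shows numerically that integrating this kernel against $ f $ approaches $ f(x) $ as $ L $ grows.

```python
import numpy as np

def delta_L(x, L):
    """Truncated delta kernel (1/2pi) * integral_{-L}^{L} exp(ipx) dp = sin(Lx)/(pi x),
    written via np.sinc so that x = 0 is handled without division by zero."""
    return (L / np.pi) * np.sinc(L * x / np.pi)

f = lambda a: np.exp(-a ** 2)               # smooth test function (assumed for illustration)
alpha = np.linspace(-40.0, 40.0, 400001)    # integration grid
d_alpha = alpha[1] - alpha[0]
x = 0.7

for L in (5.0, 50.0, 500.0):
    approx = np.sum(delta_L(x - alpha, L) * f(alpha)) * d_alpha
    print(f"L = {L:6.0f}   integral = {approx:.6f}   f(x) = {f(x):.6f}")
```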

Section Two: Complex Numbers

A complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation $ {\displaystyle i^{2}= -1}$ ; every complex number can be expressed in the form $ {\displaystyle a +bi}$ , where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number $ {\displaystyle a +bi}$ , a is called the real part and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols $ {\displaystyle \mathbb {C} } $ or C.

Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation $ {\displaystyle (x+1)^{2}=-9} $ has no real solution, since the square of a real number cannot be negative, but has the two nonreal complex solutions: $ (−1 + 3i)$ and $ ( −1 − 3i) $ .

Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule $ {\displaystyle i^{2}= -1}$ combined with the associative, commutative and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field that has the real numbers as a subfield.

The complex numbers can be viewed as a Cartesian plane, called the complex plane. This allows a geometric interpretation of the complex numbers and their operations, and conversely allows some geometric properties and constructions to be expressed in terms of complex numbers. For example, the real numbers form the real line, which is identified with the horizontal axis of the complex plane. The complex numbers of absolute value one form the unit circle. Addition of a complex number corresponds to a translation in the complex plane, and multiplication by a complex number corresponds to a similarity centered at the origin. Complex conjugation is the reflection symmetry with respect to the real axis. The complex absolute value is a Euclidean norm.

Section Three: Calculus

Calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations.

It has two major branches, differential calculus and integral calculus; differential calculus concerns instantaneous rates of change, and the slopes of curves, while integral calculus concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.

Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. Today, calculus has widespread uses in Engineering and the sciences.

Part One: Differential Calculus

In mathematics, differential calculus is that portion of calculus that studies the rates at which quantities change. The primary object of study in differential calculus is the derivative of a function. The derivative of a function at a point describes the rate of change of the function near that point. The derivative of a function at a point is then simply the slope of the tangent line to its graph at that point.

Even though the tangent line only touches a single point at the point of tangency, it can be approximated by a line that goes through two points. This is known as a secant line. If the two points that the secant line goes through are close together, then the secant line closely resembles the tangent line, and, as a result, its slope is also very similar. The advantage of using a secant line is that its slope can be calculated directly. Consider the two points on the graph $ {\displaystyle (x,f(x))}$ and $ {\displaystyle (x+\Delta x,f(x+\Delta x))}$ where $ {\displaystyle \Delta x}$ is a small number. As before, the slope of the line passing through these two points can be calculated with the formula $ {\displaystyle {\text{slope }}={\frac {\Delta y}{\Delta x}}}$ . This gives: $$ {\displaystyle {\text{slope}}={\frac {f(x+\Delta x)-f(x)}{\Delta x}}}$$ As $ {\displaystyle \Delta x}$ gets closer and closer to ${\displaystyle 0}$ , the slope of the secant line gets closer and closer to the slope of the tangent line. This is formally written as

$$ {\displaystyle \lim _{\Delta x\to 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}}$$ The expression above means 'as $ {\displaystyle \Delta x} $ gets closer and closer to 0, the slope of the secant line gets closer and closer to a certain value'. The value that is being approached is the derivative of $ {\displaystyle f(x)}$ ; this can be written as $ {\displaystyle f'(x)} $ . If $ {\displaystyle y=f(x)}$ , the derivative can also be written as$ {\displaystyle {\frac {dy}{dx}}}$ , with $ {\displaystyle d} $ representing an infinitesimal change. For example, $ {\displaystyle dx} $ represents an infinitesimal change in x. In summary, if $ {\displaystyle y=f(x)}$ , then the derivative of $ {\displaystyle f(x)} $ is: $$ {\displaystyle {\frac {dy}{dx}}=f'(x)=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}} $$ provided such a limit exists. We have thus succeeded in properly defining the derivative of a function, meaning that the 'slope of the tangent line' now has a precise mathematical meaning. Differentiating a function using the above definition is known as differentiation from first principles. The following is the long version of differentiation from first principles, that the derivative of $ {\displaystyle y=x^{2}}$ is $ {\displaystyle 2x}$ :

$$ {\displaystyle {\begin{aligned}{\frac {dy}{dx}}&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}\\&=\lim _{\Delta x\to 0}{\frac {(x+\Delta x)^{2}-x^{2}}{\Delta x}}\\&=\lim _{\Delta x\to 0}{\frac {x^{2}+2x\Delta x+(\Delta x)^{2}-x^{2}}{\Delta x}}\\&=\lim _{\Delta x\to 0}{\frac {2x\Delta x+(\Delta x)^{2}}{\Delta x}}\\&=\lim _{\Delta x\to 0}(2x+\Delta x)\\&=2x.\end{aligned}}}$$
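
The same limit can be observed numerically. The sketch below evaluates the difference quotient for $ f(x)=x^{2} $ at an assumed point $ x=3 $ and watches the secant slope approach $ 2x=6 $ as $ \Delta x $ shrinks.

```python
# Minimal numerical check of differentiation from first principles for f(x) = x**2:
# the difference quotient (f(x+dx) - f(x))/dx approaches f'(x) = 2x as dx -> 0.
f = lambda x: x ** 2
x = 3.0
for dx in (1.0, 0.1, 0.01, 0.001, 1e-6):
    slope = (f(x + dx) - f(x)) / dx
    print(f"dx = {dx:8g}   secant slope = {slope:.6f}")
print(f"exact derivative 2x = {2 * x}")
```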

The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of a function at a point generally determines the best linear approximation to the function at that point.

Differential calculus and integral calculus are connected by the fundamental theorem of calculus, which states that differentiation is the reverse process to integration.

Differentiation has applications in nearly all quantitative disciplines. In Engineering, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of the velocity with respect to time is acceleration. The derivative of the momentum of a body with respect to time equals the force applied to the body. Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in modeling natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra.

Part Two: Integral Calculus

In mathematics, an integral assigns numbers to functions in a way that describes displacement, area, volume, and other concepts that arise by combining infinitesimal data. The process of finding integrals is called integration. Along with differentiation, integration is a fundamental, essential operation of calculus, and serves as a tool to solve problems in Engineering and physics involving the area of an arbitrary shape, the length of a curve, and the volume of a solid.

The integrals enumerated here are those termed definite integrals, which can be interpreted as the signed area of the region in the plane that is bounded by the graph of a given function between two points in the real line. Conventionally, areas above the horizontal axis of the plane are positive while areas below are negative. Integrals also refer to the concept of an antiderivative, a function whose derivative is the given function. In this case, they are called indefinite integrals. The fundamental theorem of calculus relates definite integrals with differentiation and provides a method to compute the definite integral of a function when its antiderivative is known.

Although methods of calculating areas and volumes dated from ancient Greek mathematics, the principles of integration were formulated independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century, who thought of the area under a curve as an infinite sum of rectangles of infinitesimal width. Bernhard Riemann later gave a rigorous definition of integrals, which is based on a limiting procedure that approximates the area of a curvilinear region by breaking the region into infinitesimally thin vertical slabs. In the early 20th century, Henri Lebesgue generalized Riemann's formulation by introducing what is now referred to as the Lebesgue integral; it is more robust than Riemann's in the sense that a wider class of functions are Lebesgue-integrable.

Integrals may be generalized depending on the type of the function as well as the domain over which the integration is performed. For example, a line integral is defined for functions of two or more variables, and the interval of integration is replaced by a curve connecting the two endpoints of the interval. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space.

Terminology and notation

In general, the integral of a real-valued function $ {\displaystyle f(x)}$ , with respect to a real variable $ {\displaystyle x}$ , on an interval $ {\displaystyle [a,b]}$ , is written as $$ {\displaystyle \int _{a}^{b}f(x)\,dx.} $$ The integral sign $ {\displaystyle \int} $ represents integration. The symbol dx, called the differential of the variable x, indicates that the variable of integration is $ {\displaystyle x}$ . The function $ {\displaystyle f(x)}$ is called the integrand, the points $ {\displaystyle a}$ and $ {\displaystyle b}$ are called the limits (or bounds) of integration, and the integral is said to be over the interval $ {\displaystyle [a, b]}$ , called the interval of integration. A function is said to be integrable if its integral over its domain is finite. If limits are specified, the integral is called a definite integral. When the limits are omitted, as in: $$ {\displaystyle \int f(x)\,dx,} $$

the integral is called an indefinite integral, which represents a class of functions (the antiderivatives) whose derivative is the integrand. The fundamental theorem of calculus relates the evaluation of definite integrals to indefinite integrals. There are several extensions of the notation for integrals to encompass integration on unbounded domains and/or in multiple dimensions.

Integrals appear in many practical situations. For instance, from the length, width and depth of a swimming pool which is rectangular with a flat bottom, one can determine the volume of water it can contain, the area of its surface, and the length of its edge. But if it is oval with a rounded bottom, integrals are required to find exact and rigorous values for these quantities. In each case, one may divide the sought quantity into infinitely many infinitesimal pieces, then sum the pieces to achieve an accurate approximation.

For example, to find the area of the region bounded by the graph of the function $ {\displaystyle f(x) = \sqrt{x}} $ between $ x = 0 $ and $ x = 1 $, one can divide the interval into five pieces $ (0, 1/5, 2/5, \ldots, 1) $, then fill a rectangle using the right-end height of each piece (thus $ {\displaystyle \sqrt{\tfrac{1}{5}}, \sqrt{\tfrac{2}{5}}, \ldots, \sqrt{1}} $) and sum their areas to get an approximation of: $$ {\displaystyle \textstyle {\sqrt {\frac {1}{5}}}\left({\frac {1}{5}}-0\right)+{\sqrt {\frac {2}{5}}}\left({\frac {2}{5}}-{\frac {1}{5}}\right)+\cdots +{\sqrt {\frac {5}{5}}}\left({\frac {5}{5}}-{\frac {4}{5}}\right)\approx 0.7497,} $$ which is larger than the exact value. Alternatively, when replacing these subintervals by ones with the left-end height of each piece, the approximation one gets is too low: with twelve such subintervals the approximated area is only 0.6203. However, as the number of pieces increases to infinity, the sum reaches a limit which is the exact value of the area sought (in this case, 2/3). One writes: $$ {\displaystyle \int _{0}^{1}{\sqrt {x}}\,dx={\frac {2}{3}},} $$ which means 2/3 is the result of a weighted sum of function values, $ {\displaystyle \sqrt {x}} $, multiplied by the infinitesimal step widths, denoted by $ {\displaystyle dx} $, on the interval $ {\displaystyle [0, 1]} $.

There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions. The most commonly used definitions are Riemann integrals and Lebesgue integrals.
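
The right-endpoint Riemann sum above is easy to reproduce numerically. The sketch below recovers the value 0.7497 for five pieces and shows the sum approaching 2/3 as the number of pieces grows.

```python
import math

def right_riemann_sqrt(n):
    """Right-endpoint Riemann sum for integral_0^1 sqrt(x) dx with n equal pieces."""
    dx = 1.0 / n
    return sum(math.sqrt((i + 1) * dx) * dx for i in range(n))

for n in (5, 100, 10000):
    print(f"n = {n:6d}   sum = {right_riemann_sqrt(n):.4f}")
print(f"exact value 2/3 = {2/3:.4f}")
```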

Riemann Integral

The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. A tagged partition of a closed interval $ {\displaystyle [a, b]} $ on the real line is a finite sequence: $$ {\displaystyle a=x_{0}\leq t_{1}\leq x_{1}\leq t_{2}\leq x_{2}\leq \cdots \leq x_{n-1}\leq t_{n}\leq x_{n}=b.\,\!} $$ This partitions the interval $ {\displaystyle [a, b]} $ into n sub-intervals $ {\displaystyle [x_{i-1},x_{i}]} $ indexed by i, each of which is "tagged" with a distinguished point $ {\displaystyle t_{i}\in [x_{i-1},x_{i}]} $ . A Riemann sum of a function f with respect to such a tagged partition is defined as

$$ {\displaystyle \sum _{i=1}^{n}f(t_{i})\,\Delta _{i};} $$

Thus each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the width of the sub-interval, $ {\displaystyle \Delta _{i}=x_{i}-x_{i-1}} $ . The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, $ {\displaystyle \max _{i=1,\ldots ,n}\Delta _{i}} $ . The Riemann integral of a function $ {\displaystyle f } $ over the interval $ {\displaystyle [a, b]} $ is equal to $ {\displaystyle S} $ if:

For all $ {\displaystyle \varepsilon >0} $ there exists $ {\displaystyle \delta >0}$ such that, for any tagged partition of $ {\displaystyle [a,b]} $ with mesh less than $ {\displaystyle \delta } $ , $$ {\displaystyle \left|S-\sum _{i=1}^{n}f(t_{i})\,\Delta _{i}\right|<\varepsilon .} $$ When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral.

Lebesgue Integral

It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. Therefore, it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated.

Such an integral is the Lebesgue integral, which exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of the function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral thus in a letter to Paul Montel:

I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.

As Folland puts it, "To compute the Riemann integral of $ {\displaystyle f} $ , one partitions the domain $ {\displaystyle [a, b]} $ into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of $ {\displaystyle f} $ ". The definition of the Lebesgue integral thus begins with a measure, $ {\displaystyle \mu} $ . In the simplest case, the Lebesgue measure $ {\displaystyle \mu (A)} $ of an interval $ {\displaystyle A = [a, b]} $ is its width, $ {\displaystyle b-a} $ , so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.

Using the "partitioning the range of $ {\displaystyle f} $ " philosophy, the integral of a non-negative function $ {\displaystyle f\colon \mathbb {R} \to \mathbb {R} } $ should be the sum over $ {\displaystyle t} $ of the areas of the thin horizontal strips between $ {\displaystyle y = t} $ and $ {\displaystyle y = t + dt} $ . This area is just $ {\displaystyle \mu \{x:f(x)>t\}\,dt}$ . Let $ {\displaystyle f^{*}(t)=\mu \{x:f(x)>t\}}$ . The Lebesgue integral of f is then defined by: $$ {\displaystyle \int f=\int _{0}^{\infty }f^{*}(t)\,dt} $$ where the integral on the right is an ordinary improper Riemann integral. For a suitable class of functions (the measurable functions) this defines the Lebesgue integral. A general measurable function $ {\displaystyle f} $ is Lebesgue-integrable if the sum of the absolute values of the areas of the regions between the graph of $ {\displaystyle f} $ and the x-axis is finite: $$ {\displaystyle \int _{E}|f|\,d\mu <+\infty .}$$ In that case, the integral is, as in the Riemannian case, the difference between the area above the $ {\displaystyle x} $ -axis and the area below the $ {\displaystyle x} $ -axis: $$ {\displaystyle \int _{E}f\,d\mu =\int _{E}f^{+}\,d\mu -\int _{E}f^{-}\,d\mu } $$ where $$ {\displaystyle {\begin{alignedat}{3}&f^{+}(x)&&{}={}\max\{f(x),0\}&&{}={}{\begin{cases}f(x),&{\text{if }}f(x)>0,\\0,&{\text{otherwise,}}\end{cases}}\\&f^{-}(x)&&{}={}\max\{-f(x),0\}&&{}={}{\begin{cases}-f(x),&{\text{if }}f(x)<0,\\0,&{\text{otherwise.}}\end{cases}}\end{alignedat}}}$$
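
The "layer-cake" idea can be checked numerically for a simple function. The sketch below approximates the Lebesgue measure $ \mu \{x:f(x)>t\} $ on $ [0,1] $ by the fraction of grid points exceeding $ t $, for the assumed example $ f(x)=\sqrt{x} $, and recovers the value 2/3 obtained earlier.

```python
import numpy as np

# Minimal sketch: for f(x) = sqrt(x) on [0, 1], the integral of f equals
# integral_0^infty mu{x : f(x) > t} dt, with mu approximated by counting grid points.
x = np.linspace(0.0, 1.0, 100_001)
f = np.sqrt(x)

t = np.linspace(0.0, 1.0, 1001)                # f never exceeds 1, so the t-integral stops there
dt = t[1] - t[0]
mu = np.array([np.mean(f > ti) for ti in t])   # fraction of [0, 1] where f(x) > t

layer_cake = np.sum(mu) * dt
print(f"layer-cake integral ~ {layer_cake:.4f}   (exact value 2/3 = {2/3:.4f})")
```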

Fourier Series

A Fourier series is a sum that represents a periodic function as a sum of sine and cosine waves. The frequency of each wave in the sum, or harmonic, is an integer multiple of the periodic function's fundamental frequency. Each harmonic's phase and amplitude can be determined using harmonic analysis. A Fourier series may potentially contain an infinite number of harmonics. Summing part of but not all the harmonics in a function's Fourier series produces an approximation to that function.

Almost any periodic function can be represented by a Fourier series that converges. Convergence of Fourier series means that as more and more harmonics from the series are summed, each successive partial Fourier series sum will better approximate the function, and will equal the function with a potentially infinite number of harmonics.
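
As a concrete illustration of partial sums, the sketch below evaluates the well-known Fourier series of a unit square wave, $ (4/\pi)\sum_{k\ \mathrm{odd}}\sin(kx)/k $, at an assumed sample point and shows the partial sums approaching the function value as more harmonics are included.

```python
import numpy as np

def square_wave_partial_sum(x, n_harmonics):
    """Partial Fourier series of a unit square wave: (4/pi) * sum over odd k of sin(k x)/k."""
    s = np.zeros_like(x)
    for k in range(1, 2 * n_harmonics, 2):       # odd harmonics 1, 3, 5, ...
        s += (4.0 / np.pi) * np.sin(k * x) / k
    return s

x = np.array([np.pi / 2])                        # middle of the "high" half-period (assumed sample point)
for n in (1, 5, 50, 500):
    print(f"{n:4d} harmonics: partial sum = {square_wave_partial_sum(x, n)[0]:+.4f} (target +1)")
```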

Fourier series can only represent functions that are periodic. However, non-periodic functions can be handled using an extension of the Fourier Series called the Fourier Transform which treats non-periodic functions as periodic with infinite period. This transform thus can generate frequency domain representations of non-periodic functions as well as periodic functions, allowing a waveform to be converted between its time domain representation and its frequency domain representation.

Since Fourier's time, many different approaches to defining and understanding the concept of Fourier series have been discovered, all of which are consistent with one another, but each of which emphasizes different aspects of the topic. Some of the more powerful and elegant approaches are based on mathematical ideas and tools that were not available in Fourier's time. Fourier originally defined the Fourier series for real-valued functions of real arguments, and used the sine and cosine functions as the basis set for the decomposition. Many other Fourier-related transforms have since been defined, extending his initial idea to many applications and birthing an area of mathematics called Fourier analysis.

Fourier Transforms

A Fourier transform (FT) is a mathematical transform that decomposes functions depending on space or time into functions depending on spatial frequency or temporal frequency. That process is also called analysis. The premier Engineering application is the decomposition of the waveforms of electrical signals used in communication technology. The term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of space or time.

The Fourier transform of a function is a complex-valued function representing the complex sinusoids that comprise the original function. For each frequency, the magnitude (absolute value) of the complex value represents the amplitude of a constituent complex sinusoid with that frequency, and the argument of the complex value represents that complex sinusoid's phase offset. The Fourier transform is not limited to functions of time, but the domain of the original function is commonly referred to as the time domain. The Fourier inversion theorem provides a synthesis process that recreates the original function from its frequency domain representation.

Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced the transform in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation.

The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory. The most important example of a function requiring a sophisticated integration theory is the Dirac delta function.

The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a function of 3-dimensional 'position space' to a function of 3-dimensional momentum (or a function of space and time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly vector-valued. Still further generalization is possible to functions on groups, which, besides the original Fourier transform on R or Rn (viewed as groups under addition), notably includes the discrete-time Fourier transform (DTFT, group = Z), the discrete Fourier transform (DFT, group = Z mod N) and the Fourier series or circular Fourier transform (group = S1, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the DFT.

There are several common conventions for defining the Fourier transform of an integrable function $ {\displaystyle f:\mathbb {R} \to \mathbb {C} } $ . One of them is the Fourier transform integral: $$ {\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }f(x)\ e^{-i2\pi \xi x}\,dx,\quad \forall \ \xi \in \mathbb {R} .} $$
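
With this convention the Gaussian $ e^{-\pi x^{2}} $ is its own Fourier transform, which is easy to verify by direct numerical quadrature. The grid and the sample frequencies in the sketch below are arbitrary choices.

```python
import numpy as np

# Minimal numerical check that, with fhat(xi) = integral f(x) exp(-i 2 pi xi x) dx,
# the Gaussian f(x) = exp(-pi x^2) transforms into the same Gaussian in xi.
x = np.linspace(-8.0, 8.0, 16001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x ** 2)

for xi in (0.0, 0.5, 1.0):
    fhat = np.sum(f * np.exp(-1j * 2 * np.pi * xi * x)) * dx
    print(f"xi = {xi:4.1f}   |fhat| = {abs(fhat):.6f}   exp(-pi*xi^2) = {np.exp(-np.pi * xi ** 2):.6f}")
```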

Differential Equations

In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering and physics.

The language of operators allows a compact notation for differential equations: if $$ {\displaystyle L=a_{0}(x)+a_{1}(x){\frac {d}{dx}}+\cdots +a_{n}(x){\frac {d^{n}}{dx^{n}}},}$$ is a linear differential operator, then the equation: $$ {\displaystyle a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}=b(x)} $$ may be rewritten as $$ {\displaystyle Ly=b(x).} $$

Mainly the study of differential equations consists of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly.

Since closed-form solutions to differential equations are seldom available, Engineers have become experts at the numerical solution of differential equations using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
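
As a minimal sketch of such a numerical solution, the harmonic oscillator equation $ y''+\omega ^{2}y=0 $ (the model emphasized throughout this document) can be rewritten as a first-order system and integrated with SciPy's solve_ivp. The value of $ \omega $ and the initial conditions below are arbitrary assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2.0                       # assumed angular frequency, rad/s

def rhs(t, state):
    y, v = state                  # position and velocity
    return [v, -omega ** 2 * y]   # y' = v,  v' = -omega^2 * y

t_eval = np.linspace(0.0, 5.0, 11)
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

# With y(0) = 1 and y'(0) = 0 the exact solution is y(t) = cos(omega * t).
for t, y in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.1f}   numeric y = {y:+.6f}   exact cos(omega t) = {np.cos(omega * t):+.6f}")
```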

Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts.

Part One: Ordinary differential equation and Linear differential equation

A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form: $$ {\displaystyle a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}+b(x)=0,}$$

An ordinary differential equation is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. In general, the solutions of a differential equation cannot be expressed by a closed-form expression and therefore numerical methods are commonly used for solving differential equations on a computer.

Part Two:    Partial differential equations

A partial differential equation is a differential equation that contains unknown multivariable functions and their partial derivatives. Partial Differential Equations are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model.

Partial Differential Equations are used to develop a wide variety of models. They describe phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, and quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. Stochastic partial differential equations generalize partial differential equations for modeling randomness.

Part Three:    Non-Linear differential equations

A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives. There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear Partial Differential Equations are hard problems, and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.

Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is often used as a starting point in representing nonlinear phenomena.

Vector Calculus

Vector calculus, or vector analysis, is concerned with differentiation and integration of vector fields, primarily in 3-dimensional Euclidean space $ {\displaystyle \mathbb {R} ^{3}} $ . The term "vector calculus" is used as a synonym for the broader subject of multivariable calculus, which spans vector calculus as well as partial differentiation and multiple integration. Vector calculus plays an important role in differential geometry and in the study of partial differential equations. It is used extensively in physics and engineering, especially in the description of electromagnetic fields, quantum mechanics, quantum optics, and fluid flow.

Vector calculus was developed from quaternion analysis by J. Willard Gibbs and Oliver Heaviside near the end of the 19th century, and most of the notation and terminology was established by Gibbs and Edwin Bidwell Wilson in their 1901 book, Vector Analysis. In the conventional form using cross products, vector calculus does not generalize to higher dimensions, while the alternative approach of geometric algebra which uses exterior products does.

Scalar Fields

A scalar field associates a scalar value to every point in a space. The scalar is a mathematical number representing a physical quantity. Examples of scalar fields in applications include the temperature distribution throughout space and the pressure distribution in a fluid. These fields are the subject of scalar field theory.

Vector Fields

A vector field is an assignment of a vector to each point in a space. A vector field in the plane, for instance, can be visualized as a collection of arrows with a given magnitude and direction each attached to a point in the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from point to point.

Vector calculus studies various differential operators defined on scalar or vector fields, which are typically expressed in terms of the del operator $ {\displaystyle \nabla }$ , also known as "nabla". The basic operators built from $ {\displaystyle \nabla }$ are the following:

1.   The Gradient

The gradient of a scalar-valued differentiable function $ {\displaystyle f}$ of several variables is the vector field (or vector-valued function) $ {\displaystyle \nabla f}$ whose value at a point $ {\displaystyle p}$ is the vector whose components are the partial derivatives of $ {\displaystyle f}$ at $ {\displaystyle p}$ . That is, for $ {\displaystyle f\colon \mathbb {R} ^{n}\to \mathbb {R} }$ , its gradient $ {\displaystyle \nabla f\colon \mathbb {R} ^{n}\to \mathbb {R} ^{n}}$ is defined at the point $ {\displaystyle p=(x_{1},\ldots ,x_{n})}$ in n-dimensional space as the vector.

$$ {\displaystyle \nabla f(p)={\begin{bmatrix}{\frac {\partial f}{\partial x_{1}}}(p)\\\vdots \\{\frac {\partial f}{\partial x_{n}}}(p)\end{bmatrix}}.} $$

The nabla symbol $ {\displaystyle \nabla }$ , written as an upside-down triangle and pronounced "del", denotes the vector differential operator.

The gradient vector can be interpreted as the "direction and rate of fastest increase". If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative.[2] Further, the gradient is the zero vector at a point if and only if it is a stationary point (where the derivative vanishes). The gradient thus plays a fundamental role in optimization theory, where it is used to maximize a function by gradient ascent.

The gradient is dual to the total derivative $ {\displaystyle df}$ : the value of the gradient at a point is a tangent vector – a vector at each point; while the value of the derivative at a point is a cotangent vector – a linear function on vectors. They are related in that the dot product of the gradient of $ {\displaystyle f}$ at a point $ {\displaystyle p}$ with another tangent vector $ {\displaystyle v}$ equals the directional derivative of $ {\displaystyle f}$ at $ {\displaystyle p}$ of the function along $ {\displaystyle v}$ ; that is, $$ {\textstyle \nabla f(p)\cdot \mathbf {v} ={\frac {\partial f}{\partial \mathbf {v} }}(p)=df_{p}(\mathbf {v} )}$$ .
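
A gradient can be estimated numerically by central differences, and the relation $ \nabla f(p)\cdot \mathbf {v} =\partial f/\partial \mathbf {v}(p) $ checked directly. The function, point and direction vector in the sketch below are assumed for illustration.

```python
import numpy as np

def numerical_gradient(f, p, h=1e-6):
    """Central-difference estimate of the gradient of f at point p (a minimal sketch)."""
    p = np.asarray(p, dtype=float)
    grad = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        grad[i] = (f(p + e) - f(p - e)) / (2.0 * h)
    return grad

# Assumed example: f(x, y) = x**2 * y + sin(y), evaluated at p = (1, 2).
f = lambda p: p[0] ** 2 * p[1] + np.sin(p[1])
p = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])

g = numerical_gradient(f, p)
print("gradient:", g)                       # analytic value: [2*x*y, x**2 + cos(y)]
print("directional derivative along v:", g @ v)
```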

The Laplacian

In Engineering, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols $ {\displaystyle \nabla \cdot \nabla }$ ,$ {\displaystyle \nabla ^{2}}$ (where $ {\displaystyle \nabla }$ is the nabla operator). In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form.

$$ {\displaystyle \nabla ^{2}f=\nabla \cdot \nabla f}$$

In Cartesian coordinates,

$$ {\displaystyle \nabla ^{2}f={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}.}$$

In cylindrical coordinates,

$$ {\displaystyle \nabla ^{2}f={\frac {1}{\rho }}{\frac {\partial }{\partial \rho }}\left(\rho {\frac {\partial f}{\partial \rho }}\right)+{\frac {1}{\rho ^{2}}}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}},}$$

In spherical coordinates:

$$ {\displaystyle \nabla ^{2}f={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}},}$$

The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics: the Laplacian of the gravitational potential due to a given mass density distribution is a constant multiple of that density distribution. Solutions of Laplace's equation are called harmonic functions.

The Laplacian occurs in many differential equations describing physical phenomena. Poisson's equation describes electric and gravitational potentials, the diffusion equation describes heat and fluid flow, the wave equation describes wave propagation, and the Schrödinger equation describes the wave function in quantum mechanics. In image processing and computer vision, the Laplacian operator has been used for various tasks, such as blob and edge detection. The Laplacian is the simplest elliptic operator and is at the core of Hodge theory as well as the results of de Rham cohomology.

The Divergence Theorem

In vector calculus, the divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a theorem which relates the flux of a vector field through a closed surface to the divergence of the field in the volume enclosed.

More precisely, the divergence theorem states that the surface integral of a vector field over a closed surface, which is called the "flux" through the surface, is equal to the volume integral of the divergence over the region inside the surface. Intuitively, it states that "the sum of all sources of the field in a region (with sinks regarded as negative sources) gives the net flux out of the region".

Suppose $ {\displaystyle V}$ is a subset of $ {\displaystyle \mathbb {R} ^{n}}$ (in the case of n = 3, $ {\displaystyle V}$ represents a volume in three-dimensional space) which is compact and has a piecewise smooth boundary $ {\displaystyle S}$ (also indicated with $ {\displaystyle \partial V=S}$ ). If $ {\displaystyle \mathbf {F} }$ is a continuously differentiable vector field defined on a neighborhood of $ {\displaystyle V}$ , then:

$$ {\displaystyle \iiint _{V}\left(\nabla \cdot \mathbf {F} \right)\,dV=\oiint _{S}\left(\mathbf {F} \cdot \mathbf {\hat {n}} \right)\,dS.}$$
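A short worked check (the field and region are chosen here for illustration): for $ {\displaystyle \mathbf {F} =(x,y,z)}$ on the closed unit ball, $ {\displaystyle \nabla \cdot \mathbf {F} =3}$ , while on the unit sphere $ {\displaystyle \mathbf {F} \cdot \mathbf {\hat {n}} =1}$ , so both sides of the theorem give the surface area of the sphere:

$$ {\displaystyle \iiint _{V}(\nabla \cdot \mathbf {F} )\,dV=3\cdot {\tfrac {4}{3}}\pi =4\pi =\oiint _{S}(\mathbf {F} \cdot \mathbf {\hat {n}} )\,dS.}$$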

Hilbert Space

In mathematics and Quantum Optics, Hilbert spaces allow the methods of linear algebra and calculus to be generalized from three-dimensional Euclidean vector spaces to spaces that may be infinite-dimensional. A Hilbert space is a vector space equipped with an inner product which defines a distance function, with respect to which the space is a complete metric space. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces.

Definition: A Hilbert space $ {\displaystyle H}$ is a real or complex inner product space that is also a complete metric space with respect to the distance function induced by the inner product.

To say that $ {\displaystyle H}$ is a complex inner product space means that $ {\displaystyle H}$ is a complex vector space on which there is an inner product $ {\displaystyle \langle x,y\rangle } $ associating a complex number to each pair of elements $ {\displaystyle x,y} $ of $ {\displaystyle H}$ that satisfies the following properties:

  1. The inner product is conjugate symmetric; that is, the inner product of a pair of elements is equal to the complex conjugate of the inner product of the swapped elements: $ {\displaystyle \langle y,x\rangle ={\overline {\langle x,y\rangle }}\,.}$ Importantly, this implies that $ {\displaystyle \langle x,x\rangle } $ is a real number.
  2. The inner product is linear in its first argument. For all complex numbers $ {\displaystyle a}$ and $ {\displaystyle b,} $ $$ {\displaystyle \langle ax_{1}+bx_{2},y\rangle =a\langle x_{1},y\rangle +b\langle x_{2},y\rangle \,.} $$
  3. The inner product of an element with itself is positive definite:$$ {\displaystyle {\begin{alignedat}{4}\langle x,x\rangle >0&\quad {\text{ if }}x\neq 0,\\\langle x,x\rangle =0&\quad {\text{ if }}x=0\,.\end{alignedat}}}$$
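As concrete examples (standard ones, not specific to this text): the inner product induces the norm $ {\displaystyle \|x\|={\sqrt {\langle x,x\rangle }}}$ and the distance $ {\displaystyle d(x,y)=\|x-y\|}$ , and familiar Hilbert spaces include $ {\displaystyle \mathbb {C} ^{n}}$ and the sequence space $ {\displaystyle \ell ^{2}}$ with

$$ {\displaystyle \langle x,y\rangle =\sum _{k}x_{k}{\overline {y_{k}}},}$$

as well as $ {\displaystyle L^{2}}$ with $ {\displaystyle \langle f,g\rangle =\int f\,{\overline {g}}\,d\mu }$ , each of which satisfies the three properties above and is complete.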

    IDFS, INC. (International Diversified Financial Services, Inc.), an existing Texas corporation in good standing, wholly owned by the Founder and CHIEF EXECUTIVE OFFICER of AscenTrust, LLC., is the financial arm of the Matagorda Power Project. The first phase of the Power Project consists of the construction of a 560 MWe combined-cycle power plant. Building this facility will establish the Company as an Independent Power Producer (IPP). When IDFS has been registered with the Public Utility Commission of the State of Texas, we will proceed to the second phase of funding.

    NITEX (Nitex International, LLC) is an existing Texas Limited Liability Company, filing taxes as a C-corporation, in good standing, and wholly owned by the Founder and Senior Project Manager of AscenTrust, LLC. Nitex is being used as the corporate vehicle for the refinery project in Louisiana, which will be referred to in our documentation as the Pelican Bay Refinery Project.

    Advanced Software Development, Inc. was originally incorporated in the State of Delaware in 2000. The Senior System and Software Developer was working on the implementation of a HIPAA (Health Insurance Portability and Accountability Act) compliant Electronic Medical Records system to be used in the rural medical clinics which we were designing and building for a local medical practitioner (in the Conroe area of Montgomery County, Texas). When the internet bubble burst, we were left with nothing more than a working prototype, which we had installed on a laptop running Windows XP. All of the software being developed by the Senior Project Manager of AscenTrust, LLC. or his strategic partners is being developed under the DBA of Advanced Software Development, Inc.