Science, maths and computers.
If you study physics or engineering, chances are that looking at the above formula gives you painful flashbacks of either trying to prove in an exam that it’s its own inverse, or using it to program a Wiener filter (or in my case, both). But while the Fourier Transform is on the one hand an incredibly useful tool in data/image/audio analysis, it also provides some beautiful explanations for why the world is as it is.
Dubstep (wub wub wub)
Whether or not cooking with Skrillex is your thing, pretty much all music is made to sound better by the Fourier Transform. For the uninitiated, it transforms a function of time into a function of frequency. Think of a graphical equaliser - that is the Fourier Transform of a song as you listen to it in real time. It takes the waveform (which is a function of amplitude against time), and decomposes it into its constituent frequencies. It shows you how much bass, middle and treble there is at any given time, and this lets music producers have their way with your ears by altering the amplitude at different frequencies, or in other words, filtering it.
Dr. Dre is a big fan of the work of Joseph Fourier
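That equaliser read-out can be sketched in a few lines of NumPy - a toy two-tone signal rather than a real song, and nothing here is tied to any particular audio tool:

```python
import numpy as np

# Build one second of a toy "song": a 50 Hz bass note plus a quieter 300 Hz tone.
rate = 1000                      # samples per second
t = np.arange(rate) / rate       # time axis, 0..1 s
wave = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

# The FFT decomposes the waveform into its constituent frequencies.
spectrum = np.abs(np.fft.rfft(wave)) / (rate / 2)   # normalise to sinusoid amplitudes
freqs = np.fft.rfftfreq(rate, d=1 / rate)           # frequency axis in Hz

# The spectrum has two spikes, sitting exactly at the two input frequencies.
peaks = freqs[spectrum > 0.25]
print(peaks)                     # → [ 50. 300.]
```

A producer's filter is then just a matter of scaling `spectrum` at the frequencies you want to boost or cut before transforming back.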
If you’ve ever used Photoshop then you might have used a Gaussian blur. What it does mathematically is treat your image as a 2D grid of numbers and convolve it with a 2D Gaussian function. A single bright spot will spread out into a dim blur like this:
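A minimal sketch of that convolution in plain NumPy (the 9×9 image and 5×5 kernel here are toy values, chosen only for illustration):

```python
import numpy as np

# A 2D Gaussian kernel (the "blur function"), normalised so it sums to 1.
x = np.arange(-2, 3)
g = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 2.0)
kernel = g / g.sum()

# An image that is black except for one bright pixel in the middle.
image = np.zeros((9, 9))
image[4, 4] = 1.0

# Direct 2D convolution (the kernel is symmetric, so no flipping is needed).
blurred = np.zeros_like(image)
for i in range(2, 7):
    for j in range(2, 7):
        blurred[i, j] = np.sum(kernel * image[i - 2:i + 3, j - 2:j + 3])

# The single spike has spread into a dim Gaussian hill; total brightness is conserved.
print(round(blurred[4, 4], 3), round(blurred.sum(), 3))   # → 0.162 1.0
```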
Now, if you have an image that has been blurred, knowing - or guessing - the function it’s been blurred by (it doesn’t have to be digital: a camera lens is an analogue convolving function) allows you to deconvolve the image, not unlike the zoom-and-enhance shtick you see in many sci-fi films. In practice, this deconvolution can only really be performed using the Fourier Transform (in particular, by exploiting the convolution theorem).
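The convolution theorem says that convolution in real space is just multiplication in Fourier space, so undoing a known blur is just division. A toy sketch (the kernel is deliberately chosen so its transform has no zeros; real deblurring also has to fight noise and zeros, which is where Wiener filters come in):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((16, 16))          # a stand-in for a sharp photograph

# A small smoothing kernel, embedded in a full-size array for the FFT.
kernel = np.zeros((16, 16))
kernel[:2, :2] = [[4, 2], [2, 1]]
kernel /= kernel.sum()

# Convolution theorem: blurring = multiplying the two transforms together.
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

# Deconvolution is the reverse: divide the blurred transform by the kernel's.
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / np.fft.fft2(kernel)))

print(np.allclose(restored, image))   # → True
```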
In fact, the Fourier Transform is used ubiquitously in data analysis, signal processing, and image/video/audio enhancement, thanks to its ability to work magic on the crappy file you have and make it clearer. I’m pretty sure it’s actual witchcraft.
Heisenberg’s uncertainty principle (wub wub wub?)
While the Fourier Transform is useful for many practical applications, it has a very profound impact on the nature of reality itself. This is all to do with the relationship it draws between its two variables.
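For reference, one common convention for the transform (the exact placement of the 2π factors varies between fields) is:

```latex
\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, \mathrm{d}t
```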
Above, you can see it equates a function of frequency, ω, with a function of time, t. These variables are known as Fourier conjugates, and they’re flip sides of each other. You can’t just choose any variable you like to be on the left hand side; a transform of a function of time is always a function of frequency. However there are other pairs of variables you can use.
One such pair of Fourier conjugates are the vectors position, r, and wavenumber, k - the wavenumber is proportional to the reciprocal of wavelength (k = 2π/λ), and turns out to be a more natural measure of the spatial variation of a wave.
One of the weirdest results in quantum physics is the Heisenberg Uncertainty Principle, which states that for a given particle (or more formally, a quantum system), certain pairs of variables cannot both be known with arbitrary precision. For instance, a particle cannot have both an exact position and an exact momentum: the more precisely its position is defined, the more uncertain its momentum becomes, and vice versa. It’s not a measurement problem but an inherent property of nature itself. Why?
Back to the Fourier Transform… It turns out that the transform of a sharp, thin spike is a spread-out ‘blurry’ hill like the Gaussian above. Therefore, a function representing a very well-defined position (or time), such as a sharp, thin spike, corresponds to a delocalised, spread-out function of wavenumber (or frequency). In quantum mechanics, momentum and wavenumber are essentially the same thing (they differ only by the constant ħ), and therefore Heisenberg’s Principle must necessarily hold true!
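You can watch this trade-off happen numerically - squeeze a Gaussian pulse in time and its spectrum spreads out in exact inverse proportion (a toy classical signal, nothing quantum-specific):

```python
import numpy as np

def spread(width):
    """RMS frequency spread of the Fourier transform of a Gaussian pulse."""
    t = np.linspace(-50, 50, 4096)
    pulse = np.exp(-t**2 / (2 * width**2))        # sharply localised for small width
    power = np.abs(np.fft.fft(pulse))**2          # power spectrum of the pulse
    f = np.fft.fftfreq(t.size, d=t[1] - t[0])     # frequency axis
    return np.sqrt(np.sum(f**2 * power) / np.sum(power))

# The sharper the spike in time, the more spread out its spectrum, and vice versa:
print(spread(0.5) > spread(1.0) > spread(2.0))    # → True
```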
In fact, there also exists a Heisenberg Principle for time and frequency. In quantum mechanics, frequency is interchangeable with energy (again, via the same constant ħ), and therefore the energy of a particle is uncertain over arbitrarily small time-frames. This allows particles in the quantum regime to ‘borrow’ enough energy to tunnel through a potential barrier, so long as they pay it back in a small enough timeframe to keep Heisenberg happy.
Joseph Fourier is a big fan of the work of Dr. Dre
When you switch on a light or power up your computer, if you stop and think about what’s happening, you probably have a mental image of power pulsing through the cables at the speed of light, feeding energy to whatever it is you’re turning on.
Ask yourself how this works exactly. How does the energy from the power station get to your bulb and light it up? You’d probably say the energy travels through the wires, or more specifically, that the electrons in the wires ‘carry’ energy; that they somehow transfer their energy into the filament of the bulb (maybe by colliding with atoms), making it heat up and give off light. Measure the voltage across a bulb, you’d say, and you’d find that the electrons going into the bulb were at a higher potential than the electrons coming out of the bulb. Therefore, you’d conclude, the energy is transferred along the wire, by the electrons, to the bulb.
You’d be wrong. You wouldn’t just be wrong, in fact, you couldn’t be further from the truth. The electrons do not transfer any energy whatsoever.
The energy is transferred directly from the power source through the air and into the bulb perpendicular to the flow of current.
Wha…? This is one of the weirdest results of electrodynamics. No power is transferred through wires in a circuit!
You can prove this by considering the resistor circuit below. (This could just as easily be a lightbulb circuit)
Although there might be a drop in voltage across the resistor, the current I has to be the same everywhere. As current is the amount of charge passing through the wire per second, this is equivalent to saying ‘what goes in must come out’. In this situation, the current is proportional to the drift velocity of the electrons, i.e., the net speed of their flow through the wire. As we have said that the current going in is the same as the current going out, this means that the velocity of the electrons does not change. This in turn means that the kinetic energy (1/2 mv^2, remember) of the electrons does not change!
A more formal mathematical proof of this comes from considering the Poynting vector. This quantity is a vector field given by the equation at the top of this article and represents the flow of electromagnetic energy in any system. In the system we’re considering, an electric field is generated by the power source. The closed wire constitutes a current loop, which generates a magnetic field. The vector cross product of these two (divided by µ₀, the permeability of free space) gives S, the Poynting vector. Now don’t worry if this is a little too much to grasp - it takes quite a lot of physics and mathematics to get this far. However, if you were to calculate E and B for our little circuit and then compute S, you would get something that looked like this
As you can see, most of the power is actually being transferred to the resistor (or lightbulb, or computer) almost directly through the air, bypassing the wires entirely!
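As a toy numerical check, the Poynting vector really is just a cross product - the field values below are made up purely for illustration:

```python
import numpy as np

mu0 = 4e-7 * np.pi                 # permeability of free space, SI units

# Made-up field values at a single point near the wire (V/m and tesla).
E = np.array([100.0, 0.0, 0.0])    # electric field pointing along x
B = np.array([0.0, 2e-6, 0.0])     # magnetic field pointing along y

# S = (E x B) / mu0: the energy flux points along z, perpendicular to both
# fields - and, in the circuit above, perpendicular to the current in the wire.
S = np.cross(E, B) / mu0
print(S)                           # energy flux of roughly 159 W/m^2 along z
```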
‘What is even the point of the wires?’ I now hear you asking. Well, you need the wires to form a closed loop so that a current can flow and set up the magnetic field; otherwise you’d get S = 0 and no power would be transferred at all!
If it were true that a power source worked by somehow giving electrons kinetic energy which mechanically powered a device, then you wouldn’t need to complete circuits at all. Just like a gushing river powering a mill, these high velocity electrons could power a device.
In fact, in a DC circuit like the one above, electrons only drift along the wire at a fraction of a millimetre per second. In an AC circuit their motion is barely coherent at all (if you could see them moving, you’d see them diffuse back and forth in equal amounts), and once you start thinking about this, the whole Poynting vector business seems much more palatable.
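That drift velocity is easy to estimate yourself from v = I/(nAq); the wire dimensions and current below are typical assumed values for household copper wiring:

```python
import math

I = 1.0                        # current, amps
n = 8.5e28                     # free electrons per cubic metre in copper
A = math.pi * (0.001 / 2)**2   # cross-section of a 1 mm diameter wire, m^2
q = 1.6e-19                    # electron charge, coulombs

# Drift velocity: current divided by (carrier density x area x charge).
v = I / (n * A * q)            # in m/s
print(f"{v * 1000:.3f} mm/s")  # → 0.094 mm/s: under a tenth of a millimetre per second
```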
Next time you plug your laptop charger in, just imagine that in that instant, you’re completing a circuit tens or hundreds of miles in diameter, which sets up a magnetic field, which then enables the power station to transmit energy to your laptop through the air! Cool thought, right?
The Navier-Stokes equation fully describes the flow of a fluid. It is essentially an application of Newton’s second law - that a force acting on a body is equal to its rate of change of momentum. Obviously, a fluid is much harder to model than a single solid body with a given mass and velocity. Instead, the liquid or gas is given a density ρ and a velocity vector field, v(x, y, z, t), which describes the speed and direction of flow at the point (x, y, z) and at a time t.
The left-hand side of the equation then is essentially the ‘mass times acceleration’ part of Newton’s second law, and the right-hand side is the total force - in this case, the sum of a pressure gradient, a stress divergence and an external force.
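In symbols, one common (incompressible, Newtonian) form reads as follows - conventions vary between textbooks, and here μ∇²v stands in for the stress divergence:

```latex
\rho \left( \frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v} \right) = -\nabla p + \mu \nabla^{2} \mathbf{v} + \mathbf{f}
```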
This equation will fully describe the mixing of milk and tea as well as the dynamics of hurricanes.
Unfortunately though, in almost all circumstances it is impossible to solve exactly, and even numerical solutions are hard to compute to a high level of accuracy! In fact, one of the Clay Institute’s $1m Millennium Prizes awaits anyone who can prove whether smooth solutions to the Navier-Stokes equations always exist.
However, the amazing thing about this equation is that it tells us (in theory) everything we need to know about the flow of a fluid. The complex behaviour of a rushing river or a swirling vortex is all locked up in these five simple(ish) terms.
Think about that the next time you stir your tea.
In the early 1920s a young German PhD student called Ernst Ising developed a mathematical model to explain ferromagnetism. Though his work quickly faded into obscurity, it was later rediscovered and ended up revolutionising the field of theoretical physics. However by this time, Ising had long left academia to pursue an engineering career in America and had no idea that his name and eponymous model had become famous in the physics community.
The Ising model attempts to explain how magnetic order can be achieved in a material - that is, how atoms in a material come to mutually align their spins in order to form a magnet. His model assumes that each atom will have a magnetic spin, σ, that can be in one of two states: spin-up or spin-down. For a spin-up atom, σ=1 and for a spin-down atom, σ=-1. Two neighbouring magnetic dipoles contribute an energy -J when their spins align and an energy +J when their spins anti-align. On top of this, each atom feels a local magnetic field, h, which either increases or decreases its energy depending on the spin. The total energy of the system is found by adding up the contributions from each nearest neighbour pair.
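The energy bookkeeping is simple enough to write down directly - here is a tiny 1D chain with illustrative values of J and h, purely as a sketch:

```python
# Total energy of a 1D Ising chain: E = -J * sum(s_i * s_{i+1}) - h * sum(s_i)
def ising_energy(spins, J=1.0, h=0.0):
    pair_term = sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))
    field_term = sum(spins)
    return -J * pair_term - h * field_term

aligned = [1, 1, 1, 1]         # all spins up: every neighbour pair contributes -J
mixed = [1, -1, 1, -1]         # anti-aligned neighbours each contribute +J

print(ising_energy(aligned))   # → -3.0 (three aligned pairs: the lowest-energy state)
print(ising_energy(mixed))     # → 3.0
```

Switching on the field h then biases the energy towards whichever spin direction the field favours, exactly as described above.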
Unfortunately this model almost completely fails at explaining how ferromagnetism works - in one dimension it predicts no magnetic ordering at all above absolute zero. The 1D model can be solved exactly, the 2D case is fiendishly difficult (Onsager finally cracked it in 1944), and to this day no one has managed an exact solution for a 3D system. What it does offer, however, is a very rich analytical toolkit which helps us understand a variety of nearest-neighbour interaction systems including (amongst others) particles in a gas, neural computation and bird flocking.
It was only much later in his life that Ising found out how famous he had become, when he was invited to give a keynote speech at a physics conference. It must have been a great surprise to turn up and find that his PhD work had become gospel in statistical mechanics. He became a professor of physics (though never published again) and died at his home aged 98.
On the 5th of September 1906, while his family were taking an afternoon walk, the scientist and mathematician Ludwig Boltzmann hanged himself. Inscribed on his tombstone is this equation which marks his tremendous contribution to 20th century physics. His theories were well ahead of their time and were largely rejected by his contemporaries. Unfortunately he never lived to see the true impact of his work.
This is the Boltzmann entropy formula and it is one of the most fundamental equations in modern physics. It provides a theoretical basis for the 2nd Law of Thermodynamics - that the entropy (or disorder) of the universe is always increasing. It is a direct consequence of this that buildings decay and become decrepit when abandoned; that wine glasses shatter into shards; and that the universe will one day end in heat death.
If you put ten coins in a row on a desk, there is only one way you can arrange them so they all show heads. In this case we would say the coins are well ordered (or, in thermo language, have a low entropy). On the other hand, there are a couple of hundred ways of arranging the coins so that half of them are heads and half are tails, in which case we would say they were disordered.
Boltzmann formulated this relationship, S = k ln W, between S (the entropy) and W, which counts the number of ways of microscopically arranging a system to give the same overall state.
Though there are only 252 ways you can achieve full disorder with 10 coins, imagine a beaker of water and ink containing billions of billions (of billions) of molecules. When all the ink molecules are clustered together as a drop, the system is well ordered. However, the number of ways of arranging the molecules so that they are well mixed is astronomically larger! It is for this reason, coupled with the law that entropy must always increase over time, that a drop of ink will dissipate and mix when placed in a beaker of water.
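The coin-counting above is easy to verify, and plugging the counts into S = k ln W shows the entropy gap directly (using Boltzmann’s constant in SI units):

```python
import math

k_B = 1.380649e-23               # Boltzmann's constant, J/K

# W: the number of arrangements of 10 coins with a given number of heads.
W_ordered = math.comb(10, 10)    # all heads: exactly one arrangement
W_disordered = math.comb(10, 5)  # five heads, five tails

print(W_ordered, W_disordered)   # → 1 252

# S = k_B ln W: the all-heads state has zero entropy, the mixed state more.
S_ordered = k_B * math.log(W_ordered)
S_disordered = k_B * math.log(W_disordered)
print(S_ordered, S_disordered > S_ordered)   # → 0.0 True
```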
It is strangely appropriate that this equation appears on the tombstone of its creator as it acts as a constant reminder that everything dies, decays, and turns to dust.
(photo credit: illpadrino)