Wifi, Interference and Phasors
Before we get to the next part, which is fun, we need to talk about phasors. No, not the Star Trek kind, the boring kind. Sorry about that.
If you're anything like me, you might have never discovered a use for your trigonometric identities outside of school. Well, you're in luck! With wifi, trigonometry, plus calculus involving trigonometry, turns out to be pretty important to understanding what's going on. So let's do some trigonometry.
Wifi modulation is very complicated, but let's ignore modulation for the moment and just talk about a carrier wave, which is close enough. Here's your basic 2.4 GHz carrier:
- A cos (ω t)
Where A is the transmit amplitude and ω = 2πf, with f = 2.4e9 Hz (2.4 GHz). The wavelength, λ, is the speed of light divided by the frequency, so:
- λ = c / f = 3.0e8 / 2.4e9 = 0.125m
That is, 12.5 centimeters long. (By the way, just for comparison, the wavelength of visible light is around 400-700 nanometers, or roughly 250,000 times shorter than a wifi signal. That comes out to 600 Terahertz or so. But all the same rules apply.)
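Those figures are easy to sanity-check numerically (taking 500 nm as a rough midpoint of the visible range):

```python
c = 3.0e8           # speed of light, m/s
f_wifi = 2.4e9      # wifi carrier frequency, 2.4 GHz
wavelength = c / f_wifi
print(wavelength)   # 0.125 m, i.e. 12.5 cm

f_light = c / 500e-9        # ~500 nm green light -> frequency
print(f_light)              # ~6e14 Hz, i.e. 600 THz
print(wavelength / 500e-9)  # ratio of the two wavelengths
```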
The reason I bring up λ is that we're going to have multiple transmitters. Modern wifi devices have multiple transmit antennas so they can do various magic, which I will try to explain later. Also, inside a room, signals can reflect off the walls, which is a bit like having additional transmitters.
Let's imagine for now that there are no reflections, and just two transmit antennas, spaced some distance apart on the x axis. If you are a receiver also sitting on the x axis, then what you see is two signals:
- cos (ω t) + cos (ω t + φ)
Where φ is the phase difference (between 0 and 2π). The phase difference can be calculated from the distance between the two antennas, r, and λ, as follows:
- φ = 2π r / λ
Of course, a single-antenna receiver can't *actually* see two signals. That's where the trig identities come in.
Let's do some simple ones first. If r = λ, then φ = 2π, so:
- cos (ω t) + cos (ω t + 2π)
= cos (ω t) + cos (ω t)
= 2 cos (ω t)
That one's pretty intuitive. We have two antennas transmitting the same signal, so sure enough, the receiver sees a signal twice as tall. Nice.
The next one is weirder. What if we put the second transmitter 6.25cm away, which is half a wavelength? Then φ = π, so:
- cos (ω t) + cos (ω t + π)
= cos (ω t) - cos (ω t)
= 0
The two transmitters are interfering with each other! A receiver sitting on the x axis (other than right between the two transmit antennas) won't see any signal at all. That's a bit upsetting, in fact, because it leads us to a really pivotal question: where did the energy go?
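You can watch the interference happen numerically. Here's a small sketch (the function name and sampling scheme are mine): sample cos(ω t) + cos(ω t + φ) over one cycle and look at the peak amplitude. Since both waves share a frequency, we can set ω = 1 without loss of generality.

```python
import math

def summed_amplitude(phi, n=1000):
    """Peak of cos(t) + cos(t + phi), sampled over one full period."""
    return max(abs(math.cos(t) + math.cos(t + phi))
               for t in (2 * math.pi * k / n for k in range(n)))

print(summed_amplitude(2 * math.pi))  # ~2.0: in phase, twice as tall
print(summed_amplitude(math.pi))      # ~0.0: half wavelength apart, cancels
print(summed_amplitude(math.pi / 2))  # ~1.414: sqrt(2), partial cancellation
```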
We'll get to that, but first things need to get even weirder.
Let's try φ = π/2.
- cos (ω t) + cos (ω t + π/2)
= cos (ω t) - sin (ω t)
This one is hard to explain, but the short version is, no matter how much you try, you won't get that to come out to a single (Edit 2014/07/29: non-phase-shifted) cos or sin wave. Symbolically, you can only express it as the two separate terms, added together. At each point, the sum has a single value, of course, but there is no formula for that single value which doesn't involve both a cos ωt and a sin ωt. This happens to be a fundamental realization that leads to all modern modulation techniques.

Let's play with it a little and do some simple AM radio (amplitude modulation). That means we take the carrier wave and "modulate" it by multiplying it by a much-lower-frequency "baseband" input signal. Like so:
- f(t) cos (ω t)
Where ω is much higher than any frequency in f(t), so that for any given cycle of the carrier wave, f(t) can be assumed to be "almost constant."
On the receiver side, we get the above signal and we want to discover the value of f(t). What we do is multiply it again by the carrier:
- f(t) cos (ω t) cos (ω t)
= f(t) cos² (ω t)
= f(t) (1 - sin² (ω t))
= ½ f(t) (2 - 2 sin² (ω t))
= ½ f(t) (1 + (1 - 2 sin² (ω t)))
= ½ f(t) (1 + cos (2 ω t))
= ½ f(t) + ½ f(t) cos (2 ω t)
See? Trig identities. Next we do what we computer engineers call a "dirty trick" and, instead of doing "real" math, we'll just hypothetically pass the resulting signal through a digital or analog filter. Remember how we said f(t) changes much more slowly than the carrier? Well, the second term in the above answer changes twice as fast as the carrier. So we run the whole thing through a Low Pass Filter (LPF) at or below the original carrier frequency, removing high frequency terms, leaving us with just this:
→ ½ f(t)
Which we can multiply by 2, and ta da! We have the original input signal.
Now, that was a bit of a side track, but we needed to cover that so we can do the next part, which is to use the same trick to demonstrate how cos(ω t) and sin(ω t) are orthogonal vectors. That means they can each carry their own signal, and we can extract the two signals separately. Watch this:
- [ f(t) cos (ω t) +
g(t) sin (ω t) ] cos (ω t)
= [f(t) cos² (ω t)] + [g(t) cos (ω t) sin (ω t)]
= [½ f(t) (1 + cos (2 ω t))] + [½ g(t) sin (2 ω t)]
= ½ f(t) + ½ f(t) cos (2 ω t) + ½ g(t) sin (2 ω t)
→ ½ f(t)
Notice that by multiplying by the cos() carrier, we extracted just f(t). g(t) disappeared. We can play a similar trick if we multiply by the sin() carrier; f(t) then disappears and we have recovered just g(t).
In vector terms, we are taking the "dot product" of the combined vector with one or the other orthogonal unit vectors, to extract one element or the other. One result of all this is you can, if you want, actually modulate two different AM signals onto exactly the same frequency, by using the two orthogonal carriers.
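Here's a minimal numeric sketch of that dot product (variable names and frequencies are invented): multiply the combined signal by each carrier in turn, then average over a whole number of cycles.

```python
import math

w = 2 * math.pi * 50.0     # carrier, arbitrary units
f_val, g_val = 0.7, -0.3   # hold f(t) and g(t) constant for the demo

fs = 100000
ts = [k / fs for k in range(fs)]   # one unit of time = 50 carrier cycles
rx = [f_val * math.cos(w * t) + g_val * math.sin(w * t) for t in ts]

# "Dot product" with each carrier: multiply, then average.
# cos*sin averages to zero; cos*cos and sin*sin average to 1/2.
i = 2 * sum(s * math.cos(w * t) for s, t in zip(rx, ts)) / fs
q = 2 * sum(s * math.sin(w * t) for s, t in zip(rx, ts)) / fs
print(i, q)   # recovers f_val and g_val
```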
But treating it as just two orthogonal carriers for unrelated signals is a little old fashioned. In modern systems we tend to think of them as just two components of a single vector, which together give us the "full" signal. That, in short, is QAM, one of the main modulation methods used in 802.11n. To oversimplify a bit, take this signal:
- f(t) cos (ω t) + g(t) sin (ω t)
And let's say f(t) and g(t) at any given point in time each have a value that's one of: 0, 1/3, 2/3, or 1. Since each function can have one of four values, there are a total of 4*4 = 16 different possible combinations, which corresponds to 4 bits of binary data. We call that encoding QAM16. If we plot f(t) on the x axis and g(t) on the y axis, that's called the signal "constellation."
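As an illustration only (the bit-to-level assignment here is invented; real 802.11 uses Gray-coded, symmetric levels), mapping 4 bits onto those four levels might look like:

```python
# The four amplitude levels from the text.
levels = [0.0, 1/3, 2/3, 1.0]

def qam16_encode(bits):
    """4-element list of 0/1 bits -> (f, g) amplitude pair."""
    assert len(bits) == 4
    f = levels[bits[0] * 2 + bits[1]]   # first two bits pick the cos() level
    g = levels[bits[2] * 2 + bits[3]]   # last two bits pick the sin() level
    return f, g

print(qam16_encode([1, 0, 0, 1]))   # -> (2/3, 1/3)
```

All 16 bit patterns land on distinct (f, g) points, which is exactly the constellation.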
Anyway we're not attempting to do QAM right now. Just forget I said anything.
Adding out-of-phase signals
Okay, after all that, let's go back to where we started. We had two transmitters both sitting on the x axis, both transmitting exactly the same signal cos(ω t). They are separated by a distance r, which translates to a phase difference φ. A receiver that's also on the x axis, not sitting between the two transmit antennas (which is a pretty safe assumption), will therefore see this:
- cos (ω t) + cos (ω t + φ)
= cos (ω t) + cos (ω t) cos φ - sin (ω t) sin φ
= (1 + cos φ) cos (ω t) - (sin φ) sin (ω t)
One way to think of it is that a phase shift corresponds to a rotation through the space defined by the cos() and sin() carrier waves. We can rewrite the above to do this sort of math in a much simpler vector notation:
- [1, 0] + [cos φ, -sin φ]
= [1 + cos φ, -sin φ]
(Note the minus sign: as the expansion above shows, a phase shift of +φ contributes -sin φ to the sin(ω t) coefficient.)
This is really powerful. As long as you have a bunch of waves at the same frequency, and each one is offset by a fixed amount (phase difference), you can convert them each to a vector and then just add the vectors linearly. The result, the sum of these vectors, is what the receiver will see at any given point. And the sum can always be expressed as the sum of exactly one cos(ω t) and one sin(ω t) term, each with its own magnitude.
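A sketch of that vector bookkeeping, with made-up amplitudes and phases: convert each wave to its cos/sin coefficients, add them component-wise, and compare against summing the waves directly at some instant.

```python
import math

# Each wave: (amplitude, phase offset). First, add them as vectors...
waves = [(1.0, 0.0), (0.5, 2.1), (0.25, -0.7)]
a = sum(m * math.cos(p) for m, p in waves)   # total cos() coefficient
b = sum(m * math.sin(p) for m, p in waves)   # total sin() coefficient

# ...then check against directly summing the waves at one instant,
# using cos(wt + p) = cos(p)cos(wt) - sin(p)sin(wt).
w, t = 2 * math.pi * 2.4e9, 1e-10
direct = sum(m * math.cos(w * t + p) for m, p in waves)
vector = a * math.cos(w * t) - b * math.sin(w * t)
print(direct, vector)   # same value
```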
This leads us to a very important conclusion:
- The sum of reflections of a signal is just an
arbitrarily phase shifted and scaled version of the original.
People worry about reflections a lot in wifi, but because of this rule, they are not, at least mathematically, nearly as bad as you'd think.
Of course, in real life, getting rid of that phase shift can be a little tricky, because you don't know for sure by how much the phase has been shifted. If you just have two transmitting antennas with a known phase difference between them, that's one thing. But when you add reflections, that makes it harder, because you don't know what phase shift the reflections have caused. Not impossible: just harder.
(You also don't know, after all that interference, what happened to the amplitude. But as we covered last time, the amplitude changes so much that our modulation method has to be insensitive to it anyway. It's no different than moving the receiver closer or further away.)
One last point. In some branches of electrical engineering, especially in analog circuit analysis, we use something called "phasor notation." Basically, phasor notation is just a way of representing these cos+sin vectors using polar coordinates instead of x/y coordinates. That makes it easy to see the magnitude and phase shift, although harder to add two signals together. We're going to use phasors a bit when discussing signal power later.
Phasors look like this in the general case:
- A cos (ω t) + B sin (ω t)
= [A, B]
- Magnitude = M = √(A² + B²)
- tan (Phase) = tan φ = B / A, so φ = atan2(B, A)
or the inverse:
- [M cos φ, M sin φ]
= (M cos φ) cos (ω t) + (M sin φ) sin (ω t)
= [A, B]
= A cos (ω t) + B sin (ω t)
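In code, the conversion each way is just rectangular-to-polar coordinates and back (a sketch using Python's math.hypot and math.atan2):

```python
import math

def to_polar(a, b):
    """[A, B] cos/sin coefficients -> (magnitude, phase)."""
    return math.hypot(a, b), math.atan2(b, a)

def from_polar(m, phi):
    """(magnitude, phase) -> [A, B] cos/sin coefficients."""
    return m * math.cos(phi), m * math.sin(phi)

m, phi = to_polar(3.0, 4.0)
print(m)               # 5.0
print(from_polar(m, phi))   # back to ~3.0, ~4.0
```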
There's another way of modeling the orthogonal cos+sin vectors, which is to use complex numbers (i.e. a real axis, for cos, and an imaginary axis, for sin). This is both right and wrong, as imaginary numbers often are; the math works fine, but perhaps not for any particularly good reason, unless your name is Euler. The important thing to notice is that all of the above works fine without any imaginary numbers at all. Using them is a convenience sometimes, but not strictly necessary. The value of cos+sin is a real number, not a complex number.
Next time, we'll talk about signal power, and most importantly, where that power disappears to when you have destructive interference. And from there, as promised last time, we'll cheat Shannon's Law.