Understanding Mobile Spectrum
April 18, 2012 7:16 AM Subscribe
Understanding Mobile Spectrum (NY Times video) - A short video explaining mobile spectrum and the debate. A graphic that also explains. As the FCC plans to license spectrum previously used by TV broadcasters, FCC Chair Genachowski tries to convince the dubious. Mobile carriers say we are going to have a spectrum crunch. Technical details at the excellent FCC Spectrum Dashboard.
Or is there some inherent limit to how much traffic you can transmit on a particular frequency?
posted by eugenen at 7:59 AM on April 18, 2012
This. Ever try to pick out specific conversations at a crowded bar, party, or concert? While those are sound waves and not radio waves, it works mostly the same way.
The more simultaneous traffic over a particular frequency range, the harder it is to pick out the particular signal you're interested in from the noise of all the other transmissions going on.
posted by radwolf76 at 8:20 AM on April 18, 2012
Also worth mentioning is the LightSquared fiasco. Moral: When you buy property, make sure your neighbors don't mind you having loud parties.
posted by RobotVoodooPower at 8:41 AM on April 18, 2012 [1 favorite]
Or is there some inherent limit to how much traffic you can transmit on a particular frequency?
Yes -- but specifically over a range of frequencies, i.e. a bandwidth. There is a definite law (as in law-of-physics) about how much information can be transmitted in a given bandwidth: the Shannon-Hartley theorem. Claims of beating S-H are one of my BS detectors.
Basically, there are two key inputs: the more bandwidth you have, the more data you get per second, and the more noise you have, the less data you get per second.
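A quick back-of-the-envelope sketch of Shannon-Hartley in Python. (The channel width and SNR figures here are purely illustrative, not any carrier's real link budget.)

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley: the maximum error-free bit rate of a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz channel (a common LTE carrier width) at 20 dB SNR (a factor of 100):
print(shannon_capacity(20e6, 100))  # ~133 Mbit/s ceiling
# Same channel, but signal no stronger than the noise (0 dB SNR):
print(shannon_capacity(20e6, 1))    # 20 Mbit/s ceiling
```

Both knobs are right there in the formula: widen the band and capacity scales linearly; let noise rise and capacity falls off logarithmically.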
The problem, part the first: there are a lot more transmitters out there -- cell phones. Worse, they now want more data -- smartphones. A phone call doesn't take much bandwidth, but the new 4G technologies use much wider swaths of spectrum to handle the increased data load.
The problem, part the second: if you're near me, and we're trying to download Game of Thrones at the same time, on the same frequency, then my signal will appear as noise to you, and vice versa. If it were truly identical *and* broadcast-only, then we could just both receive it, which is why TV works. But when we're dealing with two-way traffic and packet acknowledgement, even though the content may be the same, the data streams aren't -- and, of course, we each need our acks to get back. So, ideally, we need to be on different frequencies, so that our bandwidths don't overlap.
So, as wireless usage grows in both number of stations *and* data rate to each station, we need more and more spectrum to service it. Even various cooperative wireless schemes are limited by this -- they make more efficient use of the available bandwidth, usually in the time domain*, but they're still limited by Shannon-Hartley in the end.
So, if there are too many stations sending too much data for a given bandwidth, the channel chokes. That's the Spectrum Crunch in a nutshell.
* Efficiency in the time domain can be simply stated as "Never let the channel be quiet." Every second that no transmission is made on a channel, that channel's capacity for that second is lost forever. So, if you can rapidly switch other connections on a network to the now-empty channel, you get a better usage of that channel over time, and thus, raise the network's efficiency. Usually, this manifests as better data rates -- "Hey, there's another channel clear, latch on to that for a bit and use it." The hard part is detecting this quickly, telling devices to use it quickly, and of course, telling them to back off some when you need channels for new connections.
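A toy illustration of that use-it-or-lose-it property (the slot capacity and schedule are made up for the example):

```python
# Every idle slot is channel capacity lost forever.
slot_capacity_bits = 1_000_000                  # what the channel carries per slot
schedule = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]       # 1 = someone transmitting, 0 = idle

used = sum(schedule) * slot_capacity_bits
wasted = schedule.count(0) * slot_capacity_bits
print(used, wasted)  # 7000000 3000000 -- 30% of the channel's capacity gone
# A scheduler that hands those idle slots to waiting connections recovers that 30%.
```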
Efficiency in the frequency domain means getting more bits through on a given bandwidth at a given time. When modems started using phase-shift keying, they were able to transmit more bits per second over the same channel width.
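A minimal sketch of why denser modulation helps: at a fixed symbol rate, an M-point constellation carries log2(M) bits per symbol, so a denser constellation pushes more bits through the same channel width (until noise makes the points indistinguishable).

```python
import math

def bits_per_symbol(constellation_points):
    """Each symbol of an M-point constellation carries log2(M) bits."""
    return math.log2(constellation_points)

# Same channel width, same symbol rate -- more bits per symbol:
print(bits_per_symbol(2))   # BPSK:   1 bit/symbol
print(bits_per_symbol(4))   # QPSK:   2 bits/symbol
print(bits_per_symbol(64))  # 64-QAM: 6 bits/symbol
```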
posted by eriko at 8:41 AM on April 18, 2012 [10 favorites]
Gotcha -- so it's a local/regional problem -- you get traffic jams in particular places on particular bandwidths.
posted by eugenen at 9:24 AM on April 18, 2012
Shannon Smannon. New innovations in radio techniques, such as radio vorticity, are making old assumptions about radio capacity obsolete.
posted by three blind mice at 9:27 AM on April 18, 2012
New innovations in radio techniques, such as radio vorticity,
1) Not new in any way. This effect has been known about for years -- indeed, I think it was first described by Ralph Hartley, and there is a very definite reason that name should look familiar. The signals are out of phase along a circular path, so when the path is aligned, the integral of the signal is 0 -- thus, no interphase noise.
This in *no* way violates Shannon-Hartley, it merely decreases the noise over the given bandwidth. Note that noise is an explicit term of the theorem.
2) It is basically impractical to use in fixed-to-fixed installations, and utterly impossible to use in mobile situations, because the two circularly polarized antennas have to be perfectly aligned to get the noise dip and get the most out of the integrated phase. Every little bit you are off increases the integrated phase above 0, which increases noise and drops the data rate rapidly.
3) You also lose a whole bunch of gain. This increases noise over distance, and, lookie, your data rate drops further.
Basically, this is exploiting a known condition of perfectly aligned circular polarization for noise resistance. It's a great trick, and it's one that is, for all intents, useless in the real world, unless you can laser-align your antennas. In the real world, it is far easier to just run a shielded conduit, like, say, coaxial cable or fibre optics, which have vastly higher noise resistance and do not have the "sending the signal to where I don't care" problem that RF transmissions do.
Finally, though:
4) "The Open Access Journal for Physics?" When you can't even make it to arXiv, you win a "Sheesh" from me.
In fact, according to a friend of mine, there are some interesting possible applications at much higher frequencies for microscopy with this technique, but in terms of WiFi? Useless to the Effing degree.
posted by eriko at 9:52 AM on April 18, 2012 [5 favorites]
If I understand it correctly, the radio vorticity article describes sending information over the same frequency with different orbital angular momentum states used to encode different information channels? If that is the case, we have done it already. Current CDMA and UMTS (3G) technology uses orthogonal coding: user A uses code X and user B uses code Y (as opposed to different angular momentum states) over the exact same frequency channel. In the end, noise becomes the limiting factor (i.e. back to Shannon-Hartley).
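A toy despreading example (two-chip Walsh codes, made-up bit patterns) showing how two users on the exact same frequency stay separable as long as their codes are orthogonal:

```python
# Two users share one frequency; orthogonal spreading codes keep them separable.
code_a = [1, 1]    # user A's spreading code
code_b = [1, -1]   # user B's spreading code (dot product with code_a is 0)

def spread(bits, code):
    # Map each data bit (0/1 -> -1/+1) onto the chip sequence.
    return [(1 if b else -1) * c for b in bits for c in code]

def despread(chips, code):
    # Correlate against one user's code; the orthogonal user cancels out.
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(chips[i + j] * code[j] for j in range(n))
        out.append(1 if corr > 0 else 0)
    return out

a_bits, b_bits = [1, 0, 1], [0, 0, 1]
# On the air, both transmissions simply add together:
combined = [x + y for x, y in zip(spread(a_bits, code_a), spread(b_bits, code_b))]
print(despread(combined, code_a))  # [1, 0, 1] -- user A recovered
print(despread(combined, code_b))  # [0, 0, 1] -- user B recovered
```

And as the comment says, this buys separability, not free capacity: add noise to `combined` and the correlations eventually flip bits, which is Shannon-Hartley reasserting itself.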
posted by 7life at 9:58 AM on April 18, 2012
Man, speaking of making things obsolete, the Dunning-Kruger effect is about to become a footnote in the history of psychology and cognition studies, for I am on the very edge of proving my conjecture that smugness and systematic misapprehension are directly proportional.
posted by invitapriore at 10:02 AM on April 18, 2012
Wikipedia has a nice summary table of various wireless communication systems broken down by spectral efficiency, i.e. (bit/s)/Hz, both in terms of a single link and the system as a whole. This is a measure of how technically sophisticated each system is, for example by employing advanced noise compensation techniques like dynamic channel characterization, forward error correction, spread spectrum, sophisticated low noise front end amplifiers, and so on. You've got LTE at ~16, HSDPA at ~8, Wi-Fi at 2.4 for "n" and 0.9 for "g", and so on. Plain voice GSM and CDMA are both around 0.17, and the original analog cell service (AMPS) is 0.0015. You can see why the telecoms wanted to roll out digital, as even at its most primitive it is much more efficient.
It's also interesting to note that v.92 (dialup modems) has the second highest spectral efficiency of the list at 14.0. This implies that a lot of work was put into cramming as much data as possible through a very narrow bandwidth channel. We don't necessarily think of dialup as advanced or sophisticated, but given what it had to work with it did very well.
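The arithmetic behind that figure, assuming the usual roughly 4 kHz voice-channel passband and v.92's 56 kbit/s downstream rate:

```python
# Spectral efficiency = bit rate / channel bandwidth, in (bit/s)/Hz.
# A phone line passes roughly 4 kHz; v.92 squeezes up to 56 kbit/s through it.
downstream_bps = 56_000
phone_line_hz = 4_000
print(downstream_bps / phone_line_hz)  # 14.0 (bit/s)/Hz
```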
posted by Rhomboid at 10:11 AM on April 18, 2012
Are spectrum auctions the reason why we can't have nice things like a world-wide standard frequency for UMTS/LTE networks? Is there any reason to suspect that such a coordination effort would be possible without massive institutional reorganization?
posted by invitapriore at 10:19 AM on April 18, 2012
Spectrum has to be allocated somehow. It could be through auction, lottery, monopoly, or some other system. But regardless, each country has to have some regulatory body that decides who gets to speak on what frequencies, and that's necessarily going to vary from country to country. I'd say it's kind of a miracle that there are fewer than a half dozen different bands in use worldwide, compared to, say, the various kinds of AC power plugs, of which there are probably at least 20 in common use.
posted by Rhomboid at 11:05 AM on April 18, 2012
(I don't mean a half dozen wireless bands total, but for a given major cell technology. There's probably thousands of bands if you count all the random baby monitor/military/marine emergency/etc frequencies.)
posted by Rhomboid at 11:08 AM on April 18, 2012
I'd say it's kind of a miracle that there are fewer than a half dozen different bands in use worldwide, compared to, say, the various kinds of AC power plugs, of which there are probably at least 20 in common use.
There is an interesting ongoing case here in Australia over Apple's new iPad. It was marketed as the "iPad Wi-Fi + 4G", capable of delivering 4G on the 700 MHz and 2.1 GHz spectrum bands. Unfortunately, Australian telcos currently don't offer 4G on those bands, and won't for a number of years.
posted by kithrater at 3:52 PM on April 18, 2012