You are here.
December 16, 2006 9:40 PM
I was going to whine and moan about another front page post to the (always excellent) xkcd, but the second link makes my day. Why would I ever attempt to second guess you?
posted by muddgirl at 9:48 PM on December 16, 2006
I'm not trying to be snarky, but what's so great about this? I saw it the first time, and then again when I could get a push pin placed in the North America squiggle box, but I just don't really see what the awesome is here.
Am I missing something?
posted by fenriq at 9:57 PM on December 16, 2006
Amazing that Hewlett Packard, which, through its acquisition of DEC, now controls the 15 and 16 nets, has more address space than Japan.
Internally, they've been talking about voluntarily handing back the 16 net for several years, but the legal issues are apparently staggering. It's crap like this that makes me long for the widespread adoption of IPv6, and an end to the preciousness through artificially maintained scarcity of something as ethereal as "address space."
posted by paulsc at 9:58 PM on December 16, 2006
The first company I worked for out of college was Bolt Beranek and Newman (BBN). Whenever I mention this to someone they immediately betray themselves as an Internet history geek - or not.
posted by vacapinta at 10:05 PM on December 16, 2006
fenriq: I'm loath to self-moderate, but here you go - the xkcd map itself is interesting and unique in the way it is able to display both a very complex network topography and the geo-political relationships between those network spaces in a very straightforward manner.
I've seen a lot of network maps but none have ever been so simple, clear and intuitive. Usually such maps are more "ooo, pretty. and overwhelmingly complex. damn. but pretty."
I would like to see the same map rendered with more detail and resolution, naming more ISPs and entities, but there's still an amazing amount of information available in that little comic if one is able to grasp what it is they're looking at. There's inferred, unspoken metadata from the juxtapositions and sizes of the netblocks and entities involved.
The "you are here" mashup is just delicious icing.
posted by loquacious at 10:14 PM on December 16, 2006
Thanks for the explanation, loquacious, I understand the relationships (to a degree) but still really just don't get it. But maybe that's why I'm in PR and not a programmer?
posted by fenriq at 10:33 PM on December 16, 2006
Fenriq, what's cool about it is that, by using the Hilbert curve mapping, contiguities in address space look more "geographically" contiguous than they would using other common address space visualization techniques, like, say, giving each /4 its own row. People tend to think of maps as being made up of regions, and this is the first time, to my knowledge, we've seen wide distribution of a map of the Internet that actually is. Most of the really interesting Internet maps to date are focused on the routing topology or content-level link graphs, rather than addressing, and as such are more of the vertex-and-edge variety.
(Also, on preview, what loquacious said)
Loquacious: a couple of years ago I built some really crufty Perl scripts that pulled bulk information from the regional registries and built them into a single unified tree. Such a tree could easily be turned in an automated fashion into a Hilbert map of /24s. Such a beast would (as of 2004, anyway) have about 2.1 million rectangles on a square image roughly 65,000 pixels to a side (assuming a 16x16 square for each /24 network so you could display something interesting about it). And yes, it would be very, very cool, and I'd probably work on it if I had the time. Of course, I'm a network addressing geek, if there is such a thing, so one should weigh my opinion on this sort of thing accordingly.
And paulsc, it's not quite that bad for Japan, which has more than just 43/8, 126/8, and 133/8 (many of the Asia-Pac regions and bits of the southeastern "various registrars" jungle in the xkcd map are Japanese, too) -- but it is indeed the relative scarcity of allocated IPv4 addresses per host in Japan that's led to wider adoption of IPv6 there: the IPv6 stack in Mac OS X, the BSDs, and (I think) Linux evolved from a Japanese research project undertaken out of a desire to move Japan away from IPv4 ahead of the rest of the world.
Scarcity of IPv4 space is also far less of a problem than it was once perceived, with the widespread adoption of NAT and DHCP as workarounds. Now, the majority of hosts don't have permanent routable addresses. Also note that much of the space in the northwest (former Class A space) is dark: not routed, with no hosts connected to it. If v4 space scarcity did become a problem again, owners of dark space in this region could be compelled to return it for classless re-allocation.
I'd say that it's the lack of real scarcity more than anything else that's retarding adoption of IPv6. Upgrading equipment and operational policies to support IPv6 isn't free for ISPs, and since the pain is largely gone, they can't justify charging more for IPv6 transit (especially as almost all of the content available on the public IPv6 internet, which is what the end users care about, is available at IPv4 addresses as well). What IPv6 needs to see the real light of day is some killer application that people will pay for that won't work on IPv4, most probably due to requirements for large amounts of routable space per local network.
posted by Vetinari at 10:45 PM on December 16, 2006 [3 favorites]
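For anyone who wants to poke at the mapping itself, the distance-to-coordinates conversion for a Hilbert curve is only a few lines. Here is a rough Python sketch (the helper name and the example loop are illustrative only, and the curve's orientation may not exactly match the comic's):

```python
def d2xy(order, d):
    """Convert distance d along a Hilbert curve covering a 2**order x 2**order
    grid into (x, y) cell coordinates (the standard iterative algorithm)."""
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/flip the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

# The comic lays the 256 /8 networks along a Hilbert curve on a 16x16 grid,
# so the first octet is simply the distance along the curve.  Numerically
# adjacent /8s always land in adjacent cells, which is why big allocations
# read as contiguous "countries":
for octet in (14, 15, 16, 17):
    print(octet, d2xy(4, octet))

# The /24 version described above scales the same idea up: 2**24 networks fill
# a 4096 x 4096 Hilbert grid (order 12); at 16x16 pixels per /24 that comes to
# 4096 * 16 = 65,536 pixels on a side.
print(d2xy(12, 15 << 16))   # grid cell for 15.0.0.0/24, somewhere in HP's block
```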
I had no idea Halliburton owned its own Class A IP block. Fascinating to see which multinationals got in on the ground early.
posted by Blazecock Pileon at 10:46 PM on December 16, 2006
This comic is very cool, and seeing it is very cool. What might possibly be cooler is that I wondered why everything was in such an odd order, read a little about Hilbert curves and space-filling curves in general, and now it all makes perfect sense.
Awesome. I saw the internet and I learned about math.
posted by pinespree at 11:03 PM on December 16, 2006
Amazing that Hewlett Packard, which through its acquisition of DEC, now controls the 15 and 16 nets, has more address space than Japan.
No, Japan has two separate blocks, 126 and 133.
posted by delmoi at 11:27 PM on December 16, 2006
I am not here, I am there.
Which reminds me of a little poem.
Yesterday upon the stair
I met a man who had no hair.
He had no hair again today.
I wish that man would grow some fucking hair.
posted by grytpype at 11:44 PM on December 16, 2006 [1 favorite]
I've got your space-filling curve right here, buddy.
posted by hutta at 12:15 AM on December 17, 2006 [1 favorite]
"... Scarcity of IPv4 space is also far less of a problem than it was once percieved, with the widespread adoption of NAT and DHCP as workarounds. Now, the majority of hosts don't have permanent routable addresses. Also note that much of the space in the northwest (former Class A space) is dark - not routed with no hosts connected to it. If v4 space scarcity did become a problem again, owners of dark space in this region could be compelled to return it for classless re-allocation. ..."
posted by Vetinari at 1:45 AM EST on December 17
Um, yes, and mostly, no.
NAT is a hack as either an address-expanding strategy or a security mechanism, and DHCP is a convenience more than an address management method. And quite frankly, the miserly attitude of ICANN/IANA towards consolidations of address blocks in the late 90's was a driving force in the adoption of policies now broadly accepted in the U.S. marketplace for address assignment. It's now accepted by millions of North Americans that a "permanent IP address" has some additional economic value over a temporary lease address, and that most ISPs are somehow justified in charging fee premiums for the use of a so-called "permanent IP" address. This silly idea makes it seem like running servers is something only the elite should do, and continues to spawn even worse hacks, like dynamic DNS.
In my view, it's a bad chicken-and-egg situation. No one "needs" IPv6 since IPv4 has been "patched up", if by "patching up" you mean to endorse Comcast or Time-Warner being able to charge upwards of $60/month for a miserly /29 or even /30 subnet, as standard practice. But since the majority of Internet end users are going to have to use NAT-style connections for much of what they do, primarily for cost and complexity reasons, end-to-end tunneling applications continue to be hit-or-miss propositions, depending on the internal workings of NAT devices at the ends of the tunnel, or in between. And new applications that might be demanding of address space can never gain traction, since they'll never become commercial on IPv4.
It's a high tech Whiskey Tax, all over again, and it sucks. It benefits entrenched interests over growth and innovation, and it simply doesn't bode well for the future of human communication, any more than telephone or postal monopolies, in hindsight, ever did.
posted by paulsc at 12:15 AM on December 17, 2006 [2 favorites]
Interesting that they provide a mark for 192.168/16 and cite RFC1918, but don't for 172.16/12, which is sixteen times its size and allocated under the same RFC. Must be because $60 routers from CompUSA don't use it by default.
posted by George_Spiggott at 12:16 AM on December 17, 2006
Actually a long time ago I had an idea for using fractals like that to show 'infinite' resolution analog TV. Kind of hard to explain without a visual, but the beam would move through time just like the number line in that map. Low bandwidth signals would simply have larger pixels automatically, and theoretically TV resolution could just go up and up with bandwidth and CRT quality.
posted by delmoi at 12:20 AM on December 17, 2006 [1 favorite]
Interesting that they provide a mark for 192.168/16 and cite RFC1918, but don't for 172.16/12, which is sixteen times its size and allocated under the same RFC.
I was surprised to see 10/8 on there labeled as "VPNs". I don't remember there being any difference between it and the other two, and I've never known it to be used specifically for VPNs.
posted by hutta at 12:31 AM on December 17, 2006
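For reference, all three RFC 1918 private ranges are easy to compare side by side; a quick sketch using Python's ipaddress module (purely illustrative):

```python
import ipaddress

# The three private ranges reserved by RFC 1918.  Per the comments above, the
# comic marks 192.168/16 (and labels 10/8 "VPNs") but not 172.16/12, even
# though all three come from the same RFC.
rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

for net in rfc1918:
    print(net, net.num_addresses, "addresses")

# 172.16/12 really is sixteen times the size of 192.168/16:
print(ipaddress.ip_network("172.16.0.0/12").num_addresses
      // ipaddress.ip_network("192.168.0.0/16").num_addresses)   # 16
```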
What in the heck is delmoi talking about?
posted by loquacious at 12:49 AM on December 17, 2006
delmoi: I've seen that done, and for exactly that reason. Tomorrow's World, UK TV show, demo'd it I think in the late 80s. If anyone can find a link, I'd love to know why it didn't go into production.
posted by Leon at 1:06 AM on December 17, 2006
What in the heck is delmoi talking about?
I realize it might sound a bit goofy, and it's also kind of hard to explain without diagrams. Just imagine traversing that space filling curve in order to make an image, where each 'block' represents a pixel like this. Each pixel could then be further sub-divided into smaller pixels like this. You could keep sub-dividing and sub-dividing to get a higher and higher resolution.
Then think of an analog signal. If you only wanted four pixels like in the first image, you would only sample four times. If you wanted 16 pixels, you would sample sixteen times, or you could sample 64 times, and so on. At each step the resolution quadruples. If you subdivided at a higher level than the original camera, it wouldn't make any difference, because all of the nearby pixels would just end up the same color on the screen.
In other words, the resolution shown would be the lesser of the camera and the screen, and an el-cheapo TV could display the same signal as an ultra-high resolution projector.
I have no idea if that's practical or not; it's probably much simpler to just sweep the beam across the screen than do something like this.
posted by delmoi at 1:36 AM on December 17, 2006
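What delmoi describes amounts to progressive refinement along a space-filling curve. Here is a rough Python sketch of the receiving end (the helper names and the toy sample data are mine, not anything delmoi specified), showing why the effective resolution works out to the lesser of camera and display:

```python
def d2xy(order, d):
    """Hilbert-curve distance d -> (x, y) on a 2**order x 2**order grid
    (same standard algorithm as in the sketch further up the thread)."""
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y


def render(samples, display_order):
    """Render a Hilbert-ordered sample stream on a square display with
    2**display_order pixels per side.  len(samples) must be a power of four:
    the camera took 4**k samples while tracing the curve over the scene."""
    camera_order = (len(samples).bit_length() - 1) // 2
    side = 1 << display_order
    image = [[0.0] * side for _ in range(side)]

    if camera_order >= display_order:
        # Camera out-resolves the display: average each aligned run of
        # consecutive samples.  Because consecutive samples are spatially
        # adjacent on the curve, each run falls inside a single display pixel.
        run = 4 ** (camera_order - display_order)
        scale = 1 << (camera_order - display_order)
        for g in range(side * side):
            chunk = samples[g * run:(g + 1) * run]
            fx, fy = d2xy(camera_order, g * run)   # any sample in the run works
            image[fy // scale][fx // scale] = sum(chunk) / run
    else:
        # Display out-resolves the camera: each sample just covers a square
        # block of pixels, so the extra display resolution adds no detail.
        block = 1 << (display_order - camera_order)
        for d, value in enumerate(samples):
            x, y = d2xy(camera_order, d)
            for dy in range(block):
                for dx in range(block):
                    image[y * block + dy][x * block + dx] = value
    return image


# The same 64-sample "signal" shown on a 4x4 and a 16x16 display: the first is
# averaged down to 16 pixels, the second is blown up with no detail gained.
signal = [i / 63 for i in range(64)]      # toy data standing in for the beam
coarse = render(signal, display_order=2)
fine = render(signal, display_order=4)
```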
delmoi: Ah, I missed the tangent. Yeah, having played around a fair amount with fractals as well as a wide range of video technology, I don't think the technology is well suited to full motion video - at least not with digital processing. Arbitrary precision and huge amounts of processing required for that precision and so on and so forth.
It might be doable with an extremely complex or extremely wizardly designed analog computer, but there's a whole bunch of reasons why analog computers and signal processing lost out to digital or discrete processing - a distinct lack of reliability and repeatability, huge variances in all parts and scales of the device from one "identical" device to the next, increased manufacturing costs, dangerous operating voltages and radiation levels, appalling power requirements and so on and so forth.
The closest thing I've seen in the field for commercial use is a fractal-engined photograph up-resolution tool. It's a rather pricy little program. Printers use it to enlarge those crappy 640x480 72 DPI thumbnails some clients insist is print quality art suitable for huge posters or billboards. It was fairly clever at guessing where subpixels could go but it had many drastic limitations and artifact issues that usually made it glaringly obvious that the image had been processed in some weird way.
The other general problems that I see with fractal image compression are that A) while the natural world is very fractal-like, none (or very few) of its fractals are simple, mathematically pure fractals - they're either warped by environmental forces, or better described as compound, complex fractals, or not really fractals at all - and B) fractals really have little immediate use or resonance in the overlapping sciences of optics, photonics, imaging and sight.
Sure, there's all kinds of great stuff there that could be done, but practical application tends to be a lot less exotic, especially in well-defined and long-existing consumer/producer media industries. The crude optics and photonics we've used over the years have about as much use for fractals as I do for an exciting new macroeconomics theory describing the acquisition of meaningful soybean market indicators by tracking the production and consumption ratios of plastic children's novelty shoes.
It would pretty much require a complete retooling of the entire theory, process and methodology of modern video capture, editing and production. Bandwidth problems will probably go away well before then.
All told what you're describing isn't entirely unlike how modern image or video compression works. The end result is the same - more (perceived) resolution per bandwidth unit.
posted by loquacious at 2:24 AM on December 17, 2006
I wonder if he really did bake Google a cake shaped like the internet.
posted by weston at 11:39 AM on December 17, 2006
I am in the U.S.A., but the mashup says I'm in Canada.
posted by beagle at 12:54 PM on December 17, 2006
This thread has been archived and is closed to new comments
posted by Science! at 9:43 PM on December 16, 2006