To mend and defend
June 10, 2024 1:14 PM   Subscribe

ReBoot is widely considered the first all-CGI TV series (although that distinction may belong to the French show "Insektors"). Thirty years after its TV premiere, a team in British Columbia is working on a documentary (LinkTree link) about the show. But they've run into a minor snag (Google Docs link).

As part of the documentary process, the filmmakers are working to digitize the original master tapes. Unfortunately, the tapes only exist in a now relatively rare format, Sony's D1 system, and there aren't a lot of surviving players able to read them. The team's hoping to find some experts and/or some documentation to help achieve their goal.

Reboot previously.
posted by hanov3r (31 comments total) 16 users marked this as a favorite
 
Given the passion of fans and the nostalgia involved, I'm thinking this gets a crowdsourced solution in days, not months.
posted by Hardcore Poser at 1:16 PM on June 10 [3 favorites]


Welcome to my life as a librarian! (Though I am actually putting down most of the A/V work because it's too damn expensive to keep the equipment in working order. Gonna focus on super-common A/V and digital media archaeology.)
posted by humbug at 1:34 PM on June 10 [8 favorites]


I have one of those D1 tapes, thanks to dumpster diving -- I can't find a picture that really gives a good idea of their size (this is probably the closest; the very first image) -- but they look sort of like cassette tapes, except they're 14in/36cm wide and 8in/20cm deep. You carry them around in a plastic case with a handle, like a briefcase. They look silly and impractical, or like you're a hobbit carrying around your girlfriend's mixtape.

On the other end of aging equipment, this weekend I bought another VCR at a thrift shop; I have like 6 but depending on the quality of the tape and how it was recorded, different machines are more likely to get a good image off a thirty year old tape. I don't have many left to digitize, but in case I find more, I want to make sure I'm equipped to archive them.
posted by AzraelBrown at 1:53 PM on June 10 [6 favorites]


I wish them luck. Capturing old tapes is something I've had some painful experience with, and the older and more unusual the format, the harder it gets -- not just because of the availability of equipment, but also because the tapes themselves degrade.

I will say, though, that when they say in the document that:
[We're not looking for] External options to capture the tapes. We want to learn and share the knowledge, not just hire a capture service.
I think that might be a mistake. These things are so fickle that the work is almost always better left to experts, who will also have well-maintained machines, instead of trying to cobble together something that works from three different units, like these people seem to be doing.
posted by Joakim Ziegler at 1:59 PM on June 10 [13 favorites]


Sony and their blasted proprietary formats for everything! I remember when they had their own special memory sticks for cameras. Cool gear that becomes impossible to work with in just a few years.
posted by grumpybear69 at 2:02 PM on June 10 [7 favorites]


Sony and their blasted proprietary formats

The MiniDisc could have been a game-changer -- reliable gigabit storage in a small format way before everyone else -- if Sony hadn't been so worried about piracy and hobbled it.
posted by AzraelBrown at 2:04 PM on June 10 [15 favorites]


Sony and their blasted proprietary formats for everything! I remember when they had their own special memory sticks for cameras. Cool gear that becomes impossible to work with in just a few years.

To be fair, there's never been a non-proprietary format for professional (broadcast) videotape. VHS was pretty much that for distribution to consumers, and DV and HDV kind of were for consumer and some prosumer camcorders, but for broadcast, I can't think of anything. And of course videotape is now largely obsolete; it's all going to files. All hail LTO tape, I guess, which is an open standard, although it's not used that much in broadcast -- but it's big in film.
posted by Joakim Ziegler at 2:04 PM on June 10 [3 favorites]


"Hal, I thought you were going to copy the ReBoot tape."
"WHAT DOES IT LOOK LIKE I'M DOING?"

What if they used an external option to CAPTURE the tapes, and used the money generated from the reboot to pay for resources to solve this (secondary) problem? Like, I'm not opposed to the ol' fashioned call for help. But.

If your goal is to copy the tapes, DO THAT. This is like when I'm coding and get bogged down in a side mission that isn't necessary to the issue at hand. I can totally understand why you wouldn't want to use an external tool/method, but if you have this equipment? Better to copy this shit NOW (especially given tape degradation). Worry about fixing up the equipment and educating people once you have a solution you know works, because who knows how long it will take to find an answer.

They need to figure out what their goal is now.

Sometimes it's easy to get caught and stuck in something and not step outside of yourself during that heavy lifting and reassess what you really want/need, and prioritize as such, but I feel like that's the issue here.
posted by symbioid at 4:06 PM on June 10 [5 favorites]


The above of course assumes that there IS a method to capture outside of these machines. (I would imagine you should be able to create a read-head system that could pull the data and hopefully decode it -- these are digital tapes, yes? I say that like it's an easy feat.) And of course if you're missing important info that may not work, but surely there are machines or techniques to do that? Have they talked to digital archivists? (jasonscott, for example?)
posted by symbioid at 4:07 PM on June 10 [1 favorite]


Ooh, 30-year-old helical scan digital magnetic media? So many dropouts.
posted by scruss at 4:21 PM on June 10 [3 favorites]


The above of course assumes that there IS a method to capture outside of these machines

There certainly is; for instance, here's a commercial service that does it for a hundred bucks per tape. D-1 is an old and obsolete format, but it's not that old and obsolete (I've had captures done from 1" Type C analog tape before, and that's a decade older than D-1, and those machines take up 5 times as much space and are much more finicky). So yes, I agree, insisting on fixing their own machines and doing it themselves seems like a weird digression from their main objective.
posted by Joakim Ziegler at 4:52 PM on June 10 [5 favorites]


[We're not looking for] External options to capture the tapes. We want to learn and share the knowledge, not just hire a capture service.

I’m going to join the chorus of people recoiling at this requirement. It really smacks of the not-invented-here anti-pattern I’m constantly fighting in engineering. My go-to saying when batting this kind of curiosity-itch-scratching down is “we innovate where we differentiate, and nowhere else.” I would assume their top goal is actually producing their documentary, and any effort spent to “learn and share the knowledge” of an arcane tape format from the 1980s is diverting finite resources from that goal. If I was funding this project, I would be pretty furious that this was halting it. If I was on the consumer end, paying to view a documentary about a beloved series from my childhood that influences me to this day, I would be kinda mad about paying for them to “learn and share” anything that isn’t actually ReBoot. They say they’re looking for manuals and don’t want armchair technicians, but why should I pay for them to gain experience as technicians? I just wanna see stuff about ReBoot.

When you are able to do something, it is very difficult to choose not to do something. I despise the 4-Hour Workweek, but it did have one concept that stuck with me: the people who most need to outsource are usually the least willing to do so. Not pulling on every thread you can and knowing when to just use a commercial-off-the-shelf (COTS) solution takes discipline. Developing this discipline is often a bit unintuitive, I think because most people’s career progression begins with their time valued very low and money valued very high, which spurs you to learn and throw loads of your time at problems instead of solving them with money you don’t have. But eventually the time value:money value flips, quite immediately in some cases, and it’s often hard for people to make that transition. It’s not easy to go from telling yourself “I can do this!” to constantly asking “should I do this?”

Even if this team of documentarians could pull off work that I expect will require at least an electrical engineer, I highly doubt they will be able to do so on their first try better than a capture service that gets data off magnetic tapes professionally and has institutional knowledge built up from experience. I don’t think they’d be able to do it cheaper either, unless they are valuing their time at $0. They have two different models of VTRs, and statements like “While all 3 machines have some sort of issue, there should be enough to cobble together something that works” do not fill me with confidence; that’s not really how electronics work.

What really gets my goat, however, is that the supply of original master tapes is finite and actively degrading. Even if they value their time at zero, aren’t taking anyone else’s money, and are willing to spend months or years working on skills and problems that are not documentary-making, there is a non-zero chance that they could damage the original master tapes, either directly through a mistake while they’re learning, or indirectly through the additional decay and degradation the tapes will experience while they skill up, when the tapes could have been scanned by an external service… oh boy. Yeah, not happy. Send those tapes to a capture service immediately, and then, if you really must play around more, do it later, after you’ve captured the data. I just... I’m pretty certain an external service is going to do what they actually need done better and faster than they will, with less risk to boot. What they want and what they need here are different.

ReBoot is important to me. I would absolutely shell out a few bucks (especially if there’s a commercial service that does it for a hundred bucks a tape!!!! Thanks for putting that into perspective, Joakim Ziegler) for them to send those master tapes to the service and maybe hopefully upload the uncompressed data. The D1 wiki page says each cassette stores 94 minutes of footage. This is a finite, highly predictable cost. If they were to damage the masters in the process of experimenting when a professional service exists, I would have a very hard time forgiving them.



Bonus tangent: I read the D1 link and it’s kinda insane for a format developed in 1986, especially at that price point (which I’m assuming is in 1986 dollars). 4:2:2 seems almost insane; I wouldn’t have expected NTSC gear to support more than 4:2:0, or CRTs to be able to reproduce the additional color information at that point in history. Shows what I know. Human vision processes about 3x more brightness/darkness data than it does color data (simplification! The ratio varies throughout the retina – peripheral is different from fovea, which is different from the itty bitty center of your fovea. Also we have greater, um, temporal resolution for luminance than color, due to differences in response times of rods & cones). So you can safely chuck a bunch of color data from your recording and reallocate that bandwidth to luminance without much reduction in perceived image quality. That’s basically the difference between 4:2:2 and 4:2:0 – 4:2:2 has much greater color resolution. But not as much as 4:4:4! Always pick 4:4:4 if you want to use a TV as a monitor to display fine details like text, or else you’ll get tons of ugly artifacts that look like crappy anti-aliasing and render small fonts unreadable; in general, edges can’t be as crisp under 4:2:2 or 4:2:0. Even today, most TV signals are 4:2:2 or 4:2:0; it’s pretty hard for most people to notice the difference, especially in a moving image. But static text? Oh boy.
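To put rough numbers on that tradeoff, here’s a toy Python sketch (my own arithmetic, nothing from the D1 page) counting how many chroma samples each scheme keeps for an SD-ish 720x480 raster:

```python
# Toy illustration: chroma sample counts per frame for common subsampling
# schemes. The (h, v) factors say how much the chroma planes are decimated
# horizontally and vertically relative to luma.
def chroma_samples(scheme: str, width: int, height: int) -> int:
    """Total Cb+Cr samples for a frame of width x height luma pixels."""
    h_factor, v_factor = {
        "4:4:4": (1, 1),  # full chroma resolution
        "4:2:2": (2, 1),  # half horizontal chroma (what D-1 records)
        "4:2:0": (2, 2),  # half horizontal AND half vertical chroma
    }[scheme]
    return 2 * (width // h_factor) * (height // v_factor)  # 2 planes: Cb, Cr

for s in ("4:4:4", "4:2:2", "4:2:0"):
    print(s, chroma_samples(s, 720, 480))
```

So 4:2:2 throws away half the color data relative to 4:4:4, and 4:2:0 throws away half of what’s left, which is why it’s so tempting when bandwidth is scarce.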
posted by 1024 at 5:17 PM on June 10 [11 favorites]


I'm on a few mailing lists for people who work with old video formats, and this was discussed at length back in December. The consensus was that D-1 was a finicky format that didn't interchange well between machines when it was originally launched, and the tapes have usually physically degraded over time. Transferring from D-1 these days needs not just an uncommon machine but full-time attention from an operator to keep the machine aligned and respond when something goes wrong to avoid damage to the tapes. There are specialist facilities houses that can do it, and some national archives (e.g. the BFI), but it's a difficult problem and not something they should be attempting themselves.
posted by offog at 6:12 PM on June 10 [7 favorites]


But static text? Oh boy.
posted by 1024 at 5:17 PM on June 10


Static text? You’ve been hanging out with 768 again, haven’t you.
posted by mubba at 6:17 PM on June 10 [2 favorites]


4:2:2 seems almost insane, I wouldn’t have expected NTSC to support more than 4:2:0 or CRTs to be able to reproduce the additional color information at that point in history.

Ironically, the whole reason we had 4:2:2 and not something like 4:2:0 at that point was exactly the state of the technology. 4:2:2 does half-resolution color sampling along the scanline, so it requires no memory at all, and it was common in professional broadcast equipment in the 80s. 4:2:0, however, is strictly an artifact of the digital age. Why? Because to do 4:2:0, you need to keep the previous scanline in some sort of memory, a huge cost in the 80s.

Most analog video equipment had no concept of "memory" at all, the signal is in sync with the beam scanning the image onto the CRT, and once a pixel/part of the scanline passed you by, you had no way of looking at it again, much less looking at previous scanlines.
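To make that concrete, here's a toy software model (mine, purely illustrative, real hardware obviously did none of this in Python) of the two subsampling operations:

```python
# Toy model of why 4:2:2 was "free" in 80s hardware and 4:2:0 wasn't:
# 4:2:2 only ever looks at the current pair of samples on one scanline,
# while 4:2:0 also needs the whole PREVIOUS scanline buffered (a line store).
def subsample_422(line):
    # average each horizontal pair of chroma samples; no line memory needed
    return [(line[i] + line[i + 1]) / 2 for i in range(0, len(line) - 1, 2)]

def subsample_420(prev_line, line):
    # also average vertically -> requires keeping prev_line around in memory
    a = subsample_422(prev_line)
    b = subsample_422(line)
    return [(x + y) / 2 for x, y in zip(a, b)]
```

The second function's `prev_line` argument is exactly the line store that analog-era equipment didn't have.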
posted by Joakim Ziegler at 6:47 PM on June 10 [4 favorites]


Most analog video equipment had no concept of "memory" at all, the signal is in sync with the beam scanning the image onto the CRT, and once a pixel/part of the scanline passed you by, you had no way of looking at it again, much less looking at previous scanlines.

I can categorically say that, at least in the professional broadcast TV realm, there was memory in the analog video equipment. Dropout Compensators and Time Base Correctors were used to "hold" or replace lines of video. Without them, when cutting live between two video devices (cameras, video tape recorders, etc.) or editing between two VTR machines, the resulting video would have artifacts like vertical rolling and/or horizontal jitter.

Early on, these memory systems were simple bucket-brigade or shift-register type analog delay systems. Later on, more sophisticated devices such as Frame Synchronizers, holding entire interlaced frames, used digital memory to store analog video.
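If it helps to picture a bucket brigade: it's essentially a shift register, where each clock tick moves every stored sample one "bucket" along. A toy software model (illustrative only; the real devices were analog):

```python
from collections import deque

# Toy model of a bucket-brigade / shift-register delay line: a sample fed
# in emerges exactly n_buckets ticks later, so sizing the line to one
# scanline's worth of samples gives you a one-line delay.
class DelayLine:
    def __init__(self, n_buckets: int):
        # all buckets start empty (zero signal)
        self.buckets = deque([0.0] * n_buckets, maxlen=n_buckets)

    def tick(self, sample: float) -> float:
        out = self.buckets[0]        # oldest sample falls out the far end
        self.buckets.append(sample)  # new sample enters; everything shifts
        return out
```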

And a nitpick: 4:2:2 seems almost insane, I wouldn’t have expected NTSC to support more than 4:2:0
NTSC never supported 4:2:2. The 4:2:2 format used in the D-1 VTR was based on the CCIR-601 digital video standard (now known as ITU-R BT.601, or Standard Definition).

NTSC is an analog format, so colour subsampling was done using analog component video, dividing the source RGB signal into 3 parts: brightness (luma, or Y), red minus Y (R-Y), and blue minus Y (B-Y). By not transmitting green (G), analog signal bandwidth was reduced. Green was made up mathematically at the receiving end.
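The "made up mathematically" part is just algebra on the luma equation. A rough sketch (Rec. 601 luma weights; I'm leaving out the scaling factors real broadcast levels use):

```python
# Component video in miniature: transmit Y, R-Y, B-Y; never transmit green.
def rgb_to_components(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma (601 weights)
    return y, r - y, b - y                   # Y, R-Y, B-Y

def components_to_rgb(y, r_y, b_y):
    r = y + r_y
    b = y + b_y
    # Green is reconstructed from the luma equation, solved for G:
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

y, ry, by = rgb_to_components(0.2, 0.5, 0.8)
r, g, b = components_to_rgb(y, ry, by)
# (r, g, b) round-trips back to the original values, within float error
```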
posted by Zedcaster at 9:50 PM on June 10 [4 favorites]


I was never a huge fan of this show but seeing it sure does rouse memories.
Looking forward to the doco.
posted by neonamber at 10:45 PM on June 10


I can categorically say that, at least in the professional broadcast TV realm, there was memory in the analog video equipment. Dropout Compensators and Time Base Correctors were used to "hold" or replace lines of video. Without them, when cutting live between two video devices (cameras, video tape recorders, etc.) or editing between two VTR machines, the resulting video would have artifacts like vertical rolling and/or horizontal jitter.

Well, yes, this is why TBCs were at one time extremely expensive. Doing 4:2:0 in analog video would have required every piece of equipment that had to decode it, and every monitor, etc., to have a line store, though. If you wanted to somehow send 4:2:0 or its equivalent to people's TVs, you'd need a line store in every TV.

NTSC is an analog format, so colour subsampling was done using analog component, dividing the source RGB signal into 3 parts, brightness (Luma or Y) Red minus Y (R-Y) and Blue minus Y. (B-Y) By not transmitting the green (G), analog signal bandwidth was reduced. Green was made up mathematically at the receiving end.

Isn't this scheme, given half the bandwidth for the color components, functionally equivalent to 4:2:2 or similar, though?
posted by Joakim Ziegler at 10:46 PM on June 10 [2 favorites]


If you wanted to somehow send 4:2:0 or its equivalent to people's TVs, you'd need a line store in every TV.

It might sound odd in the NTSC world, but every PAL or SECAM TV does have a line store - traditionally a glass delay line that provided exactly one line's delay. SECAM sends Db and Dr on alternate lines, so the delay line gives you the other chroma component from the previous line; PAL sends U+V and U-V on alternate lines, so you subtract adjacent lines to recover the two components. In either case you are effectively getting half the vertical resolution for chroma that you have for luma.

(This is simplifying quite a bit, since (a) simple "PAL-S" TVs with small screens could do good-enough NTSC-style decoding without a delay line at the cost of horrible artefacts, and (b) fancy modern adaptive/frequency-domain decoders can do a better job with access to multiple fields and heuristics about what the content is likely to be...)
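The line-pair arithmetic for the PAL case, reduced to a toy sketch (baseband components only; real decoders of course work on the modulated subcarrier, as noted above):

```python
# Toy sketch of the PAL delay-line trick: adjacent lines carry U+V and U-V,
# so summing and differencing adjacent lines recovers U and V separately,
# at half the vertical chroma resolution.
def recover_uv(line_a, line_b):
    # line_a carries U+V per sample, line_b carries U-V per sample
    u = [(a + b) / 2 for a, b in zip(line_a, line_b)]
    v = [(a - b) / 2 for a, b in zip(line_a, line_b)]
    return u, v
```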
posted by offog at 12:53 AM on June 11 [2 favorites]


Transferring from D-1 these days needs not just an uncommon machine but full-time attention from an operator to keep the machine aligned and respond when something goes wrong to avoid damage to the tapes.

Gotta admit, when I read the links in the FPP, I couldn’t help brainstorming how I’d go about doing it myself, despite an utter lack of real knowledge in this domain beyond an extremely basic idea of how scan heads work. Two tidbits on the D1 wiki page scared the bejesus out of me: “The helical scan head drum rotates at 10,800 RPM for NTSC” and “Writing speed at the heads is 33.63 m/s, linear tape speeds are 286.588 mm/s for NTSC”. That is quite a bit more energy flying around than I expected, and I can see at least 10,800 ways things could go wrong in one minute with delicate tape that I assume must be tensioned. I am now realizing my expectations were probably set by depictions of reel-to-reel tape in pop culture which predate the D-1 by decades. The only thing I was certain of after reading that was that if I were forced to tackle this problem, it would be in the same manner that porcupines mate – very carefully. Thank you so much for sharing your knowledge here, offog. Forgive what may be a simple question, but just out of curiosity: do you know if it’s strictly necessary for read operations to spin the drum as fast as a high-speed centrifuge? I’ve seen an unbalanced centrifuge vibrate itself off a bench in a shared lab once. I would want to go very, very slow, and, as you shared here, watch it like a hawk.
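Back-of-envelope sanity check on those quoted numbers (my own arithmetic, assuming the writing speed comes almost entirely from drum rotation, which in helical scan it nearly does):

```python
import math

# Quoted D-1 figures from the wiki page
rpm = 10_800
writing_speed = 33.63      # m/s, head-to-tape
linear_speed = 0.286588    # m/s, tape transport

revs_per_sec = rpm / 60                       # 180 revolutions per second
path_per_rev = writing_speed / revs_per_sec   # ~0.19 m of track per revolution
drum_diameter = path_per_rev / math.pi        # implies a roughly 6 cm drum
ratio = writing_speed / linear_speed          # heads move ~117x faster than tape
```

That ~117:1 ratio between head speed and tape speed is the whole point of helical scan, and also why a misbehaving drum can chew through tape so quickly.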



Ironically, the whole reason we had 4:2:2 and not something like 4:2:0 at that point was exactly the state of the technology.

OMG thank you so much for answering that, Joakim Ziegler! I’ve been trying to get other stuff done tonight, but learning that 4:2:2 was in use before I was born has been low-key puzzling me for hours now. This whole time I’ve felt like an architect born after the loss of Roman concrete but before the rediscovery of hydraulic lime. It hurt my brain; I knew I was missing something critical that had been obvious at some point but that I would probably never learn – my thinking was stalled at “dafuq?” Seriously, thank you for explaining how and why that could happen; I had assumed I’d have to live with that mystery. Now I don't even have to fall asleep with it.


Early on these memory systems were simple bucket brigade or shift register type analog delay systems. Later on more sophisticated devices such as Frame Syncronizers holding entire interlaced frames used digital memory to store analog video.

Zedcaster you are kind of blowing my mind here. My first memories of moving images were all analog video, my later experience was all digital video – obviously there was a transition. But I was very much a child while that transition was taking place, and only saw changes once they had filtered down to consumers. I wasn’t even a consumer, my parents were, and they were not exactly early adopters.

I came into this world knowing only VHS, there was a strange, short period where both enormous LaserDiscs at school and enormous vinyl records at home coexisted, and then seemingly overnight DVDs were everywhere and never again would I have to be kind and rewind. I grew up peering through the tiny CRT viewfinder of an enormous shoulder-mount Panasonic OmniMovie as I rocked the power zoom back and forth, until one day I discovered where the RCA output cable fit into my family’s TV, pointed the lens at the screen, and blew my damn mind. And then all of a sudden I was folding out the LCD screen of an unbelievably tiny Sony Handycam that recorded on fun-sized MiniDV cassettes with 10x the zoom and a stabilization button. I didn’t have any concept of the difference between optical and digital zoom at the time, I’m not even sure if it would be possible to tell with its LCD.

I had never imagined what it would be like to live through that transition as a professional. I probably could have realized that must have been happening for someone when I read the D1 specs, but your comment really drove things home. I’m making a bit of an assumption here, but I suspect that while I experienced that transition as stepwise jumps between epochs, you may have experienced a more continuous process. You might even say your experience was more… analog than digital. I’m wildly speculating at this point, but that seems like it would have been a fantastically exciting time to be on the production side of professional broadcast TV. Bucket brigades holding scanlines? If anyone I knew IRL told me delay-line memory was once used to cut analog video live, I would have called bullshit. Like, sure buddy, scanlines held in an enormous room full of hot mumbling mercury tubes, hope there was a good thermostat.

If you, or anyone else commenting here, have any more stories, anecdotes, insights, even nitpicks about this time, oh man I would just be absolutely fascinated to hear more.
posted by 1024 at 1:50 AM on June 11


Like, sure buddy, scanlines held in an enormous room full of hot mumbling mercury tubes

Analog delay line

posted by flabdablet at 2:23 AM on June 11 [1 favorite]




NTSC is an analog format, so colour subsampling was done using analog component, dividing the source RGB signal into 3 parts, brightness (Luma or Y) Red minus Y (R-Y) and Blue minus Y.

Ok last post before sleep: I am kind of laughing my ass off at this.

First, I had no idea there were actual SMEs in the room when I was writing my first post, and tried to avoid terms of art and wrote stuff like “brightness/darkness data” and “color data” instead.

Second, as soon as I read Luma – well, my work in this domain has largely been in machine vision (though I conveniently shared an office and many great conversations with a team of vision scientists for years), and I immediately thought “surely Luminance? Why on earth would anyone want to take the nice, practically linear count of photons hitting each photodiode on my sensor and perform a nasty nonlinear operation on it? I will keep every bit of the original data, thank you very much. I’m already losing information I could be using because the stupid Bayer filter in front of all the photosites on pretty much every COTS sensor on the market today over-samples green, so I don’t get to have ground truth. Why would anyone care about gamma unless they were working in analog on a CRT or something and oh, that’s exactly what we’re discussing here.”
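For anyone else who tripped over the same distinction: luma weights the gamma-encoded components, luminance weights the linear ones, and the two are not interchangeable. A quick numeric sketch (simplified pure power-law gamma, not the full broadcast transfer curve):

```python
# Luma (Y') vs luminance (Y): same weights, applied to different quantities.
def weighted_sum(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 weights

def gamma_encode(v, gamma=2.2):
    # simplified power-law encode; real video curves have a linear toe
    return v ** (1.0 / gamma)

r, g, b = 0.1, 0.6, 0.9                         # linear-light RGB
luma = weighted_sum(gamma_encode(r), gamma_encode(g), gamma_encode(b))
encoded_luminance = gamma_encode(weighted_sum(r, g, b))
# luma != encoded_luminance: weighting after encoding is not the same as
# encoding after weighting, which is why the two terms matter.
```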
posted by 1024 at 2:38 AM on June 11 [1 favorite]


Doing 4:2:0 in analog video would have required every piece of equipment that had to decode it, and every monitor, etc., to have a line store, though. If you wanted to somehow send 4:2:0 or its equivalent to people's TVs, you'd need a line store in every TV.

Okay real last post: this immediately reminded me of an amazing line from an article I read last year about the development of the Zenith Space Command wireless TV remote that gave rise to the term “clicker”. Robert Adler invented it in the 1950s, and explained his constraints and design decisions almost 50 years later to the Television Academy Foundation:


Now today, of course, you say, well, why don’t you encode the signal? We can’t encode the signal because we can’t use 100 vacuum tubes.
posted by 1024 at 2:52 AM on June 11


I don't understand much of this technical video/signal processing talk but boy do I love reading it and admiring the deep technical knowledge and friendly willingness to share it on display.
posted by signsofrain at 6:42 AM on June 11 [3 favorites]


I am not the SME in my household but I’ve been reading chunks of this aloud to the person who is, and we had a fun time watching some YouTube video about the D1 system last night. Thank you for this post! (I *am* the one in the household who remembers ReBoot fondly, so between us this is a great post for us.)
posted by Stacey at 6:54 AM on June 11 [1 favorite]


the SME in my household
the one in the household who remembers ReBoot fondly

I think there may be some overlap between these groups
posted by 1024 at 8:06 AM on June 11 [1 favorite]


Ha, yeah, you're probably not wrong. We have a ReBoot SME and an Old Computer Formats SME so this is squarely in the Venn diagram center where we can both play nicely together.
posted by Stacey at 9:02 AM on June 11 [1 favorite]


I was a baby engineer in the period when HDTV was first being developed and tentatively rolled out (e.g. I helped with tests to determine whether our owned network would base their broadcast chains on 720p or 1080i). I helped set up a tape robot -> playback unit -> digital video recorder automated chain that let a cable network automatically grab the pre-programmed noon show at midnight, scan the tape into what was essentially a professional-grade DVR, and then spit it out on the feed at noon. (The earlier scan time was intended to give Ops time to fix anything that went wrong.) I've got a pile of DigiBeta and Sony CRVs sitting in a drawer at my knee with project videos on them.

The weird complexities in analog broadcast chains were absolutely nutbar. So many different technologies came and went in an attempt to solve a problem, and when we were in the midst of the analog->digital and SDTV->HDTV transition it got even weirder.

My favorite example of that was wandering the bowels of a major broadcast network distribution plant in NY. They had ~20ish broadcast chains to send out multiple feeds to the satellites and have redundant backups. The last few chains still had vacuum tubes in them out of the spirit of "if all else fails, this stuff will work. Also don't breathe on it"

All of this is to say - they really should hire experts to do the transfer because - nutbar!
posted by drewbage1847 at 9:06 AM on June 11 [2 favorites]


1024: "any more stories, anecdotes, insights, even nitpicks about this time"

Well as a matter of fact, I worked in a network TV station in the same market (CBC TV Vancouver BC ) as the ReBoot gang over at Mainframe Entertainment. They were, as we know, a D1 plant and we, as mere broadcasters, were a D2 plant. It helped that we were also an Ampex station and D2 was their broadcast digital VTR "standard", and D1 was Sony (at least in our market). In fact when I read the initial post I was surprised that the filmmakers could only locate Bosch machines since they will be hard pressed to find Bosch techs in Canada or the US as Sony pretty much dominated the North American market.

BTW, one of the main differences between D1 and D2 was that D1 used component video (Y, R-Y, B-Y) and D2 was digital composite (the entire signal combined). D2 was also waaay cheaper than D1, although the price per machine was still north of $100,000 CDN.

Anyhoo, D2 was my introduction to all things digital, 4:2:2, DCT, even Fast Fourier Transform. This to a guy who began his career threading up 16mm film prints of B&W movies to be "projected" for the midnight movie. (kids, this is what TV did before the advent of late night chat. Carson was essentially alone in the market for many years).

Before digital, my video signal contained analog wonders like sync pulses, colour burst, front porch, timing pulses, I and Q, pluge, vertical blanking and many more jollies. Most of them went the way of the buffalo once digital came along and codified everything into data: data that flowed not as a raw stream but, in the case of D2, as SDI (SMPTE 259), serial standard-def video. Leading to all kinds of confusion with the analog old-timers, like "What do you mean SDI doesn't do 525 lines, what's this 480i nonsense?" or "What the heck is a colour space?"
posted by Zedcaster at 9:27 AM on June 11 [2 favorites]


Zedcaster, you'll be pleased* to know that HDMI's "digital video" still includes H- and V-sync pulses, front and back porches, overscan, blanking intervals, and more! I recently designed an FPGA board that could receive HDMI for displaying on non-standard displays like Mac Plus CRTs or flip dots or whatever, and I had to revisit the ancient history of NTSC timings to figure out how to get devices to talk to my fake TV.

It gets much worse, as mjg59 wrote in TVs are all awful -- the EDID block sent from the display to the player encodes the horizontal resolution as a multiple of 8 pixels (so that it fits in a single byte), which means that common screens like 1366x768 aren't correctly represented since 1366/8 = 170.75. Vertical resolution isn't specified, only the aspect ratio, so you get either 765 or 769 pixels, depending on which choice you make to represent the horizontal pixels.

Or, as they summarized it:
tl;dr - Your 1920x1080 TV takes a 1920x1080 signal, chops the edges off it and then stretches the rest to fit the screen because of decisions made in the 1930s.
____
*: Pleasure not guaranteed.
posted by autopilot at 12:44 PM on June 11 [1 favorite]




This thread has been archived and is closed to new comments