We Were Shafted
In John Logie Baird’s time, videography was crude but simple. Now what?
Queen Elizabeth II and Marilyn Monroe, liquid-crystal and plasma TVs, Miami Vice and al-Jazeera — any way you look at it, we were shafted. And now we’re in for it big time.
Of course, being shafted wasn’t necessarily such a bad thing. Consider how it came to be.
John Logie Baird had two great achievements in 1925. One was transmitting the first recognizable human face on television. The other was simultaneously giving videographers the shaft.
The previous sentence was not intended to be taken in a pejorative, slangy sense. It was meant to be taken literally. Baird gave us the shaft.
Last month’s column covered research into videography in the 19th century, including the patent on television scanning issued to Paul Nipkow in Germany in 1885. Nipkow proposed the use of spinning, perforated disks at the camera and display. Each disk had a single spiral of holes. The circumferential distance between the holes determined one dimension of the image, and the radial distance between the first hole and the last determined the other dimension.
In the camera, each hole allowed light from the scene to hit a photosensitive surface as it passed by, creating a scanning line. In the receiver, each hole allowed light from a lamp to reach the eye, reproducing each scanning line and, ultimately, the scene.
As the patent was issued in 1885, one might well ask why it took 40 years before the first recognizable human face was seen on TV. The answer is that there’s a lot more to videography than just scanning lines (see “Deep Purple” page TK).
As a matter of fact, one of the mechanisms Baird had yet to perfect in 1925 was synchronization of the display to the camera. Even years later it would be a problem.
A photo of General Electric’s Dr. Ernst Alexanderson watching television in 1928 shows what appears to be a two-knobbed video-game console in his lap. One knob controlled the speed of the motor spinning the display disk so it could match the camera’s; the other controlled a gear that would adjust the angular position of the display disk so that the holes matched position with the camera disk’s.
That was after Baird’s 1925 achievement. Baird had the camera in one room and the display in another. To keep them perfectly in sync he resorted to an expedient he knew would be impossible for broadcasting. He drilled a hole in the wall between the rooms and ran a rotating shaft through it, the single shaft connecting one disk to the other.
Videography was shafted. And that was a good thing. It ensured that whatever a videographer captured in a camera would be reproduced correctly on a TV set.
When Britain gave the world its first television standard in 1937, Baird’s (and Nipkow’s) spinning disks were replaced by camera tubes and picture tubes. The physical shaft was replaced by synchronizing signals. But the concept of the shaft remained. Whatever the electron beam in the camera was doing, the electron beam in the picture tube duplicated precisely.
At least that was the theory. Spinning-disk-based television cameras and displays did match precisely, although changing channels in the era before standards involved not only tuning but also swapping disks and motor speeds. In picture tubes, however, it was difficult to align electron-beam positions precisely. As a result, viewers saw somewhat less than the camera picked up — and sometimes saw stretched or compressed images, too.
Then there was the coronation of Queen Elizabeth II, 50 years ago. Britain’s 1937 television standard called for 405 (total) scanning lines (both those devoted to carrying picture and those used to give the electron beam time to return to the top of the picture). The U.S. government standard, which went into effect in 1941, called for 525 (total) scanning lines. That and other differences, such as AM sound in Britain and FM in the U.S., meant that a TV set bought in one country wouldn’t work in the other, but that was no major concern.
By 1951, France had 819-line TV broadcasts, and Holland used 625 lines. Two years later, on June 2, 1953, came the coronation of Queen Elizabeth II. Viewers in North America saw it as fast as a jet could deliver the film. Viewers on the European continent, however, managed to watch it live.
For the first time, the differing standards in different countries did make a difference. Continental European TVs didn’t operate on the same virtual shaft as British TV cameras. A conversion device was necessary, with an accompanying loss of quality.
That same year, 1953, saw two other shaft breakers: the introduction of the first successful color-television standard and the release of the CinemaScope movie How to Marry a Millionaire, starring Marilyn Monroe, Betty Grable, and Lauren Bacall. Both took some time to achieve their full effect.
In black-&-white television, it was as easy to shoot a striped shirt as a solid one and as easy to shoot a red dress as a black one. In color television, fine stripes (even black-&-white ones) could create a signal that color TVs misinterpreted as a burst of false color. Red dresses could smear beyond their boundaries when color TV sets couldn't get sufficient detail information from the video signal. Even viewers with black-&-white sets saw an annoying pattern of crawling dots wherever there were saturated colors.
Then there were the vagaries of transmission and TV-set adjustment. When the same scene was displayed on different sets, one might have shown the hero as a pale, indoor type, another with a healthy tan, another sunburned red as a beet, another greenish with nausea, and yet another as a purple-faced alien not of this earth.
The colors specified in the second National Television System Committee (NTSC) standard were almost never used. Set makers opted for brighter pictures with different colors, and, when the industry got used to those, they opted for even brighter pictures with yet other colors.
Marilyn Monroe’s problem came in 1961, when NBC broadcast How to Marry a Millionaire. The movie had been shot to take advantage of the very wide screen of CinemaScope, a picture roughly two-and-one-half times as wide as it was tall. A star might stretch out on a chaise filling the entire width of the screen.
Television, unfortunately, had a much squarer screen, just a third wider than it was high: an aspect ratio (ratio of width to height) of 4:3. Marilyn Monroe's body, like that of a tall traveler accepting the hospitality of the legendary Greek Procrustes, had to be chopped to fit.
The stereophonic score of How to Marry a Millionaire was squeezed into mono when NBC broadcast the movie in 1961. That was a better fate than the network's later stereo broadcast of an episode of Miami Vice, in which the polarity of the sound-effects tracks, as mixed to stereo, didn't match that of the dialogue. The master-control technician had a choice: transmit the show so that viewers with monophonic TV sets couldn't hear the dialogue, or so that they couldn't hear the gunshots. The latter was chosen as the lesser of two evils.
Still later, the so-called 5.1-channel sound of a major dinosaur movie was squeezed into stereo for VHS-cassette distribution. The sound in the surround channels was carefully considered, but the .1 of the 5.1, the low-frequency effects (subwoofer) channel, wasn’t. As a result, cassettes had to be recalled due to tiptoeing (rather than thudding) dinosaurs.
Of course, those are isolated examples, and, with time, videographers have learned to create new shafts for new viewing options. Striped shirts are avoided. Stereo phase is carefully measured. Moviemakers, upon learning that video would be an important part of the income of a feature, made their framing more video friendly. But the shafts sometimes seem to break even before they are attached.
After videographers got used to dealing with stereo, they discovered that their sound might still be messed up by TV stations’ transmissions of non-program-related sound on their secondary-audio channels. Al-Jazeera, the Arabic news channel, is planning an English-language version, but recently the Spanish-language sound track of a program was accidentally transmitted to English-speaking viewers.
It’s not just audio. When Casio introduced tiny liquid-crystal TVs in 1983, those displays had only about a quarter of the detail of an ordinary TV in either direction. HDTV has still more detail. Can subtle detail be effective when there’s such a broad range of displays? Then there’s widescreen.
Consider a production team devoted to shooting ballet. Over the years, they have learned to deal with the overscan of picture tubes (having the edges of the image fall outside what’s visible to a viewer), adjusting shots so that feet remain visible without too much space between the soles and the visible bottom of the picture.
When they switched to widescreen and HDTV, they chose to preserve the aspect ratio even for viewers of squarer older TV sets by shrinking the image to fit, a technique called “letterbox” because the shrunken image, with black bands above and below, resembles a mail slot. As the letters in this month’s issue indicate, not all videographers fully understand the ramifications of widescreen production, but this team does.
Unfortunately, it’s not enough. The overscan foot positioning may work for widescreen TV sets, but letterbox images have no overscan on top and bottom. The letterbox black bands can be expanded to match typical widescreen overscan, but then widescreen TVs tuned in to the letterbox transmission might not be able to display the show without some residual black bands creeping into the picture.
How serious is that? Like many things, the phosphors that produce light in picture tubes, projection tubes, and plasma-display panels age, and blue phosphors age fastest. Where there’s a black band, aging ceases. After a while, full-screen programming appears with bluish stripes where the black bands used to be.
Perhaps someday all TVs will have the same shape or new display technologies will eliminate concern about phosphor aging. Perhaps then, as the song goes, we’ll cross the wide misery.
By the time John Logie Baird began to work on television, it had already been analyzed, discussed, patented, prototyped, and even transmitted. The only thing it hadn’t been was achieved. The closest anyone had come was moving silhouettes.
Like those who came before him, Baird initially was unable to get his apparatus to work. So, deciding a camera was really an electronic eye, he chose to go right to the source.
At the London Ophthalmic Hospital, he asked for a freshly extracted human eyeball. The request was so bizarre that those on duty thought it must have been official, and they gave it to him.
He wrapped the bloody orb in a handkerchief, stuffed it in his pocket, and set off to his lab, where he slit it open and attempted to extract rhodopsin, the chemical (also known as visual purple) used for rod-based vision. He failed and, disgusted, flung the gory mess into a nearby canal.
There is no historical record of what happened next, but it’s possible that the master of a passing boat asked a mate to check out the floating object. The sailor could have acknowledged the request and issued a report in just two nautical-rhythm words: “Aye, eye.”
Aside from picture-quality issues and cost, there’s only one problem with some widescreen LCD HDTV displays. They’re the wrong shape.
Those panels have 1280 picture elements (pixels) across by 768 down and traditionally use the same spacing in both directions. Dividing 1280 by 768 yields 5/3, a 15:9 aspect ratio instead of 16:9. Widescreen video must be truncated or squeezed.
Using the same logic, a 1024 x 1024 plasma TV ought to be perfectly square. But it isn’t. It’s 16:9. Plasma-panel manufacturers don’t use the same logic.
Plasma panels are available in 1280 x 1024, 1024 x 1024, 1366 x 768, 1280 x 768, 1280 x 720, 1024 x 852, 1024 x 768, and 852 x 480 resolutions. Only one of them (1280 x 720) matches any global video standard, and it’s one of the rarest plasma resolutions and least used video standards. Format conversion almost invariably must be used.
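The arithmetic above is easy to check. Here is a minimal Python sketch of it; the panel list and the 16:9 target come from the text, while the function name and the square-pixel assumption (the very assumption the column questions) are illustrative:

```python
from fractions import Fraction

# Plasma-panel resolutions mentioned above (width x height, in pixels).
panels = [(1280, 1024), (1024, 1024), (1366, 768), (1280, 768),
          (1280, 720), (1024, 852), (1024, 768), (852, 480)]

WIDESCREEN = Fraction(16, 9)  # the nominal widescreen-video aspect ratio

def square_pixel_shape(width, height):
    """Aspect ratio a panel would have IF its pixels were square."""
    return Fraction(width, height)

for w, h in panels:
    shape = square_pixel_shape(w, h)
    if shape == WIDESCREEN:
        note = "matches 16:9 with square pixels"
    else:
        # How much each pixel must be stretched horizontally to fill 16:9.
        stretch = WIDESCREEN / shape
        note = (f"needs a pixel aspect ratio of "
                f"{stretch.numerator}:{stretch.denominator} to show 16:9")
    print(f"{w} x {h}: {shape.numerator}:{shape.denominator} ({note})")
```

Run it and only 1280 x 720 comes out at 16:9 with square pixels; every other panel must use non-square pixels, or else truncate or squeeze the picture, which is the column's point.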
In computer graphics, it’s common to use a term that sounds like whizzy wig. Not a fast hairpiece, it’s an attempt to pronounce WYSIWYG, what-you-see-is-what-you-get.
These days, it’s more what-you-see-has-some-semblance-to-what-some-might-see, or WYSHSSTWSMS, pronounced why show sis ’twas mess.