Too Good For Its Own Good?

Originally published in Videography August 1999

Can brighter, sharper, more colorful pictures be bad? Adventures in electronic cinema offer mixed results.

Four Weddings and a Funeral, Three Men and a Baby, two screenings and a quandary: Is it possible for videography to be too good? Some recent events may provide a clue.

There have, of course, been many times since the advent of digital technology when quality has been weighed against another characteristic. In digital television (DTV), for example, a channel capable of transmitting roughly 19 million bits of information per second (19 Mbps) can currently be filled with (among other choices) a single high-definition television (HDTV) signal or multiple standard-definition television (SDTV) signals. Which is better, the one high-quality program or the multiple programs of lower quality?
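As a rough illustration of that bit budget (the channel payload figure is the one cited above; the per-program rates are assumptions for the sake of the example, not drawn from any actual broadcaster), a quick sketch in Python:

    # A sketch of the DTV bit-budget tradeoff: one channel, roughly 19 Mbps,
    # filled either by a single HDTV program or by several SDTV programs.
    # The per-program bit rates below are assumptions for illustration only.
    CHANNEL_MBPS = 19.0   # approximate payload of one DTV channel
    HDTV_MBPS = 19.0      # one HDTV program can consume the entire channel
    SDTV_MBPS = 4.0       # an assumed rate for one SDTV program

    hd_programs = int(CHANNEL_MBPS // HDTV_MBPS)  # 1
    sd_programs = int(CHANNEL_MBPS // SDTV_MBPS)  # 4
    print(hd_programs, "HDTV program or about", sd_programs, "SDTV programs per channel")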

One need not turn to DTV to become familiar with quality-vs.-something-else issues. The highest-quality videotape recording format currently available is the uncompressed HDTV cassette system D-6, but it’s also the most expensive currently being sold.

What do you use when you record in the field? Whoever you are, the chances are excellent that the answer is not D-6. Aside from their cost, D-6 machines are also large, heavy, and very power-consumptive. Videographers with aching backs may be relieved to know that there’s no such thing as a D-6 camcorder.

How about using D-6 in a teleproduction facility? Size, weight, and power consumption aren’t usually serious issues there, but cost and compatibility are. And, even if one could justify a D-6 recorder in a high-end teleproduction facility, one could hardly justify it instead of a VHS machine at home. Aside from the fact that the VHS machine is probably priced at less than a tenth of one percent of the cost of the highest-quality format’s recorder, it also includes a tuner, a programming timer, an output that can feed an ordinary TV set, little feet for sitting atop that TV set, a wireless remote control, and compatibility with the products of video rental stores — all features missing from D-6.

Even within the VHS format there are quality tradeoffs. Not even considering S-VHS or HQ video quality or Hi-Fi audio quality, all of which can increase cost, most VHS machines offer a choice of a higher-quality two-hour (SP) mode or a lower-quality six-hour (EP) mode. Why would one use the lower-quality mode? For increased capacity, of course. Similar choices are made in determining the bit rate of satellite or cable-TV digital programming.

There are yet more reasons to forgo quality, besides capacity, multiple program choices, size, weight, power consumption, cost, features, convenience, simplicity of operation, and compatibility. Consider, for example, monitoring.

The best way to assess the quality of an HDTV production is with a high-quality HDTV monitor or projector. If a program is ultimately to be seen on an SDTV display, however, it may be equally if not more important to monitor it on a lower-quality monitor. Why?

Suppose there’s text to be seen. On the HDTV display, the text may be visible even if it occupies only a tiny portion of the screen; on the SDTV screen, the text may be nothing but a blur if it’s that small.

There’s another issue associated with displays: Some very-high-resolution displays are not as bright as their lower-resolution counterparts. That raises a different question: Which is the higher-quality display, one with the ability to present the finest detail or one with the ability to present the highest contrast? The human sensation of sharpness is related to both parameters.

Similar issues of one kind of quality versus another abound. In HDTV, CBS and NBC have chosen a video format designated 1080i — 1,080 active (picture-carrying) scanning lines per frame, with roughly 30 frames per second and with interlacing between odd- and even-numbered fields (the odd-numbered scanning lines are separated from the even and are presented alternately every 60th of a second, with 1, 3, 5, 7,… presented before 2, 4, 6, 8,…). ABC and Fox have chosen for their HDTV transmissions the format designated 720p — 720 active lines, 60 frames per second, and no interlacing; instead, lines are presented in progressive numerical order (1, 2, 3, 4,…).

Which is better? Looked at strictly from the standpoint of active lines per frame, 1,080 is clearly more than 720. But should that be the only criterion?

Looked at from the standpoint of lines presented in a 60th of a second, however, 720p’s full frame of 720 lines is more than the 540 lines of a single 1080i field (half of 1,080). Those two criteria offer opposite results, and they’re not the only ones.

Many researchers say 720 progressive lines actually offer slightly more subjectively perceived detail in the vertical direction than do 1,080 interlaced. But 1080i offers 1,920 active picture elements (pixels) per scanning line; 720p offers only 1,280. Which is more important, horizontal detail resolution or vertical?
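For readers who like to see the arithmetic, here is a minimal sketch of those competing counts; the line and pixel figures are the formats’ own, while treating “lines per 60th of a second” and “pixels per second” as measures is simply one way of framing the comparison:

    # Comparing 1080i and 720p by raw active-picture counts.
    formats = {
        # name: (active lines, active pixels per line, frames per second, interlaced?)
        "1080i": (1080, 1920, 30, True),
        "720p":  (720, 1280, 60, False),
    }
    for name, (lines, pixels, fps, interlaced) in formats.items():
        lines_per_60th = lines // 2 if interlaced else lines  # one field vs. one full frame
        pixels_per_second = lines * pixels * fps
        print(name, lines_per_60th, "lines per 1/60 s;",
              format(pixels_per_second, ","), "active pixels per second")

Counted that way, 1080i delivers more active pixels per second, while 720p delivers more lines per 60th of a second: yet another pair of criteria pointing in opposite directions.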

There’s more: Interlacing can create serrated edges or even double images in fast horizontal motion, effectively reducing horizontal resolution. But the human visual system is less sensitive to detail in fast motion.

Even in relatively static images, 1,920 active pixels per scanning line may prove too stressful to some DTV compression systems. DirecTV has demonstrated 1080i HDTV with only 1,280 active pixels, the same number as in 720p, and the same number chosen by Panasonic for its 1080i DVCPRO HD. Sony made a similar tradeoff in HDCAM (1,440 active pixels) and offers demonstrations of how less detail can actually end up looking better than more.

Confused? There’s still more: The number of scanning lines per second for 1080i is roughly 33,750; for 720p it’s roughly 45,000. Those figures don’t measure quality, but they are related to how difficult it is to create scanning circuitry for a picture-tube- (or projection-tube-) based HDTV display. At this time, therefore, the vast majority of consumer HDTVs can display 1080i but not 720p.
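Those scanning-rate figures follow directly from the total number of lines per frame, active picture plus blanking: roughly 1,125 total lines for 1080i and 750 for 720p. A quick check:

    # Horizontal scanning rates: total lines per frame (including blanking)
    # multiplied by frames per second.
    print(1125 * 30)  # 33,750 lines per second for 1080i
    print(750 * 60)   # 45,000 lines per second for 720p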

Consumer DTV receivers can decode 720p, but it must be converted to something else (typically 1080i, 960i, 540p, or 480p) prior to display on those restricted screens. The conversion process (and the different display format) degrades 720p imagery. Under most conditions, therefore, 1080i looks better than 720p on such a display.

Many of the latest display technologies, however, such as some plasma-panel or liquid-crystal-based displays or Texas Instruments’ micromirror-based digital light processing (DLP) projectors, may be friendlier to 720p than to 1080i. When 1080i must be converted to 720p and 720p doesn’t have to be converted, 720p usually looks better. Some also argue that 720p data can be compressed more easily than 1080i, resulting in better images after transmission.

All of that, however, is in the realm of tradeoffs. One might make decisions about quality based on displays, data compression, and format conversion in addition to the previously listed criteria.

Then there are nuisances brought about by increased quality. NBC redesigned its Meet the Press set for HDTV because the old one didn’t look good enough in high definition. Make-up, it is said, is another area that needs much more care in HDTV than in SDTV. Those nuisances, however, could be said to be financial issues. A better set and better make-up may cost more (and, perhaps, take more time), but they have no intrinsic negative characteristics.

Suppose, however, that none of the quality trade-offs mentioned to this point mattered. Suppose money were no object. Suppose time were not of the essence. Suppose everything could be controlled from camera to display, so there were no conversion issues. Suppose compression, transmission, and recording were also taken care of in the finest possible way. Is it possible that higher image quality — in and of itself — could be a problem?

This summer, thousands of people got to participate in experiments to determine just that, many of them unwittingly. Consider, for example, what happened to moviegoers who showed up at the Clearview Chelsea Cinemas in Manhattan in early July to see An Ideal Husband. They would have paid their money as usual, received tickets as usual, had the tickets torn on entry as usual, been led to an auditorium as usual, and sat down to see the film as usual.

Unfortunately, despite being in a movie theater and perhaps munching on popcorn, they did not get to see the film of An Ideal Husband projected onto the screen. Instead, they watched images played from a videotape recorder and projected by a video projector.

Was that bad or good? The answer is not simple.

First of all, those were no ordinary videotape recorder and video projector. The recorder was a high-definition version of Panasonic’s D-5 format, and the projector was a high-definition-capable Hughes-JVC image light amplifier (ILA) unit. The transfer from film to tape wasn’t ordinary, either. The parameters were controlled specifically for presentation to a theatrical audience in a dark room, not to a home viewer watching a TV set.

Similar care was taken for this summer’s non-stealth screenings of what has come to be known as digital cinema. With a great deal of publicity (including full-page newspaper ads), Star Wars Episode I: The Phantom Menace was screened digitally at two theaters in the New York area and another two in the Los Angeles area. In each locale, one theater used the Hughes-JVC ILA system and the other used the Texas Instruments DLP system.

A party of three — a television engineer, a musician who works in videography, and a schoolteacher unassociated with the field — attended a New York-area DLP screening of Star Wars and then immediately watched the movie again, projected from film, in a near-identical auditorium across the hall of the same multiplex. Afterwards, they compared their impressions.

All agreed that the electronic projection was brighter, sharper, and clearer. All agreed that its colors were richer. The engineer noted freedom from dirt and scratches on the electronic version, counterbalanced by a faintly visible grid structure and some jagged edges on text, primarily on Ws, but he said those flaws didn’t bother him. In short, from the standpoint of just about every standard measure of image quality, all agreed that the electronic version was superior.

Nevertheless, the musician said she preferred the film version. The engineer said he had to agree. The schoolteacher took longer to decide but eventually came to the same conclusion. Why?

The musician put it this way: “[The electronic version] looked like a video game. It was cartoonish.” All agreed on a number of scenes that looked believable in the film version but not in the electronic one; every one of those scenes, however, was based on computer graphics.

The engineer and schoolteacher, therefore, decided to try An Ideal Husband. The engineer found only one fake shot annoying and thought it might not have looked any better on film. The schoolteacher noticed what she took to be hairpieces that she thought she shouldn’t have noticed. Both noticed what the schoolteacher called “focus problems” and what the engineer explained as narrow depth of field: a small range of distances from the camera in which actors and objects stayed in focus, a characteristic of shooting, not of projection.

They met again a week later to see some ordinary movies projected from film, and both agreed afterwards that they liked film projection better. But why? With its scratches and dirt and positional instability and visible grain, film projection has worse quality than digital projection. Why should it be desirable?

Before HDTV, videographers sometimes tried to emulate the look of film on television. Film has greater detail resolution than ordinary video, but on a TV set only video resolution can be seen. Film has a greater potential contrast range than does video, but, again, a TV set restricts the contrast.

To simulate the desirable “film look,” videographers had to take away some of the video quality rather than add to it. Some processes added jitter (positional instability); some made the motion jerkier; some effectively reduced the contrast in high-detail areas.
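As a toy sketch of that quality-removal idea (the function and its parameters are hypothetical, not a description of any actual film-look processor), such a process might reduce the frame rate toward 24 per second and add a little positional jitter:

    import numpy as np

    def rough_film_look(frames, rng=None):
        # Keep roughly 24 of every 30 frames, making motion slightly jerkier,
        # then shift each surviving frame by up to one line/pixel of jitter.
        # A real film-look process did far more (grain, contrast shaping, etc.).
        rng = np.random.default_rng() if rng is None else rng
        kept = [f for i, f in enumerate(frames)
                if (i * 24) // 30 != ((i - 1) * 24) // 30]
        jittered = []
        for frame in kept:
            dy, dx = rng.integers(-1, 2, size=2)
            jittered.append(np.roll(frame, (int(dy), int(dx)), axis=(0, 1)))
        return jittered

    # Example: 30 blank 480x640 "frames" come back as 24 slightly shifted ones.
    video = [np.zeros((480, 640)) for _ in range(30)]
    print(len(rough_film_look(video)))  # 24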

Consider the narrow depth of field noticed in the Ideal Husband screening by the engineer and the schoolteacher. According to one of the organizations keeping track of digital cinema comments, depth of field is often brought up, despite the fact that it is a shooting issue and not one of projection.

One theory is that, in a film projector, the film moves back and forth slightly in the projection “gate,” subtly changing the focus on the screen. That changing focus may help mask narrow depth of field. Similarly, horizontal and vertical positional instability may help mask problems in special effects.

Doug Trumbull, a master of special effects (2001: A Space Odyssey among others), developed the Showscan film system to simulate reality. It used high-quality 70 mm film running at 60 frames per second instead of a movie’s usual 24.

As a reality simulator, it worked well. For storytelling, however, Trumbull explained in an interview in American Cinematographer in August 1994, it was lacking. What was it lacking? A lack of quality.

There was too much detail in Showscan — too many frames per second. It could display reality as well as the evening news, but it couldn’t help viewers suspend disbelief as well as could ordinary 24-frame-per-second film.

Could it be that projected film’s many “flaws” — varying focus, image jitter, visible grain, and the like — actually help audiences buy into the plot? Could shooting in HDTV pose a similar quandary for videographers?

More than two centuries ago, Voltaire seemed to understand the problem when he wrote, “The best is the enemy of the good.” More recently, however, a different quotable source said, “Always give your best.” That was Richard M. Nixon, addressing his staff upon resigning the presidency of the United States.

###

Sensitivity Training: Why Candlelight Is Romantic

On December 3, 1973, an unusual experiment in videography took place at the Metropolitan Opera. The media development department of Lincoln Center for the Performing Arts wanted to determine whether it was possible to shoot operas under existing performance conditions.

The conditions were, indeed, difficult. Not only was there no television lighting, but some scenes were dark even by opera standards, and were performed behind two gauzy black scrims.

There was a reason for the darkness and the scrims beyond dramatic effect. Stage technicians dressed in black moved scenery around in the darkness. They weren’t meant to be seen.

The TV experiment utilized four types of cameras. One was typical of its time, a tube-based camera with no special low-light features. Shooting the conductor, it turned his constantly moving arms into wing-like blurs.

Others were similar but had an added feature, bias-light, designed to reduce just such image “lag.” They made reasonable pictures, considering the light level and the year.

Two others were designed specifically for low-light applications. One had an image intensifier in front of each camera tube. The other, designed for the U.S. Air Force, used “secondary electron conduction” and was said to be able to provide an image of a black cat in an unlit coal mine at night that would let you tell the color of its eyes.

The ultra-low-light-level cameras performed perfectly… and uselessly. Clearly visible in every shot were the black-clad scenery movers.

###
