2010 HPA Tech Retreat: More 3D Follow-Ups
I wrote previously about the strange case of potential customers wanting to buy Panasonic’s 3D professional camcorder even before the company had finalized its optical system. Panasonic brought the AG-3DA1 to last month’s HPA Tech Retreat (along with a professional 3D monitor and a 3D demo truck). Mike Bergeron also gave a presentation about the camera in the main program.
The presentation offered a good overview of 3D production in general and then concentrated on the camera. Small image sensors were chosen to allow a side-by-side configuration, even though that reduced sensitivity (the 3D camcorder was the only one in the demo room to require additional lighting instruments). The lens motors are controlled together (Fujinon demonstrated, in the same room, a common calibrator and controller for paired 3D lenses), and the same system used for chromatic-aberration correction can also correct geometric distortion and differences between the left- and right-eye pickups.
So why isn’t the optical system finalized? Panasonic wants this to be a 3D-training camcorder and, as such, as simple to use as a bicycle with training wheels. Zoom range will be affected by convergence angle (and at least one HPA Tech Retreat participant opposed convergence, because toeing in the cameras effectively changes the frame shape at any fixed distance from the camera from two rectangles to two trapezoids (keystone distortion), with opposite short and long sides in the two views, introducing an undesirable vertical-scale difference).
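The trapezoid objection is easy to demonstrate with a toy pinhole model. The sketch below is my own illustration, with made-up numbers (a 65 mm interaxial, a 2 m convergence distance, a 50 mm focal length; nothing here describes Panasonic’s actual camera): it projects a point near the frame edge through a converged pair and a parallel pair, and only the converged pair produces a vertical offset between the two views.

```python
import math

def project(point, cam_x, toe_deg, f=50.0):
    """Pinhole projection of a world point (x, y, z), in mm, through a
    camera at (cam_x, 0, 0) toed in by toe_deg about the vertical axis."""
    x, y, z = point[0] - cam_x, point[1], point[2]
    t = math.radians(toe_deg)
    xc = x * math.cos(t) - z * math.sin(t)   # rotate into the camera frame
    zc = x * math.sin(t) + z * math.cos(t)
    return f * xc / zc, f * y / zc           # (u, v) on the sensor, mm

b, d = 65.0, 2000.0                          # interaxial and convergence distance, mm
toe = math.degrees(math.atan((b / 2) / d))   # each camera's toe-in angle
corner = (500.0, 300.0, 2000.0)              # a point near the frame edge, at the convergence plane

_, vL = project(corner, -b / 2, +toe)        # left camera toes in toward +x
_, vR = project(corner, +b / 2, -toe)        # right camera toes in toward -x
dv_converged = vL - vR                       # nonzero: the views differ in vertical scale

_, vL = project(corner, -b / 2, 0.0)         # parallel rig, no toe-in
_, vR = project(corner, +b / 2, 0.0)
dv_parallel = vL - vR                        # exactly zero

print(f"vertical disparity, converged rig: {dv_converged:+.3f} mm")
print(f"vertical disparity, parallel rig:  {dv_parallel:+.3f} mm")
```

With toe-in, points at the same distance land at different heights in the two images, which is exactly the vertical-scale difference the participant objected to; a parallel rig avoids it at the cost of handling convergence elsewhere.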
The presentation also covered the professional monitor, which, unlike consumer devices, uses crossed linear polarization. One advantage of linear polarization is that it tends to offer less ghosting (especially in blues) than circular polarization, as long as the viewer’s head stays level, keeping the eyes on the same horizontal axis as the display.
The seeming disadvantage is that a viewer cannot put her or his head on someone else’s shoulder without getting severe ghosting. But that same characteristic might be seen as an advantage for a professional monitor; when one’s eyes aren’t horizontal, one isn’t seeing 3D properly, regardless of ghosting. The ghosts act as a self-corrective to head position.
There were many more 3D presentations and demonstrations at the HPA Tech Retreat, including TDVision on 3D to mobile devices (they had an encoding demo), Miranda on the proper z-axis positioning of graphics, and Dolby showing HD 3D on a giant screen at just about 7.5 Mbps. There was the 1 Beyond Wrangler for portable 3D recording and review, Cobalt’s UDX 3D processor, Doremi’s 3D format converter, DVS’s system for 3D digital-cinema package (DCP) creation, including subtitles, Fraunhofer’s 3D DCP player (and immersive 3D-capture system), GIC’s 3DV quality-control (and subtitling) system, 3D-shooting tools from IFX, Imartis’s rapid-stereo-adjustment SwissRig, and JVC’s 3D upconverter-mixer-monitor system.
Oculus 3D showed a brand-new system for film-based 3D in movie theaters (Technicolor just announced another). OmniTek offered 3D test patterns and a range of 3D displays (waveform, histogram, metadata, and picture-difference among them). Quantel had their latest 4K 3D software. Ross Video had a 3D character generator. SpectSoft had a new 3D converter (as well as other 3D features). THX had 3D signaling for home theaters. T-VIPS had 3D-over-IP using JPEG2000. And I might have missed some of the 3D demos.
There were also at least a dozen breakfast roundtables at which 3D was discussed — everything from the effect of blur on the perception of depth to the use of 2D equipment in 3D production. What there didn’t seem to be was anything on the mysterious single-lens 3D camera shown by Sony at CEATEC.
The unofficial motto of the HPA Tech Retreat, however, is “someone will be there who knows the answer.” Sure enough, at a networking opportunity, it was possible to get not only the lowdown on the single-lens Sony 3D camera but even an unpublished paper about it and the published research on which it’s based.
The subject of that research is “microstereopsis,” and it has been studied in many labs around the world. At a breakfast roundtable at the HPA Tech Retreat, Professor Martin Banks of the University of California, Berkeley, noted that human visual sensitivity to disparity between the left- and right-eye views (stereopsis) is a full order of magnitude greater than the sensitivity to detail resolution.
In an ideal 3D system, the cameras would shoot views separated by the distance between the centers of a viewer’s pupils (pupillary distance, or PD), and the display would duplicate that separation. But different viewers have different PDs, and screens come in a range of sizes. To keep viewers from ever having to (unnaturally) diverge their eyes, the views, especially on TVs, are usually presented at considerably less than the viewer’s PD. If the cameras have a greater separation, the result can be the sensation of looking at a dollhouse, with a compressed sense of depth.
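The divergence limit is simple arithmetic. As a rough sketch (all numbers below are hypothetical, not from the retreat): on-screen parallax scales with screen width, so a background disparity that is comfortable on a TV can exceed the viewer’s PD, and thus force divergence, on a cinema screen.

```python
def screen_parallax_mm(disparity_px, image_width_px, screen_width_mm):
    """Physical parallax on the screen for a given pixel disparity."""
    return disparity_px / image_width_px * screen_width_mm

PD_MM = 65.0   # a typical adult pupillary distance, mm

# The same 30-pixel background disparity in a 1920-pixel-wide image:
tv = screen_parallax_mm(30, 1920, 1018.0)        # roughly a 46-inch TV, ~1018 mm wide
cinema = screen_parallax_mm(30, 1920, 10000.0)   # a 10 m-wide cinema screen

print(f"TV:     {tv:.1f} mm parallax -> divergence: {tv > PD_MM}")
print(f"Cinema: {cinema:.1f} mm parallax -> divergence: {cinema > PD_MM}")
```

This is one reason material prepared for one screen size often needs its disparities re-graded for another.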
So why not shoot with a reduced PD in the first place? That’s the idea behind microstereopsis. One paper about it is called “Kinder, Gentler Stereo.” Another is “Just Enough Reality.”
Digital Optical Television Systems used a single-lens 3D camera in the early 1970s. Sony’s CEATEC camera — not yet intended as a product — is another. It shoots at 240 frames per second for a different kind of sensation and uses ordinary zoom lenses, which cannot be mismatched because only one is used at a time. Behind the lens, a mirrored optical system splits the two views and sends them down different paths, yet the two eye views can be recombined and viewed comfortably as 2D. Pictures from the camera were actually shown at last year’s Consumer Electronics Show.
There was lots more at the 2010 HPA Tech Retreat. If I get a chance, I’ll provide more in a future post. Meanwhile, you can read up on it at the URL I provided last time: http://schubincafe.com/blog/2010/03/2010-hpa-tech-retreat-what-gave-some-avatar-viewers-discomfort/