NAB Wrap-Up, SMPTE Bits by the Bay, May 24, 2017, Chesapeake Beach Resort

Annual NAB wrap-up at the SMPTE Washington section’s Bits by the Bay conference.

  • Paul Gallo

    Great article

  • I found your site on Google and read a few of your other entries. Nice stuff. I’m looking forward to reading more from you.

  • Pingback: Mark Schubin: The First Sports Video | Sobel Media

  • thanks very much for posting this!

  • Mark

    I strongly believe in your first suggestion of Clark Kent vs. Superman. I have always said you really don’t know the quality of an image until you compare it to another (better) one. Bottom line: is the HD image you are shooting good enough for your workflow and final viewing requirements?

    We use Flips, Sony HD Handycams, various Sony HDV 3-chip cameras, the HVX200, EX3s, and beyond. Use the right camera for the right application, and remember: part of being a professional is knowing what you can get away with!

  • There’s more to the story, reported here by Sports Video Group’s Carolyn Braff:

  • It seems to me that there would be other trade-offs as well. A single lens feeding two sensors at high frame rates should yield low sensitivity. I haven’t done a ray tracing, but it also seems that the edge of the optical frame would become more important, making the optics critically important.

  • Mark Schubin

    By the way, by my latest count, 25 new models of HD (or beyond) cameras were introduced at IBC 2009 from 14 brands.

  • fovealsystems

    Mark, did you see any “box” cameras (e.g., Sony DXC-990 style, in HD)?
    What video wiring did you see? FireWire? HD-SDI? DV?

    And I’ve seen fans built into some HD box cameras, which I take as evidence we are still in the early days and the electronics gets hot. Are they still common?

  • There were many tiny cameras — smaller than the old boxes. HD-SDI was most common, but there were a number with FireWire. Fans are becoming less common.

  • John Shutt

    The camera device is worn on the Officer’s ear like a bluetooth headset. With such a tiny camera, all I can think of is what Mr. Schubin would say about the camera being truly “HD.”

  • JohnS

    Since 3D content requires separate images for the left and right eye, has anyone stated the MPEG compression penalty for compressing such a stream? It seems to me that Blu-Ray or OTA HD content would necessarily either need more bitrate for equal quality or lower quality for equal bitrate vs. “traditional” 2D content.

  • It’s true that there are two views in 3-D, but it’s also true that they’re extremely similar and that bit-rate-reduction systems like MPEG’s make use of redundancy to reduce data rate. There will certainly be some overhead for the second view, but it need not be anywhere near 100%.
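    To put that redundancy argument in rough bit-rate terms, here is a minimal sketch; the 20 Mb/s base rate and the 50% second-view overhead are hypothetical numbers for illustration, not measured figures:

```python
def stereo_bitrate(base_2d_mbps, second_view_overhead):
    """Total bit rate for a stereo pair when the second view is
    predicted from the first, so it costs only a fraction of the
    first view's bits instead of doubling them."""
    return base_2d_mbps * (1 + second_view_overhead)

# Hypothetical 20 Mb/s 2D stream:
simulcast = stereo_bitrate(20, 1.0)  # 40.0 Mb/s: both views coded independently
mvc_like = stereo_bitrate(20, 0.5)   # 30.0 Mb/s: if inter-view prediction halves the cost
```

    The point is only that the overhead fraction is a free parameter of the encoder, bounded above by simulcast (100%).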

  • patrickvons

    BBC Radio’s film critic Mark Kermode is somewhat famously a 3D skeptic, so he was gifted special eyewear with two left-eye pieces to wear on his next stereoscopic cinema trip. Something tells me that these will be of little use in cinemas with Dolby or XpanD systems, so he’d better be sure to only go to cinemas with RealD installs.

  • patrickvons

    Oops! The above link was the anti-3D rant; the below link is the 3D-to-2D glasses:

  • Walter Gish and Christopher Vogt had a paper at the 2009 SMPTE Technology Conference discussing the coding efficiency of the MVC (multi-view video coding) extension of MPEG-4 AVC/H.264, which will be used for 3D Blu-ray coding.

    They claim “the overhead for encoding stereo is between 65% and 100%, depending on source material and the quality of the encodings.”

    Which is odd, because as Mark points out about left and right eye views “they’re extremely similar”.

    Gish and Vogt hypothesize that the difference between a right eye view and a left eye view (disparity) is not typically linear, but typically a more complex transformation based on change of perspective.

    Unfortunately, H.264 only provides linear block motion and disparity prediction tools. This isn’t a problem for most 2D motion, because most motion prediction is a short vector. However, stereoscopic disparity requires much longer vectors.

    If we want greater stereoscopic compression, perhaps we need to consider solutions with more complex disparity prediction, like US Patent 6144701, “An apparatus and method that applies an affine transformation to achieve stereoscopic coding of video.”
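    The linear prediction being discussed can be shown with a toy example: when the disparity between views really is a pure horizontal shift, predicting one view from the other leaves a zero residual, and only the shift vector needs coding. A sketch (the image data is made up; NumPy stands in for a real codec's block prediction):

```python
import numpy as np

def predict_right_from_left(left, disparity_px):
    """Predict the right-eye view by shifting the left-eye view
    horizontally, the kind of linear prediction H.264/MVC offers."""
    return np.roll(left, -disparity_px, axis=1)

# Toy scanline: a bright object on a dark background...
left = np.zeros((1, 16))
left[0, 4:8] = 1.0
# ...and a right-eye view where the object sits 3 pixels to the left.
right = np.zeros((1, 16))
right[0, 1:5] = 1.0

residual = right - predict_right_from_left(left, 3)
# residual is all zeros here: a pure shift predicts perfectly.
# Perspective changes, which are not pure shifts, are what break
# this model and drive the reported overhead up.
```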

  • Yes. SMPTE Director of Engineering & Standards Peter Symes mentioned that in his presentation at the Sports Tech LA 3D conference at USC on January 19. His presentation will be available soon at either this site or the Sports Video Group site.

    By the way, Symes will be at the HPA Tech Retreat (along with many other 3D experts and equipment). There’s more info here:

  • Thanks for a great and thorough article, Mark. Even those of us who think about nothing but 3D all day long learned something new!

  • patrickvons

    Terrific and incredibly informative overview of the multitude of issues (and solutions) that still circle S3D. Thank you! It’s not clear to me if the Comcast “Final Destination” showing used standard red-and-blue anaglyph glasses or one of the new breed of Anaglyph 2.0 eyewear that the likes of ColorCode and TrioScopics are promoting.

  • patrickvons

    The TVs they use all seem to use passive/polarized eyewear. I guess you don’t want to risk having a pub full of drunken football supporters potentially getting rowdy and breaking expensive active-shutter glasses. But will this impact the home, where active glasses seem the preferred route of the CE manufacturers at CES?

  • As can be seen from this site, it’s common anaglyph:

  • richiewirth

    Hi, Mark!
    That’s a pretty cool idea (half 3D glasses). I can also see it being very useful for video shaders, graphics, ADs following scripts, even audio mixers. Good luck!

  • I think the Milton Cross audio “tent” decor is superb. I also like the pair of condenser microphones.

    I grew up listening to the weekly broadcasts hosted by Milton Cross and recall vividly the first broadcast by Peter Allen in 1974, when I believe it was announced that Milton Cross had died the day before.

    Terry Harvey

  • The source of this story appears to be some of Deborah McAdams’s coverage of the HPA Tech Retreat:

  • It seems to me that a very major technical point Panasonic and other electronics manufacturers are not addressing is that, for a 3D camera, the horizontal spacing between the two lenses needs to change as the lens zooms between wide angle and telephoto. The nominal 2.5-inch distance between human eyes only applies for a camera using a lens with zero magnification. If a 10X telephoto focal-length lens is used, then the spacing needs to increase to approximately 25 inches in order to maintain the correct depth information in the image. Having the ability to optically (or electronically) converge (horizontally shift) the right and left images does not address this issue. Lack of correct lens spacing is the cause of the “cardboard cutout” effect many people report when watching 3D movies. This is, for example, when a person is shown at the correct distance from the viewer, but there is essentially no depth information in the person’s image (they appear to be a 2D cardboard cutout standing in a 3D world).
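    The commenter's rule of thumb is a simple linear scaling; a sketch of the arithmetic (the linear relationship is the commenter's claim, not an established formula):

```python
EYE_SPACING_INCHES = 2.5  # nominal human interocular distance, from the comment

def required_interaxial(magnification):
    """Interaxial lens spacing the comment argues is needed at a given
    zoom magnification, scaling linearly from human eye spacing."""
    return EYE_SPACING_INCHES * magnification

print(required_interaxial(1))   # 2.5 inches: normal (1x) field of view
print(required_interaxial(10))  # 25.0 inches: 10X telephoto, per the comment
```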


  • In a while there will be robotic versions of celebrities with their original voices.

    Mark, what happened to your podcast?

  • The sponsors pulled the plug.

    I’ll try to get around to doing some on this site.


  • I finally saw a video of Roger Ebert “speaking” with his new voice, but I wasn’t too convinced. Maybe the tone of his voice sounds the same, but the software still reads pretty much like old text-to-speech software.

  • JohnS

    I just want to know when my local news will be in 3D.

  • JohnS

    That New York Times story about the “haptic interface” reminds me of Kentucky Fried Movie’s “Feel-Around.”

  • contactmtemmer

    I would like to attend the Schubin NAB wind-up meeting at HBO.
    Michael Temmer

  • blulaserdigital

    Mark, you seem to have skipped over 1.4 lossless “Image Stacking.” We have begun working with these “stacked” 1920 x 1080 + 1920 x 1080 full-HD images (over/under per frame).

  • That mode is only for 24 frames per second.

  • JohnS

    So does this mean the material on the disc is actually at a 48 (47.952) fps frame rate, and progressive, not interlaced?

  • Pingback: Representations of Communication Technology: Videoconferencing 1878 –

  • These illustrations are exceedingly helpful, TVMark.

  • JohnS

    Maybe we all should start wearing LH circular and RH circular polarized contact lenses?

  • marrosoft

    We can expect other BigDogCameraNames to be marketing their own 3D camcorders pretty smartly… it won’t be that many years before the first example of the Panasonic 3D HD http://SDT750.NET video camera (to be shipped in October) gets donated to the Smithsonian display of Historic Artefacts… 😉

  • ray.baldock

    Mark, you ought to be named a National Treasure for Television History. Kudos for your research on this and thanks for sharing – it was fascinating reading. Ray

  • larry.towers

    What this demonstrates, in addition to the fascinating history of wired technology, is how screwed up our patent system has become. From the purely conceptual level many patents should never have been approved because they are patents for concepts rather than strictly defined implementations. Imagine if any of the devices cited were prevented from being developed because someone patented the vague idea of sending a signal down a cable.

  • uglygeorge

    As usual, engineer Schubin is interested in the history of cable; failing to realize that Manhattan Cable TV was LOSING $2 million a month (with good engineering) until Ugly George went on and started attracting actual subscribers. Content is king!

  • 2010


    Nice job. Very good questions. It seems that the WOW effect is individual to the viewer, like glasses with a special diopter sharpening the view. Then an obvious approach is to manipulate the stereo base individually. Songer did a good job when he put the beam splitter (or, I would like to say, wavefront splitter) at the aperture point where the iris plane of a lens is positioned. He took an old idea from Ernst Abbe, who described the stereoscopic effect in microscopes. His description aimed at the variability of recognition of stereoscopic effects, and he concluded that a good 2D image is better than 3D. OK, black-and-white photos carry enough information, but colored ones are nicer. But the first fundamental studies of wavefront splitting come from Leonardo da Vinci, described again by Edward Adelson and John Wang in their studies for the plenoptic camera. If you take the idea from Leonardo and split the wavefront (he called it “pyramids”), which carries the information of an infinitesimal picture for each different parallax, and filter for one parallax image, more as Songer did, then you can reconstruct a 3D object. The three-path aperture mask has the advantage of bringing the full color spectrum into the image if what passes through is an RGB filter, and, much more, you have a basis to solve the trifocal tensor. This is the key to adjusting the stereo base. I did some experimental pictures with these principles. You will find them on the Web, but for viewing you need the MS Silverlight runtime. The individual zooming factor adapts the ideal stereoscopic effect to the 3D image, although there is a minimal stereo base.


  • 2010

    Sorry. The URL has changed:

  • retro_cycler

    It’s also true that as you become more experienced in stereoscopy – e.g. learning how to free-view in parallel or cross-eyed – you become essentially immune to the problems of vergence-accommodation mismatch. You get used to it, and you don’t notice it anymore.

    By that same argument, I can claim that any _experienced_ stereographer (which you want!), of any age, will not be able to “properly” judge the vergence-accommodation conflict.

    I’ve learned that, at least when dealing with cinema sized screens, the mismatch of vergence and accommodation is so minor as to present little problem. In the cinema, even the most extreme depth budgets put the near-point of imagery a couple dozen feet away from the average theater-goer’s eyes, and at that distance, accommodation is already nearly the same as at infinity.

    I think vergence-accommodation mismatch may be more of an issue with extreme depth effects on smaller screens, or with the use of stereoscopes.


  • tsiglin

    Mark, thanks for the fascinating walk through the science of fiction that became reality.

    In looking for comparisons between today’s web and previous real-time delivery systems prior to radio, I came across a fascinating book called The Victorian Internet, which shows just how uncanny a resemblance there is between today’s web and yesterday’s optical and electric telegraphs.

    It also covers the demise of the telegraph and the rise of the telephone, and it ties nicely to your noted missing link that came through the Society of Telegraph Engineers in the mid- to late 1800s.

  • John

    I’m the John Nixon mentioned in this article written by Tom Pullar-Strecker in Wellington. There are several errors in the article, and the SKY rumour is rubbish.
    RF Overlay is a very inexpensive and easy way of distributing broadcast TV and digital radio over fibre. John Fellett’s comments are totally wrong (“RF Overlay is terribly expensive”).
    Recent developments allow the transmission of native L-band satellite signals alongside the normal free-to-air channels. There is no way that SKY or anybody else could monopolise the RF channel (third wavelength) over fibre to the home. See my website for more information:


  • Always great to read your posts!


  • Oh no! Apple rumors… even here on Mark Schubin’s blog???

  • Mark Schubin

    Why was Early Bird significant in 1965 when Syncom III did essentially the same thing non-commercially in 1964? The business of television makes a difference. Here’s a story about how carriage on Syncom III actually delayed coverage of the 1964 Tokyo Olympic games in the U.S. (but not in Canada):

    Craig Johnston called this to my attention.

  • Pingback: Schubin Cafe: Miranda Board Agrees to Belden Buyout | Kelly On A Tangent()

  • schubincafe

    I should add a little about Goberman and the return of live television.  Those of you on LinkedIn can read a post I did on the Media-Technology and Opera History group there:

    For those of you who aren’t on LinkedIn, here’s the gist. After describing live opera television projects dating back to 1928, I add this:

    Almost all of the above, however, was studio-shot programming. When the NBC Opera Theater team went to public television, they still produced studio-shot opera.

    There had been some televised opera transmitted live from the stage. The first was by the BBC in 1947, the second by ABC in the U.S. in 1948. As recently as the late 1960s, however, when opera was shot from the stage, it almost might as well have been in a studio. Light levels soared. Cameras were mounted on platforms in the middle of the seating. Staging was adjusted for television needs. It was definitely not a typical live-opera experience.

    That’s where Goberman came in. His mantra for the television team was “We are transmitting an opera, not producing it.” He wanted his cameras and microphones to be invisible, and there was to be no interference with the performers, the staging, or the audience. That’s why broadcasters from around the world came to Goberman to learn how to transmit opera performances on television, not how to shoot television operas.

    Mark Schubin

  • James Gardiner

    I have a question for you.
    I understand the percentage-shutter theory, but I don’t understand why the industry is still using it as opposed to being more direct. For example, why say 60 fps at a 180-degree shutter, and not say 60 fps with a 1/120 exposure?
    A degree-based shutter value means a completely different exposure depending on the fps.
    I find this very confusing, as giving any shutter angle means nothing without the fps.

    Why is the industry not just going to exposure (which has a certain image/motion look) and remove the issue of fps in regards to the look/motion blur.

    I.e., using exposure X, which gives a certain look, is not necessarily tied to the fps.

    Is there any particular reason to refer to it in this way as opposed to exposure time?
    Is this just old dogs applying the way they have always worked to a new world of many frame rates?

    With fps now a very variable item in film production, should we not move away from degree-based shutter values?

    Can you expand on this and give your opinion?

    James Gardiner

  • schubincafe

    I can think of at least three reasons. One is the cinematographic tradition. I’m currently working on a research project relating to electronic image-sensing technology used inside cameras, and I’ve been surveying many members of the industry about it. Of course, those who developed the technology are familiar with it. Among users, however, those who come from an electronic-imaging background — the ones who use the technology most — don’t seem to know much about it, whereas those who come from a film-imaging background do. Cinematographers have been figuring out what works and what doesn’t a lot longer than videographers, so if they prefer to speak of angles, that’s good enough for me.

    Another reason may be found in my post above.  Park Road Post demonstrated multiple frame rates and multiple exposures at the SMPTE Technology Summit on Cinema, and in each case the change from a 270-degree to a 360-degree shutter caused the greatest impact relative to a video-type look, regardless of frame rate.  If those results had been reported (rounded) as exposures of 0.03 second versus 0.04 second at 24 fps, 0.02 vs. 0.02 second at 48 fps, and 0.01 vs. 0.02 second at 60 fps, would you have had a clue of what was going on?  Even with more digits, that would be 0.03125 vs. 0.04167 second at 24 fps, 0.015625 vs. 0.020833 second at 48 fps, and 0.0125 vs. 0.016667 second at 60 fps.  Is that clearer than saying that, regardless of frame rate, shifting from a 270-degree to a 360-degree shutter created more of a video-type look?

    Finally, even in this supposedly all-electronic era, some of the latest and greatest cameras using electronic image sensors still offer mechanical rotary shutters.
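    The angle-to-exposure arithmetic behind those numbers is a one-line conversion; a minimal sketch:

```python
def exposure_seconds(fps, shutter_angle_deg):
    """Exposure time implied by a shutter angle: the shutter is open
    for (angle / 360) of each 1/fps frame period."""
    return (shutter_angle_deg / 360.0) / fps

# The pairs discussed above, 270 vs. 360 degrees at each frame rate:
for fps in (24, 48, 60):
    for angle in (270, 360):
        print(f"{fps} fps at {angle} degrees: {exposure_seconds(fps, angle):.6f} s")

# 180 degrees at 60 fps gives 1/120 s, the example from the question above
assert exposure_seconds(60, 180) == 1 / 120
```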


  • James Gardiner

    Thanks Mark,

    But I must admit this effect you have been talking about, in which a fully open 360-degree shutter makes it look more like video at any fps, is striking.
    I have been racking my brain all day on that discovery and came up with a question.

    If we first set aside the issue that people do not seem to like the video look, and just talk about the result: the results seem to indicate that part of the video look appears to be the fact that there are no visual steps between frames 1 and 2.
    If the shutter is a full 360 degrees open, the motion blur will overlap/join to the previous frame. If there is a less-than-360-degree shutter, the motion blur will have a step or gap between when it stops and when it starts on the next frame.

    This is the main difference I could come up with. Now, is this step effect a “what we are used to” result of the historic film process? Could this hyper-real result of the 360-degree shutter be exactly what we should expect? Would it be welcomed by a person who has never seen film before? I.e., is this a bad thing, or just an issue for those not used to it (those used to traditional film)?

    You saw it and have great analytical eyes. What do you think?

    This is a very interesting result to me, as I am a big believer in HFR 2D and 3D. I think we will be shooting much more HFR very soon, much more than 3D.

    Results like this do make me wonder…

    I wonder if this is more of a generational thing, and not like going from SR to digital audio (universally liked). Going HFR is a similar transition for your eyes.

    If anything, it shows us how different our ears are from our eyes in the way we process information.


  • schubincafe

    Thanks for the comment on my eyes, but they belong to an engineer, and I hesitate to make aesthetic comments. That’s what artists are for.

    Once I worked on a classical-music television show in which the director chose to use an effect that I considered awful.  I was not alone; everyone on the technical crew considered it awful, too.  But the audience loved it, and the conductor said he found it so powerful that it moved him to tears.

    Here’s something I wrote in a previous post this year, “Smellyvision and Associates”:

    “Back in the publicized 70-mm film era, special-effects wizard, inventor, and director Douglas Trumbull created a system for increasing temporal resolution in the same way that 70-mm offered greater spatial resolution than 35-mm film. It was called Showscan, with 60 frames per second (fps) instead of 24.

    “The results were stunning, with a much greater sensation of reality. But not everyone was convinced it should be used universally. In the August 1994 issue of American Cinematographer, Bob Fisher and Marji Rhea interviewed a director about his feelings about the process after viewing Trumbull’s 1989 short, ‘Leonardo’s Dream.’

    “‘After that film was completed, I drew a very distinct conclusion that the Showscan process is too vivid and life-like for a traditional fiction film. It becomes invasive. I decided that, for conventional movies, it’s best to stay with 24 frames per second. It keeps the image under the proscenium arch. That’s important, because most of the audience wants to be non-participating voyeurs.’

    “Who was that mystery director who decided 24 fps is better for traditional movies than 60 fps? It was the director of the major features ‘Brainstorm’ and ‘Silent Running.’ It was Douglas Trumbull.

    “As perhaps the greatest proponent of high-frame-rate shooting today, Trumbull was more recently asked about his 1994 comments. He responded that a director might still seek a more-traditional look for storytelling, but by shooting at a higher frame rate that option will remain open, and the increased temporal detail offered by a higher frame rate will also be an option.”

    At the NAB/SMPTE Technology Summit on Cinema, Phil Oatley of Park Road Post carefully pointed out that what we saw was experimental footage. There were no costumes, sets, real actors, real direction, or cinematographic decisions to create a particular look. I believe that, with those applied, the sensation would have been different. Conversely, there have been reports from some who say they have seen scenes from the movie and don’t like the look.

    All of that being said, my research shows that perception is learned (there will be more about that in my next post), so, for whatever my opinion is worth, I tend to agree with you about generations. People who grew up in the era of monochrome movies feel differently about color vs. black-&-white than do those who grew up in the color-movie era. Yet, even today, when it’s hard to find a monochrome camera, directors sometimes intentionally choose black-&-white.


  • It is great to hear, because we will be able to use 3D lenses in the near future.

  • GordanShumway

    Great article! I read the full version of Ergen’s comments on Reuters, and loved the bit about Auto Hop helping parents protect their kids from some of the more unsavory elements of TV commercials that seem to gradually slip into the prime-time schedule. As both a Dish employee beta tester and a parent, I have to agree with him. I love Auto Hop because it gives me more control: a choice of whether or not I want to watch the commercials during a prime-time show, and more control over what my kids do and don’t see.

  • HenryStoltz

    I don’t understand why CBS, FOX, and NBC execs don’t want us to enjoy commercial-free TV. I’m a DISH employee, and AutoHop is great because you can easily watch commercial-free TV. Public Knowledge, a consumer advocacy group, is taking a stand for consumers by creating a petition that tells CBS, FOX, and NBC media to keep their hands out of your living room and DVR. Sign their petition to keep control of how you watch TV.

  • Gil Gunderson


    Why don’t CBS, FOX, and NBC execs want consumers to enjoy commercial-free TV? It’s what we want! I’m a customer and employee of Dish, and I think AutoHop is great because you can easily watch commercial-free TV. A well-known consumer advocacy group, Public Knowledge, agrees that people should have the right to control how they watch TV. They’re taking a stand for consumers by creating a petition that tells CBS, FOX, and NBC media to keep their hands out of your living room and DVR. Sign their petition to keep control of how you watch TV.

  • I can’t wait till this release. I want to experience those 3D contact lenses, and I think they will change the lens world.

  • Christopher Wilkes

    Mark, always 100% on the heartbeat of video-acquisition technology, but as of late you seem to be unaware of the elephant in the room. Do not forget that the “TV” and “cinema experience” is currently morphing into handheld devices and possibly eyeglass devices. Resolution will still increase. It took too many years to upgrade color television and film-based cinema. We are no longer hindered by governmental regulations or Hollywood’s theatrical distribution. With IP-based distribution technology, growth is only limited by the patent-litigation frenzy and consumers’ pockets.

  • schubincafe


    Perhaps it’s my unawareness, but I’m not sure I’d describe it as an elephant — or even necessarily in the room — at least not yet.

    True, the new iPad has a screen with 2K horizontal resolution (slightly better than 1080-line HD’s), and, true, many people are watching some video on some form of mobile device. Because of the short viewing distances of such devices, eye-accommodation issues become even more significant, especially for stereoscopic 3D. And there are other viewing differences associated with them, including the problem of hand-holding them for long-form programming. So it’s not clear to me that those devices, at least as we know them today, are in a position to change our TV-viewing habits. Goggle-type devices might be different; they can deal with both issues.

    Each year, I begin the HPA Tech Retreat with the latest statistics on mobile viewing (you may download the presentations in the “Get the Download” section of this site). This year, I began by noting the rise in numbers of people watching some form of video on a mobile device: 31 million as of the third quarter of 2011 in the U.S. alone, according to Nielsen. But, when you move to watching TV programming, the numbers are very different.

    Rapid TV News reported that “for Americans, watching TV is almost a full-time job,” based on Nielsen’s figures reporting an average of 35 hours 8 minutes of TV viewing per week. On mobile devices, the highest demographic category, teens 13-17 equipped with those mobile video devices, watched just 17 minutes per week; even that group (actually, teens 12-17) watched 24 hours 11 minutes on traditional, non-time-shifted TV sets.

    Maybe things will change. Cinema viewing has certainly been affected by TV. On the other hand, sometimes they don’t change. Note from the first paragraph of my post that a hot theme at NAB was once teletext.

    Thanks, again!

  • Johnmartinturner

    Deepsea Challenge is being post-produced at Digital Pictures in Melbourne, Australia.

  • schubincafe

    Professor Marty Banks of the Visual Space Perception Laboratory at the University of California – Berkeley reports the following:

    “We know Bruce. He’s a vision scientist at UC Santa Cruz. My colleague, Cliff Schor, measured his stereo vision about 20 years ago and documented that he has stereopsis. His stereo acuity was sub-normal, but he definitely had stereo vision. The way the article is written makes it sound as if he’s gone from no stereo to excellent stereo. That’s overselling.”


  • Lorayne Branch

    I came across Schubin Cafe a few years back and always enjoy checking it out. I like the history stories, especially the fact that you never fail to mention Henry Sutton in relation to the invention of television. A few weeks back I stood in front of Michael Faraday’s statue at the IET in London and then came back and gave a lecture on Henry Sutton. Keep reminding people of the long road of inventions, Mark; it is sometimes lost on the youth of today, with how fast technology is now moving.

    Lorayne Branch
    Great Granddaughter of Henry Sutton

  • Mark Braunstein

    Edgy+Clever = Important.

    Mark Braunstein
    President & Founder

  • Chelsea Richards

    The Sony 4K LCD looks pretty neat. Love it! Thanks for sharing!

  • Deborah D. McAdams

    Mark, they need to take it to Kickstarter.

  • Lloyd L. Thoms Jr.

    Age and infirmities prevent me from attending. Would it be possible to obtain a written copy of the lecture at some time?  Please advise. Thank you.
    An opera fan(atic).
    Lloyd L. Thoms Jr.
    Wilmington, Delaware

  • Galley

    I first learned about remote viewing of baseball in Ken Burns’ Baseball. I think it’s fascinating that tens of thousands of people would stand in the street to watch these recreations. I can definitely see its appeal, though.

  • schubincafe

    “Fandom of the Opera” lectures are available in the “GET THE DOWNLOAD” section of this site. Two are also available on YouTube. Search for schubin fandom.

  • schubincafe

    It might be the oldest existing recording of an American voice, but it’s definitely not the world’s oldest recording of a musical performance. You can hear recordings made 18 years earlier here:

  • Pingback: Schubin Cafe – Comments on the Tessive Time Filter, redefining post production™

  • Great write-up on this widely misunderstood topic. I have a little video that shows the sound delay propagating through the first Obama inauguration; you can see it here:

  • schubincafe

    Very cool!  Thanks for posting!

  • disqus_R2tQjD6irY

    I am not sure if this got through, but if you are interested in fractal technology and want to see how it works with video that can be compressed and then scaled to any screen size, resolution-independent, look at a company called TMMI.

    They have a working codec, and you will see it very soon; if you are a techie type of person, all I can say is seeing is believing.

    Their entire history can be read at the TruDef blog.

    Enjoy the read; if you truly understand it, their working codec and today’s computer processing speeds have made this a perfect storm for what is about to happen in 2013.

    There are no ties to the H.264 MPEG consortium.

    You will hear about this company in 2013.

  • To me, The Hobbit HFR reminded me of watching a BBC drama shot on video (in HD).

  • Great little video on the origins of 24 frames per second. Fun and informative. Thanks, Mark!

  • Great presentation! I was completely unaware of this interesting set of inventions. Thanks, Mark; well done.

  • Wow, what a story. Great presentation, Mark; this is fascinating. I knew baseball was huge, but the attendance at these events and the number of inventions associated with them is astounding. Thank you for sharing this interesting bit of our history!

  • Christopher Wilkes

    Looks promising, will have to check them out at NAB. Thanks for the info.

  • James Gardiner

    Where can we see the full video?

  • schubincafe

    You’ll have to check with SMPTE. They expect to have the slides available for attendees soon.

  • schubincafe

    This isn’t the only place you’ll see this story, but The New York Times ran a story headlined “Original Wax Voice Record, Made by Bell, Is Heard at Smithsonian After 56 Years.” That story ran on October 28, 1937 and is about a Bell recording made in 1881. See story in “Corrections and Elucidations” above right.

  • HPA

    Hi Mark,

    How can one shoot 3D live? How do you adjust depth of field in a live setting?

  • SneakySAN

    Real-D 3D is a digital stereoscopic projection technology, currently the most widely used for watching 3D films in theaters. Because Real-D 3D uses a single projector, it suffers a brightness disadvantage; the system causes “significant light loss”.

    IMAX (short for Image Maximum), on the other hand, can record and display images of far greater size and resolution than conventional film systems. To create the illusion of depth, IMAX-3D uses separate camera lenses to represent the left and right eyes. This translates into a better viewing experience, as an IMAX screen fills your entire field of vision (FOV).

    In conclusion, while Real-D 3D is widely available, its sound and image quality are not always consistent: theaters presenting this format are given liberty to customize their systems (i.e., Dolby 3D, Dolby 7.1, THX, screen size, angle of seats, etc.). Meanwhile its rival, IMAX, delivers a consistent viewing experience, as it requires its theaters all over the world to maintain strict standards, despite its limited number of locations. Choosing which format to watch really boils down to: where are you from? Is there an IMAX theater nearby?

  • schubincafe

    There has been live 3D since 1928. It’s tricky, but it can certainly be done.

  • Jeroen Stessen

    When you discuss the color gamut of the van Gogh painting, you say that 3 color primaries are not enough for rendering green-blue. This is true for displays, which is why there exist multi-primary displays and other wide color gamut displays. It is not true for cameras, any decent camera sensor with 3 channels can capture the entire gamut of human vision. The limitation is in the matrix that converts to BT.709 RGB signals, when negative output values are clipped to zero. A 3-primary camera with LMS or XYZ filters would do the job, but perhaps not with the best S/N ratio. Multi-primary cameras are only needed to capture *more* than what the human eye can see, that’s for spectral imaging.
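    Jeroen’s point about the conversion matrix can be illustrated with a short sketch (the XYZ-to-linear-BT.709 matrix below is the standard one; the sample XYZ value is an arbitrary saturated green-blue chosen for illustration, not a measurement of the painting):

    ```python
    # Convert a CIE XYZ color to linear BT.709 RGB, showing how a saturated
    # green-blue yields a negative red channel that clipping then destroys.

    # Standard XYZ -> linear BT.709 RGB conversion matrix
    M = [
        [ 3.2406, -1.5372, -0.4986],
        [-0.9689,  1.8758,  0.0415],
        [ 0.0557, -0.2040,  1.0570],
    ]

    def xyz_to_rgb709(xyz):
        """Matrix-multiply an (X, Y, Z) triple into linear BT.709 RGB."""
        return [sum(m * c for m, c in zip(row, xyz)) for row in M]

    # An arbitrary saturated green-blue (hypothetical XYZ value)
    xyz = (0.2, 0.5, 0.5)
    rgb = xyz_to_rgb709(xyz)
    print(rgb)  # the red channel comes out negative: out of BT.709 gamut

    # Clipping negative values to zero is the step where the camera's
    # captured gamut information is actually lost
    clipped = [max(0.0, c) for c in rgb]
    print(clipped)
    ```

    The sensor itself saw the color fine; it is the clip after the matrix that throws the green-blue away.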

  • Badger

    Brilliant list, but unfortunately it doesn’t display correctly – at least for me using MSIE 10.
    The left hand and the right hand columns quickly lose sync – which kind of defeats the object.

  • schubincafe

    Excellent point! I’ve just added a PDF version at the top.

    Thanks, and enjoy!


  • Edward Wolcott

    So is the complaint that 3D costs extra or that viewers just don’t want 3D? I’d argue that if the 2D and 3D versions of a film are offered at the same price, most consumers would opt for the 3D version. It’s kind of like Dolby Atmos – it certainly adds something to a film and I’d prefer to watch a movie with it, but I’m not going to pay extra for it.

  • Franziska Schroeder

    Great talk! Thank you.

  • Stephen Neal

    Those glasses look to be passive not active, which wouldn’t allow you to do that – unless you have a second set of glasses with the same polarisation for both eyes? With active glasses you just open/close both lenses together rather than alternately to achieve the “watch two shows at the same time effect” (But what do you do about the sound? Headphones for one viewer?)

  • Stephen Neal

    Should also add that unless the couple in question are watching on a PVR or on-demand they don’t have a choice of Strictly and Match of the Day, they air on the same channel at different times…

  • Bruce A Johnson

    AMEN! Finally a little sanity. Let’s all remember that the Consumer Electronics Association is *not* interested in the quality of your home picture – all they care about is pushing boxes out of warehouses. HDTV is not perfect, to be sure, but we haven’t even gotten good at it yet.

  • Matt Kirschner

    The main concerns reflected in this are representative of the current state of broadcast TV. This is much bigger than the shift from SD to HD. You have to understand the paradigm shift occurring as a whole to understand the importance of the market’s role in bringing consumer technology up to a standard where anyone can view the intended image at any time. The current bottleneck is broadcast bandwidth, but as a vast majority of consumers reportedly shift toward an a la carte viewing experience, having a monitor able to resolve the color space, frame rate, and resolution of the image on your hard drive will be the only thing that can remove the banding, smearing, and artifacts that are noticeably present today. my2cents

  • Matt Kirschner

    thanks for the read.

  • SilverknightLV

    Have a look at this :

    Netflix CEO explains 4K challenges

  • AnotherHappyValleySunday

    What’s the point of HD television when you lose all the detail to over compressed video? Sure, I get 100 channels, but they look like crap.

  • Christopher Wilkes

    Looked promising until… another startup with unfinished false promises.
    It needs to be entirely cloud-based to deliver on its published promises. There is no need for a local software install just to upload a file. I would guess the uploading software is also a transcode engine. Anyone use it yet? It isn’t compatible with my HTML5-compliant browser.

  • Last year. They showed 8K TV at the 2013 CES.

  • Michael Silbergleid

    “New.” This year’s showing was with the company’s new 8K technology, including glasses-free 3D with Dolby and Philips.

  • SilverknightLV

    I (Michael Silbergleid) posted this story, but having seen the demo I was not impressed at all. I am reminded of the early days of HD when the press thought the images looked great, but those who knew what to look for could see macroblocking (especially in 8-bit video and later in ATSC). That being said, it does move the technology further along.
    Full Disclosure: I do contract work for Soliddd Corp., which is a 3D company with glasses-free print, video display, and acquisition technologies.

  • Christopher Wilkes

    Can’t wait to find out too; great insight. What if Aereo loses, and economical ATSC PVRs and piracy become mainstream? Over-the-air networks don’t realize that scheduled programming will never work for non-live programming, given the expectations of modern society. People will get the content they want, when they want it, at the price they deem acceptable. $100/month for TV in the $1-app world is laughable. There is going to be a pop in the bubble of sports-subsidized distribution as non-sports enthusiasts keep cutting the cord. Just ask Game of Thrones how fighting cord-cutters is working out for HBO. Most people not into sports will gladly pay one tenth the price to get programming that is a season behind.

  • Jeroen Stessen

    Yeah, duh, the same problems and complaints also happened when the analog channels were cut off and DVB-T came on the air. The cable companies had learned not to re-use the analog channels, but they conveniently forgot to re-allocate their channels in order to avoid the new digital channels. Besides, the cable is now so full that they couldn’t afford to leave any channels unused anyway. Best they can do is put the most important programs on the best quality channels.

    4G / LTE obviously re-uses UHF channels that previously belonged to TV, so interference is guaranteed. It also doesn’t help that the new RF sources (SFN TV, 4G cells) are nearer to the homes.

    The only solution is to improve the shielding of your in-home coax cabling. The materials and advice have been available since the introduction of DVB-T. The cable companies have no right to complain about rightful use of on-the-air spectrum.

    Cable coverage is extremely high in the Netherlands; almost nobody uses DVB-T or DVB-S(2). Digitenne (DVB-T) is losing customers by the thousands. And cable is losing customers to glass fiber, but those are not fundamentally different providers.

    I gave up (analog) cable a few years ago, in favor of DVB-S2 and DVB-T. Later this year I’ll subscribe to glass fiber to the home, which comes with a package of analog and DVB-C channels through a coax output on the modem. We’ll see if my in-house cabling is still good enough, as I have recently been using it only for transporting the same signals that are on the air (DVB-T and FM radio).

  • LouiePorter

    Kimio Hamasaki is a well-known figure in the audio industry. He was also the 2014 President-elect for the AES, before withdrawing.

  • Mark Schubin

    It might be worth noting that Alexander Bain came up with the concept of image scanning in 1842 and patented it in 1843. Sometimes it takes a long time to get an Emmy. Coaxial cable, multichannel cable television, remote controls, and videotape all took decades. But Bain will have an unprecedented delay. By the time of the ceremony on January 8, 174 years will have passed between achievement and award.

  • Michael Silbergleid

    Just posted a video on YouTube for a client where I use stock music I have had a license for since 2002 from Sonicfire SmartSound’s library. Getty claimed copyright because they also sell it.

    What use is buying a music library if I have to defend its use?

  • Mark Schubin

    Oy! Thanks for letting us know.

  • Great article. I also find it humorous that early telephone systems used relay switching and did not have any “bandwidth limiting” – so it was quite common to transmit television signals over phone lines. We then lost the capability of transmitting video over phone lines until 1996, when VXtreme came up with their streaming video platform – 160×120 resolution video at 15fps over 56K modems (VXtreme was bought by Microsoft, and that technology became the Windows Media Player).

  • schubincafe

    Thanks! Capacitance in the lines, themselves, effectively limited bandwidth, but the bandwidth of early television fell into the audio range, allowing it not only to be transmitted over telephone lines and broadcast on AM radio stations but also recorded on gramophone/phonograph disks. Slow-scan video, even in the electronic age, was sometimes carried on ordinary telephone lines, and bit-rate reduction (“compression”) technology allowed it to be transmitted via ISDN (which often used ordinary telephone lines).

  • Jonas Paulo Negreiros

    Bravo, Mark!

  • David Lezynski

    All brilliant! I appreciate your presence on the planet.

  • John A. Rupkalvis

    Both versions of this movie were in stereoscopic 3D. A major difference in the visual impression perceived depends upon the viewing distance from the screen.

    If you were to wear the 3D viewing glasses while walking down the aisle toward the screen, you would see a marked difference in the image from the front row as compared to the back row. In the back, the proportions from the closest object in the image to the furthest object in the image will appear greater than they should be relative to the lateral or vertical proportions. That is, everything on the screen will appear “stretched”, or elongated along the Z axis, as compared to the horizontal and vertical proportions along the X and Y axes.

    As you approach the screen, they will appear more and more normal until somewhere in the middle of the auditorium, where the X, Y, and Z axes will all appear to be “normal”, that is, proportioned equally to each other. This is called the “ortho stereo position”.

    As you pass this point, and keep getting closer yet to the screen, the proportions along the Z axis will appear to be less than the proportions along the X and Y axes, and all parts of the image will seem to be more “squashed”, or flattened, along the Z axis.

    Actually, this is an illusion created by the variable magnification (size) of the image with viewing distance, and the variable disparity as measured at the retinas from the positions of homologous points on the screen, as compared with the fixed interpupillary separation of the optical centers of the pupils of the eyes. At different viewing distances, the noted proportion variances occur because the disparity changes by less than the magnification of the X and Y axes. In actuality, you see “greater depth” closer to the screen, but it seems to be the reverse, because the magnification of the screen image increases by a greater amount than the magnification of the disparity as perceived by eyes at a fixed spacing.

    Therefore, a person sitting in the front row of any 3D movie is seeing a very different movie than someone in the back row of the same auditorium viewing the same 3D content, whether “regular” or “super”. In order to fairly compare other attributes, such as HFR (frame rate), HDR, (gamma/contrast/brightness), resolution, color values, etc., it is necessary that both experiences are viewed on the same screen size from the same viewing distance from the screen.
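    The geometry John describes can be sketched with the standard screen-parallax model (a simplified illustration; the interpupillary distance and parallax values below are assumed round numbers, not measurements of any particular theater):

    ```python
    # Perceived depth of a point behind the screen, from on-screen parallax.
    # Z = e * D / (e - p), where e = interpupillary distance,
    # D = viewing distance, and p = screen parallax
    # (positive = uncrossed, i.e. behind the screen plane).

    def perceived_depth(e_mm, d_mm, p_mm):
        """Distance from the viewer to the fused point, in mm."""
        return e_mm * d_mm / (e_mm - p_mm)

    E = 65.0   # assumed interpupillary distance, mm
    P = 10.0   # assumed uncrossed parallax on the screen, mm

    front = perceived_depth(E, 2000.0, P)  # viewer near the screen (2 m)
    back  = perceived_depth(E, 4000.0, P)  # viewer at the back (4 m)

    # Depth behind the screen plane grows linearly with viewing distance,
    # while the screen's lateral (X/Y) angular size shrinks -- hence the
    # Z-axis "stretch" seen from the back row.
    print(front - 2000.0)  # depth behind the screen at D = 2 m
    print(back - 4000.0)   # twice as much at D = 4 m
    ```

    The same fixed parallax that places a point about 36 cm behind the screen at two meters places it about 73 cm behind it at four, which is exactly the distance-dependent elongation the comment describes.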

  • Patrick vonSychowski

    Fascinating as always, Mark. Thank you and well done on cracking the Polish.

  • schubincafe

    Actually, in the U.S., the regular version was not 3D.