OpenGL
15 Sep 2005
It all started with a hint from Richard that he’s already using OpenGL as the display system in his OpenME editor.
In the old days, even before the whole 3d gaming hype, I used to be very much interested in VR-like graphics & visualizations, experimenting a lot with OpenGL and DirectX (sigh). I immediately started to think about how we could use all this overblown, gaming-oriented 3d hardware to aid the process of video editing. Well, it seems that the possibilities are there, just waiting to be explored.
Diva will use OpenGL as its rendering engine. If you ever took care to look at the video renderers available for mplayer or xine, you might have noticed that they already support OpenGL output. But we will do much more than that: OpenGL (and all the hardware acceleration behind it) will be used not only for display, but for the actual compositing and colorspace handling as well.
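To give a taste of the basic mechanics, this is roughly what a per-frame OpenGL video path boils down to. A sketch only: the function name and the packed-RGB input are mine for illustration, not actual Diva or mplayer code, and it assumes a texture was already allocated with glTexImage2D and GL_TEXTURE_2D is enabled.

    #include <GL/gl.h>

    /* Re-upload the decoded frame into an existing texture and draw one
       textured quad.  glTexSubImage2D avoids reallocating the texture on
       every frame. */
    void draw_frame(GLuint tex, const void *pixels, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGB, GL_UNSIGNED_BYTE, pixels);

        glBegin(GL_QUADS);  /* texcoords flipped: video has a top-left origin */
        glTexCoord2f(0, 1); glVertex2f(-1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1,  1);
        glTexCoord2f(0, 0); glVertex2f(-1,  1);
        glEnd();
    }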
What does this mean from the user’s point of view?
- Absolutely realtime rendering of simple transitions, fades, color corrections, overlays, etc. (see the sketch after this list).
- The whole idea of backends I discussed earlier needs a new shape. It will be possible to use 2048x1024 JPEGs along with 768x576 DV files on a single timeline, since the hardware will handle all the dirty work. No conversions needed.
- The 3d hardware is widely available, so let’s use it.
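Here is the sketch promised above, covering the first two points: a crossfade is just two textured quads blended by a constant alpha, and since each source lives in its own texture at its native resolution, the rasterizer rescales both to the output size as a side effect. The helper and the fixed-function blending are illustrative assumptions, not actual Diva code.

    #include <GL/gl.h>

    /* Hypothetical helper: one quad covering the viewport. */
    static void fullscreen_quad(void)
    {
        glBegin(GL_QUADS);
        glTexCoord2f(0, 1); glVertex2f(-1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1,  1);
        glTexCoord2f(0, 0); glVertex2f(-1,  1);
        glEnd();
    }

    /* Crossfade from tex_a to tex_b; t runs from 0 to 1 over the
       transition.  Assumes GL_TEXTURE_2D is enabled; the default
       GL_MODULATE texture environment applies the alpha to the
       second quad. */
    void crossfade(GLuint tex_a, GLuint tex_b, float t)
    {
        glBindTexture(GL_TEXTURE_2D, tex_a);
        glColor4f(1, 1, 1, 1);
        fullscreen_quad();

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBindTexture(GL_TEXTURE_2D, tex_b);
        glColor4f(1, 1, 1, t);
        fullscreen_quad();
        glDisable(GL_BLEND);
    }

Notice that the rescaling costs nothing here: it falls out of texture sampling, which is exactly why mixing resolutions on a single timeline stops being a problem.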
There are some caveats, obviously. The biggest one is the fact that even though the major brands (Nvidia, Ati, Matrox) provide decent OpenGL drivers for Linux, they differ greatly in terms of the hardware features (extensions) they implement. Shamefully enough, OpenGL doesn’t define any standard way of dealing with YUV data.
But we can overcome this by adjusting the level of acceleration to the capabilities present. If you happen to have an Nvidia FX or better, we will use a fragment shader (the kind you know pretty well if you play Doom 3). If not, we will fall back to multi-texture blending.
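The detection itself is cheap. Something along these lines, with the three tiers being my assumption of how the fallback could be staged:

    #include <string.h>
    #include <GL/gl.h>

    /* Scan the extension string (valid once a GL context exists) and
       pick the best available YUV path. */
    int pick_yuv_path(void)
    {
        const char *ext = (const char *) glGetString(GL_EXTENSIONS);

        if (ext && strstr(ext, "GL_ARB_fragment_program"))
            return 2;  /* exact per-pixel YUV->RGB in a fragment program */
        if (ext && strstr(ext, "GL_ARB_multitexture"))
            return 1;  /* approximate conversion via multi-texture blending */
        return 0;      /* last resort: convert in software, upload RGB */
    }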
Apple already does this kind of GPU-side YUV handling, with a proprietary extension to the Ati drivers that they developed for Quartz (which uses OpenGL). But they support only one YUV format (4:2:2), and it strikes me that we can do much more.
Instead of trying to pixel-convert the textures on the CPU, we can accurately evaluate the YUV->RGB equations with multi-texture blending. This way we can feed the YUV data directly to the GPU without occupying too much of the AGP bus: planar YUV is roughly half the size of the expanded RGB, so the uploads get cheaper as well.
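Whatever the path chosen, the math is the same three dot products; the blending variant just stages an approximation of them. For the curious, here is a minimal sketch of the exact version, a GLSL fragment shader written as the C string it would be handed to the driver as. It assumes the Y, U and V planes are bound as three GL_LUMINANCE textures, and uses rounded BT.601 coefficients (the exact constants are a detail to pin down later).

    /* BT.601 YUV->RGB, one plane per texture unit.  Illustrative only. */
    static const char *yuv_to_rgb_fs =
        "uniform sampler2D plane_y, plane_u, plane_v;                     \n"
        "void main() {                                                    \n"
        "    float y = texture2D(plane_y, gl_TexCoord[0].st).r - 0.0625;  \n"
        "    float u = texture2D(plane_u, gl_TexCoord[0].st).r - 0.5;     \n"
        "    float v = texture2D(plane_v, gl_TexCoord[0].st).r - 0.5;     \n"
        "    gl_FragColor = vec4(1.164*y + 1.596*v,                       \n"
        "                        1.164*y - 0.392*u - 0.813*v,             \n"
        "                        1.164*y + 2.017*u,                       \n"
        "                        1.0);                                    \n"
        "}                                                                \n";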
Now, I’d be grateful if some OpenGL magician contacted me to discuss all this before I actually start implementing it.
In other news, Eugenia did it again. My monthly bandwidth went “poof” yesterday. I have to say thanks (again) to rjw, who suggested at some point that I host all my demos outside of this site. People managed to generate 70GB of traffic on the Novell Forge in the last two weeks.