Tuesday, May 5, 2009

The Deep Eye Viewer, or how to turn the SL machinima scene on its head.

I love the concept of doing machinima in Second Life. No other platform gives a machinimatographer such absolute control over their work. But by the same token, no other platform is quite as frustrating. Second Life was never made with in-depth character acting, dramatic lighting, advanced graphics, or cinematic camerawork in mind. Lag, limitations in the graphics engine, and even the avatar models themselves hold back a lot of Second Life's potential for high-quality, compelling cinema. That said, the quality of work produced by SL machinimatographers is a testament to their creative will and technical savvy. But is the nature of SL in and of itself the problem, or just one part of it?

I'd like to back up for a second and challenge one of the assumptions common in most forms of machinima today: namely, the assumption that machinima is the direct per-pixel recording of live game engine output. Why do we do this? In closed game systems such as Warcraft it makes sense; what appears on the screen is pretty much the only accessible output. True, you can perform a GL rip and get what amounts to a 3D photograph of the game geometry, but in terms of using an engine as a filming apparatus, this method doesn't hold much mainstream value. Some games, such as Halo 2 and 3, open things up a little bit by providing a replay tool, which allows for more advanced shot sequences and interesting camera angles. Yet nothing comes close to the openness of SL, which is literally streaming data about not only the character's surroundings but also up-to-the-second changes in status, all in a data stream accessed and interpreted through an open-source viewer.

Why are we settling for a system that captures the pixels cranked out by a video card working overtime, only to receive footage of so-so quality, when we could be capturing event data from this stream and saving it to a file for later rendering and, yes, even editing? Imagine what would be possible if you could shoot a scene, decide that your character came into the scene a bit too close to the camera, and, instead of having to reshoot, just select the character and shift his performance into the desired position. Imagine being able to apply such forbidden wonders as depth of field and raytraced lighting to your shots, where the only limitation on visual quality would be how long you wanted to wait for the final footage to render. Machinima is supposed to marry the advantages of live action and animation, and a system like this would make good on that. It would also knock down the performance barrier, allowing those with less-than-stellar computers to still shoot beautiful works of machinima. Call it crazy if you want; I call it the Deep Eye Viewer.
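
To make that less abstract, here's a rough Python sketch of the kind of after-the-fact edit I mean. Everything below is invented for illustration (the PoseSample structure and shift_performance function aren't part of any existing viewer); it just shows how a recorded performance becomes data you can nudge around instead of pixels you're stuck with:

    from dataclasses import dataclass
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class PoseSample:
        """One captured frame of an actor's performance."""
        time: float        # seconds since the start of the take
        position: Vec3     # region coordinates at that instant
        animation: str     # UUID of the animation playing on this frame

    def shift_performance(samples: List[PoseSample], offset: Vec3) -> List[PoseSample]:
        """Slide an entire recorded performance by a fixed offset after the fact,
        without re-shooting anything (the kind of edit pixel capture can't do)."""
        return [
            PoseSample(
                time=s.time,
                position=(s.position[0] + offset[0],
                          s.position[1] + offset[1],
                          s.position[2] + offset[2]),
                animation=s.animation,
            )
            for s in samples
        ]

    # e.g. nudge the actor two meters back from the camera along the x axis:
    # take["actor_uuid"] = shift_performance(take["actor_uuid"], (-2.0, 0.0, 0.0))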

In essence, the viewer would operate as follows. The machinimatographer would set up their scene as normal, taking into consideration all of the usual concerns of staging, props, animations, etc. They would then activate a pre-record mode on their viewer, which would scope out the surrounding area and take note of all major assets present. This would include terrain, object UUIDs and position data (NOTE: not actually ripping the prim parameters, just getting a reference for later recall), avatar appearances and their initial positioning, and finally windlight settings. This in essence creates a snapshot of everything that will be required later to re-rez the scene in a semi-local "sim" for rendering and editing. Once ready, the viewer would prompt the machinimatographer, who could then activate the "recording" mode. This would begin capturing realtime animation and position data from the pre-recorded avatars, in addition to the camera position and motion. Once the machinimatographer is satisfied with the take, they can stop recording. The recording process can be repeated ad nauseam, with each recording saved as a unique "take" within the data file.
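
For the sake of argument, the project file that pre-record and record would build up could be laid out something like the sketch below. All of the type and field names are invented; this is just one plausible shape for "a scene snapshot plus a stack of takes":

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class ObjectRef:
        """Reference to an in-world object: UUID and placement only, no prim params ripped."""
        uuid: str
        position: Vec3
        rotation: Tuple[float, float, float, float]   # quaternion

    @dataclass
    class SceneSnapshot:
        """Everything the pre-record pass notes down so the scene can be re-rezzed later."""
        terrain_region: str                                # region reference for terrain recall
        objects: List[ObjectRef] = field(default_factory=list)
        avatar_appearances: Dict[str, str] = field(default_factory=dict)   # avatar UUID -> appearance ref
        windlight: Dict[str, float] = field(default_factory=dict)          # sky/water preset values

    @dataclass
    class FrameSample:
        """One moment of the recording: avatar motion plus camera motion."""
        time: float
        avatar_positions: Dict[str, Vec3]                  # avatar UUID -> position this frame
        avatar_animations: Dict[str, List[str]]            # avatar UUID -> animation UUIDs playing
        camera_position: Vec3
        camera_rotation: Tuple[float, float, float, float]

    @dataclass
    class Take:
        """One recording pass; the project file would hold many of these."""
        name: str
        frames: List[FrameSample] = field(default_factory=list)

    @dataclass
    class DeepEyeProject:
        snapshot: SceneSnapshot
        takes: List[Take] = field(default_factory=list)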

To review or edit a piece of recorded footage, the machinimatographer would select a file from their hard drive and the scene would be loaded into a semi-local "sim" (assets are still being called from the grid). The user could then play, pause, rewind, and fast-forward through the captured data, edit scene element properties, and mute scene objects from visibility. Muting is useful in cases where the user wishes either to isolate specific scene elements (opening up the possibility of green screen for machinima) or to remove extraneous bits of the scene that detract from the overall effectiveness of the shot. The user could add additional cameras to the scene, in effect allowing for multicam setups of the same action. They could also add advanced lighting setups to enhance the pre-existing lighting, such as spotlights and negative-intensity lights to add areas of shadow.
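
Building on the Take structure sketched above, the review mode could be little more than a playhead plus a few non-destructive lists of changes layered over the original capture. Again, this is a hypothetical sketch, not a spec:

    from dataclasses import dataclass
    from typing import List, Set, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class ExtraLight:
        """A light added in post; negative intensity marks an area to darken."""
        position: Vec3
        intensity: float
        cone_angle: float = 0.0   # > 0 means a spotlight rather than a point light

    class TakeEditor:
        """Scrub and adjust a recorded take (a Take from the earlier sketch)
        without ever touching the original capture data."""
        def __init__(self, take):
            self.take = take
            self.playhead = 0.0
            self.muted: Set[str] = set()     # UUIDs hidden from the render
            self.extra_cameras = []          # each entry: a name plus its own camera track
            self.extra_lights: List[ExtraLight] = []

        def seek(self, time: float):
            """Move the playhead and return the nearest captured frame."""
            self.playhead = time
            return min(self.take.frames, key=lambda f: abs(f.time - time))

        def mute(self, uuid: str) -> None:
            """Hide an object or avatar, for isolation / green-screen style shots."""
            self.muted.add(uuid)

        def unmute(self, uuid: str) -> None:
            self.muted.discard(uuid)

        def add_light(self, light: ExtraLight) -> None:
            self.extra_lights.append(light)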

One particular addition to the scene data that would be exceedingly useful is the ability to overlay facial animation data onto an avatar's performance. Let's face it, the current expressions in SL are clunky at best and downright off-putting at worst. Imagine shooting a scene and then recording the facial acting through a computer vision system that uses a webcam to interpret your expression (oh yeah, it's possible). The same could be done for the hands, which right now are little more than great big clunky mitts. The level of nuance these enhancements could bring would be significant, to say the least. These are advanced functions, to be sure, but they would end up having a major impact on the quality of machinima produced with such a system, and they are something for which Deep Eye would be uniquely suited.
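
As a rough illustration of how a webcam-derived face track could be layered onto a recorded take: the merge itself is almost trivial once the performance is data. The hard part, the computer vision, is assumed here to exist and to spit out time-stamped morph weights; everything below is made up for illustration:

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class FaceFrame:
        """One webcam-derived sample of facial expression, as morph weights."""
        time: float
        morph_weights: Dict[str, float]   # e.g. {"smile": 0.7, "brow_raise": 0.2}

    def overlay_face_track(body_frame_times: List[float],
                           face_track: List[FaceFrame]) -> Dict[float, Dict[str, float]]:
        """For each captured body frame, pick the nearest-in-time face sample,
        giving a per-frame map of morph weights to layer onto the avatar at render time."""
        merged: Dict[float, Dict[str, float]] = {}
        for t in body_frame_times:
            nearest = min(face_track, key=lambda f: abs(f.time - t))
            merged[t] = dict(nearest.morph_weights)
        return merged

    # e.g. weights = overlay_face_track([f.time for f in take.frames], webcam_track)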

There are of course questions to be addressed before any of this can leave paper. For example, would SL allow for the sort of on-demand asset-rezzing described, and if so, what would its limitations be? There are also the obvious concerns of the scope of this project and the amount of effort required to bring it to fruition. Another valid question is how to marry this data playback to a rendering system. My hunch would be to leverage an existing rendering engine such as Blender's, although leaving the interface open to allow for user choice may very well be a valid option too.
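
As one possible shape for that Blender handoff: if a take's camera track were exported to a simple JSON file (the layout below is made up), a short script using Blender's Python API could key a real Blender camera to follow it, and Blender's renderer would take it from there. Treat this purely as a sketch of the idea, not a working pipeline:

    # Run inside Blender as a script. Reads an exported camera track
    # (an invented JSON layout: a list of {"frame": int, "position": [x, y, z]})
    # and keys a Blender camera to follow it.
    import json
    import bpy

    with open("/path/to/exported_take_camera.json") as fh:
        track = json.load(fh)

    cam_data = bpy.data.cameras.new("DeepEyeCam")
    cam_obj = bpy.data.objects.new("DeepEyeCam", cam_data)
    bpy.context.scene.collection.objects.link(cam_obj)

    for sample in track:
        cam_obj.location = sample["position"]
        cam_obj.keyframe_insert(data_path="location", frame=sample["frame"])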

All in all, this might be a crazy rant, but hey, that's what I'm here for.

2 comments:

evolutie said...

Hi Alan. Great idea.. been thinking about this since I read this interview with Eric Call.
http://www.orange-island.com/?p=387
(scroll to watch on your machinima-made-easier-wishlist)

Alan Tupper said...

Thanks for the comment! Glad to see I'm not the only one who thinks stuff like this would be useful!
