29 November 2008

MIT Creates Center For Future Storytelling

MIT's new Center for Future Storytelling is an amazing and encouraging development. Several of the projects they plan to take on are addressing key questions I have posed throughout the history of this blog.

04 September 2008

3D Morphable Model Face Animation

What is amazing is that this video was posted to YouTube in 2006.

Temporal Resolution

In my very first post to this blog I complained about the limitations of video frame rates (temporal resolution). I was reading more on the products offered by Geometric Informatics and it seems their GeoVideo Real-Time Motion Capture Camera runs at a frame rate that begins to make things interesting on the road towards what I will call Granular Motion Synthesis.

Granular Synthesis, in reference to video, has been inaccurately co-opted for some years by Kurt Hentschlaeger and Ulf Langheinrich, when in fact they only break the video signal down into frame-length grains of 1/25th of a second, or 40 milliseconds. The microsound time scale makes this claim to Granular Synthesis more poetic than truthful:

"Microsound includes all sounds on the time scale shorter than musical notes, the sound object time scale, and longer than the sample time scale. Specifically this is shorter than one tenth of a second and longer than 10 milliseconds"

GeoVideo claims to acquire "absolute coordinates at 180 fps" - this equates to 7.2 times the temporal resolution of standard 25 fps PAL video. Each GeoVideo "frame" is therefore approximately 5.6 milliseconds in duration. To give this some context on the microsound time scale, 5 milliseconds is the duration of a honey bee’s wing flap.
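The frame-duration arithmetic is simple enough to sketch in a few lines of Python. The three frame rates come from the text; the helper function name is mine:

```python
# Frame durations at the capture rates discussed above.

def frame_duration_ms(fps: float) -> float:
    """Duration of a single frame in milliseconds."""
    return 1000.0 / fps

for name, fps in [("PAL video", 25), ("IMAX HD", 48), ("GeoVideo", 180)]:
    print(f"{name}: {fps} fps -> {frame_duration_ms(fps):.1f} ms per frame")

# GeoVideo's gain in temporal resolution over PAL:
print(f"resolution ratio: {180 / 25:.1f}x")  # 7.2x
```

At 180 fps each frame lasts about 5.6 ms, which is why the honey bee comparison is apt.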

Of course the problem lies in the lack of real-time playback systems for 180 fps video. No DVD player, tape player, celluloid projection system, or other means that doesn't involve custom hardware or a computer is capable of playing this kind of content back to an audience at its full temporal resolution. Even IMAX HD is only 48 fps.

So put plainly, even if I could make a composition of a honey bee dancing wild patterns through spacetime using Granular Motion Synthesis, I would only have the satisfaction of watching it on my laptop. Showing it to a large theater of people seems to still be a ways down the road.

02 September 2008

Quantum Camera Revisited

In January of 2007 I posted about what I considered "Quantum Camera Components". I made reference to speech recognition technology, dynamic time warping, and depth perception as key elements of the so-called Quantum Camera of the future. I am happy to report that now, only a little more than a year and a half later, major elements of that system have evolved beyond what I was expecting. And the real fireworks are that the brain of the Quantum Camera has already been put to use in a beautiful music video for Radiohead's "House of Cards". Here is text from Google explaining:

"No cameras or lights were used. Instead two technologies were used to capture 3D images: Geometric Informatics and Velodyne LIDAR. Geometric Informatics scanning systems produce structured light to capture 3D images at close proximity, while a Velodyne Lidar system that uses multiple lasers is used to capture large environments such as landscapes. In this video, 64 lasers rotating and shooting in a 360 degree radius 900 times per minute produced all the exterior scenes."

The data the LIDAR scanner produces in Radiohead's video is ostensibly limited by a few somewhat arbitrary factors. I would guess these include things such as:

- scanner-head rotations per second
- number of lasers
- various settings concerning signal-to-noise thresholds.
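To see how those factors combine, here is a back-of-envelope point-rate estimate. The laser count and rotation speed come from the quote above; the angular resolution is a hypothetical assumption of mine, not a published spec:

```python
# Rough point rate for the scanner described in the quote.

lasers = 64
rpm = 900                       # rotations per minute, per the quote
rotations_per_sec = rpm / 60    # 15 rotations per second

angular_resolution_deg = 0.4    # ASSUMED for illustration; not from the source
points_per_rotation = 360 / angular_resolution_deg  # 900 samples per laser

points_per_sec = lasers * rotations_per_sec * points_per_rotation
print(f"~{points_per_sec:,.0f} points per second")  # ~864,000
```

Tweaking any one of the three factors (lasers, rotation speed, angular resolution) directly scales the density of the resulting point cloud, which is exactly why they read as arbitrary limits on the imagery.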

It is beautiful how the LIDAR imagery is distorted and skewed through analog processes. The video has something reminiscent of the Rutt Etra video synthesizer, not only for its visual aesthetics but also in relation to how the property of depth is a factor in the data-visualization.

Yet it is possible to imagine that in the not-too-distant future, a more evolved scanning system would be able to reconstruct a photographically accurate four-dimensional reality. In this light, the team behind this Radiohead video has achieved a remarkable milestone in the history of moving images and cinema. If we are already seeing LIDAR visualizations in this year of 2008, it should be within the next couple of years that the full impact of this new medium will reach its visual potential. I can't wait!

Meanwhile, on the speech recognition front, the Cambridge-based company Everyzing has technology which, if it's scalable for the entire searchable web, should be able to help on the Hypercubist sound stage. While not conceived for use in production, it would probably be a matter of logistics and inspiration to get the computers needed to crunch the transcription data in close to real time.

As you watch the video above, try to imagine replacing key words of Thomas Wilde's monologue as per the following rubric. It won't work in every sentence, but it might help you see my perspective on the future of this technology and how it relates to cinematic workflow.

"major search portal" - major movie studio
"large destination website with multimedia" - major motion picture
"if these videos are on the web" - if these takes are in the material we shot
"search economy" - editing/tesseracting timeline/hyperspace
"users expect fine grain control online" - editors/tesseractors expect fine grain control when editing

This brings me to the dynamic time stretching issues, and to a recent post I made about Microsoft's Video Synth project. At the time I merely archived the video without comment. What I liked about the technology was its ability to use geolocation data to create visually meaningful relationships from various different photographs.

The potential to combine a technology like Video Synth with a LIDAR scanning system strikes me as the next logical step. This next generation Quantum Camera could allow the true atomization of moving images and usher in an entire new era of cinematic imagery. Instead of Frames, Hypercubes; instead of Continuity, Quantinuity; instead of editing, Tesseracting. This will be a wild ride, and the evidence suggests we've only just begun.

Wow. Just contemplating the Petabytes of data this new cinema will generate gives me a headache.
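A rough sketch of where that headache comes from, assuming a hypothetical 640x480 depth map at GeoVideo's stated 180 fps with three 16-bit coordinates per point (the resolution and bytes-per-point are my assumptions, not measured figures):

```python
# Back-of-envelope data rate for hypothetical uncompressed 4D capture.

width, height = 640, 480   # ASSUMED capture resolution
bytes_per_point = 6        # x, y, z as 16-bit values (ASSUMED)
fps = 180                  # GeoVideo's stated frame rate

bytes_per_sec = width * height * bytes_per_point * fps
tb_per_hour = bytes_per_sec * 3600 / 1e12
hours_per_petabyte = 1e15 / (bytes_per_sec * 3600)

print(f"{bytes_per_sec / 1e6:.0f} MB/s, {tb_per_hour:.2f} TB/hour")
print(f"~{hours_per_petabyte:.0f} hours of capture per petabyte")
```

Even with these modest assumptions the stream runs over 300 MB/s, so a feature film's worth of raw takes would indeed climb into petabyte territory.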

Thanks to Brandon Rosenbluth for dropping the hint about "House of Cards".

18 July 2008

Cosmopolitan Cyborg Explained

Given the goal of creating an entertaining and cutting edge new station identity campaign for the Italian music network All Music, two words stood out as key concepts:


cyborg
  A human who has certain physiological processes aided
  or controlled by mechanical or electronic devices.

cosmopolitan
  1. Pertinent or common to the whole world.
  2. Having constituent elements from all over the world or from many different parts of the world.

Young people today are more connected to new technologies than ever before. Computers, mobile phones, and mp3 players are just some of the array of gadgets that define the aesthetics of the contemporary youth culture. Today’s young person can be viewed as a cyborg, as many processes which were formerly time consuming real world events are now done virtually with the click of a button.

Globalization has accelerated the mixing and mashing of cultures, producing hybrid forms of art and music. Today’s young person is connected through the internet to large networks of creative influence from all over the world, with an identity that is as digital as it is cosmopolitan.

To express these aspects of the contemporary digital landscape, I proposed to create a personality who is him/herself emblematic of these qualities: the Cosmopolitan Cyborg. He/she is a hypercubist composite personality made up of the faces and voices of the All Music network’s own VJs. Using only original sounds recorded with the human voice, elaborate yet recognizable genres of contemporary music are synthesized. The result is a concrete answer to the tired world of lip-sync found in music videos. I call it videomusic.

The payoff, which translates to: “All Music, a republic founded on music”, as spoken from the mouth of the Cosmopolitan Cyborg, reads as an intriguing commentary on contemporary Italian music culture, simultaneously conscious of the past and yet boldly jumping into the future of digital aesthetics.

In the future I hope to be able to work with musicians, bands and composers in this manner to create a new definition for music videos befitting our contemporary era of hypercubist aesthetics.

04 June 2008

Take that, Goliath!

The new Weezer video "Pork & Beans" is proof of the slowly crumbling mass media. Corporate media's Goliath is besieged by a swarm of YouTube celebrity Davids!

28 April 2008

Quantinuity

noun, plural -ties
  1. Any strategy which articulates vectors of consistency within a multiplicity.
  2. In microcinematics, methods of manipulating video durations which produce results that are not reproducible with celluloid film.
  3. In hypernarrative, methods of dispersing traditional narrative structures across multiplicities inherent in networks and swarms.
[Origin: Coined by Gabriel Shalom in 2007 in a letter to Sven König, a portmanteau of quantum and continuity]

See also Continuity Editing.

30 January 2008