27 January 2009

DIY Nintendo Wii 3D Tracking Hack

Seems to me after watching this video that widespread 3D holographic effects are just around the corner. I wonder how difficult it would be to get this working with most webcams in most laptops, or if the necessary sensor technology could be embedded easily.
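For the curious, the heart of the effect is surprisingly simple geometry: once any sensor (Wii remote, webcam, whatever) gives you a head position, the renderer only needs to build an off-axis view frustum from it. A minimal sketch of that calculation, with all function names and dimensions invented for illustration:

```python
# Sketch of head-coupled perspective: turn a tracked head position into
# an off-axis (asymmetric) view frustum, the trick behind the Wii hack.
# Everything here is a hypothetical placeholder, not Johnny Lee's code.

def off_axis_frustum(head_x, head_y, head_z, screen_w, screen_h, near=0.1):
    """Head position is in meters relative to the screen center;
    head_z is the distance from the screen plane (must be > 0).
    Returns (left, right, bottom, top) of the near-plane frustum."""
    # Project the screen edges back to the near plane, shifted by the
    # viewer's offset, so the screen behaves like a window into the scene.
    left   = near * (-screen_w / 2 - head_x) / head_z
    right  = near * ( screen_w / 2 - head_x) / head_z
    bottom = near * (-screen_h / 2 - head_y) / head_z
    top    = near * ( screen_h / 2 - head_y) / head_z
    return left, right, bottom, top

# A viewer 60 cm in front of a 40x30 cm screen, head 10 cm to the right:
l, r, b, t = off_axis_frustum(0.10, 0.0, 0.60, 0.40, 0.30)
```

Feeding those four numbers into something like OpenGL's `glFrustum` each frame is what makes the display read as a window rather than a flat picture.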

Thanks to Rigo for showing me this video :)

22 January 2009

Cubism vs. Hypercubism

I thought I would take a stab at a concise definition of Hypercubism, a word I use quite often and have until now perhaps not defined so exactly. For the sake of reference, here is an excerpt of Wikipedia's definition for Cubism:
In cubist artworks, objects are broken up, analyzed, and re-assembled in an abstracted form — instead of depicting objects from one viewpoint, the artist depicts the subject from a multitude of viewpoints to represent the subject in a greater context. Often the surfaces intersect at seemingly random angles, removing a coherent sense of depth. The background and object planes interpenetrate one another to create the shallow ambiguous space, one of cubism's distinct characteristics.

In contrast we might consider this summation of Hypercubism:
In hypercubist artworks, objects are particlized, analyzed and synthesized in a realistic form — instead of depicting all objects from one temporal perspective, the artist (or artists) depict the subject from a multitude of temporal perspectives to represent the subject in a greater temporality. Often the surfaces intersect seamlessly, creating a coherent four-dimensional spacetime illusion. The background and object planes are always distinct to create deep concrete space, one of hypercubism's distinct characteristics.

Besides being a bit of encyclopedic revisionism, these two definitions set up a useful theoretical dichotomy between Cubism and Hypercubism. Simply put:

Cubism shows multiple spaces in the same time while Hypercubism shows multiple times in the same space.

Microsoft's Seadragon and Photosynth

I thought I would resurrect this post and talk a little bit more about what I think the implications are for Hypercubist Cinema with technologies like Seadragon and Photosynth.

These technologies illustrate perhaps the most fundamental aspect of my theory of hypercubism, namely, an aesthetics in which multiple times are visible in the same space.

If the set of a fictional movie were photographed using such technology in concert with some of the multi-camera object-oriented methods and/or scanners, I imagine a robust hybrid system could emerge which would realistically texture map the photos onto detailed clouds of spatial data. The implications for the editing (read: tesseracting) afterward would be tremendous, allowing granularization of every object and every word, every facial expression or movement.
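To make "texture mapping the photos onto detailed clouds of spatial data" a little more concrete: in the simplest case each 3D point is projected through a pinhole-camera model into the photograph and picks up the pixel color it lands on. A toy sketch of that idea, with the camera parameters and data shapes entirely invented:

```python
# Minimal sketch of colorizing a point cloud from a photo: project each
# 3D point (in camera space) into the image and sample the pixel there.
# The intrinsics (focal, cx, cy) and the tiny "photo" are made up.

def colorize_points(points, image, focal, cx, cy):
    """points: list of (x, y, z) in camera coordinates, z > 0.
    image: 2D grid (rows of RGB tuples); focal/cx/cy in pixels."""
    colored = []
    for x, y, z in points:
        u = int(focal * x / z + cx)   # perspective projection to pixel u
        v = int(focal * y / z + cy)   # ... and pixel v
        if 0 <= v < len(image) and 0 <= u < len(image[0]):
            colored.append(((x, y, z), image[v][u]))
    return colored

# A 2x2 "photo" and one point straight down the optical axis:
photo = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
result = colorize_points([(0.0, 0.0, 1.0)], photo, focal=1.0, cx=1.0, cy=1.0)
```

Real systems like Photosynth of course solve the much harder problems of estimating the cameras and the cloud in the first place; this only shows the final mapping step.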

I predict that the difficulty will be in recording sound in such an environment. In a way, we might see a second era of silent films with the emergence of early hypercubist systems. My hunch is that commercial pressure will force new innovations in multi-track recording to the degree that individual sound sources in the same acoustic space will be able to be mixed independently of one another as if they'd been recorded in separate isolation booths. This theme of hypercubist synchronous sound deserves its own post in the future.

PS: sorry for the long BMW advertisement at the end of this video; BMW sponsors the TED Talks...

18 January 2009

200 Cameras, 2.5 million frames and 20,000 Gigabytes worth of Toshiba Magic

Although we've seen this effect used since The Matrix in numerous advertisements, films and spoofs, this example reveals in its reduced, contemporary aesthetics the object-oriented nature of the hypercubist revolution on its way. The subtle interplay of the different players in this "time sculpture" (as the ad agency people call it) reveals the shortcomings of 2D compositing while also hinting at an entirely different image world.

Of course my contention is that the methodology used in this process is already theoretically obsolete. I predict that the scanning technologies of the future will be able to yield clouds of photo-realistic data without the need for 200 distinct video cameras. Nonetheless, the state-of-the-art Californian motion capture company MOVA also uses a highly complex strobing multi-camera system together with phosphorescent makeup.

Their white paper on volumetric cinematography is a great quick read, written in accessible, non-scientific language, and resonates strongly with my own hypercubist theory.

These developments surely must be fascinating for the early pioneer of blue-screen proto-hypercubism, Zbig Rybczynski. His 1980 film Tango was an essential forerunner of such object-oriented imagery.

The concept sketches, diagrams and film still from the finished composite of Tango reveal a direct, hand-crafted analog predecessor of the Toshiba spot.

01 January 2009

The Explosion of Cinematic Time

This coy video from Dan Goldman of Adobe Systems alludes to some pretty fundamental concepts of what an object-oriented cinema might entail:
  • Drawing on Objects
    (making graphic changes to objects which stay with the object in time, as opposed to simulating object-oriented change by altering every frame)

  • Delineating Paths of Objects
    (tracking and displaying an object's path through space)

  • Attaching Visual Metadata to Objects
    (text annotations in the form of cartoon speech bubbles)

  • Object Timeline Scrubbing
    (using a "click and drag" approach to scrub the timeline)

  • "Throwing" Objects
    (giving objects the ability to be manipulated using analog velocity controls)

  • Segmentation of Objects
    (enabling puppet-like effects)

  • Hypercubist Time
    ("drag and drop" manipulation of individual objects through their own timelines to create composite hypercubist time)

Dan explains that his software system analyzes the movement of tracked points, allowing it to recognize moving objects. The video also demonstrates how algorithmic analysis will be able to compute hypercubist composites accurately enough to eliminate visual artifacts.
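The last item on that list can be made concrete with a toy data structure: give each segmented object its own clip of frames plus its own time offset, and let a composite frame sample every object at its own local time. A hedged sketch (the object names, frame data, and dict layout are all invented for illustration, not Goldman's system):

```python
# Sketch of "hypercubist time": one composite frame mixes several
# moments of the same space by shifting each object along its own
# timeline independently of the global playhead.

def composite_frame(objects, global_t):
    """objects: list of dicts with 'name', 'frames' (list of states),
    and 'offset' (this object's shift along its own timeline)."""
    frame = {}
    for obj in objects:
        local_t = global_t + obj["offset"]                       # object's own time
        local_t = max(0, min(local_t, len(obj["frames"]) - 1))   # clamp to clip
        frame[obj["name"]] = obj["frames"][local_t]
    return frame

dancer = {"name": "dancer", "frames": ["pose0", "pose1", "pose2"], "offset": 0}
ball   = {"name": "ball",   "frames": ["low", "mid", "high"],      "offset": 2}
frame = composite_frame([dancer, ball], 0)   # dancer at t=0, ball at t=2
```

"Drag and drop" manipulation of an object through time then reduces to editing its `offset` and re-rendering the composite.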

The curious and persistent question in my mind is: when will this hypercubist vision for moving images be embraced by the video camera manufacturers? Why not move some of the processing power needed to identify objects into the camera system itself? There are certainly high-end special-effects tracking systems for the commercial industry that do all kinds of compositing and layering of 3D and real images, but I think lower-price-point systems have a genuine appeal for numerous less glossy applications. It seems like Dan is already on this tip in his experiments with puppets.

The Dynamic Graphics Project at the University of Toronto has also been contributing to this field with its Dimp video player prototype, which allows users to browse video clips by directly dragging their content.
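The essence of that kind of direct-manipulation scrubbing is easy to sketch: if you already have an object's tracked position in every frame, dragging the object just snaps the playhead to the frame whose position is nearest the cursor. A toy version (the trajectory data is made up, and Dimp's actual method is surely more refined):

```python
# Sketch of drag-to-scrub: given an object's per-frame (x, y) positions,
# map a drag point to the frame index with the nearest tracked position.

def scrub_to(trajectory, drag_x, drag_y):
    """trajectory: list of (x, y) object positions, one per frame.
    Returns the index of the frame closest to the drag point."""
    best, best_d2 = 0, float("inf")
    for i, (x, y) in enumerate(trajectory):
        d2 = (x - drag_x) ** 2 + (y - drag_y) ** 2  # squared distance
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best

path = [(0, 0), (10, 0), (20, 5), (30, 15)]   # object moving across frame
frame = scrub_to(path, 19, 4)                  # drag lands near frame 2
```

The nice inversion here is that time becomes a by-product of space: the user never touches a timeline, only the object itself.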

These interface prototypes are a great step forward in establishing hypercubist video aesthetics. I think 2009 looks auspicious for these projects. With YouTube's recent launch of annotations, even with a silly example like this it's not hard to imagine more and more regular consumers demanding sophisticated object-oriented video tools.