23 November 2010

3D Video Capture with Kinect vs Neato Robotic Vacuum



Let's see this done with two Kinects and some kind of interpolation algorithm, so that we get full-on volumetrics! That, or trump the whole Kinect thing by hacking the XV-11 Lidar!

14 November 2010

Object Recognition using Kinect on the PC



If this kind of thing is possible with a live video stream, seamless hardware and software integration should be possible in the future. Either the camera recognizes objects and writes semantic objects into the video metadata while recording, or the video could be processed after it is recorded.

09 November 2010

One Step Closer to Universal EDL



Last week I attended the Mozilla Drumbeat Festival in Barcelona. It gave me an opportunity to collaborate with an amazing ad hoc team of people in the context of the Open Video Lab, chaordinated by Brett Gaylor and David Humphrey. Together, over the course of a two-day sprint, a big team of us built a demo of the popcorn.js JavaScript library that really shows off the potential beauty of web made movies. The Vimeo video above is just a screen capture; for the live demo, visit this page.

Photo by Samuel Huron BY-NC-ND
It was a very rewarding experience to contribute to the aesthetic and conceptual process. I enjoyed the challenge of conducting interviews in languages I don't speak, and collaborating with the multilingual Xabier Cid on the editing process. I was honored to be able to address the audience at the "BEST of the FEST closing variety slam showcase" about the need for new approaches to film school in the face of scrum/agile approaches to storytelling.



What is great about the demo is how it uses time-coded metadata to retrieve live content from Flickr and Twitter in real time. It shows how, as we move towards an object-oriented moving image, we will continue to redefine both what cinema is and our notion of editing. The tweets are aggregated from the #futureofeducation hashtag. The Flickr photos that appear in the demo are retrieved based on timeline metadata that I approximated by placing dummy content (the blue events in the screenshot above) on the timeline to get a sense of a rough rhythm. I then gave a rough approximation of that timecode information to Berto Yáñez, the programmer who did much of the heavy lifting on the demo. Oscar Otero helped with the design of the page. Oscar, Berto and Xabier all work together at the Galician web company A navalla suíza.

Photo by Homardpayette
This process, which involved swapping lots of data across computers via USB sticks, underscored the need for a Universal Edit Decision List (EDL). This was something I identified about a year ago as part of my rubric for open source cinema. The Universal EDL got discussed quite a bit during the video lab, and together with the amazing work that's already been done creating a web-based timeline interface with Universal Subtitles, it seems like the seed of inspiration to take things a step further has been planted. I am very excited to have contributed to these developments towards an object-oriented open source cinema!

It would be great to see all the names of the participants in the workshop added to the demo. During the demo Laura Hilliger, David Humphrey and I put together a nice cloud-based credit concept for solving the dilemma of crediting multiple parties with multiple credits. Laura should have a rough list of names and roles, and anyone who is missing can use the #drumbeat #videolab hashtags on Twitter to ID themselves, or comment on the video, so we can round everybody up.

20 August 2010

Multiple Sidosis



An amazing early example of hypercubist videomusical aesthetics, thanks to the Split Screen blog!

06 June 2010

New surveillance camera system provides text feed



(PhysOrg.com) -- Scientists at the University of California in Los Angeles (UCLA) have developed a prototype surveillance camera and computer system to analyze the camera images and deliver a text feed describing what the camera is seeing. The new system aims to make searching vast amounts of video much more efficient.

read the full article

15 March 2010

Augmented Reality vs. Aura Recognition [part 3]

This post was first published as part three of a series of three posts on Augmentology 1[L]0[L]1

Part 3: The Crystal Ball



Film Still, The Wizard of Oz.

In the 1939 film version of The Wizard of Oz, Dorothy visits Professor Marvel and has him read her fortune from his crystal ball. He asks her to close her eyes and takes the opportunity to “read” the belongings in her basket. From these artifacts, Professor Marvel pieces together a story based on his intuition of the meaning of the objects and the context of Dorothy’s visit. Professor Marvel reads Dorothy’s aura by diving into her metadata and delivers his observations in dramatic and persuasive tones.

Now imagine if Dorothy visited Professor Marvel in the 21st century. His crystal ball is a web-ready mobile device capable of scanning Dorothy’s possessions, clothes, face – maybe even her DNA. This cloud of data is cross-referenced and interlinked with Dorothy’s online profiles and he’s able to quickly conjure up an extremely detailed impression of Dorothy’s past, present and future. At the very least, he’d spot Auntie Em in Dorothy’s Flickr account and come to similar conclusions about Dorothy’s family situation as he does in the film.

As aurec technology improves it will know more and more about us; it will become better at predicting what we do and how we prefer to do it. It will enable us to customize our interactions with everything that surrounds us while also allowing us to share these preferences with others. Search is the essential experience of the web (witness Google). The web asks us “what are you looking for?” every time we use it. To understand the potential of aurec we need to be sensitized to the fact that it will reduce the importance of the question/answer relationship posed by the web and open up an environment of ambient data.

It is my hope that shared aurec experiences will have positive effects on our relationships with other people, allowing us new degrees of emotional intimacy and mutual understanding. Aurec has the potential to change our relations with natural and urban environments by revealing otherwise hidden information on a bespoke basis. This could lead to increased corporate and governmental transparency and accountability as the norm shifts toward a paradigm of sharing data rather than hiding it. The more we shift our attention away from gimmicky iPhone apps and focus on the broader ontological implications of aura recognition, the better aurec’s chances of actualization.

Special thanks to NotThisBody for brilliant insights and reflections while writing this article.

06 February 2010

Augmented Reality vs. Aura Recognition [part 2]

This post was first published as part two of a series of three posts on Augmentology 1[L]0[L]1

Part 2: Infinite Summer Afternoons


Images from Initiations-Studies II by Panos Tsagaris with Kimberley Norcott

Having summarily rejected the term augmented reality for the reasons listed here, I’ll now propose alternate terminology to describe the phenomenon. The following elements contribute to this formation:
  • The mobile web will enable us to become aware of metadata that was previously obscured in day-to-day life.
  • Many current AR applications pride themselves on exposing indications of present metadata relationships which are not as readily apparent as traditional urban indicators (think: fashion).
  • Contemporary visions of AR imagine something which will merely allow us to hold up our smartphones and look through an AR “window”.

This process of metadata revealing is termed “aura recognition” (or aurec for short). In a future post I will address what I see as shortcomings of visual interfaces for aurec.

In his essay The Work of Art in the Age of Mechanical Reproduction (1935), Walter Benjamin makes the following observations regarding aura:
“If, while resting on a summer afternoon, you follow with your eyes a mountain range on the horizon or a branch which casts its shadow over you, you experience the aura of those mountains, of that branch. This image makes it easy to comprehend the social bases of the contemporary decay of the aura. It rests on two circumstances, both of which are related to the increasing significance of the masses in contemporary life. Namely, the desire of contemporary masses to bring things “closer” spatially and humanly, which is just as ardent as their bent toward overcoming the uniqueness of every reality by accepting its reproduction. Every day the urge grows stronger to get hold of an object at very close range by way of its likeness, its reproduction.”

Certainly – since 1935 – these two “social bases” identified by Benjamin have reached their apex in contemporary digital life. Never before have we had as much convenience in bringing things – whether physical objects or information – into our immediate proximity (think: Amazon, eBay, Google). Neither have we had the experience of such widespread meme and brand propagation in our physical environment (e.g. shopping malls, international airports, and fast food franchises). Benjamin continues:
“Unmistakably, reproduction as offered by picture magazines and newsreels differs from the image seen by the unarmed eye. Uniqueness and permanence are as closely linked in the latter as are transitoriness and reproducibility in the former. To pry an object from its shell, to destroy its aura, is the mark of a perception whose “sense of the universal equality of things” has increased to such a degree that it extracts it even from a unique object by means of reproduction. Thus is manifested in the field of perception what in the theoretical sphere is noticeable in the increasing importance of statistics. The adjustment of reality to the masses and of the masses to reality is a process of unlimited scope, as much for thinking as for perception.”

This “sense of the universal equality of things” is the hallmark of the web. All searches are, ostensibly, equal before Google. Yet, among the ruins of this auric destruction, the web is simultaneously imbuing our lives with all kinds of unique and permanent phenomena. These phenomena make up the essence of our digital auras; auras created less by physical objects than by the specificity of context, relationship and juxtaposition. Aura Recognition is the means by which we access these phenomena.

Consider for instance how unique it is to geophysically meet someone whom you’ve previously known only online. In the best case scenario, aurec will help us make sense of the emotional significance of digital phenomena in ways which are meaningful and helpful. Location-based services (think: GPS technology) provoke new experiences which are just as dependent on proximity as Benjamin’s proverbial summer afternoon.

(to be continued in "Part 3: The Crystal Ball")

16 January 2010

Augmented Reality vs. Aura Recognition [part 1]

This post was first published as part one of a series of three posts on Augmentology 1[L]0[L]1

Part 1: Absurd Assumptions



As many opinion leaders have noted, Augmented Reality (AR) may very well be the next evolutionary step in bringing the metadata of the web into our day-to-day lives. Some suggest that AR technology may even surpass the Web in its sustained impact on culture.



While I whole-heartedly agree with this observation, the use of the term “Augmented Reality” may actually impede any progress forged by these technologies, especially in terms of broad/mainstream acceptance.

The first reason the phrase “Augmented Reality” may impede the cultural uptake of associated technologies is its use of the word “augmented” – meaning to raise or make larger. AR enthusiasts seem comfortable implying that this new technology is somehow the first to augment or enhance our reality. This seems absurd, as human societies have a well-documented history of using biochemical technology to augment reality in the tradition of psychotropic plant-aided shamanism. The innovation of written language was a concrete visualization of reality-augmenting metadata. The city, too, may be considered an extension of reality, given that cities are highly constructed frameworks of architecture, roads, sewers, and electrical and telephone lines. It seems more relevant to use a word that more accurately describes the idiosyncratic peculiarities of a mobile web-ready experience.

My second reason for objecting to the AR term concerns the use of the word “reality” in relation to what are (in most cases) mobile-web applications. This usage implies that other computer applications are not affecting reality, or at least not affecting it sufficiently to be labeled accordingly. This also seems an absurd assumption; the host of software that has prevailed over the history of computing has had an effect on reality too (this, of course, is a total understatement). If it were not for preceding software that has already changed our reality, these so-called “augmented reality” applications would not even exist. Furthermore, the use of “reality” in this context suggests that there is one concrete reality which we are in the process of altering with specific technology. Yet each of us has our own subjective “reality” experience, with some physicists even postulating theories of a holographic reality. While standards for augmented reality ought to be open to ensure accessibility by any mobile web-enabled device, it is a fallacy to interpret these standards as a consensus on reality itself. This new technology is poised to allow us to customize and tweak our own experience of reality like never before, as well as the “reality” we share with others.

(to be continued in "Part 2: Infinite Summer Afternoons")