23 November 2010
Let's see this done with two Kinects and some kind of interpolation algorithm, so that we get full-on volumetrics! That, or trump the whole Kinect thing by hacking the XV-11 Lidar!
14 November 2010
If this kind of thing is possible with a live video stream, it should be possible to have seamless hardware and software integration in the future. Either the camera recognizes objects and writes semantic objects into the video metadata while recording, or the video could be processed after it has been recorded.
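As a rough sketch of what that could look like (all of the names and fields below are made up for illustration, not any existing format), the camera or a later processing pass could write a time-coded track of recognized objects alongside the footage, and playback tools could then query that track by time:

```typescript
// Hypothetical shape for time-coded semantic metadata: each object the camera
// (or a post-processing pass) recognizes gets a label, a confidence score,
// and the time span during which it appears in the footage.
interface SemanticObject {
  label: string;        // e.g. "speaker", "whiteboard"
  confidence: number;   // 0..1, how sure the recognizer is
  startTime: number;    // seconds from the start of the clip
  endTime: number;      // seconds from the start of the clip
}

interface VideoMetadataTrack {
  videoId: string;
  objects: SemanticObject[];
}

// Look up which semantic objects are "on screen" at a given playback time.
function objectsAt(track: VideoMetadataTrack, time: number): SemanticObject[] {
  return track.objects.filter(o => time >= o.startTime && time <= o.endTime);
}

// Example: a two-object track for a recorded clip.
const track: VideoMetadataTrack = {
  videoId: "workshop-demo",
  objects: [
    { label: "speaker", confidence: 0.92, startTime: 0, endTime: 45 },
    { label: "slide",   confidence: 0.81, startTime: 12, endTime: 30 },
  ],
};

console.log(objectsAt(track, 20)); // both objects are visible at t = 20s
```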
09 November 2010
Photo by Samuel Huron BY-NC-ND
What is great about the demo is how it uses time-coded metadata to pull live content from flickr and twitter in real time. It shows how, as we move towards an object-oriented moving image, we will continue to redefine what cinema is, along with our notion of editing. The tweets are aggregated from the #futureofeducation hashtag. The flickr photos that appear in the demo are called in based on timeline metadata that I approximated by putting dummy content (the blue events in the screenshot above) on the timeline to get a sense of a rough rhythm. I then passed a rough approximation of that timecode information to Berto Yáñez, the programmer who did much of the heavy lifting on the demo. Oscar Otero helped with the design of the page. Oscar, Berto and Xabier all work together at the Galician web company A navalla suíza.
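To give a sense of the mechanics (a speculative sketch, not Berto's actual code: the cue structure, function names and search URL are invented for illustration), time-coded cues like the dummy events above could be wired to the video's playback clock, so that each cue fetches and displays its flickr photo or tweet when the timeline reaches it:

```typescript
// A cue fires once when playback crosses its time and injects remote content
// (e.g. a flickr photo or a tweet) into the page.
interface Cue {
  time: number;                 // seconds into the video
  load: () => Promise<string>;  // fetches/returns an HTML snippet
  fired?: boolean;
}

function attachCues(video: HTMLVideoElement, container: HTMLElement, cues: Cue[]): void {
  video.addEventListener("timeupdate", () => {
    for (const cue of cues) {
      if (!cue.fired && video.currentTime >= cue.time) {
        cue.fired = true;
        cue.load().then(html => { container.innerHTML = html; });
      }
    }
  });
}

// Hypothetical usage: at 12s show a photo, at 30s show a tweet from the
// #futureofeducation hashtag (the URLs below are placeholders, not the
// demo's actual API calls).
const cues: Cue[] = [
  { time: 12, load: async () => `<img src="https://example.com/photo.jpg" alt="flickr photo">` },
  {
    time: 30,
    load: async () => {
      const res = await fetch("https://example.com/search?tag=futureofeducation");
      const data = await res.json();
      return `<blockquote>${data.text}</blockquote>`;
    },
  },
];

attachCues(
  document.querySelector("video")!,
  document.getElementById("overlay")!,
  cues,
);
```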
Photo by Homardpayette
It would be great to see all the names of the participants in the workshop added to the demo. During the demo Laura Hilliger, David Humphrey and I put together a nice cloud-based credit concept for solving the dilemma of crediting multiple parties, each with multiple roles. Laura should have a rough list of names and roles; anyone who is missing can use the #drumbeat and #videolab hashtags on twitter to identify themselves, or comment on the video, so we can round everybody up.