16 January 2007
Everything is Miscellaneous
Thanks to Joyce Shintani for blogging this. I found it relevant to the mpeg7 discussion, among other things I like in general about internet culture.
10 January 2007
Let The Games Begin?
Hi all! I'll be taking a somewhat different tack in my contributions here because of my different background. I am a social worker, with some background in neuropsychology and cognitive science from my undergraduate experience. I am also a phenomenologist and attempted writer, but have very limited knowledge of cinema per se. My primary interests in these technologies are:
a) in the feedback mechanisms which could potentially be used to incorporate viewer response into the process of "recomposing" quantum films
b) the subjective experience of viewing such a "self-composed" film
c) the social consequences of the loss of conventional shared narratives
d) applications of quantum cinema outside "art"
I want to start off this conversation by saying that quantum cinema already exists and it is called video gaming. Whether Pac Man qualifies for the medium is an academic discussion that is hardly relevant at this point. The new wave of game systems offers games in which characters can be designed, rendered somewhat realistically, and directly controlled to move within certain specifications. This creates virtually limitless narrative possibilities in a sense, although games are almost universally unsatisfying as stories.
One reason is that the stories tend to bottleneck at certain points (usually during the preconceived "cinematic" sequences during which the character does not have control). This creates the existential conundrum of having absolute control but no options.
A second reason is that the system gives the player too much control and predictability for the brain to translate into an emotional response. The most intense experiences I have had with cinema have been accompanied by the intense feeling of not being able to control outcomes and therefore being dependent on the story. One good example is watching an episode of "Curb Your Enthusiasm" and becoming painfully uncomfortable with the actions of the characters, feeling for example the intense need to apologize to the other characters for the actions of Larry David while at the same time feeling a great deal of sympathy for/with Larry David because of the actions of his wife.
The reason linear cinema is so successful is that it allows the filmmaker to speak directly from the story and therefore "make a statement". If you ask a video game player (full disclosure: I am one, I especially love shooters) what makes a game great, the most common answer is not the story, the music, the graphics, or even the design of the maps or the artificial intelligence. It is something called The Engine, the invisible but explicitly felt algorithms which determine the ways in which the manipulation of the controls translates into modifications of the game world. The well known Halo franchise, for instance, won out largely on the strength of its engine. Will fans of cinema one day stop talking about stories and start talking about engines?
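To make the idea concrete, here is a minimal sketch in Python of what I mean by an engine (all names and constants are hypothetical, not taken from any real game): a function that maps controller input and the current world state to the next world state. The "feel" players praise lives entirely in that mapping.

```python
from dataclasses import dataclass

@dataclass
class World:
    x: float = 0.0   # player position
    vx: float = 0.0  # player velocity

def engine_step(world: World, stick: float, dt: float) -> World:
    """The 'engine': maps controller input to changes in the world.

    The feel of the game lives in these constants: how strongly input
    accelerates the player, and how much momentum carries over.
    """
    ACCEL = 40.0     # responsiveness of the controls
    FRICTION = 0.9   # fraction of momentum kept per step
    vx = world.vx * FRICTION + stick * ACCEL * dt
    return World(x=world.x + vx * dt, vx=vx)

# One second of a player holding the stick right, then letting go:
w = World()
for t in range(60):
    w = engine_step(w, stick=1.0 if t < 30 else 0.0, dt=1 / 60)
print(round(w.x, 2))
```

Tuning ACCEL and FRICTION changes nothing about the story and everything about how the game feels.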
07 January 2007
Quantum Camera Components
The quantum camera has a brain and ears.
Speech recognition software performs best when you train the software by speaking a series of important words and sentences. It seems to me that the technology already exists to make prototypes of a system that would embed videos with full text transcriptions of everything spoken on camera as metadata. Never mind the fact that if what you were shooting had a screenplay, you could feed the entire screenplay to the camera before you shoot so it can anticipate what the actors will say.
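As a toy sketch of that anticipation step, assuming the recognizer hands us raw text and the screenplay is just a list of lines (the quoted lines below are merely illustrative), Python's standard library can already snap garbled recognizer output to the nearest scripted line:

```python
import difflib

# Hypothetical: the screenplay, fed to the camera before shooting.
screenplay = [
    "For relaxing times, make it Suntory time.",
    "I'm stuck. Does it get easier?",
    "Let's never come here again because it would never be as much fun.",
]

def anticipate(recognized_text: str) -> str:
    """Match (possibly garbled) recognizer output to the nearest
    screenplay line, so the transcription metadata stays clean."""
    matches = difflib.get_close_matches(recognized_text, screenplay,
                                        n=1, cutoff=0.5)
    return matches[0] if matches else recognized_text

print(anticipate("for relaxing time make it suntory time"))
```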
Techniques such as dynamic time warping would accommodate the variations between different takes of the same shot so that they are all regarded as versions of one another. It seems like mpeg7 would be the best possible way to store this data today.
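Here is a minimal sketch of the dynamic time warping idea on one-dimensional feature sequences; a real system would compare audio feature vectors, but the alignment logic is the same:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping: aligns two sequences that differ
    in timing, e.g. the same line delivered in two different takes."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Two "takes" of the same rising pitch contour, one delivered slower:
take1 = np.array([1.0, 2.0, 3.0, 4.0])
take2 = np.array([1.0, 1.1, 2.0, 2.1, 3.0, 4.0])
print(dtw_distance(take1, take2))  # small despite the different lengths
```

The distance stays small for two takes of the same delivery even when one is slower, which is exactly what lets them be filed as versions of one another.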
Frameline 47 lets you take advantage of the mpeg7 standard quite fully, and it's not surprising at all that a company like Eptascape would be in the security and surveillance market. What we need is for the metadata to be entered semi-automatically during the shooting process, combined with a learning curve for the camera where you have it learn what the actors look like and sound like. A sophisticated infrared camera would help distinguish actors from the background. Using three cameras (one each on the X, Y, and Z axes) could help to define the physical space and depth. And now a dog in infrared:
Labels:
infra red,
metadata,
mpeg7,
speech recognition,
surveillance
06 January 2007
Wrinklers In Time
I have just been researching wormholes, string theory and Stephen Hawking's chronology protection conjecture.
Quantum editors will be called tesseracters because the medium they compose with will have four dimensions instead of two.
The Tyranny of The Frame
I have been thinking about Greenaway's "Tyranny of the Frame". It is perfectly acceptable to me that we should have a frame, whatever the format of the rectangle. I also have nothing against those artists who like to project on round things or whose fetish it is to go beyond our peripheral vision in 360° panorama.
The real tyranny of the frame of celluloid cinema is that it is in two dimensions. Above is an animated projection of a rotating tesseract. Why should the video frame be flat?
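The projection behind such an animation is easy to compute. A minimal sketch in Python/NumPy (the rotation plane and camera distance are arbitrary choices of mine):

```python
import itertools
import numpy as np

# The 16 vertices of a tesseract: every combination of ±1 in 4D.
vertices = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)

def rotate_xw(points: np.ndarray, theta: float) -> np.ndarray:
    """Rotate in the x-w plane: the rotation behind the familiar
    'cube turning inside out' animation."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[0, 0], R[0, 3] = c, -s
    R[3, 0], R[3, 3] = s, c
    return points @ R.T

def project(points: np.ndarray, d: float = 3.0) -> np.ndarray:
    """Perspective-project 4D -> 3D -> 2D by dividing out each extra axis."""
    p3 = points[:, :3] / (d - points[:, 3:4])  # drop w
    p2 = p3[:, :2] / (d - p3[:, 2:3])          # drop z
    return p2

frame = project(rotate_xw(vertices, theta=0.3))
print(frame.shape)  # (16, 2): one flat-screen point per tesseract vertex
```

Sweeping theta over time and redrawing the 16 points (with edges between vertices that differ in one coordinate) produces the rotating projection.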
The camera of the last 100+ years of cinematic history is essentially a glorified eyeball with sophisticated spectacles.
The Quantum Camera will attach these free-floating, bespectacled eyeballs to brains. Brains capable of perceiving reality more like the way our nervous system works. When I was at the University of Maryland I took a class about visual communication where the professor had us read a book about visual perception. Humans have depth perception. We can tell the foreground from the background without any trouble, and if something that was moving ceases to move, we can still distinguish it as a separate entity. We have had multi-track audio recording equipment for years. We need multi-depth video cameras: cameras that record the background and the foreground to separate layers of video. Goodbye keying and matting, hello alpha channels!
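A toy sketch of that layered recording, assuming a camera that delivers a per-pixel depth map alongside each color frame (the cut distance is an arbitrary knob):

```python
import numpy as np

# Hypothetical camera output: a tiny color frame plus a depth map in meters.
h, w = 4, 4
frame = np.random.rand(h, w, 3)
depth = np.array([[1.0] * 4, [1.0] * 4, [5.0] * 4, [5.0] * 4])

def split_layers(frame, depth, cut=3.0):
    """Split one frame into foreground and background RGBA layers,
    using depth instead of a blue screen to build the alpha channel."""
    fg_alpha = (depth < cut).astype(frame.dtype)[..., None]
    foreground = np.concatenate([frame, fg_alpha], axis=-1)
    background = np.concatenate([frame, 1.0 - fg_alpha], axis=-1)
    return foreground, background

fg, bg = split_layers(frame, depth)
print(fg.shape, bg.shape)  # (4, 4, 4) each: RGB + alpha per layer
```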
But it needs to go further than just what keying can accomplish. Layers are still a 2D concept. Containers (I admit I borrow the word from conversations I have had with Philipp) are a much more appropriate model. Here is a film still from Lost In Translation:
The quantum camera would ideally see (at least) these containers:
(Restaurant (Charlotte) (Table (Food) ) (Steam) (Bob) )
Of course our eyes can discriminate an incredible level of detail:
(Bob (Costume (Sweater (Shirt) ) (Wristwatch) (Pants) ) )
And we can even infer things which we cannot see, such as socks, underwear and shoes.
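These nested containers are just a tree, which makes them easy to represent and query. A minimal sketch in Python, using the Lost In Translation still as written above:

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    """One node of the scene: a named region that can hold others."""
    name: str
    children: list = field(default_factory=list)

    def path_to(self, target, trail=()):
        """Return the chain of containers leading to `target`, if any."""
        trail = trail + (self.name,)
        if self.name == target:
            return trail
        for child in self.children:
            found = child.path_to(target, trail)
            if found:
                return found
        return None

scene = Container("Restaurant", [
    Container("Charlotte"),
    Container("Table", [Container("Food")]),
    Container("Steam"),
    Container("Bob", [
        Container("Costume", [
            Container("Sweater", [Container("Shirt")]),
            Container("Wristwatch"),
            Container("Pants"),
        ]),
    ]),
])

print(scene.path_to("Shirt"))
# ('Restaurant', 'Bob', 'Costume', 'Sweater', 'Shirt')
```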
It may also be necessary to "teach" the quantum camera in order to get it to learn to recognize these containers and their IDs. With the above example, I can imagine it working like this: first you would show the camera the empty table and benches and ID it Restaurant. Then you could put the food on the table and ID it Food. Then a threshold knob would be adjusted to catch the steam and ID it Steam. Lastly Bob and Charlotte would each be added to the composition and IDed respectively. Or perhaps you could use a combination of RFIDs and threshold settings for brightness, depth and movement on the camera.
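Here is a toy sketch of that teaching procedure on a one-pixel-high "frame": each new frame is diffed against the previous one, and whatever changed beyond the threshold gets the new ID. Everything here, thresholds included, is hypothetical:

```python
import numpy as np

def teach(camera_frames):
    """Hypothetical teaching pass: the first frame IDs the whole scene;
    each later frame labels whatever changed beyond its threshold."""
    labels = {}
    previous = None
    for frame, new_id, threshold in camera_frames:
        if previous is None:
            labels[new_id] = np.ones(frame.shape, dtype=bool)  # whole scene
        else:
            labels[new_id] = np.abs(frame - previous) > threshold
        previous = frame
    return labels

# Toy "frames": brightness values along one scanline.
empty = np.array([0.2, 0.2, 0.2, 0.2])
with_food = np.array([0.2, 0.8, 0.2, 0.2])    # food appears at pixel 1
with_steam = np.array([0.2, 0.8, 0.35, 0.2])  # faint steam at pixel 2

labels = teach([(empty, "Restaurant", 0.0),
                (with_food, "Food", 0.3),
                (with_steam, "Steam", 0.1)])  # knob turned down for steam
print(labels["Steam"])  # [False False  True False]
```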
When I was at Bard I attended a screening of Let's Get Lost by Bruce Weber (see this site for video clips). The screening was presented by Bard alumnus Jeff Preiss, an accomplished cinematographer. With a successful career in commercial advertising, Jeff knew some secrets of the image industry. He mentioned that he had heard of major corporations developing cameras that photograph all surfaces of physical reality in the hope of creating photographic 3D space.
What are they waiting for?
Labels:
3D,
camera tracking,
depth perception,
four tyrannies,
Lost In Translation,
rfid,
tesseract
Peter Greenaway's "Four Tyrannies"
I was having trouble on YouTube making this playlist, so for now it's not in the best order. But the ideas are interrelated enough that it works. I might fix this some time in the future.
4 vs 3
Tonight I watched the two-DVD set of "Ten Minutes Older", a compilation of 15 short films by 15 different directors. Despite Jim Jarmusch, Wim Wenders, and Spike Lee being among the directors, I was basically unimpressed. There was a Mike Figgis piece called "A Staircase - About Time 2" which used the same 4-panel split screen as Figgis' 2000 quantum experiment Timecode, but the story wasn't compelling and the imagery was gaudy.
I vividly remember seeing Timecode when it came out at the Dupont Circle theaters in DC. I went alone and was thoroughly captivated by the experience. If quantum cinema is the cinema of multiple universes, then here was a film that interpreted that premise by showing the views of four cameras at all times. Like a video installation for four monitors packaged for the big screen, this technique pushes the limits of our perception with four simultaneous points of view. To hold our concentration on one thing at a time, Figgis mixed the audio higher in the quadrant that ought to command our attention.
Whereas Greenaway's mini-frames have always bothered me for their cut-and-paste aesthetic, Figgis' approach is a step up. It allows dramatic tension to rise and fall by creating suspense across the four quadrants. When two quadrants reveal the same subject from different angles there is an immediate gut-level "ah-ha!" which is quite pleasing.
On the other hand, I wonder if four frames is too many for the visual sense. I have the feeling that perhaps three is a magic number in this regard. My friend Tobi Wootton has done a piece with three simultaneous video angles which works exceptionally well. Four seems to be just above a threshold that guarantees that a substantial portion of what happens will remain subconscious or unconscious. Perhaps some directors find that acceptable. Certainly there is an argument for keeping subtle cinematic information buried, only to be revealed upon multiple viewings.
Labels:
Mike Figgis,
Peter Greenaway,
Timecode,
Tobi Wootton
03 January 2007
PictoOrphanage
The Pictoplasma PictoOrphanage is a nifty concept which could be extended to include characters from quantum cinematic projects.
In physics a quantum is an indivisible entity of energy. In quantum cinema the quantum is the character. This view is constructive in that it treats a character as a basic building block of larger narrative forms.
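In data terms the constructive view is almost trivial: the character is the atom, and works in any medium are compositions of those atoms. A minimal sketch (the names are just examples from the Pictoplasma world):

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """The quantum: the indivisible unit larger narratives are built from."""
    name: str
    appearances: list = field(default_factory=list)  # (medium, work) pairs

helper = Character("Helper")
helper.appearances += [("painting", "Helper series"),
                       ("vinyl toy", "Helper figure")]

# A narrative form is then a composition of quanta:
film = {"title": "an untitled quantum film", "cast": [helper]}
print([c.name for c in film["cast"]])  # ['Helper']
```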
02 January 2007
Characters
Peter Greenaway has always been ahead of the times. Today I just stumbled across his Tulse Luper project (Archive, Promo, Game).
His decision to drive the experiment with a character confirms suspicions I have had since I was at the Pictoplasma conference. While I worked videotaping lectures, performances and karaoke, I couldn't help feeling a strong affinity for many of the designers and artists. It wasn't just because of my background in comics and illustration; a deeper structure revealed itself to me, one that could tie all artistic mediums together, both digital and analog. The character was serving artists like Tim Biskup (Helper) and Nathan Jurevicius (Scary Girl) as a means to tie together paintings, comics, animations, and collectible vinyl toys. You could say that for these artists merchandising was part of their approach towards being pervasive. I see quantum cinema as using characters as the particles which bind multiple universes and stories together across different mediums.
Now here is a picture of me dressed in a wooden Helper costume at Pictoplasma 2006:
Labels:
characters,
Nathan Jurevicius,
pervasive,
Peter Greenaway,
Tim Biskup,
Tulse Luper
01 January 2007
More On Frame Rates
Apparently IMAX HD is shot and projected at 48 fps.
The 100fps website seems to have some answers about physical thresholds of how many frames per second we can see.
Now, just for fun, a slow-motion video of a Coke can being destroyed by an arrow at 4000 fps (courtesy of Photron).
Cinenet
NASA Unveils Spray-On Circuits
The space agency showed how it can spray a thin film of metal on any object to create RF antennas and electronic circuits.
Sept. 26, 2002 -- Like many breakthrough discoveries, this one happened by accident. NASA had built a vacuum chamber in which astronauts could practice welding in space. The problem was, whenever the astronauts tried to weld something, it created a vapor that left a thin film of metal on the inside of the chamber.
The Russians had similar problems. But while the Russians learned how to get rid of the vapor, NASA figured out a way to control it and use it. The space agency created a "portable vacuum thin film deposition" device, which is a fancy way of saying NASA developed a handheld unit that lets engineers spray a thin film of metal on just about anything.
At the Frontline conference in Chicago on Tuesday, Fred Schramm of NASA's technology transfer department displayed a feather, a tissue, a piece of plastic wrap and a dollar bill coated with a thin film of chrome, as well as photos of rocks and other objects covered in chrome and copper.
NASA's main interest in the technology, which is still in development, is to create smart structures. Agency engineers spray the metal coating on a part and then analyze the film to determine what happened to it during space flight. But Schramm says that NASA has used a mask, or stencil, to create data matrix symbols: two-dimensional codes that contain dark and light squares.
Schramm told a rapt audience that the same technology could transform the RFID industry. "We now have a handheld device that's roughly the size of a hair dryer," he said. "You could walk up to a wall and put a metal layer on it. We've created masks to make data matrix on a surface, and if you change the mask you can make an antenna or a circuit."
NASA hasn't made either yet because low-cost RFID is not an area of interest. But it does want to transfer the technology to the private sector. Schramm held out the possibility that one day, RFID tags could be sprayed on packaging, the way bar codes are printed right on many packages today.
Schramm wouldn't say how long it would take before a product would be on the market, but it would likely be a couple of years. He also declined to say how much a reader would cost to build. The prototype probably cost in the hundreds of thousands, if not millions, to create. A company called Vacuum Arc Technology Inc. is working to commercialize it.
Schramm said RFID companies have contacted him about using the technology to create low-cost tags. "We see [the technology being used to create] the antenna first, then the circuit," he said. "One day, you are going to have somebody putting circuits and sensors right on walls, bags, anything."
He said these spray-on RFID tags and sensors would respond to a reader the same way existing technology does. "New technology impacts every corner of our society," Schramm said. "This is a quantum leap, not a baby step."
The real quantum leap would be to use a transparent, non-toxic RFID spray on actors in full costume and makeup, then develop a camera technology that could generate a 2D mask within the video image, eliminating the need for blue screens. You could shoot action in any kind of lighting in many different scenarios and always be able to separate the foreground from the background. The internet of things becomes the internet of actors, props and set elements. This article is over four years old; when will this be coming to a theater near you?
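A toy sketch of how such masks might fall out, assuming the tracker can localize each sprayed tag to pixel coordinates and the camera also supplies a depth map (all names and numbers are hypothetical):

```python
import numpy as np
from scipy import ndimage

# Hypothetical inputs: a tiny depth map from the camera, plus the 2D
# image coordinates of the sprayed-on RFID tags, reported by a tracker.
depth = np.array([[9, 9, 9, 9, 9],
                  [9, 2, 2, 9, 9],
                  [9, 2, 2, 9, 9],
                  [9, 9, 9, 3, 3]], dtype=float)
tags = {"actor": (1, 1), "prop": (3, 4)}  # (row, col) per tagged element

def masks_from_tags(depth, tags, cut=5.0):
    """Build a 2D mask per tagged element: threshold the depth map into
    foreground blobs, then keep the blob each tag lands in."""
    blobs, _ = ndimage.label(depth < cut)  # connected foreground regions
    return {name: blobs == blobs[r, c] for name, (r, c) in tags.items()}

for name, mask in masks_from_tags(depth, tags).items():
    print(name, int(mask.sum()), "pixels")
```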
We need a word to describe digitally networked and trackable actors, props and set elements in cinema. Perhaps this could be called the cinenet?
More Martin Arnold
My favorites are the second piece ("passage à l'acte") and the third piece ("Alone. Life Wastes Andy Hardy"). The flipping of the frame in the first piece ("pièce touchée") reminds me of Artavazd Peleshian. I like Peleshian's use of flipping better, but ultimately I don't find the effect so great except when certain actions cross through the frame and make nice visual palindromes.
Labels:
Artavazd Peleshian,
flipping,
martin arnold,
microcinema,
palindromes