17 December 2007
Johannes Krause & Max Neupert's Video Sampler
While the thesis of Johannes Krause & Max Neupert's Video Sampler is very convincing, this demo video barely scratches the surface of the intentions behind the software. I hope they get a new sequencer module video up soon!
Maintenance
As the blog approaches its first anniversary I have decided to remove all the contributors who never managed to post a single entry.
01 December 2007
Excerpt from letter to Sven König
I was checking out your recent project .download finished! and I was pleased to see Small Room Tango made it into the fray of the early videos put through the process.
The process of stripping the video of its key frames and leaving only the delta frames behind suggests another aesthetic to me: one in which the materials to be processed are created with this in mind, to craft a kind of "quantinuity" -- my word for describing my new formal theory of Quantum Continuity (as opposed to classical Hollywood Continuity). The particle-ization of the image into pixels is consistent with this theory in that it strips an edited sequence of images of its temporal edges. If one were to write one's own compression algorithm that still wrote delta frames but didn't chunk pixels into blocky groups, perhaps that could create a "cleaner" effect? Although the chunks admittedly have a more "painterly" look -- if that's your thing...
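To make the idea concrete, here is a minimal sketch of such a per-pixel delta encoder, assuming frames arrive as numpy arrays (the function names and the threshold are my own invention, not anything from .download finished!):

    import numpy as np

    def pixel_delta_frames(frames, threshold=8):
        """Encode a frame sequence as per-pixel deltas: no macroblock
        chunking, just sparse (coordinates, new values) updates."""
        prev = frames[0].astype(np.int16)
        for frame in frames[1:]:
            cur = frame.astype(np.int16)
            changed = np.abs(cur - prev).max(axis=-1) > threshold
            yield np.argwhere(changed), frame[changed]
            prev = cur

    def decode(seed, deltas):
        """Replay the updates onto a seed frame. Replay them onto black
        (or the wrong seed) and you get exactly the key-frame-stripped
        smear described above."""
        canvas = seed.copy()
        for coords, values in deltas:
            canvas[coords[:, 0], coords[:, 1]] = values
            yield canvas.copy()

Because every pixel is updated independently, motion would leave no blocky chunks behind, only a fine particle-level ghosting -- the "cleaner" effect speculated about above.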
24 October 2007
Pioneer Set to Radically Alter Live Cinema
The new Pioneer SVM-1000 AV mixer, pictured here with Pioneer's DVJ-1000 DVD decks, sets the stage for a new era in live cinema. Although intended for club performances, this piece of gear has remarkable implications for the construction of narrative as well.
Labels:
audiovisual,
AV,
DVD,
gear,
live cinema,
performance
22 May 2007
Cornelius - Wataridori (music video); Gum (live)
I saw Cornelius play in Washington DC several years ago and it remains one of the best live shows I have seen in which audio and video were presented in tight relation to one another. Their sense of visual music is spot-on.
25 April 2007
Bill Viola Interview and Marc Aschenbrenner's Zweite Sonne
I like Bill Viola's comparison of video cameras to reindeer bones with notches cut into them. He is profound in his assertion that it's the telling of stories and leaving of objects that is the fundamental activity of human existence.
I recently attended the Düsseldorf and Köln art fairs and it was transparently clear that the contemporary art world is dominated by a marketplace driven by the collection of objects. What little video I did see I found remarkably boring and derivative, with the exception of Zweite Sonne by Marc Aschenbrenner, on exhibit at the Olaf Stüber Gallery booth in Köln. It exhibited a property I would like to associate with my slowly developing theory of Quantinuity; namely, that it transported an otherwise virtual object into physical space. The video was projected inside a giant black plastic balloon-suit that also appears in the video as its central subject.
Labels:
art fairs,
Bill Viola,
Marc Aschenbrenner,
objects,
quantinuity
19 April 2007
Eyeliner 3D
Is Eyeliner 3D really holography or just smoke and mirrors?
And does it even matter?
Given my background and formal education, as well as my admitted nostalgic love of certain masterpieces of celluloid cinema, it is challenging for me to conceive of my work in terms of depth. I have always been inclined to see 2-dimensional pictures in my head when imagining a story for a film. And I guess that's just it; proto-quantum cinema and/or live cinema are not really films at all; they may be "features" or even "feature-length" at times, but I am feeling more and more that the proto-quantum cinema will find its deepest roots in theater. Stumbling across technologies like Eyeliner 3D confirms this suspicion.
So while it may not be possible in my lifetime to realize true holographic projection (and certainly not nanopixels), several layers (perhaps as many as in late-stage 32-bit video games) may be around the corner. The promise these virtual layers of Inszenierung (staging) hold for proto-quantum cinema may in fact cross the minimum thresholds of depth reconstruction necessary to truly define a new artform.
So do we really need absolute depth resolution? Or will foreground, midground and background (with some additional layers) suffice?
One Flew Over The Cuckoo's Nest & In My Language
I just watched One Flew Over The Cuckoo's Nest for the first time in about seven years. I had always regarded this piece as an amazing character study and a beautifully shot human drama. Watching it today in 2007, I became painfully aware of its shortcomings in terms of its depictions of women as either tramps or sadistic bureaucrats. Yet its inability to be gender equitable bothered me less than its somehow predictable narrative structure and its blatantly ethnocentric construction of the Native American dilemma.
The salient points of the film which redeem it are its cinematography and stellar performances by a perfectly cast ensemble. The art of Milos Forman's directing was flawless; it is the script that poses a problem for me.
The main issue I find is that our theories of psychology and pharmacology have advanced considerably since Cuckoo's Nest was written. Shock therapy and lobotomies have been overshadowed by the pharmaceutical industry. Did the terms ADD and ADHD have relevance in the 70s the way they did in the 90s? I posit that the media landscape itself has had such a drastic effect on the collective human consciousness in the past thirty-odd years that it has made its depiction an entirely different proposition than it was when Cuckoo's Nest was made.
In My Language, created by an autistic woman, I find far more arresting and inspiring in today's media climate as a vision and interpretation of mental illness. It is a marvel of internet technology that such an individual is able to express herself in this way to such a large audience.
24 February 2007
"8 1/2 Mile"
Sample-based aesthetics point to a larger grammar of clashing various media with topical or visual similarities to create mutant remixed offspring. While my personal approach with video is to use the formal processes common to sample-based work on materials I create myself, I still find a good mash-up to be a rare find in the cluttered landscape of audiovisual collage. This clip by The AV Club is particularly multi-dimensional in its result. The best mash-up work is something that will never be made better through automation; it is hi-tech handiwork.
My personal mash-up triumph was a track I made from ODB's Baby I Got Your Money and Pink Floyd's Money, decorated by choice samples of Noam Chomsky talking about economics. I called it Dirty Money.
Labels:
collage,
Fellini,
mash-up,
Noam Chomsky,
sample
19 February 2007
WiiJ
Well, it didn't take long for people to hack the Nintendo Wii controllers. Once again music makes another leap forward in interface design. Like the guy says at the end of the first video, it's show business. I wish it would stop being show business and start addressing the needs of artists. But like the Sony Portapak video camcorders, technology somehow must always be hacked for artists. I wonder if it would have a negative effect on creativity if artists were able to buy really well designed technological tools. My intuition tells me the answer is no. I mean, do painters dislike the fact that they can go get great brushes, canvas and pigments easily?
Another example of how far ahead music technology is of video technology in our ability to manipulate digital media.
Video For Ray
This music video was made by many people contributing stills and video clips according to a pre-determined structure which, during the video's production, took the form of a wiki at zefrank.com. Unfortunately it seems as if the wiki with all the instructions on how to participate in the shot list is now gone, replaced by collaborator credits.
By soliciting and assembling individual cinematic elements over the internet, this work is a pretty interesting collaboration. And while it's probably not the first video to be made in this manner, it may well be the most popular ever made.
Ultimately the content of the video is cryptic and low-quality. It is probably more meaningful to the participants than to non-participants. At least the music is somewhat entertaining. Yet the concept begs for a better author to step forward and lead such a project.
18 February 2007
Bill Viola Framed: Where are the angels?
Watching a clip from Bill Viola's "I Do Not Know What It Is I Am Like" on YouTube, it struck me how the computer restates the frame again and again. I took this photo to try to communicate the phenomenon of how my computer monitor, the web browser and the YouTube site, triple the frame of Viola's piece, which itself has a frame in its own frame (the frame of the video monitor, here with an image of a toucan). Yet even as the idea struck me to document my observation I realized I would be compounding it further by blogging it; adding the frames of the blog, the browser, and the computer again.
I have been thinking about a solution to the theoretical problem of the holographic cinema. My solution is theoretically elegant and potentially physically and technologically impossible. It was a flash of insight I had in a bar in Berlin during my visit to Transmediale.
The tyranny is the projector. The revolutionary answer is what I like to call the nanopixel. Imagine a microscopic cube capable of emitting a different colored light on each of its faces. Now imagine millions of these microscopic cubes suspended in some kind of electromagnetic vacuum. Each cube can wirelessly receive color information, luminosity information and dynamic positioning information in space. These nanopixels form ephemeral solids representing actors, sets and props; digital skins containing hollow cinematic bodies. A kind of elaborate, programmable, kinetic, narrative sculpture medium.
To record the data for the six faces of the nanopixels, a sophisticated system of either four (tetrahedron formation) or six (cubic formation) high-frequency, high-resolution imaging scanners would be deployed on set to get all angles necessary. The audience could sit in the round or in more conventional theatric seating.
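Purely as a thought experiment, the wireless update each cube receives might be no more than a small packet of state. Here is a sketch in Python, with every field name invented by me:

    from dataclasses import dataclass
    from typing import Tuple

    RGB = Tuple[int, int, int]

    @dataclass
    class NanopixelUpdate:
        """One tick of state for a single cube: where it should float
        and what each of its six faces should emit."""
        cube_id: int
        x: float  # target position in the suspension chamber
        y: float
        z: float
        faces: Tuple[RGB, RGB, RGB, RGB, RGB, RGB]  # +x,-x,+y,-y,+z,-z
        luminosity: float  # 0.0 (invisible) to 1.0 (full brightness)

    # A cinematic "solid" is then just a time-indexed stream of these
    # updates, one per cube per tick: a programmable kinetic sculpture.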
My friend Florian Grond analyzed it in terms of mystical metaphor in which the scanner-cameras are the omniscient eye-of-god and the nanopixels are angels.
Labels:
angels,
Bill Viola,
frame,
mysticism,
nanopixels,
scanner-cameras
10 February 2007
"Nostalgia"
I just watched Tarkovsky's Nostalgia. The only other film of his I have seen is Stalker. I find his storytelling incredibly compelling for its theatric and meditative qualities (obvious draws).
Another, perhaps more subtle thing which seduces me is his proclivity for carefully composed wide shots. He tends to avoid medium shots, relying on either canvas-like settings or closeups in the tradition of portraiture. I was also acutely aware of the sound design being heightened and immersive. I am interested in these qualities because of their musical nature. The visual separation of characters and objects within a Tarkovsky frame is often as elegant a composition as a painting. But these are paintings in time which, reinforced by a strong sound design, speak to me as audiovisual music.
08 February 2007
"Every Day"
Last week I was in Berlin visiting Transmediale and Club Transmediale, the twin events which ostensibly represent Germany's most cutting-edge media art festivals. Unfortunately I admit I was quite disappointed with the work that I saw. Thank goodness this is the Transmediale chief curator's final year. Club Transmediale was of better quality in general, with more exciting and relevant work, in my humble opinion.
During my visit I started reading The Time Machine by H. G. Wells. I have been thinking a lot lately about time travellers, and so I thought this video would be an appropriate starting point. Perhaps I will continue collecting other time travellers in the future.
Among the things I saw (and disliked) at Transmediale was a lame video called The Chronic Argonauts, apparently named after an unused title for H. G. Wells' book. It unfortunately was not worthy of the distinction.
Labels:
Germany,
H. G. Wells,
time travellers,
transmediale
16 January 2007
Everything is Miscellaneous
Thanks to Joyce Shintani for blogging this. I found it relevant to the mpeg7 discussion, and to other things I like about internet culture in general.
10 January 2007
Let The Games Begin?
Hi all! I'll be taking a somewhat different tack in my contributions here because of my different background. I am a social worker, with some background in neuropsychology and cognitive science from my undergraduate experience. I am also a phenomenologist and attempted writer, but have very limited knowledge of cinema per se. My primary interests in these technologies are:
a) in the feedback mechanisms which could potentially be used to incorporate viewer response into the process of "recomposing" quantum films
b) the subjective experience of viewing such a "self-composed" film
c) the social consequences of the loss of conventional shared narratives
d) applications of quantum cinema outside "art"
I want to start off this conversation by saying that quantum cinema already exists and it is called video gaming. Whether Pac Man qualifies for the medium is an academic discussion that is hardly relevant at this point. The new wave of game systems offer games in which characters can be designed, rendered somewhat realistically, and directly controlled to move within certain specifications. This creates virtually limitless possibilities of narrative in a sense, although games are almost universally unsatisfying as stories.
One reason is that the stories tend to bottleneck at certain points (usually during the preconceived "cinematic" sequences during which the character does not have control). This creates the existential conundrum of having absolute control but no options.
A second reason is that the system gives too much control and predictability for the brain to translate into an emotional response. The most intense experiences I have had with cinema have been accompanied by the intense feeling of not being able to control outcomes and therefore being dependent on the story. One good example is watching an episode of "Curb Your Enthusiasm" and becoming painfully uncomfortable with the actions of the characters, feeling for example the intense need to apologize to the other characters for the actions of Larry David while at the same time feeling a great deal of sympathy for/with Larry David because of the actions of his wife.
The reason linear cinema is so successful is that it allows the filmmaker to speak directly from the story and therefore "make a statement". If you ask a videogame player (full disclosure: I am one, I especially love shooters) what makes a game great, the most common answer is not the story, the music, the graphics, or even the design of the maps or the artificial intelligence. It is something called The Engine, the invisible but explicitly felt algorithms which determine the ways in which the manipulation of the controls translates into modifications of the game world. The well-known Halo franchise, for instance, won out largely on the strength of its engine. Will fans of cinema one day stop talking about stories and start talking about engines?
07 January 2007
Quantum Camera Components
The quantum camera has a brain and ears.
Speech recognition software performs best when you train the software by speaking a series of important words and sentences. It seems to me that the technology already exists to make prototypes of a system that would embed videos with full text transcriptions of everything spoken on camera as metadata. Never mind the fact that if what you were shooting had a screenplay, you could feed the entire screenplay to the camera before you shoot so it can anticipate what the actors will say.
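As a sketch of what that embedded metadata might look like (the word timings below are made up, and the JSON layout is my own guess, not any camera's real format):

    import json

    # Hypothetical recognizer output: word-level timestamps in seconds.
    words = [(0.42, 0.71, "What"), (0.71, 0.95, "are"),
             (0.95, 1.30, "they"), (1.30, 1.80, "waiting"),
             (1.82, 2.10, "for")]

    def embed_transcript(words, metadata_path):
        """Store the spoken words as time-coded metadata beside the clip."""
        payload = {"transcript": [{"start": s, "end": e, "word": w}
                                  for s, e, w in words]}
        with open(metadata_path, "w") as f:
            json.dump(payload, f, indent=2)

    embed_transcript(words, "take_01.transcript.json")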
Techniques such as dynamic time warping would accommodate the variations between different takes of the same shot so that they are all regarded as versions of one another. Seems like mpeg7 would be the best possible way to store this data today.
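Dynamic time warping itself is compact enough to sketch here, assuming each take has already been reduced to a sequence of per-frame feature vectors (a textbook version, not any product's implementation):

    import numpy as np

    def dtw_cost(a, b):
        """Alignment cost between two takes, each an array of shape
        (frames, features). Lower cost = more plausibly the same shot."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame distance
                D[i, j] = d + min(D[i - 1, j],      # skip a frame in a
                                  D[i, j - 1],      # skip a frame in b
                                  D[i - 1, j - 1])  # match the frames
        return D[n, m] / (n + m)  # normalized by combined length

Takes whose normalized cost falls under some threshold could then be registered in the metadata as versions of the same shot.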
Frameline 47 lets you take advantage of the mpeg7 standard quite fully, and it's not surprising at all that a company like Eptascape would be in the security and surveillance market. What we need is for the metadata to be entered semi-automatically during the shooting process, combined with a learning curve for the camera where you have it learn what the actors look like and sound like. A sophisticated infrared camera would help distinguish actors from the background. Using three cameras (one each on the X, Y, and Z axes) could help to define the physical space and depth. And now a dog in infrared:
Labels:
infrared,
metadata,
mpeg7,
speech recognition,
surveillance
06 January 2007
Wrinklers In Time
I have just been researching wormholes, string theory and Stephen Hawking's chronology protection conjecture.
Quantum editors will be called tesseracters because the medium they compose with will have four dimensions instead of two.
The Tyranny of The Frame
I have been thinking about Greenaway's "Tyranny of the Frame". It is perfectly acceptable to me that we should have a frame, whatever the format of the rectangle. I also have nothing against those artists who like to project on round things or whose fetish it is to go beyond our peripheral vision in 360° panorama.
The real tyranny of the frame of celluloid cinema is that it is in two dimensions. Above is an animated projection of a rotating tesseract. Why should the video frame be flat?
The camera of the last 100+ years of cinematic history is essentially a glorified eyeball with sophisticated spectacles.
The Quantum Camera will attach these free-floating, bespectacled eyeballs to brains. Brains capable of perceiving reality more like the way our nervous system works. When I was at the University of Maryland I took a class on visual communication in which the professor had us read a book about visual perception. Humans have depth perception. We can tell the foreground from the background without any trouble, and if something that was moving ceases to move, we can still distinguish it as a separate entity. We have had multi-track audio recording equipment for years. We need multi-depth video cameras. Cameras that record the background and the foreground to separate layers of video. Goodbye keying and matting, hello alpha channels!
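As a toy sketch of that separation, assuming such a camera handed us a per-pixel depth map alongside the color frame (the depth cuts here are arbitrary numbers for illustration):

    import numpy as np

    def split_by_depth(frame, depth, cuts=(1.5, 4.0)):
        """Slice one RGB frame into foreground/midground/background
        RGBA layers using a depth map in meters."""
        bounds = [0.0, *cuts, np.inf]
        layers = []
        for near, far in zip(bounds, bounds[1:]):
            mask = (depth >= near) & (depth < far)
            alpha = mask.astype(np.uint8) * 255
            layers.append(np.dstack([frame, alpha]))  # RGB + alpha
        return layers  # [foreground, midground, background]

Each layer would be an ordinary video track with its own alpha channel; an editor could then grade, replace, or recompose any depth band independently.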
But it needs to go further than just what keying can accomplish. Layers are still a 2D concept. Containers (I admit I borrow the word from conversations I have had with Philipp) are a much more appropriate model. Here is a film still from Lost In Translation:
The quantum camera would ideally see (at least) these containers:
(Restaurant (Charlotte) (Table (Food) ) (Steam) (Bob) )
Of course our eyes can discriminate an incredible level of detail:
(Bob (Costume (Sweater (Shirt) ) (Wristwatch) (Pants) ) )
And we can even infer things which we cannot see, such as socks, underwear and shoes.
It may also be necessary to "teach" the quantum camera in order to get it to learn to recognize these containers and their IDs. With the above example, I can imagine it working like this: first you would show the camera the empty table and benches and ID it Restaurant. Then you could put the food on the table and ID it Food. Then a threshold knob would be adjusted to catch the steam and ID it Steam. Lastly Bob and Charlotte would each be added to the composition and IDed respectively. Or perhaps you could use a combination of RFIDs and threshold settings for brightness, depth and movement on the camera.
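However the camera learns them, the resulting containers might be stored as nothing more exotic than a nested tree. A sketch mirroring the notation above (my own guess at a structure, not any real metadata format):

    # Containers as (id, children) pairs, mirroring
    # (Restaurant (Charlotte) (Table (Food)) (Steam) (Bob)):
    restaurant = ("Restaurant", [
        ("Charlotte", []),
        ("Table", [("Food", [])]),
        ("Steam", []),
        ("Bob", [("Costume", [
            ("Sweater", [("Shirt", [])]),
            ("Wristwatch", []),
            ("Pants", []),
        ])]),
    ])

    def contains(node, target):
        """Does this container, at any depth, hold the target ID?"""
        name, children = node
        return name == target or any(contains(c, target) for c in children)

    assert contains(restaurant, "Shirt")  # found inside Bob's costume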
When I was at Bard I attended a screening of Let's Get Lost by Bruce Weber (see this site for video clips). The screening was presented by Bard alumnus Jeff Preiss, an accomplished cinematographer. With a successful career in commercial advertising, Jeff knew some secrets of the image industry. He mentioned that he had heard of major corporations developing cameras that photograph all surfaces of physical reality in the hope of creating photographic 3D space.
What are they waiting for?
Labels:
3D,
camera tracking,
depth perception,
four tyrannies,
Lost In Translation,
rfid,
tesseract
Peter Greenaway's "Four Tyrannies"
I was having trouble in YouTube making this playlist, so for now it's not in the best order. But the ideas are interrelated enough that it works. I might fix this some time in the future.
4 vs 3
Tonight I watched the two-DVD set of "Ten Minutes Older", a compilation of 15 short films by 15 different directors. Despite Jim Jarmusch, Wim Wenders, and Spike Lee being among the directors, I was basically unimpressed. There was a Mike Figgis piece called "A Staircase - About Time 2" which used the same 4-panel split screen as Figgis' 2000 quantum experiment Timecode, but the story wasn't compelling and the imagery was gaudy.
I vividly remember seeing Timecode when it came out at the Dupont Circle theaters in DC. I went alone and was thoroughly captivated by the experience. If quantum cinema is the cinema of multiple universes, then here was a film that interpreted that premise by showing the views of four cameras at all times. Like a video installation for four monitors packaged for the big screen, this technique pushes the limits of our perception with four simultaneous points of view. To hold our concentration on one thing at a time, Figgis mixed the audio higher in the quadrant that ought to command our attention.
Whereas Greenaway's mini-frames have always bothered me for their cut-and-paste aesthetic, Figgis' approach is a step up. It allows dramatic tension to rise and fall by creating suspense across the four quadrants. When two quadrants reveal the same subject from different angles there is an immediate gut-level "ah-ha!" which is quite pleasing.
On the other hand, I wonder if four frames is too many for the visual sense. I have the feeling that perhaps three is a magic number in this regard. My friend Tobi Wootton has done a piece with three simultaneous video angles which works exceptionally well. Four seems to be just above a threshold that guarantees that a substantial portion of what happens will remain subconscious or unconscious. Perhaps some directors find that acceptable. Certainly there is an argument for keeping subtle cinematic information buried, only to be revealed upon multiple viewings.
Labels:
Mike Figgis,
Peter Greenaway,
Timecode,
Tobi Wootton
03 January 2007
PictoOrphanage
The Pictoplasma PictoOrphanage is a nifty concept which could be extended to include characters from quantum cinematic projects.
In physics a quantum is an indivisible entity of energy. In quantum cinema the quantum is the character. This view is constructive in that it treats a character as a basic building block of larger narrative forms.
02 January 2007
Characters
Peter Greenaway has always been ahead of his time. Today I stumbled across his Tulse Luper project (Archive Promo Game).
His decision to drive the experiment with a character confirms suspicions I have had since I was at the Pictoplasma conference. While I worked videotaping lectures, performances and karaoke, I couldn't help feeling a strong affinity for many of the designers and artists. It wasn't just because of my background in comics and illustration; there was a deeper structure revealed to me that could tie all artistic mediums together, both digital and analog. The character was serving artists like Tim Biskup (Helper) and Nathan Jurevicius (Scary Girl) as a means to tie together paintings, comics, animations, and collectible vinyl toys. You could say that for these artists merchandising was part of their approach towards being pervasive. I see quantum cinema as using characters as the particles which bind multiple universes and stories together across different mediums.
Now here is a picture of me dressed in a wooden Helper costume at Pictoplasma 2006:
Labels:
characters,
Nathan Jurevicius,
pervasive,
Peter Greenaway,
Tim Biskup,
Tulse Luper
01 January 2007
More On Frame Rates
Apparently IMAX HD is shot and projected at 48 fps.
The 100fps website seems to have some answers about physical thresholds of how many frames per second we can see.
Now, just for fun, a slow-motion video of a Coke can being destroyed by an arrow at 4000 fps (courtesy of Photron).
Cinenet
NASA Unveils Spray-On Circuits
The space agency showed how it can spray a thin film of metal on any object to create RF antennas and electronic circuits.
Sept. 26, 2002 -- Like many breakthrough discoveries, this one happened by accident. NASA had built a vacuum chamber in which astronauts could practice welding in space. The problem was, whenever the astronauts tried to weld something, it created a vapor that left a thin film of metal on the inside of the chamber.
The Russians had similar problems. But while the Russians learned how to get rid of the vapor, NASA figured out a way to control it and use it. The space agency created a "portable vacuum thin film deposition" device, which is a fancy way of saying NASA developed a handheld unit that lets engineers spray a thin film of metal on just about anything.
At the Frontline conference in Chicago on Tuesday, Fred Schramm of NASA's technology transfer department displayed a feather, a tissue, a piece of plastic wrap and a dollar bill coated with a thin film of chrome, as well as photos of rocks and other objects covered in chrome and copper.
NASA's main interest in the technology, which is still in development, is to create smart structures. Agency engineers spray the metal coating on a part and then analyze the film to determine what happened to it during space flight. But Schramm says that NASA has used a mask, or stencil, to create Data Matrix codes, two-dimensional matrix symbols that contain dark and light squares.
Schramm told a rapt audience that the same technology could transform the RFID industry. "We now have a handheld device that's roughly the size of a hair dryer," he said. "You could walk up to a wall and put a metal layer on it. We've created masks to make data matrix on a surface, and if you change the mask you can make an antenna or a circuit."
NASA hasn't made either yet because low-cost RFID is not an area of interest. But it does want to transfer the technology to the private sector. Schramm held out the possibility that one day, RFID tags could be sprayed on packaging, the way bar codes are printed right on many packages today.
Schramm wouldn't say how long it would take before a product would be on the market, but it would likely be a couple of years. He also declined to say how much a reader would cost to build. The prototype probably cost in the hundreds of thousands, if not millions, to create. A company called Vacuum Arc Technology Inc. is working to commercialize it.
Schramm said RFID companies have contacted him about using the technology to create low-cost tags. "We see [the technology being used to create] the antenna first, then the circuit," he said. "One day, you are going to have somebody putting circuits and sensors right on walls, bags, anything."
He said these spray-on RFID tags and sensors would respond to a reader the same way existing technology does. "New technology impacts every corner of our society," Schramm said. "This is a quantum leap, not a baby step."
The real quantum leap would be to use a transparent, non-toxic RFID spray on actors in full costume and makeup, then develop a camera technology that could generate a 2D mask within the video image, eliminating the need for blue screens. You could shoot action in any kind of lighting in many different scenarios and always be able to separate the foreground from the background. The internet of things becomes the internet of actors, props and set elements. This article is over four years old; when will this be coming to a theater near you?
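A toy sketch of that masking step, assuming the readers could already project each sprayed-on tag into image coordinates (the fixed disk radius is a crude stand-in for real segmentation):

    import numpy as np

    def rfid_matte(tag_points, shape, radius=40):
        """Rough foreground matte: mark a disk around each tracked tag.
        tag_points are (x, y) pixel positions; shape is (height, width)."""
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        matte = np.zeros(shape, dtype=bool)
        for x, y in tag_points:
            matte |= (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
        return matte  # True wherever a tagged actor or prop is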
We need a word to describe digitally networked and trackable actors, props and set elements in cinema. Perhaps this could be called the cinenet?
More Martin Arnold
My favorites are the second piece ("passage à l'acte") and the third piece ("Alone. Life Wastes Andy Hardy"). The flipping of the frame in the first piece ("pièce touchée") reminds me of Artavazd Peleshian. I like Peleshian's use of flipping better, but ultimately I don't find the effect so great except when certain actions cross through the frame and make nice visual palindromes.
Labels:
Artavazd Peleshian,
flipping,
martin arnold,
microcinema,
palindromes