DepthEditorDebug

February 2011
Public candids combining DSLR and Kinect
Collaboration with Alexander Porter

In a future of user complicity in surveillance, can we create a parallel narrative that allows those who are seen to abstract and enjoy their own image?

We intend for these images to represent a hint of the potential for play and experimentation in a world of advanced imaging technology.

Process

The images depict fragments of candid photographs placed into three-dimensional space. They combine depth data captured from a Microsoft XBOX Kinect video game controller with hacked drivers, digital SLR images, and custom software.

Background

In October of 2010, science fiction writer Bruce Sterling gave the keynote speech concluding the Vimeo Festival + Awards in New York. He described his prediction of the future of imaging technology. Regarding how a camera of the future may function, Sterling said:

"It simply absorbs every photon that touches it from any angle. And then in order to take a picture I simply tell the system to calculate what that picture would have looked like from that angle at that moment. I just send it as a computational problem out in to the cloud wirelessly."

One month later, Microsoft released its new video game controller, the XBOX Kinect. The Kinect is unique in that it uses a depth-sensing camera and computer vision software to sense the position and actions of gamers. A group of developers released an open-source device driver that gave programmers access to the Kinect's data on a personal computer.

Visualizations of space as seen through the Kinect's sensors can be computed from any angle using 3D software. When the drivers were made available, online creative-software developer communities were flooded with artistic and novel interpretations of this data. The images often depicted people as clouds of dots or wireframe figures moving through time.

In mid-2005, six weeks after the tragic subway and bus bombings in London, New York's Metropolitan Transportation Authority (MTA) signed a contract with the defense and military technology giant Lockheed Martin. Lockheed Martin promised the MTA a high-tech surveillance system driven by computer vision and artificial intelligence. The security system turned out to be vaporware and the contract collapsed under lawsuits. As a result, thousands of security cameras in New York subway stations sit unused. (WNYC source story)

It is in this technological atmosphere that we chose to collaborate. We soldered together an inverter and motorcycle batteries to run the laptop and Kinect sensor on the go. We attached a Canon 5D DSLR to the sensor and plugged it into a laptop. The entire kit went into a backpack.

We spent an evening in New York's Union Square subway station capturing high-resolution stills and archiving depth data of pedestrians. We wrote an openFrameworks application to combine the data, allowing us to place fragments of the two-dimensional images into three-dimensional space, navigate through the resulting environment, and render the output.
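The core of this combination step is back-projection: each pixel of the photograph, paired with a depth reading, becomes a point in 3D space. The project's actual openFrameworks code and calibration are not reproduced here; below is a minimal, hedged sketch of the idea using a simple pinhole-camera model, with illustrative focal length and principal-point values rather than the real calibration.

```python
# Hedged sketch: lift a 2D image pixel into 3D space using its depth
# reading, via a pinhole-camera model. The intrinsics (fx, fy, cx, cy)
# are placeholder values, not the project's actual calibration.

def pixel_to_3d(u, v, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Map pixel (u, v) with depth in meters to an (x, y, z) point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the image center simply sits on the optical axis:
center_point = pixel_to_3d(319.5, 239.5, 2.0)   # (0.0, 0.0, 2.0)

# An off-center pixel is displaced sideways in proportion to its depth:
side_point = pixel_to_3d(424.5, 239.5, 1.0)     # (0.2, 0.0, 1.0)
```

Applying this to every pixel that has a valid depth sample yields the point cloud; texturing those points with the DSLR image (after aligning the two cameras) produces the photographic fragments floating in 3D space described above.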

These prints are selected renderings from this process.


Media

original flickr album

Exhibitions

DepthEditorDebug @ Enter5: datapolis, Prague, Czech Republic
April 14–17, 2011

Press

Included in a feature on Today in Art

Mentioned alongside other projects on New Scientist

Fragments of Space And Time featured project on Creative Applications Network

Art-Pie blog feature