Monthly Archives: October 2013

We be Cyborgs!

Workshop at KIKK, Namur, Belgium

http://www.niklasroy.com/workshop/165

Meet Agnes

[Warning: This post contains spoilers related to the Portal and Portal 2 games.]

Digital games have always drawn considerable influence from science fiction. Even very early games like Space Invaders were already borrowing sci-fi themes in elementary ways, and the trend has persisted over time. One of the most interesting examples I can think of is the Portal games by Valve Software, which develop a very unconventional form of narrative: lacking any cutscenes or narrative interludes to explain to the player what’s going on, the depth of the world in which you abruptly find yourself is revealed bit by bit by GLaDOS, an initially friendly artificial intelligence who leads the protagonist through a series of physical test chambers in an otherwise deserted scientific facility.

GLaDOS controls the Aperture Laboratories facility and all of its contents, and as you progress through the first game she begins to act increasingly oddly, until you eventually find out she’s responsible for killing everyone in the facility with neurotoxin gas. GLaDOS appears to be driven solely by the urge to continue scientific testing of the Portal gun, and indeed, as soon as the player character completes the final testing sequence, she attempts to kill you too (an attempt you manage to escape, ultimately confronting and destroying the GLaDOS core).

GLaDOS’s main unit

GLaDOS, herself an experiment from Aperture Laboratories, was fitted with a sort of “fail-safe” mechanism in the form of personality cores. The various sides of her personality were split into multiple physical cores which could be attached to or detached from the main unit, including cores for logical processing, morality, and rage, and, in Portal 2, an actual stupidity core, Wheatley, designed to stop GLaDOS from taking over control of everything.

Some of the GLaDOS personality cores

My SciFi2SciFab project is an attempt to reconstruct a version of GLaDOS (without the murderous desires, of course), which I’m calling Agnes. GLaDOS falls into a long and rich tradition of crazy robots which includes such hallmarks as HAL 9000 from the novel and film 2001: A Space Odyssey. But she is also a very interesting reflection on natural and artificial intelligence: in Portal 2, it is revealed that GLaDOS was actually the result of transferring the consciousness of a living person onto a machine, who then went a little insane because of the transformation and the sudden realisation that she no longer had any human limits.

So I’m interested to see if I can put together a replica functional enough to allow for some interaction and, especially, able to pass for a playful robot despite not having full artificial intelligence at its disposal. Agnes is made up of two related components. The first is a software side that simulates the interaction and provides vocal feedback based on user input; how this feedback is delivered changes depending on the active “personality core” Agnes is running. The intention is to make multiple cores available which can be physically switched, just like with GLaDOS, but with each of them generating changes in tone and manner of address rather than in criminal tendencies. Agnes’s cores will move interactions across a “politeness spectrum”, where some cores will be more agreeable and nicer than others.
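To make that idea concrete, here is a minimal sketch of how swappable cores could steer the same underlying reply across the politeness spectrum. This is purely illustrative (TypeScript, with hypothetical names); the actual Agnes code on GitHub may be structured very differently.

```typescript
// Hypothetical sketch of Agnes's swappable personality cores.
// Each core rephrases the same underlying reply with a different
// tone, spanning a "politeness spectrum".

interface PersonalityCore {
  name: string;
  politeness: number; // 0 = hostile, 1 = saccharine
  phrase(reply: string): string;
}

const courtesyCore: PersonalityCore = {
  name: "courtesy",
  politeness: 0.9,
  phrase: (reply) => `If it isn't too much trouble: ${reply}. Thank you!`,
};

const contemptCore: PersonalityCore = {
  name: "contempt",
  politeness: 0.1,
  phrase: (reply) => `${reply}. Obviously.`,
};

class Agnes {
  // The active core can be hot-swapped, mimicking GLaDOS's
  // physically attachable/detachable cores.
  constructor(private core: PersonalityCore) {}

  swapCore(core: PersonalityCore) {
    this.core = core;
  }

  respond(userInput: string): string {
    const reply = `I heard you say "${userInput}"`; // stand-in for real logic
    return this.core.phrase(reply);
  }
}

const agnes = new Agnes(courtesyCore);
console.log(agnes.respond("hello")); // polite phrasing
agnes.swapCore(contemptCore);
console.log(agnes.respond("hello")); // curt phrasing
```

The physical core switching would then amount to wiring each attachable core to a call to swapCore, leaving the underlying interaction logic untouched.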

The other side of Agnes is the hardware component: a version of the hanging mechanical arm seen in the picture above, providing a material interface to the pseudo-AI. The arm will be fitted with sensors allowing it to detect someone standing in front of it; Agnes will then lock on to them and follow them around as they move within her operational range. So far I’ve wanted to preserve the original design of having Agnes hang from above, and I’m also thinking this would turn into a really cool application if a more evolved version were coupled to rails along the ceiling of a building, so you could have your own Agnes following you around and providing helpful information and assistance.

The original arm design for Agnes

The more I look for inspiration and ideas for the project, the more I come to realise how much narrative charge the small design decisions going into Agnes carry. Not only in terms of the associations one can make, but in terms of conveying the notion of an imaginary designer making imaginary decisions, with the object itself, Agnes, being the conveyor of those decisions and of a background world invisible to the user – very much like Portal itself very successfully gives the player access to the thoughts and lives of the people of Aperture Laboratories. I think this contributes a lot of value to the experience of the user/player, who is then driven not only to interact with Agnes, but hopefully will be curious enough to try and “hack it” to understand where it came from and why it is how it is. So the robot taken from a game becomes a little bit of a game itself – is there a mystery to Agnes that needs to be deciphered?

Right now I’m working through these narrative associations and thinking about how this object design conveys identity – how Agnes could easily go from GLaDOS to Luxo with very different implications. And I’m in the process of implementing the pseudo-AI that will be driving Agnes, including the basis for the core switching, which I need to turn into physical switching as a next step. Once that’s somewhat running, I’ll work on putting together the hanging arm setup and connecting the two. The (admittedly super elementary) code for Agnes is up on GitHub, and a little demo of what it does right now is posted below. Any feedback, thoughts or comments are very much welcome, as I’m figuring this all out as I go along.

Case and Molly: A Game Inspired by Neuromancer

“Case and Molly” is a prototype for a game inspired by William Gibson’s Neuromancer. It’s about the coordination between the virtual and the physical, between “cyberspace” and “meat”.

Neuromancer presents a future containing two aesthetically opposed technological visions. The first is represented by Case, the cyberspace jockey hooked on navigating the abstract information spaces of The Matrix. The second is embodied by Molly, an augmented mercenary who uses physical prowess and weaponized body modifications to hurt people and break into places.

In order to execute the heists that make up most of Neuromancer’s plot, Case and Molly must work together, coordinating his digital intrusions with her physical breaking and entering. During these heists they are limited to an extremely asymmetrical form of communication: Case can access Molly’s complete sensorium, but can only communicate a single bit of information to her.

On reading Neuromancer today, this dynamic feels all too familiar. We constantly navigate the tension between the physical and the digital in a state of continuous partial attention. We try to walk down the street while sending text messages or looking up GPS directions. We mix focused work with a stream of instant message and social media conversations. We dive into the sudden and remote intimacy of seeing a family member’s face appear on FaceTime or Google Hangout.

Gameplay

“Case and Molly” uses the mechanics and aesthetics of Neuromancer’s account of cyberspace/meatspace coordination to explore this dynamic. It’s a game for two people: “Case” and “Molly”. Together and under time pressure they must navigate Molly through a physical space using information that is only available to Case. Case can see Molly’s point of view in 3D but can only communicate to her by flipping a single bit: a screen that’s either red or green.

Case and Molly headset view

Case is embedded in today’s best equivalent of Gibsonian cyberspace: an Oculus Rift VR unit. He oscillates between seeing Molly’s point of view and playing an abstract geometric puzzle game.

Case and Molly: stereo phone rig

Molly carries today’s version of a mobile “SimStim unit” for broadcasting her point of view and “a readout chipped into her optic nerve”: three smartphones. Two of the phones act as a pair of stereo cameras, streaming her point of view back to Case in 3D. The third phone (not shown here) substitutes for her heads-up display, showing the game clock and a single bit of information from Case.

Case and Molly: Molly turn

The game proceeds in alternating turns. During a Molly turn, Case sees Molly’s point of view in 3D, overlaid with a series of turn-by-turn instructions for where she needs to go. He can toggle the color of her “readout” display between red and green by clicking the mouse, and he can also hear her voice. Within 30 seconds, Molly attempts to advance as far as possible, prompting Case for a single bit of direction over the voice connection. Before the end of that 30-second period, Molly has to stop at a safe point, prompting Case to type in the number of a room along the way. If time runs out before Case enters a room number, they lose. When Case enters a room number, Molly stays put and they enter a Case turn.
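The turn rules are simple enough to capture in a few lines. Here is a sketch of the Molly-turn state logic, in TypeScript with hypothetical names purely for illustration; the actual game implements this inside Unity.

```typescript
// Illustrative sketch of the Molly-turn rules described above
// (hypothetical structure, not the actual Unity implementation).

type Signal = "red" | "green";

class MollyTurn {
  private signal: Signal = "red";
  private roomNumber: number | null = null;
  private readonly durationMs = 30_000; // 30-second turn clock
  private readonly startedAt = Date.now();

  // Case's single bit of communication, toggled by a mouse click.
  toggleSignal() {
    this.signal = this.signal === "red" ? "green" : "red";
  }

  // Case types in the number of the room where Molly has stopped.
  enterRoom(room: number) {
    this.roomNumber = room;
  }

  // Polled every frame: the turn is lost if the clock runs out first,
  // and hands over to a Case turn once a room number is entered.
  tick(): "running" | "lost" | "caseTurn" {
    if (this.roomNumber !== null) return "caseTurn";
    if (Date.now() - this.startedAt > this.durationMs) return "lost";
    return "running";
  }
}
```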

Case and Molly: Case turn

During his turn, Case is thrust into an abstract informational puzzle that stands in for the world of Cyberspace. In this prototype, the puzzle consists of a series of cubes arranged in 3D space. When clicked, each cube blinks a fixed number of times. Case’s task is to sort the cubes by the number of blinks within 60 seconds. He can cycle through them and look around by turning his head. If he completes the puzzle within 60 seconds they return to a Molly turn and continue towards the objective. If not, they lose.
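The puzzle’s win condition is just a sort check under time pressure. A sketch of that logic, again illustrative rather than the actual Unity code:

```typescript
// Sketch of the Case-turn puzzle: each cube blinks a fixed number of
// times when clicked, and Case must arrange the cubes in ascending
// blink order before the 60-second clock expires.

interface Cube {
  id: number;
  blinks: number; // revealed to the player only by clicking the cube
}

function isSorted(arrangement: Cube[]): boolean {
  return arrangement.every(
    (cube, i) => i === 0 || arrangement[i - 1].blinks <= cube.blinks
  );
}

// Example: a shuffled set of cubes and a winning arrangement.
const cubes: Cube[] = [
  { id: 0, blinks: 3 },
  { id: 1, blinks: 1 },
  { id: 2, blinks: 2 },
];
const attempt = [...cubes].sort((a, b) => a.blinks - b.blinks);
console.log(isSorted(attempt)); // true, so Case wins the turn
```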

At the top of this post is a video showing a run where Case and Molly make it through a Molly turn and a Case turn before failing on the second Molly turn.

Play Testing: Similarities to and Differences from Neuromancer

In play-testing the game and prototyping its constituent technology, I found ways in which the experience resonated with Gibson’s account and others in which it radically diverged.

One of the strongest resonances was the dissonance between the virtual reality experience and being thrust into someone else’s point of view. In Neuromancer, Gibson describes Case’s first “switch” into Molly’s subjective experience, as broadcast by a newly installed “SimStim” unit:

The abrupt jolt into other flesh. Matrix gone, a wave of sound and color…For a few frightened seconds he fought helplessly to control her body. Then he willed himself into passivity, became the passenger behind her eyes.

This dual description of sensory richness and panicked helplessness closely matches what it feels like to see someone else’s point of view in 3D. In Molly mode, the game takes the view from each of two iPhones aligned into a stereo pair and streams them into each eye of the Oculus Rift. The resulting 3D illusion is surprisingly effective. When I first got it working, I had a lab mate carry the pair of iPhones around, placing me into different points of view. I found myself gripping the arms of my chair, white-knuckled, as he flew the camera over furniture and through obstacles around the room. In conventional VR applications, the Oculus works by head tracking, making the motions of your head control the direction of a pair of cameras within the virtual scene. Losing that control, having your head turned for you, and having your actual head movements do nothing is extremely disorienting.

Gibson also describes the intimacy of this kind of link, as in this exchange where Molly speaks aloud to Case while he rides along with her sensorium:

“How you doing, Case?” He heard the words and felt her form them. She slid a hand into her jacket, a fingertip circling a nipple under warm silk. The sensation made him catch his breath. She laughed. But the link was one-way. He had no way to reply.

While it’s not nearly as intimate as touch, the audio that streamed from “Molly”’s phone rig to “Case” in the game provided an echo of this same experience. Since Molly holds the phones closely and moves through a crowded public space, she speaks in a whisper, which stays close in Case’s ears even as she moves ever further away in space.

Even in simpler forms, this Case-Molly coordination can be interesting. Here’s a video from an early prototype where we try to coordinate the selection of a book using only the live camera feed and the single red/green bit.

One major aspect of the experience that diverged from Gibson’s vision is the experience of “cyberspace”. The essence of this classic idea is that visualizing complex data in immersive graphical form makes it easier to navigate. Here’s Gibson’s classic definition:

Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators…A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding…

Throughout Neuromancer, Gibson emphasizes the fluency achieved by Case and other cyberspace jockeys, the flow state enabled by their spatial navigation of the Matrix. Here’s a passage from the first heist:

He flipped back. His program had reached the fifth gate. He watched as his icebreaker strobed and shifted in front of him, only faintly aware of his hands playing across the deck, making minor adjustments. Translucent planes of color shuffled like a trick deck. Take a card, he thought, any card. The gate blurred past. He laughed. The Sense/Net ice had accepted his entry as a routine transfer from the consortium’s Los Angeles complex. He was inside.

My experience of playing as Case in the game could not have been more opposed to this. Rather than a smooth flow state, the virtual reality interface and the rapid switches to and from Molly’s POV left me cognitively overwhelmed. The first time we successfully completed a Molly turn, I found I couldn’t solve the puzzle because I’d essentially lost the ability to count. Even though I’d designed the puzzle and played it dozens of times in the course of implementing it, I failed because I couldn’t stay focused enough to retain the number of blinks of each cube and where they should fit in the sorting. This effect was way worse than the common distractions of email, Twitter, texts, and IM many of us live with in today’s real computing environments.

Further, using a mouse and a keyboard while wearing a VR helmet is itself surprisingly challenging. Even though I am a very experienced touch-typist and am quite confident using a computer trackpad, I found that when the VR display presented me with contradictory information about what was in front of me, I struggled with basic input tasks like keeping my fingers on the right keys and mousing confidently.

Here you can see a video of an early run where I lost the Case puzzle because of these difficulties:

Technical Implementation

Case and Molly game diagram

Lacking an Ono-Sendai and a mobile SimStim unit, I built this Case and Molly prototype with a hodgepodge of contemporary technologies. Airbeam Pro was essential for the video streaming: I ran its iOS app on both iPhones, which turned each one into an IP camera, and then ran its desktop client, which captured the feeds from both cameras and published them to Syphon, an amazingly useful OS X utility for sharing GPU memory across multiple applications for synced real-time graphics. I then used Syphon’s plugin for the Unity3D game engine to display the video feeds inside the game.

I built the game logic for both the Case and Molly modes in Unity using the standard Oculus Rift integration plugin. The only clever element involved was placing the Plane displaying the Syphon texture from each separate camera into its own Layer within Unity so the left and right cameras could be set to not see the overlapping layer from the other eye.

To communicate back from Case to Molly, I used the WebSocket-Sharp plugin for Unity to send messages to a Node.js server running on Nodejitsu, the only Node host I could find that supported raw websockets rather than just socket.io. My Node app then broadcasts JSON with the button state (i.e. whether Case is sending a “red” or “green” message) as well as the game clock to a static web page on a third phone, which Molly also carries.
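The server’s job amounts to relaying a tiny JSON payload to whoever is listening. Here is a sketch of that relay using the “ws” package for Node (illustrative only, with assumed message shapes; the project’s actual server code is in the repository and may differ):

```typescript
// Minimal websocket relay sketch: receive Case's state from the Unity
// client and rebroadcast it to the page on Molly's third phone.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    // Assumed payload, e.g. {"button":"green","clock":21}.
    // Forward it to every other connected client.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```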

The code for all of this can be found here: case-and-molly on GitHub.

Special thanks to Kate Tibbetts for playing Molly to my Case throughout the building of this game and for her endlessly useful feedback.

CARTESIAN SPACE PROJECT

Student: Paloma Gonzalez

This project is named after Cartesian coordinates and orthogonal projection, and was inspired by the idea of CYBERSPACE in the novel Neuromancer: how the human figure can be perceived as data.

CYBERSPACE: A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts… A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters… (Gibson, 1984)

Neuromancer (book)

What would a data representation of the human figure look like? Something like the inside of cyberspace?


Gesture Everywhere, 2013. Joe Paradiso and Nick Gillian.

What would happen if we began to understand our own representations in cyberspace in a physical way, or started to see these data representations?

…a human inside a computer is just data; the figure is mainly coordinates.

This project consists of using six projectors inside a square room to project onto the walls the Cartesian projections of the people inside that room, as scanned by a Kinect. In that way we will have the “plans” of a person.
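The core operation is simple: an orthogonal projection of a scanned body is obtained by dropping one coordinate of every 3D point, so each wall shows the point cloud flattened along one axis. A small sketch of this idea (hypothetical code, standing in for the real Kinect-and-projector pipeline):

```typescript
// Orthogonal projection of Kinect points onto the room's walls:
// each wall drops one axis of every 3D point.
type Point3 = { x: number; y: number; z: number };
type Point2 = { u: number; v: number };

// Front/back walls drop depth (z); side walls drop x; floor/ceiling drop y.
function projectToWall(p: Point3, wall: "front" | "side" | "floor"): Point2 {
  switch (wall) {
    case "front": return { u: p.x, v: p.y };
    case "side":  return { u: p.z, v: p.y };
    case "floor": return { u: p.x, v: p.z };
  }
}

const shoulder: Point3 = { x: 0.3, y: 1.5, z: 2.1 };
console.log(projectToWall(shoulder, "side")); // { u: 2.1, v: 1.5 }
```

With six projectors (two per axis), the room displays all of these flattened views at once: the “plans” of the person standing inside.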


The projection is like an abstract mirror. It delivers human-scaled data.



Comments: Cyberspace is timeless, so these data representations can work across different times. The Cartesian Space can allow interactions with the “phantoms” of previous motions.

Constructors

Our vision carries a deep nostalgia for the complexity of design produced through more than three thousand years of cultural evolution, which today seems to be forgotten amid the dry, automated processes of fabrication. Following this vision, we ask: can digital fabrication and computer-aided design enhance the practice of design, or have we simply lost our appreciation for beauty?

The Rape of Proserpina – Bernini

We envision a future in which construction workers are equipped with wearable devices that enable precise, digitally controlled fabrication. Constructors may fill different roles: some may add material blocks, while others intricately carve and detail sections of a structure; still others may enact specialized roles, like routing capillary channels through structures to add functionality to structural elements.

In this project we present one instantiation of such a device: a prosthetic milling arm with an augmented display helmet.

Constructors wear a safety helmet with an integrated augmented display. The display gives real-time feedback to constructors as to what parts of the building or structure need to be modified.


The arm’s 3D position may be tracked by globally positioned optical sensors.

https://www.dropbox.com/s/r58htuzwiixru7k/Video5min_lowres2.mov