Category Archives: Projects

A Voice To Keep You Sane While Exploring The Depths Of Space

You might remember AgNES, the cute little robot head with one eye that would turn to follow bright colours, modelled as a sort of younger (or older?) sibling to Portal’s GLaDOS. Well, AgNES is evolving and growing – this time, AgNES even has a brain. Not a particularly smart brain, but it’s a start.

AgNES’s brain is really a standard USB hub with five individual USB flash drives attached to it.

There are two broad lines of thought informing the design of AgNES. One is the concept of future archaeology, or how past and present technologies will appear to researchers hundreds or thousands of years from now – how protocols, interfaces, design conventions, and so on, will be accessible to people who will want to study our present society. In many cases, these systems will quite likely have to be reverse engineered to even make them functional, and then the work of trying to understand the social and cultural role they played may be pretty close to a guessing game. The other big line of thought is human deep-space exploration, or how we will adapt to sending people on really long deep space voyages, sometimes on their own and sometimes with other people, trying to make sure they don’t go crazy in the process. AgNES is a little bit of both: it’s a companion robot designed to accompany deep space explorers, but it’s also one that’s probably been derelict for a while and found by some other crew hundreds of years later, without major explanation or indication as to what happened. That also makes AgNES the only mechanism through which one can reconstruct the story, and a little bit of a mystery game to try to understand what happened under its watch.

So there’s a lot going on here, and many things to unpack – most of them driven by this underlying narrative scenario, which in turn motivates the design and informs many of its decisions. As a companion robot, the voice of AgNES is intended to keep the crew company and provide them with some persistent grounding in the real world, by offering information or simply entertainment. AgNES can spew out facts and teach about things, but it can also tell stories or read poems. And most of this content is constantly changing, some of it even randomly generated every time. In a way, it can be read as a science fiction retelling of the tale of One Thousand and One Nights, where Scheherazade earned herself one more day to live every night by telling a new story. Similarly, AgNES is able to keep explorers engaged by providing one more piece of content at a time.

But where AgNES’s design gets really interesting is in her personality. The design of AgNES’s brain is modeled on the “Five Factor Model” (FFM) of personality, according to which personality can be described as operating across five different domains – openness, conscientiousness, extraversion, agreeableness, and neuroticism – with different personalities being variously strong or weak in each of these domains. AgNES’s personality is similarly modelled on the Five Factor Model, albeit with a few of the trait names modified slightly. Her personality is composed of five independent “cores”, each representing one trait, and each of which can be activated or deactivated independently. Which personality cores are active at any given time affects AgNES’s behaviour, roughly following the FFM description: if the openness core – renamed “curiosity” – is turned off, for example, AgNES’s language becomes simpler, as do the stories she will tell. If the conscientiousness core – renamed “discipline” – is offline, she might be willing to comply with some commands that would otherwise be blocked for the user. In this way, you can experience several different versions of AgNES by experimenting with personality core configurations, where each of them might yield different information about herself, her purpose, her design, her creators, or her history.

AgNES’s personality cores based on the Five Factor Model.

That’s where the interactive storytelling component comes in. Based on the present configuration, AgNES will give you some information. Under a different configuration, you might be able to explore that information further, or you might get different, even contradictory, information. Since there’s no documentation or knowledge about the context available, there’s really no way to tell – so as the hypothetical future researcher you’re playing, you can only make your best conjecture as to what’s going on.

In terms of implementation, AgNES’s brain operates really as just a switch. The brain itself is a standard USB hub, and the personality cores are standard USB flash drives with renamed volume labels to tell them apart. The updated code for AgNES – as always, available on GitHub if anyone’s interested – checks which cores are plugged in after every command is executed and updates accordingly. Some commands will simply be nonresponsive unless the right configuration is set, while others will exhibit different behaviour and information. Many of the commands are set up so that they will always give the user new information in randomised fashion: for instance, the til command (or “today I learnt”) will fetch and read out the summary of a random Wikipedia article, assuming all cores are in place (if the Curiosity core is off, the command does the same, except pulling from the Simple English Wikipedia). Other commands pull information from other sources around the web, especially randomised text generators: haikus, randomly generated postmodern articles, FML tweets, and so on. The code pulls a source, parses the webpage, extracts the desired bit of text and then passes it over to the Mac’s built-in text-to-speech synthesizer to read out loud.
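
To make the mechanics concrete, here is a minimal sketch of those two moves – detecting which cores are mounted, and the til command – written as an illustration rather than taken from the actual AgNES code on GitHub. The core volume labels and the Wikipedia REST endpoint are assumptions; the real code parses webpages from its sources rather than calling an API.

```python
# Illustrative sketch only (not the actual AgNES code on GitHub).
# Assumed core volume labels; the Wikipedia REST summary endpoint stands in
# for the project's own page parsing.
import os
import subprocess
import requests

CORE_LABELS = {"CURIOSITY", "DISCIPLINE", "EXTRAVERSION", "AGREEABLENESS", "NEUROTICISM"}

def active_cores(volumes_dir="/Volumes"):
    """Every mounted USB volume whose label matches a core name counts as active."""
    return {name.upper() for name in os.listdir(volumes_dir)} & CORE_LABELS

def til():
    """'Today I learnt': read out the summary of a random Wikipedia article,
    falling back to the Simple English Wikipedia when Curiosity is unplugged."""
    domain = "en.wikipedia.org" if "CURIOSITY" in active_cores() else "simple.wikipedia.org"
    summary = requests.get(f"https://{domain}/api/rest_v1/page/random/summary",
                           timeout=10).json()["extract"]
    subprocess.run(["say", summary])  # macOS built-in text-to-speech

if __name__ == "__main__":
    til()
```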

AgNES detects when cores are unplugged, and modifies its behaviour accordingly.

The result is often funny, and often awkward. It’s when commands break or fail to work as expected that crucial information about AgNES is revealed – when it forgets who it’s talking to and calls you by one of its developers’ names, or when it inadvertently reveals how to get access to some crucial piece of information. Working on this breadcrumb trail is probably going to be the highest priority item for the next and final iteration, so that the AgNES simulation is actually “playable.” Additionally, I also need to improve how rules are managed in determining which operations should be available based on the active cores, and I need to clean up the interface further still – as well as integrating this iteration with the previous one, to actually have a moving, animated object to correlate with the synthesized voice you get from the computer.

I’m already getting really good feedback to consider for the final iteration – so any comments or suggestions are more than welcome! Also, I’ll try to get a video up soon to show how it actually operates, as it’s a little difficult to reproduce independently right now.

LawyeR: Exploring the future of legal work with machine learning and comics

2H2K: LawyeR presentation.

“2H2K: LawyeR” is an in-progress project exploring the effects of electronic document discovery and machine learning on the future of the legal profession. I’m pursuing that topic by prototyping an interactive machine learning interface for document discovery and by writing and illustrating a comic telling the story of a sysadmin in a 2050 law firm. This work is part of 2H2K, a collaborative project with John Powers that imagines life in the second half of the 21st century.

Discovery is the legal process of finding and handing over documents in response to a subpoena received in the course of a lawsuit. Currently, discovery is one of the most labor-intensive parts of the work of large law firms. It employs thousands of well-paid lawyers and paralegals. However, the nature of the work makes it especially amenable to recent advances in machine learning. Due to the secretive and competitive nature of the field, much of the work has gone unpublished. In this project, I’m working to create a prototype interactive machine learning system that would enable a lawyer or paralegal to do the work of discovery much more efficiently and effectively than is currently possible. Further, I’m trying to imagine the cultural consequences of the displacement of a large portion of well-paid highly-skilled legal labor in favor of automated systems. What happens when a large portion of the white collar jobs in large corporate legal firms are eliminated through automation? What does a law firm look like in that world? What does the law look like?

For this first stage of the project, I’ve been working with the Enron email dataset to develop a classifier that can detect emails that are relevant to a legal case on insider trading. In the course of developing that classifier, I had to read and label 1028 emails from and to Martin Cuilla, a trader on Enron’s Western Canada desk. While this process might seem dry and technical, in practice it threw me into the midst of the personal details of Cuilla’s life, ranging from his management of the Enron fantasy football league to the planning of his wedding to his heavy gambling to his problems with alcohol and contact with recruiters in the later period of Enron’s decline. This experience is already common to lawyers and paralegals who are immersed in previously personal documents in the course of doing discovery on a case. The introduction of this interactive machine learning system would transform the shared soap opera experience of a large team of lawyers into the personal voyeurism of the individual distributed users of the system.
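
The post doesn’t specify the model behind the classifier, so the snippet below is only a hedged sketch of the interactive machine learning loop it describes: fit a text classifier on the hand-labelled messages, then surface the documents the model is least certain about for the reviewer to label next. The email texts, labels, and the TF-IDF/logistic-regression choice are placeholders, not the project’s actual code.

```python
# Sketch of an interactive (active-learning) relevance classifier for discovery.
# The actual model and data pipeline aren't described in the post; everything
# below is assumed for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labelled_texts = ["...email judged relevant...", "...email judged irrelevant..."]
labels = [1, 0]                        # 1 = responsive to the insider-trading case
unlabelled_texts = ["...the rest of the mailbox..."]

clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                    LogisticRegression(max_iter=1000))
clf.fit(labelled_texts, labels)

# Uncertainty sampling: surface the documents the model is least sure about,
# so the reviewer's next labels are the most informative ones.
probs = clf.predict_proba(unlabelled_texts)[:, 1]
next_to_review = np.argsort(np.abs(probs - 0.5))[:10]
```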

In addition to this technical work, I’m also writing and drawing a comic telling the story of an individual working in a law firm in the year 2050. The comic is in the early stages, but I included rough versions of the first two pages in the presentation.

Slapsticks

Slapsticks are a set of chopsticks that control what and how you eat. Built from stainless, low-carbon steel wrapped in copper wire, each Slapstick’s magnetism is controlled by a computer system supervising the trencherman’s eating order.

Inspired by the mediatronic chopsticks showcased in the novel “The Diamond Age” by Neal Stephenson, Slapsticks mainly monitor how fast you are eating – but also what you are eating. Whenever your eating behaviour departs from the computationally optimised diet plan (COD), Slapsticks enforce better nutrition by attracting or repelling each other. If the trencherman keeps defying the COD, the Slapsticks raise their temperature until they become unusable.
Good eating behaviour and food selection are rewarded by magnetically supporting the eater.

Slapsticks are built from low-carbon steel wrapped in insulated copper wire. This setup produces a magnetic field when an electric current is applied (electromagnetism). As with most electromagnets, the magnetic field disappears when the current is turned off. The rear part of each Slapstick is coated in rubber to ensure a secure grip and to safely insulate the copper wire from the eater’s hand.
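
The post doesn’t describe the supervising controller, so the following is purely a hypothetical sketch of how coil strength could track the eater’s pace – assuming a Raspberry Pi switching each coil through a driver transistor, with PWM duty cycle standing in for coil current:

```python
# Hypothetical controller sketch: the post doesn't say what drives the coils.
# Assumes a Raspberry Pi switching a coil through a transistor on a GPIO pin,
# with PWM duty cycle standing in for coil current (and thus magnetic strength).
import time
import RPi.GPIO as GPIO

COIL_PIN = 18                      # assumed wiring
TARGET_BITES_PER_MIN = 10          # from the computationally optimised diet plan (COD)

GPIO.setmode(GPIO.BCM)
GPIO.setup(COIL_PIN, GPIO.OUT)
coil = GPIO.PWM(COIL_PIN, 1000)    # 1 kHz PWM
coil.start(0)

def update_coil(measured_bites_per_min):
    """The further the eater strays from the COD pace, the stronger the field."""
    deviation = abs(measured_bites_per_min - TARGET_BITES_PER_MIN)
    duty = min(100, deviation * 10)   # crude proportional mapping
    coil.ChangeDutyCycle(duty)

try:
    while True:
        update_coil(measured_bites_per_min=14)   # would come from a bite sensor
        time.sleep(1)
finally:
    coil.stop()
    GPIO.cleanup()
```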

 

Assembling AgNES, Or How I Read Science Fiction

The new revision of AgNES.

Definitely the hardest part about working on Agnes so far was trying to come up with some backronym for the name. But I think I’ve finally found something: A(u)gmented Narrative Experience Simulator – AgNES.

As the sole representative of the MIT Comparative Media Studies program in the SciFi2SciFab class, my approach to building prototypes and exploring science fiction themes is likely quite different from that of people at the Media Lab. For starters, I don’t have a background in engineering or computer science, and my “making things” experience is limited. Which has made the class quite an interesting challenge, especially in trying to adapt what the experience of science fiction design means from my point of view.

That’s perhaps why my approach to building something like AgNES is more narrative, or is perhaps better framed around larger themes involving science fiction and the exploration of possible futures. Even though I began thinking about AgNES as a version of GLaDOS from Portal, in the process of working out the code and figuring out the design I ended up thinking a lot more about the possible narrative implications of what I was doing. In the game Portal, GLaDOS has been left behind by herself in an abandoned facility (after she killed most of the people there with a neurotoxin gas), and all of the information we receive about her own history and background comes either from direct interaction, or from subtle clues the designers of the game laid around hidden areas in the game’s levels. I started thinking about it from the point of view of far-future archaeology, where someone might find a facility and a system such as this one thousands of years into the future and then start interacting with it, trying to figure out what was going on at the time, and what its designers were thinking.

The rat man’s den in Portal.

If you think about it, most of the technology we use nowadays is fairly well documented – most of the stuff that has been widely used over the last hundred years or so can be traced roughly to where it came from and put into its historical context. If we go back a few hundred years, things begin to get murkier, and if we go back a few thousand, we begin to rely on archaeology to figure things out. So even though we’re able to understand our current technologies, thousands of years from now some people looking back at this time might not have such an easy time. For the most part, even trying to go back a few years to read some data stored on floppy disks can be quite the challenge nowadays. If you account for the evolution of storage media, operating systems and programming languages, interface design principles, and so on, our present technology might be entirely inaccessible to someone from the future. And while a clear chain of influence and evolution can be mapped, at some point, given enough variation, that might cease to be the case.

So that all ended up flowing into the design for AgNES. First of all, how would the design stand on its own to someone using this system, without having access to any information like context, purpose, background, and so on? How could one make inferences as to any of these things strictly from what was provided through the interface? And conversely, how could one exploit this information vacuum for narrative purposes? How can I construct a story about AgNES and convey it to the user strictly through interface and interaction elements?

The abandoned facilities of Aperture Science, with GLaDOS at its core.

In other words, the point of playing around with AgNES is figuring out a way in which it provides information to the user bit by bit, which the user can then put together to reconstruct the origin, purpose and story behind this system. It then becomes something of a mystery game, but just as in archaeology, it is not a game of precise inputs and outputs. The user/player can only draw conjectures based on the available information, but the system itself cannot speak its “truth” and fully resolve what’s going on behind the scenes. While looking at the code behind the system might be useful and helpful, that doesn’t necessarily mean all the elements are contained in it to solve the mystery.

AgNES has become sort of a game, and games are essentially complex information systems whose purpose is to strategically conceal information from the player in a way that drives him or her to uncover that information through the mechanics of the game. The difference is that the objective of this game is only to understand its parameters, yet the feedback mechanism is such that you can never fully know whether you have done so (a bit perverse, perhaps). You can only get closer and closer, testing things out to see what happens: while the original design had five entirely different personality cores that were interchangeable, the new design is closer to GLaDOS in being built around five cores which provide aspects of one whole personality. The cores can be deactivated, thus affecting the way in which AgNES operates, and prompting her to reveal different pieces of information depending on the active combination of cores. The current design draws from many things we’ve been looking at in class: while Portal remains the main source, there’s a little bit of The Diamond Age’s Primer and ractors in thinking about the purpose AgNES could serve (while it also now has a new confession mode borrowed from THX 1138). The personality side of AgNES also bears some influence from the Neuromancer/Wintermute conflict between artificial intelligences.

The new personality core design for AgNES.

It’s a somewhat different approach to the intersection between science fiction and speculative/critical design, but hopefully an interesting one. The code for AgNES remains available on GitHub, and hopefully in the near future it’ll be somewhere where it can be tested to see how people react to it. A new demo is also posted below, this new iteration built in collaboration with Travis Rich at the Media Lab.

Project LIMBO

What is LIMBO?

LIMBO stands for Limbs IMotion BOthers, a tech demonstration of how to control other humans remotely, via digital interfaces.

The concept is simple: first, you need two people. We start with a guide – the person who wants to be in control – who sends a signal from any digital interface imaginable: a software UI button, a sensor controller, or a hand gesture to a computer, for example. A special glove that can be triggered to cause muscle contractions is worn by another person far away (we call this person the dupe). So, it’s that simple: the guide controls the movements of the dupe, far away, via a digital interface.

What we’ve created is one specific scenario implementing this concept. Here’s how it works:

  1. Analyze the openness of the fist of the guide using the Creative Interactive Gesture Camera and computer vision techniques provided by Intel’s Perceptual Computing SDK, which gives us some information about the guide’s hand position and state.
  2. When we detect a palm from the guide, we can send some information to the dupe about the openness of the guide’s hand (sketched in code after this list).
  3. The dupe is wearing a glove with electrically conducting pads.  Using principles of functional electrical stimulation (FES for short), we can send a current through the dupe’s arm to activate specific muscle contractions in the dupe’s hand.
  4. What we’ve demonstrated is effectively the mapping of the grasp of one person’s hand, directly to another’s.
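
The gesture camera and glove electronics aside, the guide-to-dupe link boils down to streaming a single openness value. Here is a rough sketch of that step, with the Intel Perceptual Computing SDK call stubbed out (its actual API is not reproduced here) and a UDP listener assumed on the glove’s controller:

```python
# Rough sketch of the guide-to-dupe link. read_hand_openness() is a stand-in for
# the gesture-camera reading the project actually got from Intel's SDK; the
# dupe's glove controller is assumed to listen for a single value over UDP.
import socket
import time

DUPE_ADDRESS = ("192.168.0.42", 9000)   # assumed address of the glove controller
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def read_hand_openness():
    """Placeholder for the camera reading: 0.0 = closed fist, 1.0 = open palm."""
    return 0.5

while True:
    openness = read_hand_openness()
    # The glove maps this value to stimulation strength, contracting the dupe's
    # hand as the guide closes theirs.
    sock.sendto(f"{openness:.2f}".encode(), DUPE_ADDRESS)
    time.sleep(0.05)                     # ~20 updates per second
```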

Here’s a video of LIMBO in action

LIMBO Presentation PDF

 

Ermal Dreshaj and Sang-won Leigh

 

Rodent Sense

Inspired by the Kill Decision protagonist’s reliance on a trained pair of ravens (as opposed to drones), and by the SimStim in Neuromancer that allows Case to tap into the sensory experience of another, the Rodent Sense project links its wearer to the world of animals.

We drew on Umwelt theory to imagine how a human might make sense of the world if given the opportunity to switch between various animal sensory inputs and augment (or diminish) their senses in particular ways. For the demo, we focused on allowing the viewer to see through the eyes of a hamster.

As hamsters can only see 2 inches in front of their eyes, the view offered to the wearer is quite a distorted one. To create this experience, we attached stereoscopic cameras to a carriage and hamster-ball device that the hamster pulled, and processed the resulting video feed so that it could be seen in 3D by a viewer wearing an Oculus Rift. The carriage and wheels were laser cut from 5mm mirrored acrylic and 25mm clear acrylic.
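
The post doesn’t detail the video pipeline, so the snippet below is only a sketch of the basic stereo composition – the side-by-side layout a Rift viewer expects – assuming the two carriage cameras show up as ordinary capture devices and leaving out lens-distortion correction:

```python
# Minimal sketch of the side-by-side stereo composition for a Rift-style viewer.
# Assumes the two carriage cameras appear as capture devices 0 and 1; the
# project's actual processing isn't described in the post.
import cv2

left_cam = cv2.VideoCapture(0)
right_cam = cv2.VideoCapture(1)

while True:
    ok_l, left = left_cam.read()
    ok_r, right = right_cam.read()
    if not (ok_l and ok_r):
        break
    # One half of the display per eye; the headset's own warp shader would
    # normally correct for its lenses on top of this.
    left = cv2.resize(left, (640, 800))
    right = cv2.resize(right, (640, 800))
    cv2.imshow("hamster view", cv2.hconcat([left, right]))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

left_cam.release()
right_cam.release()
cv2.destroyAllWindows()
```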

We also recorded fictional promotion material for our product:

Thank you for purchasing the Rodent Sense™ Virtual Reality expansion pack. Now you can have the sensory experience you’ve always wanted.

For example, with the Rodent Sense™ pack, you can focus sharply on objects that are two inches from your nose, perceive time in slow motion, and feel infrared light. With the Reptile Sense™ pack you can hear through vibrations and discern the body heat of others. And, if you’re feeling adventurous, why not see through the hundreds of eyes of the sea scallop with Aquatic Sense™.

Setting up is easy. First, catch an animal or grow one in your household hatchery. Next, simply attach the Sensory Transmitter™ to your desired animal. The Transmitter will anchor to the neural system of its host and manifest pathways necessary for wireless communication. Initiation will take a few days. Once your augmented animal system is ready, you can begin to immerse yourself in this animal’s senses using your VR unit.

Collect multiple expansion packs to switch between a fleet of your favorite animals. If you want to experience our new, multiplayer product, try Swarm Sense™. You and your friends can experience what it is like to be part of a hive of bees, a school of fish, or a flock of seagulls. Experience communication on an animal level.

Warning: Death of host sometimes occurs during Sensory Transmitter™ installation. If you experience a sensory malfunction lasting more than 4 hours, call a doctor. Sense Co. will not be held responsible.

Sense Co.

Feel Everything

Meet Agnes

[Warning: This post contains spoilers related to the Portal and Portal 2 games.]

Digital games have always drawn considerable influence from the science fiction world. Even really early games, like Space Invaders, were already drawing inspiration from sci-fi themes in elementary ways, and the trend has persisted over time. One of the most interesting examples I can think of is the Portal games by Valve Software, which develop a very unconventional form of narrative: lacking any cutscenes or narrative interludes to provide the player with explanations about what’s going on, the depth of the world in which you abruptly find yourself is provided bit by bit by GLaDOS, an initially friendly artificial intelligence which leads the protagonist through a series of physical test chambers in an otherwise deserted scientific facility.

GLaDOS controls the Aperture Laboratories facility and all of its contents, and as you progress through the first game she begins to act increasingly oddly, until you eventually find out she’s responsible for killing everyone in the facility with a neurotoxin gas. GLaDOS appears to be driven solely by the urge to continue scientific testing of the Portal gun, and indeed, as soon as the player character completes the final testing sequence, she attempts to kill you (something you manage to escape, ultimately confronting and destroying the GLaDOS core).

GLaDOS’s main unit

GLaDOS, herself an experiment from Aperture Laboratories, was fitted with a sort of “fail-safe” mechanism in the form of personality cores. The various sides of her personality were split into multiple physical cores which could be attached or detached from the main unit, including cores for logical processing, morality, and rage, and in Portal 2, an actual stupidity core – Wheatley – whose design was intended to stop GLaDOS from taking over control of everything.

Some of the GLaDOS personality cores

My SciFi2SciFab project is an attempt to reconstruct a version of GLaDOS (without the murderous desires, of course), which I’m calling Agnes. GLaDOS falls into a long and rich tradition of crazy robots, which includes such hallmarks as HAL 9000 from the novel and film 2001: A Space Odyssey. But it is also a very interesting reflection on natural and artificial intelligence: in the Portal 2 game, it is revealed that GLaDOS was actually the result of transferring the consciousness of a living person onto a machine, which then went on to become a little bit insane because of the transformation and the sudden realisation she no longer had any human limits.

So I’m interested to see if I can put together a replica functional enough to allow for some interaction and, especially, to pass for a playful robot despite not having full artificial intelligence at its disposal. Agnes is then made up of two related components. The first is a software side which simulates the user interaction and provides vocal feedback based on user input; how this feedback is delivered changes based on the active “personality core” Agnes is running with. The intention is to make multiple cores available which can be physically switched, just like with GLaDOS, but with each of them generating changes in tone and address rather than in criminal tendencies. Agnes’s cores will move interactions across a “politeness spectrum” where some cores will be more agreeable and nicer than others.
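
As a toy illustration of the politeness spectrum idea (not Agnes’s actual code, which is on GitHub), the active core could simply select between phrasings of the same reply; the core names and canned lines below are hypothetical:

```python
# Toy sketch of the "politeness spectrum": the active core shifts how the same
# reply is phrased. Core names and phrasings are hypothetical, for illustration.
CORES = {
    "courteous": {"greeting": "I would be delighted to help.",
                  "refusal": "I am terribly sorry, but I cannot do that."},
    "curt":      {"greeting": "What?",
                  "refusal": "No."},
}

def respond(active_core: str, intent: str) -> str:
    """Pick the phrasing for a given intent based on the core Agnes is running with."""
    return CORES.get(active_core, CORES["courteous"])[intent]

print(respond("curt", "refusal"))   # -> "No."
```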

The other side of Agnes is the hardware component, a version of the hanging mechanical arm seen in the picture above providing a material interface to the pseudo-AI. The arm will be fitted with sensors allowing it to detect presences standing in front of it, and Agnes will then lock on to them and follow them around as they move within her operational range. So far I’ve wanted to preserve the original design of having Agnes hanging from above, which I’m also thinking would turn into a really cool application if you had a more evolved version coupled to rails along the ceiling of a building, to have your own version of Agnes following you around and providing helpful information and assistance.

The original arm design for Agnes

The more I look for inspiration and ideas for the project, the more I realise how small design decisions going into Agnes carry intense narrative charge. Not only in terms of the associations one can make, but in terms of conveying the notion of an imaginary designer making imaginary decisions, with the object itself, Agnes, being the conveyor of those decisions and of that background world invisible to the user – very much as Portal, the game itself, very successfully gives the player access to the thoughts and lives of people in Aperture Laboratories. I think this contributes a lot of value to the experience of the user/player, who is then driven not only to interact with Agnes, but hopefully will be curious enough to try and “hack it” to understand where it came from and why it is how it is. So the robot taken from a game becomes a little bit of a game itself – is there a mystery to Agnes that needs to be deciphered?

Right now I’m working through these narrative associations and thinking about how this object design conveys identity – thinking how Agnes could easily go from GLaDOS to Luxo with very different implications. I’m also in the process of implementing the pseudo-AI that will be driving Agnes, including the basis for the core switching, which I need to turn into physical switching as a next step. Once that’s somewhat running, I’ll work on putting together the hanging arm setup and connecting the two. The (admittedly super elementary) code for Agnes is up on GitHub, and a little demo of what it does right now is posted below. Any feedback, thoughts or comments are very much welcome, as I’m figuring this all out just as I go along.

Case and Molly: A Game Inspired by Neuromancer

“Case and Molly” is a prototype for a game inspired by William Gibson’s Neuromancer. It’s about the coordination between the virtual and the physical, between “cyberspace” and “meat”.

Neuromancer presents a future containing two aesthetically opposed technological visions. The first is represented by Case, the cyberspace jockey hooked on navigating the abstract information spaces of The Matrix. The second is embodied by Molly, an augmented mercenary who uses physical prowess and weaponized body modifications to hurt people and break into places.

In order to execute the heists that make up most of Neuromancer’s plot, Case and Molly must work together, coordinating his digital intrusions with her physical breakings-and-enterings. During these heists they are limited to an extremely asymmetrical form of communication. Case can access Molly’s complete sensorium, but can only communicate a single bit of information to her.

On reading Neuromancer today, this dynamic feels all too familiar. We constantly navigate the tension between the physical and the digital in a state of continuous partial attention. We try to walk down the street while sending text messages or looking up GPS directions. We mix focused work with a stream of instant message and social media conversations. We dive into the sudden and remote intimacy of seeing a family member’s face appear on FaceTime or Google Hangout.

Gameplay

“Case and Molly” uses the mechanics and aesthetics of Neuromancer’s account of cyberspace/meatspace coordination to explore this dynamic. It’s a game for two people: “Case” and “Molly”. Together and under time pressure they must navigate Molly through a physical space using information that is only available to Case. Case can see Molly’s point of view in 3D but can only communicate to her by flipping a single bit: a screen that’s either red or green.

Case and Molly headset view

Case is embedded in today’s best equivalent of Gibsonian cyberspace: an Oculus Rift VR unit. He oscillates between seeing Molly’s point of view and playing an abstract geometric puzzle game.

Case and Molly: stereo phone rig

Molly carries today’s version of a mobile “SimStim unit” for broadcasting her point of view and “a readout chipped into her optic nerve”: three smartphones. Two of the phones act as a pair of stereo cameras, streaming her point of view back to Case in 3D. The third phone (not shown here) substitutes for her heads-up display, showing the game clock and a single bit of information from Case.

Case and Molly: Molly turn

The game proceeds in alternating turns. During a Molly turn, Case sees Molly’s point of view in 3D, overlaid with a series of turn-by-turn instructions for where she needs to go. He can toggle the color of her “readout” display between red and green by clicking the mouse. He can also hear her voice. Within 30 seconds, Molly attempts to advance as far as possible, prompting Case for a single bit of direction over the voice connection. Before the end of that 30 second period, Molly has to stop at a safe point, prompting Case to type in the number of a room along the way. If time runs out before Case enters a room number, they lose. When Case enters a room number, Molly stays put and they enter a Case turn.

Case and Molly: Case turn

During his turn, Case is thrust into an abstract informational puzzle that stands in for the world of Cyberspace. In this prototype, the puzzle consists of a series of cubes arranged in 3D space. When clicked, each cube blinks a fixed number of times. Case’s task is to sort the cubes by the number of blinks within 60 seconds. He can cycle through them and look around by turning his head. If he completes the puzzle within 60 seconds they return to a Molly turn and continue towards the objective. If not, they lose.

At the top of this post is a video showing a run where Case and Molly make it through a Molly turn and a Case turn before failing on the second Molly turn.

Play Testing and Similarities and Differences from Neuromancer

In play testing the game and prototyping its constituent technology I found ways in which the experience resonated with Gibson’s account and others in which it radically diverged.

One of the strongest resonances was the dissonance between the virtual reality experience and being thrust into someone else’s point of view. In Neuromancer, Gibson describes Case’s first experience of “switching” into Molly’s subjective experience, as broadcast by a newly installed “SimStim” unit:

The abrupt jolt into other flesh. Matrix gone, a wave of sound and color…For a few frightened seconds he fought helplessly to control her body. Then he willed himself into passivity, became the passenger behind her eyes.

This dual description of sensory richness and panicked helplessness closely matches what it feels like to see someone else’s point of view in 3D. In Molly mode, the game takes the view from each of two iPhones aligned into a stereo pair and streams them into each eye of the Oculus Rift. The resulting 3D illusion is surprisingly effective. When I first got it working, I had a lab mate carry the pair of iPhones around, placing me into different points of view. I found myself gripping the arms of my chair, white-knuckled, as he flew the camera over furniture and through obstacles around the room. In conventional VR applications, the Oculus works by head tracking, making the motions of your head control the direction of a pair of cameras within the virtual scene. Losing that control, having your head turned for you, and having your actual head movements do nothing is extremely disorienting.

Gibson also describes the intimacy of this kind of link, as in this exchange where Molly speaks aloud to Case while he rides along with her sensorium:

“How you doing, Case?” He heard the words and felt her form them. She slid a hand into her jacket, a fingertip circling a nipple under warm silk. The sensation made him catch his breath. She laughed. But the link was one-way. He had no way to reply.

While it’s not nearly as intimate as touch, the audio that streamed from “Molly”’s phone rig to “Case” in the game provided an echo of this same experience. Since Molly holds the phones closely and moves through a crowded public space, she speaks in a whisper, which stays close in Case’s ears even as she moves ever further away in space.

Even in simpler forms, this Case-Molly coordination can be interesting. Here’s a video from an early prototype where we try to coordinate the selection of a book using only the live camera feed and the single red/green bit.

One major aspect of the experience that diverged from Gibson’s vision is the experience of “cyberspace”. The essence of this classic idea is that visualizing complex data in immersive graphical form makes it easier to navigate. Here’s Gibson’s classic definition:

Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators…A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding…

Throughout Neuromancer, Gibson emphasizes the fluency achieved by Case and other cyberspace jockeys, the flow state enabled by their spatial navigation of the Matrix. Here’s a passage from the first heist:

He flipped back. His program had reached the fifth gate. He watched as his icebreaker strobed and shifted in front of him, only faintly aware of his hands playing across the deck, making minor adjustments. Translucent planes of color shuffled like a trick deck. Take a card, he thought, any card. The gate blurred past. He laughed. The Sense/Net ice had accepted his entry as a routine transfer from the consortium’s Los Angeles complex. He was inside.

My experience of playing as Case in the game could not have been more opposed to this. Rather than a smooth flow state, the virtual reality interface and the rapid switches to and from Molly’s POV left me cognitively overwhelmed. The first time we successfully completed a Molly turn, I found I couldn’t solve the puzzle because I’d essentially lost the ability to count. Even though I’d designed the puzzle and played it dozens of times in the course of implementing it, I failed because I couldn’t stay focused enough to retain the number of blinks of each cube and where they should fit in the sorting. This effect was way worse than the common distractions of email, Twitter, texts, and IM many of us live with in today’s real computing environments.

Further, using a mouse and a keyboard while wearing a VR helmet is surprisingly challenging itself. Even though I am a very experienced touch-typist and am quite confident using a computer trackpad, I found that when presented with contradictory information about what was in front of me by the VR display, I struggled with basic input tasks like keeping my fingers on the right keys and mousing confidently.

Here you can see a video of an early run where I lost the Case puzzle because of these difficulties:

Technical Implementation

Case and Molly game diagram

Lacking an Ono Sendai and a mobile SimStim unit, I built this Case and Molly prototype with a hodgepodge of contemporary technologies. Airbeam Pro was essential for the video streaming. I ran their iOS app on both iPhones which turned each one into an IP camera. I then ran their desktop client which captured the feeds from both cameras and published them to Syphon, an amazingly-useful OSX utility for sharing GPU memory across multiple applications for synced real time graphics. I then used Syphon’s plugin for the Unity3D game engine to display the video feeds inside the game.

I built the game logic for both the Case and Molly modes in Unity using the standard Oculus Rift integration plugin. The only clever element involved was placing the Plane displaying the Syphon texture from each separate camera into its own Layer within Unity so the left and right cameras could be set to not see the overlapping layer from the other eye.

To communicate back from Case to Molly, I used the Websockets-Sharp plugin for Unity to send messages to a Node.js server running on Nodejitsu, the only Node host I could find that supported websockets rather than just socket.io. My Node app then broadcasts JSON with the button state (i.e. whether Case is sending a “red” or “green” message) as well as the game clock to a static web page on a third phone, which Molly also carries.
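
The relay itself was a Node.js app, but the message flow it implements is simple: Case’s client pushes a small JSON state, and the server fans it out to whoever is listening, i.e. the page on Molly’s third phone. Purely to illustrate that flow, here is an equivalent broadcast loop sketched in Python with the websockets package (an assumption, not the project’s code):

```python
# Sketch of the relay's message flow (the real server was Node.js on Nodejitsu).
# Case's client pushes JSON like {"signal": "green", "clock": 23}; the server
# rebroadcasts it to every connected client, such as the page on Molly's phone.
import asyncio
import websockets

clients = set()

async def handler(websocket):
    clients.add(websocket)
    try:
        async for message in websocket:          # a state update from Case
            websockets.broadcast(clients, message)
    finally:
        clients.remove(websocket)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8080):
        await asyncio.Future()                   # run forever

asyncio.run(main())
```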

The code for all of this can be found here: case-and-molly on Github.

Special thanks to Kate Tibbetts for playing Molly to my Case throughout the building of this game and for her endlessly useful feedback.

CARTESIAN SPACE PROJECT

Student: Paloma Gonzalez

This project is named after Cartesian coordinates and the orthogonal projection, and was inspired by the idea of CYBERSPACE in the novel “Neuromancer” – by how the human figure can be perceived as data.

CYBERSPACE: A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts… A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters… (Gibson, 1984)

Neuromancer (book)

What would a data representation of the human figure look like? … like inside cyberspace?

Gesture Everywhere. 2013. Joe Paradiso and Nick  Gillian.

What would happen if we began to understand our own representations in cyberspace in a physical way, or started to see these data representations?

… a human inside a computer is just data; the figure is mainly coordinates.

This project consists of using 6 projectors inside a square room to project onto the walls the Cartesian projections of the people inside that room, scanned by a Kinect. In that way we will have the “plans” of a person.
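
The “plans” themselves are just the scanned point cloud with one axis dropped. As a toy sketch – with a placeholder array standing in for the Kinect data, since acquiring the depth frame (e.g. via libfreenect) is not shown – the three orthographic views could be computed like this:

```python
# Toy sketch of the "plans" of a person: orthographic projections of a Kinect
# point cloud. `points` is a placeholder Nx3 array; reading it from the Kinect
# (e.g. via libfreenect) is assumed and not shown here.
import numpy as np
import matplotlib.pyplot as plt

points = np.random.randn(5000, 3) * [0.3, 0.3, 0.9]   # fake body-shaped cloud (x, y, z)

views = {
    "front (x-z)": points[:, [0, 2]],   # drop depth
    "side (y-z)":  points[:, [1, 2]],   # drop width
    "top (x-y)":   points[:, [0, 1]],   # drop height
}

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, (name, proj) in zip(axes, views.items()):
    ax.scatter(proj[:, 0], proj[:, 1], s=1)
    ax.set_title(name)
    ax.set_aspect("equal")
plt.show()
```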

The projection is like an abstract mirror. It delivers human-scaled data.

Comments: Cyberspace is timeless, so these data representations can work across different times. The Cartesian Space can allow interactions with the “phantoms” of previous motions.

Constructors

Our vision carries a deep nostalgia for the complexity of design produced through more than three thousand years of cultural evolution, which today seems forgotten in the dry, automated processes of fabrication. Following this vision, we ask whether digital fabrication and computer-aided design can enhance the practice of design, or whether we have simply lost our appreciation for beauty.

The Rape of Proserpina – Bernini

We envision a future in which construction workers are equipped with wearable devices that enable precise, digitally controlled fabrication. Constructors may fill different roles: some may add material blocks, while others intricately carve and detail sections of a structure; still others may enact specialized roles like routing capillary channels through structures to add functionality to structural elements.

In this project we present one instantiation of such a device: a prosthetic milling arm with an augmented display helmet.

Constructors wear a safety helmet with an integrated augmented display. The display gives real-time feedback to constructors as to what parts of the building or structure need to be modified.


The arm’s 3D position may be tracked by globally-positioned optical sensors.

https://www.dropbox.com/s/r58htuzwiixru7k/Video5min_lowres2.mov