2H2K Lawyer: Artificial Labor and Ubiquitous Interactive Machine Learning

Intro

“2H2K: LawyeR” is a multimedia project exploring the fate of legal work in a future of artificial labor and ubiquitous interactive machine learning.

This project arose out of 2H2K, my ongoing collaboration with John Powers where we’re trying to use science fiction, urbanism, futurism, cinema, and visual effects to imagine what life could be like in the second half of the 21st century. One of the major themes to emerge in the 2H2K project is something we’ve taken to calling “artificial labor”. While we’re skeptical of the claims of artificial intelligence, we do imagine ever-more sophisticated forms of automation transforming the landscape of work and economics. Or, as John puts it, robots are Marxist.

Due to our focus on urbanism and the built-environment, John’s stories so far have mainly explored the impact of artificial labor on physical work: building construction, forestry, etc. For this project, I wanted to look at how automation will affect white collar work.

Having known a number of lawyers who worked at large New York firms such as Skadden and Kirkland & Ellis, one form of white collar work that seemed especially ripe for automation jumped out at me: document evaluation for legal discovery. As I’ll explain in more detail below, discovery is the most labor-intensive component of large corporate lawsuits and it seems especially amenable to automation through machine learning. Even the widespread application of technologies that already exist today would radically reduce the large number of high-paid lawyers and paralegals who currently do this work.

In the spirit of both 2H2K and the MIT Media Lab class, Science Fiction to Science Fabrication (for which this project acted as a final), I set out to explore the potential impact of machine learning on the legal profession through three inter-related approaches:

  • Prototyping a real interactive machine learning system for legal discovery.
  • Writing and illustrating a sci-fi comic telling the story of how it might feel to work in a law firm of 2050 that’s been transformed by this new technology.
  • Designing the branding for an imaginary firm working in this field.

For the rest of this post, I’ll discuss these parts of the project one-by-one and describe what I learned from each. These discussions will range from practical things I learned about machine learning and natural language processing to interface design issues to the relationship between legal discovery and voyeurism.

Before beginning, though, I want to mention one of the most powerful and surprising things I learned in the course of this project. Using science fiction as the basis of a design process has led me to think that design fiction is incredibly broken. Most design fiction starts off with rank speculation about the future, imagining a futuristic device or situation out of whole cloth. Only then does it engage prototyping and visual effects technologies in order to communicate the consequences of the imagined device through “diegetic prototypes”, i.e. videos or other loosely narrative formats that depict the imagined technology in use.

This now seems perfectly backwards to me. For this project, by contrast, I started with a real but relatively cutting edge technology (machine learning for document recall). I then engaged with it as a programmer and technologist until I could build a system that worked well enough to give me (with my highly specialized technical knowledge) the experience of what it would be like to really use such a system in the real world. Having learned those lessons, I then set out to communicate them using a traditional storytelling medium (in this case, comics). I used my technical know-how to gain early-access to the legendarily unevenly distributed future and then I used my storytelling ability to relay what I learned.

Design fiction uses imagination to predict the future and prototyping to tell stories. Imagination sucks at resolving the complex causes that drive real world technology development and uptake. Prototyping sucks at producing the personal identification necessary to communicate a situation’s emotional effect. This new process – call it Science Fiction Design, maybe? – reverses this mistake. It uses prototyping and technological research to predict the future and storytelling media to tell stories.

(Much of the content of this post is reproduced in the third episode of the 2H2K podcast where John and I discuss this project. The 2H2K podcast consists of semi-regular conversations between the two of us about the stories and technologies that make up the project. Topics covered include urbanism, labor, automation, robots, interactive machine learning, cross-training, cybernetics, and craft. You can subscribe here.)

What is Discovery?

According to wikipedia:

Discovery, in the law of the United States, is the pre-trial phase in a lawsuit in which each party, through the law of civil procedure, can obtain evidence from the opposing party by means of discovery devices including requests for answers to interrogatories, requests for production of documents, requests for admissions and depositions.

In other words, when you’re engaged in a lawsuit, the other side can request internal documents and other information from your company that might help them prove their case or defend against yours. This can include internal emails and memos, call records, financial documents, and all manner of other things. In large corporate lawsuits the quantity of documents involved can be staggering. For example, during the US government’s lawsuit against Big Tobacco six million documents were discovered totaling more than 35 million pages.

Each of these documents needs to be reviewed for information that is relevant to the case. This is not simply a matter of searching for the presence or absence of particular words, but of making a legal judgment based on the content of the document. Does it discuss a particular topic? Is it evidence of a particular kind of relationship between two people? Does it represent an order or instruction from one party to another?

In large cases this review is normally performed by hordes of first year associates, staff attorneys, and paralegals at large law firms. Before the crash of 2008, large law firms, which do the bulk of this kind of work and employ hundreds or even thousands of such workers, hired more than 30% of new law school graduates (see What’s New About the New Normal: The Evolving Market for New Lawyers in the 21st Century by Bernard A. Burk of UNC Chapel Hill).

As you can imagine, this process is wildly expensive both for law firms and their clients.

Legal Discovery and Machine Learning

Legal discovery is a perfect candidate for automation using recent advances in machine learning. From a machine learning perspective discovery is a classification task: each document must be labeled as either relevant or irrelevant to the case. Since the legal issues, people involved, and topics discussed vary widely between cases, discovery is a prime candidate for supervised learning, a machine learning approach where humans provide labels for a small subset of documents and then the machine learning system attempts to generalize to the full set.

Machine learning differs from traditional information retrieval systems such as full-text search exactly because of this ability to generalize. Machine learning systems represent their documents as combinations of “features”: the presence or absence of certain words, when a message was sent, who sent it, who received it, whether or not it includes a dollar amount or a reference to a stock ticker symbol, etc. (Feature selection is the single most critical aspect of machine learning engineering; more about it below when I describe the development of my system.) Supervised machine learning algorithms learn the patterns that are present in these features amongst the labeled examples they are given. They learn what kinds of combinations of features characterize documents that are relevant vs irrelevant and then they classify a new unseen document by comparing its features.
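To make the classification framing concrete, here is a minimal sketch of relevance prediction as supervised learning. It uses scikit-learn purely for illustration (the prototypes described below use OpenCV and Lightside instead), and the two toy “documents” and labels are invented:

```python
# A minimal sketch of discovery-as-classification, assuming scikit-learn;
# the actual prototypes in this post use OpenCV and Lightside instead.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

labeled_docs = [
    "Re: ENE position, sell before the announcement",   # invented examples
    "Fantasy football picks for week 6",
]
labels = [1, 0]  # 1 = relevant, 0 = irrelevant (hand-labeled by a reviewer)

vectorizer = CountVectorizer(binary=True)   # presence/absence word features
X = vectorizer.fit_transform(labeled_docs)

clf = LogisticRegression().fit(X, labels)   # learn patterns from the labeled subset

# Generalize to a new, unseen document by comparing its features
new_doc = ["Please review the attached trade confirmations"]
print(clf.predict(vectorizer.transform(new_doc)))
```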

Information retrieval systems are currently in widespread use throughout the legal field. One of the landmark information retrieval systems, IBM’s STAIRS system, was originally developed to reduce the expense of defending against an antitrust lawsuit in 1969, before being commercialized in 1973.

However, there is little public sign that machine learning techniques are in widespread use at all. (It’s impossible to know how widely these techniques are used within proprietary systems inside of firms, of course.) One of the most visible proponents of machine learning for legal discovery is former Bell Labs researcher, David Lewis. Lewis’s Purdue lecture, Machine Learning for Discovery in Legal Cases represents probably the best public survey of the field.

This seems on the verge of changing. In a March 2011 story in the New York Times, Armies of Expensive Lawyers, Replaced by Cheaper Software, John Markoff reported on a burgeoning set of companies beginning to compete in this field including Clearwell Systems, Cataphora, Blackstone Discovery, and Autonomy, which has since been acquired by HP. Strikingly, Bill Herr, one of the lawyers interviewed for Markoff’s story, used one of these new e-discovery systems to review a case his firm had worked on in the 80s and 90s and learned that the lawyers had only been 60 percent accurate, only “slightly better than a coin toss”.

Prototyping an Interactive Machine Learning System for E-Discovery

Having reviewed this history, I set out to prototype a machine learning system for legal discovery.

The first thing I needed in order to proceed was a body of documents from a legal case against which I could train and test a classifier. Thankfully in Brad Knox’s Interactive Machine Learning class this semester, I’d been exposed to the existence of the Enron corpus. Published by Andrew McCallum of CMU in 2004, the Enron corpus collects over 650,000 emails from 150 users obtained during the Federal Energy Regulatory Commission’s investigation of Enron and made public as part of the federal case against the company. The Enron emails make the perfect basis for working on this problem because they represent real in situ emails from a situation where there were actual legal issues at stake.

After obtaining the emails, I consulted with a lawyer in order to understand some of the legal issues involved in the case (I chose my favorite criminal defense attorney: my dad). The government’s case against Enron was huge, sprawling, and tied up with many technicalities of securities and energy law. We focused on insider trading, situations where Enron employees had access to information not available to the wider public, which they used for their own gain or to avoid losses. In the case of Enron this meant both knowledge about the commodities traded by the company and the company’s own stock price, especially in the time period of the latter’s precipitous collapse and the government’s investigation.

The World of Martin Cuilla

With this knowledge in hand, I was ready to label a set of Enron emails in order to start the process of training a classifier. And that’s when things started getting personal. In order to label emails as relevant or irrelevant to the question of insider trading I’d obviously need to read them.

So, unexpectedly I found myself spending a few hours reading 1028 emails sent and received by Martin Cuilla, a trader on the Western Canada Energy Desk at Enron. To get started labeling, I chose one directory within the dataset, a folder named “cuilla-m”. I wasn’t prepared for the intimate look inside someone’s life that awaited me as I conducted this technical task.

Of the 1028 emails belonging to Mr. Cuilla, about a third of them relate to the Enron fantasy football league, which he administered:

A chunk of them from early in the dataset reveal the planning details of Cuilla’s engagement and wedding.

They include fascinating personal trivia like this exchange where Cuilla buys a shotgun from a dealer in Houston:

In the later period of the dataset, they include conversations with other Enron employees who are drunk and evidence of Cuilla’s drinking and marital problems:

As well as evidence of an escalating gambling problem (not a complete shocker in a day trader):

And, amongst all of this personal drama, there are emails that may actually be material to the case where Cuilla discusses predictions of gas prices:

orders trades:

and offers to review his father’s stock portfolio to avoid anticipated losses (notice that his father also has an Enron email address):

In talking to friends who’ve worked at large law firms, I learned that this experience is common: large cases always become soap operas. Apparently, it’s normal when reading the previously private correspondence of any company to come across evidence of at least a few affairs, betrayals, and other such dramatic material. Part of working amongst hundreds of other lawyers, paralegals, and staff on such a case is the experience of becoming a collective audience for the soap opera that the documents reveal, gossiping about what you each have discovered in your reading.

As I learned in the course of building this prototype: this is an experience that will survive into a world of machine learning-based discovery. However, it will likely be transformed from the collective experience of large firms to a far more private and voyeuristic one as individuals (or distributed remote workers) do this work alone. This was an important revelation for me about the emotional texture of what this work might feel like in the future and (as you’ll see below) it became a major part of what I tried to communicate with the comic.

Feature Engineering and Algorithm Selection

Now that I’d labeled Martin Cuilla’s emails, I could begin the process of building a machine learning system that could successfully predict these labels. While I’ve worked with machine learning before, it’s always been in the context of computer vision, never natural language.

As mentioned above, the process of designing machine learning systems has two chief components: feature engineering and learning algorithm selection. Feature engineering covers what information you extract from each document to represent it. The learning algorithm is how you use those features (and your labels) to build a classifier that can predict labels (such as relevant/irrelevant) for new documents. Most of the prestige and publicity in the field goes to the creation of learning algorithms. However, in practice, feature engineering is much more important for solving real world problems. The best learning algorithm will produce terrible results with the wrong features. And, given good feature design, the best algorithms will only incrementally outperform the other options.

So, in this case, my lack of experience with feature engineering for natural language was a real problem. I barged forwards nonetheless.

For my first prototype, I extracted three different kinds of features: named entities, extracted addresses, and date-sent. My intuition was that named entities (i.e. stock symbols, company names, place names, etc) would represent the topics discussed, the people sending and receiving the messages would represent the command structure within Enron, and the date sent would relate to the progress of the government’s case and the collapse of the company.

I started by dividing Martin Cuilla’s emails into training and testing sets. I developed my system against the training set and then tested its results against the test set. I used CoreNLP, an open source natural language processing library from Stanford, to extract named entities from the training set. You can see the results in the github repo for this project here. (Note: all of the code for this project is available in my github repo, atduskgreg/disco, and the code from this stage of the experiment is contained in this directory.) I treated this list as a “Bag of Words”, creating a set of binary features corresponding to each entity with the value of 1 given when an email included the entity and 0 when it did not. I then did something similar for the email addresses in the training set, which I also treated as a bag of words. Finally, to include the date, I transformed the date into a single feature: a float which was scaled to the timespan covered by the corpus. In other words, a 0.0 for this feature would mean an email was sent at the very start of the corpus and a 1.0 that it was the last email sent. The idea being that emails sent close together in time would have similar values.
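As a rough illustration of that feature scheme (binary entity features, binary address features, and a scaled date), here is a sketch in Python. The helper name and the email dictionary format are hypothetical; the actual prototype assembled these features from CoreNLP output:

```python
# Illustrative sketch of the feature construction described above.
# The email dict format and vocabularies here are invented for the example.
def email_features(email, entity_vocab, address_vocab, t_min, t_max):
    features = []
    # Binary "bag of words" over named entities found in the training set
    features += [1.0 if entity in email["text"] else 0.0 for entity in entity_vocab]
    # Binary features for each sender/recipient address seen in training
    features += [1.0 if addr in email["addresses"] else 0.0 for addr in address_vocab]
    # Date sent, scaled so 0.0 is the first email in the corpus and 1.0 the last
    features.append((email["date"] - t_min) / (t_max - t_min))
    return features
```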

For the learning algorithm, I selected Random Decision Forest. Along with Support Vector Machines, Random Decision Forests are amongst the most effective widely-deployed machine learning algorithms. Unlike SVMs though, Random Decision Forests have a high potential for transparency. Due to the nature of the algorithm, most Random Decision Forest implementations provide an extraordinary amount of information about the final state of the classifier and how it was derived from the training data (see my analysis of Random Decision Forest’s interaction affordances for more). I thought this would make it a superior choice for an interactive e-discovery system since it would allow the system to explain the reasons for its classifications to the user, increasing their confidence and improving their ability to explore the data, add labels, tweak parameters, and improve the results.

Since I maintain the OpenCV wrapper for Processing and am currently in the process of integrating OpenCV’s rich machine learning libraries, I decided to use OpenCV’s Random Decision Forest implementation for this prototype.
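For illustration, here is roughly what training that classifier looks like through OpenCV’s Python bindings (the prototype itself used the OpenCV wrapper for Processing, and the toy feature rows below are invented):

```python
# A rough Python analogue of training OpenCV's Random Decision Forest.
import numpy as np
import cv2

# Toy feature matrix standing in for the entity/address/date features above
X_train = np.array([[1, 0, 0.1],
                    [0, 1, 0.9],
                    [1, 1, 0.5],
                    [0, 0, 0.2]], dtype=np.float32)
y_train = np.array([1, 0, 1, 0], dtype=np.int32)  # 1 = relevant, 0 = irrelevant

rtrees = cv2.ml.RTrees_create()
rtrees.setMaxDepth(10)                             # keep individual trees shallow
rtrees.train(X_train, cv2.ml.ROW_SAMPLE, y_train)

_, prediction = rtrees.predict(np.array([[1, 0, 0.15]], dtype=np.float32))
print(prediction)
```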

Results of the First Prototype: Accuracy vs Recall

The results of this first prototype were disappointing but informative. By the nature of legal discovery, it will always be a small minority of documents that are relevant to the question under investigation. In the case of Martin Cuilla’s emails, I’d labeled about 10% of them as relevant. This being the case, it is extremely easy to produce a classifier that has a high rate of accuracy, i.e. one that produces the correct label for a high percentage of examples. A classifier that labels every email as irrelevant will have an accuracy rate around 90%. And, as you can see from the console output in the image above, that’s exactly what my system achieved.

While this might sound impressive on paper, it is actually perfectly useless in practice. What we care about in the case of e-discovery is not accuracy, but recall. Where accuracy measures how many of our predicted labels were correct, recall measures how many of the total relevant messages we found. Whereas accuracy is penalized for false positives as well as false negatives, recall only cares about avoiding false negatives: not missing any relevant messages. It is quite easy for a human to go through a few thousand messages to eliminate any false positives. However, once a truly relevant message has been missed it will stay missed.

With the initial approach, our classifier only ever predicted that messages were irrelevant. Hence, the 90+% accuracy rate was accompanied by a recall rate of 0. Unacceptable.
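The distinction is easy to see in code. This small sketch computes both metrics for a hypothetical corpus that is 10% relevant and a classifier that predicts “irrelevant” for everything:

```python
# Why accuracy misleads here: accuracy vs recall on an imbalanced corpus.
def accuracy_and_recall(true_labels, predicted_labels):
    tp = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(true_labels, predicted_labels) if t == p)
    accuracy = correct / len(true_labels)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, recall

# 10% relevant corpus, classifier that labels everything irrelevant:
true = [1] * 10 + [0] * 90
pred = [0] * 100
print(accuracy_and_recall(true, pred))   # (0.9, 0.0): high accuracy, zero recall
```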

Improving Recall: Lightside and Feature Engineering for Text

In order to learn how to improve on these results, I consulted with Karthik Dinakar, a PhD candidate at the lab who works with Affective Computing and Software Agents and is an expert in machine learning with text. Karthik gave some advice about what kinds of features I should try and pointed me towards Lightside.

Based on research done at CMU, Lightside is a machine learning environment specifically tailored to working with text. It’s built on top of Weka, a widely-used GUI tool for experimenting with and comparing machine learning algorithms. Lightside adds a suite of tools specifically to facilitate working with text documents.

Diving into Lightside, I set out to put Karthik’s advice into action. Karthik had recommended a different set of features than I’d previously tried. Specifically, he recommended unigrams and bigrams instead of named entities. Unigrams and bigrams are one- and two-word sequences, respectively. Their use is widespread throughout computational linguistics.

I converted the emails and my labels to CSV and imported them into Lightside. Its interface made it easy to try out these features, automatically calculating them from the columns I indicated. Lightside also made it easy to experiment with other computed features such as regular expressions. I ended up adding a couple of regexes designed to detect the presence of dollar amounts in the emails.
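As an illustration of what these text features look like, here is a small sketch extracting unigrams, bigrams, and a dollar-amount flag from a message. The regex is an assumption, not the exact one I used in Lightside:

```python
# Sketch of Lightside-style text features: unigrams, bigrams, and a
# regex flag for dollar amounts (regex and tokenization are illustrative).
import re

def text_features(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    unigrams = set(tokens)
    bigrams = {" ".join(pair) for pair in zip(tokens, tokens[1:])}
    has_dollar_amount = bool(re.search(r"\$\s?\d[\d,]*(\.\d+)?", text))
    return unigrams, bigrams, has_dollar_amount

print(text_features("Sell 500 shares of ENE at $62.50 before the call."))
```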

Lightside also provides a lot of additional useful information for evaluating classifier results. For example, it can calculate “feature weights”, how much each feature contributed to the classifier’s predictions.

Here’s a screenshot showing the highest-weighted features at one point in the process:

The first line is one of my regexes designed to detect dollar amounts. Other entries are equally intriguing: “trades”, “deal”, “restricted”, “stock”, and “ene” (Enron’s stock ticker symbol). Upon seeing these, I quickly realized that they would make an excellent addition to the final user interface. They provide insight into aspects of the emails the system has identified as relevant and potentially powerful user interface hooks for navigating through the emails to add additional labels and improve the system’s results (more about this below when I discuss the design and implementation of the interface).

In addition to tools for feature engineering, Lightside makes it easy to compare multiple machine learning algorithms. I tested out a number of options, but Random Decision Forest and SVM performed the best. Here were some of their results early on:

As you can see, we’re now finally getting somewhere. The confusion matrices compare the models’ predictions for each value (0 being irrelevant and 1 being relevant) with reality, letting you easily see false negatives, false positives, true negatives, and true positives. The bottom row of each matrix is the one that we care about. That row represents the relevant emails and shows the proportions with which the model predicted 0 or 1. We’re finally getting predictions of 1 for about half of the relevant emails.

Notice also the accuracy rates. At 0.946 the Random Decision Forest is more accurate than the SVM at 0.887. However, if we look again at the confusion matrix, we can see that the SVM detected 11 more relevant emails. This is a huge improvement in recall so, despite Random Forest’s greater potential for transparency, I selected SVM as the preferred learning algorithm. As we learned above, recall matters above all else for legal discovery.

Building a Web Interface for Labeling and Document Exploration

So, now that I had a classifier well-suited to detecting relevant documents I set out to build an interface that would allow someone with legal expertise to use it for discovery. As in many other interactive machine learning contexts, designing such an interface is a problem of balancing the rich information and options provided by the machine learning algorithms with the limited machine learning knowledge and specific task focus of the user. In this case I wanted to make an interface that would help legal experts do their work as efficiently as possible while exposing them to as little machine learning and natural language processing jargon as possible.

(An aside about technical process: the interface is built as a web application in Ruby and Javascript using Sinatra, DataMapper, and JQuery. I imported the Enron emails into a Postgres database and setup a workflow to communicate bidirectionally with Lightside via CSVs (sending labels to Lightside and receiving lists of weighted features and predicted labels from Lightside). An obvious next iteration would be to use Lightside’s web server example to provide classification prediction and re-labeling as an HTTP API. I did some of the preliminary work on this and received much help from David Adamson of the Lightside project in debugging some of the problems I hit, but was unable to finish the work within the scope of this prototype. I plan to publish a simple Lightside API example in the future to document what I’ve learned and help others who’d like to improve on my work.)

The interface I eventually arrived at looks a lot like Gmail. This shouldn’t be too surprising since, at base, the user’s task is quite similar to that of Gmail’s users: browse, read, search, triage.

In addition to providing a streamlined interface for browsing and reading emails, the basic interface also displays the system’s predictions: highlighting in pink messages predicted as relevant. Further, it allows users to label messages as relevant or irrelevant in order to improve the classifier.

Beyond basic browsing and labeling, the interface provides a series of views into the machine learning system designed to help the user understand and improve the classifier. Simplest amongst these is a view that shows the system’s current predictions grouped by whether they’re predicted to be relevant or irrelevant. This view is designed to give the user an overview of what kind of messages are being caught and missed and a convenient place to correct these results by adding further labels.

The messages that have already been labeled show up in a sidebar on all pages. Individual labels can be removed if they were applied mistakenly.

The second such view exposes some technical machine learning jargon but also provides the user with quite a lot of power. This view shows the features extracted by Lightside, organized by whether they correlate with relevant or irrelevant emails. As you can see in the screenshot above, these features are quite informative about what message content is found in common amongst relevant emails.

Further, each feature is a link to a full-text search of the message database for that word or phrase. This may be the single most-powerful aspect of the entire interface. One of the lessons of the Google-era seems to be a new corollary to Clarke’s Third Law: any sufficiently advanced artificial intelligence is indistinguishable from search. These searches quite often turn up additional messages where the user can improve the results by applying their judgment to marginal cases by labeling them as relevant or irrelevant.

One major lesson from using this interface is that a single classifier is not flexible enough to capture all of the subtleties of a complex legal issue like insider trading. I can imagine dramatically improving on this current interface by adding an additional layer on top of what’s currently there that would allow the user to create multiple different “saved searches”, each of which trained an independent classifier and which were composable in some way (for example through an interface option that would automatically add the messages matching highly negatively correlated terms from one search to the relevant group of another). The work of Saleema Amershi from Microsoft Research is full of relevant ideas here, especially her ReGroup paper about on-demand group-creation in social networks and her work on interactive concept learning.

Further, building this interface led me to imagine other uses for it beyond e-discovery. For example, I can imagine the leaders of a large company wanting versions of these saved-search classifiers run against their employees’ communications in real time. Whether as a preventative measure against potential lawsuits, in order to capture internal ‘business intelligence’, or simply out of innate human curiosity, it’s difficult to imagine such tools, after they come into existence, not getting used for additional purposes. To extend William Gibson’s famous phrase into a law of corporate IT: the management finds its own uses for things.

This leads me to the next part of the project: making a sci-fi comic telling the story of how it might feel to work in a 2050 law firm that’s been transformed by these e-discovery tools.

The Comic: Sci-Fi Storytelling

When I first presented this project in class, everyone nodded along to the technical parts, easily seeing how machine learning would better solve the practical problem. But the part that really got them was when I told the story of reading and labeling Martin Cuilla’s emails. They were drawn into Cuilla’s story along with me and also intrigued by my experience of unexpected voyeurism.

As I laid out in the beginning of this post, the goal of this project was to use a “Science Fiction Design” process – using the process of prototyping to find the feelings and stories in this new technology and then using a narrative medium to communicate those.

In parallel with the technical prototype, I’ve been working on a short comic to do just this. Since I’m a slow writer of fiction and an even slower comics artist, the comic is still unfinished. I’ve completed a script and I have three pages with finished art, only one of which (shown at the top of this section) has also been lettered and taken through post-production. In this section, I’ll outline some of the discoveries from the prototype that have translated into the comic, shaping its story and presenting emotional and aesthetic issues for exploration. I’ll also show some in-progress pages to illustrate.

The voyeurism inherent in the supervised learning process is the first example of this. When I experienced it, I knew it was something that could be communicated through a character in my comic story. In fact, it helped create the character: someone who’s isolated, working a job in front of a computer without social interaction, but intrigued by the human stories that filter in through that computer interface, hungering to get drawn into them. This is a character who’s ripe for a mystery, an accidental detective. The finished and lettered page at the top of this section shows some of this in action. It uses actual screenshots of the prototype’s interface as part of a section of the story where the character explains his job and the system he uses to do it.

But where does such a character work? What world surrounds him, in what milieu does e-discovery take place? Well, thinking about the structure of my machine learning prototype, I realized that it was unlikely that current corporate law firms would do this work themselves. Instead, I imagined that this work would be done by the specialized IT firms I already encountered doing it (like Cataphora and Blackstone Discovery).

Firms with IT and machine learning expertise would have an easier time adding legal expertise by hiring a small group of lawyers than law firms would booting up sophisticated technical expertise from scratch. Imagine the sales pitch an IT firm with these services could offer to a big corporate client: “In addition to securely managing your messaging and hosting which we already do, now we can also provide defensive legal services that dramatically lower your costs in case of a lawsuit and reduce or eliminate your dependence on your super-expensive external law firm.” It’s a classic Clayton Christensen-esque case of disruption.

So, instead of large corporate law firms ever fully recovering from their circa–2008 collapse, I imagined that 2050 will see the rise of a new species of firm to replace them: hybrid legal-IT firms with heavy technological expertise in securely hosting large amounts of data and making it discoverable with machine learning. Today’s rooms full of paralegals and first-year associates will be replaced with tomorrow’s handful of sysadmins.

This is where my character works: at a tech company where a handful of people operate enormous data centers, instantly search and categorize entire corporate archives, and generally do the work previously done by thousands of prestigious and high-paid corporate lawyers.

And, as I mentioned in the last section, I don’t imagine that the services provided by such firms will stay limited to legal discovery. To paraphrase Chekhov, if in the first act you have created a way of surveilling employees, then in the following one you will surveil your own employees. In other words, once tools are built that use machine learning to detect messages that are related to any topic of inquiry, they’ll be used by managers of firms for preemptive prevention of legal issues, to capture internal business intelligence, and, eventually, to spy on their employees for trivial personal and political purposes.

Hence, the twist in my comic’s story comes when it turns out that the firm’s client has used their tools inappropriately and when, inevitably, the firm itself is revealed to be using them to spy on my main character. While he enjoys his private voyeuristic view into the lives of others, someone else is using the same tools to look into his.

Finally, a brief note about the style of the comic’s art. As you can see from the pages included here, the comic itself includes screenshots of the prototype interface I created early in the process. In addition to acting as background research, the prototype design process also created much more realistic computer interfaces than you’d normally see in fiction.

Another small use of this that I enjoyed was the text of the emails included at the bottom of that finished page. I started with the Enron emails and then altered the text to fit the future world of my story. (See the larger version where you can read the details.) My small tribute to Martin Cuilla and all he did for this project.

The other thing I’ve been experimenting with in the art style is the use of 3D models. In both the exterior of the building and the server room above, I downloaded or made 3D models (the building was created out of a 3D model of a computer fan, which I thought appropriate for a futuristic data center), rendered them as outlines, and then glued them onto my comics pages where I integrated them (through collage and in-drawing) with hand-drawn figures and other images. This was both pragmatic – radically accelerating the drawing of detailed perspective scenes, which would have otherwise been painstaking to create by hand – and designed to make the technology of this future world feel slightly absent and undefined, a blank slate onto which we can project our expectations of the future. After all, isn’t this how sci-fi props and scenery usually act?

Lawgorithm.com

Last and definitely least, as a lark I put together a website for the fictional firm described in the story (and whose branding adorned the interface prototype). I was quite proud of the domain I managed to secure for this purpose: lawgorithm.com. I also put an unreasonable amount of time into copying and satirizing the self-presentation style I imagined such a firm using: an unholy mashup of the pompous styling of corporate law firm websites like Skadden’s and the Apple-derivative style so common amongst contemporary tech startups. (I finished it off by throwing in a little Lorem Gibson for good measure.)

Despite a few masterpieces, satirical web design is an under-utilized medium. While comedic news sites like The Onion and The Daily Currant do look somewhat like the genre of news sites they skewer, they don’t take their visual mockery nearly as far as their textual mockery.

I am Building E-14


…from cyberspace to space/people interactions” (or: making the building’s brain)

This project questions the concept and meaning of CYBERSPACE for understanding its implications to real-space. In a first stage I explore the idea of a “Cartesian Space”, a space in which abstract human data is represented. In this stage I seek to develop a way to track people in reality and transfer them to cyberspace as the core of a building’s “brain”.

CYBERSPACE: “…A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind… clusters…” (Gibson, 1984)

What is human scale in relation to cyberspace? What would that mean?

The concept of Cyberspace was first developed by Gibson in the eighties. The idea was to introduce our minds into a place in which we can connect with others and with data, which sounds pretty much like the internet. This idea faced the impossibility of transporting our bodies into Cyberspace for obvious technological reasons: the computer interfaces of the time could translate our thoughts but not our body-data.

Today, we have depth cameras to “capture” our bodies. This kind of development may bring some unexpected outcomes, such as “The Building’s Brain”.

In 2012, a research group at the MIT Media Lab (Responsive Environments) began using Kinects to gesturally control 25 screens around their building, recording at the same time the path of every person who walked in front of the sensors. I started working with the data collected by this group three months ago and realized that these are exceptional findings: this may be the only database of anonymized people tracked inside a building, perhaps in the entire world.



What we see in the images above are 6 random visualizations of 6 different locations inside Building E-14, viewed from above. The Kinect’s sensing range is a triangle, so the tracking visualizations are also triangles. “Building E-14” may use this information to analyze and control what is happening inside of it. The depth cameras stream and capture every event that happens inside, so it is already possible to retrieve information such as how many people are inside a space and for how long, and to compare this information to schedules, times of day, and protocols. The building’s brain has emerged.
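To give a sense of the kind of analysis this makes possible, here is a hypothetical sketch that computes occupancy and dwell time per space from tracked paths. The (person, location, timestamp) record format is an assumption, not the actual format of the Responsive Environments data:

```python
# Hypothetical occupancy analysis over tracked paths (invented data format).
from collections import defaultdict

tracks = [
    ("p1", "cafe", 0), ("p1", "cafe", 60), ("p1", "atrium", 120),
    ("p2", "cafe", 30), ("p2", "cafe", 90),
]

dwell = defaultdict(lambda: defaultdict(int))   # location -> person -> seconds
last_seen = {}
for person, location, t in tracks:
    if person in last_seen:
        prev_loc, prev_t = last_seen[person]
        if prev_loc == location:
            dwell[location][person] += t - prev_t
    last_seen[person] = (location, t)

for location, people in dwell.items():
    print(location, "occupants:", len(people), "total dwell (s):", sum(people.values()))
```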


Yet the idea is not a surveillance state of architecture but an interactive system that augments the information available to the occupants of the building, qualifying its spaces so that users can choose where to go according to their intentions. For example, Building E-14 might suggest that a space is too crowded so a quiet person can choose to go somewhere else, or warn people that the doors will close soon. The building’s brain may become a channel of interface between occupants and the architectural context.

For the occupants, the way we understand space and socialize might change forever. For the designer, this is a remarkable tool, since from now on the interaction between the building and people’s motion can be recorded and analyzed in order to assess new designs.

Lobsters: CYBERSPACE
“…Let me think about it,” says Manfred. He closes the dialogue window, opens his eyes again, and shakes his head. Some day he too is going to be a lobster, swimming around and waving his pincers in a cyberspace so confusingly elaborate that his uploaded identity is cryptozoic: a living fossil from the depths of geological time, when mass was dumb and space was unstructured.

Next steps

Nicholas Negroponte (founder of the Media Lab) and his group, The Architecture Machine Group, built an experiment in 1970 consisting of a controlled environment (a box) filled with small blocks and inhabited by a family of gerbils. The gerbils were under continuous observation in order to understand their behavior inside the environment, and a robotic arm rearranged the blocks according to this processed data, so the environment adapted endlessly to the gerbils’ behavior patterns. Granting that it was just an experiment and that it worked only sufficiently well, it demonstrates the possibility of a cyclical/adaptive/responsive/changing environment.

Consequently, a possible aim of this research would be: if we can track, for example, how many people go into a room during a party, and observe that the number of attendees exceeds regulations, “The Building” may decide to expand that room.

Ice-9 Spyware: Vonnegut-Inspired Spy Tools

Ice-nine is a polymorph of water that melts at 45.8 °C, which appears in Kurt Vonnegut’s “Cat’s Cradle”. When it comes into contact with liquid water below 45.8 °C, it acts as a seed crystal that eventually turns all of the water into ice.

This is what was envisioned by renowned sci-fi satire author Kurt Vonnegut in his famous book Cat’s Cradle. In the book, Vonnegut imagined ice-nine as a doomsday plot device: a material that, in the wrong hands, could freeze the entire planet over instantly and kill all life.

Inspired by ice-nine, we began to wonder what we could do if we had a material that could transition from liquid to solid states on command.  Here the idea of futuristic spy tools was born…


We see this fictional material as more than a killing tool; we see it as a future of fabrication and manufacturing. For example, we could use this state-changing property to instantly make hand tools out of the liquid. What if the liquid could transform into a functional shape, e.g. a weapon, a tool, or anything else, and, more importantly, at the moment we need it? This could be revolutionary for personal fabrication, and it also makes for good science fiction technology.

The process is illustrated above: first, we envision a cup designed with a mold of an object (a tool) discreetly contained on the inside. The cup can be filled with the “ice-nine” liquid so that the person appears to be merely enjoying a soft drink, tea, or coffee. When ready, the user can agitate the solution or drop a seed crystal into the cup, causing the material to change into its solid state instantly. When the solution hardens, the user can pull out and crack open the mold to reveal a cast of a ready-to-use object.

Drawing from the influence of spy novels and James Bond movies, we envision the spy who needs a tool to open a secret cabinet inside an embassy, or an assassin who has to sneak in a weapon past metal detectors.

After doing some research, we had a good candidate for a material that fit these properties: sodium acetate. This food-grade compound has a property that was very interesting to us: at room temperature, it acts as a super-cooled liquid. That is, at room temperature the compound would prefer to be a solid, but if it is in pure form, it will not crystallize unless a seed crystal is introduced. This was exactly what we were looking for!

For a proof-of-concept, we designed a tool mold and a weapon mold in Rhinoceros. One would integrate with a coffee cup, the other with a tea tumbler. Here’s a 3D model sketch of the knife design:

We fabricated the designs using a 3D printer, and here are the results of what we made (a knife and a wrench):

 

Finally, we had a chance for some experimentation:

We envision that, if we can have robust control over the crystallization and supercooling, a liquid with this state-shifting property could enable a new wave of personal product manufacturing. It doesn’t require much external energy for the printing process, and this method can produce final shapes very quickly. As a practical application, we could always carry the liquid and the molds for different hand tools, and whenever there is immediate need, turn the liquid into the tool we need and turn it back into liquid after use: a general-purpose liquid for creating and recycling tools, like the omni-gel seen in the video game Mass Effect.

Ermal Dreshaj and Sang-won Leigh

Tomorrow’s Yesterday, Today


Over the course of the semester I’ve been iterating on the original idea for AgNES. Originally, it was meant to be an implementation based on Portal’s snarky and evil artificial intelligence, GLaDOS, following a roughly similar design and trying to mimic the functionality. The first demo was a first attempt at using the Mac’s built-in text-to-speech synthesizer to “sing” the Portal ending song, “Still Alive”. The second iteration took a more tangible approach, and with the help of Travis Rich from the Viral Systems group at the Media Lab, we were able to put together a physical manifestation for AgNES: a small robotic head capable of tracking a user’s movement when tagged by bright colours. The head – a small cardboard box holding an Arduino board and a webcam, mounted on a small servo – was controlled by an attached computer that processed the image from the camera, looked for the desired colour within a threshold in the frame, and commanded movement accordingly.

The third iteration, just a couple of weeks ago, revisited the software component and already started going in a different, more sci-fi-ish direction: AgNES became a sort of companion robot for long, solitary travel – essentially, deep-space exploration – which could provide a grounding, helpful voice to an imaginary traveller. By giving the traveller stories, facts, and various other voice-mediated interactions, the companionship of AgNES can keep the traveller grounded and relatively sane over long journeys. But AgNES’s personality also became something the user could interface with: based on the five-factor model for the description of personality, AgNES’s personality is made up of five independent cores that can be individually turned on or off. How the cores are configured affects the output the user gets from the various commands, and one can thus experiment with different configurations to get different results.

By playing around with AgNES’s personality, one is also playing with the conditions necessary for its optimal functioning. Which means that as results vary, some cracks in its design are revealed: in the confusion, AgNES begins to unintentionally give out clues as to the identities of its designers and its operators. The user can then follow these clues to learn more about this design and better understand the purpose of AgNES and its intentions. This is grounded in yet another science fiction underlying narrative: how future individuals will react to and interact with technologies from the future past.

Future Archaeology and Deep-Space Exploration

Even just today, we’ve already accrued a significant technological past that is hard to access and explore. Floppy disks are a good example: if you stumbled upon a box of old floppies from years ago and wanted to browse for meaningful things within them, getting to that data would be really complicated. If the disks are functional and you can find a drive to read them, there’s still the matter of whether the data is uncorrupted and whether you can still get the software to read it. Not impossible, of course, but as time goes on, increasingly complicated.

Future researchers, probably deprived of access to instruction manuals and other reference materials that help us situate past technologies, will contemplate our present technological world trying to make sense of it just as we look back on archaeological remains and try to make well-informed conjectures about what objects were used for or why they were designed one way over another. While we make an effort to design technologies that are intuitive to use, this intuitiveness is anchored at specific moments in space and time. Thousands of years from now, when behavioural patterns become very different, it is plausible to assume that many of the design conventions in use today will no longer have the same effect. Future archaeologists then face a complicated task of reconstruction.

That is the narrative framing where AgNES comes in. AgNES is designed from the point of view of being this deep-space exploration companion; but as a narrative device, it is also about thinking what would happen if thousands of years from now, future researchers came across this device built only hundreds of years from now. What sense would they make of it? How would they understand it, after stumbling upon it floating through space in a derelict ship, perhaps still powered but no longer in the company of any travellers? In the first of the many Star Trek movies, the crew of the Enterprise stumbles upon a massive entity threatening Earth called VGER, which upon closer inspection turns out to be the Voyager 6 probe, found by an alien species who augmented its design to enable it to fulfil its mission to “collect knowledge and bring it back to its creator”, creating a sentient entity on its way back to Earth in an unrecognisable form. These technologies we’re unleashing on the universe may at some point cycle back and be found again, and it’ll be a challenge to interpret them and make sense of their original context.

AgNES and Meta-AgNES

AgNES works on two levels. As a “present day” object, it is this pseudo-artificial intelligence that provides company and grounding during deep space travel, with a customisable personality the user can modify. AgNES’s commands are limited but they provide different forms of entertainment to keep a user distracted over what would be, presumably, very long sessions of just floating through space. The design of AgNES draws from multiple science fiction sources: the already mentioned Portal was the chief one throughout, but other sources such as Arthur Clarke’s 2001 and its own evil AI, HAL9000, also had a big influence. As did many of the themes we discussed in class related to artificial intelligence and robotics (including such things as Neuromancer by William Gibson, or Do Androids Dream of Electric Sheep? by Phillip K. Dick). As a companion providing information, there’s also certainly some influence from the Illustrated Primer technology found in Neal Stephenson’s The Diamond Age.

As a “future day” object, the design of AgNES is populated with a series of clues that only become evident to the user when they begin playing around with the personality configuration. Deactivating certain personality cores opens up areas that would otherwise be forbidden to an “unauthorised” user, where they may find information that can later be explored more in detail using additional commands. The “future day” user can then put together these pieces of the puzzle to come up with their own conjecture as to what AgNES is, where it came from, who it was with, and how it came to be where it is. From this point of view, AgNES plays more like a game where you’re trying to decipher what’s going on with this object by interacting with it, drawing inspiration primarily from the text and point-and-click adventure games especially popular in the early 1990s. The big caveat, however, is that there’s no real resolution to the game: there’s no “win state” as such, and there’s no real correct answer – you can only get as far as the conjecture you draw from the information you received as to who was involved and what happened. Just like future researchers, you can never be fully certain whether your conjecture was actually the case.

Pay No Attention To The Man Behind The Curtain

Training AgNES to track bright colours.

Technically speaking, AgNES is not incredibly complex. There are two pieces running simultaneously. One is the code for AgNES itself, written in Python and managing all the commands, the UI, and the personality cores. The cores themselves are five USB sticks and a hub that are together used as a switch – the code detects which cores are plugged in at any given time and makes changes accordingly. The first versions of AgNES used the Cmd Python module for a simple command-line interface, while the final iteration uses Tkinter to instead have a simple GUI that is less prone to error and displays information more clearly. AgNES’s commands are highly dynamic, and they often pull randomised content from various sources around the web based on the information desired: for instance, under normal operations, the TIL (“Today I Learnt”) command will pull a random article from Wikipedia and then read the summary out to the user. If the Curiosity core is turned off (meaning a reduced openness to experience factor, signaled by a more limited use of language) the command does the same, but pulling from the Simple English version of Wikipedia. If, instead, the Empathy core is turned off, the system pulls a generated text from the PoMo (Post Modernism) generator and reads that out – without empathy, AgNES loses any regard for the user actually understanding the information. And so on. Not all core combinations are meaningful, but those that are pull and parse content from the web using the BeautifulSoup web scraping library.
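As an illustration of how a command like TIL behaves, here is a simplified Python sketch. The real AgNES code is on GitHub; the URL and the HTML selector here are assumptions:

```python
# Simplified sketch of a TIL-style command (not the actual AgNES implementation).
import requests
from bs4 import BeautifulSoup

def til(curiosity_on=True):
    # With the Curiosity core off, fall back to Simple English Wikipedia
    base = "https://en.wikipedia.org" if curiosity_on else "https://simple.wikipedia.org"
    page = requests.get(base + "/wiki/Special:Random")
    soup = BeautifulSoup(page.text, "html.parser")
    first_paragraph = soup.select_one("div.mw-parser-output > p")
    return first_paragraph.get_text().strip() if first_paragraph else ""

print(til(curiosity_on=False))
```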

User testing AgNES's tracking of bright colours.

The other piece is AgNES’s head, described above. The setup remains the same, with the box containing an Arduino board and a webcam, all of it mounted on a small servo. The image handling is done with Processing, going over a frame of the image, finding the desired colour (Post-It pink, so it can be as unambiguous as possible) and making sure it stays within a center threshold – if it falls outside of that, it signals the servo to move left or right until it readjusts.
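For illustration, here is a rough Python/OpenCV analogue of that tracking loop (the actual head is driven from Processing, and the HSV range for “Post-It pink” below is a guess):

```python
# Rough analogue of the colour-tracking loop: find pink pixels, keep them centered.
import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Approximate HSV range for "Post-It pink" (illustrative values)
    mask = cv2.inRange(hsv, (150, 80, 120), (175, 255, 255))
    moments = cv2.moments(mask)
    if moments["m00"] > 0:
        cx = moments["m10"] / moments["m00"]          # x centroid of pink pixels
        center, margin = frame.shape[1] / 2, frame.shape[1] * 0.1
        if cx < center - margin:
            print("nudge servo left")                  # would be a command to the Arduino
        elif cx > center + margin:
            print("nudge servo right")
cap.release()
```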

AgNES wearing its tin foil space helmet.

For its final presentation, both pieces were running off a Mac Mini concealed within a stand, “decorated” to appear as if it was a 1960s sci-fi B-movie prop (meaning, lots of aluminum foil). Using an app called Air Display, the Mini was using an iPad as an external display with the AgNES GUI running, making it touch-enabled very easily. (I really wanted to try to get everything running off a Raspberry Pi but it proved to be too much for this iteration. The code for AgNES itself runs OK, but the computer vision and the text-to-speech stuff would’ve been more complicated to pull off, though not impossible – just needed more time!)

The final prototype setup in all its tin foil glory.

Building AgNES has been great, and especially interesting to think through its implications and the underlying concepts and issues at stake. All of the code for the project is available on GitHub if you want to try it out (though replicating the specific setup might be a bit complicated). Special thanks to everyone who contributed feedback, ideas, and testing, and any comments to improve it are more than welcome!

Prosthetics in sci-fi

A familiar plot point in sci-fi movies is the introduction of prosthetics: artificial organic limbs that give the main character a more-than-human presence.

Luke's prosthetic hand

Image Source: LucasFilm Ltd.

This concept gets taken even further in Warren Ellis’ Transmetropolitan, which takes place about 2000 years into the future. Humankind has evolved in many ways and technology has grown close to our bodies. The desire and means to augment experiences and capabilities are so elevated in Transmetropolitan that humans can be modified with alien DNA and can also upload their consciousnesses into computers.


Image Source: DC comics, Warren Ellis

Inspired by this paradigm and by the development of biotechnology, we can now see more and more technology incorporated into our daily lives, e.g. Google Glass, tattoo microphones, and wearable tech. Through my research, I began to wonder: can we download our muscle memory? If we can, what has been done out in the world that could inform the process of doing so?

A bio-signal is a general term for all kinds of signals that can be (continually) measured and monitored from biological beings. The term bio-signal is often used to mean bio-electrical signal, but in fact bio-signal refers to both electrical and non-electrical signals. There are many different kinds of bio-signals, but the one that seems most promising and has physiologists most interested is electromyography (EMG).

EMG signals are detected over the skin surface and are generated by the electrical activity of the muscle fibers during contraction. Since each movement corresponds to a specific pattern of activation of several muscles, multi-channel EMG recordings, made by placing electrodes on the involved muscles, can be used to identify the movement. This concept has been applied in the development of myoelectric prostheses. A group at the University of Washington has been working with this technology as a means of human-computer interaction.
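As a sketch of how such signals might be turned into something a classifier can use, here is a minimal example computing per-window RMS amplitude from multi-channel EMG. The channel count and window size are arbitrary assumptions, not taken from any particular prosthesis design:

```python
# Illustrative EMG feature extraction: RMS amplitude per channel per window.
import numpy as np

def emg_window_features(emg, window=200):
    """emg: array of shape (samples, channels)."""
    features = []
    for start in range(0, emg.shape[0] - window + 1, window):
        segment = emg[start:start + window]
        features.append(np.sqrt(np.mean(segment ** 2, axis=0)))  # RMS per channel
    return np.array(features)

fake_emg = np.random.randn(1000, 4)            # 1000 samples, 4 electrode channels
print(emg_window_features(fake_emg).shape)      # (5, 4): one feature row per window
```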

Now for the fabrication part of this class; after all, it is called “scifi2scifab”. Recently an article caught my attention: “researchers at the University of Sheffield, Fripp’s company, have developed a process that can print a customized nose or ear within 48 hours. First, the patient’s face is 3D-scanned, then the specific contours are added to a digital model of the new prosthetic part for a perfect fit.” –3ders.org

Photograph: Fripp Design

As a prototype, I presented a 3D printed arm (from Thingiverse) that can be controlled by the EMG inputs generated by someone’s muscles. The majority of the arm components are 3D printed on a MakerBot Replicator and then connected using 7 motors and strings that operate as tendons, influenced by the work of puppeteers. As you can see in the video (follow the link below), the arm tries to replicate the programmed basic gestures.


See it reacting here! | EMGarm.mov

LuvLuv: An Experiment in Modern Dating

This book was not on our class reading list, but it was released halfway through the semester and we were inspired by its near-future musings on the state of social media.

The Circle, by Dave Eggers, follows 24-year-old Mae Holland as she starts her new job at The Circle, a mix of Facebook, Google, Twitter, and other social media and advertising companies. The Circle campus is inspired by modern technology company campuses; buildings are named after historical periods like the Renaissance and the Enlightenment, and employees are encouraged to have all their social activities on campus.

The Circle’s main product is TruYou, a unified operating system that links users’ personal emails, social media, banking, and purchasing resulting in one online identity. According to the company’s public rhetoric, this kind of transparency will usher in a new age of civility.

One of the main principles guiding the Circle is that ‘ALL THAT HAPPENS MUST BE KNOWN’ and for that reason they never delete anything. They implement a CCTV system that covers both private and public spaces, as well as full ‘transparency’ systems where people wear a streaming camera, on the principle that “SECRETS ARE LIES, SHARING IS CARING, PRIVACY IS THEFT.”

One of the technologies in the book, LuvLuv, captured our attention because it seemed to be something that could easily be implemented today. LuvLuv is a dating application that scrapes all of the known data about an individual in order to provide the searcher with information to help them plan good dates and win over the object of their affection. For example, LuvLuv could advise you of where to take your date to dinner based on their history of allergies, or suggest conversation topics that they would be interested in. This also reminded us of a great short film called Sight, which combines these kinds of dating suggestions with an augmented reality display and gamification elements.

Our incarnation of LuvLuv was an interactive website where we used search results of the online activity of one of our classmates to construct a profile where someone looking to impress him could find out everything they needed to know. Part of this project was to see how he and the class reacted to this information. Although our (awesome) classmate consented to taking part in some sort of experiment, he did not know the specifics of our project. He was surprised to see how much could be learned about him based on only information he had put up willingly online. We may have gotten him to consider changing his privacy settings! Of course, in the world of The Circle, there are no privacy settings…

Below are screenshots of LuvLuv in action:

LuvLuv splash screen

LuvLuv results page

by Alexis Hope & Julie Legault

Sensory Fiction

Sensory fiction is about new ways of experiencing and creating stories.

Traditionally, fiction creates and induces emotions and empathy through words and images.  By using a combination of networked sensors and actuators, the Sensory Fiction author is provided with new means of conveying plot, mood, and emotion while still allowing space for the reader’s imagination. These tools can be wielded to create an immersive storytelling experience tailored to the reader.

To explore this idea, we created a connected book and wearable. The ‘augmented’ book portrays the scenery and sets the mood, and the wearable allows the reader to experience the protagonist’s physiological emotions.

The book cover animates to reflect the book’s changing atmosphere, while certain passages trigger vibration patterns.


Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable, whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localized temperature fluctuations.


Our prototype story, ‘The Girl Who Was Plugged In’ by James Tiptree, Jr., showcases an incredible range of settings and emotions. The main protagonist experiences both deep love and ultimate despair, the freedom of Barcelona sunshine and the captivity of a dark, damp cellar.

The book and wearable support the following outputs:

  • Light (the book cover has 150 programmable LEDs to create ambient light based on changing setting and mood)
  • Sound
  • Personal heating device to change skin temperature (through a Peltier junction secured at the collarbone)
  • Vibration to influence heart rate
  • Compression system (to convey tightness or loosening through pressurized airbags)

View more photos of Sensory Fiction on Flickr

– Felix Heibeck, Alexis Hope, Julie Legault

W – Microwave of the Future

What is “W”?

“W” is a science fiction design concept of a high-end microwave that comes from the not-too-distant future.

What does the microwave of the future look like?

The W uses a revolutionary user interface to tell the hungry user everything he or she would like to know about a food item that is placed in the microwave. No more buttons or dials on the microwave: W gets rid of these nuisances and automatically calculates the optimal time required to heat or cook food to perfection!

What else?

The W can access the internet to give the user recipes and calorie information about a food item, video cooking guides and more!

Safety is our top concern, so we’ve designed W to identify materials that are not microwave safe. If the item placed in the microwave is unsafe, our device will refuse to cook until the object is taken out.

Technical info:

  • Designed in Rhino 3D
  • Wood material
  • iPad Mini for UI overlay
  • Vuforia library from Qualcomm for object recognition
  • openFrameworks user interface

First prototypes


Design and prototype by

Ermal Dreshaj

Sang-won Leigh