Thad Starner has been wearing some kind of computer on his head for twenty years. Now the Georgia Tech professor and Google Glass pioneer wants the world to join him.

Photograph by Josh Meister

On a chilly morning in early January, I joined a hundred students in a lecture hall on the Georgia Tech campus for a class called Mobile and Ubiquitous Computing. The professor, Thad Starner, looked up at his audience of aspiring programmers, industrial designers, roboticists, and user-interface specialists. He is forty-four, with a boyish face and sideburns that yearn for the 1990s. He wore, as he often does, a black T-shirt and black jeans. “In this class we’re going to talk about four main things,” he said. “Privacy, power and heat, networking on- and off-body, and interface. Every time you make a decision along any of these four dimensions, it’s going to affect the others. It’s always a balancing act.”

This wasn’t my first visit to his class. The previous week, a guest lecturer named Gregory Abowd had led the students through an article by the late Mark Weiser, a researcher often considered the father of ubiquitous computing. “The most profound technologies are those that disappear,” Weiser wrote in 1991. “They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Weiser was envisioning a world in which computers no longer took the form of desktops and mainframes, monitors and keyboards. Instead, they would be seamlessly integrated into everything around us, every man-made object, from vacuum to refrigerator to streetlight to chair. Starner was offering his students a message in tune with Weiser’s, but with a twist. “A lot of time, technology gets in your way because it’s not close enough to you,” Starner said. His audience had every reason to believe him: He was wearing a computer on his face.

“The guy with the computer on his face.” This would have been a fair description of Starner at almost any time over the past twenty years. He first built his own wearable computer with a head-mounted display in 1993, and has donned one version or another of the computer-eyepiece-Internet system most days since then. But over the previous year, something changed.

Besides being a Georgia Tech professor, Starner is a technical lead/manager for Google’s Project Glass, and as such he was able to replace the many generations of his own system with a Google Glass prototype. Last year, just ten thousand carefully selected “explorers” were allowed to buy the first-generation Glass models for $1,500, but tens of millions of people watched the YouTube videos showing the world through the tiny computer’s glass display, mounted on a frame and positioned a little above the wearer’s right eye. At last, strangers got a sense of what Starner has been living for decades.

And they had reactions: Yes, Glass is an object of intense curiosity, but also, depending on your perspective, one of desire, fear, disdain, or even hope. Long before you can buy one yourself (which could be as soon as this year), Glass has been declared both the greatest invention of the year and doomed to failure. A computer on your face? This was just too much. And for a few—a tiny screen that didn’t even cover one eye and offered fewer functions than a smartphone?—this wasn’t enough.

Around Georgia Tech, though, Starner’s Glass was a familiar sight. “This allows me to take a quick look at what I need,” he told the class, “and then ignore it. Get the information you need, and then it gets out of your way.” A bicycle in the moment of braking, he said by way of analogy, ceases to feel like a complex machine. “If you can actually make technology so that it’s not about the interface, so it’s just an extension of your body,” he said, that was when mobile computing got interesting. “That’s one of the most powerful things I can teach you.”

Another day, after class, I walked with him to his BMW convertible and he expounded on this thought. “Can we make a computer where it’s so facile to interact with that you’ll react through it? That you act with it? That’s so seamless to work with while you’re doing the things in your normal life that it’s just an augmentation of you, just like the bike is the augmentation of your speed? Can you actually make the wearable computer an augmentation of your intellect?”

Starner is a prolific and broad-ranging researcher, with more than 150 peer-reviewed papers across the fields of artificial intelligence, pattern discovery and recognition, and mobile human-computer interaction. In practice, that means he works on everything from fundamental problems of building intelligence in computers to bridging communication gaps with the deaf community, with dogs, and even with dolphins. He holds twelve patents, with seventy others pending. When you search for something on your mobile phone and it uses your location to personalize the search—he has a patent for that.

But even though Google Glass is the biggest thing he’s worked on, no one knows if Glass will be a commercial success, a groundbreaking technology, or a permanent punch line. In the first year of its carefully managed rollout, the buzz has been loud, but so has the criticism. There was the restaurant in Seattle, the coffee shop in Durham, and the bar in Colorado whose owners banned Glass wearers—some before they’d met any Glass wearers to ban. There was the letter from Congress to Google outlining concerns about privacy; ditto the letter from the privacy commissioners of Canada, New Zealand, Australia, and Mexico. There were the labels from the press and from advocacy groups—“privacy-eroding,” “privacy invasive”—and the grassroots campaign called “Stop the Cyborgs,” with its anti-Glass logo.

And then there was the dork factor, the inevitable avalanche of nerd jokes. Was Glass “Segway for your face”? In the few parts of the country home to a critical mass of early Glass wearers, the terms “Glasshole” and “Glass-kicking” entered the lexicon. One user who happened to work for Wired magazine summed up his experience this way: “It is pretty great when you are on the road—as long as you are not around other people, or do not care when they think you’re a knob.”

If a computer on your face—even one so carefully engineered and designed by the talent and resources at Google’s disposal—made you look like a “knob,” was this barrier too high for wearable computing to overcome? Or was it more akin to the way people used to make fun of anyone on a cellphone—Look at that jackass; he’s so important he needs to carry his phone with him.

It’s safe to say that the criticism was anticipated by Google. Certainly it is nothing new to Starner. If he’d worried about looking cool, he never would have built his own wearable computer from parts and donned evolving versions of it almost constantly for two decades. And if there weren’t something that seemed odd or off-putting or fundamentally unnecessary about having a computer on your head, it wouldn’t have taken Starner two decades to arrive at this moment of mass attention (if not yet mass adoption).

The privacy questions were also nothing new—they were just newly on everyone’s lips. Somehow, with the slow rollout of Glass, broad concerns about the social and legal implications of ubiquitous and mobile computing that had been lurking in the background for years were rushing to the fore. If you believed in the future of wearables, these were billion-dollar questions. The students around me in Starner’s class, and thousands like them around the world, would be designing the next generations of computers on the body (if not in the body). This was why Gregory Abowd, the guest lecturer, urged them to consider carefully the ubiquitous and mobile computing question of our time: What is a computer and what is human?

He then strode over to where Starner sat—the nearest embodiment of that question. “Listen to this man,” Abowd told the students. “He’s a prophet.”

Growing up an only child in Dallastown, Pennsylvania, near Amish country and the Hershey factory, Starner spent a lot of time visiting patients with his mother, a geriatric nurse. “My best friends were sixty or seventy years older than I was,” he recalled. One of his mother’s closest friends—and so also one of his friends—had cerebral palsy and limited muscle control. She could communicate directly with Starner and his mother, who were used to her manner of speaking, but with others she relied on a laborious process of indicating symbols on a keyboard with a pointer attached to headgear. It was a challenge that stuck with him.

When Starner showed up as a freshman at MIT in 1987, he already had decent hacker chops and an impressive drive to work. Irfan Essa—now a colleague of Starner’s at Georgia Tech and a frequent collaborator, and in 1988 a grad student at MIT’s Media Lab—was Starner’s first research mentor, and remembers occasionally having to rein in Starner’s excitement. “If anything, I had to make sure he wasn’t going to do too much,” Essa told me. “I had to focus him a little bit, but he had loads of ideas.”

Starner stayed on for a master’s degree, but after his first year he took a break from school and got a job at a nearby tech company. He’d been thinking about making a wearable computer for a couple of years, but it wasn’t until the summer of 1993 that he was able to scrape together enough money (and free time) to build his first system from hobbyist-level parts: a CPU from here, an eyepiece from there, a motorcycle battery to run the thing. For input he trained himself to type on a one-handed keyboard called a Twiddler. It strapped onto his hand and he typed using “chords” of key combinations. The heavier stuff he wore in a satchel around his waist, from which cords spewed out to the head-mounted display and the Twiddler. (Fashion was not his god.) He went back to school in the fall. Recalled Essa, “One day he came in wearing a computer.”

At the Media Lab, Starner’s office filled up with parts as he came up with a model he could show others how to assemble—but they had to learn to solder. This was the “Lizzy,” named after the Tin Lizzie Model T. “You had to know how to fieldstrip and put it back together,” he said, “and that meant that if it broke while you were in the field, you could fix it; if you wanted to do something funky with it, you could do it yourself.” A group of wearable enthusiasts emerged at the Media Lab, at first paying for all the equipment themselves until Starner’s faculty advisor, Alex Pentland, found them some funding.

Using amateur radio or slow-scan TV waves or a car phone, they were able to connect to the Internet—this was before there were any websites to speak of, let alone Wi-Fi—and communicate with each other and with anyone else online via MIT’s Internet messaging service. Once Josh Weaver, part of the wearables group, snuck up to the top of MIT’s Great Dome by riding on the top of an elevator. Someone brought the elevator down, stranding him, and he sent a distress call to one of MIT’s online user groups—Help, come send the elevator up! The responses he got were understandably dismissive: If you’re on top of the dome, how are you typing this message? Weaver responded, I’m one of the Borg. I really am—I’m doing this from our wireless modems. I really am stuck, please come save me. It took him half an hour to convince someone to rescue him.

That’s what they called themselves: the Borg. At one point there were twenty of them. “The Cyborgs Are Coming,” Starner titled an early paper about the group’s efforts. “The idea of creating a community to encourage this kind of work was always a big part of what he was working on,” Greg Priest-Dorman, another early wearable pioneer who now works on Google Glass, told me. Some were more interested in gesture recognition, or in “seeing” sound through computer vision, or in measuring skin conductivity and temperature as a way to read stress. The Canadian inventor Steve Mann, then a grad student at the Media Lab, had been experimenting for years with wearable computers as aids to photography, and continued to work on computer vision and augmented reality while streaming video of his experience to the web. Starner learned sign language and did his master’s research on using wearable computers to recognize hand gestures as an aid to learning ASL. Yet for many of the early wearers, a big part of the experiment was simply wearing their computers out in the world and interacting with people. “We were doing this stuff in society,” Priest-Dorman said. “It wasn’t just that you put it on and wore it alone in the lab.”

“We were always seen as complete weirdos,” Pentland said, “people running around with motorcycle batteries and antennas on their heads. But it sunk in. And one time in the faculty newsletter there was a long editorial about how one day MIT would all be wearables. That was the result of this Borg collective that Thad managed and kept supplied.”

“This is something you don’t put on and take off like a uniform,” Starner recalled of the group’s mentality. “This is something you put on and wear like your eyeglasses or wear like a shirt. This is part of your life.” In the lingo of the 1990s, the digital Holy Grail was the so-called killer app; Starner and his fellow Borgs had found what they called a “killer lifestyle.”

In the summer of 1996, the Department of Defense sponsored its first wearable workshop. The next year, Starner helped organize the first International Symposium on Wearable Computers at MIT, and the Media Lab put on a huge fashion show—Beauty and the Bits—working with major design schools in Tokyo, Paris, Milan, and New York; 1,400 people attended.

The press started to notice. Starner gave demonstrations for CNN and 60 Minutes. Articles appeared in Parade magazine and the New York Times. Once he left for a job at Georgia Tech, he launched a startup with Pentland called Charmed, which put out a wearable computer assemblage aimed at the hobbyist or the researcher. Charmed also helped stage hundreds of fashion shows with wearable computers at consumer electronics shows around the world.

But the killer lifestyle didn’t catch on, and Charmed never took off. Starner and Mann continued to wear their computers, continued to create new versions, but the few companies that tried to bring wearable computers to a mass market never got traction. People weren’t yet ready for it—not just the look or the encumbrance of it, but the functions. When strangers stopped Starner in the mid-nineties to ask about his device, the question they asked wasn’t “Why would you want a computer on your face?,” or even “Why would you want a mobile phone or mobile Internet?” Desktops were still the default personal computer. The question, Starner recalled, was more basic: “Why do you want a mobile computer?”

I first met Starner in Toronto last summer at a conference on augmented reality and surveillance. Steve Mann, one of the MIT cyborgs, was an organizer of the event. He’d continued to build and refine his own wearable computer system, which he called the EyeTap, and had become known for drawing attention to surveillance in public places by forcing companies and institutions to reckon with the camera he wore on his face—by watching the watchers. Here was a rare gathering of people, most with few qualms about the aesthetics of wearable computing or the potential obtrusiveness of Glass. Many of them advocated or actively wore far more cumbersome, immersive, or privacy-invading systems.

Starner showed the audience a Glass demo video: upbeat music and cool people doing cool things, all shot on Google Glass. One of the most remarkable segments featured a high school science teacher who traveled to the Large Hadron Collider in Switzerland wearing Glass, beaming everything back to his students in the States, who were able to watch and ask questions of the scientists in real time.

Later, Starner sat on a patio in a small courtyard as augmented reality researchers and aficionados hovered nearby. His was the first Google Glass many of them had seen in person.

Starner demonstrated the social cues built into the device. He had to tap the touch pad along the side of his head or say “Okay, Glass, record video” to record. To someone nearby, the tiny glow of the screen inside the glass cube indicated that Glass was in use. The wearer had to look directly at whatever she was recording, so if you saw someone wearing Glass and the cube was glowing and the person was staring directly at you, then yes, maybe you were being recorded—but it was pretty obvious. “When you’re talking about principles of privacy, the first one is ‘notice,’” Starner said. “Are you giving notice to people around you of what you’re doing?”

Someone pointed out that the mechanism could conceivably be hacked, so that the camera could record without the glass cube lighting up. The battery would still run down quickly in recording mode, Starner said. This was a design feature, he insisted, not a flaw.

“Glass is much more honest than what you’re wearing now,” he said to one onlooker. “I’m referring to your cell phone. You do not know if you’re recording me. Let me say that again: You do not know if you’re recording me. Your phone can be turned on remotely by your service provider. China has admitted to doing this. The FBI has admitted to doing this. It’s a service that’s been built into cell phones since the 1980s.”

I sat next to Starner for a presentation by the young founder of Meta View—a potential competitor. The system covered both of your eyes, and it seemed you would control the computer by moving your hands through space, relying on its gesture recognition capabilities. In sci-fi terms, it sounded a little like Tom Cruise in Minority Report. In layman’s terms, it sounded pretty awesome. I asked Starner why Google didn’t use a similar system.

“Hold your arms out,” Starner said. I did for a moment, then started to lower them. “Keep them out,” he said. I obeyed. “They get tired, don’t they?” They did. “If you look at sign language, it’s all in the signing box,” he said, indicating an area slightly smaller than a baseball strike zone—from the belt to the chest. The movements are small and contained and close to the body, he explained, because it’s tiring to keep moving your arms and your hands out in front of you, above your shoulders, in your line of sight. “There’s even a term for it—gorilla arms.” They’d thought of this and rejected it for Glass. “Doing things much more subtly is where it’s at.”

To Starner, immersive computer vision—for all the sci-fi augmented reality it promised—was too great a leap, at least for now. Glass was trying to solve a smaller problem—creating smooth “microinteractions” with your computer—while leaving the door open for other things down the line. “With the cell phone,” Starner said, “it takes twenty seconds to get the thing out and get to the right spot. That’s untenable for many tasks in everyday life. People are starting to realize that. They’re realizing the services that a smartphone gives them and realizing its limitations. People have had a taste of what’s possible. That’s why I think the time is now.”

At a conference in the 1990s, Starner met two Stanford grad students who were working on a new kind of search engine. Their names were Larry Page and Sergey Brin, and they asked Starner for a demonstration of his wearable computer. Starner obliged, and didn’t give the meeting much thought until years later, when Google—the company Page and Brin founded—bought the Android operating system for mobile phones. In an email, Starner invited Brin to Atlanta. You haven’t seen my stuff in forever, he recalled writing. “I was hoping to convince them that this was an interesting idea to look at. I had no idea they’d already committed to it.”

Instead, Google brought Starner out to Silicon Valley for a visit that was really a job interview for a project they couldn’t quite describe to him until he said yes. “He really believes in wearables,” Pentland said of his former student, “and what happens is, people get drawn into this gravity well around him. He’s fun, he’s interesting, they begin playing with it—and Sergey got drawn in too. Lots of people get drawn in.” It was Starner’s experience with head-mounted displays that pushed Glass down the path of a single screen in front of a single eye. His role in Glass was about the project “staying on the right path,” he said. “This is what works, this is what doesn’t—here, let me show you so you can understand yourself what the issues are. It’s always about networking, power and heat, privacy, and interface. Those four dimensions. I basically am the voice of experience and try to say, ‘Hey, why don’t you try to look at it this way instead?’ Or ‘Why don’t you think about this problem?’ Or ‘Here’s an issue that you’re not hitting yet but you will next month. If you do this now it’ll make things easier.’”

Several other people from the MIT Borg years were brought in to work on Glass, too, including Josh Weaver and Priest-Dorman, who was at Vassar but hung out with the MIT crew. By wearing their computers out in the world, they’d learned little things, like the importance of the color of the eyepiece—if it was beige, people first assumed it was a medical device. When they tried black, people presumed it was consumer electronics, maybe some new Sony gadget. Purple? Something fun. Starner and his friends found that people were also sometimes unnerved by not knowing what the cyborg was doing. The Lizzy served the function of a watch, a cell phone, a fax machine, a laptop, a CD player, a camera, a camcorder, a health monitor. “Even when an onlooker properly identifies the wearable computer, he may still have no idea as to how the device is being used at the time and whether or not the user is interruptible,” they wrote in the 1990s.

For Priest-Dorman, “The attempt from the ground up to make Glass a device to be used in social context—I think that’s because of people like Thad making those arguments that the social aspect of doing this has to be there from the beginning. I see that as part of Thad’s work—the consideration for the other person in the room—that this is ultimately a communicative act between you and other human beings.” Starner himself put it this way: “You’re not just making it for the user, you’re making it for people around the user. This is a big deal. You want to encourage participation with other people around you. You want the other people around you to know what you’re doing with it.”

Starner took a leave of absence to work on Glass for fifteen months, and then returned to Georgia Tech part-time. It didn’t strike him that Glass itself might be a big deal until the 2013 Google developer conference. “There were literally a thousand people walking around with these devices on.” But it was also a moment for self-reflection. “Nobody was looking at me because I was wearing a device—because everyone else was wearing a device.”

“Going from a place where there are only a handful of us doing this kind of thing to a place where there’s people I don’t recognize with head-mounted displays in public,” Priest-Dorman said, “has caused me to question a certain amount of what I hang my sense of self on.”

In the past year the media reported new Glass hacks—now you could take a picture just by winking, for example. Although facial recognition software had some obviously useful applications—for someone with Alzheimer’s who had difficulty recognizing family members and friends, say—Google heeded the vocal privacy concerns of many and announced that facial recognition apps would not be allowed on the device. By December, someone had developed the application on their own and demonstrated it at a private event in Las Vegas.

When I spoke to Starner about the issue, he declared, “I am a privacy fundamentalist.” He mentioned that he grew up around religious people who believed in the “mark of the beast” and the evils of a unique identifying number—be it a credit card or social security number. “In my PhD qualifying exams, privacy and the social effects of technology was one of the main things that I specialized in.”

His point was that he’d thought about this. Starner will eagerly recite the six principles of privacy-by-design laid out in Marc Langheinrich’s review of the field in 2001—notice, choice and consent, proximity and locality, anonymity and pseudonymity, security, access and recourse. Glass incorporates all of these. The fears of facial recognition on Glass, he thought, were overblown and misplaced. For starters, that kind of processing would quickly drain the battery. Plus, he’d done research on computer facial recognition; it was one thing to recognize a face in a photograph, but performing it successfully on a random population “in the wild” was still a ways away. Maybe so, but it struck me as just a matter of time until anonymity in public would be rendered almost impossible by wearable computers, whether you chose to use them or not. To Starner, fixating on the risk this one particular technology—Glass—posed to privacy and anonymity was to miss the larger point.

“There should be a serious conversation about the cell phone,” he said, “and there’s not.” At MIT, they knew how to tease identifying information out of cell phones—even phones with their batteries removed. “I can figure out who you are by your cell phone. Really easily. Even if you had the old one-way pager—there was still a two-way tracking system. It had a transmitter in it as well, so they knew where you were already back in the eighties. The conversation should have started with a pager. The horse left the barn a long time ago and people are still not having the right discussion about it.”

What is it about Google Glass that has pushed these concerns back to the fore? A digital voice recorder the size of a stick of gum, capable of recording hours of conversation, provokes no outcry. A small, traceable audio-video recorder with Internet capability is no longer new—we call it a smartphone, and with it we can instantly share private conversations with the world. Mitt Romney won 47 percent of the presidential votes in 2012, but his remarks about 47 percent of American voters, surreptitiously captured on a cell phone, likely cost him many more. Are wearable computers somehow different?

Perhaps so, if only because they finally throw the questions of attention and distraction and “privacy in public” right into our faces. Aren’t we distracted enough? Aren’t we mediated enough without actually looking at the world through a computer? The counterargument—one I’ve heard from several Glass wearers—is this: Go to a concert or an elementary school play. The stage is half-obscured by an army of hands holding up smartphones and tablets to record the event; these people are already watching a live event through the screens of their devices in order to preserve and share. With wearables, the camera can see what you see, giving you a chance to remain in the moment, trusting your own eyesight while your computer records. If you want to question the need for constant connectivity, that’s one thing. But if you want that connectivity and that ability to casually record moments in your day, is staring at a phone in your hand as you walk down a street, or glancing at it as you drive, or picking it up every ten minutes at dinner, or aiming it at a stage the best form this technology can take in your life?

For Priest-Dorman, seeing the early Glass “explorers” wearing the technology made him think that people finally “got it—the transformation that happens from sticking your phone on your face or sticking a laptop in front of your face versus the idea of having a device there all the time to intelligently help you parse the electronic side of your life.”

For Starner, these are arguments for putting Glass out in the world. “That’s why you actually have this deployed on a large enough scale so you can figure out both what the potential of the technology is, and what the issues are. This idea of a living laboratory—we started it twenty years ago in academia for the wearable computing stuff, and we knew, back then, that this was going to be part of the point of doing the lab. Having people wear these things in their daily lives, every day—figuring out privacy and security and social issues.”

In his lab at Georgia Tech, Starner has been thinking about his mother’s friend, the woman with cerebral palsy who communicated with a head-mounted pointer. He recently discovered that the magnetometer built into Glass is sensitive enough that if you attach a tiny magnet to a person’s tongue, Glass can track the tongue’s movement just by measuring magnetic flux. Two and a half million Americans use some form of “augmentative and alternative communication devices.” “That includes stroke, brain injury, Parkinson’s, ALS,” Starner said. “Some can type, but others have multiple injuries.” If he can train Glass to recognize tongue movements with its magnetometer, and then to call up likely phrases in the visual display, the person can select the right phrase using head movements, and a cell phone connected to Glass can speak the phrase. The technology could be useful for the deaf community, too, since many deaf people have perfectly good mouth and tongue articulation. Voice commands could work for them—silent voice commands, just the tongue moving in the mouth.
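
To make the mechanism concrete, here is a rough sketch in Python of how such a system might work. It is not Starner’s code and does not use any real Glass API: the magnetometer readings are simulated, and the gesture directions, threshold, and phrase bank are invented purely for illustration.

```python
# A minimal sketch, assuming a magnet on the tongue perturbs the magnetic field
# that a head-mounted magnetometer measures. Changes in that field relative to a
# resting baseline are mapped to a handful of "tongue gestures," which in turn
# suggest candidate phrases. All values below are simulated/invented.

import math
import random

# Hypothetical gesture classes, each defined by a direction of flux change.
GESTURES = {"left": (-1, 0), "right": (1, 0), "up": (0, 1), "down": (0, -1)}

# Hypothetical phrase bank, standing in for the predictive phrase display.
PHRASES = {
    "left": "I'm thirsty.",
    "right": "I'd like to rest.",
    "up": "Yes.",
    "down": "No.",
}


def read_magnetometer():
    """Stand-in for a real sensor read: returns (x, y) flux in microtesla."""
    return (random.uniform(-60, 60), random.uniform(-60, 60))


def classify(baseline, sample, threshold=20.0):
    """Map a change in flux relative to the resting baseline to a gesture."""
    dx, dy = sample[0] - baseline[0], sample[1] - baseline[1]
    if math.hypot(dx, dy) < threshold:
        return None  # tongue at rest; no gesture detected
    # Pick the gesture whose direction best matches the flux change (dot product).
    return max(GESTURES, key=lambda g: dx * GESTURES[g][0] + dy * GESTURES[g][1])


baseline = read_magnetometer()  # calibrate with the tongue at rest
for _ in range(5):
    gesture = classify(baseline, read_magnetometer())
    if gesture:
        print(f"gesture: {gesture:5s} -> suggested phrase: {PHRASES[gesture]}")
```

A real system would presumably calibrate per user and learn gesture patterns from recorded data rather than relying on fixed directions and a single threshold.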

Silent voice commands, undetectable to those around you—doesn’t that undermine the privacy principle of “notice,” of letting others know when the computer is in use? It’s easy to imagine a trend toward more and more intimate connections to wearables—so that the interfaces and the wearables themselves become increasingly obscure, increasingly covert. It may be that the “Stop the Cyborgs” movement will grow—that, for enough people, face computers somehow mean that as a culture, as a species, we have strayed too far from “real” experience. But if history is any guide, we may simply get used to computers on our bodies, the way we grew accustomed to tiny clocks on our wrists, and to another technology on our face: eyeglasses. If that’s the case, wearable computers will, as Mark Weiser wrote of the most “profound technologies,” simply disappear—because we will cease to notice them. “For me,” Starner said, “the best way to understand the future is to live it.” He has a bigger platform than ever. Whether that wearable future is Glass or something we haven’t seen yet, we’re all part of the experiment now.

This article originally appeared in our March 2014 issue.
