Google's Project Glass is certainly the coolest hardware demo so far this year. Behind the scenes is something equally intriguing: artificial-intelligence software.
The augmented-reality glasses, which Google co-founder Sergey Brin was spotted wearing yesterday, created a huge buzz Wednesday when Google released a video showing, from the wearer's perspective, how they could be used.
In the video, the small screen on the glasses flashes information right on cue, allowing the wearer to set up meetings with friends, get directions around the city, find a book in a store, and even videoconference with a friend. The device itself is a pair of lensless, wrap-around glasses with a small screen above the right eye.
For the most part, the augmented-reality glasses do what a person could do with a smartphone, such as look up information and socialize. But the demo also shows glimpses of an artificial-intelligence (AI) system working behind the scenes. It's the AI system that could make mobile devices, including wearable computers, far more powerful and let them take on more complex tasks, according to one expert.
"The new thing that Google was showing was the interaction model using new hardware, rather than truly showing the potential of such a device," said Lars Hard, the chief technology officer of AI software company Expertmaker. "AI can actually enhance and improve different decision situations."
Although there isn't a precise, agreed-upon definition, artificial intelligence describes computer systems that exhibit human-like behaviors, through features such as speech and gesture recognition, and mimic human thinking. Working with a mobile device, artificial-intelligence systems can perform tasks in the background and bring highly relevant information to users, Hard said.
The Project Glass hardware was operated primarily by voice commands, an indicator of Google's work on voice recognition for mobile devices, along the lines of Apple's Siri. Siri, which has been well-received, translates spoken commands into actions for the iPhone, such as looking up information or making appointments. Google is reportedly working on voice-recognition software for Android.
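The general idea behind translating a spoken command into a device action can be sketched simply: once speech has been transcribed to text, the text is matched against intent patterns and dispatched to a handler. This is a minimal illustration, not Google's or Apple's actual software, and the intent patterns here are invented; real voice assistants use statistical language models rather than regular expressions.

```python
import re

# Hypothetical intent patterns mapping transcribed text to device actions.
INTENTS = [
    (re.compile(r"remind me to (?P<task>.+)"), "create_reminder"),
    (re.compile(r"directions to (?P<place>.+)"), "get_directions"),
    (re.compile(r"take a (?:photo|picture)"), "take_photo"),
]

def parse_command(transcript: str):
    """Return (action, slots) for the first matching intent, or None."""
    text = transcript.lower().strip()
    for pattern, action in INTENTS:
        match = pattern.search(text)
        if match:
            return action, match.groupdict()
    return None

print(parse_command("Directions to the bookstore"))
# -> ('get_directions', {'place': 'the bookstore'})
```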
The makers of Project Glass said the hardware is designed to help "you explore and share your world, putting you back in the moment," according to a Google Plus post. "We think technology should work for you--to be there when you need it and get out of your way when you don't," said Babak Parviz, Steve Lee, and Sebastian Thrun, three employees from Google's secretive Google X Labs, in the post introducing Project Glass.
In one scene of the video, for example, the wearer takes a picture of a poster by pressing a button on the glasses and sends it to himself. This new type of user interaction is quicker than, say, pulling a phone or camera out of a pocket.
The demo also shows that the software operating the glasses is location aware. A notification tells the wearer that the No. 6 subway is shut down as he walks up to the station, and the system suggests an alternate route to his destination.
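The kind of logic the demo implies can be sketched in a few lines: when the wearer approaches a station whose line is disrupted, the system raises an alert and proposes an alternate route. All data here is invented for illustration; a real system would query a live transit-routing service rather than static tables.

```python
# Made-up service-status and alternate-route tables.
SERVICE_STATUS = {"6": "suspended", "N": "running", "R": "running"}
ALTERNATES = {"6": "walk two blocks west and take the N/R"}

def check_transit(nearby_line: str):
    """Return an alert string if the nearby line is disrupted, else None."""
    if SERVICE_STATUS.get(nearby_line) == "suspended":
        return (f"Line {nearby_line} is suspended. "
                f"Suggested: {ALTERNATES[nearby_line]}.")
    return None

print(check_transit("6"))
# -> Line 6 is suspended. Suggested: walk two blocks west and take the N/R.
```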
To have a wearable computer aware of its physical surroundings and present personalized information to the user requires artificial intelligence and machine-learning software in the background, noted Hard. It turns out Thrun, a Google fellow and member of the Project Glass team, is an artificial intelligence and robotics expert who has been instrumental in another Google X project, the driverless car.
"This puts Google out in front of Apple; they are a long ways ahead at this point," Michael Liebhold, a senior researcher specializing in wearable computing at the Institute for the Future, told The New York Times. "In addition to having a superstar team of scientists who specialize in wearable, they also have the needed data elements, including Google Maps."
AI in the cloud
A more sophisticated AI platform with a wearable computer could do much more than find friends online and provide maps, said Hard.
Wearable screens could help doctors make diagnoses, aid business negotiations, or support service industries such as retail, Hard said. Although an augmented-reality screen is smaller than a smartphone, it has the potential to present the "right information at the right time" and show complex data such as diagrams, he said.
The hope for AI software is that it will process information in the background and present targeted information as needed, he said. In shopping, for example, the AI system would sift through lots of data to come up with very granular and personalized recommendations, rather than the recommendations based simply on past purchases that computers produce today.
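One common way to get more granular recommendations than a past-purchase list is to represent both the user's inferred preferences and each item as feature weights, then rank items by a similarity score. The sketch below uses cosine similarity over invented data; it illustrates the general technique Hard alludes to, not any specific product's system.

```python
from math import sqrt

def score(prefs: dict, item: dict) -> float:
    """Cosine similarity between a preference profile and item features."""
    keys = set(prefs) | set(item)
    dot = sum(prefs.get(k, 0) * item.get(k, 0) for k in keys)
    norm = (sqrt(sum(v * v for v in prefs.values()))
            * sqrt(sum(v * v for v in item.values())))
    return dot / norm if norm else 0.0

# Invented preference profile and item features.
user_prefs = {"vegan": 0.9, "spicy": 0.7, "nearby": 0.5}
items = {
    "Thai cafe":  {"vegan": 0.8, "spicy": 0.9, "nearby": 0.2},
    "Steakhouse": {"vegan": 0.0, "spicy": 0.1, "nearby": 0.9},
}

best = max(items, key=lambda name: score(user_prefs, items[name]))
print(best)
# -> Thai cafe
```

The same scoring function works unchanged as more feature dimensions are learned about the user, which is what makes this style of recommendation "granular" compared with a flat purchase history.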
"Even though the technologies today deliver this type of service, they are relatively crude and boring in many respects," Hard said. "We're going to see lots of changes to that, using big data and machine learning."