Google revealed more this week about how its Glass headpiece will be controlled, and how developers might take advantage of the platform to create useful apps. In addition to voice input, you can control Glass with touch input and subtle head gestures.
Given how much buzz surrounds Google Glass, we know curiously little about it. Glass, which consists of a miniature heads-up display and camera attached to lensless eyeglass frames, promises all sorts of information on demand, but Google is notoriously strict about letting people test it out -- and as a result, details on how Glass will actually fit into our day-to-day lives have been sparse.
But thanks to a Google presentation at the South by Southwest conference this week, we now know a little more about Glass, including how it’s controlled and how apps might work on the device.
Senior developer advocate Timothy Jordan took to the stage in Austin to show how Glass responds to user input. We knew already that the glasses can be controlled by voice -- “Okay, Glass, take a picture,” for example -- but Jordan demonstrated touch and head gestures as well. By swiping the touch pad on the side of the frame, you can turn the screen on and manipulate information; by gently tilting your head, you can scroll through different screens.
During the presentation, Jordan also used Glass to take photos and post to the Google+ social network. He replied to an email by using voice dictation -- Glass displayed the text of his reply and allowed him to edit it before sending. And Jordan used Glass to translate the phrase “Thank you” into Japanese; the audio result was loud enough for him to hear but too quiet for the audience to catch.