Today, Google bestowed on the world another slickly made advertisement for Google Glass, giving us a better idea of what the wearable computer will look like, and how it will behave, from the user’s perspective. The ad shows people using voice commands to record video, take photos, send messages, conduct video calls, follow directions, access flight information, get weather updates, get translations, find images, and make out with a snake, all projected into a space no larger than a spectacle lens.
That’s all very cool. But I’m just as interested in how Google Glass will affect the way we consume news.
First, a caveat. We will not be wearing these Google glasses all the time. It probably doesn’t make much sense to wear them, for instance, at the family dinner table, or when you’re sitting at your desk, or, heaven forbid, when you’re engaged in your sweatiest, most intimate moments. When it comes to quality reading time, we’re likely to still rely on tablet computers, and when it comes to word processing and spreadsheets, we’re likely to still rely on desktops. But we will be wearing Google glasses, or something like them, a lot, and they will take the place of some of the functions of desktops, tablets, and particularly smartphones – especially when we’re out and about, and, significantly, when we otherwise wouldn’t be looking at a screen.
The first important thing about that is that it allows the computer to act ambiently and to grab your attention at relevant times, rather than rely on you to dictate a specific time that you want to dedicate to consuming information. In this sense, it is the most advanced example we have yet of “smart” computing – it supplements your brain instead of requiring you to subordinate your brain to it.
And what does that mean for news? Most significantly, it will mean that news, even more so than it does today, will come to you. It will reduce the amount of time you spend looking for news. Homepages, homescreens, and news apps will become less important. As already happens to a limited extent today, news will be delivered to you according to your context: where you are, what time it is, what you like. But now the computer’s ability to understand your context and cater to it will be enhanced, because it won’t require you to perform an action, and it can deliver the information directly to your eyeballs.
Now, there’s not a lot of real estate in a spectacle lens, so it’s unlikely that text-based stories will be beamed longform into Google Glass. What’s more likely is that you will receive alerts for breaking news, or news relevant to your location, in the form of headlines, or single sentences, or Tweets. If you’re interested in a particular headline, you could say “Save for later” in order to read it when you get back to your tablet or smartphone. Or perhaps you could say “Read it to me” to have the story read out loud as you go about your business. Stitcher already offers an alert-and-read service like this (without the voice commands part). I’ve used it a few times on my Nexus 4.
If Google figures out gaze-tracking, it could also give you the option of getting a longer version of the story to read through, using your eyes to scroll down. A startup called Cube26 (formerly Predict Gaze) has already developed this technology.
This news won’t be a constant feed, as you might find when consuming news on your tablet or desktop. Instead, it will pop up occasionally, and only when the computer thinks it’s relevant. Google Now, the company’s predictive search engine, will play a huge role in that task, delivering sports scores while your favorite teams are playing, or picking out stories relevant to your interests or what you have searched for in the past.
Location, of course, will be extremely important. If you’re walking in Central Park, for instance, and there’s a stampede at the free Simon & Garfunkel concert happening there, you might get a news alert saying, “14 people injured in stampede during ‘Sound of Silence’” with a note that the event is happening only 400 yards away. If you’re on vacation in Paris and the subway staff have gone on strike, you might get notified of that fact while you’re gawking at the Eiffel Tower. Google could very well tap into its Field Trip app, too, to bring up news and trivia relevant to places, landmarks, or buildings. And it’s not hard to imagine that Google could also match its facial recognition technology with its search engine to bring up news relevant to whatever human you happen to be looking at.
All of these possibilities mean that packagers of news will have to adapt to the medium. Combining a striking image with a sentence or a headline to tell a basic story will become more important. You could think of these consumable-in-a-glance content parcels as “cards,” kind of like what Twitter already offers in its expanded Tweets, or how Kik is delivering rich media content in its mobile chat app. For an idea of how that might look, take a look at the part of Google’s video that shows a hot-air ballooner receiving a message from his friend. Her photo is displayed in the leftmost third of a message window with the text displayed in large type to the right of the image. Imagine her picture replaced with a photo of, say, Osama Bin Laden, alongside a headline saying: “Bin Laden killed in Pakistan.”
For other examples of how news cards could work, look at the parts of the Google ad that show how the weather conditions and flight details are displayed. You don’t have to exert your brain too much to figure out how those cards might be adapted for bullet points of newsy information, stock market figures, or quotes.
For video news, Google Glass promises a more immersive experience. A reporter can wear the glasses, for instance, and report from the scene of a news event without having to worry about holding a microphone or having someone point a camera at her. She can speak about the event as it is unfolding, with video being livestreamed via YouTube, and give viewers a real sense of what it is like to be there, whether it be a protest, a sports game, or a celebration. The same content could be edited later and repackaged for delayed broadcast.
By the same token, some on-camera interviews will become much more personal, with viewers being allowed to eyeball the interview subjects and get a sense of what they are like in conversation. This new presentation format might at first be disorienting, but in some sense it will also seem more human and less contrived, breaking down the constructed “It’s you against me and the cameraman” set-up of traditional TV interviews. If you’ve ever seen the British comedy “Peep Show,” which is shot from a first-person perspective, you’ll know what I mean.
It will take some time for Google Glass to become mainstream – it’s got to come down a lot in price, for starters – but the implications for how it will change our information consumption experience are already fairly obvious. When it comes to news, it’ll mean stories will have to be presented in tight headlines and striking images – in other words, in card-friendly layouts. Google Now and Twitter are primed to become central to news consumed via Glass, thanks to the former’s predictive abilities and the latter’s brevity and timeliness.
Google Glass won’t provide the definitive news experience, but it will be an important one that supplements our existing habits. For comprehensive news, we’ll likely stick to more “lean back,” reader-friendly delivery mechanisms, such as tablets, desktops, and TVs. But when it comes to news that is contextually relevant? In that case, Google Glass will be almost impossible to beat.