Google Wants You To Talk To Yourself
So, Google finally demoed their Glass project and showed exactly how it’s going to look when worn – I have to admit I’m pretty impressed. I’m one of those people who is constantly getting lost, so the GPS function looks pretty awesome. But there’s one major issue I can see with this – and it’s the very reason the vast majority of people I know don’t use Siri or Google Voice – it’s embarrassing.
Seriously, guys wearing Bluetooth headsets were mercilessly mocked in popular culture because it simply looked like they were talking to no one, but at least there was someone on the other end. Now, we’re just speaking to technology and over-friendly AI.
Simply put, there are just too many ways this whole technology is flawed.
So the Glass demo video touted the ability to take pictures, but even in that video there’s a definite delay. I’m sure there are ways to set up micro-expressions to trigger the shutter, but taking control away from a photographer’s fingers is a risky gambit indeed.
I can barely understand what people are saying to me most days – and I’m not overly confident that voice recognition has reached a stage of full comprehension. What if you’re wearing these glasses in a noisy environment? Is it guaranteed to still work? And it’s touted that sound and music will be conveyed through bone conduction, but how is that going to block out background noise?
One of the beauties of smartphones is their ease of use – they’re practically pick-up-and-play devices, but Glass seems to want to take that away. Everything is voice-controlled, but how can we be sure which voice commands will actually produce a result? And at the end of the video, the user clearly uses his finger to take the picture and share it… but how?!
Just makes me think of this.
Also, greasy finger smudges (ergh). And what about the blooming mobile game market?
- YOU’LL BE THAT GUY
This video absolutely nails it.
I admit, I’m still interested in the technology, but is it more than a gimmick?