Today, Google announced “Google Goggles”, a visual search tool. Check out the video after the break for a demonstration of how it’s supposed to work. At first glance, I can say it uses a phone’s camera to capture an image, bounces that image off Google’s servers, and comes back with the best suggestion for what the object is. Another cool thing is that it integrates with the compass and/or GPS to figure out where you are and what you’re facing, and uses that data to give you more info. Check out the video to see what I mean. It’s a really neat concept, but a couple of reports on Twitter today say it’s not quite ready for primetime yet. Of course, we know everything Google does stays in Beta for 3-4 years anyhow. Right? hehe…
When you connect your phone’s camera to datacenters in the cloud, it becomes an eye to see and search with. It sees the world like you do, but it simultaneously taps the world’s info in ways that you can’t. And this makes it a perfect answering machine for your visual questions.
Perhaps you’re vacationing in a foreign country, and you want to learn more about the monument in your field of view. Maybe you’re visiting a modern art museum, and you want to know who painted the work in front of you. Or maybe you want wine tasting notes for the Cabernet sitting on the dinner table. In every example, the query you care about isn’t a text string, or a location — it’s whatever you’re looking at. And today we’re announcing a Labs product for Android 1.6+ devices that lets users search by sight: Google Goggles.
More info can be found at the Google Goggles page: http://www.google.com/mobile/goggles.