On April 4, the Google[x] team unveiled "Project Glass", which turns eyeglasses into a computer interface for both input and output. A video introduction to the concept is below.
There is obviously recognition software involved (voice and image), as well as fine-tuned geographic information systems, among other things. The video does not demonstrate anything that would require intensive amounts of metadata; however, I could see that descriptions (metadata) of things and places would be necessary. That would allow logical connections to be made between places, events, objects, and so on. The question is: who will create that metadata? Can it be generated automatically, or at least semi-automatically?
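To make that idea concrete, here is a minimal sketch (in Python) of what such metadata records might look like, and how shared descriptors would let software infer connections between places, events, and objects. The entity names and tags are purely illustrative, not anything Google has described:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A described thing: a place, an event, or an object."""
    name: str
    kind: str                          # "place", "event", or "object"
    tags: set = field(default_factory=set)  # descriptive metadata

def link(entities):
    """Connect entities that share at least one descriptive tag.

    Returns name pairs -- the 'logical connections' the metadata enables.
    """
    pairs = []
    for i, a in enumerate(entities):
        for b in entities[i + 1:]:
            if a.tags & b.tags:        # any overlap in descriptors
                pairs.append((a.name, b.name))
    return pairs

# Toy example: metadata ties a concert (event) to a venue (place)
# and to an instrument (object).
venue = Entity("Corner Bookstore", "place", {"bookstore", "nyc"})
gig = Entity("Ukulele set", "event", {"music", "nyc"})
uke = Entity("Ukulele", "object", {"music", "instrument"})

print(link([venue, gig, uke]))
# [('Corner Bookstore', 'Ukulele set'), ('Ukulele set', 'Ukulele')]
```

Even this toy version shows why the authorship question matters: the connections are only as good as the tags, and someone (or something) has to supply them.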
By the way, for me, this is reminiscent of SixthSense, which was demoed at TED in 2009.
1 comment:
And with that, I wonder how this would affect the current status quo of society. But somehow, I think these "eyeglasses" will be explained more fully as the project nears completion.