Virtually Mirroring The Physical World
A recent conversation about upgrading some of our NASA public exhibits to “self-narrated tour” capabilities has sent me down a speculative rabbit trail, thinking more broadly about virtual interactivity with physical-world objects. Museums have, in some ways, paved the way here: self-guided Walkman tours, then self-guided iPod Shuffle tours, then museum cell phone tours have all allowed portable end-user interactivity, but that interactivity is typically decoupled (absent user intervention) from the real-world objects themselves. I think (and I suspect many others have thought) that it’s not too far downstream that we’ll have sufficient standards, automation, and capabilities in place to give key (if not many) real-world objects accessible virtual-world analogues (yep – full circle to the Coffee Pot Webcam).
RFID seems to be another piece of this puzzle, but it relies on custom hardware not (yet) accessible to the general public. The world where, for example, key machinery in a plant has embedded RFID tags which, when scanned with a custom reader and back-end software, let a laptop-equipped user access that machine’s technical specs, service instructions, service history, etc., probably already exists. The world where a company (or individual) doesn’t have to invest in custom reader hardware and software probably doesn’t yet exist, and it’s that world I’m interested in. The world where this model extends even further and lets the average Joe-on-the-street directly interact with (learn about, bookmark, GPS-locate, inventory, photograph, comment on) objects beyond industrial machinery is even more intriguing. PDA plug-in reader cards are getting us close, but they still require a dedicated investment on the user end.
I think two key advances are needed for this to go viral and spread beyond quaint, expensive, proprietary “old economy” niche implementations:
- Miniaturization / embedding of RFID (or other next-gen ID tagging) antennae (does Bluetooth have a role here?). Yes, I want my 8th-generation iPhone to be able to probe the world around it, know what’s nearby, and quickly take me to the virtual ‘home’ of an object and let me learn about or interact with it. This could be a museum exhibit (narration and background info), a book at the library or bookstore (reviews), a piece of machinery I’m working on (specs / instructions / inventory / geolocation / maintenance history), a used car I’m buying (history of insurance claims against it), you name it. Sure, I could fire up the iPhone and Google the generic item, but (a) I’m lazy and that’s a comparatively clunky user interface, and (b) sometimes the specific item is of more interest than the generic one. (Nota bene: it’s entirely possible that cloud-based image recognition / Googling could outpace the need for items “broadcasting” their existence. The MIT Media Lab is already “going there” with Sixth Sense. If my PDA camera can upload to a smart visual-identification Google-system in the sky, RF broadcasting may end up being very quaint and 20th century. If I were Google, I’d be thinking very hard about automated “semantic” image recognition / classification, since far, far more smartphones will have cameras than RFID readers in the foreseeable future. “Oh, that’s a bar code; oh, that’s a VIN; oh, that’s a 2004 Toyota Corolla; oh, that’s an Ares V rocket model” … you get it.)
- A protocol for handling arbitrary RFID (or other) object identification and routing to live web sites. Proprietary RFID-based asset management systems probably provide this as customized sandbox software; we need an open and global version for the rest of the world. Something like persistent URLs or CrossRef’s DOI system for scholarly articles could be models.
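To make the second point concrete, here is a minimal sketch of what such an open resolver might look like, modeled loosely on how DOI and persistent-URL systems route an opaque identifier to a live web location. Everything here is hypothetical: the registry entries, the issuer prefixes, and the `example.org` URLs are stand-ins, not any real standard.

```python
# A toy open resolver: scanned tag ID -> live web "home" of the object.
# Modeled on DOI/PURL-style redirection; all names here are hypothetical.
from urllib.parse import quote

# Hypothetical registry mapping issuer prefixes to resolver base URLs,
# analogous to DOI publisher prefixes or URN namespaces.
REGISTRY = {
    "urn:epc:id:sgtin": "https://resolver.example.org/epc/",
    "urn:museum:nasm": "https://exhibits.example.org/item/",
}

def resolve(tag_id: str) -> str:
    """Return the URL an arbitrary tag identifier should route to.

    The longest matching issuer prefix wins, mirroring the way
    hierarchical namespaces delegate to per-issuer resolvers.
    """
    for prefix in sorted(REGISTRY, key=len, reverse=True):
        if tag_id.startswith(prefix):
            suffix = tag_id[len(prefix):].lstrip(":")
            return REGISTRY[prefix] + quote(suffix)
    raise LookupError(f"no resolver registered for {tag_id!r}")

# Example: a scanned museum tag routes to that exhibit's page.
print(resolve("urn:museum:nasm:apollo-11-cm"))
# -> https://exhibits.example.org/item/apollo-11-cm
```

The design point is the indirection: the tag carries only a stable, namespaced identifier, and a global registry (run openly, not as a vendor sandbox) delegates resolution to whoever owns that namespace, so the destination URL can change without re-tagging the object.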
What’s really intriguing about putting this capability on cell phones (à la the iPhone) is the pairing of identification with geolocation. This truly opens the door to a virtual “mirror world” somewhere down the road: a geotagged/georeferenced “Second Life” that’s accessible portably and seamlessly. I doubt this is novel thinking, but it is very exciting…