Whenever I think of what’s next in the evolution of mobile learning, Google Glass comes to mind right away. On a side note, according to the New York Daily News, “LeVar Burton, as Lt. Commander Geordi La Forge, used high-tech eyewear in ‘Star Trek: The Next Generation’ that calls to mind Google Glass.” Oh well, is there anything that Star Trek hasn’t already predicted?
The march towards delivering ubiquitous and contextual learning and performance support has definitely received a shot in the arm with Google Glass. Before I explain how Google Glass can play a role in L&D, here are a couple of basic facts to remember:
The revenge of the humble text: Yes, text is king as far as Google Glass is concerned. When you look at the world through Google Glass, you see a small translucent area that displays notifications as text and pulls up information when you activate Glass with the command, “OK Glass,” and ask for specific information. The content is presented through “Cards,” which are equivalent to pages in a traditional desktop or mobile learning application.
Push vs. Pull: In the world of mobile learning, you expect the user to install your app on their mobile device (or have the IT department push a mobile learning app through MDM software). The app then sends notifications to get users to open it. In the case of Glass, you have to be careful about how and when notifications are sent. From an L&D perspective, notifications should be driven by the user’s need, their context, and their job. Glass requires a big shift in mindset because it is a fundamentally different platform from mobile devices.
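To make the card-and-notification model concrete, here is a minimal sketch of a single Glass “card” as a Mirror API timeline item. The `text` and `notification` field names follow the Mirror API’s timeline item format; the card content itself is an invented example.

```python
import json

# A minimal Mirror API timeline item: one "card" with plain text content
# and a notification hint. The field names ("text", "notification",
# "level") follow the Mirror API timeline item format; the content is
# purely illustrative.
timeline_item = {
    "text": "Safety reminder: review the lab's handling procedure before starting.",
    "notification": {"level": "DEFAULT"},  # alert the wearer when the card arrives
}

# This JSON payload is what a Glassware backend would send to the Mirror API.
payload = json.dumps(timeline_item)
print(payload)
```

Omitting the `notification` field would insert the card silently into the timeline, which fits the “pull, not push” principle above: the information is there when the user asks for it, without interrupting them.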
Here are some practical use cases for using Google Glass to deliver contextual learning and performance support:
1. Scientists in R&D labs can use Glass to pull up contextual information on lab documentation, safety procedures, and inventory, or even relay what they are seeing to other team members (and vice versa).
2. Surgeons can record certain procedures while performing them and make the footage available for training or debriefing.
3. Workers performing specific tasks in factories can use Glass to stream their “views” to others or to record what they are seeing for training or monitoring purposes. The same idea applies across industries such as hospitality, retail, and services, to name a few.
4. Glass can deliver contextual software usage guidance. Here is a screenshot I created with the Mutual Mobile Glass Simulator to demonstrate this point (see screen below). It shows how Glass can provide contextual guidance when you get stuck in a software application: when I say, “OK Glass, how do I show my screen?” it points me to the “Show My Screen” button. I concede that there could be better examples, but I created this just to make the point.
There are some fundamental challenges that Glass has to overcome before it takes off in a big way. Some of these challenges are:
1. It draws too much attention when worn in public. From an enterprise learning and training perspective, it might be distracting when group activities are involved or when interacting with customers (for example, in retail chains).
2. The platform itself is at a very nascent stage. The Google Mirror API, which allows you to build apps for Glass, will continue to evolve.
3. Glass is very expensive today ($1,500, if you can convince Google to let you buy one). However, this will change once Google makes the product openly available to the market.
4. There are privacy concerns that need to be clarified. Legal teams in enterprises will have their hands full if the product becomes popular among the working population!
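For the technically curious: the Mirror API mentioned above boils down to plain authenticated REST calls against Google’s timeline endpoint. The sketch below only builds the request rather than sending it (a real Glassware backend would first obtain an OAuth 2.0 token); the endpoint URL and Bearer-token header follow the Mirror API’s conventions, while `ACCESS_TOKEN` and the card text are placeholders.

```python
import json
import urllib.request

# Endpoint for inserting timeline items ("cards"), per the Mirror API.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_insert_request(access_token, item):
    """Build (but do not send) the POST request that would insert
    a timeline card on the user's Glass."""
    return urllib.request.Request(
        MIRROR_TIMELINE_URL,
        data=json.dumps(item).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# "ACCESS_TOKEN" is a placeholder for a real OAuth 2.0 access token.
req = build_insert_request("ACCESS_TOKEN", {"text": "Demo card for Glass"})
print(req.get_method(), req.full_url)
```

A production app would send the request with `urllib.request.urlopen(req)` and handle the API’s JSON response; the point here is simply that pushing a card to Glass is an ordinary web-service call, which is why the platform can evolve quickly.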