I’ve been slacking in my Microsoft patent coverage, and a weekend with nothing interesting made me go back to my bookmarked applications. I tweeted about this when I came across it and believe it is indeed the obvious future of boardroom presentations. Controlling PowerPoint or Keynote with phones is passé when we could simply navigate through presentations with voice or gestures. A Kinect hack already exists that pairs the Xbox Kinect with a Windows PC, letting you control presentations Minority Report style. Now Microsoft has patented an implementation that signals the addition of this to PowerPoint, either natively or through a plugin. Microsoft has applied for more than one patent on making presentations a lot more interactive: manipulating visualizations with (finger) touch and hands-free (Kinect) control of presentations are both covered. The two patent applications don’t complement each other but cater to different use cases.
The first application explains how a user can select data and intuitively turn it into charts using touch. The application also highlights the ability to change the graph type used to represent the data. The claims put into perspective how Microsoft sees presentations being built on a tablet (a Windows 8 touch screen device). The user will be able to circle a bar graph with a finger and the damn thing will turn into a pie chart, which is pretty wicked. Move your finger over a bar graph and turn it into a line chart! One of the images shows circling a single bar on the chart, which brings up a pop-up with statistics about that particular bar.
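The application doesn’t tie this to any code, but the interaction it describes boils down to mapping a recognized touch gesture on a selected chart to a chart-type change. A minimal sketch of that mapping, with all gesture names and the `Chart` class being my own assumptions rather than anything from the filing:

```python
# Hypothetical sketch of gesture-driven chart-type switching.
# Gesture names and transitions are illustrative, not from the patent.

class Chart:
    def __init__(self, data, chart_type="bar"):
        self.data = data              # e.g. {"Q1": 120, "Q2": 90}
        self.chart_type = chart_type

    def apply_gesture(self, gesture):
        # Circle the whole graph -> pie chart; swipe across it -> line chart.
        transitions = {
            ("bar", "circle"): "pie",
            ("bar", "swipe"): "line",
        }
        # Unrecognized gestures leave the chart unchanged.
        self.chart_type = transitions.get((self.chart_type, gesture),
                                          self.chart_type)
        return self.chart_type

chart = Chart({"Q1": 120, "Q2": 90})
chart.apply_gesture("circle")
print(chart.chart_type)  # pie
```

The table-driven `transitions` dict is just one obvious way to express it; the real recognizer would presumably work from touch trajectories rather than named gestures.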
Windows 8 and Office on a tablet now sound a lot more exciting, right?
The second application is simply Kinect’s gesture control brought into PowerPoint. While it seems straightforward, the application details an implementation where the people in the room will be tagged as:
- Primary user
- Secondary user
The point of tagging users in the room is to allow collaboration. Some excerpts from the patent:
The gestures may control a variety of aspects of the gesture-based system. The gestures may control anything displayed on a screen, such as adding words to a document, scrolling down, or paging through a document, spanning a column in a spreadsheet, pivoting or rotating a three-dimensional figure, zooming in or out, or the like.
Any computing environment networked in the same gesture-based system may therefore process captured data from any number of capture devices also networked in the gesture-based system.
The gestures may incorporate audio commands or audio commands may supplement the user’s gestures. For example, the user may gesture to add a bullet to a document and then speak words that are then added to the document, following the bullet point. The system may recognize the combination of the gesture to add a bullet and the audio as a control to add the bullet and then write the spoken words to follow the bullet.
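The excerpt above describes fusing a gesture with the speech that follows it into a single command. A toy sketch of that combination logic; the event format, gesture name, and `Document` class are assumptions of mine, not from the patent:

```python
# Hypothetical fusion of a recognized gesture with subsequent speech,
# as the patent excerpt describes for "add a bullet, then dictate it".

class Document:
    def __init__(self):
        self.lines = []

    def add_bullet(self, text=""):
        self.lines.append("- " + text)

def handle_events(doc, events):
    # events: ordered stream of ("gesture", name) / ("speech", words) pairs.
    pending_bullet = False
    for kind, payload in events:
        if kind == "gesture" and payload == "add_bullet":
            pending_bullet = True             # wait for the spoken words
        elif kind == "speech" and pending_bullet:
            doc.add_bullet(payload)           # fuse gesture + audio into one edit
            pending_bullet = False

doc = Document()
handle_events(doc, [("gesture", "add_bullet"),
                    ("speech", "Q3 revenue up 12%")])
print(doc.lines)  # ['- Q3 revenue up 12%']
```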
The application refers to an older patent I came across, where a user’s customized gestures are stored and made available over a network, enabling the user to feel comfortable while interacting with the system.
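That older idea of per-user gesture profiles following you across machines amounts to keying stored gesture customizations by user identity and falling back to system defaults. A rough sketch, with every name and the in-memory store being my own assumption (the patent would put the profiles on a network service):

```python
# Hypothetical per-user gesture profile lookup. The network store is
# simulated with a dict; keys and gesture names are illustrative only.

PROFILE_STORE = {
    "alice": {"next_slide": "flick_right", "previous_slide": "flick_left"},
}

DEFAULTS = {"next_slide": "swipe_right", "previous_slide": "swipe_left"}

def load_profile(user_id):
    # Start from system defaults, then overlay the user's customizations.
    profile = dict(DEFAULTS)
    profile.update(PROFILE_STORE.get(user_id, {}))
    return profile

print(load_profile("alice")["next_slide"])  # flick_right
print(load_profile("bob")["next_slide"])    # swipe_right
```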