Saturday, January 31, 2015

State of Wearables - Glass, HoloLens and Watch

Last day of the first month of 2015 and the year is off to a quick, exciting start. Three tech announcements caught my attention: Google Glass, Microsoft HoloLens and the Apple Watch. Taken together they foretell an interesting emerging trend in the long journey of modern computing.

We are now light-years away from controlling the computing experience with punch cards fed into a card reader, or from sitting on hard chairs at terminals with ethereal green glowing screens in a chilly computing center. Feedback from a huge mainframe is no longer measured in the minutes it took a chattering printer to produce a printout.

Early in 2014 I had the chance to attend one of Google's public preview events for Glass, this one in a classic building in South Seattle. In an earlier life this cavernous building, framed in solid 12x12-inch first-growth timbers and with an entrance large enough for boats to pass through, was in fact used to build navy and fishing vessels.

Walking into this building dressed in black skinny jeans, a black fitted athletic jacket and matching Converse high-tops, I was met by greeters dressed exactly like me. It was comically eerie, since I had not received a memo to that effect, or for that matter sent them one. Silicon Valley, meet Silicon Forest. Et tu, Brute?

Nonchalant shock aside, I turned to look at the glass-encased displays behind me. They contained what appeared to be actual development models of Glass in its various stages from the past few years. The earliest was something that barely fit inside a medium-sized backpack, with sensors and smartphone-sized screens hanging off a helmet. Things progressively shrank to a fanny-pack-sized support unit and headgear fitted onto something like safety goggles. And finally the 2014 model, sleekly hanging from the side of any number of frame types present at this event.

For the next hour I was given a pair of these and wandered around the hall with more computing power than a mainframe on the side of my head. The experience was eye-opening but not quite game-changing. At this stage of development I suspect it would be useful for specialized tasks that need hands-free access to information while the hands and eyes are on the work itself. Checklists, assembly training?

Yet it feels a bit like the Segway scooter: technically ingenious, but addressing situations already served by simpler solutions available to most people. Over 2014 Glass seemed to get a lot of social backlash, but I suspect much of the noise came from just a couple of incidents being repeated in waves across the internet.

Earlier this January Google took Glass off the market until further notice, and it is safe to say sales at $1,500 were lacking. It was a useful and perhaps necessary experiment in the evolution of wearable computing. At the end of the day I suspect short battery life was the limiting factor, and the elephant in the room nobody wanted to talk about at the event or at the subsequent Glass developer group meetings I attended.

Or perhaps they knew that Microsoft was about to announce HoloLens with Windows 10 later in January.

HoloLens, like Glass, fits into the category referred to as augmented reality. This is in contrast to virtual reality headsets like Oculus, where you can't see what is around you while you wear the goggles. In terms of size, HoloLens reminds me of Glass midway through its development cycle: compact enough to fit on the head, but still very physically present, wrapping front to back with lots of plastic. You could not stand relatively inconspicuously in public taking pictures before the subject was aware, which is presumably one gripe people had with Glass.

What do you get for the extra bulk in this iteration? By all appearances from the demo videos and reports from those at Microsoft's recent press conference: a much richer, more intuitive and immersive experience.

Connecting the dots between my almost-there experience with Glass and where HoloLens appears to be in its development, I can envision a very practical use case for myself: screen real estate. Speaking as a software developer, and perhaps for software developers in general, you cannot have enough monitors, or big enough ones.

On a typical day you are using half a dozen or more applications, each of which can have multiple windows and tool trays. Even a modest-sized project has a few to dozens of related files. Through the day you must maintain a mental model of these parts and either keep them plainly visible or spend time finding them.

If you are in an environment that requires jumping between different coding projects, the impact is even greater. At the end of the day this uses up mental resources that could be better applied to directly solving the problem at hand.

Enter HoloLens alongside your current dual-monitor rig. Simply arrange many of the windows into the HoloLens space, where you can reference them with a quick glance. Pin them to virtual screens. Expand the text of a window on the far wall. With a hand motion, drag another window onto a real monitor to work efficiently with keyboard and mouse. Done with a window for the moment? Slide it off to the augmented reality area, where you might leave it with some lines of code highlighted. In effect you have added a room-sized monitor to your workspace. A rough sketch of the idea follows.
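
To make that concrete, here is a minimal sketch of how such a mixed workspace might be modeled. Everything in it is hypothetical: the Surface, VirtualWindow and Workspace types are my own invention for illustration, not any announced HoloLens API.

    using System.Collections.Generic;

    // Hypothetical model of a mixed real/virtual workspace. None of these
    // types correspond to an actual HoloLens or Windows 10 API.
    enum Surface { PhysicalMonitor, VirtualWall }

    class VirtualWindow
    {
        public string Title;     // e.g. a source file open in an editor
        public Surface PinnedTo; // a real monitor or a spot on the wall
        public double X, Y;      // position on that surface
        public double Scale;     // "expand the text of a window on the far wall"
    }

    class Workspace
    {
        private readonly List<VirtualWindow> windows = new List<VirtualWindow>();

        // Dragging a window between a real monitor and the augmented area
        // is just a change of surface and coordinates.
        public void Pin(VirtualWindow w, Surface target, double x, double y)
        {
            w.PinnedTo = target;
            w.X = x;
            w.Y = y;
            if (!windows.Contains(w)) windows.Add(w);
        }
    }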

That is a game-changing step forward in the evolution of computing. It also makes complete sense to bake it into the OS, just as the mouse, keyboard and monitor are part of the underlying computing experience, for a seamless transition between the two worlds.

In another interesting technical announcement, the Apple Watch is set to be released in April. The smart watch category can be described as yet another tool to bridge the two worlds of analog and digital. While the Pebble watch can be considered the torch bearer of this category, the Apple Watch appears to be a necessary refinement in both physical and functional form, the kind that perhaps only Jony Ive's scalpel can bring to the design table.

Notable is the 'digital crown', which until Apple had largely been forgotten by digital watch makers. Here it has been reimagined as a home-screen button and an up-and-down scrolling device. There is a 'but of course' sense to this: you aren't blocking the screen while swiping, or toggling between two tiny buttons that are overly sensitive or not sensitive enough.

Perhaps a future iteration will couple the up-and-down scrolling with a diagonal movement, done by simultaneously tilting your arm perpendicular to the rotation of the digital crown. Imagine your arm extended as when you're checking the time, and then moving the elbow up or down slightly. The geometry is sketched below.
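
The geometry is simple enough to sketch. This is purely a thought experiment: there is no such input API, and crownDelta and tiltRadians are stand-ins for whatever the sensors would actually report.

    using System;

    struct ScrollVector
    {
        public double Dx, Dy;
    }

    static class DiagonalScroll
    {
        // Hypothetical: combine crown rotation with arm tilt into a 2D scroll.
        // crownDelta: crown rotation since the last frame (positive = down).
        // tiltRadians: how far the elbow has dipped or risen from level.
        public static ScrollVector Compute(double crownDelta, double tiltRadians)
        {
            // With the arm level (tilt = 0) this is pure vertical scrolling,
            // the announced crown behavior. Tilting the arm rotates the
            // scroll direction toward the diagonal.
            return new ScrollVector
            {
                Dx = crownDelta * Math.Sin(tiltRadians),
                Dy = crownDelta * Math.Cos(tiltRadians)
            };
        }
    }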

And so the connection between human and machine grew by a few interesting steps. To what end? Quicker access and creation of information in a more immersive environment.

Tuesday, January 6, 2015

Preview of Visual Studio 2015 at Microsoft main campus

One of the benefits of living in Microsoft's back yard is easy access to events that disseminate information about technology on the horizon. This, coupled with a new openness such as open-sourcing .NET and putting code on GitHub, and with tools like Meetup, brings together many interested parties.

Kicking off the first Monday back at work after the Christmas holiday was a .NET Meetup on main campus in building 25. The presentation covered some features being added to Visual Studio (VS) in the next release or shortly after, and included a panel of about five leads and devs from the VS team. Most everybody there pitched in $5 for pizza, given the 6:45 to 8:30 pm timeframe, and we occupied two rooms, each equipped with a large projector screen. The classroom-style layout, with tables for everyone, was an appreciated upgrade over this meetup's normal venue.

The focus of the presentation was the new and enhanced debugging tools in the works. Notably, real-time event logging baked into the IDE, with the ability to snapshot on metrics like execution time or memory usage. Drilling into the event stream, one can see each mouse-click handler call and the resulting calls to the methods that handle actions like saving a file.

It becomes very obvious when the same methods have been called multiple times, such as repeated sequences of opening and closing a file. They can be seen occurring over time without having to put some kind of debug statement at each potentially critical point. From this stream one can quickly drill into the actual code, make a change and re-execute from that point, avoiding a restart of the whole program and the wait to reach the critical point of execution. The toy example below shows the kind of pattern that stands out.
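
For illustration, consider a save handler that re-opens its file on every call. The class is my own toy example, not code from the presentation; in an event timeline its repeated open/write/close cycles would stand out without a single debug statement.

    using System.IO;

    class DocumentSaver
    {
        // Toy example: wired to a click event, this handler shows up in the
        // diagnostic event stream as a repeating open/write/close sequence.
        public void OnSaveClicked(string path, string contents)
        {
            // Re-opening the file on every click is exactly the kind of
            // redundant work a timeline view makes obvious at a glance.
            using (var writer = new StreamWriter(path, append: false))
            {
                writer.Write(contents);
            }
        }
    }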

For production testing, VS can capture the event stream to a file that is then loaded into VS for viewing the details. In this scenario the real-time debugging would not be available.

Another interesting feature is the ability to see, line by line, the time statements or loops took to execute. There is also a feature to snapshot execution time after some number of loop iterations have completed. A related feature reveals the progressive utilization of the heap as objects are created and destroyed. The snippet below shows the hand-rolled equivalent this replaces.
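
For contrast, here is the sort of instrumentation we write by hand today. The snippet is my own, and the promise is that the new tooling shows comparable numbers inline, per line, without any of it.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class Program
    {
        static void Main()
        {
            var sw = Stopwatch.StartNew();
            long before = GC.GetTotalMemory(forceFullCollection: true);

            // Suspect loop: allocates a new array each iteration, churning
            // the heap in a way the new tools would chart automatically.
            var results = new List<int[]>();
            for (int i = 0; i < 10000; i++)
            {
                results.Add(new int[64]);
            }

            sw.Stop();
            long after = GC.GetTotalMemory(forceFullCollection: false);
            Console.WriteLine("Elapsed: " + sw.ElapsedMilliseconds +
                " ms, heap grew ~" + (after - before) + " bytes");
        }
    }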

Overall I am looking forward to examining the preview myself, particularly as it applies to creating more efficient code for Azure apps and learning about vNext.

Wednesday, December 17, 2014

Apple Watch - the Next Frontier in Smart Devices

As the Apple Watch looms on the horizon, a great idea so far displayed only at events in a controlled environment, time will tell if the implementation meets expectations. What problem does this device solve better than before, or can it make a leap of faith fueled by magic from the wizards of silicon, who regularly put more power in an ever smaller package?

What is the killer application that transforms it from trendy novelty into handy necessity?

Perhaps the digital crown adds an intuitive piece of functionality to the watch vocabulary. Certainly for anyone familiar with setting the time on a traditional watch, this control is tightly coupled to two functions: moving the hands forward or backward in time, and winding the spring.