October 23, 2013
Microsoft’s Bill Gates stood on the stage at the now-defunct Comdex show in Las Vegas in 2000 with his schoolboy smile, touting the new “tablet PC.” Penned on the tablet in Mr. Gates’ handwriting was “Tablet PC is SUPER COOL!” Behind the stage, a backlit sign read, “Experience the evolution.”
The Microsoft evolution never became a revolution because the company’s disparate and factional divisions failed to work together to envision and implement a turnkey experience.
The revolution happened in 2007 with the launch of the iPhone.
As with most industries, evolution is often interrupted by black-swan revolutions. Sound (voice communications), touch (pinch-and-zoom navigation) and sight (the Heads-Up Display, or HUD) each changed the way consumers used the phone, and each has been a gating factor in technology adoption.
Knowing which technology will help us evolve and which technology revolutionizes is more of a human insight than a science.
Ergonomics helps us rearrange the digital furniture. However, changing the way we connect with this communication device is profoundly human. What is beyond touch, what is the next revolution?
Short history of touch
Although Mr. Gates told reporters off-stage in Las Vegas how excited everyone was back at headquarters in Redmond, WA – developers were checking out the tablet to play with, which was “a very good sign,” he said – six months later warehouses were still full of the tablets. Second-quarter shipments had plummeted 25 percent, with a meager 100,000 total units sold.
Mike Magee, technology writer for the Inquirer, wrote despondently that “this is another classic case of IT firms thinking they know what technology people will like, and failing to take off the blinkers.”
Touch first appeared back in 1971 and evolved over the following decade in the form of infrared technology – used in various military applications and later in devices such as the Hewlett-Packard 150 – in which a matrix of IR beams detects a finger touching the screen.
But IR technology was expensive, and the technology that gained more mainstream adoption was “resistive touch.”
It was a simple concept. Resistive touch screens were built using two layers of conductive material (Indium Tin Oxide). The two layers were separated by a small pocket of air. An action was triggered when a stylus, or other object, pressed the top layer into contact with the bottom layer.
The limitation was that it was like a pin board. You could tell the device where you were by moving the point of contact. But it did not have the multi-touch functionality essential to pinch-and-zoom navigation.
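Reading such a screen is straightforward, which was part of its appeal. Below is a minimal sketch of how a generic four-wire resistive panel is read; the drive_layer and adc_read helpers are hypothetical stand-ins for real microcontroller I/O, and the 10-bit ADC scale is an assumption.

```python
# Minimal sketch of reading a four-wire resistive touchscreen.
# drive_layer() and adc_read() are hypothetical stand-ins for real
# microcontroller I/O; the 10-bit ADC scale is an assumption.

ADC_MAX = 1023  # 10-bit ADC full scale (assumption)

def read_touch(drive_layer, adc_read):
    """Return (x, y) in [0, 1], or None when the layers are not pressed together."""
    # Energize the X layer as a voltage divider, then sense the Y layer:
    # the voltage it picks up is proportional to the horizontal position
    # of the contact point.
    drive_layer("X")
    x_raw = adc_read("Y+")

    # Swap roles to measure the vertical position.
    drive_layer("Y")
    y_raw = adc_read("X+")

    if x_raw is None or y_raw is None:  # air gap intact: no touch
        return None
    return (x_raw / ADC_MAX, y_raw / ADC_MAX)

# Stub hardware simulating a stylus press near the screen's center.
press = {"Y+": 512, "X+": 490}
print(read_touch(lambda layer: None, lambda pin: press.get(pin)))
```

Note the limitation baked into the scheme: the voltage divider can only report one point. Press in two places and the reading lands somewhere in between, which is why pinch and zoom was off the table.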
Mass-market adoption was never a realistic option:
1) The screen wore out
2) Accuracy required a stylus pen
3) The air pocket made the screen appear hazy
4) OEMs had to build a clunky hole in the casing, since the top of the resistive sensor had to be exposed to the user’s input
This is the technology that Bill Gates was holding up at Comdex in 2000.
The unit’s resistive touch stylus was used to input text and commands into clunky dialogue boxes. The entire project was “resistive.” The Office team refused to build for the unit, adding to the painful UX.
Meeting Andrew Hsu
In 2013, I ran an event on connected screens in New York. I wanted to tell a story about the importance of the screen in the evolution of mobile phone design and adoption.
I invited Professor Donnell Walton from Corning Glass, as well as representatives from Microsoft’s Surface team and Google Glass, and was looking to find a speaker to explain “touch.”
Maybe I could locate someone from the scuttled Apple Newton team?
I found, much to my surprise – like an anthropologist who discovers that we did not evolve directly from monkeys – that the precursor to the 2007 Apple iPhone was a skunk-works project headed up by an engineer called Andrew Hsu.
Andrew developed and patented a capacitive touchscreen suitable for mobile devices way back in 1999. He developed a system which computes the location of a user’s fingers based on how they change the capacitance values of an invisible matrix of electrodes.
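The idea can be sketched in a few lines. The snippet below is a simplification, not Hsu’s patented design: it assumes the controller already holds a small grid of capacitance changes (deltas) against an untouched baseline, and the threshold value is illustrative.

```python
# Simplified sketch of locating fingers on a capacitive touch matrix.
# We assume a grid of capacitance *changes* (deltas) relative to an
# untouched baseline; the threshold is illustrative, not a real spec.

THRESHOLD = 50  # minimum delta that counts as a touch (assumption)

def find_fingers(deltas):
    """Return (row, col) finger positions, interpolated between
    electrodes by a weighted centroid of the neighboring cells."""
    rows, cols = len(deltas), len(deltas[0])
    fingers = []
    for r in range(rows):
        for c in range(cols):
            v = deltas[r][c]
            if v < THRESHOLD:
                continue
            # Keep only local maxima, so one finger yields one touch.
            neighborhood = [(nr, nc)
                            for nr in range(max(0, r - 1), min(rows, r + 2))
                            for nc in range(max(0, c - 1), min(cols, c + 2))]
            if v < max(deltas[nr][nc] for nr, nc in neighborhood):
                continue
            # A weighted centroid over the neighborhood gives a
            # position finer than the electrode pitch.
            total = wr = wc = 0.0
            for nr, nc in neighborhood:
                w = deltas[nr][nc]
                total += w
                wr += w * nr
                wc += w * nc
            fingers.append((wr / total, wc / total))
    return fingers

grid = [
    [0,  0,  0,  0,  0],
    [0, 80, 40,  0,  0],   # finger one
    [0, 30, 20,  0,  0],
    [0,  0,  0, 90, 35],   # finger two
    [0,  0,  0, 25, 10],
]
print(find_fingers(grid))  # two interpolated touch positions
```

Because each finger perturbs the electrode matrix independently, several touches can be located in the same scan – the property that later made pinch and zoom possible.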
The capacitive touchscreen did not suffer from the various user experience drawbacks of the resistive touchscreen – it does not wear out, it does not cloud the underlying display, and it does not require a big hole to be cut into the device casing. But, most importantly, it enables natural finger input.
This capacitive touch is not a mouse click. It is not a data poke with a stylus. Andrew Hsu’s touch allowed us to communicate with the device in a very human way – by pointing and pinching in space.
Don Norman is often quoted about touch.
“We've lost something really big when we went to the abstraction of a computer with a mouse and a keyboard, it wasn't real ... swiping your hand across the page ... is more intimate. Think of it not as a swipe, think of it as a caress.”
While mobile success is almost always based on interface and usability, it took seven years for Andrew Hsu to convince the industry to adopt the technology.
Revolutions come in simple packages: text messaging, Apple’s mobile application SDK, gesture-based gaming.
We talk about the consumerization of technology. Touch was the humanization of technology.
In a world where data appeared cerebral and uninviting, we could suddenly interact with data and content as we do with real objects. The physical world became extensible and less scary.
From click to pinch and zoom
In 2006, handset manufacturer LG trialled capacitive touch with its designer Prada phone.
The LG phone had all the right ingredients – a capacitive touchscreen for intuitive finger input, a high-resolution display and one of the first graphics co-processors in a handset.
Prada brought style to the table, and LG brought the insight into touch that would ultimately inspire the new mobile consumer.
But we had to wait one more year.
When Jobs returned to Apple, he shut down the Newton project. This legacy 1993 technology had poor handwriting recognition and little traction in the market. But Andrew Hsu’s capacitive touch appealed to Steve Jobs’ UI sensibilities.
As a post-Newtonist, Jobs once said “we are born with five styluses on each hand.”
When he introduced the iPhone, we knew that being able to move large-format data on a small screen with pinch and zoom changed the way that consumers saw their mobile device.
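The gesture feels magical, but the arithmetic behind it is simple: the zoom factor is just the ratio of the current distance between two fingers to the distance when the gesture began. A sketch, with illustrative coordinates:

```python
# Sketch of the arithmetic behind pinch-and-zoom: the content scale
# tracks the ratio of the current finger separation to the separation
# when the gesture started. Touch points are (x, y) tuples in pixels.
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_a, start_b, now_a, now_b):
    """Return the zoom factor implied by two moving touch points."""
    return distance(now_a, now_b) / distance(start_a, start_b)

# Fingers that started 100 px apart and are now 250 px apart
# imply zooming the content to 2.5x.
print(pinch_scale((0, 0), (100, 0), (0, 0), (250, 0)))  # 2.5
```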
Where Steve Jobs went further than touch was his insight in designing a full edge-to-edge screen with the proportions of a letter-size piece of paper. The screen called out to be touched, worked on and paged through.
Although touch revolutionized the phone, and lines wove around the block for each release of Apple’s new “human” interface, the consumer was still nose-to-screen, bumping into lamp posts while elegantly navigating data a hundred miles away.
“Bump” – the file exchange application recently acquired by Google – and other applications, including NFC payment, extend this love of the tactile interface by promoting social touch between phones and public devices such as point-of-sale (POS) terminals.
Gesture: Moving beyond the cool?
While touch is an important sense, sight is essential for navigation. The next revolution is to make data come to life seamlessly in the real world.
When we talk about HUD, we think of the new Google Glass and the opportunity to integrate data into our line of sight – to see the world and the data behind it in parallel.
We see integrated cyborg solutions such as Google Glass and future visions of embedded epidermal circuits, as seen in the movie Total Recall.
Microsoft had the lead in a new HUD interface using gesture. Xbox Kinect was the one product where Microsoft was seeing growth in the consumer sector. However, the leviathan was unable to make this a multiscreen strategy fast enough.
Moving gesture elegantly to PCs and Windows phones never happened. There is a Kinect for Windows, but it lacks the software for controlling the interface.
The Leap Motion controller is a step forward: a small multiscreen sensor box not tied to a console in the den, but able to tether like a dongle to a wide variety of screens and deliver better sensitivity than Kinect. It supports multiple commands down to finger-level accuracy.
Andrew Hsu still believes that touch is a less ambiguous signal of the consumer’s navigation intent.
“How can you disambiguate between ‘accidental’ and intentional gestures? The beauty of touch interaction is that you basically get user intent for ‘free’ – a user typically only touches the device when he/she wants to interact with it. The cases of accidental activation are much lower and easier to reject.”
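One common workaround – sketched below as a general idea, not any product’s actual logic, with an illustrative dwell threshold – is to gate mid-air gestures behind a hold timer, recovering some of the intent that touch gets for free:

```python
# Sketch of Hsu's point: touch signals intent on contact, while a
# mid-air gesture needs an extra gate - here an illustrative dwell
# timer - before it is treated as intentional rather than accidental.

DWELL_SECONDS = 0.5  # how long a pose must be held (assumption)

def touch_is_intentional(contact):
    # The contact itself is the intent signal.
    return contact

def gesture_is_intentional(pose_held_seconds):
    # Without contact, require the pose to be held before acting.
    return pose_held_seconds >= DWELL_SECONDS

print(touch_is_intentional(True))      # True on first contact
print(gesture_is_intentional(0.2))     # False: likely accidental
print(gesture_is_intentional(0.7))     # True: deliberate pose
```

The cost is latency: every gesture becomes half a second slower, which is exactly the kind of friction touch never had.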
Arguably, HUD is a solution looking for a problem.
Take the inspired Segway: the inventor’s goal was to develop an urban consumer transport vehicle, but he failed to get significant adoption.
The Segway has now found a home with urban touring groups and airport police. Why? It provides an elevated view with minimal multitasking: ideal for tourists and law enforcement.
Mr. Hsu agrees.
“What these technologies really need to address is what sort of ‘problem’ they are trying to solve. That is, with capacitive touchscreens, there were certainly a number of value propositions that arguably were superior to the previous resistive solution that helped transform/enable touch input. Natural gestures (HUD) is still looking for a compelling value proposition.”
Google Glass is a platform without a certain home. While super cool, it has not inspired the consumer.
We have not seen the “a-ha” that Jobs brought to touch. We know more new intuitive human interfaces are coming. But we need a Steve Jobs to take the technology and humanize it for intuitive consumption.
Gary Schwartz is president/CEO of Impact Mobile, Toronto. Reach him at gary.schwartz@impactmobile.com.