In 1984 the personal computer industry was forever changed by the first Mac. More expensive and less familiar than the DOS-based computers that were gaining popularity, the Mac was a first: it shipped with a point-and-click mouse as standard, and its core operating system – the thing we used to tell the computer what to do – presented a flat, desk-like surface. Once we got used to the idea, we could move the pointer around with the mouse and drag the “icons” that represented ideas on our virtual desk from one place to another.

The nomenclature would take a few years to solidify: the virtual desk would be known as the desktop, the things we move are “files” and like in an office we would put files into folders and even “trash” them when we no longer wanted to keep them around. The virtual concept was borrowed from the real world, and it allowed us to relate to our computers in a way that was both familiar and more intuitive than typing esoteric commands into a terminal window.

Within a few years, the point-and-click interface would become standard in the (Microsoft) PC world too.

What made the critical difference was the interface we used to get what we wanted out of the machine. The first Macs could do pretty much the same things their predecessors could, but you didn’t have to relate to the machine in a cold, distancing syntax that involved learning a series of special things to type. Admittedly, the new point-and-click paradigm did train us – we had to learn its specific idiosyncrasies – but it met us halfway by making the concepts simpler and the interface more intuitive.

Computers by their nature are impersonal. They expect specific instructions to accomplish specific tasks. In the early days, computer programmers were nerdy, anti-social specialists who worked in a world based largely on higher math. Most software in the early days – and most software that the general population doesn’t see today – crunches numbers and performs calculations at a level of higher math most laymen wouldn’t understand.

The mathematicians who became computer scientists were classically not very good at understanding people. Our tendency, in the early days, was to make software that worked more like the way computers think than the way humans think.

Using a modern operating system today, and particularly Mac OS X, is much easier. First, the computer shows us a graphic representation of what’s available right away – without us having to ask for it. The Mac OS is organized to have default places for our documents, photos, and music. Let’s face it, who cares where a file lives on the hard drive? Well, the operating system does. The file/folder hierarchy is an arbitrary (albeit necessary) way to organize ideas and things on our computer, and it primarily serves the needs of the computer, not the user. This has led to the classic story of users saving things somewhere and not knowing how to find them later.

To make things easier, Apple pioneered the idea that software can be written to address the user directly. You can search your hard drive, for example, by entering just a few letters or words that might be contained within the document you are looking for. This search can be “canned,” so that you can gather everything relating to your daughter’s school into a “smart” folder – a folder that doesn’t really exist, but is a live view of all the relevant files scattered across the drive.
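To make the idea concrete, here is a minimal sketch of a “smart folder” as nothing more than a saved search – plain Python, not Apple’s actual Spotlight implementation, and the folder path and search term (“school”) are made up for the example:

    import os

    def smart_folder(root, query):
        """Yield paths under root whose name or text content contains query.

        A "smart folder" here is just a saved search: the files stay wherever
        they already are on disk, and the query re-runs every time you look.
        """
        query = query.lower()
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if query in name.lower():
                    yield path
                    continue
                try:
                    with open(path, "r", encoding="utf-8", errors="ignore") as f:
                        if query in f.read().lower():
                            yield path
                except OSError:
                    pass  # unreadable file: just skip it

    # "Canning" the search: the folder is nothing but the query, re-evaluated
    # on demand whenever you open it.
    def school_folder():
        return list(smart_folder(os.path.expanduser("~/Documents"), "school"))

    if __name__ == "__main__":
        for match in school_folder():
            print(match)

The point is that the “folder” is really just the query itself; every time you open it, the search re-runs and the contents reflect whatever is on the drive right now.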

This is just one example of the shift toward what Alan Cooper calls, in his 1999 book, “human interaction design.”1 On top of the nuts and bolts that make the computer work are layers and layers of software designed to give a unique, albeit somewhat artificial, experience to the user.

Yesterday Apple announced the iPad – a sleek, 10-inch version of the iPhone whose only input is its sophisticated touch screen. Is this just the latest in an over-saturated world of gadgets, gizmos, and toys?

Sure, it is the latest tech-bling, but taking a step back, most observers would argue that it represents something larger: a paradigm shift in how we interact with our world. The iPhone has become a staple – a near-ubiquitous fixture of our modern world.

Jobs explained yesterday that Apple has long asked itself whether there is room for a tablet in the marketplace – something bigger than a smart phone but not quite the same as a laptop either. “If there’s going to be a third category between smart phones and laptops, it is going to have to be far better at some key things – otherwise it has no reason for being.”2

What makes the iPhone – and now the iPad – a game changer are three key things: (1) there is no mouse or keyboard, (2) the device is portable, and (3) software can be written for it after it is sold and in the wild.

First, the lack of a mouse and keyboard matters because, like the point-and-click paradigm when it was first introduced, it changes how we think about interacting with a computer. The iPad’s fundamental change is that the user interface is entirely based on using fingers to make selections from the screen. The interface designers know this, so the way the interface is created takes it into account. Obscure tasks aren’t hidden away in deep menus and submenus. There are just a few options presented to us on any given screen. Higher-level choices let us navigate to places where we can make more specific choices (like a phone tree), as the sketch below illustrates. All of these facets are native to the iPhone/iPad paradigm and represent a key shift from the desktop computer model that has been predominant for 30 years.
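As a rough illustration only – not how the iPhone OS is actually built – here is a tiny Python sketch of that phone-tree idea: each screen offers a handful of choices, and each choice drills down to a more specific screen. The screen names are invented for the example.

    # A drill-down interface as a tree: each screen maps a few choices to
    # sub-screens; an empty dict means a leaf (nothing further to choose).
    screens = {
        "Music": {
            "Playlists": {},
            "Artists": {},
        },
        "Photos": {
            "Albums": {},
            "Events": {},
        },
        "Settings": {
            "Wi-Fi": {},
            "Brightness": {},
        },
    }

    def navigate(tree, taps):
        """Follow a sequence of taps from the home screen down the hierarchy."""
        for choice in taps:
            tree = tree[choice]  # each tap narrows to a more specific screen
        return sorted(tree) or ["(detail screen: no further choices)"]

    # A user who wants their playlists taps twice: Music, then Playlists.
    print(navigate(screens, ["Music"]))               # ['Artists', 'Playlists']
    print(navigate(screens, ["Music", "Playlists"]))  # ['(detail screen: no further choices)']

The point isn’t the code but the shape: a few choices per screen, each leading somewhere more specific, so the whole interface can be driven comfortably by a finger.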

Second, portability. Computer use is no longer a matter of sitting at our desk (or an internet cafe) and staring at the glowing box (as a friend of mine likes to say). If I’m walking down the street and want to find the nearest place to get lunch, I can pull out my iPhone, launch the AroundMe app, and see a map with push-pins showing where I am and where to eat.

Of course it is possible to do this on a computer too, but it is unlikely that I would have gone to the trouble. Or if I did, I would have had to do it in the morning before I left my house. The immediacy of the iPhone not only makes it useful on the go, it means we will think about being on the go in new ways. Most people with iPhones just don’t use Mapquest or Google Maps on their computer anymore, because as long as the network coverage is good, we know we are carrying a real map that can show us how to get where we are going at any given time.

Portability affects everything. If we’re waiting for an important email, we can go to our kid’s little league practice knowing that the device will let us look for and respond to that email should it arrive.

Finally, the third key difference is that software can be written after the fact. Apple knew that the market and users themselves would drive the way the iPhone was used. The same thinking has gone into the iPad.

Apple doesn’t have to think of every way it could be used; all they have to do is make the best hardware they can. The uses will come, and because you can load new applications onto the device, just as you can on a computer, the possibilities are endless.

The iPhone has already changed the way many of us think about being connected. What will the iPad do? Well, the simple answer is that it will take us a lot farther. The interface is the experience. A bigger screen will mean more possibilities and more room to lay out information, and that will lead to new and interesting ways to use that information.

References
1. Alan Cooper, The Inmates Are Running the Asylum. Sams Publishing, 1999.
2. Steve Jobs, Apple Special Event, January 27, 2010.

By Jason
