INTERFACE EVOLUTION – FROM APPS TO CONTEXT
Current interfaces reflect the brand-centric world around us: a grid of logo-marked rectangles displayed on the home screen like small billboards. Most functionality is branded: we don’t search, call or share thoughts; we google, skype and tweet. Is this the pinnacle of interface evolution?
Short Historical Excursion
Things haven’t changed much since the graphical user interface replaced the supposedly obsolete command line. The application-centric interface, successor to the document-centric model, didn’t make interfaces simpler. It added a cognitive layer: you first have to learn which application serves each task. Instead of selecting a picture and having editing functionality suggested, you must remember which application to use, launch it, and open the picture in that app.
The task-centric interface, inspired by web-design practices, didn’t solve the problem described above: oversimplification and too many steps, even for the simplest tasks, made it suitable only for specific scenarios.
A mix of document-, app-, and task-centric interfaces is the status quo.
Coming full circle
The next step of the interface evolution, not surprisingly, also comes from the web. We use it every day when we search for information. Though it’s called the 'search box', the name is deceptive: it’s not just search, it’s also capable of executing simple commands, conversions and calculations.
You can appreciate the irony of seeing interface history come full circle and return to its starting point. The simple search engine’s text box can be viewed as the zenith of the command line: a command line that doesn’t care about word order or spelling, and that understands commands in natural language. But it would not realise its true potential without the capacity for learning and awareness.
While we were all focused on superficial changes, like touchscreens making direct manipulation of the GUI even more 'direct', the revolution happened in the backend. It wasn’t one particular technology but the synergy of various sensors, dedicated processors and advanced algorithms. A simple artificial intelligence was born. Context became as important as content: awareness of location and spatial position, plus the ability to access and analyse many sources at once, from weather to calendar, lets the system respond appropriately to user input.
The personified digital 'interlocutor' was chosen as the interface for this AI because of its wow factor, its sci-fi feel and our human tendency to anthropomorphise things. But from the outside, Siri and its equivalents look like just a voice interface to a hidden, powerful command line. What if it weren’t hidden, and we had a text interface for interacting with the AI in situations where voice is impossible or socially unacceptable? The current iPhone’s Spotlight search screen, for example, could become something far bigger than dumb search. As the user starts typing, there is little difference from the current Spotlight search; as they enter further text and symbols, the search results collapse and disappear, revealing intuitive suggestions of possible actions.
In both visualised scenarios (the visualisations are just a proof of concept) numbers are entered, one turning out to be a contact number and the other a message. The closest implementation of such an interface was 'Just Type' on the ill-fated webOS, and unfortunately it has no current equivalent.
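How might such disambiguation work? A minimal sketch in Python, where a phone-number-like string produces call and message suggestions while everything else falls back to search. Every rule and function name here is a hypothetical stand-in for the far richer analysis an intelligent agent would perform:

```python
import re

def suggest_actions(text):
    """Return plausible actions for free-form input (toy heuristic)."""
    suggestions = []
    # Strip spaces, hyphens and parentheses before testing the shape.
    digits = re.sub(r"[\s\-()]", "", text)
    if re.fullmatch(r"\+?\d{7,15}", digits):
        # Looks like a phone number: offer call and message actions.
        suggestions += [f"Call {digits}", f"Message {digits}"]
    if not suggestions:
        # Fallback: treat the input as a plain search query.
        suggestions.append(f"Search for '{text}'")
    return suggestions

print(suggest_actions("+44 20 7946 0991"))
print(suggest_actions("lunch with Anna"))
```

In a real agent, the contact book, message history and current context would weight these suggestions rather than a fixed regular expression.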
Interface dissolves in intelligence
Sony sells fewer and fewer point-and-shoot cameras each year, but more and more image sensors for smartphones. Dedicated devices are being replaced by a single converged device, and single-task device manufacturers are becoming chip suppliers for smartphone makers.
Continuing the theme, the evolution of intelligent software agents will put an end to the glorification of separate apps and functions. These will be hidden under a centralised content- and context-centric interface, in the same way Siri currently hides Yelp, Google, Wikipedia, Wolfram Alpha and many other services. Apps will become APIs.
Instead of prioritising applications or tasks, the new UI will put the user’s input (content, gestures, voice commands, etc.) and its context (location, weather, traffic conditions, etc.) first. The 'choose application, then input content' model will shrink to just 'input content', as smart algorithms draw on context, your preferences and prior usage to do the rest.
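A toy illustration of that 'input content' model, with hard-coded rules standing in for the smart algorithms the article anticipates. Every context key and response format below is a hypothetical example, not a real API:

```python
def choose_action(content, context):
    """Map raw user input plus ambient context to a response style.

    Hypothetical rules: a real agent would rank candidate actions
    using learned preferences and prior usage, not if-statements.
    """
    if context.get("driving"):
        # Hands and eyes are busy: respond with voice, not a screen.
        return f"speak: {content}"
    if context.get("in_meeting"):
        # Talking aloud is socially unacceptable: respond on screen.
        return f"show quietly: {content}"
    return f"show: {content}"

print(choose_action("directions home", {"driving": True}))
print(choose_action("directions home", {}))
```

The point of the sketch is that the same content yields different behaviour depending on context, without the user ever choosing an application.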
To paraphrase Naoto Fukasawa’s famous line about design: the interface will dissolve in intelligence.