What happened to our screens? The history of everyday screens has gone something like this — movie theaters to TV sets in every home, then desktop and portable computers, followed by (smart)phones, tablets and smart watches. In this sequence, each new device didn’t completely replace the previous one; rather, the newer screen captured most of our attention. One trend stands out: screens get smaller from one phase to the next (within each phase, they tend to get bigger: bigger movie theaters, bigger TVs, bigger smartphones, etc. This is optimization vs. disruption at work). Consequently, these smaller screens are closer to our eyes (or the other way around), which also means they’re closer to our physical bodies, they’re lighter, and they travel with us. How far can this go? By understanding the logic of screen evolution, we can explore where the next chapters could take us.

Do smaller screens mean there is less to see? Not really. While screens were getting smaller, they also shifted from consumption-only mode to interaction/consumption mode, with a dramatic increase in computing power in parallel. Computers, smartphones and smart watches are now less screen, more interface, consistently redesigned so we can use a variety of applications. We have access to supercomputers, and we use them to display what we need on small screens at any given moment. These devices are literally at our fingertips: we use them with our hands, so they need to be close, small and light enough. As a result, there’s actually more to see for the end user.

As noted at the beginning, these new interactive devices have not replaced previous screens, but they now prevail. They have also produced new screen-on-screen behaviors. This existed with traditional TV already (information tickers on news and sports networks; recent TV sets letting you show one network in a small overlay while watching another). Screen-on-screen exists in the digital/interactive era too, with YouTube for instance. Last but not least, the second-screen phenomenon is now mainstream: close to 90% of consumers watch TV with a smartphone, tablet or laptop in their hands. This is interesting considering that previous generations read the paper while listening to the radio in the morning (sound in the background, news as the focus), while newer generations check apps on their smartphones while the TV or YouTube plays (video in the background, interactive apps as the focus). This happens with premium content too: checking Twitter while watching Netflix, playing Pokémon Ruby at the movie theater (true story). Does it mean we’re not paying attention to the content we’re watching? Maybe not our full attention, but do we really need to? What we really need: interaction. Note here that video games — interactive screens — usually focus our minds on a single screen: we don’t need another device in our hands.

One very obvious consequence of smaller, interactive and more personal screens is that they follow us at all times: waking up, commuting, at work, during lunch and dinner, hanging out with friends, before we go to bed, when we’re in bed. There’s an app for every moment, every place. As a direct result, these devices record a massive amount of data about us and become really good at serving real-time, interesting/distracting and personalized information and content, anywhere and anytime. They’re closer to our bodies, our minds and our feelings. If this trend continues, what does it mean? How can screens become even smaller, when a smart watch is already the size of two fingertips? Will we interact with something other than our fingers?

One recent attempt to predict future behaviors seems in line with these trends: a smaller screen, juxtaposed with another interactive device — an always-on, in-ear headset — enabling frequent voice-based interactions… Ring a bell?

Joaquin Phoenix in Her (Spike Jonze, 2013)

Indeed, in the movie Her, Theodore (Joaquin Phoenix) always carries a small device that displays information for him and is also equipped with a camera. He uses an in-ear piece to receive more information and to interact with the device. A small device that is even closer to the user than before; or is it the user who is closer to it, too close maybe… This prediction also showcased a type of UI that could become more common in the near future: voice-based interactions.

On this front, we’re already seeing a ton of progress with mass-market products like Amazon’s Echo. But these only apply to specific use cases where a screen is not needed. Looking back at another prediction, it seems the Back to the Future sequel had it half-right:

  • At the diner, Marty orders through “interactive” screens: these are analog TVs (probably due to the diner theme), and the interaction is voice-based. No touch screens.
Back to the Future Part II (Robert Zemeckis, 1989)
  • At the house, Marty’s son selects multiple networks on the same screen, watching all of them at the same time. Again, an analog screen (a projector) and, interestingly, voice-based interactions. No touch screens. (The mom also uses voice interaction to prepare the pizza.)

If devices keep getting physically smaller, voice might be the only way, since we can’t reduce the size of our fingers. Let’s also keep in mind that voice has always been a way to re-humanize technology: speaking to a machine makes it more human than pushing its buttons, compelling computers to speak our language and not the other way around. Moreover, what if the next screen is a small glass in front of our eyes (glasses), or even contact lenses that display relevant information? It would be difficult to touch them with our fingers, so voice would be a viable alternative. Those glasses or lenses could also overlay content and information on the physical world, resulting in augmented reality. Would we then touch those virtual objects and move them around? Body sensors could help achieve that, and almost erase the boundary between our bodies and the “screen”. Screens could even end up being part of our bodies, for example with skin displays (which we could touch). Kind of like a quarterback’s cheat sheet: an interactive screen on the arm?

Pictured above: Colin Kaepernick. He uses his cheat sheet (on his left arm) to call plays.

And what if the smartphone’s supercomputer were implanted directly in our brains, supporting our decisions with real-time information, no physical screen needed? Our brains already produce rich imagery in our dreams, so they could presumably also render pictures from social media, movies, and all kinds of visual/audio content. We already use medical equipment implanted in our bodies to keep our hearts pacing properly, or to supply insulin continuously to people with diabetes. Would an in-brain “screen” help blind people see? This might be part of the grand vision behind Elon Musk’s Neuralink, which aims to build brain-computer interfaces. The project is still at a very early stage, as it will require significant developments in neuroscience, semiconductors, interface design, wireless connectivity, and more.

Again, these new chapters in the history of screens will not make the previous stages disappear. Displays are still a very effective method of visual communication for background content or discovery. A table full of curated books is still more efficient at recommending new reads than the Amazon home page. But the future is not going to be a bigger, more powerful iPhone, or a voice-only device. Technology will look for ways to further shorten the distance between minds and computing capabilities. Some are already trying to merge them. Software is eating our eyes.