The hierarchy of senses

Butterfly Works, a social innovation studio, invited me to organise part of a workshop for their clients and friends. I did a very quick version of the exclusive design challenge I organised a while ago. This time there were three teams, and they had just half an hour to come up with ideas using the material I gave them. After that they took their first ideas and moved over to Kim van den Berg, who gave the teams a very quick workshop in visualising ideas by drawing.

In the introduction Kim drew a graph like this one (only much better) to help explain why visualising things is a good idea.

A drawing of a grid with small regions for a nose, a finger, a tongue and an ear, and a very large region for an eye

According to this graph, 75% of what we sense is visual. That isn’t entirely accurate, but it does get Van den Berg’s point across: visualising can be a very powerful tool.

On her blog Van den Berg shows a more nuanced image:

A pie chart showing one third for purely visual processing, one third for vision combined with other senses, and one third for everything else

Here it says that according to neuroscience one third of our brain activity is purely visual, one third is visual in combination with another sense, and then there’s the rest.

I tried to find data that backs this idea up. I haven’t been able to find a scientific article that arrives at exactly this number, but I did find a very interesting article by the Max Planck Institute. According to this study, when people converse in their day-to-day lives they often speak about what they hear, smell, taste or feel. First and foremost, however, they talk about their visual perceptions.[1] This holds for all 13 languages from around the world that the researchers studied.

One of my assumptions about why we’re not very good at designing interfaces for alternative ways of input and output, like keyboard navigation or screen readers, is that designers usually design for themselves. But these numbers support a much simpler reason: processing visual information is a large part of human nature. This means we’re good at consuming it, and it probably also means we’re good at creating it. And since vision plays such a dominant role in the hierarchy of our senses, it’s also relatively easy to present information visually.

This would mean that interfaces for screen readers are barely functional not only because we haven’t really tried, but also because it’s much harder to reach the same level of clarity without the help of visuals. This supports my idea that we need to study non-visual interfaces much harder.


  1. San Roque, Lila, et al. 2014. Vision Verbs Dominate in Conversation across Cultures, but the Ranking of Non-Visual Verbs Varies. Cognitive Linguistics. doi:10.1515/cog-2014-0089.  ↩
