

Humans, AIs and 5 design rules for a changing World

Humans are now living and working side by side with Artificial Intelligences. Experience has taught us five rules for designing in this new context.


18.02.2020 - 7 min read

AIs are increasingly present in our everyday lives, and exponential technologies are defining the boundaries of a mostly unknown land. As designers, we are puzzling over how to cope with this new scenario. Starting from our experience, we have found five rules that help us design when AIs are involved.

How do you design the right interaction between a human and an artificial intelligence? But before that, what do we mean by AI? Let’s forget omniscient and malignant entities like HAL 9000 and think more simply of those we deal with every day: the navigator, chatbots, virtual assistants, the robot vacuum cleaner, the systems that suggest goods to buy or which series to watch in the evening…

They have been living with us for years, and as a studio we have been working on expert and cognitive systems for just as long. During this time, we have developed a series of reflections on the subject.

Our journey in search of these rules starts from a book, and a very important one, since it states some of the axioms of modern physics, such as linear motion, inertia and momentum. It is the “Discorsi e dimostrazioni matematiche intorno a due nuove scienze” (“Discourses and Mathematical Demonstrations Relating to Two New Sciences”), written in 1638 by Galileo Galilei, the father of modern science.

The story of this book is an adventure in itself: it was published in secret after the Roman Inquisition charged the author with heresy and, after the sentence, forbade him from publishing any other work. But Galileo was a rebel and published this book anyway.

This book reports fictional dialogues between the same characters as the more famous Dialogo sopra i due massimi sistemi del mondo: Simplicio, Salviati and Sagredo. Each of them embodies a way of thinking and an attitude towards reality. Salviati states for the first time the principles of the modern scientific method: he believes in the power of observation and experimentation. Simplicio is a traditionalist who believes the traditional sources are true; his knowledge is deduced from those sources. Sagredo tries to mediate between these two opposing orientations, being interested in the technical and economic outcomes of the new sciences.

AI and Humans: attitudes, mental habits and biases

The same visions embodied by the protagonists of Galileo's last work can be found in our everyday life, and they describe how human beings and AIs cope with reality.

Humans act like Simplicio, while AIs are like Salviati. Their approaches are not mutually exclusive. They are complementary and fill the gaps of one another.

Human beings, like Simplicio, base their knowledge on experience and tradition (understood in a broad sense as the things we are taught) and use that information to build models of reality that allow them to simplify decisions and react promptly to an ever-changing context.

They deduce their knowledge from cognitive models: philosophers such as Carl Gustav Hempel have named this way of thinking the deductive-nomological model. The benefits of this mental attitude are:

  • Intuition: we can use empathy and sensation to find new correlations between facts and events.
  • Deduction: we can create new models and use them to cope with unexpected or unknown events.
  • Creativity: we can conceive new ideas, and add something completely new to what we know.

On the other hand, this way of thinking has several limits. We can process only a small and incomplete amount of data, and thus our judgment is biased. We think by using prejudices, which lighten our cognitive burden but may also lead us to wrong conclusions.

We are irrational and driven more by instinct, passions and emotions rather than logic.

Machines, like Salviati, think differently, according to a model called statistical-inductive. They find statistical correlations in data and use them to infer solutions (the most rational way to allocate money for an investment, for example, or the fastest route from one point to another). Like the other attitude, this one has pros and cons.

AIs are purely logical, their inferences are objective, and they can handle huge amounts of data and explore a wider context.

However, AIs are deeply influenced by the quality of the data that feeds their “mind” and are helpless in the face of unexpected or completely new events.
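The statistical-inductive attitude can be sketched in a few lines of code. This is a minimal, invented example, not a real navigator: the machine holds no model of traffic; it simply fits a line to past trips and infers the fastest route from the correlation it found in the data.

```python
# Sketch of statistical-inductive "thinking": infer the fastest route
# purely from correlations in past trips. All data here is invented.

def fit_line(xs, ys):
    """Least-squares fit: minutes ~ a + b * traffic."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # (intercept a, slope b)

# Past trips: observed traffic level vs. minutes taken on each route.
history = {
    "ring road":   ([10, 30, 50, 70], [12, 14, 25, 40]),
    "city center": ([10, 30, 50, 70], [20, 21, 22, 24]),
}

def fastest_route(traffic_now):
    def predicted(route):
        a, b = fit_line(*history[route])
        return a + b * traffic_now
    return min(history, key=predicted)

# Light traffic favours the ring road, heavy traffic the city center:
# the advice comes entirely from the correlation in the data, with no
# understanding of why one route degrades faster than the other.
```

This is also where the weakness shows: feed the model bad or unrepresentative trip data, and its confident inference is simply wrong.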

That is why it is so difficult to design a service that involves both humans and artificial intelligence.

Our first AI-based agent — A little story of failure

We should have thought more about these different attitudes when we designed our first Robo-for-Advisor. Long story short, it didn’t go very well.

We designed a tool to advise the financial advisors of a private bank and increase the effectiveness of their work. Our agent would analyze investment proposals and suggest to its human partner how to manage the client portfolio and how to maximize investments, up to a theoretical +84%.

The UX of the tool was excellent and those who used it actually improved their performance by 30%. So what went wrong?

Simply put, the vast majority of financial advisors did not use it. They were afraid that the Robo-for-Advisor could replace them in front of clients, or report all their actions to management. But it would have gone wrong anyway, above all because the relationship between the human advisor and the AI was too rigid. There was only one in command, either the human or the AI, and the latter imposed the force of its inferences on the human being. The instrument excluded the advisor's touch and intuition.

The end customer could have reservations (ethical or simply prejudicial) about a proposed investment and not intend to use their money accordingly. It would not be a rational choice, but it would remain an acceptable one, endowed with meaning and value. At that point, the advisor had no choice but to silence his artificial partner and do without the superpowers of the Robo-for-Advisor.

Moreover, the tool was not designed to learn from experience or to take into account the financial advisor’s indications, and so it remained there: powerful, feared and largely useless.

Sagredo and the role of design

So, what should a designer do when dealing with complex systems? The key is Sagredo, the third character in Galileo's last work, the one who mediates between the mental approaches of the other two.

Design has to get the best out of these two approaches to reality: the flexibility, creativity and adaptability of the human being combined with the inferential and computational power of machines.

In other words, designers should consider Humans and AI as part of a dynamic system and focus not on the interfaces of the service, but rather on a balanced relationship between the two entities.

We already have a beautiful example of this: the GPS navigator. It is the oldest AI we use, and we are now accustomed to it. We trust its suggestions. The navigator is proactive, respects our choices and changes its suggestions according to an evolving context.

Our relationship with our GPS is based on trust: we rely on its suggestions while remaining confident that we are in control of our route.

5 rules to design a balanced relationship between humans and AIs

We now live in a world where humans live side by side with AIs, and they often work together. Designing for this new context requires some rules. The following are those we have learned so far:

  • Grant the openness and completeness of information: all the information behind an AI's choice must be made visible to the human being, so that they can accept the choice (e.g. the GPS shows there is an accident ahead, which explains the change of route).
  • Humans have the right to choose freely: humans must be the only ones able to make the most important choices; the final choice must rest with the human regardless of what the machine says.
  • Create a dialectical, collaborative and trusting relationship between humans and AIs: trust is built through a lasting relationship and a series of positive encounters. Humans must be able to ignore the AI's indications, and the AI must adapt to this without losing its completeness.
  • Quantify the value of AIs: the value of an AI must be made tangible and thus become understandable and acceptable (how much money can I earn? How many minutes can I save if I trust my GPS?).
  • Protect privacy: users must be sure that they can trust their computer counterpart. The information they exchange must be available only to the user and to no one else: I need to be sure that my GPS will not tell the police every time I break the speed limit, for example.
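The first three rules can be sketched as an interaction pattern. This is a hypothetical illustration, not our actual tool: every suggestion carries the reason behind it (openness), the human always makes the final call (free choice), and rejections feed back into future rankings without ever hiding an option (adaptive, complete dialogue). All class names, options and weights are invented.

```python
# Hypothetical human-in-the-loop pattern: the agent proposes, the human
# disposes, and the agent adapts its ranking after each rejection.

class Suggestion:
    def __init__(self, option, score, reason):
        # Every suggestion exposes its reason (openness rule).
        self.option, self.score, self.reason = option, score, reason

class CollaborativeAgent:
    """Proposes ranked options but leaves the final choice to the human."""

    REJECTION_PENALTY = 2  # how strongly a human override softens a pick

    def __init__(self):
        self.penalty = {}  # option -> accumulated penalty from rejections

    def rank(self, suggestions):
        # Adapt the order after rejections, but never drop an option:
        # the list stays complete, only the ranking changes.
        return sorted(suggestions,
                      key=lambda s: s.score - self.penalty.get(s.option, 0),
                      reverse=True)

    def record_choice(self, chosen, proposed):
        # The human's choice is final; when it differs from the proposal,
        # learn from the rejection instead of insisting next time.
        if chosen != proposed:
            self.penalty[proposed] = (self.penalty.get(proposed, 0)
                                      + self.REJECTION_PENALTY)

agent = CollaborativeAgent()
options = [Suggestion("highway", 10, "saves 12 minutes"),
           Suggestion("coast road", 9, "saves 9 minutes, lighter traffic")]

first = agent.rank(options)                         # agent leads with "highway"
agent.record_choice("coast road", first[0].option)  # the human overrides
second = agent.rank(options)                        # now "coast road" leads
```

Had our Robo-for-Advisor worked this way, silencing it would never have been the advisor's only option: the tool would have bent to the human choice and kept offering its superpowers.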