Sketchin

AI-Enabled Experiences: a designer team’s logbook

In this logbook, we share what we learned while creating conversational assistants, with ten practical lessons for designers ready to take on these challenges.

Gea Sasso & Alberto Andreetto
Service Designer & Interaction Lead Designer

01.10.2025 - 10 min read
Illustration of an iceberg used as a metaphor: the end user sees only the tip, while beneath the surface lies the hidden complexity of an AI system made up of technologies, data, and professional interactions.

This article is a short guide written by designers for designers.


It’s not a technical piece or an academic paper, but an honest account of what we learned working on projects where generative AI and conversational agents weren’t just one tool among many, but the real core of the designed experience.

If you’re looking for a catchy quote to share on LinkedIn, this article might not be for you. Here we talk about hands-on design, the kind made of trials, mistakes, and constant iterations. It’s a story meant for those who, like us, are “in between”: between strategy and the final outcome, between vision and implementation.

Recently, we worked on two very different projects, both driven by the same goal: building a conversational AI assistant that is useful, accessible, and aligned with people’s real needs.

  • Nexi – Alberto worked on a virtual customer care assistant. It was a 9-month project where, together with Nexi’s internal Design Studio, we analyzed different business areas to understand where generative AI could have the greatest impact on end users. After evaluating KPIs and ROI, customer support turned out to be the most promising field: the goal was to move from generic support to a tailor-made service, able to provide targeted and relevant answers.
  • Lenovo – Gea worked on LIA (Lenovo Intelligent Assistant), a 6-month project in the B2C retail context. We have been collaborating with Lenovo for many years on several initiatives, but this was an experimental stream, created to test the application of AI in retail spaces and understand what real value it could bring.
    LIA is an AI assistant that simplifies product selection within a very broad catalog, adapting language and content to the user’s level of expertise. The experience combines a modular interface, light animations, and a distinctive physical presence in-store. The system integrates multiple layers (from data sources to frontend) and is built on an AI architecture able to understand user needs and return concise, technical, and personalized answers.

After explaining “how it’s done” too many times to colleagues and curious friends, we decided to gather what we learned into ten points and share it with others who, like us, are facing these challenges from scratch (or almost). These are not absolute truths, but lessons born from attempts, successes, and above all mistakes. Maybe they’ll be useful for you too.

01. Concreteness first

A conversational assistant only makes sense if it’s useful. And to be useful, it has to be clear, immediate, and action-oriented. The user wants to get an answer in just a few steps, without unnecessary extras. This means designing simple interfaces, short but sharp responses, and smooth interactions.

Behind this apparent simplicity, though, lies a complex structure: intents, datasets, smart fallbacks, continuous iteration between design, data, and development. All of this has to stay backstage. The user should never notice it. What matters is that the assistant enables a new way of conversing, just as natural and immediate, but also consistent with the brand’s identity and values.

Tips for designers

  • To get a feel for interactions and content, start by writing answers as if you were quickly chatting with a colleague: no more than three sentences, verb at the beginning, and a closing action (“Want to see the models?”, “Here’s what you can do now…”).
  • For the interface, make sure to map all the sites and properties where your assistant will live. Define the right placement for each of them, and design accordingly based on how much visual attention it will get.
  • Once you’ve imagined the flow, try to reduce the number of steps needed to reach an answer: every extra click or input is unnecessary cognitive effort. Plan shortcuts and smart suggestions, and decide whether your assistant should be proactive and engaging.

02. Language is design (and vice versa)

Writing for an AI assistant is not the same as writing for a website or an app. Every word, every tone, every pause is part of the experience. The language has to adapt to the audience while staying consistent with the brand identity.

This means working on the assistant’s voice, not as sound, but as personality: attitude, style, and way of responding. With LIA, for example, we defined a professional yet simple tone, in line with the experience of a retail store. For Nexi, instead, we designed a reassuring voice: authoritative, but without being formal or distant.

Chart showing the parameters used to define the tone of voice of the Nexi virtual assistant: reassuring, authoritative, and at the same time accessible, not formal.

Tips for designers

  • From the start, create a guideline document with tone, words to avoid, and good vs. bad examples. Use it as a living, updatable reference for the whole team. It doesn’t need to be perfect, but it’s essential to align and collaborate even before you have a testable product.
  • Write answers as if they had a body, a posture, an intention. Always ask yourself: “What attitude am I conveying?” If it helps, try role play: imagine how other brands (especially competitors) would express the same concept. Or choose a public figure as inspiration for your bot’s style and tone. At Nexi, for instance, we looked at Cecilia Sala’s way of speaking: direct, friendly, young, but also authoritative and reliable.
  • If you’re still unsure about the direction, you can always run A/B tests with different tones of voice for the same answer. You’ll quickly see what builds trust, clarity, or empathy depending on the context. You can test everything, from reply formats to character traits. Pro tip: it can even become a fun exercise for the team to check if everyone imagines the assistant the same way, with the same voice, tone, and personality.

03. Animation is content

When designing an assistant, identity, visuals, and animations play a key role. They’re not decorative extras but help make AI feel tangible, whether in a digital or physical context.

With LIA, we chose to give the assistant a visual presence, without falling into the trap of over-personification. With Nexi, on the other hand, the absence of strong animations, personification, or branded elements was a deliberate choice. Even absence is a message: it allows users to focus more on the content. Every movement, every pause, every transition tells something about the experience you’re offering.

Tips for designers

  • Define when and why the assistant (whether avatar or interface) moves, and how those movements are (or aren’t) part of its identity. Every animation must have a purpose, such as marking a pause or confirming an action.
  • Response times can easily kill the “mood” and make attention vanish in an instant. Use micro-delays and transitions to handle AI timing: they cover dead moments and make waiting feel less cold.
  • Avoid overly human gestures or movements unless they’re a defining trait of the agent you’re designing. Make the assistant feel alive, but not caricatured. Aim for a sober, recognizable balance that fits the content, the brand, and user expectations.

04. AI is more than just a feature, it’s a whole system

An AI assistant isn’t just a function you turn on with a click. It’s a complex system where the interface is only the tip of the iceberg. Everything else (data ingestion, intent mapping, reducing unnecessary calls, fallbacks, and the UI) needs to be designed in an integrated way.

Teamwork is essential here; no one can work alone. You need a multidisciplinary team including designers, developers, data analysts, and product owners. You also need an iterative process: design, test, adjust. More times than we’d like to admit.

Eight-layer chart representing the complexity of an AI system, including technologies, infrastructures, and human support.

Tips for designers

  • Knowing where the assistant will be used is fundamental to its design. Depending on the platform, many things can be completely different. This affects the design environment, the interface, and even how interactions start and end within the broader experience.
  • Design the entire conversational ecosystem, not just the screens: include intents, fallbacks, voice interactions, and human escalation. Think especially about what happens before and after the interaction with the assistant.
  • Organize co-design sessions between design, development, and data teams: a change to one intent can affect the entire experience. To make it more engaging, try switching roles within the team and revisit decisions already made: how would someone from data design this micro-interaction?
  • Document the assistant’s evolution over time with dynamic flow maps. They’ll help you track discarded options, considered alternatives, or ideas you initially set aside that might become useful later.

05. The cost of AI

Generative AI is powerful, but it comes at a cost, not just financially, but also in terms of efficiency and environmental impact. In some cases, deterministic responses work better: they’re faster, more reliable, and less expensive.

In the Nexi project, for example, only 37% of over 130,000 monthly intents are handled with generative AI. The rest operates deterministically, without using AI for every single request. This approach helps control costs while maintaining high performance, especially for a tool accessed by tens of thousands of users every day, 24/7.

Representation of the customer journeys used to design the Nexi virtual assistant.

Tips for designers

  • Identify “repetitive” tasks that can be automated with fixed rules, using lists of common user queries (e.g., FAQs, hours, product info). This reduces calls to AI by providing deterministic answers.
  • Pair each answer with a follow-up question to shorten interaction times, lower costs, and guide users through a proactive conversation.
  • Create dashboards with cost metrics per interaction. Use them to decide what to optimize, reduce, or disable. This makes it easier to deliver the desired answer more efficiently.
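The deterministic-first approach described above can be sketched as a simple router: known intents get a fixed answer, and the expensive generative call happens only when no rule matches. This is a minimal illustration, not the Nexi architecture; the intent names and the `call_llm` helper are hypothetical placeholders.

```python
# Deterministic-first routing: fixed answers for known intents,
# generative fallback only when no rule matches.
# Intent names and call_llm are illustrative placeholders.

DETERMINISTIC_ANSWERS = {
    "opening_hours": "We're open Mon-Sat, 9:00-19:00.",
    "return_policy": "You can return any product within 30 days.",
}

def call_llm(question: str) -> str:
    # Placeholder for the (slower, costlier) generative call.
    return f"[generated answer for: {question}]"

def route(intent: str, question: str) -> tuple[str, str]:
    """Return (source, answer): 'rule' if deterministic, 'llm' otherwise."""
    if intent in DETERMINISTIC_ANSWERS:
        return "rule", DETERMINISTIC_ANSWERS[intent]
    return "llm", call_llm(question)

source, answer = route("opening_hours", "When are you open?")
# source == "rule": no generative call was made for this request
```

Logging the `source` value per interaction is also a cheap way to feed the cost dashboard: the ratio of `rule` to `llm` answers tells you directly how much generative spend you are avoiding.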

06. Data: the real elephant in the room

Data is the foundation of any assistant. It’s the knowledge base that the assistant relies on to generate responses. Often, it’s the biggest challenge: data can be messy, unstructured, or inaccessible. Before thinking about crafting a good conversation, you need to get your data in order.

In some cases, collecting, cleaning, organizing, and updating data becomes a project within the project. And no, opening unrestricted access to the internet is not the solution. The more open the system, the higher the risk of hallucinations or inaccurate answers.

Data quality remains the real critical point, especially when dealing with company information to ensure consistency with the brand. For example, if product line data is inaccurate or outdated, the assistant might provide wrong information or even redirect users to content that doesn’t accurately represent the brand.

The complexity of the data that powers the LIA assistant, hidden behind the user interface.

Tips for designers

  • Take a structured inventory of your data before starting: list what exists, what’s missing, and the quality or accessibility of each data point.
  • Clean and normalize content into AI-readable formats: any anomaly can compromise responses.
  • Define the scope of your assistant’s knowledge in advance: it’s better to have limited but reliable knowledge than broad but potentially incorrect information, especially when it comes to company data or unverifiable information online.
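A structured inventory like the one above can start as a very small audit script: for each record, note which fields are missing and which content is stale, so the team can fix the data before the assistant ever reads it. This is a sketch under our own assumptions; the field names and the 180-day staleness threshold are illustrative, not from either project.

```python
# Minimal data-inventory pass: flag missing fields and stale content.
# Field names and the staleness threshold are illustrative assumptions.

from datetime import date

REQUIRED_FIELDS = ("name", "price", "description", "updated")
MAX_AGE_DAYS = 180  # assumption: older content is considered stale

def audit(record: dict, today: date) -> list[str]:
    """Return a list of human-readable issues found in one record."""
    issues = [f"missing: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    updated = record.get("updated")
    if updated and (today - updated).days > MAX_AGE_DAYS:
        issues.append(f"stale: last updated over {MAX_AGE_DAYS} days ago")
    return issues

record = {"name": "Example laptop", "price": None, "updated": date(2024, 1, 10)}
print(audit(record, today=date(2025, 10, 1)))
# flags the missing price and description, plus the stale timestamp
```

Running this over the whole catalog gives you the “what exists, what’s missing, what’s unreliable” map before any conversation design starts.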

07. Testing is the hardest (and most useful) part

Testing an interface is relatively simple. Testing a conversational assistant is much more complex. There are thousands of possible combinations, and you can’t predict everything. It takes weeks of testing, trial runs, and error analysis.

For Nexi, nearly half of the project time (4–5 months out of 9) was dedicated to testing: we ran prompt tests, simulations, collected user feedback, and iterated continuously. It’s a long and not always rewarding process, but it’s what makes the difference between an assistant that truly works and one that leaves you hanging mid-sentence, or even gives a wrong answer or one that doesn’t align with the brand’s values.

Excel sheet representing the complexity of the Nexi customer support service.

Tips for designers

  • If you can’t test extensively with real users, build a conversational test suite with realistic, imprecise, and poorly phrased questions. Simulate real users, not ideal ones. Use the flows you designed at the start to follow all branches and ensure the assistant provides the intended answers.
  • Keep a conversation log to classify errors: language, tone, understanding, or data. Every mistake is a goldmine of information.
  • Conduct qualitative tests with real users, in person or remotely. Body language often tells more than words, and you can always ask follow-up questions to better understand expectations and frustrations.
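The conversational test suite from the first tip can live in a few lines of code: realistic, badly phrased questions paired with keywords the answer must contain. This is a minimal sketch; `ask` is a placeholder for the real assistant endpoint, and the canned answers exist only so the example runs on its own.

```python
# Tiny conversational test suite: messy, realistic questions paired with
# keywords the assistant's answer must contain. `ask` is a stand-in for
# the real assistant call; the canned replies are illustrative only.

TEST_CASES = [
    ("wat laptop 4 gaming??", ["gaming"]),
    ("hours", ["open"]),
    ("can i giv it back", ["return"]),
]

def ask(question: str) -> str:
    # Placeholder: replace with the real assistant endpoint.
    canned = {
        "wat laptop 4 gaming??": "For gaming, look at the gaming line.",
        "hours": "We're open Mon-Sat, 9:00-19:00.",
        "can i giv it back": "Yes, you can return it within 30 days.",
    }
    return canned.get(question, "")

def run_suite() -> list[str]:
    """Return a list of failures; an empty list means all cases passed."""
    failures = []
    for question, keywords in TEST_CASES:
        answer = ask(question).lower()
        for kw in keywords:
            if kw not in answer:
                failures.append(f"{question!r}: answer missing {kw!r}")
    return failures
```

The failure list doubles as the conversation log from the second tip: each entry tells you which question broke and whether the problem was understanding, data, or tone.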

08. Design is also education

An AI assistant doesn’t just respond; it educates and guides user choices. And with that comes responsibility. Designing a clear, ethical, and understandable experience also means helping users navigate what they can and cannot ask. Transparency, simplicity, and an approach that considers different levels of attention or knowledge are essential. Even behind an automatic response, there’s a design intention that can improve (or worsen) the user’s experience.

LIA answers a user’s question by clarifying that Lenovo Yoga devices have no gender distinction.

Tips for designers

  • Design a conversational onboarding that immediately shows users what they can do and how. Natural, conversational interactions help users get oriented and make the onboarding more effective.
  • Avoid vague messages like “How can I help?” Be specific, e.g., “I can help you choose a computer or find a compatible accessory.” This reduces costs while increasing user understanding and clarity.
  • Include responses that positively acknowledge the AI’s limits: “I can’t do X yet, but I can help you this way…” It’s not about the assistant being able to do everything, but about doing what it was designed for accurately and excellently.

09. Reduce overload, maximize understanding

Every word counts, and every image must serve a purpose. The goal is to deliver the message as simply as possible. We worked a lot on balancing text, images, and voice. Users don’t have time or patience to read long paragraphs. Design must consider cognitive limits and support comprehension, even when users are multitasking. A good assistant is one that doesn’t make you think too much but helps you get what you need.

LIA interface displaying Lenovo product search results.

Tips for designers

  • Write in a pyramid structure: start with the answer, then add details, and finally provide deeper information if requested. This simplifies the conversation flow and reduces the number of interactions. A user who quickly finds what they need is a happy user.
  • Simplify visuals and text until they feel almost too minimal, then remove a bit more. If what’s left still feels essential, you’re probably at the right level.
  • Test interfaces in real conditions, on the move or while multitasking. If it works there, it will work anywhere. You can even conduct some user tests in these scenarios.
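The pyramid structure from the first tip can be modeled as progressive disclosure: the direct answer first, details and deeper content only on request. A minimal sketch follows; the class, field names, and example values are our own illustration, not part of LIA.

```python
# Pyramid-structured answer with progressive disclosure: the lead is
# shown immediately, details and deep-dive content only on request.
# Class and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PyramidAnswer:
    lead: str       # the direct answer, shown first
    details: str    # shown if the user asks for more
    deep_dive: str  # full technical content, on explicit request

    def render(self, depth: int = 0) -> str:
        """depth 0 = lead only, 1 = lead + details, 2 = everything."""
        parts = [self.lead, self.details, self.deep_dive]
        return "\n\n".join(parts[: depth + 1])

answer = PyramidAnswer(
    lead="This model weighs 1.4 kg.",
    details="That makes it one of the lightest 14-inch options in the range.",
    deep_dive="Aluminium chassis; travel weight with charger is higher.",
)
print(answer.render())         # lead only
print(answer.render(depth=2))  # full pyramid
```

Rendering at depth 0 by default keeps every reply short; the user (or the UI) decides when to go deeper, which is exactly the cognitive-load trade-off this section argues for.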

10. Do or do not, there is no try (Yoda)

Many things that seem obvious to us today were learned through mistakes. No course or article (even this one) can replace direct experience. We learned by collaborating with developers, iterating on conversations, and reviewing answers a thousand times. And we’re still learning. The best advice we can give? Start. Try. Fail. Repeat.

Tips for designers

  • At the beginning, set up a small experimental environment to test new features, even unfinished ones: it’s better to fail on a small scale. You can even create GPTs to help, you’ll see it’s really simple.
  • Establish a shared process with developers to log and analyze every mistake, turning them into opportunities for improvement. Use gamification to make it fun, like a points game or a fantasy-league contest.
  • Schedule monthly retrospectives: what did we learn? What didn’t work? What will we try next? This will help you develop your own way of working and collaborating on new challenges.

Conclusion

Designing an AI assistant is, above all, a team effort. There are no rigid roles or clear boundaries: interaction designers, visual designers, copywriters, and developers must all work together, share decisions, debate, and be willing to change their minds. No one designs alone, and no one reaches the finish line by themselves. In our projects, the most effective solutions emerged during intense collaboration, when we combined different skills and perspectives.

We wrote answers together, tested conversations in Excel sheets, discussed animations in Figma, and refined microcopy directly in the code. The final result was never a single moment of inspiration, but the sum of many shared iterations.

If there’s one takeaway, it’s that conversational design isn’t just a new type of design. It’s a new way of collaborating and being a designer within a team.