
AI beyond bias: Inclusive design in the era of Generative AI

Artificial intelligence is transforming the way we live and work. While it promises innovation, it must also serve as a driver of inclusion rather than a force for inequality. Only through conscious design and ethical guidance can we ensure fairness and accessibility, embedding inclusion as a fundamental principle of technological development.

Stefania Berselli, Maria Grazia Cilenti
Executive Design Director, Design & Inclusion Director

27.02.2025 - 10 min read
[Image: Human hands blending into abstract shapes and particles, evoking connection, transformation, and diversity.]

AI has now permeated nearly every aspect of our daily lives: it organizes our schedules, filters information, shapes our entertainment, assists us with tasks, and even influences our job opportunities. This deep integration brings not only technological advancements but also significant responsibilities. We face a paradox: AI systems often inherit—and in some cases, exacerbate—deep-seated societal biases.

These are not mere technical glitches. When Google Photos mistakenly labeled Black people as “gorillas”, echoing a racist slur, or when voice assistants default to female voices, reinforcing the stereotype that casts women in caregiving and support roles, we witness the algorithmic crystallization of bias. These biases are embedded in training data and validation processes, and they reflect underlying social dynamics.

The real innovation in AI design lies in turning these challenges into opportunities. Guided by inclusive design principles, AI can become a powerful tool for breaking down barriers and creating universally accessible digital experiences. Beyond simply using diverse datasets, it is crucial to consider how we instruct AI, particularly the way it is prompted. The way we frame a request directly affects the outcomes we get: AI does not spontaneously generate accessible and inclusive solutions; it must be explicitly guided in that direction.
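To make this concrete, here is a minimal sketch of what “explicit guidance” can look like in a prompt. The helper function and the constraint wording are our illustration, not a prescribed formula; the output string would be passed to whatever text-generation client you use.

```python
# Minimal sketch: the same design request, with and without explicit
# inclusion constraints. build_prompt() and its wording are illustrative.

def build_prompt(task: str, inclusive: bool = False) -> str:
    if not inclusive:
        return task
    constraints = (
        " Constraints: meet WCAG 2.2 AA contrast ratios, label every "
        "control for screen readers, support keyboard-only navigation, "
        "avoid gendered defaults, and write error messages in plain language."
    )
    return task + constraints

task = "Design a signup form for our mobile app."
print(build_prompt(task))                  # generic request
print(build_prompt(task, inclusive=True))  # explicitly guided request
```

The second prompt turns inclusion into an explicit acceptance criterion, rather than hoping the model volunteers it.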

The key lies in adopting a “hybrid model”, where Human Intelligence (HI) and Artificial Intelligence (AI) work in synergy. AI can enhance efficiency, improve work quality, and expand our skills, but human involvement must remain central to decision-making.

In this framework, AI acts as a sophisticated collaborator throughout the design process—from initial research to final implementation. It can rapidly analyze vast datasets and generate multiple ideas, but human intelligence is irreplaceable in enriching data with diverse perspectives, critiquing and validating results, applying emotional intelligence, and driving innovation. Our responsibility is to ensure that AI adoption is guided by active listening, integrating users’ needs, perspectives, and diversity at every stage of design. Only by involving people directly in design through dialogue and research can we transform technology into a true tool for humanity.

This approach is especially crucial when considering AI’s limitations, particularly in user research and validation testing. Synthetic users, generated through machine learning models trained on large datasets, offer the tempting promise of quick, scalable feedback. However, they struggle to capture the crucial nuances of human experience, especially when it comes to understanding the needs of underrepresented or marginalized groups. Synthetic users cannot replicate human unpredictability or explain the emotional reasoning behind their choices; instead, they reflect dominant patterns present in training data. This creates a feedback loop that risks perpetuating the very exclusions we aim to eliminate. Therefore, designing with people is essential for reducing discrimination.
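A toy simulation, with invented proportions, shows why this feedback loop forms: when the training data skews toward mainstream users, a synthetic panel drawn from it almost never voices minority needs.

```python
# Toy illustration, with invented proportions: synthetic "users" sampled
# from skewed training data rarely surface minority needs.

import random

# Suppose 92% of the hypothetical training data reflects mainstream usage
training_mix = ["mainstream"] * 92 + ["screen_reader_user"] * 8

# Draw a synthetic panel of 20 "participants" from that distribution
panel = [random.choice(training_mix) for _ in range(20)]
print(panel.count("screen_reader_user"),
      "of 20 synthetic users raise screen-reader needs")

# On most runs only one or two do, so accessibility issues that research
# with real, diverse participants would catch never enter the feedback loop.
```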

Data is the lifeblood of AI, but it carries a hidden risk: it can be both a tool for inclusion and a mechanism for discrimination. As data and communication expert Donata Columbro emphasizes:

“Algorithms do not discriminate on their own—bias emerges from the data and the assumptions we embed in them. These data-driven decisions influence everything, from public policies to workplace dynamics to digital services, but data is never neutral: it reflects societal inequalities and biases.”

Donata Columbro, journalist and writer

This awareness is crucial: no dataset is neutral because it mirrors existing power structures and inequalities in society. If the data is biased, the AI trained on it will be biased too. And since AI makes decisions that affect our lives—from access to healthcare services to career trajectories—this issue is not just technical but profoundly ethical.

A striking example is a healthcare triage algorithm in the United States that assigned lower priority to Black patients because it used historical healthcare spending as a proxy for medical need. Since systemic inequalities meant that less had historically been spent on Black patients’ care, the system unjustly excluded them from essential care.
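A minimal sketch, with made-up numbers, of how that proxy does its damage: the ranking below never sees “need” at all, only spending.

```python
# Illustrative numbers only, not from the study: two patients with equal
# medical need, where systemic barriers meant less was spent on one of them.

patients = [
    {"id": "patient_a", "true_need": 8, "past_spending": 9000},
    {"id": "patient_b", "true_need": 8, "past_spending": 5500},
]

# A model trained to predict spending effectively ranks patients by it:
triage_queue = sorted(patients, key=lambda p: p["past_spending"], reverse=True)
for p in triage_queue:
    print(p["id"], "need:", p["true_need"], "proxy score:", p["past_spending"])

# Equal need, unequal priority: the bias lives in the choice of target
# variable (cost as a proxy for need), not in the learning algorithm itself.
```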

[Image: Aurora Mititelu explores machine learning as a human-machine symbiosis, emphasizing the ongoing need for human involvement in data enrichment, algorithm design, and governance.]

How can we build a more inclusive future?

The solution lies in a methodological and conscious approach that starts with using AI as a tool for exploration rather than the final arbiter of decisions. This means:

- building more representative data pipelines to ensure diverse and equitable inputs;
- implementing rigorous validation processes that integrate multiple perspectives and actively challenge biases (one such check is sketched below);
- maintaining an ongoing dialogue with the communities our products aim to serve.
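As one concrete instance of the validation step above, here is a hedged sketch of a group-rate comparison. The evaluation data and the 80% rule of thumb are hypothetical stand-ins; a real audit combines several metrics with research involving real users.

```python
# A minimal sketch of one validation step: comparing a model's
# positive-outcome rate across user groups on a hypothetical eval set.

from collections import defaultdict

# (group, model_said_yes) pairs from a hypothetical evaluation set
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, positive in results:
    counts[group][0] += int(positive)
    counts[group][1] += 1

rates = {g: pos / total for g, (pos, total) in counts.items()}
worst, best = min(rates.values()), max(rates.values())
print("selection rates:", rates)
if worst < 0.8 * best:  # disparate-impact rule of thumb
    print("flag: review data and design with affected communities")
```

A large gap does not prove discrimination by itself, but it is exactly the kind of signal that should trigger investigation before anything ships.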

Creating truly multidisciplinary teams is essential, as only through diverse perspectives can we identify and correct biases before they become embedded in our systems.

Inclusion in AI-driven processes and experiences is not a one-time achievement but an ongoing process of learning, adaptation, and improvement. It requires continuous effort in critically supervising AI outputs, regularly reviewing them to identify exclusions, and validating results with real users from diverse backgrounds.

At this technological crossroads, we have a clear choice: we can allow AI to amplify existing inequalities, or we can consciously guide it toward building a more equitable and inclusive future.

Technology is a powerful tool, but its societal impact will be determined by our ethical guidance and inclusive vision. Inclusion is not an accidental outcome but a deliberate decision that must be integrated into every phase of design. The future of AI is shaped by the choices we make today. By combining human empathy and intelligence with AI’s analytical capabilities, we can develop truly accessible technologies that serve the needs of all people, regardless of background, ability, or circumstances.

Inclusion is a choice—and so is exclusion. The difference lies in our vision, responsibility, and willingness to act with awareness.