

The AI Assemblage
A Human-Centric AI Consultancy


Our Approach
We are a human-centered AI consultancy. We help organisations develop AI-powered products and services that are not only technically sound, legally compliant and built to Responsible AI best practices and standards, but also aligned with human values, socially attuned, responsive, and ethically grounded.
We help you build AI systems that are Responsible by Design, meaning that Responsible AI principles are embedded throughout the entire AI lifecycle.

Why This Matters
The benefits of our people-first approach:
- You ensure that your products and services are aligned with your values as an organisation.
- You avoid wasting time building an AI solution that people will not adopt because it does not meet their needs.
- You build trust, loyalty and credibility with your users and customers.
- You stay compliant with relevant legislation as regulations and standards rapidly evolve.
How We Do This
- We put empathy first: we engage with your potential users and stakeholders to consider not just their needs in context, but also the potential wider impact on communities and society.
- We adopt an iterative, agile and inclusive process: we build in feedback loops that enable your AI system to respond to evolving user needs and to changing social, cultural, legal and environmental standards.
- We take a socio-technical systems perspective: we consider existing user behaviours, internal workflows and processes, and organisational culture to ensure a good fit.
- We are future-oriented: we explore future scenarios in co-creation workshops, using design techniques to consider evolving technical capabilities so that you can innovate and experiment with a clear direction.
What We Offer
Let’s design a better future together. Please get in touch to discuss how we can help you build responsible AI-powered products and services for the benefit of your users, your customers, and everyone. We tailor all our services to your unique requirements and specific context, and we offer both one-off services and ongoing support.

Problem-Framing and Ideation
Our approach is informed by deep, contextual empathy with users. This may involve co-creation workshops, inclusive stakeholder mapping, or reframing the problem statement from the users’ perspective. This helps you articulate and validate your initial assumptions, explore the impact the solution may have, and ultimately build a product that people will love and find useful.
Value Alignment
It is imperative to ensure that the AI system’s objectives align with human values and intentions. We develop and co-create bespoke Responsible AI principles aligned with your organisation’s overall vision and strategy. Upon request, we conduct workshops to explore which values and principles are most relevant to your stakeholders, users and customers.

Responsible AI Prototyping
We run co-design sprints to rapidly prototype AI solutions, evaluating not just technical feasibility but also user response and potential adoption, and exploring the wider circumstances in which the AI system will be launched. We focus on low- to mid-fidelity prototypes that can be built and tested easily with users and that quickly generate insights to inform future iterations.

Responsive AI
Rather than building a static system, we build in genuine feedback loops to ensure that the AI system is responsive and adapts well to change: changes in your users’ circumstances, shifts in cultural preferences, or new legal or environmental requirements and standards.
Responsible AI Governance
This may involve:
- establishing your organisation’s governance structures
- developing AI governance policies that define how you innovate in line with Responsible AI principles
- designing a bespoke AI governance framework tailored to your needs
- defining an AI risk taxonomy
- creating AI risk assessments
- establishing mitigation plans and procedures
- setting up AI risk monitoring
- creating and conducting AI impact assessments.

Usability
We conduct research into your users' responses to, perceptions of, and behaviours towards your AI systems, and provide comprehensive analysis and actionable insights.

Innovating for the Future
We offer co-creation workshops with stakeholders to explore future scenarios using techniques such as design fiction or speculative design, to give you an understanding of how to leverage emerging technological capabilities in AI for developing new and innovative products and services.

Capacity Building for Human-Centered AI
We offer a bespoke training programme on Human-Centered AI, tailored to developers and data scientists, product managers, or executives, covering how to embed this approach effectively in your organisation. We run workshops, provide Human-Centered AI toolkits, and offer ongoing coaching for your organisation.

Research and Insights
We research and provide insights on specific, emerging topics in Responsible AI and develop targeted analysis and recommendations in the form of position or discussion papers. For example, you may want to know about the capabilities of a new, emerging AI-driven technology or application, or how to anticipate or respond to future regulatory changes.
About us
We are The AI Assemblage. Assemblage theory, pioneered by the philosophers Gilles Deleuze and Félix Guattari, offers a framework for investigating AI as a complex, evolving and dynamic socio-technical system. It emphasises how AI interacts with the various elements in the “assemblage”, for example humans, technology, organisations, and social structures. This perspective enables us to better understand potential ethical issues and to develop strategies for responsible AI development and deployment tailored to a specific context or industry.
Responsible AI requires a cross-functional and interdisciplinary team. Our team is also an “assemblage” of diverse professional backgrounds with significant expertise and experience in Responsible AI gained in industry, academia or the public sector.
Book an Initial Conversation
Book a 30-minute initial conversation with us. It's free.