AI is everywhere right now, and the hype can make it hard to separate what’s genuinely useful from what’s just shiny. I’m excited by the possibilities, but I’m also cautious about the real issues that sit underneath: relevance, ethics, and the energy and data costs of building and running these systems.
One area that deserves much more attention is talent. AI is already shaping how people apply for roles, how they’re assessed, and how work is tracked and supported once they’re in post. If we get it right, AI could remove friction and widen access. If we get it wrong, it will quietly scale exclusion.
That’s why I keep coming back to neurodiversity. Neurodivergent people, including autistic people and people with ADHD or dyslexia, often process information, communicate, and manage attention differently. Those differences can translate into real workplace strengths (pattern recognition, deep focus, creative problem-solving), but only when the environment is designed to let them show up well.
The risk: automating yesterday’s definition of “great”
A lot of AI in hiring works by learning patterns from historic data: who was hired, who was promoted, who performed “well” according to existing measures. That sounds neutral, but it can lock in a narrow idea of competence, especially when the past wasn’t built for different thinking styles. The output may look objective while simply repeating old preferences at speed.
But AI can also be used to widen the lens. For instance: screening that focuses on demonstrable skills instead of polished CVs; application routes that offer written, audio, or asynchronous options rather than forcing everyone into a single “performative” format; and workplace tools that support planning, prioritisation, and clarity without turning every day into a surveillance exercise.
When inclusion is designed in from the start, AI becomes a lever: it can reduce avoidable bias, uncover overlooked talent, and make work easier to navigate. The goal isn’t to build a “special” track for neurodivergent people. It’s to build systems that work well for a wider range of humans.
AI should lower barriers, not turn them into code.
The opportunity: build with, not for
The most reliable way to make AI tools neuroinclusive is simple: involve neurodivergent people in shaping them. Not as a late-stage “accessibility review”, but in discovery, prototyping, testing, and measurement. If the system will be used to judge humans, it needs humans with varied cognitive styles at the table.
In my experience, many neurodivergent people are early adopters of tools that bring structure: clear prompts, predictable workflows, and asynchronous communication. Those features aren’t niche. Most of us benefit from less ambiguity. Designing for neurodivergent users often produces cleaner, more usable products for everyone.
Doing this early also helps you catch unintended consequences, such as assessment formats that punish slower processing speed or interfaces that create cognitive overload, before they become “just the way the platform works.”
What “neuroinclusive AI” can look like in practice
In practice, a handful of features come up again and again:
- Multiple ways to participate (for example: written responses, live conversation, work samples, or asynchronous options)
- Control over the experience (customisable layouts, captions/transcripts, quiet modes, the ability to pause and return)
- Measures that prioritise outcomes (quality and impact over performative “activity” signals)
- Regular bias and accessibility checks (audit inputs, outputs, and who is being screened out, then fix what you find; the sketch after this list shows what a first pass can look like)
- Transparency people can understand (plain-language explanations of how a decision was reached and how to challenge it)
These aren’t add-ons. They’re the building blocks of tools that are safer, more human-centred, and more likely to produce decisions you can defend.
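To make the auditing point concrete, here’s a minimal sketch in Python of a first-pass disparate-impact check on screening outcomes. It assumes you can export a group label and a pass/fail outcome for each applicant; the group names, the numbers, and the 0.8 threshold (the US EEOC “four-fifths” rule of thumb) are illustrative rather than a standard for any jurisdiction, and a real audit would go further (intersectional groups, sample-size caveats, checks on inputs as well as outputs).

```python
# Minimal sketch: compare pass rates of an AI screen across groups and
# flag any group selected well below the highest-selected group.
# Group labels and threshold are illustrative assumptions, not a standard.

from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, passed) tuples -> {group: pass rate}."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, did_pass in records:
        total[group] += 1
        passed[group] += int(did_pass)
    return {g: passed[g] / total[g] for g in total}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose pass rate falls below `threshold` times the
    highest group's rate (the "four-fifths" rule of thumb)."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Made-up example data: (group, passed_screen)
records = (
    [("declared_nd", True)] * 12 + [("declared_nd", False)] * 28
    + [("not_declared", True)] * 30 + [("not_declared", False)] * 30
)

rates = selection_rates(records)     # {'declared_nd': 0.3, 'not_declared': 0.5}
flags = adverse_impact_flags(rates)  # {'declared_nd': 0.6} -> investigate
print(rates, flags)
```

A flagged ratio isn’t proof of bias on its own, but it tells you where to look: which assessment step, which format, which question is doing the screening out.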
I’m starting to see encouraging signs: some employers offer alternatives to one-way video screening; some teams are experimenting with skills-based assessments and structured interviews that reduce guesswork; and some platforms are investing in personalisation and accessibility features as core product requirements rather than optional extras.
Vendors matter here too. Tools that allow configuration, provide clear explanations, and make auditing practical will increasingly stand out, not only because it’s ethically important, but because organisations are under growing pressure to prove that their processes are fair.
So, what now?
Neurodiversity is not a “problem to solve”. It’s a source of insight and innovation. When AI is designed with a wider range of minds in view, organisations don’t just do the right thing; they make better decisions and access a deeper pool of talent.
AI isn’t magically unbiased. It reflects the choices we make: what we measure, what we optimise for, and which trade-offs we accept. “Inclusive by default” means you don’t wait for someone to struggle before you redesign the process. You build the process so more people can succeed from day one.
As AI becomes more embedded in recruitment and everyday work, we have a choice: optimise purely for efficiency, or optimise for humans. If you care about performance and fairness, those two shouldn’t be in conflict.
To me, this is less about lowering any bar and more about asking whether the bar is measuring the right things. Neurodivergent people are often filtered out by noisy processes, such as unstructured interviews, ambiguous tasks, “culture fit” guesswork, or assessments that reward performance under pressure rather than capability. AI can either reinforce those filters or help dismantle them.
If you’re building or buying AI for hiring or talent management, my challenge is this: ask who it works well for, who it disadvantages, and how you know. Bring neurodivergent voices into the design. Audit what happens in the real world. And treat clarity, choice, and transparency as non-negotiables.
When we design AI with cognitive diversity in mind, we don’t just support neurodivergent people. We build smarter, more trustworthy systems for everyone.
