Ask someone about AI in healthcare, and they might think of cutting-edge diagnostics, hospital robots, or algorithms analysing vast medical datasets. But for most patients, especially those managing ongoing conditions like obesity, the question is much more personal.
They want AI that understands them, supports them, and respects their boundaries.
At Numan, we’ve been researching this first-hand. We interviewed patients, analysed feedback, and tracked real-world interactions with AI tools. What’s striking is how consistent people’s desires are across different demographics and levels of tech literacy. And it turns out they’re not asking for the moon. They’re asking for support that fits into their lives.
It needs to be available when they need it
Health isn’t 9 to 5. Patients want AI that’s there in the moment. That might be a question at midnight about managing appetite, or a craving on a stressful Wednesday afternoon. Our own research found patients valued AI most for its “instant answers” and around-the-clock presence.
But presence alone isn’t enough. Patients also want to know the support they’re getting is safe, accurate, and grounded in responsible design. Availability only matters when it’s built on trust.
It needs to use the right information, in the right way
Patients don’t want AI that’s operating in the dark. They expect it to use the data they’ve already shared, such as weight trends, medication use, or symptom history, to personalise its advice. But they also want that data use to be transparent and ethical.
In our interviews, patients were clear. They’re happy for AI to use their data if it improves their care and they understand why it’s being used. As one put it: “I don’t mind sharing with AI as long as it’s for my benefit”.
Importantly, this doesn’t mean training the AI on personal data. It means the AI behaving like a good coach: remembering what’s relevant, drawing on consented data, and offering tailored support that reflects someone’s journey.
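To make that distinction concrete, here’s a minimal sketch in Python. The field names and structure are hypothetical, not a description of Numan’s systems; the point is that personal details are injected into the model’s context at inference time, field by field, only where consent exists, and the model itself is never trained on them.

```python
from dataclasses import dataclass, field

@dataclass
class PatientContext:
    """Data points a patient has agreed the assistant may use (illustrative)."""
    weight_trend: str | None = None   # e.g. "down 4 kg over 8 weeks"
    medication: str | None = None     # e.g. "GLP-1, week 6"
    consented_fields: set[str] = field(default_factory=set)

def build_prompt_context(ctx: PatientContext) -> str:
    """Assemble only consented fields into the model's context window.

    Nothing here trains the model: the data is read at inference time,
    per conversation, and only where explicit consent exists.
    """
    lines = []
    if "weight_trend" in ctx.consented_fields and ctx.weight_trend:
        lines.append(f"Weight trend: {ctx.weight_trend}")
    if "medication" in ctx.consented_fields and ctx.medication:
        lines.append(f"Medication: {ctx.medication}")
    return "\n".join(lines) or "No personal context shared."

# A patient who consented to sharing their weight trend, but not medication:
ctx = PatientContext(weight_trend="down 4 kg over 8 weeks",
                     medication="GLP-1, week 6",
                     consented_fields={"weight_trend"})
print(build_prompt_context(ctx))  # -> "Weight trend: down 4 kg over 8 weeks"
```

The design choice matters: withdrawing consent for a field simply removes it from the context, with no retraining and no residue left in the model.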
It needs to know its limits
Patients want AI to be helpful, but not overconfident. They want it to provide useful information about lifestyle, medication, and symptoms. But they also want it to flag when something is beyond its scope. They respect systems that can say, “You should speak to a clinician about this”.
This is echoed in academic literature too. A 2023 review in Digital Health found that users were more likely to trust and follow advice from AI when it clearly distinguished between what it could answer and when to refer to human care.
In short, credibility comes not from knowing everything but from knowing where the line is. Safe AI is transparent about its boundaries.
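As a sketch of what “knowing where the line is” can look like in software, consider a guardrail that checks scope before the model answers. This is deliberately simplified, with hypothetical topic lists and function names; a real system would use clinically validated triage logic rather than string matching, but the principle of checking scope before answering is the same.

```python
# Hypothetical escalation topics, for illustration only.
ESCALATION_TOPICS = {"chest pain", "suicidal", "severe pain", "allergic reaction"}

def within_scope(question: str) -> bool:
    """Return False when a question should be routed to a clinician."""
    q = question.lower()
    return not any(topic in q for topic in ESCALATION_TOPICS)

def generate_answer(question: str) -> str:
    """Placeholder for the assistant's normal answering path."""
    return f"Here is some general guidance on: {question}"

def answer_or_refer(question: str) -> str:
    """Answer within scope; otherwise defer explicitly to human care."""
    if not within_scope(question):
        return ("That's beyond what I can safely advise on. "
                "You should speak to a clinician about this.")
    return generate_answer(question)

print(answer_or_refer("How do I manage cravings in the evening?"))
print(answer_or_refer("I have chest pain after my injection"))
```

The key property is that the scope check runs before generation, so the deferral is a designed behaviour rather than something the model has to improvise.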
It needs to feel safe and stay private
The question of trust goes beyond functionality. Patients want to know their conversations with AI are secure. If those chats are reviewed by a human, they want to be informed. Some even preferred AI for its perceived lack of judgement. “I felt less judged by the AI than by a human,” said one participant in our study.
Still, patients expect clarity. What does the AI know about me? Who else can see this? What happens if I say something important?
Consent isn’t just a form. It’s part of the relationship. Safety isn’t just technical. It’s also emotional.
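One way to treat consent as part of the relationship is to make it a first-class, inspectable object rather than a buried form. The sketch below, again in Python with a hypothetical structure, shows the idea: each consented data point records what is used, why, and whether a human may ever review the conversation, and it can be rendered back to the patient in plain language at any time.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    """An auditable answer to 'what does the AI know about me, and who sees it?'"""
    field_name: str      # e.g. "weight trend"
    purpose: str         # plain-language reason, not legalese
    human_review: bool   # might a person ever read these chats?
    granted_at: datetime

def describe(records: list[ConsentRecord]) -> str:
    """Render consent in patient-readable form."""
    lines = []
    for r in records:
        reviewer = ("may be reviewed by our clinical team"
                    if r.human_review else "is never read by a human")
        lines.append(f"- We use your {r.field_name} to {r.purpose}; "
                     f"this conversation {reviewer}.")
    return "\n".join(lines)

records = [ConsentRecord("weight trend", "tailor your weight-loss advice",
                         human_review=True,
                         granted_at=datetime.now(timezone.utc))]
print(describe(records))
```

Because the record is explicit, the answers to “what does the AI know about me?” and “who else can see this?” come from the same source of truth the system actually enforces.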
It needs to grow with them
Perhaps most importantly, patients want AI that evolves. Not just technically but relationally. They want it to build up a picture over time. To remember what worked. To adapt when things change.
This desire is part of a broader shift. People no longer see AI as just a tool. They see it as a companion. And like any good companion, it needs to learn, reflect, and respond in a way that feels human, even if it isn’t.
The numan take
AI can be extraordinary, but only when it’s built for ordinary moments and designed with patient safety as the foundation.
At Numan, we’re committed to making AI useful, not overhyped. That means designing systems that are transparent, respectful, and responsive to real patient needs. It means using consented data to support people, not to sell to them. And it means knowing when AI should help, and when it should hand off to a human.
We believe the future of healthcare won’t be human or AI. It will be a thoughtful mix of both, delivered safely and built on trust.
Because what patients want isn’t flashy. It’s simple. They want to feel understood.
And AI, done right, can help them get there.