Addressing risk to make AI happen in housing
Andrew Whyatt-Sames, from our AI education partner uptake AI, tells us how changing attitudes to risk can lead to an AI shift in organisations.

Risk aversion in social housing is doing its job. That’s the problem.
Recently, I was presenting to an AI steering group at a housing association. The room was experienced, serious, and carrying more than a few institutional memories.
One question came up that I hear often, but rarely this plainly.
“We’re a risk-averse industry. What do you think about that?”
It was not a challenge. It was a check. Almost a request for permission.
My answer was not that risk aversion is holding the sector back. It was that risk aversion has been doing exactly what it was supposed to do.
Risk aversion as a protective instinct
Social housing did not become cautious by accident.
Cosmopolitan. Grenfell. Awaab Ishak. Financial pressure. Regulatory escalation. Personal accountability that is very real for boards and executives.
In that environment, slowing down, tightening controls, and defaulting to what is known were not failures of leadership. They were protective adaptations. They kept people safe. They reduced harm. They preserved trust.
Psychologically, that matters. When a behaviour has protected people, organisations do not simply discard it because circumstances change. Protection becomes identity. It becomes part of what it means to be responsible. What often looks like resistance is usually a system trying to stay safe.
Which is why calls to “be more risk-accepting” often land badly. They can sound like an invitation to abandon the very instincts that have served the sector well.
You cannot throw the baby out with the bath water. And trying to swing wholesale from risk aversion to risk appetite would be genuinely dangerous.
That is not the shift the sector needs.
Where the sector actually is
If we look at this through a behavioural lens, most housing associations are not stuck. They are in contemplation.
The pre‑contemplation phase was survival. Clamp down. Reduce exposure. Do not make things worse.
Contemplation feels different. It is the growing awareness that the same protective behaviours are now creating new problems. Endless pilots. Quiet workarounds. Staff using AI tools without shared guidance. Services struggling to keep pace with the expectations placed on them.
People sense that something needs to change. They just do not yet feel safe enough to change it.
That is why AI adoption so often stalls at experimentation. Not because the sector lacks interest, but because commitment still feels psychologically risky. Experimentation becomes the only acceptable way to move without breaking trust with past experience.
Why “be braver” misses the point
The mistake many narratives make is assuming this is about courage. It is not. It is about sense‑making.
AI sits in a different kind of problem space. Much of it is complex rather than complicated. Cause and effect are only clear in hindsight. You cannot analyse your way to certainty upfront.
Yet many organisations are still trying to treat AI as if more assurance, more guidance, or more evidence will eventually make the right answer appear. This is often less about the quality of governance and more about emotional readiness to sit with uncertainty.
So pilots linger. Decisions are deferred. Governance language becomes a holding pattern.
Not because people are avoiding responsibility, but because the system is applying the wrong logic to the type of uncertainty it is facing.
From risk avoidance to risk stewardship
You may have heard versions of this before, so let’s make it concrete in a way that fits a regulated housing context.
A real, published example can be seen in Craigdale Housing Association’s Hubble pilot in Glasgow. Faced with persistent damp and mould issues and fragmented data, Craigdale chose a clearly bounded, time‑limited experiment rather than a full rollout. Ten homes were equipped with sensors, data was aggregated into a single, mobile‑first platform, and the focus was explicit: earlier identification of risk and better‑informed intervention.
The pilot ran for five months. Governance did not disappear. Human judgement remained central. Outputs were reviewed, limitations were documented, and learning was fed back through existing decision‑making structures. The results were modest but meaningful: earlier interventions before visible mould appeared, improved tenant engagement, and a measurable reduction in mould‑related issues.
What mattered most was not the technology itself, but the shift in conversation. The board was no longer debating AI in the abstract. It was discussing evidence it had generated safely, at small scale, and deciding whether that learning justified the next step.
This is closer to risk stewardship than risk acceptance.
The risk we rarely name
There is a quieter risk the sector does not always acknowledge.
When organisations remain formally cautious but do not create explicit, permissioned ways to learn, AI use does not stop. It fragments. Individuals experiment alone. Guardrails vary. Learning stays private.
On paper, governance looks intact. In practice, exposure increases. That is not safety. It is unmanaged emergence.
Ironically, social housing already has many of the capabilities needed to avoid this. Strong governance. Experience handling sensitive data. A deep moral commitment to fairness and accountability.
The question is whether those strengths are being used to enable learning, or to postpone it.
A more humane next step
The way forward does not require a dramatic mindset shift. It requires honesty.
Name risk aversion as a strength. Acknowledge what it has protected. Create space for small, visible, well‑held experiments inside governance rather than outside it. Treat learning itself as something that can be governed.
For senior leaders, the invitation is not to be braver, but to be more deliberate. To notice where governance language is being used to contain learning rather than enable it. To ask where one carefully designed experiment could replace a dozen quiet workarounds.
AI is not asking social housing to abandon its protective instincts. It is asking the sector to let those instincts mature.
That is harder than buying technology. But it is work that fits the moment, and it fits who the sector is.