AI in Housing: Five Key Observations
Andrew Whyatt-Sames, Co-Founder of uptakeAI, DIN’s AI education partner, explores why the next AI skills gap isn’t about using tools; it’s about AI agents.

I want to start by acknowledging something. If you are reading this as a housing executive, you have probably spent the last two years absorbing wave after wave of AI news, attending briefings, signing off on pilots, fielding questions from your board and reassuring your teams. You have barely got the hang of ChatGPT and now someone is here to tell you there is another thing to think about.
I hear that. And I am not going to pretend this is not a fast-moving landscape.
But this piece is not about adding to the list. It’s about helping you understand where the current journey is going, so the work you’re already doing builds toward something, rather than needing to be redone later.
Where most organisations are right now
The organisations I work with are genuinely trying. They have invested in access, rolled out tools and encouraged their teams to explore. The intent is real and the effort is considerable.
What I see in practice is that most people are at the beginning of the fluency curve, which is exactly where you would expect them to be. They are using AI the way you use any unfamiliar tool at first: treating it like a smarter version of something they already know, asking it questions the way they would Google something, getting an answer and moving on.
That is a completely natural starting point. And it produces real value. But genuine AI fluency sits somewhere further along, and getting there requires pushing through a phase that is genuinely frustrating. Outputs that are not quite right. Iteration that feels slower than just doing it yourself. The sense that the thing is not really working. Without support, most people interpret that discomfort as the tool’s failure rather than a stage in their own development. So they step back.
That gap between where most people are and where they could be is significant. And it matters because the next wave of AI tools is arriving, whether people have closed that gap or not.
The conversation we are having is the wrong one
For two years, it feels like the housing sector has been asking the same question: can our people use AI? The training programmes, the awareness sessions, the pilot projects. All of it pointing at the same problem. Basic literacy. Get people comfortable. Reduce resistance. Show them it is not going to take their jobs.
That work was necessary. Some of it has been done well. But the next challenge is already arriving, and it’s a different problem entirely.
Deloitte’s research points to planned deployment of AI agents growing fourfold across industry sectors over the next two years. Not AI assistants. Not chatbots. Agents. Autonomous systems that do not wait for a prompt. They plan, act, make decisions and execute multi-step tasks without continuous human instruction.
Cisco’s work on AI readiness found only 14% of organisations met their definition of being prepared to deploy AI at scale. That figure applies to AI broadly. The bar for agentic AI is higher still.
For a sector navigating slow procurement cycles, legacy system dependencies, and a governance culture that moves deliberately, those numbers should be uncomfortable reading.
The gap nobody is naming yet
The skills gap that’s coming is not about whether people can use AI. It’s about whether they understand what an AI agent is actually doing. Whether they know when to trust it. When to intervene. How to direct it well. How to recognise when something has gone wrong in a process they are no longer watching in real time.
That gap has a name: agentic literacy.
Nobody in housing seems to be talking about it yet. I understand why. There is enough on everyone’s plate. But the organisations that wait until agents are already deployed to think about this will be doing remediation work, not development work. That is a harder place to start from.
What makes housing’s context distinctive
I work with housing organisations on AI adoption. What I see consistently is a sector that is thoughtful, values-led and genuinely trying to get this right. That is a real strength, and it is the right foundation for what comes next. There are also three contextual factors that make it worth being specific about how agentic AI lands here, rather than applying generic advice.
The first is procurement lag. By the time AI agent technologies clear formal approval processes in most housing organisations, the capability landscape will have moved again. Organisations without the internal understanding to evaluate these tools confidently will make rushed decisions or defer decisions that should not be deferred.
The second is the regulatory context. Awaab’s Law, which is expanding in 2026, is already at the top of many executives’ priority lists. In my view, AI agents are genuinely well-suited to monitoring and flagging disrepair risks at scale. But that potential comes with a specific liability question. If an agent identifies a problem and the organisation does not act on it, the legal exposure may be greater than if the tool had never been deployed at all. That is not a reason to avoid the tools. It is a very good reason to make sure your people understand what the agent is actually telling them, and what their obligations are when it surfaces something they now have a duty to address.
The third is professional confidence. Housing’s workforce is skilled, committed and deeply expert in its domain. AI agents will challenge that expertise in a specific way: by making decisions in areas where professional knowledge has traditionally been the source of value. I do not think people will simply refuse to engage. I think they will engage in ways that are either too trusting or too cautious, because nobody has given them a framework for something in between.
What agentic literacy actually involves
A tool responds to instructions. An agent acts on goals. When you give an AI agent a task, you are not just prompting. You are delegating. The agent will interpret, plan, execute. The human role shifts from doing to directing and overseeing.
That is a fundamentally different cognitive relationship with technology. And it involves three things that current AI training programmes are not really addressing.
The first is understanding how agents reason: what their failure modes look like, and what kinds of decisions should never be fully delegated. In housing, where decisions affect tenants’ homes and the organisation’s legal exposure, this is not theoretical.
The second is knowing how to brief an agent well. Not just what to ask, but how to specify scope, constraints, and when it should stop and check. An under-briefed agent is not a useful agent.
The third is knowing how to audit an autonomous process. Who is accountable when an agent takes a decision that turns out to be wrong? How do you demonstrate to a regulator that your AI-assisted process was sound? These are governance questions dressed as technical ones.
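To make the briefing point concrete, here is a minimal sketch of what a well-specified agent brief might look like in code. This is illustrative only: the field names (goal, scope, constraints, checkpoints) and the disrepair-monitoring scenario are my own assumptions, not any real agent framework or product.

```python
# A hypothetical agent brief: goal, scope, constraints, and stop-and-check points.
# All names here are illustrative, not a real API.
brief = {
    "goal": "Flag properties with unresolved damp-and-mould reports over 14 days old",
    "scope": ["read repairs log", "read tenant contact notes"],      # what the agent may touch
    "constraints": ["no direct tenant contact", "no case closure"],  # actions it must never take
    "checkpoints": ["escalating to a surveyor"],                     # when it must stop and ask
}

def requires_human_signoff(action: str, brief: dict) -> bool:
    """Return True if an action falls outside scope or hits a checkpoint."""
    out_of_scope = action not in brief["scope"]
    at_checkpoint = any(cp in action for cp in brief["checkpoints"])
    return out_of_scope or at_checkpoint

# An in-scope read proceeds; anything else is routed back to a person.
print(requires_human_signoff("read repairs log", brief))  # False
print(requires_human_signoff("close the case", brief))    # True
```

The point of the sketch is the shape, not the code: a good brief names what the agent may do, what it must never do, and the moments where a human decision is required before it continues.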
None of this is beyond reach. It is preparation, not transformation. And it builds directly on the AI literacy work organisations are already doing.
DIN’s own research
DIN’s On the AI Pulse survey will soon reveal how much anxiety, or enthusiasm, sits underneath these questions. What I see consistently across member organisations is enthusiasm about AI’s potential sitting alongside genuine uncertainty about how to govern it, build confidence and make decisions that will not need to be unpicked in two years’ time.
The organisations getting this right are not necessarily the ones with the biggest budgets. They are the ones treating AI adoption as a cultural and strategic project rather than a technology rollout. And they are building understanding at every level, board to frontline, before they deploy, rather than while they are deploying.
Walking the walk
I will be honest about where I am coming from. At uptakeAI, we are not just advising on this. We are living it. We run an AI-powered business and over the last few months I have been building Roxy, a synthetic chief of staff: an always-on operating partner that holds our institutional memory, monitors communications and thinks alongside the team between sessions.
Building it has taught me more about agentic literacy than anything I could have read. I know exactly what it feels like to brief an agent badly and watch it optimise for the wrong thing. I know the governance questions that only become visible when you are actually running autonomous systems, not just reading about them.
The housing sector does not need to build what we have built. But it does need the understanding that comes from taking this seriously before the tools are already in the building.
The opportunity in the constraint
Here is what I genuinely believe. The procurement and governance challenges that slow down technology adoption in housing also create an opportunity. There is time to do this properly. To build genuine understanding rather than surface familiarity. To develop the workforce capability that will matter not just for today’s AI tools but for the agentic systems that are already arriving.
Other sectors are moving faster. That means they will also make more of the avoidable mistakes first.
The work the housing sector has done on AI literacy over the last two years is not wasted. It is the foundation. What comes next is building on it with intention.
The next skills gap is not about chatbots. It is about whether your people know what an AI agent is doing, and whether they are ready to lead alongside it, not just follow it.