
Speed and security: The AI accountability gap

Pete Foster

Technology & Strategy Director

2nd March, 2026

Read time: 3 minutes

There’s a kind of magic in watching an AI agent or a low-code platform build a functional website or web app in real time. It’s fast, it’s fluid and, frankly, it’s a bit addictive.

We’re advocates for these tools. We encourage our clients to experiment with them and learn what they could create for their businesses. But as the speed of generation increases, we find ourselves asking a critical question: when the AI deploys the code, who’s actually responsible for the result?

Builders are stewards

AI coding agents and prototyping tools don’t feel the weight of a security breach. They don’t worry about SQL injection risks or whether a library they just imported will be deprecated in six months. They build for the now, but businesses need to survive and succeed for whatever’s next.
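The injection risk is worth making concrete. As a hedged sketch (the table and query are purely illustrative, not from any real project), string-built SQL is exactly the kind of shortcut a generated prototype may contain, while the parameterised form closes the hole:

```python
import sqlite3

# Hypothetical users table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # a classic injection payload

# Risky pattern: the input is spliced straight into the SQL string,
# so the payload rewrites the WHERE clause and matches every row.
risky = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe pattern: a parameterised query treats the input as plain data,
# so the payload matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(risky)  # [('alice',)] - the injection leaked a row
print(safe)   # [] - the payload is just a literal string
```

The fix is one character of punctuation and a tuple, which is precisely why it’s so easy for a speed-focused tool to skip.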

This presents a clear accountability gap, and it’s where the technology builder’s role has shifted. Developers and engineers still build, but now we’re stewards. Our job is to provide the layer of professional judgment to ensure a clever prototype can survive the rigours of the real world.

Beyond the prototype

Preparing an AI-generated prototype for production requires three important filters:

  1. The security audit:
    AI code should be treated with a healthy dose of professional scepticism. Generative AI models and coding agents often take lazy shortcuts, and can even leave ‘back doors’, to get things working quickly. Studies we’ve read suggest that up to 50% of AI-generated code contains serious security flaws, many of which only experienced software engineers can identify and fix.

  2. Performance and scale:
    A low-code tool might handle ten users perfectly, but will it handle 10,000 after a large email campaign is sent? Architecture and code need optimising to ensure ‘speed of build’ doesn't lead to ‘slowness of site’. That requires a skilled, experienced hand.

  3. Long-term stewardship:
    Code is a living thing. It needs hosting, patching and monitoring. The support team looking after a website or app’s infrastructure should stay awake, so you don’t have to.
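The performance point above can be sketched in miniature (the lookup function is hypothetical; a real prototype would be hitting a database or API). A generated prototype often re-fetches the same hot data on every request, and a simple cache is one of the optimisations a skilled hand adds before launch:

```python
import time
from functools import lru_cache

def slow_lookup(key):
    # Stand-in for an uncached database or API call.
    time.sleep(0.01)
    return key.upper()

@lru_cache(maxsize=1024)
def cached_lookup(key):
    # Same work, but repeated requests for a hot key are served
    # from memory instead of re-running the slow call.
    return slow_lookup(key)

# 100 requests for the same hot page, as after a big email campaign.
start = time.perf_counter()
for _ in range(100):
    slow_lookup("promo-page")
uncached = time.perf_counter() - start

start = time.perf_counter()
for _ in range(100):
    cached_lookup("promo-page")
cached = time.perf_counter() - start

print(f"uncached: {uncached:.2f}s, cached: {cached:.2f}s")
```

Ten users never expose the difference; ten thousand do. That gap between ‘works in a demo’ and ‘works under load’ is where production engineering earns its keep.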

Spotting the patterns

It’s important to make AI fail. No-code platforms like Lovable and prototyping tools like Figma Make need to be pushed to their limits and intentionally prompted to create inefficient or bloated code. That’s how we learn to spot patterns before they hit production.

Our R&D in Webreality Labs does exactly this: experimentation, limit-testing, failure and learning. So when we use AI-driven tools for ourselves and our clients, we don’t type and hope. We know where the pitfalls are because we’ve already fallen into them, with a safety net beneath us. The rest of our team, and our clients, can develop ideas rapidly and with confidence, knowing that speed is backed by certainty.

The human in the loop

The best tech products are dreamed up by humans and governed by humans. There’s plenty of space for AI in the middle, but human curiosity and judgement will continue to shape the bigger picture for some time to come. It’s an exciting time to be building ideas, but it’s even more exciting when you know you’ve got a solid foundation beneath you.

If you’re curious about the hidden side of no-code platforms and AI coding agents, come and see us for a chat. We’ll be happy to discuss your ideas and show you how we bridge the gap between a fast prototype and a robust production solution.

Contact us
