
Why Singapore Devs Lose 8 Hours A Week: Fixing AI Inefficiency in Software Development


The latest data from GitLab presents a confusing picture of Singapore’s technology landscape: velocity is up, yet efficiency is down.

While the rapid adoption of AI in software development means we are coding faster than ever, teams are paradoxically losing eight hours a week to inefficiency.

GitLab surveyed 252 Singapore-based software professionals, and the results expose a hidden cost to this new speed:

  • 8 hours lost every week to inefficient processes (one hour more than the global average)

  • 62% use more than 5 development tools

  • 54% use more than 5 AI tools

  • 66% have experienced issues with AI-generated code

  • 98% are using or planning to use AI in development

Those 8 hours of lost productivity during the week? That's just the visible cost. The real cost is:

  • Production fires on Saturday morning from code nobody fully understood

  • Sunday afternoons debugging AI-generated implementations nobody can explain

  • The cognitive burden of maintaining a codebase your team didn't actually write

When your staff engineer says "I lost my weekends," they're not exaggerating. They're telling you the AI productivity gains aren't free—someone else is paying the price.

Why AI Code Speed ≠ Engineering Velocity

Your team is now generating code at unprecedented speed. Pull requests that used to take days now take hours. Features ship faster. Velocity charts look incredible. Management is thrilled.

But beneath those green charts, something dangerous is happening:

Engineers are approving massive AI-generated changesets without truly understanding what's shipping.

The cognitive load of reviewing 1,000 lines is so high that teams unconsciously switch from critical analysis to pattern matching. "Looks fine, ship it."

The GitLab study found 66% of Singapore practitioners have experienced issues with AI-generated code.

With AI, we have accelerated the assembly line of code generation, yet our quality assurance sensors remain calibrated for manual labor.

The result is a velocity mismatch: code is generated at machine speed, but processed at human speed. Under this pressure, thorough review gives way to rubber-stamping, and human overwhelm becomes the breakage point where bugs slip through.

Turning AI Output into Sustainable Code

At Red Airship, we implemented architectural discipline that turns AI’s raw speed into sustainable engineering velocity.

Let’s unpack. 

1. Automated review: use AI to review code

We built an AI code review tool into our CI/CD pipeline. 

When a Merge Request opens, before any human sees it, our tool reviews the code against Red Airship's internal standards. It checks patterns, flags potential issues, and provides actionable feedback immediately.

This isn't about replacing human review—it's about filtering. The AI catches the mechanical issues (inconsistent naming, missing error handling, violations of our coding standards) so senior engineers can focus on architecture, business logic, and system design.

It's AI reviewing code, giving us a first line of defence before human cognitive load kicks in.
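The exact tool is ours, but the pattern is simple. Here is a minimal sketch of it (illustrative, not our production code), assuming an OpenAI-compatible API with OPENAI_API_KEY set as a CI variable and a STANDARDS.md file holding the team's written conventions:

    import subprocess
    import sys
    from pathlib import Path

    from openai import OpenAI  # pip install openai

    def merge_request_diff(target: str = "origin/main") -> str:
        # The Merge Request's changes: everything on this branch since
        # it diverged from the target branch.
        return subprocess.run(
            ["git", "diff", f"{target}...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout

    def ai_review(diff: str, standards: str) -> str:
        # One call: the diff plus the team's standards. The model is
        # asked to flag mechanical issues only.
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system", "content": (
                    "You are a code reviewer. Flag inconsistent naming, "
                    "missing error handling, and violations of these "
                    "standards. Be specific and actionable.\n\n" + standards)},
                {"role": "user", "content": diff},
            ],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        diff = merge_request_diff()
        if not diff.strip():
            sys.exit(0)  # nothing to review
        print(ai_review(diff, Path("STANDARDS.md").read_text()))

In a GitLab pipeline, a script like this runs as an early stage on every Merge Request, so the feedback lands before a human reviewer is ever assigned.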

2. Establish a non-negotiable rule in your team: Own Your Code

We re-established a standard practice that many teams forgot when AI crept in: own your code.

Not "I generally understand what this function does." Not "The AI said this was the right approach."

"You MUST be able to explain every line you commit."

  1. Line by line: What does it do?

  2. Context: Why is it needed?

  3. Resilience: What happens when it fails?

  4. Edge Cases: What variables break this?

If you can't explain it, you don't ship it.
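To make the four questions concrete, here is what they look like written against a few lines of typical AI-generated code. The retry helper is a made-up example; the point is that every line has an answer:

    import time

    def fetch_with_retry(fetch, retries=3, backoff=1.0):
        """Call `fetch`, retrying transient failures. (Illustrative example.)"""
        for attempt in range(retries):          # What: a bounded retry loop.
            try:
                return fetch()                  # Context: network calls fail transiently.
            except ConnectionError:
                if attempt == retries - 1:      # Resilience: the last attempt re-raises,
                    raise                       # so callers see the real error.
                time.sleep(backoff * 2 ** attempt)  # Edge cases: exponential backoff
                                                    # avoids hammering a struggling service.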

3. Be explicit about where humans stay in the loop

Our process now has humans at four critical points:

  1. Planning: Humans and AI co-create the implementation plan, but humans verify it matches actual project requirements (not hallucinated ones)

  2. Pre-commit review: Developers must manually review their code before creating a Merge Request—this is where the "explain every line" rule kicks in

  3. Automated review: Our AI review tool catches mechanical issues against our standards

  4. Senior review: Human reviewers focus on architecture, business logic, and system implications—not syntax and style

Each layer catches different categories of problems. Remove any layer, and quality drops.

In practice:

  • We co-create a plan with AI, then verify the plan against actual requirements

  • We review the implementation line by line before committing

  • Merge Requests are smaller because we break down AI suggestions rather than accepting them wholesale (see the sketch after this list)

  • Testing is intentional, not just copied from AI output
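One lightweight way to keep changesets at a reviewable size is a pre-commit hook that enforces a line budget. The hook below is an illustration of the idea, not a prescription from the GitLab study, and the 400-line budget is an assumption to tune:

    #!/usr/bin/env python3
    # Illustrative pre-commit hook: save as .git/hooks/pre-commit, make it
    # executable, and oversized commits are rejected before they exist.
    import subprocess
    import sys

    MAX_CHANGED_LINES = 400  # assumed budget; tune to your review capacity

    numstat = subprocess.run(
        ["git", "diff", "--cached", "--numstat"],
        capture_output=True, text=True, check=True,
    ).stdout

    changed = 0
    for line in numstat.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for both counts
            changed += int(added) + int(deleted)

    if changed > MAX_CHANGED_LINES:
        print(f"Commit touches {changed} lines (budget: {MAX_CHANGED_LINES}).")
        print("Break it down so a reviewer can actually own it.")
        sys.exit(1)  # non-zero exit aborts the commit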

Moving Beyond

We're experimenting with using AI throughout the development lifecycle, not just for code generation:

  • Better commit messages that actually explain context (see the sketch after this list)

  • More thorough pull request descriptions

  • Comprehensive test coverage

  • Improved planning and breakdown of complex features
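Commit messages are a low-risk place to start. Here is a sketch of the idea, again assuming an OpenAI-compatible API; the names are illustrative, and the developer still edits and owns the final message:

    import subprocess

    from openai import OpenAI

    def draft_commit_message() -> str:
        # Draft from the staged diff; the committer reviews and edits it.
        diff = subprocess.run(
            ["git", "diff", "--cached"],
            capture_output=True, text=True, check=True,
        ).stdout
        resp = OpenAI().chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system", "content": (
                    "Write a commit message: a short summary line, then a "
                    "body explaining why the change was made, not just "
                    "what changed.")},
                {"role": "user", "content": diff},
            ],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        print(draft_commit_message())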

The key insight: AI amplifies your process. If your process is "generate code and ship it," AI makes you ship faster—and break faster. If your process includes verification, testing, and review, AI helps you do all of that better.

The Universal Principle

This approach isn't limited to software development.

Whether you're using AI for design, content creation, analysis, or strategy, the principle holds:

You cannot outsource understanding.

AI can draft the initial design, but the designer must verify it meets brand guidelines and user needs.

AI can generate the market analysis, but the PM must verify the data sources and reasoning.

AI can suggest the architecture, but the engineer must verify it scales.

The moment you accept AI output without verification, you've created a knowledge gap. And that gap will surface at the worst possible time—usually in production, or in a client demo, or at 2am on a Saturday.

Written By
Brecht Missotten
Senior Engineering Manager
