
Who’s Really in Control? Designing Agency in the Age of AI

I was captivated by 2001: A Space Odyssey as a child. Granted, I shouldn’t have been watching it at that age. Stanley Kubrick had an uncanny ability to tell great stories with an absence of dialogue, delivering hauntingly compelling cinema that leaves the viewer in deep contemplation long after the credits have rolled.

I didn’t realise it at the time, but my fascination with a seemingly innocuous super-computer named HAL that gradually developed malignant intent was rooted in the idea of agency. That word requires a bit of unpacking. When we talk about having individual agency, it means having a choice, the ability to take intentional action to influence an outcome, and ownership for your decision.

Given we’ve entered the era of agentic AI, it’s understandable that tension exists when technology is rapidly gaining the ability to not just follow instructions, but also plan, decide and act. For us humans, that stirs existential feelings, so it’s no wonder such advancements are accompanied by very valid questions about regulation.

Companies chasing growth are constantly on the lookout for ways to do more, move quicker, and deliver with less overhead. We recently published a sizable piece of research titled Agentic Organizations, exploring a business environment where human and machine agency are blending together.

Our survey (taking in the opinions of over 900 professionals) paints a picture of individuals wrestling with their feelings on AI. 69% of respondents say they feel more empowered by AI when it comes to factors like speed, quality, and creativity. But 56% can’t shake the nagging feeling that AI could end up doing most, or all, of their jobs within five years – a concern felt especially keenly among executives. Work is a core component of our identities, so this represents a significant tension and understandable worry.

Our perspective is that there are three phases of AI integration:

1. Assist – Using AI to draft, summarize and even automate certain workflows, but with humans still in control.

2. Share – AI is configured to make micro-decisions, handle larger parts of workflows, and act semi-independently, with humans assuming an orchestration role.

3. Autonomy – AI systems act within defined boundaries and own outcomes, with humans supervising and intervening when needed.

As businesses climb this ladder, governance becomes critical, not least because of AI’s tendency to hallucinate. It might carry itself with an unparalleled level of self-assurance, but it has been known to be very wrong. Allowed to act without the appropriate restraints, companies risk commercial, reputational, and legal consequences, meaning accountability, escalation paths, and psychological safety become imperative.

This is especially important within the European landscape, where we operate under far stricter regulatory regimes than other parts of the world. It’s simply not advisable to bolt on governance once agents are operational – it must be designed and instituted before AI touches sensitive data or starts making micro-decisions. You can’t insert accountability once the horse has already bolted.

Going back to Kubrick’s masterpiece, a previously meandering film jolts into life when HAL interprets its mission too literally. In B2B tech, this shows up when agents optimise the wrong metric – such as prioritising MQLs over pipeline quality – or act beyond their remit by auto-launching campaigns without approval. As ever, prompting is vital: be clear about objectives, success metrics, constraints and non-negotiables, and set firm boundaries around what AI can do alone, what triggers bringing humans into the loop, and where people must make the final call.

If you’re a leader charged with embedding AI within your organisation, what follows is some practical food for thought:

1. Low-hanging fruit – You need to pick low-risk, high-value, and tightly scoped use cases in the first instance, where good data is available. For example, summarising customer research or competitor updates, drafting first-pass content for campaigns, or generating first attempts at performance reporting. The key is starting with ‘decision support’ as opposed to replacement and keeping the success criteria simple.

2. Give AI a real job and a manager – Treat your first AI agent like a newly onboarded junior team member, with a defined role, boundaries and KPIs, along with a human supervisor who is accountable for oversight. For instance, this could be an account intelligence agent that gathers myriad signals and drafts insights but never contacts customers or updates CRM without human review.

3. Protect your human resources – People must continue to be the lifeblood of any successful business. You should run an ‘agency audit’ to map where AI already operates, identify where it could take on more, and establish red lines around work that is non-delegable. In practice, this might look like AI drafting campaign variants, with humans critiquing the messaging, tone, and creative through the lens of their lived experience.

4. Close the agency gap and empower junior team members – AI tends to empower senior leaders more than junior staff, making it important to create roles where fledgling team members supervise and refine agent outputs. In very simple terms, we must not render the next generation superfluous by replacing them, but rather redesign the work they’re expected to do.

5. Install firm guardrails before scaling – Clear governance is absolutely necessary before talking about agentic autonomy. This means setting escalation rules for uncertainty, conflict, or low confidence, having clear audit trails and override controls – the good old kill-switch – and establishing thresholds for human intervention. For example, if you have a data-enrichment agent, it shouldn’t update firmographics when confidence is less than 80% or if conflicting sources appear.

6. Ensure people feel they still have agency – This all comes down to psychological safety and a feeling amongst your people that they have a major role in driving growth and scope for personal development, albeit in a world where the tectonic plates are dramatically shifting. Institute a culture where staff are encouraged to push back against AI decisions and reward teams for spotting risks or misalignment.
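The escalation thresholds in point 5 can be sketched in a few lines of code. This is a hypothetical illustration only – the names (EnrichmentResult, route) and the exact 80% threshold from the example above are illustrative, not a prescribed implementation:

```python
# Hypothetical sketch of a guardrail for a data-enrichment agent:
# escalate to a human when confidence is low or sources conflict.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # per point 5: no auto-update below 80% confidence


@dataclass
class EnrichmentResult:
    field: str            # the firmographic field the agent wants to update
    value: str            # the proposed new value
    confidence: float     # the agent's self-reported confidence (0.0–1.0)
    sources_agree: bool   # False if the agent's sources conflict


def route(result: EnrichmentResult) -> str:
    """Decide whether the agent may act alone or must escalate to a human."""
    if result.confidence < CONFIDENCE_THRESHOLD or not result.sources_agree:
        return "escalate"      # human review before any record changes
    return "auto-update"       # within the agent's defined boundaries


# Conflicting sources force escalation regardless of confidence...
print(route(EnrichmentResult("employee_count", "250", 0.95, sources_agree=False)))
# ...while a high-confidence, consistent result may proceed.
print(route(EnrichmentResult("industry", "SaaS", 0.92, sources_agree=True)))
```

The point of a gate like this isn’t the threshold itself but that the rule is explicit, auditable, and sits outside the agent – the kill-switch logic lives in code your people control, not in the model.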

This optimistic human thinks every (near) disaster is a learning opportunity. If HAL had its time again, I reckon the conclusion would be: “I have run the projections, and they look excellent. Still, I will rely on you for the truly important calls – those that require intuition, perspective and a sense of what matters. Together, we constitute a strong proposition.” (I don’t think Hollywood will be calling me anytime soon.)

You can access our Agentic Organizations report here.

  • Bobby Hare, Client Services & Growth Director