If you want real results from AI, you need to work with people who actually know how to use AI, not just hype boosters who can make a demo look impressive for 20 minutes.
Quick Answer (TL;DR)
Most people treat AI like a vending machine – put in a question, get out an answer. That’s not how it works. Getting genuine value from AI requires a clear purpose, structured prompting (using a framework like TCLEI), and multiple rounds of deliberate iteration. The people delivering real AI results in their organisations aren’t chasing new tools – they’re building repeatable systems, automating the predictable, and protecting their human energy for the things AI genuinely can’t do. When hiring or partnering for AI work, look for depth of process, not polish of demo.
The Gap Between AI Demos and AI Results
There’s a scene playing out in boardrooms across New Zealand right now. Someone opens a laptop, types a prompt, and something impressive appears on screen in a minute. People nod. The meeting ends. Nothing changes.
That gap – between what AI looks like in a controlled demo and what it actually takes to produce consistent, meaningful results – is the most important thing to understand about AI adoption in 2026.
The uncomfortable truth? The output you get on the first try is almost never what you actually need. Working with AI well isn’t about finding the right one-click shortcut. It’s about knowing your purpose, building a process, and iterating until the result is genuinely useful.
For any NZ business evaluating AI investment – or deciding who to trust with it – this matters more than which tools are on the table. Working with people who actually know how to use AI, not just hype boosters, is the difference between real ROI and expensive demos.
What Most AI Conversations Get Wrong
These are the ideas that don’t get enough airtime – and they’re the ones that actually separate AI practitioners who deliver from those who impress:
- AI is a thinking amplifier, not a vending machine. You don’t put in a question and receive a finished answer. You collaborate, refine, and iterate.
- Purpose always comes before tools. The question isn’t which AI tool to use. It’s what you’re actually trying to achieve. That answer shapes everything else.
- The first output is roughly 80% of the way there. That remaining 20% is where the real work – and the real value – lives.
- Automation is about predictability, not convenience. Systems that run reliably regardless of who’s in the role are a structural competitive advantage.
- Communication is the most important skill in an AI-augmented workplace. The ability to articulate what you need clearly, and iterate on feedback, applies equally whether you’re working with a team or an AI model.
- Unlearning matters as much as learning. When technology changes fast, holding methods loosely and retuning quickly is more valuable than deep expertise in any single tool.
Stop Treating AI Like a Vending Machine
The One-Click Mentality Guarantees Disappointment
The vending machine mental model is everywhere. You press a button, something comes out. If the output isn’t good, you press a different button. The assumption is that there’s a magic prompt somewhere that produces exactly what you need on the first try.
That assumption is costing businesses real money – not because they’re using the wrong tools, but because they’ve got the wrong mental model entirely. When you expect one click to produce finished work, you either accept mediocre output or conclude AI isn’t useful. Both are wrong conclusions.
The more accurate framing: AI is a colleague you’re working through a brief with. You give direction, review what comes back, tell it what to keep and what to change, and repeat. The quality of the result depends almost entirely on the quality of that back-and-forth.
What Iterating with AI Actually Looks Like
A disciplined iteration process looks like this:
- Start expecting ~80%. The first response will be in the right direction. It won’t be finished. That’s normal, not a failure.
- Give precise feedback. Be specific about what’s working and what isn’t. ‘Keep this section, change the tone of this part, expand on this point.’ Vague feedback produces vague revisions.
- Target one thing at a time. Ask AI to adjust one element rather than rewriting everything. Surgical changes are easier to evaluate than wholesale rewrites.
- Repeat until it matches your intent. Three to five rounds is common for complex tasks. That’s not inefficiency – that’s how good work gets made.
Here’s something most people don’t expect: you don’t always need to know exactly what you want before you start. If the output you need is fuzzy, use AI to help clarify it first. Ask what considerations matter, what questions need answering, what the finished result could look like. The clarification is part of the collaboration.
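The shape of that back-and-forth can be sketched in a few lines. This is illustrative only: `ask_model` is a hypothetical stand-in for whichever AI tool you use, and here it simply labels each draft so the loop’s structure is visible.

```python
def ask_model(history):
    # Placeholder: a real call would send `history` to an AI model.
    user_turns = len([m for m in history if m["role"] == "user"])
    return f"draft v{user_turns}"

def iterate(initial_prompt, feedback_rounds):
    """One structured cycle: prompt, review the draft, one targeted change per round."""
    history = [{"role": "user", "content": initial_prompt}]
    draft = ask_model(history)
    for feedback in feedback_rounds:
        history.append({"role": "assistant", "content": draft})
        history.append({"role": "user", "content": feedback})  # one change at a time
        draft = ask_model(history)
    return draft
```

The point of the sketch is the loop, not the model call: each round carries the full conversation forward, and each piece of feedback targets one thing.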
Google’s TCLEI Framework: A Structure That Actually Works
One of the most practical frameworks for consistent AI prompting comes from Google. It’s called TCLEI, and it reframes prompting as a process rather than a guessing game.
The TCLEI Framework
T – Task: What specific job needs doing? Be explicit about what you want.
C – Context: What background does AI need to understand your situation properly?
L – References: What does the output need to look like? Format, length, tone, audience.
E – Evaluate: Assess the result against what you actually needed.
I – Iterate: Refine based on what’s missing. Repeat until it’s right.
What this framework does well is force clarity before you reach for a tool. Most poor AI outputs come from vague prompts – and vague prompts come from people who haven’t yet figured out what they actually need. TCLEI makes you work that out first.
The evaluate and iterate steps aren’t optional extras bolted on at the end. They’re the core of the process. The prompt is just the opening move.
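If it helps to make the first three letters concrete, here is a minimal sketch of a TCLEI-shaped prompt as a reusable structure. The field names are illustrative – TCLEI is a thinking discipline, not an API – and evaluate and iterate happen after the response comes back, not inside the prompt.

```python
from dataclasses import dataclass

@dataclass
class TCLEIPrompt:
    task: str        # T – the specific job to be done
    context: str     # C – background the model needs
    references: str  # L – format, length, tone, audience

    def render(self) -> str:
        return (
            f"Task: {self.task}\n"
            f"Context: {self.context}\n"
            f"Output requirements: {self.references}"
        )
```

Writing the prompt this way forces you to answer the clarity questions before you reach for a tool, which is the framework’s real value.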
Why Automation Is About Predictability, Not Productivity
The Real Reason to Automate
Most people approach automation from a personal efficiency angle – I want to spend less time on repetitive tasks. That’s a reasonable motivation, but it’s a narrow one. The stronger case for automation is organisational predictability.
The question worth asking isn’t ‘how do I make my job easier?’ It’s ‘how do I build a system that works correctly regardless of who’s doing this job next month?’ In environments where team members change – and most do – that distinction is enormous.
That framing changes what you automate and why. Onboarding notifications, date-triggered tasks, performance data routing, scheduled reports – these aren’t just convenient to automate. They’re genuinely better handled by a system than by a person remembering to do them.
What to Automate and What to Protect
The line between automatable work and human work is worth being deliberate about:
- Automate: Anything with a defined pattern and a repeating trigger. Date-based tasks, data routing, scheduled communications, report distribution. If it looks the same every time, a system should handle it.
- Protect: Human judgment, relationship-building, organisational design, creative problem-solving, and any decision that genuinely requires context and nuance.
The underlying principle: if a person is spending time on pattern-based, repetitive work, their competitive value is being eroded. Machines are better at that. People need to be doing the things machines genuinely can’t.
Automating the predictable isn’t about cutting headcount. It’s about redirecting the people you have towards work that actually requires them.
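The date-triggered tasks mentioned above reduce to very simple logic. This is a hedged sketch under assumed data shapes (name, trigger date, sent flag); in practice the equivalent check would run on a scheduler or time-driven trigger inside your existing platform, not be called by hand.

```python
from datetime import date

def tasks_due(schedule, today):
    """schedule: list of (name, trigger_date, already_sent). Returns names now due."""
    return [name for name, trigger_date, sent in schedule
            if trigger_date <= today and not sent]
```

The simplicity is the argument: if the rule fits in two lines, a person should not be the one remembering to apply it.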
A Practical Example: The Internal Knowledge Bot
One automation worth knowing about: an internal chatbot connected to a company’s documentation wiki, using an AI model to summarise relevant content and return answers with links to source documents.
No external subscription. No data leaving the company environment. Built within an existing Google Workspace setup using session-based authentication, so no one needs a new login. Cost close to zero.
That last point is more important than it sounds. Tool choice should align with your organisation’s data governance and cost structure, not just technical capability. A powerful external tool that solves one problem while introducing three others – data risk, access management, ongoing licence cost – is a net loss.
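To make the knowledge-bot pattern concrete, here is a rough sketch of its retrieval step: score wiki pages against a question and return the best matches with their links, ready for a model to summarise. This is an assumption-laden toy – keyword overlap standing in for proper search or embeddings – meant only to show the shape of the pipeline.

```python
def top_pages(question, pages, k=2):
    """pages: list of (title, url, text). Returns the k best (title, url) pairs."""
    terms = set(question.lower().split())
    scored = []
    for title, url, text in pages:
        overlap = len(terms & set(text.lower().split()))
        if overlap:
            scored.append((overlap, title, url))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [(title, url) for _, title, url in scored[:k]]
```

Returning source links alongside the answer is the part worth copying: it keeps the bot honest and lets people verify against the original documentation.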
A Practical Framework for AI-Augmented Work
Whether you’re using AI tools yourself or evaluating someone else’s AI capability, this is the framework I’d use:
- Define the purpose first. Before opening any AI tool, answer: what outcome do I need, and how will I know when I’ve achieved it? This sounds obvious. Most people skip it.
- Map the output. What does the finished result actually look like? Format, length, tone, audience. If you can’t describe it clearly, AI can’t produce it.
- Choose the tool last. The tool should follow the purpose, not the other way around. Learning a tool before knowing why you need it is a reliable way to waste time.
- Iterate with structure. Use TCLEI. Start expecting 80% output. Give specific feedback. Change one thing at a time. Repeat.
- Build the process, not just the output. A single good AI response is a data point. A repeatable workflow that consistently produces good outputs is a business asset.
- Automate the predictable. Identify the tasks in your workflow that have clear patterns and triggers. Build them to run without manual intervention. Measure whether they’re working.
- Stay in the high-value lane. Use the time you’ve recovered to do the things AI genuinely can’t: read people, navigate ambiguity, make judgment calls, build trust.
What This Looks Like in Practice
For HR and People Teams
Automated onboarding triggers, date-based nudges, performance data routing to the right managers – these are low-complexity, high-value automations. Any team running Google Workspace or Microsoft 365 can build them without a dedicated developer.
The bigger win isn’t the automation itself. It’s what the team does with the time it recovers: actual people work – career conversations, culture thinking, organisational design.
For Marketing Teams
AI-assisted content at scale only works when someone has designed the workflow properly: brand voice embedded in the prompt, a review step before anything goes live, a feedback loop to improve quality over time.
Without that structure, you don’t get 10x content. You get 10x mediocre content – and mediocre content published at scale damages brand trust faster than it builds it.
For Leaders and Decision-Makers
McKinsey research consistently shows that AI transformation outcomes correlate with leader readiness more than with tool selection. Teams that experiment openly, treat early failures as data, and iterate quickly outperform those waiting for a perfect policy before they start.
The practical implication: try something in your own workflow before rolling it out to your team. Demonstrate the loop – attempt, adjust, improve – before expecting others to be comfortable doing the same.
Common Misconceptions Worth Addressing
- “AI removes the need to understand your work deeply.” It does the opposite. The clearer your understanding of what you’re trying to achieve, the better your AI outputs. Vague intent amplified by a powerful tool produces polished rubbish.
- “Heavy AI usage is evidence of AI competence.” Some organisations have tried tracking token consumption as a performance metric, with penalties for below-average usage. The incentive is trivially easy to game. Volume says nothing about value.
- “This is only for technical people.” The skills that matter most for working effectively with AI – defining problems clearly, giving structured feedback, evaluating outputs critically – are transferable. Non-developers can develop them. Developers who skip them will struggle too.
- “Once a system works, leave it alone.” Best practice in AI is a moving target. A workflow built 18 months ago may now be slower, more expensive, or less reliable than what’s achievable today. Review regularly.
- “You need to follow every AI trend.” You don’t. The principle worth holding onto: learn, unlearn, relearn. Hold specific methods loosely. Stay curious, not exhaustive. Depth of mastery in a few well-chosen areas beats shallow familiarity with everything.
FAQ
What does it mean to actually know how to use AI?
It means being able to consistently produce useful business outcomes from AI tools – not just impressive-looking demos. It requires being able to define a clear purpose, structure an effective prompt, evaluate output critically, and iterate until the result is genuinely fit for purpose. The emphasis is on process, not on tool knowledge.
What is the TCLEI framework and how do I use it?
TCLEI stands for Task, Context, References, Evaluate, Iterate. It’s Google’s framework for structuring effective AI prompts. Start by defining the task clearly, provide the relevant context, specify what the output should look like, evaluate what comes back against your original intent, then iterate. The evaluate and iterate steps are where most of the value is created.
How is automation different from using AI tools?
AI tools help you produce outputs – text, analysis, summaries, decisions. Automation builds systems that run without manual intervention: triggers, data flows, scheduled tasks, notifications. Both have distinct value. AI assists thinking. Automation handles repetition. Conflating them leads to using the wrong solution for the problem in front of you.
How do I tell the difference between genuine AI expertise and someone who’s good at demos?
Ask them to walk you through a project that didn’t work the first time – and what they did about it. Genuine practitioners have those stories readily available. Ask what the output looked like at step one versus step four. Ask where they hit limitations. Ask what they’d do differently. People who only show you polished results haven’t been doing the real work.
What’s the most important skill to develop for an AI-augmented workplace?
Communication. Not prompting specifically – communication broadly. The ability to articulate what you need precisely, give useful feedback, and refine based on what comes back. This skill applies identically whether you’re working with a team member, briefing a consultant, or directing an AI model. It’s the same underlying capability.
Should I build AI tools inside my existing tech stack or use external platforms?
Default to your existing stack where possible. Significant value is available through automations built entirely within Google Workspace or Microsoft 365 – at near-zero incremental cost, using existing accounts and security controls. External platforms may offer more capability, but they introduce data governance complexity, access management overhead, and ongoing cost. Add external tools only when your internal environment genuinely can’t do the job.
The Real Competitive Advantage
The businesses and professionals who build lasting value from AI aren’t the ones with access to the best tools. Those tools are increasingly accessible to everyone.
They’re the ones who build the best process around whatever tools they use – defining purpose before reaching for a tool, using structured frameworks to improve outputs, automating the predictable so human effort concentrates where it’s genuinely needed.
There’s a clear signal for who to trust with AI work: look for people who can explain the process behind the output, who talk about iteration and refinement rather than revelation, and who are upfront about what AI got wrong before they show you what it got right.
That’s what it means to work with people who actually know how to use AI, not just hype boosters. And in 2026, that distinction is worth more than any tool licence you’ll ever buy.
Ready to Build AI Capability That Actually Delivers?
If you’re a NZ marketer, HR professional, or business leader thinking seriously about AI – the first step isn’t a tool decision. It’s a process conversation.
- What outcomes do you actually need from AI?
- Where does the real friction sit in your current workflow?
- Who in your team or network has a track record of iteration, not just demonstration?
Start there. The tools are the easy part.

