Part 3: The input imperative: Why smart businesses measure activities, not outcomes
Part 3 of the Growth Operating System series
In Parts 1 and 2, we explored why systematic performance management matters and introduced two powerful frameworks—Balanced Scorecards and OKRs. Now comes the crucial question: what should you actually measure?
Most businesses get this wrong. They measure outcomes—revenue, profit, customer count—and set targets based on what they want to achieve. Then they’re surprised when targets are missed, or frustrated when good people deliver poor results through no fault of their own.
Today, we’re tackling the single most important principle in performance measurement: the difference between activities and outcomes, and why understanding it transforms both your results and your culture.
The fundamental problem: You can’t control outcomes
Here’s the uncomfortable truth: you cannot control outcomes. You can only control inputs.
Let’s make this concrete with a scenario every field services business understands:
You set your estimator a target: “Win 100 new jobs this quarter.” That’s an outcome. It sounds reasonable. It’s measurable. But here’s what your estimator can’t control:
- Whether the customer actually needs the work done this quarter or next
- Whether a competitor undercuts them (possibly at an unsustainable margin)
- Whether economic conditions make customers delay projects
- Whether the projects that come through are even suitable for your business
- Whether your reputation in the market makes customers predisposed to choose you
Your estimator can do everything right—respond quickly, produce accurate quotes, follow up diligently, build relationships—and still miss the target because of factors outside their control.
Conversely, another estimator might hit the target simply because they happened to get easier leads, or because a major competitor went bust, or because market conditions were favourable. Did they actually perform better? The outcome suggests yes. The reality might be no.
When you measure and reward outcomes, you’re partly measuring luck.
The solution: Measure what you can control
Smart businesses flip this around. They measure and reward the activities that drive outcomes—the inputs people can actually control.
Let’s revisit that estimator example. What can they control?
- Response time: Sending quotes within 24 hours of enquiry
- Quote quality: Conducting proper site surveys rather than desk estimates
- Follow-up discipline: Making contact within 48 hours of sending the quote
- Relationship building: Logging detailed notes and understanding customer needs
- Pipeline management: Keeping a healthy ratio of quotes at different stages
These are inputs. They’re entirely within the estimator’s control. And here’s the key: if you know your data, you can quantify which inputs drive the outcomes you want.
Building your input metrics: The reverse engineering process
Let’s work through a complete example using our sales scenario:
Step 1: Start with the desired outcome
Target: 100 new jobs this quarter
Step 2: Understand your conversion rates
Historical data shows:
- Quote-to-order conversion rate: 25%
- Therefore, to get 100 orders, we need 400 quotes
Step 3: Work backwards further
To generate 400 quotes, what activities drive that?
- Enquiry-to-quote conversion: 80% (some enquiries aren’t suitable), so we need 500 enquiries
- Site visit requirement: around half of quotes need a proper site survey (the rest can be priced remotely), so roughly 200 site visits
- Referral generation: follow-up calls convert around 15% of quotes into an additional opportunity from completed jobs, so roughly 60 referral opportunities
Step 4: Identify controllable inputs
Now you have measurable activities:
- Conduct 200 site visits this quarter (50 per month)
- Send 400 quotes within 24 hours of site visit/enquiry
- Make follow-up contact on 100% of quotes within 48 hours
- Log detailed notes on 100% of customer interactions
- Generate 60 referral opportunities from completed jobs
Step 5: Set the incentive
Bonus/recognition based on:
- 70% weighted to activity metrics (site visits completed, quotes sent on time, follow-ups completed)
- 30% weighted to outcome (conversion rate maintained or improved)
Notice what this does: it focuses your estimator on the behaviours that drive results, whilst still keeping an eye on quality (the conversion rate element ensures they’re not just pumping out poor quotes to hit numbers).
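The back-calculation in Steps 1–3 is just division up the funnel. A minimal sketch in Python (the function name and the conversion rates are illustrative, taken from the historical figures above):

```python
import math

def required_inputs(target_orders, quote_to_order, enquiry_to_quote):
    """Work the sales funnel backwards: given a target order count and
    historical conversion rates, how many quotes and enquiries are implied?"""
    quotes = math.ceil(target_orders / quote_to_order)
    enquiries = math.ceil(quotes / enquiry_to_quote)
    return quotes, enquiries

quotes, enquiries = required_inputs(
    target_orders=100,      # Step 1: the desired outcome
    quote_to_order=0.25,    # Step 2: 25% of quotes convert to orders
    enquiry_to_quote=0.80,  # Step 3: 80% of enquiries are quotable
)
print(quotes, enquiries)  # 400 500
```

The same three-line calculation works for any funnel in the business, provided you actually know your conversion rates rather than guessing them.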
Going even deeper: The second-order inputs
The most sophisticated businesses don’t stop at first-order inputs. They ask: “What drives the quality of those inputs?”
Let’s take “send 400 quotes” as our input metric. But not all quotes are equal. What makes a good quote?
- Proper site survey completed (not a desk estimate)
- Customer requirements fully documented
- Accurate materials list and specifications
- Detailed breakdown of labour hours
- Clear payment terms and project timeline
- Professional presentation
These second-order inputs are the activities that make the first-order inputs effective. A field services business might measure:
- Site survey completion rate: Percentage of quotes preceded by a proper survey
- Quote detail score: Does the quote include all required elements?
- Customer needs assessment: Did we ask the right discovery questions?
Now you’re not just measuring quote volume—you’re measuring quote quality through the activities that create quality.
Real examples across different functions
Let’s see how this input-focused approach works across a field services business:
Operations Management
Poor metric: “Achieve 22% gross margin on all jobs”
- Partly outside your control (material price fluctuations, weather delays, customer variations)
Better metrics:
- Conduct pre-job technical reviews on 100% of complex installations
- Complete daily progress reports on all active sites
- Identify and log variations within 24 hours of occurrence
- Achieve 90% schedule adherence (jobs completed within estimated timeframe)
- Conduct post-job reviews within 48 hours of completion
These activities drive margin. You can measure them daily. People can control them.
Customer service
Poor metric: “Achieve 95% customer satisfaction score”
- You can’t directly control how customers feel
Better metrics:
- Answer 90% of calls within 3 rings
- Resolve or escalate 100% of complaints within 24 hours
- Complete follow-up calls on all completed jobs within 7 days
- Achieve first-time fix rate of 85% on reactive maintenance
- Log detailed notes on 100% of customer interactions
These activities create satisfaction. They’re measurable daily and entirely controllable.
Field engineers
Poor metric: “Complete 25 jobs per week”
- Doesn’t account for job complexity, travel time, or quality
Better metrics:
- Arrive within scheduled appointment window 95% of the time
- Complete vehicle checks and stock checks before first job (100%)
- Use correct PPE and follow safety procedures on every site (100%)
- Capture customer signature and photographic evidence on all completed work (100%)
- Identify and log additional work opportunities on 30% of visits
These activities drive both productivity and quality. An engineer knows exactly what good performance looks like.
The motivation transformation
Here’s where this approach becomes truly powerful: it transforms how your people feel about their work.
When you measure outcomes, people often feel:
- Helpless: “I did everything right but still missed my target”
- Cynical: “Steve hit his number because he got lucky with easier customers”
- Manipulated: “They’re just moving the goalposts again”
When you measure inputs, people feel:
- In control: “I know exactly what I need to do”
- Focused: “These are the activities that matter”
- Fairly treated: “Everyone’s measured on what they can actually control”
Your best performers—the ones doing the right things consistently—get recognised even when external factors are challenging. Your lucky performers get found out when their activity levels don’t match their results.
The quality safeguard
You might be thinking: “If I measure activities, won’t people just do high volumes of poor-quality work to hit their numbers?”
This is a legitimate concern, and it’s why you need quality metrics alongside volume metrics.
Remember our estimator example? We weighted 30% on conversion rate. If someone sends 400 quotes but only converts at 15% (below the 25% average), that’s a flag. They’re hitting activity numbers but not quality.
For field engineers, you might have:
- Activity metric: Complete assigned jobs (volume)
- Quality metrics: Customer satisfaction rating, callback rate, materials efficiency
The combination ensures people are doing the right things, in the right way.
Leading vs. lagging indicators
What we’re really talking about is the difference between leading and lagging indicators:
Lagging indicators tell you what happened:
- Revenue achieved
- Profit margin delivered
- Customer count at month-end
- Annual employee turnover
Leading indicators tell you what’s happening now and predict what’s coming:
- Quotes sent this week
- Jobs in progress vs. scheduled completion dates
- Customer satisfaction scores on recently completed work
- Employee engagement survey results
Lagging indicators are important—they tell you if your strategy is working. But they’re historical. By the time they show a problem, you’re already in trouble.
Leading indicators let you steer. They’re your early warning system. And crucially, they’re almost always activity-based (inputs) rather than outcome-based.
Bringing it all together: The complete picture
A well-designed performance management system uses both:
Strategic outcomes (lagging indicators): Set the direction and measure ultimate success
- “Achieve 25% revenue growth and 20% net margin”
Tactical inputs (leading indicators): Define the activities that drive those outcomes
- “Send 400 quotes with 90% including proper site surveys”
- “Achieve 95% schedule adherence on all projects”
- “Complete safety checks on 100% of jobs”
Your Balanced Scorecard (from Part 2) includes both types. Your OKRs focus people on the key inputs that will deliver strategic outcomes.
The data requirement
By now, you’ll have spotted the challenge: to measure inputs effectively, you need data. Daily or weekly data. Not P&L figures three weeks after month-end.
You need to know:
- How many quotes were sent this week
- What percentage included site surveys
- Current schedule adherence across all active jobs
- Customer satisfaction scores on recently completed work
- First-time fix rates on reactive maintenance
This data exists in your systems—job management software, CRM, scheduling tools, customer feedback. The problem is getting it all in one place, making it meaningful, and reviewing it regularly.
Which brings us neatly to Part 4…
What’s coming next
You now understand the frameworks (Balanced Scorecards and OKRs) and the principle (measure inputs, not just outcomes). But how do you actually implement this in a field services business without hiring a data team?
In our final post, we’ll get thoroughly practical:
- Why spreadsheets and manual reporting always fall short
- The data integration challenge that stops most businesses
- Why outsourced BI solves the SME dilemma
- Your realistic 90-day implementation roadmap
- What “good” looks like once it’s working
This is where theory meets practice. And where the businesses that implement systematic performance management pull away from those that stay stuck with month-end P&Ls.
Next in this series: Part 4: Building your performance engine: The practical guide to implementation
Previously: Part 2: Two frameworks that drive performance: Balanced Scorecard and OKRs
Find out more
Want to explore how systematic performance management could transform your business? We help field services businesses integrate their data, build meaningful dashboards, and implement frameworks that drive growth.
About the author
Sean Gorman is an Investor and Director at Vizora. A qualified corporate finance lawyer, Sean has spent 15 years in senior leadership roles spanning law, construction, and professional services. As CEO of a Private Equity-backed professional services firm, he led the business through a period where revenues grew by nearly 200%. Sean has a passion for performance improvement through data-driven decision-making.