The AI Fitness Score: How Moderna Measures Human-AI Collaboration
We are asking the wrong question about AI adoption. We want to know: “Are people using it?” But Brice Challamel from Moderna realized that’s like asking whether your employees have gym memberships instead of whether they’re actually getting fit.
So Moderna created something brilliant: an AI fitness score that measures not just whether people are using AI, but how effectively they’re collaborating with it. And the results speak for themselves – they’ve achieved 100% adoption among knowledge workers with genuinely high engagement levels.
Here’s what makes their approach so smart, and how you can adapt it for your own organization.
Why “Are They Using It?” Is the Wrong Question
Let’s be honest about what most AI adoption metrics actually measure. Login rates. Number of queries. Maybe time spent in the platform. These are vanity metrics that tell you almost nothing about whether AI is actually making people more effective.
You could have someone logging in daily, asking AI to write grocery lists, and your dashboard would show them as a “power user.” Meanwhile, someone else might use AI twice a week but leverage it for complex strategic analysis that saves hours of work. Traditional metrics would call the first person more successful.
Brice saw this problem early and knew Moderna needed a more sophisticated way to think about AI engagement. The breakthrough was recognizing that effective AI use looks a lot like fitness: it’s not just about showing up, it’s about the quality and consistency of your practice.
The Four AI Fitness Components That Actually Matter
Moderna’s AI fitness score combines four key elements, and understanding why each matters will change how you think about measuring AI success in your organization.
1. Volume of Interaction
This isn’t just counting messages. It’s recognizing that becoming genuinely skilled with AI requires substantial practice. Someone sending three thoughtful, complex queries is probably getting more value than someone sending thirty simple ones, but both volume and depth matter for building real capability.
2. Frequency of Use
Here’s where the fitness analogy really makes sense. Going to the gym once a month won’t get you in shape, and using AI sporadically won’t help you develop the intuitive sense of when and how to leverage it effectively. Consistent engagement is what builds the mental models and habits that make AI truly useful.
3. Leveraging Advanced Capabilities
This is where Moderna gets really clever. They specifically track whether people are using reasoning models – the AI capabilities that can handle complex, multi-step thinking. This matters because these are the tools that can do hours of work in minutes of inference. If someone isn’t using these capabilities, they’re missing the biggest productivity opportunities.
4. Connecting to Data
The most powerful AI applications happen when you can bring your organization’s specific context and information into the conversation. Moderna tracks whether people are connecting AI to company data because that’s where generic AI assistance becomes genuinely strategic insight.
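To make those four components concrete, here’s a minimal sketch of what tracking them per person could look like. To be clear, the field names and the idea of measuring “shares” of reasoning-model and data-connected usage are my assumptions for illustration, not Moderna’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class AIFitnessInputs:
    """Hypothetical per-person inputs for the four components.
    Field names are illustrative, not Moderna's actual schema."""
    messages_per_month: int       # 1. volume of interaction
    active_days_per_month: int    # 2. frequency of use
    reasoning_model_share: float  # 3. share of queries using reasoning models, 0-1
    data_connected_share: float   # 4. share of sessions grounded in company data, 0-1
```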
The Genius of the Hidden Formula
Here’s my favorite part of Moderna’s approach: everyone knows the four components, but nobody knows how they’re weighted or combined. Brice described it as being like a credit score – you understand the general factors, but you can’t game the specific algorithm.
This prevents the kind of metric manipulation that destroys the value of most measurement systems. People can’t just spam the system with meaningless interactions to boost their scores. Instead, they have to focus on genuinely improving their AI collaboration across all dimensions.
Even better, the formula is dynamic. As AI capabilities evolve and organizational needs change, Moderna adjusts the weightings. What constituted “good” AI use six months ago isn’t the same as what constitutes good use today. The scoring system evolves with the technology.
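For illustration, here’s one way a score like this could be combined, building on the AIFitnessInputs sketch above. The weights, targets, and normalization are all invented; Moderna’s real formula is deliberately undisclosed, and the whole point is that numbers like these would live server-side and get re-tuned as capabilities evolve.

```python
# Invented weights: in a real system these stay hidden from users and get
# re-tuned over time, the way a credit bureau adjusts its model.
WEIGHTS = {"volume": 0.2, "frequency": 0.3, "reasoning": 0.3, "data": 0.2}

def saturate(value: float, target: float) -> float:
    """Map a raw metric onto 0-1, capping at an assumed 'enough' level."""
    return min(value / target, 1.0)

def fitness_score(inputs: AIFitnessInputs) -> float:
    """Weighted blend of the four normalized components, scaled to 0-100."""
    components = {
        "volume": saturate(inputs.messages_per_month, target=200),
        "frequency": saturate(inputs.active_days_per_month, target=20),
        "reasoning": inputs.reasoning_model_share,
        "data": inputs.data_connected_share,
    }
    return 100 * sum(WEIGHTS[k] * components[k] for k in WEIGHTS)
```

Notice the anti-gaming property: because volume saturates at a target, spamming messages past that point buys nothing. The only way to raise the score is to improve across all four dimensions.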
From Individual Scores to Organizational Intelligence
The real power of this system becomes clear when you zoom out from individual scores to organizational patterns. Moderna doesn’t just give people personal fitness scores – they create visibility into team and departmental performance.
This is where the “red streak” concept becomes powerful: leaders can see when low AI engagement starts with a specific manager and cascades down through their entire organization. Instead of wondering why adoption is lagging in certain areas, they can surgically apply change management interventions exactly where they’re needed.
Imagine being able to look at your organization and immediately see that marketing is highly engaged with AI, sales is moderately engaged, but finance hasn’t really figured it out yet. You’d know exactly where to focus your support and resources.
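As a sketch of how that visibility could be computed: roll scores up a reporting tree and flag managers whose whole subtree sits below a threshold. The org chart, scores, and threshold below are toy data I invented; a real implementation would read from your HR system.

```python
from statistics import mean

# Toy org chart and fitness scores, invented for illustration.
REPORTS = {
    "vp_finance": ["fin_mgr_a", "fin_mgr_b"],
    "fin_mgr_a": ["analyst_1", "analyst_2"],
    "fin_mgr_b": ["analyst_3"],
}
SCORES = {"vp_finance": 35, "fin_mgr_a": 30, "fin_mgr_b": 40,
          "analyst_1": 25, "analyst_2": 32, "analyst_3": 38}

def subtree_scores(person: str) -> list[int]:
    """Collect a person's score plus every score below them in the org chart."""
    scores = [SCORES[person]]
    for report in REPORTS.get(person, []):
        scores.extend(subtree_scores(report))
    return scores

def red_streaks(threshold: float = 50) -> list[str]:
    """Managers whose own score AND whole-subtree average fall below threshold."""
    return [
        mgr for mgr in REPORTS
        if SCORES[mgr] < threshold and mean(subtree_scores(mgr)) < threshold
    ]

print(red_streaks())  # ['vp_finance', 'fin_mgr_a', 'fin_mgr_b'] in this toy data
```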
What This Means for Your Measurement Strategy
Most organizations are still stuck measuring AI adoption like they measured software adoption in the 1990s. Install rates, login frequencies, basic usage statistics. But AI isn’t just software – it’s a new way of working that requires skill development and behavior change.
If you want to create a measurement approach that actually drives better outcomes, start thinking about these questions:
- Are people developing genuine competency with AI, or just checking boxes?
- Are they using AI for high-value work, or just convenient tasks?
- Are they bringing your organization’s unique context into their AI interactions?
- Are they staying current with evolving AI capabilities?
You don’t need to copy Moderna’s exact formula, but you do need to measure things that matter for effectiveness, not just engagement.
Building Your Own AI Fitness Framework
Here’s how you could adapt this approach for your organization, regardless of size:
Start with the behaviors that drive value in your specific context. If you’re a consulting firm, maybe you care about whether people are using AI to synthesize research from multiple sources. If you’re in manufacturing, maybe it’s about using AI for predictive maintenance insights.
Combine leading and lagging indicators. Track both the activities that should drive results (like using advanced AI capabilities) and the results themselves (like faster project completion or higher quality outputs).
Make the scoring criteria visible but not gameable. People should understand what good AI use looks like, but they shouldn’t be able to manipulate scores without actually improving their effectiveness.
Update your standards as capabilities evolve. What constitutes “advanced” AI use today will be basic table stakes in six months. Your measurement system should push people to stay current.
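One way to prototype all four ideas at once is a versioned scoring config: context-specific behaviors, paired leading and lagging indicators, published criteria with private weights, and a version stamp so standards can ratchet up over time. Every name, criterion, and weight below is an assumption for a hypothetical consulting firm, not a recommendation of specific values.

```python
# Illustrative, versioned framework config for a hypothetical consulting firm.
# Criteria are published to employees; weights stay private so the score
# can't be gamed without actually improving.
FRAMEWORK = {
    "version": "2025-Q3",  # bump as today's "advanced" use becomes table stakes
    "indicators": [
        {
            "name": "multi_source_synthesis",
            "kind": "leading",  # activity that should drive results
            "public_criterion": "Uses AI to synthesize research across 3+ sources",
            "private_weight": 0.4,
        },
        {
            "name": "deliverable_cycle_time",
            "kind": "lagging",  # the result itself
            "public_criterion": "Turnaround time versus team baseline",
            "private_weight": 0.6,
        },
    ],
}
```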
The Cultural Shift This Creates
What I find most interesting about Moderna’s approach is how it changes the conversation around AI from compliance to capability building. Instead of asking “Are people using the tools we bought?”, leaders start asking “Are our people getting better at human-AI collaboration?”
This shift matters because it acknowledges that effective AI use is a skill that develops over time, not a binary switch you flip. It creates psychological safety for people to experiment and learn, while still maintaining accountability for genuine engagement.
It also sends a clear message about organizational priorities. When you measure something this systematically, you’re telling people it matters for their success and career development. You’re making AI fluency part of what it means to be effective in your organization.
The Competitive Advantage Nobody’s Talking About
In the end, companies like Moderna are really building organizational capabilities that will be incredibly difficult to replicate. Every month their people get more skilled at human-AI collaboration. Every quarter their measurement systems get more sophisticated. Every year the gap between them and organizations that are still figuring out basic adoption gets wider.
The AI tools themselves will become commoditized. The models will be available to everyone. But the organizational muscle for leveraging AI effectively at scale is being built right now by companies that are measuring and optimizing for the right things.
Are you already building the measurement systems and cultural practices that will make you genuinely competitive in a world where AI collaboration is a core business skill? Because while everyone else is arguing about which AI platform to buy, companies like Moderna are quietly building workforces that can leverage whatever AI capabilities emerge next. And that’s a competitive advantage that money can’t buy.
Ready to move beyond basic adoption metrics and start measuring what actually matters for AI effectiveness? I’d love to help you design measurement approaches that drive real capability building in your organization. Let’s connect on LinkedIn to explore what this could look like for your specific context.