The Great AI Overconfidence Crisis: Why 54% Think They’re AI Experts (But Only 10% Actually Are)
There’s a confidence crisis happening in workplaces everywhere, and it has nothing to do with imposter syndrome. According to Section’s 2025 AI Proficiency Report, 54% of knowledge workers believe they’re proficient at using AI. However, when their actual skills were tested, only 10% truly were.
This isn’t just a minor miscalibration. We’re looking at a 5x overestimation gap that’s quietly undermining AI initiatives across organizations worldwide.
The Dunning-Kruger Effect Goes Digital
If you’ve ever watched someone confidently give terrible driving directions or seen a relative pontificate about technology they clearly don’t understand, you’ve witnessed the Dunning-Kruger effect in action. It’s the cognitive bias where people with limited knowledge overestimate their competence, while actual experts tend to underestimate theirs.
Now this psychological phenomenon has found its perfect playground: artificial intelligence.
The AI Proficiency Report tested over 5,000 knowledge workers on three key areas:
- their actual usage patterns
- their knowledge of AI capabilities and limitations
- their prompting skills
The results were eye-opening. While the majority of workers rated themselves as intermediate or advanced AI users, their performance told a very different story.
The pattern becomes even clearer when you look at specific groups. A staggering 73% of “AI experimenters” (people who use AI occasionally but with limited skill) rate themselves as proficient. Meanwhile, 46% of “AI novices” also consider themselves proficient users.
Most telling of all? The actual AI experts in the study were more likely to downgrade their abilities than overestimate them.
Why This Overconfidence Is Dangerous
You might think, “So what if people are a bit overconfident? At least they’re trying to use AI.” But this overconfidence creates real problems that ripple through organizations.
First, it creates a false sense of security. When employees believe they’re already good at using AI, they stop learning. They don’t seek out training, they don’t experiment with new approaches, and they certainly don’t ask for help. This means they get stuck using AI in the most basic ways (typically as a glorified search engine or writing assistant), missing out on its real strategic potential.
The research backs this up. Most workers still use AI primarily as an assistant (54%) or a creator (42%), while the more valuable applications, thought partnership (40%) and research (34%), remain underutilized. It’s like buying a sports car and only driving it in first gear.
Second, overconfidence leads to poor risk assessment. People who think they understand AI better than they do are more likely to use it inappropriately: sharing confidential information, accepting outputs without verification, or applying AI solutions to problems where they’re not suitable.
Third, it makes it harder for organizations to identify who actually needs help. If everyone claims to be proficient, how do you know where to focus your training resources?
The Skills That Everyone Thinks They Have
The gap between perception and reality becomes stark when you look at specific skills. Take prompting, the art of communicating effectively with AI systems. The average prompting score across all workers was just 33 out of 100. That’s a failing grade by any measure, yet most people believe they’re doing just fine.
This matters because prompting is fundamental to everything else. If you can’t communicate clearly with AI, you can’t get good results. And if you can’t get good results, you can’t capture the productivity gains that make AI worthwhile.
I see this disconnect all around me. Someone will show me their “amazing” AI workflow, and it turns out they’re using ChatGPT like a basic search engine, typing one-sentence questions and accepting whatever comes back. They’re thrilled with the results, but they’re missing 90% of what’s possible.
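To make that contrast concrete, here’s a minimal sketch of the difference. Both prompts are hypothetical, written for illustration; neither comes from the report.

```python
# Illustrative only: a "search engine" prompt vs. a structured one.
# Both prompts are hypothetical examples, not taken from the report.

shallow_prompt = "Write a product launch email."

structured_prompt = """You are a senior product marketer at a B2B SaaS company.

Task: Draft a launch email for our new analytics dashboard.

Context:
- Audience: existing customers on the mid-tier plan
- Goal: drive upgrades, not just awareness

Constraints:
- Under 150 words, one clear call to action
- Direct tone, no hype words
- Before writing, list two assumptions you're making about the audience."""

# Same model, same task; the second prompt supplies the role, context, and
# constraints that the first one leaves the model to guess.
for name, prompt in [("shallow", shallow_prompt), ("structured", structured_prompt)]:
    print(f"--- {name} ({len(prompt.split())} words) ---\n{prompt}\n")
```

The point isn’t length for its own sake: role, context, and constraints give the model something concrete to optimize against, which is exactly what the one-sentence version never provides.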
The knowledge gaps are equally concerning. Understanding of AI fundamentals (how these systems work, what their limitations are, how to identify bias) has actually decreased over the past six months, even as usage has increased. People are rushing to use tools they don’t really understand.
Why Self-Assessment Fails with AI
Traditional skills are easier to self-assess because the feedback loops are immediate and obvious. If you’re bad at presentations, the audience reaction tells you. If you’re struggling with Excel, the spreadsheet won’t do what you want.
AI is different. It’s designed to be helpful and agreeable. Even when you’re using it poorly, it will still give you something that looks reasonable. The AI won’t tell you that your prompt was terrible or that there’s a much better way to approach your problem. It just tries its best with what you’ve given it.
This creates an illusion of competence. People get outputs that seem useful, so they assume they’re doing everything right. They don’t realize they could be getting outputs that are 10x more valuable with better techniques.
There’s also the “magic box” problem. Because AI feels mysterious and powerful, people assume that any interaction with it demonstrates technical sophistication. Using ChatGPT to rewrite an email feels impressive, even if you’re only scratching the surface of what’s possible.
The Real Markers of AI Proficiency
So what does actual AI proficiency look like? The research identified several clear differentiators between people who think they’re good at AI and people who actually are.
Real proficiency shows up in usage patterns. Genuine AI experts use it for strategic thinking, not just task automation. They leverage AI for research, analysis, and problem-solving, not just content creation.
It shows up in their understanding of limitations. Proficient users know when NOT to use AI, understand the risks of bias and hallucination, and have strategies for verification and validation.
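What “strategies for verification” look like will vary by team, but even a crude mechanical pass helps. Here’s a minimal, hypothetical sketch that surfaces the checkable claims in an AI draft so a human reviews them before anything ships; the heuristics are my assumptions, not a standard:

```python
import re

# Hypothetical heuristic: flag sentences in an AI draft that carry
# checkable claims (percentages, years, attributions) for manual review.
# Deliberately crude: a filter for human attention, not a fact-checker.
CLAIM_PATTERNS = [
    r"\d+(\.\d+)?%",                # percentages
    r"\b(19|20)\d{2}\b",            # years
    r"\baccording to\b",            # attributed claims
    r"\bstud(y|ies)\b|\breport\b",  # cited research
]

def flag_for_review(draft: str) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", draft)
    return [s for s in sentences
            if any(re.search(p, s, re.IGNORECASE) for p in CLAIM_PATTERNS)]

draft = ("Adoption doubled in 2024. According to one study, 54% of workers "
         "self-report proficiency. The weather was nice.")
for sentence in flag_for_review(draft):
    print("VERIFY:", sentence)
```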
Most importantly, it shows up in results. Truly proficient users save significant time (often 8-12 hours per week or more) while maintaining or improving quality. They can point to specific workflows that have been transformed, not just tasks that have been slightly optimized.
Bridging the Gap
The solution isn’t to shame people for overestimating their abilities. Instead, we need to create better feedback mechanisms and learning opportunities.
Start with objective assessment. Just as you wouldn’t let someone perform surgery based on their confidence level alone, organizations need ways to actually measure AI skills rather than relying on self-reporting.
Create safe spaces for experimentation and failure. One reason people overestimate their skills is that they haven’t been challenged to use AI in more sophisticated ways. Give them complex problems to solve and expert guidance on better approaches.
Focus on specific, measurable outcomes. Instead of asking “Are you good at AI?”, ask “How much time did AI save you last week?” and “Can you show me your three most effective prompts?” Concrete examples reveal the gap between perception and reality much more effectively than abstract self-assessment.
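If you want to operationalize that second question, a simple rubric beats gut feel. Below is a hypothetical sketch that scores a prompt against the ingredients strong prompts tend to share; the criteria, cue words, and weights are illustrative assumptions, not the report’s scoring methodology:

```python
# Hypothetical rubric: score a prompt 0-100 on whether it supplies the
# ingredients strong prompts usually include. Criteria, cue words, and
# weights are illustrative assumptions, not the report's methodology.
RUBRIC = {
    "role":        (20, ["you are", "act as"]),
    "context":     (20, ["context:", "audience", "background"]),
    "constraints": (20, ["under", "must", "do not", "limit"]),
    "format":      (20, ["format", "bullet", "table", "steps"]),
    "examples":    (20, ["for example", "such as", "like this"]),
}

def score_prompt(prompt: str) -> int:
    text = prompt.lower()
    return sum(points for points, cues in RUBRIC.values()
               if any(cue in text for cue in cues))

print(score_prompt("Write a blog post about AI."))            # 0
print(score_prompt("You are an editor. Context: a B2B blog, "
                   "audience of CTOs. Keep it under 500 words, "
                   "in bullet format."))                      # 80
```

Even a toy scorer like this makes the conversation concrete: people can see what their prompts are missing instead of debating how proficient they feel.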
Most importantly, normalize the learning process. The field is evolving so quickly that everyone (even actual experts) should expect to feel like beginners on a regular basis. The goal isn’t to achieve permanent expertise, but to stay curious and keep improving.
The Opportunity Hidden in Overconfidence
Here’s the thing about overconfidence: it’s often just misdirected enthusiasm. The people who think they’re AI experts aren’t delusional; they’re excited about the potential and eager to improve. That enthusiasm is exactly what organizations need to fuel real AI adoption.
The key is channeling that confidence in the right direction. Instead of letting people plateau at basic usage, use their enthusiasm to motivate deeper learning. Show them what true proficiency looks like, and most will be eager to bridge the gap.
The alternative (letting this overconfidence persist) is much worse. Organizations will continue to invest in AI tools and training while seeing minimal returns, because their people genuinely believe they’re already doing everything right.
The AI Proficiency Report shows us that the workforce is ready for transformation. Employee excitement about AI has doubled in just six months, and 65% would be disappointed if they could no longer use AI at work.
That’s a foundation we can build on. We just need to help people understand that being excited about AI and being good at using AI are two very different things… and that’s perfectly okay.
Ready to move beyond AI overconfidence and develop real proficiency? I work with teams to assess their actual AI skills and create learning paths that turn enthusiasm into expertise. Let’s connect on LinkedIn to discuss how we can help your team bridge the confidence gap.