Insights

19.2.2026

Miikka Kataja

How can I use AI to improve employee performance and engagement?

How AI improves employee performance by enabling better manager feedback, not replacing human judgment—starting with first-time manager enablement.

TL;DR

  • AI can meaningfully improve employee performance, but only if it's used to enable managers to do better work, not to replace the human judgment that makes feedback land.
  • The biggest gap in most companies isn't data or tooling; it's that first-time managers lack the scaffolding to give useful, consistent feedback without significant support.
  • Continuous feedback isn't about increasing review frequency. It's an architectural shift that brings performance conversations into the tools where work actually happens.
  • Legacy tools like Lattice, Leapsome, and 15Five address the same problems the same way. AI-native approaches are architecturally different, not just faster versions of the same thing.
  • If you're setting up or rebuilding performance management, start with manager enablement as your design principle, not compliance or data collection.

What does "using AI to improve performance" actually mean?

Using AI to improve employee performance means giving managers better input, prompts, and visibility so they can make faster, more consistent decisions about their people. It does not mean automating the human relationship between a manager and their direct report.

According to Mercer, 32% of companies are considering an AI-enabled performance feedback process in 2025.

Most articles on this topic lead with sentiment analysis dashboards and predictive attrition scores. Those things exist, and some of them are useful. But they miss the actual problem most People Leaders face: managers who do not know how to give feedback, do not know what good looks like for their team, and are not using the PM system already in place. AI applied on top of a broken system makes the system faster, not better.

The more useful framing is this: where in the performance cycle does human judgment break down, and can AI provide just enough scaffolding to keep things moving? The answer, for most scaling companies, starts with manager capability.


Why are first-time managers the real bottleneck?

First-time managers are often the single biggest constraint on team performance, and they are also the group least supported by traditional PM tools. Most of them were promoted for being excellent individual contributors. They have never been trained to give structured feedback, set expectations clearly, or run a 1:1 with any real purpose beyond status updates.

The problem compounds at scale. When a company grows from 30 to 150 people, the number of managers increases, and most are new. People Leaders we talk to consistently describe the same situation: a cohort of junior team leads who want to do right by their people but do not know how, and a PM system that assumes they already do.


"Becoming a manager for the first time is one of the hardest transitions in a career. The skills that made you a great individual contributor are often the least relevant to leading a team."

First Round Capital

This is where AI becomes genuinely useful. Not as a replacement for manager judgment, but as a scaffold. AI can suggest feedback structures, surface patterns in 1:1 notes, prompt managers to address a topic they have been avoiding, and help them translate vague impressions ("she's been off lately") into specific, actionable observations. It can also flag when a manager has not checked in with a direct report in three weeks, without making it feel like surveillance.

One People Lead at a 150-person tech company described the shift this way: "We stopped trying to train managers in workshops and started embedding the prompts directly into the workflow. They're not reading a guide; they're just responding to a Slack message that's already half-structured for them."


What does AI coaching for managers actually look like in practice?

AI coaching for managers works best when it is invisible. The goal is not to give managers a new tool to learn; it is to surface the right prompt at the right moment within a tool they already use.

In practice, this looks like a few distinct interventions. First, expectation scaffolding: helping a manager define what "good" looks like for each role on their team, so feedback has something to anchor to. Many first-time managers skip this step because building competency frameworks feels like a six-month project. AI can compress that down to a structured conversation that outputs a working framework in an hour.

Second, feedback drafting: when a manager needs to give feedback after a specific event, AI can help them move from "I'm not sure how to say this" to a draft that is specific, non-personal, and tied to a clear outcome. The manager still edits, still delivers it, still owns the relationship. The AI removes the blank-page problem.

Third, 1:1 structure: many managers run 1:1s with no agenda, which end up as status updates. AI can suggest topics based on recent feedback threads, flag unresolved items, and prompt the manager to close any open loops.

The consistent theme across all three is that the AI handles the tedious scaffolding work, while the manager does the actual human work of leading their team.


How is this different from just running more frequent reviews?

Frequent reviews and continuous feedback are not the same thing, and conflating them is one of the most common mistakes in PM design. Running quarterly reviews instead of annual ones is a cadence change. Continuous feedback is an architectural change.

The difference is where feedback lives. In a traditional PM system, feedback is handled through a portal. Someone logs in, fills out a form, and submits it. The manager gets a notification. Maybe they read it before the review cycle. Maybe not. The feedback is structurally disconnected from the work that generated it.

Continuous feedback means feedback lives where work conversations happen. For most teams, that is Slack. A quick reaction to a piece of work, a note after a presentation, a prompt when a project closes. These are not mini-reviews; they are ambient signals that accumulate over time and make the formal review far more accurate when it does happen.

"The most effective performance cultures treat feedback as a continuous conversation, not a scheduled event. When feedback is separated from the work it describes, it loses most of its value."

Josh Bersin, Irresistible: The Seven Secrets of the World's Most Enduring, Employee-Focused Organizations

One People Leader building this kind of system described the design goal as making feedback feel "like a quick Slack task, not a tax return." That framing matters. If feedback requires logging into a separate tool, navigating a form, and writing three paragraphs, it will not happen consistently. If it is a two-line response to a Slack prompt that took fifteen seconds, it will.

The architectural shift is that feedback lives in Slack, is structured by AI, and surfaces in dashboards only when someone needs the aggregated view. Managers are not doing more work; they are doing less, with better outputs.
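As an illustration of that architecture, here is a minimal sketch of the "structured by AI" step. All names are hypothetical, and where a real system would call an LLM to tag and categorize the message, simple keyword matching stands in:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FeedbackRecord:
    """A structured record built from a short, ambient Slack reply."""
    author: str
    subject: str
    text: str
    created_at: datetime
    tags: list = field(default_factory=list)

# Illustrative tagging rules; a production system would use an LLM here.
KEYWORD_TAGS = {"presentation": "communication", "deadline": "delivery", "bug": "quality"}

def structure_feedback(author: str, subject: str, text: str) -> FeedbackRecord:
    # Derive tags from the free-text reply so it can later be
    # aggregated in a dashboard without extra manager effort.
    tags = sorted({tag for kw, tag in KEYWORD_TAGS.items() if kw in text.lower()})
    return FeedbackRecord(author, subject, text, datetime.now(), tags)

rec = structure_feedback("ana", "ben", "Great presentation, but we slipped the deadline.")
# rec.tags → ["communication", "delivery"]
```

The design point the sketch makes: the manager only types the two-line reply; everything else (structure, tags, timestamps) is derived for them.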


What performance data should People Leaders actually be tracking?

People Leaders should be tracking two categories of data: manager behavior data (are managers actually doing the people work?) and outcome data (is the work producing the right results?). Most legacy tools provide only the second category, and even then only through self-reported surveys.

Manager behavior data is more useful and more actionable. It tells you whether feedback is being given consistently, whether 1:1s are happening, whether expectations have been set for each role, and whether managers are closing the loops they open. This is not surveillance; it is infrastructure visibility. Just as an engineering lead tracks PR review times or deployment frequency, a People Lead needs to track whether the management practices that predict team performance are actually being implemented.

When leadership asks, "How are our managers doing?", this is the data that answers the question: not engagement survey scores or attrition rates, which are lagging indicators, but actual evidence of manager behavior, aggregated across the team and available without a three-month analytics project.

Several People Leaders we work with described the same moment: their CEO or board asked for a manager's performance update, and they had no data. They had feelings, anecdotes, and maybe some survey results. None of it was convincing. The teams that moved fastest to close that gap were the ones where leadership pressure coincided with a People Lead who was already frustrated by the lack of visibility.

The practical question is what to track. A useful starting set includes: feedback frequency per manager, 1:1 completion rates, expectation-setting coverage across roles, and whether feedback is being acted on (do development areas from one cycle show up as resolved in the next?). AI can surface all of this from natural workflow data without requiring managers to fill out additional forms.
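To make the starting set concrete, here is a small sketch of the aggregation involved, assuming a simple event log exported from the feedback workflow. The record shape and field names are illustrative, not any real tool's schema:

```python
from collections import defaultdict
from datetime import date

# Hypothetical events captured from the normal workflow (Slack prompts,
# calendar data), not from extra forms managers fill out.
events = [
    {"manager": "ana", "type": "feedback",   "on": date(2025, 5, 2)},
    {"manager": "ana", "type": "one_on_one", "on": date(2025, 5, 5)},
    {"manager": "ben", "type": "feedback",   "on": date(2025, 5, 6)},
]
scheduled_one_on_ones = {"ana": 4, "ben": 4}  # expected per period

def manager_metrics(events, scheduled):
    """Reduce the event log to per-manager behavior metrics."""
    feedback = defaultdict(int)
    held = defaultdict(int)
    for e in events:
        if e["type"] == "feedback":
            feedback[e["manager"]] += 1
        elif e["type"] == "one_on_one":
            held[e["manager"]] += 1
    return {
        m: {
            "feedback_count": feedback[m],
            "one_on_one_completion": held[m] / scheduled[m],
        }
        for m in scheduled
    }

metrics = manager_metrics(events, scheduled_one_on_ones)
# metrics["ana"] → {"feedback_count": 1, "one_on_one_completion": 0.25}
```

Expectation-setting coverage and loop-closure rates follow the same pattern: count the relevant events per manager and divide by what should have happened.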



Why do most PM tools feel the same, and what would actually be different?

Most PM tools feel the same because they were built on the same mental model: a structured review process, a portal to complete it in, and a dashboard to report it on. Lattice, Leapsome, 15Five, and their competitors all execute that model with different levels of polish. They are solving the same problem the same way.

The problem with that model is not the tools themselves; it is the assumption that the bottleneck is process structure. Give people a clear review template, a deadline, and a place to submit feedback, and they will do performance management well. That assumption is wrong. The bottleneck is manager capability and motivation, not process clarity.

An AI-native approach starts from a different assumption: managers will do good performance management if it is easy enough to fit within their existing workflow. The design goal is not to build a better portal; it is to remove the portal entirely and let feedback accumulate through ambient signals in the tools managers already use.

This is not a marginal improvement over Lattice. It is a different category. Instead of a system that managers have to use, you are building one they barely notice. The feedback still gets captured. The expectations still get set. The review still happens. But from the manager's perspective, they mostly just answered a few Slack messages.

For the sophisticated buyer, "better than Lattice" is not a compelling pitch. Different from everything is. And the difference is not in the feature list; it is in the underlying assumption about where performance management actually lives.



How should People Leaders think about enabling versus policing managers?

Enabling managers means giving them the tools, prompts, and visibility to do good people work without requiring them to become PM system experts. Policing managers means using the PM system to monitor compliance and escalate gaps. The distinction matters because it determines whether managers experience the system as support or as surveillance.

The enabling philosophy shows up in how the system is designed. Prompts come to managers rather than managers having to seek out a form. Feedback is suggested, not mandated. Dashboards are for the manager's own visibility first, and for People Leaders second. The system assumes managers want to do good work and need scaffolding, not that they need to be caught when they do not.

The policing philosophy shows up in reminder emails, completion rate reports, and executive dashboards that track who has not submitted their review. Those features are present in most legacy tools and are the ones managers hate most. They signal that the system does not trust them.

The practical difference in adoption is significant. When a PM system is built on the enabling philosophy, managers tend to use it because it makes their job easier. When it is built on a policing philosophy, managers use it because they have to and do the minimum required. The data quality and the outcomes are very different.

A useful test for any PM design decision: does this feature help the manager, or does it help the People Leader watch the manager? Features in the first category drive adoption. Features in the second category drive resentment.



What are the prerequisites before AI can actually help?

Before AI can improve performance management, a few structural conditions need to be in place. Without them, AI adds processing power to a system that is not producing useful inputs.

The first prerequisite is clarity about what good looks like for each role. AI can help draft competency frameworks, but someone has to decide what competencies matter. If a manager is giving AI-assisted feedback with no shared definition of what "strong performance" means for their team, the feedback will be specific but not directional. It will describe behavior without connecting it to a standard.

The second prerequisite is an adequate feedback signal. Continuous feedback architectures work well when feedback is genuinely frequent. If the average manager is giving feedback four times a year, AI does not have much to surface or aggregate. The architecture change needs to happen first; AI then makes it more useful.

The third prerequisite is manager buy-in at the basic participation level. AI can lower the activation energy for giving feedback, but it cannot create motivation from nothing. If managers in your organization view performance management as something that happens to them rather than something they do, an AI layer will not fix that. The enabling philosophy needs to be established as a cultural norm before the tooling can reinforce it.

The good news is that none of these prerequisites requires a six-month project. Competency frameworks can be drafted in a day with AI assistance. Feedback culture can shift quickly when the friction is removed. And manager buy-in tends to follow quickly when the system actually makes their job easier rather than adding to it.



How do different approaches to AI-assisted performance management compare?

The comparison that matters most for most People Leaders is not feature-by-feature; it is where the friction lives. Tools that require managers to change their behavior significantly to use them will have low adoption, regardless of how good the AI features are. Tools that embed into existing behavior and reduce friction will be used consistently, which means the data is better and the AI outputs are better.

| Approach | What it looks like | Manager experience | Data quality | Time to value |
|---|---|---|---|---|
| Annual reviews + AI summarization | Same review cycle, AI writes the summary | One less writing task per year | Low: infrequent, retrospective | Slow: depends on existing cycle |
| Pulse surveys + sentiment analysis | Frequent short surveys, AI flags trends | Another thing to fill out | Medium: frequent but shallow | Medium: 4-8 weeks setup |
| Legacy PM tool + AI add-on | Lattice/Leapsome adds an AI feature | Same portal, one new button | Medium: structured but siloed | Medium: depends on adoption |
| Slack-native ambient feedback | Feedback lives in Slack, AI structures it | Feels like a quick message | High: frequent, contextual, natural | Fast: 2-4 weeks to working system |
| AI coaching for managers | AI prompts and scaffolds manager behavior | Active support, not a form | High: behavioral + outcome data | Fast: visible in first 1:1 cycle |


FAQ


Q1. What is the most practical first step for using AI to improve employee performance?
Start with manager enablement, not data collection. Identify your highest-leverage managers, map where their feedback process breaks down (usually at the blank-page moment), and introduce an AI scaffolding layer at that specific point. Trying to instrument everything at once produces noise; fixing the manager feedback gap produces visible results within one or two 1:1 cycles.


Q2. How is AI-native performance management different from adding AI features to Lattice or 15Five?

The difference is architectural, not cosmetic. Legacy tools were designed around a portal-based review process, and AI features built on top of that architecture still require managers to change their behavior and log in to a separate system. AI-native tools start from the assumption that feedback should live where work happens, usually Slack, and AI structures that ambient input rather than prompting managers to produce new input elsewhere.


Q3. Can AI actually help first-time managers give better feedback?

Yes, within specific constraints. AI can address the blank-page problem by suggesting feedback structures, helping managers connect specific behaviors to role expectations, and flagging when they have not addressed an open development area. What AI cannot do is substitute for the manager's own observation of their team or replace the trust that makes feedback land. AI handles the scaffolding; the manager still does the actual relationship work.


Q4. What manager performance metrics should People Leaders report to leadership?

The most useful leading indicators are behavioral: feedback frequency per manager, 1:1 completion rate, expectation-setting coverage across roles, and whether development areas from previous cycles are being addressed. These are more actionable than lagging indicators like attrition or engagement scores because they reflect what managers are actually doing, not outcomes that have already crystallized.


Q5. How long does it take to see results from an AI-assisted performance management approach?

For changes in manager behavior, a meaningful signal is typically visible within four to six weeks when the system is embedded in existing workflows. Outcome-level changes (team performance, retention) are harder to attribute cleanly, usually taking two to three quarters. The fastest path to visible results is picking one manager behavior to change (for example, weekly Slack-based feedback) and measuring that specific behavior before trying to instrument the whole system.