

With AI, your Junior Analysts have Junior Analysts: Why AI supervision is the next core analyst skill

By Joseph Macaluso

Artificial intelligence (AI) tools promise to unlock untold business value, but in practice countless organizations struggle to adopt AI and manage its outputs effectively, leaving potential gains on the table.

Fewer than half (49%) of senior employees say that new technologies introduced in the past year have delivered on their intended results. And when adoption falls short, 45% say expected benefits are delayed or never realized, according to an Eagle Hill survey of director-level employees and above.

This gap has less to do with the quality of AI tools and more to do with a missing capability in the workforce: the skills to manage AI successfully.

When an AI tool is managed by a skilled supervisor, it can help accelerate timelines, improve the consistency of deliverables, and expand team capacity. But without strong AI supervision from a seasoned manager, AI can introduce risk by skipping steps, misinterpreting requests, or presenting misinformation as fact. These risks are not always immediately visible, but over time, they can erode the quality of analysis, distort decision-making, and create false confidence in outputs that have not been fully tested.

In that way, AI tools behave less like experienced experts and more like capable but inexperienced analysts: fast, responsive, sometimes over-confident, and highly dependent on direction.

To get better business value from AI tools, organizations must continuously upskill their workforces in strong management and supervisory skills. Great leadership, whether of human beings or AI, means casting a clear vision, defining the work, setting measurable goals, and ensuring output holds up under scrutiny.

In addition to upskilling workforces in management, it’s important to institute shared expectations, workflows, and quality controls to ensure consistent, high-quality outputs across teams.

As AI becomes embedded in day-to-day workflows, the question is no longer whether organizations are using AI, but whether they are using it in ways that strengthen or degrade the quality and consistency of their work.

5 AI supervision skills your workforce should build

Now that your junior analysts have junior AI analysts of their own, helping them build management skills should be a top priority. Rather than a list of “AI hacks,” this playbook outlines the core competencies for supervising AI-generated work the way strong managers supervise junior analysts: through clear framing, structured workflows, and careful review.

1

Frame work clearly in plain language.

Like a junior analyst, AI fills in gaps on its own when it lacks clearly defined goals and direction. Even if the output sounds polished, it can be misaligned to the task.

To unlock greater value, go beyond teaching users to improve their AI prompts. Instead, teach your workforce to write clear, usable briefs that define purpose, audience, constraints, and success criteria in plain language. Laying out expectations clearly is a core analyst capability that organizations already value but often develop later in someone’s career. With AI, briefing skills become essential much earlier.

Clear, jargon-free briefs that highlight important information result in stronger outputs, whether you’re working with AI or with humans. Cornell University research supports this point, finding that overly complex, jargon-heavy language can work against practical decision-making. Focusing on high-quality information rather than lofty language leads to better work products.

In an AI-enabled environment, unclear briefs don’t just create rework; they shape flawed outputs that can cascade into flawed recommendations and decisions.

Example prompt:
“Summarize this report for a director-level audience, focusing on cost reduction and operational efficiency.”


2

Break projects into stages.

Anyone who has managed early-career talent knows that assigning too much at once can lead to uneven results. The same applies to AI.

Multi-part requests can produce scattered or shallow outputs. Instead, breaking projects or tasks into stages and guiding the AI tool through each step yields responses closer to what the user envisions. Strong analysts and managers already work with their teams this way: breaking down a problem, sequencing the work, and reviewing outputs at each stage before moving forward.

Structuring work into a staged process—outline, draft, critique, refine—improves both clarity and reliability of the end product. The user can inspect the quality of work at each stage, make revisions, and consequently reduce the likelihood of errors, structural problems, or unsupported claims ending up in the final draft.

Verification should be embedded within each stage of this workflow—not treated as a final step. Each phase should include checks for accuracy, sourcing, and alignment before progressing.
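The staged workflow above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed implementation: `generate` is a hypothetical stand-in for whatever model call your team uses, and `verify` is a placeholder for the accuracy, sourcing, and alignment checks each stage should include.

```python
# Minimal sketch of a staged AI workflow: outline -> draft -> critique -> refine,
# with a verification gate embedded after every stage rather than saved for the end.

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"[model output for: {prompt}]"

def verify(output: str) -> bool:
    # Placeholder gate: in practice, check accuracy, sourcing, and
    # alignment with the brief before advancing to the next stage.
    return bool(output.strip())

def run_staged_workflow(brief: str) -> dict:
    stages = ["outline", "draft", "critique", "refine"]
    results = {}
    previous = brief
    for stage in stages:
        output = generate(f"{stage.capitalize()} the following:\n{previous}")
        if not verify(output):
            raise ValueError(f"Stage '{stage}' failed review; revise before continuing.")
        results[stage] = output
        previous = output  # each stage builds on the reviewed output of the last
    return results

results = run_staged_workflow("Summarize Q3 cost findings for a director-level audience.")
```

The point of the structure, not the code, is what matters: each stage produces a reviewable artifact, and nothing advances until a check passes.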

Without this structure, organizations risk scaling inconsistency by producing outputs that vary widely in quality, depth, and accuracy depending on how the AI is used.

The underlying capability your workforce should develop is structuring processes in a way that produces outcomes better aligned to the goal, whether the work is done by a person or an AI tool.

Structured processes reinforce talent. Even strong individual judgment cannot compensate for the absence of a repeatable process.

3

Build verification processes.

AI outputs are often well-written and presented with confidence. That confidence can make it easy to skip a critical step: verification. But when outputs go unchecked, businesses can be opened up to significant risk. In the court case Mata v. Avianca, attorneys submitted filings that included AI-generated citations to cases that did not exist, because no one verified the sources.

The capability organizations need is not simply asking for sources, but building a deeper reflex: treat every conclusion—whether AI- or human-generated—as something to be tested. This is especially critical as AI becomes more embedded in core workflows, where unverified outputs can influence business decisions, client recommendations, or public-facing materials.

That means establishing a clear process for challenging assumptions, checking sources, and initiating a human review before work is treated as fact or final.

Example prompt:
“Provide sources for each claim. If a source cannot be verified, label it as unconfirmed.”
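A labeling rule like the prompt above can also be enforced programmatically. The sketch below is illustrative only: it assumes claims come back paired with their cited sources, and the trusted-source list and helper names are hypothetical, not a real API. Anything not matched against trusted sources is flagged “unconfirmed” and blocks sign-off until a person reviews it.

```python
# Sketch of a source-labeling gate: every claim is either matched to a
# trusted source or flagged "unconfirmed" for human review before the
# work product is treated as fact or final.

TRUSTED_SOURCES = {"2023 Annual Report", "Q3 Operations Review"}  # illustrative list

def label_claims(claims):
    # Each claim is a (text, cited_source) pair; returns (text, source, status).
    labeled = []
    for claim, source in claims:
        status = "verified" if source in TRUSTED_SOURCES else "unconfirmed"
        labeled.append((claim, source, status))
    return labeled

def needs_human_review(labeled):
    # Any unconfirmed claim blocks sign-off until a person checks it.
    return any(status == "unconfirmed" for _, _, status in labeled)

labeled = label_claims([
    ("Costs fell 12% year over year.", "2023 Annual Report"),
    ("Headcount grew 8%.", "Internal chat summary"),
])
```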


4

Train employees to push back constructively and apply judgment to AI outputs.

One of the less obvious risks of AI is how readily it agrees.

Research from Anthropic and others has shown that language models can exhibit “sycophancy,” a tendency to validate user assumptions rather than challenge them. That can make the tools useful collaborators, but not reliable critics—meaning that the burden of judgment lies with the human operators.

Strong organizations are training employees to actively pressure-test outputs, building AI workforce skills by asking for counterarguments, identifying risks, and clarifying what would change a conclusion. Without this discipline, teams may unknowingly reinforce weak reasoning, as AI-generated outputs mirror assumptions instead of interrogating them. Over time, this can lead to lower-quality thinking becoming normalized.

Pressure-testing work extends beyond AI use. Strengthening critical thinking should be a key consideration for organizations that want to continue upskilling their workforces.

5

Coach, review, and iterate.

No experienced manager expects a junior analyst’s first draft to be final. The same expectation should apply here.

AI may accelerate iteration, but it does not eliminate the need for it. Reviewing outputs, providing targeted feedback, and refining direction are essential to improving quality. In fact, faster output cycles can increase risk if review processes do not keep pace, allowing low-quality or partially formed thinking to move forward more quickly than before.

Example prompt:
“This is too high-level. Add more specificity and concrete examples.”

The bottom line

The companies that harness the full business value of AI will not be the ones with the most advanced models. They will be the ones that invest in training their workforces, consistently reinforcing how to frame work clearly, set quality standards, verify outputs, and apply judgment.

That means successful AI usage relies on successful talent development.

World Economic Forum research consistently identifies skills gaps as a primary barrier to business transformation. Upskilling and reskilling workforces will be critical to harnessing value from AI.

Access to AI is widespread. The ability to generate value through better management skills is harder to attain. That value will scale when organizations pair talent development with intentional structure through shared workflows, embedded verification, and clear expectations.

Increasingly, the differentiator will not be whether organizations use AI, but whether they can trust the outputs it produces. That trust is built through strong supervision and management practices.

The organizations that build the management and supervisory skills of their workforces, emphasizing review, verification, and accountability, will be the ones that turn AI from a promising tool into a reliable source of business value. 

Want to hear more? Let’s talk.
