Picture this, or maybe you don’t have to, since you’re living it on the regular:
- Twelve requests for new training in your backlog.
- One direct report.
- A shared contractor you can book for four hours a month.
- A budget that got cut in January.
- An SME who’s available for a thirty-minute call next Thursday.
- A launch date three weeks out.
Sound familiar?
This has been the reality in the learning design world for the better part of a decade. (I have existential thoughts on this that I’ll save for another time.) The gap between what organizations need and what learning teams can produce has only gotten wider. I’ve experienced this firsthand throughout my career, and it has led to burnout, ER visits (not mine!), and total and utter job dissatisfaction. I don’t think leveraging AI will fully close these very real gaps and shortcomings, but it can make a meaningful dent in them while giving you back some of your time and your life.
This guide covers how learning designers are using Claude and ChatGPT today, how to actually start, and what to watch out for.
Start Here: AI-Assisted Synthesis
If you’re new to using AI for learning design work, begin with AI-assisted synthesis: using the tool to get you to a better starting point for your design and development.
Synthesizing SME interview notes, survey responses, and performance data into a clear problem statement and recommendation is time-intensive and doesn’t require the creative judgment that content design does. Let AI take some of the load off.
Here’s the workflow.
Step 1: Gather your raw inputs. Notes from three SME interviews, a summary of a performance management report, a handful of learner feedback comments from a previous program. Doesn’t need to be perfect.
Step 2: Write a context prompt. Tell Claude what you’re trying to produce: “I’m a learning design professional at a 2,000-person healthcare company. I’ve conducted three SME interviews and reviewed six months of performance data. I’m trying to write a 1-page needs analysis document that my VP can read in five minutes. It should include: the performance gap, the root cause, and a recommended intervention. Here are my raw notes: [paste notes].”
Step 3: Read the output critically. Claude will produce a clean, readable synthesis. Your job is to check it. Did it get the root cause right? Did it overemphasize something that isn’t actually the central issue? Did it include anything you can’t verify? Edit accordingly.
Step 4: Use the document in your next stakeholder conversation. Walk in with a one-pager that took you ninety minutes instead of half a day.
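If you end up running this workflow every week, the context-setting part of the prompt can be templated rather than retyped. Here’s a minimal Python sketch of that idea; the function and field names are my own illustration, not part of Claude or any other tool:

```python
def build_synthesis_prompt(role, inputs, deliverable, raw_notes):
    """Assemble a context-rich synthesis prompt from four reusable parts."""
    return (
        f"I'm {role}. {inputs} "
        f"I'm trying to produce {deliverable}. "
        f"Here are my raw notes:\n\n{raw_notes}"
    )

prompt = build_synthesis_prompt(
    role="a learning design professional at a 2,000-person healthcare company",
    inputs="I've conducted three SME interviews and reviewed six months of performance data.",
    deliverable=(
        "a 1-page needs analysis my VP can read in five minutes, covering "
        "the performance gap, the root cause, and a recommended intervention"
    ),
    raw_notes="[paste notes here]",
)
print(prompt)
```

The point isn’t the code; it’s that your role, your inputs, and your deliverable are stable from week to week, and only the raw notes change. Writing those parts down once is what makes Step 2 fast the second time.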
This approach works for program evaluation summaries, post-training survey analysis, and learning analytics reports. Any task where you’re reading a lot of material and producing a shorter, structured synthesis of it. It’s low-risk, and it teaches you the most important thing about working with AI: the quality of what you get out is directly proportional to the context you put in.
Once you’ve got that pattern down, there are ways to supercharge it that I’ll talk about in a subsequent post: things like using Projects in Claude, Skills (like producing structured reports or design briefs!), and Claude Co-work. But we’re learning designers, right? We know how to scaffold learning and make it bite-sized, which is what I’m attempting to do here.
The Three Highest-Impact Uses of AI for Learning Design
Not everything in learning design benefits equally from AI. Some tasks are genuinely transformed. Others are incrementally faster but still require the same human judgment. Here’s where learning designers in our community are getting the clearest wins.
1. First-Draft Content from Structured Input
Content writing is the bottleneck in most learning design workflows. You’ve gathered the assets: the subject matter expertise, the learning objectives, all of it. You have the high-level design down and the structure of a unit or module. The actual drafting of exercises, facilitator guides, learner workbooks, scenario scripts, knowledge check questions, and job aids eats the time you’d rather spend on design and strategy.
Claude and ChatGPT are genuinely good at producing first drafts from structured input. The key word is “first.” You’re not asking the tool to replace your judgment. You’re asking it to eliminate the blank page and give you something to react to.
The simplest version: you gather your SME notes and learning objectives, paste them into Claude with a prompt specifying your audience and what you need, and get back a working draft in five minutes instead of two hours. Your job is then to edit for accuracy, tone, and alignment to your organization’s standards. That editing process goes faster than starting from zero.
But the real leverage comes when you go further. Give the AI a style guide, a source document, your learning objectives, and a clear description of who the reader is and what they should be able to do when they’re done. What comes back isn’t a generic first draft. It’s a structured draft that already reflects your constraints, and your editing pass becomes substantive (fixing logic, tightening voice, catching what doesn’t land) rather than generative (staring at a blank page).
I’ve been doing this in my own work at an AI startup, developing a companion workbook to a published business book. I feed Claude the author’s manuscript, a detailed voice style guide, a product requirements document, and specific learning objectives for each chapter. What I get back is first a chapter brief I use to have a deep design conversation, and then a full chapter draft that holds the author’s voice, follows the structural patterns I’ve established, and targets the right audience. I still edit every word. But the difference between editing a draft that’s already 70% there and writing one from scratch is the difference between a half-day and a full week.
2. Scenario and Case Study Writing
Scenario-based learning is widely understood to be more effective than content dumps. It’s also time-consuming to write well. Good scenarios require realistic characters, plausible stakes, branching logic, and enough specificity to feel real without being so specific they alienate half the learner population.
AI is useful here in a specific way: generating scenario variations. Once you’ve written one strong scenario, you can ask Claude or ChatGPT to generate three variations on the same learning objective, changing the context, the characters, or the stakes. This gives you the range you’d normally spend a full writing sprint producing.
The prompt structure matters. “Write me a scenario about workplace conflict” produces generic results. “Write me a branching scenario where a team lead named Dana discovers that two direct reports have been giving each other negative informal feedback in front of customers. Dana has to decide whether to address it in the moment, wait for a formal 1:1, or escalate to HR. Write three decision points, each with two choices and a consequence. The learner should make mistakes and recover from them.” That level of specificity produces something you can actually use.
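One way to sanity-check what comes back is to think of a branching scenario as structured data: decision points, each with choices and consequences. A hypothetical Python sketch, with all names and scenario text invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Choice:
    text: str          # what the learner can choose to do
    consequence: str   # what happens next if they choose it

@dataclass
class DecisionPoint:
    situation: str
    choices: list[Choice]

# The Dana prompt above, sketched as data.
scenario = [
    DecisionPoint(
        situation="Dana overhears the negative feedback in front of a customer.",
        choices=[
            Choice("Address it in the moment", "The customer sees tension surface, then resolve."),
            Choice("Wait for a formal 1:1", "The behavior repeats the next day."),
        ],
    ),
    # ...two more decision points in the same shape
]
```

If the AI’s draft can’t be mapped onto this shape (a decision point missing a consequence, a choice with no follow-on), that’s your editing checklist.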
[NEEDS COMMUNITY QUOTE: A learning design professional describing how they use AI to generate scenario variations, particularly how they handle the editing pass to make characters feel more real or culturally specific]
3. Teaching AI How You Work (Not Just What You Need)
Here’s the leap most learning designers haven’t made yet.
Matt, one of my co-founders with a strong education and learning design background, puts it this way: “Anything you do more than once, write down how to do it, and then tell Claude how you want it done. The next time, you can just invoke that approach and say, here’s the input, go do it. It turns a generically useful tool into a very specifically useful tool.” (Note: this is a Skill in Claude.)
Start by writing down your process for one recurring task. How you structure a facilitator guide. How you write debrief questions. How you sequence a half-day workshop. Then paste that process into Claude alongside your next real request. What comes back will be closer to how you’d actually do it, not just how “any learning design professional” would do it.
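Matt’s pattern, a written-down process plus fresh input, is easy to see in miniature. A hypothetical sketch, with the process text and names invented for illustration:

```python
# Saved once: how *you* do a recurring task, in your own words.
PROCESSES = {
    "facilitator_guide": (
        "Open with timing and materials, script transitions word-for-word, "
        "and end every activity with a debrief question."
    ),
}

def invoke_skill(process_name, new_input):
    """Combine a saved process description with today's input into one prompt."""
    return (
        f"Here is how I work: {PROCESSES[process_name]}\n\n"
        f"Apply that approach to this input:\n{new_input}"
    )

prompt = invoke_skill("facilitator_guide", "SME notes for our new onboarding module")
```

The saved process is written once; only the input changes per request. That’s the whole mechanic behind a Skill, whether it lives in Claude or in a text file you keep pasting.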
This compounds. The more of your thinking you externalize and feed into the tool, the less time you spend coaching it per request. You’re essentially building a co-designer that gets better the more you invest in it.
The Skill That Actually Matters: The Iterative Loop
There’s something missing from most “how to use AI” guides, and it’s the thing that actually determines whether AI saves you time or wastes it.
The real workflow isn’t: write a prompt, get a draft, done. It’s: write a prompt, get a draft, tell the AI what’s not working, get a revised draft, push on a specific section, get it tighter, repeat. The back-and-forth is the skill.
If I had a nickel for every time I got a brain dump from an SME that I had to shape into something usable through seemingly endless rounds of “is this what you mean?”…
If that resonates, then you already know how to do this. You’re not starting from zero. You’re applying a skill you’ve been building your entire career, just with a different collaborator.
Your First Week: A 5-Day Micro-Practice Plan for Learning Design
Start with real work, not experiments. The fastest way to build capability is to apply AI to something you actually need to produce. Here’s a five-day micro-practice plan I put together to get you spitballing.
Monday: Reverse-engineer your own work. Take a piece of content you’ve already produced: a facilitator guide, an assessment, a job aid. Paste it into Claude and ask: “Here’s a facilitator guide I wrote. What learning objectives does it appear to be targeting? What assumptions is it making about the audience?” Read what comes back. This teaches you how the tool sees your work, which teaches you how to direct it better when you’re asking it to produce something new.
Tuesday: Knowledge check draft. Pick any topic you’ve been asked to address recently. Write your learning objectives (even rough ones). Then prompt Claude: “Based on these learning objectives, write 5 multiple-choice questions with one correct answer and three plausible distractors each. Avoid trivially easy distractors. Here are my objectives: [paste].” Edit the questions against your content.
Wednesday: Scenario first draft. Think of a performance problem you’re currently designing for. Describe the situation, the characters, and what decision you want the learner to practice. Give that to Claude and ask for a scenario with two decision points and two choices each. See how close it gets without coaching from you on the first pass.
Thursday: Facilitator debrief questions. Take an activity from an existing workshop you’ve designed or delivered. Paste a description of the activity into Claude and ask: “Write five debrief questions for this activity that move learners from the experience to the principle to application in their own context.” Edit for the language and tone of your organization.
Friday: Reflect. Which of the four tasks felt most useful? Where did Claude give you something you could work with quickly? Where did you have to heavily edit? That pattern tells you where to focus your practice in week two.
Frequently Asked Questions
Can AI replace my SME interviews?
No. AI doesn’t know your organization, your learners, or the specific context that makes content accurate and relevant. It can help you prepare for SME interviews (drafting interview questions, synthesizing notes afterward), but it can’t replace the source material that comes from talking to real people.
What if I paste internal documents into Claude or ChatGPT? Is that a security risk?
This depends on your organization’s data policies. Many organizations have guidance on what can and can’t be pasted into consumer AI tools. Check with your IT or legal team before pasting anything marked confidential. Claude and ChatGPT both offer enterprise tiers with stronger data privacy commitments if your organization is ready for that conversation.
Will AI produce content that’s accurate for my specific industry?
Not automatically. And this is the part that matters: the dangerous output isn’t the stuff that’s obviously wrong. It’s the stuff that sounds right but misses a nuance specific to your regulatory environment, your learner population, or your organizational culture. Every piece of AI-generated content needs a human review pass, and the person doing that review needs enough domain knowledge to catch what’s plausible but off.
How do I handle the “AI is going to take my job” anxiety some learning designer colleagues are feeling?
Acknowledge it rather than dismiss it. The honest answer is that AI is going to change learning design work, not eliminate it. The skills that matter most in this field, understanding how people learn, designing for behavior change, facilitating conversations that shift perspective, are all things AI can’t replicate. What AI changes is the production layer: first drafts, variations, synthesis. Your expertise is in the judgment layer above that.
You Don’t Have to Figure This Out Alone
Learning designers are navigating this the same way everyone else is: in real time, with little to no roadmap, while still doing their regular jobs.
The ones figuring it out fastest aren’t the ones who took a course on AI or waited until they felt ready. They’re the ones practicing regularly and sharing what works with peers who are working through the same questions. That’s exactly what happens in MVP Club’s community sessions. Come see what people in your situation are building.