An end-to-end training program built specifically for your legal team. Not a generic AI training slide deck — a custom curriculum designed around your matters, your tools, and the work your lawyers actually do each week.
Every M41 training engagement is custom-scoped. The list below shows what's typically included — we adjust scope, format, and follow-up based on your team's size, the depth you need, and the outcomes you're after.
The curriculum is custom every time, but most engagements cover these areas. Each is grounded in your actual matters, tools, and workflows — not generic AI 101.
Microsoft Copilot in Word, Outlook, Excel, and Teams. What it does, what it doesn't, and how to get value from a license you may already own.
What can go into Claude, ChatGPT, or Copilot without crossing a line. When enterprise deployments are worth it. Working alongside your InfoSec lead to set the boundaries.
Building out negotiation playbooks. First-pass markup of incoming contracts against existing playbooks. Issue spotting and key-term extraction. Where AI is strong — and where humans still own the call (deal economics, novel terms).
Summarizing long documents, extracting key dates and obligations from contracts, drafting first-pass memos and research synthesis. The kind of work that eats hours a week.
"Can it help me with my inbox? Can it draft a reply? Can it summarize this 80-page thread?" The questions your lawyers are actually asking, with hands-on answers.
Where Harvey, Legora, CoCounsel, and other legal-AI products fit — and where a general-purpose model plus good workflows gets you 80% of the way at a fraction of the cost.
Book a 30-minute call. We'll talk through your team, your matters, and what an engagement would look like for you.
That's the first thing we work on, not the last. A portion of the training is dedicated to what should and shouldn't go into a public AI tool, when an enterprise deployment (Claude for Work, ChatGPT Enterprise, Copilot M365) is worth the upgrade, and how to think about data residency and DPAs. For client-specific or privileged work, we set up workflows that either stay in your sanctioned environment or use prompts that don't require sensitive input.
The horror stories are real, and they happened because lawyers used AI for the one thing it's worst at — citing case law — without verification. We treat hallucination risk as a workflow-design problem: every use case includes a verification step, and the training puts case-citation tasks firmly in the "high risk, not yet" bucket. The work AI is genuinely good at — summarization, redline suggestions, knowledge synthesis — doesn't carry the same risk profile.
Those tools are powerful but they're not always the right starting point. Most in-house teams can get 80% of the value from a general-purpose tool (Claude, ChatGPT, Copilot) plus good workflows — for a fraction of the cost. M41 will help you decide where a vertical legal-AI product is worth the spend and where it isn't. We're tool-agnostic, with no commercial relationships with any of them.
Yes. We'll work in whatever environment your team is sanctioned to use. The training is tool-agnostic by design — the workflows transfer between Claude, ChatGPT Enterprise, Copilot, Gemini, and most private deployments. If your team has its own LLM gateway or a vendor-specific tool, we'll learn its quirks and tailor the training to it.
The training covers the current state of bar guidance (ABA Formal Opinion 512, the state-level opinions, and how to read what's coming next). M41 isn't your ethics counsel — that's a job for your bar counsel or your own ethics team — but the training equips your lawyers to ask the right questions and stay on the right side of competence and confidentiality duties.
M41 is led by Mike Michaels, who handles content design, pre-work, and lead facilitation. For larger engagements — especially split-track formats — we bring in vetted co-facilitators from a trusted bench, typically one per 20 hands-on attendees. Co-facilitators run breakouts and table-level Q&A; the curriculum, materials, and tech-stack analysis are designed end-to-end by M41.
No. Prompt engineering as a separate skill is largely a 2025 thing — modern AI models handle natural language well, so the training focuses on workflows rather than syntax. Your lawyers will be productive in the first session without learning any special techniques.
Every engagement is custom-scoped. Pricing varies with team size, format, depth, follow-up structure, and whether co-facilitators are needed. We confirm pricing in writing after the free 30-minute scoping call, once we have enough detail to scope the work properly.
Working sessions with team leads or designated SMEs in the weeks before the live session. The goal is to ground the agenda and the prompts in real work — the kind of contracts your team negotiates, the matter types you handle most, the tools you're stuck with and the ones you can choose. By the time the session happens, the materials look like they were written by someone who already works there.
Four to six weeks is the comfortable window — enough to do the pre-work properly and prep custom materials. Faster turnarounds are possible but the customization suffers.
That's our Workflow Adoption Engagement — a 4–8 week embedded consulting follow-up to lock in the workflows after the training is done. Most teams don't need it, but some do.
Book a 30-minute scoping call. We'll talk through your team, your matters, and what would actually move the needle.