# Demodesk AI Summary Prompt Lab

Design custom AI summary prompts, automation rules, and CRM-sync-aware summary systems for Demodesk meeting types. This skill is for admins and RevOps users who configure Demodesk — not for reps requesting one-off call recaps. For each meeting type you specify, it produces a Summary Design Pack: a production-ready prompt, automation recommendation, CRM sync notes, QA checklist, and test scenarios.

## Pre-Work Framework

Before running this skill, gather the following context:

1. **What meeting type are you designing for?** Discovery, demo, or QBR — or a close variant you need mapped to one of these.
2. **What is the business goal for the summary?** Examples: capture MEDDIC fields cleanly, create a manager-readable recap, produce a CS leadership summary, or generate notes that sync cleanly into CRM.
3. **Where does the summary need to land?** CRM destination (Salesforce event notes, HubSpot activity notes, deal-level note) or downstream workflow (manager review, coaching queue).
4. **What language is required?** English or German. For German, the skill defaults to formal business phrasing unless you specify otherwise.
5. **Is methodology relevant?** If your team uses MEDDIC, SPICED, SPIN, or Challenger, provide it — the prompt will reflect it directly.
6. **Should prior-meeting context be included?** State this explicitly. The skill does not assume prior context by default.

## Core Principles

### Prompt Design is Setup Work, Not Rep Work
This skill operates at the admin and RevOps layer. The output is a system configuration decision — it affects every call of that meeting type going forward. Treat it with the same rigor as a CRM field mapping or an automation rule, not as a one-off writing task.

### First Summary Wins
Demodesk automatically syncs only the **first AI-generated summary** to CRM. If the prompt is wrong or the automation fires at the wrong time, the first summary that reaches CRM will be the wrong one. Every design decision in this skill is optimized around getting the first run right.

### Meeting Type Drives Everything
Discovery, demo, and QBR are structurally different conversations with different stakeholders, different information goals, and different downstream uses. A single generic prompt cannot serve all three well. Always design meeting-type-specific prompts, even if they share some structure.

### Observable QA Before Ship
A summary prompt should be tested against at least three scenarios before it goes live: a strong-fit meeting, an edge-case meeting, and a known failure mode. Do not deploy a prompt that has only been tested mentally.

## The Process

### Phase 1: Context Gathering (5 min)
1. Confirm the meeting type. If unclear, infer the closest of discovery, demo, or QBR and state the assumption explicitly.
2. Confirm the CRM or downstream destination. If missing, recommend a generic meeting-note destination and flag the assumption.
3. Confirm language. Default to English unless German is specified.
4. Confirm methodology. If none is provided, use best-practice sections for the meeting type — do not invent a framework.
5. Confirm whether prior-meeting context should be included. If yes, add an explicit instruction to the prompt; otherwise keep the prompt focused on the current meeting.

**Decision point:** If the request is really about scorecards, coaching rubrics, or CRM field extraction rather than summaries, keep those adjacent requests lightweight and note that they belong in a separate skill.

**Exit criteria:** You have confirmed meeting type, destination, language, and methodology (or absence of one).

### Phase 2: Prompt Design (10–15 min)
1. Choose the correct prompt pattern for the meeting type:
   - **Discovery**: Buyer pain, urgency, goals, process, decision criteria, stakeholders, agreed next steps
   - **Demo**: Use cases shown, stakeholder reactions, objections, feature relevance, fit gaps, commercial follow-ups
   - **QBR**: Goals reviewed, usage/adoption data, outcomes delivered, risks, blockers, expansion signals, agreed next steps
2. Draft a custom summary prompt that is goal-oriented, directive, and structured into labeled sections.
3. Specify tone and style explicitly in the prompt. Do not leave it to inference.
4. If methodology is provided, reflect it directly in the section labels (e.g., MEDDIC fields for a discovery prompt).
5. If prior-meeting context was requested, add an explicit instruction: *"Include relevant context from previous meetings where available."*

**Exit criteria:** The prompt is structured into labeled sections, goal-oriented, and ready to paste into Demodesk as a custom summary prompt.
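As an illustration, a discovery prompt following this pattern might look like the sketch below. The section labels assume a MEDDIC-aligned team; adjust them to your own methodology and destination before pasting into Demodesk:

```text
You are summarizing a discovery call. Return the summary in the labeled
sections below. Use concise bullet points within each section. If a
section was not discussed, write "Not discussed" — do not speculate.

## Pain & Urgency
## Metrics (quantified impact the buyer mentioned)
## Economic Buyer & Stakeholders
## Decision Criteria & Decision Process
## Champion
## Agreed Next Steps (owner and date for each)

Tone: professional and manager-readable. Include only what was said on
the call.
```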

### Phase 3: Automation & CRM Configuration (5–10 min)
1. Recommend the best automation trigger using meeting type, title, team, or meaning-based context.
2. Specify what to avoid: triggers that are too broad (fire on all meetings), too narrow (miss valid meetings), or prone to collision with other meeting types.
3. Add CRM sync notes covering the recommended destination, relevant fields or objects, and formatting considerations.
4. **Always include the first-summary sync warning**: the first automated summary is the one that syncs to CRM — the prompt and trigger must be production-ready before the automation goes live.

**Exit criteria:** Automation trigger is defined with collision avoidance. CRM sync notes include the first-summary warning.
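The collision check can be dry-run against meeting history before the automation goes live. A minimal sketch of that audit — the trigger pattern, meeting titles, and labels below are illustrative, not Demodesk API calls:

```python
import re

# Hypothetical title-based trigger for a discovery summary automation.
DISCOVERY_TRIGGER = re.compile(r"\b(discovery|intro call)\b", re.IGNORECASE)

# Recent meeting titles pulled from calendar history, labeled with whether
# the discovery automation SHOULD fire on each one.
history = [
    ("Discovery - Acme Corp", True),
    ("Intro call with Globex", True),
    ("Product demo - Initech", False),
    ("Q3 QBR - Hooli", False),
    ("Self-discovery workshop (internal)", False),  # keyword collision risk
]

def audit_trigger(pattern, labeled_titles):
    """Return titles where the trigger's behavior disagrees with intent."""
    return [
        (title, expected)
        for title, expected in labeled_titles
        if bool(pattern.search(title)) != expected
    ]

for title, expected in audit_trigger(DISCOVERY_TRIGGER, history):
    verb = "fires but should not" if not expected else "misses but should fire"
    print(f"COLLISION: {title!r} -> trigger {verb}")
```

Here the internal "Self-discovery workshop" exposes exactly the keyword-collision failure this phase warns about: the trigger would fire, generate a discovery summary, and sync it as the first (and only) CRM summary for that meeting.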

### Phase 4: QA & Testing (10 min)
1. Generate 5–8 pass-fail QA checks specific to this prompt design.
2. Build three test scenarios: a good-fit meeting, an edge-case meeting (mixed content, unusual attendees, or ambiguous meeting type), and a known failure mode (a meeting that should not trigger this prompt).
3. Generate 3 improvement suggestions the admin could test in the next iteration.

**Exit criteria:** All QA checks are observable (pass or fail, not subjective). Three test scenarios cover the full range of realistic inputs.
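"Observable" means each check can be expressed as a string-level assertion on a sample summary rather than a judgment call. A sketch of what automated pass-fail checks could look like — the section labels and sample text are illustrative assumptions, not Demodesk output:

```python
# Section labels the custom prompt instructs the AI to emit (illustrative).
REQUIRED_SECTIONS = [
    "Pain & Urgency",
    "Decision Criteria",
    "Agreed Next Steps",
]

def run_qa_checks(summary: str) -> dict:
    """Pass/fail checks that require no subjective interpretation."""
    checks = {
        f"has section: {name}": name in summary for name in REQUIRED_SECTIONS
    }
    checks["non-empty"] = len(summary.strip()) > 0
    checks["under 500 words"] = len(summary.split()) < 500
    return checks

sample = """## Pain & Urgency
- Manual reporting costs ~10 hrs/week
## Decision Criteria
- Must integrate with Salesforce
## Agreed Next Steps
- Security review by Friday (owner: Dana)"""

results = run_qa_checks(sample)
print(all(results.values()))  # a good-fit summary passes every check
```

The same checks run against the edge-case and failure-mode scenarios tell you whether the prompt degrades gracefully or needs another iteration.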

## Anti-Patterns

### The Generic Summary Prompt
**Description:** Writing a prompt that says only "summarize the call" or "capture key points," with no meeting-type specificity, no labeled sections, and no downstream destination.

**Why it's harmful:** The AI has no structure to follow, so outputs vary unpredictably across calls. CRM notes become inconsistent and unusable for coaching or pipeline review.

**What to do instead:** Always specify the meeting type in the prompt, define labeled output sections, and state the downstream use. A discovery prompt should explicitly ask for pain, urgency, decision process, champion, and next steps — not "key points."

### The Wrong Automation Trigger
**Description:** Setting an automation trigger that is too broad (fires on every meeting) or that uses a keyword likely to appear in unrelated meeting titles.

**Why it's harmful:** The wrong summary type gets generated and syncs to CRM for meetings it was never designed for. Because only the first summary syncs, there is no automatic recovery — someone has to manually correct CRM records.

**What to do instead:** Use meaning-based triggers where available. Scope triggers to specific meeting types, team names, or title patterns that are unlikely to collide. Test the trigger logic against your existing meeting history before enabling.

### Deploying Without a Test Run
**Description:** Designing a prompt in isolation and enabling the automation immediately, without running it against real or representative meeting data.

**Why it's harmful:** The first live meeting becomes the test. If the prompt or trigger is wrong, the first summary syncs to CRM before anyone catches it.

**What to do instead:** Run the three test scenarios from Phase 4 before going live. Use Demodesk's test or staging environment if available. Only enable the automation after at least one successful end-to-end test run.

### Designing One Prompt for All Meeting Types
**Description:** Creating a single summary prompt and applying it to discovery, demo, and QBR meetings because "they're all sales calls."

**Why it's harmful:** Discovery needs pain and decision criteria. Demo needs stakeholder reactions and fit gaps. QBR needs adoption data and expansion signals. A shared prompt produces shallow, generic output for all three.

**What to do instead:** Design one prompt per meeting type. Share structural patterns where relevant (e.g., the next-steps section) but keep the core content sections meeting-type-specific.

## Output Format

The skill produces a **Summary Design Pack** returned in this order:

**1. Summary Design Pack Overview**
- Skill intent, meeting type, primary user, language, downstream destination, methodology used or inferred

**2. Custom Summary Prompt**
- One production-ready prompt inside a fenced markdown block, ready to paste into Demodesk

**3. Automation Recommendation**
- Suggested trigger logic, why it fits this meeting type, what to avoid

**4. CRM Sync Notes**
- Recommended destination, fields or objects to consider, formatting considerations, first-summary sync warning

**5. QA Checklist**
- 5–8 pass-fail checks observable without subjective interpretation

**6. Test Scenarios**
- Good-fit meeting, edge-case meeting, failure-mode meeting — each with expected behavior

**7. Improvement Suggestions**
- 3 specific, testable prompt refinements the admin could try in the next iteration

## Task-Specific Questions

### When setting up for the first time
- What meeting types does your team run most frequently?
- Which CRM object should the summary land on — event, activity, deal note, or custom object?
- Do you use a sales methodology (MEDDIC, SPICED, SPIN, Challenger)? Should it be reflected in summary section labels?
- Who will review the summaries — managers, CSMs, or reps themselves?

### When the meeting type is unclear
- Is this meeting closer to a first conversation (discovery), a product walkthrough (demo), or a business review (QBR)?
- Who attended — primarily the prospect, primarily the customer, or a mix?
- What was the primary goal: to understand the buyer's situation, to demonstrate capabilities, or to review outcomes?

### When German output is needed
- Should the output use formal business German (default) or a more casual register?
- Are there specific German B2B conventions to follow — for example, Sie vs. du in the prompt instructions?
- Is this for a DACH team or a specific country subset (Germany only, Austria, Switzerland)?
- Should the prompt itself be written in German, or only the output?

### When an existing scorecard or methodology is provided
- Should this prompt align with your existing MEDDIC/SPICED fields exactly, or use them as a guide?
- Are there CRM fields already mapped that the summary should feed into?

## Quality Checklist

Before finalizing the Summary Design Pack, verify:

1. **Meeting-type specificity**: The prompt is written explicitly for discovery, demo, or QBR — not generically for "any sales call"
2. **Labeled sections**: The prompt instructs the AI to return output in clearly named sections, not free-form prose
3. **First-summary sync warning**: The CRM sync notes explicitly state that only the first automated summary syncs to CRM
4. **Automation collision check**: The trigger logic includes at least one constraint to prevent misfires on unrelated meeting types
5. **Language match**: The output language and formality level match what was requested
6. **No generic filler**: No section contains advice like "customize as needed" without a concrete option following it
7. **Immediately usable**: An admin or RevOps owner could take this pack and configure Demodesk without needing to interpret or rewrite any section
8. **Test scenarios are realistic**: Each test scenario describes a plausible meeting — not an abstract hypothetical

## Edge Cases

- **Meeting type is missing**: Infer the closest of discovery, demo, or QBR based on available context and state the assumption clearly. Ask for confirmation if the request is ambiguous.
- **No CRM destination provided**: Recommend a generic meeting-note destination and flag that field-level mapping should be confirmed by the admin before going live.
- **No methodology given**: Use best-practice sections for the meeting type (e.g., pain, urgency, stakeholders, next steps for discovery) rather than inventing a framework label.
- **Prior-meeting context requested**: Add an explicit instruction in the prompt: *"Where relevant, reference context from previous meetings with this account."* Do not include this by default.
- **Request overlaps with scorecards or coaching rubrics**: Keep it lightweight and note that scorecard design belongs in a separate skill. Do not attempt to combine both in one output.
- **German output**: Default to formal business language (Sie form). If the user specifies a more casual register, adjust. Keep English loanwords that are standard in German B2B SaaS (Pipeline, Deal, Meeting, CRM, Onboarding).
- **Existing prompt provided for improvement**: Improve it rather than replacing it from scratch, unless the user explicitly asks for a rebuild.

## Related Skills

- [Sales Call Summarizer](/skills/sales-call-summarizer) — Rep-side post-call summary generation from transcripts
- [Post-Demo Summary Writer](/skills/post-demo-summary) — Structured demo recap for AEs and managers
- [CRM Auto-Updater](/skills/crm-auto-updater) — Automated CRM field population from call data
- [Sales Rep Scorecard](/skills/rep-performance-scorecard) — Coaching scorecard and rubric design for managers

## Example Prompts

- "Design a Demodesk summary setup for discovery calls with MEDDIC fields syncing into Salesforce event notes"
- "Create a German summary design pack for demo meetings with formal wording and prior-call context included"
- "Build a QBR summary prompt for customer success reviews in HubSpot with adoption data, blockers, and expansion signals"
- "We're not sure if the meeting is discovery or demo — build the best-fit summary pack and state your assumption"
