What This Template Is For
Assessments are the backbone of any learning product. They prove that learners actually acquired the skills your course promises. Poorly designed assessments either measure the wrong things (recall instead of application) or create a false sense of progress that shows up later as poor learner retention and refund requests.
This template helps EdTech PMs and instructional designers plan assessments that are valid (they measure what they claim to measure), reliable (consistent scores across attempts), and fair (accessible to all learners). It covers question design, rubric creation, difficulty calibration, automated grading logic, and integrity safeguards.
Use this template alongside the Learning Experience Spec Template to ensure every assessment maps back to a defined learning objective. If your assessments do not align with objectives, you are measuring noise.
How to Use This Template
- Start with the Assessment Map. List every assessment in your course and connect each one to learning objectives.
- Design individual assessments. For each one, specify question types, difficulty distribution, and passing criteria.
- Write the grading rubric for open-ended assessments (projects, essays, peer reviews).
- Define automated grading rules for machine-gradable items (multiple choice, fill-in-the-blank, code exercises).
- Plan integrity measures. Decide which safeguards are proportionate to the stakes of each assessment.
- Test your assessments with a pilot group before launching to your full learner base.
The Template
Assessment Map
- ☐ List all assessments in the course (quizzes, projects, exams, peer reviews)
- ☐ Map each assessment to 1-3 learning objectives
- ☐ Classify each as formative (practice) or summative (graded)
- ☐ Assign weight toward final course grade or certification
| Assessment | Type | Mapped Objectives | Weight | Passing Score |
|---|---|---|---|---|
| [Module 1 Quiz] | Formative | [LO-1] | 10% | 70% |
| [Midterm Project] | Summative | [LO-1, LO-2, LO-3] | 30% | Rubric 3/5 |
| [Final Exam] | Summative | [All LOs] | 40% | 75% |
| [Peer Review] | Formative | [LO-4] | 10% | Completion |
| [Capstone Project] | Summative | [All LOs] | 10% | Rubric 4/5 |
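Keeping the map in a machine-readable form makes gaps easy to catch before review. The sketch below is a minimal example, assuming a simple list-of-dicts structure with illustrative field names (`objectives`, `weight`); it checks that weights sum to 100% and that every objective is covered by at least one summative assessment.

```python
# Minimal sanity check for the assessment map (field names are illustrative).
assessment_map = [
    {"name": "Module 1 Quiz", "type": "formative", "objectives": ["LO-1"], "weight": 10},
    {"name": "Midterm Project", "type": "summative", "objectives": ["LO-1", "LO-2", "LO-3"], "weight": 30},
    {"name": "Final Exam", "type": "summative", "objectives": ["LO-1", "LO-2", "LO-3", "LO-4"], "weight": 40},
    {"name": "Peer Review", "type": "formative", "objectives": ["LO-4"], "weight": 10},
    {"name": "Capstone Project", "type": "summative", "objectives": ["LO-1", "LO-2", "LO-3", "LO-4"], "weight": 10},
]

# Weights toward the final grade should sum to 100%.
total_weight = sum(a["weight"] for a in assessment_map)
assert total_weight == 100, f"Weights sum to {total_weight}%, expected 100%"

# Every assessment should map to at least one learning objective.
for a in assessment_map:
    assert a["objectives"], f"{a['name']} has no mapped objectives"

# Every objective should be assessed summatively at least once.
summative_los = {lo for a in assessment_map if a["type"] == "summative" for lo in a["objectives"]}
all_los = {lo for a in assessment_map for lo in a["objectives"]}
missing = all_los - summative_los
assert not missing, f"Objectives never assessed summatively: {sorted(missing)}"
```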
Question Design
- ☐ Define question types for each assessment
- ☐ Set difficulty distribution (easy/medium/hard)
- ☐ Write question stems that are clear and unambiguous
- ☐ Create distractors (wrong answers) that reflect common misconceptions
- ☐ Ensure questions test application, not just recall
Question Type Reference
| Type | Best For | Auto-Gradable | Bloom's Level |
|---|---|---|---|
| Multiple choice | Factual recall, concept recognition | Yes | Remember, Understand |
| Multiple select | Identifying multiple correct factors | Yes | Understand, Analyze |
| Fill-in-the-blank | Terminology, precise knowledge | Yes (with synonyms) | Remember |
| Matching | Connecting concepts to definitions | Yes | Understand |
| Short answer | Explaining reasoning | No (rubric needed) | Apply, Analyze |
| Code exercise | Programming skills | Yes (test cases) | Apply, Create |
| Case study | Analysis and decision-making | No (rubric needed) | Analyze, Evaluate |
| Project | Synthesis and creation | No (rubric needed) | Create |
Difficulty Distribution
| Assessment Type | Easy | Medium | Hard |
|---|---|---|---|
| Module quiz (formative) | 40% | 40% | 20% |
| Midterm exam (summative) | 25% | 50% | 25% |
| Final exam (summative) | 20% | 45% | 35% |
| Certification exam | 15% | 50% | 35% |
Rubric Design (Open-Ended Assessments)
- ☐ Define evaluation criteria (3-5 dimensions per assessment)
- ☐ Write descriptors for each score level (1-5 scale)
- ☐ Calibrate rubric with 2-3 sample submissions
- ☐ Test inter-rater reliability if using human graders
| Criterion | 1 (Below Expectations) | 3 (Meets Expectations) | 5 (Exceeds Expectations) | Weight |
|---|---|---|---|---|
| [Criterion 1: e.g., Problem Definition] | [Vague, no evidence cited] | [Clear problem with supporting data] | [Compelling problem with multiple evidence sources and stakeholder impact] | 25% |
| [Criterion 2: e.g., Solution Design] | [Incomplete or impractical] | [Workable solution addressing core problem] | [Innovative solution with trade-off analysis and implementation plan] | 30% |
| [Criterion 3: e.g., Communication] | [Disorganized, unclear] | [Logical structure, clear writing] | [Polished, persuasive, audience-appropriate] | 20% |
| [Criterion 4: e.g., Technical Accuracy] | [Multiple errors] | [Mostly correct with minor gaps] | [Fully accurate with edge cases addressed] | 25% |
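Because the criteria carry different weights, the overall rubric score is a weighted average of the per-criterion scores, not a simple mean. Here is a minimal sketch, assuming a 1-5 scale and the weights above (criterion names are illustrative):

```python
# Weighted rubric score sketch: each criterion scored 1-5, weights sum to 1.0.
rubric_weights = {
    "problem_definition": 0.25,
    "solution_design": 0.30,
    "communication": 0.20,
    "technical_accuracy": 0.25,
}

def weighted_rubric_score(scores: dict[str, int]) -> float:
    """Return the weighted 1-5 score for one submission."""
    assert abs(sum(rubric_weights.values()) - 1.0) < 1e-9, "Weights must sum to 100%"
    return sum(rubric_weights[criterion] * score for criterion, score in scores.items())

# Example submission: clears a "Rubric 3/5" passing bar from the Assessment Map.
submission = {
    "problem_definition": 4,
    "solution_design": 3,
    "communication": 3,
    "technical_accuracy": 4,
}
print(round(weighted_rubric_score(submission), 2))  # 3.5
```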
Automated Grading Rules
- ☐ Define correct answers for all auto-gradable questions
- ☐ Set acceptable answer variations (synonyms, rounding tolerance, case sensitivity)
- ☐ Configure partial credit rules where applicable
- ☐ Define retry policy (unlimited, 2 attempts, 1 attempt)
- ☐ Set time limits if applicable
| Rule | Configuration |
|---|---|
| Answer matching | [Exact match / Case-insensitive / Regex pattern] |
| Partial credit | [Yes: half credit for selecting 2 of 3 correct options / No] |
| Retry policy | [2 attempts, best score kept / Unlimited / 1 attempt] |
| Time limit | [None / X minutes per quiz / X minutes per question] |
| Question randomization | [Randomize order / Draw N from pool of M / Fixed order] |
| Answer shuffling | [Shuffle answer options / Fixed order] |
| Score display | [Show score immediately / Show after deadline / Show with explanations] |
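The configuration becomes concrete once expressed as grading functions. Below is a minimal sketch with illustrative function names (not a specific LMS API); the partial-credit formula in particular is a policy decision, and the fraction-correct-minus-penalty scheme shown is just one common option alongside the half-credit rule in the table above.

```python
import re

def grade_fill_in_blank(response: str, accepted: list[str]) -> float:
    """Case-insensitive match against a list of accepted answers and synonyms."""
    normalized = response.strip().lower()
    return 1.0 if normalized in {a.lower() for a in accepted} else 0.0

def grade_regex(response: str, pattern: str) -> float:
    """Match against a regex pattern, e.g. to tolerate optional punctuation."""
    return 1.0 if re.fullmatch(pattern, response.strip(), flags=re.IGNORECASE) else 0.0

def grade_multiple_select(selected: set[str], correct: set[str]) -> float:
    """Partial credit: fraction of correct options selected, minus a penalty for wrong picks."""
    if not correct:
        return 0.0
    hits = len(selected & correct) / len(correct)
    penalty = len(selected - correct) / len(correct)
    return max(0.0, hits - penalty)

# Usage examples
print(grade_fill_in_blank("Primary Key ", ["primary key", "PK"]))          # 1.0
print(grade_regex("SELECT * FROM users;", r"select \* from users;?"))      # 1.0
print(grade_multiple_select({"A", "B"}, {"A", "B", "C"}))                   # ~0.67 (2 of 3 correct)
```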
Assessment Integrity
- ☐ Classify each assessment by stakes level (low, medium, high)
- ☐ Select proportionate integrity measures for each stakes level
- ☐ Document the academic honesty policy shown to learners
- ☐ Define consequences for violations
| Stakes Level | Examples | Integrity Measures |
|---|---|---|
| Low (practice) | Module quizzes, knowledge checks | Question randomization, answer shuffling |
| Medium (graded) | Midterm projects, peer reviews | Plagiarism detection, time limits, unique prompts |
| High (certification) | Final exams, certification tests | Proctoring, ID verification, lockdown browser |
Accessibility Requirements
- ☐ Ensure all questions are screen-reader compatible
- ☐ Provide extended time accommodations (1.5x or 2x)
- ☐ Offer alternative formats for visual questions (text descriptions)
- ☐ Test with keyboard-only navigation
- ☐ Review color contrast for interactive elements
See the EdTech Accessibility Template for a full accessibility checklist.
Open Questions
| # | Question | Owner | Status | Decision |
|---|---|---|---|---|
| 1 | [Unresolved question] | [Name] | Open | |
| 2 | [Unresolved question] | [Name] | Open | |
Filled Example: SQL Certification Exam
Assessment Map
| Assessment | Type | Mapped Objectives | Weight | Passing Score |
|---|---|---|---|---|
| Module quizzes (5) | Formative | LO-1 through LO-5 | 20% total | 60% each |
| Hands-on SQL exercises (10) | Formative | LO-1, LO-2, LO-3 | 20% total | All test cases pass |
| Final exam (50 questions) | Summative | All LOs | 30% | 75% |
| Capstone query project | Summative | LO-2, LO-3, LO-4, LO-5 | 30% | Rubric 3.5/5 |
Automated Grading: SQL Exercises
| Rule | Configuration |
|---|---|
| Answer matching | Test case output comparison (exact match on result set) |
| Partial credit | Yes: correct columns but wrong filter = 50% credit |
| Retry policy | Unlimited attempts, best score kept |
| Time limit | None (self-paced) |
| Anti-cheating | 3 dataset variants per exercise, randomized assignment |
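For the hands-on exercises, test case output comparison simply means running the learner's query and a reference query against the same dataset and comparing the result sets. A minimal sketch, assuming SQLite and illustrative table names (a production grader would also need sandboxing and timeouts):

```python
import sqlite3

def result_set(conn: sqlite3.Connection, query: str) -> list[tuple]:
    rows = conn.execute(query).fetchall()
    return sorted(rows)  # order-insensitive unless the prompt requires ORDER BY

def grade_sql_exercise(learner_sql: str, reference_sql: str, dataset_sql: str) -> bool:
    conn = sqlite3.connect(":memory:")
    conn.executescript(dataset_sql)  # load one of the randomized dataset variants
    try:
        return result_set(conn, learner_sql) == result_set(conn, reference_sql)
    except sqlite3.Error:
        return False  # syntax errors or invalid columns fail the test case
    finally:
        conn.close()

# Usage example with a tiny illustrative dataset
dataset = """
CREATE TABLE orders (id INTEGER, region TEXT, total REAL);
INSERT INTO orders VALUES (1, 'EMEA', 120.0), (2, 'APAC', 80.0), (3, 'EMEA', 45.5);
"""
print(grade_sql_exercise(
    "SELECT region, SUM(total) FROM orders GROUP BY region",
    "SELECT region, SUM(total) AS revenue FROM orders GROUP BY region",
    dataset,
))  # True: column aliases differ, but the result sets match
```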
Success Metrics
| Metric | Target | Measurement |
|---|---|---|
| Average quiz score | 78%+ | LMS analytics |
| Exam pass rate (first attempt) | 65% | LMS reporting |
| Item discrimination index | > 0.3 for all questions | Psychometric analysis |
| Learner satisfaction with assessments | 4.2/5 | Post-course survey |
Track these with your product analytics dashboard to identify questions that need revision.
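The item discrimination index measures how well an individual question separates strong performers from weak ones. Several formulas are in common use; the sketch below uses the upper/lower-group method with illustrative pilot data, where values below roughly 0.3 flag questions worth revising.

```python
# Item discrimination index via the upper/lower-group method (one common formula).
# Input: one (total_exam_score, item_correct) pair per learner, item_correct in {0, 1}.
def discrimination_index(records: list[tuple[float, int]], group_frac: float = 0.27) -> float:
    """Return D in [-1, 1]: proportion correct in the top group minus the bottom group."""
    ranked = sorted(records, key=lambda r: r[0], reverse=True)
    n = max(1, int(len(ranked) * group_frac))
    upper, lower = ranked[:n], ranked[-n:]
    p_upper = sum(item for _, item in upper) / n
    p_lower = sum(item for _, item in lower) / n
    return p_upper - p_lower

# Usage: one question's results from a pilot cohort (illustrative numbers).
cohort = [(92, 1), (88, 1), (85, 1), (79, 0), (74, 1), (70, 0), (66, 0), (61, 1), (55, 1), (48, 0)]
print(discrimination_index(cohort))  # 0.5 -> keep; below ~0.3 suggests revising the item
```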
Key Takeaways
- Map every assessment to specific learning objectives before writing any questions
- Match integrity measures to the stakes level of each assessment
- Use a mix of auto-gradable items (efficiency) and rubric-graded items (depth)
- Analyze item discrimination after your first cohort to improve question quality
- Design questions that test application and analysis, not just recall
About This Template
Created by: Tim Adair
Last Updated: 3/4/2026
Version: 1.0.0
License: Free for personal and commercial use
