Template · Free · ⏱️ 90-180 minutes
Knowledge Assessment System Template
Free template for building knowledge assessment systems. Plan question banks, scoring rubrics, item analysis, proctoring requirements, and adaptive testing.
Updated 2026-03-05
Knowledge Assessment System
| # | Item | Value (1-10) | Effort (1-10) | Score | Priority | Owner |
|---|------|--------------|---------------|-------|----------|-------|
| 1 |      |              |               | 3.0   |          |       |
| 2 |      |              |               | 2.5   |          |       |
| 3 |      |              |               | 1.8   |          |       |
| 4 |      |              |               | 1.2   |          |       |
| 5 |      |              |               | 1.1   |          |       |
Edit the values above to try it with your own data. Your changes are saved locally.
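The Score column can be computed from Value and Effort. A minimal sketch, assuming the common value-over-effort prioritization formula (the page does not state the formula explicitly, so treat it as an assumption):

```python
def priority_score(value: float, effort: float) -> float:
    """Priority score as Value / Effort, rounded to one decimal.

    Assumed formula: higher value and lower effort both raise priority.
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return round(value / effort, 1)

# Example: a high-value, low-effort item scores 3.0
print(priority_score(9, 3))
```

Sort items by this score descending to fill the Priority column.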
Get this template
Choose your preferred format. Google Sheets and Notion are free, no account needed.
Frequently Asked Questions
How many questions should be in my item bank per assessment?
Target a minimum 3:1 ratio (3 bank questions for every 1 on the exam). For high-stakes certification exams, aim for 5:1 or higher. A 50-question certification exam should draw from a bank of 250+ questions. This prevents memorization and ensures form equivalence. For low-stakes practice quizzes, a 2:1 ratio is sufficient since learners retaking the quiz are reinforcing learning, not gaming the system.
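The ratios above translate into a quick bank-sizing helper. A sketch; the tier names ("high", "standard", "practice") are illustrative labels, not terms from this page:

```python
def min_bank_size(exam_length: int, stakes: str = "standard") -> int:
    """Minimum item-bank size for a given exam length.

    Ratios follow the guidance above: 5:1 for high-stakes
    certification, 3:1 as the standard minimum, 2:1 for
    low-stakes practice quizzes.
    """
    ratios = {"high": 5, "standard": 3, "practice": 2}
    return exam_length * ratios[stakes]

# A 50-question certification exam needs a bank of 250+ items
print(min_bank_size(50, stakes="high"))
```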
What is a good discrimination index for a question?
Above 0.3 is good. Above 0.5 is excellent. Below 0.2 means the question fails to distinguish between learners who know the material and those who do not. It should be revised or retired. A negative discrimination index (strong learners get it wrong more often than weak learners) means the question is misleading or the keyed answer is wrong. Fix it immediately.
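The discrimination index can be computed with the classical upper-lower group method: take the top and bottom 27% of learners by total score and subtract the lower group's pass rate on the question from the upper group's. A minimal sketch (the 27% cutoff is the conventional choice from classical test theory):

```python
def discrimination_index(responses):
    """Upper-lower discrimination index for a single question.

    responses: list of (total_score, correct) tuples, one per learner,
    where correct is 1 if the learner answered this question right.
    Returns a value in [-1, 1]; above 0.3 is good, below 0.2 needs
    revision, negative means the item is misleading or mis-keyed.
    """
    ranked = sorted(responses, key=lambda r: r[0], reverse=True)
    k = max(1, round(len(ranked) * 0.27))  # conventional 27% groups
    p_upper = sum(c for _, c in ranked[:k]) / k
    p_lower = sum(c for _, c in ranked[-k:]) / k
    return p_upper - p_lower

# Strong learners all correct, weak learners all wrong -> index 1.0
data = [(100, 1), (95, 1), (90, 1), (85, 1), (80, 1),
        (40, 0), (35, 0), (30, 0), (25, 0), (20, 0)]
print(discrimination_index(data))
```

Run this per question after each exam administration to flag items for revision or retirement.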
Should I show correct answers after an assessment?
For formative assessments (practice quizzes, module checks): yes, immediately. The feedback is the learning. For summative assessments (graded exams): show after the submission deadline for the cohort. For certification exams: never show correct answers. Showing certification exam answers creates a memorization pipeline that destroys exam validity. Instead, show domain-level performance so learners know where to study.
How do I handle AI-generated answers on written assessments?
Layer your defenses. Use in-session writing (browser-locked, timed) rather than take-home essays. Compare writing style against the learner's historical submissions. Ask questions that require personal experience or application to specific scenarios (hard to generate generically). For high-stakes assessments, add an oral follow-up where the learner must explain their written response. Accept that AI is a tool and design assessments that test the ability to evaluate and improve AI output, not just produce text from scratch.
When should I use adaptive testing (CAT) vs fixed-form exams?
Use adaptive testing when you have a large item bank (500+ calibrated items per domain), need to measure precisely across a wide ability range, and want shorter exams (CAT typically needs 40-60% fewer questions for the same measurement precision). Use fixed-form exams when your item bank is small, you need to compare scores across learners directly, or regulatory requirements demand all learners answer the same questions. Most platforms should start with fixed-form and migrate to CAT after accumulating sufficient item-level performance data.
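The core of CAT item selection is picking the unadministered item that is most informative at the learner's current ability estimate. A simplified sketch using the two-parameter logistic (2PL) model, a standard choice in item response theory (item dicts with `a` discrimination and `b` difficulty are illustrative, not from this page):

```python
import math

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def next_item(theta: float, bank: list, administered: set) -> dict:
    """Select the unadministered item with maximum information at theta."""
    candidates = [i for i in bank if i["id"] not in administered]
    return max(candidates,
               key=lambda i: item_information(theta, i["a"], i["b"]))

bank = [
    {"id": "q1", "a": 1.0, "b": -1.0},  # easy item
    {"id": "q2", "a": 1.0, "b": 0.0},   # medium item
    {"id": "q3", "a": 1.0, "b": 1.0},   # hard item
]
# An average-ability learner (theta = 0) gets the medium item
print(next_item(0.0, bank, set())["id"])
```

In a full CAT loop you would re-estimate theta after each response and stop when the standard error falls below a target, which is why large, calibrated banks are a prerequisite.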
Related Tools
Founder Fit Assessment
Match your founder profile to validated SaaS ideas.
PLG Readiness Score
Score your product-led growth readiness across 7 dimensions.
PM Maturity Assessment
Assess your product management maturity across 6 key dimensions.
Launch Readiness Scorecard
Assess product launch readiness across 10 dimensions with a 40-item checklist.
Explore More Templates
Browse our full library of PM templates, or generate a custom version with AI.