Four guides, one system. Where to start on kickoff day, when to write each entry type, what Emerging vs Expert actually means, and what Design Award judges look for across 5–8 minutes of review.
1. Start Here
2. Season Timeline
3. Rubric Deep Dive
4. Student Ownership
5. Award Prep
// Section 01
The Notebook Pathway — Where to Start
Four guides, one pathway. Which guide to open on kickoff day, which to open the night before competition, and how they connect.
🎯
Start here if you have never set up a notebook before. This pathway walks you through every resource on this site in the order that makes sense — from kickoff day to pre-competition audit.
If you’re starting your first notebook
Open Getting Started first. Complete the first three entries before doing anything else. The template can wait until Day 2.
If you’re a coach setting up
Open Template Guide first. Build the master, duplicate for each team, then run kickoff. Students start in Getting Started.
If competition is in 2 weeks
Open Mission Control → Notebook Audit tab. Fix the red items. Then open Engineering Notebook → Interview Prep.
If you want Design Award
Open Engineering Notebook and read the rubric table. Your goal is Expert on all six criteria. Then read Judge Interview Playbook.
// Section 02
When to Write What
A season-by-season schedule showing when each entry type gets written and which site resources support each phase.
📅
The notebook is written in parallel with the robot — not after it. This schedule shows when each type of entry gets written and what site resources support each phase.
Kickoff Day — First Three Entries (All Team Members)
Write: (1) Team roster with roles and ownership statements. (2) Season goals — measurable targets for robot performance and notebook quality. (3) Game analysis — scoring breakdown, priority elements, criteria and constraints.
Build Season — Ongoing Entries
Build log (date, members, what changed, why, before/after photo), CAD drawings, programming log with constants. Orange slides. One entry per change, not one entry per week.
Post-Competition — New EDP Cycle
Re-identify the problem based on competition data. The iteration divider slide signals the new EDP loop. This is what separates Expert notebooks from Proficient ones.
Pre-Competition — Notebook Audit
Run the notebook audit in Mission Control or the Template Guide. Fix every red item. Every entry needs Written By, Witnessed By, and Date. TOC must be current.
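The audit pass described here — every entry needs Written By, Witnessed By, and Date — can be sketched as a small script. The field names and entry structure below are hypothetical, not an official RECF format:

```python
# Hypothetical sketch of a notebook audit: flag entries missing any of the
# required fields. Field names ("written_by", "witnessed_by", "date") are
# illustrative, not an official schema.

REQUIRED_FIELDS = ("written_by", "witnessed_by", "date")

def audit(entries):
    """Return (title, missing_fields) pairs for every incomplete entry."""
    red_items = []
    for entry in entries:
        missing = [f for f in REQUIRED_FIELDS if not entry.get(f)]
        if missing:
            red_items.append((entry.get("title", "untitled"), missing))
    return red_items

entries = [
    {"title": "Intake test", "written_by": "Ava", "witnessed_by": "Sam", "date": "2024-11-02"},
    {"title": "Lift rebuild", "written_by": "Sam", "date": "2024-11-09"},  # no witness
]
print(audit(entries))  # → [('Lift rebuild', ['witnessed_by'])]
```

The same check could run over a spreadsheet export of the notebook's table of contents; the point is that "fix every red item" is a mechanical pass, not a judgment call.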
// Section 03
Rubric Deep Dive
Expert: Detailed build log with photos. Code shown alongside design intent. Every change has a reason.
5. TEST & EVALUATE
Emerging: Testing mentioned but results not recorded.
Proficient: Tests performed and results noted with some data.
Expert: Original testing. Data tables with n≥5 trials. Benchmark targets set before testing. Conclusions drive next action.
6. ITERATE (EDP CYCLES)
Emerging: Only one design cycle shown across the season.
Proficient: Two or more cycles visible with some continuity.
Expert: Multiple full cycles. Each cycle explicitly linked to data from the previous one. V1 → V2 comparison shows measurable improvement.
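The Expert-level test log can be sketched in code: a benchmark fixed before testing, at least five trials, and a conclusion computed from the data. All numbers and names below are illustrative:

```python
# Sketch of an "Expert"-level test log: benchmark set BEFORE testing,
# n >= 5 trials, and a conclusion that drives the next action.
# Trial results and the benchmark value are illustrative.

BENCHMARK_SUCCESS_RATE = 0.90  # target fixed before any trials run

trials = [True, True, False, True, True, True]  # n = 6 intake pickups

n = len(trials)
success_rate = sum(trials) / n

assert n >= 5, "rubric expects at least 5 trials"
conclusion = "meets benchmark" if success_rate >= BENCHMARK_SUCCESS_RATE else "iterate"
print(f"n={n}, success={success_rate:.0%}, conclusion: {conclusion}")
# → n=6, success=83%, conclusion: iterate
```

Note the ordering: the benchmark is a constant declared before the data exists, which is exactly what judges look for when they check that targets were set before testing.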
🔭
“Fully Developed” means scoring Emerging or higher on the first four criteria. Meeting that threshold is what gets your notebook scored at all: below it, judges set the notebook aside; above it, every criterion is ranked and compared against other teams.
RECF
The Rubric Is a Sorting Tool, Not a Test
Judges use the rubric to rank notebooks quantitatively first, then apply qualitative judgment for final award decisions. A notebook that scores Expert on every criterion is not automatically the Design Award winner — but it is guaranteed to be in the final deliberation. A notebook that scores Emerging on most criteria will not make the cut, regardless of how good the robot was.
// Section 04
Student Ownership and EN4
What student-centered means in practice, what mentors can and cannot do, and the RECF EN4 rule on AI-generated content.
⚠️
RECF EN4 is explicit: using AI tools to generate, organize, enhance, or alter notebook content violates the Student-Centered Policy. This includes using AI to draft entry text, improve writing quality, suggest what to write, or fill in placeholder prompts. The template provides structure. Students provide everything else.
What “Student-Centered” Means in Practice
A student-centered notebook has one test: can every team member explain every entry they wrote, in detail, to a judge who asks follow-up questions? If yes, the notebook is student-centered. If no, it is not — regardless of how well-organized it looks.
✅
A student who says “I wrote that entry the night after we tested the intake and the numbers surprised us” owns that entry.
✅
A student who can point to the page, explain the test conditions, and describe what changed in response to the data owns that entry.
❌
A student who reads from the notebook but cannot answer “why did you choose that test protocol” does not own that entry.
❌
Entries all written in one session after the season ends. Judges use version history to verify chronological writing, and a last-minute dump is visible there.
❌
Writing style that does not match the student’s vocabulary, grade level, or demonstrated knowledge in the interview. Judges notice vocabulary mismatches.
The Originality Check — 3 Questions Before Every Submission
Could I explain this entry to a judge who asks three follow-up questions? If not, rewrite it in your own words until you can.
Does this entry describe something that actually happened, in the order it happened? A good entry reads like a lab notebook, not like a report written after the fact.
Are there at least two different writing styles visible across all entries? On a 3-person team, judges expect three voices. Identical phrasing across all entries is a red flag.
What Mentors Can and Cannot Do
✅ Mentors can
Set up the template structure
Explain what the rubric criteria mean
Ask students questions about their work
Review entries and point out missing elements
Show examples of strong vs weak entries
Set up version history monitoring
❌ Mentors cannot
Write entries or rewrite student text
Tell students exactly what to write
Use AI to draft or improve entries
Fill in decision matrix scores for students
Edit entries after submission
Reconstruct entries retroactively
❌
RECF EN4: “The use of artificial intelligence / large language model (AI/LLM) programs or tools to generate, organize, enhance, or alter Engineering Notebook content or programming code is contrary to the RECF Student-Centered Policy.” This is not a gray area. If AI wrote it, it is a violation.
// Section 05
What Judges Actually Look For
What judges see in 5-8 minutes, the 5 things they skip, and what separates Design Award notebooks from the rest.
🏆
The notebook is judged on the same criteria as the interview. A team that knows their notebook cold — can point to any entry, explain what happened, and describe what they did next — will win more judge interviews than a team with a beautiful notebook they barely remember writing.
How Judges Evaluate Notebooks
Judges typically have 5–8 minutes per notebook. They are not reading every word. They are looking for:
Evidence that the EDP happened — problem defined, options compared, decision documented, built, tested, data recorded
Evidence that it happened more than once — iteration is the single biggest differentiator between Proficient and Expert
Evidence that multiple students contributed — different writing styles, different “Written By” names
Evidence that entries were written in real time — dates that match the season calendar, version history that shows progressive editing
The 5 Things Judges Skip
Decoration without content. Themed slide backgrounds, icons, and custom fonts do not score rubric points. Substance scores. Judges skip visually busy slides that say nothing.
Entries without dates. An undated entry is evidence-free. Judges cannot tell if it was written the day it happened or the week before competition.
Summaries of decisions without showing the process. “We chose a four-bar lift” is not evidence. “We compared three lift designs using a decision matrix — here are the scores and here is why we weighted torque most heavily” is evidence.
Test logs with no data. “We tested the intake and it worked” is not a test log. It is a one-sentence absence of evidence.
One author across 80 slides. If every entry shows the same “Written By” name, judges assume only one person understands the robot. They will probe the others in the interview.
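The decision-matrix evidence described above ("here are the scores and here is why we weighted torque most heavily") reduces to a weighted sum. The criteria, weights, and scores below are invented for illustration:

```python
# Hypothetical weighted decision matrix for the lift-design example.
# Criteria, weights, and 1-5 scores are made up for illustration.

weights = {"torque": 0.5, "weight": 0.2, "build_time": 0.3}

designs = {
    "four-bar":       {"torque": 4, "weight": 3, "build_time": 5},
    "scissor":        {"torque": 3, "weight": 2, "build_time": 2},
    "double-reverse": {"torque": 5, "weight": 4, "build_time": 1},
}

def weighted_score(scores):
    """Sum of criterion score times criterion weight."""
    return sum(weights[c] * s for c, s in scores.items())

ranked = sorted(designs, key=lambda d: weighted_score(designs[d]), reverse=True)
for d in ranked:
    print(f"{d}: {weighted_score(designs[d]):.1f}")
```

The notebook entry that scores rubric points is not the winner's name but this table plus one sentence explaining why torque got the heaviest weight.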
What Separates Design Award Winners
Across the rubric criteria, Design Award notebooks consistently show:
3+ full EDP cycles with each cycle explicitly referencing data from the previous
Decision matrices for every major choice — drivetrain, primary mechanism, autonomous strategy, rebuild decisions
Test data with before/after comparisons — not just “we tested it” but “V1 jam rate: 15%, V2 jam rate: 3% after roller gap change”
STEM connections that name specific principles — gear reduction, moment of inertia, PID control, Newton’s second law — connected to specific mechanisms
Competition reflections that feed directly into the next EDP cycle
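The before/after comparison cited above (V1 jam rate 15%, V2 jam rate 3%) is simple rate arithmetic over trial counts. The counts below are illustrative:

```python
# Sketch of a before/after test comparison. Trial counts are illustrative.

def jam_rate(jams: int, attempts: int) -> float:
    """Fraction of attempts that jammed."""
    return jams / attempts

v1 = jam_rate(3, 20)   # V1: 3 jams in 20 attempts = 15%
v2 = jam_rate(1, 33)   # V2 after roller gap change: ~3%
improvement = v1 - v2

print(f"V1: {v1:.0%}  V2: {v2:.0%}  improvement: {improvement:.0%}")
# → V1: 15%  V2: 3%  improvement: 12%
```

Recording the raw counts, not just the percentages, is what makes the comparison verifiable when a judge asks how the rates were measured.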
📝
The interview and the notebook tell the same story. When a judge asks “why did you choose that intake design,” the answer should match what is on page 18 of the notebook. Practice the interview with the notebook open. Point to the evidence as you speak.