
Engineering Notebook &
Design Process

The engineering notebook is not a journal — it is evidence of disciplined thinking. Testing data, decision matrices, and STEM connections are what separate a Design Award winner from a team that just competed hard. This guide teaches you how to build that evidence all season long.

1. Design Process
2. Testing
3. Decision Matrix
4. Documentation
5. STEM Connections
6. Interview Prep
// Section 01
The Engineering Design Process
The EDP is the backbone of your notebook. According to the RECF, judges evaluate notebooks against a rubric that mirrors each EDP step. A notebook that follows each step — repeatedly, across multiple design cycles — scores in the Expert column. One that does it once scores Emerging.

The Six EDP Steps

📝
New to the notebook? Read Getting Started with the Engineering Notebook first. That guide covers entry structure, common mistakes, and your first three entries. Come back here for the deep dive on testing protocols and decision matrices.
1. Define the Problem
2. Research
3. Brainstorm Solutions
4. Select Best Solution
5. Build & Program
6. Test & Iterate
📝
Iteration is the key insight. The RECF and SigBots both emphasize: the EDP is not a checklist you complete once at the start of the season. Every significant design change — new mechanism, code overhaul, strategy pivot — is a new cycle. Teams that win Design Award repeat the full EDP 5–10 times across the season.

The Rubric Levels — Know Your Target

The RECF rubric scores each EDP component at three levels. Your goal is Expert on every row.

| Criterion | Emerging (1 pt) | Proficient (2 pts) | Expert (3 pts) |
| --- | --- | --- | --- |
| Identify Problem | Problem is listed but not clearly described | Problem is clearly stated with objectives and constraints | Problem is thoroughly described, constraints are specific and measurable |
| Brainstorm Solutions | One or two ideas listed without explanation | Three or more labeled sketches with descriptions | Multiple detailed diagrams with pros/cons and research backing |
| Select Best Solution | Choice made without explanation | Choice explained with some reasoning | Decision matrix with weighted criteria, written conclusion explaining why |
| Build & Program | Build notes or code exist but are not linked to design decisions | Steps are recorded; code changes are noted | Detailed build log with photos, code shown alongside design intent |
| Test & Evaluate | Testing mentioned but results not recorded | Tests are performed and results noted | Original testing with tables, graphs, benchmark targets, and conclusions |
| Repeat Process | Only one design cycle shown | Two or more cycles visible | Multiple full cycles, each clearly linked to data from the previous one |

🔬 Check for Understanding

According to the RECF rubric, what is required to reach the Expert level for “Select Best Solution”?
  • Choose a design and briefly say you liked it best
  • List the pros and cons of each option in a paragraph
  • A decision matrix with weighted criteria and a written conclusion explaining the choice
  • Build all three options and pick whichever works best
// Section 02
Testing Protocol
The SigBots wiki identifies testing as one of the most common point losses on the rubric — teams do the tests but fail to document them. Expert-level testing has a defined procedure, measurable benchmarks set before the test, recorded results, and written conclusions.
💡
The three-part testing rule: every test entry must have (1) a clear hypothesis — what you expect to happen and why, (2) a procedure — exact steps so someone else could repeat it, and (3) results with a conclusion — what actually happened and what you will do because of it.

What Makes Testing “Original”

The RECF rubric specifically requires original testing — tests designed and performed by the team itself, with results recorded in the notebook, not copied from another team or source.

Types of Tests for a VRC Robot

Performance Tests
  • Cycle time: how fast does the intake collect one element?
  • Autonomous consistency: how many of 10 runs land within 2 inches of target?
  • Drive speed: time to cross the field
  • Scoring rate: elements per 30 seconds of driver control
Reliability Tests
  • Stall detection: does it trigger in under 200ms?
  • PID repeatability: standard deviation across 20 runs
  • Battery impact: does performance change at 60% vs 100%?
  • Mechanical stress: runs without failure over 10 matches

Testing Log — How to Format Entries

This is what a well-documented test entry looks like in your notebook. Every row tells the complete story:

| Date | Test & Hypothesis | Result | Target? | Next Action |
| --- | --- | --- | --- | --- |
| Oct 14 | Intake cycle time. Hypothesis: intake will collect one ring in under 0.8s based on 600rpm roller speed. | 0.94s avg (n=10) | ❌ Miss | Increase roller voltage cap; retest |
| Oct 17 | Intake cycle time v2. Hypothesis: 12V voltage cap will bring cycle time below 0.8s target. | 0.71s avg (n=10) | ✅ Pass | Document in notebook; move to drive testing |
| Oct 21 | Auton consistency. Hypothesis: robot will land within 2 inches of goal in 8/10 runs after IMU recalibration. | 6/10 within 2 in | ⚠ Partial | Tune kP up 0.1; retest on competition tiles |
| Oct 28 | Auton consistency v2. Hypothesis: kP=1.3 will reach 8/10 target on competition-spec tiles. | 9/10 within 2 in | ✅ Pass | Lock in; document final PID constants |
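The pass/fail calls in the log above come from simple arithmetic: average the trials and compare against the benchmark set before the test. A minimal sketch in Python, with hypothetical trial data standing in for real measurements:

```python
# Hypothetical intake cycle-time trials in seconds (n=10);
# the values are illustrative, not real measurements.
trials = [0.68, 0.73, 0.70, 0.74, 0.69, 0.72, 0.71, 0.70, 0.73, 0.70]
benchmark = 0.8  # target written in the notebook BEFORE running the test

n = len(trials)
avg = sum(trials) / n
verdict = "PASS" if avg < benchmark else "MISS"

print(f"n={n}, avg={avg:.2f}s, target <{benchmark}s: {verdict}")
```

Recording the raw trial list, not just the average, lets you compute spread later and lets a judge verify the claim from your notebook.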
STEM Highlight
The Science of Measurement & Statistics

Every test entry above uses sample size (n=10), average, and comparison to a benchmark. These are core scientific measurement concepts. Being able to explain them to a judge demonstrates genuine STEM understanding — not just robot building.

  • Standard deviation (σ): Measures consistency. A PID with σ = 0.3in is more reliable than one with σ = 1.5in. Calculate it across repeated autonomous runs.
  • Sample size (n): Why n=10 matters: one good run could be luck. Ten consistent runs is evidence. The more trials, the more trustworthy your conclusion.
  • Hypothesis (H): A hypothesis is a prediction with a reason: “If we increase kP, then the robot will reach its target faster because the proportional response is larger.”
  • Change variables (Δ): Change one variable at a time. If you change kP and the tile surface, you cannot know which caused the improvement.
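These concepts take only a few lines to apply. A sketch using Python's standard library, with made-up autonomous stopping errors as the sample:

```python
import statistics

# Hypothetical stopping-error measurements in inches across 10
# autonomous runs; illustrative values only.
errors = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4, 0.7, 1.1]

mean = statistics.mean(errors)
sigma = statistics.stdev(errors)  # sample standard deviation

print(f"mean error = {mean:.2f} in, sigma = {sigma:.2f} in")
```

A small σ relative to your 2-inch tolerance is the quantitative version of “our auton is consistent.”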

💬 Interview answer: “We use controlled experiments — changing one variable at a time — and record n=10 trials per test so our conclusions are based on data, not luck. We compare results to pre-set benchmarks to decide whether a change is actually an improvement.”

🔬 Check for Understanding

Your teammate ran the autonomous three times, saw it work all three times, and wrote “auton works” in the notebook. What is the biggest problem with this testing approach?
  • The test should have been run on a different field
  • The sample size is too small, and there’s no benchmark, no procedure, and no conclusion — judges can’t evaluate it
  • Autonomous results should not be documented in the notebook
  • The test passed, so it doesn’t need more documentation
// Section 03
Decision Matrix
A decision matrix turns a subjective team debate into a structured, documentable, defensible decision. It shows judges that you made choices based on defined criteria — not just gut feeling or whoever argued loudest.

How a Decision Matrix Works

  1. List your design options — typically 3–4 alternatives being compared (e.g., three different intake designs)
  2. Define criteria — the specific factors that matter for this decision. For an intake: speed, consistency, part count, ease of build, risk of jamming
  3. Assign weights — not all criteria matter equally. Speed might be 3× more important than part count. Weights make this explicit
  4. Score each option — each option gets a score (1–5) for each criterion. Multiply by weight to get weighted score
  5. Total and conclude — the highest total does not automatically win. Write a paragraph explaining the result and whether you agree with the matrix
ℹ️
Always define your scoring scale in the notebook. Write: “5 = Excellent, 3 = Acceptable, 1 = Poor” before the matrix. Judges need to know what a score of 4 means for each criterion — otherwise the matrix is subjective and unconvincing.

Decision Matrix — Score It Yourself

Practice using a real decision matrix: score each intake design 1–5 on each criterion, multiply each score by its weight, and total the columns.

| Criterion | Weight | 🔁 Roller Intake | 👉 Claw Intake | ♠️ Suction Intake |
| --- | --- | --- | --- | --- |
| Cycle Speed | ×3 | | | |
| Consistency | ×3 | | | |
| Ease of Build | ×2 | | | |
| Jam Risk | ×2 | | | |
| Part Count | ×1 | | | |
| TOTAL | | | | |
STEM Highlight
Mathematics: Weighted Scoring & Multi-Criteria Optimization

The decision matrix is a real mathematical tool called the Weighted Sum Model (WSM) — used by engineers everywhere from NASA to software product teams. Each total is calculated as Total = ∑(criterion score × weight), which is the dot product of a score vector and a weight vector.
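The formula is easy to sketch. The weights below mirror the example matrix above; the 1–5 scores are hypothetical fill-ins, not recommended values:

```python
# Weighted Sum Model: Total = sum(score * weight) over all criteria.
weights = {"cycle_speed": 3, "consistency": 3, "ease_of_build": 2,
           "jam_risk": 2, "part_count": 1}

# Hypothetical 1-5 scores for two of the intake options.
scores = {
    "roller": {"cycle_speed": 5, "consistency": 4, "ease_of_build": 3,
               "jam_risk": 3, "part_count": 2},
    "claw":   {"cycle_speed": 2, "consistency": 5, "ease_of_build": 4,
               "jam_risk": 5, "part_count": 4},
}

def weighted_total(option, weights):
    # Dot product of the score vector and the weight vector.
    return sum(option[c] * w for c, w in weights.items())

totals = {name: weighted_total(s, weights) for name, s in scores.items()}
print(totals)
```

Changing one weight and re-running is a quick sensitivity analysis: if the ranking flips, the decision hinges on that criterion, and that is worth a sentence in your written conclusion.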

  • Weighted sum: Total score = sum of (score × weight) for each row. This is the same operation as calculating a weighted average on a report card.
  • Trade-off analysis: Engineering is always about trade-offs. A faster intake that jams more often may score lower overall than a slower but more reliable one.
  • Weight assignment (w): Weights encode your team’s priorities. If you decide speed matters 3× more than part count, you are making a mathematical statement about your strategy.
  • Sensitivity analysis: What happens if you change one weight? If the winner changes, your decision is sensitive to that criterion — document this in your notebook.

💬 Interview answer: “We use a weighted decision matrix. We assign weights based on what matters most for this game, score each option 1–5, and multiply. The math makes our reasoning visible — judges can see exactly why we chose what we chose, not just that we chose it.”

🔬 Check for Understanding

Your decision matrix gives Intake Design B a total of 47 and Design A a total of 44. A teammate says “so we pick B, done.” What should you add to the notebook?
  • Nothing — the math speaks for itself
  • A written conclusion explaining the result, whether the team agrees, any factors the matrix may have missed, and confirmation of the final decision
  • Build both designs and test them before deciding
  • Re-run the matrix with different weights until Design A wins
// Section 04
Documentation Best Practices
RECF guidance is explicit: notebooks are evaluated on content and clarity, not length or production quality. A notebook entry that clearly records one design decision with labeled diagrams, a test result, and a conclusion beats 10 pages of photos with no explanation.

The Anatomy of a Notebook Entry

Every entry should have these elements. Build them into a template your whole team uses so documentation is consistent even when written by different members:

1. Header
Date · Team member(s) present · Entry title · Which EDP step this entry covers. Example: “Oct 21 · Alex & Jordan · Intake v3 Build Log · EDP Step 5: Build & Program”
2. Context
Why this entry exists. What problem or question prompted this work. Link to the previous entry that led to this decision. “After testing showed intake cycle time was 0.94s (above our 0.8s target), we redesigned the roller geometry.”
3. Labeled Diagram or Code Snippet
Drawings must have labels. Photos must have annotations. Code should be shown alongside its purpose — not dumped in without context. “Figure 3: New intake geometry with 45° angle showing how rings now self-center”
4. Test Results
A data table or measurement. What was the specific result? Include n= (how many trials). Compare to your benchmark. “Cycle time: 0.71s avg (n=10). Target: 0.8s. PASS.”
5. Conclusion & Next Step
What did you learn? What will you do next? This creates the chain between entries that judges look for. “New roller angle solved the timing issue. Next: test whether the new geometry causes any jam risk with bent rings.”

Physical vs Digital Notebooks

The RECF states that both formats are evaluated equally; neither has an inherent advantage.

🏆
The “5-minute rule” for notebooks: after every build or coding session, spend 5 minutes writing what you did and why. Do not wait until the end of the week. Memory fades, details are lost, and the notebook starts to sound reconstructed rather than genuine. Judges can tell the difference.

STEM Highlight
Engineering: Traceability & Design History

Professional engineers maintain design history files — every decision, test, and change is traceable forward and backward. Your notebook is your design history file. In aerospace, medical devices, and automotive engineering, this documentation is legally required.

  • Traceability: Each entry links to a previous one. “This arm redesign was triggered by test data from entry Oct 14.” Judges can follow the chain of reasoning.
  • Iteration loop: Real engineering is iterative: design → build → test → analyze → redesign. Your notebook should show multiple passes through this loop, not just one.
  • Change log: Document every meaningful change. Even failed attempts are valuable — they show you explored the problem space thoroughly before committing to a solution.
  • Team attribution: Note who was present for each session. Judges verify that students (not coaches) did the work. Each member’s contribution should appear across multiple entries.

💬 Interview answer: “Our notebook is structured like a professional design history file. Every entry has a date, who was there, what EDP step it covers, and a link to what came before. You can trace any design decision from the problem statement all the way to the final robot.”

🔬 Check for Understanding

A teammate suggests going back to your October entries and adding more detail before submitting the notebook to judges next week. What does the RECF say about this?
  • It’s encouraged — judges want to see thorough entries
  • You can edit digital entries any time since there’s no paper trail
  • Retroactively editing past entries is not appropriate. Instead, add a new current-date entry that references and supplements the earlier one
  • Only physical notebooks need to stay unedited; digital ones can be revised
// Section 05
STEM Connections Across Your Robot
Every system on your robot is a live STEM lesson. Judges want to hear you connect what you built to what you learned. This section maps the real science, math, engineering, and technology behind the things your team already does.
🏫
Why STEM connections matter in the interview: judges are often STEM professionals. When a student says “we chose a four-bar linkage because its parallelogram geometry keeps the end of the arm at a constant angle through its range of motion” instead of “we built an arm,” the judge hears someone who actually understands engineering. This depth is what separates Design Award winners at State and Worlds.

🔬 Check for Understanding

A judge asks: “Tell me about the math behind your PID controller.” Which answer best demonstrates STEM understanding?
  • “PID stands for Proportional, Integral, Derivative. We tuned it until it worked.”
  • “We used EZ Template which has PID built in, so we didn’t need to do math.”
  • “The P term produces a correction proportional to the current error — like a spring force. When error is large, correction is large. kP controls how aggressive that response is. We tuned it by measuring overshoot and settling time, which are control theory metrics.”
  • “Our PID scored 9 of 10 autonomous runs so it’s working well.”
// Section 06
Interview Preparation
The pit interview is your chance to show judges the thinking behind the notebook. Every team member should speak, and the EDP should structure your presentation. As the SigBots wiki puts it: “Practice, Practice, Practice.” This section gives you the questions to practice with.
🏆
Two types of interviews: the pit interview (every team gets this — broad overview of your whole season) and the secondary interview (judges pull you aside for deeper questions on specific topics). Scoring high on the rubric earns you the secondary interview. Nail the secondary to win the award.

The Six Interview Rules

  1. Every team member speaks — assign each person a topic. Judges verify that all members understand the robot, not just one programmer.
  2. Follow the EDP structure — introduce the problem → research → options considered → why you chose your design → how you tested → what changed → where you are now.
  3. Bring data — hold the notebook open to a test table or decision matrix when discussing it. “As you can see here...” is stronger than reciting numbers from memory.
  4. Mention iterations — tell judges when things failed and what you learned. Failure + learning = engineering. Judges specifically look for this.
  5. Tie back to STEM — connect at least one technical decision to a math or science concept. This one moment often changes how judges see your team.
  6. Practice with a timer — pit interviews are short. Covering everything in 5 minutes requires deliberate practice, not improvisation.

Practice Questions with Answer Frameworks

“Walk me through your design process this season.”
This is the big EDP walkthrough question. Structure: problem → options → decision → build → test → iterate.
Start at the beginning of the season: “We began by analyzing the game manual to define what a high-performing robot needed to do. We identified three key challenges: [X, Y, Z]. We then brainstormed and sketched three different approaches for our intake mechanism, evaluated them using a decision matrix weighted for speed and consistency, and chose [option] because it scored highest while also being within our build timeline. We built a prototype, tested it against our benchmark of [number], found it [passed/failed], and iterated by [change]. By October we had completed [n] full design cycles on the intake alone.”
“Why did you choose this drivetrain?”
Reference your decision matrix. Mention physics. Show you considered alternatives.
“We evaluated three drivetrain options in a decision matrix using criteria: speed, maneuverability, pushing power, and ease of build. We weighted maneuverability 3× because the game rewards quick positioning changes. [Chosen drive] scored highest weighted total of [X]. From a physics standpoint, our 4-inch wheels with a 36:48 gear ratio give us a theoretical top speed of [value] while maintaining enough torque to push opponents off scoring positions. We tested measured speed at [X in/sec] and confirmed it matched our calculation within 8%.”
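The theoretical top speed in that answer is a one-line calculation. A sketch with hypothetical numbers — the 200 rpm motor and the gearing direction are assumptions, so substitute your own drivetrain's values:

```python
import math

motor_rpm = 200        # hypothetical motor cartridge speed
driving_teeth = 36     # gear on the motor shaft (assumed arrangement)
driven_teeth = 48      # gear on the wheel shaft
wheel_diameter = 4.0   # inches

# With 36 teeth driving 48, the wheel turns slower than the motor,
# trading top speed for pushing torque.
wheel_rpm = motor_rpm * driving_teeth / driven_teeth
speed_in_per_s = wheel_rpm / 60 * math.pi * wheel_diameter

print(f"theoretical top speed: {speed_in_per_s:.1f} in/s")
```

Comparing this prediction against a measured field-crossing time, as the answer framework does, is exactly the kind of calculation-versus-data check judges like to see.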
“How did you develop your autonomous?”
Testing methodology + iteration + data. Mention PID tuning approach and consistency metrics.
“We use EZ Template’s PID system. We tuned kP for the drive by running 20 trials at 36 inches and measuring overshoot and settling time using data logging to an SD card — which is in our notebook here [show table]. Our benchmark was landing within 2 inches in 8 of 10 runs. We reached that after three tuning iterations. We also document our autonomous consistency rate across practice matches — currently we succeed in [X]% of qualification matches, which we track in our competition log.”
“What failed, and what did you learn?”
This is your chance to shine. Every judge wants to hear about failure and recovery — it shows genuine engineering.
“Our first intake design jammed constantly in testing — we documented [X] failures in 30 trials. We analyzed the failure mode in the notebook: the ring was catching on the bottom edge of the roller housing. We redesigned with a 15° chamfer on that edge, retested 30 trials, and got zero jams. This is on page [X] of our notebook. The lesson was to test edge cases, not just typical operation — our benchmark now always includes deliberate stress tests at the design’s limits.”
“Tell me about a math or science concept in your robot.”
Pick one concept you understand deeply. Quality over quantity. Do not list five things superficially.
“Our PID controller implements closed-loop feedback control. The core math is: output = kP×error + kI×∫error dt + kD×(d/dt)error. The P term acts like Hooke’s Law — the correction force is proportional to how far you are from target. The D term acts like a damper in a spring-mass system — it reduces oscillation. We tuned these by treating it as a physics problem: we wanted critical damping, where the robot reaches the target without overshoot. Our data logs show the damping ratio we achieved.”
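The formula in that answer translates directly into code. Below is a minimal discrete PID step with hypothetical gains; it is a sketch of the math, not EZ Template's actual implementation:

```python
# One discrete step of: output = kP*error + kI*integral(error) + kD*d(error)/dt
def pid_step(error, prev_error, integral, dt, kP=1.3, kI=0.0, kD=0.1):
    integral += error * dt                  # accumulate the I term
    derivative = (error - prev_error) / dt  # rate of change of error
    output = kP * error + kI * integral + kD * derivative
    return output, integral

# Example step: 10 in from target, error shrinking, 20 ms control loop.
out, integ = pid_step(error=10.0, prev_error=12.0, integral=0.0, dt=0.02)
```

Because the derivative term opposes fast changes in error, it acts as the damper described above: raising kD reduces oscillation around the target.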
“How is your robot this year different from last year?”
Show growth. Reference notebook entries from early in the season vs now. Mention specific metrics.
“We have measurable improvements in three areas. Autonomous consistency improved from [X]% to [Y]% — documented across [n] competition matches. Our intake cycle time decreased from [A]s to [B]s after redesigning the roller geometry, documented with before/after test tables in the notebook. And our engineering process itself improved — last season we had [n] documented test entries. This season we have [n+X], with consistent benchmark targets in every entry. We learned that documentation is not extra work — it is what allows us to improve reliably instead of guessing.”
STEM Highlight
Technology: Systems Thinking & Feedback Loops

Your entire robot is a system — interconnected subsystems that depend on each other. The EDP itself is a feedback loop: you test, analyze, and redesign based on results. Systems thinking is one of the core competencies of engineering and computer science.

  • System integration: A change to the intake affects autonomous timing, which affects scoring strategy, which affects your driver’s practice needs. Documenting these interactions shows systems-level thinking.
  • Feedback loop: The EDP is a feedback loop: design → test → result → analyze → redesign. This is the same structure as a control algorithm. Your notebook makes this loop visible.
  • Performance metrics: Choosing the right metrics to track is itself a technology skill. Cycle time, standard deviation, pass rate — each measures something different about your robot’s performance.
  • Collaboration: Software engineering uses version control; your team uses a notebook with dated entries and named contributors. The parallels to professional team software development are direct.

💬 Interview answer: “We view the robot as a system where every subsystem affects the others. Our notebook documents not just isolated tests but how changes in one area rippled through to others — for example, when we sped up the intake, we had to re-tune our autonomous timing because the cycle was now shorter than the PID had assumed.”

🔬 Check for Understanding

During your pit interview, the judge asks “what science is in your robot?” and you have 60 seconds. Which approach will score the most points?
  • Quickly list every STEM concept you can think of: “there’s physics, math, programming, electricity, mechanics...”
  • Say “robotics uses a lot of STEM” and move on to describing the robot
  • Pick one concept you understand deeply and explain it with specifics: “Our drive PID implements feedback control — the correction is proportional to error, like Hooke’s Law. We tuned it by measuring settling time across 20 trials and targeting critical damping.”
  • Show the judge your test data table and let them figure out the STEM themselves
Related Guides
  • 🏆 Judge Interview Prep
  • 🔬 Testing System
  • 📅 Season Timeline
  • 📄 Notebook Template Guide
  • 🗺 Notebook Pathway Overview