Sources & confidence: Templates are designed to align with RECF V5RC Engineering Notebook Rubric (EN1–EN10) and the Engineering Design Process (Identify, Brainstorm, Plan, Build, Test, Reflect). EN4 prohibits AI-generated content in notebooks — these templates show structure and prompts; your team writes the actual content in their own words. The templates are not graded by us; we provide a scaffold based on common rubric expectations.
// Section 01
Sensor Notebook Templates 📝
Engineering notebook entry patterns specifically for sensor work. Five entry types: build, calibration, programming, debug, and tournament logs. Each one structured to satisfy EN1–EN5 rubric criteria with sensor-specific evidence.
The standard engineering notebook template (covered in Notebook Template guide) handles general design entries well, but sensor work has rhythms that don't fit cleanly:
Calibration entries are mostly tables of measured values. Different from a build entry.
Debug entries trace from symptom to cause to fix. Different from an iteration entry.
Tournament logs for sensors capture what worked under venue lighting and what needed re-calibration. Different from match notes.
The five templates in this guide are designed to be reused across the season. Pick the right one for the work you did, fill it in, and paste in your sensor data and a photo of the mount.
The Five Entry Types
Build Entry
You mounted a sensor. What part, where, why, with what hardware, and what does it look like? — Section 02.
Calibration Entry
You measured live values from a sensor and built a lookup. Hue ranges, pot angles, GPS reference points. — Section 03.
Programming Entry
You wrote or revised code that uses a sensor. Pseudocode (NOT actual code per EN4), test results, integration with chassis library. — Section 04.
Debug Entry
A sensor wasn't working. You traced the cause. Symptom, hypotheses, tests, root cause, fix. — Section 05.
Tournament Log
What sensor performance looked like at a real event. Venue conditions, recalibrations needed, failures observed. — Section 06.
EN4 Reminder Up Front
⚠️
RECF EN4 prohibits AI-generated content in engineering notebooks. The templates below show structure and prompts, never finished content. Your team fills in their own observations, decisions, and reasoning. If a template question feels too generic, that's by design — it asks the question; you write the answer.
If it didn't work the first time, what did you change? ____________________________________
What's Next (EN6: Reflect)
The next step for this sensor is: __________________________________________________
(Likely: a calibration entry, see Section 03.)
Tips for Filling This Out Well
Be specific in "Why this location." "Mounted on the rear of the robot, 10.5″ off the ground, because the GPS Sensor needs to see the field code strip and game elements would block a forward-facing mount" is much better than "Mounted on the back."
Photos beat sketches. A clear photo with arrows pointing to relevant parts is judge-friendly. Take the photo at the time of build, not later.
The "alternatives considered" section is where EN2 (Brainstorm) is graded. Don't skip it. Even if the choice is obvious, name the alternatives.
// Section 03
Calibration Entry ⚙️
Use this every time you measure live sensor values to build a lookup table for code use. Hue ranges, pot angles, IMU drift checks, GPS reference points.
What This Entry Captures
The setup conditions (what was measured, where, lighting).
The raw measurements (table of values).
The decisions made from those measurements (constants, ranges).
This shows roughly what a filled-in version might look like. The student writes the actual content; this is illustrative only.
🔬 What Filled-In Looks Like (illustrative structure only)
The team mounted the Optical Sensor inside the intake throat to detect ring colors. They held a red ring at the sensor face three times, recording hue (~5°), saturation (~0.85), and proximity (~225), then repeated the process with blue rings (~230° hue). From those readings they built a hue range table: red rings = 350–15°, blue rings = 215–245°. A saturation gate of > 0.3 rejects grayscale readings, and a proximity gate of > 200 rejects distant objects.
Re-calibration triggers: any new venue, any time the LED PWM is changed.
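One subtlety worth double-checking in a filled-in entry like the one above: red's range (350–15°) crosses 0°, so a naive `low <= hue <= high` check silently fails. A minimal sketch of the gate logic, in Python for illustration only (the real implementation belongs in your C++ repo, and all constants here are the illustrative values from the example, not real calibration data):

```python
def in_hue_range(hue, low, high):
    """True if hue (degrees, 0-360) falls in [low, high], handling wraparound."""
    if low <= high:
        return low <= hue <= high
    # Range crosses 0 degrees (e.g. red: 350-15): match either side of the wrap.
    return hue >= low or hue <= high

def classify_ring(hue, saturation, proximity):
    """Classify one reading using the illustrative gates from the entry above."""
    if proximity <= 200:        # proximity gate: ignore distant objects
        return "none"
    if saturation <= 0.3:       # saturation gate: reject grayscale readings
        return "none"
    if in_hue_range(hue, 350, 15):
        return "red"
    if in_hue_range(hue, 215, 245):
        return "blue"
    return "unknown"
```

With the sample numbers from the entry, `classify_ring(5, 0.85, 225)` classifies as red; drop the saturation to 0.1 and the grayscale gate rejects it.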
// Section 04
Programming Entry 💻
Use this when the team adds or revises code that uses a sensor. Per EN4: pseudocode and explanation only, NEVER pasted source code (especially never AI-generated).
⚠️
EN4 critical reminder: Don't paste actual C++ code into the engineering notebook. Use pseudocode, flowcharts, or natural-language descriptions. The notebook documents the thinking, not the implementation. Source code lives in your Git repo.
What This Entry Captures
What the code is supposed to do.
Pseudocode or flowchart of the logic.
How the code integrates with the chassis library (EZ-Template / LemLib).
Step-by-step logic in plain language. Example structure:
WHILE the auton has not finished:
    READ proximity from Optical Sensor
    IF proximity > threshold:
        READ hue
        IF hue is in alliance range:
            [stop intake; trigger scoring]
        ELSE:
            [reverse intake to eject]
    DELAY 20 ms
Your pseudocode here:
[ team writes pseudocode here ]
Flowchart (Optional)
[ FLOWCHART OF SENSOR LOGIC ]
Integration with Chassis Library
If this code interacts with the chassis library (EZ-Template / LemLib), describe the composition pattern:
RECF EN4 specifically prohibits AI-generated programming code in engineering notebooks. The intent of the rule is to ensure students understand and can explain their code, not just paste it. Pseudocode forces you to articulate the logic in your own words — which is what judges want to see anyway.
The actual C++ implementation lives in your Git repository, with a commit hash referenced from the notebook entry. Judges can ask to see the repo if they want; they typically don't.
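One practical payoff of writing the pseudocode first: the branch logic reduces to a pure decision function you can unit-test off-robot in your repo. A hedged Python sketch (the function name, threshold, and range values are hypothetical, and a real version would also handle hue wraparound):

```python
def intake_action(proximity, hue, alliance_low, alliance_high, threshold=200):
    """Mirror of the pseudocode branch: decide what the intake should do
    for one sensor reading. Returns 'score', 'eject', or 'wait'."""
    if proximity <= threshold:      # nothing at the sensor face yet
        return "wait"
    if alliance_low <= hue <= alliance_high:
        return "score"              # alliance ring: stop intake, trigger scoring
    return "eject"                  # opponent ring: reverse intake to eject
```

Keeping the decision pure means the WHILE/DELAY loop on the robot just calls it every 20 ms, and the tricky logic gets tested without a field.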
// Section 05
Debug Entry 🐛
Use this when a sensor wasn't working and you traced the cause. Judges love seeing diagnostic reasoning — it's the cleanest evidence of the engineering design process.
What This Entry Captures
The symptom (what was wrong, observed behavior).
Hypotheses (what might be causing it).
Tests run for each hypothesis.
Root cause and fix.
What you'll do differently to prevent it.
Template
Sensor Debug Entry — [SHORT NAME OF ISSUE]
Date
_______________________
Author
_______________________
Sensor
_______________________ (port _____)
Severity
_______________________ (blocks competition / blocks practice / cosmetic)
Time to fix
_______________________ (minutes / hours)
Symptom
What was wrong? Be specific. "Doesn't work" is not a symptom; "Pot reads 0 constantly even when arm moves" is.
Verification that fix worked: _____________________________________________________
Prevention (EN6: Reflect)
How do we prevent this from happening again?
[ ] Updated build documentation
[ ] Added to pre-match checklist
[ ] Changed mounting
[ ] Code change (assertion / range check)
[ ] Other: ___________________________
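The "assertion / range check" option above is worth making concrete. For a "pot reads 0 constantly" style of failure, a small startup guard that flags impossible readings catches the fault at the pit table instead of mid-match. A Python sketch for illustration (the bounds are made up; your calibration entry supplies the real ones):

```python
def check_pot(raw, low=50, high=4000):
    """Flag potentiometer readings outside the physically plausible range.

    A healthy arm pot should never sit at the extreme rails: a constant 0
    usually means a wiring or deadband fault, not a real arm position.
    Returns (ok, message)."""
    if raw < low:
        return False, f"pot reading {raw} below minimum {low}: check wiring/deadband"
    if raw > high:
        return False, f"pot reading {raw} above maximum {high}: check mounting"
    return True, "ok"
```

Call it once during initialization and surface a controller rumble or screen message on failure.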
Why Debug Entries Are Notebook Gold
Most engineering work is debugging, but most teams don't notebook it. Debug entries hit several rubric criteria simultaneously:
EN1 (Identify): the symptom is the problem you identified.
EN2 (Brainstorm): the hypotheses are alternate explanations you considered.
EN5 (Test): the tests are systematic, hypothesis-driven experiments.
EN6 (Reflect): the prevention plan shows you learned from it.
Even small debugging episodes (5 minutes, simple cause) deserve an entry if they were memorable. They show judges your team thinks systematically, not by trial and error.
// Section 06
Tournament Sensor Log 🏆
Use this at every tournament to capture how sensors performed under real venue conditions. Drives off-season improvements.
What This Entry Captures
Venue conditions (lighting, field state, opponent disruption).
Per-match sensor behavior summary.
Re-calibrations done at the venue.
Failures observed and fixes applied.
Lessons for next tournament.
Template
Tournament Sensor Log — [TOURNAMENT NAME]
Date
_______________________
Venue
_______________________
Lighting
_______________________
Robot version / git commit
_______________________
Pre-Tournament Sensor Checklist (Section 06 of Sensor Roadmap)
[ ] IMU calibrates and reads stable values
[ ] All limit/bumper switches respond correctly
[ ] Potentiometer matches expected angle for known positions
[ ] Optical Sensor sees objects at expected proximity / hue
[ ] AI Vision sees test AprilTag
[ ] GPS reads valid X/Y at known field position
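The "IMU reads stable values" item in the checklist above can be made objective: collect a few heading samples while the robot sits still and check the spread. The stability test itself is pure logic, sketched here in Python (the 1° tolerance is a hypothetical choice, not an official spec):

```python
def imu_stable(headings, tol_deg=1.0):
    """True if a stationary robot's heading samples stay within tol_deg.

    Spread beyond the tolerance suggests recalibrating the IMU before
    queuing for a match."""
    return (max(headings) - min(headings)) <= tol_deg
```

On the robot, feed it a handful of readings taken over a second or two of sitting still.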
Re-Calibrations Done at Venue
Sensor       | What changed  | Old value    | New value
____________ | ____________  | ____________ | ____________
____________ | ____________  | ____________ | ____________
Per-Match Sensor Notes
Match # | Auton selected | Sensor performance | Issues observed
_____   | ______________ | __________________ | _______________
_____   | ______________ | __________________ | _______________
_____   | ______________ | __________________ | _______________
_____   | ______________ | __________________ | _______________
_____   | ______________ | __________________ | _______________
Failures & Fixes During Tournament
(For each failure: link to a Debug Entry if substantial; brief inline note if quick.)
Fill in the per-match table during the tournament, not after. Memory fades; numbers don't.
Photograph the venue lighting if it's unusual (very dim, very bright, mixed sources). Useful when comparing GPS / Optical performance across venues.
Log even tournaments where sensors worked perfectly. "Nothing went wrong" is itself a data point.
// Section 07
EN4 Compliance Tips ✅
RECF EN4 prohibits AI-generated content in engineering notebooks. Here's how to use these templates safely.
What EN4 Says
The EN4 rule (Engineering Notebook Authentic Work) requires that the content of the engineering notebook be the team's own work. Specifically prohibited: AI-generated text, AI-generated code, copy-pasted content from third-party sources without attribution.
What These Templates Are (and Aren't)
These templates ARE
Question prompts. Section headings. Field labels. Suggested table structures. The same as a printed lab notebook with section dividers.
These templates are NOT
Pre-written content. Filled-in answers. Anything the team can paste verbatim and submit as their own.
Using a template for structure is the same as using a graph paper notebook with pre-printed grid lines. The grid is provided; the data and analysis are yours.
How to Stay Compliant
Write every answer in your own words. Even if your reasoning is similar to another team's, articulate it yourself.
Don't copy-paste from this guide's prose into your notebook. The body text of these guides is for learning, not for filling in.
Photos and sketches must be your own. Don't use stock images or VEX product shots in your notebook.
Numbers and tables are yours by definition. The template provides the table structure; you provide the numbers from your robot.
If you ask AI for help understanding a concept, don't paste the AI's response. Read it, understand it, then write your own explanation in the notebook.
Pseudocode vs. Code
Pseudocode describes what the code does in natural language with light syntax. Allowed in the notebook.
Source code is the actual C++ implementation. Lives in Git, not in the notebook.
If you must reference a specific function or value, use a name like "the arm_to_angle() function" without pasting the function body.
Citing Sources
If your team referenced an external resource (a VEX KB article, the PROS docs, a competitor's reveal video), cite it. Examples:
"Per the VEX KB 'Best Practices with the GPS Sensor' article, we mounted the GPS rear-facing at ~10.5 inches."
"The PROS V5 documentation specifies that pros::adi::DigitalIn::get_value() returns 1 when released."
Citations strengthen your notebook by showing you researched. They're EN4-compliant because you're acknowledging the source.
Judge-Friendly Practices
Date every entry. Chronology is part of the notebook rubric.
Sign every entry. Each team member's entries should be identifiable.
Use a TOC. Update it as new sensor entries are added.
Cross-reference between entries. A debug entry might reference a calibration entry from two weeks earlier. Page numbers help.
Show iteration. The first calibration entry will have rough numbers. The third will be tight. Showing improvement over the season is what wins notebook awards.
Sample Sensor Entry Topics for the Season
Here are 11 entries that any V5RC team using sensors might write across a season. These aren't mandatory; they're just realistic examples:
Build entry — first IMU mount on Clawbot
Calibration entry — IMU drift baseline
Build entry — switches added to Clawbot arm
Calibration entry — arm pot ranges established
Programming entry — arm preset macros (pseudocode)
Debug entry — pot reads 0 in deadband (early season failure)
Build entry — Optical Sensor moved from Clawbot to comp robot intake
Calibration entry — alliance ring hue tables (game elements arrived)
Tournament log — first competition, sensor performance summary
Debug entry — intermittent switch reading at tournament
Programming entry — vision-corrected auton (if AI Vision used)
Each one is 1–2 notebook pages. Across the season, that's ~20 pages of high-quality, sensor-specific evidence in your notebook — substantial enough to influence judging awards.