Sources & confidence: Templates are designed to align with RECF V5RC Engineering Notebook Rubric (EN1–EN10) and the Engineering Design Process (Identify, Brainstorm, Plan, Build, Test, Reflect). EN4 prohibits AI-generated content in notebooks — these templates show structure and prompts; your team writes the actual content in their own words. The templates are not graded by us; we provide a scaffold based on common rubric expectations.
// Section 01

Sensor Notebook Templates 📝

Engineering notebook entry patterns specifically for sensor work. Five entry types: build, calibration, programming, debug, and tournament log. Each is structured to satisfy EN1–EN6 rubric criteria with sensor-specific evidence.

Why Sensor Work Deserves Its Own Templates

The standard engineering notebook template (covered in the Notebook Template guide) handles general design entries well, but sensor work has rhythms that don't fit it cleanly: calibration happens repeatedly, debugging is its own narrative, and tournaments stress sensors in ways the workshop never does.

The five templates in this guide are designed to be reused across the season. Pick the right one for the work you did, fill it in, and paste in your sensor data and a photo of the mount.

The Five Entry Types

Build Entry: You mounted a sensor. What part, where, why, with what hardware, and what does it look like? — Section 02.
Calibration Entry: You measured live values from a sensor and built a lookup. Hue ranges, pot angles, GPS reference points. — Section 03.
Programming Entry: You wrote or revised code that uses a sensor. Pseudocode (NOT actual code per EN4), test results, integration with the chassis library. — Section 04.
Debug Entry: A sensor wasn't working and you traced the cause. Symptom, hypotheses, tests, root cause, fix. — Section 05.
Tournament Log: What sensor performance looked like at a real event. Venue conditions, recalibrations needed, failures observed. — Section 06.

EN4 Reminder Up Front

⚠️
RECF EN4 prohibits AI-generated content in engineering notebooks. The templates below show structure and prompts, never finished content. Your team fills in their own observations, decisions, and reasoning. If a template question feels too generic, that's by design — it asks the question; you write the answer.

Related Guides

// Section 02
Sensor Build Entry 🔧
Use this when your team has just mounted a sensor for the first time, or moved an existing sensor to a new location.

What This Entry Captures

Template

Sensor Build Log — [SENSOR NAME]

Date
_______________________
Author
_______________________
Sensor type
_______________________
VEX part #
_______________________
Port
_______________________
Robot
_______________________ (Clawbot / Hero Bot / Comp Bot)

Why This Sensor (EN1: Identify)

Question: what problem does this sensor solve that we couldn't solve without it?

Your answer: ___________________________________________________________________

_______________________________________________________________________________

Alternatives Considered (EN2: Brainstorm)

List at least two other ways the team could have solved the same problem, and why you picked this sensor over them:

  • Alternative A: ___________________________ — why not chosen: ___________________
  • Alternative B: ___________________________ — why not chosen: ___________________

Mounting Location (EN3: Plan)

Where on the robot is it mounted? Why this location?

Location description: ___________________________________________________________

Reason for this location: _______________________________________________________

Photo / Sketch (paste below):

[ PHOTO OR SKETCH OF MOUNT ]

Hardware Used

Part | Part # | Quantity | Purpose
________________________________________
________________________________________
________________________________________

Build Steps (EN4: Build)

Brief sequence of how the team mounted it. Bullet points are fine. Include any improvisation:

  1. ______________________________________________
  2. ______________________________________________
  3. ______________________________________________

Initial Sanity Check (EN5: Test)

Did the sensor return live data when first powered on? What did the V5 Brain screen show?

Result: ______________________________________________________________________

If it didn't work the first time, what did you change? ____________________________________

What's Next (EN6: Reflect)

The next step for this sensor is: __________________________________________________

(Likely: a calibration entry, see Section 03.)

Tips for Filling This Out Well

// Section 03
Calibration Entry ⚙️
Use this every time you measure live sensor values to build a lookup table for code use. Hue ranges, pot angles, IMU drift checks, GPS reference points.

What This Entry Captures

Template

Sensor Calibration Log — [SENSOR NAME]

Date
_______________________
Author
_______________________
Sensor
_______________________ (port _____)
Robot
_______________________
Location
_______________________ (school workshop / venue / etc.)
Lighting
_______________________ (bright / dim / fluorescent / mixed)

Calibration Goal

What are we trying to learn?

_______________________________________________________________________________

_______________________________________________________________________________

Measurement Procedure

How were values collected? (e.g., "Held each ring 5cm from sensor face, recorded value from V5 Brain screen, repeated 3 times per ring.")

_______________________________________________________________________________

Raw Measurements (Table)

Trial / Position / Object | Value 1 | Value 2 | Value 3 | Average
_______________________________________________
_______________________________________________
_______________________________________________
_______________________________________________
_______________________________________________

Constants & Ranges Derived

From the measurements above, the team will use these constants in code:

Constant Name | Value | Used In
_______________________________________________
_______________________________________________
_______________________________________________

Reliability Check

How consistent were the measurements? Did any outlier values show up?

_______________________________________________________________________________

Re-Calibration Triggers

When will we need to re-do this calibration?

  • [ ] Different venue / lighting
  • [ ] After mechanical changes to the mount
  • [ ] After cable replacement
  • [ ] Before every tournament
  • [ ] Other: ___________________________

What We Learned (EN6: Reflect)

Anything surprising? Anything we'd do differently next time?

_______________________________________________________________________________

Example: Optical Sensor Calibration

This shows roughly what a filled-in version might look like. The student writes the actual content; this is illustrative only.

🔬 What Filled-In Looks Like (illustrative structure only)

The team mounted the Optical Sensor inside the intake throat to detect ring colors. They held a red ring at the sensor face three times, recording hue (~5°), saturation (~0.85), and proximity (~225), then repeated the process for blue rings (~230° hue). From those readings they built a hue range table: red rings = 350–15° (wrapping through 0°), blue rings = 215–245°. A saturation gate of > 0.3 rejects grayscale readings, and a proximity gate of > 200 rejects distant objects.

Re-calibration triggers: any new venue, any time the LED PWM is changed.

// Section 04
Programming Entry 💻
Use this when the team adds or revises code that uses a sensor. Per EN4: pseudocode and explanation only, NEVER pasted source code (especially never AI-generated).
⚠️
EN4 critical reminder: Don't paste actual C++ code into the engineering notebook. Use pseudocode, flowcharts, or natural-language descriptions. The notebook documents the thinking, not the implementation. Source code lives in your Git repo.

What This Entry Captures

Template

Sensor Programming Entry — [FUNCTION / FEATURE NAME]

Date
_______________________
Author
_______________________
Reviewed by
_______________________
Sensor(s) used
_______________________
Library context
_______________________ (standalone / EZ-Template / LemLib)
Git commit
_______________________

Goal (EN1: Identify)

What is this code supposed to do? Describe the outcome, not the implementation.

_______________________________________________________________________________

_______________________________________________________________________________

Approach (EN3: Plan)

How does the code achieve the goal? (Natural language description, no source code.)

_______________________________________________________________________________

_______________________________________________________________________________

Pseudocode

Step-by-step logic in plain language. Example structure:

WHILE the auton has not finished:
   READ proximity from Optical Sensor
   IF proximity > threshold:
      READ hue
      IF hue is in alliance range:
         [stop intake; trigger scoring]
      ELSE:
         [reverse intake to eject]
   DELAY 20 ms

Your pseudocode here:

[ team writes pseudocode here ]

Flowchart (Optional)

[ FLOWCHART OF SENSOR LOGIC ]

Integration with Chassis Library

If this code interacts with the chassis library (EZ-Template / LemLib), describe the composition pattern:

  • [ ] Sensor-triggered motion exit (sensor cancels chassis motion early)
  • [ ] Pre-move sensor check (sensor decides whether to issue chassis motion)
  • [ ] Background sensor task (independent task; chassis runs separately)
  • [ ] None (this code doesn't touch chassis)

Brief description: _____________________________________________________________

Test Plan (EN5: Test)

What did you test, and how?

Test Case | Expected Result | Actual Result | Pass / Fail
_________________________________________________
_________________________________________________
_________________________________________________

Reflection (EN6: Reflect)

What worked, what didn't, what's next?

_______________________________________________________________________________

_______________________________________________________________________________

Why Pseudocode Instead of Code

RECF EN4 specifically prohibits AI-generated programming code in engineering notebooks. The intent of the rule is to ensure students understand and can explain their code, not just paste it. Pseudocode forces you to articulate the logic in your own words — which is what judges want to see anyway.

The actual C++ implementation lives in your Git repository, with a commit hash referenced from the notebook entry. Judges can ask to see the repo if they want; they typically don't.

// Section 05
Debug Entry 🐛
Use this when a sensor wasn't working and you traced the cause. Judges love seeing diagnostic reasoning — it's the cleanest evidence of the engineering design process.

What This Entry Captures

Template

Sensor Debug Entry — [SHORT NAME OF ISSUE]

Date
_______________________
Author
_______________________
Sensor
_______________________ (port _____)
Severity
_______________________ (blocks competition / blocks practice / cosmetic)
Time to fix
_______________________ (minutes / hours)

Symptom

What was wrong? Be specific. "Doesn't work" is not a symptom; "Pot reads 0 constantly even when arm moves" is.

_______________________________________________________________________________

_______________________________________________________________________________

When First Observed

Was this a new failure, or had we seen it before? What changed since it last worked?

_______________________________________________________________________________

Hypotheses

What could cause this symptom? List at least three:

# | Hypothesis | How to test
1 | ____________________________________________
2 | ____________________________________________
3 | ____________________________________________

Tests Run

Hypothesis | Test | Result | Conclusion
__________________________________
__________________________________
__________________________________

Root Cause

What was actually causing the symptom?

_______________________________________________________________________________

Fix Applied

What did you change to fix it?

_______________________________________________________________________________

Verification that fix worked: _____________________________________________________

Prevention (EN6: Reflect)

How do we prevent this from happening again?

  • [ ] Updated build documentation
  • [ ] Added to pre-match checklist
  • [ ] Changed mounting
  • [ ] Code change (assertion / range check)
  • [ ] Other: ___________________________

Why Debug Entries Are Notebook Gold

Most engineering work is debugging, but most teams don't notebook it. Debug entries hit several rubric criteria simultaneously:

Even small debugging episodes (5 minutes, simple cause) deserve an entry if they were memorable. They show judges your team thinks systematically, not by trial and error.

// Section 06
Tournament Sensor Log 🏆
Use this at every tournament to capture how sensors performed under real venue conditions. Drives off-season improvements.

What This Entry Captures

Template

Tournament Sensor Log — [TOURNAMENT NAME]

Date
_______________________
Venue
_______________________
Lighting
_______________________
Robot version / git commit
_______________________

Pre-Tournament Sensor Checklist (Section 06 of Sensor Roadmap)

  • [ ] IMU calibrates and reads stable values
  • [ ] All limit/bumper switches respond correctly
  • [ ] Potentiometer matches expected angle for known positions
  • [ ] Optical Sensor sees objects at expected proximity / hue
  • [ ] AI Vision sees test AprilTag
  • [ ] GPS reads valid X/Y at known field position

Re-Calibrations Done at Venue

Sensor | What changed | Old value | New value
________________________________________
________________________________________

Per-Match Sensor Notes

Match # | Auton selected | Sensor performance | Issues observed
__________________________________
__________________________________
__________________________________
__________________________________
__________________________________

Failures & Fixes During Tournament

(For each failure: link to a Debug Entry if substantial; brief inline note if quick.)

_______________________________________________________________________________

_______________________________________________________________________________

Lessons for Next Tournament (EN6: Reflect)

What will we do differently? What new failure modes did we discover?

_______________________________________________________________________________

_______________________________________________________________________________

Tournament Log Tips

// Section 07
EN4 Compliance Tips ✅
RECF EN4 prohibits AI-generated content in engineering notebooks. Here's how to use these templates safely.

What EN4 Says

The EN4 rule (Engineering Notebook Authentic Work) requires that the content of the engineering notebook be the team's own work. Specifically prohibited: AI-generated text, AI-generated code, copy-pasted content from third-party sources without attribution.

What These Templates Are (and Aren't)

These templates ARE: Question prompts. Section headings. Field labels. Suggested table structures. The same as a printed lab notebook with section dividers.
These templates are NOT: Pre-written content. Filled-in answers. Anything the team can paste verbatim and submit as their own.

Using a template for structure is the same as using a graph paper notebook with pre-printed grid lines. The grid is provided; the data and analysis are yours.

How to Stay Compliant

Pseudocode vs. Code

Citing Sources

If your team referenced an external resource (a VEX KB article, the PROS docs, a competitor's reveal video), cite it in the entry where you used it.

Citations strengthen your notebook by showing you researched. They're EN4-compliant because you're acknowledging the source.

Judge-Friendly Practices

Sample Sensor Entry Topics for the Season

Here are 12 entries that any V5RC team using sensors might write across a season. These aren't mandatory; they're just realistic examples:

  1. Build entry — first IMU mount on Clawbot
  2. Calibration entry — IMU drift baseline
  3. Build entry — switches added to Clawbot arm
  4. Calibration entry — arm pot ranges established
  5. Programming entry — arm preset macros (pseudocode)
  6. Debug entry — pot reads 0 in deadband (early season failure)
  7. Build entry — Optical Sensor moved from Clawbot to comp robot intake
  8. Calibration entry — alliance ring hue tables (game elements arrived)
  9. Tournament log — first competition, sensor performance summary
  10. Debug entry — intermittent switch reading at tournament
  11. Programming entry — vision-corrected auton (if AI Vision used)
  12. Reflection entry — end-of-season sensor stack review

Each one is 1–2 notebook pages. Across the season, that's ~20 pages of high-quality, sensor-specific evidence in your notebook — substantial enough to influence judging awards.

Related Guides
