🔬 Test · Track · Improve · Repeat

The Spartan Testing System

Building a robot is the easy part. Knowing it works — that takes a system. This page teaches you how top teams test, track data, and turn failures into wins.

1. Testing
2. Data Tracking
3. Iteration
4. Failure Protocol
5. Spartan Connect
// Section 01
How to Test a Robot Properly
Testing isn’t just “run it and see what happens.” That’s hoping. Testing is running a controlled experiment with a target, a method, and a conclusion. Purdue SigBots doesn’t just build and hope — they test everything, document everything, and improve from data.
💡
Why testing matters in one sentence: If you don’t test before competition, the competition is your test — and that’s the worst time to find problems.

Bad Testing vs Good Testing

❌ Bad Testing
Run the robot once and say “it works”
Test on a different surface than competition
No target — you don’t know what “good” looks like
Change two things at once, then guess what fixed it
Only test when something breaks
Keep results in your head, not on paper
✅ Good Testing
Run 10 trials, calculate average and consistency
Test on competition-spec tiles at competition height
Set a benchmark first: “Pass = 8 of 10 within target”
Change one variable at a time, retest, conclude
Test every session before free practice begins
Write results in your notebook immediately

The Three Testing Types

Every robot needs all three. Don’t skip one because you think it’s less important.

⚙ Mechanical Testing (Hardware)

Verifies that the physical robot is built correctly, is not about to break, and behaves the way physics says it should.

  1. Visual inspection — all screws tight, no loose cables, no bent axles. Do this before every session.
  2. Drive straight test — command 6 feet forward with no corrections. Measure sideways deviation. Pass: less than 1 inch off-center.
  3. Turning accuracy — command 90° turn 10 times, measure actual heading with a protractor or IMU. Pass: within ±2° in 8 of 10.
  4. Motor temperature — run a full 1:45 at match speed, check motor temps. Pass: all motors below 55°C at the end.
  5. Mechanism function test — run each mechanism 20 times consecutively. Pass: 0 jams or failures.
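These pass/fail criteria are easy to apply mechanically once results are logged. Below is a minimal sketch (the function name and sample data are ours, purely for illustration) that turns the turning-accuracy benchmark into an objective check:

```python
def passes_benchmark(errors, tolerance, required, trials=10):
    """Pass if at least `required` of `trials` measured errors fall within tolerance.

    `errors` are signed deviations from the commanded value, e.g. measured
    heading minus 90 for the turning-accuracy test.
    """
    if len(errors) != trials:
        raise ValueError("run the full set of trials before judging")
    within = sum(1 for e in errors if abs(e) <= tolerance)
    return within >= required

# Turning accuracy: 90° turn, 10 trials, pass = within ±2° in 8 of 10
headings = [88, 91, 89, 92, 90, 89, 93, 91, 90, 88]
errors = [h - 90 for h in headings]
print(passes_benchmark(errors, tolerance=2, required=8))  # True: 9 of 10 within ±2°
```

The same shape of check works for any "k of n within tolerance" benchmark on this page, including the autonomous consistency test.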
💡
Pro Tip — Test tables beat memory: Use the interactive tracker on the Data Tracking page to log results as you run each test. You will not remember exact numbers from Tuesday by Friday.
💻 Programming Testing (Code)

Verifies that your code does what you think it does, consistently, under real conditions.

  1. Autonomous consistency test — run your routine 10 times from the exact same starting position. Pass: 8 of 10 land within 2 inches of target.
  2. Tile surface test — run the same routine on competition-spec VEX foam tiles. Different surfaces = different results. Never count results from gym floors or carpet.
  3. Battery level test — run autonomous at 100% battery and again at 60%. Pass: results are within 1 inch of each other. If not, your code is battery-sensitive.
  4. Cold start test — power off the robot, wait 30 seconds, power on, immediately run autonomous. The IMU needs 3 seconds to calibrate. Verify your code waits for it.
  5. Driver control response test — verify all mechanisms respond within 100ms of button press, no dead zones, no latency.
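The battery-level test reduces to a distance comparison between two end positions. A hedged sketch (the coordinate convention and names are assumptions for illustration, not from any VEX library):

```python
import math

def battery_sensitive(end_full, end_low, tolerance_in=1.0):
    """Compare autonomous end positions (x, y in inches) at 100% vs 60% battery.

    Returns True if the two runs land more than `tolerance_in` apart,
    i.e. the routine's result depends on battery level.
    """
    dist = math.hypot(end_full[0] - end_low[0], end_full[1] - end_low[1])
    return dist > tolerance_in

# Example: full-battery run ends at (24.0, 36.5); 60% run ends at (24.4, 37.3)
print(battery_sensitive((24.0, 36.5), (24.4, 37.3)))  # False: within 1 inch
```

If this flags True, a common cause is open-loop, time-based movement, whose distance scales with battery voltage; encoder- or IMU-based movements are usually far less sensitive.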
🏎 Driver Testing (Performance)

Measures how the driver actually performs under conditions that simulate competition. Not just “free driving” — structured and timed.

  1. 1Cycle time test — pickup one element, score it, return to start position. Time from first contact to reset. Run 10 consecutive cycles. Average is your benchmark.
  2. 2Full match simulation — run a complete 1:45 with a teammate counting errors and missed pickups. Pass: fewer than 3 errors per match.
  3. 3Stress test — add verbal distractions, opponent presence simulation, and unusual element placement. Does consistency hold? A driver who is good only in practice is not competition-ready.
  4. 4Skills run — run a complete Driver Skills attempt and log the score. Track improvement across sessions.
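The cycle-time benchmark from the first drill is just an average over ten consecutive cycles. A short sketch with invented numbers:

```python
from statistics import mean

# Ten consecutive cycle times (seconds), logged during one drill block
cycles = [6.2, 5.8, 5.9, 6.4, 5.7, 6.0, 6.1, 5.9, 6.3, 5.8]

avg = mean(cycles)   # your benchmark number is the average, not the best
best = min(cycles)
print(f"avg {avg:.2f}s, best {best:.1f}s")  # avg 6.01s, best 5.7s
```

Log the average as the benchmark; the best cycle is interesting but, as the data-tracking section stresses, consistency wins matches.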

Example Test Tables

Copy these formats into your notebook. Fill them in during testing, not after.

Test                       | Target           | Run 1  | Run 2  | Run 3  | Avg     | Status
Drive straight (6ft)       | <1 in off        | 0.5 in | 0.8 in | 0.3 in | 0.53 in | PASS
90° turn accuracy          | ±2°              | 88°    | 91°    | 89°    | 1.3°    | PASS
Auton consistency          | 8/10 within 2in  | -      | -      | -      | 7 of 10 | CLOSE
Driver cycle time          | <6s avg          | 6.2s   | 5.8s   | 5.9s   | 5.97s   | PASS
Motor temp (end of match)  | <55°C            | 48°C   | 52°C   | 49°C   | 49.7°C  | PASS
⚙ STEM Highlight · Science: Controlled Experiments & Sample Size
Professional engineers run multiple trials because one result could be luck. The scientific method requires: a hypothesis (“the robot will land within 2 inches”), a controlled procedure (same start position, same field, same routine), and a result compared to the hypothesis. Running n=10 trials gives you statistical confidence — your conclusion is based on evidence, not luck.
🎤 Interview line: “We use controlled experiments with n=10 trials so our conclusions are based on data, not a lucky single run. We set benchmark targets before testing so we can make objective pass/fail decisions.”
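The n=10 claim can be made concrete with a quick binomial calculation (illustrative only, not part of any team's toolkit):

```python
from math import comb

def p_pass(reliability, runs, needed):
    """Probability that a routine with the given per-run reliability
    meets a 'needed of runs' benchmark purely by chance."""
    return sum(comb(runs, k) * reliability**k * (1 - reliability)**(runs - k)
               for k in range(needed, runs + 1))

# A coin-flip (50%) routine passes "3 in a row" 12.5% of the time...
print(0.5 ** 3)            # 0.125
# ...but clears "8 of 10" only about 5.5% of the time:
print(p_pass(0.5, 10, 8))  # ~0.055
# while a genuinely 90%-reliable routine still clears 8/10 about 93% of the time:
print(p_pass(0.9, 10, 8))  # ~0.930
```

In other words, "three in a row" barely filters out an unreliable routine, while an 8-of-10 benchmark rarely passes one and almost always passes a genuinely reliable one.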
Your autonomous works perfectly 3 times in a row during practice. Your teammate says “it’s ready for competition.” What’s the problem?
Nothing — 3 successful runs proves the auton works reliably
3 runs is too small a sample and there’s no benchmark target — you need at least 10 trials with a pass/fail criterion before calling it competition-ready
The test should be run on the competition field, not your practice field
Autonomous doesn’t need testing — only driver control does
// Section 02
Data & Performance Tracking
Top teams don’t guess whether they’re improving — they measure it. You can’t manage what you don’t track. This section shows you what to track, how to track it, and — most importantly — how to make decisions from data.

What to Track

🎯
Auton Success Rate
Passes / total runs. Track per routine, per week. Target: 80% or higher before competition. Below 70%? Downgrade to a safer routine.
Driver Cycle Time
Seconds from picking up one game element to being ready for the next. Track best and average. Average matters more than best — consistency wins matches.
Skills Score
Log every skills run attempt. Date, score, what went wrong if it wasn’t perfect. Skills scores build rankings — every point matters.
🏆
Match Performance
Win/loss, final score, autonomous result, any mechanism failures. One log entry per match. Patterns show what you need to fix before the next competition.
🚫
Error Count
Missed pickups, jams, collisions, wrong zone entries during practice matches. Track which error happens most — that’s your drill priority this week.
🔧
Build Changes
Every mechanism change logged with date and test result. If it broke at competition, you can trace exactly what changed. No log = no diagnosis.

If the Data Says This — Do This

IF: Auton success rate below 70%
THEN: Switch to a simpler routine immediately. Reliability beats score ceiling. Fix the failing routine on your own time, not at a competition.

IF: Driver cycle time getting slower over a session
THEN: Stop driving. Check motor temperatures. The robot is tired, not the driver. Let motors cool, do something else for 10 minutes.

IF: Drive straight test showing >1 inch deviation
THEN: Stop all programming testing — the hardware is wrong. Check wheel alignment, chain tension, and motor power balance before any code changes.

IF: Same error occurring more than 3 times per practice
THEN: Stop treating it as bad luck. That error has a root cause. Run the failure protocol before the next practice session.

IF: Cycle time is fast in practice but slow in match simulation
THEN: Your driver is not comfortable under pressure. Add more stress-test drills — distractions, opponents, verbal pressure. Comfort is earned under those conditions, not in regular practice.

IF: Auton works on your tiles but fails at a tournament
THEN: The tiles at the tournament are different. Test on different surfaces. Add an IMU-based correction loop to your autonomous code.

IF: Build log shows 3+ changes in one week with no improvement
THEN: You are changing too many variables. Pick one thing to fix, test it 10 times, then decide. The problem is your process, not the robot.

IF: Skills score plateaued for 2+ weeks
THEN: Your current practice approach has stopped working. Change the drill structure, add harder constraints, or ask for coach input.
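Read top to bottom, the table is a priority-ordered decision rule. A sketch with thresholds taken from the table and a structure of our own choosing:

```python
def practice_priority(auton_rate, straight_dev_in, top_error_count,
                      build_changes_this_week, weeks_plateaued):
    """Return the first action triggered by the data, checked in rough
    order of severity. Thresholds mirror the decision table above."""
    if straight_dev_in > 1.0:
        return "Hardware first: check wheel alignment, chain tension, motor balance"
    if auton_rate < 0.70:
        return "Switch to the simpler routine; debug the failing one later"
    if top_error_count > 3:
        return "Run the failure protocol on the most frequent error"
    if build_changes_this_week >= 3:
        return "Too many variables: pick one change, test it 10 times"
    if weeks_plateaued >= 2:
        return "Change the drill structure or ask for coach input"
    return "Continue the current plan"

print(practice_priority(0.6, 0.4, 1, 0, 0))
# "Switch to the simpler routine; debug the failing one later"
```

The ordering is the important design choice: hardware problems invalidate every downstream test, so they are checked first.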

Interactive Session Log

Log your practice data here. Everything saves to your device and persists between sessions.

⚙ STEM Highlight · Mathematics: Averages, Variance & Trend Analysis
Average tells you your typical performance. Standard deviation tells you how consistent you are — a driver averaging 5.8s with σ=0.3s is far more valuable than one averaging 5.2s with σ=1.8s. Top teams track both. Trend lines show whether you’re improving: if your average cycle time gets faster each week, your practice is working. If it’s flat, your approach needs to change.
🎤 Interview line: “We don’t just track our best performance — we track average and standard deviation. A low standard deviation means we’re consistent, which is more important than a high ceiling at competition.”
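The average-versus-σ comparison computes directly with the standard library. Both drivers' data below is invented for illustration:

```python
from statistics import mean, stdev

# Cycle times (seconds) for two hypothetical drivers over ten cycles
steady = [5.5, 5.8, 6.1, 5.7, 5.9, 5.8, 6.0, 5.6, 5.9, 5.7]
flashy = [3.9, 6.8, 4.5, 7.4, 5.0, 3.8, 6.9, 4.6, 5.2, 3.9]

print(f"steady: avg {mean(steady):.2f}s, sigma {stdev(steady):.2f}s")
print(f"flashy: avg {mean(flashy):.2f}s, sigma {stdev(flashy):.2f}s")
# flashy has the faster average and the faster best cycle, but steady's
# low sigma means you can plan a match strategy around their cycle time
```

Track both numbers per session; a falling average with a falling σ is the signature of practice that is actually working.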
Your autonomous succeeds 9 of 10 times this week but failed in your last 3 real matches. What does the data suggest?
The autonomous code has a bug that only appears at competitions
Your practice conditions don’t match competition conditions — different tiles, cold start, or battery level differences. Test under competition-matching conditions.
9 of 10 is good enough — the 3 competition failures were just bad luck
Switch to a different autonomous routine immediately
// Section 03
The Iteration Loop
Iteration is the core of engineering. It means improving something deliberately — not by random tinkering, but by following a process. The teams at the top of the leaderboard aren’t smarter than you. They just iterate faster and more systematically.
📝
The most important rule in iteration: Change one thing at a time. If you change the gear ratio AND the wheel size AND the motor port AND the code all in one session, and the robot improves, you have no idea what fixed it. Next time it breaks, you still won’t know why.

The Iteration Loop

🔧 BUILD
Construct or code the change
🔬 TEST
Run structured trials with n≥10
📊 ANALYZE
Compare results to target
💡 IMPROVE
Make one specific change
REPEAT
Start the loop again
How fast should you iterate? At least one complete loop per practice session. “We tested something, found a result, and made a decision” should happen every time you practice. If you practice 3 times a week with no documented iterations, you are not improving — you are just driving.

When NOT to Fix Something

Not every problem needs to be fixed right now. Knowing when to leave something alone is as important as knowing how to fix it.

🚫 Do NOT fix this now
Competition is tomorrow and the robot is currently working — do not touch it
The problem only happens once every 20 tries — low-frequency issues often disappear under investigation
You don’t know what’s causing it yet — diagnose first, change second
The fix requires rebuilding a major system two days before a tournament
The problem affects something that doesn’t affect scoring in this specific game
✅ Fix this immediately
The problem affects your autonomous consistency or driver scoring rate
The failure mode can end a match — intake jam, drive failure, auton bug
The same failure has happened 3+ times and you understand why
You have enough practice sessions before competition to test the fix
The root cause is confirmed — you’re not guessing

Real Examples: Iteration in Action

Robot drives and consistently veers right
Build: Robot built with four-motor drive. Session started normally.
Test: Drive straight 10 times, 6 feet. Average deviation: 1.8 inches right. Fails target of <1 inch.
Analyze: Possible causes: motor power imbalance, bent axle, wheel alignment, code sign error. Test hardware first before touching code. Command each side of the drive at the same power with the wheels off the ground — the right side spins noticeably slower. Root cause: one right motor cable was loose in its port, causing intermittent power loss.
Improve: Reseat the motor cable firmly. Retest 10 times. New average: 0.4 inches. PASS.
Notebook: Log the failure, root cause, fix, and new test result. Mark as resolved. Note to check all motor cables weekly.
🏆 Autonomous scores in practice but not in matches
Build: Autonomous routine built and tested on your practice field. 9/10 success rate.
Test: Run at tournament — fails 3 of first 4 matches. Robot ends up 4–6 inches off-target consistently, always to the same side.
Analyze: A consistent miss in the same direction is not random noise. Hypothesis: the tile surface at the tournament differs from the practice tiles. Test: ask permission to run on the tournament tiles during lunch. Result: the same drift pattern appears. Secondary hypothesis: tile joints are affecting straight-line tracking. IMU data shows 3° of heading drift before the first turn.
Improve: Reset the heading at the start of autonomous (chassis.set_heading(0);) and add an IMU-based heading correction after the first movement. Retest 5 times on tournament tiles: 4 of 5 now pass — acceptable given the time available.
Notebook: Document the tile-surface dependency. Add "test on different tiles" to the pre-competition protocol.
🏎 Driver is fast in drills but slow in full matches
Build: Driver has been practicing cycle time drills. Best cycle: 5.1s. Average cycle: 5.6s.
Test: Run a full 1:45 match simulation. Average cycle during the match: 7.2s — 1.6 seconds slower than isolated drills. Error count: 5 per match.
Analyze: Isolate what differs between a full match and a drill: (1) the driver is tracking score in their head, (2) the field is cluttered with opponent elements, (3) alliance partner movement is unpredictable. Each one adds cognitive load. The driver is not physically slower — they're mentally slower.
Improve: Iteration 1: add verbal distraction to drills (teammate counts points aloud). Cycle time drops to 6.8s. Iteration 2: add random opponent elements to the field during drills. Drops to 6.4s. Three weeks of progressive-difficulty drills later: match average is 6.1s and still improving.
Notebook: Log weekly match averages. Show the downward trend as evidence of improvement from deliberate practice.
⚙ STEM Highlight · Engineering: The Design Iteration Model
What Spartan Design calls the Iteration Loop is what NASA calls the Design-Build-Test-Evaluate cycle. SpaceX runs hundreds of iterations on Starship subsystems before a launch. Every major engineering company in the world uses a version of this loop. The reason it works is that each iteration narrows the solution space — you eliminate what doesn’t work and home in on what does. Random tinkering is infinite. Systematic iteration converges.
🎤 Interview line: “Our team follows a structured iteration process: build, test, analyze, improve, repeat. Each cycle is logged in the notebook, so you can trace the history of every decision we made. This is the same process professional engineers use.”
You change the intake roller speed AND the gear ratio in the same build session. The robot improves. What is the problem?
There is no problem — the robot improved, so both changes were good
You changed two variables at once, so you cannot know which change caused the improvement — or whether one change helped and the other actually hurt
You should have tested the gear ratio first because it’s more important
The improvement needs to be confirmed with 10 trials before it counts
// Section 04
The Failure Protocol
Every robot fails. The teams that win don’t fail less — they recover faster. That’s because they have a system for failure. When something goes wrong, they don’t panic, don’t guess, and don’t randomly rebuild. They follow four steps.
⚠️
The most dangerous response to failure: immediately start fixing it. If you don’t understand what failed and why, your fix is a guess. Guesses that accidentally work teach you nothing and leave the root cause in place — ready to surface at the worst possible time.

The Four-Step Failure Protocol

1
What failed?
Be specific. Not “it broke” — describe the exact symptom with measurable details.
Bad: “The intake stopped working.”
Good: “The intake jams on game elements tilted more than 20° from horizontal. It happens on approximately 1 in 4 pickups from the left side of the field.”
2
Why did it fail?
List every possible cause. Then rule them out one at a time with tests. Don’t start changing things until you’ve confirmed the root cause.
Possible causes: roller geometry, intake speed, element orientation, field tile position, code stall detection triggering early.
Testing: Reproduce the failure manually (no code) — if it still jams without code, the problem is mechanical. Then test each mechanical factor individually.
3
What did we learn?
Write this down even if the fix seems obvious. The learning is the most valuable part of any failure. This entry is gold for your notebook.
Learning: “The intake roller angle of 30° is too shallow for tilted elements. When elements tilt past 20°, the roller pushes them sideways instead of pulling them in. The fix is a steeper roller approach angle or a guide channel.”
4
What do we change?
Pick ONE thing to change based on the root cause. Test the change with at least 10 trials. Compare results to your pre-failure baseline.
Change: Add a 15° guide chamfer to the intake entrance.
Retest: 30 consecutive element pickups including deliberately tilted elements.
Result: 0 jams. PASS. Document in notebook with before/after data.
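Step 4's baseline comparison, written out with the jam example's numbers (the function name is ours, for illustration):

```python
def fix_validated(fail_before, trials_before, fail_after, trials_after):
    """Compare failure rates before and after the ONE change from step 4."""
    rate_before = fail_before / trials_before
    rate_after = fail_after / trials_after
    return rate_after < rate_before, rate_before, rate_after

# Baseline: jammed on roughly 1 in 4 left-side pickups.
# Retest after the guide chamfer: 0 jams in 30 pickups.
ok, before, after = fix_validated(1, 4, 0, 30)
print(ok, before, after)  # True 0.25 0.0
```

Recording both rates, not just "it's fixed now", is what lets you spot a regression later if the same jam reappears.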

Interactive Failure Log

Use this during practice when something goes wrong. Write it down now, not later.

⚙ STEM Highlight · Science: Root Cause Analysis & Fault Isolation
The failure protocol is a simplified version of Root Cause Analysis (RCA) — a formal engineering methodology used in aerospace, manufacturing, and medicine. NASA uses RCA after every mission anomaly. The key insight is separating symptoms (what you observe) from causes (what actually happened). When you rule out causes systematically with tests rather than guesses, you are performing fault isolation — the same process that aircraft maintenance engineers follow before every repair.
🎤 Interview line: “When something fails, we follow a four-step protocol: define the exact symptom, identify and test possible causes, document the learning, then make one specific change and retest. We’ve found that most failures have a single root cause — fixing it properly prevents the same failure from happening at competition.”
Your robot’s intake jammed three times during practice. Your engineer immediately starts rebuilding it. What step did they skip?
Step 4 — they forgot to plan what to change
Steps 1 and 2 — they jumped to fixing without defining the exact symptom or confirming the root cause. The rebuild might not fix the actual problem.
Step 3 — they forgot to document what they learned
Nothing — three failures is enough evidence to start rebuilding immediately
// Section 05
How This Connects to Spartan Design
Testing, data, and iteration aren’t a separate thing you do occasionally. They are the daily operating system of your team. Here’s how every piece connects to your roles, your workflow, and your competition results.

The System Works Together

🔬 Testing Connects To…
Engineer — runs mechanical and programming tests every session before free practice
Driver — structured drills with pass/fail criteria instead of free driving
Build Team — no mechanism is “done” until it passes 20 consecutive cycles
Programming Team — auton is not competition-ready until 8/10 trials pass
📊 Data Connects To…
Strategist — auton success rate directly determines which routine to run in quals vs elims
Competition — match log data identifies patterns that determine practice priority
Engineering Notebook — data tables and trend graphs are your strongest evidence for Design Award
Season planning — improvement rates determine realistic competition goals
↻ Iteration Connects To…
Engineering Design Process (EDP) — each iteration is one complete EDP cycle
Notebook rubric — “Repeat Design Process” is an expert-level criterion that requires visible iteration
Competition prep — teams with 10+ documented iterations are more reliable than teams who rebuilt once
Interview — you can point to specific iterations and explain what you learned from each one

The Weekly Practice Loop

🏆 What a Spartan Design Practice Session Looks Like
  1. Pre-check (5 min): Engineer runs drivetrain check, motor temps, IMU calibration. Not optional.
  2. Structured testing (15 min): Run this week's target tests. Log results. Compare to last week's baseline.
  3. Targeted practice (20 min): Driver and Engineer work on the specific area the data says needs improvement — not whatever seems fun.
  4. Iteration session (10 min): Make one change based on today's data. Test it. Log the result.
  5. Match simulation (5 min): Full 1:45 run to verify everything works end-to-end. Log score and errors.
  6. Reflection (5 min): Strategist fills out squad reflection: what worked, what needs fixing, goal for next session. In the notebook.

What This Gets You

A team that follows this system for 8 weeks before a competition will have: 8+ documented iteration cycles per major system, a logged auton success rate that shows improvement over time, a driver whose cycle time is tracked and trending down, and a notebook with data tables that judges immediately recognize as expert-level work. You are not just building robots — you are engineering and competing like top teams.

Related Pages

📝
Engineering Notebook
EDP, decision matrix, judging rubric
🔬
PID Diagnostics
Symptom → root cause → fix
📊
Data Logging to SD Card
Automated data collection in code
🏆
Autonomous Tournament Strategy
Expected value, reliability tiers
🎮
Driver Practice Curriculum
Structured drills and session logging
Spartan Design Hub
Today’s focus, role pages, workflow
Your team has been practicing for 3 weeks. The robot drives fine, the auton is “usually” working, and the driver is improving. You have no written data. What is the biggest risk heading into competition?
No risk — the robot is working and that’s what matters at competition
You might lose track of what code version you are running
Without data, you cannot make evidence-based decisions when something goes wrong at competition — and you have no baseline to know whether your competition performance is actually good or just feels good
Judges will penalize you for not having a notebook
Related Guides

📝 Engineering Notebook · 📊 Data Logging · 🔬 PID Diagnostics · 📝 Getting Started with the Notebook · 📄 Notebook Template Guide