A 100% reliable 10-point autonomous beats an 80% reliable 18-point one in a tournament. Understanding how autonomous scoring works strategically — not just technically — is the difference between winning and losing close matches.
// Section 01
The Autonomous Scoring Tiers
Every season has a specific autonomous period scoring structure. This section uses general principles — always verify current season point values in the game manual.
💡
The autonomous period is disproportionately impactful. In most seasons, the autonomous win bonus shifts 4–12 points toward the winning alliance before driver control begins. In close matches — which are the only ones that matter in elimination rounds — autonomous often decides the outcome.
Build Your Autonomous Portfolio in Tiers
TIER 1 — Baseline
Minimum viable: AWP touch line + 1 game element
This autonomous must be 100% reliable. You run this if the field conditions are unusual, if your alliance partner has a conflicting routine, or if anything feels off during setup. Never fail the baseline.
TIER 2 — Standard
Target 90%+ reliability: full AWP + maximum scoring for your side
This is your typical match autonomous. It completes the AWP condition and scores as many points as your robot can reliably execute on your side of the field. This is what you run in most qualification matches.
TIER 3 — High Risk/Reward
Only in elimination matches where you need to win the autonomous period
Maximum scoring attempt. Cross to partner's side, attempt a difficult element, or run a coordinated two-robot autonomous. Use only when: (a) you have practiced it extensively, (b) it is coordinated with your alliance partner, (c) the match situation requires it.
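The three-tier portfolio can be written down as plain data so expected points per tier are easy to compare. This is an illustrative sketch: the point values and reliabilities below are placeholders, not real season numbers.

```python
# Illustrative tier portfolio. Point values and reliabilities are
# placeholders — substitute your own measured numbers.
TIERS = {
    "baseline": {"points": 10, "reliability": 1.00},   # Tier 1: never fails
    "standard": {"points": 14, "reliability": 0.92},   # Tier 2: typical match
    "high_risk": {"points": 20, "reliability": 0.75},  # Tier 3: eliminations only
}

def expected_points(tier: str) -> float:
    """Expected autonomous points for a tier (a failed run scores 0)."""
    t = TIERS[tier]
    return t["points"] * t["reliability"]

for name in TIERS:
    print(f"{name}: {expected_points(name):.1f} expected points")
```

Keeping the portfolio as data makes it trivial to re-rank the tiers every time a practice session updates a reliability estimate.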
// Section 02
Coordinating With Your Alliance Partner
Two robots running independent autonomous routines that interact with the same game elements can hurt each other. Coordination before the match prevents this.
Pre-Match Autonomous Coordination
Before every match — especially elimination matches — find your alliance partner and agree on:
Who goes where — which side of the field each robot handles. Never assume you know without confirming.
Who does the AWP task — if both robots attempt the same AWP condition, one may interfere with the other. Agree on who owns it.
Crossing risk — does either autonomous cross the center line? If one robot crosses and the other is coming the opposite direction, a collision can disqualify both teams' autonomous.
Starting positions — confirm both robots can fit in their starting tiles without violating size limits or touching each other.
Fallback — if either robot fails autonomous, what does the other expect to see during driver control? Brief your partner so they are not surprised.
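The five agreement points above can be tracked as a literal checklist that is only "ready" when every item has been confirmed out loud. A minimal sketch; the field names are my own shorthand for the list items, not a standard tool.

```python
from dataclasses import dataclass, fields

@dataclass
class AutonCoordination:
    """Pre-match agreement with the alliance partner.

    Field names are illustrative shorthand for the checklist items.
    """
    sides_agreed: bool = False         # who goes where
    awp_owner_agreed: bool = False     # exactly one robot owns the AWP task
    crossing_checked: bool = False     # center-line crossing risk discussed
    start_tiles_confirmed: bool = False
    fallback_briefed: bool = False

    def ready(self) -> bool:
        """True only when every item has been explicitly confirmed."""
        return all(getattr(self, f.name) for f in fields(self))

plan = AutonCoordination(sides_agreed=True, awp_owner_agreed=True)
print(plan.ready())  # False — three items still unconfirmed
```

Defaulting every field to False forces the conversation: nothing counts as agreed until someone actually said it.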
⚠️
A coordination failure is not the other team's fault. If you assume and they assume, nothing is actually communicated. Walk to their pit before the match and spend 90 seconds coordinating; it is the highest-ROI time you will spend on match preparation.
// Section 03
Reliability vs Point Ceiling
The most common strategic mistake in VRC: optimizing for maximum possible score at the expense of reliability. In a 12-match qualification tournament, each failed autonomous costs more than you think.
💡
Expected value math: a 90% reliable 12-point autonomous gives you 10.8 expected points per match. An 80% reliable 18-point autonomous gives you 14.4 expected points — BUT that 20% fail rate means roughly 2.4 of 12 qualification matches where you score 0. Those zero-score matches usually cost you a seeding position, so the 18-point routine is only worth the risk if you can push its reliability above 90%.
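The arithmetic in the callout is easy to check directly. The short sketch below reproduces those numbers for a failure-scores-zero model:

```python
def expected_value(points: float, reliability: float) -> float:
    """EV of an autonomous routine; a failed run is modeled as scoring 0."""
    return points * reliability

safe = expected_value(12, 0.90)        # 10.8 expected points per match
aggressive = expected_value(18, 0.80)  # 14.4 expected points per match

matches = 12
zero_matches = matches * (1 - 0.80)    # ~2.4 matches expected to score 0

print(safe, aggressive, zero_matches)
```

The model deliberately treats a failure as 0 points; if your failure mode still scores something (for example, the baseline touch), plug that in as a second term instead of 0.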
How to Measure Reliability
Run a minimum of 10 consecutive attempts on a field that matches competition conditions (same field tiles, same starting-position method)
Log every attempt — success, fail, or partial — and what caused each failure
Do not count attempts on your practice surface if it differs from the competition field tile material
Test on a cold robot (not just after warmup) — at competition, the first run of the day is often on a robot straight from transport
Test with both alliance colors if your autonomous uses sensors that could detect field color
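A minimal attempt log for the measurement procedure above. The outcome labels, failure causes, and sample data are illustrative, not an official format; the point is that a flat list of tagged attempts gives you both a reliability number and a ranked list of failure causes.

```python
from collections import Counter

# Each attempt: (outcome, failure_cause); cause is None on success.
# Sample data below is made up for illustration.
attempts = [
    ("success", None), ("success", None), ("fail", "starting position"),
    ("success", None), ("partial", "jam on intake"), ("success", None),
    ("success", None), ("success", None), ("success", None), ("success", None),
]

# Count only full successes toward reliability — a partial is not a success.
successes = sum(1 for outcome, _ in attempts if outcome == "success")
rate = successes / len(attempts)
causes = Counter(cause for _, cause in attempts if cause is not None)

print(f"reliability: {rate:.0%}")
print(causes.most_common())
```

Ten attempts is a floor, not a target: at 10 runs, one fluke moves your estimate by 10 percentage points, so keep logging across practice sessions.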
When to Run Each Tier
| Situation | Run |
| --- | --- |
| Seeded well, need to protect ranking | Tier 1 or Tier 2 |
| Standard qualification match | Tier 2 |
| Elimination match, tied series | Tier 3 (if practiced, >90% reliable) |
| Partner has weak autonomous, you carry | Tier 2 — your reliability matters more |
| Field looks unusual (debris, sloped tile) | Tier 1 — never risk reliability on unusual conditions |
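The decision table maps directly to a small selection function. A sketch only: the situation strings are my shorthand for the table rows, and the 90% threshold comes from the Tier 3 row.

```python
def choose_tier(situation: str, tier3_reliability: float = 0.0) -> str:
    """Pick an autonomous tier from the match situation.

    Situation strings are shorthand for the decision-table rows.
    """
    if situation == "unusual_field":
        return "Tier 1"            # never risk reliability on unusual conditions
    if situation == "elim_tied_series":
        # Tier 3 only if practiced above 90% measured reliability
        return "Tier 3" if tier3_reliability > 0.90 else "Tier 2"
    if situation == "protect_ranking":
        return "Tier 1 or Tier 2"
    return "Tier 2"                # standard qualification default

print(choose_tier("elim_tied_series", tier3_reliability=0.95))  # Tier 3
print(choose_tier("elim_tied_series", tier3_reliability=0.80))  # Tier 2
```

Encoding the table this way keeps the pre-match decision mechanical: the only judgment call left is whether your measured Tier 3 reliability really clears the threshold.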
// Section 04
Match-Day Autonomous Decisions
The decisions you make in the 2 minutes before each match matter as much as the code you wrote during the season.
The Pre-Match Autonomous Checklist
Confirm field starting position — place robot in exact starting tile orientation. Use a consistent physical reference (robot back corner against tile edge, not eyeballed).
Select correct autonomous on Brain screen — say the name aloud to your partner as you select it. They confirm.
IMU calibration — verify the IMU reads as calibrated on the Brain screen before the match starts. Keep the robot still while it calibrates; placing it on the field only after calibration completes avoids recalibration issues.
Ask your partner their starting position — confirm neither robot is in the other's path.
Stand back and observe — do not touch the robot after placement. Wait for the field control signal.
🏆
The 90-second warmup walk: when you arrive at the field queue, use the wait time to mentally run through the autonomous sequence. Visualize the robot’s path. Note any field conditions that differ from practice. You can ask the referee to see the field briefly before lining up — use that time.
After a Failed Autonomous
Do not chase the robot immediately — wait for the field to clear and driver control to begin normally
Note what failed — not for debugging now, but for the debrief after the match. Keep it in your head.
Drive control adjustments — if autonomous left your robot in a bad position, your first driver action should be repositioning, not scoring. A bad start from autonomous recovers with a calm driver, not a panicked one.
Post-match: debrief before fixing — understand what failed before changing code. Confirm it was code (not starting position error, not field condition) before deploying a fix that might not address the real cause.
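The "confirm it was code before deploying a fix" step can be made concrete with a tiny classifier over logged failure causes. A hypothetical sketch: the cause labels and the setup-cause set are my own, not a standard taxonomy.

```python
# Hypothetical cause categories: these failures are setup or field problems,
# so they do not justify changing the autonomous code.
SETUP_CAUSES = {"starting position", "field condition", "calibration skipped"}

def needs_code_fix(causes: list[str]) -> bool:
    """True only if at least one logged failure cause is not a setup issue."""
    return any(c not in SETUP_CAUSES for c in causes)

print(needs_code_fix(["starting position"]))                    # False
print(needs_code_fix(["turn overshoot", "starting position"]))  # True
```

The discipline the function encodes: if every logged cause is in the setup bucket, fix the setup routine and leave the code alone.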
⚙ STEM Highlight
Mathematics: Expected Value & Probability
Choosing between a safe and an aggressive autonomous is a probability problem. Expected value = (probability of success × points if success) + (probability of failure × points if failure). A 90% reliable 10-point auton has EV = 0.9×10 + 0.1×0 = 9.0 pts. An 80% reliable 15-point auton has EV = 0.8×15 + 0.2×0 = 12.0 pts. Across 12 matches, that 3-point EV gap compounds to 36 points of expected score — often the difference between 1st and 4th seed.
🎤 Interview line: “We use expected value math to choose our autonomous routine. We track our success rate over practice matches, multiply by the point value, and compare options. A higher score on paper isn’t worth it if the reliability drops enough to lower our expected value.”
🔬 Check for Understanding
Routine A scores 12 pts at 95% reliability. Routine B scores 18 pts at 65% reliability. Which has the higher expected value?
Routine B — 18 points is more than 12
Routine A — EV = 11.4 pts vs Routine B EV = 11.7 pts, they are nearly equal
Routine B — EV = 11.4 pts for A vs 11.7 pts for B, so B is slightly higher, but A is safer
Cannot determine without knowing the opponent’s auton