📍 Robot Positioning

Odometry —
Know Where You Are

Stop guessing where your robot ended up. Odometry tracks your exact position on the field so your autonomous runs the same way every single time.

🟢 Sections 1–2: Everyone 🔵 Section 3: EZ Template users 🟡 Section 4: Coordinate paths 🔴 Section 5–6: Advanced tuning
1 · Concept  ·  2 · Hardware  ·  3 · EZ Template  ·  4 · Coordinates  ·  5 · Dist. Sensor  ·  6 · Tuning
// Section 01 · Everyone
What is Odometry? 📍
Understanding why your robot doesn't know where it is — and how to fix that.
🚶
Imagine walking through a dark room counting your steps
You start at the door. You take 10 steps forward, turn right, take 5 steps. You now know you're probably near the window — even though you can't see anything. That's odometry: tracking where you are by measuring your own movement instead of looking at the world around you.

Why Can't You Just Use the Drive Motors?

Your V5 drive motors have built-in encoders — they count how many times they spin. So why isn't that enough?

💧
Wheel Slip
When a drive wheel spins on a smooth floor or during a hard turn, the encoder counts rotation that didn't actually move the robot. Now your position estimate is wrong.
⚖️
Unequal Load
Heavy mechanisms or a one-sided weight shifts how each side grips the floor. One side pushes harder and the robot drifts, but both encoders look "normal."
🔄
Accumulated Error
Small errors compound. Drive 5 feet with 1% slip and you're off by 0.6 inches. Drive a full autonomous routine and you could be off by several inches or degrees.
📐
No Sideways Data
Tank drive wheels only measure forward/backward. If the robot gets bumped sideways by another robot, the drive motors have no idea — but your path is now wrong.
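To see how fast these errors grow, here is a minimal sketch of the two dominant effects — constant wheel slip and an uncorrected heading error. The helper names are hypothetical, but the math is just proportion and trigonometry:

```cpp
#include <cassert>
#include <cmath>

// Forward-distance error from constant wheel slip:
// a slip fraction s over distance d loses s * d inches.
double slip_error_in(double distance_in, double slip_fraction) {
    return distance_in * slip_fraction;
}

// Lateral (sideways) error caused by an uncorrected heading error:
// driving distance d while pointed `deg` degrees off course puts the
// robot roughly d * sin(deg) inches off the intended line.
double lateral_error_in(double distance_in, double heading_error_deg) {
    constexpr double kPi = 3.14159265358979323846;
    return distance_in * std::sin(heading_error_deg * kPi / 180.0);
}
```

Plugging in the numbers above: 1% slip over 5 feet (60 inches) is 0.6 inches, and a mere 2° heading error over a 96-inch drive lands the robot more than 3 inches sideways of its target.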

The Solution: Dedicated Tracking Wheels

🛞
Tracking wheels are like a mouse on a mousepad
A computer mouse doesn't care what surface it's on — it directly measures movement against the surface below it. Tracking wheels work the same way: small wheels that press against the floor and spin freely, connected to precision sensors. They only measure real ground movement — no motor torque, no chain, no gear flex — just pure position data.

What Odometry Gives You

📍
X Position
How far left or right the robot is from its starting point, in inches.
📏
Y Position
How far forward or backward the robot is from its starting point, in inches.
🧭
Heading (θ)
Which direction the robot is facing, in degrees. Usually provided by the IMU sensor.
🗺️
Drive to Coordinates
Instead of "drive 24 inches," you say "go to position (24, 48)" — and the robot figures out how to get there from wherever it is now.
🏆
This is the single biggest upgrade you can make to your autonomous. With odometry, a robot that gets bumped during autonomous can still find its way to the right position. Without it, one bad interaction ruins the whole run.
// Section 02 · Everyone
Tracking Wheel Hardware 🛞
What to buy, how to mount it, and how to wire it — rotation sensors, optical encoders, and distance sensors.

The Two-Wheel vs Three-Wheel Setup

How many tracking wheels you need depends on what you want to measure:

[Diagram: top-down view of the robot — LEFT PARALLEL and RIGHT PARALLEL tracking wheels, optional CENTER PERPENDICULAR wheel, IMU, and the X/Y axes relative to the robot front]
■ Parallel wheels — measure forward/back (Y axis)  ·  ■ Perpendicular wheel — measures sideways drift (X axis)  ·  ■ IMU — measures heading
💡
Minimum viable setup: One parallel tracking wheel + IMU gives you reliable Y position and heading — enough for most competition autonomous routines. Add the perpendicular wheel if your robot gets bumped sideways or uses strafing motion.

Sensor Options — Which Should You Use?

Sensor | Connection | Accuracy | Best For
VEX V5 Rotation Sensor | Smart Port (smart cable) | ★★★★★ Excellent | Primary choice — high resolution, easy to use, works great with EZ Template
Optical Shaft Encoder (3-wire) | ADI Ports (A–H, two per encoder) | ★★★★ Good | Budget option — slightly lower resolution, still reliable for competition
IMU (Inertial Sensor) | Smart Port | ★★★★★ Excellent | Heading only — pairs with any tracking wheel setup. Almost required.
Distance Sensor | Smart Port | ★★★★ Good | Position correction against field walls — used alongside tracking wheels, not instead of them

Tracking Wheel Physical Design

🛞 Key Design Rules
  • Spring-loaded mount — the wheel must press against the floor with consistent force. Use a swing arm with a rubber band or spring so it stays in contact even when the robot flexes over a game element.
  • Omni wheel recommended — use a 2.75" or small omni wheel. Regular wheels resist sideways motion and can drag. Omni wheels spin freely in all directions.
  • Centered on the robot's rotation point — parallel wheels should ideally be equidistant from the robot's center of rotation for the most accurate heading calculations.
  • As close to the ground as possible — a wheel mounted high on an arm will swing in an arc during turns and give inaccurate readings. Mount tracking wheels low and close to the axle line.
  • No slop in the bearing — any looseness in the wheel mount adds noise to your data. Use proper VEX bearings and check for wiggle.

Wiring: Rotation Sensor vs Optical Encoder

🔌 VEX V5 Rotation Sensor (Recommended)

Plugs into any Smart Port (1–21) using a standard V5 smart cable. Gives you high-resolution angle data (0.088° resolution).

// Declare in robot-config.cpp
pros::Rotation tracking_left(11);   // Smart Port 11
pros::Rotation tracking_right(12);  // Smart Port 12
pros::Rotation tracking_back(13);   // Smart Port 13 (perpendicular)

// Check position in code:
double pos = tracking_left.get_position(); // in centidegrees (×100)
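Because the rotation sensor reports centidegrees, converting a reading to inches of wheel travel is a two-step proportion: centidegrees → revolutions → circumference. A minimal sketch (the helper name is hypothetical):

```cpp
#include <cassert>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Convert a rotation-sensor reading (centidegrees) to inches of travel
// for a tracking wheel of the given diameter.
double centideg_to_inches(double centidegrees, double wheel_diameter_in) {
    double revolutions = centidegrees / 36000.0;   // 360° × 100 per revolution
    return revolutions * kPi * wheel_diameter_in;  // one circumference per rev
}
```

One full revolution of a 2.75" wheel (a reading of 36000 centidegrees) works out to π × 2.75 ≈ 8.64 inches of ground travel.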
🔌 Optical Shaft Encoder (3-Wire / ADI)

Quadrature sensor that plugs into two adjacent ADI ports (A–H) using two 3-wire cables per encoder. Lower resolution than the rotation sensor but still reliable.

// Optical encoder uses TWO adjacent ADI ports
pros::ADIEncoder enc_left('A', 'B', false); // ports A + B
pros::ADIEncoder enc_right('C', 'D', true); // ports C + D, reversed

// Get encoder count (ticks):
int ticks = enc_left.get_value();
⚠️
The V5 Brain only has 8 ADI ports (A–H). Two encoders use 4 ports. Plan your port usage before you start wiring.
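The optical shaft encoder reports raw ticks instead of centidegrees — it produces 360 ticks per revolution. The conversion to inches is the same proportion as before, just with a different denominator (helper name is hypothetical):

```cpp
#include <cassert>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Convert optical-shaft-encoder ticks to inches of wheel travel.
double ticks_to_inches(int ticks, double wheel_diameter_in) {
    const double kTicksPerRev = 360.0;  // VEX optical shaft encoder resolution
    return (ticks / kTicksPerRev) * kPi * wheel_diameter_in;
}
```

Note the coarser granularity: one tick on a 2.75" wheel is about 0.024 inches of travel, versus ~0.000021 inches per centidegree for the rotation sensor — which is why the rotation sensor earns the extra star.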
// Section 03 · EZ Template Users
EZ Template Odometry Setup 🚗
The simplest path to robot positioning — EZ Template handles all the math for you.
💡
This is the recommended starting point for most V5RC teams. EZ Template's built-in odometry is well-tested, easy to configure, and integrates directly with the PID movements you already know.

How EZ Template Odom Differs from Plain PID

Feature | PID Only (no odom) | With Odometry
Movements | Drive X inches, turn Y degrees | Go to coordinate (X, Y)
After a bump | Robot is lost — path continues wrong | Robot knows it moved, adjusts next move
Path planning | Sequential — each move is independent | Waypoint-based — robot finds its own path
Setup complexity | Simple | Moderate — needs tracking wheels mounted

Step 1 — Update Your EZ Template Version

Odometry was added in EZ Template 3.x. Make sure you're on the latest version:

// In VS Code terminal:
pros c fetch EZ-Template@3.2.0   // replace with the latest version number
pros c apply EZ-Template         // apply to the current project
Check the latest version at github.com/EZ-Robotics/EZ-Template/releases — always use the newest stable release.

Step 2 — Declare Tracking Wheels in robot-config.cpp

Tell EZ Template which sensors are your tracking wheels. This replaces or supplements the drive motor encoders:

📄 src/robot-config.cpp — with Rotation Sensors
// ─── TRACKING WHEELS (Rotation Sensors) ─────────────
// One parallel tracking wheel on the left side
ez::tracking_wheel left_tracker(
  &chassis,               // which chassis this belongs to
  new pros::Rotation(11), // rotation sensor on smart port 11
  2.75,                   // wheel diameter in inches
  -1.0                    // distance from robot center (negative = left side)
);

// Optional: perpendicular (back/center) tracking wheel
ez::tracking_wheel back_tracker(
  &chassis,
  new pros::Rotation(12),
  2.75,
  0.0                     // 0 = centered on robot
);
📄 src/robot-config.cpp — with Optical Encoders (3-wire)
// ─── TRACKING WHEELS (Optical Shaft Encoders) ────────
ez::tracking_wheel left_tracker(
  &chassis,
  new pros::ADIEncoder('A', 'B', false),
  2.75,  // wheel diameter
  -1.0   // offset from center
);

Step 3 — Enable Odom in the Chassis Constructor

📄 src/robot-config.cpp — updated chassis declaration
ez::Drive chassis(
  {-1, -2, -3},  // left motors
  {4, 5, 6},     // right motors
  10,            // IMU port
  3.25, 1.0,     // wheel diameter, gear ratio
  left_tracker,  // ← add tracking wheel here
  back_tracker   // ← and perpendicular wheel (optional)
);

Step 4 — Calibrate in initialize()

📄 src/main.cpp — initialize()
void initialize() {
  ez::ez_template_print();
  pros::delay(500);

  chassis.odom_enable(true);        // turn on odometry
  chassis.imu_calibrate();          // calibrate IMU (takes ~3 seconds)
  chassis.odom_pose_set({0, 0, 0}); // starting position: x=0, y=0, heading=0°

  default_constants();
  ez::as::initialize();
}

Step 5 — Use Coordinate Movements in Autonomous

Now instead of only using distance-based movements, you can drive directly to field coordinates:

void my_odom_auton() {
  // Set starting position (usually done in initialize)
  chassis.odom_pose_set({0, 0, 0});

  // Drive to coordinate (x=0, y=24) — straight forward 24 inches
  chassis.pid_odom_set({{0, 24, 0}}, 110);
  chassis.pid_wait();

  // Drive to coordinate (x=24, y=24) — right 24 inches
  chassis.pid_odom_set({{24, 24, 90}}, 110);
  chassis.pid_wait();

  // Chain multiple waypoints in one call — smoother path!
  chassis.pid_odom_set({
    {{0, 48, 0}, 110},
    {{24, 48, 90}, 90},
    {{24, 0, 180}, 80},
  }, true);
  chassis.pid_wait();
}
🎯
Each waypoint is {x, y, heading}. The robot automatically calculates the turn and drive needed to reach that point from wherever it currently is. This is the power of odometry — you plan a path, not a sequence of blind movements.
// Section 04 · Coordinate Paths
Thinking in X, Y, and Heading 🗺️
How to plan autonomous paths as coordinates instead of distances — and how to read the field like a map.
🗺️
The field is a grid — your robot is a dot on it
Think of the VRC field like a coordinate plane from math class. (0, 0) is wherever you place your robot at the start. Every tile is 24 inches wide. Moving forward increases Y. Moving right increases X. Your heading is how many degrees you've rotated — 0° faces your starting direction.

The Field as a Coordinate Plane

[Diagram: field grid with a path from (0, 0) START through (0, 48), (24, 48), and (24, 96) to (48, 96) END — X axis →, Y axis ↑, gridlines at 0", 24", 48"]
Each grid square = 24 inches (one VRC floor tile) · ■ Start · ■ Waypoints · ■ End
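Since every tile is 24 inches, it is handy to sketch paths in tile units and convert to inches only when writing code. A minimal sketch of that conversion (the `Waypoint` struct and helper name are hypothetical — EZ Template's own waypoint type is what your auton actually uses):

```cpp
#include <cassert>

// A waypoint in field coordinates, matching the {x, y, heading} convention.
struct Waypoint {
    double x_in;        // inches right of the start
    double y_in;        // inches forward of the start
    double heading_deg; // facing direction on arrival
};

// Tiles are 24 inches; convert a tile-based sketch to inch coordinates.
Waypoint tile_waypoint(double tiles_right, double tiles_forward,
                       double heading_deg) {
    return {tiles_right * 24.0, tiles_forward * 24.0, heading_deg};
}
```

So "one tile right, two tiles forward, facing right" becomes the waypoint (24, 48, 90) — the same numbers you would pass to `pid_odom_set()`.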

Heading Convention

In EZ Template, 0° means facing your starting direction. Positive degrees turn clockwise when viewed from above:

[Diagram: compass rose — 0° FORWARD (up), 90° right, 180° backward, 270° left]
Heading increases clockwise · EZ Template uses degrees (0–360)
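A common question once you adopt this convention: "what heading makes the robot face a target point?" Because 0° is +Y and angles grow clockwise, the answer is `atan2` with its arguments swapped relative to the math-class version. A sketch under that convention (helper name hypothetical):

```cpp
#include <cassert>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Heading (degrees, clockwise from the starting direction) needed to face
// target (tx, ty) from (x, y). Convention: 0° = +Y, 90° = +X.
double heading_to_target_deg(double x, double y, double tx, double ty) {
    // Swapped atan2 arguments: clockwise-from-+Y instead of CCW-from-+X.
    double deg = std::atan2(tx - x, ty - y) * 180.0 / kPi;
    return deg < 0 ? deg + 360.0 : deg;  // normalize into 0–360
}
```

Sanity checks against the compass above: a target straight ahead gives 0°, directly to the right gives 90°, behind gives 180°, and directly left gives 270°.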

Planning a Path — Step by Step

1
Draw your path on paper first
Sketch the field tiles and mark your robot's starting position as (0,0). Draw the path you want, marking each turn or scoring location as a dot. Measure the tile distances — each tile is 24 inches.
2
Convert dots to (x, y, heading) coordinates
Each dot on your sketch becomes a waypoint. Count how many inches right (X) and forward (Y) it is from your start. Decide what direction the robot should be facing when it arrives (heading).
3
Write the waypoints into code
Each waypoint becomes one entry in your pid_odom_set() call. String them together and the robot will navigate smoothly between all of them.
4
Test and adjust
Run the autonomous, watch where it ends up versus where you intended, and adjust the coordinates. With odom, a 1–2 inch offset in a coordinate fixes the whole path — not 5 different distance values.

Resetting Position for Each Auton

Different autonomous routines start from different positions on the field. Always set the starting pose at the beginning of each auton function:

// Auton starting from the left side of the field:
void left_side_auton() {
  chassis.odom_pose_set({-36, 0, 0}); // 36" left of center
  // ... movements ...
}

// Auton starting from the right side:
void right_side_auton() {
  chassis.odom_pose_set({36, 0, 0});  // 36" right of center
  // ... movements ...
}
// Section 05 · Advanced
Distance Sensor Position Correction 📡
Use field walls as a reference to snap your robot's known position and eliminate accumulated drift before or during autonomous.
🧭
It's like resetting your GPS using a known landmark
Even a good GPS drifts over time. But if you drive past a sign that says "You are here," you can snap your position to that exact spot and reset the drift. A distance sensor does the same thing — it measures the gap between your robot and a field wall, and if you know where that wall is, you know exactly where your robot is.

Where to Mount the Distance Sensor

⬆️
Front-Facing
Drive to the alliance wall, read distance, correct your Y position. Most common setup — easy to do before auton starts.
➡️
Side-Facing
Drive next to a long field wall, read distance, correct your X position. Useful for robots that travel sideways during auton.
🔄
Both Directions
Two sensors — one front, one side. Correct both X and Y before the run for maximum accuracy. Takes more ports and planning.
⚠️
Distance sensors have a minimum range of ~50mm (~2 inches). Don't press against the wall — drive to within 4–8 inches and read from there. Also, game elements in front of the wall will give false readings. Only use wall correction in an area guaranteed to be clear.
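Before trusting a reading for wall correction, it is worth validating it in code. This sketch converts millimeters to inches and rejects readings outside a plausible window — the specific thresholds here are assumptions to tune for your robot, and the helper name is hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Convert a distance-sensor reading (mm) to inches, rejecting readings
// that are too close (below minimum range) or too far (probably not the
// wall). Returns a negative value when the reading should be discarded.
double wall_reading_inches(double mm) {
    const double kMinMm = 50.0;   // assumed minimum reliable range
    const double kMaxMm = 600.0;  // assumed sanity cap (~2 ft from the wall)
    if (mm < kMinMm || mm > kMaxMm) return -1.0;
    return mm / 25.4;             // 25.4 mm per inch
}
```

Skipping correction on an invalid reading is always better than snapping your pose to a game element that wandered in front of the sensor.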

Declare the Distance Sensor

📄 src/robot-config.cpp
// Distance sensor on Smart Port 14, facing forward
pros::Distance dist_front(14);

// In main.h:
extern pros::Distance dist_front;

Basic Wall Correction Function

Drive toward the wall, read the distance, calculate where you must be, and update the odometry pose:

📄 src/main.cpp
// ─── WALL CORRECTION ─────────────────────────────────
// Call this BEFORE autonomous starts, facing the alliance wall
void correct_position_from_wall() {
  // The alliance wall is at Y = 0 on the full field (144" wide)
  // Our robot's front is reading distance from that wall
  double wall_dist_mm = dist_front.get();     // in millimeters
  double wall_dist_in = wall_dist_mm / 25.4;  // convert to inches
  double robot_half_len = 7.0;                // half your robot's length

  // Calculate robot's true Y position
  // (distance to wall) + (robot half-length) = center of robot from wall
  double true_y = wall_dist_in + robot_half_len;

  // Update odometry — keep current X and heading, correct Y only
  ez::pose current = chassis.odom_pose_get();
  chassis.odom_pose_set({current.x, true_y, current.theta});

  printf("Wall dist: %.1f in | Y corrected to: %.1f\n", wall_dist_in, true_y);
}

Using It in Autonomous

void my_auton() {
  // 1. Set approximate starting position
  chassis.odom_pose_set({0, 8, 0});

  // 2. Drive slowly toward alliance wall to get a clean reading
  chassis.pid_drive_set(-4_in, 40); // slow approach
  chassis.pid_wait();

  // 3. Correct position using wall distance
  correct_position_from_wall();

  // 4. Now drive forward into the field — position is accurate!
  chassis.pid_odom_set({{0, 48, 0}}, 110);
  chassis.pid_wait();
}

Mid-Auton Correction

You can also correct position partway through an autonomous routine — for example, after driving to a wall to drop a game element:

// After scoring against the far wall, correct Y before returning:
chassis.pid_drive_set(24_in, 100);
chassis.pid_wait();

// Now we're near the far wall — correct position
double dist_in = dist_front.get() / 25.4;
ez::pose p = chassis.odom_pose_get();
chassis.odom_pose_set({p.x, 144 - dist_in - 7.0, p.theta});
// 144" = full field width; subtract sensor reading and half-robot-length
Best practice: Use wall correction at the very start of autonomous (before any movement) so your entire path benefits from the corrected position. Mid-auton corrections are a bonus when your path takes you near a wall naturally.
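The geometry behind both corrections reduces to two one-line formulas, worth isolating so you can reason about them separately from the sensor plumbing. A sketch, with the field length and half-robot-length as parameters (144" and 7" match the examples in this section; function names are hypothetical):

```cpp
#include <cassert>

// Near wall sits at Y = 0: robot center = sensor reading + half length.
double y_from_near_wall(double dist_in, double half_len_in) {
    return dist_in + half_len_in;
}

// Far wall sits at Y = field_in: work backward from the far edge.
double y_from_far_wall(double dist_in, double half_len_in,
                       double field_in = 144.0) {
    return field_in - dist_in - half_len_in;
}
```

For example, a 6-inch reading with a 7-inch half-length puts the robot center at Y = 13 near the alliance wall, or Y = 131 near the far wall.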
// Section 06 · Advanced Tuning
Tuning and Testing Your Odometry 🔬
How to verify odom is working, run the push test, fix common failure modes, and tune wheel offsets.

The Push Test — Your First Verification

Before running any autonomous, do this test to confirm odom is reading correctly:

1
Add position printing to your code
In opcontrol(), print the robot's current pose every 100ms:
// In opcontrol() while loop:
ez::pose pos = chassis.odom_pose_get();
printf("X: %.1f  Y: %.1f  H: %.1f\n", pos.x, pos.y, pos.theta);
pros::delay(100);
2
Upload code, open pros terminal, set robot at (0, 0, 0°)
Place the robot at its starting position. You should see X: 0.0 Y: 0.0 H: 0.0 in the terminal.
3
Manually push the robot forward 24 inches (one tile)
Watch the terminal. Y should increase toward 24.0 as you push. X and H should stay near 0. If Y goes negative, your tracking wheel is mounted reversed — negate the sensor in your declaration.
4
Rotate the robot 90° in place
Heading should increase from 0 to ~90. X and Y should barely change. If they shift a lot during a pure rotation, your tracking wheel's offset (distance from center) is wrong — adjust it.

Tuning Tracking Wheel Offset

The offset value in your tracking wheel declaration is how far the wheel is from the robot's center of rotation. Getting this right is critical for accurate turning:

// How to measure offset:
// 1. Spin the robot in place 5 full rotations (1800°)
// 2. Measure how far the robot's center moved (should be 0)
// 3. If it moved N inches, adjust offset by N/(2π×5)
// 4. Repeat until rotation causes minimal X/Y drift

ez::tracking_wheel left_tracker(
  &chassis,
  new pros::Rotation(11),
  2.75,
  -1.0  // ← tune this value until the spin test passes
);
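The N/(2π×5) rule comes from arc length: an offset error of δ inches traces a circle of circumference 2πδ on every rotation, so over R rotations the drift is 2πRδ. Solving for δ gives the correction. A sketch of that calculation (helper name hypothetical):

```cpp
#include <cassert>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Spin-test correction: if the computed position drifted `drift_in` inches
// after `rotations` full in-place rotations, the tracking-wheel offset is
// off by drift / (2 * pi * rotations). Add or subtract this from the
// declared offset, depending on the drift direction.
double offset_correction_in(double drift_in, double rotations) {
    return drift_in / (2.0 * kPi * rotations);
}
```

So a drift of about 3.14 inches after 5 rotations means the offset is off by roughly 0.1 inch — small numbers matter here, which is why the test uses 5 rotations instead of 1.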

Common Failure Modes and Fixes

Symptom | Likely Cause | Fix
Y reads negative when pushing forward | Tracking wheel mounted backwards | Add true as last param in sensor declaration to reverse it
Position jumps or stutters | Tracking wheel losing contact with floor | Check spring tension — wheel must press firmly. Tighten mount.
X/Y drifts a lot during turns | Wrong offset value | Do the 5-rotation test and recalculate offset
Heading drifts over time | IMU not calibrated or disturbed | Ensure imu_calibrate() completes fully in initialize(). Don't move robot during calibration (~3 sec).
Odom accurate at first, wrong after 15 sec | Wheel slipping under load during turns | Move tracking wheel closer to center of robot or increase spring pressure
Robot doesn't reach coordinate target | Exit conditions too strict or odom PID not tuned | Tune pid_odom_constants_set() — similar to regular drive PID tuning

Odom PID Constants

Coordinate-based movements use their own PID constants separate from regular drive PID:

// In default_constants() — autons.cpp:
chassis.pid_odom_drive_constants_set(8.0, 0.0, 60.0);
chassis.pid_odom_turn_constants_set(3.0, 0.05, 15.0);
chassis.pid_odom_angular_constants_set(6.0, 0.0, 52.5);

// Tune kP up if robot undershoots the target
// Tune kD up if robot oscillates (wiggles) around the target
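To make the two tuning hints concrete, here is the core of what any PD controller computes each loop — kP pushes proportionally to the remaining error, while kD pushes against how fast the error is changing (which is what damps oscillation). This is a generic sketch of the idea, not EZ Template's internal implementation:

```cpp
#include <cassert>

// One PD step: kP scales the remaining error toward the target,
// kD resists rapid change in the error (damping).
double pd_output(double error, double prev_error, double kP, double kD) {
    return kP * error + kD * (error - prev_error);
}
```

With error shrinking from 12 to 10 between loops, kP = 8 and kD = 60 give 8×10 + 60×(−2) = −40: the derivative term is actively braking the approach, which is exactly the behavior you increase kD to get.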
🏆
Odometry is a process, not a one-time setup. Expect to spend 2–3 practice sessions tuning. Once it's working, write down your constants in a comment so you can restore them if someone accidentally changes them before a tournament.
🎓
Going deeper? Once you're comfortable with EZ Template odometry, look into Pilons' OkapiLib, ARMS, or building your own odom from scratch using the Kalman filter. The math behind dead-reckoning is genuinely fascinating computer science — and it's the same math used in self-driving cars and spacecraft.
⚙ STEM Highlight Mathematics: Trigonometry, Integration & Dead Reckoning
Odometry is dead reckoning — estimating current position from a known starting point by accumulating movement. Each update: Δx = Δd · cos(θ), Δy = Δd · sin(θ). This is trigonometric decomposition of a displacement vector. Over N updates: x = x₀ + ∑Δxᵢ — discrete numerical integration. The position uncertainty grows with distance: if each encoder reading has ±ε error, after N steps the position error is bounded by N·ε in the worst case. This is error propagation — a core concept in measurement theory.
🎤 Interview line: “Odometry uses trigonometric decomposition and numerical integration to track position. Each encoder reading gives a displacement Δd; we decompose it into X and Y components using sin and cos of the current heading, then accumulate. The error grows as O(N) with the number of steps — which is why long runs drift more than short ones.”
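The dead-reckoning accumulation described above fits in a few lines of code. This sketch uses the field convention from Section 4 (0° = +Y, clockwise-positive), so the x-component takes sin and the y-component takes cos — the same decomposition as the Δx/Δy formulas, just rotated into this guide's heading convention. Types and names here are illustrative, not EZ Template's:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

struct Pose { double x, y, theta_deg; };

// Accumulate (distance, heading) steps into a pose — discrete numerical
// integration of displacement vectors. Convention: 0° = +Y, clockwise.
Pose integrate(Pose p, const std::vector<std::pair<double, double>>& steps) {
    const double kPi = 3.14159265358979323846;
    for (auto [d, theta] : steps) {
        double rad = theta * kPi / 180.0;
        p.x += d * std::sin(rad);  // sideways component of this step
        p.y += d * std::cos(rad);  // forward component of this step
        p.theta_deg = theta;
    }
    return p;
}
```

Two 24-inch steps — one at 0°, one at 90° — land the robot at (24, 24), matching the L-shaped path from Section 3. In a real odom loop the same update runs hundreds of times per second with tiny Δd values, which is why per-reading sensor error accumulates step by step.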
🔬 Check for Understanding
Your robot drives a 36-inch straight line but odometry reports only 35.1 inches. You re-measure the wheel diameter and find it is 4.1 inches, not the 4.0 inches configured. How does this explain the error?
A. Wheel diameter doesn't affect distance measurement — only encoder ticks matter.
B. The larger real wheel travels farther per encoder tick than the code assumes, so multiplying the tick count by the smaller assumed circumference under-reports every distance: 36 × (4.0 / 4.1) ≈ 35.1 inches. ✓
C. The encoder must be dropping ticks, and the wheel measurement is a coincidence.
Answer: B — reported distance is ticks × (assumed circumference ÷ ticks per revolution), so a configured diameter that is too small makes every reading proportionally short. Fix the diameter in your tracking wheel declaration and the error disappears.
Related Guides
🔪 Build Your Odometry Pod → 🌍 V5 GPS Sensor → 🔬 PID Diagnostics →