Sources & confidence: Hardware specs, code APIs, AprilTag detection details, and field-of-view (~73°) figures verified against the official VEX Library articles on the AI Vision Sensor and the VEX AIM curriculum "Exploring AI Vision" activity. PROS API references and code patterns verified against the official PROS V5 documentation at pros.cs.purdue.edu (PROS 4 / pros::AIVision class). EZ-Template integration patterns drawn from EZ-Template's public docs at ez-robotics.github.io/EZ-Template. Override-specific details (kickoff April 24): AprilTags are confirmed on the goal bases of the cup-shaped scoring receptacles. The full Override game manual drops Monday April 27, 2026; tag IDs, count, exact placement geometry, any additional tag locations on the field, and any AprilTag-specific scoring rules will be confirmed then. Re-read this guide after the manual is published.
// Section 01

AI Vision Sensor — Overview 📷

VEX's computer-vision sensor for V5. Detects color signatures, color codes, AprilTags, and pre-trained AI classifications of game objects. The required sensor for using AprilTags in Override.
🔎 Vision + AI 🎯 Override-Ready 🏆 V5RC / VEX U / VEX AI Legal

Hardware Summary

VEX Part Number: 276-8659 (V5 / EXP version). Replaces the older Vision Sensor (276-4850), which cannot detect AprilTags.
Resolution: 320 × 240 pixels. Frame center is (160, 120). All position/size data the sensor returns is in pixel coordinates.
Field of View: ~73° horizontal (per VEX AIM curriculum measurement, ±2°). Wider than the older Vision Sensor.
Connection: One V5 Smart Port via Smart Cable. Counts as a sensor port, not a motor.
Mounting: Standard #8-32 screw holes (same pattern as other V5 sensors). Mounts to any C-channel, plate, or gusset in the V5 system.
Configuration: Initial setup is easiest through the AI Vision Utility (VEXcode) for the first-time AprilTag mode toggle; after that, everything is coded in PROS C++ via pros::AIVision.
Toolchain Compatibility: PROS C++ (pros::AIVision, official PROS 4 API). Also works in VEXcode V5 / EXP and the VS Code Extension. Native PROS support means full integration with EZ-Template / LemLib / OkapiLib chassis libraries.

What it can detect

The AI Vision Sensor has four detection modes that can be enabled independently in the AI Vision Utility:

1. Color Signatures
Up to 7 user-defined colors at once. Tune each one's hue range and saturation range. Used for tracking colored game objects (e.g., alliance-colored rings, balls). Works similarly to the older Vision Sensor.
2. Color Codes
Sequences of stacked color signatures the sensor reads as one identifiable pattern. Useful for distinguishing alliance-colored objects positioned next to each other in a known layout.
3. AprilTag IDs
Black-and-white fiducial markers. Numbered 0–37 (38 total tags available). No setup required — just toggle Detection Mode on. Reports tag ID, angle, position, and bounding box. This is what Override uses.
4. AI Classifications
Pre-trained object recognition for VEX game elements (e.g., Buckyball, Cube, Ring). VEX has stated competition game objects are added each season starting 2024-25, so Override game objects are likely supported in the model after the season ships.
📋
Power budget note: The AI Vision Sensor is a sensor, not a motor — it does not count against any motor watt cap. Power-cap concerns are unaffected by adding it. The cost is one sensor port + one Smart Cable.
// Section 02
AprilTags Explained 🎯
What they are, why VEX is using them, and what data the sensor reports for each detected tag.

What They Are

AprilTags are square fiducial markers — like QR codes, but engineered specifically for fast, robust computer-vision detection at varying angles, distances, and lighting conditions. Each tag has a unique black-and-white pattern that encodes a numeric ID. The standard was developed at the University of Michigan and is widely used in robotics research, FRC, AR/VR, and self-driving car development.

Per VEX's documentation: there are 38 different AprilTags, numbered 0 through 37. VEX provides a printable PDF of all 38 tags from kb.vex.com. They're free to print on standard paper for practice, and the official Override field will have laminated/durable versions placed at fixed locations.

Override 2026-27 confirmed (kickoff April 24): AprilTags are mounted on the goal bases — the dark base portion of the cup-shaped scoring receptacles. This means the AI Vision Sensor is high-value for Override: each goal can be uniquely identified by tag ID, and robots can use vision to align to the goal opening for accurate element placement. Tag IDs, count per goal, and exact placement geometry will be confirmed Monday with the manual. The manual may also reveal additional tag locations — on alliance bases, on scoring zones, on the field perimeter — which would unlock additional use cases (auton start-position confirmation, alliance-side awareness, general field navigation). Read the field-elements section first thing Monday morning to see what else is tagged.
🧠 Why AprilTags Beat Color Detection
  • Identity: Each tag has a unique ID. The sensor doesn't just see "a tag" — it sees "Tag 4" specifically. Color detection can't distinguish identical-looking objects.
  • Angle: The tag's rotation relative to the camera is part of the detection output. You know not just where it is but how the camera is oriented to it.
  • Robustness: The high-contrast black-and-white pattern is far more lighting-robust than color. Gym lighting, shadows, and color casts barely affect detection.
  • Position correction: A known tag at a known location gives you absolute robot position. No drift over time.

What the Sensor Reports per Detected AprilTag

When you take a snapshot in AprilTag detection mode, the sensor returns an array of all detected tags. For each tag, the following properties are available (per the VEX Library "Coding with the AI Vision Sensor" articles):

id: The tag's numeric ID (0–37). Use this to identify which tag you're looking at.
angle: Rotational angle of the tag in the sensor's view, reported 0–359 degrees. Tells you if you're square to the tag or off-axis.
centerX / centerY: Pixel coordinates of the tag's center in the 320×240 frame. Frame center is (160, 120). Use these for left/right/up/down corrections.
originX / originY: Pixel coordinates of the top-left corner of the tag's bounding box. Combined with width/height, defines the full bounding box.
width / height: Bounding box dimensions in pixels. Smaller = farther away; larger = closer. Used for distance estimation.

Important Behaviors

Distance Estimation Math

If you want to use AprilTag width to estimate distance, the relationship is approximately:

🔘 Pinhole Camera Approximation

distance = (real_tag_width × focal_length_pixels) / measured_width_pixels

In practice: measure the pixel width at known distances (12″, 24″, 36″, 48″, 60″), build a lookup table or fit a curve. Width is approximately inversely proportional to distance — if at 24″ the tag is 80px wide, at 48″ it'll be ~40px wide.
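If you'd rather code the lookup table than fit a curve, here is a minimal sketch assuming linear interpolation between calibration points. The struct/function names and every width/distance pair are placeholders (the 80 px at 24″ point matches the example above); swap in your own measurements.

// Hypothetical lookup-table distance estimator with linear interpolation.
// Every width/distance pair below is a placeholder: replace with your own measurements.
struct CalPoint { double width_px; double distance_in; };

// Calibration points sorted by descending pixel width (closer = wider).
const CalPoint kCal[] = {
  { 160.0, 12.0 },  // e.g., tag measured 160 px wide at 12"
  {  80.0, 24.0 },
  {  53.0, 36.0 },
  {  40.0, 48.0 },
  {  32.0, 60.0 },
};
const int kCalCount = 5;

// Returns an interpolated distance (inches) for a measured tag width (pixels).
double distance_from_width(double width_px) {
  if (width_px >= kCal[0].width_px) return kCal[0].distance_in;                          // closer than table
  if (width_px <= kCal[kCalCount - 1].width_px) return kCal[kCalCount - 1].distance_in;  // farther than table
  for (int i = 0; i + 1 < kCalCount; i++) {
    if (width_px <= kCal[i].width_px && width_px >= kCal[i + 1].width_px) {
      double t = (kCal[i].width_px - width_px) / (kCal[i].width_px - kCal[i + 1].width_px);
      return kCal[i].distance_in + t * (kCal[i + 1].distance_in - kCal[i].distance_in);
    }
  }
  return kCal[kCalCount - 1].distance_in;  // unreachable fallback
}

Interpolating between measured points also sidesteps needing the camera's focal length from the pinhole formula.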

For tasks where you just need to drive toward a tag until you reach it, you don't need true distance — just compare width to a target threshold ("drive forward until tag width > 200px") and stop. Code example for this pattern is in the next section.

// Section 03
Mounting the Sensor 🔧
Where to put the AI Vision Sensor on your Override robot, and how to mount it so it actually works in matches.

Where to Mount It

The AI Vision Sensor sees in roughly a 73° horizontal cone. Its placement on your robot needs to satisfy several constraints simultaneously:

  1. Unobstructed forward view. No intake, arm, or chassis structure should block the camera's sight line during normal driving and scoring. This is the #1 mounting failure — teams place the sensor where the lift arm covers it on extension.
  2. Tag-height alignment. Tags are confirmed on the goal bases, and the manual may add more locations (exact heights won't be known until it drops Monday). Mount the sensor at a height that puts those tags inside its vertical FOV at typical robot positions.
  3. Low vibration location. Mount to a rigid section of the chassis. Don't mount on a movable arm or near a high-RPM motor. Vibration blurs frames and degrades detection accuracy.
  4. Cable management. The Smart Cable needs to run to the V5 Brain without being pinched, tangled in a moving mechanism, or loose enough to snag during contact. Use cable clips/zip-tie tabs.
🎯 Recommended Mounting Locations
  • Front-facing, top of chassis — canonical placement. Above the intake, below any high-extension lift. Sees forward with minimal obstruction.
  • Front of mast (if you have a vertical structural mast) — gets a bit of elevation, sees over short obstacles, sees tags placed higher on the field walls.
  • Behind clear polycarbonate if mechanical contact with opponents is a concern — polycarbonate is mostly transparent to the sensor and protects it.

How to Mount It Mechanically

The AI Vision Sensor uses standard #8-32 screw holes, the same pattern as other V5 sensors; follow the VEX Library's mounting guidance when attaching it.

⚠️
Real-world note from public forum discussion: Multiple teams have reported needing to physically rotate the sensor ~10–15° to one side after initial mounting because the captured image wasn't centered as expected. Plan to verify alignment in the AI Vision Utility before competition. If your tags are showing centerX values consistently off-center when you're square to them, the sensor is rotated — adjust mechanically.

Tilt Considerations

Tilt = the up/down angle of the sensor relative to horizontal.

You can ALSO mount two sensors at different tilts on the same robot — one looking forward-flat, one looking down for floor-level rings. The V5 Brain has 21 Smart Ports; vision sensors don't conflict with each other.

Wiring

Use a Smart Cable from the sensor's Smart Port to any Smart Port on the V5 Brain. Note the port number — you'll reference it in code (e.g., aivision1 on Port 8). For competition robots, label the cable with a piece of electrical tape and a port number to make field debugging fast.

// Section 04
Coding the Sensor (PROS + EZ-Template) 💻
PROS C++ patterns for AprilTag detection, with EZ-Template chassis integration. Verified against the official PROS V5 documentation (pros.cs.purdue.edu).
⚠️
EN4 reminder: RECF EN4 prohibits AI-generated programming code in your engineering notebook. Use these examples to understand the pattern, then write your own implementation in your own style. Don't copy-paste these into your notebook.

Setup

  1. Plug AI Vision Sensor into a Smart Port. Note the port number.
  2. You still need to enable AprilTag detection mode the first time. The simplest way: connect briefly to VEXcode V5 once, open the AI Vision Utility from the Devices menu, toggle AprilTag Detection ON, save. Settings persist on the sensor across power cycles.
  3. Alternatively, in PROS code call aivision.enable_detection_types(pros::AivisionModeType::tags) at startup — this enables tag detection programmatically.
  4. In your PROS project, add #include "pros/aivision.hpp" to your source (it's already pulled in via pros/api.h by default in recent PROS).

Verified PROS API Reference

From the official PROS V5 documentation:

Class: pros::AIVision
Constructor: pros::AIVision aivision(uint8_t port);
Reset: aivision.reset(); — resets sensor to initial state
Enable detection: aivision.enable_detection_types(pros::AivisionModeType::tags);
Tag family: aivision.set_tag_family(pros::AivisionTagFamily::tag_16H5); — or tag_21H7, tag_25H9, tag_61H11
Get all detections: std::vector<Object> objects = aivision.get_all_objects();
Type check: pros::AIVision::is_type(object, pros::AivisionDetectType::tag)
Tag fields: object.id, then four corners: object.object.tag.x0/y0, x1/y1, x2/y2, x3/y3
📋
Important PROS-vs-VEXcode difference: PROS gives you the four corner coordinates of each detected tag, not centerX/centerY/width/height directly. You compute the center yourself from the corners (typically: centerX = (x0 + x2) / 2, where x0 and x2 are diagonal corners). For width/distance estimation, use the diagonal length or one edge length.

Pattern 1: Drive Toward a Specific AprilTag (Standalone)

This is the "hello world" of AprilTag use. Centers on Tag ID 4, drives forward until the tag fills enough of the frame that you're close. Uses raw motor commands — the EZ-Template-integrated version follows in the next pattern.

// PROS V5 C++ — standalone, no EZ-Template chassis
// EN4: rewrite in your own words and structure for your notebook.
#include "pros/apix.h"
#include "pros/aivision.hpp"
#include <cmath>  // std::sqrt

// Sensor on Smart Port 8
pros::AIVision aivision(8);

// Helper: compute approximate tag center and "width" from the four corners
struct TagInfo {
  int center_x;
  int center_y;
  int diagonal;  // approximate size; bigger = closer
};

TagInfo measure_tag(const pros::AIVision::Object& o) {
  int cx = (o.object.tag.x0 + o.object.tag.x2) / 2;
  int cy = (o.object.tag.y0 + o.object.tag.y2) / 2;
  int dx = o.object.tag.x2 - o.object.tag.x0;
  int dy = o.object.tag.y2 - o.object.tag.y0;
  int diag = (int)std::sqrt(dx * dx + dy * dy);
  return { cx, cy, diag };
}

void drive_to_tag(int target_id, pros::Motor& left, pros::Motor& right) {
  while (true) {
    auto objects = aivision.get_all_objects();
    bool found = false;
    for (auto& obj : objects) {
      if (!pros::AIVision::is_type(obj, pros::AivisionDetectType::tag)) continue;
      if (obj.id != target_id) continue;
      TagInfo t = measure_tag(obj);
      int turn_error = t.center_x - 160;                    // -ve = tag is left
      double turn_power = turn_error * 0.3;                 // P-control gain
      double drive_power = (t.diagonal < 80) ? 50.0 : 0.0;  // stop driving forward once close
      left.move(drive_power + turn_power);   // tag right of center -> left side speeds up -> turn right toward it
      right.move(drive_power - turn_power);
      found = true;
      break;
    }
    if (!found) {
      left.move(0);
      right.move(0);
    }
    pros::delay(20);
  }
}

Pattern 2: Vision-Corrected EZ-Template Auton

This is the pattern that actually fits your competition workflow. EZ-Template handles the chassis movement (PID, odometry, swings); your vision code triggers and corrects within EZ-Template's flow.

The core idea: use EZ-Template's pid_drive_set to approach a known-tag location, then exit early when vision confirms alignment, then continue with the next motion.

// EZ-Template + AI Vision integration pattern
// EN4: rewrite in your own words and structure for your notebook.
#include "main.h"
#include "EZ-Template/api.hpp"
#include "pros/aivision.hpp"
#include <cmath>  // std::sqrt, std::abs

extern Drive chassis;            // declared in subsystems.hpp
extern pros::AIVision aivision;  // declared in subsystems.hpp

// Helper to find a specific tag in the current view
struct TagResult {
  bool found;
  int center_x;
  int diagonal;
};

TagResult find_tag(int id) {
  auto objects = aivision.get_all_objects();
  for (auto& obj : objects) {
    if (!pros::AIVision::is_type(obj, pros::AivisionDetectType::tag)) continue;
    if (obj.id != id) continue;
    int cx = (obj.object.tag.x0 + obj.object.tag.x2) / 2;
    int dx = obj.object.tag.x2 - obj.object.tag.x0;
    int dy = obj.object.tag.y2 - obj.object.tag.y0;
    int diag = (int)std::sqrt(dx * dx + dy * dy);
    return { true, cx, diag };
  }
  return { false, 0, 0 };
}

// AUTON ROUTINE: drive to scoring zone, vision-correct alignment, score
void score_at_tag_zone(int expected_tag) {
  // Step 1: rough approach using EZ-Template odometry
  chassis.pid_drive_set(48_in, DRIVE_SPEED, true);  // 48 inches forward
  chassis.pid_wait();

  // Step 2: vision correction
  pros::delay(150);  // settle, take a clean snapshot
  TagResult t = find_tag(expected_tag);
  if (t.found) {
    // Tag right of center -> positive error -> positive (clockwise) relative turn;
    // tag left -> negative (counterclockwise). Verify the sign convention on your chassis.
    int turn_error_px = t.center_x - 160;
    // Convert pixel error to degrees (calibrated empirically; ~0.2 deg/pixel for ~73 deg FOV)
    double turn_correction_deg = turn_error_px * 0.22;
    if (std::abs(turn_correction_deg) > 1.0) {  // only correct if significant
      chassis.pid_turn_relative_set(turn_correction_deg, TURN_SPEED);
      chassis.pid_wait();
    }
  }
  // If tag not found, we proceed on encoder-only path (graceful degrade)

  // Step 3: final approach + score
  chassis.pid_drive_set(6_in, DRIVE_SPEED, true);
  chassis.pid_wait();
  // ... activate scoring mechanism ...
}

Pattern 3: Async Vision Watch Task

Sometimes you want vision data continuously available without blocking your auton flow. Run a PROS task that polls vision and updates a shared state variable; your auton reads from it. This pattern is used by top teams.

// Async vision task pattern
// EN4: rewrite in your own words for your notebook.
// Assumes the includes and the global aivision object from the previous pattern.

// Shared state guarded by a mutex
struct VisionState {
  pros::Mutex mutex;
  int last_seen_tag_id = -1;
  int last_seen_center_x = 0;
  int last_seen_diagonal = 0;
  uint32_t last_update_ms = 0;
};
VisionState vision_state;

// Background task: continuously polls AI Vision, updates shared state
void vision_task_fn(void* param) {
  while (true) {
    auto objects = aivision.get_all_objects();
    for (auto& obj : objects) {
      if (!pros::AIVision::is_type(obj, pros::AivisionDetectType::tag)) continue;
      int cx = (obj.object.tag.x0 + obj.object.tag.x2) / 2;
      int dx = obj.object.tag.x2 - obj.object.tag.x0;
      int dy = obj.object.tag.y2 - obj.object.tag.y0;
      int diag = (int)std::sqrt(dx * dx + dy * dy);
      vision_state.mutex.take();
      vision_state.last_seen_tag_id = obj.id;
      vision_state.last_seen_center_x = cx;
      vision_state.last_seen_diagonal = diag;
      vision_state.last_update_ms = pros::millis();
      vision_state.mutex.give();
      break;  // first tag wins; iterate if you want all
    }
    pros::delay(20);  // ~50 Hz update
  }
}

// Start the task in initialize()
void initialize() {
  aivision.enable_detection_types(pros::AivisionModeType::tags);
  pros::Task vision_task(vision_task_fn, nullptr, "vision");
}

Pattern 4: EZ-Template Async + Vision Exit

EZ-Template supports async PID motions: start a drive, then continuously check sensor state, exit motion early when condition met. This is the cleanest pattern for "drive forward until you see Tag X centered, then stop."

// EZ-Template async + vision exit
// EN4: rewrite in your own words for your notebook.
void drive_until_tag_centered(int target_id) {
  // Start a long forward motion (will be cancelled by exit condition)
  chassis.pid_drive_set(60_in, DRIVE_SPEED, true);

  // Watch loop: cancel motion when tag is centered & large enough
  while (chassis.pid_wait_quick_chain() == false) {
    TagResult t = find_tag(target_id);
    if (t.found
        && std::abs(t.center_x - 160) < 10   // centered
        && t.diagonal > 100) {               // close enough
      chassis.pid_targets_reset();           // cancel current motion
      chassis.drive_set(0, 0);               // hard stop
      break;
    }
    pros::delay(20);
  }
}

Note: pid_wait_quick_chain is from EZ-Template's API for chained motions. Check the EZ-Template docs (ez-robotics.github.io/EZ-Template) for the exact return semantics in your installed version — the API has been refined across releases.

LemLib Equivalent

If you switch to LemLib (or compare libraries): LemLib uses chassis.moveTo(x, y, theta, timeout) for absolute-coordinate motion. The same vision-correction pattern works:

  1. Move to approximate target with LemLib's motion call.
  2. After motion settles, take vision snapshot, compute correction.
  3. Issue a small corrective moveTo() with the corrected coords.

LemLib's chassis.cancelMotion() is the equivalent of EZ-Template's pid_targets_reset() for early exits.
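Here is a minimal sketch of that three-step flow in LemLib form. It reuses the find_tag helper from Pattern 2, assumes chassis is a LemLib chassis object, and uses the moveTo call named above; exact call names and signatures vary between LemLib releases (newer versions use moveToPose), the heading-correction sign depends on your coordinate convention, and all coordinates and the 0.22 deg/pixel gain are placeholders.

// Hypothetical LemLib version of the vision-corrected approach.
// Verify call names and the turn sign against your installed LemLib release.
void lemlib_score_at_tag_zone(int expected_tag) {
  // Step 1: rough approach to an absolute field pose (placeholder coordinates)
  chassis.moveTo(24, 48, 90, 4000);  // x (in), y (in), theta (deg), timeout (ms)

  // Step 2: once the motion settles, snapshot and compute a heading correction
  pros::delay(150);
  TagResult t = find_tag(expected_tag);
  if (t.found) {
    int turn_error_px = t.center_x - 160;
    double correction_deg = turn_error_px * 0.22;  // same empirical deg/pixel as Pattern 2
    // Step 3 of the list above: small corrective motion with the adjusted heading
    chassis.moveTo(24, 48, 90 + correction_deg, 1500);
  }
  // Tag not visible -> continue on odometry alone (graceful degrade)

  // Final approach + score (placeholder coordinates)
  chassis.moveTo(24, 54, 90, 2000);
  // ... activate scoring mechanism ...
}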

🚧
EZ-Template and LemLib do NOT wrap the AI Vision Sensor. They are chassis libraries — they handle drivetrain PID, odometry, motion profiling. For sensors like AI Vision and Optical, you use the standard PROS API directly. Integration happens in your own auton code: combine sensor reads with chassis motion calls.
// Section 05
Auton & Skills Run Tips 🏆
How to use AprilTags to consistently rack up skills run points. The 60-second skills format rewards precise, repeatable autonomous routines over flashy ones, and vision gives you that precision without giving up speed.

Why AprilTags Matter Most in Skills Run

Skills runs are 60-second autonomous-only matches with no opponent. The score depends entirely on how reliably your robot executes a planned sequence. The two biggest causes of skills point loss in past V5RC seasons (per VEX Forum discussion):

  1. Drift in odometry/encoder-based navigation. Wheel slip, motor stalls, and minor field variations compound over a 60-second run.
  2. Position errors in scoring approach. Misaligned scoring drops a goal, knocks a ring off, or misses a target entirely.

AprilTags address both. Correcting position at known-tag waypoints during the skills sequence resets accumulated error every few seconds, and aligning to a tag before each scoring action makes the approach repeatable.

Recommended Skills Strategy

Pattern A: Tag-Anchored Waypoints
Plan your skills sequence as a series of known field zones. Each zone has at least one AprilTag visible from typical robot positions. At each zone entry, snapshot → correct position → execute scoring → move to next zone. Worst-case fallback: if a tag isn't visible, fall back to encoder-only navigation. (A structural sketch of this pattern follows Pattern C below.)
Pattern B: Pre-Score Correction
Before every scoring action (placing a ring, dropping a goal, climbing), do a quick AprilTag snapshot for the relevant tag and adjust position by the centerX/angle errors. Adds ~0.5–1 second per scoring action, but the success rate improvement usually more than pays for it.
Pattern C: End-of-Run Position Lock
If your skills routine ends with a positional task (climbing onto a specific structure, parking in a zone), use a final AprilTag check before executing it. The whole 60-second run can drift by inches; a final correction ensures the last action lands.
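Here is a minimal structural sketch of Pattern A, reusing the score_at_tag_zone helper from Pattern 2; every tag ID, distance, and heading in it is a placeholder until the manual confirms the real field layout.

// Hypothetical skills-run skeleton built from tag-anchored waypoints.
// Tag IDs, distances, and headings below are placeholders.
void skills_auton() {
  // Zone 1: approach, vision-correct against (placeholder) Tag 3, score
  score_at_tag_zone(3);

  // Transit to zone 2 on odometry alone (no tag needed mid-travel)
  chassis.pid_turn_set(90_deg, TURN_SPEED);
  chassis.pid_wait();
  chassis.pid_drive_set(36_in, DRIVE_SPEED, true);
  chassis.pid_wait();

  // Zone 2: vision-correct against (placeholder) Tag 7, score
  score_at_tag_zone(7);

  // End of run: one last tag-anchored correction before the final positional task
  score_at_tag_zone(11);  // placeholder ID near the end-of-run structure
  // ... execute climb / park ...
}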

What to Do Before You Even Code It

  1. Print the AprilTag PDF from kb.vex.com. Cut and tape tags around your practice field at the locations the manual specifies (or your best guess).
  2. Drive your robot manually around the field with the AI Vision Utility open in webcam mode on a connected device. Verify which tags are visible from which positions. Find the dead zones (positions where no tag is in view).
  3. Measure tag pixel widths at known distances. Build a calibration table. Drive 12″ from a tag, record the width. Repeat at 24, 36, 48, 60. This becomes your distance lookup (see the calibration sketch after this list).
  4. Test detection latency. Snapshot, then move, then check how long until the data updates. Plan your code to wait this long before reading after a snapshot.
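For steps 3 and 4, a throwaway helper like the sketch below makes data collection painless. It assumes the find_tag helper from Pattern 2 and prints to the Brain screen with PROS's LLEMU; the tag ID is a placeholder.

// Throwaway calibration helper: park the robot at each known distance, read the
// numbers off the Brain screen, and record them in your calibration table.
void calibrate_tag_size(int tag_id) {
  pros::lcd::initialize();
  while (true) {
    TagResult t = find_tag(tag_id);
    if (t.found) {
      pros::lcd::print(0, "Tag %d diagonal: %d px", tag_id, t.diagonal);
      pros::lcd::print(1, "centerX: %d", t.center_x);
      pros::lcd::print(2, "last update: %d ms", (int)pros::millis());  // step 4: watch update timing
    } else {
      pros::lcd::print(0, "Tag %d not visible", tag_id);
    }
    pros::delay(100);
  }
}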

Auton Match Tips (Head-to-Head, Not Skills)

Match autons are only 15 seconds, so the tradeoff is different from skills: each vision correction costs roughly 0.5–1 second, a much larger share of the run, so save corrections for the one or two scoring actions where a miss costs the most.

🧠
Skills run benchmark target: If your encoder-only skills run scores X points, a vision-corrected version of the same sequence should reliably score X + ~15–25% from reduced drops/misses, assuming your scoring mechanism is the limiting factor (not ring count). Track this metric across iterations.
// Section 06
Driver Skills Run Tips 🎯
In driver skills, vision can also assist the driver — not just the autonomous code. "Driver assist" modes are legal and are how top teams hit their scoring caps.

Driver Assist: What It Is

Driver assist is a programming pattern where holding a button on the controller activates an autonomous routine that performs a specific task — aligning to a goal, picking up a ring, climbing — while the driver continues normal control on the rest of the robot. Vision is the enabling technology.

This is fully legal in V5RC. The R12 family of rules requires the robot to be controlled by a driver during driver-control periods, but autonomous-assist macros that the driver triggers are explicitly allowed.

Useful Driver Assist Modes

"Lock to Tag" Alignment
Hold a button: robot turns to center a specified AprilTag in its view. Driver continues to control forward/backward speed manually. Usable for scoring approaches where alignment matters but distance is driver-judged. (A code sketch appears after the implementation tips below.)
"Auto Approach" Macro
Hold a button: robot drives toward the nearest visible AprilTag of a specified ID, both turning and approaching, until it reaches a target distance (tag width > threshold). Useful for known-position scoring zones.
"Climb Sequence" Macro
If Override has a climbing endgame (likely), tag-correct to the climb structure first, then execute the climb. Removes the most error-prone manual action of the match. Mapped to a single button press.

Implementation Tips for Driver Assist

  1. Hold-button activation, not toggle. Release = exit the macro. Driver maintains override at all times. (Toggle macros lock the driver out and tend to fail in unexpected ways.)
  2. Visual feedback on the Brain screen. Show whether the assist is active and what tag it's tracking. Drivers need to know if vision is working or has failed.
  3. Test for "tag lost" behavior. If the tag goes out of view mid-macro, what happens? It should NOT spin in circles searching forever. Best behavior: stop, beep/light up, return control to driver.
  4. Don't let assist fight the driver. When the assist exits, motor commands from the joystick should immediately take effect. No lag.
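Here is a minimal sketch of the "Lock to Tag" mode wired the way tips 1–3 describe: hold-to-activate, on-screen status, and a safe stop when the tag is lost. It assumes the find_tag helper and EZ-Template chassis from the coding section; the L1 button, the tag ID, and the pixel-to-power gain are all placeholders, and the LLEMU prints stand in for whatever Brain-screen feedback you prefer.

// Hypothetical "Lock to Tag" driver assist inside opcontrol().
// Hold L1: vision steers; the driver keeps forward/back on the left stick.
void opcontrol() {
  pros::Controller master(pros::E_CONTROLLER_MASTER);
  const int kGoalTag = 4;  // placeholder ID until the manual confirms goal tags

  while (true) {
    int fwd = master.get_analog(pros::E_CONTROLLER_ANALOG_LEFT_Y);

    if (master.get_digital(pros::E_CONTROLLER_DIGITAL_L1)) {  // tip 1: hold, not toggle
      TagResult t = find_tag(kGoalTag);
      if (t.found) {
        pros::lcd::print(0, "ASSIST: locked on tag %d", kGoalTag);  // tip 2: feedback
        int turn = (t.center_x - 160) / 3;  // placeholder P gain, pixels -> motor power
        chassis.drive_set(fwd + turn, fwd - turn);
      } else {
        pros::lcd::print(0, "ASSIST: tag lost, stopping");  // tip 3: safe stop, no searching
        chassis.drive_set(0, 0);
      }
    } else {
      // Tip 4: normal driver control resumes the instant the button is released
      int turn = master.get_analog(pros::E_CONTROLLER_ANALOG_RIGHT_X);
      chassis.drive_set(fwd + turn, fwd - turn);
    }
    pros::delay(20);
  }
}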

Driver Practice With Vision

📋
Driver Skills Run is 60 seconds, no opponent. Same vision benefits as auton skills apply: every alignment is repeatable, every scoring approach is consistent. Top driver skills scores will likely come from teams with well-tuned assist macros.
// Section 07
Override Action Plan 🎯
A timeline for integrating the AI Vision Sensor into a competitive Override robot. Pre-manual prep, post-manual integration, and through-season optimization.

This Weekend (April 24–26): Pre-Manual Prep

Monday April 27 (Manual Drop Day)

  1. Read the field-elements section first — this confirms how many goals there are, which tag IDs map to which goals (red alliance, blue alliance, neutral), and tag dimensions.
  2. Scan for tag locations beyond the goal bases. The manual may reveal additional AprilTags on alliance starting bases, scoring zones, field perimeter walls, or elsewhere. If so, each location enables a different use case — auton start-position confirmation, alliance-side awareness, general field navigation. Build a separate lookup table for each tag category.
  3. Map tag IDs to goal locations. Build a reference doc: tag ID → goal position → alliance ownership. This is the lookup table your auton and driver-assist code will use (a placeholder version is sketched after this list).
  4. Check sensor-related rules. Are there restrictions on sensor count? Mounting locations? Modifications to game elements? (Manual sections that typically cover this: R6, R14, SG rules.)
  5. Confirm tag placement geometry. Are tags on the front face only of the goal base, or all four sides? Tag size? Tag mounting height above the floor? These determine your sensor mounting height.
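Once the manual confirms the IDs, that reference doc can live directly in code as a lookup table. A minimal sketch follows; every ID, name, coordinate, and ownership value in it is a placeholder to be replaced on Monday.

// Hypothetical tag-ID lookup table. All entries are placeholders until the
// Override manual confirms real IDs, goal positions, and ownership.
#include <map>
#include <string>

enum class Alliance { RED, BLUE, NEUTRAL };

struct GoalInfo {
  std::string name;   // e.g., "red-side near goal"
  double field_x_in;  // goal base position on the field, in inches
  double field_y_in;
  Alliance owner;
};

// tag ID -> goal info
const std::map<int, GoalInfo> kGoalTags = {
  { 1, { "red near goal",   24.0,  24.0, Alliance::RED     } },
  { 2, { "red far goal",    24.0, 120.0, Alliance::RED     } },
  { 5, { "blue near goal", 120.0,  24.0, Alliance::BLUE    } },
  { 6, { "blue far goal",  120.0, 120.0, Alliance::BLUE    } },
  { 9, { "center goal",     72.0,  72.0, Alliance::NEUTRAL } },
};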

Week 1 (April 28 – May 4): Build & Mount

Weeks 2–4: First Auton Routine With Vision

Mid-Season: Driver Assist Macros

Late Season: Skills Optimization

Common Pitfalls (Learn From Other Teams)

⚠️ Pitfalls We've Seen
  • Mounting the sensor too low. Tags positioned higher on field walls fall outside the sensor's vertical FOV. Mount high enough to see the full intended tag area.
  • Trusting width-based distance without calibration. The pixel-width-to-distance relationship is approximate. Always calibrate per-robot and per-mounting.
  • Skipping the "sensor not detected" case. If the sensor unplugs mid-match (loose Smart Cable, contact damage), every vision-dependent function fails. Always have an encoder-only fallback path.
  • Vision-only auton, no fallback. If a tag isn't visible at the right moment (occluded by your alliance partner, by an opponent), the auton fails entirely. Tag correction should be additive, not load-bearing.
  • Forgetting to enable AprilTag detection mode in the Utility. The most common "why isn't this working?" failure. Detection mode must be ON; verify before every match if you reset the sensor configuration.
