Knowing your robot is not the same as being able to explain it under pressure. This lab builds the communication skills — STAR, active listening, structured answers, hand-offs — that let your team talk about its work as confidently as it builds it.
Knowing your robot is not the same as being able to explain it, and knowing your design process is not the same as communicating it clearly under pressure to a stranger in ten minutes. These are two different skills. Most VRC teams practice the first constantly and the second almost never.
The programs that consistently earn the Design Award and Excellence Award are not smarter than your team. They know their work, and they have practiced talking about it. This lab trains the second part.
Each section has one or more timed drills. Run them with your team during a regular practice session — they take 5–15 minutes each. The weekly training plan in Section 4 shows how to combine them into a complete interview prep routine without adding extra practice time.
A memorized answer works exactly once — when the judge asks the exact question you prepared for. As soon as they ask a follow-up, go deeper on a detail, or ask the same question from a different angle, the script collapses. Students pause, repeat themselves, or look to teammates for help.
The fix is not a better script. It is practicing with randomized questions so your brain learns to find the answer from knowledge rather than retrieve it from memory.
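One low-effort way to randomize practice is a small script that draws questions from a bank your team maintains. This is a minimal sketch; the question bank and category names here are placeholders to swap for your own.

```python
import random

# Hypothetical question bank -- replace with your team's own categories and questions.
QUESTION_BANK = {
    "design": [
        "Why did you choose this drivetrain?",
        "What was the biggest trade-off in your intake design?",
    ],
    "testing": [
        "How did you measure your autonomous consistency?",
        "Walk me through a change you made based on test data.",
    ],
    "teamwork": [
        "Who owns the programming, and how do decisions get made?",
        "Tell me about a disagreement and how you resolved it.",
    ],
}

def draw_questions(n=5, seed=None):
    """Draw n questions across all categories, with no repeats."""
    rng = random.Random(seed)
    pool = [q for questions in QUESTION_BANK.values() for q in questions]
    return rng.sample(pool, min(n, len(pool)))

if __name__ == "__main__":
    for i, question in enumerate(draw_questions(5), start=1):
        print(f"Q{i}: {question}")
```

Because the draw is random each run, nobody can rehearse an exact sequence; the answer has to come from knowledge, not memory.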
One team member gives the whole-team overview in exactly 60 seconds. This is the opening statement when a judge says “Tell me about your robot.”
Structure to follow: (1) What the game needs → (2) What your robot does → (3) What makes your approach different → (4) One data point that proves it works. That’s 60 seconds.
Each engineer picks one subsystem and, in 60 seconds, explains: what it does, why you chose it, what problem the first version had, and what you changed. The timer runs; the team scores the answer using the Claim/Evidence/Decision formula.
Good subsystems for this drill: intake, drive, autonomous routine, lift, pneumatics, sensor setup.
STAR stands for Situation, Task, Action, Result. It is a structured way to answer reflection questions without rambling.
Total time: 30–45 seconds for a complete answer. It is complete because it has a situation, a task, an action, and a result — the four things judges listen for.
Use this for test results and iteration questions:
What: “Our first intake had a 15% jam rate on tilted elements.”
So What: “That cost us about half a scoring cycle per match — roughly 9 points at average Push Back scoring rates.”
Now What: “We increased the roller gap by 4 mm, brought the jam rate below 3%, and have held that for the last three competitions.”
Each team member picks one thing that went wrong this season and answers it in STAR format. Timer: 45 seconds. Team listens for: did all four parts appear? Was the Result specific and measured?
Starter prompts: “Tell me about a time your robot failed at competition.” / “What is the biggest thing you would do differently?” / “Walk me through a change you made based on test data.”
Judge asks an evaluation or iteration question. Answerer must structure their answer as What / So What / Now What — explicitly. They can say the headers out loud at first: “What: our first lift was inconsistent. So what: it dropped 2–3 points per match. Now what: we redesigned the hard stop.”
After 3 reps each, try it without saying the headers. The structure should be invisible.
Most students assume active listening is a personality trait — either you have it or you do not. It is actually a skill built through practice. In a judge interview, there are specific moments where listening collapses:
Each of these can be fixed with deliberate practice.
Judge asks a question. Before answering, the answerer must repeat the question back in their own words in one sentence: “So you’re asking why we chose chain over gear, specifically for the drivetrain?” Judge confirms or corrects. Then the answer begins.
This slows the brain down, confirms understanding, and eliminates the habit of answering before fully processing the question.
One person answers a question normally. Mid-answer, the judge interrupts with a follow-up. The answerer must stop immediately and answer the follow-up before returning to the original point.
This trains the skill of staying flexible when the conversation shifts — the most important listening skill in a real judge interview.
Example interruption: student is explaining the four-bar decision — judge interrupts: “Wait — you mentioned motor budget. How many motors does your robot use total?”
Judge asks a question. The answerer must point to a specific notebook page while answering: “That test is on page 14 — you can see the before/after data there.” Every claim must be backed by a page reference.
This trains the habit of treating the notebook as a reference, not a prop. And it forces each member to know where the evidence is, not just that it exists.
Common failures: silence before the pass, complete answer before the pass (nothing left for the owner), or the owner re-starting from scratch instead of picking up the thread.
Rate yourself 1–5 on each. 1 = needs work, 5 = would not change anything.
When giving feedback to a teammate, use this structure to keep it useful and not personal:
Each session is 2 hours. Add these drills at the start or end depending on energy level and competition proximity.
Session 1 of week: Each member writes a 1-paragraph summary of their role. What do they own? What are the 2 most important decisions they made? Keep it. Update it weekly.
Session 2 of week: Run the 45-second burst drill — 2 questions per category, all 8 categories. No scoring. This is just about getting comfortable talking about the work. (16 reps, ~15 minutes)
Session 3–4 if applicable: Elevator pitch drill. Every team member practices the 60-second robot overview. No timer pressure — focus on the 4-part structure: game need → what your robot does → what makes it different → one data point.
Session 1: STAR reflection drill — each member picks one failure or change from the last 2 weeks and answers in STAR format. Team scores for all 4 parts.
Session 2: What/So What/Now What drill on test results — use the most recent test log entries. Each result gets the framework treatment. (5–10 minutes)
Session 3: Question Replay drill — repeat-back before every answer for a full 10-minute set. Forces the listening habit.
End of mid-season: one video review session. Film a 5-minute mock interview. Fill in the self-scoring card. Fix the top 1–2 things before the next session.
Session 1: Hand-off drill — full protocol, all categories, with wrong-person deliberate calling. Score every hand-off. Fix what is awkward.
Session 2: Follow-up interrupt drill — judge interrupts mid-answer on every question. Trains the ability to stay flexible when the conversation shifts.
Session 3: Notebook evidence drill — every answer must include a page reference. Forces final review of what is actually in the notebook.
Final session before competition: a full 10-minute mock interview, timer running, with the judge staying fully in role. Film it. Watch it. Score it. One fix per person.
First session after competition: debrief the interview. What questions came up that you were not ready for? Write those down. They become the week 1 practice material for next competition prep.
Running rule: every answer that was weak in the real interview gets practiced 3 more times before the next competition. The debrief entry in the notebook is mandatory.
Use a timer. Assign the judge role. Run through as many categories as time allows, and score every answer after the timer ends.
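Any stopwatch works for the 45-second reps, but if you want something on a laptop at practice, a countdown this simple will do. This is a sketch, not part of the lab materials; the 3-second call at the bottom is a short demo, and a real rep would use 45.

```python
import time

def countdown(seconds=45):
    """Count down once per second on a single line, then announce time."""
    for remaining in range(seconds, 0, -1):
        print(f"\r{remaining:3d}s left", end="", flush=True)
        time.sleep(1)
    print("\rTime!        ")

if __name__ == "__main__":
    countdown(3)  # short demo; use countdown(45) for a burst-drill rep
```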
After each answer, the team scores it. 1 = description only. 2 = explanation with some evidence. 3 = claim + evidence + decision, specific data. Track trends over multiple sessions.
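Tracking trends is easier if someone logs the 1–3 scores per session. A minimal sketch of what that tracking could look like, with hypothetical session data standing in for your own log:

```python
from statistics import mean

# Hypothetical log: one list of 1-3 answer scores per practice session.
sessions = [
    [1, 2, 1, 2, 2],   # week 1
    [2, 2, 3, 2, 2],   # week 2
    [2, 3, 3, 3, 2],   # week 3
]

for week, scores in enumerate(sessions, start=1):
    full_marks = scores.count(3)  # answers with claim + evidence + decision
    print(f"Week {week}: avg {mean(scores):.2f}, "
          f"{full_marks}/{len(scores)} answers hit claim+evidence+decision")
```

A rising average and a rising count of 3s across weeks is the trend you are looking for; a flat line tells you which drill to repeat.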
The judge asked how you tested it; you explained how you built it. This happens when nerves push you back onto prepared material.
Three mechanisms, two test results, and a STEM connection in one breath. Judges cannot follow it, even if every fact is correct.
“We built it. We tested it. We decided.” If everything is “we,” judges cannot tell what any individual contributed — and judges want to know what you, specifically, did.
“We kind of decided” / “it basically works” / “I think we chose it because” — hedging signals uncertainty about your own work. If you know it, state it.
Mid-answer glances at a teammate asking “am I right?” without saying it. Judges see it. It signals that the answer is not coming from your own knowledge.