Vision/Odometry Fusion

Disclaimer: WIP and untested on 2026 hardware; tune your pose estimator and camera transforms. Custom code built on WPILib APIs (not official WPILib sample).

Purpose: Fuse timestamped AprilTag pose measurements into wheel odometry via WPILib's SwerveDrivePoseEstimator — use vision measurements when fresh, and fall back to pure odometry otherwise.

Code (Java/WPILib PoseEstimator)

import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.kinematics.SwerveDriveKinematics;
import edu.wpi.first.math.kinematics.SwerveModulePosition;
import org.photonvision.PhotonCamera;
import org.photonvision.PhotonPoseEstimator;
import org.photonvision.PhotonPoseEstimator.PoseStrategy;
import org.photonvision.targeting.PhotonPipelineResult;

public class PoseFusion {
  private final SwerveDrivePoseEstimator estimator;
  private final PhotonCamera cam = new PhotonCamera("photonvision");
  private final PhotonPoseEstimator photonEstimator;

  public PoseFusion(SwerveDriveKinematics kinematics,
                    Rotation2d gyroAngle,
                    SwerveModulePosition[] modulePositions,
                    Transform3d robotToCam) {
    // Multi-tag PnP solved on the coprocessor (renamed from MULTI_TAG_PNP
    // in PhotonVision 2024). robotToCam must match the physical mount.
    photonEstimator = new PhotonPoseEstimator(
        AprilTagFieldLayout.loadField(AprilTagFields.kDefaultField),
        PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR,
        robotToCam);
    estimator = new SwerveDrivePoseEstimator(
        kinematics, gyroAngle, modulePositions, new Pose2d());
  }

  public void update(Rotation2d gyroAngle, SwerveModulePosition[] modules) {
    estimator.update(gyroAngle, modules);
    // Drain every unread frame so no vision measurement is dropped
    // (getLatestResult() is deprecated in recent PhotonLib).
    for (PhotonPipelineResult res : cam.getAllUnreadResults()) {
      photonEstimator.update(res).ifPresent(est ->
          estimator.addVisionMeasurement(
              est.estimatedPose.toPose2d(), est.timestampSeconds));
    }
  }

  public Pose2d getPose() { return estimator.getEstimatedPosition(); }
}
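addVisionMeasurement also has an overload taking per-measurement standard deviations, which lets you trust close, multi-tag solves more than distant single-tag ones. Below is a minimal sketch of distance-based scaling; the constants and the quadratic falloff are illustrative assumptions to tune, not values from the original code, and on the robot you would wrap the result with `VecBuilder.fill(x, y, theta)`.

```java
public final class VisionStdDevs {
  /**
   * Illustrative vision std-dev model: returns {x_m, y_m, theta_rad}.
   * Trust degrades quadratically with distance; multi-tag solves get a
   * lower baseline than single-tag solves. All constants are assumptions.
   */
  public static double[] forDistance(double distMeters, int tagCount) {
    double base = (tagCount >= 2) ? 0.3 : 0.8;        // meters, illustrative
    double scale = 1.0 + (distMeters * distMeters) / 9.0;
    return new double[] {
      base * scale,                  // x std dev (m)
      base * scale,                  // y std dev (m)
      Math.toRadians(30) * scale     // heading std dev (rad)
    };
  }
}
```

A single-tag solve at the same distance then carries roughly 2.7x the uncertainty of a multi-tag solve, so the filter leans harder on odometry for it.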

How to test

  • Field with AprilTags: drive around, compare estimator pose to known landmarks.
  • Sim: use PhotonVision sim to feed tag observations; ensure estimator accepts them.
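For the sim route, PhotonLib ships simulation classes (VisionSystemSim, PhotonCameraSim, SimCameraProperties) that synthesize tag observations from a ground-truth pose. A hedged setup sketch — class and method names per the PhotonVision simulation docs, so verify against your installed PhotonLib version; `robotToCam` is the same transform used on-robot:

```java
import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Transform3d;
import org.photonvision.PhotonCamera;
import org.photonvision.simulation.PhotonCameraSim;
import org.photonvision.simulation.SimCameraProperties;
import org.photonvision.simulation.VisionSystemSim;

public class VisionSimSetup {
  private final VisionSystemSim visionSim = new VisionSystemSim("main");

  public VisionSimSetup(PhotonCamera cam, Transform3d robotToCam) {
    // Load the field's tag layout so the sim knows where tags live.
    visionSim.addAprilTags(AprilTagFieldLayout.loadField(AprilTagFields.kDefaultField));
    // Default camera model; set resolution/FOV/latency to match your camera.
    SimCameraProperties props = new SimCameraProperties();
    visionSim.addCamera(new PhotonCameraSim(cam, props), robotToCam);
  }

  /** Call each sim loop with ground truth; PhotonLib publishes fake results. */
  public void update(Pose2d groundTruthPose) {
    visionSim.update(groundTruthPose);
  }
}
```

Because the sim publishes through the normal PhotonCamera NetworkTables path, the estimator code above runs unmodified against it.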

Pitfalls

  • The robot-to-camera transform must match the physical mount exactly; an error here corrupts every vision measurement.
  • Reject bad tag solves: high pose ambiguity, distant tags, or estimated poses outside the field.
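The rejection gate in the last bullet can be reduced to a pure predicate called before addVisionMeasurement. A minimal sketch — the 0.2 ambiguity cutoff and 4 m range are illustrative thresholds to tune, not values from the original code; on the robot you would feed it `target.getPoseAmbiguity()` and `target.getBestCameraToTarget().getTranslation().getNorm()`.

```java
public final class VisionGate {
  // Illustrative thresholds -- tune on your robot.
  static final double MAX_AMBIGUITY = 0.2;   // PhotonVision ambiguity ratio, 0 = unambiguous
  static final double MAX_DISTANCE_M = 4.0;  // range to nearest tag, meters

  /** Accept a vision solve only if it is unambiguous and close enough. */
  public static boolean accept(double ambiguity, double distanceMeters) {
    return ambiguity >= 0.0               // PhotonVision reports -1 when unavailable
        && ambiguity < MAX_AMBIGUITY
        && distanceMeters < MAX_DISTANCE_M;
  }
}
```

Multi-tag solves are largely ambiguity-free by construction, so this gate matters most on single-tag frames.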