Merge branch 'doc_skeleton' into main_nav
gerth2 committed Dec 15, 2024
2 parents e65b0a5 + 61c61bc commit 61d1594
Showing 16 changed files with 2,666 additions and 23 deletions.
1,027 changes: 1,027 additions & 0 deletions .docs/architecture.graphml


48 changes: 48 additions & 0 deletions .docs/autoSequencerV2.md
@@ -0,0 +1,48 @@
# AutoSequencer V2

## Goals
Better flexibility for expressing complex sequences of autonomous routines

## Key Updates from V1
Rename `Event` to `Command` for better alignment with WPILib's command-based framework

Introduce `CommandGroup` as an ordered list of `Command`s to be run together

Introduce the following flavors of groups:
* `CommandRaceGroup`
  * Member commands run at the same time, finishing when the FIRST one is done
  * Extends `Command`, with pass-through on all functions to do the same thing on all commands in the group, except for `isDone()`, which is implemented as an OR across members
* `CommandParallelGroup`
  * Member commands run at the same time, finishing when ALL are done
  * Extends `Command`, with pass-through on all functions to do the same thing on all commands in the group, except for `isDone()`, which is implemented as an AND across members
* `CommandSequentialGroup`
  * Member commands run one after another

Requirements for `Command`:
* Abstract (extender-must-implement) methods:
  * `initialize` - called once, right before the first call to `execute`
  * `execute` - called repeatedly, at a fixed rate, as long as the command is active
  * `end(boolean interrupted)` - called once, right after the final call to `execute`. `interrupted` indicates whether the command ended because it finished naturally, or because something else stopped it prematurely
  * `isDone()` - returns true if the command is done, false otherwise
* Commands also implement convenience "composition" methods:
  * `withTimeout` - returns a race group of this command with a `WaitCommand` of the given duration
  * `raceWith()` - returns a race group with this command and the input commands
  * `alongWith()` - returns a parallel group with this command and the input commands
  * `andThen()` - returns a sequential group with this command and the input command
* Commands can `schedule()` or `cancel()` themselves with the `AutoSequencer`

Pre-supplied implementations of `Command`:
* `WaitCommand` - waits the given duration
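A minimal sketch of these interfaces, assuming only the names described above (the bookkeeping and timing details here are illustrative, not the actual implementation):

```python
import time

class Command:
    # Extender-must-implement lifecycle methods
    def initialize(self): pass
    def execute(self): pass
    def end(self, interrupted: bool): pass
    def isDone(self) -> bool: return False

    # Convenience "composition" methods
    def withTimeout(self, seconds):
        return self.raceWith(WaitCommand(seconds))
    def raceWith(self, *others):
        return CommandRaceGroup([self, *others])
    def alongWith(self, *others):
        return CommandParallelGroup([self, *others])
    def andThen(self, other):
        return CommandSequentialGroup([self, other])

class CommandRaceGroup(Command):
    def __init__(self, cmds): self.cmds = cmds
    # Done when the FIRST member is done - an OR across members
    def isDone(self): return any(c.isDone() for c in self.cmds)

class CommandParallelGroup(Command):
    def __init__(self, cmds): self.cmds = cmds
    # Done when ALL members are done - an AND across members
    def isDone(self): return all(c.isDone() for c in self.cmds)

class CommandSequentialGroup(Command):
    def __init__(self, cmds): self.cmds = cmds

class WaitCommand(Command):
    def __init__(self, duration):
        self.duration = duration
        self.endTime = 0.0
    def initialize(self):
        self.endTime = time.monotonic() + self.duration
    def isDone(self):
        return time.monotonic() >= self.endTime
```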

Existing requirements for `Mode`:
* Singular `CommandSequentialGroup` for all `Command`s in the mode
* Must provide API to:
* Supply the initial pose

Existing requirements for `AutoSequencer`:
* Top-level state machine for a command group
* Singleton
* Registration of multiple auto `Modes`, including publishing the available list to NT
* NT-based selection of the current auto event
* Ticks the `Mode`'s `CommandSequentialGroup` forward

18 changes: 18 additions & 0 deletions .docs/closedLoopDrivetrainControl.md
@@ -0,0 +1,18 @@
# Closed Loop Drivetrain Control

We achieve smooth control of our drivetrain through a slightly modified [Holonomic Drivetrain Controller](https://docs.wpilib.org/en/stable/docs/software/advanced-controls/trajectories/holonomic.html).

Swerve drive has three _independent_ axes of control - translation (X, Y) and rotation (Theta). Unlike a tank drive, where side-to-side translation cannot be done independently of forward motion, this independence may make our control strategy easier.

Our control logic feeds the drivetrain with desired X, Y, and Rotational Velocity as the sum of two sources:

1) Open Loop velocity command
2) Closed loop position-correction command

All control strategies (manual, auto-drive, trajectory, etc.) must supply an open loop velocity command for all three axes at all times. Manual control gets these from the joysticks, auto-drive gets them by looking at how we're moving from the current step to the next, and autonomous trajectories pre-calculate the velocity throughout the trajectory.

Optionally, a `Pose2d` might be available from the controlling source. Manual control generally will not have this, but auto-drive and trajectory control probably will.

Since we have `odometry` to calculate an estimated `Pose2d` of our robot, we can subtract this from our commanded `Pose2d` to get an error in position. Each axis of the error (three in total) is run through an independent PID controller, which in turn generates a small additional _corrective_ velocity to bring the robot's estimated pose closer to where we want it to be.
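The open-loop-plus-correction sum can be sketched in plain Python as follows (the gains and helper names here are placeholders; the real code runs a full PID controller per axis):

```python
import math

def wrap_angle(a):
    # Wrap to (-pi, pi] so the rotation correction takes the short way around
    return math.atan2(math.sin(a), math.cos(a))

# Placeholder proportional gains - illustrative only
KP_XY = 2.0
KP_T = 4.0

def drivetrain_command(openLoopVx, openLoopVy, openLoopVt, desPose=None, estPose=None):
    """Total velocity command = open-loop command + small closed-loop correction."""
    vx, vy, vt = openLoopVx, openLoopVy, openLoopVt
    if desPose is not None and estPose is not None:
        dx, dy, dt = desPose  # (x meters, y meters, theta radians)
        ex, ey, et = estPose
        vx += KP_XY * (dx - ex)
        vy += KP_XY * (dy - ey)
        vt += KP_T * wrap_angle(dt - et)
    return vx, vy, vt
```

When no desired pose is supplied (e.g. manual control), the command passes through as pure open-loop velocity.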

Notably, in the past, WPILib's implementation did not have open-loop control of rotation. This was a strange omission. For this reason, we've had our own minimal `HolonomicDriveController` class for quite some time.
9 changes: 9 additions & 0 deletions .docs/docs.md
@@ -0,0 +1,9 @@
# About Docs

These documents are designed to help developers of this codebase understand its architecture and purpose.

## Editing Diagrams

`.graphml` files may be edited with the [yEd graph editor](https://www.yworks.com/products/yed/download#download).

In general, exporting to .pdf will make viewing easier once the diagram is deemed "correct".
87 changes: 87 additions & 0 deletions .docs/drivetrain.md
@@ -0,0 +1,87 @@
# Drivetrain Control

## Swerve Drive Overview

A "swerve drive" is the most common type of drivetrain in the FIRST Robotics Competition in the last few years.

It consists of four "modules", each mounted on a corner of the robot.

Each module has two motors - one to point the wheel in a specific direction ("azimuth"), and one to actually spin the wheel ("wheel").

As a coordinate frame reminder: for our rectangular chassis, the robot's origin is defined as lying in the same plane as the floor, and in the exact center of the rectangle of the drivetrain. Positive X points toward the front of the robot, positive Y points toward the robot's left side. Positive rotation swings from +X to +Y axes.

## Overall Drivetrain Control

Our control is "field-oriented" - the commands into the drivetrain are assumed to be velocity commands aligned with the field walls.

A few steps are needed in the overall control algorithm.

1. Rotate the incoming velocity commands to be relative to the robot's heading, not the field's +X axis. WPILib provides these functions.
2. Figure out the velocity and azimuth angle _at each module's contact patch with the ground_, using kinematics and the drivetrain dimensions. WPILib provides these functions.
3. Perform per-module state optimization (see below)
4. Send specific wheel velocity and azimuth angle commands to each module.
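Step 1, rotating field-relative commands into the robot's frame, can be sketched as below (the real code would use WPILib's `ChassisSpeeds.fromFieldRelativeSpeeds`; this is just the underlying math):

```python
import math

def field_to_robot(vx_field, vy_field, robot_heading_rad):
    """Rotate a field-relative velocity command into the robot's frame.

    The command is rotated by the negative of the robot's heading, so a
    robot facing +Y that is told to drive field-+X ends up driving to
    its own right.
    """
    cos_h = math.cos(-robot_heading_rad)
    sin_h = math.sin(-robot_heading_rad)
    vx_robot = vx_field * cos_h - vy_field * sin_h
    vy_robot = vx_field * sin_h + vy_field * cos_h
    return vx_robot, vy_robot
```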

### Module State Optimization

At any given time, there are four possible ways a module could go from its present azimuth angle to a new commanded one:

1. Rotate left, keep wheel velocity the same
2. Rotate right, keep wheel velocity the same
3. Rotate left, invert wheel velocity
4. Rotate right, invert wheel velocity

In this way, the maximum number of degrees a module should ever have to be commanded to rotate through is 90. By optimizing the state, we reduce the amount of time the module spends in a "skidding" state, where the wheel is not smoothly rolling across the floor.

WPILib provides the functions to do this; we simply have to call them.
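The idea behind the optimization (which WPILib's `SwerveModuleState.optimize` performs for us) can be sketched as:

```python
import math

def optimize_module_state(desired_angle_rad, desired_speed, current_angle_rad):
    """If reaching the desired azimuth would take more than a 90 degree turn,
    turn the short way instead and invert the wheel velocity."""
    # Shortest angular error from current to desired, wrapped to (-pi, pi]
    error = math.atan2(math.sin(desired_angle_rad - current_angle_rad),
                       math.cos(desired_angle_rad - current_angle_rad))
    if abs(error) > math.pi / 2:
        # Point the wheel the opposite way and spin it backward
        desired_speed = -desired_speed
        error = error - math.copysign(math.pi, error)
    return current_angle_rad + error, desired_speed
```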

## Module Control

Controlling the module requires controlling both motors.

### Wheel

The wheel velocity is achieved through a basic feed-forward, plus a small feedback portion.

The feed-forward model is the standard motor velocity model, consisting of:

* `kS` - static friction - the maximum voltage that can be applied to the motor without motion occurring
* `kV` - the number of volts needed to achieve a certain rotational velocity

Future adds include `kA` - the number of volts needed to achieve a certain _change_ in rotational velocity.

Feedforward should be doing the vast majority of the control effort. Feedback is just a small additional factor to help compensate.
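A sketch of this feedforward-plus-feedback structure, with placeholder gains (real values come from characterizing the drive motor; WPILib's `SimpleMotorFeedforwardMeters` plays this role in practice):

```python
import math

# Placeholder characterization gains - illustrative only
KS = 0.15  # volts needed to overcome static friction
KV = 2.5   # volts per (rad/s) of wheel velocity
KP = 0.05  # small feedback gain on velocity error

def wheel_voltage(des_vel_radps, act_vel_radps):
    """Feedforward does the vast majority of the work; feedback trims residual error."""
    if des_vel_radps != 0.0:
        ff = KS * math.copysign(1.0, des_vel_radps) + KV * des_vel_radps
    else:
        ff = 0.0
    fb = KP * (des_vel_radps - act_vel_radps)
    return ff + fb
```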

### Azimuth

For now, we are just using a simple P/D feedback controller on the azimuth angle. This seems to be sufficient.

A future addition could be to motion-profile the commanded azimuth angle, and then use that profile for a velocity feed-forward.

## Constants & Configuration

As with most good code, minimizing "magic constants" throughout the code is preferred.

Most drivetrain related constants are in `drivetrainPhysical.py`. Key ones that users may have to adjust:

* `WHEEL_BASE_HALF_*_M` - Distance from the origin of the robot out to the center of where the wheel makes contact with the ground. Note this is not to the frame rail or the bumper, and it is only _half_ the distance between two wheels.
* `WHEEL_GEAR_RATIO` - Reduction ratio in the modules, from the driving motor down to the wheel.
* `WHEEL_RADIUS_IN` - Radius of the wheel, from its center of rotation to where it contacts the carpet.
* `MAX_DT_MOTOR_SPEED_RPS` - Maximum achievable speed of the drive motor. WPILib has most of these internally in its physical plant model configurations.
* `*_ENCODER_MOUNT_OFFSET_RAD` - Adjusts for the physical mounting offset of the module/magnet and angle sensor on each drivetrain azimuth.
* `ROBOT_TO_*_CAM` - 3d transforms from the robot origin to the described camera (including both translation and rotation).

Other constants are present, but they are tied to things that don't change much year to year (robot weight, azimuth steer gear ratio, bumper thickness, etc.)

Most other constants in the file are derived from these constants.

### Encoder Mount Offset Cal Procedure

This calibration must be updated whenever a module is reassembled.

1. Put the robot up on blocks.
2. Reset all these values to 0, deploy code
3. Pull up dashboard with encoder readings (in radians)
4. Using a square, twist the modules by hand until they are aligned with the robot's chassis
5. Read out the encoder readings for each module, put them here
6. Redeploy code, verify that the encoder readings are correct as each module is manually rotated

30 changes: 30 additions & 0 deletions .docs/drivetrainControlStrategy.md
@@ -0,0 +1,30 @@
# Picking a Drivetrain Control Strategy

There are multiple parts of robot code which might want to control the drivetrain. The following logic is used to determine what is actually controlling it.

## Arbitration

The drivetrain logic uses a "chained" approach to arbitration. Each feature takes in the command from the "upstream" features, has an *opportunity* to modify any part of the command, and passes the result downstream.

As more automatic assist or control features are added, they should be put into this stack. The further "downstream" in the list they are, the higher priority the feature will have.
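The chained arbitration described above might be sketched like this (the class and field names here are hypothetical stand-ins, not the actual code):

```python
class DrivetrainCommand:
    """Velocity command plus an optional desired pose."""
    def __init__(self, vx=0.0, vy=0.0, vt=0.0, desPose=None):
        self.vx, self.vy, self.vt, self.desPose = vx, vy, vt, desPose

class ManualControl:
    def update(self, cmd):
        # Always available: overwrite with joystick values (stand-ins here)
        cmd.vx, cmd.vy, cmd.vt = 1.0, 0.0, 0.0
        return cmd

class TeleopNavigation:
    def __init__(self, active=False):
        self.active = active
    def update(self, cmd):
        # Only modifies the command when the feature is active
        if self.active:
            cmd.vx, cmd.vy, cmd.vt = 0.5, 0.5, 0.0
        return cmd

# Features further down the list are further "downstream" and win when active
features = [ManualControl(), TeleopNavigation(active=True)]

def arbitrate():
    cmd = DrivetrainCommand()
    for f in features:
        cmd = f.update(cmd)
    return cmd
```

If `TeleopNavigation` is inactive, the manual command passes through untouched, which gives the "manual is the default" behavior.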

### Manual Control

Manual control takes velocity commands from the driver's joysticks. There is no specific pose desired, only velocity is commanded.

Manual control must always be available. It is the default option if no other feature is taking control.

### Autonomous Path Following

During autonomous, pre-planned paths are often executed. At each time step, a desired pose and velocity are read from a file and passed to the drivetrain. This only happens while the autonomous path-planning logic is active.

### Teleop Navigation

The navigation stack can create commands to automatically drive the robot toward a goal, avoiding obstacles. This generates velocity commands, and a desired pose.

### TODO - align to gamepieces

This logic should create rotational commands which point the robot at a gamepiece, but otherwise do not alter the X/Y velocity commands.

This has not yet been written.

112 changes: 112 additions & 0 deletions .docs/localization.md
@@ -0,0 +1,112 @@
# Localization

Localization is the process of estimating where the robot is on the field.

This "where" is answered by three numbers:

* X position - in meters - from the origin
* Y position - in meters - from the origin
* Rotation - in radians - the angular deflection from the positive X axis, toward the positive Y axis

Collectively, these are known as a "Pose" - specifically, a `Pose2d`.

The sensors on our robot provide clues as to these numbers, but always with some inaccuracy. These clues provide overlapping information, which we must *fuse* into a single estimate.

We use WPILib's *Kalman Filter* to fuse these pieces of information together, accounting for the fact that each has some inaccuracy.

## About Inaccuracy

All sensors measure a particular quantity, and add some amount of random *noise* to that signal. The noise is in the same units as the quantity being measured.

When looking at sensor data, you can often see the signal "jittering" around the real value. This noise can be at least roughly measured.

The most common way to understand this noise is to assume it is random, with a *Gaussian distribution*. The amount of noise is characterized by the *standard deviation* of the noise's distribution.

The Kalman Filter takes the standard deviation of each measurement as an input, as a way to know how much to "trust" the measurement. Measurements with high standard deviations have a lot of noise, and are not particularly trustworthy.

Trustworthy measurements will change the estimate of the robot's Pose rapidly. Untrustworthy measurements will take a long time to have an impact on the Pose estimate.

## Data Sources

### Gyroscope

FRC robots almost always include a gyroscope, which measures rotational velocity. By adding up the rotational velocity measurements over time, the sensor measures changes in the robot's angular position.

The gyroscope is one of the most accurate, least-noisy measurements of robot position. It should only drift by a degree or two every minute. However, it only measures one component of pose. Additionally, it is at best relative to a given starting pose, which must be accurate.

### Swerve Module Encoders

The encoders inside the swerve modules (on the wheel motors, plus absolute encoders measuring azimuth position) provide a good estimate of movement. As long as the wheels are rolling along the ground (and not slipping), linear displacement can be derived by converting the number of rotations into distance rolled, via the circumference of the wheel.

The wheel encoders are also generally very accurate, with most noise driven by wheel slip during high-acceleration maneuvers. Additionally, the estimate is at best relative to a given starting pose, which must be accurate.
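The rotations-to-distance conversion can be sketched as below (the gear ratio and wheel radius are placeholder values, not the robot's actual constants):

```python
import math

# Placeholder physical constants - the real values live in drivetrainPhysical.py
WHEEL_GEAR_RATIO = 6.75   # motor rotations per wheel rotation
WHEEL_RADIUS_IN = 2.0     # wheel radius, inches
IN_TO_M = 0.0254

def wheel_distance_m(motor_rotations):
    """Linear distance rolled = wheel rotations x wheel circumference."""
    wheel_rotations = motor_rotations / WHEEL_GEAR_RATIO
    circumference_m = 2.0 * math.pi * WHEEL_RADIUS_IN * IN_TO_M
    return wheel_rotations * circumference_m
```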


### AprilTags

Using a camera and a coprocessor, we can estimate our pose relative to the *fiducial markers* that FIRST provides.

These estimates provide all three components of Pose, and are absolute - they do not care whether the initial pose estimate was accurate or not. However, the signal often has a lot of latency (as it takes 100ms or more to get the image, process it, and send the result to the roboRIO). Additionally, their accuracy varies, depending on how far away the observed tag is, and whether or not [the observed pose is impacted by a common optical illusion](https://docs.wpilib.org/en/stable/docs/software/vision-processing/apriltag/apriltag-intro.html#d-to-3d-ambiguity).

## Initial Position

Software must make an accurate initial assumption of pose. This is done through one of two ways:

### Autonomous Init

Each autonomous routine must provide an assumed starting pose, where the drive team places the robot at the start of the routine.

This pose is returned from the sequencer, and used by the pose estimation logic to reset pose at the start of the match.

### Teleop Gyro Reset

The code supports resetting the rotational component of pose to be aligned toward "downfield" - this helps during development or driver practice, where autonomous is not run.

This only fixes the rotational component of pose. While the rotational component is the most important, X/Y translation can only be corrected by looking at an AprilTag.

## Tuning Localization

The core tuning to be done involves picking how much to trust each source of data.

Ultimately, this is done by picking the "standard deviation" for each sensor.

Currently, default values are used from WPILib's pose estimator for gyro and wheels (as implemented in `drivetrainPoseEstimator.py`).

When `WrapperedPoseEstPhotonCamera` is updated, part of the `CameraPoseObservation` includes the standard deviation of the rotational component (`rotStdDev`) and the translational component (`xyStdDev`) of the pose estimate.

Both of these `StdDev`s can be thought of as, roughly, the "plus or minus" of the estimated pose.

For example, currently, we assume the pose is accurate in X/Y, plus-or-minus one meter. This is middling - not horrible, but not great either.

We also say the rotation is accurate, plus-or-minus 50 degrees.... which is a roundabout way of saying "we don't trust it much at all"
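Expressed in code, those assumptions might look like this (a sketch; the `addVisionMeasurement` call shown in the comment follows the shape of RobotPy's `wpimath.estimator` pose estimators):

```python
import math

# The document's trust assumptions, expressed as standard deviations
xyStdDev = 1.0                  # plus-or-minus one meter in X and Y
rotStdDev = math.radians(50.0)  # plus-or-minus 50 degrees of rotation

# A WPILib-style pose estimator consumes these as an (x, y, theta) tuple
# when a vision measurement is added, e.g.:
#   poseEstimator.addVisionMeasurement(visionPose, timestamp,
#                                      (xyStdDev, xyStdDev, rotStdDev))
```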

In the future, different pieces of info about the AprilTag seen (ambiguity ratio, distance from the tag, overall tag size, etc.) could be used to adjust those fixed assumptions.

At the end of the day, all that these numbers change is how rapidly the robot's pose estimate will "respond to" a particular vision observation.

## Debugging Localization

A set of objects is put into NetworkTables to help debug how the robot is estimating its position on the field.

[AdvantageScope](https://docs.wpilib.org/en/stable/docs/software/dashboards/advantagescope.html) is the recommended tool for viewing a 3d representation of this. The simulation GUI and [Glass](https://docs.wpilib.org/en/stable/docs/software/dashboards/glass/index.html) also provide 2d views of this.


### Field 2d Poses

In `drivetrain/poseEstimation/drivetrainPoseTelemetry.py`, the following 2d and 3d poses are published:

In the `DT Pose 2d` `Field2d` object:

* `Robot` - the current estimate of the robot pose
* `ModulePoses` - the current estimated position and state of each swerve module
* `visionObservations` - any poses the vision system has detected the robot at; 0 or more, depending on how many AprilTags are in view
* `desPose` - the desired pose from whatever is currently commanding the drivetrain. Might be `null`, or just the origin, if only velocities are being commanded at the moment
* `desTraj` - the current autonomous path-planned trajectory (empty if not running)
* `desTrajWaypoints` - current lookahead waypoints from the Potential Field auto-drive (empty if not running)
* `curObstaclesFixed` - poses associated with the obstacles assumed to be always present on the field
* `curObstaclesFull`, `curObstaclesThird`, `curObstaclesAlmostGone` - current transient obstacle poses. As these obstacles decay in strength, their poses move from one list to the next (this allows the visualization tool to show different colors based on how strong the obstacle is)

### 3d poses

Additionally, 3d poses are published for the assumed camera locations. This is useful for debugging camera orientation on the robot, using an `axis` object in AdvantageScope's 3d viewer.

`LeftCamPose`, `RightCamPose`, and `FrontCamPose` are all provided. Other poses should be added as we add more cameras.
