An Oversimplified Overview on How to Control a Robot

2020/04/26

When learning a new topic, we can quickly become lost if too many ideas are presented at once. Here I write down, for future reference, a very oversimplified summary of how to control a robot to perform a simple task.

For illustration we consider a simple robot arm with two revolute joints, working in the $(e_1, e_2)$ plane, as in the following figure.

[Figure: robot arm with two revolute joints in the $(e_1, e_2)$ plane]

Define the robot end-effector’s position and velocity in task space as $\boldsymbol{x} = (x_1, x_2, \dot{x}_1, \dot{x}_2)^\top$, with coordinates in the basis $(e_1, e_2)$. Similarly, define $\boldsymbol{q} = (q_1, q_2, \dot{q}_1, \dot{q}_2)^\top$ as the robot’s joint angles and angular velocities.


1. Task specification

The goal is to move the robot’s arm from point A to point B in $N$ steps, following a pre-defined trajectory in task space $ \tau_{\textrm{task}} = \{\boldsymbol{x}^i \}_{i=1}^N$, where $\boldsymbol{x}^i$ is the vector $\boldsymbol{x}$ at trajectory step $i$.
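As a concrete sketch of such a specification, here is one possible way to build a straight-line trajectory between A and B with numpy: positions are linearly interpolated and velocities approximated by finite differences. The function name and the values of A, B, $N$, and the time step are all illustrative assumptions, not part of the original task.

```python
import numpy as np

def straight_line_trajectory(a, b, n_steps, dt):
    """Linearly interpolate positions from a to b and approximate
    the velocities by finite differences."""
    positions = np.linspace(a, b, n_steps)           # shape (N, 2)
    velocities = np.gradient(positions, dt, axis=0)  # shape (N, 2)
    return np.hstack([positions, velocities])        # row i is x^i

# Example: move from A = (0.5, 0.0) to B = (0.3, 0.4) in N = 100 steps
tau_task = straight_line_trajectory(np.array([0.5, 0.0]),
                                    np.array([0.3, 0.4]),
                                    n_steps=100, dt=0.01)
```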


2. Kinematics

Specifying a desired trajectory in task space is more intuitive to humans, because it is the space we inhabit. But the robot lives in joint space. Therefore, we need to convert our desired trajectory from task space to joint space, which we can do by first computing the robot’s Forward Kinematics (FK) model and then the Inverse Kinematics (IK) model. Computing the IK itself is not trivial, since there can be many configurations in joint space that correspond to the same configuration in task space.

Assuming we have the IK, we can then transform each step of the task-space trajectory $\tau_{\textrm{task}}$ into a step of the joint-space trajectory $\tau_{\textrm{joint}} = \{\boldsymbol{q}^i \}_{i=1}^N $.
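For this particular two-link arm, both models can be written in closed form. A minimal sketch, considering positions only and assuming link lengths `L1` and `L2` (illustrative values); the `elbow_up` flag selects one of the two joint-space solutions mentioned above:

```python
import numpy as np

L1, L2 = 0.5, 0.5  # assumed link lengths (illustrative values)

def forward_kinematics(q1, q2):
    """FK: joint angles -> end-effector position in the (e1, e2) plane."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return x, y

def inverse_kinematics(x, y, elbow_up=True):
    """IK: end-effector position -> one of the two joint-angle solutions."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))  # clip guards against round-off
    if not elbow_up:
        q2 = -q2
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return q1, q2
```

Composing `inverse_kinematics` with `forward_kinematics` should recover the original position, which is a handy sanity check.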


3. Euler-Lagrange Mechanics

Now that we have angles and angular velocities for every joint along the trajectory, how do we actually move from one angle (or angular velocity) to another?

We need to compute the dynamics of the robot, which relate changes in the robot’s state to the forces (or torques) applied at the robot’s joints. Very simplified, this is roughly Newton’s second law $F=ma$, written in the form

$ \dot{\boldsymbol{q}} = f(\boldsymbol{q}, \boldsymbol{u})$,

where $\boldsymbol{u}$ are the forces (or torques) applied on each joint.
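For reference, for a manipulator these Euler-Lagrange equations take the standard rigid-body form (here, unlike above, $\boldsymbol{q}$ denotes the joint angles only):

$$ \boldsymbol{M}(\boldsymbol{q})\ddot{\boldsymbol{q}} + \boldsymbol{C}(\boldsymbol{q},\dot{\boldsymbol{q}})\dot{\boldsymbol{q}} + \boldsymbol{g}(\boldsymbol{q}) = \boldsymbol{u}, $$

with inertia matrix $\boldsymbol{M}$, Coriolis and centrifugal terms $\boldsymbol{C}$, and gravity terms $\boldsymbol{g}$. Solving this equation for $\boldsymbol{u}$ given a desired motion is exactly the inverse model discussed next.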

At this point, assuming that we can solve the dynamics equation for $\boldsymbol{u}$, obtaining the so-called inverse model $\boldsymbol{u} = f^{-1}(\boldsymbol{q}, \dot{\boldsymbol{q}})$, we can compute for each point $\boldsymbol{q}^i$ in the trajectory the right forces to apply. With everything we have so far we can do simple open-loop control, which in theory is sufficient to solve the task as initially specified.
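A minimal open-loop sketch, assuming hypothetical `inverse_model` and `send_torques` functions (neither is defined in this post; they stand in for the inverse model $f^{-1}$ and the robot's torque interface):

```python
def open_loop_control(tau_joint, inverse_model, send_torques, dt):
    """Replay a joint-space trajectory with no feedback: compute the
    torque for each step from the inverse model and apply it blindly."""
    for i in range(len(tau_joint) - 1):
        q_i = tau_joint[i]                     # desired state at step i
        q_dot = (tau_joint[i + 1] - q_i) / dt  # desired state derivative
        u_i = inverse_model(q_i, q_dot)        # u = f^{-1}(q, q_dot)
        send_torques(u_i)
```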


4. Optimal Control. Which trajectory $\tau_{\textrm{task}}$ and $\boldsymbol{u}$?

The initial trajectory we defined, $\tau_{\textrm{task}}$, was arbitrarily chosen. What if instead we want to find a trajectory, and therefore the respective controls, that are optimal in some sense? Here we define optimal as the minimization of a cost function $J = \sum_{i=1}^N c(\boldsymbol{x}^i, \boldsymbol{u}^i) $, where $c:\mathcal{X} \times \mathcal{U} \to \mathbb{R}$ is an immediate cost computed for each step in the trajectory.

A very simple model assumes that the dynamics are linear in the state and control signal, and that the immediate cost is quadratic. This model is known as the Linear Quadratic Regulator (LQR) (here in simplified form):

$$ \begin{align} \min_{\{\boldsymbol{x}^i \}_{i=2}^N,\, \{\boldsymbol{u}^i \}_{i=1}^N} \quad & J=\sum_{i=1}^N {\boldsymbol{x}^{i}}^\top \boldsymbol{Q} \boldsymbol{x}^i + {\boldsymbol{u}^{i}}^\top \boldsymbol{R} \boldsymbol{u}^{i} \\ \textrm{s.t.} \quad & \boldsymbol{x}^{i+1} = \boldsymbol{A}\boldsymbol{x}^i + \boldsymbol{B}\boldsymbol{u}^i \end{align} $$

If we solve the optimization problem above, it turns out that the resulting sequence of controls is given by a time-dependent linear feedback (closed-loop) controller $\boldsymbol{u}^i = \boldsymbol{K}^i \boldsymbol{x}^i$, where the gains $\boldsymbol{K}^i$ are computed in closed form. We can then define a sequence of idealized states, giving rise to an optimal trajectory computed recursively with $$ \begin{align} & \boldsymbol{x}^1 \quad \textrm{given} \\ & {\boldsymbol{u}^*}^i = \boldsymbol{K}^i {\boldsymbol{x}^*}^i \\ & {\boldsymbol{x}^*}^{i+1} = \boldsymbol{A}{\boldsymbol{x}^*}^i + \boldsymbol{B}{\boldsymbol{u}^*}^i. \end{align} $$
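A minimal sketch of this backward-forward computation; the gains come from the standard Riccati recursion, which this post does not derive, and `lqr_gains` and `rollout` are illustrative names with $\boldsymbol{A}, \boldsymbol{B}, \boldsymbol{Q}, \boldsymbol{R}$ assumed given:

```python
import numpy as np

def lqr_gains(A, B, Q, R, N):
    """Backward Riccati recursion for the finite-horizon LQR.
    Returns the time-dependent feedback gains K^1, ..., K^{N-1}."""
    P = Q  # cost-to-go matrix at the final step
    gains = []
    for _ in range(N - 1):
        S = R + B.T @ P @ B
        K = -np.linalg.solve(S, B.T @ P @ A)  # gain for u^i = K^i x^i
        P = Q + A.T @ P @ (A + B @ K)         # propagate cost-to-go backwards
        gains.append(K)
    return gains[::-1]  # reorder from i = 1 to i = N-1

def rollout(A, B, x1, gains):
    """Forward pass: apply the feedback law to obtain the optimal trajectory."""
    xs, us = [x1], []
    for K in gains:
        us.append(K @ xs[-1])
        xs.append(A @ xs[-1] + B @ us[-1])
    return xs, us
```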

The optimal trajectory is then $ \tau_{\textrm{task}}^* = \{{\boldsymbol{x}^*}^i \}_{i=1}^N $. We can easily convert this task-space trajectory into a joint-space one, $ \tau_{\textrm{joint}}^* = \{{\boldsymbol{q}^*}^i \}_{i=1}^N $, using the Inverse Kinematics from section 2.
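Reusing the hypothetical `inverse_kinematics` sketch from section 2, and assuming `tau_task_star` holds the states ${\boldsymbol{x}^*}^i$, the conversion could look like:

```python
# Joint angles for each optimal task-space state (positions only; joint
# velocities can again be approximated by finite differences)
tau_joint_star = [inverse_kinematics(x[0], x[1]) for x in tau_task_star]
```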


5. State Estimation

So far we have a control law that depends on the state $\boldsymbol{q}$ (given that we have the IK).

But how do we get the robot’s state? We can measure the angles and angular velocities with sensors, e.g., a laser sensor that converts angle displacement to voltage. For simplicity, let us assume the relationship between the sensor reading $\boldsymbol{y}$ and the state is linear, given by $\boldsymbol{y} = \boldsymbol{C}\boldsymbol{x} $. To recover the state, the first thing we can try is

$$\boldsymbol{x} = \boldsymbol{C}^{-1}\boldsymbol{y}. $$

This relation is fine if $\boldsymbol{C}$ is invertible, but even then it might not be the best choice.

What if the sensor readings $\boldsymbol{y}$ are noisy measurements? Should we trust the observation?

$$\boldsymbol{y} = \boldsymbol{C}\boldsymbol{x} + \boldsymbol{\varepsilon}, \quad \boldsymbol{\varepsilon} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{\Sigma}). $$

With noisy measurements we should not trust the observation $\boldsymbol{y}$ blindly. A cleverer approach is to keep track of the state over time and check how well each new reading matches the observation model. If an observation is very far from what the transition model predicts, there may have been a problem with the sensor reading, for instance an overheated sensor.

Therefore, instead of applying the linear controller on the state given by $\boldsymbol{x} = \boldsymbol{C}^{-1}\boldsymbol{y}$, we apply it on an estimated (filtered) state $\hat{\boldsymbol{x}}$, $\boldsymbol{u} = \boldsymbol{K} \hat{\boldsymbol{x}}$.

How do we compute the estimated state $\hat{\boldsymbol{x}}$?

If the dynamics are linear and the noise follows a Gaussian distribution, it turns out that the estimated state is also a random variable that follows a Gaussian, from which we can compute its parameters (mean and variance) in closed form. This leads to the Kalman Filter. Nothing more, nothing less.
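A minimal sketch of one predict-update cycle, assuming the dynamics also carry Gaussian process noise with covariance `Sigma_proc` (an assumption beyond the measurement noise $\boldsymbol{\Sigma}$ above, here called `Sigma_meas`):

```python
import numpy as np

def kalman_step(x_hat, P, u, y, A, B, C, Sigma_proc, Sigma_meas):
    """One predict-update cycle for the linear-Gaussian model
    x' = A x + B u + w,  y = C x + eps."""
    # Predict: propagate the mean and covariance through the dynamics
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Sigma_proc
    # Update: correct the prediction with the new measurement
    S = C @ P_pred @ C.T + Sigma_meas              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)            # Kalman gain
    x_hat_new = x_pred + K @ (y - C @ x_pred)      # filtered mean
    P_new = (np.eye(len(x_hat)) - K @ C) @ P_pred  # filtered covariance
    return x_hat_new, P_new
```

The closed-loop controller from section 4 is then applied to the filtered mean, $\boldsymbol{u}^i = \boldsymbol{K}^i \hat{\boldsymbol{x}}^i$.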


Putting It All Together

We now have all the steps needed to perform the proposed task: move a robot arm from an initial to a final position in task space.

Summary of what we need:

1. A task specification: a desired trajectory $\tau_{\textrm{task}}$ in task space.
2. Kinematics: FK and IK models to map between task space and joint space.
3. Dynamics: the robot’s equations of motion and the corresponding inverse model.
4. Optimal control: a feedback controller, e.g. the LQR gains $\boldsymbol{K}^i$.
5. State estimation: a Kalman filter providing $\hat{\boldsymbol{x}}$ from noisy sensor readings.
