As I've been writing about the quadcopter, a couple of my colleagues and I have been working on cleaning up the code. One of the big goals has been to make sure that we're doing things the "right" way instead of the "just make it work so that we can show it off as a demo" way. We've also been providing a lot of feedback to R&D about stability, edit-time performance, etc. Progress has been a bit slow since I have a real day job as well, but we're being careful to get everything fixed and reusable.

Alright, enough messing around. Let's get into the nitty-gritty controls stuff.

LQR stands for "linear quadratic regulator." Wikipedia has a pretty good description of it here, and there are some other good links here and here. The control algorithm comes from a state-space representation of the system, and is essentially some matrix math that "optimally" regulates the system about a setpoint. Like any full-state-feedback controller, it places the poles of the closed-loop system in specific places...in other words, magic. Also like any other control system, an LQR controller has to be tuned, but there are some LabVIEW tools for that. And since this is a linear control technique, we need to turn the nonlinear model into a linear one. Here again we're lucky, because LabVIEW has some built-in tools for that.

However, before we get to the linearization, we need to add a couple of elements to the model. As I've written about before (and as we've discussed in the comment sections of the posts), the model is entirely defined within the reference frame of the quadcopter. That means that the accelerations that result from the propeller thrusts are defined with respect to the body of the copter. Intuitively that should make sense because the sensors and actuators are also attached to the quadcopter. For "simple" PID control, this all works and is all you need. However, for the LQR algorithm, we want to control the quadcopter in the global reference frame. We want to control its position and orientation in the X-Y-Z reference frame. Therefore, we need to translate the body-fixed accelerations and rates into the global reference frame.

The following equations make that all happen (in the ME-like notation that I've developed). They take the body-fixed angular rates (pqr in aerospace notation) and express them in the global reference frame.
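For reference, here is the standard form of this transformation, assuming the usual ZYX Euler convention with roll $\phi$, pitch $\theta$, and yaw $\psi$ (the ME-like notation in the original figures may look cosmetically different):

```latex
\begin{aligned}
\dot{\phi}   &= p + q\,\sin\phi\tan\theta + r\,\cos\phi\tan\theta \\
\dot{\theta} &= q\,\cos\phi - r\,\sin\phi \\
\dot{\psi}   &= \frac{q\,\sin\phi + r\,\cos\phi}{\cos\theta}
\end{aligned}
```

Note the $\cos\theta$ in the denominator: this transformation is singular at $\theta = \pm 90^\circ$, which is one reason hover-centric linearizations about small pitch angles are convenient.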

These equations perform the translation for the linear velocities (uvw in aerospace notation).
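In the same ZYX Euler convention, the linear-velocity translation is just the body-to-global rotation matrix applied to $[u, v, w]^T$ (shorthand: $c = \cos$, $s = \sin$; again, my ME-like notation may look slightly different):

```latex
\begin{bmatrix}\dot{X}\\ \dot{Y}\\ \dot{Z}\end{bmatrix}
=
\begin{bmatrix}
c_\psi c_\theta & c_\psi s_\theta s_\phi - s_\psi c_\phi & c_\psi s_\theta c_\phi + s_\psi s_\phi \\
s_\psi c_\theta & s_\psi s_\theta s_\phi + c_\psi c_\phi & s_\psi s_\theta c_\phi - c_\psi s_\phi \\
-s_\theta & c_\theta s_\phi & c_\theta c_\phi
\end{bmatrix}
\begin{bmatrix}u\\ v\\ w\end{bmatrix}
```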

The screen shot of the LabVIEW simulation shows these equations in the second block of text-based code to the right. The nonlinear model then has outputs in both the body-fixed and global reference frames. Please excuse the mixing of notations!

Before the model can be plugged into any of the tools for developing the linear controller, it needs to be linearized. Luckily, there's a tool for that. From a LabVIEW window, select Tools>>Control Design and Simulation>>Linearize Subsystem... to launch the linearizer window (note that you can do this in a data-flow / block-diagram way, but the wizard is frequently easier to use). You should get a window that looks a lot like this one...

Select "From Disk," find the subsystem that you want to linearize and hit the "Refresh" button. Switch to the "Initial Inputs," "Initial Outputs" and "Initial States" tabs and fill in the appropriate values. Once you have everything the way you want it, hit the "Trim" button. This trims the model to a stable equilibrium condition, and should also populate the list boxes of inputs, outputs and states. You might want to go back and update the initial inputs with the trimmed values, and then click the "Linearize" button. This will generate a linear state-space model of the system...without having to do it by hand!!! If you've ever had the amazing joy of doing this the old-fashioned way, you'll understand why I'm so excited.
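If you're curious what the wizard is doing under the hood, numerical linearization is conceptually just finite-difference Jacobians of the dynamics about the trim point. Here's a rough sketch in Python with numpy (the function name `linearize` and the step size `eps` are my own illustration, not the LabVIEW internals):

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference Jacobians of dx/dt = f(x, u) about a trim point (x0, u0).

    Returns (A, B) such that, near the trim point,
    f(x, u) ~= f(x0, u0) + A (x - x0) + B (u - u0).
    """
    n, m = len(x0), len(u0)
    f0 = f(x0, u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    # Perturb each state one at a time to build the columns of A.
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f0) / eps
    # Perturb each input one at a time to build the columns of B.
    for j in range(m):
        du = np.zeros(m)
        du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B
```

The wizard's "Trim" step first finds the equilibrium (x0, u0) where f is (nearly) zero; the "Linearize" step then evaluates Jacobians like these at that point.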

Now you'll have an .lti model and a linearized subsystem that you can save for later use. The next step is to design the gains for the LQR controller. Hey look, there's a tool for that as well! The following chunk of code (taken from Quadcopter Dynamics CL Comp.vi) shows how to put that together. A typical way of picking the Q-matrix comes from the C-matrix of your linearized system (Q = rho*C^T*C). There are other tricks and tools for tuning the LQR gains, but the "make everything diagonal and tweak the magnitude" method works pretty well here. I'm also checking the eigenvalues of the closed-loop system to make sure that everything is stable (inside the unit circle for the discrete system). This generates a set of optimal gains for the LQR controller based on the model and the Q and R matrices that you pick.
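For readers without the LabVIEW Control Design toolkit, the same recipe can be sketched numerically: the discrete-time LQR gain comes from iterating the Riccati difference equation. This Python/numpy sketch (the `dlqr` helper and the double-integrator example are mine, not from the quadcopter code) builds Q from the C-matrix, solves for the gain, and checks the closed-loop eigenvalues against the unit circle:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via the Riccati difference equation.

    Iterates P = Q + A^T P (A - B K) with K = (R + B^T P B)^-1 B^T P A
    until (approximate) convergence, then returns the final gain K.
    """
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy example: a discretized double integrator (dt is an arbitrary choice here).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt * dt], [dt]])
C = np.array([[1.0, 0.0]])

rho = 10.0
Q = rho * C.T @ C          # the Q = rho*C^T*C trick from the post
R = np.array([[1.0]])      # penalize control effort

K = dlqr(A, B, Q, R)
# Stability check: all closed-loop eigenvalues inside the unit circle.
eig = np.linalg.eigvals(A - B @ K)
assert np.max(np.abs(eig)) < 1.0
```

The control law is then simply u = -K x, with K applied as a gain block in the simulation loop.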

Once you have a model and some gains, it's always a good idea to make sure that the linear model matches the original non-linear model. Here I've closed the loop with an LQR block and I'm using it to drive both the non-linear and linearized versions of the model. You'll see that if you start from the initial conditions, the model will regulate to the state that we linearized the model about (in this case everything is zero except for Z, which is 1 m because that's the starting state of our robot...it's supposed to take off from a platform and fly from there). You'll also see that the linear and non-linear models perform pretty similarly. This is all well and good, except that it only regulates the model about a specific set of initial conditions. "What happens if you want to track a setpoint?" you might ask. Well, that's a great question!

The following image shows a block diagram of a compensator that has been added to the LQR algorithm.

Picking the reference gain is a bit tricky, and requires a little intuition about how the inputs affect the outputs. In this case, I know how the inputs affect the state vector, so I've used that to fill in the values. I know this is a bit of a hand-waving explanation; there are actually formal methods for doing this, but we just went with the heuristic approach.
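One of those formal methods is a steady-state feedforward gain: with the control law u = N·r - K·x, you can solve for the N that makes the output settle exactly on the reference. This is a sketch of that calculation for a discrete-time system (the function name and the square outputs-equal-inputs assumption are mine; the post's heuristic values were picked by hand instead):

```python
import numpy as np

def reference_gain(A, B, C, K):
    """Feedforward gain N for u = N r - K x (discrete time).

    At steady state x = A x + B (N r - K x), so
    y = C (I - A + B K)^-1 B N r; choosing
    N = [C (I - A + B K)^-1 B]^-1 makes y -> r.
    Assumes as many outputs as inputs (square M).
    """
    n = A.shape[0]
    M = C @ np.linalg.solve(np.eye(n) - A + B @ K, B)
    return np.linalg.inv(M)
```

With this N in place, the closed loop tracks step references in the outputs picked out by C, rather than just regulating back to the linearization point.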

Here's a screen shot of the model running closed-loop. You'll see that it starts from a (0,0,1) position in (X,Y,Z) and ends up at the reference of (0.5,0,2).

Phew! It took a long time to get this all cleaned up and posted, but this is where we're getting into some of the meatier control algorithm stuff. Unlike the PID approach, the LQR algorithm works relatively well even when there's noise introduced to the system. Also, the LQR approach gives you the ability to individually control all of the states of the system in a coordinated manner. The downside of LQR by itself is that the algorithm assumes perfect (and full) state feedback. To fix that, the next step will be to add a linear Kalman Filter (and eventually a non-linear KF or Extended Kalman Filter) to the control algorithm. That will use the model of the system as well as all of the sensor feedback to create a balanced estimate of the state values, weighing the model-based observer against the noisy / redundant sensor data.
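As a preview of that next step, the linear Kalman Filter is just two small operations repeated every time step: a model-based prediction and a measurement-based correction. A minimal sketch in Python/numpy (variable names and the noise covariances `Qn`/`Rn` are my own placeholders, not values from the quadcopter code):

```python
import numpy as np

def kf_predict(x, P, A, B, u, Qn):
    """Propagate the state estimate and its covariance through the model."""
    x = A @ x + B @ u
    P = A @ P @ A.T + Qn   # model uncertainty grows by the process noise Qn
    return x, P

def kf_update(x, P, C, y, Rn):
    """Blend the prediction with a noisy measurement y."""
    S = C @ P @ C.T + Rn               # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)     # Kalman gain: trust model vs. sensor
    x = x + K @ (y - C @ x)            # correct toward the measurement
    P = (np.eye(len(x)) - K @ C) @ P   # uncertainty shrinks after the update
    return x, P
```

The estimated x then feeds the LQR gain in place of the raw (noisy, incomplete) sensor readings, which is exactly the "balanced picture" described above.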

**Attachments:**

- closed loop sim with LQR.zip (617.9 K)