“Modern Control” versus “Linear Control”

Control Engineering
Abolfazl Mohammadijoo


Control Engineering is an interdisciplinary field that is used across other engineering fields such as “Mechanical Engineering”, “Electrical Engineering”, “Aerospace Engineering” and “Chemical Engineering”. Most universities do not offer a dedicated bachelor's degree in control engineering, but a “Linear Control” course is a mandatory bachelor-level course in all of the above engineering fields. In this article, we first talk about control engineering in general and then explain “Linear Control” and “Modern Control” in two separate sections.

If we want to describe “Control Engineering” in one paragraph, we could say it is all about driving some desired variable or state to a desired condition. For example, you have the governing dynamic equations of a power plant and you want the temperature of a specific part of the plant to sit exactly at some desired value in degrees Celsius. Here the desired variable or state is the temperature, and the desired condition is that target value. The control input could be the heat transferred through the piping, which we can manipulate. Another example is robots or vehicles: you have the dynamic equations of motion and you want the robot or vehicle to reach a desired point or follow a desired path. Your control inputs are the forces produced by actuators (usually electric motors). In many cases you need to observe the variable of interest and, if it drifts away from the desired point, adjust the control input. In this situation we have a control loop with feedback, or in other words, “closed-loop control”. Feedback signals, like the temperature of the part we care about or the location of the robot or vehicle, are provided by sensors. The diagram below describes “Control Engineering” pretty well:

However, like many other sciences, this field is not as easy as it seems. If the dynamic equations are linear differential equations, your job is easier. But very often the dynamic equations are nonlinear, and many “Nonlinear Control” techniques have been developed for this purpose. Sometimes there are many disturbances or uncertainties in your dynamic model, and you need to use “Adaptive Control” or “Robust Control” approaches. Sometimes you don't even have a dynamic model of your system and you only know your desired output and your control input; in these situations you need “System Identification” approaches, and you can also use a “Fuzzy Controller” or a “Neural-Network-Based Controller”. In the rest of this article, we first talk about “Linear Control” and then about “Modern Control” and its advantages over linear control. The other control methods deserve their own articles in the future.

Linear Control (Classic Control)

In linear control, we only study LTI systems, which is an abbreviation for “Linear Time Invariant” systems. It means the dynamic equations of our system must be “Linear” and “Time Invariant”. We can write the dynamic equations of such a system as the equation below:
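$$ a_n Y^{(n)} + a_{n-1} Y^{(n-1)} + \dots + a_1 \dot{Y} + a_0 Y = b_m X^{(m)} + b_{m-1} X^{(m-1)} + \dots + b_1 \dot{X} + b_0 X $$

(written here in a general form, with constant coefficients  a_i  and  b_j )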

In the above equation, we consider the variable Y as our output and X as our input,  Y^{(n)}  is the n-th derivative of Y,  X^{(m)}  is the m-th derivative of X, and all of them are functions of time. The “Linear” condition means that X, Y and their derivatives must appear linearly (no powers, no products, and no nonlinear functions like cos, sin, etc.), and the “Time Invariant” condition means that the coefficients of X, Y and their derivatives must be constants, not functions of time.

The dominant approach in linear control for solving LTI systems is the “Laplace Transform”, or “S-domain”, method. We take the Laplace transform of our dynamic equations, and the equations are transformed from the “time domain” to the “S-domain” or “frequency domain”. The job of the Laplace transform is to convert a system of differential equations into a system of polynomial (algebraic) equations, and obviously working with polynomials is easier than working with differential equations. The rest of our analysis in a “Linear Control” course is done in the S-domain.
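As a small illustration (this example is not from the original figures), consider a system described by  \ddot{y} + 3\dot{y} + 2y = x(t) . Taking the Laplace transform with zero initial conditions turns the differential equation into an algebraic one:

$$ (s^2 + 3s + 2)\,Y(s) = X(s) \quad\Rightarrow\quad \frac{Y(s)}{X(s)} = \frac{1}{s^2 + 3s + 2} $$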

There are two visual approaches in linear control that make a system easier to understand: the “SFG” and the “TF” approach. SFG stands for “signal flow graph” and TF stands for “transfer function” (block diagram). In the picture below, the chart on the right is an SFG and the chart on the left is a TF block diagram.

The first step in linear control is finding the transfer function from the dynamic equations of the system. The question is: what is a transfer function? The transfer function of a system is defined as the ratio of the Laplace transform of the output to the Laplace transform of the input, with all initial conditions set to zero. With T(s) the transfer function of the system, C(s) the output and R(s) the input:
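$$ T(s) = \frac{C(s)}{R(s)} $$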

There are many techniques for finding the transfer function, not only from the dynamic equations of the system but also from its visual representations, the SFG and the TF block diagram. One of these techniques is Mason's gain formula. After finding the transfer function, we are ready to perform control analysis of the system, stability analysis and, finally, the design of a linear controller for our system.

The first type of analysis is finding the characteristics of our system. We apply an input to the system, usually a step function, and then we find properties of the system such as “Dead Time”, “Rise Time”, “Peak Time”, “Percent Overshoot”, “Settling Time”, “Steady-State Error” and so on. A transfer function looks like this:
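$$ T(s) = \frac{b_m s^m + b_{m-1} s^{m-1} + \dots + b_1 s + b_0}{a_n s^n + a_{n-1} s^{n-1} + \dots + a_1 s + a_0} $$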

If we solve the polynomial in the numerator of T(s), we find the “zeros” of the system, and if we solve the polynomial in the denominator of T(s), we find the “poles” of the system. Most of our analysis of linear systems is based on the poles and zeros of the transfer function of our system. In realistic and useful systems, we always have  n \geq m .
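As a quick sketch (using SciPy, which the article itself does not reference; the transfer function below is made up for illustration), we can compute the poles, zeros and step-response characteristics numerically:

```python
import numpy as np
from scipy import signal

# Example transfer function T(s) = 1 / (s^2 + 0.6 s + 1), an underdamped 2nd-order system
sys = signal.TransferFunction([1.0], [1.0, 0.6, 1.0])

print("zeros:", sys.zeros)   # roots of the numerator polynomial
print("poles:", sys.poles)   # roots of the denominator polynomial

# Step response and a few approximate time-domain characteristics
t, y = signal.step(sys)
y_ss = y[-1]                                  # steady-state value (approximate)
overshoot = (y.max() - y_ss) / y_ss * 100     # percent overshoot
peak_time = t[np.argmax(y)]                   # time at which the peak occurs
print(f"percent overshoot ≈ {overshoot:.1f} %, peak time ≈ {peak_time:.2f} s")
```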

Sometimes we know the desired characteristics of the system, like “Settling Time” or “Percent Overshoot”, and we also know the order of our transfer function (the highest degree of the polynomial in the denominator), and then we place our poles and zeros so as to reach the desired characteristics. In that case we can tune the system components themselves, which means tuning the transfer function, but in real-world systems we have a fixed plant, or the model of the plant is unknown, and we need to tune the controller parameters to reach the desired characteristics. The main controllers we design in a linear control course are the P controller, the PD controller and the PID controller, which we describe later in this article.

One important thing to discover about a system is its “stability” in response to inputs such as a step function or a ramp function. A system is “stable” if the natural response of the system decays to zero as time goes on, “unstable” if the response tends to infinity, and “marginally stable” if the response neither decays nor grows (for example, it oscillates with constant amplitude). If the output of the system is bounded for every bounded input, we call it “BIBO stable”, or bounded-input bounded-output stable.

One criterion for investigating the stability of a linear system is the “Routh-Hurwitz” criterion. Here we only need the coefficients of the “Characteristic Equation”, which is the denominator of the transfer function. We can also investigate stability with “Root Locus” graphs, which are based on the locations of the closed-loop poles as some tuning parameter of the closed-loop system varies. If all poles of the closed-loop system lie in the left half of the s-plane, the system is stable; in other words, the real parts of all closed-loop poles must be negative.
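As a minimal sketch (not from the original article), the following Python function builds the Routh array for a given characteristic polynomial; it skips the special cases of a zero in the first column or an all-zero row:

```python
import numpy as np

def routh_array(coeffs):
    """Build the Routh array for a polynomial given by its coefficients
    (highest power first). Minimal sketch: it does not handle the special
    cases of a zero in the first column or an all-zero row."""
    n = len(coeffs)
    cols = (n + 1) // 2
    R = np.zeros((n, cols))
    R[0, :len(coeffs[0::2])] = coeffs[0::2]   # s^n row
    R[1, :len(coeffs[1::2])] = coeffs[1::2]   # s^(n-1) row
    for i in range(2, n):
        for j in range(cols - 1):
            R[i, j] = (R[i-1, 0] * R[i-2, j+1] - R[i-2, 0] * R[i-1, j+1]) / R[i-1, 0]
    return R

# The system is stable (by Routh-Hurwitz) if every entry in the first column
# of the array has the same sign, i.e. there are no sign changes.
R = routh_array([1, 6, 11, 6])   # characteristic equation s^3 + 6s^2 + 11s + 6 = 0
print(R[:, 0])                   # all entries positive -> stable
```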

We can reach the desired operating condition of our system by designing a “Controller” or “Compensator” for it. Since there are infinitely many possible controllers and compensators, the simplest one that works is preferred. There are three types of compensators: the “Lead Compensator”, the “Lag Compensator” and the “Lead-Lag Compensator”. The transfer function of a compensator has this form:
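$$ C(s) = K\,\frac{s + z}{s + p} $$

(reconstructed here in its usual first-order form: it acts as a lead compensator when the zero is closer to the origin than the pole,  z < p , as a lag compensator when  z > p , and a lead-lag compensator cascades one of each)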

It is worth noting that compensators are essentially controllers, just the simplest ones. A linear compensator or controller in its general form looks like this:
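$$ C(s) = K_p + \frac{K_i}{s} + K_d\, s $$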

If all of the above coefficients are non-zero, we call this controller a “PID” controller, which is an abbreviation for Proportional-Integral-Derivative. If we only have  K_p , we call it a P controller, which is the simplest one, and if we have only the proportional and derivative terms, we call it a PD controller, as below:
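$$ C(s) = K_p + K_d\, s $$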

When we analyze our system in the time domain, our input is usually a step function, a ramp function or a combination of them. If our input is periodic, like a sine function, we analyze the system in the frequency domain. It can be shown that for an LTI system, if the input is a sinusoid, the output is a sinusoid of the same frequency, but with a different amplitude and phase (the amplitude is scaled by  |T(j\omega)|  and the phase is shifted by  \angle T(j\omega) ). Frequency analysis has some advantages over time-domain analysis, for example:

  • In frequency analysis, we can use experimental data measured from our plant, and we don't need a mathematical model of the plant.
  • We can account for noise and disturbances in our model in frequency analysis.
  • Some frequency-analysis techniques carry over to “Robust Control” and “Nonlinear Control”.

The three most important techniques in frequency analysis are the “Bode Plot”, the “Nyquist Plot” and the “Nichols Plot”. Each of these three methods needs its own description, which can be found in any linear control textbook. “Phase Margin” and “Gain Margin” are two criteria for investigating the stability of a system in frequency analysis. Nyquist also has its own stability criterion in the frequency domain.
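As a small sketch (again using SciPy, which the article does not itself reference; the transfer function below is made up for illustration), the data behind a Bode plot and a rough phase-margin estimate can be computed like this:

```python
import numpy as np
from scipy import signal

# Open-loop transfer function G(s) = 50 / (s^2 + 2s + 10), chosen only for illustration
G = signal.TransferFunction([50.0], [1.0, 2.0, 10.0])

# Magnitude (in dB) and phase (in degrees) over a range of frequencies (rad/s)
w, mag_db, phase_deg = signal.bode(G, w=np.logspace(-1, 2, 200))

# Crude numerical estimate: the gain-crossover frequency is where the magnitude
# crosses 0 dB, and the phase margin is 180 degrees plus the phase there
# (this assumes the magnitude crosses 0 dB exactly once).
idx = np.argmin(np.abs(mag_db))
print(f"gain crossover ≈ {w[idx]:.2f} rad/s, phase margin ≈ {180 + phase_deg[idx]:.1f} deg")
```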

We can also design controllers in the frequency domain, using frequency-response parameters like “Gain Margin”, “Phase Margin”, crossover frequencies and so on. As already mentioned, in frequency analysis we don't necessarily need a mathematical model of the plant, and we can use the “Bode Plot”, “Nyquist Plot” and “Nichols Plot” in controller design.

PID controllers are the most frequently used linear controllers, especially in industrial applications, and pre-built PID modules can be found on the market. PID controllers can be used when we don't have a mathematical model of the plant, and their parameters can be tuned in the field. For example, the Ziegler–Nichols method gives us rules for tuning the PID coefficients.
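As a minimal sketch of how a PID controller works in discrete time (this is only an illustration, not an industrial-grade implementation; the toy plant below is made up):

```python
# Minimal discrete-time PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy first-order plant dy/dt = -y + u, simulated with Euler integration
dt, y = 0.01, 0.0
pid = PID(kp=4.0, ki=2.0, kd=0.1, dt=dt)
for _ in range(1000):
    u = pid.update(setpoint=1.0, measurement=y)   # feedback: the sensor value goes into the controller
    y += (-y + u) * dt                            # the plant responds to the control input
print(f"output after 10 s: {y:.3f}")              # settles close to the setpoint 1.0
```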

Modern Control

In modern control, we still talk about LTI systems, as in linear control, and our system has the form below:
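$$ a_n Y^{(n)} + a_{n-1} Y^{(n-1)} + \dots + a_1 \dot{Y} + a_0 Y = b_m X^{(m)} + \dots + b_1 \dot{X} + b_0 X $$

(the same general high-order differential equation we started from in the linear control section)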

But the main difference is that our analysis is in the time domain instead of the frequency domain or S-domain. The question is: why didn't we use time-domain analysis in linear control in the first place? The trick we used in linear control was to take the Laplace transform of the system and convert a higher-order differential equation into a polynomial equation. In modern control we need a new trick, because clearly we don't want to work with the above equation as it is.

This time we convert one differential equation of order n (the highest order of derivative in the equation) into n separate first-order differential equations. Solving a first-order differential equation is very easy, and its solution is taught in the first lessons of an “Ordinary Differential Equations” course in the first semesters of an engineering bachelor's degree. Our new system of equations has the form below:
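$$ \dot{x}(t) = A\,x(t) + B\,u(t) $$
$$ y(t) = C\,x(t) + D\,u(t) $$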

In the above equations, A, B, C and D are matrices, not scalar values. Therefore, if we expand the matrix equation, we get n separate first-order differential equations. Here x is the vector of state variables, u is the input and y is the output.

As a simple example, consider the mechanical system below, a Mass-Spring-Damper model.

The equation of this system is:
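$$ m\,\ddot{x}(t) + c\,\dot{x}(t) + k\,x(t) = F(t) $$

(with m the mass, c the damping coefficient and k the spring stiffness)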

This is a second-order differential equation. How do we convert it into two separate first-order differential equations?!

Our trick is as follows. We define:
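$$ x_1(t) = x(t), \qquad x_2(t) = \dot{x}(t) $$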

So our new system of equations is as below:
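$$ \dot{x}_1 = x_2 $$
$$ \dot{x}_2 = -\frac{k}{m}\,x_1 - \frac{c}{m}\,x_2 + \frac{1}{m}\,F(t) $$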

As you can see, the above is a system of first-order equations. This form is also called the State-Space form, and modern control is all about converting the dynamic model of a system into state-space form and then working with it.

In the above equations, our input u(t) is F(t), and we have:
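$$ A = \begin{bmatrix} 0 & 1 \\ -\dfrac{k}{m} & -\dfrac{c}{m} \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ \dfrac{1}{m} \end{bmatrix} $$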

Usually, in equations of motion, the x variable, i.e. the position of the system, is what matters to us, so we take it as our output. Therefore, we have the equation below for the output variable:
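$$ y(t) = x(t) = x_1(t) $$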

Thus, the C and D matrices are as below:
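$$ C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad D = \begin{bmatrix} 0 \end{bmatrix} $$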

After converting our equations to state-space form, we use linear algebra techniques to solve them. Therefore, a solid background in linear algebra is a must for this course, and modern control is usually taught as a graduate course in universities.
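As a short sketch (using SciPy, with made-up parameter values m = 1 kg, c = 0.5 N·s/m and k = 2 N/m), the mass-spring-damper above can be put into state-space form and simulated like this:

```python
import numpy as np
from scipy import signal

m, c, k = 1.0, 0.5, 2.0          # illustrative mass, damping and stiffness values

# State-space matrices of the mass-spring-damper (states: x1 = position, x2 = velocity)
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])       # output = position
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)

# Simulate the response to a constant force F(t) = 1 N, starting from rest
t = np.linspace(0.0, 20.0, 500)
u = np.ones_like(t)
t_out, y, x = signal.lsim(sys, U=u, T=t)
print(f"final position ≈ {y[-1]:.3f} m (steady state F/k = {1.0 / k:.3f} m)")
```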

We can summarize the main differences between modern control and classic control (linear control) in these items:

  • Modern control is based on time-domain analysis, while classic control (linear control) is based on frequency-domain analysis. In other words, modern control is based on state variables and classic control is based on the transfer function.
  • The state-space model of a dynamic system is not unique and depends on the state variables we choose, but the transfer function of a dynamic system is unique.
  • The state-space approach of modern control can be applied to linear systems, nonlinear systems, time-invariant systems, time-variant systems, Single-Input/Single-Output (SISO) systems and Multi-Input/Multi-Output (MIMO) systems, whereas the classic control approach is only capable of analyzing Linear Time Invariant (LTI) systems.
  • In modern control, we have information about the internal states of the system, and the state variables can be used as feedback, but in classic control (linear control) we don't have any information about the internal states of the system.
  • In modern control, the state variables don't need to be physical quantities and don't need to be measurable or observable, but in classic control (linear control) the input and output variables must be measurable.
  • The modern control approach can be used in optimal control design and adaptive control design.

There are multiple representations of a system, such as “Ordinary Differential Equations”, “State-Space”, “Transfer Function” and “Signal Flow Graph”. In a modern control course you learn how to convert one representation to another.

For example, how can we extract a state-space representation when we have the transfer function of the system? For this purpose, there are three techniques: “Canonical Decomposition”, “Series Decomposition” and “Parallel Decomposition”.

To solve the state-space equations, we use linear algebra techniques, although many software packages, like MATLAB, do this pretty well for us. Below is the simple, step-by-step form of the solution of the state-space equation:
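$$ x(t) = e^{A t}\,x(0) + \int_0^t e^{A(t-\tau)}\,B\,u(\tau)\,d\tau $$
$$ y(t) = C\,x(t) + D\,u(t) $$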

In the above solution, when we want to calculate  e^{At} , we are dealing with A as a matrix, not a scalar, and B also appears inside the integral as a matrix; that is why we need linear algebra techniques. There are several approaches for evaluating this solution, such as the Cayley-Hamilton method. We can also use a modal transform to convert our matrices into modal (diagonal) form, which makes solving the equations easier; this is another linear algebra approach. We encourage readers to consult modern control textbooks to learn more about how to solve state-space equations.

For stability analysis in state-space form, when our system is in the form  \dot{X} = A X(t) , we first calculate the eigenvalues of the matrix A. In a stable system, the real parts of all eigenvalues are negative (less than zero). Another method for investigating the stability of systems in state-space form is the “Lyapunov Method”. In this approach, we only need the differential equations of the system, not the solution of the dynamic equations. The Lyapunov approach is applicable to linear and nonlinear systems, and to time-variant and time-invariant systems alike.
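As a tiny sketch (reusing the illustrative mass-spring-damper values m = 1, c = 0.5, k = 2 from the earlier code), the eigenvalue test looks like this:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])               # mass-spring-damper A matrix with m=1, c=0.5, k=2

eigvals = np.linalg.eigvals(A)
print("eigenvalues:", eigvals)
print("stable:", np.all(eigvals.real < 0))  # stable if every eigenvalue has a negative real part
```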

Other important concepts in modern control are “Controllability” and “Observability”. Controllability describes the relation between the input and the state variables, so the A and B matrices are involved in controllability analysis. We call a system controllable if we can design u(t) in such a way that every state variable x_i can be driven from an initial value x_i(0) to any desired value in a finite time T. If a system is controllable:

  • It is possible to design linear state feedback for the system.
  • An unstable system can be stabilized.
  • A slow system can be sped up.
  • The natural frequencies can be changed.
  • The existence of solutions to certain optimal control problems can be assured.

Observability describes the relation between the output and the state variables, so the A and C matrices are involved in observability analysis. An LTI system is observable if we can reconstruct every state variable x_i(t) only by knowing the input u(t) and the output y(t).

There are several approaches to investigating the controllability and observability of systems in state-space form. One of these approaches uses the controllability and observability “Gramian” matrices.
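A simpler test (the standard rank test, not the Gramian approach mentioned above) checks whether the controllability matrix  [B \;\; AB \;\; A^2B \;\dots]  and the corresponding observability matrix have full rank; a rough sketch in Python:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, A^2 B, ..., A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; CA^2; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# Mass-spring-damper matrices from the earlier example (m=1, c=0.5, k=2)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
print("controllable:", np.linalg.matrix_rank(ctrb(A, B)) == n)   # True: rank 2
print("observable:  ", np.linalg.matrix_rank(obsv(A, C)) == n)   # True: rank 2
```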

We can also design controllers for systems in state-space form. One of these controller design approaches is “Pole Placement”. There are three methods for pole placement: “State Feedback” design, “Ackermann's Formula” and the “Similarity Transformation” method.
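As a sketch (using SciPy's place_poles routine, which is one implementation of this idea rather than any specific method from the article), a state-feedback gain K can be chosen so that the closed-loop matrix A - BK has the desired poles:

```python
import numpy as np
from scipy import signal

# Mass-spring-damper matrices again (m=1, c=0.5, k=2)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])

desired_poles = np.array([-2.0, -3.0])     # where we want the closed-loop poles to be
result = signal.place_poles(A, B, desired_poles)
K = result.gain_matrix                     # state-feedback gain, u = -K x

# Verify: the eigenvalues of the closed-loop matrix A - B K match the desired poles
print("K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```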

In the pole placement approach, we assume that all state variables are available for feedback, but in real-world systems some state variables are not measurable. For those state variables we need to design an estimator, or observer. We can design a “full state observer” for the whole state, or, if only some state variables are missing, a “reduced-order observer”. The details of controller and observer design for state-space systems can be found in many modern control textbooks.
