Setting a PID controller in practice

PI and PID controllers are the most commonly used controllers in practice. Although the control law itself is very simple, choosing suitable controller parameters is anything but trivial. We present a practical, well-proven method for setting the parameters.

Manuel Graber


Adjusting the controller step by step

PI and PID controllers are the most frequently used controllers in practice. Although the control law itself is very simple, selecting suitable tuning parameters is anything but trivial. There is a variety of theoretically and empirically derived methods for determining optimal or at least good parameters for a concrete application (controlled system). However, the mathematical-theoretical methods are often too complex to be practical, and empirically derived methods such as the frequently taught Ziegler-Nichols rules can give very poor results in practice.

In the following we present a step-by-step method that has proven itself in numerous applications, both on real plants and in system simulation. The results are certainly not optimal in the theoretical sense, but they reliably lead to good behavior of the closed control loop.

The individual steps are:

  1. Check the direction of action of the controlled system
  2. Set the pure proportional component (P)
  3. Add integral component (I)
  4. Add differential component (D)

In the following we go through the individual steps in detail. We assume a basic understanding of control engineering and of the terms controlled system, manipulated variable, controlled variable, setpoint and control loop. If these are unfamiliar, it is better to start with the basics of the control loop.

Direction of action of the controlled system

The basic idea of closed-loop control is to change the behavior of a dynamic system through targeted feedback. For feedback in dynamic systems, the sign is of elementary importance. Positive feedback leads to unstable, i.e. exponentially growing, behavior. A current example is the spread of a pandemic: more infected people lead to more infections and thus to even more infected people. Such positive amplification is undesirable in control engineering; control loops must always be closed in such a way that the feedback is negative.

In practice this means: choose the sign of the controller opposite to the direction of action of the controlled system. The default setting of a controller usually assumes a positive direction of action of the controlled system and therefore feeds back the actual value of the controlled variable with a negative sign, because:

plus (controlled system) × minus (controller) = minus (feedback in the control loop)

This works perfectly, for example, with a temperature controller driving an electric heater for a water bath. A higher heating power (manipulated variable) causes a higher temperature of the water bath (controlled variable). Because the controller takes the actual value into account with a negative sign, the feedback of the control loop is negative, which fulfills a prerequisite for stability.

If the same controller were to drive a refrigeration machine that cools the water bath, the direction of action of the controlled system, cooling capacity → temperature, would be negative. If the controller were left at its factory setting, the control loop would always be unstable. For such cases you have to actively change the sign of the controller.

The direction of action of a controlled system can usually be determined easily with a few physical considerations. Personally, I prefer the trial-and-error method: switch on the controller with the I component active and observe whether the manipulated variable moves in the right direction; if not, change the sign of the controller.
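As a small illustration of the sign rule, here is a minimal sketch in Python. The function name and the `reverse_acting` flag are assumptions made for this example, not a standard controller API.

```python
def controller_output(kp, setpoint, measurement, reverse_acting=False):
    """P controller with selectable direction of action.

    For a direct-acting plant (more heating power -> higher temperature)
    the actual value is fed back negatively via e = setpoint - measurement.
    For a reverse-acting plant (more cooling power -> lower temperature)
    the overall sign must be flipped, otherwise the loop feedback is positive.
    """
    e = setpoint - measurement
    sign = -1.0 if reverse_acting else 1.0
    return sign * kp * e

# Heater example: leave the default; chiller example: flip the sign.
u_heater = controller_output(kp=2.0, setpoint=60.0, measurement=55.0)
u_chiller = controller_output(kp=2.0, setpoint=10.0, measurement=15.0,
                              reverse_acting=True)
```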

Control parameters

There are different mathematical representations of the PID control law. They are all equivalent but lead to different definitions of the tuning parameters. It is important to understand how the parameters are defined in your specific controller in order to adjust them correctly.

The basic form of a PID controller can be described by the following equation:

\[ u(t) = K_P \, e(t) + K_I \int_0^t e(\tau)\, \mathrm{d}\tau + K_D \, \frac{\mathrm{d}e(t)}{\mathrm{d}t} \]

The manipulated variable \(u\) is calculated as the sum of the P, I and D components. Here \(e\) denotes the control error, i.e. the difference between the setpoint and the actual value of the controlled variable. The individual components are weighted with the independent parameters \(K_P\), \(K_I\) and \(K_D\).
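As an illustration of this parallel form, here is a minimal discrete-time sketch in Python. The class name PIDController and its interface are my own choices for this example; a real industrial controller would additionally handle output limits, anti-windup and derivative filtering.

```python
class PIDController:
    """Minimal parallel-form PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt              # sample time in seconds
        self.integral = 0.0       # accumulated integral of the control error
        self.prev_error = None    # last error, needed for the derivative term

    def update(self, setpoint, measurement):
        e = setpoint - measurement                   # control error
        self.integral += e * self.dt                 # I component
        de = 0.0 if self.prev_error is None else (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * de
```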

A common alternative form is:

\[ u(t) = K_P \left( e(t) + \frac{1}{T_N} \int_0^t e(\tau)\, \mathrm{d}\tau + T_V \, \frac{\mathrm{d}e(t)}{\mathrm{d}t} \right) \]

The proportional component is still described by the gain \(K_P\), but the integral component is now defined by the reset time \(T_N\) and the differential component by the derivative time \(T_V\). Both newly introduced parameters have the dimension of time and are entered in industrial controllers in either seconds or minutes. In many industrial controllers the proportional gain is entered in a slightly modified, dimensionless form as the proportional band \(X_P\). This value is given in percent and relates the maximum ranges of the controller's output and input signals:

\[ X_P = \frac{\Delta u_{\max}}{K_P \cdot \Delta e_{\max}} \cdot 100\,\% \]

where \(\Delta e_{\max}\) and \(\Delta u_{\max}\) denote the input and output spans of the controller.
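For reference, the parameters of the two forms can be converted into one another; with the parallel form written above, the relations are:

\[ K_I = \frac{K_P}{T_N}, \qquad K_D = K_P \cdot T_V \]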

The exact definitions are not crucial for the method described here. It is only important to understand that small values of \(X_P\) and \(T_N\) generally lead to larger manipulated variables and thus to more aggressive control behavior, while for \(T_V\) and \(K_P\) it is exactly the other way round.

Setting the proportional component

At first, the controller is operated as a pure P controller, i.e. the I and D components are switched off completely. Repeated setpoint steps are applied and the step response of the closed loop is observed.

We start with a low gain, i.e. a rather passive controller. A good starting point for \(K_P\) can be found by estimating the order of magnitude of the change in the controlled variable caused by a change in the manipulated variable. You then take a fraction of this value, for example one hundredth. If your controller expects the P component as a proportional band \(X_P\), choose a high value, for example 100 %.

Note that a pure P controller always leaves a permanent control deviation; we will never hit the setpoint exactly. The first step response should look something like the first figure below. Repeat the step-response experiment with stepwise increased gain, i.e. an increasingly aggressive controller. The response of the control loop becomes faster and faster, and the remaining control deviation decreases. At some point you reach a setting at which the control loop oscillates noticeably and even becomes unstable if the gain is increased further; then you have overdone it. A good setting is one where a detectable overshoot occurs but decays quickly.
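To make the procedure tangible, here is a small simulation sketch in Python. The two-lag plant model, its time constants and the list of gains are purely illustrative assumptions, not values from the article; on a real plant you would apply the setpoint steps to the running system instead.

```python
import numpy as np

def p_control_step(kp, setpoint=1.0, gain=2.0, tau1=30.0, tau2=10.0,
                   dt=0.1, t_end=400.0):
    """Closed-loop setpoint step with a pure P controller.

    Assumed toy plant, two first-order lags in series:
        tau1 * dy1/dt = gain * u - y1,    tau2 * dy/dt = y1 - y
    Returns the trajectory of the controlled variable y.
    """
    n = int(t_end / dt)
    y1 = np.zeros(n)
    y = np.zeros(n)
    for k in range(1, n):
        e = setpoint - y[k - 1]                       # control error
        u = kp * e                                    # pure P control law
        y1[k] = y1[k - 1] + dt * (gain * u - y1[k - 1]) / tau1
        y[k] = y[k - 1] + dt * (y1[k - 1] - y[k - 1]) / tau2
    return y

# Repeat the step experiment with stepwise increased gain: the response gets
# faster, the permanent control deviation shrinks, and overshoot appears.
for kp in (0.2, 0.5, 1.0, 2.0, 5.0):
    y = p_control_step(kp)
    print(f"Kp = {kp:3.1f}   final value = {y[-1]:.3f}   peak = {y.max():.3f}")
```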

Generally you have some leeway here. For a robust, conservative setting choose a smaller \(K_P\) (or a larger \(X_P\)); for a more aggressive and therefore faster control loop, do exactly the opposite. We keep the P component selected in this way for all following steps.

Setting the integral component

Now the controller is operated as a PI controller. After the first fast reaction of the P component, the integral component ensures that the remaining control error is eliminated over time.

We first start with a large value for the reset time \(T_N\), which corresponds to sluggish behavior. An intuitive estimate of the time constant of the controlled system gives a good indication: roughly how long does it take for the controlled system to reach a new steady state after a step change in the manipulated variable? The answer to this question is a good starting point for \(T_N\).
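A small numerical sketch of this starting point; all numbers are illustrative assumptions, including the conversion to an integral gain for controllers that use the parallel form.

```python
# Assumed observation: after a step of the heating power the water bath
# needs roughly ten minutes to settle again (illustrative number).
settling_time_s = 600.0

# Starting value for the reset time: roughly the settling time itself.
Tn_start = settling_time_s

# If the controller expects an integral gain instead of a reset time
# (parallel form), the equivalent starting value is Ki = Kp / Tn.
Kp = 2.0                      # gain kept from the P-tuning step (illustrative)
Ki_start = Kp / Tn_start
print(Tn_start, Ki_start)
```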

Analogously to setting the P component, we repeatedly apply setpoint steps and look at the closed-loop response. Here we gradually decrease \(T_N\) and thus increase the aggressiveness of the controller. The same applies to the integral component: too aggressive a setting leads to unwanted oscillations or even instability.

Here, too, there is some margin: larger reset times lead to more robust control, smaller reset times to faster control. We keep the chosen setting for \(T_N\) for the following step.

Adjusting the differential component

My personal opinion: for many practical applications a well-tuned PI controller is sufficient. So if you are already satisfied with the performance of the controller at this point, simply stop here. If you still want to squeeze out a bit more by adding the D component, we repeat the procedure of the previous two steps.

We start again with a conservative setting, i.e. a small derivative time \(T_V\). As a guideline, one tenth of the previously selected reset time of the I component can be used. We then increase it gradually until we are satisfied with the performance of the control loop.
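Again a small numerical sketch with illustrative numbers only, including the equivalent parallel-form derivative gain for controllers that expect \(K_D\) instead of \(T_V\).

```python
# Guideline from the text: start with a derivative time of about one tenth
# of the reset time chosen for the I component (numbers are illustrative).
Tn = 600.0                    # reset time kept from the previous step
Tv_start = Tn / 10.0          # 60 s as a conservative starting value

# Parallel-form equivalent, if the controller expects Kd instead of Tv:
Kp = 2.0
Kd_start = Kp * Tv_start
print(Tv_start, Kd_start)
```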

In theory, the D component also allows a larger P component to be chosen without the loop starting to oscillate. That means we could now readjust the P component with the full PID controller active.

Robustness and nonlinearities

With the method described here, a PID controller can be tuned well in practice. Regardless of the tuning method, however, a PID controller is always a linear controller which, in a nonlinear world, can only ever be tuned well for one operating point. How well the control parameters found also work at other operating points depends strongly on the controlled system, more precisely on its nonlinearity. I think many know this phenomenon from practice: a previously well-functioning controller suddenly starts to oscillate at a different operating point (e.g. partial load instead of full load).

To avoid such problems, you can tune the PID controller more robustly from the outset. In general there is always a trade-off between performance and robustness: if I choose the parameters in the steps above rather towards the slow side, I get a more robust controller that also works under changing operating conditions.

If you want to fully understand the nonlinearities of the controlled system and design controllers that are as performant as possible, you need a detailed analysis of the dynamics of the controlled system. A powerful tool for this is system simulation with the modeling language Modelica. Systems can be built up clearly from individual components and existing libraries, and virtual experiments can be used to specifically investigate and understand the most important physical effects and interactions.

For many technical areas there are industry-proven model libraries that greatly reduce the modeling effort. With the TIL Suite for thermodynamics simulation and PSL for process engineering systems, we offer professional Modelica libraries and the corresponding know-how. We would be happy to support you with control development for your specific application.
