
Introduction to Linear Control Systems
Ebook · 2,305 pages · 156 hours


About this ebook

Introduction to Linear Control Systems is designed as a standard introduction to linear control systems for all those who, in one way or another, deal with control systems. It can be used as a comprehensive, up-to-date textbook for a one-semester 3-credit undergraduate course on linear control systems, as the first university course on this topic. Suitable faculties include electrical engineering, mechanical engineering, aerospace engineering, chemical and petroleum engineering, industrial engineering, civil engineering, bio-engineering, economics, mathematics, physics, management, the social sciences, etc.

The book covers the foundations of linear control systems: their raison d'être, different types, modelling, representations, computations, stability concepts, tools for time-domain and frequency-domain analysis and synthesis, and fundamental limitations, with an emphasis on frequency-domain methods. Every chapter includes a section on further readings where more advanced topics and pertinent references are introduced for further study. The presentation is theoretically firm, contemporary, and self-contained. Appendices cover the Laplace transform and differential equations, dynamics, MATLAB and SIMULINK, a treatise on stability concepts and tools, a treatise on the Routh-Hurwitz method, and random optimization techniques as well as convex and non-convex problems, plus sample midterm and endterm exams.

The book is divided into the following three parts, plus appendices.

PART I: In this part of the book, Chapters 1-5, we present the foundations of linear control systems. This includes: an introduction to control systems, their raison d'être, their different types, modelling of control systems, different methods for their representation and fundamental computations, basic stability concepts and tools for both analysis and design, basic time-domain analysis and design details, and the root locus as a stability analysis and synthesis tool.

PART II: In this part of the book, Chapters 6-9, we present what is generally referred to as the frequency-domain methods. This refers to the experiment of applying a sinusoidal input to the system and studying its output. There are basically three different methods for representing and studying the data of the aforementioned frequency response experiment: the Nyquist plot, the Bode diagram, and the Nichols-Krohn-Manger-Hall chart. We study these methods in detail. We learn that the output is also a sinusoid with the same frequency but generally with a different phase and magnitude. By dividing the output by the input we obtain the so-called sinusoidal or frequency transfer function of the system, which is the same as the transfer function with the Laplace variable s substituted by jω. Finally we use the Bode diagram for the design process.

PART III: In this part, Chapter 10, we introduce some miscellaneous advanced topics under the theme of fundamental limitations, which should be included in this undergraduate course at least at an introductory level. We build bridges between some seemingly disparate aspects of a control system and theoretically complement the previously studied subjects.

Appendices: The book contains seven appendices. Appendix A is on the Laplace transform and differential equations. Appendix B is an introduction to dynamics. Appendix C is an introduction to MATLAB, including SIMULINK. Appendix D is a survey of stability concepts and tools; a glossary and road map of the available stability concepts and tests is provided, which is missing even in the research literature. Appendix E is a survey of the Routh-Hurwitz method, also missing in the literature. Appendix F is an introduction to random optimization techniques as well as convex and non-convex problems. Finally, Appendix G presents sample midterm and endterm exams, which have been class-tested several times.

Language: English
Release date: Sep 19, 2017
ISBN: 9780128127490
Author

Yazdan Bavafa-Toosi

Yazdan Bavafa-Toosi received B.Eng. and M.Eng. degrees in electrical power and control engineering from Ferdowsi University, Mashhad, and K.N. Toosi University of Technology, Tehran, Iran, in 1997 and 2000, respectively. He earned his Ph.D. degree in system design engineering (also known as systems and control) from Keio University, Yokohama, Japan, in 2006. His multi-disciplinary research spans systems and control theory and applications. During and after his education he has held various research and teaching positions in Germany, Japan, and Iran, and has co-authored about 40 technical contributions. He is a reviewer for several journals in the field of systems and control theory and applications. His wide experience in mathematics and engineering is reflected in this book, whose core material has been taught and class-tested several times over the past 10 years.


    Introduction to Linear Control Systems

    Yazdan Bavafa-Toosi

    Table of Contents

    Cover image

    Title page

    Copyright

    Dedication

    Preface

    Acknowledgments

    Part I: Foundations

    1. Introduction

    Abstract

    1.1 Introduction

    1.2 Why control?

    1.3 History of control

    1.4 Why feedback?

    1.5 Magic of feedback

    1.6 Physical elements of a control system

    1.7 Abstract elements of a control system

    1.8 Design process

    1.9 Types of control systems

    1.10 Open-loop control

    1.11 Closed-loop control

    1.12 The 2-DOF control structure

    1.13 The Smith predictor

    1.14 Internal model control structure

    1.15 Modern representation—Generalized model

    1.16 Status quo

    1.17 Summary

    1.18 Notes and further readings

    1.19 Worked-out problems

    1.20 Exercises

    References

    Further Reading

    2. System representation

    Abstract

    2.1 Introduction

    2.2 System modeling

    2.3 Basic examples of modeling

    2.4 Block diagram

    2.5 Signal flow graph

    2.6 Summary

    2.7 Notes and further readings

    2.8 Worked-out problems

    2.9 Exercises

    References

    3. Stability analysis

    Abstract

    3.1 Introduction

    3.2 Lyapunov and BIBO stability

    3.3 Stability tests

    3.4 Routh’s test

    3.5 Hurwitz’ test

    3.6 Lienard and Chipart test

    3.7 Relative stability

    3.8 D-stability

    3.9 Particular relation with control systems design

    3.10 The Kharitonov theory

    3.11 Internal stability

    3.12 Strong stabilization

    3.13 Stability of LTV Systems

    3.14 Summary

    3.15 Notes and further readings

    3.16 Worked-out problems

    3.17 Exercises

    References

    4. Time response

    Abstract

    4.1 Introduction

    4.2 System type and system inputs

    4.3 Steady-state error

    4.4 First-order systems

    4.5 Second-order systems

    4.6 Bandwidth of the system

    4.7 Higher-order systems

    4.8 Model reduction

    4.9 Effect of addition of pole and zero

    4.10 Performance region

    4.11 Inverse response

    4.12 Analysis of the actual system

    4.13 Introduction to robust stabilization and performance

    4.14 Summary

    4.15 Notes and further readings

    4.16 Worked-out problems

    4.17 Exercises

    References

    5. Root locus

    Abstract

    5.1 Introduction

    5.2 The root locus method

    5.3 The root contour

    5.4 Finding the value of gain from the root locus

    5.5 Controller design implications

    5.6 Summary

    5.7 Notes and further readings

    5.8 Worked-out problems

    5.9 Exercises

    References

    Part II: Frequency domain analysis & synthesis

    6. Nyquist plot

    Abstract

    6.1 Introduction

    6.2 Nyquist plot

    6.3 Gain, phase, and delay margins

    6.4 Summary

    6.5 Notes and further readings

    6.6 Worked-out problems

    6.7 Exercises

    References

    7. Bode diagram

    Abstract

    7.1 Introduction

    7.2 Bode diagram

    7.3 Bode diagram and the steady-state error

    7.4 Minimum phase and nonminimum phase systems

    7.5 Gain, phase, and delay margins

    7.6 Stability in the Bode diagram context

    7.7 The high sensitivity region

    7.8 Relation with Nyquist plot and root locus

    7.9 Standard second-order systems

    7.10 Bandwidth

    7.11 Summary

    7.12 Notes and further readings

    7.13 Worked-out problems

    7.14 Exercises

    References

    8. Nichols-Krohn-Manger-Hall Chart

    Abstract

    8.1 Introduction

    8.2 S-Circles

    8.3 M-Circles

    8.4 N-circles

    8.5 M- and N-Contours

    8.6 NKMH chart

    8.7 System features: GM, PM, DM, BW, stability

    8.8 The high sensitivity region

    8.9 Relation with Bode diagram, Nyquist plot, and root locus

    8.10 Summary

    8.11 Notes and further readings

    8.12 Worked-out problems

    8.13 Exercises

    References

    9. Frequency domain synthesis and design

    Abstract

    9.1 Introduction

    9.2 Basic controllers: proportional, lead, lag, and lead-lag

    9.3 Controller simplifications: PI, PD, and PID

    9.4 Controller structures in the Nyquist plot context

    9.5 Effect of the controllers on the root locus

    9.6 Design procedure

    9.7 Specialized design and tuning rules of PID controllers

    9.8 Internal model control

    9.9 The Smith predictor

    9.10 Implementation with operational amplifiers

    9.11 Summary

    9.12 Notes and further readings

    9.13 Worked-out problems

    9.14 Exercises

    References

    Part III: Advanced Issues

    10. Fundamental limitations

    Abstract

    10.1 Introduction

    10.2 Relation between time and frequency domain specifications

    10.3 The ideal transfer function

    10.4 Controller design via the TS method

    10.5 Interpolation conditions

    10.6 Integral and Poisson integral constraints

    10.7 Constraints implied by poles and zeros

    10.8 Actuator and sensor limitations

    10.9 Delay

    10.10 Eigenstructure assignment by output feedback

    10.11 Noninteractive performance

    10.12 Minimal closed-loop pole sensitivity

    10.13 Robust stabilization

    10.14 Special results for positive systems

    10.15 Generic design procedure

    10.16 Summary

    10.17 Notes and further readings

    10.18 Worked-out problems

    10.19 Exercises

    References

    Appendices A–G

    Appendix A. Laplace transform and differential equations

    A.1 Introduction

    A.2 Basic properties and pairs

    A.3 Differentiation and integration in time domain and frequency domain

    A.4 Existence and uniqueness of solutions to differential equations

    References

    Appendix B. Introduction to dynamics

    B.1 Introduction

    B.2 Equivalent systems

    B.3 Worked-out problems

    References

    Appendix C. Introduction to MATLAB®

    C.1 Introduction

    C.2 MATLAB®

    C.3 Simulink

    C.4 Worked-out problems

    References

    Appendix D. Treatise on stability concepts and tests

    D.1 Introduction

    D.2 A survey on stability concepts and tools

    D.3 Lipschitz stability

    D.4 Lagrange, Poisson, and Lyapunov stability

    D.5 Finite-time and fixed-time stability

    D.6 Summary

    Appendix E. Treatise on the Routh’s stability test

    E.1 Introduction

    E.2 Applications of the Routh’s array

    E.3 The case of imaginary-axis zeros

    Appendix F. Genetic algorithm: A global optimization technique

    F.1 Introduction

    F.2 Convex optimization

    F.3 Nonconvex optimization

    F.4 Convexification

    F.5 Genetic algorithms

    Appendix G. Sample exams

    G.1 Midterm Exam (ME) – Sample 1 (4 h, closed book/notes)

    G.2 Endterm Exam (EE) – Sample 1 (4 h, closed book/notes)

    G.3 Midterm Exam (ME) – Sample 2 (closed book/notes)

    G.4 Endterm Exam (EE) – Sample 2 (closed book/notes)

    G.5 Midterm Exam (ME) – Sample 3 (closed book/notes)

    G.6 Endterm Exam (EE) – Sample 3 (closed book/notes)

    Colour Plate

    Index

    Copyright

    Academic Press is an imprint of Elsevier

    125 London Wall, London EC2Y 5AS, United Kingdom

    525 B Street, Suite 1650, San Diego, CA 92101, United States

    50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

    The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

    Copyright © 2019 Elsevier Inc. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

    This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

    Notices

    Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

    Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

    To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

    British Library Cataloguing-in-Publication Data

    A catalogue record for this book is available from the British Library

    Library of Congress Cataloging-in-Publication Data

    A catalog record for this book is available from the Library of Congress

    ISBN: 978-0-12-812748-3

    For Information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

    Publisher: Mara Conner

    Acquisition Editor: Sonnini R Yura

    Editorial Project Manager: Ana Claudia Garcia

    Production Project Manager: Mohana Natarajan

    Cover Designer: Victoria Pearson

    Typeset by MPS Limited, Chennai, India

    Dedication

    Dedicated to the possessors and seekers of knowledge and thought

    To men possessed of minds

    Anahita and Ahura Mazda in the divine investiture of Khosrow Parviz

    Tagh-e Bostan, Kermanshah, Iran

    (Photo: adapted from an anonymous internet source)

    Preface

    The book Introduction to Linear Control Systems is architected as a comprehensive, provident, and standing introduction to linear control systems for all those who in one way or another deal with the concept of control. It can be exploited as a reference for a one-semester 3-credit undergraduate course on linear control systems, as the first course on this topic. It is designed for as large an audience as possible and can be adopted in all departments where courses on linear control systems are or can be offered. This subsumes science and engineering departments of all kinds: electrical, computer, mechanical, industrial, aerospace, chemical and petroleum, bio, neuro, material, civil and environmental, marine, medical, psychological, pharmaceutical, health, food, agricultural, veterinary, and geological, as well as physics, mathematics, economics, management, and the social, political, and military sciences. Although the material is at the undergraduate level, the approach is theoretically firm and contemporary.

    The author was educated in engineering and mathematics departments in Iran, Germany, and Japan. He obtained a vocational degree in EE (electronics) from Khorasan Province in 1996, a BEng degree in EE (power) from Ferdowsi University of Mashhad in 1997, and an MEng degree in EE (control) from K. N. Toosi University of Technology of Tehran in 2000, all in Iran. From 2001 to 2003 he held a research position at the Department of Mathematics, Technical University of Berlin, Germany. He earned a PhD degree in Integrated Design Engineering (also known as EE, Systems & Control) from Keio University of Japan in 2006. His research interests span systems and control theory and applications. He has held various research and teaching positions since his student years, and the book reflects his wide experience in the field. The idea of writing an undergraduate book on linear control systems was conceived when he was an undergraduate student, and he has been expanding on the material taught to him at college and obtaining new results ever since. In fact, almost all the results of the book have been obtained independently by the author; however, since they have appeared in scientific forums before, he acknowledges this by citing the appropriate references and giving credit to them. The core material of this book has been taught by him at different universities in Iran, starting from 2003.

    It transpires that in many places in the world, in fact well-nigh all, the scientific communities, especially students, do not have access to an advanced, theoretically rigorous, comprehensive, and provident reference for this course which instructively and constructively guides them from the rudiments to the status quo and beyond. The present textbook, which has evolved from our course notes, serves this purpose: it vividly instills and crystallizes the present and future picture of the field in the reader's mind. It is published to seize the day, with the intention of mitigating the so-called North-South divide.

    The book scrutinizes, rectifies, and resolves numerous mistakes in the available texts and tenders several contributions to the existing literature on this topic. In addition to students, our colleagues in academia as well as researchers and engineers in industry will also find it beneficial. It proffers over 600 archetypal and imagery examples and worked-out problems as well as 1800 such unsolved exercises. The problems, both solved and unsolved, are carefully designed one by one so as to inculcate and shed more light on the ins and outs of the subjects and to ameliorate and expedite the assimilation of the lessons by manifesting new facets of them. Myriads of modern issues are discussed therein. The book also features a chapter on advanced topics that are essential for undergraduate students to embrace at least at an introductory level. Every chapter involves a part on further readings where more modern topics and pertinent references are cited for further study. Around 2000 references are distilled from the seemingly unending list of available germane results; most of these articles and books were published in the 21st century, especially after 2010. A short description of each chapter, a summary of the unique features of the book, and the acknowledgments follow.

    PART I: In this part of the book, Chapters 1-5, we present the foundations of linear control systems. These include an introduction to control systems, their raison d'être, their different types, modeling of control systems, different methods for their representation and fundamental computations, basic stability concepts and tools for both analysis and design, basic time-domain analysis and design details, and the root locus as a stability analysis and synthesis tool.

    Chapter 1: In this chapter we contemplate what control theory is and why it is needed. Open-loop and closed-loop control structures are studied and their features compared. Subsumed issues are stability, performance, sensitivity/robustness, disturbance/noise rejection, failure tolerance, optimality, and linearization. We observe that a well-designed feedback can stabilize and robustify an unstable or poorly stable system, enhance its performance, and reject disturbance and noise; on the other hand, a poorly designed feedback can bring about the opposite outcomes. The 2-DOF and 3-DOF control structures, the internal model control structure, the Smith predictor, and the modern representation of control systems are also discussed. We take a glimpse at the history of automatic control and make an elaborate scrutiny of its status quo and future as well. The chapter includes about 30 examples and worked-out problems as well as over 100 exercises to enrich and facilitate the reader's ken of the subject.
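
    The "magic of feedback" mentioned above can be seen in a few lines of code. The book's own computational tool is MATLAB (Appendix C); the plain-Python sketch below is an illustrative stand-in (the gains and function name are ours, not the book's), showing how a high loop gain makes the closed-loop gain insensitive to large plant variations:

```python
# Illustrative sketch of feedback desensitization (not from the book).
# A negative-feedback loop with forward gain A and feedback gain beta has
# closed-loop gain T = A / (1 + A*beta); when A*beta >> 1, T ~ 1/beta.

def closed_loop_gain(A, beta):
    return A / (1 + A * beta)

A_nominal, beta = 10_000.0, 0.01                       # loop gain A*beta = 100
T_nominal = closed_loop_gain(A_nominal, beta)
T_perturbed = closed_loop_gain(0.5 * A_nominal, beta)  # A drops by 50%

# The relative change in T is tiny compared with the 50% change in A.
rel_change = abs(T_perturbed - T_nominal) / T_nominal
print(round(T_nominal, 2), round(rel_change * 100, 2))  # 99.01 0.98
```

    A 50% drop in the forward gain moves the closed-loop gain by less than 1%, which is precisely the robustification property discussed in the chapter.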

    Chapter 2: In this chapter we learn how to model a control system and its constituents. Particular attention is paid to the plant, for which different examples are provided, including electrical, mechanical, liquid, thermal, hydraulic, chemical, structural, biological, economic, ecological, societal, physical, and time-delay systems. Instances of discrete-time, discrete-event, stochastic, and nonlinear models are also discussed. State-space and transfer function methods are introduced as two basic frameworks for modeling. Linearization is then introduced so as to convert a nonlinear model to a linear one. The presentation is followed by the introduction of block diagrams and their algebra for representing the integration of the different components of a control system with each other. The signal flow graph is subsequently introduced as an alternative to block diagram representation. We comprehend how to compute the transmittance of a given block diagram or signal flow graph. The chapter offers about 70 examples and worked-out problems along with over 150 exercises which help shed more light on the particulars of the lessons and better the reader's awareness.
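
    The two basic block-diagram reductions used throughout the chapter, cascade and negative feedback, can be mechanized on transfer functions directly. The sketch below is a hypothetical illustration (the coefficient-list representation and helper names are ours, not the book's):

```python
# A minimal sketch of block-diagram algebra (assumed representation): a
# transfer function is a (numerator, denominator) pair of polynomial
# coefficient lists in s, highest power first.

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))            # pad on the left so degrees align
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return [a + b for a, b in zip(p, q)]

def series(g1, g2):                    # cascade reduction: G1*G2
    (n1, d1), (n2, d2) = g1, g2
    return poly_mul(n1, n2), poly_mul(d1, d2)

def feedback(g, h):                    # negative-feedback reduction: G/(1 + G*H)
    (ng, dg), (nh, dh) = g, h
    return poly_mul(ng, dh), poly_add(poly_mul(dg, dh), poly_mul(ng, nh))

G = ([1.0], [1.0, 1.0])                # G(s) = 1/(s+1)
H = ([1.0], [1.0])                     # unity feedback
num, den = feedback(G, H)
print(num, den)                        # [1.0] [1.0, 2.0], i.e. 1/(s+2)
```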

    Chapter 3: In this chapter we introduce the concept of stability. The reader becomes familiar with basic definitions of stability and tests for its verification. The included stability concepts are Lyapunov and BIBO stability. The stability tools we discuss are the Routh, Hurwitz, Lienard-Chipart, and Kharitonov methods. We show how these results should be used for controller design. Included are also the issues of relative stability, D-stability, internal stability, strong stabilization, and stability of LTV systems. Further, in the exercises we succinctly discuss several other modern results including the secant condition for stability, the Gerschgorin circles, the stability of the internal model control structure, the Hermite-Biehler theorem and its extensions, and the stability of linear time delay, linear time varying, and switching systems. The chapter features about 60 examples and worked-out problems as well as over 150 exercises to consolidate and improve the grip on the subject. Two pertinent appendices at the end of the book provide further specialized nitty-gritties of the topic.
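
    As a taste of the Routh test, here is a hypothetical Python sketch (function name and conventions are ours, not the book's) that builds the Routh array row by row and checks the first column; the imaginary-axis and premature-zero special cases treated in Appendix E are simply reported as not strictly stable:

```python
# A sketch of the Routh test: the polynomial (coefficients in s, highest
# power first) is declared Hurwitz-stable only if the first column of the
# Routh array never loses positivity.

def routh_hurwitz_stable(coeffs):
    c = [float(x) for x in coeffs]
    if c[0] < 0:                       # normalize the leading coefficient
        c = [-x for x in c]
    if any(x <= 0 for x in c):         # necessary condition for stability
        return False
    r1, r2 = c[0::2], c[1::2]          # the two top rows of the array
    while r2:
        n = max(len(r1), len(r2))
        r1 += [0.0] * (n + 1 - len(r1))
        r2 += [0.0] * (n + 1 - len(r2))
        if r2[0] == 0.0:               # premature zero: not strictly stable
            return False
        new = [(r2[0] * r1[i + 1] - r1[0] * r2[i + 1]) / r2[0] for i in range(n)]
        while new and new[-1] == 0.0:  # drop padding zeros
            new.pop()
        if new and new[0] <= 0:        # sign change in the first column
            return False
        r1, r2 = r2[:n], new
    return True

print(routh_hurwitz_stable([1, 3, 3, 1]))  # True: (s+1)^3 is stable
print(routh_hurwitz_stable([1, 1, 2, 8]))  # False: two roots in the RHP
```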

    Chapter 4: In this chapter we are interested in the characteristics of the time response of the system for different inputs. We consider both the transient state and the steady state of the response. The inputs we consider are the impulse, step, ramp, parabolic, and sinusoid. For a reason that will become clear in the text, in the literature the response of the system to a sinusoidal input is called the frequency response of the system, and thus sinusoidal inputs and bandwidth are often considered at the end of the course, along with Bode diagrams. However, the reality is that unlike what the name intimates this response also evolves in time and is in fact the time response of the system. We thus study sinusoidal inputs in this chapter and the students will have more time to master this important topic along with the concept of bandwidth. Particular attention is paid to second-order systems. Contained in this chapter are also the bandwidth of the system, a dissection of high-order systems, model reduction, effect of addition of a pole/zero, performance region, inverse response, analysis of the actual system considering the effect of the sensor dynamics and delay, and introductory probe and design for robustness in both stability and performance. The chapter presents about 70 examples and worked-out problems in addition to over 200 exercises so as to assist and conduce to increased cognizance of the lessons.
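
    For the standard second-order system highlighted in this chapter, two of the classical transient-response formulas can be evaluated directly. The snippet below is an illustrative sketch of these standard relations (the notation zeta/wn is assumed, not necessarily the book's):

```python
# Standard second-order step-response formulas: for the underdamped system
# wn^2/(s^2 + 2*zeta*wn*s + wn^2) with 0 < zeta < 1, the percent overshoot
# and the peak time follow directly from zeta and wn.
import math

def percent_overshoot(zeta):
    return 100.0 * math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))

def peak_time(zeta, wn):
    return math.pi / (wn * math.sqrt(1.0 - zeta ** 2))

print(round(percent_overshoot(0.5), 1))  # 16.3 percent
print(round(peak_time(0.5, 2.0), 3))     # 1.814 seconds
```

    Increasing the damping ratio trades overshoot for speed, which is the central design tension the chapter dissects.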

    Chapter 5: In this chapter we deliberate on how to find the locations of the closed-loop poles of the system (the so-called root locus of the system) without explicitly computing them. The method is called the root locus and uses the open-loop information of the system. More precisely, with some simple rules we draw the root locus from the information of the open-loop poles and zeros of the system. We also consider the root contour, which is the root locus problem when more than one parameter of the system varies. Included in this chapter are finding the appropriate value of the gain from the root locus for satisfactory performance, and the implications of the root locus for controller design. Numerous sophisticated and instructive systems, especially NMP ones, are discussed. The chapter subsumes about 80 examples and worked-out problems as well as more than 250 exercises to enhance the grasp of the subject.
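
    For a second-order loop the locus can even be traced in closed form, which makes a good sanity check on the graphical rules. The following is a hypothetical illustration (this particular plant is our choice, not necessarily the book's example):

```python
# For the open loop L(s) = K/(s(s+2)), the closed-loop characteristic
# equation is s^2 + 2s + K = 0, so the root locus can be traced with the
# quadratic formula as the gain K grows.
import cmath

def closed_loop_poles(K):
    disc = cmath.sqrt(4.0 - 4.0 * K)   # discriminant of s^2 + 2s + K
    return (-2.0 + disc) / 2.0, (-2.0 - disc) / 2.0

for K in (0.0, 1.0, 5.0):
    print(K, closed_loop_poles(K))
# K = 0: the locus starts at the open-loop poles 0 and -2;
# K = 1: breakaway point, a double pole at -1;
# K = 5: a complex pair -1 +/- 2j moving up the vertical asymptote.
```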

    PART II: In this part of the book, Chapters 6-9, we present what is generally referred to as the frequency-domain methods. This refers to the experiment of applying a sinusoidal input to the system and studying its output. Apart from the advanced methods, there are basically three different methods for representing and exploring the data of the aforementioned frequency response experiment: the Nyquist plot, the Bode diagram, and the Nichols-Krohn-Manger-Hall (NKMH) chart. We sieve these methods in detail. We learn that the output is also a sinusoid with the same frequency but generally with a different phase and magnitude. By dividing the output by the input we obtain the so-called sinusoidal or frequency transfer function of the system, which is the same as the transfer function with the Laplace variable s replaced by jω. Finally we use the Bode diagram for the design process.
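
    The substitution s = jω described above is a one-line computation. As a minimal sketch with an assumed first-order example (our choice of plant, not the book's):

```python
# Frequency-response experiment for the example G(s) = 1/(s + 1):
# substituting s = jw gives the sinusoidal (frequency) transfer function;
# its magnitude and phase are the gain and phase shift of the steady-state
# sinusoidal output.
import cmath
import math

def freq_response(omega):
    return 1.0 / (1j * omega + 1.0)    # G(jw)

G1 = freq_response(1.0)                # at the corner frequency w = 1
print(round(abs(G1), 4))                        # 0.7071 (-3 dB)
print(round(math.degrees(cmath.phase(G1)), 1))  # -45.0 degrees
```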

    Chapter 6: In this chapter we examine the Nyquist plot. The analysis starts with a review of the principle of the argument and how it is used to obtain the Nyquist stability criterion. We then focus on the niceties of drawing the plot. This involves the behavior of the plot at the high- and low-frequency ends, the cusp points of the plot, handling of the proportional gain, the case of jω-axis zeros and poles, and the relation with the root locus. The presentation continues with the introduction of the gain, phase, and delay margin concepts and a stability analysis in terms of them. The chapter lays out about 70 examples and worked-out problems as well as over 200 exercises to escalate comprehension of the lessons.
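
    The gain-margin concept introduced here can be computed numerically by locating the phase crossover. The snippet below is a hypothetical numerical sketch (the plant and the brute-force frequency scan are our illustration choices):

```python
# Gain margin of L(s) = 1/(s+1)^3 found by scanning the frequency axis for
# the phase crossover.  Analytically the crossover is at w = sqrt(3), where
# |L| = 1/8, so the gain margin is 8 (about 18 dB).
import cmath
import math

def L(omega):
    return 1.0 / (1j * omega + 1.0) ** 3

# Find the frequency whose open-loop phase is closest to -180 degrees.
omegas = [i * 1e-4 for i in range(1, 200_000)]
w_pc = min(omegas, key=lambda w: abs(cmath.phase(L(w)) + math.pi))
gain_margin = 1.0 / abs(L(w_pc))
print(round(w_pc, 3), round(gain_margin, 2))  # 1.732 8.0
```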

    Chapter 7: In this chapter we introduce the Bode diagram. We realize how the Bode magnitude and phase diagrams are constructed, with emphasis on the details for second-order systems, and how they are beneficial to control systems analysis and synthesis. The inverse problem, identification of the transfer function from the Bode diagram, is studied. The relation between the Bode diagram and the steady-state error of the system is discussed as well. Integrated are a study of non-minimum phase systems and the concepts of gain margin, phase margin, delay margin, bandwidth, stability, and sensitivity. Special attention is paid to the case of multiple crossover frequencies. Relations to the Nyquist plot and root locus are also discussed. The chapter includes about 50 examples and worked-out problems along with over 200 exercises to firm up apprehension of the subject.
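
    The basic Bode-diagram arithmetic can be verified in a few lines. A small sketch with an assumed first-order example (our illustration, not the book's):

```python
# Bode magnitude for G(s) = 10/(s + 10): the magnitude in dB is
# 20*log10|G(jw)|, flat near 0 dB below the corner frequency and falling
# at -20 dB/decade above it.
import math

def bode_mag_db(omega):
    G = 10.0 / (1j * omega + 10.0)
    return 20.0 * math.log10(abs(G))

print(round(bode_mag_db(0.1), 2))    # essentially 0 dB (low-frequency asymptote)
print(round(bode_mag_db(10.0), 2))   # -3.01 dB at the corner frequency
print(round(bode_mag_db(100.0), 1))  # -20.0 dB one decade above the corner
```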

    Chapter 8: In this chapter we sift what is known in the literature as the Nichols chart, to which Krohn, Manger, and Hall also actually contributed. We start with the introduction of the S-circles, M-circles, and N-circles, and end up with the M- and N-contours. We apprehend how the NKMH chart is constructed and used in the analysis and synthesis of a control system. Subsumed is also a study of the system features of gain margin, phase margin, delay margin, bandwidth, stability, and sensitivity in this context. Special attention is paid to the case of multiple crossover frequencies and to non-minimum phase systems. Relations to the method of robust quantitative feedback theory as well as to the Bode diagram, Nyquist plot, and root locus are discussed. The niceties of the method are glossed, and we learn why it was not in the limelight at the time of its birth and why the situation is now different. The chapter supplies about 30 examples and worked-out problems as well as over 200 exercises to smooth and contribute to deeper assimilation of the lessons.
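
    The core mapping that the NKMH chart encodes can be stated in one function. The following is an illustrative sketch (function name and the unity-feedback assumption are ours):

```python
# What the NKMH (Nichols) chart encodes: an open-loop point L(jw), given as
# gain in dB and phase in degrees, maps to the closed-loop magnitude
# |T| = |L/(1+L)| for a unity-feedback loop; the M-contours are the loci
# where this value is constant.
import cmath
import math

def closed_loop_db(open_loop_db, phase_deg):
    L = 10.0 ** (open_loop_db / 20.0) * cmath.exp(1j * math.radians(phase_deg))
    return 20.0 * math.log10(abs(L / (1.0 + L)))

# The point at 0 dB open-loop gain and -90 degrees phase lies on the
# M = -3 dB contour of the chart:
print(round(closed_loop_db(0.0, -90.0), 2))  # -3.01
```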

    Chapter 9: In this chapter we reflect on the design procedure in the Bode diagram context. With some motivating examples we introduce three basic dynamic controllers, lead, lag, and lead-lag, and learn how they affect the system and when they should be used. Their simplifications as PI, PD, and PID are studied. The effect of the controllers is uncovered in the Nyquist and root locus contexts. The development is followed up by a general design procedure in the Bode diagram context. Specialized design and tuning rules for PID controllers are also incorporated. Next, the design in the IMC and Smith predictor structures is briefly presented. Finally, the implementation of the controllers with operational amplifiers is discussed. The chapter proffers about 40 examples and worked-out problems, many of them with unstable and NMP plants, as well as over 400 exercises to ameliorate the reader's appreciation.
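
    A central quantity in lead design is how much phase lift a given pole-zero spread can buy. The sketch below evaluates the standard lead-compensator relations (the notation a, T is assumed, not necessarily the book's):

```python
# Standard lead-compensator relations: C(s) = (a*T*s + 1)/(T*s + 1) with
# a > 1 delivers its maximum phase lift phi_max = arcsin((a-1)/(a+1)) at
# the frequency w_max = 1/(T*sqrt(a)).
import math

def lead_max_phase_deg(a):
    return math.degrees(math.asin((a - 1.0) / (a + 1.0)))

def lead_omega_max(a, T):
    return 1.0 / (T * math.sqrt(a))

print(round(lead_max_phase_deg(10.0), 1))   # 54.9 degrees
print(round(lead_omega_max(10.0, 0.1), 3))  # 3.162 rad/s
```

    In a typical design the compensator is placed so that w_max lands at the desired gain-crossover frequency, which is the essence of the Bode-diagram procedure of this chapter.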

    PART III: In this part we introduce some miscellaneous advanced topics under the appellation of fundamental limitations, which should be zealously embraced in this undergraduate course at least to an introductory level. We unravel the interconnections and make bridges amongst some seemingly motley aspects of a control system and theoretically complement the previously studied subjects.

    Chapter 10: This chapter is devoted to fundamental limitations. We succinctly scrutinize the relation between time and frequency domain constraints, ideal transfer functions, controller design via the TS method, interpolation conditions, integral and Poisson integral constraints, constraints implied by poles and zeros, actuator and sensor limitations, delay, eigenstructure assignment, eigenvalue sensitivity, non-interactive performance, robust stabilization, and positive systems. Among the numerous lessons of this chapter are the following: the transfer function that is ideal with respect to sensitivity is not consistent with setpoint tracking requirements; in general, when we reduce sensitivity in one frequency range we inevitably increase it in another; in general, small imaginary zeros result in large control inputs; and, in general, pole placement by output feedback, the ideal minimal eigenvalue sensitivity, and non-interactive performance are not achievable. A comprehensive and coherent picture of the whole subject of control system design is also presented. The chapter contains about 80 examples and worked-out problems in addition to over 300 exercises to boost digestion of the subject.
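
    The claim that reducing sensitivity in one frequency range increases it in another (the waterbed effect) can be checked numerically for a concrete loop. The snippet below is a hypothetical illustration (this plant and the crude quadrature are our choices, not the book's):

```python
# Numerical check of the waterbed effect: for the stable open loop
# L(s) = 1/(s+1)^2, of relative degree two, the Bode sensitivity integral
# int_0^inf ln|S(jw)| dw equals zero, so pushing |S| below 1 in one band
# necessarily pushes it above 1 elsewhere.
import cmath
import math

def ln_abs_S(omega):
    L = 1.0 / (1j * omega + 1.0) ** 2
    return math.log(abs(1.0 / (1.0 + L)))

# |S| dips below 1 at low frequency and peaks above 1 at mid frequency:
assert ln_abs_S(0.0) < 0 < ln_abs_S(1.5)

# Crude trapezoidal integration on [0, 200]; the truncated tail is ~1/200.
h, W = 0.001, 200.0
n = int(W / h)
integral = h * (0.5 * (ln_abs_S(0.0) + ln_abs_S(W)) +
                sum(ln_abs_S(i * h) for i in range(1, n)))
print(round(integral, 3))  # close to 0: the areas below and above 1 cancel
```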

    Appendices: The book incorporates seven appendices. Appendix A is on the Laplace transform and differential equations. Appendix B is an introduction to dynamics. Appendix C is an introduction to MATLAB®, including SIMULINK. Appendix D is a survey on stability concepts and tools. A glossary and road map of the available stability concepts and tests is provided which is missing even in the research literature. Appendix E is a survey on the Routh-Hurwitz method, also missing in the literature. Appendix F is an introduction to genetic algorithms as a random optimization technique, convex and non-convex problems. Finally, Appendix G presents sample midterm and endterm exams, which are class-tested several times. These appendices include about 80 examples and worked-out problems as well as about 50 unsolved problems.

    Unique features: The text boasts some hallmark features:

    • Limning and instilling a coherent, pellucid, proportionate, profound, punctilious, and allegorical perspective of the present and future of the field of systems and control theory;

    • Contemporary approach even for classical issues;

    • Scrutinizing, rectifying, and resolving numerous mistakes in the available literature;

    • Garnering and articulating numerous salient points scattered in the research literature;

    • Many new results and/or minutiae in Chapters 3–10 and Appendix A;

    • A circumstantial glossary and road map of stability results sprawled throughout the literature;

    • Addressing numerous sophisticated NMP and unstable plants;

    • A chapter on advanced topics in fundamental limitations;

    • Inculcating alternative facets of the lessons, not available in the literature, by the help of especially-designed versatile, instructive, quintessential, imagery, and archetypical problems–over 600 examples and worked-out problems along with their simulation source codes available on the website of the book for download, as well as 1800 such unsolved exercises;

    • Orchestrating the latest results–a multitude of which obtained in the 21st century–wherever appropriate;

    • Allocating a subchapter to Further Readings in each chapter, where more advanced topics and references are recited.

    As such, it can be veritably labelled with the epithet:

    • The Canon of Control.

    Our final words are about using the book. Our objective is to write a comprehensive self-contained reference book for the uninitiated to control that can also be exploited as the standard text for the first course on linear control systems. As such, as many pertinent results as fitting for the contemporary undergraduate level are orchestrated and articulated in the book. The instructor may wish to omit some parts from the syllabus, especially in Chapters 1, 3, 4, 9, and 10, due to taste and paucity of time. Further Readings, Problems, and Exercises are carefully designed and are inalienable parts of each chapter–they complement and consolidate the lessons. It should be obvious that when we talk about the frontiers of knowledge and the status quo it refers to the present time–the year 2017. After some time they will be different, but the core materials of the book will remain unchanged–conducive to its fitness even in the far future. The first expansion to the book is through its Further Readings. The reason that we do not incorporate these issues inside the text is to keep it a manageable undergraduate course.

    For the convenience of the reader the accompanying CD (which is also available on the webpage of the book for download) contains all the MATLAB® and SIMULINK codes (of the 2015a release) of the examples and worked-out problems. The naming of the files is as follows:

    For MATLAB® files:

    Example X.Y is named as ExampleXpointY.m. Thus for instance to run the m file of Example 1.2 (which is the Example 2 of Chapter 1) you should simply go to Chapter 1 and then run the corresponding m file Example1point2 in the MATLAB environment. (The same for Problem X.Y.)

    For SIMULINK files:

    Example X.Y is named as ExampleXpointYS.slx. Hence for instance to run the slx file of Example 1.3 (which is the Example 3 of Chapter 1) you should simply go to Chapter 1 and then run the slx file Example1point3S in the SIMULINK environment. Then in the MATLAB® environment you run the corresponding m file Example1point3 which produces the desired figures. (The same for Problem X.Y.)
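    Since the naming rule above is mechanical, it can be captured in a small helper. The sketch below is purely illustrative and is written in Python (the companion files themselves are MATLAB .m and SIMULINK .slx files); the `problem=True` branch assumes that Problem files follow the analogous ProblemXpointY pattern, as the parenthetical remarks above suggest:

```python
def companion_filename(chapter, number, simulink=False, problem=False):
    """Build the companion-CD/website filename for Example/Problem X.Y."""
    prefix = "Problem" if problem else "Example"   # assumed symmetric naming
    base = f"{prefix}{chapter}point{number}"
    return base + ("S.slx" if simulink else ".m")

print(companion_filename(1, 2))                  # Example1point2.m
print(companion_filename(1, 3, simulink=True))   # Example1point3S.slx
```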

    The reader is prompted to visit the companion website of the book on the Elsevier homepage (https://www.elsevier.com/books). Alternatively a simple search on the web directs the reader to the exact page. Updates and relevant information are announced there.

    Several tokens of faulty operation of MATLAB® are pointed out in this book. Rectifying some of them is straightforward and they will probably be put right in near-future releases of MATLAB®. The reader should be aware that when we discuss the functionality of MATLAB® we refer to its 2015a release.

    We have made every possible effort to remove the mistakes (including ambiguities and unintended construals) that might have inadvertently crept into the text. (See also the last paragraph of Appendix A.) With regard to reference citation it is due to add that for almost all the topics of interest in this text there exist an enormous number of germane works. It is ineluctable that we confine ourselves to a handful of them. By no means does this, nor should it, reflect negatively on other works. (Visit also Remark 4.23.) Finally, we would appreciate the feedback of our colleagues who choose the book as their text. The future edition will correct the aforementioned oversights and address their concerns.

    Yazdan Bavafa-Toosi

    Mashhad, Iran

    July 2019

    ybavafat@yahoo.com

    Acknowledgments

    The author appreciatively beholds the efforts and excellent job of his advisors: Professors Ali Khaki-Sedigh, Volker Mehrmann, Hiromitsu Ohmori, and Hossein Tabatabaei-Yazdi.

    On the other hand, the author is grateful to Professors Karl Astrom (Sweden), Alberto Bemporad (Italy), Stephen Boyd (USA), Maryam Fazel (United States), James Freudenberg (United States), Graham Goodwin (Australia), Zhi-Hong Guan (China), Mark Halpern (Australia), William Heath (United Kingdom), Hooshang Hemami (United States), Vladimir Kharitonov (Russia), Khashayar Khorasani (Canada), Mehran Mesbahi (USA), Reza Moheimani (USA), Manfred Morari (Switzerland), Gerardo Naumis (Mexico), Ahmet Palazoglu (USA), Wilfrid Perruquetti (France), Anna Piwowar (Poland), Ahmad Rad (Canada), Ali Saberi (United States), Bahram Shafai (United States), Dragoslav Siljak (United States), Sigurd Skogestad (Norway), and Eduardo Sontag (United States) who kindly provided their research articles/books and let him use them in his book. He is also much obliged to Professor Nader Safari-Shad (UW Platteville, USA) for his constructive critiques on the summary of the manuscript. Additionally, the original Word documents have been edited and transformed into book format by the publication department of Elsevier and the author wishes to thank them for their nice work. The positive attitude, assistance, and efforts of all the members of the Elsevier group in all the stages of the work are appreciated.

    Last but not least, it should be declared that though the text caters copious contributions no claim of originality can be or is made in its true sense. It is built on the works of others – the academic/industrial society en masse. The author blesses his lucky stars that this heritage finds him, which he refurbishes and graciously tenders back with all possible modesty.

    Part I

    Foundations

    Outline

    1 Introduction

    2 System representation

    3 Stability analysis

    4 Time response

    5 Root locus

    1

    Introduction

    Abstract

    In this chapter we contemplate what control theory is and why it is needed. The open-loop and closed-loop control structures are studied and their features are compared. The deliberated issues are stability, performance, sensitivity/robustness, disturbance/noise rejection, failure tolerance, optimality, and linearization. We observe that a well-designed feedback can stabilize and robustify an unstable or poorly stable system, enhance its performance, and reject the disturbance/noise. On the other hand a poorly-designed feedback can bring about the opposite outcomes. The 2-DOF and 3-DOF control structures, the negative and positive feedbacks, the internal model control structure, the Smith predictor, and the modern representation of control systems are also discussed. We have a glimpse at the history of automatic control and an elaborate scrutiny of its status quo and future as well. The chapter subsumes about 30 examples and worked-out problems as well as over 100 exercises to strengthen grip on the subject.

    Keywords

    Linear control systems; open-loop/closed-loop control; benefits of feedback; design procedure; 1-DOF/2-DOF/3-DOF control structure; negative/positive feedbacks; IMC structure; Smith predictor; modern representation; history/status quo/future of control

    1.1 Introduction

    The course linear control systems is a three- to four-credit undergraduate course. It is or can be offered in a variety of disciplines and departments including electrical engineering, mechanical engineering, industrial engineering, aerospace engineering, civil engineering, bio-engineering and bio-medicine, chemical and petroleum engineering, physics, mathematical sciences, economics, management and social sciences,¹ etc.

    Electronics engineering! What do you think of when you hear this term? What image do you have of it? Your answer is probably diodes, transistors, capacitors, resistors, electrons, radio, TV… And your answer is sort of correct.

    Mechanical engineering! What do you think of when you hear this term? What image do you have of this term? Perhaps your answer is springs, dashpots, gears, levers, mechanical arms and cranes, robots… Well, your answer is kind of correct.

    Now consider other branches like Aerospace engineering! Civil engineering! Chemical engineering! What perception and image do you have of these fields? Your answer includes space shuttles and satellites, structures, roads, urban infrastructures, chemical products, etc. Your answer reveals that you do have an image of all these fields – you are on the right track.

    How about control! Is your answer "A system to be controlled"? Well, your answer is sort of correct. But what is that system? The answer is that it can be anything, for instance society, economy, ecology, a chemical/pharmaceutical/biological/mechanical/electrical/structural product/device/system, etc. etc. In fact control is nothing in itself, but a theory dealing with all these systems. This mathematical theory (thus also in mathematics departments)—i.e., tools, algorithms, and rules, for analysis and synthesis, to achieve the desired objectives—is what control is. More precisely, it is an interdisciplinary field where mathematics (as highlighted in Section 1.16.2) is used for the purpose of controlling a phenomenon that can have any nature.

    In this textbook we take a vivid and instilling journey from the rudiments of control theory and practice to its status quo and future. It can be exploited by any person interested in becoming familiar with the topic. En route we pay a visit to all the whistle-stops and hub-cities. The materials that lend themselves to a first course on linear control systems are treated in detail. Other parts that are outside the purview of this book are also concisely discussed/named so that the reader perceives the sublime, outright, and allegorical perspective of the present and future of the field.

    • The school of thought of the book invests it as The Canon of Control;

    • It is vital to have in mind that if any of the corners and ingredients of the picture is reckoned without – as in the available literature – then the field would be demeaned and underrepresented: anything from inglorious and utterly abject to modest or so, depending on the lost items.

    The book articulates a compendium of foremost appurtenant issues dotted about in the literature and contributes over them. Each chapter features numerous especially-designed, instructive, imagery, and archetypical examples, worked-out problems, and exercises to inculcate and shed more light on the nitty-gritties of the lessons and to facilitate, consolidate, enrich, and expedite the learning of the reader. They are inalienable ingredients of the text. Guidelines for further research are proffered at the end of each chapter. The appendices of the book make it self-contained. See also ‘Unique Features’ of the book as succinctly highlighted in the Preface of the book. More will be said about the sobriquet Canon of Control in Section 1.16.5.

    In the rest of this chapter we examine and articulate the raison d’être of control in Section 1.2, the history of control in Section 1.3, and the philosophy behind feedback in Section 1.4. The magic of feedback is discussed in Section 1.5. Physical elements and abstract elements of a control system are studied in Sections 1.6 and 1.7. Then we briefly comment on the design procedure in Section 1.8. Types of control systems, open-loop and closed-loop structures follow in Sections 1.9–1.11. Next we proceed to the two-degree-of-freedom (2-DOF) control structure, the internal model control (IMC) structure, and the Smith predictor in Sections 1.12–1.14. The modern representation of control systems and the status quo of the field are presented in Sections 1.15 and 1.16. The chapter is wrapped up by the summary, further readings, worked-out problems, and exercises in Sections 1.17–1.20.

    1.2 Why control?

    Two palpable and most compelling reasons to control something are: (1) To keep a variable near a constant target value or in a target interval, also called regulation, e.g., the speed of rotation of a computer disk, moisture content of paper in paper making, chemical composition of products (drugs, pills, etc.), physical characteristics of products (width of paper, metal plates, wood, etc.), maintaining a minimum rate of data transfer in communication systems (including inside electronic chips and computers), and (2) To keep a variable nigh a varying target value, also named tracking or servo, e.g., robot manipulator along a trajectory, homing devices (missiles, rockets, and aircraft), and target tracking by antennas and cameras. We should add that sometimes the use of the term regulation is confined to the class of problems whose target value is zero, and the term tracking is used for the class of problems whose target values are nonzero, either constant or nonconstant.
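    The regulation/tracking distinction can be made concrete with a toy simulation (a sketch of our own with made-up numbers, not an example from the book): a first-order discrete-time plant under proportional feedback, once with a constant target (regulation) and once with a ramp target (tracking).

```python
# Toy plant x[k+1] = a*x[k] + b*u[k] with proportional feedback u = K*(r - x).
# The parameters a, b, K are hypothetical, chosen only for illustration.
a, b, K = 0.9, 0.5, 1.5

def simulate(ref, steps=300):
    """Run the feedback loop; return the final output and final target."""
    x = 0.0
    for k in range(steps):
        r = ref(k)
        u = K * (r - x)      # proportional feedback law
        x = a * x + b * u    # plant update
    return x, r

# Regulation: constant target r = 1. The loop settles near (not exactly at) r:
# the fixed point is x* = b*K/(1 - a + b*K) = 0.75/0.85 ≈ 0.882, i.e., a nonzero
# steady-state error, which later chapters show how to remove (e.g., with
# integral action).
x_reg, r_reg = simulate(lambda k: 1.0)

# Tracking: ramp target r = 0.01*k. The output grows but falls increasingly
# behind the ramp, hinting at why tracking demands more of the controller.
x_trk, r_trk = simulate(lambda k: 0.01 * k)
```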

    The most advanced control system is the human body which is replete with control mechanisms. Regulation includes temperature, blood sugar, etc., whereas tracking comprises daily activities, reading this book, going to work and pursuing our daily schedule, etc. etc.—God must be the supreme control engineer! (Note that in the previous sentence the tracking problems are beyond the conventional tracking problems in control systems as exemplified in the above paragraph.)

    There are numerous reasons to control a system. In the broad sense all these reasons fall into the following categories:² stability (in some sense safety), performance, failure tolerance, robustness, optimality, and linearity. Roughly speaking, and in most cases, a system is said to be stable (or in some sense safe) if it just works, either well or poorly, and thus in particular does not explode, break, burn, etc. (See item 1 of Further Readings.) Performance encompasses regulation/tracking, time-response specifications such as rise time, settling time, steady-state error, and decoupling; that is, the system must work well and as desired. By failure tolerance (or reliability) we mean that the system works and keeps its stability/performance as much as possible unchanged if a failure (e.g., in a sensor) occurs. Robustness (or low sensitivity, in the literal meaning) means tolerating uncertainties in the system model, parameters, and signals. Optimality refers to optimality/minimality in time/energy/pollution/side effects/etc. of a control system. Optimality in the sense of economics (or savings) matters especially in continuous (in the sense of long-running) processes, mass production and usage, and large-scale industries; for instance, small savings in the gas use (or, in general, optimizing and enhancing the efficiency) of cars, public transportation services, power plants, and printed circuit boards of mass electronic products will result in huge economic savings. Linearity of the system has the common meaning as in the mathematics literature and we shall deal with it in Chapter 2. High-gain feedback control pushes a nonlinear system towards linearity.³

    1.3 History of control

    History of control is not apart from the history of science and humankind in general. This history is interleaved with many unfortunate periods. In the past millennia and centuries abundant historical evidence and scientific achievements embraced in books and libraries were burned to ashes and people aplenty—including scientists—were massacred when a country was invaded. This is in particular true of Iran. Before about 650 AD Iran was the most advanced country in the world. Thanks to practice and teachings of the divine religion, scientific research was firmly intertwined and entrenched in the educational system of the empire. Many foreign scientists and philosophers actually came to Iran for (continuing) their education. Astronomy, physics, chemistry, mathematics, medicine, arts, music, agriculture, industries (e.g., metal and textile), etc. were very advanced in the Iran of that time. In fact, scientists and musicians were highly revered by people and were amidst the nearest consultants to the Kings. Perhaps it can be best highlighted by mentioning that the oldest university of the world was instituted in Iran about 2 millennia ago – a millennium prior to the first in Europe. In particular, basic theory of control in the form of feedback mechanism and regulation was present in the technology of that time. In the aftermath of the invasion of Iran by Arab Moslems in about 650 AD (starting from many years prior to that) the situation changed. For a nice account of the un-nice history the committed reader is referred to such history books as History of Prophets and Kings (Al-Tabari, 1989), which is a millennium-old book. Anyway, Iran was under the seizure of foreigners. Nevertheless, despite the conditions Iranian people ambitiously and tenaciously stayed the course of their culture and tradition of scientific research, and abundant progress and achievements were made in the subsequent centuries – the country arose to prominence anew. 
This continued until about 1220 AD when the country was invaded afresh; this time by Mongol Moslems, and history was repeated. In recent centuries invasions of the country in the modern fashion have had a long-lasting hindering effect on its progress.

    Well, coming back to our theme of control after this eulogy and lamentation, as we shall shortly see in Section 1.8 the process of control can be divided into some steps. The serious attention to and formal formation of (parts of) some of these steps are rather new. On the other hand, contributions to some of the aforementioned steps have a long root in science and engineering – some millennia old. In recent centuries due to more interaction among countries, including Japan in the far east, there has been a rapid growth in science and technology. This has been enhanced after the discovery of the American and Australian continents. Alas the details of all the momentous achievements and contributions are not available – at least to us. With respect to feedback mechanism (whose definition will be given in this chapter) and the issue of stability, from among the contributions that we are aware of, some historical milestones are as follows: In the 17th century feedback mechanism was implicitly used by C. Drebbel in his invention of the incubator and thermostat. Around the middle of the 18th century L. Euler and J. Lagrange studied stability of mechanical systems. After the invention of the steam engine in the 18th century, the need for controlling the speed of the engine was felt. It was in 1769 that J. Watt invented the flyball governor for this purpose. The device was theoretically analyzed by J. C. Maxwell in 1868. About the same time a theory for regulators was supplied by I. A. Vyshnegradskii. In 1855, 1877, 1892, 1895, and 1898 C. Hermite, E. J. Routh, A. M. Lyapunov, A. Hurwitz, and H. Poincaré respectively proposed their methods for stability analysis/determination of dynamical systems (see also Appendix D). In 1910 E. A. Sperry developed the gyroscope and autopilot for airplanes. At this time control theory and practice was being developed in ad hoc manners in the US, Europe, and Russia. 
    In Russia it was mostly dominated by mathematicians dealing with differential equations describing the problem of interest. In Europe and the US electronic amplifier design was the main concern of engineers. Following this line of thought, the precursor for stability and robustness analysis and design methods stemmed from the work of physicist H. Barkhausen around 1921⁴ (see Chapter 6 for details), about two decades before its rediscovery and publicizing by H. W. Bode. One year later in 1922 N. Minorsky, who was concerned with automatic steering of ships, showed how stability of the system can be determined from the governing differential equations of the system. In 1923 S. Motora designed and implemented a stabilizer for ship attitude control. Five years later in 1927 negative feedback mechanism was used in the design of amplifiers by H. S. Black (see the following Section 1.4). Five years later in 1932 H. Nyquist introduced his criterion for the determination of the closed-loop stability of LTI systems from their open-loop data. Two years on, in 1934, H. L. Hazen introduced the term servomechanism for position tracking systems and proposed a relay feedback theory for it. In 1937 a remote (wireless) controller was designed and successfully implemented on a warship, the Settsu, in Japan. Wireless control of power lines and lamps, etc. was started in 1938 in Japan. In the same year of 1938 H. W. Bode introduced his Bode diagrams for the analysis and synthesis of control systems—and in the 1940s rediscovered and publicized the gain and phase margin concepts. In 1947 the Nichols chart⁵ and the bounded-input bounded-output stability concept were introduced by N. B. Nichols et al., and H. M. James & P. R. Weiss, respectively. In 1948 W. R. Evans proposed the root locus technique. In 1950 L. A. Zadeh published his classical results on linear time-varying systems. 
    In the late 1940s and early 1950s pioneering steps on discrete-time control were taken independently in Russia (Y. Z. Tsypkin), Europe (D. F. Lawden and R. H. Barker), and the US (W. Hurewicz, W. K. Linvill, J. R. Ragazzini, L. A. Zadeh, E. I. Jury). In 1952 L. A. Zadeh proffered a general mathematical theory for linear systems in function spaces. In 1953 and 1956 S. J. Mason proposed his signal flow graph method. On the other hand, the technological advancements of Germany resulted, by worldwide consensus, in the first international conference on automatic control being held in Heidelberg in 1956. At that conference the participants pledged to promote the formation of national organizations and also to found an international organization of automatic control, which led to the inauguration of the International Federation of Automatic Control (IFAC) in 1957. By the elapse of time the scientific community witnessed a hail and boom of further discoveries. The main technological achievements of the time were the successful launch of the first earth satellite, Sputnik, in 1957, and the landing of the first rocket on the moon in 1959, both by the Soviet Union. Subsequently the first IFAC world congress was held in Moscow in 1960. After many years of extensive research by the scientific community new classical results in the state-space domain were obtained in the 1960s. At the same time formal ramification to and formation of different branches of control theory (like optimal, nonlinear, robust, etc. control; as well as identification, filtering, etc.—see Section 1.16) took place in the late 1950s and early 1960s by distinctive contributions of the time whose minutiae go well beyond the scope of this book. Of course, pertinent results started to appear from at least one or two decades earlier. 
Some typical fundamental results in this category are the filtering theory of Wiener (1942) and its generalization by Zadeh and Ragazzini (1950) and Zadeh (1953), the initiation of cybernetics by N. Wiener in 1948 (see Section 1.16.1), the finite-time control problem by G. Kamenkov in 1953 (see Appendix D), the formulation and classical results on the identification theory by L. A. Zadeh in 1956 who coined the term identification (see Chapter 2), classical converse results on the Lyapunov stability theory by J. L. Massera and J. Kurzweil in 1949, 1950, and 1956, and the invariant stability principle by E. A. Barbashin and N. N. Krasovskii in 1959 and J. P. LaSalle in 1960.

    1.4 Why feedback?

    From one standpoint, there are two types of control: open-loop control and closed-loop or feedback control. These will be rigorously dealt with in the rest of this book, where the characteristics of each type are studied to a nicety. For the moment some general information will suffice.

    The structure of an open-loop control system is displayed in Fig. 1.1.

    Figure 1.1 Schematic of an open-loop control system.

    To expound why the input of the system, i.e., the reference input r, is chosen as the desired output yd, we start by asking how, in the first place, the system is to know what output we desire from it. The answer is lucid: we must tell it, and this is done through its input gate. This is the only way we can talk to the system, so r is chosen as yd.

    If we have accurate enough knowledge about the plant P, open-loop control might work, depending on the application. Some typical examples are traditional idle speed control of an engine, traditional printer, traditional traffic lights, traditional screen display in computer monitors, and traditional washing machine. We will shortly see in Section 1.10 how the controller C must be designed.
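    As a foretaste of the design issue (a toy sketch of our own with made-up gains, anticipating the discussion of Section 1.10), open-loop control of a pure-gain plant amounts to inverting the plant model: the output equals the reference exactly only when the model is exact, and any model error passes straight into the output with no way for the system to notice.

```python
def open_loop_output(P_true, P_model, r):
    """Open-loop control of a static-gain plant: C = 1/P_model, y = P_true*C*r."""
    u = r / P_model      # the controller inverts the *model* of the plant
    return P_true * u    # ...but the *true* plant acts on u

# Perfect model: the output is exactly the reference.
print(open_loop_output(2.0, 2.0, 1.0))   # 1.0

# 20% model error: the output misses the target by 25%, undetected.
print(open_loop_output(2.0, 1.6, 1.0))   # 1.25
```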

    The structure of closed-loop control or feedback control is given in Fig. 1.2.

    Figure 1.2 Schematic of a closed-loop control system, Left: Abstract, Right: Actual.

    In a closed-loop control system we wish to know whether the actual output coincides with the desired output or not. To verify this, we must collate them, i.e., we must form their difference, and this is done by feeding back the output to the input through a comparison element or the feedback element as shown in the right panel of Fig. 1.2.

    In contrast to open-loop control, if our knowledge of the plant P is not precise, e.g., in view of aging, friction, etc., open-loop control will not be that good, if applicable at all, or might not work at all. Thus we use feedback control. Of course, this lack of precise knowledge is not the only reason we use feedback. At times, and indeed often, we have to use feedback to make the system stable (i.e., roughly said, to have it work at all) in the first place, or to have it work as desired, as will be discussed later. Some examples include the closed-loop versions of the aforementioned open-loop control systems, robot manipulators, homing devices, and sophisticated industrial processes.

    We wrap up this part by stressing that for the sake of instructiveness we have introduced the variable yd in this section. Otherwise it is superfluous and we can simply use r in lieu of it, and this is actually what we do hereafter in this book with the understanding that r=yd. In Section 1.11 we shall introduce the disturbance response yd (along with yr and yn), which should not be mistaken for the desired output yd of this section.

    1.5 Magic of feedback

    The aforementioned six reasons of Section 1.2 including their details (like disturbance and noise rejection as parts of robustness) are why we use feedback and constitute what is achieved by feedback. We summarize this as the magic of feedback! The use of this term will be further clarified when we design some complicated closed-loop systems in future chapters. See also Exercise 6.17.
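    One facet of the magic can already be computed with nothing but static gains (a hypothetical numerical sketch of ours; the general theory comes in later chapters): in a high-gain negative feedback loop the closed-loop gain KP/(1+KP) barely moves even when the plant gain P moves a lot.

```python
def closed_loop_gain(K, P):
    """Static negative-feedback loop: closed-loop gain y/r = K*P / (1 + K*P)."""
    return K * P / (1 + K * P)

K = 100.0                           # hypothetical high controller gain
T_nom = closed_loop_gain(K, 1.0)    # nominal plant gain P = 1
T_low = closed_loop_gain(K, 0.8)    # plant gain has dropped by 20%

plant_change = 0.20
loop_change = (T_nom - T_low) / T_nom
# A 20% change in the plant moves the closed-loop gain by only about 0.25%:
# feedback has traded raw gain for insensitivity to the plant.
```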

    Feedback is the most pervasive system in nature—it is ubiquitous. When a speaker asks whether the audience can hear him/her clearly or not, so that he/she can speak louder or more quietly, the speaker invokes a feedback mechanism. When we ride a horse/bike or tread on a tightrope, we use a feedback mechanism to keep our balance. When a government exercises its controls (subsidy, loan, bond, interest rates, etc.) aiming at stabilizing the national economy, it attempts to avail itself of feedback. When the presence of employees or students is checked by a roll-call mechanism, a feedback is used. When a football coach applies his controls (tactics, replacements, etc.) in order to achieve his objectives, he invokes a feedback mechanism. When one’s body shivers or sweats, it uses a feedback mechanism to control its temperature. It is feedback by which the temperature in a building or oven is kept to within a small neighborhood of a desired setting. It is feedback by which unmanned aircraft fly without the intervention of human beings, etc. etc.

    Remark 1.1

    When we talk about control, either in this book or in the general literature on control, we mean closed-loop or feedback control, unless explicitly mentioned otherwise. For example, this course is called linear control systems not linear closed-loop control systems!

    1.6 Physical elements of a control system

    Among the first control systems of former millennia are the control of water level in tanks and dams and oil level in lamps. As a modern control system, a typical multitank fluid level and flow control system is delineated in Fig. 1.3.

    Figure 1.3 Schematic of a typical three-tank fluid level and flow control system.

    To consider all control systems and plants in a unified pictorial framework, we use Fig. 1.4 in which the physical elements of a general control system are shown (see also Problem 1.5). We should clarify that the plant may have several sub-plants as in Fig. 1.3 and that in general there are interactions between the sub-plants. A typical example is the electric power system of a country where there are several power stations, and numerous loads that consume the energy. The objective of control is to stabilize the voltage and frequency at the loads (which are the end plants). While such systems may be pictorially depicted as the interconnected version of several systems as in Fig. 1.4, we may simply use a single system as in Fig. 1.4 with the apprehension that the signals around the loop are not scalar valued but vector valued, and that each block (e.g., Amplifier) constitutes several sub-blocks. Therefore, a 3-input 2-output system, for example, is not depicted by a system which has 3 arrows entering its plant and 2 arrows coming out of its plant, but with a single vector-valued arrow at its input and output.

    Figure 1.4 Physical elements of a control system.

    It is noteworthy that the plant is not necessarily a physical one, like that in Fig. 1.3. The economy of a nation, for instance, is a nonphysical plant to control. For this plant, inputs are the monetary and fiscal policies, and outputs are the gross national product, growth rate, unemployment reduction (rate), inflation reduction (rate), etc. Sensors can be various relevant criteria and standards (observing whether the desired outputs are achieved, e.g., by considering the satisfaction of people or by statistical results/polling). Amplifiers and actuators can be support and encouragement for creative industries and job-creating companies (e.g., by levying a lower tax on them) and injection of some money from the government savings (e.g., of the oil revenue of oil-exporting countries). The latter case takes the form of subsidies on some goods or of founding low-price governmental shops so that private shop owners have to lower their prices as well. Finally, sensor noise refers to mistakes in polling and statistics, whereas disturbance may refer to fraud, war, or serious natural disasters, like a catastrophic flood or earthquake, which considerably impact the economy. (The arguments will be complemented in Example 10.17 of Chapter 10.)

    In the case of a mechanical plant, such as moving a load by a robot manipulator, the manipulator is rotated by a motor, which is the actuator. The control elements are a computer, a microprocessor, or even some CMOS/TTL ICs. The output of these devices is in the range of volts (5, 12, 24, etc.) and milliamperes (e.g., 1–10). This is not enough to drive a motor (i.e., the actuator, needing, e.g., 220 V and 10 A), and thus an amplifier is needed in between. These are the power amplifiers we design in courses on electronics.
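    As a back-of-the-envelope illustration of why the amplifier is needed, the following sketch computes the voltage, current, and power gains implied by the figures quoted above. The numbers are the illustrative levels from the text, not the ratings of any specific device:

```python
# Illustrative levels from the text (not a specific design):
# a logic-level control signal must be amplified to drive a motor.
logic_voltage = 5.0    # V, typical TTL/CMOS output level
logic_current = 0.010  # A, i.e., 10 mA
motor_voltage = 220.0  # V, assumed motor rating
motor_current = 10.0   # A, assumed motor rating

voltage_gain = motor_voltage / logic_voltage  # about 44x
current_gain = motor_current / logic_current  # about 1000x
power_gain = (motor_voltage * motor_current) / (logic_voltage * logic_current)

print(voltage_gain, current_gain, power_gain)
```

The power gain (on the order of tens of thousands) is what makes a dedicated power amplifier stage between the control elements and the actuator unavoidable.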

    All in all, the control system simply has four parts: Control Elements (the core part of the controller, which makes the decisions and determines what signal should be applied to the plant), Amplifier/Actuator, Sensor, and Connections. These can be conceptualized, respectively, as the Brain, Muscles/Hands, Eye (the optical Sense, or other Senses), and Nerves of the control system.

    1.7 Abstract elements of a control system

    The abstract elements of a control system are displayed in Fig. 1.5, where the blocks denote the controller, plant, and sensor. (We do not use the letter S for the sensor.) The signals denote the setpoint (which is the desired output), the control signal, the actual output, and the measured output. Here as well we note that the signals around the loop are in general vector valued and the blocks may comprise several sub-blocks. (It is also possible to incorporate the amplifier and actuator in the plant.)

    Figure 1.5 Abstract elements of a control system.

    Note that even if r is constant, the measured output is not the actual output at all times, but only roughly after the settling time of the sensor. We will talk more about delay in Chapter 2, where we see that, at least in classical physics, all actual systems are Lipschitz and in fact beyond that: the slope of the output at the starting time is zero. In fact, in practice, every system (including any sensor) has some delay. Moreover, note that the representation of measurement/sensor noise also includes measurement/sensor error, if any.
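    The point about the sensor settling time can be illustrated numerically. The sketch below is not from the book: it assumes a first-order sensor with an illustrative time constant tau, discretized by forward Euler, measuring a constant actual output. It reports when the measurement first enters a 2% band around the true value, which for a first-order lag happens after roughly four time constants:

```python
# Sketch (illustrative assumptions): a first-order sensor with time
# constant tau measuring a constant actual output y = 1. The measured
# output ym starts at 0 and approximates y only after roughly the
# sensor's settling time (about 4*tau for a first-order lag).
tau = 0.5   # s, assumed sensor time constant
dt = 0.001  # s, forward-Euler integration step
y = 1.0     # actual output, held constant
ym = 0.0    # measured output (sensor state)

t, settled_at = 0.0, None
while t < 5.0:
    ym += dt * (y - ym) / tau  # first-order lag: tau * ym' = y - ym
    t += dt
    if settled_at is None and abs(y - ym) < 0.02:  # within 2% band
        settled_at = t

print(round(settled_at, 2))  # close to 4*tau = 2.0 s
```

Until `settled_at`, the controller is effectively acting on stale information; this is one concrete way the sensor's own dynamics enter the loop.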

    1.8 Design process

    A pellucid and elaborate description of the design procedure is not possible at this stage of the book; it is deferred to Section 1.16. For the moment it suffices to say that, roughly speaking, the process can largely be divided into four rather independent steps, namely:

    • Modeling

    • Formulation

    • Computation

    • Implementation

    Modeling refers to finding a model that represents the problem at hand. Formulation refers to formulating the problem and finding the solution in the form of a formula (typically for the control signal). Computation refers to numerically producing the answer of the former step.⁸ Implementation means designing (and possibly manufacturing) and implementing a device (or rather a system, a combination of different devices) that performs the task. To be more specific, note that the minimal objective of control is to keep y as close as possible to r. That is, tracking; and note that tracking includes regulation. There may be more: in fact, there are often some time specifications, energy specifications, etc., as well. Regardless of what exactly the control objectives are, the above-mentioned design process is further divided into two distinct parts dealing respectively with the physical and abstract levels, as discussed in Sections 1.6 and 1.7.
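    The tracking objective can be made concrete with a minimal simulation. The sketch below is illustrative only: it assumes an integrator plant y' = u and a proportional control law u = K(r - y), none of which come from the book, and drives y toward a constant setpoint r:

```python
# Minimal sketch of the tracking objective: keep y close to r.
# Assumed plant: an integrator, y' = u. Assumed law: u = K*(r - y).
K = 5.0   # proportional gain (illustrative)
r = 1.0   # constant setpoint (desired output)
y = 0.0   # plant output, initially far from r
dt = 0.001

for _ in range(int(5.0 / dt)):  # simulate 5 s with forward Euler
    u = K * (r - y)  # control signal computed from the error
    y += dt * u      # integrator plant: y' = u

print(abs(r - y) < 1e-3)  # True: output has converged to the setpoint
```

Since the setpoint here is constant, this particular run is a regulation problem, the special case of tracking mentioned above.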

    At the abstract level, the design requires (1) a mathematical description of the plant (formula/operator),¹¹ which is called a mathematical model of the plant, (2) the characteristics of r and d, and (3) a structure for the controller, e.g., a proportional-integral-derivative (PID) controller. However, if we opt for digital control (which is certainly the case at the present time), then the controller is implemented digitally.
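    As a hedged sketch of the PID structure named above in discrete time (all gains, the sample time, and the first-order test plant are illustrative assumptions, not values from the book):

```python
# Discrete-time PID control law (illustrative sketch, not the book's design).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, r, y):
        """Return the control signal u for setpoint r and measurement y."""
        e = r - y
        self.integral += e * self.dt                  # integral term
        derivative = (e - self.prev_error) / self.dt  # derivative term
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * derivative

# Close the loop around an assumed first-order plant tau*y' + y = u.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y, tau = 0.0, 0.5
for _ in range(2000):                 # 20 s of simulated time
    u = pid.update(r=1.0, y=y)
    y += pid.dt * (u - y) / tau       # forward-Euler plant update

print(abs(1.0 - y) < 1e-2)  # True: integral action removes steady-state error
```

On a digital device, `update` would run once per sample period, turning the controller into a short piece of software rather than analog circuitry.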
