Péter Gáspár, Zoltán Szabó, József Bokor

Discrete Feedback Systems 2.

Modern Control


A.9 Dynamic Programming

Dynamic programming is a mathematical technique for solving sequential decision and optimization problems. It was developed by Richard Bellman and his associates in the 1950s, and it is especially important in stochastic optimization problems. Dynamic programming is based on the principle of optimality, which allows sequential optimization/decision problems to be solved in a recursive manner. A strategy, or optimizer, that solves the optimization/decision problem is called an optimal policy. Principle of Optimality: an optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with respect to the state resulting from the first decision.
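The recursive solution implied by the principle of optimality can be sketched as a backward Bellman recursion on a finite-horizon problem: the cost-to-go at each stage is computed from the already-optimal cost-to-go of the following stage. The sketch below uses a small finite-state problem with hypothetical stage costs, chosen only for illustration; the function names and data layout are assumptions, not from the text.

```python
# Minimal sketch of dynamic programming by backward induction on a
# finite-horizon problem. cost[t][s][a] is the (hypothetical) stage cost of
# taking action a in state s at time t; action a moves the system to state a.

def backward_dp(cost, terminal):
    """Bellman backward recursion:
        V_T(s)  = terminal[s]
        V_t(s)  = min_a ( cost[t][s][a] + V_{t+1}(a) )
    Returns the value functions V[0..T] and the optimal policy per stage.
    """
    T = len(cost)          # horizon (number of decision stages)
    n = len(terminal)      # number of states
    V = [[0.0] * n for _ in range(T + 1)]
    policy = [[0] * n for _ in range(T)]
    V[T] = list(terminal)
    for t in range(T - 1, -1, -1):          # sweep backwards in time
        for s in range(n):
            # Optimal remaining decisions are already encoded in V[t+1],
            # so only the current action has to be optimized here.
            best_a = min(range(n), key=lambda a: cost[t][s][a] + V[t + 1][a])
            policy[t][s] = best_a
            V[t][s] = cost[t][s][best_a] + V[t + 1][best_a]
    return V, policy

# Two-stage, two-state example (all numbers hypothetical):
cost = [
    [[1.0, 4.0], [2.0, 0.5]],   # t = 0: cost[s][a]
    [[3.0, 1.0], [5.0, 2.0]],   # t = 1
]
terminal = [0.0, 1.0]
V, policy = backward_dp(cost, terminal)
print(V[0])       # optimal cost-to-go from each initial state -> [3.0, 3.5]
print(policy[0])  # optimal first action from each initial state -> [0, 1]
```

Note how the recursion never re-examines past decisions: once `V[t+1]` is known, the tail problem is solved, which is exactly the principle of optimality stated above.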




Publisher: Akadémiai Kiadó

Year of online publication: 2019

ISBN: 978 963 454 373 2

The classical control theory and methods presented in the first volume are based on a simple input-output description of the plant, expressed as a transfer function. This limits the design to single-input single-output systems and allows only limited control of the closed-loop behaviour when feedback control is used. The need for modern linear control typically arises when working with models that are complex or multiple-input multiple-output, or when optimization of performance is a concern. Modern control theory revolves around the so-called state-space description. The state-variable representation of dynamic systems is the basis of different and very direct approaches applicable to the analysis and design of a wide range of practical control problems. To complete the design workflow, an introduction to system identification theory is also given.

Citation: https://mersz.hu/gaspar-szabo-bokor-discrete-feedback-systems-2//

