Paper Title
Robust Fuzzy Q-Learning-Based Strictly Negative Imaginary Tracking Controllers for the Uncertain Quadrotor Systems
Paper Authors
Paper Abstract
Quadrotors are among the most popular unmanned aerial vehicles (UAVs) due to their versatility and simple design. However, tuning the gains of quadrotor flight controllers can be laborious, and accurate, stable trajectory tracking can be difficult to maintain under exogenous disturbances and uncertain system parameters. This paper introduces a novel robust and adaptive control synthesis methodology for the attitude and altitude stabilization of a quadrotor robot. The developed method is based on fuzzy reinforcement learning and the Strictly Negative Imaginary (SNI) property. The first stage of our control approach is to transform the nonlinear quadrotor system into an equivalent Negative Imaginary (NI) linear model by means of the feedback linearization (FL) technique. The second stage is to design a control scheme that adapts the SNI controller gains online via fuzzy Q-learning, inspired by biological learning. The proposed controller does not require any prior training. The performance of the designed controller is compared with that of a fixed-gain SNI controller, a fuzzy-SNI controller, and a conventional PID controller in a series of numerical simulations. Furthermore, the stability of the proposed controller and the adaptive laws is proven using the NI stability theorem.
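As a rough illustration of the second stage described in the abstract, the sketch below shows a generic fuzzy Q-learning loop that adapts a single controller gain online. It is not the paper's implementation: the membership partition, action set, reward, and first-order surrogate plant are hypothetical placeholders chosen only to make the example self-contained and runnable.

```python
# Illustrative sketch only (not from the paper): a generic fuzzy Q-learning loop
# that adapts a scalar controller gain online. The membership partition, action
# set, reward, and surrogate plant are hypothetical choices for this example.
import numpy as np

rng = np.random.default_rng(0)

# Triangular membership functions over the tracking error (hypothetical partition).
centers = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # rule centers
width = 0.5                                        # half-width of each triangle

def firing_strengths(error):
    """Normalized rule activations for a scalar tracking error."""
    mu = np.maximum(0.0, 1.0 - np.abs(error - centers) / width)
    s = mu.sum()
    return mu / s if s > 0 else np.ones_like(mu) / mu.size

actions = np.array([-0.05, 0.0, 0.05])             # candidate gain increments per rule
Q = np.zeros((centers.size, actions.size))         # Q-table: rules x actions
alpha, gamma, eps = 0.1, 0.9, 0.1                  # learning rate, discount, exploration

gain, state, reference = 1.0, 0.0, 1.0             # initial gain and toy plant state
for _ in range(200):
    error = reference - state
    phi = firing_strengths(error)

    # Epsilon-greedy action index per fuzzy rule.
    greedy = Q.argmax(axis=1)
    explore = rng.integers(actions.size, size=centers.size)
    idx = np.where(rng.random(centers.size) < eps, explore, greedy)

    # Defuzzified gain increment, then one step of a first-order surrogate plant
    # under proportional control (stands in for the quadrotor loop).
    gain = max(0.0, gain + float((phi * actions[idx]).sum()))
    state += 0.1 * gain * error
    next_error = reference - state
    reward = -next_error**2                        # penalize squared tracking error

    # Fuzzy Q-learning TD update, crediting each rule by its firing strength.
    phi_next = firing_strengths(next_error)
    q_taken = (phi * Q[np.arange(centers.size), idx]).sum()
    q_next = (phi_next * Q.max(axis=1)).sum()
    td = reward + gamma * q_next - q_taken
    Q[np.arange(centers.size), idx] += alpha * td * phi

print(f"adapted gain: {gain:.3f}, final tracking error: {reference - state:.4f}")
```

The per-rule action selection and the firing-strength-weighted temporal-difference update follow the standard fuzzy Q-learning pattern; how the paper couples such updates to the SNI controller gains and guarantees closed-loop stability via the NI theorem is detailed in the paper itself.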