International Journal of Scientific & Technology Research


IJSTR >> Volume 6 - Issue 2, February 2017 Edition


Website: http://www.ijstr.org

ISSN 2277-8616



Application of Reinforcement Learning in Heading Control of a Fixed Wing UAV Using X-Plane Platform


 

AUTHOR(S)

Kimathi, S., Kang'ethe, S., Kihato, P.

 

KEYWORDS

UAV, Reinforcement Learning, PID, X-Plane

 

ABSTRACT

Heading control of an Unmanned Aerial Vehicle (UAV) is a vital operation of an autopilot system, executed by control algorithms that govern the aircraft's direction and navigation. Most commonly available autopilots employ Proportional-Integral-Derivative (PID) based heading controllers. In this paper we propose an online adaptive reinforcement learning heading controller. The controller is designed in Matlab/Simulink and used to steer a UAV in the X-Plane test platform, where its performance is demonstrated through real-time simulations and compared against that of a PID controller. The results show that the proposed method outperforms a well-tuned PID controller.
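The comparison the abstract describes can be made concrete with a toy sketch (Python rather than the paper's Matlab/Simulink setup; the plant model, PID gains, error discretisation, and reward below are illustrative assumptions, not the paper's actual controllers): a PID heading loop next to a minimal tabular Q-learning heading controller, both acting on a simplified first-order heading plant.

```python
import math
import random

def wrap_angle(a):
    """Wrap an angle to the interval [-pi, pi)."""
    return (a + math.pi) % (2 * math.pi) - math.pi

class PIDHeading:
    """PID heading controller with hypothetical gains (not from the paper)."""
    def __init__(self, kp=1.2, ki=0.05, kd=0.3, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, heading, target):
        err = wrap_angle(target - heading)
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate_pid(target=math.radians(45), steps=300, dt=0.1):
    """Run the PID loop on a toy first-order plant: heading rate = command."""
    ctrl = PIDHeading(dt=dt)
    heading = 0.0
    for _ in range(steps):
        heading = wrap_angle(heading + ctrl.step(heading, target) * dt)
    return heading

def train_q_heading(episodes=200, steps=150, dt=0.1):
    """Tabular Q-learning heading controller on the same toy plant
    (illustrative stand-in for the paper's online adaptive scheme).
    State: discretised heading error; actions: fixed turn-rate commands."""
    n_bins, actions = 21, [-1.0, -0.3, 0.0, 0.3, 1.0]  # turn rates, rad/s
    q = [[0.0] * len(actions) for _ in range(n_bins)]
    alpha, gamma, eps = 0.2, 0.95, 0.1

    def bin_of(err):
        x = (err + math.pi) / (2 * math.pi)  # map [-pi, pi) -> [0, 1)
        return min(n_bins - 1, max(0, int(x * n_bins)))

    random.seed(0)
    for _ in range(episodes):
        heading, target = random.uniform(-math.pi, math.pi), 0.0
        s = bin_of(wrap_angle(target - heading))
        for _ in range(steps):
            a = (random.randrange(len(actions)) if random.random() < eps
                 else max(range(len(actions)), key=lambda i: q[s][i]))
            heading = wrap_angle(heading + actions[a] * dt)
            err = wrap_angle(target - heading)
            s2 = bin_of(err)
            # Reward penalises heading error; standard Q-learning update.
            q[s][a] += alpha * (-abs(err) + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q, actions, bin_of
```

In the paper's setup the plant is the X-Plane flight model driven from Simulink; here both controllers act on the same simplified dynamics purely to illustrate the two approaches being compared.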

 
