Intelligent Control of Unmanned Aerial Vehicles in Areas of High Turbulence
Unmanned aerial vehicles (UAVs) are gaining immense popularity and have found applications in delivery services, monitoring tasks, and military operations. One question, however, remains open: how well can a drone perform in an area of extreme turbulence? This study addresses that question by comparing the drone's performance under harsh weather conditions using conventional control mechanisms against an intelligent control, or learning, technique. This article demonstrates how Reinforcement Learning can be used to train a quadcopter to perform a given task in turbulent conditions while maintaining stability. The trained quadcopter is assigned a mission and guided from a start point to a desired destination while remaining stable, using minimal energy, and completing the mission in the shortest time, without explicit or direct commands to the drone's control system. Specifically, we work with the Proximal Policy Optimization (PPO) Reinforcement Learning algorithm, which seeks an optimal policy that maximizes the expected reward for an agent, or group of agents, interacting with the environment. PPO belongs to the class of RL algorithms known as Policy Gradient methods, which optimize the policy directly rather than a state or state-action value function. We examine the behavior of the agent when the Dryden Turbulence Model is introduced.
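As a minimal sketch of the algorithm named above (not the implementation used in this study), PPO constrains each policy update through a clipped surrogate objective: the probability ratio between the new and old policies is clipped to a small interval around 1, which prevents destructively large updates. The function name and the NumPy formulation below are illustrative assumptions.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """PPO clipped surrogate objective for a batch of timesteps.

    ratio:     pi_theta(a|s) / pi_theta_old(a|s), shape (T,)
    advantage: estimated advantage A_t for each timestep, shape (T,)
    epsilon:   clip range (0.2 is the value suggested in the PPO paper)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
    # Taking the elementwise minimum gives a pessimistic lower bound,
    # removing any incentive to push the ratio far from 1 in one update.
    return np.mean(np.minimum(unclipped, clipped))
```

In practice this objective is maximized by gradient ascent on the policy parameters; the ratio of 1.0 at the start of each update means the first gradient step is identical to a vanilla policy gradient step, and the clipping only activates as the policy drifts.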