Advances in Reinforcement Learning: A Comprehensive Review of Real-World Applications in Industry
K Ussenko* and VI Goncharov
Division for Automation and Robotics, School of Computer Science and Robotics, Tomsk Polytechnic University, Russian Federation
*Corresponding Author: K Ussenko, Division for Automation and Robotics, School of Computer Science and Robotics, Tomsk Polytechnic University, Russian Federation.
Received: March 15, 2023; Published: April 11, 2023
Abstract
This paper investigates the current feasibility of applying reinforcement learning algorithms in the industrial sector. Although many studies have demonstrated the success of these algorithms in simulation or on isolated real-world objects, there is little research examining their broader deployment in real-world systems. In this study, we identify the obstacles that must be overcome before the potential benefits of reinforcement learning can be fully realized in practical applications, and we provide a thorough overview of the existing literature aimed at tackling these challenges.
Keywords: Reinforcement Learning; Deep Learning; Sim-to-real; Engineering; Artificial Intelligence; Control; Robotics; Autonomous Control