Journal of Northeastern University(Natural Science) ›› 2024, Vol. 45 ›› Issue (10): 1386-1393.DOI: 10.12068/j.issn.1005-3026.2024.10.003

• Information & Control •

Application of Reinforcement Learning Based on Hybrid Model in Optimal Control of Flotation Process

Run-da JIA1, Dong-hao ZHANG1, Jun ZHENG1, Kang LI2

  1. School of Information Science & Engineering, Northeastern University, Shenyang 110819, China
    2. National (Beijing) Key Laboratory of Mining and Metallurgical Process Automatic Control Technology, Mining and Metallurgical Technology Group Co., Ltd., Beijing 100160, China
  • Received: 2023-05-29 Online: 2024-10-31 Published: 2024-12-31
  • Contact: Run-da JIA
  • About author: JIA Run-da, E-mail: jiarunda@ise.neu.edu.cn

Abstract:

Traditional optimization control methods struggle to make accurate and rapid decisions when the state of the flotation process changes, resulting in significant fluctuations in the concentrate grade and tailings grade and unstable product quality. In addition, the concentrate grade of the flotation process is difficult to measure online, which further limits the practicality of such methods. To address these problems, a hybrid model is used to model the flotation process, and a reinforcement learning algorithm based on safety augmented value estimation from demonstrations (SAVED) is used to control the size distribution of flotation overflow bubbles, thereby indirectly controlling the concentrate grade and tailings grade. The effectiveness of the proposed algorithm is verified through simulation experiments. Compared with manual experience and data-driven models, the SAVED algorithm based on the hybrid model achieves better control performance while satisfying safety constraints.
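To make the control idea in the abstract concrete, the sketch below shows a generic constrained, sampling-based planner in the spirit of SAVED: rollouts through a learned one-step model are scored with a value estimate and discarded if they leave a safe region, and the first action of the best sequence is applied. This is a minimal illustrative sketch, not the paper's method; the `dynamics`, `value_estimate`, and `is_safe` functions, the state/action dimensions, and the bounds are hypothetical placeholders standing in for the paper's hybrid flotation model, demonstration-trained value function, and bubble-size safety constraints.

```python
# Illustrative sketch of constrained, model-based action selection (SAVED-style):
# sample action sequences, roll them through a learned model, keep only safe
# rollouts, and rank them by an estimated value. All components are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def dynamics(state, action):
    # Placeholder one-step model (stands in for the hybrid flotation model).
    return 0.95 * state + 0.1 * action

def value_estimate(state):
    # Placeholder value function (stands in for the demonstration-trained value).
    return -np.sum((state - 1.0) ** 2)

def is_safe(state):
    # Placeholder safety constraint, e.g. a bubble-size proxy kept within bounds.
    return np.all(np.abs(state) <= 2.0)

def plan_action(state, horizon=5, n_samples=256, n_elite=32, n_iters=3):
    """Cross-entropy-style search over action sequences; returns the first action."""
    dim = state.shape[0]
    mean = np.zeros((horizon, dim))
    std = np.ones((horizon, dim))
    for _ in range(n_iters):
        seqs = mean + std * rng.standard_normal((n_samples, horizon, dim))
        scores = np.full(n_samples, -np.inf)
        for i, seq in enumerate(seqs):
            s, safe = state.copy(), True
            for a in seq:
                s = dynamics(s, np.clip(a, -1.0, 1.0))
                if not is_safe(s):  # reject rollouts that violate the safety constraint
                    safe = False
                    break
            if safe:
                scores[i] = value_estimate(s)
        elite = seqs[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return np.clip(mean[0], -1.0, 1.0)

if __name__ == "__main__":
    state = np.array([0.2, -0.5])
    print("planned action:", plan_action(state))
```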

Key words: flotation process, reinforcement learning, hybrid model, safety constraints, optimal control
