Journal of Northeastern University(Natural Science) ›› 2023, Vol. 44 ›› Issue (10): 1369-1376.DOI: 10.12068/j.issn.1005-3026.2023.10.001

• Information & Control •

Automatic Lane Change Decision Model Based on Dueling Double Deep Q-network

ZHANG Xue-feng, WANG Zhao-yi   

  1. School of Sciences, Northeastern University, Shenyang 110819, China.
  • Published:2023-10-27
  • Contact: ZHANG Xue-feng

Abstract: Automatic lane changing requires a vehicle to drive as fast as possible while ensuring that no collisions occur. Conventional rule-based control, however, is not robust enough to handle unexpected situations or to respond to lane separation. To address these problems, an automatic lane change decision model based on the dueling double deep Q-network (D3QN) reinforcement learning algorithm is proposed. The algorithm processes the surrounding-vehicle information fed back by the Internet of Vehicles and then selects actions through its policy. After the actions are executed, the neural network is trained according to a given reward function, and the automatic lane change strategy is finally realized by the trained network together with reinforcement learning. Simulation experiments are carried out in a three-lane environment built with Python and the vehicle simulation software CarMaker. The results show that the proposed algorithm achieves good control performance, demonstrating that it is feasible and effective.
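The two ingredients named in the title can be sketched briefly: a dueling head decomposes the Q-value into a state value and mean-centered advantages, and the double-DQN target lets the online network select the next action while the target network evaluates it. The sketch below is a minimal NumPy illustration of those two update rules only, not the paper's network; the feature size and the three-action set (keep lane, change left, change right) are illustrative assumptions for a three-lane scenario.

```python
import numpy as np

# Hypothetical sizes: 4 state features (e.g. gaps and relative speeds
# from the connected-vehicle feed) and 3 actions (keep lane, change
# left, change right). These are assumptions, not the paper's design.
rng = np.random.default_rng(0)
N_FEATURES, N_ACTIONS = 4, 3

def dueling_q(feat, w_value, w_adv):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    value = feat @ w_value        # scalar state value V(s)
    adv = feat @ w_adv            # advantage A(s, a), one entry per action
    return value + adv - adv.mean()

def double_dqn_target(reward, gamma, next_feat, online_w, target_w, done):
    """Double DQN target: the online net picks a*, the target net scores it."""
    if done:
        return reward
    a_star = int(np.argmax(dueling_q(next_feat, *online_w)))
    return reward + gamma * dueling_q(next_feat, *target_w)[a_star]

# Random linear "networks" stand in for the trained deep networks.
online = (rng.normal(size=N_FEATURES), rng.normal(size=(N_FEATURES, N_ACTIONS)))
target = (rng.normal(size=N_FEATURES), rng.normal(size=(N_FEATURES, N_ACTIONS)))
state = rng.normal(size=N_FEATURES)

q = dueling_q(state, *online)                       # Q-values for the 3 maneuvers
y = double_dqn_target(reward=1.0, gamma=0.99, next_feat=state,
                      online_w=online, target_w=target, done=False)
```

In training, the network weights would be updated to reduce the squared error between the predicted Q-value of the executed action and the target `y`; the decoupling of action selection (online net) from action evaluation (target net) is what distinguishes double DQN from the vanilla update.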

Key words: lane change; driverless vehicles; reinforcement learning; deep learning; deep reinforcement learning
