Traffic congestion is one of the most critical challenges facing modern urban centers. Traditional electronic toll collection systems attempt to mitigate this issue through pre-defined static congestion pricing, but such fixed schemes cannot respond to dynamic fluctuations in traffic demand. Dynamic congestion pricing has been identified as a promising alternative, yet its implementation is hindered by the computational complexity of optimizing long-term objectives and the need for coordination across the traffic network. To address these challenges, we propose a novel dynamic traffic congestion pricing model based on multi-agent reinforcement learning with a transformer architecture. The transformer's encoder-decoder structure recasts the multi-agent reinforcement learning problem as a sequence modeling task. Drawing on insights from research on graph transformers, our model incorporates agent structure and positional encoding to improve adaptability to traffic flow dynamics and coordination across the network. We developed a microsimulation-based environment to implement a discrete toll-rate congestion pricing scheme on real urban roads. Extensive experiments across diverse traffic demand scenarios demonstrate substantial improvements in congestion metrics and reductions in travel time, effectively alleviating traffic congestion.
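To make the "agents as a sequence" idea concrete, the sketch below shows, in a toy stdlib-only form, how tolled road segments can be treated as tokens: each agent's traffic-state embedding is combined with a positional encoding and passed through scaled dot-product self-attention, so every agent's discrete toll-rate choice can depend on the states of all others. All names, dimensions, and random inputs here are illustrative assumptions, not the paper's actual model or parameters.

```python
import math
import random

random.seed(0)

N_AGENTS, D_MODEL, N_RATES = 4, 8, 3  # toy sizes (hypothetical)

def positional_encoding(n, d):
    """Sinusoidal encoding: one row per agent treated as a sequence token."""
    pe = []
    for pos in range(n):
        row = []
        for i in range(d):
            angle = pos / (10000 ** ((2 * (i // 2)) / d))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def softmax(row):
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def self_attention(x, wq, wk, wv):
    """Scaled dot-product attention: each agent attends to every agent."""
    q, k, v = matmul(x, wq), matmul(x, wk), matmul(x, wv)
    scale = math.sqrt(len(x[0]))
    scores = [softmax([sum(a * b for a, b in zip(qr, kr)) / scale
                       for kr in k]) for qr in q]
    return matmul(scores, v)

def rand_matrix(r, c):
    return [[random.gauss(0, 1) for _ in range(c)] for _ in range(r)]

# Random per-gantry traffic-state embeddings stand in for real observations.
states = rand_matrix(N_AGENTS, D_MODEL)
pe = positional_encoding(N_AGENTS, D_MODEL)
x = [[s + p for s, p in zip(srow, prow)] for srow, prow in zip(states, pe)]

wq, wk, wv = (rand_matrix(D_MODEL, D_MODEL) for _ in range(3))
w_out = rand_matrix(D_MODEL, N_RATES)

logits = matmul(self_attention(x, wq, wk, wv), w_out)
# Greedy decode: one discrete toll-rate index per agent.
toll_levels = [row.index(max(row)) for row in logits]
print(toll_levels)
```

In a trained policy the weights would be learned and the greedy argmax would typically be replaced by autoregressive decoding, but the structural point survives: coordination emerges because each agent's toll decision is conditioned, through attention, on the whole network's state.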
Copyright: © 2024 Lu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.