Flow: a deep reinforcement learning framework for mixed-autonomy traffic

Flow leverages state-of-the-art deep RL libraries and the open-source traffic microsimulator SUMO, enabling reinforcement learning to be used to design and train controllers in traffic settings.

Flow was developed at the University of California, Berkeley.
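To give a flavor of how experiments are specified, the sketch below sets up a mixed-autonomy ring road in which human drivers follow the IDM car-following model and a single RL-controlled vehicle can be trained to dissipate stop-and-go waves. The module paths and class names (RingNetwork, WaveAttenuationPOEnv, the flow_params layout) are taken from Flow's tutorials and may differ between Flow releases, so treat this as an illustrative sketch rather than canonical usage.

```python
# A minimal sketch of a mixed-autonomy ring-road experiment in Flow.
# Module paths and class names follow Flow's tutorials and may differ
# between Flow releases; treat them as illustrative, not canonical.
from flow.controllers import IDMController, RLController, ContinuousRouter
from flow.core.params import (SumoParams, EnvParams, NetParams,
                              InitialConfig, VehicleParams)
from flow.networks.ring import RingNetwork, ADDITIONAL_NET_PARAMS
from flow.envs.ring.wave_attenuation import (WaveAttenuationPOEnv,
                                             ADDITIONAL_ENV_PARAMS)

# 21 human drivers following the IDM car-following model, plus one
# RL-controlled autonomous vehicle.
vehicles = VehicleParams()
vehicles.add("human",
             acceleration_controller=(IDMController, {}),
             routing_controller=(ContinuousRouter, {}),
             num_vehicles=21)
vehicles.add("rl",
             acceleration_controller=(RLController, {}),
             routing_controller=(ContinuousRouter, {}),
             num_vehicles=1)

# This dictionary is what Flow's example training scripts consume to
# build a Gym-compatible environment backed by a SUMO simulation.
flow_params = dict(
    exp_tag="ring_wave_attenuation",
    env_name=WaveAttenuationPOEnv,
    network=RingNetwork,
    simulator="traci",
    sim=SumoParams(sim_step=0.1, render=False),
    env=EnvParams(horizon=3000, additional_params=ADDITIONAL_ENV_PARAMS),
    net=NetParams(additional_params=ADDITIONAL_NET_PARAMS),
    veh=vehicles,
    initial=InitialConfig(spacing="uniform"),
)
```

The environment built from such a specification exposes the standard Gym reset/step interface, so the AV's policy can be trained with the deep RL libraries Flow integrates with, such as RLlib.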

Results

Below are successful controllers developed with Flow. For more details, check out our gallery.

Phantom shockwave dissipation on a ring



Intersection control



Bottleneck control

Inspired by the rapid reduction in the number of lanes on the San Francisco-Oakland Bay Bridge, we study a bottleneck in which four lanes merge down to two and then to one.

We demonstrate that the AVs learn a strategy that increases the effective outflow at high inflows and performs competitively with ramp metering.

Control structure of the bottleneck (segment scales are distorted for visualization).
Without control, congestion rapidly forms in the bottleneck.
With AV control, the outflow at high inflows is improved by 25%.
Inflow-outflow curves for AV control vs. ramp metering: at high inflows the two perform comparably.
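
As a rough illustration of the outflow metric used in these comparisons (this is not Flow's own implementation), the sketch below counts vehicles exiting a SUMO simulation via the TraCI Python API and converts the count to vehicles per hour; the configuration file name, horizon, and step length are placeholders.

```python
# A rough sketch (not Flow's implementation) of measuring bottleneck
# outflow directly from SUMO via TraCI: count the vehicles that leave
# the network and convert to vehicles per hour. The config file name,
# horizon, and step length below are placeholders.
import traci

def measure_outflow(sumo_cfg="bottleneck.sumocfg",
                    horizon_steps=7200, step_length=0.5):
    """Run a SUMO simulation and return its outflow in veh/hour."""
    traci.start(["sumo", "-c", sumo_cfg, "--step-length", str(step_length)])
    exited = 0
    for _ in range(horizon_steps):
        traci.simulationStep()
        # Vehicles that completed their route (left the network) this step.
        exited += traci.simulation.getArrivedNumber()
    traci.close()
    hours = horizon_steps * step_length / 3600.0
    return exited / hours
```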


On-ramp shockwave dissipation