A simple learning agent interacting with an agent-based market model: Julia code
We consider the learning dynamics of a single reinforcement learning optimal execution trading agent when it interacts with an event-driven agent-based financial market model. Trading takes place asynchronously through a matching engine in event time. The optimal execution agent is studied across different initial order sizes and differently sized state spaces. The resulting impact on the agent-based model and the market is assessed using a calibration approach that explores changes in the empirical stylised facts and price impact curves. Convergence, volume trajectory and action trace plots are used to visualise the learning dynamics.
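To make the setup concrete, the sketch below shows a minimal tabular Q-learning loop for an optimal execution agent of the kind described above: the state is a (remaining inventory, time) bucket, the actions are child-order sizes, and the reward is a simple implementation-shortfall proxy against a toy price process. This is an illustrative Julia sketch only; the state encoding, action set, price dynamics and all parameter names here are assumptions and do not reflect the repository's actual implementation, which trades through the matching engine of the event-driven ABM.

```julia
# Illustrative sketch only: tabular Q-learning for an optimal execution agent.
# State = (remaining-inventory bucket, time bucket); actions = child-order sizes.
# The price process and reward below are toy stand-ins, not the ABM's dynamics.
using Random

const N_INV, N_TIME, N_ACT = 10, 10, 4      # state-space and action-space sizes (hypothetical)
const ACTIONS = [0.0, 0.1, 0.25, 0.5]       # fraction of remaining inventory to sell
const α, γ, ε = 0.1, 0.99, 0.1              # learning rate, discount factor, exploration rate

Q = zeros(N_INV, N_TIME, N_ACT)

"Map continuous remaining inventory in [0, 1] to a discrete bucket."
bucket(x, n) = clamp(ceil(Int, x * n), 1, n)

function episode!(Q, rng; X0 = 1.0, steps = N_TIME)
    x, p0, p = X0, 100.0, 100.0             # inventory, arrival price, current price
    for t in 1:steps
        s = (bucket(x, N_INV), t)
        a = rand(rng) < ε ? rand(rng, 1:N_ACT) : argmax(Q[s..., :])
        child = ACTIONS[a] * x               # child-order volume submitted this step
        p += 0.1 * randn(rng) - 0.5 * child  # toy random walk with a linear impact penalty
        r = child * (p - p0)                 # per-slice implementation-shortfall proxy
        x -= child
        s′ = (bucket(x, N_INV), min(t + 1, steps))
        Q[s..., a] += α * (r + γ * maximum(Q[s′..., :]) - Q[s..., a])
        x ≤ 0 && break
    end
    return x                                 # unexecuted inventory at episode end
end

rng = MersenneTwister(1)
for ep in 1:5_000                            # train over repeated episodes
    episode!(Q, rng)
end
```

In the actual experiments the reward and state transitions come from order submissions to the ABM's matching engine rather than a closed-form price process, but the learning update has this same tabular structure, which is what the convergence and action trace plots visualise.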
Please follow the README.md on the GitHub page for instructions on how to run the code. For example, the Calibrated-ABM branch contains the functionality to perform the simulation, sensitivity analysis, and calibration of the event-time ABM, while the RL-ABM branch extends the Calibrated-ABM branch and allows you to train an RL agent inside the event-driven ABM.
The dataset used in this project can be found here.