ARENA’s Impact
Impact Reports
- We ran this iteration between September 1st and October 3rd 2025 at the London Initiative for Safe AI (LISA), with 27 participants in attendance.
- We ran this iteration between April 28th and May 30th 2025 at the London Initiative for Safe AI (LISA), with 28 participants in attendance.
- We ran this iteration between September 2nd and October 4th 2024 at the London Initiative for Safe AI (LISA), with 33 participants in attendance.
- We ran this iteration between January 8th and February 2nd 2024, combining in-person learning at the London Initiative for Safe AI (LISA) with online participation in the programme. Due to a personnel transition at the end of the programme, no impact report for ARENA 3.0 is available.
- We ran this iteration between May 22nd and June 30th 2023 at the London Initiative for Safe AI (LISA), with 16 participants in attendance.
- The first iteration of ARENA was a 12-week programme which saw independent researchers come together to work on mechanistic interpretability. We’ve come a long way since then, and sadly we don’t have an impact report from that first iteration! However, what we do today is so far removed from those beginnings that we hope you’ll forgive us.
At ARENA, we strive to ensure that our work represents an impactful and efficient use of our resources. This applies to running our in-person programmes, supporting those who use our materials, and curating the materials themselves.
Above, we have included some impact reports from previous ARENA iterations. We use the information collected in these reports to update our approach: how we run our programmes, the content we teach, and the high-level projects we take on to further our mission.
External Programmes:
Beyond running our in-person ARENA cohorts, a large part of what we do focuses on helping others to upskill as effectively as possible. This applies both to self-studying individuals and to external groups running AI safety courses using our materials.
To this end, we keep in contact with the organisers of such groups and seek out their feedback, paying particular attention to which strategies worked well, where learners struggled, and how we could better support them. We use this feedback to develop new ideas, and at a higher level we remain open-minded about how we run our programmes and the content that we teach.
Below are some examples of external programmes that have shared their feedback with us:
Finnish Alignment Engineering Bootcamp (FAEB):
Technical Alignment Research Accelerator (TARA):
Moreover, this article provides valuable insight into the preparation and work required to successfully run a programme based on ARENA’s materials.