Our Mission

Our mission is to find talented people who want to contribute to AI safety, and help them build skills to become research engineers.

Why we are doing this

We are concerned that AI poses an existential risk, and we think that helping to positively shape the future of AI development is one of the most impactful things it is currently possible to do. Advanced ML systems are playing an increasingly important role in the world: they have surpassed humans in domains such as chess and video games, and are now growing to rival them in domains such as art and literature summarisation. In particular, the rise of LLMs like ChatGPT over the past year has thrown a spotlight onto the field of generative AI. Despite this, much of what goes on inside AI systems is still a mystery to us. We believe that more high-quality work in fields such as interpretability and adversarial robustness can help alleviate this problem, and reduce the existential risk to humanity posed by advanced AI.

How we are doing this

We think there is benefit to be found in a structured, bootcamp-style course with exercises and pair programming. We also think there are advantages to a more open-ended curriculum of individual investigation and implementation.

In this course, we hope to combine the benefits of both: each chapter of content opens with pair programming on a set of coding exercises (designed to help people develop their understanding of the material), and then transitions into more project-focused work, where participants explore a particular topic in greater depth under TA supervision.

We also believe there are benefits to having a single location in which multiple people come together and study, which is why we’re running this course in office space in London. This space is also shared with SERI MATS fellows, which we hope will provide opportunities for productive discussion, networking, and future collaboration.

How is this program different from others, e.g. MLAB?

This program owes a lot to MLAB, and many parts of it were heavily inspired by MLAB. We feel that the main way ARENA sets itself apart is its longer duration (6 weeks), which gives more time for deep dives into topics and for working on open-ended projects under supervision.

Our long-term aims

If the second iteration of this course is successful, we may run further iterations, possibly in Cambridge or Berkeley. We may also partner with organisations like BlueDot that can make this material available to a wider audience.

If you’re interested in mentoring, applying, or funding us, then please reach out!