FAQs

Goals

  • Who is this program aimed at?

    We welcome applications from people who fit most or all of the following criteria:

    • Care about AI safety, and about making the future development of AI go well

    • Have relatively strong math skills (e.g. about one year's worth of university-level applied math)

    • Are strong programmers (e.g. have a CS degree, work experience in SWE, or personal projects involving a lot of coding)

    • Have experience coding in Python

    • Would be able to travel to London for 4 weeks starting 8th January 2024 (or possibly missing the first week), if applying to the in-person program

    We expect some participants to be university students, and others to have already graduated.

    Note: these criteria are intended mainly as guidelines. If you're uncertain whether you meet them, or you don't meet some of them but still think you might be a good fit for the program, please do apply! You can also reach out to me directly, at callum@arena.education.

  • What will participants get out of the program?

    We hope that working through this course will give participants the opportunity to develop not just ML engineering skills, but also more general software engineering skills, such as how to structure a codebase and how to adopt good coding practices. This should help prepare them to apply for research engineering roles at prominent AI safety orgs such as Anthropic, Redwood, FAR, and Conjecture.

    Additionally, by the end of the course each participant will have produced their own GitHub repo containing the projects they worked on throughout the program. This makes a great addition to a CV, and something concrete to talk about in interviews.

    Lastly, we hope that sharing office space with other alignment orgs & independent researchers will lead to productive discussions, networking & collaboration.

  • What else will the program involve?

    We're still working out all the details, but the program will likely also involve some of the following:

    • Talks and Q&As with AI safety researchers

    • Some social activities in and around London, over the weekends

    • Group discussions on AI safety-related topics

  • Will you run this program again in the future?

    Possibly! If you can't make these dates, we encourage you to submit an application anyway (the form is designed to be relatively low-effort to fill out). We would be excited to continue running these bootcamps if this iteration is also well-received.

Logistics

  • Where will the program take place?

    In the offices of LISA (the London Initiative for Safe AI), which is also home to organisations like Apollo Research, LEAP Labs, and BlueDot.

  • What financial support will be available?

    Each participant will be given a stipend covering housing and other expenses during the 4-week period they'll be staying in London.

    We hope that money will not be a barrier for promising candidates who want to attend this program.

  • When is the application deadline?

    The deadline for submitting applications is November 27th 2023, although we will be interviewing and inviting participants on a rolling basis.

  • What does the application process look like?

    There will be three steps:

    1. Fill out an application form (this is designed to take ~20 minutes).

    2. Complete a coding assessment.

    3. Interview virtually with one of us, so we can find out more about your background and your interest in AI safety & this course.

  • Is there a virtual option?

    Yes, we are running a virtual program (in collaboration with BlueDot). You can apply to either this or the in-person program (or you can express a preference on the application form).

Structure & Content

  • How is the course structured?

    The course is split into three main chapters:

    1. Fundamentals

    2. Transformers & Interpretability

    3. Reinforcement Learning

    plus a fourth section for paper replications & capstone projects. For more details, see the homepage of this website.

  • What will a typical day in the program look like?

    At the start of the program, most days will involve pair programming, working through structured exercises designed to cover all the essential material in a particular chapter. The purpose is to familiarize you with the material in a hands-on way. There will also usually be a short selection of required readings in the morning.

    As we move through the course, some chapters will transition into more open-ended material. Much of this will still be structured (e.g. the Mechanistic Interpretability section will offer a large set of structured exercises to choose from), but you'll have more choice over which topics to study in depth. We also hope you'll be able to do some independent projects, e.g. experiments, large-scale implementations, paper replications, or other bonus content. There will still be TA supervision during these sections, but the goal is for you to develop your own research & implementation skills. If you prefer, you can instead work on group projects with other participants during this time.

    Each day will be roughly the length of a normal working day (9am-5pm), although there will be more flexibility in working hours on days devoted to more open-ended projects. There will be no compulsory attendance on weekends, but we might organize AI safety discussion groups or social events during this time. The office space will be available 24/7 for anyone who wants to use it outside regular hours.

  • Which ML libraries will you be using?

    The main ML library we'll be using is PyTorch. During more open-ended projects you're welcome to use other libraries, but the exercises will all be based around PyTorch, and fixing bugs would be harder if participants were all using different libraries.

    During the chapter on transformers and interpretability, we’ll also make heavy use of TransformerLens, a library developed by Neel Nanda.
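
    To give a flavour of what working with TransformerLens looks like, here is a minimal sketch (purely illustrative, not taken from the course materials; the model choice and prompt are arbitrary):

    ```python
    # Load a small pretrained model and cache its internal activations.
    from transformer_lens import HookedTransformer

    model = HookedTransformer.from_pretrained("gpt2")  # small model, quick to load

    # Run the model on a prompt, caching every intermediate activation
    logits, cache = model.run_with_cache("The Eiffel Tower is in")

    # Inspect one cached activation, e.g. the layer-0 attention pattern
    attn = cache["pattern", 0]  # shape: (batch, n_heads, seq_len, seq_len)
    print(attn.shape)
    ```

    The interpretability exercises build heavily on this kind of activation caching and inspection.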

  • How will pair programming work?

    Pair programming will be structured in a driver/navigator style, where the pair alternates between the roles of driver and navigator at regular intervals (e.g. every half hour).

    The driver sits in front of the keyboard; their job is to actually code up the functions and solutions to the exercises. The low-level implementation details will be their responsibility.

    The navigator gives high-level directions to the driver, and is also responsible for spotting mistakes in the driver's code.

    Note that this is just a loose suggestion; every pair will find the style that works best for them. But we strongly recommend you at least give this style a try.

  • Will you be sending out prerequisite material?

    Yes, we will be sending you prerequisite reading & exercises covering material such as PyTorch, einops and linear algebra (this will be in the form of a Colab notebook). We expect these to take approximately 1-2 days to complete.
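
    If you haven't used einops before, here's a taste of the kind of tensor manipulation it enables (an illustrative sketch, not an exercise from the actual prerequisite notebook):

    ```python
    # einops lets you reshape tensors with readable, self-documenting patterns.
    import torch
    from einops import rearrange

    images = torch.randn(8, 3, 32, 32)  # (batch, channels, height, width)

    # Flatten each image into a single vector, keeping the batch dimension
    flat = rearrange(images, "b c h w -> b (c h w)")
    print(flat.shape)  # torch.Size([8, 3072])

    # Tile the batch into a 2x4 grid of images, e.g. for visualisation
    grid = rearrange(images, "(row col) c h w -> c (row h) (col w)", row=2)
    print(grid.shape)  # torch.Size([3, 64, 128])
    ```

    The prerequisite material covers patterns like these, along with the PyTorch and linear algebra basics the exercises assume.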