With that caveat in mind, here are some suggestions of places you might start if you want to self-study the basics:
3blue1brown’s series on neural networks is a really great place to start for beginners.
When I was learning, I used Neural Networks and Deep Learning — it’s an online textbook, good if you’re familiar with the maths, with some helpful exercises as well.
Online intro courses like fast.ai (focused on practical applications), Full Stack Deep Learning, and the various courses at deeplearning.ai.
For even more detail, see university courses like MIT’s Introduction to Machine Learning and NYU’s Deep Learning. We’d also recommend Google DeepMind’s lecture series.
PyTorch is a very common package used for implementing neural networks, and probably worth learning! When I was first learning about ML, my first neural network was a 3-layer convolutional neural network with L2 regularisation classifying characters from the MNIST database. This is a pretty common first challenge, and a good way to learn PyTorch.
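As a flavour of what that first challenge looks like, here is a minimal sketch (not the exact network described above) of a small three-layer convolutional MNIST classifier in PyTorch, with L2 regularisation applied through the optimiser’s weight_decay argument. The architecture and hyperparameters are arbitrary illustrative choices, not recommendations.

```python
# Minimal sketch: a small convolutional MNIST classifier in PyTorch.
# L2 regularisation is applied via the optimiser's weight_decay argument.
# Layer sizes and hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # 1x28x28 -> 16x28x28
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)  # 16x14x14 -> 32x14x14
        self.fc = nn.Linear(32 * 7 * 7, 10)                       # 10 digit classes

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # -> 16x14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # -> 32x7x7
        return self.fc(x.flatten(1))

def train_one_epoch():
    train_set = datasets.MNIST("data", train=True, download=True,
                               transform=transforms.ToTensor())
    loader = DataLoader(train_set, batch_size=64, shuffle=True)

    model = SmallCNN()
    # weight_decay adds an L2 penalty on the weights during optimisation.
    optimiser = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)

    for images, labels in loader:
        optimiser.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimiser.step()

if __name__ == "__main__":
    train_one_epoch()
```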
In the process of getting this experience, you might end up working in roles that advance AI capabilities. There are a variety of views on whether this might be harmful — so we’d suggest reading our article about working at leading AI labs and our article containing anonymous advice from experts about working in roles that advance capabilities. It’s also worth talking to our team about any specific opportunities you have.
If you’re doing another job, or a degree, or think you need to learn some more before trying to change careers, there are a few good ways of getting more experience doing ML engineering that go beyond the basics we’ve already covered:
Getting some experience in software / ML engineering. For example, if you’re doing a degree, you might try an internship as a software engineer during the summer. DeepMind offer internships for students with at least two years of study in a technical subject.
Replicating papers. One great way of getting experience doing ML engineering is to replicate some papers in whatever sub-field you might want to work in. Richard Ngo, an AI governance researcher at OpenAI, has written some advice on replicating papers. But bear in mind that replicating papers can be quite hard — take a look at Amid Fish’s blog on what he learned replicating a deep RL paper. Finally, Rogers-Smith has some suggestions on papers to replicate. If you do spend some time replicating papers, remember that when you get to applying for roles, it will be really useful to be able to prove you’ve done the work. So try uploading your work to GitHub, or writing a blog on your progress. And if you’re thinking about spending a long time on this (say, over 100 hours), try to get some feedback on the papers you might replicate before you start — you could even reach out to a lab you want to work for.
Taking or following a more in-depth course in empirical AI safety research. Redwood Research ran the MLAB bootcamp, and you can apply for access to their curriculum here. You could also take a look at this Deep Learning Curriculum by Jacob Hilton, a researcher at the Alignment Research Center — although it’s probably very challenging without mentorship. The Alignment Research Engineer Accelerator is a program that uses this curriculum. Some mentors on the SERI ML Alignment Theory Scholars Program focus on empirical research.
Learning about a sub-field of deep learning. In particular, we’d suggest natural language processing (in particular transformers — see this lecture as a starting point) and reinforcement learning (take a look at Pong from Pixels by Andrej Karpathy, and OpenAI’s Spinning up in Deep RL). Try to get to the point where you know about the most important recent advances.
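To give a flavour of the transformer side, here is a minimal sketch of single-head scaled dot-product attention, the core operation inside transformers, written in PyTorch. The function and variable names, shapes, and values are illustrative assumptions for the example, not code from any of the resources above.

```python
# Minimal sketch: single-head scaled dot-product attention, the core
# building block of transformers. Shapes and values are illustrative.
import math
import torch

def attention(query, key, value):
    # query, key, value: (batch, seq_len, d_model)
    d_model = query.size(-1)
    scores = query @ key.transpose(-2, -1) / math.sqrt(d_model)  # (batch, seq, seq)
    weights = torch.softmax(scores, dim=-1)  # each row sums to 1
    return weights @ value                   # weighted mix of the value vectors

# Example: self-attention over a batch of 2 sequences, 5 tokens, 8-dim embeddings.
x = torch.randn(2, 5, 8)
out = attention(x, x, x)  # query, key, and value all come from the same input
print(out.shape)          # torch.Size([2, 5, 8])
```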
Getting a job in theoretical AI safety research
There are fewer jobs available in theoretical AI safety research, so it’s harder to give concrete advice. Having a maths or theoretical computer science PhD isn’t always necessary, but is fairly common among researchers in industry, and is pretty much required to be an academic.
If you do a PhD, ideally it’d be in an area at least somewhat related to theoretical AI safety research. For example, it could be in probability theory as applied to AI, or in theoretical CS (look for researchers who publish in COLT or FOCS).
Alternatively, one path is to become an empirical research lead before moving into theoretical research.
Compared to empirical research, you’ll need to know relatively less about engineering, and relatively more about AI safety as a field.
Once you’ve done the basics, one possible next step you could try is reading papers from a particular researcher, or on a particular topic, and summarising what you’ve found.
You could also try spending some time (maybe 10–100 hours) reading about a topic and then some more time (maybe another 10–100 hours) trying to come up with some new ideas on that topic. For example, you could try coming up with proposals to solve the problem of eliciting latent knowledge. Alternatively, if you wanted to focus on the more mathematical side, you could try having a go at the assignment at the end of this lecture by Michael Cohen, a grad student at the University of Oxford.
If you want to enter academia, reading a ton of papers seems particularly important. Maybe try writing a survey paper on a certain topic in your spare time. It’s a great way to master a topic, spark new ideas, spot gaps, and come up with research ideas. When you apply to grad school or jobs, the paper is a fantastic way to show you love research so much that you do it for fun.
There are some research programmes aimed at people new to the field, such as the SERI ML Alignment Theory Scholars Program, to which you could apply.
Other ways to get more concrete experience include doing research internships, working as a research assistant, or doing a PhD, all of which we’ve written about above, in the section on whether and how you can get into a PhD programme.
One note is that a lot of people we talk to try to learn independently. This can be a great idea for some people, but is fairly tough for many, because there’s substantially less structure and mentorship.