Applications for our next round are open!
Apply by 11.59pm, Tuesday 21st January.
What the In-Depth EA fellowship involves
What is the fellowship?
IDEA is a free 6-week in-depth programme for those already acquainted with the fundamental ideas and causes of Effective Altruism who want to engage with and challenge them more deeply. When you aim to do good, you face questions like:
How do I know what I think I know?
What do I morally value?
How should I plan my career?
Should I focus on near-term impact or long-term impact?
How do I account for the indirect effects of my actions?
What’s the risk of human extinction this century?
IDEA helps you engage with complex questions like these so you can work out how to increase your impact. Participants also choose from a list of discussion topics, tailoring their own exploration of EA.
Fellowship discussions will happen in small groups of 3-7 fellows, with an experienced facilitator. Cohorts will meet weekly for 1.5 hours in central Cambridge.
The programme draws on economics, philosophy, statistics, psychology, social activism, emerging technology and more. You can find the full curriculum at the bottom of this page.
Who is it aimed at?
The fellowship is open to students at all stages of university education, as well as non-students based in Cambridge. It’s designed for people who are already familiar with Effective Altruism (at roughly the level of having completed our intro programme) and who want to question its ideas deeply, with the ultimate aim of having a greater positive impact on the world. We would love to see people from a wide range of backgrounds apply!
What are the requirements?
The fellowship runs for 6 weeks. To take part, you should be:
Willing to spend 2-3 hours each week preparing for the meeting
Committed to attending all 6 sessions (unless unforeseen circumstances arise)
Excited about making a positive impact
Open to changing your mind
If you have any more questions, please contact info@eacambridge.org or antonio@meridiancambridge.org.
Weekly Topics
-
This week, we will consider some of the ethical positions which inspire effective altruism, how a history of changing ethical norms might affect how we want to do good, and how our own values line up with the tools EAs use.
We will also discuss the project of developing a clearer picture of the world and improving our thinking, both for ourselves and for our work. We’ll evaluate the argument for why this might be important, consider some reasons to be excited about the project, and look at possible next steps.
-
This topic is about understanding how, in some areas, the most impactful actions can have orders of magnitude more impact than the average action.
Questions to consider:
Is impact heavy-tailed across interventions? To what extent?
Is impact heavy-tailed across fields? To what extent?
Is impact heavy-tailed across causes? To what extent?
For mathematicians: how might we characterise heavy-tailedness?
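One way to get a feel for heavy-tailedness is to simulate it. The sketch below is illustrative only: it assumes a lognormal model of intervention impact (one common choice; whether the true distribution looks like this is exactly what this week's questions ask), and shows how the top 1% of interventions can dominate the total.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical impact scores for 10,000 interventions, drawn from a
# high-variance lognormal -- an assumed model, not empirical data.
impacts = rng.lognormal(mean=0.0, sigma=2.5, size=10_000)

# Share of total impact contributed by the top 1% of interventions.
top_share = np.sort(impacts)[-100:].sum() / impacts.sum()
print(f"Top 1% of interventions: {top_share:.0%} of total impact")

# One simple signature of a heavy tail: the mean far exceeds the median.
print(f"Mean/median ratio: {impacts.mean() / np.median(impacts):.1f}")
```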
-
Explore theories of decision-making! Questions to consider:
How much should we rely on expected value (EV) estimates? (See the toy example after this list.)
What is Pascal’s mugging? Is this something we should be worried about?
Do we endorse hits-based giving?
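As a concrete (and entirely made-up) illustration of the tension in these questions, the sketch below compares a high-probability intervention against a Pascal's-mugging-style long shot. Naive EV maximisation favours the long shot, which is precisely what makes some people uneasy about relying on EV estimates alone.

```python
# Toy numbers, invented for illustration -- not real cost-effectiveness data.

# Option A: 90% chance of helping 1,000 people.
ev_safe = 0.90 * 1_000           # EV = 900

# Option B: a one-in-a-billion chance of helping 10 trillion people.
ev_mugging = 1e-9 * 1e13         # EV = 10,000

print(f"EV of option A: {ev_safe:,.0f}")
print(f"EV of option B: {ev_mugging:,.0f}")  # higher EV -- but should it win?
```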
-
Questions to consider:
What do people mean when they talk about noticing confusion?
How can we act under uncertainty?
How calibrated are we on our uncertainty? Can we overcome our biases here?
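One standard way to put a number on calibration is the Brier score: the mean squared error between your stated probabilities and what actually happened. The sketch below, with invented forecasts and outcomes, shows the calculation.

```python
# Brier score: 0 is perfect; always answering 0.5 scores 0.25.
# The numbers below are invented examples, not real forecasts.

forecasts = [0.9, 0.7, 0.6, 0.95, 0.2]   # your stated probabilities
outcomes  = [1,   1,   0,   1,    0]      # 1 = happened, 0 = didn't

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")
```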
-
Questions to consider:
How can I use forecasting tools to make and collate forecasts? (See the pooling sketch after this list.)
How can I get better at forecasting?
What sorts of public forecasts can EAs make use of?
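On collating forecasts: one simple aggregation method, often reported to outperform a plain average of probabilities, is to average in log-odds space (equivalently, take the geometric mean of odds). A minimal sketch:

```python
import math

def pool_forecasts(probs):
    """Pool probability forecasts by averaging their log-odds."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_log_odds))

# Three (invented) forecasters' probabilities for the same event:
print(f"{pool_forecasts([0.6, 0.7, 0.9]):.2f}")  # single pooled probability
```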
-
Questions to consider:
What bioweapons programs currently exist?
What have they historically done?
What strengths and weaknesses does the Biological Weapons Convention have?
What has it achieved recently, and what is it neglecting that EAs could support?
How concerned are you about biorisks, and why?
How are we more/less vulnerable to biorisks now than we have been previously?
Reasons we might be more vulnerable:
There is more travel, which allows for the spread of dangerous pathogens.
We are more densely populated.
Reasons we might be less vulnerable:
We have better medicine.
We have better hygiene.
-
Questions to consider:
What are the strengths and weaknesses of GiveWell’s methodology?
Are there any specific aspects of GiveWell’s cost-effectiveness methodology that you disagree with?
For yourself, how much do you think you could earn to give to GiveWell charities? How might that compare to the option of starting your own charity? How might that compare to the impact of influencing policy or aid spending?
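For the earning-to-give question, a back-of-the-envelope calculation can anchor the discussion. Every figure below is an assumption chosen for illustration, not GiveWell's actual numbers; check GiveWell's published cost-effectiveness estimates for current figures.

```python
# All figures are illustrative assumptions, not GiveWell's estimates.
annual_donation = 10_000      # assumed USD donated per year
cost_per_life = 5_000         # assumed cost to save a life via a top charity
career_years = 40             # assumed length of a giving career

lives_saved = annual_donation / cost_per_life * career_years
print(f"~{lives_saved:.0f} lives saved over a career, under these assumptions")
```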
-
Humans comprise just 0.01% of Earth’s biomass, and an even smaller share of all living individual animals. More farmed animals are slaughtered for food each year than the number of humans who have ever lived, and most of them spend their entire lives in factory-farm conditions. At any given time, more than 99.9% of the animals alive are invertebrates, which face complex and varied harms in the wild.
Questions to consider:
What are the comparative moral weights of different species?
How should we prioritize the welfare of farmed animals, wild animals, and animals in the far future?
What tactics are effective at improving animal welfare?
-
Questions to consider:
Do you believe that advanced AI poses an existential risk?
What are the core pieces of evidence/information/intuition that inform your overall opinion?
These “core pieces” of information are also known as “cruxes”. A crux is any fact such that, if you believed differently about it, you would change your overall conclusion.
How soon do you think we will achieve transformative AI, if ever? Why – what pieces of evidence/information/intuition are you using?
“Transformative AI, which we have roughly and conceptually defined as AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.” – Holden Karnofsky (Open Philanthropy, 2016)
Perhaps consider forecasting work on AI timelines.
AI Timelines: Where the Arguments, and the "Experts," Stand (Holden Karnofsky, 2021) (post - 13 mins)
What actions should humanity be taking (if any) with respect to potential risks from advanced AI, concretely? If you found this problem important, what actions would you take and why?
-
Questions to consider:
What do people mean by cluelessness? Is cluelessness the same as uncertainty?
What distinguishes simple and complex cluelessness?
Is cluelessness a reason to reject longtermism?
How limited are we by lack of information? How much does this affect our actions?
How much can we rely on studies like RCTs to resolve cluelessness? What other methods could we use to fill the gaps?
-
Questions to consider:
What’s the value proposition for a local EA group?
How does this differ from a normal local society?
How do we go about communicating EA?
What risks are there when running a group?
How much impact could we have running a local group?
What are the key things you need to provide to help people go from newcomers to engaged members of the community who feel confident developing their own plans to do good?
-
Many of the issues we care about seem to stem from poor cooperation or decision-making in institutions. This suggests that an effective way to improve both the present and the long-run future would be to improve how institutions function.