The Anthropic Fellows Program for AI Safety Research is a unique 6-month initiative starting in March 2025 to accelerate AI safety research. Designed for 10-15 Fellows with strong technical backgrounds, this program offers full-time research positions focusing on areas like Adversarial Robustness, Dangerous Capability Evaluations, Scalable Oversight, and more. Fellows will collaborate remotely with Anthropic mentors, receiving a weekly stipend of $2,100, access to a $10,000/month research budget, and opportunities to work from shared spaces in San Francisco or London. Applications for the first cohort close on January 20, 2025. This is an exceptional chance to contribute to the forefront of AI safety and alignment.
Location: San Francisco, USA and London, UK
Categories: Machine Learning