Slide 26
References
・AGI Safety From First Principles
https://drive.google.com/file/d/1uK7NhdSKprQKZnRjU58X7NLA1auXlWHt/view
・AI Alignment Course | AI Safety Fundamentals
https://course.aisafetyfundamentals.com/alignment
・Date of Artificial General Intelligence | Metaculus
https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/
・Deep Reinforcement Learning From Human Preferences (Christiano et al., 2017)
・Data-Efficient Deep Reinforcement Learning for Dexterous Manipulation (Popov et al., 2017)
・Goal Misgeneralisation: Why Correct Specifications Aren’t Enough For Correct Goals | by DeepMind Safety Research | Medium
https://deepmindsafetyresearch.medium.com/goal-misgeneralisation-why-correct-specifications-arent-enough-for-correct-goals-cf96ebc60924
・Machine Learning for Humans, Part 2.1: Supervised Learning | by Vishal Maini | Machine Learning for Humans | Medium
https://medium.com/machine-learning-for-humans/supervised-learning-740383a2feab
・On the Opportunities and Risks of Foundation Models
https://arxiv.org/pdf/2108.07258.pdf
・Scaling Laws for Neural Language Models
https://arxiv.org/pdf/2001.08361.pdf
・Visualizing the deep learning revolution | by Richard Ngo | Medium
https://medium.com/@richardcngo/visualizing-the-deep-learning-revolution-722098eb9c5