List of teams at OpenAI
Revision as of 23:16, 23 May 2020
Why has nobody made a list like this?
- Reflection-Humans [1] -- is this different from safety?
- Safety [2]
  - Clarity (the one Chris Olah is on/leads): "The Clarity team (within Safety) designs abstractions, interfaces, and ways of thinking that enable humans to meaningfully understand, supervise and control AI systems." [3]
  - Reflection (led by Paul Christiano): "The Reflection team (within Safety) studies algorithms for learning and applying models of human values during ML training, with an emphasis on scalability to highly capable agents." [4]
  - Foresight [5]: "The Foresight team (within Safety) systematically studies patterns in ML training and performance, with an eye towards predicting the future performance and resource needs of AI systems." [6]
- Policy "The Policy team derives its goals from the OpenAI Charter: ensure that AGI benefits all of humanity. Our mission is to create a stable international policy environment for the oversight of increasingly powerful AI technology." [7]
- Language [8] "The Language team works to improve the language understanding and generation capabilities of AI systems. They are working towards building a flexible and reusable core of language capabilities for general AI systems." [9]
- Robotics [10] "The Robotics team works on building a general-purpose robot that can perform a wide range of tasks using meta-learning in both simulated and real-world environments." [11]
- Explainability [12]
- Finance "The Finance team ensures the longevity of our organization by enabling us to make the right financial decisions at the right time, from seeking mission-aligned partners to generating financial reporting that reflects our research progress." [13]
- People and Operations teams: "The People & Operations teams work to recruit and develop a diverse set of talented people. We believe diversity and a culture of continuous learning are prerequisites for achieving safe, universally beneficial AGI." [14]
- Algorithms "The Algorithms team makes fundamental advances on the frontier of machine learning. Topics of interest include unsupervised learning and representation learning, robust perception, out of domain generalization, and reasoning." [15]
- RL: "The RL team performs fundamental research on sample-efficient reinforcement learning via meta-learning, aiming to train agents to master previously unseen games as fast as humans can." [16]
- Multi-Agent: "The Multi-Agent team’s mission is to develop and understand multi-agent learning as a means towards unbounded growth of human-compatible intelligence." [17]
- Infrastructure: "The Infrastructure team manages OpenAI’s compute clusters and develops tooling to accelerate research used across our organization." [18]
- Hardware: "The Hardware team develops efficiently scalable models and training techniques, and does pathfinding research to accelerate the development of next-gen AI hardware." [19]
- Legal: "The Legal team enables OpenAI to advance our mission in a way that maximizes impact while safeguarding the company." [20]
- Communications: "The Communications team works closely with all teams at OpenAI to design, write, and build content and experiences that can elegantly communicate our progress towards beneficial AGI to a wide audience." [21]
- Learned Optimizer. The job ad is no longer up, but you can find it by googling. "At OpenAI, we're excited to pursue approaches to AI that improve with scale. Towards this end, we are spinning up a Learned Optimizer Team. This team will work on techniques that automate and optimize various parts of the process of training deep neural nets." [22]
- For things like "Research Scientist, Reasoning", does that mean "Reasoning" is a separate team? [23] If so, add the following to the list: Reasoning, Supercomputing, Multi-agent, Applied AI, Acceleration, Security, Output.
"Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI’s latest top-secret project has supposedly already begun." [24]