List of teams at OpenAI

Revision as of 03:43, 11 March 2020

Why has nobody made a list like this?

  • Foresight [1] "The Foresight team (within Safety) systematically studies patterns in ML training and performance, with an eye towards predicting the future performance and resource needs of AI systems." [2]
  • Reflection-Humans [3] -- is this different from the Safety team?
  • Safety
    • Clarity (the one Chris Olah is on/leads): "The Clarity team (within Safety) designs abstractions, interfaces, and ways of thinking that enable humans to meaningfully understand, supervise and control AI systems." [4]
  • Policy "The Policy team derives its goals from the OpenAI Charter: ensure that AGI benefits all of humanity. Our mission is to create a stable international policy environment for the oversight of increasingly powerful AI technology." [5]
  • Language [6] "The Language team works to improve the language understanding and generation capabilities of AI systems. They are working towards building a flexible and reusable core of language capabilities for general AI systems." [7]
  • Robotics [8] "The Robotics team works on building a general-purpose robot that can perform a wide range of tasks using meta-learning in both simulated and real-world environments." [9]
  • Explainability [10]
  • Finance "The Finance team ensures the longevity of our organization by enabling us to make the right financial decisions at the right time, from seeking mission-aligned partners to generating financial reporting that reflects our research progress." [11]
  • People and Operations teams: "The People & Operations teams work to recruit and develop a diverse set of talented people. We believe diversity and a culture of continuous learning are prerequisites for achieving safe, universally beneficial AGI." [12]
  • Algorithms "The Algorithms team makes fundamental advances on the frontier of machine learning. Topics of interest include unsupervised learning and representation learning, robust perception, out of domain generalization, and reasoning." [13]
  • RL: "The RL team performs fundamental research on sample-efficient reinforcement learning via meta-learning, aiming to train agents to master previously unseen games as fast as humans can." [14]
  • Multi-Agent: "The Multi-Agent team’s mission is to develop and understand multi-agent learning as a means towards unbounded growth of human-compatible intelligence." [15]
  • Reflection: "The Reflection team (within Safety) studies algorithms for learning and applying models of human values during ML training, with an emphasis on scalability to highly capable agents." [16]
  • Infrastructure: "The Infrastructure team manages OpenAI’s compute clusters and develops tooling to accelerate research used across our organization." [17]
  • Hardware: "The Hardware team develops efficiently scalable models and training techniques, and does pathfinding research to accelerate the development of next-gen AI hardware." [18]
  • Legal: "The Legal team enables OpenAI to advance our mission in a way that maximizes impact while safeguarding the company." [19]
  • Communications: "The Communications team works closely with all teams at OpenAI to design, write, and build content and experiences that can elegantly communicate our progress towards beneficial AGI to a wide audience." [20]
  • Learned Optimizer. The job ad is no longer up, but you can find it by googling [21]
  • for things like "Research Scientist, Reasoning", does that mean "Reasoning" is a separate team? [22] If so, add the following to the list: Reasoning, Supercomputing, Multi-agent, Applied AI, Accelaration, Security, Output.

"Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI’s latest top-secret project has supposedly already begun." [23]