List of teams at OpenAI

From Issawiki
Revision as of 03:35, 11 March 2020

Why has nobody made a list like this?

  • Foresight [1] "The Foresight team (within Safety) systematically studies patterns in ML training and performance, with an eye towards predicting the future performance and resource needs of AI systems." [2]
  • Reflection-Humans [3] -- is this different from Safety?
  • Safety
  • Policy
  • Clarity (the one Chris Olah is on/leads)
  • Language [4] "The Language team works to improve the language understanding and generation capabilities of AI systems. They are working towards building a flexible and reusable core of language capabilities for general AI systems." [5]
  • Robotics [6] "The Robotics team works on building a general-purpose robot that can perform a wide range of tasks using meta-learning in both simulated and real-world environments." [7]
  • Explainability [8]
  • Finance: "The Finance team ensures the longevity of our organization by enabling us to make the right financial decisions at the right time, from seeking mission-aligned partners to generating financial reporting that reflects our research progress." [https://openai.com/jobs/#finance]
  • People and Operations teams: "The People & Operations teams work to recruit and develop a diverse set of talented people. We believe diversity and a culture of continuous learning are prerequisites for achieving safe, universally beneficial AGI." [https://openai.com/jobs/#people-operations]
  • For listings like "Research Scientist, Reasoning", does that mean "Reasoning" is a separate team? [https://openai.com/jobs/#all] If so, add the following to the list: Reasoning, Supercomputing, Multi-agent, Applied AI, Acceleration, Security, Output.

"Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI's latest top-secret project has supposedly already begun." [https://www.technologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/]