
OpenAI and Elon Musk

From OpenAI

The mission of OpenAI is to ensure AGI benefits all of humanity, which means both building safe and beneficial AGI and helping create broadly distributed benefits. We are now sharing what we've learned about achieving our mission, and some facts about our relationship with Elon. We intend to move to dismiss all of Elon’s claims.

We realized building AGI will require far more resources than we’d initially imagined

Elon said we should announce an initial $1B funding commitment to OpenAI. In total, the non-profit has raised less than $45M from Elon and more than $90M from other donors.

When starting OpenAI in late 2015, Greg and Sam had initially planned to raise $100M. Elon said in an email: “We need to go with a much bigger number than $100M to avoid sounding hopeless… I think we should say that we are starting with a $1B funding commitment… I will cover whatever anyone else doesn't provide.” [1]

We spent a lot of time trying to envision a plausible path to AGI. In early 2017, we came to the realization that building AGI will require vast quantities of compute. We began calculating how much compute an AGI might plausibly require. We all understood we were going to need a lot more capital to succeed at our mission—billions of dollars per year, which was far more than any of us, especially Elon, thought we’d be able to raise as the non-profit.

We and Elon recognized a for-profit entity would be necessary to acquire those resources

As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control. Elon left OpenAI, saying there needed to be a relevant competitor to Google/DeepMind and that he was going to do it himself. He said he’d be supportive of us finding our own path.

In late 2017, we and Elon decided the next step for the mission was to create a for-profit entity. Elon wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding. Reid Hoffman bridged the gap to cover salaries and operations.

We couldn’t agree to terms on a for-profit with Elon because we felt it was against the mission for any individual to have absolute control over OpenAI. He then suggested instead merging OpenAI into Tesla. In early February 2018, Elon forwarded us an email suggesting that OpenAI should “attach to Tesla as its cash cow”, commenting that it was “exactly right… Tesla is the only path that could even hope to hold a candle to Google. Even then, the probability of being a counterweight to Google is small. It just isn’t zero”. [2]

Elon soon chose to leave OpenAI, saying that our probability of success was 0, and that he planned to build an AGI competitor within Tesla. When he left in late February 2018, he told our team he was supportive of us finding our own path to raising billions of dollars. In December 2018, Elon sent us an email saying “Even raising several hundred million won’t be enough. This needs billions per year immediately or forget it.” [3]

We advance our mission by building widely-available beneficial tools

We’re making our technology broadly usable in ways that empower people and improve their daily lives, including via open-source contributions.

We provide broad access to today's most powerful AI, including a free version that hundreds of millions of people use every day. For example, Albania is using OpenAI’s tools to accelerate its EU accession by as much as 5.5 years; Digital Green is helping boost farmer income in Kenya and India by dropping the cost of agricultural extension services 100x by building on OpenAI; Lifespan, the largest healthcare provider in Rhode Island, uses GPT-4 to simplify its surgical consent forms from a college reading level to a 6th grade one; Iceland is using GPT-4 to preserve the Icelandic language.

Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open.  The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”, to which Elon replied: “Yup”. [4]

We're sad that it's come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him.

We are focused on advancing our mission and have a long way to go. As we continue to make our tools better and better, we are excited to deploy these systems so they empower every individual.

[1]

From: Elon Musk <>
To: Greg Brockman <>
CC: Sam Altman <>
Date: Sun, Nov 22, 2015 at 7:48 PM
Subject: follow up from call

Blog sounds good, assuming adjustments for neutrality vs being YC-centric.

I'd favor positioning the blog to appeal a bit more to the general public -- there is a lot of value to having the public root for us to succeed -- and then having a longer, more detailed and inside-baseball version for recruiting, with a link to it at the end of the general public version.

We need to go with a much bigger number than $100M to avoid sounding hopeless relative to what Google or Facebook are spending. I think we should say that we are starting with a $1B funding commitment. This is real. I will cover whatever anyone else doesn't provide.

Template seems fine, apart from shifting to a vesting cash bonus as default, which can optionally be turned into YC or potentially SpaceX (need to understand how much this will be) stock.

[2]

From: Elon Musk <>
To: Ilya Sutskever <>, Greg Brockman <>
Date: Thu, Feb 1, 2018 at 3:52 AM
Subject: Fwd: Top AI institutions today

is exactly right. We may wish it otherwise, but, in my and ’s opinion, Tesla is the only path that could even hope to hold a candle to Google. Even then, the probability of being a counterweight to Google is small. It just isn't zero.

Begin forwarded message:

From: <>
To: Elon Musk <>
Date: January 31, 2018 at 11:54:30 PM PST
Subject: Re: Top AI institutions today

Working at the cutting edge of AI is unfortunately expensive. For example,

In addition to DeepMind, Google also has Google Brain, Research, and Cloud. And TensorFlow, TPUs, and they own about a third of all research (in fact, they hold their own AI conferences).

I also strongly suspect that compute horsepower will be necessary (and possibly even sufficient) to reach AGI. If historical trends are any indication, progress in AI is primarily driven by systems - compute, data, infrastructure. The core algorithms we use today have remained largely unchanged from the ~90s. Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated. Conversely, algorithmic advances alone are inert without the scale to also make them scary.

It seems to me that OpenAI today is burning cash and that the funding model cannot reach the scale to seriously compete with Google (an 800B company). If you can't seriously compete but continue to do research in open, you might in fact be making things worse and helping them out "for free", because any advances are fairly easy for them to copy and immediately incorporate, at scale.

A for-profit pivot might create a more sustainable revenue stream over time and would, with the current team, likely bring in a lot of investment. However, building out a product from scratch would steal focus from AI research, it would take a long time and it's unclear if a company could "catch up" to Google scale, and the investors might exert too much pressure in the wrong directions.

The most promising option I can think of, as I mentioned earlier, would be for OpenAI to attach to Tesla as its cash cow. I believe attachments to other large suspects (e.g. Apple? Amazon?) would fail due to an incompatible company DNA. Using a rocket analogy, Tesla already built the "first stage" of the rocket with the whole supply chain of Model 3 and its onboard computer and a persistent internet connection. The "second stage" would be a full self driving solution based on large-scale neural network training, which OpenAI expertise could significantly help accelerate. With a functioning full self-driving solution in ~2-3 years, we could sell a lot of cars/trucks. If we do this really well, the transportation industry is large enough that we could increase Tesla's market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.

I cannot see anything else that has the potential to reach sustainable Google-scale capital within a decade.

[3]

From: Elon Musk <>
To: Ilya Sutskever <>, Greg Brockman <>
CC: Sam Altman <>, <>
Date: Wed, Dec 26, 2018 at 12:07 PM
Subject: I feel I should reiterate

My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%. Not 1%. I wish it were otherwise.

Even raising several hundred million won't be enough. This needs billions per year immediately or forget it.

Unfortunately, humanity's future is in the hands of.

And they are doing a lot more than this.

I really hope I'm wrong.

Elon

[4]

Fwd: congrats on the falcon 9 (3 messages)

From: Elon Musk <>
To: Sam Altman <>, Ilya Sutskever <>, Greg Brockman <>
Date: Sat, Jan 2, 2016 at 8:18 AM
Subject: Fwd: congrats on the falcon 9

Begin forwarded message:

From: <>
To: Elon Musk <>
Date: January 2, 2016 at 10:12:32 AM CST
Subject: congrats on the falcon 9

Hi Elon

Happy new year to you,!

Congratulations on landing the Falcon 9, what an amazing achievement. Time to build out the fleet now!

I've seen you (and Sam and other OpenAI people) doing a lot of interviews recently extolling the virtues of open sourcing AI, but I presume you realise that this is not some sort of panacea that will somehow magically solve the safety problem? There are many good arguments as to why the approach you are taking is actually very dangerous and in fact may increase the risk to the world. Some of the more obvious points are well articulated in this blog post, that I'm sure you've seen, but there are also other important considerations:

http://slatestarcodex.com/2015/12/17/should-ai-be-open/

I'd be interested to hear your counter-arguments to these points.

Best

From: Ilya Sutskever <>
To: Elon Musk <>, Sam Altman <>, Greg Brockman <>
Date: Sat, Jan 2, 2016 at 9:06 AM
Subject: Fwd: congrats on the falcon 9

The article is concerned with a hard takeoff scenario: if a hard takeoff occurs, and a safe AI is harder to build than an unsafe one, then by opensourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.

As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

From: Elon Musk <>
To: Ilya Sutskever <>
Date: Sat, Jan 2, 2016 at 9:11 AM
Subject: Fwd: congrats on the falcon 9

Yup
