
Why the world’s leading AI charity decided to take billions from investors


A Q&A with the founders of the cutting-edge AI lab OpenAI.
By Kelsey Piper Apr 17, 2019, 9:10am EDT


Most of the world’s most sophisticated artificial intelligence programs are written by
for-profits, like Facebook, Google, and Google sister company DeepMind. But in 2015,
Elon Musk co-founded an exception: OpenAI, a nonprofit with the mission to do cutting-
edge AI research and bring the benefits to everybody.

Since then, the young organization has racked up some impressive accomplishments. It
recently built a language-generating system called GPT-2, which writes news articles that
are, at a glance, difficult to tell apart from real ones. And this past weekend, its Dota 2
system became the first AI to beat an esports world champion team.

It has also shifted directions, in a way that’s left some outside observers nervous. Musk
left the board. OpenAI’s safety team concluded that open-sourcing all of their work
might, rather than advancing humanity’s common interests, invite trouble; when they
developed GPT-2, they didn’t release it publicly, expressing worries that it’d be easy to
misuse for plagiarism, bots, fake Amazon reviews, and spam. And last month, they
announced a significant restructuring: Instead of a nonprofit, they’ll operate from
now on as a new kind of company called OpenAI LP (the LP stands for “limited
partnership”).

The team wanted to raise billions of dollars to stay on the frontiers of AI research. But
taking investment money would be a slippery slope towards abandoning their mission:
Once you have investors, you have obligations to maximize their profits, which is
incompatible with ensuring that the benefits of AI are widely distributed.

The solution? What they’re calling a “hybrid of a for-profit and nonprofit”. The company
promises to pay shareholders a return on their investment: up to 100 times what they
put in. Everything beyond that goes to the public. The OpenAI nonprofit board still
oversees everything.

That sounds a bit ridiculous — after all, how much can possibly be left over after paying
investors 100 times what they paid in? But early investors in many tech companies have
made far more than 100 times what they invested. Jeff Bezos reportedly invested
$250,000 in Google back in 1998; if he’d held onto those shares, they’d be worth more
than $3 billion today. If Google had adopted OpenAI LP’s cap on returns, Bezos would’ve
gotten $25 million — a handsome return on his investment — and the rest would have
gone to humankind.
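
To make the cap arithmetic concrete, here is a minimal sketch in Python, using the hypothetical Bezos/Google figures above. The function name and the simple min/max split are illustrative assumptions; OpenAI LP’s actual legal terms are more involved than this.

```python
def split_capped_return(invested, final_value, cap_multiple=100):
    """Split a payout under a capped-return structure (illustrative only).

    The investor keeps at most `cap_multiple` times what they put in;
    anything beyond that cap flows to the nonprofit's mission.
    """
    investor_cap = invested * cap_multiple
    investor_payout = min(final_value, investor_cap)
    to_public = max(final_value - investor_cap, 0)
    return investor_payout, to_public

# The article's example: $250,000 invested, stake now worth ~$3 billion.
investor, public = split_capped_return(250_000, 3_000_000_000)
print(f"Investor receives: ${investor:,}")   # $25,000,000
print(f"Left for humankind: ${public:,}")    # $2,975,000,000
```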

So that’s the idea. The devil, of course, is in the details. OpenAI’s mission is “discovering
and enacting the path to safe artificial general intelligence” (AGI) — that is, an artificial
intelligence that has human-like problem-solving abilities across many different
domains. That’s a huge ambition — one that has been likened to the invention of electricity,
the internet, or the industrial revolution. Is there really any chance OpenAI can build an AGI?
Should they be trying, given the risks? How accountable is OpenAI to the world they’re
planning on transforming?

I sat down with OpenAI co-founders Greg Brockman and Ilya Sutskever to discuss these
and many other issues. Here’s our conversation, edited for length and clarity:

Kelsey Piper

I read the OpenAI LP announcement, where you stated your intent to raise billions of
dollars in order to make progress towards artificial general intelligence. What was the
process that led you to that? And to do it in the structure you did, with the LP?

Ilya Sutskever

Making advances in AI, towards AGI, is not cheap. You need truly giant amounts of
comput[ing power], and you need to attract and retain the best talent. In terms of
compute, specifically, we’ve seen that the amount of compute required to get the best
results has been growing extremely rapidly each year. And at some point we realized
that we’d reached the limits of our fundraising ability as a pure nonprofit.

And so we sought to create a structure that will allow us to raise more money — while
simultaneously allowing us to formally adhere to the spirit and the letter of our original
OpenAI mission as much as possible.

Greg Brockman

We think this is a great structure for AGI, but we don’t think it’s just for AGI. There are
other possibly transformative technologies coming on the scene — things like CRISPR.
It’s important these technologies get developed, but you don’t want them subject to
pure profit-maximizing for the benefit of one company. So we hope to see this structure
be adopted by other people.

We founded OpenAI in 2015. And we spent about two years really trying to design the
right structure — one year of figuring out what we think the path is [to being a leading AI
organization], and then another year figuring out how you’re supposed to do that while
still retaining the mission.

Kelsey Piper

Your understanding of how to achieve your mission has evolved a ton over time. It
definitely seems like a lot of people’s understanding of what OpenAI is doing is shaped
by some of the early messaging in a direction that doesn’t actually reflect you guys’
understanding of the mission at this point.

A lot of people think you’re about open sourcing progress towards AGI.

Greg Brockman

OpenAI is about making sure the future is going to be good when you have advanced
technologies. The shift for us has been to realize that, as these things get really
powerful, everyone having access to everything isn’t actually guaranteed to have a good
outcome.

You can look at deepfakes [convincing fake photos and videos made with AI]. Is the
world better because deepfakes are out there? It’s not obvious, right?

And so our focus, instead, has really shifted to thinking about the benefits. The way
technology has evolved, with the right big idea you can generate huge amounts of value
— but then, it also has this wealth-concentrating effect. And if AGI is built, by default, it
will be a hundred times, a thousand times, ten thousand times more concentrating than
we’ve seen so far.

Kelsey Piper

You’ve gotten some pushback on the move away from transparency. You’ll probably get
more as you start to publish less.

If your mission is to publish everything, it’s easy for the public to tell whether you guys are
still motivated by your mission. I can just check if you’re publishing everything. If your
mission is to distribute the benefits, I don’t have a way of evaluating whether you’re still
committed to your mission. I have to wait to find out if I can trust you.

Greg Brockman

I think this is a really good point. And there’s a reason we didn’t just go in and make very
quick changes. There’s really a reason we spent a long time thinking about who we are.
We looked at every legal structure out there. And in some ways, a nonprofit is just great
for having a pure mission; it’s very clear how it works. But you know the sad truth is
that not enough gets done in a nonprofit, right? And in a for-profit — I think too much
gets done there.

“THE SAD TRUTH IS THAT NOT ENOUGH GETS DONE IN A NONPROFIT, RIGHT?” —GREG BROCKMAN

So the question was, how do you find something that gets the best of both?

One gap in our current structure that we do want to fill is representative governance.
We don’t think that AGI should be just a Silicon Valley thing. We’re talking about world-
altering technology. And so how do you get the right representation and governance in
there? This is actually a really important focus for us and something we really want
broad input on.

Kelsey Piper

One thing that struck me, reading some of the critical reactions to OpenAI LP, was that
most of your critics don’t believe you that you’re gonna build AGI. So most of the
criticism was: “that’s a fairy tale.” And that’s certainly one important angle of critique.

But it seems like there’s maybe a dearth of critics who are like: “All right. I believe you
that there’s a significant chance — a chance worth thinking about — that you’re going to
build AGI ... and I want to hold you accountable.”

Greg Brockman

I think I go even one step further and say that it’s not just people who don’t believe that
we’re gonna do it but people who don’t even believe that we believe we’re going to do it.

Kelsey Piper

A lot of startups have some language about transforming the whole world on their Web
site, which isn’t that sincerely meant. You guys are saying “we’re going to build a general
artificial intelligence” —

Ilya Sutskever

We’re going to do everything that can be done in that direction while also making sure
that we do it in a way that’s safe.

It’s hard to tell how long it will take exactly. But I think it’s no longer possible to be totally
confident that this is impossible.

Greg Brockman

I think it’s interesting to look at the history of technological developments. Have you
ever read Arthur C. Clarke’s Profiles of the Future?

Kelsey Piper

Don’t think so.

Greg Brockman

It’s such a great book. This is [Clarke] trying to say — let me predict what the future’s
going to be like. And he starts by looking at the history of inventions.

He goes through flight, space flight, the invention of the atomic bomb, the invention of
the incandescent lamp — looking at all of these and saying “what was the climate?” How
did people feel at the time these technologies were coming on the scene?

The incandescent bulb was fascinating because Thomas Edison, the year before he
created the lamp, had announced what he was doing. He said, “We’re gonna do this, it’s
gonna be great” and gas securities in England fell. So the British Parliament put together
this committee of distinguished experts who went to go talk to Edison, check out all the
stuff.

They came back and they were like, “This is totally bogus. Never going to happen,
everything’s fine.” A year later he ships.

And the thing is — the naysaying prediction will be right most of the time. But the
question is what happens when that prediction is false.

Ilya Sutskever

One way to think about what we are doing is [taking out] a global insurance policy
against sooner-than-expected AGI.

“ONE WAY TO THINK ABOUT WHAT WE ARE DOING IS A GLOBAL INSURANCE POLICY AGAINST SOONER-THAN-EXPECTED ARTIFICIAL GENERAL INTELLIGENCE” —ILYA SUTSKEVER

But then to add to why it’s even reasonable to talk about AGI in the first place today — if
you go back in history, researchers made a lot of cool demos with little symbolic AI systems.
They could never scale them up; they were never able to get them to solve non-toy problems.

Now with deep learning the situation is reversed. You have this very small set of tools
which is general — the same tools solve a huge variety of problems. Not only is it
general, it’s also competent — if you want to get the best results on many hard
problems, you must use deep learning. And it’s scalable. So then you say, “Okay, well,
maybe AGI is not a totally silly thing to begin contemplating.”

Kelsey Piper

So the thing that got me worried was that I talked to some people who weren’t worried,
and I said, “All right, what would scare you? What would make you say, ‘Well, you know,
maybe we are 10 years away’?” And they said things like “unsupervised learning” — that
is, learning from raw, unlabeled data. And now GPT-2 is doing unsupervised learning. You
know, 10 years ago everybody was saying “AI can’t even look at things.” We’ve basically
solved that one.

Greg Brockman

It’s a very funny psychological phenomenon, because you just adapt so fast and you
take it for granted. It’s understandable, because technological advance and
development is so difficult to think about.

There’s a great xkcd from 2014: https://xkcd.com/1425/

One word that you used earlier was worried. I think that it’s interesting how when
people talk about AGI, often they focus on the negative. With technology generally it’s
much easier to focus on those negatives. But the thing with AGI — and this is true of all
technologies, but I think AGI in some ways is the most extreme form of technology that
we’ve conceived of so far — is that the upsides are going to be so huge.

You think about planetary-scale problems that humanity doesn’t really seem to have any
hope of solving: global health care for everyone, say. And so that’s what we’re really
excited about. We’re excited about the upside. We’re cognizant of the downsides, and we
think it’s important to navigate those as well.

Kelsey Piper

We’ve been talking mostly about the ability of OpenAI LP to ensure that benefits get
distributed. But distributing the benefits is really far from being the only big problem
that people see on the road to AGI. There are risks of accidents, risks of locking us into
bad futures, stuff like that.



Greg Brockman

We see three main categories of risk from AGI. In some ways they all boil down to one
thing which is AGI’s ability to cause rapid change. You can think about the Internet. In a
lot of ways, we’ve had 40, 50 years to have the internet play out in society. And honestly
that change has still been too fast. You look at recent events and — it’d just be nice if
we’d spent more time to understand how this would affect us. With AGI you should view
it as almost this more compressed version of what we’ve seen.

The three categories we really see are — one, systems that pursue misspecified goals,
and so do what no one wants. That’s a “careful what you wish for” scenario.

The second is systems that can be subverted. So you actually build it to do the right
thing but someone hacks into it and does bad things. And the third one is: so we get
those two problems right. We get the technical stuff right. It can’t be subverted, it does
what we intend but — somehow society doesn’t get better. Only a few people benefit.
Everyone else’s lives are the same or even worse.

Ilya Sutskever

And there is a fourth one, which is misuse.

And if you care about all those risks and you want to navigate all of them, you must think
about the human factor and the technological factor. We work to make sure that
governments are informed about the state of AI and can reason about it as correctly as
possible. And we work on building the AGI itself. Making sure that it is safe, that its goals
are correctly specified, and that it follows them.

Greg Brockman

And I think that you’ll see a lot of that weirdness with our structure — it doesn’t look like
structures that have existed; there’s not really a precedent for it. A lot of that is because we
consider all these risks and we think — how does that affect how to do this?

