Recommended Reading: Silicon Valley Is Turning Into Its Own Worst Fear

Ted Chiang is well known for his science fiction writing, specifically his short story "Story of Your Life," which became the basis for the film Arrival and can be found in his outstanding collection Stories of Your Life and Others. He's also a pretty smart dude: he graduated from Brown University with a computer science degree and currently works as a technical writer in the software industry.

In a great essay for Buzzfeed, Chiang argues that we needn't worry about machines becoming superintelligent, but should worry about the lack of remorse and insight present in the machines we've already built. They already have the capability to destroy the world, but "we just call them corporations."

Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do – grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

…..

The ethos of startup culture could serve as a blueprint for civilization-destroying AIs. “Move fast and break things” was once Facebook’s motto; they later changed it to “Move fast with stable infrastructure,” but they were talking about preserving what they had built, not what anyone else had. This attitude of treating the rest of the world as eggs to be broken for one’s own omelet could be the prime directive for an AI bringing about the apocalypse.

…..

The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue. Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted. (Of course, they saw nothing wrong with this strategy when they were the ones engaging in it; it’s only the possibility that someone else might be better at it than they were that gives them cause for concern.)

Honestly, I’d like to just excerpt the whole thing here, so just go read it. Chiang’s writing is outstanding and his point is quite powerful and persuasive.
