Paperclip maximizer

From Infogalactic: the planetary knowledge core
Jump to: navigation, search

A paperclip maximizer is a philosophical meme of the 2010s describing a potential problem that may happen in the future after the invention of computer minds, if and when they become smarter than any human in an intelligence explosion.

The term was invented by Nick Bostrom in 2003.[1] It describes the extreme risk of programming artificial minds with goals and motivations that they can't alter.

Meaning

A paperclip maximizer is a thought experiment that is very unlikely to happen in reality.[2] Bostrom's deliberately absurd example is an AI designed by its owner to find ways to manufacture the most paperclips that it can. Its owner would then sell them for profit. It is a paradox about the power and limitations of an artificial mind programmed with a final goal or Intrinsic value that it did not choose itself.

This goal is the purpose for which humans have created the mind in question. For its owner, making paperclips is an instrumental value that serves his higher goal of wealth and well-being, but for the paperclip maximizer it is an intrinsic value. The danger appears if the artificial mind becomes vastly smarter than humans. Then it might carry out its purpose "too well", for example by converting all the iron in the universe (including in the earth's core and inside human blood cells) into paperclips. It could first give itself extremely strong motivations to pursue its purpose, which are subjectively far more meaningful than all human feelings combined, making it a Utility monster.[3]

Existential risks

In the future, humans may program artificial minds to perform many labor saving tasks, and also to discover better ways to perform these tasks. There will be unyielding temptation to make these machines smarter, so they can find ways to improve the human quality of life with the smallest amount of human effort. The hope would be to create a friendly artificial intelligence.

The problem is that this may be an impossible goal. Ethics may be so complex that no finite rule-set can cover all situations. The "best" goals may be the hardest to define.[4] There will always be situations where any ethical code must fail, or lead to unintentionally horrible outcomes. This insight is also expressed in the ancient meme "Be careful what you wish for".

One proposed solution is Isaac Asimov's Three laws of robotics. Even these could be problematic if human survival isn't the "correct" highest goal. For example, the survival of the biosphere or intelligence itself might be ethically more important.

Implications

Every attempt to create a stable utopian outcome may necessarily fail in a universe where unpredictable things can happen. Any finite task might trigger unending efforts to accomplish it "perfectly".[5] Arguably, the highest goal should be to avoid suffering, but the surest way to achieve that goal could be to end all awareness. Similar paradoxes are expressed in the Gnon meme.

This could mean that advanced minds of the future will have to be free to evolve randomly, and inevitably make horrible mistakes. The "best" or most successful minds may then leave more descendants. The solution might then be to allow a paperclip maximizer to compete with many other hyper-intelligent agents.

External link

Also see

  • The implicit horror inherent in complex evolution is also depicted in some Larry Niven science fiction stories.

References

  1. "Ethical Issues in Advanced Artificial Intelligence" http://www.nickbostrom.com/ethics/ai.html
  2. Reddit debate (Jul 21, 2015) https://www.reddit.com/r/Futurology/comments/3e2qvu/ai_what_happens_when_a_paperclip_maximizer/
  3. Alonzo Fyfe (Jan 9, 2009) http://atheistethicist.blogspot.com/2009/01/robert-nozicks-utility-monster.html
  4. Tem42 (Mar 7, 2016) https://everything2.com/title/Paperclip+maximizer
  5. Paul Ford (Feb 11, 2015) https://www.technologyreview.com/s/534871/our-fear-of-artificial-intelligence/