Future of Life Institute

Formation: March 2014
Tax ID no.: 47-1052538
Legal status: Active
Purpose: Mitigation of existential risk
Website: futureoflife.org

The Future of Life Institute (FLI) is a volunteer-run research and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its board of advisors includes cosmologist Stephen Hawking and entrepreneur Elon Musk.

Background

An FLI group picture, 2015

The FLI mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.[1][2] FLI is particularly focused on the potential risks to humanity from the development of human-level artificial intelligence.[3]

The institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, Harvard graduate student and IMO medalist Viktoriya Krakovna, Boston University graduate student Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. The institute's advisory board includes computer scientist Stuart J. Russell, biologist George Church, cosmologists Stephen Hawking and Saul Perlmutter, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and actors and science communicators Alan Alda and Morgan Freeman.[4][5][6] FLI operates grassroots-style to recruit volunteers and younger scholars from the local community in the Boston area.[3]

Events

On May 24, 2014, FLI held a panel discussion on "The Future of Technology: Benefits and Risks" at MIT, moderated by Alan Alda.[3][7][8] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn.[9][10] The discussion covered a broad range of topics from the future of bioengineering and personal genetics to autonomous weapons, AI ethics and the Singularity.[3][11][12]

From January 2 through January 5, 2015, the Future of Life Institute organized and hosted "The Future of AI: Opportunities and Challenges" conference, which brought together the world's leading AI builders from academia and industry to engage with each other and with experts in economics, law, and ethics. The goal was to identify promising research directions that could help maximize the future benefits of AI.[13] The institute circulated an open letter on AI safety at the conference, which was subsequently signed by Stephen Hawking, Elon Musk, and many artificial intelligence experts.[14]

Global research program

On January 15, 2015, the Future of Life Institute announced that Elon Musk had donated $10 million to fund a global AI research endeavor.[15][16][17] On January 22, 2015, the FLI released a request for proposals from researchers in academic and other non-profit institutions.[18] Unlike typical AI research, this program is focused on making AI safer or more beneficial to society, rather than just more powerful.[19] On July 1, 2015, a total of $7 million was awarded to 37 research projects.[20]

In the media

  • "Is Artificial Intelligence a Threat?" in The Chronicle of Higher Education, including interviews with FLI founders Max Tegmark, Jaan Tallinn and Viktoriya Krakovna.[3]
  • "But What Would the End of Humanity Mean for Me?", an interview with Max Tegmark on the ideas behind FLI in The Atlantic.[4]
  • "Transcending Complacency on Superintelligent Machines", an op-ed in the Huffington Post by Max Tegmark, Stephen Hawking, Frank Wilczek and Stuart J. Russell on the movie Transcendence.[1]
  • "Top 23 One-liners From a Panel Discussion That Gave Me a Crazy Idea" in Diana Crow Science.[11]
  • "An Open Letter to Everyone Tricked into Fearing Artificial Intelligence", includes "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter" by the FLI [21]
  • "Creating Artificial Intelligence" on PBS[22]

References

