Post-conference Workshop (Mon. Mar 9th, 2009)
The Future of AI
Following in the footsteps of the very successful Workshop held in association with AGI-08, the Future of AI workshop will be held in conjunction with AGI-09. This year's workshop will be expanded to a full day, and will feature a slate of invited talks as well as contributed papers and posters. It will be held Monday, March 9th, 2009, at the main conference venue of the Crowne Plaza National Airport in Arlington, VA.
Attendance at the Workshop is included in registration for AGI-09. Those wishing to attend only the Workshop may register separately.
The workshop extends the technical focus of the main conference by investigating the broad social and economic implications that success in constructing a general-purpose artificial intelligence might have.
It will feature:
- Selmer Bringsjord: "Unethical but Rule-Bound Robots Would Kill Us All"
- Robin Hanson: "The Economics of AI"
- James Albus: "If AGI does all the jobs, how will people get income?"
- Ben Goertzel: "Is Sousveillance the Best Path to Ethical AGI and a Safe Singularity?"
- other invited talks
- a debate between Hugo de Garis and J. Storrs Hall: "Artilect War or Utopia?"
- a panel discussion on the possibility of "hard takeoff"
- and contributed papers and posters.
Call for papers
- If AI and robotics follow Moore's Law, the cost of performing any intellectual, and ultimately any physical, task will decline along the now-familiar precipitous exponential curve. Won't that put all humans out of work in the not-too-distant future?
- On the other hand, having robots do all the work means that no human should have to work. Why then shouldn't the robots support all humans in idle luxury?
- If AI proves capable of recursive self-improvement, or even merely continues to improve as it has for the past 50 years, then shortly after we have a human-level AI we will have a smarter-than-human AI. Wouldn't such robots be able to seize the reins of power and dominate human society?
- But if we can make AI smarter than humans, surely we can also make AI morally better than humans. (As Ron Arkin says, it's a low bar.) Smarter and morally better together are essentially the definition of wiser. Surely wiser minds ought to be running things - and wouldn't we have a moral duty to obey them?
- In general, whether robots are simply serving us coffee or being "servants of the people," in order to have a significant positive impact on society AI must change the way things are done. What are the pathways and roadblocks to such progress?
- As an example, a middle-term AI application that could have huge benefits is the self-driving car. These could save tens of thousands of lives and tens of millions of person-years of productive time every year. Today, however, they would be illegal to operate, and liability laws would make them prohibitively expensive to manufacture.
- In the longer term, AIs will become capable of replacing not only chauffeurs but secretaries, business managers, doctors, lawyers, and so forth. At each stage the benefits increase but so do the barriers. How will this play out?
- The popular entertainment industry, from The Forbin Project to 2001 to Terminator, has fed the general public a highly skewed view of AI as killer robots run amok. How can we encourage a balanced consideration of the benefits as well as the risks of AGI?
Potential speakers may submit either an extended abstract (1-2 pages) or a full paper (6-12 pages). Submissions should generally, though not strictly, follow AAAI formatting guidelines using one of the following templates (thanks to AAAI for allowing us to use their documents as a guide):
Submission Deadline: Feb. 15th, 2009 - Please submit via email to: ben -at- goertzel.org.
Whether an accepted paper (of either length) will be presented as a talk or as a poster will be determined by the Program Committee, in part based on paper quality as assessed by the anonymous reviewers, and in part according to the extent to which the paper addresses a topic of core interest to the AGI community.
Submitted papers and abstracts will be published in an online proceedings concurrently with the workshop, and will also be considered for publication in the Journal of Artificial General Intelligence. Depending on the number and quality of submissions received, an edited post-proceedings volume may also be published.
Acceptance of a paper is based on the assumption that one of the authors will attend the workshop to present it. Any questions can be directed to one of the conference chairs.
Monday, March 9
ECON SESSION (AND WORKSHOP INTRO/WELCOME)
- 2:00 - 2:50  Selmer Bringsjord, "Unethical but Rule-Bound Robots Would Kill Us All" (video | slides)
- 2:50 - 3:20  Ben Goertzel, "Is Sousveillance the Best Path to Ethical AGI?" (video | slides | transcript)
- 3:20 - 3:50  Mark Waser, "Ethics for Self-Improving Machines" (video | slides)
- 3:50 - 4:30  Coffee break
- 4:30 - 4:50  Itamar Arel, "Working Toward Pragmatic Convergence in AGI" (video | slides)
- 4:50 - 5:10  Thomas McCabe, "Failure and Successes in AGI Projects" (video)
- 5:10 - 6:00  AGI Roadmap panel (video | transcript)