AIW.024.VIKING BOER – AI (AGI) WATCH – An Interactive Q&A About AI Safety
Live: Eliezer Yudkowsky – Is Artificial General Intelligence Too Dangerous to Build?
An Interactive Q&A About AI Safety!
***************************
Streamed live on Apr 19, 2023, S. E. WIMBERLY LIBRARY
Live from the Center for Future Mind and the Gruber Sandbox at Florida Atlantic University,
Join us for an interactive Q&A with Yudkowsky about AI Safety!
Eliezer Yudkowsky discusses his rationale for halting the development of AIs more sophisticated than GPT-4. Dr. Mark Bailey of the National Intelligence University will moderate the discussion.
An open letter published on March 22, 2023, calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” In response, Yudkowsky argues that this proposal does not go far enough to protect us from the risks of losing control of superintelligent AI.
Eliezer Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He’s been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field of alignment.
Dr. Mark Bailey is the Chair of the Cyber Intelligence and Data Science Department, as well as the Co-Director of the Data Science Intelligence Center, at the National Intelligence University.
**********************************