Future of Life Institute
  • Videos: 206
  • Views: 15,445,473
Should we slow down AI research? | Debate with Meta, IBM, FHI, FLI
Mark Brakel (FLI Director of Policy), Yann LeCun, Francesca Rossi, and Nick Bostrom debate: "Should we slow down research on AI?" at the World AI Cannes Festival in February 2024.
Views: 2,146

Videos

Members of congress want to Ban Deepfakes.
Views: 245 • 1 day ago
U.S. lawmakers are waking up to the urgent need to address the rampant spread of deepfakes. Here's what some have recently had to say on the topic. To learn more about deepfakes and the growing harm they cause across society through deepfake-powered sexual abuse, disinformation, and fraud, visit bandeepfakes.org/
Emilia Javorsky at 2024 Vienna Conference on Autonomous Weapons
Views: 209 • 1 day ago
In the panel "How Dealing With AWS Will Shape Future Human-Technology Relations", Emilia Javorsky addresses over 900 attendees of the Vienna Conference on Autonomous Weapons - including representatives of over 100 nations - making it the largest gathering of policymakers on this emerging weapons technology. Extracted from ruclips.net/video/A1DyH7N3ppE/видео.html More info: www.aws202...
Dan Faggella on the Race to AGI
Views: 4.7K • 1 day ago
Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at danfaggella.com Timestamps: 00:00 Value differences in AI 12:07 Should we eventually create AGI? 28:22 What is a worthy successor? 43:19 AI chang...
Anthony Aguirre at 2024 Vienna Conference on Autonomous Weapons
Views: 254 • 1 day ago
In the high-level panel "Geopolitics and Machine Politics: How to Move Forward on AWS", Anthony Aguirre addresses over 900 attendees of the Vienna Conference on Autonomous Weapons - including representatives of over 100 nations - making it the largest gathering of policymakers on this emerging weapons technology. Extracted from ruclips.net/video/Ju9fvM6pAS0/видео.html More info: www....
Jaan Tallinn Keynote: 2024 Vienna Conference on Autonomous Weapons
Views: 539 • 1 day ago
In the high-level opening, Jaan Tallinn addresses over 900 attendees of the Vienna Conference on Autonomous Weapons - including representatives of over 100 nations - making it the largest gathering of policymakers on this emerging weapons technology. Extracted from ruclips.net/video/Ju9fvM6pAS0/видео.html More info: www.aws2024.at
Liron Shapira on Superintelligence Goals
Views: 2K • 14 days ago
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI. Timestamps: 00:00 Intelligence as optimization-power 05:18 Will LLMs imitate human values? 07:15 Why would AI develop dangerous goals? 09:55 Goal-completeness 12:53 Alignment to which values? 22:12 Is AI just an...
Annie Jacobsen on Nuclear War - a Second by Second Timeline
Views: 69K • 1 month ago
Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at anniejacobsen.com Timestamps: 00:00 A scenario of nuclear war 06:56 Who would launch an attack? 13:50 Detecting nuclear attacks 19:37 The first cri...
Katja Grace on the Largest Survey of AI Researchers
Views: 988 • 1 month ago
Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date: AI researchers' beliefs about different AI risks, the capabilities required for continued AI-driven transformation, the idea of discontinuous progress, the impacts of AI on either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. ...
Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting
Views: 1.1K • 2 months ago
Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at pauseai.info Timestamps: 00:00 Pausing AI 10:23 Risks during an AI pause 19:41 Hardware overhang 29:04 Technological progress 37:00 Safety research during a...
Sneha Revanur on the Social Effects of AI
Views: 477 • 2 months ago
Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at encodejustice.org Timestamps: 00:00 Encode Justice 06:11 AI ethics and AI safety 15:49 Humans ...
Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable
Views: 1.9K • 3 months ago
Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at cecs.louisville.edu/ry/ Timestamps: 00:00 Is AI like a Shoggoth? 09:50 Scaling laws 16:41 Are hu...
Special: Flo Crivello on AI as a New Form of Life
Views: 855 • 3 months ago
On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risks regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years. Timestamps: 00:00 Technological progress 07:59 Regulatory capture and AI 11:53 AI as a new form of life 15:44 Can AI development be paused? 20:12 Biden's exe...
Carl Robichaud on Preventing Nuclear War
Views: 1K • 4 months ago
Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: www.longview.org/about/carl-robichaud/ Timestamps: 00:00 A new nuclear arms race 08:07 How much do world leaders matter? 18:04 How much does ideology matter? 22:14 Do nuclear weap...
Frank Sauer on Autonomous Weapon Systems
Views: 920 • 4 months ago
Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: metis.unibw.de/en/ Timestamps: 00:00 Autonomy in weapon systems 12:19 Balance of offense and defense 20:05 Killer ...
Darren McKee on Uncontrollable Superintelligence
Views: 2.1K • 5 months ago
Mark Brakel on the UK AI Summit and the Future of AI Policy
Views: 570 • 5 months ago
How two films saved the world from nuclear war
Views: 372K • 5 months ago
Dan Hendrycks on Catastrophic AI Risks
Views: 2.5K • 6 months ago
Before It Controls Us.
Views: 363K • 6 months ago
Samuel Hammond on AGI and Institutional Disruption
Views: 3.3K • 6 months ago
Imagine A World: What if AI advisors helped us make better decisions?
Views: 484 • 6 months ago
Imagine A World: What if narrow AI fractured our shared reality?
Views: 794 • 6 months ago
Steve Omohundro on Provably Safe AGI
Views: 1.6K • 7 months ago
Imagine A World: What if AI enabled us to communicate with animals?
Views: 638 • 7 months ago
Regulate AI Now
Views: 2.3K • 7 months ago
Imagine A World: What if AI-enabled life extension allowed some people to live forever?
Views: 712 • 7 months ago
Johannes Ackva on Managing Climate Change
Views: 291 • 7 months ago
Imagine A World: What if we developed digital nations untethered to geography?
Views: 517 • 7 months ago
Imagine A World: What if our response to global challenges led to a more centralized world?
Views: 427 • 7 months ago