AVOIDING AGI APOCALYPSE - CONNOR LEAHY

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk

In this podcast with the legendary Connor Leahy (CEO of Conjecture), recorded in Dec 2022, we discuss a range of topics in artificial intelligence (AI), including AI alignment, the success of ChatGPT, the potential threats of artificial general intelligence (AGI), and the challenges of balancing research and product development at his company, Conjecture. He emphasizes the importance of empathy, of dehumanizing our thinking to avoid anthropomorphic biases, and of learning from real-world experiences. The conversation also covers the Orthogonality Thesis, AI preferences, the mystery of mode collapse, and the paradox of AI alignment.

Connor Leahy expresses concern about the rapid development of AI and the dangers it poses as AI systems become more powerful and more deeply integrated into society. He argues that we need a far better understanding of these systems to ensure their development is safe and beneficial. The discussion also touches on the concept of "futuristic whack-a-mole," in which futurists predict specific AGI threats and others devise solutions for those particular scenarios. The problem is that there may be many more scenarios that neither party can anticipate, especially when dealing with a system smarter than humans.

https://www.linkedin.com/in/connor-j-leahy/
https://twitter.com/NPCollapse

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/112-AVOIDING-AGI-APOCALYPSE---CONNOR-LEAHY-e21ji45

Interviewer: Dr. Tim Scarfe (Innovation CTO @ XRAI Glass https://xrai.glass/)

TOC:
The success of ChatGPT and its impact on the AI field [00:00:00]
Subjective experience [00:15:12]
AI Architectural discussion including RLHF [00:18:04]
The paradox of AI alignment and the future of AI in society [00:31:44]
The impact of AI on society and politics [00:36:11]
Future shock levels and the challenges of predicting the future [00:45:58]
Longtermism and existential risk [00:48:23]
Consequentialism vs. deontology in rationalism [00:53:39]
The Rationalist Community and its Challenges [01:07:37]
AI Alignment and Conjecture [01:14:15]
Orthogonality Thesis and AI Preferences [01:17:01]
Challenges in AI Alignment [01:20:28]
Mechanistic Interpretability in Neural Networks [01:24:54]
Building Cleaner Neural Networks [01:31:36]
Cognitive horizons / The problem with rapid AI development [01:34:52]
Founding Conjecture and raising funds [01:39:36]
Inefficiencies in the market and seizing opportunities [01:45:38]
Charisma, authenticity, and leadership in startups [01:52:13]
Autistic culture and empathy [01:55:26]
Learning from real-world experiences [02:01:57]
Technical empathy and transhumanism [02:07:18]
Moral status and the limits of empathy [02:15:33]
Anthropomorphic Thinking and Consequentialism [02:17:42]
Conjecture: Balancing Research and Product Development [02:20:37]
Epistemology Team at Conjecture [02:31:07]
Interpretability and Deception in AGI [02:36:23]
Futuristic whack-a-mole and predicting AGI threats [02:38:27]

Refs:
1. OpenAI's ChatGPT: https://chat.openai.com/
2. The Mystery of Mode Collapse (article): https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse
3. The Rationalist's Guide to the Galaxy: https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795
4. Alfred Korzybski: https://en.wikipedia.org/wiki/Alfred_Korzybski
5. Instrumental Convergence: https://en.wikipedia.org/wiki/Instrumental_convergence
6. Orthogonality Thesis: https://en.wikipedia.org/wiki/Orthogonality_thesis
7. Brian Tomasik's Essays on Reducing Suffering: https://reducing-suffering.org/
8. Epistemological Framing for AI Alignment Research: https://www.lesswrong.com/posts/Y4YHTBziAscS5WPN7/epistemological-framing-for-ai-alignment-research
9. Circumventing Interpretability: How to Defeat Mind-Readers: https://www.alignmentforum.org/posts/EhAbh2pQoAXkm9yor/circumventing-interpretability-how-to-defeat-mind-readers
10. The Society of Mind: https://www.amazon.co.uk/Society-Mind-Marvin-Minsky/dp/0671607405