The Inside View


Author: Michaël Trazzi


Description

The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.
51 Episodes
Max Kaufmann and Alan Chan discuss the evaluation of large language models, AI Governance and more generally the impact of the deployment of foundational models. Max is currently a Research Assistant to Owain Evans, mainly thinking about (and fixing) issues that might arise as we scale up our current ML systems, but he is also interested in issues arising from multi-agent failures and situational awareness. Alan is a PhD student at Mila advised by Nicolas Le Roux, with a strong interest in AI Safety, AI Governance and coordination. He has also recently been working with David Krueger and helped me with some of the interviews that have been published recently (ML Street Talk and Christoph Schuhmann). Disclaimer: this discussion is much more casual than the rest of the conversations in this podcast. This was completely impromptu: I just thought it would be interesting to have Max and Alan discuss model evaluations (also called "evals" for short), since they are both interested in the topic. Transcript: https://theinsideview.ai/alan_and_max Youtube: https://youtu.be/BOLxeR_culU Outline (0:00:00) Introduction (0:01:16) LLMs Translating To Systems In The Future Is Confusing (0:03:23) Evaluations Should Measure Actions Instead of Asking Yes or No Questions (0:04:17) Identify Key Contexts for Dangerous Behavior to Write Concrete Evals (0:07:29) Implicit Optimization Process Affects Evals and Benchmarks (0:08:45) Passing Evals Doesn't Guarantee Safety (0:09:41) Balancing Technical Evals With Social Governance (0:11:00) Evaluations Must Be Convincing To Influence AI Development (0:12:04) Evals Might Convince The AI Safety Community But Not People in FAccT (0:13:21) Difficulty In Explaining AI Risk To Other Communities (0:14:19) Both Existential Safety And Fairness Are Important (0:15:14) Reasons Why People Don't Care About AI Existential Risk (0:16:10) The Association Between Silicon Valley And People in FAccT (0:17:39) Timelines And RL Understanding Might Impact The Perception Of Existential Risk From AI (0:19:01) Agentic Models And Longtermism Hinder AI Safety Awareness (0:20:17) The Focus On Immediate AI Harms Might Be A Rejection Of Speculative Claims (0:21:50) Is AI Safety A Pascal Mugging (0:23:15) Believing In The Deployment Of Large Foundational Models Should Be Enough To Start Worrying (0:25:38) AI Capabilities Becoming More Evident to the Public Might Not Be Enough (0:27:27) Addressing Generalization and Reward Specification in AI (0:27:59) Evals as an Additional Layer of Security in AI Safety (0:28:41) A Portfolio Approach to AI Alignment and Safety (0:29:03) Imagine Alignment Is Solved In 2040, What Made It Happen? (0:33:04) AGI Timelines Are Uncertain And Anchored By Vibes (0:35:24) What Matters Is Agency, Strategic Awareness And Planning (0:37:15) Alignment Is A Public Good, Coordination Is Difficult (0:06:48) Dignity As A Useful Heuristic In The Face Of Doom (0:42:28) What Will Society Look Like If We Actually Get Superintelligent Gods (0:45:41) Uncertainty About Societal Dynamics Affecting Long-Term Future With AGI (0:47:42) Biggest Frustration With The AI Safety Community (0:48:34) AI Safety Includes Addressing Negative Consequences of AI (0:50:41) Frustration: Lack of Bridge Building Between AI Safety and Fairness Communities (0:53:07) Building Bridges by Attending Conferences and Understanding Different Perspectives (0:56:02) AI Systems with Weird Instrumental Goals Pose Risks to Society (0:58:43) Advanced AI Systems Controlling Resources Could Magnify Suffering (1:00:24) Cooperation Is Crucial to Achieve Pareto Optimal Outcomes and Avoid Global Catastrophes (1:01:54) Alan's Origin Story (1:02:47) Alan's AI Safety Research Is Driven By Desire To Reduce Suffering And Improve Lives (1:04:52) Diverse Interests And Concern For Global Problems Led To AI Safety Research (1:08:46) The Realization Of The Potential Dangers Of AGI Motivated AI Safety Work (1:10:39) What Is Alan Chan Working On At The Moment
Breandan Considine is a PhD student at the School of Computer Science at McGill University, under the supervision of Jin Guo and Xujie Si. There, he is building tools to help developers locate and reason about software artifacts by learning to read and write code. I met Breandan while doing my "scale is all you need" series of interviews at Mila, where he surprised me by sitting down for two hours to discuss AGI timelines, augmenting developers with AI and neuro-symbolic AI. A fun fact that many noticed while watching the "Scale Is All You Need change my mind" video is that he kept his biking hat on most of the time during the interview, since he was about to leave when we started talking. All of the conversation below is real, but note that since I was not prepared to talk for so long, my camera ran out of battery and some of the video footage on Youtube is actually AI generated (Breandan consented to this). Disclaimer: when talking to people in this podcast I sometimes try to invite guests who share different inside views about existential risk from AI, so that everyone in the AI community can talk to each other more and coordinate more effectively. Breandan is overall much more optimistic about the potential risks from AI than a lot of people working in AI Alignment research, but I think he is quite articulate in his position, even though I disagree with many of his assumptions. I believe his point of view is important for understanding what software engineers and symbolic reasoning researchers think of deep learning progress. Transcript: https://theinsideview.ai/breandan Youtube: https://youtu.be/Bo6jO7MIsIU Host: https://twitter.com/MichaelTrazzi Breandan: https://twitter.com/breandan OUTLINE (00:00) Introduction (01:16) Do We Need Symbolic Reasoning to Get To AGI? (05:41) Merging Symbolic Reasoning & Deep Learning for Powerful AI Systems (10:57) Blending Symbolic Reasoning & Machine Learning Elegantly (15:15) Enhancing Abstractions & Safety in Machine Learning (21:28) AlphaTensor's Applicability May Be Overstated (24:31) AI Safety, Alignment & Encoding Human Values in Code (29:56) Code Research: Moral, Information & Software Aspects (34:17) Automating Programming & Self-Improving AI (36:25) Debunking AI "Monsters" & World Domination Complexities (43:22) Neural Networks: Limits, Scaling Laws & Computation Challenges (59:54) Real-world Software Development vs. Competitive Programming (1:02:59) Measuring Programmer Productivity & Evaluating AI-generated Code (1:06:09) Unintended Consequences, Reward Misspecification & AI-Human Symbiosis (1:16:59) AI's Superior Intelligence: Impact, Self-Improvement & Turing Test Predictions (1:23:52) AI Scaling, Optimization Trade-offs & Economic Viability (1:29:02) Metrics, Misspecifications & AI's Rich Task Diversity (1:30:48) Federated Learning & AI Agent Speed Comparisons (1:32:56) AI Timelines, Regulation & Self-Regulating Systems
Christoph Schuhmann is the co-founder and organizational lead at LAION, the non-profit that released LAION-5B, a dataset of 5.85 billion CLIP-filtered image-text pairs, 14x bigger than LAION-400M, previously the biggest openly accessible image-text dataset in the world. Christoph is interviewed by Alan Chan, PhD student in Machine Learning at Mila and friend of the podcast, in the context of the NeurIPS "existential risk from AI greater than 10% change my mind" series. youtube: https://youtu.be/-Mzfru1r_5s transcript: https://theinsideview.ai/christoph OUTLINE (00:00) Intro (01:13) How LAION Collected Billions Of Image-Text Pairs (05:08) On Misuse: "Most People Use Technology To Do Good Things" (09:32) Regulating Generative Models Won't Lead Anywhere (14:36) Instead of Slowing Down, Deploy Carefully, Always Double Check (18:23) The Solution To Societal Changes Is To Be Open And Flexible To Change (22:16) We Should Be Honest And Face The Tsunami (24:14) What Attitude Should We Have After Education Is Done (30:05) Existential Risk From AI
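Since "CLIP-filtered" is doing a lot of work in that description, here is a minimal sketch of what such a filtering step can look like, using OpenAI's open-source clip package. This is an illustration under stated assumptions, not LAION's actual pipeline: the similarity threshold, file path and caption are made up for the example.

```python
# Minimal sketch of CLIP-based filtering of image-text pairs (illustrative only).
# Assumes: pip install torch pillow git+https://github.com/openai/CLIP.git
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def keep_pair(image_path: str, caption: str, threshold: float = 0.3) -> bool:
    """Keep an image-text pair only if CLIP's image and text embeddings are similar enough."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    text = clip.tokenize([caption], truncate=True).to(device)
    with torch.no_grad():
        image_features = model.encode_image(image)
        text_features = model.encode_text(text)
    # Normalize both embeddings, then compare them with cosine similarity.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    similarity = (image_features @ text_features.T).item()
    return similarity > threshold

# Hypothetical usage: keep_pair("cat.jpg", "a photo of a cat sitting on a windowsill")
```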
Siméon Campos is the founder of EffiSciences and SaferAI, mostly focusing on alignment field building and AI Governance. More recently, he started the newsletter Navigating AI Risk on AI Governance, with a first post on slowing down AI. Note: this episode was recorded in October 2022, so a lot of the content being discussed references what was known at the time, in particular when discussing GPT-3 (instead of GPT-4) or ACT-1 (instead of more recent things like AutoGPT). Transcript: https://theinsideview.ai/simeon Host: https://twitter.com/MichaelTrazzi Simeon: https://twitter.com/Simeon_Cps OUTLINE (00:00) Introduction (01:12) EffiSciences, SaferAI (02:31) Concrete AI Auditing Proposals (04:56) We Need 10K People Working On Alignment (11:08) What's AI Alignment (13:07) GPT-3 Is Already Decent At Reasoning (17:11) AI Regulation Is Easier In Short Timelines (24:33) Why Is Awareness About Alignment Not Widespread? (32:02) Coding AIs Enable Feedback Loops In AI Research (36:08) Technical Talent Is The Bottleneck In AI Research (37:58) 'Fast Takeoff' Is Asymptotic Improvement In AI Capabilities (43:52) Bear Market Can Somewhat Delay The Arrival Of AGI (45:55) AGI Need Not Require Much Intelligence To Do Damage (49:38) Putting Numbers On Confidence (54:36) RL On Top Of Coding AIs (58:21) Betting On Arrival Of AGI (01:01:47) Power-Seeking AIs Are The Objects Of Concern (01:06:43) Scenarios & Probability Of Longer Timelines (01:12:43) Coordination (01:22:49) Compute Governance Seems Relatively Feasible (01:32:32) The Recent Ban On Chips Export To China (01:38:20) AI Governance & Fieldbuilding Were Very Neglected (01:44:42) Students Are More Likely To Change Their Minds About Things (01:53:04) Bootcamps Are A Better Medium Of Outreach (02:01:33) Concluding Thoughts
Victoria Krakovna is a Research Scientist at DeepMind working on AGI safety and a co-founder of the Future of Life Institute, a non-profit organization working to mitigate technological risks to humanity and increase the chances of a positive future. In this interview we discuss three of her recent LessWrong posts, namely DeepMind Alignment Team Opinions On AGI Ruin Arguments, Refining The Sharp Left Turn Threat Model and Paradigms of AI Alignment. Transcript: theinsideview.ai/victoria Youtube: https://youtu.be/ZpwSNiLV-nw OUTLINE (00:00) Intro (00:48) DeepMind Alignment Team Opinions On AGI Ruin Arguments (05:13) On The Possibility Of Iterating On Dangerous Domains and Pivotal Acts (14:14) Alignment and Interpretability (18:14) Deciding Not To Build AGI And Stricter Publication Norms (27:18) Specification Gaming And Goal Misgeneralization (33:02) Alignment Optimism And Probability Of Dying Before 2100 From Unaligned AI (37:52) Refining The Sharp Left Turn Threat Model (48:15) A 'Move 37' Might Disempower Humanity (59:59) Finding An Aligned Model Before A Sharp Left Turn (01:13:33) Detecting Situational Awareness (01:19:40) How This Could Fail, Deception After One SGD Step (01:25:09) Paradigms of AI Alignment (01:38:04) Language Models Simulating Agency And Goals (01:45:40) Twitter Questions (01:48:30) Last Message For The ML Community
David Krueger is an assistant professor at the University of Cambridge and got his PhD from Mila. His research group focuses on aligning deep learning systems, but he is also interested in governance and global coordination. He is famous in Cambridge for not having an AI alignment research agenda per se, and instead he tries to enable his seven PhD students to drive their own research. In this episode we discuss AI Takeoff scenarios, research going on at David's lab, Coordination, Governance, Causality, the public perception of AI Alignment research and how to change it. Youtube: https://youtu.be/bDMqo7BpNbk Transcript: https://theinsideview.ai/david OUTLINE (00:00) Highlights (01:06) Incentivized Behaviors and Takeoff Speeds (17:53) Building Models That Understand Causality (31:04) Agency, Acausal Trade And Causality in LLMs (40:44) Recursive Self Improvement, Bitter Lesson And Alignment (01:03:17) AI Governance And Coordination (01:13:26) David’s AI Alignment Research Lab and the Existential Safety Community (01:24:13) On The Public Perception of AI Alignment (01:35:58) How To Get People In Academia To Work on Alignment (02:00:19) Decomposing Learning Curves, Latest Research From David Krueger’s Lab (02:20:06) Safety-Performance Trade-Offs (02:30:20) Defining And Characterizing Reward Hacking (02:40:51) Playing Poker With Ethan Caballero, Timelines
Ethan Caballero is a PhD student at Mila interested in how to best scale deep learning models according to all downstream evaluations that matter. He is known as the fearless leader of the "Scale Is All You Need" movement and the edgiest person at Mila. His first interview is the second most popular interview on the channel, and today he's back to talk about Broken Neural Scaling Laws and how to use them to superforecast AGI. Youtube: https://youtu.be/SV87S38M1J4 Transcript: https://theinsideview.ai/ethan2 OUTLINE (00:00) The Albert Einstein Of Scaling (00:50) The Fearless Leader Of The Scale Is All You Need Movement (01:07) A Functional Form Predicting Every Scaling Behavior (01:40) A Break Between Two Straight Lines On A Log Log Plot (02:32) The Broken Neural Scaling Laws Equation (04:04) Extrapolating A Ton Of Large Scale Vision And Language Tasks (04:49) Upstream And Downstream Have Different Breaks (05:22) Extrapolating Four Digit Addition Performance (06:11) On The Feasibility Of Running Enough Training Runs (06:31) Predicting Sharp Left Turns (07:51) Modeling Double Descent (08:41) Forecasting Interpretability And Controllability (09:33) How Deception Might Happen In Practice (10:24) Sinister Stumbles And Treacherous Turns (11:18) Recursive Self Improvement Precedes Sinister Stumbles (11:51) Humans In The Loop For The Very First Deception (12:32) The Hardware Stuff Is Going To Come After The Software Stuff (12:57) Distributing Your Training By Copy-Pasting Yourself Into Different Servers (13:42) Automating The Entire Hardware Pipeline (14:47) Having Text AGI Spit Out New Robotics Designs (16:33) The Case For Existential Risk From AI (18:32) Git Re-basin (18:54) Is Chain-Of-Thought Enough For Complex Reasoning In LMs? (19:52) Why Diffusion Models Outperform Other Generative Models (21:13) Using Whisper To Train GPT-4 (22:33) Text To Video Was Only Slightly Impressive (23:29) Last Message
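For readers unfamiliar with the "break between two straight lines on a log-log plot" idea, here is a rough one-break sketch of that kind of functional form in Python. The parameter names and values are illustrative assumptions of mine; see the Broken Neural Scaling Laws paper for the actual equation and notation.

```python
import numpy as np

def broken_power_law(x, a, b, c0, c1, d1, f1):
    """Illustrative one-break smoothly broken power law.

    On a log-log plot this behaves like a straight line of slope -c0 that
    smoothly bends around the break scale d1 into a line of slope -(c0 + c1);
    f1 controls how sharp the bend is, and a is the asymptotic floor.
    """
    return a + b * x ** (-c0) * (1.0 + (x / d1) ** (1.0 / f1)) ** (-c1 * f1)

# Example: evaluate the curve over nine orders of magnitude of "scale"
# (e.g. training compute) and eyeball the two straight segments in log-log space.
x = np.logspace(0, 9, 200)
y = broken_power_law(x, a=0.05, b=1.0, c0=0.1, c1=0.4, d1=1e5, f1=0.3)
```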
Irina Rish is a professor at the Université de Montréal, a core member of Mila (Quebec AI Institute), and the organizer of the neural scaling laws workshop towards maximally beneficial AGI. In this episode we discuss Irina's definition of Artificial General Intelligence, her takes on AI Alignment, AI Progress, current research in scaling laws, the neural scaling laws workshop she has been organizing, phase transitions, continual learning, existential risk from AI and what is currently happening in AI Alignment at Mila. Transcript: theinsideview.ai/irina Youtube: https://youtu.be/ZwvJn4x714s OUTLINE (00:00) Highlights (00:30) Introduction (01:03) Defining AGI (03:55) AGI means augmented human intelligence (06:20) Solving alignment via AI parenting (09:03) From the early days of deep learning to general agents (13:27) How Irina updated from Gato (17:36) Building truly general AI within Irina's lifetime (19:38) The least impressive thing that won't happen in five years (22:36) Scaling beyond power laws (28:45) The neural scaling laws workshop (35:07) Why Irina does not want to slow down AI progress (53:52) Phase transitions and grokking (01:02:26) Does scale solve continual learning? (01:11:10) Irina's probability of existential risk from AGI (01:14:53) Alignment work at Mila (01:20:08) Where will Mila get its compute from? (01:27:04) With Great Compute Comes Great Responsibility (01:28:51) The Neural Scaling Laws Workshop At NeurIPS
Shahar Avin is a senior researcher at the Centre for the Study of Existential Risk in Cambridge. In his past life he was a Google engineer, though right now he spends most of his time thinking about how to prevent the risks that could arise if companies like Google end up deploying powerful AI systems, by organizing AI Governance role-playing workshops. In this episode, we talk about a broad variety of topics, including how we could apply the lessons from running AI Governance workshops to governing transformative AI, AI Strategy, AI Governance, Trustworthy AI Development, and end up answering some Twitter questions. Youtube: https://youtu.be/3T7Gpwhtc6Q Transcript: https://theinsideview.ai/shahar Host: https://twitter.com/MichaelTrazzi Shahar: https://www.shaharavin.com Outline (00:00) Highlights (01:20) Intelligence Rising (06:07) Measuring Transformative AI By The Scale Of Its Impact (08:09) Comprehensive AI Services (11:38) Automating CEOs Through AI Services (14:21) Towards A "Tech Company Singularity" (15:58) Predicting AI Is Like Predicting The Industrial Revolution (19:57) 50% Chance Of Human-brain Performance By 2038 (22:25) AI Alignment Is About Steering Powerful Systems Towards Valuable Worlds (23:51) You Should Still Worry About Less Agential Systems (28:07) AI Strategy Needs To Be Tested In The Real World To Not Become Theoretical Physics (31:37) Playing War Games For Real-time Partial-information Adversarial Thinking (34:50) Towards World Leaders Playing The Game Because It’s Useful (39:31) Open Game, Cybersecurity, Government Spending, Hard And Soft Power (45:21) How Cybersecurity, Hard-power Or Soft-power Could Lead To A Strategic Advantage (48:58) Cybersecurity In A World Of Advanced AI Systems (52:50) Allocating AI Talent For Positive R&D ROI (57:25) Players Learn To Cooperate And Defect (01:00:10) Can You Actually Tax Tech Companies? (01:02:10) The Emergence Of Bilateral Agreements And Technology Bans (01:03:22) AI Labs Might Not Be Showing All Of Their Cards (01:06:34) Why Publish AI Research (01:09:21) Should You Expect Actors To Build Safety Features Before Crunch Time (01:12:39) Why Tech Companies And Governments Will Be The Decisive Players (01:14:29) Regulations Need To Happen Before The Explosion, Not After (01:16:55) Early Regulation Could Become Locked In (01:20:00) What Incentives Do Companies Have To Regulate? (01:23:06) Why Shahar Is Terrified Of AI DAOs (01:27:33) Concrete Mechanisms To Tell Apart Who We Should Trust With Building Advanced AI Systems (01:31:19) Increasing Privacy To Build Trust (01:33:37) Raising Awareness About Privacy Through Federated Learning (01:35:23) How To Motivate AI Regulations (01:37:44) How Governments Could Start Caring About AI Risk (01:39:12) Attempts To Regulate Autonomous Weapons Have Not Resulted In A Ban (01:40:58) We Should Start By Convincing The Department Of Defense (01:42:08) Medical Device Regulations Might Be A Good Model For Audits (01:46:09) Alignment Red Tape And Misalignment Fines (01:46:53) Red Teaming AI Systems (01:49:12) Red Teaming May Not Extend To Advanced AI Systems (01:51:26) What Climate Change Teaches Us About AI Strategy (01:55:16) Can We Actually Regulate Compute (01:57:01) How Feasible Are Shutdown Switches
Katja Grace runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of AI. She is well known for a survey published in 2017 called When Will AI Exceed Human Performance? Evidence From AI Experts, and recently published a new survey of AI experts: What do ML researchers think about AI in 2022. We start this episode by discussing what Katja is currently thinking about, namely an answer to Scott Alexander on why slowing down AI progress is an underexplored path to impact. Youtube: https://youtu.be/rSw3UVDZge0 Audio & Transcript: https://theinsideview.ai/katja Host: https://twitter.com/MichaelTrazzi Katja: https://twitter.com/katjagrace OUTLINE (00:00) Highlights (00:58) Intro (01:33) Why Advocating For Slowing Down AI Might Be Net Bad (04:35) Why Slowing Down AI Is Taboo (10:14) Why Katja Is Not Currently Giving A Talk To The UN (12:40) To Avoid An Arms Race, Do Not Accelerate Capabilities (16:27) How To Cooperate And Implement Safety Measures (21:26) Would AI Researchers Actually Accept Slowing Down AI? (29:08) Common Arguments Against Slowing Down And Their Counterarguments (36:26) To Go To The Stars, Build AGI Or Upload Your Mind (39:46) Why Katja Thinks There Is A 7% Chance That AI Destroys The World (46:39) Why We Might End Up Building Agents (51:02) AI Impacts Answers Empirical Questions To Help Solve Important Ones (56:32) The 2022 Expert Survey on AI Progress (58:56) High Level Machine Intelligence (1:04:02) Running A Survey That Actually Collects Data (1:08:38) How AI Timelines Have Become Shorter Since 2016 (1:14:35) Are AI Researchers Still Too Optimistic? (1:18:20) AI Experts Seem To Believe In Slower Takeoffs (1:25:11) Automation And The Unequal Distributions Of Cognitive Power (1:34:59) The Least Impressive Thing That Cannot Be Done In 2 Years (1:38:17) Final Thoughts
Markus Anderljung is the Head of AI Policy at the Centre for Governance of AI  in Oxford and was previously seconded to the UK government office as a senior policy specialist. In this episode we discuss Jack Clark's AI Policy takes, answer questions about AI Policy from Twitter and explore what is happening in the AI Governance landscape more broadly. Youtube: https://youtu.be/DD303irN3ps Transcript: https://theinsideview.ai/markus Host: https://twitter.com/MichaelTrazzi Markus: https://twitter.com/manderljung OUTLINE (00:00) Highlights & Intro (00:57) Jack Clark’s AI Policy Takes: Agree or Disagree (06:57) AI Governance Takes: Answering Twitter Questions (32:07) What The Centre For the Governance Of AI Is Doing (57:38) The AI Governance Landscape (01:15:07) How The EU Is Regulating AI (01:29:28) Towards An Incentive Structure For Aligned AI
Alex Lawsen is an advisor at 80,000 Hours who released an Introduction to Forecasting Youtube series and has recently been thinking about forecasting AI progress, why you cannot just "update all the way bro" (discussed in my latest episode with Connor Leahy), and how to develop inside views about AI Alignment in general. Youtube: https://youtu.be/vLkasevJP5c Transcript: https://theinsideview.ai/alex Host: https://twitter.com/MichaelTrazzi Alex: https://twitter.com/lxrjl OUTLINE (00:00) Intro (00:31) How Alex Ended Up Making Forecasting Videos (02:43) Why You Should Try Calibration Training (07:25) How Alex Upskilled In Forecasting (12:25) Why A Spider Monkey Profile Picture (13:53) Why You Cannot Just "Update All The Way Bro" (18:50) Why The Metaculus AGI Forecasts Dropped Twice (24:37) How Alex’s AI Timelines Differ From Metaculus (27:11) Maximizing Your Own Impact Using Forecasting (33:52) What Makes A Good Forecasting Question (41:59) What Motivated Alex To Develop Inside Views About AI (43:26) Trying To Pass AI Alignment Ideological Turing Tests (54:52) Why Economic Growth Curve Fitting Is Not Sufficient To Forecast AGI (01:04:10) Additional Resources
Robert Long is a research fellow at the Future of Humanity Institute. His work lies at the intersection of the philosophy of AI safety and AI consciousness. We talk about the recent LaMDA controversy, Ilya Sutskever's slightly conscious tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird. Youtube: https://youtu.be/K34AwhoQhb8 Transcript: https://theinsideview.ai/roblong Host: https://twitter.com/MichaelTrazzi Robert: https://twitter.com/rgblong Robert's blog: https://experiencemachines.substack.com OUTLINE (00:00:00) Intro (00:01:11) The LaMDA Controversy (00:07:06) Defining AGI And Consciousness (00:10:30) The Slightly Conscious Tweet (00:13:16) Could Large Language Models Become Conscious? (00:18:03) Blake Lemoine Does Not Negotiate With Terrorists (00:25:58) Could We Actually Test Artificial Consciousness? (00:29:33) From Metaphysics To Illusionism (00:35:30) How We Could Decide On The Moral Patienthood Of Language Models (00:42:00) Predictive Processing, Global Workspace Theories and Integrated Information Theory (00:49:46) Have You Tried DMT? (00:51:13) Is Valence Just The Reward in Reinforcement Learning? (00:54:26) Are Pain And Pleasure Symmetrical? (01:04:25) From Charismatic AI Systems to Artificial Sentience (01:15:07) Sharing The World With Digital Minds (01:24:33) Why AI Alignment Is More Pressing Than Artificial Sentience (01:39:48) Why Moral Personhood Could Require Memory (01:42:41) Last Thoughts And Further Readings
Robert Miles started making videos for Computerphile, then decided to create his own Youtube channel about AI Safety. Lately, he's been working on a Discord community that uses Stampy the chatbot to answer Youtube comments. We also spend some time discussing recent AI progress and why Rob is not that optimistic about humanity's survival. Transcript: https://theinsideview.ai/rob Youtube: https://youtu.be/DyZye1GZtfk Host: https://twitter.com/MichaelTrazzi Rob: https://twitter.com/robertskmiles OUTLINE (00:00:00) Intro (00:02:25) Youtube (00:28:30) Stampy (00:51:24) AI Progress (01:07:43) Chatbots (01:26:10) Avoiding Doom (01:59:34) Formalising AI Alignment (02:14:40) AI Timelines (02:25:45) Regulations (02:40:22) Rob’s new channel
Connor was the first guest of this podcast. In that first episode, we talked a lot about EleutherAI, a grassroots collective of researchers he co-founded, who open-sourced GPT-3-sized models such as GPT-NeoX and GPT-J. Since then, Connor co-founded Conjecture, a company aiming to make AGI safe through scalable AI Alignment research. One of the goals of Conjecture is to reach a fundamental understanding of the internal mechanisms of current deep learning models using interpretability techniques. In this episode, we go through the famous AI Alignment compass memes, discuss Connor’s inside views about AI progress, how he approaches AGI forecasting, his takes on Eliezer Yudkowsky’s secret strategy, common misconceptions about EleutherAI, and why you should consider working for his new company Conjecture. youtube: https://youtu.be/Oz4G9zrlAGs transcript: https://theinsideview.ai/connor2 twitter: https://twitter.com/MichaelTrazzi OUTLINE (00:00) Highlights (01:08) AGI Meme Review (13:36) Current AI Progress (25:43) Defining AGI (34:36) AGI Timelines (55:34) Death with Dignity (01:23:00) EleutherAI (01:46:09) Conjecture (02:43:58) Twitter Q&A
Blake Richards is an Assistant Professor at the Montreal Neurological Institute and the School of Computer Science at McGill University and a Core Faculty Member at Mila. He thinks that AGI is not a coherent concept, which is why he ended up on a recent AGI political compass meme. When people asked on Twitter who the edgiest person at Mila was, his name actually got more likes than Ethan's, so hopefully this podcast will help re-establish the truth. Transcript: https://theinsideview.ai/blake Video: https://youtu.be/kWsHS7tXjSU Outline: (01:03) Highlights (01:03) AGI good / AGI not now compass (02:25) AGI is not a coherent concept (05:30) you cannot build truly general AI (14:30) no "intelligence" threshold for AI (25:24) benchmarking intelligence (28:34) recursive self-improvement (34:47) scale is something you need (37:20) the bitter lesson is only half-true (41:32) human-like sensors for general agents (44:06) the credit assignment problem (49:50) testing for backpropagation in the brain (54:42) burstprop (bursts of action potentials), reward prediction errors (01:01:35) long-term credit-assignment in reinforcement learning (01:10:48) what would change his mind on scaling and existential risk
Emil is a resident at the Google Arts & Culture Lab where he explores the intersection between art and machine learning. He recently built his own machine learning server, or rig, which cost him €25,000. Emil's Story: https://www.emilwallner.com/p/ml-rig Youtube: https://youtu.be/njbPpxhE6W0 00:00 Intro 00:23 Building your own rig 06:11 The Nvidia GPU order hack 15:51 Inside Emil's rig 21:31 Motherboard 23:55 Cooling and datacenters 29:36 Deep Learning lessons from owning your hardware 36:20 Shared resources vs. personal GPUs 39:12 RAM, chassis and airflow 42:42 AMD, Apple, ARM and Nvidia 51:15 TensorFlow, TPUs, cloud mindset, EleutherAI
Sonia is a graduate student applying ML to neuroscience at Mila. She previously applied deep learning to neural data at Janelia, worked as an NLP research engineer at a startup, and graduated in computational neuroscience from Princeton University. Anonymous feedback: https://app.suggestionox.com/r/xOmqTW Twitter: https://twitter.com/MichaelTrazzi Sonia's December update: https://t.co/z0GRqDTnWm Sonia's Twitter: https://twitter.com/soniajoseph_ Orthogonality Thesis: https://www.youtube.com/watch?v=hEUO6pjwFOo Paperclip game: https://www.decisionproblem.com/paperclips/ Ngo & Yudkowsky on feedback loops: https://bit.ly/3ml0zFL Outline 00:00 Intro 01:06 NFTs 03:38 Web 3 21:12 Digital Copy 29:09 ML x Neuroscience 43:44 Limits of the Orthogonality Thesis 01:01:25 Goal of perpetuating Information 01:08:14 Compressing information 01:10:52 Feedback loops are not safe 01:17:43 Another AI Safety aesthetic 01:23:46 Meaning of life
In this episode I discuss Brain Computer Interfaces with Slava Bobrov, a self-taught Machine Learning Engineer applying AI to neural biosignals to control robotic limbs. This episode will be of special interest to you if you're an engineer who wants to get started with brain computer interfaces, or just broadly interested in how this technology could enhance human intelligence. Fun fact: most of the questions I asked were sent by my Twitter followers, or came from a Discord I co-created on Brain Computer Interfaces. So if you want your questions to be on the next video or you're genuinely interested in this topic, you can find links for both my Twitter and our BCI Discord in the description. Outline: 00:00 introduction 00:49 defining brain computer interfaces (BCI) 03:35 Slava's work on prosthetic hands 09:16 different kinds of BCI 11:42 BCI companies: Muse, Open BCI 16:26 what Kernel is doing (fNIRS) 20:24 EEG vs. EMG: the stadium metaphor 25:26 can we build "safe" BCIs? 29:32 would you want a Facebook BCI? 33:40 OpenAI Codex is a BCI 38:04 reward prediction in the brain 44:04 what Machine Learning project for BCI? 48:27 Slava's sleep tracking 51:55 patterns in recorded sleep signals 54:56 lucid dreaming 56:51 the long-term future of BCI 59:57 are there diminishing returns in BCI/AI investments? 01:03:45 heterogeneity in intelligence after BCI/AI progress 01:06:30 is our communication improving? is BCI progress fast enough? 01:12:30 neuroplasticity, Neuralink 01:16:08 siamese twins with BCI, the joystick without screen experiment 01:20:50 Slava's vision for a "brain swarm" 01:23:23 language becoming obsolete, Twitter swarm 01:26:16 brain uploads vs. copies 01:29:32 would a copy be actually you? 01:31:30 would copies be a success for humanity? 01:34:38 shouldn't we change humanity's reward function? 01:37:54 conclusion
We talk about AI-generated art with Charlie Snell, a Berkeley student who wrote extensively about AI art for ML@Berkeley's blog (https://ml.berkeley.edu/blog/). We look at multiple slides with art throughout our conversation, so I highly recommend watching the video (https://www.youtube.com/watch?v=gcwidpxeAHI). In the first part we go through Charlie's explanations of DALL-E, a model trained end-to-end by OpenAI to generate images from prompts. We then talk about CLIP + VQGAN, where CLIP is another model by OpenAI matching prompts and images, and VQGAN is a state-of-the-art GAN used extensively in the AI art scene. At the end of the video we look at different pieces of art made using CLIP, including tricks for using VQGAN with CLIP, videos, and the latest CLIP-guided diffusion architecture. At the end of our chat we talk about scaling laws and how progress in art relates to other advances in ML.
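For readers who want a concrete sense of what "using VQGAN with CLIP" means, here is a minimal, heavily simplified sketch of the core optimization loop, assuming OpenAI's open-source clip package. It optimizes raw pixels against a CLIP text embedding; the real CLIP + VQGAN setup instead optimizes a VQGAN latent code that is decoded into pixels, and adds augmentation tricks omitted here. The prompt and hyperparameters are illustrative.

```python
# Minimal sketch of CLIP-guided image optimization, the core idea behind
# "CLIP + VQGAN" art, simplified to raw pixel optimization.
# Assumes: pip install torch git+https://github.com/openai/CLIP.git
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep weights in fp32 so gradients flow cleanly in this sketch

prompt = "a watercolor painting of a lighthouse at sunset"  # illustrative prompt
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize([prompt]).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# The "canvas" being optimized: a 224x224 RGB image (CLIP ViT-B/32 input size).
# The usual CLIP resize/normalization preprocessing is omitted for brevity.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(300):
    optimizer.zero_grad()
    image_features = model.encode_image(image.clamp(0, 1))
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    # Push the image embedding towards the prompt embedding (maximize cosine similarity).
    loss = -(image_features * text_features).sum()
    loss.backward()
    optimizer.step()
```

In practice, swapping the raw pixel tensor for a VQGAN latent that is decoded into the image gives far more coherent results, and CLIP-guided diffusion replaces the GAN decoder with a diffusion model, which is the progression described above.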