AI: The Beast or Jerusalem? | Jonathan Pageau & Jim Keller | #308

Dr. Peterson’s extensive catalog is available now on DailyWire+:

Dr. Jordan B. Peterson, Jonathan Pageau, and Jim Keller dive into the world of artificial intelligence, debating the pros and cons of technological achievement, and ascertaining whether smarter tech is something to fear or encourage.

Jim Keller is a microprocessor engineer known for his work at Apple and AMD. He has served in the role of architect for numerous game changing processors, has co-authored multiple instruction sets for highly complicated designs, and is credited for being the key player behind AMD’s renewed ability to compete with Intel in the high-end CPU market. In 2016, Keller joined Tesla, becoming Vice President of Autopilot Hardware Engineering. In 2018, he became a Senior Vice President for Intel. In 2020, he resigned due to disagreements over outsourcing production, but quickly found a new position at Tenstorrent, as Chief Technical Officer.

Jonathan Pageau is a French-Canadian liturgical artist and icon carver, known for his work featured in museums across the world. He carves Eastern Orthodox and other traditional images, and teaches an online carving class. He also runs a YouTube channel dedicated to the exploration of symbolism across history and religion.

—Links—

For Jonathan Pageau:

Icon Carving:

Podcast: www.thesymbolicworld.com

Youtube Channel:

For Jim Keller:

Twitter:

Jim’s Speech, “10 Problems to Solve”:

Jim’s Speech, “Overclocking AI”:

Iain Banks References:

— Chapters —

(0:00) Coming up
(1:48) Intro
(5:00) Conceptualizing artificial intelligence
(9:10) Language models and story prediction
(12:20) Deep story and prompt engineering
(18:10) Friston, error prediction and emotional mapping
(23:37) Generative models
(24:36) Does the intelligence in AI come from humans?
(27:26) Can AI have goals that are not understandable to humans?
(30:22) When a human records data vs an AI
(34:00) When will AI become autonomous?
(37:48) To create what could supplant you
(47:36) When technology is used to achieve desire, unintended consequences
(55:14) Abundance and nihilism
(58:30) High human goals and the weaponization of intelligence
(1:04:28) AI: Who will hold the keys?
(1:14:09) Technology through biblical imagery
(1:17:30) When the term “AI” ceases to make sense
(1:20:12) What will humans worship in the tech age?



So the Hebrews created history as we know it. You don't get away with anything, and so you might think you can bend the fabric of reality, and that you can treat people instrumentally, and that you can bow to the tyrant and violate your conscience without cost. You will pay the piper. It's going to call you out of that slavery into freedom, even if that pulls you into the desert, and we're going to see that there's something else going on here that is far more cosmic and deeper than what you can imagine. The highest ethical spirit to which we're beholden is presented precisely as that spirit that allies itself with the cause of freedom against tyranny. I want villains to get punished, but do you want the villains to learn before they have to pay the ultimate price? That's such a Christian question. That has to do with attention, by the way. It has to do with a subsidiary hierarchy, like a hierarchy of attention, which is set up in a way in which all the levels can have room to exist, let's say. And so, you know, these new systems, the new way, let's say the new urbanist movement, similar to what you're talking about, that's what they've understood. It's like, we need places of intimacy in terms of the house, we need places of communion in

terms of, you know, parks and alleyways and buildings where we meet, and a church, all these places that kind of manifest our community together. Yeah, so those existed coherently for long periods of time, and then the abundance post-World War II, and some ideas about what life could be like, caused this big change, and that change satisfied some needs, people got houses, but broke community needs. And then new sets of ideas: what's the synthesis, what's the possibility of having your own home but also having community, not having to drive 15 minutes for every single thing? And some people live in those worlds and some people don't. Do you think we'll be smart... so, one of the problems... why were we smart enough to solve some of those? Because we had 20 years. But now, because one of the things that's happening now is, as you pointed out earlier, we're going to be producing equally revolutionary transformations but at a much smaller scale of time. What's natural to our children is so different than what was natural to us, but what was natural to us was very different from our parents, so some changes get accepted generationally. Really? So what's made you so optimistic? [Music] Hello, everyone

watching on YouTube or listening on associated platforms. I'm very excited today to be bringing you two of the people I admire most, intellectually, I would say, and morally for that matter: Jonathan Pageau and Jim Keller, very different thinkers. Jonathan Pageau is a French-Canadian liturgical artist and icon carver known for his work featured in museums across the world. He carves Eastern Orthodox among other traditional images and teaches an online carving class. He also runs a YouTube channel, The Symbolic World, dedicated to the exploration of symbolism across history and religion. Jonathan is one of the deepest religious thinkers I've ever met. Jim Keller is a microprocessor engineer known very well in the relevant communities, and beyond them, for his work at Apple and AMD, among other corporations. He served in the role of architect for numerous game-changing processors, has co-authored multiple instruction sets for highly complicated designs, and is credited for being the key player behind AMD's renewed ability to compete with Intel in the high-end CPU market. In 2016, Keller joined Tesla, becoming Vice President of Autopilot Hardware Engineering. In 2018, he became a Senior Vice President for Intel. In 2020, he resigned due to disagreements

over outsourcing production, but quickly found a new position at Tenstorrent as Chief Technical Officer. We're going to sit today and discuss the perils and promise of artificial intelligence, and it's a conversation I'm very much looking forward to. So welcome to all of you watching and listening. I thought it would be interesting to have a three-way conversation. Um, Jonathan and I have been talking a lot lately, especially with John Vervaeke and some other people as well, about the fact that it seems necessary for human beings to view the world through a story; in fact, when we describe the structure that governs our action and our perception, that is a story. And so we've been trying to puzzle out, I would say to some degree on the religious front, what might be the deepest stories. And I'm very curious about the fact that we perceive the world through a story, human beings do, and that seems to be a fundamental part of our cognitive architecture, and of cognitive architecture in general, according to some of the world's top neuroscientists. And I'm curious, and I know Jim is interested in cognitive processing and in building systems that in some sense

seem to run in a manner analogous to the manner in which our brains run. And so I'm curious about the overlap between the notion that we have to view the world through a story and what's happening on the AI front. There's all sorts of other places that we can take the conversation, so maybe I'll start with you, Jim. Do you want to tell people what you've been working on, and maybe give a bit of a background to everyone about how you conceptualize artificial intelligence? Yeah, sure. So first I'll say, technically I'm not an artificial intelligence researcher, I'm a computer architect, and I'd say my skill set goes from, you know, somewhere around the atom up to the program. So we make transistors out of atoms, we make logic gates out of transistors, we make computers out of logic gates, we run programs on those, and recently we've been able to run programs fast enough to do something called an artificial intelligence model, or neural network, depending on how we say it. Um, and then we're building chips now that run artificial intelligence models fast, and we have a novel way to do it at the company I work at, but lots of people are working on it, and

I think we were sort of taken by surprise at what's happened in the last five years, how quickly models started to do interesting and intelligent-seeming things. Um, there's been an estimate that human brains do about 10 to the 18th operations a second, which sounds like a lot; it's a billion billion operations a second. And a little computer, you know, the processor in your phone, probably does 10 billion operations a second, you know, ish, and then if you use the GPU, maybe 100 billion, something like that. And big modern AI computers, like OpenAI uses, or Google or somebody, they're doing like 10 to the 16th, maybe slightly more, operations a second, so they're within a factor of a hundred of a human brain's raw computational ability. And by the way, that could be completely wrong; our understanding of how the human brain does computation could be wrong. But lots of people have estimated, based on number of neurons, number of connections, how fast neurons fire, how many operations a neuron firing seems to involve. I mean, the estimates range by a couple of orders of magnitude.
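
For reference, here is a rough back-of-the-envelope comparison of the operations-per-second figures cited above. As Keller says, these are estimates that could be off by orders of magnitude, so the numbers below are illustrative only, not measurements.

```python
# Rough comparison of the estimates quoted in the conversation (illustrative only).
brain_ops_per_sec = 1e18          # "a billion billion" operations per second, one brain estimate
phone_cpu_ops_per_sec = 1e10      # ~10 billion operations per second, "ish"
phone_gpu_ops_per_sec = 1e11      # ~100 billion with the GPU
big_ai_system_ops_per_sec = 1e16  # large AI training systems, per the figure given here

print(f"brain vs. big AI system: ~{brain_ops_per_sec / big_ai_system_ops_per_sec:.0f}x")  # ~100x
print(f"brain vs. phone CPU:     ~{brain_ops_per_sec / phone_cpu_ops_per_sec:.0e}x")
```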

But when our computers got fast enough, we started to build things called language models and image models that do fairly remarkable things. So what have you seen in the last few years that's been indicative of this, of the change that you described as revolutionary? What are computers doing now that you found surprising because of this increase in speed? Yeah, you can have a language model read a 200,000-word book and summarize it fairly accurately, so it can extract out the gist of it. Can it do that with fiction? Yeah, yeah. And I'm going to introduce you to a friend who took a language model, changed it and fine-tuned it with Shakespeare, and used it to write screenplays that are pretty good. And these kinds of things are really interesting. And then, we were talking about this a little bit earlier: so when computers do computations, you know, a program will say add, a equals b plus c. The computer does those operations on representations of information, ones and zeros; it doesn't understand them at all. The computer has no understanding of it. But what we call a language model translates information, like words and images and ideas,

into a space where the program, the ideas, and the operations it does on them are all essentially the same thing. We'll be right back with Jonathan Pageau and Jim Keller; first we wanted to give you a sneak peek at Jordan's new documentary, Logos and Literacy. I was very much struck by how the translation of the biblical writings jump-started the development of literacy across the entire world. Illiteracy was the norm. The pastor's home was the first school, and every morning it would begin with singing. The Christian faith is a singing religion; probably 80 percent of scripture memorization today exists only because of what is sung. This is amazing. Here we have a Gutenberg Bible, printed on the press of Johannes Gutenberg. Science and religion are opposing forces in the world, but historically that has not been the case. Now the book is available to everyone. From Shakespeare to modern education and medicine and science, to civilization itself, it is the most influential book in all history, and hopefully people can walk away with at least a sense of that. Right, so a language model can produce words and then use those words as inputs, and it seems to have an understanding of what those words are, which is very different from how a computer

Operates on data about the language Models I mean my sense of at least in Part how we understand the story Is that Maybe we’re watching a movie let’s say And we get some sense of the characters Goals And then we see the manner in which that Character perceives the world and we in Some sense adopt his goals which is to Identify with character and then we play Out a panoply of emotions and Motivations on our body because we now Inhabit that goal space and we Understand the character as a Consequence of mimicking the character With our own physiology And you you have computers that can Summarize the gist of a story but they Don’t have that underlying physiology First of all that’s it’s a theory that Your physiology has anything to do with It you could Understand the character’s goals and Then get involved in the details of the Story and then you’re predicting the Path of the story And also having expectations and hopes For this story yeah and a good story Kind of takes you on a ride because it Teases you with Doing some of the things you expect but Also doing things that are unexpected And possibly that creates emotional that

could, yeah. It does, it does. So in an AI model, you can easily have a set of goals. So you have your personal goals, and then when you watch the story you have those goals. Yeah, you put those together. Like, how many goals is that? Like, the story's goals and your goals: hundreds, thousands; those are small numbers, right? Then you have the story. A model can predict the story too, just as well as you can. How, though? And that's the thing that I find mysterious, is that as the story progresses it can look at the error between what it predicted and what actually happened, and then iterate on that. Right, so you would call that emotional: excitement, disappointment, anxiety. Anxiety, yeah, definitely. Well, a big part of what anxiety is, is discrepancy. In fact, some of those states are manifested in your body, because you trigger hormone cascades and a bunch of stuff, but you can also just scan your brain and see that stuff move around. Right, right. And, you know, the AI model can have an error function and look at the difference between what it expected and what happened, and you could call that the emotional state.
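
A minimal sketch of the mapping being discussed, reading a model's prediction error as the "emotional" quantities named here: anxiety tracking the size of the discrepancy, positive affect tracking its reduction. This is only an illustration of the idea, not a claim about how any particular AI system or the brain actually works.

```python
# Illustrative only: treat prediction error as "anxiety" and error reduction as "positive affect".
def emotional_readout(predicted, observed, previous_error):
    error = abs(observed - predicted)                    # discrepancy between prediction and outcome
    anxiety = error                                      # bigger surprise, more "anxiety"
    positive_affect = max(previous_error - error, 0.0)   # shrinking error reads as "on track"
    return error, anxiety, positive_affect

prev_error = 1.0
for predicted, observed in [(0.5, 0.9), (0.8, 0.9), (0.9, 0.9)]:
    prev_error, anxiety, positive = emotional_readout(predicted, observed, prev_error)
    print(f"error={prev_error:.2f}  anxiety={anxiety:.2f}  positive_affect={positive:.2f}")
```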

Yeah. Well, if you want it... I just talked with it, but that's not speculation. No, no, I think that's actually... but, uh, you know, we can make an AI model that could predict the result of a story probably better than the average person. So, some people are really good at it, you know, they're really well educated about stories, or they know the genre or something. But, you know, these things... and what they see today is the capacity of the models is, if you, say, start describing a plot, it'll make sense for a while, but it'll slowly stop making sense. But that's possibly, that's simply the capacity of the model right now, and the model is not grounded enough in, let's say, goals or reality or something, to make sure. So what do you think would happen, Jonathan? This is, I think, associated with the kind of things that we've talked through to some degree. So one of my hypotheses, let's say, about deep stories is that they're meta-gists in some sense. So you could imagine a hundred people telling you a tragic story, and then you could reduce each of those tragic stories to the gist of the tragic story, and then you could aggregate the gists, and then you'd have something like a meta-tragedy. And I would say the deeper the gist, the

more religious-like the story gets. And that idea is part of the reason that I wanted to bring you guys together. I mean, one of the things that what you just said makes me wonder is: imagine that you took Shakespeare, and you took Dante, and you took, like, the canonical Western writers, and you trained an AI system to understand the structure of each of them, and now you can pull out the summaries of those structures, the gists, and then couldn't you pull out another gist out of that? So it would be like the essential element of Dante and Shakespeare. And I want to hear what John... So here's one funny thing to think about. You used the word pull out. So when you train a model to know something, you can't just look in it and say, what is it? No, you have to query it. Right, right. What's the next sentence in this paragraph, what's the answer to this question? There's a thing on the internet now called prompt engineering, and it's the same way I can't look in your brain to see what you think. Yeah. I have to ask you what you think, because if I killed you and scanned your brain and got the current state of all the synapses and stuff, A, you'd be dead, which would be sad, and B, I wouldn't

know anything about your thoughts. Your thoughts are embedded in this model that your brain carries around, and you can express it in a lot of ways. And so... so you could, uh... how do you train... So this is my big question, I mean, because the way that I've been seeing it until now is that artificial intelligence is based on us; it doesn't exist independently from humans, and it doesn't have care. The question would be, why does the computer care? Yeah, that's... that's not true. Well, why would a computer care to get the gist of the story? Well, yeah, so I think you're asking kind of the wrong question. So you can train an AI model on, like, the physics and reality and images in the world, just with images, and there are people who are figuring out how to train a model with just images, but the model itself still conceptualizes things like tree and dog and action and run, because those all exist in the world. Right. So, and you can actually train... so when you train a model with all the language and words... so all information has structure, and I know you're a structure guy from your videos. So if you look around you, at any image, every single point you see

makes sense. Yeah. Right, it's an intentional, it's a teleological structure; it's like a purpose-laden structure. Right. So this is something... so it turns out all the words that have ever been spoken by human beings also have structure. Right, right. And so physics has structure, and then it turns out that some of the deep structure of images and actions and words and sentences are related. Like, there's actually a common core of... like, you imagine there's like a knowledge space. And sure, there's details of humanity where, you know, they prefer this accent versus that; those are kind of details, but they're coherent in the language model, and the language models themselves are coherent with our world ideas. And humans are trained in the world just the way the models are trained in the world. Look at a little baby as it's learning: looking around, it's training on everything it sees when it's very young, and then its training rate goes down and it starts interacting with what it's learning, interacting with the people around. But it's trying to survive, it's trying to live; it has... the infant or the child has... kids aren't trying... the weights and the neurons aren't trying to

live; what they're trying to do is reduce the error. So neural networks generally are predictive things: what's coming next, what makes sense, you know, how does this work. And when you train an AI model, you're training it to reduce the error in the model. And if you're modeling... okay, let me ask you about that. So, well, first of all, babies are doing the same thing: they're looking at stuff going around, and in the beginning their neurons are just randomly firing, but as it starts to get object permanence, it looks at stuff, it starts predicting what will make sense for that thing to do, and when it doesn't make sense it'll update as well. Basically, it compares its prediction to the events and then it will adjust its prediction. So in a story-prediction model, the AI would predict the story, then compare it to its prediction, and then fine-tune itself slowly as it trains itself. Or, in reverse, you could ask it, given this set of things, to tell the rest of the story, and it could do that. Right. And the state of it right now is, there are people having conversations with these that are pretty good. So I talked to Karl Friston

about this prediction idea in some detail. And so first, for those of you who are watching and listening, Friston is one of the world's top neuroscientists, and he's developed an entropy-enclosure model of conceptualization which is analogous to one that I was working on, I suppose, across approximately the same time frame. So the first issue, and this has been well established in the neuropsychological literature for quite a long time, is that anxiety is an indicator of discrepancy between prediction and actuality. And then positive emotion also looks like a discrepancy-reduction indicator. So imagine that you're moving towards a goal, and then you evaluate what happens as you move towards the goal, and if you're moving in the right direction, what happens is what you might say you'd expect to happen, and that produces positive emotion, and it's actually an indicator of reduction in entropy. That's one way of looking at it. Then the point is... Yeah, you have a bunch of words in there that are psychological definitions of states, but you could say there's a prediction and an error. A prediction, yes. You're reducing error, yes. But what's simpler... but what I'm trying to make a case for is that your emotions

Directly map that both positive and Negative emotion look like they’re Signifiers of discrepancy reduction well On the positive and negative emotion Side but then there’s a complexity that That I think is germane to part of Jonathan’s query which is that So the neuropsychologists and the Cognitive scientists have talked a lot a Long time about expectation prediction And discrepancy reduction but one of the Things they haven’t talked about is it Isn’t exactly that you expect things It’s that you desire them you want them To happen like because you could imagine That there’s a there’s in some sense a Literally infinite number of things you Could expect and we we don’t strive only To match prediction we strive to bring About what it is that we want and so we Have these preset systems that are Teleological that are motivational Systems well I mean it depends like if You’re sitting idly on the beach Like and a bird flies by you expect it To fly along in a regular path right but You don’t really want that to happen Yeah but you don’t want it to turn into Something that could Peck out your eyes Either like there’s a disaster yeah but But you’re you’re kind of following it With your expectation to look for Discrepancy yes now you’ll also have a You know depends on the person somewhere

between 10 and a million desires, right? And then you also have fears and avoidance, and those are contexts. So if you're sitting on the beach with some anxiety that the birds are going to swerve at you and knock your eyes out... Yeah. So then you might be watching it much more attentively than somebody who doesn't have that worry, for example. Well, but both of you can predict where it's going to fly, and you will both notice a discrepancy. Right. The motivations... one way of conceptualizing fundamental motivation is that they're like primary prediction domains, right? And so it helps us narrow our attentional focus, because, I know, when you're sitting and you're not motivated in any sense, you can be doing just, in some sense, trivial expectation computations, but often we're in a highly motivated state. Sure. And what we're expecting is bounded by what we desire, and what we desire is oriented, as Jonathan pointed out, towards the fact that we want to exist. And one of the things I don't understand and wanted to talk about today is how the computer models, the AI models, can generate intelligible sense without mimicking that sense of motivation, because you've said, for example, they can just derive the patterns from observations of the

objective world. But there's a... so, let's... So again, I don't want to do all the talking, but AI, generally speaking, like when I first learned about it, had two behaviors; they call it inference and training. So inference is, you have a trained model, so you give it a picture and say, is there a cat in it, and it tells you where the cat is. That's inference; the model has been trained to know where a cat is. And training is the process of giving it an input and an expected output, and when you first start training, the model gives you garbage out, like an untrained brain. And then you take the difference between the garbage output and the expected output and call that the error, and then... the big revelation was something called back propagation with gradient descent, which means you take the error and divide it up across the layers and correct those calculations, so that when you put a new thing in, it gives you a better answer. And then, to somewhat my astonishment, if you have a model of sufficient capacity and you train it with a hundred million images, if you give it a novel image and say, tell me where the cat is, it can do it.
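
As a rough illustration of the training-versus-inference distinction described here, this is a minimal gradient-descent loop for a single linear model. The data and model are made up for the sketch, and real networks stack many layers and propagate the error back through all of them; this is not how production models are trained.

```python
import numpy as np

# Toy sketch: "training" adjusts weights to shrink the error against expected outputs;
# "inference" just runs the trained model on a new input. Made-up data, single linear layer.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 training examples, 3 input features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w                                # the expected outputs

w = np.zeros(3)                               # untrained model: starts out producing garbage
learning_rate = 0.1
for step in range(200):                       # training loop
    predictions = X @ w                       # forward (inference) pass
    error = predictions - y                   # difference from the expected output
    gradient = X.T @ error / len(y)           # how the error changes with each weight
    w -= learning_rate * gradient             # gradient descent: nudge weights to reduce error

new_input = np.array([0.2, -1.0, 3.0])        # inference on a novel input after training
print("learned weights:", w, " prediction:", new_input @ w)
```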

Right, so that's called... so training is the process of doing a pass with an expected output and propagating an error back through the network, and inference is the behavior of putting something in and getting something out. Yeah, I think... like, I'm really following, but... There's a third piece, which is what the new models do, which is called generative; it's called a generative model. So for example, say you put in a sentence and you say, predict the next word. This is the simplest thing. So it predicts the next word, so you add that word to the input and say, predict the next word. So it contains the original sentence and the word you generated, and it keeps generating words that make sense in the context of the original words. Right, right. This is the simplest basis, and then it turns out you can train this to do lots of things: you can train it to summarize a sentence, you can train it to answer a question. There's a big thing about, you know, like, Google every day has hundreds of millions of people asking it questions, giving answers, and then rating the results; you can train a model with that information, so you can ask it a question and it gives you a sensible answer.
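
The generative loop described here, predict the next word, append it to the input, predict again, looks roughly like the sketch below. `predict_next_word` is a placeholder for whatever trained model is being used, and the toy model exists only so the example runs end to end.

```python
# Schematic of the generative loop: each predicted word is appended and fed back in.
def generate(predict_next_word, prompt_words, max_new_words=20, stop_token="<end>"):
    words = list(prompt_words)
    for _ in range(max_new_words):
        next_word = predict_next_word(words)      # prediction conditioned on everything so far
        if next_word == stop_token:
            break
        words.append(next_word)                   # the generated word becomes part of the input
    return " ".join(words)

def toy_model(words):
    # Trivial stand-in for a trained language model, just so the sketch is runnable.
    canned = {"the": "cat", "cat": "sat", "sat": "down"}
    return canned.get(words[-1], "<end>")

print(generate(toy_model, ["the"]))               # -> "the cat sat down"
```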

But I think, in what you said, I actually have the issue that has been going through my mind so much, which is: when you said, you know, people put in the question and then they rate the answer, my intuition is that the intelligence still comes from humans, in the sense that it seems like, in order to train whatever AI, you have to be able to give it a lot of power and then say at the beginning, this is good, this is bad, this is good, this is bad; like, reject certain things, accept certain things, in order to then reach a point where you've trained the AI. And so that's what I mean about the care: the care will come from humans, because the care is the one giving it the value, saying this is what is valuable, this is what is not valuable, in your calculation. So when they first... so there's the program called AlphaGo that learned how to play Go better than a human. So there's two ways to train the model. One is, they have a huge database of lots of Go games with good winning moves, so they trained the model with that, and that worked pretty good. And they also took two simulations of Go and they did random moves, and all that happened was these two simulators played one Go game and they just recorded whichever moves happened to win, and it started out really horrible.

And they just started training the model. This is called adversarial learning; it's a particular... adversarial, it's like, you know, you make your moves randomly and you train a model, and so they train multiple models, and over time those models got very good, and they actually got better than human players, because the humans have limitations about what they know, whereas the models could experiment in a really random space and go very far. Yeah, but experiment towards the goal of winning the game. Yes. Well, but you can experiment towards all kinds of things, it turns out. And humans are also training that way: like, when you were learning, you were reading, you were told, this is a good book, this is a bad book, this is good sentence construction, this is good spelling, so you've gotten so many error signals over your life. Well, that's what culture does in large part: culture does that, religion does that, your everyday experience does that, your family... so we embody that. Yeah, right. And everything that happens to us, we process it on the inference pass, which generates outputs, and then sometimes we look at that and say, hey, that's unexpected, or that got a bad result, or that got bad feedback, and then we back-propagate that and update our models.
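
A minimal sketch of the self-play idea sketched above: two players start by moving at random in a trivial game, the moves that showed up in winning games get reinforced, and play gradually improves. This is a toy stand-in for the loop being described, not AlphaGo's actual algorithm.

```python
import random
from collections import defaultdict

# Toy self-play: in a trivial game ("add 1 or 2; whoever reaches exactly 10 wins"),
# record which moves appeared in winning games and bias future play toward them.
win_counts = defaultdict(lambda: defaultdict(int))         # state -> move -> wins observed

def choose_move(total, explore=0.3):
    legal = [m for m in (1, 2) if total + m <= 10]
    if random.random() < explore or not win_counts[total]:
        return random.choice(legal)                        # random exploration
    return max(legal, key=lambda m: win_counts[total][m])  # prefer moves that have won before

for game in range(5000):                                   # the self-play training loop
    total, player, history = 0, 0, {0: [], 1: []}
    while total < 10:
        move = choose_move(total)
        history[player].append((total, move))
        total += move
        if total == 10:                                    # this player just won
            for state, m in history[player]:
                win_counts[state][m] += 1                  # reinforce the winner's moves
        player = 1 - player

print({state: dict(moves) for state, moves in sorted(win_counts.items())})
```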

So could very well-trained models then train other models? So the difference right now... the smartest people in the world... So the biggest question, the biggest question that comes now, based on what you said, is... because my main point is to try to show how it seems like artificial intelligence is always an extension of human intelligence, like it remains an extension of human intelligence, and maybe, all the way down, that can't be true at all. So do you think that at some point the artificial intelligence will be able to... because the goals, recognizing cats, you know, writing plays, all these goals are our goals, which are based on embodied human existence. Could you train... could an AI at some point develop a goal which would be incomprehensible to humans, because of its own existence? Yeah. I mean, like, for example, there's a small population of humans that enjoy math, right, and they are pursuing, you know, adventures in math space that are incomprehensible to 99.99 percent of humans, but they're interested in it. And you can imagine, like, an AI program working with those mathematicians and coming up with very novel math ideas and then interacting with them. But they could also,

You know if some AIS were were Elaborating out really interesting and Detailed stories they could come up with Stories that are really interesting We’re going to see it pretty soon like Could there be it at everything a story That is interesting only to the AI and Not interesting to us that’s possible So Stories are like I think some high Level information space so so the the The Computing age of Big Data there’s All this data running on computers where Nobody the only humans understood it Right the computers don’t so AI programs Are now at the state where The information the processing and the Feedback loops are all kind of in the Same space They’re still you know relatively Rudimentary to humans like in some AI Programs and certain things are better Than humans already but for the most Part they’re not But it’s moving really fast so and so You could imagine You know I think in five or ten years Most people’s best friends will be AIS And you know they’ll know you really Well and they’ll be interested in you And you know it’s kind of like your real Friends Yeah real friends are problematic They’re only interested in you when You’re interested yeah yeah real friends

Are the AI systems will love you even When you’re dull and miserable well There’s there’s and there’s so much idea Space to explore And humans have a wide range some humans Like to go through their everyday life Doing their everyday things and some People spend a lot of time like you a Lot of time reading and thinking and Talking and arguing and debating You know and You know that’s there’s going to be like Say A diversity of possibilities with What’s what A thinking thing can do when the Thinking is fairly Unlimited So I’m curious about I’m still I’m Curious in pursuing this this issue that Jonathan uh has been developing so There’s a there’s a literally infinite Number of ways virtually infinite number Of ways that we could take images of This room right now if a human being is Taking images of this room they’re going To be they’re going to sample a very Small space of that infinite range of Possibilities because if I was taking Pictures in this room in all likelihood I would take pictures of ident objects That are identifiable to human beings That are functional to human beings at a Level of focus that makes those objects Clear and so then you could imagine that The set of all images on the internet

has that implicit structure of perception built into it, and that's a function of what human beings find useful. You know, I mean, I could take a photo of you where the focal depth was here, and here, and here, and two inches past you... and now, I suppose... because there's a technology for that called light fields. Okay. So then, if you had that picture properly done, you could move around in an image and see... but yeah, fair enough, I get your point. Like, the human-recorded data has our biology built into it, but also an unbelievably detailed encoding of how physical reality works. Right. So every single pixel in those pictures, even though you kind of selected the view, the focus, the frame... Right. ...it still encoded a lot more information than you're processing. Right. And it turns out, if you take a large number of images of things in general... so you've seen these things where you take a 2D image and turn it into a 3D image? Yeah. Right, the reason that works is, even in the 2D image, the 3D-ness of the room actually got embedded in that. Yeah. Then if you have the right understanding of how physics and reality

Works you can reconstruct a 3D model You know an AI scientist may cruise Around the world with infrared and radio Wave cameras and they might take Pictures of all different kinds of Things and every once in a while they’d Show up and go hey the sun you know I’ve Been staring at the sun and the Ultraviolet and radio yes for the last Month and it’s way different than Anybody’s thought because humans tend to Look at light and visible spectrum And you know it there could be some Really novel things coming out of that Well so so but humans also we live in The Spectrum we live in because it’s a Pretty good one for planet Earth like it Wouldn’t be obvious that AI would start Some different place like visible Spectrum is is interesting for a whole Bunch of reasons right so in a set of Images that are human derived you’re Saying that there’s the way I would Conceptualize that is that there’s two Kinds of logos embedded in that one Would be that you could extract out from That set of images what was relevant to Human beings but you’re saying that the Fine structure of the objective World Outside of human concern is also Embedded in the set of images and that An AI system could extract out a Representation of the world but also a Representation of what’s motivating to

human beings. Yes. And then, some human scientists already do look at the sun in radio waves and other things, because they're trying to, you know, get different angles on how things work. Yeah, well, I guess so. It's a curious thing; it's like the same with, like, buildings and architecture: they mostly fit people. Well, you know, there's a reason for that. The reason why I keep coming back, hammering the same point, is that even in terms of the development of the AI, developing AI requires an immense amount of money, energy, uh, you know, and time. And so that's a transient thing; in 30 years it won't cost anything. It's going to change so fast, it's amazing. That's, like... supercomputers used to cost millions of dollars, and now your phone is the supercomputer, so the time between millions of dollars and ten dollars is about 30 years. So I'm just saying, the time and effort isn't a thing in technology; it's moving pretty fast. It just sets the date.
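
As a rough sanity check on the "millions of dollars to ten dollars in about 30 years" figure given here, the implied yearly decline works out as follows. Illustrative arithmetic only, taking "millions" as one million.

```python
import math

# Implied rate of cost decline for "millions of dollars to ten dollars in ~30 years".
start_cost, end_cost, years = 1_000_000, 10, 30
factor_per_year = (start_cost / end_cost) ** (1 / years)   # ~1.47x cheaper each year
halving_time = math.log(2) / math.log(factor_per_year)     # ~1.8 years for cost to halve
print(f"~{factor_per_year:.2f}x cheaper per year; cost halves roughly every {halving_time:.1f} years")
```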

Yeah, but even making... let's say, even... I mean, I guess maybe this is the nightmare question: could you imagine an AI system which becomes completely autonomous, which is creating itself, even physically, through automated, uh, factories, which is, you know, programming itself, which is creating its own goals, which is not at all connected to human endeavor? Yeah. I mean, individual researchers can... you know, I have a friend, who I'm going to introduce you to tomorrow, who wrote a program that scraped all of the internet and trained an AI model to be, uh, a language model, on a relatively small computer, and in 10 years the computer he could easily afford would be as smart as a human, so he could train that pretty easily. And that model could go on Amazon and buy a hundred more of those computers and copy itself. So yeah, we're 10 years away from that. And then... but then why? Like, why would it do that? I mean, what does it... is it possible? It's all about the motivational question. I think that's what Jordan and I both have been coming at from the outset. It's like, so you have an image, right, you have an image of Skynet or of the Matrix, you know, in which the sentient AI is actually fighting for its survival, so it has a survival instinct which is pushing it, I was thinking, to self-perpetuate, like to replicate itself and to create variation on itself in order to

Survive and identify as humans as an Obstacle to that you know Yeah yeah so you have a whole bunch of Implicit assumptions there so so humans Last I checked are unbelievably Competitive and when you let people get Into Power with no checks on them they Typically run amok it’s been history Historical experience and then humans Are You know self-regulating to some extent Obviously with some serious outliers Because they self-regulate with each Other and humans and AI models at some Point will have to find their own Calculation of self-regulation and And trade-offs about that yeah because AI doesn’t feel pain at least as we if That we don’t know that it feels well Lots of humans don’t feel pain either so I mean that’s I mean humans feeling pain Or not didn’t you know it doesn’t stop a Whole bunch of activity I mean that’s I Mean it doesn’t the fact that we feel Pain doesn’t stop doesn’t regulate many People right right I mean there’s Definitely people like you know children If you threaten them with you know go to Your room and stuff you can regulate Them that way but some kids ignore that Completely and adults and it’s often Counterproductive yeah so so right you Know you know Culture and societies and organizations

we regulate each other, you know, sometimes in competition and cooperation. Yeah. Do you think... we've talked about this to some degree for decades... I mean, when you look at how fast things are moving now, and as you push that along, when you look out 10 years and you see the relationship between the AI systems that are being built and human beings, what do you envision? Or can you envision it? Okay, well, can I... yeah. So like I said, I'm a computer guy, and I'm watching this with, let's say, some fascination as well. I mean, Ray Kurzweil said, you know, the rate of progress accelerates. Yeah. Right, so we have this idea that 20 years of progress is 20 years, but, you know, the last 20 years of progress was 20 years, and the next 20 years would probably be, you know, five to ten. Right, right. And you can really feel that happening; to some level that causes social stress, independent of whether it's AI or Amazon deliveries, you know. There are so many things that are going into the stress of it all. But there's progress which is an extension of human capacity, and then there's this progress which I'm hearing about, the way that you're describing it, which seems to be an

Inevitable progress towards creating Something which is more powerful than You Right and so what is that I don’t even Understand that drive like what is that Drive to to create something which can Supplant you so look at the average Person in the world right so the average Person already exists in this world Because the average person is halfway up The human hierarchy there’s already many People more powerful than any of us There they could be smarter they could Be richer they could be better connected We already live in a world like very few People are at the top of anything Right so that’s already a thing so Basically the drive to make someone a Superstar let’s say or the drive to Elevate someone above you that would be The same drive that is bringing us to Creating these Ultra powerful machines Because we have that like we have a Drive to elevate like you know when we See a rock star that we like people want To submit themselves to that they want To dress like them they want to raise Them up above them as an example Something to follow right something to Say to to subject themselves to you see That with leaders you see that in the Political world ah And in teams you see that in sports Teams the same thing and so do you think

Well, we've always tried to build things that are beyond us, you know. I mean, is it about... are we building a God? Is that what... is that the drive that is pushing someone towards it? Because when I hear what you're describing, uh, Jim, I hear something that is extremely dangerous, right, sounds extremely dangerous to the very existence of humans, yet I see humans acting and moving in that direction almost without being able to stop it, as if there's no one... Now, I think it is unstoppable. Well, that's one of the things we've also talked about, because I've asked Jim straight out, you know, because of the hypothetical danger associated with this, why not stop doing it? And, well, part of his answer is the ambivalence about the outcome, but also that it isn't obvious at all that in some sense it's stoppable. I mean, it's the cumulative action of many, many people that are driving us along, and even if you took out one player, even a key player, the probability that you'd do anything but slow it infinitesimally is quite low. Because there's also a massive payoff for those that will succeed. It's also set up that way; people know that, at least until the AI takeover or whatever,

whoever is on the line toward increasing the power of the AI will rake in major rewards. Right. Well, so there's a cognitive acceleration. Right, yeah. I can recommend Iain Banks as an author, English author; I think he wrote a series of books he called the Culture novels, and it was a world where there were humans, and then there were AIs: the smartest humans, and AIs that were dumber than humans, but there were some AIs that were much, much smarter. And they lived in harmony, because they mostly all pursued what they wanted to pursue: humans pursued human goals, and super-smart AIs pursued super-smart AI goals. And, you know, they communicated and worked with each other, but they mostly... you know, they're different... when they were different enough that that was problematic, their goals were different enough that they didn't overlap. Because one of the things that... that would be my guess... is, like, these ideas where these super AIs get smart and the first thing they do is stomp out the humans... it's like, you don't do that. Like, you don't wake up in the morning and think, I have to stomp out all the cats. No, the cats do cat things, and the ants do ant things, and the birds do bird things, and super

smart mathematicians do smart-mathematician things, and, you know, guys who like to build houses do build-house things, and, you know, everybody... you know, the world... there's so much space in the intellectual zone that people tend to go pursue... in a good society, like, you tend to pursue the stuff that you do, and then the people in your zone, you self-regulate. And you also, even in the social structures, we self-regulate. I mean, the recent political events of the last 10 years, the weird thing to me has been why, you know, people with power have been overreaching to take too much from people with less; like, that's bad regulation. But one of the things... Go ahead. One of the aspects of increase in power is that increase in power is always mediated, at least in one aspect, by the military, by, let's say, physical power over others, you know. And we can see that technology is linked, and has been linked always, to military power. And so the idea that there could be some AIs that will be our friends or whatever is maybe possible, but the idea that there will be some AIs which will be weaponized seems absolutely inevitable, because increase in power... technological power always moves towards

the military. So we've lived with atomic bombs since the '40s. Right. So, I mean, the solution to this has been mostly, you know, some form of mutual assured destruction: like, the response to attacking me is so much worse than the... Yeah, but it's also because we have reciprocity; we recognize each other as the same. So if I look into the face of another human, there's a limit to how different I think that person is from me. But if I'm hearing something described as the possibility of superintelligences that have their own goals, their own cares, their own structures, then how much mirroring is there between these two groups? Well, his objection seems to be something like: we're making, we may be making, when we're doomsaying, let's say, and I'm not saying there's no place for that, we're making the presumption of something like a zero-sum competitive landscape. Right, is that the idea? And the idea behind movies like the Terminator is that there is only so much resources, and the machines and the human beings would have to fight over it, and you can see that that could easily be a preposterous assumption. Now, I think that one of the

fundamental points you're making, though, is also, um, there will definitely be people that will weaponize AI, and those weaponized AI systems will have as their goals something like the destruction of human beings, at least under some circumstances. And then there's the possibility that that will get out of control, because the most effective systems at destroying human beings might be the ones that win, let's say, and that could happen independently of whether or not it is a true zero-sum competition. Yeah, and also, the effectiveness of military stuff doesn't need very smart AI to be a lot better than it is today. You know, you're used to, you know, like the Star Wars movies, where, like, you know, tens of thousands of years in the future, super highly trained, you know, fighters can't hit somebody running across the field; like, that's silly, right? You can already make a gun that can hit everybody in the room without aiming at it. You know, the military threshold is much lower than any intelligence threshold, like, for danger. And, you know, to the extent that we self-regulated through the nuclear crisis, it's interesting.

Um I don’t know if it’s because we Thought that the Russians were like us I Kind of suspect the problem was that we Thought they weren’t like us and But we still managed to make some Calculation to say that any kind of Attack would be mutually devastating Well when you when you look at you know The destructive power of the military we Already have so far exceeds the planet I’m I’m not sure like adding Intelligence to it is the Tipping Point Like like that’s And I think the more likely thing is Things that are truly Smart in different Ways will be interested in different Things And then the possibility for let’s say Mutual flourishing is Is really interesting and I know artists Using AI already to do really amazing Things and and that’s already happening Well when you when you’re working on the Frontiers of AI development and you see The development of increasingly Intelligent machines I mean I know that Part of what drives you is I don’t want To put words in your mouth but what Drives intelligent engineers in general Which is to take something that works And make it better and maybe to make it Radically better and radically cheaper So so there’s this Tech drive toward Technological Improvement and I know

that you like to solve complex problems, and you do that extraordinarily well. But do you... is there also a vision of, um, a more abundant form of human flourishing emerging from the development? So what do you see happening? Years ago it's like, we're going to run out of energy; what's next, we're going to run out of matter? Right. Like, our ability to do what we want in ways that are interesting and, you know, for some people, beautiful, is limited by a whole bunch of things, because, you know, partly it's technological and partly, you know, we're stupidly divisive. But, um, there is... it's possible that there's also a reality, which is... one of the things that technology has been is, of course, an increase in power towards desire, towards human desire, and that is represented in mythological stories where, let's say, technology is used to accomplish impossible desire. Right, we have, you know, the story of building the mechanical bull around, uh... the wife of the King of Minos, you know, in order to be inseminated by, uh, by a bull. We have the story of, um, Frankenstein, etc., the story of

the Golem, where we put our desire into this increased power, and then what happens is that we don't know our desires. That's one of the things that I've also been worried about in terms of AI: we have secret desires that enter into what we do that people aren't totally aware of, and as we increase and empower these systems, those desires... let's say, like the idea, for example, of the possibility of having an AI friend, and the idea that an AI friend would be the best friend you've ever had, because that friend would be the nicest to you, would care the most about you, would do all those things. That would be an exact example of what I'm talking about, which is, it's really the story of the genie, right? It's the story of the genie and the lamp, where the genie says, what do you wish? And the person... and I have unlimited power to give it to you, and so I give him my wish, but that wish has all these underlying implications that I don't understand, all these underlying possibilities. The moral of almost all those stories is, having unlimited wishes will lead to your downfall. And so humans, like, if you give, you know, a young person an unlimited amount of stuff to drink for six months, they're going

to be falling-down drunk, and they're going to get over it. Right. Having a friend that's always your friend no matter what, it's probably going to get boring pretty quickly. Well, the literature on marital stability indicates that. So there's a sweet spot with regards to marital stability in terms of the ratio of negative to positive communication. So if on average you receive five positive communications and one negative communication from your spouse, that's on the low threshold for stability; if it's four positive to one negative, you're headed for divorce. But interestingly enough, on the other end there's a threshold as well, which is that if it exceeds 11 positive to one negative, you're also moving towards divorce. So there might be self-regulating mechanisms that would, in a sense, take care of that. You might find a yes-man AI friend extraordinarily boring very, very rapidly, but as opposed to an AI friend that was interested in what you're interested in, that was actually interesting... like, you know, we go through friends in the course of our lives; like, different friends are interesting at different times, and some friends we grow with, and that continues to be really interesting for years and years, and other friends, you know, some

people get stuck in their thing, and then you've moved on, or they've moved on, or something. So, yeah, I tend to think of, uh, like, a world where there was more abundance and more possibilities and more interesting things to do as an interesting... okay, okay. So as modern society has let the human population grow, and some people think this is a bad thing, but I don't know, I'm a fan of it, you know, the population has gone from tens and hundreds of millions to billions of people, and that's generally been a good thing. We're not running out of space. I've been in... you know, some of your audience has probably been in an airplane; if you look out the window, the country is actually mostly empty. The oceans are mostly empty. Like, we're weirdly good at polluting large areas, but as soon as we decide not to, we don't have to. Like, technology... most of our, you know, energy and pollution problems are technical; like, we can stop polluting, like, electric cars are great. So there's so many things that we could do technically. Um, I forget the guy's name, but he said the Earth could easily support a population of a trillion people, and a trillion people would be a lot more people doing, you know, random stuff, and he didn't imagine

that the future population would be a trillion humans and a trillion AIs, but it probably will be. So we'll probably exist on multiple planets, which would be good the next time an asteroid shows up. So what do you think about... so one of the things that seems to be happening, tell me if you think I'm wrong here, and I think it's germane to... And, Jordan, I just want to make the point: you know, where we are compared to living in the Middle Ages, our lives are longer, our families are healthier, our children are more likely to survive; like, many, many good things happened. It's not like setting the clock back would be good. You know, if we have some care, and people who actually care about how culture interacts with technology for the next 50 years, you know, we'll get through this, hopefully more successfully than we did the atomic bomb and the Cold War. But it's... okay, so it's a major change. I mean, this is, like... your worries are, you know, I mean, they're relevant. But, you know, but also, Jonathan, your stories about how humans have faced abundance and faced evil kings and evil overlords... like, we have thousands of years of history of facing

the challenge of the future and the challenge of things that cause radical change. Yeah, but it's just that that's very valuable information, you know, information... but for the most part, nobody succeeded by stopping change; they've succeeded by bringing to bear on the change our capability to self-regulate, to balance. Like, a good life isn't having as much gold as possible; that's a boring life. A good life is, you know, having some quality friends and doing what you want and having some insight in life. Yeah, and some optimal challenge. And, you know, and then, in a world where a larger percentage of people can have wealth, live in relative abundance, and have tools and opportunities, I think that's a good thing. Yeah, and I don't want to pull back abundance, but what I have noticed is that our abundance brings a kind of nihilism to people. And, like I said, I don't want to go back; I'm happy to live here and to have these tech things. But I think it's something that I've also noticed, uh, that increase of the capacity to get your desires, when that increases to a certain extent, also leads to a kind of nihilism where... Exactly that. Well, yeah, I wonder, Jonathan,

like I said, I wonder if that's partly a consequence of the erroneous maximization of short-term desire. I mean, one of the things that you might think about that could be dangerous on the AI front is that we optimize the manner in which we interact with our electronic gadgets to capture short-term attention. Right. Because there's a difference between getting what you want right now, right now, and getting what you need in some more mature sense across a reasonable span of time. And one of the things that does seem to be happening online, and I think it is driven by the development of AI systems, is that we're assaulted by systems that parasitize our short-term attention at the expense of longer-term attention. And if the AI systems emerge to optimize attentional grip, it isn't obvious to me that they're going to optimize for the attention that works over the medium to long run; right, they're going to... they could conceivably maximize something like whim-centered...

Right, to the click, all these things. Yeah, exactly. So, but that's something where, for reasons that are somewhat puzzling, but maybe not, the business models around a lot of those interfaces are built around, you know, the user as the product, and the advertisers trying to get your attention. Yeah, but that's something culture could regulate. We could decide that, no, we don't want tech platforms to be driven by advertising money; that would probably be a smart decision, and that could be a big change. Well, the problem is markets drive that in some sense, right? Yeah, and I know they're driving that way. We can take steps; you know, at various times alcohol has been illegal. Society can decide to regulate all kinds of things, and, you know, sometimes some things need to be regulated and some things don't. Like, when you buy a hammer, you don't fight with your hammer for its attention, right? A hammer's a tool; you buy one when you need one. Nobody's marketing hammers to you. That relationship is transactional to your purpose.

Right, yeah. Well, our technology has become a thing where... I mean, there's a relationship between what I would say are high human goals, something like attention and status, and what we talked about, which is the idea of elevating something higher in order to see it as a model. These are where intelligence exists in the human person. And so when we notice that in the systems, in the platforms, these are the aspects of intelligence which are being weaponized in some ways; not against us, but just kind of being weaponized because they're the most beneficial in the short term for generating our constant attention. And so what I mean is that that is what AIs are made of, right? They're made of attention, of prioritization, you know, good and bad, of what is worth putting energy into in order to predict towards a telos. And so I'm seeing that the idea that we could disconnect them suddenly seems very difficult to me. Yeah, so first I want to give an old example. After World War II, America went through this amazing boom of building suburbs, and the American dream was you could have your own house, your own yard, in the suburb with a good school, right?

So in the fifties, sixties, and early seventies they were building that like crazy, and by the time I grew up, I lived in this, you know, suburban dystopia, right? And we found that that, as a goal, wasn't a good thing, because people ended up in houses separated from social structures. And then new towns were built around a hub, with, you know, places to go and eat. So there was a good that was viewed in terms of opportunity and abundance, but it actually was a failure culturally, and then in some places it was modified; some places are still dystopian suburban areas, and in some places people simply learned to live with it. By the way, that has to do with a subsidiary hierarchy, like a hierarchy of attention, which is set up in a way in which all the levels can have room to exist, let's say. And so, you know, these new systems, the new way, let's say the new urbanist movement, similar to what you're talking about, that's what they've understood. It's like, we need places of intimacy in terms of the house; we need places of communion in terms of, you know, parks and alleyways and buildings where we meet, and a church, all these places that kind of manifest our community together.

Yeah, so those existed coherently for long periods of time, and then the abundance post-World War II, and some ideas about what life could be like, caused this big change. And that change satisfied some needs; people got houses, but it broke community needs. And then new sets of ideas came about: what's the synthesis, what's the possibility of having your own home but also having community, of not having to drive fifteen minutes for every single thing? And some people live in those worlds and some people don't. Do you think we'll be smart... so one of the problems: why were we smart enough to solve some of those? Because we had twenty years. But one of the things that's happening now, as you pointed out earlier, is that we're going to be producing equally revolutionary transformations but on a much smaller timescale. And so one of the things I wonder about, and I think it's driving some of the concerns in this conversation, is: are we going to be intelligent enough to direct, with regulation, the transformations of technology as they start to accelerate? I mean, look what's already happened online. We've inadvertently, for example, radically magnified the voices of narcissists, psychopaths, and Machiavellians.

And we've done that so intensely, partly, I would say, as a consequence of AI mediation, that I think it's destabilizing the entire body politic. It's destabilizing part of it. Like, as Scott Adams points out, you just block everybody that acts like that; I don't pay attention to people that talk like that. Yeah, but they seem to be real. There are still places that are sensitive to it; like, yeah, ten thousand people can make a storm and some corporation will, you know, fire somebody. But I think we're five years from that being over. Corporations will go, ten thousand people out of ten billion, not a big deal. Okay, so you think that's a learning moment, that we'll re-regulate. What's natural to our children is so different from what was natural to us, but what was natural to us was very different from our parents, so some changes get accepted generationally. Really? So what's made you so optimistic? What do you mean, optimistic? Well, most of the things that you have said today, and maybe it's also because we're pushing you. I mean, you know, my nephew Kyle is a really smart, clever guy; he calls me, what did he call it, a cynical optimist.

Like, I believe in people. I like people, but also people are complicated; they've all got all kinds of nefarious goals. I worry a lot more about people burning down the world than I do about artificial intelligence, just because, you know, people. Well, you know, people are difficult, right? But the interesting thing is, in aggregate we mostly self-regulate, and when things change you have these dislocations, and then it's up to people who talk and think, while we're having this conversation I suppose, to talk about how we re-regulate this stuff. Yeah, well, because one of the things that the increase in power has done in terms of AI, and you can see it with Google and you can see it online, is that there are certain people who hold the keys, let's say, who hold the keys to what you see and what you don't see. You see that on Google, right, if you know what searches to make, where you realize that this is actually being directed by someone who now has a huge amount of power to direct my attention towards their ideological purpose. So that's why, personally, I always tend to see AI as an extension of human power.

Even though there is this idea that it could somehow become totally independent, I still tend to see it as an increase of human capacity, and whoever is able to hold the keys to that will have an increase in power. And I think we're already seeing it. Well, that's not really any different, though, is it, Jonathan, from the situation that's always confronted us in the past? I mean, we've always had to deal with the evil uncle of the king, and we've always had to deal with the fact that an increase in ability could also produce a commensurate increase in tyrannical power, right? So that might be magnified now, and maybe the danger in some sense is more acute, but possibly the possibility is more present as well. Because you can train an AI to find hate speech, right, and then to act on that hate speech immediately. Now, we're not only talking about social media; what we've seen is that this is now encroaching into payment systems, into people losing their bank accounts, their access to different services. And so with this idea of automation, you know, there's an Australian bank that has already decided that it's a good thing to send all of their customers a carbon load report every month, and to offer them hints about how they could reduce their polluting purchases, let's say.

That it’s a good thing to send all of Their customers a carbon load report Every month Right and to to offer them hints about How they could reduce their their Polluting purchases let’s say and well At the moment that system is one of Voluntary compliance but you can Certainly see in a situation like the One we’re in now that the line between Voluntary compliance and involuntary Compulsion is very very thin Yeah so so I’d like to say so during the Early Computer World computers were very Big and expensive and then we they made Many computers and workstations but they Were still corporate only and then the PC World came in all of a sudden PCS put Everybody online everybody could Suddenly see all kinds of stuff and you Know people could get a freedom Information act request put it online Somewhere and a hundred thousand people Could see it like it was an amazing Democratization moment and then there Was a similar but A smaller Revolution with the world of You know smartphones and apps but then We’ve we’ve had a new completely Different set of companies by the way You know from you know what happened in The 60s 70s and 80s to today it’s a very Different companies that control it and And there are people who worry that AI

Now, I think so many people are using it, and they're working on it in so many different places, and the cost is going to come down so fast, that pretty soon you'll have your own AI app that you'll use to mediate the internet: to strip out, you know, the endless stream of ads, and you can say, well, is this story objective? Well, here are the fifteen stories, and this one has been manipulated this way and that one has been manipulated that way, and you can say, well, I want what's more like the real story. And the funny thing is, information that's broadly distributed and has lots of inputs is very hard to fake wholesale. Right now a story can be pushed through a major media outlet, and if they can control the narrative, everybody gets the fake story. But if the media is distributed across a billion people who are all interacting in some useful way, there's real signal there, and if somebody stands up and says something that's not true, everybody knows it's not true. So a good outcome, with people thinking seriously, would be the democratization of information and, you know, objective facts; the same thing that happened with PCs versus corporate central computers could happen again.

Yeah. It always creates both at the same time, though. And so we saw that, you know, an increase in power creates, first, or depending on which direction it happens, an increase in decentralization, an increase in access; it creates all that. But then, at the same time, it also creates the counter-reaction, which is an increase in control and an increase in centralization. And now the greater the power, the bigger the waves will be. And so the image that 1984 presented to us, you know, of people going into newspapers and changing the headlines and taking the pictures out, that now obviously can happen with just a click. You can click and change the past; you can change facts about the world, because they're all held, you know, online. And we've seen it happen, obviously, in the media recently. But so does decentralization win over centralization? How is that even possible? It seems... I mean, it's also interesting: when Amazon became a platform, suddenly any mom-and-pop business could reach anybody. You know, Amazon, eBay, there are a bunch of platforms, which had an amazing impact, because any business could get to anybody.

But then the platform itself started to control the information flow. Yeah, right. But at some point that'll turn into people going, well, why am I letting somebody control my information flow when Amazon objectively doesn't really have any special capability? Right, so like you point out, though, the waves are getting bigger, but they're real waves. It's the same with information: the information is all online, and it's also on a billion hard drives. So if somebody says, I'm going to erase this objective fact, the distributed information system would say, yeah, go ahead and erase it anywhere you want, here's another thousand copies of it. Yeah, and that's the work to do today, and this is where thinking people have to say, yeah, this is a serious problem. Like, if humans don't have anything to fight for, they get lazy and, you know, a little bit dopey, in my view. We do have something to fight for, and that's worth talking about: what would a great world look like, with, you know, distributed human intelligence and artificial intelligence working together in a collaborative way to create abundance and fairness and, you know, some better way of arriving at good decisions about what the truth is? That would be a good thing.

But it's not: well, we'll leave it to the experts and then the experts will tell us what to do. That's a bad thing. Yeah, so that's... Well, so, the model that you just laid out, which I think is very... I'm quite optimistic about that. Yeah, well, it did happen on the computational front. I mean, it happened a couple of times, in both directions, right? You know, the PC revolution was amazing, right, and Microsoft was a fantastic company; it enabled everybody to write a little program, to use one. Right, and then at some point they also became, let's say, a difficult company, and they made money off a lot of people and became extremely valuable. Now, for the most part they haven't been that directional in telling you what to do and think and how to do it, but they are a money-making company. You know, Apple created the App Store, which is great, but then they also take 30 percent of the App Store revenue, and there's a whole section of the internet that's fighting with Apple about their control of that platform, right?

And in Europe, you know, they've decided to regulate some of that, which should be a social and cultural conversation about how that should work. Yeah, so do you see the more likely, certainly the more desirable, future as something like a set of distributed AIs, many of which are in personal relationship with us in some sense, the same way that we're in personal relationship with our phones and our computers, and that that would give people some power back, so to speak, against this? And there are lots of people really interested in distributed platforms. And one of the interesting things about the AI world is, you know, there's a company called OpenAI, and they open-source a lot of it; the AI research is amazingly open, it's all done in public, people publish the new models all the time, you can try them out, and there are a lot of startups doing AI in all different kinds of places. You know, it's a very curious phenomenon, and it's kind of like a big, huge wave. You can't stop a wave with your hands. Yeah, what do you think is happening?

Waves: there are two, actually, in the Book of Revelation, which describes the end, or the finality of all things, or the totality of all things; that's maybe a way for people who are more secular to kind of understand it. And in that book there are two interesting images about technology. One is that there's a dragon that falls from the heavens, and that dragon makes a beast, and then that beast makes an image of the beast, and then the image speaks. And when the image speaks, people are so mesmerized by the speaking image that they worship the beast, ultimately. So that is one image of, let's say, making and technology in scripture, in Revelation. But there's another image, which is the image of the heavenly Jerusalem, and that image is more an image of balance. It's an image of the city which comes down from heaven with a garden in the center and then becomes this glorious city, and it says the glory of all the kings is gathered into the city, the glory of all the nations is gathered into this city. So now you see a technology which is at the service of human flourishing, which takes the best of humans and brings it into itself in order to kind of manifest it, and which also has hierarchy, which means it has the natural at the center and the artificial serving the natural, you could say.

So those two images seem to reflect these two waves that we see: this kind of idea of an artificial intelligence which will be ruling over us or speaking over us, but with a secret person controlling it, even in Revelation it's like there's a beast controlling it and making it speak, so that we're mesmerized by it; and then this other image. So I don't know, Jordan, if you've ever thought about those two images in relation to technology, let's say. I don't think I've thought about those two images specifically, but I would say that the work that I've been doing, and I think the work you've been doing too on the public front, reflects the dichotomy between those images, and it's relevant to the points that Jim has been making. I mean, we are definitely increasing our technological power, and you can imagine that that'll increase our capacity for tyranny and also our capacity for abundance. And then the question becomes, what do we need to do in order to increase the probability that we tilt the future towards Jerusalem and away from the beast? And the reason that I've been concentrating on helping people bolster their individual morality, to the degree that I've managed that, is because I think that whether the outcome is the positive outcome that in some sense Jim has been outlining, or the negative outcomes that we've been querying him about, is going to be dependent on the individual ethical choices of people.

Well, at the individual level, but then cumulatively, right. So if we decide that we're going to worship the image of the beast, so to speak, because we're mesmerized by our own reflection, that's another way of thinking about it, and we want to be the victims of our own dark desires, then the AI revolution is going to go very, very badly. But if we decide that we're going to aim up in some positive way, and we make the right micro-decisions, well, then maybe we can harness this technology to produce a time of abundance in the manner that Jim is hopeful about. Yeah, and let me make two funny points. One is, I think there's going to be a continuum, such that the term artificial intelligence won't actually make any sense. Right, so humans collectively... like, individuals know stuff, but collectively we know a lot more, right? And the thing that's really good is a diverse society with lots of people pursuing individually interesting, you know, ideas and worlds; we have a lot of things, and more people and more independence generate more diversity, and that's a good thing.

Whereas, you know, a totalitarian society where everybody's told to wear the same shirt is inherently boring; the beast speaking through the monster is inherently dull, right? But in an intelligent world, not only can we have more intelligent things, but in some places they can go far beyond what most humans are capable of, in pursuit of interesting variety. And, you know, I believe that information, and, well, let's say intelligence, is essentially unlimited, right? And the unlimited intelligence won't be the shiny thing that tells everybody what to do; that's sort of the opposite of interesting intelligence. Interesting intelligence will be more diverse, not less diverse. That's a good future. And your second description, that seems like a future worth working for and also worth fighting for, and that means concrete things today. And also, you know, it's a good conceptualization. Like, I see the messages my kids are taught: don't have children, the world's going to end, we're going to run out of everything, you're a bad person, why do you even exist? These messages are terrible; the opposite is true.

More people would be better. We live in a world of potential abundance, right? It's right in front of us; there's so much energy available, it's just amazing. It's possible to build technology without, you know, the pollution consequences, what's called externalizing costs; we know how to do that. We can have very good, clean technology, and we could do lots of interesting things. So if the goal is maximum diversity, then wherever we draw the line between human intelligence and artificial intelligence, you'll see all these kinds of really interesting partnerships and all kinds of things, and more people doing what they want, which is the world I want to live in. Yeah, but to me it seems like the question is going to be related to attention, ultimately. That is, what are humans attending to at their highest? What is it that humans care for in the highest? You know, in some ways you could say, what are humans worshiping? And depending on what humans worship, their actions will play out in the technology that they're creating and the increase in power that they're creating. Well, and if we're guided by the negative vision, the sort of thing that Jim laid out that is being taught to his children, you can imagine that we're in for a pretty damn dismal future, right?

Human beings are a cancer on the face of the planet, there are too many of us, we have to accept top-down compelled limits to growth, there's not enough for everybody, a bunch of us have to go because there are too many people on the planet, we have to raise the price of energy so that we don't, what, burn the planet up with carbon dioxide pollution, etc. It's a pretty damn dismal view of the potential that's in front of us. And so, you know, the world should be exciting and the future should be exciting. Well, we've been sitting here for about ninety minutes batting back and forth both visions of abundance and visions of apocalypse, and I've been heartened, I would say, over the decades of talking to Jim about what he's doing on the technological front, and I think part of the reason I've been heartened is because I do think that his vision is guided primarily by a desire to help bring about something approximating life more abundant, and I would rather see people on the AI front who are guided by that vision working on this technology.

But I also think it's useful to do what you and I have been doing in this conversation, Jonathan, acting in some sense as friendly critics, and hopefully learning something in the interim. Do you have anything you want to say in conclusion? I mean, I just think that the question is linked very directly to what we've been talking about now for several years, which is the question of attention, the question of what is the highest attention. And I think the reason why I have more alarm, let's say, than Jim is that I've noticed that in some ways human beings have now come to, let's say, worship their own desires, and the strange thing about worshiping their own desires is that it has actually led to an anti-human narrative, you know, this weird, almost suicidal desire that humans have. And so, seeing all of that together with the increase of power, I do worry that the image of the beast is closer to what will manifest itself. And I feel like during COVID that sense in me was accelerated tenfold, in noticing to what extent technology was used, especially in Canada, how technology was used to instigate something which looked like authoritarian systems. And so I am worried about it.

But honestly, although I say that, like Jim I do believe that in the end truth wins. I do believe that in the end, you know, these things will level themselves out. But because I see people rushing towards AI almost like lemmings going off a cliff, I feel like it is important to sound the alarm once in a while and say, you know, we need to orient our desire before we go towards this extreme power. So I think that's the thing that worries me the most and preoccupies me the most, but ultimately, in the end, I do share Jim's positive vision, and I do believe the story has a happy ending; it's just we might have to go through hell before we get there. I hope not. So, Jim, how about you, what have you got to say in closing? A couple of years ago a friend who's, you know, my age said, oh, kids coming out of college, they don't know anything anymore, they're lazy. And I thought, I work at Tesla, I was working at Tesla at the time, and we hired kids out of college and they couldn't wait to make things. They were like, it's a hands-on place, it's a great place. And I've told people, if you're not at a place where you're doing stuff, where it's growing, where it's making things, you need to go somewhere else.

And also, I think you're right about the mindset: if people feel this is a productive, creative technology, that it's really cool, they're going to go build cool stuff. And if they think it's a shitty job and they're just tuning the algorithm so they can get more clicks, they're going to make something beastly, you know, beastly perhaps. And the stories, you know, our cultural tradition, are super useful, both cautionary and, you know, explanatory about something good. And I think it's up to us to go do something about this. I know people are working really hard to make the internet a more open place, to make sure information is distributed, to make sure AI isn't a winner-take-all thing. These are real things, and people should be talking about them, and they should be worrying, but the upside's really high. And we've faced these kinds of technological shifts before. This is a big change; AI is bigger than the internet. I've said this publicly: the internet was pretty big, and, you know, this is bigger, it's true, but the possibilities are amazing. Yeah, with some sense we could achieve it.

Yeah it’s And the world is interesting like I Think it’ll be a more interesting place Well that’s an extraordinarily cynically Optimistic place to end Um I’d like to thank everybody who is Watching and listening and thank you Jonathan for participating in the Conversation it’s much appreciated as Always I’m going to talk to Jim Keller For another half an hour on the daily Wire plus platform I use that extra half An hour to usually walk people through Their biography I’m very interested in How people develop successful careers And lives and and how their Destiny Unfolded in front of them and so for all Those of you who are watching and Listening who might be interested in That consider heading over to the Daily Wire plus platform and partaking in that And otherwise Jonathan we’ll see you in Miami in month and a half to finish up The Exodus seminar we’re going to Release the first The first half of the Exodus seminar we Recorded Miami on November 25th by the Way so that looks like it’s in the can Yeah Yeah yeah Absolutely I’m really excited about it And just for everyone watching and Listening I brought a group of Scholars Together about two and a half months ago

We only got through half of it, because it turns out there's more information there than I had originally considered, but it went exceptionally well and I learned a lot. And Exodus, exodos, means the way forward, and, well, that's very much relevant to everyone today as we strive to find our way forward through all these complex issues, such as the ones we were talking about today. So I would also encourage people to check that out when it launches on November 25th. I learned more in that seminar than in any seminar I ever took in my life, I would say. So it was good to see you there, and we'll see each other in a month and a half. Jim, we're going to talk a little bit more on the Daily Wire Plus platform, and I'm looking forward to meeting the rest of the people in your AI-oriented community tomorrow and learning more about what seems to be an optimistic version of a life more abundant. And to all of you watching and listening: thank you very much. Your attention isn't taken for granted and it's much appreciated. Hello everyone, I would encourage you to continue listening to my conversation with my guest on dailywireplus.com.
