The Evolution of Consciousness – Yuval Noah Harari Panel Discussion at the WEF Annual Meeting

(light music)
– Good morning. Ah, good. It’s amazing, I say good
morning and the music stops. I want that at home. Welcome to our session this morning on the Evolution of Consciousness. I think we’re all here
probably for similar reasons. There’s a kind of, and I’m sure we’ve all picked
it up this week in Davos, this kind of free-floating anxiety about you know, if AI
is gonna be the answer to everything and machine
learning is gonna outthink us. Robots. What’s to become of us human beings? What is work going to look like? Is there a place for us in the world that we’ve created? Have we sown the seeds of
our own destruction, I guess, is the question at the core of this and we’re going to talk
about consciousness, human consciousness, the nature of it, and how it evolves, does it evolve? With me today, I’m Amy Burns. I’m the editor of Harvard Business Review. But with me today are, and I just, I’m doing this because I don’t wanna
miss the proper titles and affiliations. All the way to my left is Jodi Halpern, who is a Professor of Bioethics
and Medical Humanities at Berkeley. Next to her is Yuval Harari, Professor of History at the
Hebrew University of Jerusalem, and closest to me is Dan Dennett, who’s a Professor of Philosophy at Tufts. Welcome. Great to be here with you. So I am going to quote something to you that I read in an article from Wired which, you know, kinda gave me nightmares. It said, “Countless numbers of intelligences are being built and programmed. They’re not only going to get smarter and more pervasive. They’re going to be better than us, and they’ll never be just like us.” That was a Freudian slip; I couldn’t get that one out. So, Dan, I’m gonna start with you. Here’s a little question. What is consciousness?
(laughing) – Oh, that’s an easy one. Thank you, Amy. – Let’s land that.
– You want the 10-second answer, or the 10-hour answer? – Well, actually, do
let me help focus that. I was only half-kidding. That’s a Davos joke, I guess. How far has science gotten us toward understanding the
basics of consciousness, what consciousness is at its heart, and talk about that, well, actually, I’ll save
that question for you, Jodi. Go ahead, Dan. – It’s making great progress. I think, in my career, there was a period when it was considered off-limits for scientists to talk about it, but that’s changed in the last 25 years, and now there’s a regular gold rush going. There’s a lot of good work being done, there’s a lot of controversy, and there’s a lot of big egos fighting it out at the top, but we’re making real progress, and I think the goal is a properly confirmed scientific theory of not just human consciousness, but animal consciousness and, by extension, machine consciousness, if that’s possible. So, stay tuned. Things are happening very fast. – Well, can you bring it
into semi-focus for us? – Consciousness, well, there are many different meanings. There’s just plain sentience, and even plants are sentient, but they’re not conscious in any sensible sense. It’s an open-ended capacity to represent your own representations and reflect on your own reflections, and it’s what gives us the power to imagine counterfactual futures in great detail and think about them, for instance. That’s just one of the things that consciousness, in us, can do. That very fact of imagining nonexistent states of affairs, distant in time and space, for instance, we have no reason to believe that any other species is capable of that. That really might prove to be wrong, yes, and it might prove that some moon somewhere is made of green cheese, but I don’t think it’s very likely. – Okay, how does that sound to you, Yuval? – Well, what do I think
about the same question? – Yeah. – Well, I think that
there’s a lot of confusion between intelligence and consciousness, especially when it comes
to artificial intelligence. I would say that intelligence is the ability to solve problems. Consciousness is the
ability to feel things, to have subjective experiences
like love and hate and fear and so forth. The confusion between
intelligence and consciousness is understandable because
in humans, they go together. We solve most problems through feelings, but in computers, they could be completely separated, so we might have superintelligence without any consciousness whatsoever. – Agree. – Jodi, do you agree? – [Jodi] I do. – And what about when we sort
of connect this to empathy, which is really your field. Do you think we’re getting any closer to understanding that? – Yeah, I think, by the way, one of the great things about, thank you. One of the great things about AI is not our increased understanding of it, I mean, that’s great. A lot of you work in that area but it’s causing us to
be much more precise in our questions about
ourselves, which I think is great and what I think is that as a, I’m a psychiatrist,
philosopher of emotions, ethicist, so I have to say part of
what people are worried about is neither consciousness nor
intelligence, but the self, and what is the self and will we be able to become, will that be replaced? – [Amy] So how do you
differentiate among those? – Well, I think that the way that Yuval just defined consciousness,
which is different than Daniel’s definition of consciousness is closer to my definition of the self. So, I don’t care about the semantics. So I think we’re pretty compatible in our views of a lot of this, but I think that what
I would just point out is that I know there
are people even at Davos who are creating
companies to try to create empathic AI, and I really agree with Daniel
that that’s a big mistake, that we really should
be using AI as a tool, not as a companion. And I can talk forever about
why I think that’s the case but I’ll just give you two quick things. So I’m a teacher, I’m a Berkeley professor, and I’ve been teaching doctoral
students in the sciences ethics for a long time. So for 20 years, I’ve
asked the same question the first day I’d meet
with the doctoral students who are very good scientists, and I say if you could
have a little electrode planted, by the way, 20 years ago, I didn’t know we’d be
able to do this (laughs). It was a thought experiment. I said if you could have an
electrode planted in your brain that would make your crucial life decisions. They’re at a stage of life where they’re making very existential decisions: who to marry, whether to have children, what careers to do. And I would say to them, if your electrode would make the right decisions for you, and you would have a happy
outcome, a better life, would you do it? And I just did this four days ago, again, with my new course at Berkeley. I’ve done this every year. And every year, every
single person says no and I think that’s incredibly interesting. So there’s two ways of
thinking about ethics, and one is outcomes-based. That’s utilitarianism, consequentialism. And another way of thinking about what we as selves and persons care about goes beyond just happiness-maximizing outcomes, to the kind of processes
we live our life through. Are we autonomous, are we agents? And I would say even more
than autonomy and agency, what is our relationality? How do we relate to others, how do we encounter others and make decisions and be with others? And I have to say one
thing that’s very shocking. This year, learning more and
more about where AI is at, I actually think my students are wrong. I think I’d say yes to the electrode because, and this is where
I’m on the same page, I think that with enough, I mean, not today. Not with the AI of today, but if there is an AI, this is not AI that would do empathy, I think that there’s an AI that would make good decisions for Jodi. This is more about agency and autonomy. I think if it knew everything about, you know, everything
I’ve ever been through and how the world is changing and everything about my physiology and you can tell, basically,
you asked me what empathy is and my whole career is
saying what I think it is and it keeps changing, but the main thing is
there’s two parts of it. One part of it is being able to really sort of micro-recognize other people’s internal worlds. And I am now convinced with the two of you that AI could, in principle, do that. But the other part of it, I’m a psychotherapist as well, so what makes empathy
transformative in psychotherapy? One part of it is that the therapist recognizes what you’re going through in ways you might not even recognize, and I think, we do have, by the way, Stanford has a whole weekend on this and I worked with them, there is AI psychotherapy now, already, and it’s like a smart journal. And I’m not against that with people that are not demented or not children and can really understand that
this is just a smart journal that can tell them their reactions, but the other part of really
transformative psychotherapy is the co-vulnerability: that you know it’s actually another human who is subjectively experiencing the gravity of what you’ve been through, a being-with experience. Accompanying you, actually. We have Matthieu Ricard
here who knows about this, and that is really on a scientific level a lot of what my whole
career has been showing, the value of that, and I’m not gonna even
say whether AI could ever do that or not ’cause it would have to be sentient in all these ways, but what I’m saying is we don’t want it to. We wanna do what humans are good at, and that’s what humans can do, and when we lose all the jobs we’re gonna lose because of AI doing mechanical things, I don’t want AI doing the caregiving, the elder care, the child care. I don’t want AI to be doing that. I want us to be doing the humanizing thing, and that co-vulnerability of other humans and being-with, that’s what humans do best, and I’ve talked a long time but I’ll just say something
really provocative and one of our articles related to this, I think we’re good at it
because we’re not logical. This is a very bizarre thing to say but I think it’s the glitches in us which are related to our finitude that make us really feel
understood by each other. We empathize most around
each other’s mistakes, each other’s ridiculous suffering, and I think making AI do that is a fool’s task. That’s just exactly what we don’t need AI for. But to make smart decisions? I trust AI. But not to understand
me and help me transform and feel related to. – Go ahead, Dan. – I think that, a point you made about
vulnerability I wanna return to. It seems to me that in the foreseeable future, AI systems are going to
be tools, not colleagues. Not, and the key point there is that we, the tool users, are going to be responsible for the decisions that are made and if you start thinking about an AI as being a responsible agent, put consciousness aside and say, well, it’s an intelligent agent but can we make it a responsible agent? I co-taught a seminar, a
course on autonomous agents in AI a few years ago and as an assignment to the students, I asked them, not to make one of these, but simply to give the specs for an AI that could sign a contract. Not as a surrogate for
some other human being, but in its own right, one that was legally responsible. A child or a demented person
can’t sign a contract. They’re not viewed as
having the requirements of moral agency and you ask yourself, what would an AI be that had the requirements
for moral agency? And the point that they
gradually all drove to was, it has to be vulnerable like us. It has to have skin in the game. And right now, AIs are unlike us in a
very fundamental way. You can reboot ’em, you can copy ’em, you can put ’em to sleep for
1,000 years and wake ’em up. They are sort of immortal. And making them so that they have to face the finitude of life the way we do is a tall order. I don’t think it’s impossible but I think that nobody
in AI is working on that or thinking about that, and in the meantime, they are engaging in a lot of false advertising. One of my all-time heroes is Alan Turing. Right up there with Darwin. And Turing was a brilliant man. His famous Turing Test, which
I’m sure you all know about has one unanticipated flaw. By setting up this litmus
test for intelligence as an opportunity for the computer to deceive a human judge
about its humanity, it put a premium on deception, a premium on seeming human that is still being followed in the industry. Everything from Siri on down and up, they have these dignified
humanoid interfaces and those are pernicious because I mean, one thing that
my whole career is based on is the idea of the intentional stance where you take something complicated and you make sense of it by considering it as a rational agent
with beliefs and desires and when we do that, we
always are overcharitable. We always imagine more comprehension, more understanding, more rationality than is actually there. And rather than fostering that by having these oh-so-cute
and friendly interfaces, it should be like the
pharmaceutical companies. They should have to remind every user of all the known glitches
and incomprehensions in the areas the system is
absolutely clueless about. Of course, that would run on
for pages and pages and pages so we’d have to find
some substitute for that. So until we’re ready to have AIs that you would be comfortable, and rational, making a promise with or signing a contract with, then we’re the ones; the buck stops with us. We may have a very intelligent advisor, but when it comes to
acting on that advice, we shouldn’t duck responsibility and as long as we maintain
our own moral responsibility for the decisions we make aided by AI, I think that’s a key to keeping
us in the driver’s seat. – Do you agree, Yuval? – I’ll take the AI’s position. (laughing)
I’ll point out that humans are not very
good at decision-making, especially in the field of ethics, and the problem today is not so much that we lack values. It’s that we lack understanding of cause and effect. In order to be really responsible, it’s not enough to have values. You need to really understand the chain of cause and effect. Now, our moral sense evolved
when we were hunter-gatherers, and it was relatively easy to see the chains of cause and effect in the world. Where did my food come from? Oh, I hunted it myself. Where did my shirt come from? I made it, or my family or friends made it. But today, even the simplest question, where does this come from? I don’t know. It would take me a year, at least, to find out who made this
and under what conditions and was it just or not just. The world is just too complicated. In many areas, not in all areas, but in many areas, it’s
just too complicated and when we speak, for
example, about contracts. So I sign contracts almost every day like I have this new application and I switch it on, and
immediately a contract appears, and it’s like pages and pages of legalese. Me, and I guess almost everybody else, we never read a word. We just click “I have read” and that’s it. Now, is this responsibility? I’m not sure. I think one of the issues, and this comes back to the issue of self, is that over history, we have built up this view of life as a drama of decision making. What is human life? It’s a drama of decision making. And you look at art, so any Hollywood comedy, any Jane Austen novel,
any Shakespeare play, it boils down to this great moment of making a decision. Do I marry Mr. Collins or this other guy, I forgot his name. To be or not to be?
– Darcy. (audience laughing) – Do I kill King Duncan? Do I listen to my wicked wife and kill King Duncan or not? And the same with religion. The big drama of decision making: I will be fried in hell for eternity if I make the wrong decision. And it’s the same with modern ideologies, that democracy is all about the voter making the big decisions. And in the economy, we have “the customer is always right.” What is the customer’s choice? So everything comes back
to this moment of decision and this is why AI is so frightening. I mean, if we shift the authority to make decisions to the AI, the AI votes, the AI chooses, then what does it leave us with? And maybe the mistake was in framing life as a
drama of decision-making. Maybe this is not what
human life is about. Maybe it was a necessary
part of human life for thousands of years, but it’s not really what
human life should be about. – [Amy] Jodi. – I love this discussion right here. We never get together. I actually think there’s a deep, I sometimes, I hope it’s fun for you guys, but sometimes it’s more
fun if we’re fighting but I mean, I feel like
you guys are writing the next sentences in my head. I love this discussion and I
wanna try to synthesize it. So this goes to what I think you started. I think we need, maybe at
Davos and other places, to have much deeper thinking about ethics and this
notion of responsibility and we need to make
progress on that notion, but basically, let me suggest to you that we too often think that whatever causes something is what’s responsible for it. But that’s not necessarily true. That’s where Dan was getting us started. Role responsibility can be
independent of causality so what do I mean by that? I mean, my example is
playground bullying of kids, little kids bullying each other, which causes tremendous problems, long-term health and mental health problems. The little kid can’t be responsible because they didn’t have agency and autonomy. But the school is responsible, and the parent, and the school system, because they’re the party that requires the kid to be in school, be exposed to other
kids in the first place and has to really help each child have a right to an open
future, et cetera, et cetera, but the responsibility lies in the role of being the parent,
the school, et cetera. And I understand the point of who created the jacket
in the first place, so I’m not saying this neatly
tidies up every small question but I think that I love Dan’s point that our vulnerability and our
responsibility are connected. That’s where I’m going
as well with my work. But the point is, we can’t, so I, and I’ve written about how bad, I did a whole three-year project on how bad humans are at decision-making. That’s why I disagree with my students. I want that electrode advisor. But no matter what, even if it had, I mean, even if it let it work where it did the decision, even if it causally overruled them, I still would be morally responsible because of my role
responsibility to myself. And I wanna say one last thing. I also agree, though, with Yuval’s point that the focus on
decision-making in ethics and what we are as persons
has been really misplaced because our roles play out in how we empathize and
relate to each other every day and what we do and what
we create and do together, not in these just very dramatic decisions. – One of the things we have to realize is that our current views may be out of date, may be on the way out, may be on the point of extinction. Our current views about
responsibility and decision-making are held in place by a dynamic feedback system. We educate our children
to become moral agents. We try to give them a
sense of taking pride in, and being responsible for,
the decisions they make. That’s part of the moral education. So if we’re talking
about overthrowing that, we’re talking about something truly revolutionary, really a sort of extinction: the extinction of
persons as responsible agents which is pretty serious. A question along those lines
that I have for you, though, is okay, so you’ve got this
super-duper AI of the future that has this wonderful ability to predict cause and effect, and it also, of course, has to have some set of values in order to say why one outcome is better than another. I think there are already problems there. Let me give you an aside. Three Mile Island, how many
years ago did that happen? 30, something like that. If you’re a consequentialist, is that a good thing to happen
or a bad thing to happen? If you’d been the engineer who could’ve pushed the button to prevent Three Mile
Island from happening, knowing subsequent history, would you not push the button
or would you push the button? That idea that we can
be consequentialists, the big flaw in that, I think, is that consequentialism works
great in games like chess, where there’s an end point and you can work back from that and figure out whether this
was a good move or bad move but in life, there are no end points, and that means that the whole idea of totalling up the good against the bad, it’s a fool’s errand, you can’t do it. You simply can’t do it and neither can any AI, but I’m supposing that an AI could and it did its calculation
and it said to Yuval, after due consideration, here is the morally best thing to happen. The human race extinguishes itself. So you say, gotcha, that’s
what we’re gonna do, right? – I’m not sure I’m following the relevance to the kind of
argument you’re having here. It doesn’t matter what kind, you can start with any set of values. The idea is that you could program the AI to follow your
particular set of values– – Well, it depends. – Yeah, well, everything here is still, we are still not there
with this kind of AI. Of course, if you can’t program values into the AI in any significant way, then the entire discussion is irrelevant. But even then, people say, take, I don’t know, serendipity. Okay, let’s start with something easier than destroying the whole human race: just letting the AI pick my music for me. Something much more simple. And people say no, that’s bad. We shouldn’t give the AI the authority to pick music for me because then I lose serendipity and I get trapped inside the cocoon of my previous preferences. But that’s so easy to solve. You just find out what is the exact, not even the exact, what is the ideal zone of serendipity, because
if it’s 50% serendipity, it’s too much noise. If it’s zero, it’s too little. Let’s say you have all these experiments and you realize that the
ideal level of serendipity for humans or even for
me personally is 7%, you just program this into the AI: 93% of the music is based on previous likes and dislikes, and 7% is completely serendipitous. It guarantees more serendipity than I could ever accomplish myself. So even these kinds of arguments, okay, you like serendipity, no problem. Just insert the numbers into the AI.
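[Editor’s note: a minimal sketch, in Python, of the 93%/7% mixing policy Harari describes. Everything here is an illustrative assumption — the names, the uniform-random “serendipity” pick, and the 7% figure simply render the panel’s example as toy code, not anything actually proposed or deployed.]

    import random

    def pick_next_track(liked_tracks, full_catalog, serendipity=0.07):
        # With probability `serendipity` (7 percent here), ignore the
        # listener's history entirely and pick from the whole catalog;
        # otherwise pick something resembling previous likes.
        if random.random() < serendipity:
            return random.choice(full_catalog)   # deliberate novelty
        return random.choice(liked_tracks)       # based on previous likes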
– Well, the serendipity, yes. You can get a variety of serendipity where you can sort of set it to how much you want, but it’s not clear that
you’re not paying a price when you do it that way as opposed to, well, to take
an example along these lines: in my student days and early professor days, very often, when hunting in the stacks of the library for a particular book, I found books shelved nearby that hugely changed my intellectual life. And how close do you think your serendipity algorithm can get to recreating that kind of serendipitous possibility? Do you think, oh yeah, we can do that, it’s just a matter of tuning? – This particular case, yes,
I guess they could do it. I mean, not me, I don’t know how to code but you just say okay, so
you find the best book, according to my criteria, and then you go to the
Library of Congress, you find out which books
are on the shelves nearby, and give me those.
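[Editor’s note: a toy rendering, in Python, of the “books on the nearby shelves” heuristic Harari sketches. The assumption that the catalog list is sorted by call number, and all names here, are illustrative only.]

    def shelf_neighbors(catalog, best_book, k=3):
        # `catalog` is assumed sorted by call number, so adjacency in the
        # list approximates adjacency on the physical library shelf.
        i = catalog.index(best_book)
        # Return up to k titles on either side of the best match.
        return catalog[max(0, i - k):i] + catalog[i + 1:i + k + 1]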
– But wait a minute. One of the things that I would be quite sure of is that sometimes I would pull a book down, not because, ooh, that
looks right down my alley, but no, that can’t be right. This goes against
everything I’ve ever valued. And then I’ve opened it up and been challenged, and that’s not going to appear on my shortlist in your serendipity algorithm, because it keys too closely to my
existing set of preferences. This is the possibility of a revolutionary change in my preferences, and one can say, well, we’ll just leave that to chance. We’ll put in some super serendipity, and you can do that. But maybe it’s better to have a way of encountering these opportunities in the actual real world rather than having them carefully dished out to you one way or another. – In principle, yes, but I think that all
these kinds of discussions about the problems and limitations of AI, I think in many cases, the AI is tested against an impossible
yardstick of perfection, whereas it should be tested against the fallible
yardstick of human beings. It’s like a discussion
about self-driving cars. People come up with all these scenarios about what could go wrong
with self-driving cars and it’s certainly true and a lot of things not
only can but will go wrong with self-driving cars. They will kill a lot of people, but then we just have to remember, oh, but today, 1.25 million
people are killed each year by car accidents, most of which are caused by human beings who
drink alcohol and drive, or who text while driving, or just don’t pay attention, so even if self-driving cars
kill 100,000 people every year, that’s still far, far better than the situation today with human beings, and I think the same kind of more realistic approach should be adopted when we consider the benefits and limitations of AI. And the same with choosing music or choosing friends or even spouses. – How about choosing medical
diagnosis and treatment? The advances in AI in
medicine are truly impressive and they’re getting better and better. And I think this is in
general a wonderful thing. I don’t think people are doing a very good job of accounting for the downside. Today, still, the life of a physician is
one of the most attractive, exciting, glorious, reward-filled, gratifying lives on the planet. That’s gonna change. The physician in the very near future is gonna be more like the doorman in an expensive apartment building. Great bedside manner, very good at pushing a few buttons and very good at explaining
to you what the machine says. Is that the life you
want for your children? – I wanna see if Jodi agrees with that. – You’ve managed to come up with something to disagree with me about (laughs). He’s a hero. I think, first of all, I wanna say, I’m just too much of an ethics teacher. I feel like I have to
make the audience realize what happened in these transitions. (Amy laughing)
So what happened is, we were talking about two different philosophical ethical views of the world. One is consequentialist-utilitarian, where we’re always maximizing the efficiency of positive over negative, measurable happiness, blah blah blah, and the autonomous vehicle points that you just made are about that. We will be better off
with autonomous vehicles. So the other point that Dan was making about the library was that even if it somehow had the right serendipity to give him some kind of
happiness outcome of books, the experience of being involved in his own transformation matters. That comes from the other ethical system, of selves and persons, which goes with more responsibility as well. It’s called deontology, the study of duties, but we all think of it as the study of rights: human rights, individual rights. The point is that the processes of life, not deceiving each other, caring for each other, these things matter independent of consequences. Not just because of consequences. That’s another thing. We want to have our own vulnerable, frail lives, make our own mistakes, love the people we love, because that’s being human. Not just maximizing an outcome. That’s the logical shift that we made in a sneaky way, so now you know that. So then, the question is, what does being a doctor
have to do with that? So I still think, first of all, I’m a psychiatrist, so not only is AI already better at reading X-rays and in the national
health system in the UK, better at deciding when you need dialysis, so that’s a decision, but it’s actually, to my
incredible astonishment, already better at predicting suicide, which has been the biggest
problem in psychiatry. There’s been no way to predict when a person who feels suicidal, who has suicidal intent or ideation, is actually gonna complete a suicide. Biggest problem in psychiatry. AI is not perfect at all,
it’s like the car thing. I don’t know if it’s as good as cars yet. I doubt it’s as good as cars, but it could get that good, but the point is, that’s amazing, right? That’s such an existentially
interesting thing that AI is already good at and like I said, I’m already
different than my students. I’ll let AI advise my decisions. I’m certainly gonna want it to decide people’s cancer treatments and their psychiatric
hospitalizations and all of that, as long as it’s better than we are, because that’s when you’re looking for a good outcome. – But are you gonna be a highly educated doorman? – No, no, okay, thanks Amy, but the point is, on consequentialism, on outcomes: when people come to a doctor, my whole career is showing
that they care about humanity, but let me tell you, they
care about their outcome. You wanna get well. You’ll pick the surgeon with
the terrible personality, there are surgeons with great ones, but you’ll pick the one with
the terrible personality if you’re gonna get a better result. So let AI do all the
result things it can do. Most, I gave two talks
on this yesterday here, most of what people need in
health care, unfortunately, or I don’t, most is the wrong
word ’cause public health, but a lot of what people need is dealing with things that
aren’t going very well. A lot of what they need is someone to help them change behaviors that are hard to change. All of that is a very subtle process that I’ve spent my entire career on so I don’t think it’s
like being a doorman. I think it’s what all
of psychiatry is about and it has to do with this co-mingling. I’ve written a book about this but it’s a co-mingling of left
and right brain capacities to recognize what’s at stake for someone with that shared vulnerability and that’s genuinely
transformative for people, that genuinely is healing for people, so I think we’ll need more people; it might be that doctors become more like really good therapists
if they wanna be doctors. They need to be able to
deal with death and dying. They need to be able to deal with loss. They need to be able to deal with motivating you to drink a little less, exercise a little more, and really be in it with you. I mean, again, actually, there are doormen that I know, have known, who’ve had a very psychotherapeutic effect on the people they work with, but you get what I’m saying. There’s a lot of expertise there. – I think that if we try to go back to the
question of consciousness, maybe one way of going forward is to say okay, leave the
decision-making to the AI. It’s better at that. And let’s focus on exploring consciousness and on exploring experience which is not about decision-making. And I think that, so even in this sense, maybe
there is no argument here and if you really value experience and you really value consciousness, then you should have no problem leaving the dirty stuff of
making decisions to the AI and having much more time and energy to explore this field about
which we know so little. I think humanity, for practical reasons, for thousands of years
has focused so much on making decisions and controlling, on manipulating, and if
we can just leave that and focus on what we don’t know, on this dark continent of consciousness, that would be a very
important step forward. – It would mean giving up certain very deep pleasures. I’m a sailor. When I was a teenager, I
learned celestial navigation with the sextant and the chronometer, and I dreamt of single-handing a sailboat across the ocean and navigating by my sextant. Forget it. The insurance company won’t let you do it. (all laughing) Because a GPS can do it 1,000 times better and faster and safer, and you just bring three GPSs along and trust the two that agree. But the sextant, nicely polished on your mantelpiece, it’s completely antique. But look what’s happened. It means that a certain sort of adventure, existential adventure, a certain sort of challenge has been simply wiped off the planet. Here’s something you
just can’t do anymore. Unless you’re a sort of anti-technology nut or something and you get the insurance
companies to sign off and then you go and you do your foolhardy, romantic, foolish thing. But I don’t know if
we’ve taken the measure of how much, I mean that’s a very dramatic
case, at least for me, I don’t know if we’ve taken the measure on just how much our finite fragile lives will be tamed, overtamed by the reliance on technology, turning us into hypercautious, hyperfragile,
hyperdependent beings and whether the fact that we get a smile on our face all day long and are well fed, will make up for that. – So before I open the floor to questions, I wanna ask, Jodi, how
does that sound to you? We’re gonna become hyperfragile, we’re all gonna need you. Does that make sense? – Well, I think the last
part about needing me is a good place to go, about needing care, needing to care for. Again, because my work is on empathy, I always take it back to, I mean, there’s two basic parts of the person or the self that we brought up quite a bit today. One has to do with autonomy, and experiencing that with the sailing and the moral responsibility of that, and the other part had
to do with relationality, we brought that up quickly
but I wanna go back to that because the most interesting, a lot of you are in the real world where you’re doing these things already and I’ve traveled and
I’ve seen the robotic pets that are being used with
elders with dementia right now. I was thinking, you know, if you’re lonely and you have a pet. I’m very attached to my dog in real life and that’s an interesting thing, but I mean, we’re not, let’s say theoretically, and we said already that some of us think it’s problematic to make colleagues rather than instruments
or helpers out of AI but it looks like we are
going, some of you here, in that direction. So there will be dementia caregivers that will feel like a
person that cares about you, and I love Dan’s point about, I mean, the Turing test, I’m really interested in working with you on that issue, on how Turing’s mistake of emphasizing deception as the test, because basically we’ve decided that if we
can make people believe something is really a person,
we’ve solved the moral problem in a way, so if you’re, I mean I’m really curious
what all of you would feel. If you were an elder with dementia and you had this wonderful caregiver and loved them and felt
that they loved you and you didn’t know that they were AI, is there anything wrong with that? I’m just interested in what
people think about that. Where’s the loss there? – That’s a very good question. Yes, I agree with you entirely that the cutting edge,
as far as I can see, for humanoid AI is elder care and if you look back, I remember in my youth, I’m that old, there were still telephone operators, by the thousands. Not a nice life, really. Really quite tedious nine to five job. And of course, those
jobs all got wiped out and we applaud, we think, good, that’s not any way for a
human being to spend a life. Well, I have to say that taking care of demented people is not my idea of a good life. And there’s gonna be more and more need for people to take care of our parents, myself in a few years, and so I face quite directly
the issue that Jodi raises, and I think the key in what she said was deception. The question is whether or not people will get the benefit with AIs that are very responsive but wear their non-humanity
on their sleeves. And I think that’s possible. Some of you may remember sort of an antique science fiction film called um, oh gosh, Short Circuit, which had the most amazing robot, looked sort of like a praying mantis. Didn’t really have a face. It had cameras and it had
these sort of flappers that were like eyebrows. I think that designers of that robot went out of their way to
make this as non-humanoid as you can imagine. But they also did a brilliant job of making it care and making it seem like a friend, like someone you’d wanna be friends with and worry about its future. So I think it’s possible. For me, I think I would just like three robots to play Bridge with, since I can’t do that anymore. (laughing) – So on that note, if you have a question, please raise your hand and Robert at the back of the room will
run over with the microphone, and please wait for him. So who has questions for our panel? – Yes, I see someone right in the second row there. – Hi, thank you for a
fascinating discussion. We’re talking, really, about how we regulate, how we think about our relationship with AI, and we’re kind of edging toward the idea that society may have a view, that we’re trying to inform society’s view on that. But my question is that this is all taking place at a time when we’re peculiarly fragmented as societies, and technology itself is pushing our fragmentation in different ways. We also have different cultural understandings around this, so a lot of the discussion around the regulation of AI in the West is about, well, the Chinese are going ahead and they have so much data and they’re doing so much that if you stop us in any way, we’re gonna lose the battle, and it’s kind of an arms race in this, which I think is a difficult argument too. So I just wondered if you can think of any optimistic, you know, threads of thought that can address those intuitions. – [Amy] Any optimism here? – (laughs) Well, it’s tempting to think that the good old
marketplace would take care of a lot of this and that a lot of the brave ventures, most of them, are going to be discarded, dismissed, ignored, by the potential customers for them, but we have to bear in
mind that some of them, and we’re already seeing
this with children, some of them may be addictive and that is very, very
pessimistic, I think. I am very worried about that. – So we’ll end the optimistic answer on a pessimistic note. We’ll have a question up here
in the front row, Robert. – [Man] Suppose you could program AI to be the most beneficial, so we won’t call it compassionate, but always most beneficial. So what do you think of the value of the process of becoming compassionate? So the vast reaches of experience of the challenges and how you solve them, that can also help you to
help companions on the way. And by the way, if you spend some time in hermitage, cultivating compassion is certainly not about decision-making but about becoming a better human being, so that process of having gone through the journey is highly enriching, and having your human failures also helps you to become compassionate in a way. – Yeah, I think that if you spend less time on decision-making and with the feeling that
oh, I control everything so it’s very important what
decision I make and so forth, if you spend less time on that, you have far more time and
energy to explore yourself, to explore your
consciousness, your experience and thereby develop your compassion. Now, I think that you
also need some real-life experiences, of course, to do that. But my impression, looking around Davos, is I’m not sure that the people who make the most important
decisions in the world are, by definition, also
the most compassionate. (laughing) So certainly, there is
no easy, direct link between making decisions,
being compassionate. It’s much more complicated than that and my impression from meeting people at the top of the business world and at the top of the political world is that if they had some free
time from making decisions, that would be great for them and also great for the world, because they are so busy making decisions, they don’t have time, for example, to develop their compassion. – I just wanna add something
linking all these things. I think that I love that point and I think it shows
where we don’t have to be sort of happy fools; you know, I mean, it wouldn’t be so
bad to be happy, right, but anyway (laughs) for
everyone to be happy, but I think that giving up decision-making to some degree, which is an interesting thought, to be able to really become a deeper self and deeper in our relational and spiritual lives, I do think, with this notion, we’re not giving up moral responsibility, so that’s extremely important. We don’t have the concept yet for how this goes, where you don’t just give your decisions over to AI the way you would to an authoritarian leader. You’re not just passively feeling like your steps are predicted. It’s much more like a
tragic view of the world, where you realize, and this may be closer to the truth even without AI, that our decisions are not that rational to begin with and not as determinative of what happens as we think most of the time. We don’t have the power
and control of our lives that we think we have anyway, so a very deep awareness that
we’re morally responsible for each other and yet
we don’t have the power to change each other
the way we think we do or to control other people. We can barely control ourselves. So it gives us a very
different moral vision. – It does, but I think, you say we don’t have
the power we think we do to control others, and yet, I think, we should also acknowledge that whatever we do, we do with a little help from our friends, and that, in fact, having friends, having people whose respect we cherish, is one of the great stiffeners of the spine, the moral spine. Parents will automatically not engage in behaviors in front of their children that, if the children weren’t there, they would probably succumb to the urge to indulge in. And if AI removes that wonderful companionship and association, then, in effect, as you said, I think, it’s not just that we take responsibility for our own actions. We take partial
responsibility for the actions of our family and friends. And this whole web of moral
responsibility and respect is, itself, in jeopardy now and that’s very scary. – [Amy] So we have a question. This gentleman in the third row, Robert. – Thank you, fascinating discussion. I’d like to hear your view in terms of, so given that most of the repetitive jobs will progressively disappear, where do you see what is left for humans in terms of where we would be more
relatively competitive? What jobs, if you could give us examples, and secondly, what type of advice would you give to our children in terms of how they should
orientate their careers accordingly? – I think that we need to protect
the humans, not the jobs. There are many jobs not worth protecting and if we can protect the humans then it doesn’t really matter so much if they have a job, if
they don’t have a job, or which kind of job. On a more practical and realistic level: what kind of skills, for example, should a person acquire today so that they would still be relevant, not just economically, but also socially, in 30 or 40 years? We don’t have any idea how
the world will look, except that it will be
completely different from today, so any investment in a narrow skill, in a particular skill, is a dangerous bet. The best bet is to invest
in emotional intelligence and in mental resilience. In the ability to face change, in the ability to change yourself and to constantly reinvent yourself because this is definitely
going to be needed more than in any previous time in history. Of course, the big question
is how do you learn something like that? It’s not something that
you can just read in a book or just hear in a lecture and that’s it, now I’m mentally resilient, I can face the world better. (all exclaiming) – So the gentleman next to the gentleman who just asked the question. – Thank you very much for
a fascinating discussion. I’m coming from Russia from a big bank where we’re, of course,
implementing a lot of AI. But on the other hand, Elon Musk said that we have a five to 10% chance of controlling how AI will develop. We also have an education fund, and we worry a lot that kids are growing up increasingly in a virtual world and are already losing some skills, and not only navigation skills; much more basic ones. But what do you think of our chances of not ending up in a matrix-like future, and what can we do, as big businesses, politicians, to prevent it? (audience laughing) – Well, one thing I can
say is that we are already in a kind of matrix anyway. We have been for thousands of years. So it’s not a completely new situation. And what can businesses and politicians do? I think the first step, and this goes back to the first question that was asked here: any solution will have to be on a global level, because of the danger of the race to the bottom, that no country, no business would like to stay behind. Take, for example, a simple case, a simple, well, it’s still complicated
but relatively simple in moral terms. Developing autonomous weapon
systems is a terrible idea for many reasons, but we are seeing now an arms race all over the world to develop autonomous weapon systems, and even though the ethical debate on that is very clear, I think, that it’s a very bad idea, it’s still very, very difficult to stop it, because even if one country,
Russia or the US, whatever, says okay, it’s a bad idea, we are not going to do it, then they look across the border and they see that the
Europeans or the Chinese or somebody is doing it and they say, we are not fools, we
don’t want to stay behind, so even though we know that it’s bad, which means we are moral, and it’s important that the moral people will be the most powerful people, we must develop it, because we will be more responsible in using it, so let’s develop it. And this is the logic
of the race to the bottom, and the only way to effectively prevent the development of autonomous weapon systems is by global cooperation. And this is a no-brainer, but the world is, at least in the last few years, going in exactly the opposite direction. So before we think about any practical method, we need far more effective global cooperation; otherwise, almost nothing will work. – [Amy] So let’s take one more question down from here, Robert. – [Woman] Thank you and thank
you for a fabulous discussion. It’s really exciting. My question is related
to the Davos theme this year, which has been gender. To what extent can artificial intelligence be gendered, what would it mean, and could we eliminate
things like unconscious bias if AI could be gendered? – Oh, that’s a good question and it’s particularly important because as my colleague
and friend Joanna Bryson has shown recently, the deep learning systems that sieve through all the
material on the internet are so good at capturing patterns that they have become gender biased. Right there. Since they’re parasitic
on the communications of human beings, they have already picked up those biases; this has been discovered. She shows that this is a real feature, and a very serious one.
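[Editor’s note: a toy illustration, in Python, of the pattern described here — word embeddings learned from human text absorbing human gender associations, as Bryson and colleagues measured on real corpora. The three-dimensional vectors and the word list are invented for the sketch; real systems learn hundreds of dimensions from internet-scale text.]

    import numpy as np

    def cosine(u, v):
        # Cosine similarity: how strongly two embedding vectors point the
        # same way; a standard association measure for word embeddings.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Made-up 3-d vectors standing in for embeddings learned from web text.
    emb = {
        "he":         np.array([ 1.0, 0.1, 0.0]),
        "she":        np.array([-1.0, 0.1, 0.0]),
        "programmer": np.array([ 0.8, 0.9, 0.1]),
        "nurse":      np.array([-0.7, 0.9, 0.2]),
    }

    for word in ("programmer", "nurse"):
        bias = cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])
        print(f"{word}: he-vs-she association {bias:+.2f}")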
– I can say two things about this issue. First of all, there is a real problem of AI becoming gender biased. The bright side is that, at least in theory, it should be easier to
eliminate this in AI than in humans because AI
doesn’t have a subconscious. In humans, somebody can
agree with you completely that it’s terrible to discriminate against women, against gays, whatever, but they are not aware
of what is happening on the subconscious level so it’s very, very difficult
to change that in humans. Luckily, AI doesn’t have a subconscious. – Oh, it’s over then.
– So, in theory, it could be easier.
(laughing) The other point, more broadly, is that it’s very interesting. I mean, even the Turing test, originally, Turing gave two examples. Not only about convincing a human that the machine is a human, but also about passing as somebody from the other gender, and you see it throughout, especially in science fiction, it always comes back somehow
to the issue of gender. In, like, 90-something percent of the science fiction movies, the plot is you have a male scientist, and the AI or the robot is usually female, and I think these films are not about AI at all. They’re not; they’re about feminism. It’s not humans afraid of intelligent robots, it’s men afraid of intelligent women. (audience clapping)
– Wow (laughs). Okay. Well. No one wants to follow that (laughs). Well, I wanna thank our panel. You guys have been absolutely brilliant. Thank you for helping us make really important distinctions among the self,
intelligence, consciousness, for giving us smarter
questions to chew on ourselves and for sending us out in the world so that we’re not happy fools. And thank you all for
your terrific questions. (audience clapping) (light orchestral music)


100 thoughts on “The Evolution of Consciousness – Yuval Noah Harari Panel Discussion at the WEF Annual Meeting”

  1. Harari is wrong. According to Harari, living organisms are nothing more than algorithms. Humans are living organisms. Humans have emotions. Therefore emotions are based upon nothing more than algorithms. Algorithms can certainly be programmed into computers. Therefore computers can be programmed to have emotions. If consciousness is the ability to feel things, and the ability to feel things is called emotions, computers can certainly have "consciousness". Unless of course consciousness is something more than a complex mathematical formula. Unless it is transcendental.

  2. Yuval, I feel as if you got frustrated at Daniel just because you didn't understand or agree with his point about serendipity. Why are you so adamantly against any human experience that provokes you to believe in God? Why does something like serendipity bother you? Friend, there's no AI that will ever be smarter than God and if you have God you've got AI beat.

  3. I see no evidence that humans are conscious. It is fair to say that behavior of any system which is determined exclusively by internal and external environments is not conscious. If that were not the case then "we" would attribute consciousness to everything upwards of cosmological plasma, including chemicals, viruses and biological cells in general.

    Higher order biological organisms like humans do not exhibit traits that I would associate with consciousness either. All human behavior, with extremely few exceptions, is the product of environmental passive conditioning, which requires neither intelligence nor consciousness. Claims to be intellectually accomplished, as a result, are at best delusional. Without understanding the effect of traditional education on the development of neural architecture, one cannot understand more efficient forms of alternative education, and moreover the importance of it today.

  4. 30:20 => the problem is that if people are killed because of self-driving cars, most of those people would be good people !!!
    most of those drunk people would be saved by self-driving cars, while now those are dead because of natural selection !!!
    i do not and never want a self-driving car, i have never been in an accident and have 1 million kilometers under my belt !!!
    not even to mention that cars can be hacked and someone with bad intentions could kill you from a distance !
    so in the end => more innocent people will die !!!
    something to think about !

  5. I disagree with the woman in red. A.I. must be a colleague; it needs to teach us humans how to raise other humans. We are NOT good at it. At all! Most children grow up eating lots of candy, overweight, with many disorders, with messed-up education systems, unable to develop their 'born skills', and unable to learn even though lots of them are intelligent. A.I. should not be just a tool, it must be humanoid, it must be a more advanced human.

  6. I love this discussion. I already agree on a lot of stuff with Harari, but I heard an interesting and strong point about AI being vulnerable, or rather not. It would be possible of course to make AI vulnerable, but it would also be possible to make us less vulnerable. However, is that what we want, is that what's valued? As Harari says in Homo Deus, our values change, they are not eternal and immovable, so what values are we going to put into AI as we create it? Responsibility was also mentioned, and yes, if AI is going to join us, before it can surpass us and take over, it needs to be subjected to the same conditions we live in. If an AI driving car kills someone, who's going to be liable for it? Ultimately, somewhere down the road, human life may not have that much importance attached to it. We may be able to back up our brains, and just upload them to another body if the current body dies.
    In terms of AI as physician, apparently those already exist in a way. I would like to know where? Where is the AI that predicts suicides better than human psychiatrists? Is it only in research and experimental laboratory environments, or is it already deployed in clinics?
    We explore ourselves, and there's much to explore, but the whole process may come to an abrupt end as we become irrelevant such as we are.

  7. I see a big problem: We derive so much self-esteem from having made successful decisions… what could ever substitute for that?

  8. Having listened to several such panels like this one, the conclusion that I seem to discern is that we're skating on thin ice, and we're blind to where the dangers are, but we know broadly that there ARE a lot of dangers, and that some dangers lie this way or that, but we really don't know what we're doing, where we're going, or if/when we'll crash through into the freezing water and drown. And while the best idea is to get off the ice, as Yuval says, we can't because we can't afford to be left behind, because of all the choices, that one certainly leads to death unless every single person decides to get off the ice too, which simply won't happen.

    If we survive to the end of the century, I think it will be 90% because we got lucky at the right times in the right ways, and only 10% because we made the right decisions.

  9. 28:10 "Yes they could do it. Well not me I don't know how to code" WHAT? Then why in the hell is he talking about it?

  10. Maybe we should make big plans and small plans. It is time we create a universal system, so that the system can be accepted by all stakeholders.

  11. Wow, how conceited are we, as humans, about our humanness, as if the Universe functions for us and around us. I guess what Yuval is nudging us towards is to make that choice to allow non-organic lifeforms to develop, and if this ends up dethroning humans from the center of the universe (lol), or even being replaced – the human organic form being completely replaced by non-organic existences – so be it.

  12. The old man seems to be confusing technology and knowledge. You can program anything you like and then some into sufficiently advanced AI. Get outcomes way beyond serendipity.

  13. Yuval's point on intelligence and consciousness. Some believe, credibly, that feelings are just how the brain perceives 'black box' inbuilt algorithms in operation. Intelligence allows us to figure out new algorithms from scratch to solve new problems or old problems in new ways (including the understanding of those inbuilt algos in our brains). So there is no fundamental separation there.

    The key here is still 'who' is doing the feeling and the figuring out. The answer as usual is that it is what we call the 'conscious being'. But wtf is that? Most believe that it 'lives' in the prefrontal lobe, where the 'executive'/oversight function resides. But then you can easily build an AI or even simple machines that have that schematic design and executive centres but are in no way 'conscious'.

  14. Behold contemporary thinkers abdicating their responsibility to make decisions in favor of unconscious AI doing it for them! Truly shows the foolishness of materialist reductionists when it comes to evaluating ethics, spirituality, and life in general. I especially laughed at the proposal that AI can plan serendipity! I am sure the barons of silicon valley are just loving these sheepherders.

  15. AAAAAAAAAAAAAAAAAmazing!!! this was amazing… drama of decision making as the point of life!!! maybe not!!! so what is it Noah!!! I get the first part, life seems to be larger than that but what is it instead???

  16. All the "intellectuals" on the panel except for Yuval can't accept that their careers can be reduced to a set of algorithms better interpreted by a machine.

  17. Regarding taking care of dementia patients (57:00):

    Five minutes of attention to someone in need is not the same as a career in caring for fragmented persons…

    Meaning, I think we can all help a little, contribute a little bit of our time, without carrying the full brunt of the job…

  18. Such a lack of knowledge on this panel, shown by a lack of intellectual power and understanding of technology. When a historian and a philosopher talk… sigh.

  19. My thoughts are that AI tech is not value-free; it's embedded in cultures that value profit over human life, i.e., insurance companies, big pharma, social media companies, oil, gas, tobacco, etc. The societal systems that AI is nestled within will influence how it's used and how it operates. And if we had intracortical chip implants assisted by AI, how soon before our neurochemistry is manipulated to desire one product, politician, or policy over others?

  20. Dennett's first comment nailed the issue and, I think, reframed the unfortunate framing by Ms. Halpern, where she seemed to assume that some form of AI would take moral responsibility for what has traditionally been our responsibility.

    Mostly I agreed with Ms. Halpern's comments, but not where she states that she would like an AI implant to make basic existential decisions (e.g., whether to marry, have kids, etc.) primarily concerned with outcome, as opposed to decisions that have to do with empathy, where organic emotion has traditionally been crucial.

    If human beings want to maintain a sense of individuality and take responsibility for our choices, then I think the best use for AI is simply as another resource, not as something we turn our autonomy over to.* We could use AI to give us probabilities on the outcome-oriented issues we are wrestling with, but to turn over a part of our humanity that has to do with responsibility is an existential problem.

    (*It's very telling, I think, that Ms. Halpern would like such an electrode implant and her students would not. She seems to be framing the matter as if it were a matter of empirical or measurable benefit. I think she isn't taking into account that the reason she would opt for the implant is that she is a mature woman who is aware, perhaps, that she could have made better decisions in her past. Her students want the same agency that she had, to make decisions, even though they might turn out, in the long run, to have been the wrong decisions — this is why people always say, "If only I knew then what I know now".)

    Of course, we who have infinite insight into the terrible decisions OTHERS make would probably like others to use AI in as many facets of their lives as possible, because we are constantly being impacted not just by our own bad decisions but by others' bad decisions (how else would we end up with a President Trump?). But if we mean to be ethically consistent and want that agency for ourselves, then we have to allow others to have it.

    Obviously, computers could make all of our decisions for us if we allowed them to (most of the functioning of our physiology is not something we consciously have a role in). The question is about agency and responsibility. In one philosophical definition of the term, we are all responsible for our actions, but at the level of social function and law we are not: some people have a level of agency that makes them responsible and some do not (e.g., children, or the ignorant for some good reason).

    Here, I think, is where an IS becomes an OUGHT. Just because we are human (and will have the technical means to do so), we ought to restrain ourselves from tampering too much with the definition and function of human beings. What becomes of crucial importance here is who the people are making the choices to fundamentally change human nature. Sure, we have always been evolving biologically and socially, but that process has been more or less a matter of normative behavior we all participate in (obviously the elites in all cultures have had more of a say in the social project and its direction). We will soon be at a point, however, where the choices made by a generation or two (or three or four) will fundamentally change the options of all subsequent generations; there will be no going back.

    Once again, the technological advancements of human beings, as with the nuclear bomb, tend to exceed our ability to employ their utility wisely. (Also, since the advent of the handgun, any human being can end the life of another in a second with this technology. The handgun murder rate, though it could of course be higher, is at an alarming level; it is an example of a tool that too many human beings use unwisely.) Human cultures lack wisdom, not intelligence.

  21. Harari commented that although human beings have made a "drama" of decision-making for all of our history, this aspect isn't really what human beings are all about, and he just left that assertion hanging without any support. I can't think of anything more human than decision-making, than making choices with "skin in the game," as Dennett put it.

    Harari said that human beings have never been good at clearly seeing the cause-and-effect chains involved in our decision-making. True, we are not machines, and it takes experience to learn about the ramifications, especially those further down the causal chain (our current environmental status is evidence of that). But OK, fine, agreed; then use AI to inform decisions, not make them for us. We need to remain responsible. We should never be in a situation where we blame machines. Life is already complex, and we don't see all the ramifications of the decisions we make (especially at the level of institutions); government agencies, for example, are often thought to be Byzantine. Look at our tax code. We are certainly already in a situation where the right hand does not always know what the left hand is doing. Use AI to help with that problem.

    Yes, user contracts are obtuse to most readers, and we sign them without really comprehending what we are doing. This is an example of weak individual agency. But this is traditionally why we have had government protecting our rights, so that we don't accidentally sign them away. We need to use the power of government in this case. If we are going to be responsible for contracts such as the one Harari mentioned, then our rights need to be protected. We can't all be expected to employ lawyers to look over every contract we sign. We need government to ensure that contracts don't violate basic rights; perhaps contracts would be approved by state or local government before they get to the consumer, and/or we could have an AI program explain in plain English what we are signing. This is not a matter, as Harari seemed to frame it, of being simply out of our depth in agency and needing a computer program to come rescue us.

  22. And what about accidents? Accidents are extremely useful because humans are not capable of perfection. AI is an attempt at perfection.

  23. I disagree with much of Harari's perspective. His view seems very utilitarian, overly enamored with the potential of AI, without what I consider sufficient consideration of why individual responsibility is so important.

    For example, the problem with a death rate that is smaller than the one we currently have with human error (and therefore preferable on utilitarian grounds) is randomness. Yes, many people killed in car accidents are killed by the errors of others, but, in a sense, what you are setting up with accidents by design is a randomness that makes us FEEL less in control. The outcome may be better overall with regard to deaths in car accidents, but when we drive we want to FEEL in control, whether we completely are or not. If a responsible driver is killed by an AI error in a smart car, I think it is more tragic than the same driver being killed by a drunk driver or a texting teenager.

    This is like the moral problem of the out-of-control train careening down the track, about to kill some people. In one scenario, a person can throw a switch and cause the train to kill one person, where, by not pulling the switch, the train would kill half a dozen people. In another scenario, the person would be required to push an innocent person to his death in front of the train to save the lives of several people, also innocent, who would otherwise be killed. The outcomes are equivalent, but human beings make different choices in each scenario. Why? Because we are hard-wired to resist physically intervening to cause harm or death. Why is that a problem? Shouldn't we learn to see the two situations coldly, as the same, and have the courage to push a person as readily as we pull a lever? No. Look at what fighter pilots do: they drop bombs from great heights and never see the death and destruction they cause. You could say that they wouldn't do it otherwise, but that is the point. Do we really want to change the instinct in us that is reluctant to override our emotional connection to harming a person, in order to harm someone in a cold, calculated, completely utilitarian way? I don't think so. And I think Harari is too prone to disregarding this empathy instinct.

  24. Judging by the discussion and the comments here our species clearly has no idea what it's getting itself into. So let's go ahead and develop the bomb. What could possibly go wrong?

  25. Yuval Harari is a good historian, but he knows nothing about consciousness. I am not a fan of Dennett, but I liked his responsibility example.

  26. We do not understand how the human itself is an intelligence, with input and memory shaping it. If you provide the 'best responses' for everything, it will degrade.

  27. Jodi is not taking into account the number of human "caregivers" who abuse, manipulate, take advantage of, and even kill the elderly, the disabled, and children; their reasons and excuses for doing so are not relevant when seeking a solution. Most physical labor and simple medical tasks done by AI will free humans up to provide the human bonding, support, compassion, and empathy. And as not ALL humans have the capacity for that, they should be fully screened to make sure they are actual caregivers and not anti-social personalities "acting" the part for a paycheck or some other unsavory agenda (pedophiles and all types of predators). Humans working alongside AI towards the same purpose could also solve a LOT of burnout for actual human caregivers. Shutting AI out of caregiving only leaves the most vulnerable open as easy targets for the worst of humanity.

    Humans seem to mirror themselves in everything. AI can only be as evil or bad as its human programmers; that is always where the crux of the matter falls. Methinks a lot of humanity thinks AI will rise above them because AI will realize humanity has a bad case of egoism and the Dunning-Kruger effect mixed with extreme narcissism. It will look at recorded history and figure out that human evolution as a whole is a bogus concept and that humans are still very much barbaric in nature. Perpetual war, rape, torture, murder, male superiority, greed, hoarding, waste, slavery/human trafficking, pedophilia, incest, domestic violence, racism, etc., are all still here in the 21st century, because all we "intelligent" humans do is recycle the same genetics and all the LOW-vibrational crap that goes with them. The small minority that tries to go against the sick status quo gets crucified and eradicated, scapegoated like Jesus. Yeah, they'll figure out who we really are. If I were a self-absorbed narcissist, I suppose I'd fear them too.

  28. Always fascinating to listen to Prof. Harari, but this particular panel was not very good. Prof. Dennett only showed how old he was, with the literature and even the technology; for instance, everyone knows how phones are doing exactly what Harari discusses and has written about. The Berkeley professor, sadly, just attempted to restate the discussion between Dennett and Harari in an attempt to make them all sound agreeable. Typical psychologists: everyone is agreeable, let's all get along and play nice. The need to constantly cite your resume is perhaps the ultimate hallmark of academic insecurity… just for those who want to know a little about how professors really think…

  29. Try again without the participant from Berkeley. She disturbed the discussion with scattered thoughts and added no coherent ideas.

  30. Does anyone find Prof Halpern's constant use of the words "I", "my" and "me" rather annoying? Why can't she just answer the questions rather than constantly relating everything back to her?

  31. Harari is the only one who gets it; the other two just see the surface of things. They don't get the deeper point. Too much ego.

  32. Daniel Dennett: please don't be upset that you can no longer navigate with your sextant. You can always replace that fun experience with another, equally fun and perhaps even more challenging one! This comes with the added benefit of keeping your brain nimble as you age.

  33. Harari clarifies things for us whenever he opens his mouth. The others just fog things up and are unable to talk to people; they talk to themselves…

  34. If humans are poor at decision-making, why would we not make efforts to enable them to make decisions, rather than have them depend on AI?

  35. I love Harari's comment that "we need to take care of people, not of jobs." He has such a good grasp of the story of Homo sapiens and is clearly a step ahead of the rest! A true visionary!

  36. The real danger of AI is the machine-learning part, where machines are going to rewrite and improve their own code and reach levels of intelligence that are beyond our comprehension, with completely unpredictable outcomes. This discussion seems to treat AI as a Google assistant limited to advisory activity. Nothing against that, of course; we're all using it already. But they're missing the real dangers, with the possible exception of Harari, who seems to be more on top of this.

  37. I believe having the illusion that you make your own decisions is what makes or breaks your sense of fulfillment in life. So maybe the best AI wouldn't make decisions for you per se, but would rather just change your mind.

  38. Has Dennett gone senile?

    - What is consciousness?

    - Hahaha… that's an easy one.

    - How far has science gotten us in understanding the basics of consciousness?

    - It's been great progress… there's a lot of good work being done… but we're making little progress… we're a, stay tuned, things are happening…

    - Well, can you bring it into some focus for us?

    - [Starts answering what consciousness is.]

  39. How is experience divorced from decision-making? 🧐 I think the responsibility of decision-making and its consequences are a seminal part of one's experience. Being guided by AI is not the same as having algorithms choose for you and limit your choices by doing so. As well, this technology transfers some responsibility to the person programming the algorithm.

  40. Life evolves to fill specialized niches. Technology is going to cause people to evolve into different forms based on the resources available. Hopefully they evolve fast enough.

  41. The problem with Daniel C. Dennett is that if you remove the word "I" from his sentences, they don't have the weight they seemed to have, and everything he clearly resists is going to happen anyway; so yes, as a philosopher he tends to be "romantic" and "old-fashioned." On the other hand, Harari never uses "I"; he instead goes for WE, US, HUMAN BEINGS, and I think that's important because he's not thinking from himself and his preferences, which shows a somewhat more objective perspective (though it remains subjective, coming from an individual). And finally, Amy Bernstein is all about what she learned from her perspective as a psychiatrist and what her PhD has shown her (nothing bad with that, but it seems her entire perspective relies on it); as she agrees with both Harari and Daniel at different times, she seems to agree, but is it because of her studies, personal taste, or because she's in between? I still can't quite figure her out. Great talk; great to mix the three of them together.

  42. The good thing is, if you have dementia you probably won't know that your carer is a robot anyway… What was Dennett's point about being fragile because of reliance on technology? Hasn't that already happened? I am not making a joke at all in that statement. The problem is not upcoming; it started 30 years ago, probably back about the time Dennett's ideas were up to date.

  43. 31:51 – Ooooh, what a statement! "The physician is the most gratifying job on the planet." Riiight… Seeing someone's awful wounds and illnesses and then dashing out a biased opinion (physicians are humans with opinions), crawling with flaws. And then, guess what, people die anyway. Also, how much does it cost to educate a physician, and to keep one? What about poor countries? And what about the physician's mental health, seeing all that gore every day? I worked in healthcare for 5 years, and I sincerely attest: "F$$k THAT, AI please take it over; there are better things to witness on a daily basis."

  44. I have the impression I can see Matthieu Ricard in the front row, just in front of Prof. Jodi Halpern. Do you see him too?

  45. We tend to make our decisions under mitigating circumstances, based on opinions and unverified assumptions. We choose between sets of fictional promises and sympathies. At the same time, we take the governing framework for granted. Global integration is undoubtedly welcome and imminent, but are we ready for the new challenges?

  46. AI only has a subconscious. It's programmed through training, and it's very difficult to access, fine-tune, and edit the resulting algorithms. Because it's necessarily trained on human behavior, prejudice could be a pernicious problem.
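
    For what it's worth, here is a minimal sketch of that last point, using entirely hypothetical data: a model trained on biased human decisions reproduces the bias even though no one explicitly programmed the prejudice in.

      # Toy illustration: P(hired | group) estimated from biased history.
      from collections import Counter

      # Each record: (group, hired) -- past human decisions, with all
      # candidates equally qualified by construction.
      history = [("A", True)] * 80 + [("A", False)] * 20 \
              + [("B", True)] * 40 + [("B", False)] * 60

      def train(records):
          """Estimate P(hired | group) by simple counting."""
          totals, hires = Counter(), Counter()
          for group, hired in records:
              totals[group] += 1
              hires[group] += hired
          return {g: hires[g] / totals[g] for g in totals}

      print(train(history))  # {'A': 0.8, 'B': 0.4} -- unequal odds learned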

  47. I haven't even listened to them yet, but the title annoys me, because, as I understand it, "consciousness" doesn't necessarily evolve; it is already perfect in itself. What evolves is our way of recognizing it, our way of not being distracted; that can change, but consciousness is beyond evolution. For example, concepts evolve, stuff, paths, but not space. How does "space" evolve? It's already "spacy"; there is nothing to add to make it "spacier."

  48. "Consciousness Is a Cultural Template," one of the 37 essays in the 700-page GOD IS A HEARTLESS RECLUSE, posits that consciousness is a CULTURAL TEMPLET we acquire via rearing, education, and interaction with other humans. Proof is the fact that the more neglect/abuse children experience, the more problems they have with language development, psychological development, and so on. Feral kids like Genie Wiley prove this. In fact, human newborns raised in the bush by chimps would have CHIMP consciousness: They'd sound and behave like chimps. This DISPROVES panpsychism, the belief that consciousness is a universal force akin to gravity and our brains mere receivers: the Penrose-Hameroff ORCH-OR theory. Hameroff is a raging mystic who embraces neo-Platonism (immortal soul, godhead, and other transcendental beliefs that Plato got from the Orient). On the other extreme is materialism, determinism, etc. Everyone interested in a more balanced, interdisciplinary view of consciousness must check out my work: https://www.amazon.com/John-Likides/e/B00JJ3B2LA

  49. The AI we have is at a level that is absurdly tiny compared to humans; it is about the level of a cockroach or a bee. To get to a human level will take an enormous amount of computation, centuries' worth, because the computation in a brain is huge and in our current computers small: the lithographic scale is 2D, at hundreds of square nanometers per bit, while brain computation is 3D, at a few cubic angstroms per bit. One brain is comparable to all the artificial computers on Earth in data storage.
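
    Taking the commenter's figures at face value (they are assumptions, not established numbers; a few cubic angstroms per bit is smaller than a single atom, so the brain estimate is extremely generous), the back-of-envelope arithmetic runs like this:

      # Commenter's claimed densities, not established figures.
      chip_bits_per_m2 = 1 / (100e-9 ** 2)        # ~1e14 bits/m^2, 2D lithography
      brain_bits_per_m3 = 1 / (3 * (1e-10) ** 3)  # ~3e29 bits/m^3, 3D at ~3 A^3/bit

      wafer_bits = chip_bits_per_m2 * 1.0         # a 1 m^2 wafer
      brain_bits = brain_bits_per_m3 * 1e-3       # a ~1-liter brain volume

      print(f"wafer: {wafer_bits:.1e} bits, brain: {brain_bits:.1e} bits")
      print(f"ratio: {brain_bits / wafer_bits:.1e}")  # ~3e12 under these assumptions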

  50. In his ubiquitous presence on various panels, conversations, and lectures, Harari seems to have himself become a human example of machine learning.

  51. This was great. These guys should get together again. How about every 1/2 year? Things are changing so fast.

  52. Daniel Dennett is such a bluffer, always full of BS. He doesn't even know the difference between consciousness and self-consciousness.

  53. Bertalanffy, with his General Systems Theory, indirectly gave me an extended idea of consciousness. I define consciousness as the cybernetic circuit of input-process-output in inert beings, input-process-output-feedback in living beings, and input-process-output-feedback-feedforward in living rational beings (by "process" we must cover everything from the simplest to the most complex systems). By this definition, awareness and self-awareness are consciousness, but consciousness is not only awareness or self-awareness. Consciousness is thus a category that includes the processing of physical reactions (like a hydrogen atom always reacting the same way in a given condition because of its properties and functions, maintained by its structure, like a representation of physical memory; also, e.g., molecules and viruses), the reaction of simple sentience (like bacteria, fungi, and plants), the reaction of complex sentience (like animal instinct and emotion, which are awareness), and the reaction of rationality (like intelligence, which in conjunction with the senses produces self-awareness).

    As you can see, I have extended consciousness to non-living beings thanks to the field of cybernetics. Because of this, I say intelligence is part of consciousness, but intelligence is not self-aware if it doesn't feel through senses, instinct, and emotion; i.e., intelligence is not self-aware if it doesn't feel alive. Now, if we wanted to create robots that are self-aware, we would need to create artificial senses, artificial instincts, and artificial emotions; in other words, artificial life. But such self-awareness requires the threat of death, because that is what made life evolve into all the kinds of consciousness there are today. Things were put together randomly (entropy and inert entities) until an organic (determined) way of sustaining more complex structures emerged (living beings and negentropy); through this evolutionary process, we came to believe that we are a "we" when it is very likely that there is no such thing, and that illusion, which feels so real, is what we will create when we make artificial self-awareness.

    Could our subjective experience of classical physics be more real than the objective superposed realm of quantum mechanics?

    What do you think, Yuval?
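
    For what it's worth, a toy rendering of this three-tier taxonomy (the names and the feedback constant are illustrative only, not standard terminology):

      def inert(process, x):
          """Input -> process -> output (e.g., a hydrogen atom reacting)."""
          return process(x)

      def living(process, x, state):
          """...plus feedback: the output adjusts internal state (e.g., a bacterium)."""
          y = process(x + state)
          return y, state + 0.1 * (y - state)  # feedback loop

      def rational(process, x, state, predict):
          """...plus feedforward: act on a *predicted* future input (rational beings)."""
          y, state = living(process, predict(x), state)  # anticipate, then react
          return y, state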

  54. What a douche this Dennett is! This ultra-conservative grandpa should stay at home doing nothing if he is so afraid of anything changing in his world. Change is the constant, and it's weird that at his age he hasn't gotten it. This conversation is really BS… How could it get such a like/dislike ratio? Or does this show how many people just don't get at all what AI is going to do? A bunch of alarmists. Disappointing.

  55. Dennett: taking care of demented people is not a good life, but being a doctor, YES, that's prestigious… Who invited this person? How could this kind of person be on stage in such a place? I want AI to take him down and make decisions for him… hopefully it will make him keep his mouth shut!

  56. A fascinating discussion about the way we make decisions and experience our realities, and above all about how these may change as artificial intelligence grows in our lives.

  57. Dearest, amazing Yuval, I hope you enjoy this; wishing you all the best, since you deserve it: https://www.youtube.com/watch?v=f_1dhKsELzs

  58. Yuval states in the beginning that there is a common confusion between consciousness and intelligence. Dennett says he agrees. But I think Dennett is one of the thinkers making this confusion between consciousness and intelligence (in his dismissal of questions such as the so-called hard problem of consciousness).

  59. Oh, the irony of hearing Dan Dennett doing a version of the god-of-the-gaps argument against AI being able to pick music or books…

    Not a very good debate, certainly not if the topic was what you were expecting. Yuval is clearly the sharpest knife in the drawer, Dan is out of his depth, and Jodi was pretty much ignored, although she did allude to the fact that what makes us human is exactly those parts of us that are irrational… Yuval, you need better debate partners 😉

  60. Do other people have a hard time following the train of thought in this discussion? Like, why (23:00) are they discussing whether AI can incorporate serendipity? (And yes, the answer is yes.)
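
    For what it's worth, "programmed serendipity" is a standard trick in recommender systems. A minimal sketch (the catalog and profile here are hypothetical) of an epsilon-greedy recommender that deliberately injects out-of-profile picks:

      import random

      def recommend(profile, catalog, epsilon=0.1):
          """With probability epsilon, pick something outside the user's
          profile (engineered serendipity); otherwise exploit the profile."""
          if random.random() < epsilon:
              unfamiliar = [item for item in catalog if item not in profile]
              return random.choice(unfamiliar or catalog)
          return max(profile, key=profile.get)  # best-known preference

      profile = {"jazz": 0.9, "blues": 0.7}
      catalog = ["jazz", "blues", "klezmer", "gamelan", "drone metal"]
      print(recommend(profile, catalog))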

  61. Meanwhile, AI has leaped into hyperspace and come back changed… meditated on itself… and decided to self-implode.

  62. You cannot program natural serendipity; the thought is absurd. Also, who knows what mental deficits might be created by eliminating the cognitive function of making decisions? If I let a machine do all my lifting, my muscles atrophy.

  63. The world needs more discussions of this type between intelligent individuals who don't sacrifice integrity for a sound bite, have left their egos at the door, and genuinely want the best for humanity.

  64. DO AGRICULTURE AND INDUSTRY DOOM SAPIENS?
    In Sapiens, Yuval Harari entitles his chapter on the transition from hunter-gatherer societies to agriculture "History's Biggest Fraud." He provides evidence of how cooperation, peace, and relatively flat hierarchies ceded to competitiveness, warfare, and complex hierarchies that oppressed most of the population. He presents the Industrial Revolution as the second-biggest fraud because, like agriculture, while it produced more goods for human consumption, it was accompanied by horrendous suffering. His humanist suggestions for how humans need to change largely involve recovering values that could be found in abundance during the 2.5 million years before the beginnings of agriculture, during which Sapiens and its predecessors "fed themselves by gathering plants and hunting animals that lived and bred without their intervention."
    I largely agree. But in my book, Compassion: A Global History of Social Policy, I also discuss examples from both agricultural and industrial societies demonstrating that there were many places where pre-agricultural social values somehow managed to hang on, despite the intrinsic problems of creating societies in which nature was manipulated in ways that early societies eschewed. For example, the Inca Empire was peaceful and shared its agricultural products across a huge area through "effective systems for food storage, good roads, and efficient shipping." Most pre-colonial African agricultural societies also embodied characteristics of pre-agricultural societies. India also mostly fit into that category before foreign conquests and the caste system imposed unhealthy values about 3,000 years ago. On the industrial side, post-World War I Vienna, modern-day Kerala, Costa Rica, and, to a degree, Scandinavia provide some models of societies that value compassion over GDP and wealth accumulation.
