
    Can you fall in love with AI?

    Does AI have beliefs, desires, plans, even emotions? Does it want anything? Does it want to answer our questions? Is it capable of having desires and goals? – Prof. Herman Cappelen

    A leading authority in the philosophy of artificial intelligence, Chair Professor Herman Cappelen is the director of the AI, Ethics and Society MA programme, director of the AI&Humanity-Lab, and co-director of ConceptLab at the University of Hong Kong. The author of some of the most influential papers in the field, he has also written several books on the philosophy of AI. In this extensive interview with our science editor, Dr Pavel Toropov, Professor Cappelen explains how AI is changing us and how humanity can survive AI.

    ❓ Dr Pavel Toropov: You are a philosopher. It is easy to see why we need a computer scientist, an engineer or a mathematician when it comes to AI, but what is the role of a philosopher?

    💬 Professor Herman Cappelen: We philosophers have spent centuries trying to understand what it means to have a thought, a desire, a goal, a fear, and love. Now we use those theories to investigate whether AI can have thoughts, desires, hopes, fears, and love. You are not trained to answer these questions if you are a computer scientist.

    As we build more and more powerful AI, we want to know about its capacities. For example, if you create something that has desires, consciousness, awareness and emotions, then you might have made something that’s an agent. You then might have moral and ethical responsibilities towards it. Again, these moral questions are not something computer scientists are experts on or even trained to think about.

    If you create something that has its own goals and desires, is really, really smart, and is running our economy, electricity system and the military, you’ve created a risk. That’s why people talk about AI risk and develop strategies to make us safe from potentially dangerous AI.

    ❓ So, are philosophers “psychologists” who help us understand what is going on “in the head” of the AI by providing analysis beyond what algorithms and mathematics can give us?

    💬 Psychologists are primarily trained to answer questions about humans: human fear, human anxiety, human development… They are not trained to think about, or experiment on, things made from silicon. With AI, there is now a whole new field that takes the concepts we use to describe the human mind and applies them to a new kind of creature. The bridge from talking about human cognition to AI cognition is built in large part by philosophers.

    It might be that we need a whole new terminology for the “psychology” of AI. Maybe AI does not want things in the way that we do. Maybe AI does not hope or fear the way humans do, maybe it has other (psychological) states, different from ours.

    ❓ What does your work on AI involve?

    💬 One part of my work is about understanding the way systems like ChatGPT understand language, the ways AI can and cannot communicate, and the ways in which it can be similar to or different from humans in its grasp and construction of language. We want to know whether they are communicative agents like us, or a totally different thing.

    ❓ What do you mean by “totally different thing”?

    💬 Some theorists claim that AI systems don’t understand language at all. On that view, they are like parrots that produce human speech but don’t understand what they produce. Something goes in, something goes out, but there is nothing there.

    I worked on the nature of language and the nature of communication long before anything like ChatGPT came about. When ChatGPT came out, I thought it was incredibly interesting from the point of view of my research, because we may now have created systems totally different from us that can communicate. This has given us new insights into the nature of language and communication.

    ❓ What kind of insights?

    💬 Almost everything we know about language and communication comes from studying humans, but now it turns out that there are many other non-human ways to communicate. ChatGPT is doing it, and maybe even better than us! I wrote a book on this topic: “Making AI Intelligible”.

    (Making AI Intelligible can be accessed here: https://arxiv.org/pdf/2406.08134)

    ❓ In what ways can ChatGPT be better at communicating than humans?

    💬 It does not get tired. It is not going to get up and leave because it has another meeting to go to. It remembers. It always talks about what you want to talk about. Its ability to process and produce conversation is much better and faster than ours. ChatGPT can produce everything I will say in the next half hour in ten seconds, and it could process this entire conversation in two seconds.

    ❓ In addition to language, what else interests you, a philosopher, about AI?

    💬 Does AI have beliefs, desires, plans, even emotions? Does it want anything? Does it want to answer our questions? Is it capable of having desires and goals? Can it be held morally accountable? Do we have moral obligations towards AI? Are they potentially our intellectual superiors, and if so, how do we react to that? My next book on AI, “The Philosophy of AI: An Opinionated Methodological Guide”, is in large part about these issues.

    ❓ So, does AI have goals?

    💬 I think the answer to this is yes.

    ❓ Why?

    💬 Because AI exhibits the kind of structural behaviour that, in a human, we would count as having a goal. It behaves as if it had a goal, a plan, and some beliefs, and then acted on those beliefs.

    ❓ Could you clarify?

    💬 For example, you ask it questions and wonder: why is it giving me answers? Well, one way to explain it is to say that the AI wants to answer your questions, it understood your questions, it thinks that the answer is “bla bla bla”, and so it says “bla bla bla”. That’s the kind of explanation that we use for humans.

    The people who disagree say: “Yeah, but that’s not really what’s going on, because it is really just processing symbols and just predicting what the next word is.” I think this is a horrible argument, because you can say the same about humans: “What’s really going on is just processing in some fatty tissue inside the brain.” That would be a bad argument for the view that humans lack goal-directed behaviour, and it is an equally bad argument in the case of AI.

    ❓ So, do you think that AI can have a mental life?

    💬 I’m confident that people who say: “AI cannot speak a language, it cannot have a mental life”, are basing their views on bad arguments. The big picture is that I don’t think we can be sure about the answers until we have resolved some extremely hard philosophical questions.

    We used to think that non-human animals couldn’t think, plan, or have emotions. Now we are much happier to accept that elephants and dolphins, cats and dogs, can communicate and have rich cognitive, emotional and social lives. You don’t have to be a human to have these. Mental and cognitive life can exist in things very different from us. But then there is a big leap – could it also exist in something non-biological? I am not convinced by the arguments that it cannot, and I think there are strong arguments that it can.

    ❓ One of the main worries about AI is that it will replace humans in every job imaginable. What is your view on that?

    💬 Some jobs will go very fast. There are a lot of things AI cannot do now, but this will change fast. People say that AI makes mistakes, but I am completely unmoved by that. Humans make far more mistakes than AI, and the rate at which AI improves is unbelievable. Chances are that those deficiencies will disappear really fast.

    But a surprising amount will stay, because we want human-to-human interactions. And in more areas than you might think! Let’s look at chess. Humans are much, much worse at chess than even a cheap computer. The best chess player, Magnus Carlsen, can never beat the best computer.

    But the important thing is that nobody wants to watch two computers play each other. People want to play people and watch other people doing the same.

    I asked my 13-year-old daughter: would you listen to a musician who is an avatar and creates the same music that you like? She said: of course not! We care about people, and a lot of the things we do are about people. I would not talk to you or care much about this interview if you were an avatar sitting in front of me.

    ❓ Let’s talk about the risks that AI poses to humanity. This is another area of your work, correct?

    💬 Will a superintelligence turn on us? I don’t know if it will, but it is possible.

    I have a new paper that’s co-authored with Simon Goldstein. It’s called “AI Survival Stories”. It is about the ways in which we can survive AI.

    There are two broad categories of how we can survive: what we call plateau stories, and non-plateau stories.

    One kind of plateau is technical. We keep developing AI over the next few years, but nothing significantly better comes of it. For some reason that we could not foresee, the technology stops improving; it flatlines.

    Another plateau story is cultural. Maybe we’ll eventually treat dangerous AI the way we now treat biological or chemical weapons. One way that could happen is that someone uses AI in very dangerous ways, with horrific consequences. After that, AI is perceived as a threat to humanity, and there is a worldwide effort to block the creation of dangerous AI.

    The other scenario is that there is no plateau, and AI becomes superintelligent and superpowerful. Then there are two ways in which we can survive. Either we make sure that AIs are aligned with our values and are always nice to us, or we control the AI systems and somehow make sure that they can’t harm us.

    A technical plateau is not super likely. A cultural plateau is not that likely. Alignment is incredibly difficult. Trying to control a superintelligent AI seems hopelessly difficult. What you think the probability of humanity surviving AI is depends on how likely you think these various survival stories are.
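
    To make that dependence concrete, here is a back-of-the-envelope sketch; the framing and all the numbers are illustrative assumptions, not figures from the paper. If these four stories are the only routes to survival and you treat them as roughly independent, with probabilities p_tech, p_cult, p_align, and p_control, then

        P(survive) = 1 − (1 − p_tech)(1 − p_cult)(1 − p_align)(1 − p_control)

    Hypothetical values of p_tech = 0.1, p_cult = 0.1, p_align = 0.05, and p_control = 0.05 give P(survive) = 1 − (0.9 × 0.9 × 0.95 × 0.95) ≈ 0.27. The numbers are invented; the point is only that your overall estimate is driven by how likely you find each individual story.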

    ❓ Replacing human jobs and threatening humanity aside, AI is now affecting our daily and emotional lives. In the 2013 film “Her”, a man falls in love with an AI chatbot. This seemed like pure fiction then, but now we are not so sure. Can AI change us?

    💬 AI can change us in very unpredictable ways. It changes our language, changes how we classify the world, changes what we think is important in the world, and changes how we think about each other. It is all very new, only two years old, but the change is unbelievably fast. I have never seen anything change that fast.

    There is now a huge market for romantic chatbots – for boyfriend, girlfriend, partner, whatever bots. People can now develop a super emotional, romantic relationship with a chatbot.

    I was just at a conference in Germany where one of the speakers had created an AI robot to help interact with people with dementia. It was remarkable how often the patients preferred it to the overworked and stressed nurses.

    There is something great about these chatbot friends. They are never tired; they are never in a bad mood. They don’t ask for anything in return. If we get used to that as a model of friendship and intimacy, might we expect that from humans too: why is he or she not like a chatbot?

    One thing that I think is happening already is that words like “friendly”, “empathy” and “relationship” are changing in meaning. Our classification of emotions and cognitive states is in flux, changed by our interaction with AI systems.

    Our ConceptLab at HKU is about how language changes. And these kinds of language changes happen when we are in new situations. Now we are in a new situation – one where we can talk and engage with AI systems.

    ❓ Continuing on the subject of AI films, which films about AI do you think come closest to reality?

    💬 The Matrix is an amazing film, a brilliant illustration of a very important philosophical issue.

    In the film, AI has created an illusion of life, and you are not a person but a figment of a big computer system. That is something philosophers think about, and The Matrix is an ingenious illustration of the possibility. The philosopher Nick Bostrom has a super famous paper arguing that the probability that we are in such a simulation is pretty high.

    It is a very simple calculation. What is the probability that AI systems will be able to generate a completely realistic world? Pretty high. How many of these worlds are there going to be? Probably a lot. How many real worlds are there? One. So, what’s the probability that you are in the real one? Pretty small.
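
    As a rough sketch of that arithmetic (the specific numbers are illustrative assumptions, not Bostrom’s): suppose there is one real world and N simulated worlds indistinguishable from it, and you have no other evidence about which kind you inhabit. Then

        P(you are in the real world) = 1 / (N + 1)

    With N = 999 simulated worlds, that probability is 1/1000, or 0.1%. The exact value of N matters little; as long as simulations vastly outnumber the single real world, the probability of being in the real one stays small.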

    ❓ And finally, will AI replace philosophers?

    💬 I’m not sure. There will probably be AI systems that can generate philosophical arguments better and faster than any human. What I’m less sure about is whether people will want to talk to and interact with this system.

    Recall the point from before: lots of people want to pay attention to Magnus Carlsen playing chess, and no one cares about the games played between chess computers. Maybe philosophy will be like that? I think we’ll be harder to replace than, say, computer scientists. Their function is to produce a certain output, and if something can produce it cheaper, faster, and better, replacement is likely.

    Philosophy has human connection at its core and so is harder to replace. But maybe not impossible.

    👏 Thank you, Professor Cappelen!

    🔍 Many of the questions discussed in this interview are part of the curriculum of the AI, Ethics and Society MA at the University of Hong Kong.

    For more information: https://admissions.hku.hk/tpg/programme/master-arts-field-ai-ethics-and-society