“My hope is that when people think about issues like AI and organisation, they think about CAMO.” — Professor Jin Li
When Professor Jin Li launched the Centre for AI, Management, and Organization (CAMO) earlier this year, his immediate goal was filling a gap in the scholarship on artificial intelligence and its impact on the future of work.
But Professor Li’s long-term vision is even more ambitious: Transforming the University of Hong Kong into a global leader in AI, Management, and Organisation research, positioning it at the forefront of a revolution that will shape the next half-century of work and corporate governance. In his own words, he hopes the centre will contribute the “grand ideas” and frameworks societies need to navigate the AI transition.
Drawing on the combined expertise of faculty from across the university – plus a board of advisers featuring members from the University of California, Berkeley, Columbia, the Massachusetts Institute of Technology, the University of Tokyo, and the London School of Economics – CAMO is well on its way to reaching that lofty goal. We asked Professor Li about what drew him to AI, how artificial intelligence is changing the future of work, and what’s next for CAMO.
The Pursuit of Happiness
When Professor Li was a young student growing up in Shanghai, his school emphasised that the “purpose of life is the search for excellence.” A top student at one of Shanghai’s best schools, Professor Li’s only weak subject was chemistry. The harder he worked and the more he struggled, the more he began to wonder whether there was more to life than excellence – that perhaps the real purpose of life was the search for happiness.
He found his answer in an introductory economics class at the California Institute of Technology (CalTech). As his professor explained that individuals maximise utility, he remembers thinking that this seemed an awful lot like maximising happiness.
“Now is the best time to study AI.” — Professor Jin Li
Although not directly involved in AI research at the time, a number of his friends and classmates would go on to play pivotal roles in an earlier wave of the AI revolution, and their work sparked his own interest in the field. Now, as he enters what he calls the “second curve” of life, he sees AI research as both important and a way to pursue a topic that has always interested him.
Out of the ‘Stone Age’
Professor Li is fond of a famous line from Edward O. Wilson, in which the American biologist notes that we live in a world of “Palaeolithic emotions, medieval institutions… and god-like technology.” He envisions CAMO as contributing to the upgrading of those institutions to keep pace with technological change.
As an example, he points to the way many firms have struggled to update corporate governance and workplace norms for the AI age. Previously, the ideal firm was mid-sized: big enough to achieve economies of scale, but not so big that it became bogged down by bureaucracy – a reflection of what economists call the “U-shaped” relationship between firm size and average cost.
“As economists, as management strategy scholars, we don’t have that much to do with technology. We cannot change human nature either. What we can do is to think about new organisations, new institutions.” — Professor Jin Li
Now, however, the AI boom has helped fuel the rise of both unicorns and the FAANG tech giants (Facebook, Apple, Amazon, Netflix, and Google). That tectonic shift has caught many companies off-guard, with CEOs and even consultants unsure of how to adapt. “Firms need to have a playbook,” says Professor Li. “They need to have a framework for how to move forward.”
The Future of Work
To help write that playbook, CAMO has already surveyed more than 100 C-suite leaders and 500 HR reps as it works on a practical guide for companies navigating the AI transition.
One of the centre’s current points of emphasis is identifying the jobs humans don’t want to do, helping firms to decide on automation priorities without exacerbating popular fears of AI “replacing” workers. This approach is also what sets CAMO apart. “There are very few centres that focus on organisation,” Li says. “It may sound cocky, but I don’t think there are more than three to five institutions in the world with the same calibre of people as us.”
In addition to laying the “intellectual foundation” for the study of AI and the future of work through his research at the centre, Professor Li is also hard at work on a new book about “The Great Compression.”
“I like to call incentive and knowledge the ‘yin’ and ‘yang’ of AI.” — Professor Jin Li
The access to knowledge promised by AI has spawned new incentives – to cheat, to game the system, to “shirk” – all of which managers must understand and learn to spot. Professor Li’s book will explore these two sides of AI, as well as their combination, which he identifies as “power.” In the process, he hopes to help managers better navigate both the opportunities and risks of the AI era.
If all that seems daunting, Professor Li would likely agree. When asked about the biggest challenges he’s facing, he replies quickly: “Time.” There are so many interesting and exciting potential projects, he explains, but the CAMO team must be selective and focused in its priorities. “It’s such an exciting time that I’m not getting enough sleep,” he says with a wry smile. “But now we’re being bombarded with so many interesting possibilities.”
“To put it simply, computer vision means giving machines the capability to see.” — Professor Xiaojuan Qi
Professor Xiaojuan Qi works at the Department of Electrical and Electronic Engineering at HKU where she is a member of the Deep Vision Lab. Her work covers deep learning, computer vision and artificial intelligence. In this interview with our science editor, Dr Pavel Toropov, Professor Qi talks about self-driving cars and building virtual worlds.
❓ Dr Pavel Toropov: What is the main direction of your work?
💬 Professor Xiaojuan Qi: Computer vision and artificial intelligence. To put it simply, computer vision means giving machines the capability to see. Humans can see the 3D world – the objects, the relationships between them, and a lot of semantics. Then we make decisions for our many activities in the 3D world.
In order for a robot, a machine, to go around this world, it must also be able to see. It must recognise different objects and estimate their geometry. This has a lot of applications, one of which is self-driving cars. For a car to be able to drive automatically, it must have the ability to see what is in front of it and what obstacles there are, forecast the behaviour of other agents, and plan how to drive safely.
An automated driving system has several parts. One part is about perception – how can a car get knowledge from the environment? Most of this knowledge is visual data from digital cameras and Lidar. Lidar is used to detect 3D objects, and, based on that data, the car can make decisions, like adjusting speed or turning. Our algorithm helps the machine better analyse this data, better understand what is happening around it.
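As a rough illustration of the perception-to-decision pipeline Professor Qi describes, the Python sketch below wires a placeholder perception step to a toy speed-adjustment rule. The function names, the detection format and the thresholds are invented for this example; they are not taken from her lab’s systems.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A 3D object hypothetically returned by a perception module."""
    label: str         # e.g. "car", "pedestrian"
    distance_m: float  # distance ahead of the ego vehicle, in metres

def perceive(camera_frame, lidar_points) -> list[Detection]:
    """Placeholder for a learned perception model that fuses camera and Lidar data.
    Here it just returns a fixed scene so the example runs end to end."""
    return [Detection("car", 35.0), Detection("pedestrian", 60.0)]

def plan_speed(detections: list[Detection], current_speed_kmh: float) -> float:
    """Toy decision rule: slow down if anything is closer than a safety margin."""
    nearest = min((d.distance_m for d in detections), default=float("inf"))
    if nearest < 20.0:
        return 0.0                             # emergency stop
    if nearest < 50.0:
        return min(current_speed_kmh, 30.0)    # cautious speed
    return current_speed_kmh

detections = perceive(camera_frame=None, lidar_points=None)
print(plan_speed(detections, current_speed_kmh=60.0))  # -> 30.0
```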
Another application is medical. We develop AI to automatically analyse medical images to make diagnoses more informed and more precise, and to reduce the possibility of mistreatment.
Another exciting area is AI for science. I am collaborating with the Department of Chemistry, and we have developed an AI algorithm to improve the resolution of electron microscope images. This can help biologists make discoveries.
❓ Automated driving and AI are not new – what does your research contribute to this field? What are your strengths?
💬 In order to test if an automated car can drive safely, we need a simulation platform. What we are currently doing is building a simulation environment so that we can help train the models and evaluate whether the car can drive safely in a real environment. Do you know (the massively popular computer game) Wukong?
❓ Of course!
💬 The scenes in this game look very real, and the reason is that the developers used Lidar to scan objects, historical buildings especially, in Shanxi province (of China). They did a reconstruction of them and imported them into the virtual environment – the computer game.
This is very similar to what I am doing. Using such scans, but without relying on expensive Lidar scanning techniques, we reconstruct the world into virtual space, mostly using images shot with a digital camera. We create a completely new reality!
Another strength is that we are working to make the algorithm run on casually captured data. For example, in Wukong, they needed experts to scan objects and do reconstruction, but what we are doing allows anyone, not only experts, to use their phones to scan. Then we can make algorithms that can reconstruct the scenes.
❓ So you reconstruct, or build, a new reality, a virtual world, to train or test automated cars and robots?
💬 Yes. We can use Lidar or digital camera scans of a room or a city and turn the real world into a digital space using algorithms. Besides, we also create models that can generate 3D objects, such as tables and chairs. And in this reconstructed or recreated digital world we can train our algorithm and test if it makes mistakes or not.
❓ What is the advantage of using the virtual world for training and evaluating algorithms?
💬 We can get data from interactions – for example a cleaning robot must move a table in the virtual world – and this can then be used to train agents – robots – to interact with the real, physical world. Training in the real world is expensive, and not safe – the robot can break objects, harm humans. But in the virtual world we can produce an infinite amount of data and interactions.
Besides, we can create what is called corner cases and improve safety. These are cases that happen very rarely in reality, but are critical – for example, two cars colliding. We can create these scenarios and let the car learn what to do.
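One way to read the “corner case” point is as a data-balancing problem: scenarios that are rare on real roads can be generated as often as needed in simulation. The sketch below is a minimal, made-up illustration of boosting rare scenarios in a training set, not the actual simulation platform described above.

```python
import random

# Hypothetical scenario catalogue: (name, probability of occurring in real driving).
scenarios = [
    ("normal_highway_cruise", 0.90),
    ("pedestrian_crossing",   0.09),
    ("two_cars_colliding",    0.01),   # the rare "corner case"
]

def sample_real_world(n):
    names, weights = zip(*scenarios)
    return random.choices(names, weights=weights, k=n)

def sample_simulation(n, min_share=0.25):
    """In simulation we can force every scenario, including rare ones,
    to make up at least `min_share` of the sampling weight."""
    boosted = [(name, max(p, min_share)) for name, p in scenarios]
    names, weights = zip(*boosted)
    return random.choices(names, weights=weights, k=n)

real = sample_real_world(10_000)
sim = sample_simulation(10_000)
print("collisions seen in real-world data:", real.count("two_cars_colliding"))
print("collisions seen in simulated data: ", sim.count("two_cars_colliding"))
```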
❓ Have you partnered with anyone in the industry?
💬 We work with APAS (the Hong Kong Automotive Platforms and Application Systems R&D Centre, set up by the Hong Kong SAR government); we have a collaborative project in automated driving. It is a Hong Kong-based company with branches in Mainland China. There is also (the car-hailing app) Didi. And we have collaborations with Google, Tencent and ByteDance.
❓ What is the main difficulty machines have when trying to see the world?
💬 The variety and the diversity of data within the environment. For example, we are in this room and it is now bright, but when it is dark, or when the weather is different, it creates a lot of challenges for the model, for the machine, to recognise the same objects.
The (car) camera will capture different viewpoints, under different lighting conditions and weather conditions… all these variations make the problem very complicated for machines, even though for humans it is very easy to interpret objects under different conditions.
So, in order for a machine to recognise an object properly, we must include this object in its training data, and to be robust, the model must have a lot of training data covering all the potential scenarios. If a scenario is not covered, there will be a lot of mistakes at the deployment stage.
For example, in the US and Europe cars are different sizes. This also creates difficulties when developing 3D detection models. If the model is trained only on data collected in the USA, and then you apply it in Europe, it may make mistakes. This is why companies have to develop foundation models, designed to be large in size and take in large amounts of data; the assumption is that the data can cover the diversity of the real world. ChatGPT is a huge model with hundreds of billions of parameters. It is trained on data from essentially the entire Internet, but it also makes mistakes.
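Professor Qi’s US-versus-Europe example is essentially a distribution-shift problem. The toy sketch below invents two car-length distributions and shows how a size prior fitted on one region degrades detection quality in the other; all the numbers are fabricated purely to illustrate the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def iou_1d(len_pred, len_true):
    """1D 'IoU' of two boxes centred at the same point, differing only in length."""
    inter = np.minimum(len_pred, len_true)
    union = np.maximum(len_pred, len_true)
    return inter / union

# Invented car-length distributions (metres).
us_cars = rng.normal(4.9, 0.3, 5000)
eu_cars = rng.normal(4.3, 0.3, 5000)

# A detector trained on US data learns a strong size prior: "cars are ~4.9 m long".
size_prior = us_cars.mean()

# Count a detection as correct when its 1D IoU with the true extent exceeds 0.9.
us_recall = (iou_1d(size_prior, us_cars) > 0.9).mean()
eu_recall = (iou_1d(size_prior, eu_cars) > 0.9).mean()
print(f"size prior learned from US data: {size_prior:.2f} m")
print(f"recall on US cars: {us_recall:.2f}")   # high
print(f"recall on EU cars: {eu_recall:.2f}")   # noticeably lower
```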
❓ Self-driving cars are already on the road in Mainland China, correct?
💬 Yes. Such cars are already on the road. Baidu has self-driving cars already in China. I am collaborating with Baidu. In the city of Wuhan, Baidu has a car service called LuoBo KuaiPao. There are no human drivers, but there is a human remote controller that can take over if a challenging scenario happens. One human controller can handle over 20 cars.
❓ When do you think self-driving cars will be as common as “normal” cars?
💬 It is coming. I think it will come in the next few years. The major issue is that humans cannot tolerate any mistakes from AI models. It is big news if a self-driving car makes a mistake, but humans also make mistakes. We need to accept that machines can make mistakes. Humans do, and they make a lot of mistakes! The issue is – how to make humans trust machines? We need human-machine collaboration.
Professor Yuan Xiaoming works in the Department of Mathematics at HKU. An accomplished mathematician and scientist, he has been named a Highly Cited Researcher by Clarivate Analytics three times. Professor Yuan’s main speciality is optimisation, and he applies this expertise in the field of artificial intelligence. In this interview with our science editor Dr Pavel Toropov, Professor Yuan explains the importance of mathematics in AI.
❓ Dr Pavel Toropov: What is the role of a mathematician in AI?
💬 Professor Yuan Xiaoming: Our role is fundamental, a foundation. I can prove it – when people talk about AI, you usually only hear about the engineering side, the programming, the implementation of the product. But – breakthroughs in AI depend on how good you are at maths! What does AI mean? Intelligence created by people. And how do you get intelligence? One answer is maths!
❓ What do you mean by this?
💬 AI is artificial – fake intelligence, intelligence in the machine. AI is not fake, of course, but it is computational intelligence. And, who tells the computer to generate this intelligence? Humans, people. So, you need human intelligence – in your brain – to do AI better. And maths is the best way to improve your intelligence, your level of thinking, your logic.
❓ So mathematics is like training, going to the gym, but for the brain?
💬 Yes. Mathematics is brain training.
❓ What is your main area in the field of AI?
💬 Optimisation (algorithms). There are a lot of optimisation problems in the AI industry. For example, if we want to minimise the bandwidth cost for a livestreaming business, that’s an optimisation problem.
You understand the questions from a maths perspective and design fast, efficient and robust algorithms to solve them. These are purely maths problems. We work on such problems.
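A minimal sketch of the kind of optimisation problem he mentions, under invented assumptions: a livestreaming service must route its traffic across two bandwidth providers with different per-gigabyte prices and capacity limits, and wants to minimise total cost. It uses SciPy’s linear-programming solver; the numbers are illustrative, not from any real deployment.

```python
from scipy.optimize import linprog

# Decision variables: x0, x1 = gigabytes routed through provider A and provider B.
cost_per_gb = [0.05, 0.03]        # provider B is cheaper per GB...
capacity_gb = [800, 500]          # ...but has less capacity
total_demand_gb = 1000

# Minimise 0.05*x0 + 0.03*x1
# subject to x0 + x1 = 1000, 0 <= x0 <= 800, 0 <= x1 <= 500.
result = linprog(
    c=cost_per_gb,
    A_eq=[[1, 1]], b_eq=[total_demand_gb],
    bounds=[(0, capacity_gb[0]), (0, capacity_gb[1])],
    method="highs",
)
print(result.x)    # -> [500. 500.]: fill the cheaper provider to capacity first
print(result.fun)  # -> 40.0 total cost
```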
❓ You also work a lot with the industry, designing algorithms for commercial application. How do you use maths there?
💬 For example, something that everyone is talking about now – LLMs, large language models, like ChatGPT and its variants. In an LLM there are several stages – pre-training, post-training, fine-tuning. Each stage has a lot of optimisation problems.
For example, an LLM has a lot of connections between different neurons, and in the post-training stage you have to cut some of these connections to save hardware resources, like memory. Which neurons can be cut off? We design a mathematical model to help do this, and it can save a lot of computing resources – and that’s money.
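In spirit, what he describes is network pruning: deciding which connections can be cut with the least impact. The generic sketch below shows magnitude-based pruning, a common baseline, on a made-up weight matrix; it is not Professor Yuan’s own model, just a way to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))          # a small, made-up weight matrix

def magnitude_prune(w, keep_ratio=0.5):
    """Zero out the smallest-magnitude weights, keeping `keep_ratio` of them.
    Small weights contribute least to the layer's output, so they are the
    usual first candidates to cut."""
    k = int(w.size * keep_ratio)
    threshold = np.sort(np.abs(w), axis=None)[-k]   # k-th largest magnitude
    return np.where(np.abs(w) >= threshold, w, 0.0)

pruned = magnitude_prune(weights, keep_ratio=0.5)
print("nonzero weights before:", np.count_nonzero(weights))  # 64
print("nonzero weights after: ", np.count_nonzero(pruned))   # 32
```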
❓ You also work with AI chips. What is an AI chip?
💬 Traditionally the concept of computer chips is just hardware. But now people integrate algorithms into the chip so that it works faster. We design algorithms into the chip to accelerate the computation in the chip.
One standard, representative type of work is matrix decomposition and matrix–vector multiplication. We design specific algorithms to try to make these vector and matrix computations more efficient.
Sometimes a structure has to be introduced into the chip. For example, the most popular chips, like the Nvidia A100 and H100, have specific structures – that’s why they work so well. One typical structure is the sparse tensor core, designed to accelerate sparse matrix computations, and we have to design algorithms to fit such hardware structures.
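To make the “structure” concrete: Nvidia’s sparse tensor cores accelerate weight matrices that follow a 2:4 pattern, meaning at most two nonzero values in every group of four consecutive entries. The sketch below enforces that pattern in plain NumPy and then performs a sparse matrix–vector product; it illustrates only the data layout, not the hardware kernel or his algorithms.

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
dense = rng.normal(size=(4, 16))
vector = rng.normal(size=16)

def enforce_2_to_4(w):
    """Keep only the 2 largest-magnitude entries in every group of 4 columns."""
    w = w.copy()
    for row in w:
        for start in range(0, w.shape[1], 4):
            group = row[start:start + 4]
            drop = np.argsort(np.abs(group))[:2]   # indices of the 2 smallest
            group[drop] = 0.0
    return w

sparse = csr_matrix(enforce_2_to_4(dense))
print("nonzeros per row:", sparse.getnnz(axis=1))  # -> [8 8 8 8]
print(sparse @ vector)                             # sparse matrix-vector product
```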
❓ How does HKU compare with other institutions when it comes to innovation in the field of AI and mathematics?
💬 HKU is a good place to do AI-related topics. I am a maths professor, and for AI we need to do a lot of maths. Theory-wise we are quite strong – HKU attracts good postgraduate students, and I am very happy to be working with such good students. I also launched an AI programme with the Department of Statistics and the Department of Computer Science. This programme provides new resources and manpower for our projects.
❓ Tech jobs – software engineers, programmers, are coveted because they offer good pay and are in demand. Are there career opportunities for mathematicians?
💬 I can see a direct link between maths and commercial value. I know that many people have no idea about this – they think maths is just formulas on paper that have nothing to do with business – but I am an example: I save money for industry.
I am a mathematician, and I can help industry earn a lot of money. I recently helped Huawei, with my algorithms, save USD 108 million in less than three years. That’s commercial value! I designed the algorithms that helped them reduce bandwidth costs for their livestreaming business, which is one of the most important parts of the digital economy, as business professors like to call it.
And – what is the specific meaning of “digital economy”? Digital means numbers, numbers on the computer. Creating meaningful knowledge from numbers depends on numerical algorithms, so it depends on mathematical knowledge.
Therefore, the mathematical foundation for AI is really important. We are not doing AI incrementally, little by little – mathematicians can help industry with breakthrough ideas! The importance of mathematicians in AI must be highlighted.
❓ Final question: would you recommend mathematics as a career?
“Do you feel that ChatGPT understands you? Microsoft Copilot? To a certain degree, I do, yes… I did not expect this just three years ago, but it happened. But is it true understanding? I don’t know.” — Professor Lingpeng Kong
Professor Lingpeng Kong’s research at HKU’s Department of Computer Science focuses on natural language processing (NLP). Before joining HKU, Professor Kong worked at the AI research laboratory Google DeepMind in London.
❓ Dr Pavel Toropov: Your research profile says that you “tackle core problems in natural language processing by designing representation learning algorithms that exploit linguistic structures.” What does it mean, in simple terms?
💬 Professor Lingpeng Kong: We teach computers how to understand human language and speak like a human.
❓ And what is the main difficulty for a computer in doing that?
💬 The ambiguity of human language. Humans have a lot of ambiguities in our speech, for example: “the man is looking at a woman with a telescope”. Does the woman have a telescope? Is the man using a telescope to look at the woman?
This is called the prepositional phrase attachment problem. Modern language processing is built around statistical methods, but you always have a lot of boundary cases that you cannot fully and efficiently model.
❓ Humans figure out such boundary cases easily, from context. Why cannot computers do that?
💬 Because there is an exponentially large space to search. We must search it efficiently within the boundaries of computational resources and memory. That’s the difficult part – building a statistical method that can model all of that.
Also, it is difficult with low-resource languages. For example, Swahili – we don’t have enough data to train the system to work efficiently.
I think the good thing is that with the current development of deep learning we can build very large models, and we can solve a lot of problems that in the past we could not have imagined solving. That is why people are excited about AI.
You learn about things, and you learn to generalise to things you have not encountered before.
It is a matter of which model, which algorithm can generalise best from less data and less computation. Nowadays we need very large amounts of data to train systems – basically the whole of the Internet.
❓ You also work on machine translation. The quality of machine translation seems very good now, much better than just a few years ago.
💬 I feel like the problems with machine translation have been solved! It has been developing very fast. Ten years ago there were translation ambiguities that you could not resolve well, but today we have large language models.
ChatGPT translates really, really well! I think when it comes to technical documents and everyday email, it does better than me at Chinese-to-English translation. Nowadays, if I write an email in Chinese and translate it into English, I only have to modify very, very few things.
❓ So will translators be replaced by AI?
💬 I think it is already happening now. Technology has advanced so far that some of the very difficult things in the past are not that difficult today.
Machine translation is just conditional language generation – for example, conditioning on the Chinese text to generate the English text that represents the same meaning. There are a lot of conditional generation problems like this – condition on your prompt to generate the next thing.
Everything is inside one model now, the big language model. Before, question answering had its own system, machine translation had its own system, and so did creative writing… but now it is all the same system; it is only the prompt that is different.
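His point that “it is only the prompt that is different” can be sketched as below, assuming a hypothetical generate() wrapper around whatever large language model is available; the wrapper and the prompts are illustrative, not a specific product’s API.

```python
def generate(prompt: str) -> str:
    """Hypothetical wrapper around a large language model.
    In practice this would call whichever chat/completions API you use;
    here it is a stub so the example is self-contained."""
    return f"<model output for: {prompt[:40]}...>"

# The same model, three "different" NLP systems, distinguished only by the prompt.
translation = generate("Translate the following Chinese email into English:\n你好，……")
qa          = generate("Answer the question: Who wrote 'The Art of Computer Programming'?")
creative    = generate("Write a four-line poem about Hong Kong harbour at night.")

for name, out in [("translation", translation), ("QA", qa), ("creative", creative)]:
    print(name, "->", out)
```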
❓ What prevents machines from understanding humans?
💬 Nothing, but there is always a philosophical debate about what true understanding is. Do you feel that ChatGPT understands you? Microsoft Copilot? To a certain degree, I do, yes… I did not expect this just three years ago, but it happened. But is it true understanding? I don’t know.
I like to do tests – I give song lyrics (to AI) and I ask: what does this mean? And it tells me, for example: sometimes times are hard, but things will be better. I still feel that it is not quite a human being talking to me, but maybe that is because I know the result is coming from a lot of computation.
But if you do what is called the Turing Test – differentiating between talking to ChatGPT and talking to a human being – then it is hard, really hard. I don’t think I can guess right more than 60 or 70% of the time.
❓ What allowed the AI to be able to communicate like that?
💬 We had never, in human history, trained a model of that size before. Before COVID, the largest language model had roughly 600 million parameters. Today, we have an open-source model with 405 billion parameters. We never had the chance before to turn this quantity of data, such a large amount of computation, into knowledge inside computers, and now we can.
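A back-of-the-envelope calculation gives a sense of that scale: assuming 16-bit (2-byte) weights, the 405-billion-parameter model he mentions needs roughly 810 GB just to store its weights, far more than any single ordinary GPU holds. The bytes-per-parameter and GPU-memory figures below are assumptions for illustration.

```python
params = 405e9           # 405 billion parameters (the open-source model he cites)
bytes_per_param = 2      # assuming 16-bit (fp16/bf16) weights
gpu_memory_gb = 80       # a typical high-end accelerator today

model_size_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{model_size_gb:.0f} GB")                        # ~810 GB
print(f"GPUs needed just to hold the weights: ~{model_size_gb / gpu_memory_gb:.0f}")
```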
❓ What is the current direction of your research?
💬 Our group works mainly on discovering new machine learning architectures. When you talk with ChatGPT, after about 4,000 words, it forgets. The longer you talk to it, the more likely it is not to remember things. These are fundamental problems on the machine learning architecture side. This is one of the things we are trying to solve.
The machine learning model behind ChatGPT is called the Transformer. It is a neural network that can model sequences, and it is used everywhere – for example in the AI program called AlphaFold, which works with proteins.
One direction of our work is making the Transformer better in terms of efficiency and modelling power, so that we can have a Transformer that works with ultra-long sequences and does not forget.
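The forgetting problem is tied to how standard Transformer self-attention scales: the attention score matrix grows with the square of the sequence length, so very long contexts become expensive. A rough estimate under assumed settings (head count, 16-bit scores) is sketched below; real systems avoid materialising the full matrix, but the quadratic growth is the point.

```python
def attention_matrix_gb(seq_len, num_heads=32, bytes_per_entry=2):
    """Memory for one layer's attention scores (seq_len x seq_len per head),
    assuming 16-bit entries. Implementations usually recompute this in blocks
    rather than storing it, but the quadratic cost in seq_len remains."""
    return num_heads * seq_len * seq_len * bytes_per_entry / 1e9

for n in [4_000, 32_000, 128_000]:
    print(f"{n:>7} tokens -> ~{attention_matrix_gb(n):.1f} GB of scores per layer")
```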
The second direction is pushing the reasoning limits of current language models. I have a team working on problems from the International Mathematical Olympiad. We can now use large language models to solve those problems, and it is going really well.
“Does AI have beliefs, desires, plans, even emotions? Does it want anything? Does it want to answer our questions? Is it capable of having a desire and a goal?” — Professor Herman Cappelen
A leading authority in the philosophy of artificial intelligence, Chair Professor Herman Cappelen is the director of the AI, Ethics and Society MA, the director of the AI&Humanity-Lab, and co-director of ConceptLab at the University of Hong Kong. The author of some of the most influential papers in this field, he has also written several books on the philosophy of AI. In this extensive interview with our science editor, Dr Pavel Toropov, Professor Cappelen explains how AI is changing us – and how humanity can survive AI.
❓ Dr Pavel Toropov: You are a philosopher. It is easy to see why we need a computer scientist, an engineer or a mathematician when it comes to AI, but what is the role of a philosopher?
💬 Professor Herman Cappelen: We, philosophers, have spent centuries trying to understand what it means to have a thought, a desire, a goal, a fear, and – love. Now we use those theories to investigate whether AI can have thoughts, desires, hopes, fears, and love. You are not trained to answer these questions if you are a computer scientist.
As we build more and more powerful AI, we want to know about its capacities. For example, if you create something that has desires, consciousness, awareness and emotions, then you might have made something that’s an agent. You then might have moral and ethical responsibilities towards it. Again, these moral questions are not something computer scientists are experts on or even trained to think about.
If you create something that has its own goals and desires, is really, really smart, and is running our economy, electricity system and the military, you’ve created a risk. That’s why people talk about AI risk and develop strategies to make us safe from potentially dangerous AI.
❓ So, are philosophers “psychologists” who help us understand what is going on “in the head” of the AI by providing analysis beyond what algorithms and mathematics can give us?
💬 Psychologists are primarily trained to answer questions about humans – human fear, human anxiety, human development… They are not trained to think about or experiment on things made from silicon. With AI, there is now a whole new field that takes the concepts we use to describe the human mind and applies them to a new kind of creature. The bridge from talking about human cognition to AI cognition is built in large part by philosophers.
It might be that we need a whole new terminology for the “psychology” of AI. Maybe AI does not want things in the way that we do. Maybe AI does not hope or fear the way humans do, maybe it has other (psychological) states, different from ours.
❓ What does your work on AI involve?
💬 One part of my work is about understanding the way things like ChatGPT understand language, the ways AI can and cannot communicate, the ways in which it can be similar to or different from humans in its grasp and construction of language. We want to know if they are communicative agents like us, or if they are a totally different thing.
❓ What do you mean by “totally different thing”?
💬 Some theorists claim that AI systems don’t understand anything of language. On that view, they are like parrots that produce human speech, but don’t understand what they produce. Something goes in, something goes out, but there is nothing there.
I worked on the nature of language and the nature of communication long before anything like ChatGPT came about. When ChatGPT came out, I thought it was incredibly interesting from the point of view of my research, because now we may have created systems totally different from us that can communicate. This gave us new insights into the nature of language and communication.
❓ What kind of insights?
💬 Almost everything we know about language and communication comes from studying humans, but now it turns out that there are many other non-human ways to communicate. ChatGPT is doing it, and maybe even better than us! I wrote a book on this topic: “Making AI Intelligible”.
❓ In what ways can ChatGPT be better at communicating than humans?
💬 It does not get tired. It is not going to get up and leave because it has another meeting to go to. It remembers. It always talks about what you want to talk about. Its ability to process and produce conversation is much better and faster than ours. ChatGPT can produce everything I will say in the next half hour in ten seconds, and it could process this entire conversation in two seconds.
❓ In addition to language, what else interests you, a philosopher, about AI?
💬 Does AI have beliefs, desires, plans, even emotions? Does it want anything? Does it want to answer our questions? Is it capable of having a desire and a goal? Can it be held morally accountable? Do we have moral obligations towards AI? Are they potentially our intellectual superiors and, if so, how do we react to that? My next book on AI, “The Philosophy of AI: An Opinionated Methodological Guide”, is in large part about these issues.
❓ So, does AI have goals?
💬 I think the answer to this is yes.
❓ Why?
💬 Because AI exhibits the kind of structural behaviour that, in humans, we would take as having a goal. It behaves structurally as if it had a goal, a plan and some beliefs, and then acts on those beliefs.
❓ Could you clarify?
💬 For example, ask it questions and think – why is it giving me answers? Well, one way is to say that the AI wants to answer your questions, that it understood your questions, and that it thinks the answer is “bla bla bla”, and so it says “bla bla bla”. That’s the kind of explanation that we use for humans.
The people who disagree say: “Yeah, but that’s not really what’s going on, because it is really just processing symbols and just predicting what the next word is.” I think this is a horrible argument because you can say that about humans too: “What’s really going on is just processing in some fatty tissue inside the brain”. That would be a bad argument for the view that humans lack goal directed behaviour. It’s an equally bad argument in the case of AI.
❓ So, do you think that AI can have a mental life?
💬 I’m confident that people who say: “AI cannot speak a language, it cannot have a mental life”, are basing their views on bad arguments. The big picture is that I don’t think we can be sure about the answers until we have resolved some extremely hard philosophical questions.
We used to think that non-human animals couldn’t think, plan, or have emotions. Now we are much happier to accept that elephants and dolphins, cats and dogs, can communicate and have rich cognitive, emotional and social lives. You don’t have to be a human to have these. Mental and cognitive life can exist in things very different from us. But then there is a big leap – could it also exist in something non-biological? I am not convinced by the arguments that it cannot, and I think there are strong arguments that it can.
❓ One of the main worries about AI is that it will replace humans in every job imaginable. What is your view on that?
💬 Some jobs will go very fast. There are a lot of things AI cannot do now, but this will change fast. People say that AI makes mistakes, but I am completely unmoved by that. Humans make many more mistakes than AI, and the rate at which it improves is unbelievable. Chances are that those deficiencies will disappear really fast.
But surprisingly, much will stay because we want human-to-human interactions. And in more areas than you might think! Let’s look at chess. Humans are much, much worse at chess than even a cheap computer. The best chess player, Magnus Carlsen, can never beat the best computer.
But the important thing is that nobody wants to watch two computers play each other. People want to play people and watch other people doing the same.
I asked my 13-year-old daughter: would you listen to a musician who is an avatar that creates the same music that you like? (She said) Of course not! We care about people, and a lot of the things we do are about people. I would not talk to you or care much about this interview if you were an avatar sitting there in front of me.
❓ Let’s talk about the risks that AI poses to humanity, this is another area of your work, correct?
💬 Will the super intelligence turn on us? I don’t know if it will do that, but it is possible.
I have a new paper that’s co-authored with Simon Goldstein. It’s called “AI Survival Stories”. It is about the ways in which we can survive AI.
There are two broad categories of how we can survive – one is what we call plateau stories and the other non-plateau stories.
One kind of plateau is technical. We keep developing AI over the next few years, but nothing significantly better happens. For some reason that we could not foresee, the technology does not keep evolving, it flatlines.
Another plateau story is cultural. Maybe we’ll eventually treat dangerous AI the way we now treat biological or chemical weapons. One way that could happen is that someone uses AI in very dangerous ways that have horrific consequences. After that, AI is perceived as a threat to humanity and there’s a worldwide effort to block the creation of dangerous AI.
The other scenario is that there is no plateau, and the AI becomes super intelligent and super powerful. Then there are two ways in which we can survive. One is that we make sure the AIs are aligned with our values and are always nice to us. The other is that we control the AI systems and somehow make sure that they can’t harm us.
A technical plateau is not super likely. A cultural plateau is not that likely. Alignment is incredibly difficult. Trying to control a super intelligent AI seems hopelessly difficult. What you think of the probability of humanity surviving AI depends on how likely you think these various survival stories are.
❓ Replacing human jobs and threatening humanity aside, AI is now affecting our daily and emotional lives. In the 2013 film “Her,” a man falls in love with an AI chatbot. This seemed like pure fiction then, but now we are not so sure. Can AI change us?
💬 AI can change us in very unpredictable ways. It changes our language, changes how we classify the world, changes what we think is important in the world, and changes how we think about each other. It is all very new, only two years old, but the change is unbelievably fast. I have never seen anything change that fast.
There is now a huge market for romantic chatbots – for boyfriend, girlfriend, partner, whatever bots. People can now develop a super emotional, romantic relationship with a chatbot.
I was just at a conference in Germany where one of the speakers had created an AI robot to help interact with people with dementia. It was remarkable how often the patients preferred it to the overworked and stressed nurses.
There is something great about these chatbot friends. They are never tired; they are never in a bad mood. They don’t ask for anything in return. If we get used to that as a model of friendship and intimacy, might we expect that from humans too: why is he or she not like a chatbot?
Therefore, one thing that I think is happening already is that words like “friendly”, “empathy” and “relationship” are changing in meaning. Our classification of emotions and cognitive states is in flux, changed by our interaction with AI systems.
Our ConceptLab at HKU is about how language changes. And these kinds of language changes happen when we are in new situations. Now we are in a new situation – one where we can talk and engage with AI systems.
❓ Continuing on the subject of AI films, which films about AI do you think come closest to reality?
💬 The Matrix is an amazing film, a brilliant illustration of a very important philosophical issue.
In the film, AI has created an illusion of life, and you are not a person but a figment of a big computer system. It is something that philosophers think about, and The Matrix is an ingenious illustration of that possibility. The philosopher Nick Bostrom has a super famous paper arguing that the probability that we are in a matrix is pretty high.
It is a very simple calculation – what is the probability that AI systems will be able to generate a completely realistic world? Pretty high. How many of these are there going to be? Probably a lot. How many real worlds are there? One. So, what’s the probability you are in a real one? Pretty small.
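The arithmetic he sketches can be written out directly: with one real world and N indistinguishable simulated ones, and no further evidence about which you are in, the chance you are in the real one is 1/(N+1). A tiny illustration, with invented values of N:

```python
def probability_real(num_simulations):
    """One real world plus N indistinguishable simulated worlds:
    with no other evidence, each is equally likely to be 'yours'."""
    return 1 / (num_simulations + 1)

for n in [1, 100, 1_000_000]:
    print(f"{n:>9} simulated worlds -> P(you are in the real one) = {probability_real(n):.6f}")
```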
❓ And finally, will AI replace philosophers?
💬 I’m not sure. There will probably be AI systems that can generate philosophical arguments better and faster than any human. What I’m less sure about is whether people will want to talk to and interact with this system.
Recall the point from before: lots of people want to pay attention to Magnus Carlsen playing chess, and no one cares about the games played between chess computers. Maybe philosophy will be like that? I think we’ll be harder to replace than, say, computer scientists. Their function is to produce a certain output, and if something can produce it cheaper, faster, and better, replacement is likely.
Philosophy has human connection at its core and so is harder to replace. But maybe not impossible.
👏 Thank you, Professor Cappelen!
🔍 Many of the questions discussed in this interview are part of the curriculum of the AI, Ethics and Society MA at the University of Hong Kong.
For more information: https://admissions.hku.hk/tpg/programme/master-arts-field-ai-ethics-and-society