Category: Applied A.I.

  • AI helps us decode bacteria’s chemical language and harness their weapons

    We use AI to decode bacteria’s chemical language. Genome information is now available online – more than one million bacterial genomes are available! 

    We developed a new methodology called genome mining. AI looks at a million genomes and assesses their genetic potential for coding an antibiotic or an anti-viral molecule. 

    Professor Philip Yongxin Li, from the Department of Chemistry, specialises in chemical biology and drug discovery, with a focus on bioinformatics-guided drug discovery and biosynthesis. In this interview with our science editor, Dr Pavel Toropov, Professor Li talks about how his team uses AI to create new antibiotics.

    ❓ Dr Pavel Toropov: Could you explain your research? 

    💬 Professor Yongxin Li: We work on discovering new antibiotics to tackle the problem of antibiotic resistance – superbugs. Because of ongoing overuse, existing antibiotics are failing, and superbugs – antibiotic-resistant bacteria – are emerging.

    My job is to learn from Mother Nature. In the natural environment, bacteria use chemicals as weapons in competition with other bacteria. These are very intense chemical interactions!

    Our job is to decode this chemical language, make good use of bacteria’s chemical weapons, repurpose them for therapy, and develop them into antibiotics and anti-virals to kill human pathogens. 

    But rather than following the traditional way – culturing bacteria, isolating them and identifying the chemical compounds that they make, which is time-consuming and labor-intensive – we look at the genetic potential of bacteria, mining the chemicals from bacterial genomes in large datasets.

    Instead of using synthetic chemistry to make new antibiotics, we use synthetic biology to harness their genetic potential for drug discovery. We use cell factories, cell assembly lines to produce chemicals for us. We clone biosynthetic genes [Note: biosynthetic genes are genes that produce complex chemicals, such as those used to kill other bacteria], plug them into a cell’s factory, and let the cell factory build the antibiotic for us. 

    ❓ How is AI used in your work? 

    💬 We use AI to decode bacteria’s chemical language. Genome information is now available online – more than one million bacterial genomes are available! 

    We developed a new methodology called genome mining. AI looks at a million genomes and assesses their genetic potential for coding an antibiotic or an anti-viral molecule. 

    The traditional methods analyse genomes one by one. This is not efficient, and the chance of discovering a new antibiotic is low. So, we train AI to select, from one million genomes that can contain 20 or 30 million biosynthetic genes, the genes that code for antibiotics.

    We use AI to select and prioritise the genes with the highest probability that they code for new antibiotics. Using AI we can also predict the antibiotics’ structure and bioactive potential. 
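
    As a concrete illustration of this prioritisation step – a minimal sketch, not Professor Li’s actual pipeline – a classifier trained on gene clusters with known products can score mined clusters by the probability that they encode an antibiotic, so that only the top candidates go on to experimental validation. The feature dimensions and data below are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Hypothetical numeric features per biosynthetic gene cluster (BGC):
    # e.g. gene counts, enzyme-domain content, cluster length.
    X_train = rng.random((1000, 64))       # labelled training clusters
    y_train = rng.integers(0, 2, 1000)     # 1 = cluster of a known antibiotic

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)

    # Score clusters mined from public genomes (a small stand-in for the
    # tens of millions of genes screened in practice) and keep the top 100.
    X_mined = rng.random((50_000, 64))
    p_antibiotic = clf.predict_proba(X_mined)[:, 1]
    top100 = np.argsort(p_antibiotic)[::-1][:100]
    print("highest-priority clusters:", top100[:10])
    ```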

    ❓ How much time does AI save? 

    💬 Using the traditional way, the in-silico screening process [Note: “in silico” means biological experiments conducted on a computer, or via a computer simulation, to make predictions about the behavior of different compounds] can screen several thousand biosynthetic genes and narrow them to about 100 for experimental validation. Using AI, we can start with 20 or 30 million genes and evaluate their potential for coding for antibiotics. 

    The traditional way can take years. But, using AI, we can finish the in-silico assay within a few days or even hours. To validate the results, you still need to clone the genes, so we need to use synthetic biology, and this part still takes a long time.

    ❓ How close are you to creating a new antibiotic? 

    💬 Drug discovery and development is a very long process. But with one of our lead compounds, we have finished pre-clinical tests and in vivo anti-infection studies, and evaluated its ecotoxicity. It is ready for the next step.

    👏 Thank you, Professor Li.

  • AI makes the impossible possible for us

    Professor Haibo Jiang works at the Department of Chemistry at the University of Hong Kong and is the director of the multi-disciplinary JC STEM Lab of Molecular Imaging. He received his PhD from the University of Oxford and joined HKU in 2021. In this interview with our science editor Dr Pavel Toropov, Professor Jiang explains the importance of AI in his research.

    ❓ Dr Pavel Toropov: What is the research focus of your lab?

    💬 Professor Haibo Jiang: My lab focuses on the development of new imaging technologies to see inside biological systems at a very small scale. For example, what is happening inside a single cell or a single organelle.

    Our molecular imaging combines different microscopy modalities – optical, electron and mass spectrometry imaging. By combining them, we can extract complementary information from one sample and understand both its biology and its structure.

    ❓ What are the images used for?

    💬 One example is tracking, with high resolution and very high sensitivity, of drugs in biological systems. We can see when people (and we also use animal models) take the drug, where the drug goes, and how it gets to the target to be therapeutically effective.

    We combine structural information from electron microscopy with chemical information from mass spectrometry imaging, and we can then reliably correlate where the drug is – in which organelle, in which cell, in which tissue of which organ. We can also learn why the drug is, or isn’t, effective, and why it causes side-effects.

    ❓ Which drugs are you working with?

    💬 Our system is versatile. We have applied it to understand the traffic of antibiotics. Once in the human body, antibiotics need to get to the bacteria to kill them. We can track a range of different antibiotics to see if they get to the right cells at the infected site. We also applied our methods to study cancer drugs to see where the drug gets into the cell, because this is important for its efficacy.

    ❓ And what does the AI do?

    💬 For us, AI makes the impossible possible! With AI we can achieve high image quality at high speed.

    ❓ Could you explain?

    💬 The biggest problem in imaging is the compromise between image quality and image speed. AI speeds things up and also provides better resolution. Currently, the hardware we have has its limit in spatial resolution, so the improvement has to come from the software – which means AI.

    One of the major limitations of our method is that it is slow. It is the nature of the microscopy techniques that we use. We scan pixel by pixel, and there is a compromise between the quality of the image and the speed of the imaging. If we scan fast, there will be noise and the signal will be low. If we scan one pixel ten times, we get a higher signal and less noise, and the quality is much higher, but it takes longer.

    But using AI, we can improve electron microscopy speed by more than 10 times – it is faster and more efficient. With AI, we can acquire a lower-resolution image at a faster rate, cover a big region of the sample, and still extract more information from one biological sample.
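
    One common way to trade a fast, noisy scan for a clean image is a learned denoiser. The sketch below is a minimal illustration of that idea in PyTorch, not the lab’s actual model; the network size, training data and scan dimensions are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class Denoiser(nn.Module):
        """Small residual CNN mapping fast, noisy scans to clean images."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, x):
            return x - self.net(x)  # predict and subtract the noise residual

    model = Denoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Training pairs: a fast single-pass scan (input) and a slow multi-pass
    # average of the same field of view (target). Random tensors stand in here.
    fast_scan = torch.randn(8, 1, 128, 128)
    slow_scan = torch.randn(8, 1, 128, 128)

    for step in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(fast_scan), slow_scan)
        loss.backward()
        opt.step()
    ```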

    ❓ What is the next step for you?

    💬 Life is in 3D, but what we just talked about is 2D. When we get to 3D imaging it is even more challenging!

    There is a technique that allows you to look at 3D structures of cell organelles, but at most, you can do around 200 by 200 by 200 microns, and that’s really small for a tissue sample.

    But scientists dream of seeing a big sample in 3D – for example, how the neurones are connected in the human brain. This is not possible with current technology. Our dream is to have the algorithms to achieve high-speed 3D imaging of large biological samples. We are not there yet, but that is our aim.

    ❓ What is the future of AI in your field?

    💬 I think, in biological imaging, AI will be everywhere. From the imaging itself, to the data analysis. I only started collaborating with AI people after I came to HKU, but I think AI will be the future.

    But AI will not replace what we, humans, do. We need to learn how to employ AI in our research – to do what we do, but better.

  • We used to wait for a year for results, with AI – a week

    Brute-force computation cannot solve these difficult problems – because of the exponential wall, one has to approach them with deeper understanding: either applying artificial intelligence or human intelligence. Now with AI, many of the previously impossible simulations – because they required vast amounts of computational power – are becoming possible.

    Professor Ziyang Meng is an acclaimed computational condensed matter physicist and is one of the pioneers of the use of AI in computational physics. His research focuses on developing large-scale numerical and machine-learning simulations to investigate quantum materials. Professor Meng has published more than 100 papers in top journals such as Nature, Nature Physics and PNAS. In this interview with our science editor, Dr Pavel Toropov, Professor Meng explains how AI has revolutionised quantum and computational physics research. He also talks about ancient Greece and mahjong.

    ❓ Dr Pavel Toropov: How do you use AI in your work?

    💬 Professor Ziyang Meng: Quantum materials are very complicated. The existing methods usually face an exponential wall.

    (Note: in a quantum system, each particle can exist in multiple states. As the number of particles increases, the number of possible states increases exponentially, and thus so does the amount of information needed to describe the system and the computational capacity to do so. This increase is known as the exponential wall.)

    The wall is very high! Our computational capacity cannot jump over it. So, we use AI to extract quantum information from a quantum material. This information is used to design better algorithms that help us jump over the exponential wall to look into new materials and new properties.

    For example, we developed what is called the Self-Learning Monte Carlo Method. It is one of the first examples of employing explainable-AI techniques in quantum many-body systems. It helps to open up the field of AI-inspired algorithms for reducing numerical complexity in the computational research of quantum materials.

    My inspiration for developing these AI-related algorithms comes from the (ancient) Greek Delphic maxim “know thyself”.

    ❓ This is a lot of terminology! Before we get to the ancient Greece part, could you explain why quantum materials are complicated?

    💬 The basic ingredients of quantum materials – electrons, billions and billions of them – are subject to mutual quantum mechanical interactions and to the complicated chemical, physical and topological environment they live in.

    The full quantum treatment of so many electrons is way beyond paper and pencil. Instead, it requires modern computational techniques and advanced theoretical analyses.

    Brute-force computation cannot solve these difficult problems – because of the exponential wall, one has to approach them with deeper understanding: either applying artificial intelligence or human intelligence. Now with AI, many of the previously impossible simulations – because they required vast amounts of computational power – are becoming possible.

    ❓ Can you give an example?

    💬 The Self-Learning Monte Carlo algorithm. With this algorithm, we first use AI to extract better model parameters from a smaller-scale simulation – with few electrons – and these parameters can more accurately represent how the billions of electrons interact with each other inside the material, and how they respond to experimental conditions such as temperature and electric or magnetic fields.

    (Note: Monte Carlo simulation allows us to model and solve problems that involve randomness and big data. It is used to handle such situations by testing many possible scenarios.)

    Then we can start the large-scale Quantum Monte Carlo simulation on supercomputers. It is faster than the traditional simulation without the self-learning step that involves AI.

    The self-learning step is crucial. It gives us better and more accurate model parameters, which means that we get to know the properties of the material better. This is what I meant by the Delphic maxim “know thyself”.

    To “know thyself” means that we must find the most important interactions among these interacting electrons. Self-learning, therefore, is as modern as AI and quantum physics, but also as old as the beginning of human civilisation – ancient Greece.
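
    A toy classical illustration of this self-learning loop – a sketch under stated assumptions, not Professor Meng’s quantum code, where the exact energy comes from a far more expensive calculation: fit a cheap effective model to small-scale data, let the effective model generate proposals, then correct each proposal with the exact energy so the final ensemble still follows the true model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, beta = 32, 1.0  # toy system size and inverse temperature

    def true_energy(s):
        # Stand-in for an expensive "exact" energy; the non-linear second term
        # means the effective model fitted below is only an approximation.
        return -np.sum(s * np.roll(s, 1)) + 0.1 * np.sum(s) ** 2 / len(s)

    def features(s):
        # Effective-model features: nearest-neighbour bond sum and total field.
        return np.array([-np.sum(s * np.roll(s, 1)), np.sum(s)])

    # Self-learning step: fit effective couplings J on small-scale samples.
    train = [rng.choice([-1, 1], size=N) for _ in range(500)]
    F = np.array([features(s) for s in train])
    E = np.array([true_energy(s) for s in train])
    J, *_ = np.linalg.lstsq(F, E, rcond=None)

    def eff_energy(s):
        return float(J @ features(s))

    def metropolis(log_ratio):
        return rng.random() < np.exp(min(0.0, log_ratio))

    def propose(s, n_local=20):
        # Evolve the configuration under the cheap effective model only.
        t = s.copy()
        for _ in range(n_local):
            u = t.copy()
            u[rng.integers(N)] *= -1
            if metropolis(-beta * (eff_energy(u) - eff_energy(t))):
                t = u
        return t

    # Large-scale sampling: accept or reject with the exact energy, corrected
    # for the effective-model proposal, so the chain samples the true model.
    s = rng.choice([-1, 1], size=N)
    for sweep in range(500):
        t = propose(s)
        d_true = true_energy(t) - true_energy(s)
        d_eff = eff_energy(t) - eff_energy(s)
        if metropolis(-beta * (d_true - d_eff)):
            s = t
    ```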

    ❓ Do you mean the Delphic maxims – the set of moral principles that were inscribed on the Temple of Apollo in ancient Greece?

    💬 Yes, and I even wrote a popular science article about this – “From Delphic Oracle to Self-Learning Monte Carlo”. The article is in Chinese and was published in Physics – a journal of the Chinese Physical Society (http://www.wuli.ac.cn/cn/article/doi/10.7693/wl20170406).

    ❓ So, AI speeds things up by saving quantum physicists a lot of number crunching?

    💬 AI in quantum physics research does not only mean that we can compute faster. AI helps us find better, more accurate, models for the quantum materials. This allows us to better understand the material and also better understand the process of understanding: that’s how we can come across new laws of physics.

    ❓ Quantum physics is not something that most people experience in their daily lives. To non-specialists, such research may sound very abstract, entirely theoretical. What does your research mean in the “real world”? What are the practical applications of your work?

    💬 Our Momentum-Space Monte Carlo self-learning method deals with a new mystery in a quantum material: the magic angle twisted bilayer graphene in which superconductivity has been recently discovered. Graphene is what we have in every pencil! If we can elevate the superconducting temperature from minus 270 Celsius to, say, room temperature, we can solve the global energy crisis.

    Our recent paper on this was awarded the 2024 Top China Cited Paper Award for Physics.

    ❓ How can this solve the energy crisis?

    💬 Using superconducting cables and wires, the electricity, once generated at the power station, will not be lost as heat that dissipates into the air. This is because electrons in the superconducting state do not experience resistance, as they do in commonly used conductors such as copper, iron and other metals. Not experiencing resistance means electron movement is not slowed down, converted into heat and lost, so 100% of the generated energy can be used for the intended purpose.
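
    The arithmetic behind this is the Joule-heating formula P = I²R. A tiny worked example – the numbers are assumptions for illustration, not figures from the interview:

    ```python
    # Joule losses P = I^2 * R vanish when the line resistance R is zero.
    I = 1_000          # transmission current in amperes (assumed)
    R_copper = 0.5     # total line resistance in ohms over the route (assumed)
    R_super = 0.0      # a superconducting line has zero DC resistance

    for name, R in [("copper", R_copper), ("superconductor", R_super)]:
        print(f"{name:>14}: {I**2 * R / 1e6:.2f} MW lost as heat")
    ```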

    ❓ In your office you have a physics-themed mahjong set. Why?

    💬 Mahjong is a strategy game. In quantum physics, AI helps the physicist with strategy, allowing the physicist to better understand the problem. I find AI to be a very good partner who helps the physicist solve the mysteries of Mother Nature.

    Mother Nature plays games with us, hiding her secrets behind complicated phenomena, and we need a good partner – like AI – to play this game and find solutions.

    I am teaching a new undergraduate course at the HKU Physics Department – PHYS3151: Machine Learning in Physics, where you can learn how to use AI techniques to solve problems – from Newtonian mechanics and electromagnetism to quantum phenomena. Everyone is welcome to join!

    https://quantummc.xyz/hku-phys3151-machine-learning-in-physics-2024/

    👏 Thank you, Professor Meng!

  • What takes us one day, AI can do in a few minutes

    One ostracod is one white dot on the slide. In one second AI can identify 20 of them. There are several hundred ostracods here on this slide. Identification will take a few minutes for AI, but by eye, depending on the person – one day, or several hours.

    Professor Moriaki Yasuhara works at the Swire Institute of Marine Science, the School of Biological Sciences. One of his main research interests is paleoecology – the interaction of ancient organisms and their environment. In this interview with our science editor Dr Pavel Toropov, Professor Yasuhara and his PhD student Jiamian Hu explain how AI deep learning tools have transformed their research.

    ❓ Dr Pavel Toropov: Could you explain the direction of the research done by your laboratory?

    💬 Professor Moriaki Yasuhara: We want to understand the climatic and environmental impact on our planet, especially on marine ecosystems and biodiversity. We are interested in how climate change, global warming, acidification and oxygen decline affect marine animals.

    Our laboratory focuses on paleobiology. We study marine biology, but over longer time scales, using the fossil record. In contemporary biology, scientists start monitoring only after they realise there is a problem – for example, pollution. But then we don’t know what the natural environmental conditions were before the pollution.

    But, by studying sediment cores and the deep-time fossil record, we have long time series throughout – before and after. We can go back hundreds of thousands, tens of millions, or even hundreds of millions of years.

    ❓ Dr Pavel Toropov: What animals do you use in the fossil record?

    💬 Professor Moriaki Yasuhara: Most animals – fish, jellyfish, worms, marine mammals – don’t have a good fossil record, as they have no hard parts, such as shells, that allow for good fossil preservation. Or they are too large to be abundantly preserved as fossils in a small amount of sediment. So, we need a representative, a surrogate, to make conclusions about the global marine ecosystem.

    One representative is ostracods. They are tiny crustaceans with really nice calcium carbonate shells, and they have some of the best fossil records amongst all crustaceans, arthropods, and metazoans.

    So, by studying ostracods, we can know not only about ostracods themselves, but, using them as a representative, learn about the entire ecosystem, the entire biodiversity.

    ❓ Dr Pavel Toropov: Where do your ostracods come from?

    💬 Professor Moriaki Yasuhara: Mainly from the Cenozoic Era – from 66 million years ago to the present. Some of my students are working on Ordovician samples – from more than 400 million years ago. My research locations include the Arctic, Antarctic, Atlantic Ocean, Indian Ocean, Pacific Ocean, Red Sea, Mediterranean Sea… Hong Kong, Africa.

    ❓ Dr Pavel Toropov: So, to explain your work in simple terms: you get a core sample of the sediment from the bottom of the sea, take out all the tiny ostracods, put them on the microscope slide. Then you identify what species they are. Because different species prefer different conditions, by knowing how the numbers of different species of ostracods changed with time, you can make conclusions on the changes in the entire marine ecosystem, correct?

    💬 Professor Moriaki Yasuhara: Yes.

    ❓ Dr Pavel Toropov: How do you use AI in this?

    💬 Professor Moriaki Yasuhara: There are several problems (working with ostracods). First, it is very time-consuming – picking, identification, taxonomy. Also, we need expert knowledge. To train one person to be good at ostracod identification and taxonomy takes many years. An entire PhD is probably necessary.

    Recently, I have been working with my PhD student Hugo Jiamian Hu to automate this process by applying AI deep learning. He did a very good job, and now we can scan entire slides automatically, using our digital microscope.

    Hugo used more than 200,000 ostracod specimens for training our AI, and now the AI can do its own automatic identification. Identification is now much faster, and we can use much bigger data.

    💬 Jiamian Hu: Yes, and having a lot of data, big data, means quite something! The 200,000 research-grade, specialist-identified samples ensure that our deep neural network can effectively learn patterns in ostracod identification.

    ❓ Dr Pavel Toropov: How much time does using AI save you?

    💬 Jiamian Hu: We have a PhD student who has about a hundred samples of ostracods from Panama. Before AI, by hand, one by one, it may take her several days to finish a sample. Now, using AI – less than an hour.

    (Shows a microscope slide with ostracods) One ostracod is one white dot on the slide. In one second AI can identify 20 of them. There are several hundred ostracods here on this slide. Identification will take a few minutes for AI, but by eye, depending on the person – one day, or several hours.

    In addition, Professor Yasuhara is not always free, but AI is always free. So, when a student has a question about identification, AI can always help.

    ❓ Dr Pavel Toropov: Did you write this deep learning program yourself?

    💬 Jiamian Hu: I wrote it with PyTorch. I built it from scratch; it is specifically designed for this task. I was a computer science student before.
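
    A minimal sketch of what such a PyTorch species classifier can look like – an illustration under assumptions (the network choice, the class count and the data shapes are made up here), not Hugo’s actual program:

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_SPECIES = 300  # hypothetical number of ostracod species classes

    # Start from a standard pretrained CNN and replace the classification head.
    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # Placeholder batch standing in for labelled specimen crops; the real
    # training set had more than 200,000 expert-identified specimens.
    images = torch.randn(16, 3, 224, 224)
    labels = torch.randint(0, NUM_SPECIES, (16,))

    model.train()
    for epoch in range(3):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

    # At inference time, each white dot detected on a slide scan is cropped
    # and classified: predicted species index = model(crop).argmax(dim=1).
    ```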

    💬 Professor Yasuhara: Not only is AI more time-efficient – by using AI and deep learning, we have made exciting discoveries and learned new things. AI can discover errors and misidentifications. AI can give you new questions to answer.

    👏 Dr Pavel Toropov: Thank you both.

  • Our models can predict cancer treatment response

    Then the AI algorithm integrates this information with other information, such as imaging information and general lab test information, and puts it all together to make a more comprehensive prediction about the condition of the patient.

    Professor Lequan Yu is the director of the Medical AI Lab at HKU and his work lies at the intersection of AI and healthcare. Before joining HKU, Professor Yu was a postdoctoral research fellow at Stanford University. In this interview he explains to our science editor, Dr Pavel Toropov, how AI can revolutionise healthcare.

    ❓ Dr Pavel Toropov: How do you use artificial intelligence in your research?

    💬 Professor Lequan Yu: We use AI technology, AI algorithms to solve problems related to healthcare and medicine. We rely on multimodal AI models, using AI to analyse and integrate different medical data, such as medical images, medical reports, lab test results and genomic data. The aim is to interpret and integrate them together to help doctors make decisions.

    ❓ Could you provide an example?

    💬 For example, using an algorithm to see if the patient has cancer or not from computed tomography (CT) images. This can reduce the doctor’s workload. Also, we want to do precision medicine, especially for cancer patients. Currently, treatment strategies are not really tailored for individual patients. We want to use AI algorithms to integrate the diverse information about each individual patient and then let AI make recommendations for doctors.

    ❓ What data would AI integrate?

    💬 Radiology data such as CT scans, MRI and also pathology images – microscopic images. Recently, we have been exploring how to integrate genomic data.

    ❓ What do you mean by “genomic data”?

    💬 Broadly speaking, this refers to DNA, RNA or protein data. For example, we work on gastric cancer. We get samples of cancerous tissue, and do genetic sequencing or molecular testing to obtain molecular information, for example, about what subtype of cancer it is.

    Then the AI algorithm integrates this information with other information, such as imaging information and general lab test information, and puts it all together to make a more comprehensive prediction about the condition of the patient.

    For cancer – take gastric cancer, for example – there are different treatment strategies, such as immunotherapy. But we do not know whether a given strategy and treatment would benefit a particular patient, because some strategies may not. So our AI algorithm can predict the treatment response of this particular patient and also provide survival analysis.
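
    A minimal sketch of this kind of multimodal fusion – an illustration with assumed feature sizes and a made-up two-class response label, not Professor Yu’s model:

    ```python
    import torch
    import torch.nn as nn

    class MultimodalPredictor(nn.Module):
        """Fuse an imaging embedding with genomic features for one patient."""
        def __init__(self, img_dim=512, gene_dim=100, hidden=128):
            super().__init__()
            self.img_enc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
            self.gene_enc = nn.Sequential(nn.Linear(gene_dim, hidden), nn.ReLU())
            self.head = nn.Linear(2 * hidden, 2)  # responds / does not respond

        def forward(self, img_feat, gene_feat):
            z = torch.cat([self.img_enc(img_feat), self.gene_enc(gene_feat)], dim=-1)
            return self.head(z)

    model = MultimodalPredictor()
    img_feat = torch.randn(4, 512)   # e.g. features from a CT/pathology encoder
    gene_feat = torch.randn(4, 100)  # e.g. molecular-subtype/expression features
    probs = model(img_feat, gene_feat).softmax(dim=-1)
    print(probs[:, 1])  # predicted probability of treatment response
    ```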

    ❓ In healthcare, what does using AI allow humans to do that they cannot do alone?

    💬 Two examples. One is chest X-rays. A doctor can do the analysis very well, detecting pneumonia, for example. AI can also do it, and the reason to use AI is to help reduce the doctors’ effort and workload.

    But for cancer image analysis, that is different. Doctors can estimate potential survival or potential treatment response from the image. But this is quite subjective, based on the doctor’s experience. AI has the potential to evaluate this more quantitatively and objectively.

    ❓ Can this technology be used in clinical practice now?

    💬 Currently, for oncology, this is frontier research. There is still a way to go before putting it into clinical practice. But after we incorporate genomic data, we think it will be more workable. Perhaps in 10 years this will be applied in clinical practice.

    ❓ Other than cancer, in the treatment of what other conditions can AI help?

    💬 Cardiovascular disease. Here AI can play an important role. When it comes to predicting the risk of certain heart diseases, it is less challenging than with cancer. Moreover, AI can integrate and analyse chest X-ray images, and here the accuracy of AI is very high – over 90%.

    But still we have issues regarding privacy, ethics and medical regulations before we can apply it in clinical practice.

    Collaboration is very important – we must collaborate with doctors, hospitals, medical schools. It is the best way to apply AI technology to solve real-world problems, address real-world needs, and help our society, medicine and economy.

    👏 Thank you, Professor Yu.

  • Generative AI will change how reality is presented in film

    The final project is a 3-5 minute short film, using different AI applications. They put all their AI skills together to assemble something complex, working with text prompts and reference images to create video sequences. The main Gen AI applications that we use are ChatGPT, Midjourney and RUNWAY Gen3 Alpha. 

    Ulrich Gaulke is an award-winning documentary filmmaker who has taught his art across the world, from Bosnia to Bolivia. Last year he took a position as a senior lecturer at the Media and Journalism Center at the University of Hong Kong. One of the courses he teaches is Generative AI for Media Applications, for which he has been awarded the Social Science Outstanding Teaching Award. In this interview with our science editor Dr Pavel Toropov, Ulrich Gaulke talks about this new course, the role of AI in filmmaking, and why AI cannot yet replace a human storyteller.   

    ❓How did this course – Generative AI for Media Applications – come about? 

    💬I studied computer science – a long time ago! And I am interested in technical things – like AI. I had an idea: to develop a new, pioneering class where students are storytellers, but develop their storytelling skills by using all the newest applications of Gen AI. 

    ❓What do students do in your course? 

    💬First, we do an introduction to AI – how large language models work, how a diffusion model that creates images works… a basic understanding of neural networks. Students learn what happens with the data inside an AI model and how the model creates a proper outcome. 

    Then, the students build and feed their own AI model with data – pictures. The students take pictures of Hong Kong with their mobile phones to feed their own diffusion AI model. Based on this AI model, we can create more Hong Kong-related content. 

    These are the beginnings. Then, step by step, we go through Gen AI applications: text to text, text to image, text to video, text to animation, text to speech, and text to music. After this the students can create stories and video sequences, and can start working on the final project.  
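
    The class itself works with consumer tools rather than code, but the text-to-image step can be sketched with an open-source diffusion model via the Hugging Face diffusers library; the model ID and prompt below are illustrative assumptions, not course materials.

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a publicly available text-to-image diffusion model (needs a GPU).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # One text prompt in, one generated image out.
    image = pipe("a rainy Hong Kong street market at night, neon signs, cinematic").images[0]
    image.save("hong_kong_scene.png")
    ```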

    The final project is a 3-5 minute short film, using different AI applications. They put all their AI skills together to assemble something complex, working with text prompts and reference images to create video sequences. The main Gen AI applications that we use are ChatGPT, Midjourney and RUNWAY Gen3 Alpha. 

    But first, the students must create a proper story, then divide it into different parts – like a storyboard in a fiction film. Each chapter, location, character – everything must be visualised. I let students start on the final project only when I agree with the story. If their story has something poetic, revealing, touching, then I let them create characters, set-ups and visuals. 

    We also have weekly assignments – to create something. We also discuss the ethical aspects of AI and the technical background. I wanted to include people working with AI and combine their experience with my perspective as a storyteller. So, I have guest speakers from the computer science department. 

    I also invite guest speakers who are leading figures in AI and media, for example Professor Sylvia Rothe, Chair of Artificial Intelligence from Munich University of Television and Film in Germany. 

    We established a pioneering course, nobody else does this kind of work. 

    ❓How popular is this course? 

    💬It was booked out immediately. But more students want to attend! Journalism and media students are a priority, and so we established a summer class in June. It has the same content, but the course is open to students from other departments and faculties.  

    ❓You teach this course at the Media and Journalism Center at HKU. Is the course for filmmakers or journalists? 

    💬The course is open to both journalism students and filmmaking students, but our work is focused on the creation of works of fiction, not journalistic work. 

    Journalism is fact-based. If you are a journalist, you have to be very responsible in your use of generative AI. An AI model is not a research tool like Google. We teach that with AI, you cannot trust the outcome – an AI model does what it wants, creates its own patterns. The result may look very detailed, but it needs to be checked. 

    ❓So, what is the main use of AI in filmmaking?  

    💬Filmmakers can use it creatively as a visualisation tool – for visualising something that is not possible to shoot, for example something that happened in the past, something that you have no video materials for.  

    Animation has traditionally been used for this purpose, in combination with powerful, real stories. For example (the animated film) FLEE, about a family from Afghanistan escaping to Europe, was nominated for three Oscars. It is based on a real story but is fully animated.  

    ❓Do you use AI in your own work as a filmmaker? 

    💬I am now using it for historical re-enactments in my latest documentary about five 100-year-old ladies. They are talking about their past, but they only have still pictures from the time when they were very young. So, I can create video sequences based on these still images. 

    ❓Storytelling is key to making films. Do you think this will be done by AI at some point? 

    💬Yes and no. It depends on what your expectations are. ChatGPT can help you write a story, but the main constellation – the plot, the characters – must come from you! If you let AI create everything by itself, then you will see that it is just mimicking something that already exists. Stories are what humans use to communicate with each other. A good story must include something unique, surprising, that has something to do with your own life. 

    AI is based on patterns, these patterns come from the learning material, and the learning material is based on what has already been made…  A script for a TV soap opera is based on very simple elements, and writers write the same stuff every year, so that is something that AI can do. 

    ❓A little while ago, when text-to-video applications came out, there was talk that AI would now make films for us, eliminating the need for actors, directors… This does not seem to be happening. 

    💬A lot of people are giving up on that idea. 

    ❓Why? 

    💬The expectations are too high. Try giving AI a simple story to do. For example, a teacher is angry at a pupil, a little girl. Try to keep both characters consistent, try to bring them into a serious conversation – for example, the angry teacher tells the pupil that there is something wrong with her homework. It is a very simple story, but try to create it with AI – and it becomes very complicated! 

    Try to find a video on YouTube, one that can make you forget that it was created by AI. It is always more than obvious that a video was created by AI – a character disappears, another appears randomly, there are many random actions, there are aliens… 

    ❓So, you see AI as a tool to create visuals for a creatively written story, one done by a human? 

    💬If you let AI do something on its own, then it goes weird, random. AI-created work is totally different from our idea of creativity. It’s more like a dream. 

    If you want to use AI application as a tool to create something that is based on our understanding of storytelling, of creating characters, of emotional expression, then it is very hard. Consistency is the problem – the movement, and the facial expressions of characters are not consistent. 

    It is very hard to develop a character using an AI model. You can create a realistic photo using Midjourney – for example of an old guy who is looking sad. The AI model will create an image of an old guy looking sad, but is this the old guy that you want to use in your story? Or is he completely different? 

    ❓Does AI allow you to fine-tune these discrepancies? 

    💬Gen AI can do very impressive things, but it is not like applications such as Photoshop or After Effects, where you have direct control of the outcome by changing the parameters. 

    With Gen AI, if the result is something that you cannot use for your work, you try to change something – and it is impossible! You can instead create something else, something new, but it can differ, again, from what you expect. You can become more and more frustrated, because you do not have direct control of the outcome. 

    What you can do with AI is write another prompt. But you cannot be sure that the AI model will give you exactly what you want.  

    ❓Where would the skill of operating AI tools be? We know what a good Photoshop operator can do; what is the AI equivalent? 

    💬The equivalent is an AI operator who is very experienced in writing prompts. Communication with an AI model needs AI communication skills, and this means prompting, prompt design: how can I design the prompt to make the AI model fulfil my expectations? 

    ❓Do you teach prompt design in your course? 

    💬I try, in each lecture, to talk about prompt design. But, the more complex the outcome needs to be, the more skills you need to write a prompt. 

    There are prompt design tools: you can write something that is not really good as a prompt, and this tool can turn your idea into a proper prompt. I teach that too. 

    The students must keep in mind that it takes a lot of work to design prompts. So, they must be prepared – have a proper story ready, and only then spend time to create, using AI, the right visuals for that story. 
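
    A minimal sketch of what structured prompt design can look like in practice – the template and its fields are illustrative assumptions, not taken from the course materials. Spelling out character, setting, action and style in every prompt is one way to keep generated shots consistent across a sequence.

    ```python
    def design_prompt(character, setting, action, style):
        # Assemble a structured text-to-image/video prompt from named parts.
        return (f"{character}, {action}, in {setting}. "
                f"Style: {style}. Keep the character's appearance consistent.")

    print(design_prompt(
        character="an elderly teacher with grey hair and round glasses",
        setting="a 1950s classroom",
        action="sternly pointing at a pupil's homework",
        style="cinematic, 35mm film, soft window light",
    ))
    ```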

    ❓What is the plan for the future of this course? 

    💬The AI applications keep improving. Last year we were mostly focused on managing the challenges and the difficulties of all the AI applications. This year we can expect more from them. Runway, the video application, is more advanced, so now we can do more with storytelling.  

    I want to make sure that the students develop better storytelling skills. Then, they can use the AI applications that are now more advanced, which means that we can create more sophisticated visuals.  

    👏Thank you, Ulrich! 

  • “From a brain image, AI can read out the emotion that the person is feeling”

    “AI can help us greatly in finding patterns in this data. We recently discovered that we can use AI-inspired procedures to read emotions from the brain. AI can find out if someone feels afraid or feels disgusted, for example.” – Prof. Benjamin Becker

    Professor Benjamin Becker studies the human brain. His innovative, cutting-edge research has been published in top journals such as Nature Human Behaviour and the American Journal of Psychiatry. In this interview with our science editor Dr Pavel Toropov, Professor Becker talks about the breakthroughs that Artificial Intelligence made possible.

    ❓ Dr Pavel Toropov: What is the main direction of your research?

    💬 Professor Benjamin Becker: Trying to find out how the human brain generates emotions, and what happens with these processes in people with mental disorders and how we can make this better.

    ❓ How do you use AI?

    💬 AI allows us to make very big progress in analyzing brain images. The brain is a highly complex structure, probably the most complex structure in the Universe. We are looking at the biological architecture of the brain made from billions of neurons with billions of connections. Humans, because our cognitive capacities are very limited, struggle to make sense of these very complex patterns.

    AI can help us greatly in finding patterns in this data. We recently discovered that we can use AI-inspired procedures to read emotions from the brain. AI can find out if someone feels afraid or feels disgusted, for example. Using human brain power, this is nearly impossible. We need complex algorithms to help us make sense of this complex data.

    ❓ How is this done?

    💬 We put individuals in MRI scanners to image their brain activity while we induce specific emotions. What we humans can see (in the brain scan images) is only that particular brain regions become active. But this is too simple, and AI allows us to see more complex patterns and read out the emotions that the individual experiences.

    ❓ Can you specify?

    💬 We, humans, see that specific regions in the brain have become active, but these are rather big structures, and what AI can do is screen those structures on a much finer level than humans can, and then use the data to generate complex patterns – like the fingerprints that specific emotions have left on the brain.

    Most amazingly, based on these patterns that it sees in the brain, the AI can read out what the person feels at a given moment. For humans this data is too noisy and too complex. A human interpretation is just not possible. AI gives us the cutting edge.
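
    In machine-learning terms, this is multivariate pattern decoding: a classifier trained on voxel-level activity patterns. A minimal sketch under assumed data shapes – an illustration, not Professor Becker’s actual pipeline:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 200, 5000                 # illustrative sizes

    X = rng.standard_normal((n_trials, n_voxels))  # voxel pattern per trial
    y = rng.integers(0, 2, n_trials)               # induced emotion: 0 = fear, 1 = disgust

    # Cross-validated decoding: accuracy reliably above chance (0.5) would mean
    # the emotion left a readable "fingerprint" in the activity pattern.
    acc = cross_val_score(LinearSVC(max_iter=10_000), X, y, cv=5).mean()
    print(f"decoding accuracy: {acc:.2f}")
    ```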

    ❓ So, for humans, basically, a brain scan is too noisy – blurry, messy – to see a pattern in it. But AI can look at a series of such brain scans, see through this noise, and say: these are all the same, and these people are all feeling, say, fear?

    💬 Yes. AI can even take advantage of this noise to make very good predictions.

    ❓ How cutting edge is this? This was not possible just a few years ago, right?

    💬 Yes. I think there has been progress on three sides. Progress with imaging technology, MRI. Then we have progress in terms of what we know about human emotions, and the third is the progress in machine learning and AI.

    ❓ Where can this take us in the future?

    💬 When MRI was developed 30 years ago, people said: in ten years we will have understood the entire brain. This did not happen. It was too optimistic. The brain is still the most complex structure in the Universe, so to understand it will still take some time.

    I see more progress from AI in applications. In basic research we look at how emotions are processed in the brain, and we look at mental disorders, because that’s when emotions dysregulate. Patients with depression or addiction have problems controlling their emotions, and they feel these negative emotions very strongly.

    Our hope is to map these emotions in a healthy brain, and then apply AI to support the diagnostics of mental disorders. We now see advances where AI can help make a good diagnosis – for a medical doctor it is difficult to decide: does this patient have depression, anxiety or something else?

    AI could provide us with a probability value – for example, 80% – that this patient will respond to this treatment and not to that one. We will be able to make huge progress, reducing the duration of patients’ suffering and also reducing the costs for healthcare systems.

    The second thing: using AI, you can identify subgroups of patients and make better recommendations for their treatment.

    ❓ What do you mean by subgroups of patients?

    💬 Working as part of a large collaboration, we have recently shown that there is a lot of variation in the symptoms and brain alterations of adolescents with depression. Using findings like these, we could target different brain areas, or provide different treatments (according to the subgroups). Some patients would, for example, respond better to behavioral therapy, others to medication, others to brain stimulation.

    ❓ Can this technology be used clinically, in the real world?

    💬 What we see is that there is good progress, but right now AI is not precise enough for clinical diagnosis. This is about human life. Perhaps soon we will be able to use AI to make recommendations, but currently the predictions are not precise enough to enter clinical practice – we need to have a high level of certainty.

    ❓ There is a lot of fear about AI replacing humans in many jobs. Do you think that in the future AI can replace psychologists?

    💬 I think in the next 10 years I will be able to get away with it! I am not concerned for psychology and I would recommend students who have an interest in psychology to pursue it.

    ❓ Why are you so confident?

    💬 One area where AI will not overtake us is in understanding other humans, communicating with other humans, bringing humans together and treating humans in therapy.

    ❓ Is there a scientific basis to this?

    💬 Yes. An area where we see more and more research is AI interaction with humans. We recently did a study about our trust in other humans and in AI.

    From very early on in our lives, we are very sensitive to whom we can trust. Evolutionarily, this is very deep. If your ancestors did not have this skill, they probably did not survive for long, because they trusted the wrong people or because they did not trust anyone.

    We showed that there is a clear brain basis for our trust in other humans. At the same time, we assessed people’s trust in AI. We asked them: do you trust AI? We saw that these two “trusts” are not related! Moreover, trust in humans was associated with specific brain systems, but we did not see a brain basis for trust in AI.

    We have learned as a species to trust each other. This is ingrained in our biology. But AI, even though it is somewhat human-like, has only been around for a couple of years. How can we know whether to trust it or not?

    👏 Thank you, Professor Becker.