Author: kklc

  • Teaching Machines to Think Quantumly: Qi Zhao on the Frontier of AI-Driven Computing


    “Quantum computers don’t just calculate. They learn from the rules of nature itself.” — Prof. Qi Zhao

    Artificial intelligence is everywhere — in our phones, our cities, and the tools we use to think.

    But for Professor Qi Zhao at The University of Hong Kong’s School of Computing and Data Science (CDS), the next leap in AI may come from a place far smaller than any silicon chip.

    His research explores how quantum physics and machine learning can work together to create a new kind of intelligence — one that learns the way the universe learns.


    From Theory to Computation

    Zhao trained as a quantum information theorist, studying how data behaves when stored in particles rather than bits.
    At HKU CDS, he leads a group that builds hybrid computing models combining classical algorithms with quantum processors.

    His goal is simple to state but hard to achieve: use quantum systems to make AI faster, smarter, and more energy-efficient.

    “Classical computers follow fixed paths,” he explains. “Quantum computers can explore many paths at once. That difference changes how learning works.”


    Reimagining Computation

    Traditional AI trains neural networks through repetition — adjusting parameters until patterns emerge.
    Quantum computers take a different approach.

    They rely on variational quantum algorithms, where a small quantum circuit learns by tuning itself with help from a classical controller.

    Think of it as teamwork: the quantum part handles exploration; the classical part handles evaluation. Together, they solve problems that would take ordinary machines far longer to compute. Zhao’s team studies how this cooperation could transform optimization tasks, from image recognition to material design.
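    The division of labour Zhao describes can be sketched in a few lines. This is a toy model, not his group's actual method: the "quantum" half is a simulated one-qubit circuit (a single RY rotation), and the classical controller tunes the rotation angle by finite-difference gradient descent until the circuit's measurement statistics hit a target.

```python
import math

def quantum_expectation(theta):
    # Simulated quantum step: the probability of measuring |1>
    # after applying RY(theta) to |0> is sin^2(theta / 2).
    return math.sin(theta / 2) ** 2

def cost(theta, target=0.5):
    # Classical evaluation step: distance of the circuit's output
    # from the desired measurement statistics.
    return (quantum_expectation(theta) - target) ** 2

# Classical controller: finite-difference gradient descent on the
# circuit parameter, mimicking the variational feedback loop.
theta, lr, eps = 0.1, 2.0, 1e-5
for _ in range(200):
    grad = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
    theta -= lr * grad
# The loop drives theta toward pi/2, where the cost is zero.
```

    Real variational algorithms run the circuit on quantum hardware many times to estimate each cost value; the feedback structure, though, is exactly this loop.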


    Quantum Machine Learning in Action

    Inside his lab, AI helps control fragile quantum hardware.

    Algorithms adjust pulse shapes, timing, and temperature to keep qubits stable. The system learns which conditions produce reliable results and adapts automatically when the environment changes. “It’s feedback learning in the truest sense,” Zhao says. “The machine is teaching itself how to stay coherent.”

    These experiments do more than improve performance. They show how AI and quantum physics can enhance each other.
    AI stabilizes quantum devices; quantum mechanics gives AI new mathematical tools for creativity and pattern discovery.


    Learning from Quantum Data

    Zhao believes that the next revolution will come when AI no longer just analyzes quantum data — it learns inside quantum data.

    His group explores models where quantum systems perform the learning directly, finding relationships hidden from classical logic.
    Such systems might recognize molecular structures or financial correlations beyond human intuition.

    “This is where AI stops imitating intelligence,” he explains. “It begins to share it.”


    Mentorship and Collaboration at CDS

    As a mentor, Zhao encourages students to cross boundaries between physics and computer science.
    He collaborates closely with Prof. Giulio Chiribella, Prof. Yuxiang Yang, and Prof. Ravi Ramanathan, creating a bridge between theory, experiment, and data science.

    In class, he simplifies complex formulas into visual intuition. His students learn not only to code algorithms but also to think about why an algorithm works. “The most exciting discoveries,” he says, “often happen when we try to explain them simply.”


    Looking Ahead: The Shape of Quantum Intelligence

    Zhao imagines a future where AI systems powered by quantum hardware design drugs, manage energy grids, or simulate ecosystems in real time.

    These machines will not replace human reasoning; they will extend it.

    “Intelligence isn’t just logic,” he says. “It’s the ability to learn from limited information. That’s what quantum mechanics has been doing for billions of years.”

    In his view, teaching machines to think quantumly is not just about computation — it’s about understanding learning itself.
    And at HKU CDS, that journey has already begun.

  • When Randomness Becomes Intelligence: Ravi Ramanathan on Quantum Security and the Limits of AI


    “In both physics and life, uncertainty isn’t a flaw — it’s what keeps everything interesting.” — Prof. Ravi Ramanathan

    In a world driven by algorithms, certainty feels powerful. Yet for Professor Ravi Ramanathan of The University of Hong Kong’s School of Computing and Data Science (CDS), the opposite may be true.
    He studies how randomness and trust shape the future of both quantum security and artificial intelligence — two fields that depend on data, but also on doubt.


    From Theoretical Curiosity to Digital Trust

    Ramanathan’s path began in theoretical physics. He was fascinated by the strange mix of order and unpredictability inside quantum systems. Over time, that curiosity evolved into a question that now defines his research: Can uncertainty itself protect information?

    At HKU CDS, his group designs quantum cryptographic protocols that don’t rely on trusting the devices used to send or receive data. Instead, they use the laws of physics — not human assurances — to guarantee security.
    “It’s like replacing a lock built by people with one built by nature,” he says.


    Building Unhackable Systems

    Traditional encryption depends on mathematical puzzles that powerful computers might one day solve. Quantum cryptography flips that logic.
    It uses quantum particles, whose behavior changes when observed, to detect any eavesdropper immediately.

    Ramanathan’s work focuses on device-independent security — a method where users don’t even need to know how their devices are built. As long as the results obey specific quantum correlations, the communication is secure.
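    Those "specific quantum correlations" are Bell-type correlations. A quick sanity check, using only the textbook prediction that measurements on a two-qubit singlet state at angles a and b are correlated as E(a, b) = -cos(a - b): the standard CHSH combination of four such correlations reaches 2√2, while no classical (locally deterministic) device can exceed 2. Observing a value above 2 therefore certifies genuinely quantum behaviour, whatever is inside the box.

```python
import math

def E(a, b):
    # Quantum prediction for the correlation between measurements
    # at angles a and b on a two-qubit singlet state.
    return -math.cos(a - b)

# Standard CHSH measurement angles for the two parties.
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: any local hidden-variable model obeys |S| <= 2;
# the singlet state reaches 2 * sqrt(2) (the Tsirelson bound).
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))  # ~2.828, above the classical bound of 2
```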

    This idea has major implications for AI as well. “AI systems make decisions based on patterns in data,” he explains. “If those data streams are ever compromised, the intelligence that depends on them becomes fragile. Quantum security keeps the foundation solid.”


    The AI Connection: Intelligence Without Certainty

    While most AI operates deterministically — producing the same output for the same input — the quantum world thrives on probabilities.
    Ramanathan believes that future forms of quantum AI may combine these two views of intelligence: the structured logic of algorithms and the creative randomness of quantum mechanics.

    “Learning,” he says, “might require a balance between prediction and surprise.”
    By introducing controlled randomness, quantum systems could explore possibilities that classical AI would never consider. The result might be machines that don’t just calculate outcomes — they imagine them.
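    A classical cousin of this "balance between prediction and surprise" is the epsilon-greedy strategy from reinforcement learning: mostly exploit the best-known option, occasionally pick one at random. The payoff numbers below are invented for illustration.

```python
import random

random.seed(1)

TRUE_PAYOFFS = [0.3, 0.5, 0.8]  # hidden success rate of each option (invented)

def pull(arm):
    # One noisy trial of the chosen option: success (1.0) or failure (0.0).
    return 1.0 if random.random() < TRUE_PAYOFFS[arm] else 0.0

counts = [0, 0, 0]        # how often each option has been tried
values = [0.0, 0.0, 0.0]  # running estimate of each option's payoff
epsilon = 0.1             # fraction of "surprise" choices

for _ in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)        # explore: a random surprise
    else:
        arm = values.index(max(values))  # exploit: best prediction so far
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = values.index(max(values))  # the learner settles on the best option
```

    Without the random choices, the learner can lock onto an early lucky option and never discover a better one; the controlled randomness is what makes the discovery possible.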


    The Ethics of Uncertainty

    As AI becomes more autonomous, questions of control and trust follow closely behind. Ramanathan’s research in quantum randomness adds a unique ethical layer: unpredictability can protect privacy.
    “In cryptography, unpredictability is freedom,” he says. “If every decision were predictable, there would be no security — and no choice.”

    He often compares the challenge of securing algorithms to the challenge of keeping human decision-making free from bias. Both require space for uncertainty. Both depend on humility before complexity.


    Collaboration and Teaching at CDS

    At CDS, Ramanathan collaborates with Prof. Giulio Chiribella, Prof. Yuxiang Yang, and Prof. Qi Zhao to connect foundational theory with practical technology.
    He teaches courses that blend mathematics, physics, and computer science, encouraging students to question what it really means to “know” something in a computational world.

    His mentoring style mirrors his research: open-ended, curious, and slightly unpredictable. “Students learn best when they discover answers for themselves,” he notes.


    Looking Forward: Trusting Uncertainty

    Ramanathan sees a future where quantum communication and AI-driven reasoning merge into systems that are both intelligent and secure.
    These technologies may protect digital infrastructures, power next-generation networks, and even redefine how machines reason about risk.

    “Randomness is often seen as noise,” he says. “But in nature — and in intelligence — it’s also creativity. Embracing it might be our best safeguard in an age of perfect prediction.”

  • How Quantum Sensors Learn: Prof. Yuxiang Yang on Intelligence Beyond Silicon


    “To make better sensors, we need to teach them how to learn from the world around them.” — Prof. Yuxiang Yang

    When most people think about intelligence, they picture circuits and code — the realm of artificial intelligence. But for Professor Yuxiang Yang of the School of Computing and Data Science (CDS) at The University of Hong Kong, true intelligence may start not in software, but in how we sense reality itself.

    Yang’s research bridges the boundary between quantum physics and machine learning, using AI-driven methods to make quantum sensors more precise, adaptive, and even self-correcting. His work explores what happens when perception — human, mechanical, or quantum — becomes an intelligent process.


    A Physicist Who Thinks Like an Engineer

    Before arriving in Hong Kong, Yang trained as an optical physicist fascinated by the limits of measurement. “Every experiment,” he says, “is a conversation with nature. The better you listen, the more you understand.”

    Now at HKU CDS, he leads projects that bring AI algorithms into quantum laboratories, transforming how sensors detect the tiniest fluctuations in magnetic fields, light, or vibration. Where most people see noisy data, Yang sees a new kind of language — one that can be decoded through learning.


    The Science of Quantum Perception

    Quantum sensors exploit delicate quantum states to measure the world with extreme accuracy — detecting signals a billion times smaller than what conventional devices can capture. The challenge, however, is noise: every vibration, temperature shift, or stray photon can overwhelm the signal.

    Yang’s team tackles this by training AI models to separate meaningful information from random interference. Instead of manually tuning every parameter, algorithms learn to predict, filter, and adapt in real time. In effect, the sensor begins to “think” about its own environment.


    When Sensors Get Smarter

    The concept is revolutionary: a sensor that improves through experience. By combining reinforcement learning with quantum-optical hardware, Yang’s group designs systems that adjust their measurement strategies automatically.

    In one project, machine-learning routines identify patterns in experimental noise and recommend control settings that boost sensitivity. In another, neural-network architectures inspired by human vision help interpret complex interference patterns in light-based sensors.

    The results are striking — faster calibration, higher precision, and lower error rates. What used to take days of manual adjustment can now happen in minutes. “It’s like giving intuition to an instrument,” Yang says.
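    The lab's actual control software isn't shown here, but the feedback idea can be sketched with a generic stochastic hill-climber: propose a small change to a control setting, keep it only if the (noisy) measured signal improves. The response curve, noise level, and step sizes below are all invented for illustration.

```python
import random

random.seed(42)

def measured_signal(setting):
    # Hypothetical instrument response: peak sensitivity at setting = 3.0,
    # plus measurement noise. A real lab measures this; we simulate it.
    return -(setting - 3.0) ** 2 + random.gauss(0.0, 0.05)

def avg_signal(setting, repeats=20):
    # Average repeated readings to tame the noise before deciding.
    return sum(measured_signal(setting) for _ in range(repeats)) / repeats

setting = 0.0               # a deliberately bad starting configuration
best = avg_signal(setting)

for _ in range(300):
    candidate = setting + random.gauss(0.0, 0.2)  # propose a small change
    score = avg_signal(candidate)
    if score > best:                              # keep only improvements
        setting, best = candidate, score
# The controller converges near the (unknown to it) optimum at 3.0.
```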


    The AI–Quantum Feedback Loop

    For Yang, the relationship between AI and quantum science runs both ways. Quantum experiments provide unique datasets that could inspire new machine-learning architectures; conversely, AI provides tools that push quantum systems to their limits.

    “Quantum systems don’t process information the way classical computers do,” he explains. “They can explore many possibilities at once. Understanding how they ‘learn’ from data may reshape how we define intelligence itself.”

    This feedback loop — between artificial learning and quantum understanding — is precisely where Yang believes future breakthroughs will occur.


    Mentorship and Collaboration at CDS

    At HKU CDS, Yang works closely with colleagues such as Prof. Giulio Chiribella, Prof. Ravi Ramanathan, and Prof. Qi Zhao in a research environment that encourages cross-disciplinary thinking.

    His students are trained not only to run experiments, but to write code, model data, and reason from first principles. Many take part in HKU’s Common Core course “The Quantum Revolution,” where Yang introduces AI-driven approaches to measurement as a window into the nature of knowledge itself.

    “We’re building a new generation of researchers who speak both languages — the language of physics and the language of data,” he says.


    Looking Ahead: Perception as Intelligence

    As AI systems become ubiquitous, Yang believes the next frontier is not only smarter computation but smarter perception — machines that can sense the world with the nuance of quantum systems.

    His vision is ambitious yet grounded: quantum sensors that monitor ecosystems, detect disease, or guide autonomous vehicles with unprecedented precision — all while learning continuously from the environment.

    “In the end,” he reflects, “AI and quantum physics are asking the same question: how do we extract meaning from uncertainty? The answer may begin with how we choose to observe.”

  • Prof. Giulio Chiribella: Rewriting the Rules of Reality


    “If we change the way events can be ordered, we may discover new ways to compute, communicate, and understand the universe.” — Prof. Giulio Chiribella

    At the heart of modern physics lies a paradox: the deeper we go into the quantum world, the more reality seems to defy common sense. For Professor Giulio Chiribella, this mystery isn’t a roadblock — it’s the fuel for his life’s work. As a theoretical physicist and thought leader at HKU’s School of Computing and Data Science (CDS), Chiribella is rewriting the rules that govern how we understand time, information, and causality.


    An Early Fascination with Fundamental Questions

    Growing up in Italy, Chiribella was drawn not just to physics but to philosophy — asking questions about what’s real, what can be known, and how we represent the world. He earned his PhD at the University of Pavia, where his fascination with the mathematical structure of quantum theory began to crystallize. At Canada’s Perimeter Institute for Theoretical Physics, he established himself as one of the world leaders in a vibrant research community pushing the understanding of quantum foundations and the boundaries of theoretical physics. Later, he became a professor at the University of Oxford, deepening his exploration of the foundations of quantum mechanics.

    Now based in Hong Kong, Chiribella sees the city as a fertile ground for developing the next generation of quantum research — a place where abstract ideas can evolve into global impact.


    Foundations First: Indefinite Causality and Quantum Processes

    Much of Chiribella’s research centers on quantum foundations, particularly the concept of indefinite causal order — where cause and effect can be in a quantum superposition. In this framework, event A might cause event B and vice versa, simultaneously. These ideas aren’t just theoretical mind-benders. They’re opening doors to new types of quantum communication and computation that outperform even traditional quantum protocols.

    His work also explores process theories, abstract mathematical structures that describe physical systems more flexibly than conventional models. These tools help scientists test which parts of quantum theory are essential — and which might change in future theories.


    A Thought Experiment Across Time

    As someone who reshapes how we think about quantum theory itself, Chiribella often reflects on how far the field has come — and how much further it could go. When asked what he’d show to the pioneers of quantum physics, his answer reveals both reverence and ambition:

    “If we could go back and talk to Schrödinger, Heisenberg — or even Einstein — we could tell them that quantum theory can be written in a much more general way… not just using states, but processes. That would probably surprise them — in a good way.”

    Einstein, famously skeptical of the indeterminacy at the heart of quantum theory, might have found Chiribella’s process-based view provocative. Instead of smoothing over quantum strangeness, it embraces it — using tools like indefinite causality to explore new mathematical structures that go beyond classical intuition. While Einstein wanted a deeper reality beneath quantum randomness, Chiribella’s work suggests that the structure of reality itself may be fundamentally different — not deterministic, but richer and more relational than previously imagined.

    For Chiribella, it’s not just about solving problems within the rules — it’s about uncovering new rules entirely.


    From Thought Experiments to Technology

    While his thinking starts with fundamental questions, the implications are far-reaching. Chiribella’s frameworks are being applied in quantum machine learning, quantum communication networks, and quantum cryptography. He has co-authored seminal papers on quantum supermaps, causal networks, and programmable quantum processors — many of which laid the groundwork for today’s advances in distributed quantum computing.


    A Quantum Vision for Hong Kong

    As the founder of the Quantum Information, Computation and Intelligence (QICI) Lab at HKU, Chiribella is helping position Hong Kong as a global hub for quantum innovation. His leadership was instrumental in bringing AQIS 2025 — Asia’s largest quantum conference — to Hong Kong for the first time. The event will convene leading theorists, experimentalists, and industry players to push the boundaries of what’s next.

    But beyond visibility, his vision is about cultivating a deep-thinking research culture — one that embraces foundational work as the seed of future technologies.


    The Quantum Team Around Him

    Chiribella’s activity has a positive impact throughout the whole QICI quantum team. His colleague Prof. Ravishankar Ramanathan carries forward ideas from the quantum foundations playbook into cryptographic protocols that don’t rely on trust. Prof. Yuxiang Yang complements the group with expertise in quantum sensors and applying AI to enhance their sensitivity. And Prof. Qi Zhao, a close collaborator, works at the intersection of quantum information theory and learning, investigating how quantum systems can process, compress, and extract information in ways that classical systems cannot.

    Together, they form a rare cluster of talent that spans both theory and experiment, foundations and applications — a hallmark of Chiribella’s approach to research and collaboration.


    Mentorship and Teaching: Nurturing Quantum Thinkers

    Chiribella is also known for his commitment to mentorship. In classes and labs, he encourages students to think like scientists, challenging assumptions rather than rushing to solve problems mechanically. His students are not just learning quantum mechanics — they’re exploring what it means to investigate reality.

    This mindset is embedded in HKU’s Common Core course, “The Quantum Revolution,” which Chiribella helped shape. It invites students from all disciplines to grapple with the counterintuitive features of quantum theory and reflect on its broader impact on knowledge and society.


    Looking Forward: Questions Still Unanswered

    As the world celebrates 100 years since the birth of quantum physics, the research in this area is far from done. Chiribella’s sights are set on exploring post-quantum theories, testing the boundaries of causality, and understanding how computation, information, and physical law interact.

    “The interplay between information, space, time, and matter is one of the most exciting frontiers of contemporary physics. If we manage to understand it, it will open the doors to new physics that we haven’t even imagined yet.”

    In a field defined by uncertainty, Chiribella’s work seeks clarity — not in final answers, but in how to ask the right questions. And through this work, Hong Kong is becoming a place where deep foundational questions can be explored at the cutting edge of science.

  • Making Cities Smarter with AI


    Meet “Urban GPT” and “Open City,” two projects designed to make our cities not just smarter, but able to predict their own future. Buckle up and let’s see what the future of urban living looks like.

    Professor Chao Huang of the Computer Science Department and his innovative team.

    Q: What is Urban GPT?

    A: Imagine a crystal ball that can tell you what’s going to happen in your city—that’s Urban GPT. This incredible large-language model forecasts everything from traffic flow and transportation demand to even crime predictions. No need for coding skills or an engineering background. Just chat with Urban GPT in natural language, give it a few basic details and instructions, and it’ll whip up predictions in a flash—under a second.

    Q: What sets Urban GPT apart from other smart city AI tools?

    A:

    Versatility on Demand

    While most smart city tools are programmed for specific tasks, Urban GPT is an AI multitasker. From traffic to crime prediction, it’s got all bases covered.

    Ready When You Are

    Urban GPT does not rely on years of training with long-term historical data. It’s designed to deliver high-accuracy predictions from day one. Even under zero-shot scenarios, Urban GPT still maintains high accuracy.

    Q: Who will reap the rewards of Urban GPT?
    A: By harnessing cutting-edge predictive analytics, city planners can seamlessly optimise traffic flow and strategically deploy law enforcement resources. The result? A future-forward approach to urban operations that ensures cities run smoother and smarter.

    Q: What is Open City?

    A: Open City is the driving force behind Urban GPT, helping it analyse urban data effectively. Professor Huang and his team discovered that smaller cities often lack the tech needed for large models, so they created their own urban foundation model from scratch to make sure it works for everyone.

    Zhonghang Li, lead researcher in the Computer Science Department, works with Prof. Chao Huang’s innovative team; together they are revolutionising our understanding of urban systems.

    Q: What are the common concerns about AI tools?

    A:

    Scalability and Implementation: Professor Huang’s team is adept at deploying AI tools across varied urban landscapes, ensuring easy integration with existing systems.

    Data Privacy and Security: Using advanced techniques like federated learning, they ensure your data remains private, while still fully utilising AI’s capabilities.
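    Federated learning itself can be sketched in a few lines. In this toy version (invented data, not the team's system), each "city" trains a shared linear model on data that never leaves it; only the updated parameter travels to a server, which averages the copies (the FedAvg scheme).

```python
def local_update(w, data, lr=0.05, steps=10):
    # One client's training: gradient descent on its private data
    # for the model y = w * x. Raw (x, y) pairs never leave the client.
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Three "cities", each holding private data generated from y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
    [(0.5, 1.0), (5.0, 10.0)],
]

w = 0.0  # shared global model parameter
for _ in range(20):  # federated rounds
    # The server only ever sees trained parameters, never the data.
    w = sum(local_update(w, data) for data in clients) / len(clients)
# w converges to the true slope of 2.0 without centralising any data.
```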

    Q: What is the next step?

    A: Professor Huang and his team are developing an AI assistant. Imagine Jarvis from the movie “Iron Man”. With the support of Urban GPT and Open City, this AI assistant can help you plan a long trip. No need to worry about flight tickets, sightseeing routes or travel plans. Your personal AI assistant will handle it all with just a snap of your fingers.

    More about Prof. Huang’s research and initiatives: https://sites.google.com/view/chaoh

  • AI helps us decode bacteria’s chemical language and harness their weapons


    “We use AI to decode bacteria’s chemical language.” — Prof. Yongxin Li

    Professor Philip Yongxin Li, from the Department of Chemistry, specialises in chemical biology and drug discovery, with the focus on bioinformatics-guided drug discovery and biosynthesis. In this interview with our science editor, Dr Pavel Toropov, Professor Li talks about how his team uses AI to create new antibiotics. 

    ❓ Dr Pavel Toropov: Could you explain your research? 

    💬 Professor Yongxin Li: We work on discovering new antibiotics to tackle the problem of antibiotic resistance – superbugs. Because of ongoing overuse, current antibiotics are failing, and superbugs – antibiotic-resistant bacteria – are emerging. 

    My job is to learn from Mother Nature. In the natural environment, bacteria use chemicals as weapons in competition with other bacteria. These are very intense chemical interactions! 

    Our job is to decode this chemical language, make good use of bacteria’s chemical weapons, repurpose them for therapy, and develop them into antibiotics and anti-virals to kill human pathogens. 

    But rather than following the traditional way – culturing bacteria, isolating them and identifying chemical compounds that they make, which is time- and labor-consuming – we look at the genetic potential of bacteria, mining the chemicals from bacterial genomes from large datasets. 

    Instead of using synthetic chemistry to make new antibiotics, we use synthetic biology to harness their genetic potential for drug discovery. We use cell factories, cell assembly lines to produce chemicals for us. We clone biosynthetic genes [Note: biosynthetic genes are genes that produce complex chemicals, such as those used to kill other bacteria], plug them into a cell’s factory, and let the cell factory build the antibiotic for us. 

    ❓ How is AI used in your work? 

    💬 We use AI to decode bacteria’s chemical language. Genome information is now available online – more than one million bacterial genomes are available! 

    We developed a new methodology called genome mining. AI looks at a million genomes and assesses their genetic potential for coding an antibiotic or an anti-viral molecule. 

    💬 The traditional methods analyse the genomes one by one. It is not efficient, and the chance of discovering a new antibiotic is low. So, we train AI to select, from one million genomes that can contain 20 or 30 million biosynthetic genes, the genes that code for antibiotics. 

    We use AI to select and prioritise the genes with the highest probability that they code for new antibiotics. Using AI we can also predict the antibiotics’ structure and bioactive potential. 
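    As a rough illustration of this prioritisation (a toy stand-in, far simpler than the models the team actually trains), one can rank candidate gene sequences by k-mer similarity to known antibiotic-coding genes and pass only the top hits to the wet lab. The sequences below are invented for illustration.

```python
def kmers(seq, k=3):
    # All overlapping substrings of length k in a DNA sequence.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def score(candidate, known_positives, k=3):
    # Mean Jaccard similarity of the candidate's k-mer set against
    # known antibiotic-coding genes (a crude proxy for a trained model).
    cs = kmers(candidate, k)
    sims = [len(cs & kmers(ref, k)) / len(cs | kmers(ref, k))
            for ref in known_positives]
    return sum(sims) / len(sims)

# Invented example sequences, for illustration only.
known = ["ATGGCTTGCACC", "ATGGCATGCACC"]
candidates = {
    "cand_A": "ATGGCTTGCACG",  # close to a known antibiotic gene
    "cand_B": "TTTTAAAACCCC",  # unrelated
    "cand_C": "ATGGCAAGCACC",  # moderately similar
}
ranked = sorted(candidates, key=lambda c: score(candidates[c], known),
                reverse=True)
# ranked puts cand_A first and the unrelated cand_B last.
```

    Real genome-mining models learn far richer features than shared k-mers, but the pipeline shape is the same: score millions of candidates cheaply in silico, then validate only the best few experimentally.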

    ❓ How much time does AI save? 

    💬 Using the traditional way, the in-silico screening process [Note: “in silico” means biological experiments conducted on a computer, or via a computer simulation, to make predictions about the behavior of different compounds] can screen several thousand biosynthetic genes and narrow them to about 100 for experimental validation. Using AI, we can start with 20 or 30 million genes and evaluate their potential for coding for antibiotics. 

    The traditional way can take years. But, using AI, we can finish the assay in silico within a few days or even hours. To validate the result, you still need to clone the gene, so we use synthetic biology, and this part still takes a long time. 

    ❓ How close are you to creating a new antibiotic? 

    💬 Drug discovery and drug development is a very long process. But with one of our lead compounds, we have finished pre-clinical tests and in vivo anti-infection studies, and evaluated ecotoxicity. It is ready for the next step. 

    👏 Thank you, Professor Li.

  • AI makes the impossible possible for us


    Professor Haibo Jiang works at the Department of Chemistry at the University of Hong Kong and is the director of the multi-disciplinary JC STEM Lab of Molecular Imaging. He received his PhD from the University of Oxford and joined HKU in 2021. In this interview with our science editor Dr Pavel Toropov, Professor Jiang explains the importance of AI in his research.

    ❓ Dr Pavel Toropov: What is the research focus of your lab?

    💬 Professor Haibo Jiang: My lab focuses on the development of new imaging technologies to see inside biological systems at a very small scale. For example, what is happening inside a single cell or a single organelle.

    Our molecular imaging combines different modalities of microscopy – optical, electron and mass spectrometry imaging. By combining them we can extract information from one sample and understand what is happening in its biology and structure.

    ❓ What are the images used for?

    💬 One example is tracking, with high resolution and very high sensitivity, of drugs in biological systems. We can see when people (and we also use animal models) take the drug, where the drug goes, and how it gets to the target to be therapeutically effective.

    We combine structural information from electron microscopy with chemical information from mass spectrometry imaging, and we can then reliably correlate where the drug is – in which organelle, in which cell, in which tissue of which organ. We can also learn why the drug is, or isn’t, effective, and why it causes side-effects.

    ❓ Which drugs are you working with?

    💬 Our system is versatile. We have applied it to understand the traffic of antibiotics. Once in the human body, antibiotics need to get to the bacteria to kill them. We can track a range of different antibiotics to see if they get to the right cells at the infected site. We also applied our methods to study cancer drugs to see where the drug gets into the cell, because this is important for its efficacy.

    ❓ And what does the AI do?

    💬 For us, AI makes the impossible possible! With AI we can achieve high image quality at high speed.

    ❓ Could you explain?

    💬 The biggest problem in imaging is the compromise between image quality and image speed. AI speeds things up and also provides better resolution. Currently, the hardware we have has its limit in spatial resolution, so the improvement has to come from the software – which means AI.

    One of the major limitations of our method is that it is slow. It is the nature of the microscopy techniques that we use. We scan pixel by pixel, and there is a compromise between the quality of the image and the speed of the imaging. If we scan fast, there will be noise and the signal will be low. If we scan one pixel ten times, we get a higher signal and less noise, and the quality is much higher, but it takes longer.

    But using AI, we can improve electron microscopy speed by more than 10 times, making it faster and more efficient. With AI, we can acquire lower-resolution images at a faster rate, cover a big region of the sample, and still extract more information from one biological sample.
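    The ten-scans-per-pixel trade-off has a simple statistical core: averaging N independent noisy readings shrinks the noise by a factor of √N, at N times the acquisition time. A quick simulation with invented noise levels:

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 5.0  # the "real" signal at one pixel (arbitrary units)

def one_scan():
    # A single fast scan: true value plus detector noise (std = 1.0).
    return TRUE_VALUE + random.gauss(0.0, 1.0)

# Fast imaging: one reading per pixel.
fast = [one_scan() for _ in range(5000)]
# Slow imaging: average ten readings per pixel, at 10x the scan time.
slow = [sum(one_scan() for _ in range(10)) / 10 for _ in range(5000)]

print(statistics.stdev(fast))  # ~1.0
print(statistics.stdev(slow))  # ~0.32, roughly 1/sqrt(10) of the above
```

    AI-based denoising aims to recover something close to the "slow" quality directly from the "fast" data, which is how the tenfold speed-up Professor Jiang describes becomes possible.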

    ❓ What is the next step for you?

    💬 Life is in 3D, but what we just talked about is 2D. When we get to 3D imaging it is even more challenging!

    There is a technique that allows you to look at 3D structures of cell organelles, but at most, you can do around 200 by 200 by 200 microns, and that’s really small for a tissue sample.

    But scientists dream of seeing a big sample in 3D – for example, how the neurones are connected in the human brain. This is not possible with current technology. Our dream is to develop algorithms that achieve high-speed 3D imaging of large biological samples. We are not there yet, but that is our aim.

    ❓ What is the future of AI in your field?

    💬 I think, in biological imaging, AI will be everywhere. From the imaging itself, to the data analysis. I only started collaborating with AI people after I came to HKU, but I think AI will be the future.

    But AI will not replace what we, humans, do. We need to learn how to employ AI in our research – to do what we do, but better.

  • We used to wait for a year for results, with AI – a week

    Brute-force computation cannot solve these difficult problems – because of the exponential wall, one has to approach them with deeper understanding: either applying artificial intelligence or human intelligence. Now with AI, many of the previously impossible simulations – because they required vast amounts of computational power – are becoming possible.

    Professor Ziyang Meng is an acclaimed computational condensed matter physicist and is one of the pioneers of the use of AI in computational physics. His research focuses on developing large-scale numerical and machine-learning simulations to investigate quantum materials. Professor Meng has published more than 100 papers in top journals such as Nature, Nature Physics and PNAS. In this interview with our science editor, Dr Pavel Toropov, Professor Meng explains how AI has revolutionised quantum and computational physics research. He also talks about ancient Greece and mahjong.

    ❓ Dr Pavel Toropov: How do you use AI in your work?

    💬 Professor Ziyang Meng: Quantum materials are very complicated. The existing methods usually face an exponential wall.

    (Note: in a quantum system, each particle can exist in multiple states. As the number of particles increases, the number of possible states increases exponentially, and thus so does the amount of information needed to describe the system and the computational capacity to do so. This increase is known as the exponential wall.)
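    The growth described in the note can be made concrete with a few lines of Python (my illustration): the number of complex amplitudes needed to describe n spin-1/2 particles is 2**n, which quickly outruns any computer's memory.

    ```python
    # The growth behind the "exponential wall": a system of n spin-1/2
    # particles needs 2**n complex amplitudes to describe its full state.
    for n in (10, 30, 50):
        amplitudes = 2 ** n
        # 16 bytes per double-precision complex amplitude
        memory_gib = amplitudes * 16 / 1024**3
        print(f"n={n}: {amplitudes:.2e} amplitudes, ~{memory_gib:.3g} GiB")
    ```

    At n=30 the state already takes 16 GiB; at n=50 it takes millions of GiB – that is the wall.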

    The wall is very high! Our computational capacity cannot jump over it. So, we use AI to extract quantum information in a quantum material. This information is used to design better algorithms that help us jump over the exponential wall to look into new materials and new properties.

    For example, we developed what is called the Self-Learning Monte Carlo Method. It is one of the first examples of employing explainable-AI techniques in quantum many-body systems. It helps to open up the field of AI-inspired algorithms for reducing numerical complexity in the computational research of quantum materials.

    My inspiration for developing these AI-related algorithms comes from the (ancient) Greek Delphic maxim “know thyself”.

    ❓ This is a lot of terminology! Before we get to the ancient Greece part, could you explain why quantum materials are complicated?

    💬 The basic ingredients of quantum materials – electrons, billions and billions of them – are subject to mutual quantum mechanical interactions and the complicated chemical, physical and topological environment they live in.

    The full quantum treatment of so many electrons is way beyond paper and pencil. Instead, it requires modern computational techniques and advanced theoretical analyses.

    Brute-force computation cannot solve these difficult problems – because of the exponential wall, one has to approach them with deeper understanding: either applying artificial intelligence or human intelligence. Now with AI, many of the previously impossible simulations – because they required vast amounts of computational power – are becoming possible.

    ❓ Can you give an example?

    💬 The Self-Learning Monte Carlo algorithm. With this algorithm, we first use AI to extract better model parameters from a smaller-scale simulation – one with just a few electrons. These parameters can more accurately represent how the billions of electrons interact with each other inside the material, and how they respond to experimental conditions such as temperature and electric or magnetic fields.

    (Note: Monte Carlo simulation allows us to model and solve problems that involve randomness and big data. It is used to handle such situations by testing many possible scenarios.)
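    As a toy illustration of the Monte Carlo idea in the note (not part of Professor Meng's method), one can estimate π by testing many random scenarios – points thrown into a unit square – and counting how often they land inside the quarter circle:

    ```python
    import random

    random.seed(42)
    n = 100_000
    # a point (x, y) with x, y in [0, 1) lies inside the quarter circle
    # when x**2 + y**2 <= 1; that happens with probability pi / 4
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    print(4 * inside / n)  # close to 3.14
    ```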

    Then we can start the large-scale Quantum Monte Carlo simulation on supercomputers. It is faster than the traditional simulation without the self-learning step that involves AI.
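    The parameter-extraction step can be caricatured in a few lines of NumPy (a toy of my own, not the group's code): sample small, cheap spin configurations, evaluate their "expensive" energies, and regress those energies onto simple features to recover effective couplings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 16  # small chain: the cheap, few-particle training scale

    def expensive_energy(s):
        # stand-in for the costly model: nearest- and next-nearest couplings
        return -1.0 * np.dot(s, np.roll(s, 1)) - 0.3 * np.dot(s, np.roll(s, 2))

    # sample small-scale configurations and record their energies
    configs = rng.choice([-1.0, 1.0], size=(500, L))
    energies = np.array([expensive_energy(s) for s in configs])

    # features of the effective model: the two correlation sums
    f1 = np.einsum('ij,ij->i', configs, np.roll(configs, 1, axis=1))
    f2 = np.einsum('ij,ij->i', configs, np.roll(configs, 2, axis=1))
    X = np.column_stack([f1, f2])

    # least-squares "self-learning" recovers the effective couplings
    J_eff, *_ = np.linalg.lstsq(X, energies, rcond=None)
    print(J_eff)  # close to [-1.0, -0.3]
    ```

    The learned effective model can then drive fast proposals in the large-scale simulation, which is the spirit of the self-learning step.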

    The self-learning step is crucial. It gives us better and more accurate model parameters, which means that we get to know the properties of the material better. This is what I meant by the Delphic maxim “know thyself”.

    To “know thyself” means that we must find the most important interactions among these interacting electrons. Self-learning, therefore, is as modern as AI and quantum physics, but also as old as the beginning of human civilisation – ancient Greece.

    ❓ Do you mean the Delphic maxims – the set of moral principles that were inscribed on the Temple of Apollo in ancient Greece?

    💬 Yes, and I even wrote a popular science article about this – “From Delphic Oracle to Self-Learning Monte Carlo”. The article is in Chinese and was published in Physics – a journal of the Chinese Physical Society (http://www.wuli.ac.cn/cn/article/doi/10.7693/wl20170406).

    ❓ So, AI speeds things up by saving quantum physicists a lot of number crunching?

    💬 AI in quantum physics research does not only mean that we can compute faster. AI helps us find better, more accurate, models for the quantum materials. This allows us to better understand the material and also better understand the process of understanding: that’s how we can come across new laws of physics.

    ❓ Quantum physics is not something that most people experience in their daily lives. To non-specialists, such research may sound very abstract, entirely theoretical. What does your research mean in the “real world”? What are the practical applications of your work?

    💬 Our Momentum-Space Monte Carlo self-learning method deals with a new mystery in a quantum material: the magic angle twisted bilayer graphene in which superconductivity has been recently discovered. Graphene is what we have in every pencil! If we can elevate the superconducting temperature from minus 270 Celsius to, say, room temperature, we can solve the global energy crisis.

    Our recent paper on this was awarded the 2024 Top China Cited Paper Award for Physics.

    ❓ How can this solve the energy crisis?

    💬 Using superconducting cables and wires, the electricity, once generated at the power station, will not be lost as heat that dissipates into the air. This is because electrons in the superconducting state do not experience resistivity as they do in commonly used conductors such as copper, iron and other metals. Not experiencing resistivity means electron movement will not be slowed down, converted into heat and lost, so 100% of the generated energy can be used for the intended purpose.
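    A back-of-the-envelope sketch shows the scale of the Joule losses being described. The figures below are my own illustrative assumptions, not numbers from the interview:

    ```python
    # assumed figures for illustration only
    P_delivered = 1e9   # 1 GW carried by the line
    V = 400e3           # 400 kV transmission voltage
    R_line = 5.0        # total line resistance in ohms (normal conductor)

    I = P_delivered / V            # line current in amperes
    loss_copper = I ** 2 * R_line  # power lost as heat in a normal conductor
    loss_super = I ** 2 * 0.0      # zero resistance: no Joule loss at all

    print(f"current: {I:.0f} A")
    print(f"copper loss: {loss_copper / 1e6:.2f} MW")  # tens of MW as heat
    print(f"superconductor loss: {loss_super} W")
    ```

    Even at high transmission voltage, a resistive line wastes tens of megawatts as heat; a superconducting line would waste none.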

    ❓ In your office you have a physics-themed mahjong set. Why?

    💬 Mahjong is a strategy game. In quantum physics, AI helps the physicist with strategy, allowing the physicist to better understand the problem. I find AI to be a very good partner who helps the physicist solve the mysteries of Mother Nature.

    Mother Nature plays games with us, hiding her secrets behind complicated phenomena, and we need a good partner – like AI, to play this game, and find solutions.

    I am teaching a new undergraduate course at the HKU Physics Department – PHYS3151: Machine Learning in Physics, where you can learn how to use AI techniques to solve problems – from Newtonian mechanics and electromagnetism to quantum phenomena. Everyone is welcome to join!

    https://quantummc.xyz/hku-phys3151-machine-learning-in-physics-2024/
    👏 Thank you, Professor Meng!

  • What takes us one day, AI can do in a few minutes

    One ostracod is one white dot on the slide. In one second AI can identify 20 of them. There are several hundred ostracods here on this slide. Identification will take a few minutes for AI, but by eye, depending on the person – one day, or several hours.

    Professor Moriaki Yasuhara works at the Swire Institute of Marine Science, the School of Biological Sciences. One of his main research interests is paleoecology – the interaction of ancient organisms and their environment. In this interview with our science editor Dr Pavel Toropov, Professor Yasuhara and his PhD student Jiamian Hu explain how AI deep learning tools have transformed their research.

    ❓ Dr Pavel Toropov: Could you explain the direction of the research done by your laboratory?

    💬 Professor Moriaki Yasuhara: We want to understand the climatic and environmental impact on our planet, especially on marine ecosystems and biodiversity. We are interested in how climate change, global warming, acidification, and oxygen decline affect marine animals.

    Our laboratory focuses on paleobiology. We study marine biology, but over longer time scales, using the fossil record. In contemporary biology, scientists start monitoring after they realise there is a problem. Once they realise there is, for example, pollution, they start monitoring. But we don’t know what the natural environmental conditions were before pollution.

    But, by studying sediment cores and the deep-time fossil record, we have long time series throughout – before and after. We can go back hundreds of thousands, tens of millions or even hundreds of millions of years.

    ❓ Dr Pavel Toropov: What animals do you use in the fossil record?

    💬 Professor Moriaki Yasuhara: Most animals – fish, jellyfish, worms, marine mammals – don’t have a good fossil record, as they have no hard parts, for example shells, that allow for good fossil preservation. Or they are too large to be abundantly preserved in a small amount of sediment as fossils. So, we need a representative, a surrogate, to make conclusions about the global marine ecosystem.

    One representative is ostracods. They are tiny crustaceans with really nice calcium carbonate shells, and they have some of the best fossil records amongst all crustaceans, arthropods, and metazoans.

    So, by studying ostracods, we can know not only about ostracods themselves, but, using them as a representative, learn about the entire ecosystem, the entire biodiversity.

    ❓ Dr Pavel Toropov: Where do your ostracods come from?

    💬 Professor Moriaki Yasuhara: Mainly from the Cenozoic Era – from 66 million years ago to the present. Some of my students are working on Ordovician samples – from more than 400 million years ago. My research locations include the Arctic, Antarctic, Atlantic Ocean, Indian Ocean, Pacific Ocean, Red Sea, Mediterranean Sea… Hong Kong, Africa.

    ❓ Dr Pavel Toropov: So, to explain your work in simple terms: you get a core sample of the sediment from the bottom of the sea, take out all the tiny ostracods, put them on the microscope slide. Then you identify what species they are. Because different species prefer different conditions, by knowing how the numbers of different species of ostracods changed with time, you can make conclusions on the changes in the entire marine ecosystem, correct?

    💬 Professor Moriaki Yasuhara: Yes.

    ❓ Dr Pavel Toropov: How do you use AI in this?

    💬 Professor Moriaki Yasuhara: There are several problems (working with ostracods). First, it is very time-consuming – picking, identification, taxonomy. Also, we need expert knowledge. To train one person to be good at ostracod identification and taxonomy takes many years. An entire PhD is probably necessary.

    Recently, I have been working with my PhD student Hugo Jiamian Hu to automate this process by applying AI deep learning. He did a very good job, and now we can scan entire slides automatically, using our digital microscope.

    Hugo used more than 200,000 ostracod specimens for training our AI, and now the AI can do its own automatic identification. Identification is now much faster, and we can use much bigger data.

    💬 Jiamian Hu: Yes, and having a lot of data, big data, means quite something! The 200,000 research-grade, specialist-identified samples ensure that our deep neural network can effectively learn patterns in ostracod identification.
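    A minimal sketch of the kind of PyTorch classifier described – specimen images in, species labels out. This is my illustration with dummy data and made-up layer sizes, not Hugo's actual network:

    ```python
    import torch
    import torch.nn as nn

    class OstracodNet(nn.Module):
        """Tiny CNN standing in for the deep network trained on the
        200,000 expert-identified specimen images."""
        def __init__(self, n_species: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, n_species)

        def forward(self, x):
            h = self.features(x)
            return self.classifier(h.flatten(1))

    # one training step on dummy 64x64 grayscale crops standing in
    # for the labelled specimen images
    model = OstracodNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(8, 1, 64, 64)           # batch of 8 fake crops
    y = torch.randint(0, 10, (8,))          # fake species labels
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    print(model(x).shape)  # one score per species for each image
    ```

    At inference time, each white dot on a scanned slide would be cropped and passed through such a network, which is what makes identifying hundreds of specimens a matter of minutes.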

    ❓ Dr Pavel Toropov: How much time does using AI save you?

    💬 Jiamian Hu: We have a PhD student who has about a hundred samples of ostracods from Panama. Before AI, by hand, one by one, it may take her several days to finish a sample. Now, using AI – less than an hour.

    (Shows a microscope slide with ostracods) One ostracod is one white dot on the slide. In one second AI can identify 20 of them. There are several hundred ostracods here on this slide. Identification will take a few minutes for AI, but by eye, depending on the person – one day, or several hours.

    In addition, Professor Yasuhara is not always free, but AI is always free. So, when a student has a question about identification, AI can always help.

    ❓ Dr Pavel Toropov: Did you write this deep learning program yourself?

    💬 Jiamian Hu: I wrote it with PyTorch. I built it from scratch; it is specifically designed for this task. I was a computer science student before.

    💬 Professor Yasuhara: Not only is AI more time-efficient – by using AI and deep learning, we have made exciting discoveries and learned new things. AI can discover errors and misidentifications. AI can give you new questions to answer.

    👏 Dr Pavel Toropov: Thank you both.

  • We create a new reality with AI

    Computer vision and artificial intelligence. To put it simply, computer vision means giving machines the capability to see. Humans can see the 3D world – the objects, the relationships between them, and a lot of semantics. Then we make decisions for our many activities in the 3D world.

    Professor Xiaojuan Qi works at the Department of Electrical and Electronic Engineering at HKU where she is a member of the Deep Vision Lab. Her work covers deep learning, computer vision and artificial intelligence. In this interview with our science editor, Dr Pavel Toropov, Professor Qi talks about self-driving cars and building virtual worlds.

    ❓ Dr Pavel Toropov: What is the main direction of your work?

    💬 Professor Xiaojuan Qi: Computer vision and artificial intelligence. To put it simply, computer vision means giving machines the capability to see. Humans can see the 3D world – the objects, the relationships between them, and a lot of semantics. Then we make decisions for our many activities in the 3D world.

    In order for a robot, a machine, to go around this world, it must also be able to see. It must recognise different objects and estimate their geometry. This has a lot of applications, one of which is self-driving cars. For a car to be able to drive automatically, it must have the ability to see what is in front of it and what obstacles there are, forecast the behaviour of other agents, and plan how to drive safely.

    An automated driving system has several parts. One part is about perception – how can a car get knowledge from the environment? Most of this knowledge is visual data from digital cameras and Lidar. Lidar is used to detect 3D objects, and, based on that data, the car can make decisions, like adjusting speed or turning. Our algorithm helps the machine better analyse this data, better understand what is happening around it.

    Another application is medical. We develop AI to automatically analyse medical images to make diagnoses more informed and more precise, and to reduce the possibility of mistreatment.

    Another exciting area is AI for science. I am collaborating with the Department of Chemistry, and we have developed an AI algorithm to improve the resolution of electron microscope images. This can help biologists make discoveries.

    ❓ Automated driving and AI are not new, what does your research contribute to this field? What are your strengths?

    💬 In order to test if an automated car can drive safely, we need a simulation platform. What we are currently doing is building a simulation environment so that we can help train the models and evaluate whether the car can drive safely in a real environment. Do you know (the massively popular computer game) Wukong?

    ❓ Of course!

    💬 The scenes in this game look very real, and the reason is that the developers used Lidar to scan objects, historical buildings especially, in Shanxi province (of China). They did a reconstruction of them and imported them into the virtual environment – the computer game.

    This is very similar to what I am doing. Using such scans, but without relying on expensive Lidar scanning techniques, we reconstruct the world into virtual space, mostly using images shot with a digital camera. We create a completely new reality!

    Another strength is that we are working to make the algorithm run on casually captured data. For example, in Wukong, they needed experts to scan objects and do reconstruction, but what we are doing allows anyone, not only experts, to use their phones to scan. Then we can make algorithms that can reconstruct the scenes.

    ❓ So you reconstruct, or build, a new reality, a virtual world, to train or test automatic cars and robots?

    💬 Yes. We can use Lidar or digital camera scans of a room or a city and turn the real world into a digital space using algorithms. Besides, we also create models that can generate 3D objects, such as tables and chairs. And in this reconstructed or recreated digital world we can train our algorithm and test if it makes mistakes or not.

    ❓ What is the advantage of using the virtual world for training and evaluating algorithms?

    💬 We can get data from interactions – for example a cleaning robot must move a table in the virtual world – and this can then be used to train agents – robots – to interact with the real, physical world. Training in the real world is expensive, and not safe – the robot can break objects, harm humans. But in the virtual world we can produce an infinite amount of data and interactions.

    Besides, we can create what is called corner cases and improve safety. These are cases that happen very rarely in reality, but are critical – for example, two cars colliding. We can create these scenarios and let the car learn what to do.

    ❓ Have you partnered with anyone in the industry?

    💬 We work with APAS (Hong Kong Automotive Platforms and Application Systems R&D Centre, set up by the Hong Kong SAR government); we have a collaborative project in automated driving. It is a Hong Kong-based organisation with branches in Mainland China. There is also (the car-hailing app) Didi. And we have collaborations with Google, Tencent and ByteDance.

    ❓ What is the main difficulty machines have when trying to see the world?

    💬 The variety and the diversity of data within the environment. For example, we are in this room, it is now bright, but when it is dark, or when the weather is different, this creates a lot of challenges for the model, for the machines, to recognise the same objects.

    The (car) camera will capture different viewpoints, under different lighting conditions, weather conditions… all these variations make this problem very complicated for machines, even though for humans it is very easy to interpret objects under different conditions.

    So, in order for a machine to recognise an object properly we must include this object into its training data, and to be robust, the model must have a lot of training data to cover all the potential scenarios. If one is not covered, in the deployment stage there will be a lot of mistakes.

    For example, in the US and Europe cars are different sizes. This also creates difficulties when developing 3D detection models. If the model is trained only on data collected in the USA and then applied in Europe, it may make mistakes. This is why companies have to develop foundation models, which are designed to be large and to take in large amounts of data; the assumption is that the data can cover real-world diversity. ChatGPT is a huge model with hundreds of billions of parameters. It is trained on data from the entire Internet, but it also makes mistakes.

    ❓ Self-driving cars are already on the road in Mainland China, correct?

    💬 Yes. Such cars are already on the road. Baidu has self-driving cars already in China. I am collaborating with Baidu. In the city of Wuhan, Baidu has a car service called LuoBo KuaiPao. There are no human drivers, but there is a human remote controller that can take over if a challenging scenario happens. One human controller can handle over 20 cars.

    ❓ When do you think self-driving cars will be as common as “normal” cars?

    💬 It is coming. I think it will come in the next few years. The major issue is that humans cannot tolerate any mistakes from AI models. It is big news if a self-driving car makes a mistake, but humans also make mistakes. We need to accept that machines can make mistakes. Humans do, and they make a lot of mistakes! The issue is – how to make humans trust machines? We need human-machine collaboration.

    👏 Thank you, Professor Qi.