We recently had the opportunity to interview Nello Cristianini, Professor of Artificial Intelligence at the University of Bath, UK. His research spans statistical theory in machine learning, natural language understanding, social media content analysis, and the ethical and social impact of intelligent technologies.
During the recent Smart Life Festival, we were captivated by Cristianini's insights into the pressing issues surrounding AI. This prompted us to delve deeper into his knowledge and introduce his thinking to our readers. We also encourage everyone to read his book, "The Shortcut: Why Intelligent Machines Do Not Think Like Us."
Our discussion with Cristianini revolved around the pressing need for a dialogue between the natural and human sciences. This, he believes, is crucial for a safe coexistence with this new form of intelligence. According to Cristianini, our artificial creations differ from us and may exceed our abilities; they are different from, and sometimes superior to, us. To coexist harmoniously, it is imperative for all of us to understand the ongoing changes, as intelligent machines have become an integral part of our lives. They filter resumes, grant loans, and even write the news we read. Cristianini believes it is essential that we strive to understand them, even if we are not all scientists or researchers. At RED-EYE, we too believe this to be a monumental turning point.
While these machines accomplish many of our desired tasks, interacting with them remains a challenge. Their behaviour is driven by statistical relationships derived from massive amounts of data. These machines continuously observe us and make decisions on our behalf. The challenge is: how do we integrate them into our society without unintended consequences?
Cristianini's approach to this subject is both rigorous and uniquely original. He possesses the ability to explain complex ideas with simplicity and clarity. What struck us most is his calm perspective, often emphasizing the need for fearlessness, especially for the youth, who must learn, make mistakes, and ultimately understand how to live harmoniously with intelligent machines.
Stepping back, Cristianini underlines that intelligence isn't an exclusively human capability. "It's the ability to adapt to new, unforeseen situations." Human arrogance often makes us believe that our intelligence is supreme, much like the ancient belief that Earth was the universe's center. We need to recognize different forms of intelligence, what Cristianini terms alien intelligence, as he refers to "forms of cognition and consciousness that deviate from human paradigms, offering a unique perspective on what it means to think, perceive, and understand beyond the confines of human experience."
He points out, for example, the limitations of instructing a statistical algorithm to avoid discrimination, as the machine doesn't understand context or intentions, much like how YouTube can't discern which videos might propagate conspiracy theories to a young viewer. "We've attempted to program machines logically and grammatically, but it wasn't successful. It wasn't until we incorporated statistical mechanisms that we began to see benefits." This is how ChatGPT operates, generating text through a statistical language model trained on billions of pieces of pre-existing web content. This "shortcut" approach also explains the biases found in these machines: they reflect the biases found on the internet. Moreover, to understand users' interests and objectives, the machine observes and remembers everything, using this knowledge to propose content that has previously enticed the user. Overexposure to certain content can reshape individuals and even foster addictive behaviour, a particular concern for mental health and digital and social media use.
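The statistical idea Cristianini describes can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus, then generates text by sampling from those observed frequencies. This is emphatically not how ChatGPT is built (which uses neural networks at vastly greater scale), but it shows the core of the "shortcut": text produced purely from statistical regularities in pre-existing data, with no understanding of meaning — and faithfully reproducing whatever patterns, including biases, the data contains.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it and how often."""
    model = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=8):
    """Generate text by repeatedly sampling the next word
    in proportion to how often it followed the current one."""
    word = start
    out = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:          # dead end: no observed continuation
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

# A toy "web" of text; the model can only ever echo its statistics.
corpus = ("the cat sat on the mat the cat chased the dog "
          "the dog sat on the rug")
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Note that every sentence the model can produce is stitched from transitions it has already seen; feed it a skewed corpus and it will generate skewed text, which is the bias problem in miniature.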
The most pressing issue today, Cristianini believes, is the need for regulatory laws. "Fortunately, a proposal at the European Parliament seeks to categorize AI use into different risk categories. Instead of regulating the technology, it aims to regulate its use."
However, there are challenges. For instance, if a user reveals vulnerabilities on social media platforms, algorithms might exploit them to maximize clicks, possibly leading to addiction. Cristianini suggests in his book the need for "cultural antibodies." It's a delusion to think an algorithm can block fake news or discern inappropriate content for children without our help; it's up to our culture to solve these problems. "The answers to significant questions require physicists, philosophers, humanists, lawyers, and politicians to come together and address the challenges of the future."
Looking ahead, Cristianini sees a future full of diverse thinking. At RED-EYE, we totally get that. We're here taking notes on today to help shape a better tomorrow. With a lot of hope and hard work, we're all in for a bright future.