
What is AI and what can we learn from it?: Dannier Xiao

Updated: Feb 7

Please note: This interview has been edited for clarity and readability. Dannier Xiao’s views remain fully represented.


Dannier Xiao is a doctoral student at the University of Warwick, currently completing a PhD in Artificial Intelligence while also working as a lab teacher for the WM919-15 Machine Intelligence and Data Science module. Before this, he received an MSc in Advanced Mechanical Engineering from Imperial College London. During his studies, Dannier has accrued a range of professional experience in engineering and finance through roles at Rolls-Royce Motor Cars, Siemens and Goldacre.


In his talk ‘What is AI and what can we learn from it?’ for TEDxWarwick’s Student Salon ‘Square One’, Dannier delves into the story of Artificial Intelligence, addressing the past, present and possible future of this revolutionary technology. While the public’s infatuation is recent, decades of trials and tribulations have led to this moment.


Dannier highlights that, through persistence and resilience, researchers have found a way to outlast and outgrow the initial criticism and scepticism. This success story is one we can all learn from.


What motivated you to share your experiences at TEDxWarwick? 


AI has become a buzzword in today’s society, attracting a lot of interest. However, there are many misconceptions out there, and misconceptions drive fear. I wanted to help people go behind the scenes and understand that AI has a large human component to it. It is not a black box that wants to take over the world; it just makes predictions based on the data that we feed it. Ultimately, it is just a tool, and we should not be afraid of that.


I also wanted to give the audience the lesser-known and inspiring story of AI’s history. People get the impression that it was an overnight success, but it was 70 years in the making. There were periods when people almost gave up and thought it was impossible. I find that story inspiring.


What do you find to be the most interesting new developments in AI?


Probably large language models, such as GPT-4, the model behind ChatGPT. OpenAI released this model to the public to see what people would use it for. They found that, by feeding it what had been written on the internet, it could quickly digest vast amounts of information, making it useful as a more general tool. We are now able to speak to these machines in an organic way, rather than having to go through code. In parallel, we are developing multimodal language models, which let us input videos and images, and the next step is to create those videos and images.


Do you see these large language models as a path towards artificial general intelligence in the long term?


Ultimately, large language models take a vast amount of information and, from that, build a probability distribution over which words follow others. A model therefore does not know when it is hallucinating and making things up, which shows it lacks the fundamental reasoning capabilities that an AGI* would require. It would need guardrails to nudge it in the right direction. Just as in humans, the first thought that comes into our head is often muddled, but we organise it before we speak and filter the noise into a coherent sentence. So, large language models could be an interesting way to build the baseline, but they would need those extra layers to help them reason. I think they will be part of, but only part of, the tech stack.


*Artificial General Intelligence (AGI) is defined as AI that can perform any task that a human being is capable of.
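

To make the “probability distribution over which words follow others” idea concrete, here is a minimal, purely illustrative sketch of next-word sampling. It uses an invented toy corpus and a simple bigram counter; this is not how a production large language model is built (real models learn these distributions with neural networks over enormous vocabularies), but the underlying mechanism of predicting the next word from what came before is the one Dannier describes.

```python
import random
from collections import Counter, defaultdict

# A toy corpus invented for illustration; real models train on vast swathes of internet text.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which (a bigram model): the simplest possible
# "probability distribution over which words follow others".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):  # wrap around so every word has a successor
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

# Generate text one word at a time, just as an LLM emits one token at a time.
word = "the"
sentence = [word]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

Because such a model only ever asks “what usually comes next?”, it has no built-in notion of truth, which is why it cannot tell when it is hallucinating.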


Meta, Facebook’s parent company, has released its large language model to the public. What do you think are the pros and cons of open-sourcing these base models, allowing anyone to get their hands on them?


The pros are that you have the cognitive diversity of all the universities and all the experts tinkering with it in their free time, allowing a vast array of ideas to be generated. This could be text-to-voice, allowing you to start speaking with the AI, like a real-world version of Jeeves. You might hook it up to your car and, suddenly, Siri becomes useful. Projects that would not make financial sense for Meta to pursue might see the light of day. The cons, though, are that it could fall into the wrong hands and be used for the wrong things. However, we have to be optimistic and believe that the pros will outweigh the cons, and that this will drive innovation rather than stifle it.


Mark Zuckerberg said in an earnings call last year that the content users consume on apps such as Instagram and TikTok will increasingly be AI-generated. Should this be a cause for worry, as people will see exactly what they want to see?


That’s a tough one. One of the problems of giving people what they want to see is that it can become an echo chamber for political ideals and xenophobic views, potentially leading to further polarisation. As we saw with some recent elections and the news that circulated on social media, platforms figured out who was on the left and who was on the right, then pumped targeted adverts at them, making people lean away from the middle and walk towards the extremes.


You pointed out the impact of social media on elections. It seems that governments around the world have failed to regulate social media adequately. How much do you trust these governments to regulate AI correctly and stop what you described from happening again?


From what we have seen of governments’ responses to these AI systems, and to social media in general, they are likely a step behind. They seem to be more reactive than proactive. The issue is that, to be proactive, you need someone who deeply understands the technologies and works with the developers. Currently, there is no feedback loop between government and industry; they hardly talk to each other. The companies want to maximise user retention and profits, while the government wants to safeguard the online safety of its people, and sometimes those objectives just don’t align.


So when you hear that large companies such as OpenAI and Meta are asking governments to regulate, do you think they do it for the greater good, or to solidify their position as market leaders and make competition harder moving forward?


I think that regulation, in an ideal world, would be really helpful, because without it this technology can get out of hand. Imagine giving AI decision-making capabilities over a financial system. If a human makes a mistake, they might control a portfolio of a few million, but if an AI controls the entire company’s portfolio, even a 1% mistake can have highly detrimental effects; the same error rate applied to a far larger pool of capital means a far larger loss. So, in an ideal world, regulation is good.


Realistically, however, I just don’t know what the material effects of regulation will be. Even if you put rules in place, how do you make sure they are being applied? If you are a startup, who is going to watch over you and make sure you adhere to these rules? So I question how it will work in the real world.


AI has woken us up to a new set of challenges, such as transparency and intellectual property. We’ve seen with ChatGPT, and also with image-generating AI such as DALL-E, that these tools may change creators’ relationship with technology. How do you think AI will influence the way we perceive creativity in the future?


I know that the art world has had a mixed response to AI, especially because some of these image-generation technologies have been trained on those artists’ own works. For these machines to create something, they have to take from something, or somebody, else. If we can figure out a symbiotic way of managing copyright and artists’ rights to their work, it could be a tool that enhances creativity. In the fashion industry, for example, you might have a rough sketch of a design in your mind. To suddenly be able to go to an AI, type in exactly what you are conceptualising, and tweak it into an image within five minutes will completely revolutionise the design process. Previously, you might have been sketching for a couple of hours; now you can go through 10 different ideas in 20 minutes.


You seem to see AI as a way for people to be more productive rather than necessarily replacing them. Do you think that this will be the case for jobs all across the economy?


Like with all technologies and innovations, it’s not a matter of subtraction; it’s a matter of reallocation. Those jobs turn into something else. When computers came, they displaced a lot of jobs, but then those jobs changed. Accountants used to have to tabulate numbers into sheets manually. Now, the accountant focuses on actually crunching the numbers, doing advanced analysis they may not have been able to do before. With AI, the jobs that will be reallocated will usually be the repetitive and laborious ones. And AI is creating industries of its own around what it has made possible.


In your PhD, you specifically look at AI for autonomous vehicle systems. How do you think AI will transform our relationship with cars?


I work on building AI models to recognise dangers on the road. Never before have we lived in a time when you could imagine your car driving you to work, and being able to add my own small contribution alongside all the researchers out there is very exciting. Looking to the future, we’ve discovered that implementing AI in cars is a lot harder than people thought it would be. Ten years ago, when AI systems first started testing, we thought we were just five years away. That deadline came and went, and we are still trying to figure out how to implement it. You can teach a machine based on the data that you have, but the real world is not predictable. Things happen that you would never have expected, and that’s where the machines fail. Researchers now think that, to achieve full autonomy, we might have to solve the AGI problem first. So the exciting thing is, we’re not there yet, and there’s so much more we need to learn to get there. Once we do, this technology might be transferable to trains, buses, or the entire transportation system, which might fix the inefficient way in which we drive now.
Transcribed and edited by Thomas Loubeyres.


The views and opinions presented in this interview belong to Dannier Xiao — not Thomas Loubeyres, nor TEDxWarwick.


If you have any questions concerning the interview, or opinions expressed, do feel free to comment in the comments section, or email publications@tedxwarwick.com.
