
General Artificial Intelligence

Meire Fortunato speaking at TECH(K)NOW Day in March, 2017

About this talk

Artificial Intelligence has enabled many advances in fields such as computer vision, speech generation, and computer Go. DeepMind's mission is to ‘solve intelligence’, developing programs that can master complex problems by learning, without the aid of handcrafted rules. In this talk, Meire will describe recent advances from her lab, as well as her personal experience of how people coming from diverse backgrounds (such as pure math) can contribute towards the goal of developing general artificial intelligence.


Transcript


My name is Meire Fortunato, and today I'm going to start the talk by telling you a little bit of my story: my background, how I got here, where I came from. Then later, I'm going to jump into the definition of what we call general artificial intelligence. This is the kind of AI we try to do at DeepMind, so I'll go over some of the concepts and also some of the recent results from our lab. So first, how did I get to AI research? I actually come from a math background. I'm originally from Brazil, and I did my undergrad and my masters at a university in Brazil called UNICAMP. That was a big step in my journey because I'm originally from a very small town; my parents still live there, and it has 15,000 people. Just moving away from home at a very early age, I was 17, was a big step so I could get higher-level education. I got a scholarship to do my PhD in the US, a Fulbright Scholarship, so I went to UC Berkeley. I was still doing pure math in the first few years of my PhD, but around the midway point I started having an interest in applied math as well. It was only then that I had my first coding experience. I was already probably 24 or 25 when I started writing my first pieces of code, first for applied math projects, just coursework. I also started developing an interest in machine learning, so I started playing around with some tools you can find online and trying to code some things related to machine learning. More specifically, I started auditing some classes at UC Berkeley, doing part of the work but not all of it, and also using this online tool called Coursera, which is quite popular nowadays, where you have machine learning classes. The first class I took very seriously was the machine learning one, so you can get an idea of the concepts and also code your first neural network to train a digit classifier. This is all exciting to do at first.
So very quickly, I started to realise that machine learning was maybe the field where I wanted to do research, rather than math, for several reasons. One of them is that in math, especially in the area where I was working, there were very few people doing work, so it was very hard to collaborate, to talk about ideas, to share my work, to get help. In machine learning, I could see that there were so many people working: you go to conferences, you talk to people. I found the environment to be way more inclusive than what I was seeing in math, and also more fast-paced. You see results faster, so it's easier to keep yourself motivated. So I started doing some more serious literature reading, taking books that were classics in machine learning and also reading papers and blog posts, and I started developing more intuition in the area. But all of this was alongside my PhD. My PhD was not machine learning related at all; it was actually in an area of math that we call mesh generation. One example of a mesh in 2D is like you can see here to the left: it is how to triangulate a set of points on a plane, for example. And so I connected my machine learning background with my math background in a paper that was presented at NIPS 2015, an important conference in machine learning, where I had the collaboration of two machine learning experts. It was very nice to see how you can shift your knowledge from one area to the other. I used some of my knowledge in geometry, for example in mesh generation, and we could set this up in a neural network way: you just ask your computer to do this triangulation. Instead of you telling it exactly what to do, it has to figure it out by itself. Once I had the paper, this opened many doors. I went to give a talk at DeepMind about my paper, I did the interview, and I got a job offer. But I was still in math land, so I had to step back and finish my PhD.
So I was out of machine learning for a year, and I finished my PhD last summer, so last year, and I've been full time at DeepMind since July last year. So now I'm all ready for AI research. Now a little bit about DeepMind. DeepMind is a company that is trying to solve AI, and we are based in London. We like to describe our mission in two steps. The first step, we say, is to solve intelligence; the second step is to use it to solve everything else. This might seem very ambitious, but if you understand what we mean by intelligence, maybe the mission becomes a little bit clearer. So I'll give you the definition that we have for intelligence. There are many definitions; this is the one that we use at DeepMind: intelligence is the ability to learn to perform well over a wide range of environments. So we are not interested in doing very well in one particular task; we want to do well in many tasks that require knowledge. How do you learn to do things in a very general form? Maybe now the mission becomes a little more doable, because if you really are able to solve intelligence, then you can solve many problems. I'm not saying that solving intelligence is easy. It's very hard, but if you solve it, then you can solve everything else. So what we are trying to build are called general-purpose learning machines. We try to build systems that learn automatically, so they're not pre-programmed. You don't want to write code to solve a problem; you want to write code that learns how to solve a problem. You're not going to put any domain-specific knowledge into your code. And we want these systems to be general, so the same kind of architecture works for several problems. It already helps that you're not putting pre-programmed conditions into your code for it to be general.
So there is a very fundamental difference between artificial general intelligence, which we call AGI and is what we look for at DeepMind, and narrow AI. As an example of narrow AI, think of trying to solve the game of chess. You might think, "Well, maybe I'll just tell my programme what to do": if you see these positions on the board, do this or do that, because you can get a very good chess player to tell you what they think is good or bad. But this might be a bad idea, because you are limiting your programme; it doesn't develop its own knowledge. Your machine is always trying to follow rules, and you're not giving it the freedom to do what it thinks is better. So again, we are always aiming at artificial general intelligence, and one way you can do this is to use the area that is called deep learning. So I'll go over a deep learning overview. First, just as a very rough intuition of how computers work, and I'm sure most people here know this but it's always good as a reminder: generally, you are going to give your computer some input, it's going to make some computations, and then it's going to return you some output. So let's say we have a very simple task: you want to take an image and say whether this image contains a cat or a dog. How do you pose this problem in this framework of input, output, and some computations in the middle? There are things called neural networks. They are inspired by the way our brain works and the synapses of your neurons. The way to build one, you start putting in neurons, so any of these black dots you see are neurons, and you start structuring them in a way that is, first, easy to compute on your computer, and also easy to train and to make sense of. Okay, so you start putting in layers.
So these planes you see are what we call layers, and you can stack many of them on top of each other, and then you can start trying to solve the task. And how do you do it? For example, you can take an image and input the raw pixels of your image through the first layer. Then you do a bunch of computations: you pre-process some information that you pass to the next layer, and then you do some more computations, compute some other things, and keep sending the information up. When you get to the last layer, those are kind of your expert neurons; they are the ones that know the most. Based on everything they have been told, they are going to decide: is this a cat or is this a dog? And that is generally a probability. You say with 99% this is a dog, and then you classify it as a dog. Okay, so when you have many, many layers, these are called deep neural networks, because they are very deep, and these are very, very powerful tools. So just to make sure: how do you learn to classify well? Here I'm already showing you, let's say, a neural network that has already learned how to classify this way, and you learn this by example. You just give it many examples and you say, "This is a cat. This is a dog. This is a cat." Eventually, it will figure it out by itself; you don't need to pre-program anything. So again, the simple recipe for deep learning, as you should know by now if you have some experience: it's very important to have loads of data, which is not always easy to get, but it's very important. Then you develop an architecture, in general a very deep and big network with many parameters, so you can capture all the complexity of your problem. And then there is the magic: here comes all your experience on how to train it, to finally get the profit. And how good is the profit? It has been proven to be very, very good.
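The layered computation described above can be sketched in a few lines of code. This is a toy illustration only: the weights, biases, and the three "pixel" inputs are invented for the example, whereas a real network learns them from many labelled images.

```python
import math

# A tiny feed-forward "cat vs dog" network sketch. Each layer computes
# weighted sums of its inputs, applies a nonlinearity, and passes the
# result on; the final neuron outputs a probability.

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases, act):
    # One layer: each neuron takes a weighted sum of all inputs plus a bias.
    return [act(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

W1 = [[0.5, -0.2, 0.1], [-0.3, 0.8, 0.4]]  # 3 "pixels" -> 2 hidden neurons
b1 = [0.0, 0.1]
W2 = [[1.2, -0.7]]                          # 2 hidden -> 1 output neuron
b2 = [0.0]

def predict(pixels):
    hidden = dense(pixels, W1, b1, relu)       # first layer of "neurons"
    (p_dog,) = dense(hidden, W2, b2, sigmoid)  # last layer: P(image is a dog)
    return p_dog

p = predict([0.9, 0.1, 0.3])
print(f"P(dog) = {p:.2f}")
```

Training would adjust `W1`, `b1`, `W2`, `b2` from labelled examples; here they are fixed just to show the forward pass.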
So we are getting state-of-the-art results in many areas, for example image recognition, robotics, machine translation, image segmentation, you name it, and this without any domain knowledge. Myself, for example, I don't speak any Chinese; I don't even know the alphabet or how to recognise the letters. But if you give me lots of data, I can give you a very good machine translation system from English to Chinese, although I don't know any Chinese. This is one of the nice things about the power of deep learning and machine learning: you can now become an expert in several areas even though your domain knowledge for that area is not very deep. So here is something a little more technical. If we want to start doing language with machines, how do you do that? This gives you an idea of how you approach it and how little domain knowledge you need. Let's say you want to represent a word as a vector. One way you could do this is to say: well, I have a very big vector in which each position is a word. Then for each word, say word number 122, I go to that position and put a one, and everything else is zero. Now I have a word representation. This is very simple, but the problem is that it's not very meaningful. If you start trying to do operations with this representation, like subtracting words, you're going to get things that don't make much sense. You subtract two words and you have a vector with a minus one and a one somewhere and that's it, zeroes everywhere else, so it doesn't transmit much information. One way to make word representations meaningful is to do what we call a word embedding. You take this vector that represents your word, this naive representation, and you multiply it by a matrix, the same matrix for every word, which maps each word to another vector that lives in a high-dimensional space that now has some meaning. So I wrote the equation there: e = Wv.
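The one-hot representation and the e = Wv multiplication above can be made concrete in a small sketch. The four-word vocabulary and the 2-dimensional embedding values in `W` are made up for illustration; in practice W has millions of parameters and is learned by training.

```python
# Toy one-hot word vectors and an embedding matrix.
# Multiplying W by a one-hot vector simply selects that word's column of W.

vocab = {"king": 0, "queen": 1, "man": 2, "woman": 3}

def one_hot(word, size=len(vocab)):
    v = [0.0] * size
    v[vocab[word]] = 1.0  # a single 1 at the word's index, zeroes elsewhere
    return v

# Embedding matrix W (2 x 4): one column per word, values invented here.
W = [
    [0.9, 0.8, 0.7, 0.6],  # dimension 1 of each word's embedding
    [0.2, 0.9, 0.1, 0.8],  # dimension 2 of each word's embedding
]

def embed(word):
    # e = W v: a matrix-vector product with the word's one-hot vector
    v = one_hot(word)
    return [sum(W[i][j] * v[j] for j in range(len(v))) for i in range(len(W))]

print(embed("queen"))  # → [0.8, 0.9], i.e. column 1 of W
```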
This W, I want it to be the same for all the words, but which W should I choose? These are millions of parameters, so you don't choose: you put it in a neural network and you train it to choose it for you, by presenting many examples. So how do you do it? You want to teach your machine to learn language. One way to do it, and this was actually my first project in machine learning, is you download the whole Wikipedia data set so you have lots of language, so you have the data. Then you can just show it sentences and ask your machine to always predict the next word. So if I say, "I'm sitting in my living room, on my?", then you ask your machine, "What is the word?" You say couch, and you keep presenting this data that's available to you. Now, all of a sudden, your machine knows about language, and you can start doing math with language. You can start doing things like taking the word king and subtracting queen, and you're going to learn that this subtraction is very similar to doing man minus woman. These relations come from language; they are learned by the machine itself. You don't need to tell it; just by seeing a bunch of data, it figures them out by itself. You can also do, for example: if you take king and then add the difference between woman and man, you get queen. When I say you get a queen, this is all in the space where I'm embedding these words. So now that I've told you we have image recognition and we have language, we can glue these two pieces together and start doing more meaningful things than just saying whether a picture has a cat or a dog: I can describe the picture. I show the machine a picture and I ask, "What's the description?" This is some work I did with some collaborators.
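The king/queen/man/woman arithmetic can be demonstrated with a toy embedding space. The 2-D vectors below are hand-picked so that the relevant offsets line up; real embeddings are learned from text, for example by the next-word prediction described above.

```python
# Toy demo of arithmetic in an embedding space (vectors invented for the example).

emb = {
    "king":  [0.9, 0.2],
    "queen": [0.9, 0.8],
    "man":   [0.3, 0.2],
    "woman": [0.3, 0.8],
}

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def nearest(v):
    # Euclidean nearest neighbour in the toy vocabulary
    return min(emb, key=lambda w: sum((x - y) ** 2 for x, y in zip(emb[w], v)))

# king - queen is very similar to man - woman (both remove the "royalty" axis)
print(sub(emb["king"], emb["queen"]))  # the same offset as man - woman

# king + (woman - man) lands nearest to queen
print(nearest(add(emb["king"], sub(emb["woman"], emb["man"]))))  # → 'queen'
```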
We just submitted it to a conference, ICML, this year. We took a baseline and improved the way it describes the scene. Here, the baseline said this was "a white plate with pizza on it", and our model thought it was "a small white dog eating a piece of pizza", so it was more informative. Here is another example where the baseline thought this was "a man riding a motorcycle down a street", and our model said "a police officer riding a motorcycle down a street". And you can see how image captioning can be very useful to society, in the sense that people who are visually impaired can now just look at pictures and get a description without needing someone to describe the pictures for them. Another interesting piece of work from DeepMind recently is what is called WaveNet. WaveNet is a way to produce speech from text: you are given a piece of text, and you want the machine to read the text for you. And why is that useful? Well, think of voice search, for example. If you ask your phone something, like what time it is, you're expecting the phone to reply in voice as well, not in text. Or you ask, "Where is London?" You want a description in voice; you don't want to read about it. WaveNet models speech using the raw waveforms of the sound, which is a very complex way of doing it. They were doing it at 16,000 samples per second, so it's a lot of information to process, but this ended up being the way to get very high-quality, natural-sounding samples. Another kind of text-to-speech technique that was available before WaveNet is the kind of model we call concatenative, where you have a large database of short speech fragments, and when someone has a query, you kind of glue these pieces together. You have a human recording speech for the most frequent words or the most frequent things that you say, and then you put this together.
And you can see why this has many limitations. For example, you cannot change the voice of the speaker, right? If it was this particular person that recorded, it's going to sound like that person. You also cannot add emotions, intonations, and the things in speech that humans are looking for. Say, for an audiobook, you don't want it to read to you very flat; you want it to have emotions, to understand better what it's reading, not just read it to you. Another type of model for text to speech is called parametric, where all the information is stored in parameters. The problem with this approach before WaveNet was that, in the end, it sounded less natural than the first approach. So here, if you look to the left, you can see a bar chart of speech quality. The quality goes from one to five, the higher the better. In green, you can see humans, so we are not perfect. And then all the way to my right, you can see the models. The first approach was a little better than the parametric one. And here in blue is when WaveNet came, and it actually closed the gap between machines and humans; it diminished it by 50%. So it was a very good result. So that you have an idea of how these compare, I'm going to play some examples here. Here is a first sample of speech generated by gluing fragments together. Let's see if it plays. - [Machine] The Blue Lagoon is a 1980 American romance and adventure film directed by Randal Kleiser. - Now the parametric version of this; this was before WaveNet, with a neural network. - [Machine] The Blue Lagoon is a 1980 American romance and adventure film directed by Randal Kleiser. - And here is an example of WaveNet. - [Machine] The Blue Lagoon is a 1980 American romance and adventure film directed by Randal Kleiser. - So comparing these samples, there has been a huge quality improvement, and not only by my saying so: there are actually ways to measure this by assessing the quality of the sound.
Another cool thing is that since this model is very general, we are not teaching it how to read text; we are teaching it to be general. So with the same architecture, you can go ahead and, for example, start producing music samples. Here is one example. And maybe another one for music; let's see. So yeah, these are machine generated, the same system for both music and speech. Now, switching gears a little bit, I'm going to talk about what's called reinforcement learning. I started the talk by speaking about deep learning; that's one way of training your neural networks where you have what we call labelled data. You have input and output, and you show all these examples to your computer and you train it to perform well. But you can also do things in a different way, what we call reinforcement learning, which is more realistic. These are the main ingredients of reinforcement learning. First is an agent, and you can think the agent is you; in this case, let's say the agent is me, here trying to give a talk. I have an environment, which in many cases you can think of as just the real world, and the agent always has a goal for a specific task; let's say I want to give this talk. Now I'm going to observe the environment: I look around, see how people are reacting to the way I talk, and then I'm going to act. If I look at people and it seems that they are a little lost, I might want to slow down my pace. So I do this action and I wait for the environment to respond to it; the environment gives me a reward. Did it work or did it not? Was it good? Was it bad? And then you keep interacting with the environment by doing actions, observing how it changes, and getting rewards. So how do you train this agent to be smart? How do you make the agent take meaningful actions, not just random ones?
You don't want it just doing random things, because it's very hard to reach a goal if you don't have some kind of a plan. So you start inserting all these techniques that I told you about, deep learning, into your agent. Some of the work that DeepMind is well known for is using games for reinforcement learning research. And why games? Well, we believe they are very difficult and also interesting to humans; many humans like to play games. Sometimes they simulate real-life situations, and the kinds of challenges you face are very different. Sometimes in a game it's very important just to be fast; sometimes you need to be very accurate or have good memory; or maybe you need to solve a puzzle, so you need comprehension and logic. The kinds of challenges differ according to the game. So if you are able to solve many, many games at the same time, it means that you have a good comprehension of the world, if the games mimic the real world well. One of the good things about the game framework is that you have a built-in evaluation. Many times, it's hard to define what a reward is. How do I know if this talk is going well or not? That's very subjective. Well, in games, you generally have built-in criteria: you have the game score, or at the end of the game you win or you lose. So you have this idea of reward attached to your game. DeepMind has done some work, for example, on Atari games, those games from the '80s, with an agent that can solve many of the Atari games at once. We also have what is called DeepMind Lab, which has been open-sourced recently, where you have games that are more challenging, 3D mazes for example. And DeepMind has also been working on strategy games like StarCraft and others.
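The agent/environment/action/reward loop described above can be sketched in code. The environment here is a trivial one invented for illustration (the agent is rewarded for matching a hidden target action); real environments like Atari games or DeepMind Lab are far richer, but the interaction loop has the same shape.

```python
import random

# Minimal sketch of the reinforcement-learning loop:
# the agent acts, the environment responds with an observation and a reward.

class ToyEnvironment:
    def __init__(self):
        self.target = random.choice([0, 1])  # hidden "right" action

    def step(self, action):
        reward = 1.0 if action == self.target else 0.0
        observation = self.target  # fully observable, to keep the toy simple
        return observation, reward

class Agent:
    def __init__(self):
        self.last_obs = None

    def act(self):
        # Act on what was observed; act randomly before the first observation.
        if self.last_obs is None:
            return random.choice([0, 1])
        return self.last_obs

    def observe(self, obs):
        self.last_obs = obs

env, agent = ToyEnvironment(), Agent()
total = 0.0
for t in range(10):  # the interaction loop: act -> reward -> observe
    obs, reward = env.step(agent.act())
    agent.observe(obs)
    total += reward
print("total reward:", total)
```

After the first step the agent has seen the target, so it collects the reward on every remaining step.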
Just to go over the publications a little, in case you're not aware of them: DeepMind's first contribution was back in 2015, when it built an agent that can play these Atari games from the '80s. There are more than a hundred games, and a single agent learns how to play them without being told any of the rules. You just show it the image, it does the actions with the joystick, so left, right, up, or down, and then you show the next frame, what happened, and the reward. Here, the reward is just the number of points you have in the game, and your agent learns how to play from scratch. Here is an example from Breakout, where your goal is to break as many of these blocks as possible. The lower ones have lower values; you actually want to break the higher ones. But as time passes, the ball goes faster, so it's more challenging. Here is an agent trained by DeepMind that, in the beginning, is kind of bad; it's random. Then it starts improving after training for a while. Towards the end, you see that it figured out a very optimal strategy. After 600 training episodes, it said, "You know what? I can open a tunnel, and then I can get to these top blocks that have all the high rewards. I just need to catch my ball there, just wait in case it comes down, and that's it." So we were able to achieve above-human performance very fast. This was back in 2015. Nowadays, the models are way better than that, and you can train these agents in a few hours, like two hours, to play all the... So this research, how to teach these agents to play Atari, was on the cover of Nature in 2015. Now we've been doing, as I said, more challenging environments, like 3D. Let's see, this is Stairway to Melon. In Stairway to Melon, you can choose to pick some apples, and there are lemons. Lemons are negative points, so you look, you see negative.
You say, "No, I'm going to get the apples that give me points." But there is a big reward, which is a melon. You see it up there, and then you say, "Well, to get the melon, I actually have to pay a price: I first have to take this lemon that gives me bad points, but I know that later, the good reward is coming." It's kind of like taking the risk of an action that looks bad because you know something better is coming later. Agents are also able to figure that out. Let's see. Another example is Laser Tag. You have to walk across platforms, and you want to tag your opponent, but that comes at a risk, because if you can see your opponent, maybe they can also see you. You lose points for being tagged and you gain points for tagging, so you can play around. Again, this is open source, so you can take the environment and train your own agents if you like. Another thing that has been a huge success at DeepMind, and maybe the result most people know about, is called AlphaGo. AlphaGo is an agent that is able to play Go. If you don't know anything about Go, I'll give you some brief background. It originated in China and is more than 3,000 years old, and there are over 40 million players around the world. Many people, especially the players, don't consider it to be just a game; they regard it as poetry or art. If you play very well, you're considered very intellectual. There is a saying from Confucius that one of the four arts to be mastered by a true scholar is Go. The game actually has very simple rules, but it ends up being a very complex game. It's a board game where the number of possible board configurations is greater than the number of atoms in the universe. It looks like this: very simple, but a very, very complex game. So what is the implication of having this many board configurations? You cannot brute-force it.
You cannot just start telling your machine, "If you see this, do that," because that doesn't fit into a machine; you cannot do that. You have to learn how to play it in a smart way. Just so you have an idea: the branching factor of a game is, on average, how many different moves you can make throughout a game. For Go, this branching factor is 200, while in chess it's just 20. So you can see there is a factor there that makes this very, very challenging. Another thing is that many people believed writing an evaluation function for the game of Go was going to be impossible; others thought it would take at least another decade. So it came as a big surprise that this problem was tackled quite quickly. Maybe this is a little technical, but the way they did it is by combining the two techniques I told you about: deep learning and reinforcement learning. First, you take data from human play, and you try to predict the move that the human is going to make. You take your machine and you say, "That's your task: you have to predict what the human is going to do." By doing that, the machine learns how to play the game reasonably well. Then you go to phase two, where you say, "You know what, now play against yourself and try to do your best." That's when you allow your machine to build its own intuitions rather than just mimicking what it has seen. It plays itself a lot, and that's also good because you can generate data more cheaply than getting it from humans. Then, from this generated data, you can start to refine the way you play, and you build what is called a value network, which, given a board position, decides who's winning. Okay, so again, maybe a little technical, but here we have two networks when we are playing Go. One is called the policy network; the policy is the one that tells you what to do.
Given a board configuration, the policy network tells you which moves look promising, and it gives you probabilities: maybe with 40% you should do this move; with 20% I think you should do that move; and so forth. The value network, meanwhile, gives just one single number. It says, "I think you have a chance of winning, given this board configuration, of 50% or of 70%." Of course, this value network is going to be more accurate towards the end of the game. In the beginning of the game, it's going to say, "I don't know, it's 50-50, anyone could win." But as the game goes on and it sees who is ahead, it starts giving probabilities that are more accurate. You can use these two networks in combination with what is called tree search and rollouts. And why is it important to have these networks for tree search? Let's see here in figure A. You start with a board configuration, and you start expanding your tree: from there, where can I go? You say, "Well, I can place a stone either here or here, and then my opponent is going to play." So you start considering the possibilities of where your opponent is going to play, and then, "Well, now it's my turn." You can see this tree gets very, very big. If you were to expand this tree fully, exactly as I said before, there are more configurations than particles in the universe, so you cannot expand the whole tree. What you do instead is use these neural networks that you trained to expand your tree only through the branches that look promising. You don't want to expand all the possibilities, just the good ones. It's very hard to train an agent to tell you which are the good ones, but that's what was developed there, so you can prune the tree and make the search more guided instead of just looking at all the possibilities. So we went ahead and evaluated AlphaGo; that's the name the machine that plays Go got.
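The idea of using a policy to prune the search tree and a value estimate at the leaves can be sketched as follows. The `policy` and `value` functions here are made-up stand-ins, not AlphaGo's real networks, and the "game" is just a list of move numbers; the point is only the structure: expand the few moves the policy rates highest, and score leaf positions with the value function.

```python
# Sketch of policy-guided tree search: expand only the top_k moves the
# policy suggests, instead of every legal move (pruning the tree).

def policy(state):
    # Pretend policy network: a probability for each of 5 candidate moves.
    moves = range(5)
    scores = [m + 1 for m in moves]  # made-up preferences: move 4 looks best
    total = sum(scores)
    return {m: s / total for m, s in zip(moves, scores)}

def value(state):
    # Pretend value network: estimated chance of winning from this state.
    return sum(state) / (10 * max(len(state), 1))

def search(state, depth, top_k=2):
    if depth == 0:
        return value(state)  # score the leaf with the value estimate
    probs = policy(state)
    # Prune: keep only the top_k most promising moves.
    best_moves = sorted(probs, key=probs.get, reverse=True)[:top_k]
    # Simplification: assume it is always "our" turn, so take the max.
    return max(search(state + [m], depth - 1, top_k) for m in best_moves)

print(round(search([], depth=3), 2))  # → 0.4, from the best path [4, 4, 4]
```

A real implementation would alternate max and min (or use Monte Carlo rollouts) for the opponent's turns; this sketch keeps only the pruning idea.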
It got 100% performance against other computer players, Zen and CrazyStone. Then, for the first time, it played a professional, last year: Fan Hui, who now works at DeepMind and who was the European Champion. AlphaGo won the challenge five to zero, becoming the first programme ever to beat a professional Go player. This is a decade earlier than many people believed it would happen. Then, in March last year, we also did a match against Lee Sedol. Lee Sedol is considered the strongest player of the last decade; he has been world champion of Go 18 times. AlphaGo won this match four to one. This event got lots and lots of attention in China, in Korea, in Japan, and there were more than 200 million viewers for the match. And again, the results were published in Nature, where you can find all the details about the research and exactly how things were done, training and so on. So with this, I conclude, and I'm happy to take questions, comments, et cetera. Thank you. Yes. Okay, that's a very good question. Well, one way to measure it is to find tasks where you can have an objective value. For example, in text to speech, you can see it has improved, so that's a step towards intelligence. If you can have self-driving cars that work well, that's also a step. But maybe your self-driving car works well, and yet you cannot yet have a machine do your household chores. All these intermediate steps are important. So that's the way you measure it: first at the research level, so you have papers, they go to conferences; and then later, how this is going to affect people's lives. Are we actually going to bring this technology to people? Or was it very good in theory but, in practice, maybe not? It's a long way, but I believe there has been much improvement in the last 10 years.
Think about how much your life has changed and how much technology and AI you have working for you. It's a lot. You go to Facebook, you tag pictures, it recognises people, so all of this face recognition. Speech has become very good as well. And there are some areas that are maybe a little behind, but hopefully we'll catch up. Yes. Yeah, a very good question as well. So DeepMind has this separation: we have the research team and we have the applied team. Many times, the applied team follows up on research developments to bring them to users. These are two very different sets of skills. If you're on the applied team, you want to be very good at making things fast, or building web interfaces and such, skills that I, for example, don't have. While for developing the research, you need the theory. So these things work together. DeepMind was actually acquired by Google in 2014, so we are part of Google, and many of our ideas go directly to production. For example, if you use Google Translate, this is all automated, all machine learning generated. You can also do Google Image search: you can take your pictures and say, "I want to see all my pictures that have dogs," and it's going to find all of them using the techniques I described. So research and application are very connected, and not only at Google and DeepMind; there are many startups doing machine learning in a more user-facing way, so you can also have consulting for your company and things like that. Yes. Yeah. Yes. The people I interact with come from backgrounds like biology, physics, and of course computer science, like most people. DeepMind also has a very strong background in neuroscience, so we have many neuroscientists. The founder of the company, Demis Hassabis, actually has a background in neuroscience. As for people from the humanities, they are in other teams, so I don't know about philosophy specifically.
But there are people thinking about problems that are not on the technical side but more on the human side: how to bring these things to the people.

Yes. Definitely, humans are very, very, very smart, and we are very fast learners. It's incredible how hard it is to train these networks; there are ways to do it, but you need to be a very good developer. And it's quite impressive how humans can learn. For example, there is this problem that we call one-shot learning. If I show a machine one million examples of a cat, when it sees a cat, it's going to recognise it's a cat. Now if I show it a new instance, let's say I take a heart that has legs and arms and I call it a "blah". I show it once and say, "Okay, this is a blah," and it's going to say, "Okay." And then I show a slightly different image, I just turn it around, and I ask, "What is this?" It's going to say, "I have no idea," as if that first example was random. While humans, we have such an understanding of the world that even though this blah was not part of my life before, I saw it once and now I can recognise it. So humans are very fast learners, and we want to transfer this: how to give these agents the understanding that we have of the world. We understand physics. We understand objects. We can recognise things. We understand how things move: if I have a ball here, I know it's going to drop down, not go up. So this knowledge that we build, which is probably partly genetic and partly learned as you experiment a lot, is very, very powerful.

Yes. Yeah. Yeah. So you basically have to try to integrate this into your model. You want your agents to also take risks. And taking risks means exactly that: maybe it's bad now, but who knows later. So you try to incorporate this into your agents. There are ways to do it. You can, for example, just add some randomness.
You say, every now and then, just do something random, try it out and see what you get. And then you figure out that there are better alternatives to what you thought was the best. So yeah, that's what we call introducing noise.

Yes. That's a very good question, yeah. So one of the problems is that if your data is biased, then your model is probably going to reproduce the bias that you had in your data. One way of getting rid of it is getting better data: you want to make sure that, if you are trying to solve a task, your data is representative of that task. Another approach is what we call unsupervised learning, where you don't have labels on your pictures anymore; you really make the system figure things out by itself, though it still needs some supervision from humans to say whether it's good or not. So I think it comes down to doing a very good job in data collection, and maybe also trying some different architectures that will take care of removing these biases from your data. But if you only learn from data, the best you can do is propagate that bias; it's not going to disappear by itself. If I'm trying to teach my neural network to recognise a cat, and all the cats I show the model have very short hair, then if all of a sudden I get a very furry, cute cat, it's not going to think that's a cat. But that was not necessarily a problem of my model; it was a problem of how I described the problem. I said, well, recognising cats is just finding this little animal that has very short hair, so the model is not going to figure out by itself that a furry thing can also be a cat. You don't have to constantly monitor it, but in the beginning, you have to make smart decisions about how you frame your task, what data you're presenting to your model, and so on. You always have a real-world problem and you want to translate it for a machine to solve, so you have to make sure that your modelling is also good.
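[Editor's note: the "introduce some randomness" idea from the exploration answer above is commonly implemented as epsilon-greedy action selection. The sketch below is illustrative only, not DeepMind's code; the function name and the value estimates in `q_values` are hypothetical.]

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon, pick a random action (explore);
    otherwise pick the action with the highest estimated value (exploit)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))            # try something random
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Hypothetical value estimates for three actions.
q_values = [0.1, 0.5, 0.3]
epsilon_greedy(q_values, epsilon=0.0)  # always exploits: returns 1
epsilon_greedy(q_values, epsilon=0.1)  # occasionally tries action 0 or 2
```

A common refinement is to anneal epsilon from a high value towards a small one over training: explore a lot early on, exploit more as the value estimates improve.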
So the first step, I think, is very important: framing the problem, choosing a representative dataset for your problem, and deciding how you evaluate your model, how you say it was good. This is important.

Yes. I see, yeah, good question. There are some deadlines in the sense that, as a researcher, there are a few main conferences that happen each year. Generally, a little before these conferences you want to send a paper, so there is a peak in productivity; that was the case two weeks ago. You work a lot, and then you have kind of a quiet period for a while. I think the day-to-day work life is that you go to lots of talks, by people from DeepMind and also people who come to visit, you read lots of papers, you talk to people, you implement your ideas. But there are definitely ups and downs, and for me, it generally happens when there's a conference deadline coming: then it's rushed, and afterwards it calms down a little bit. And this happens maybe three or four times per year.

Yes. Yeah, yeah. This has always been a challenge for me. I was in math before, which was also very male dominated in my undergrad. In my PhD programme, I believe we were 40 students, and there were two women. And at work, there is for sure a gender imbalance too. I think this comes from many levels: if you look at graduation in computer science, there is a gender imbalance, and when you go to the PhD level, it's a little worse, and then this imbalance carries through. As a personal experience, though, I've seen a lot of improvement, so I'm very glad in this sense. I came from Brazil, and I don't think my experience there was very positive, to be honest. I used to go to conferences and people would come to talk to me, not to talk about my research. And it was so offensive back then that I had not even realised it.
It took me time to process it. It was actually only when I moved to the US, and started being treated better at conferences and things like that, that I realised how much these past experiences had been hurting me. And now I find the environment much better; I don't feel oppressed. I would for sure like to have more women at work to talk with. We have some, and that's already a good step. And also, for me, just having people to collaborate with matters, because in math, almost all my work was alone, single-person work, and now I collaborate with many people. Be they male or female, it's very good to be able to collaborate.