{"id":1020,"date":"2023-01-23T21:24:16","date_gmt":"2023-01-23T21:24:16","guid":{"rendered":"https:\/\/ethicsandvideogames.com\/?p=1020"},"modified":"2023-04-14T22:52:47","modified_gmt":"2023-04-14T22:52:47","slug":"episode-62-the-moral-challenges-of-using-an-ai-to-reduce-toxicity-in-games-with-camille-guillemot","status":"publish","type":"post","link":"https:\/\/ethicsandvideogames.com\/?p=1020","title":{"rendered":"Episode 62 &#8211; The moral challenges of using an AI to reduce toxicity in games (with Camille Guillemot)"},"content":{"rendered":"\n<p>[Release Date: February 14, 2023]&nbsp;Toxicity in online gaming is an incredibly complex problem to solve. Teams of moderators often seem hopelessly outmatched by the amount of toxicity and it&#8217;s sometimes ambiguous nature.&nbsp; But, what if we brought an AI into the game to help us with both toxicity and fraud by bots, which are essentially other AI?&nbsp; In this episode we look at one company\u2019s attempt to do just that.<\/p>\n\n\n\n<iframe style=\"border-radius:12px\" src=\"https:\/\/open.spotify.com\/embed\/episode\/7rlDrTxGWwDEfll26V9WRJ?utm_source=generator&#038;theme=0\" width=\"100%\" height=\"152\" frameBorder=\"0\" allowfullscreen=\"\" allow=\"autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture\" loading=\"lazy\"><\/iframe>\n\n\n\n<p class=\"has-text-align-center\">SHOW TRANSCRIPT<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p>00:02:36.880 &#8211;&gt; 00:02:43.129<br>Shlomo Sher: All right, welcome, everybody we&#8217;re here with a to me. Oh, you know K. Me! Let me stop again and ask you<\/p>\n\n\n\n<p>28<br>00:02:43.350 &#8211;&gt; 00:02:45.829<br>Shlomo Sher: your last name.<\/p>\n\n\n\n<p>29<br>00:02:45.990 &#8211;&gt; 00:02:49.990<br>camille: yeah. 
Oh, perfect, right on point.<\/p>\n\n\n\n<p>30<br>00:02:50.010 &#8211;&gt; 00:02:51.230<br>Shlomo Sher: Okay.<\/p>\n\n\n\n<p>31<br>00:02:51.280 &#8211;&gt; 00:02:52.250<br>Shlomo Sher: cool.<\/p>\n\n\n\n<p>32<br>00:02:52.410 &#8211;&gt; 00:02:56.749<br>Shlomo Sher: It&#8217;s been a while since I got one right. All right. That was perfect.<\/p>\n\n\n\n<p>33<br>00:02:57.340 &#8211;&gt; 00:03:15.080<br>Shlomo Sher: All right. All right, everybody. We&#8217;re here with Camille Guillemot. Camille started her career fighting fraud and making gaming a safer space for gamers. But she quickly realized that fraud was only one part of the puzzle, decided to engage more on topics that matter to her, and joined Bodyguard as its gaming partnership manager,<\/p>\n\n\n\n<p>34<br>00:03:15.090 &#8211;&gt; 00:03:29.200<br>camille: where she&#8217;s now dedicating her energy to making gaming a safer and better place for all players, developers and content creators. Camille, welcome to the show. Yeah, thank you for having me. Thank you so much. I&#8217;m really happy to be here.<\/p>\n\n\n\n<p>35<br>00:03:29.860 &#8211;&gt; 00:03:49.299<br>Shlomo Sher: All right. So I told you before we started today, Camille: you&#8217;re from a company called Bodyguard, and you&#8217;re the first person we&#8217;ve actually had from a company, and we hope to do more, but we&#8217;ve generally tried to stay away from that. But we think what you&#8217;re doing is so interesting,<\/p>\n\n\n\n<p>36<br>00:03:49.310 &#8211;&gt; 00:04:07.439<br>Shlomo Sher: and the way that you yourself are thinking about it is so interesting, we really want to have this conversation with you. So let me jump right into it. Let&#8217;s say my studio has a game out, and I&#8217;ve been told by some people that our players&#8217; ability to communicate with each other really freely has led to a toxic environment.<\/p>\n\n\n\n<p>37<br>00:04:07.450 &#8211;&gt; 00:04:12.570<br>Shlomo Sher: Right? 
How can a company like yours help me deal with the toxicity that occurs in my game?<\/p>\n\n\n\n<p>38<br>00:04:13.580 &#8211;&gt; 00:04:37.459<br>camille: So in the world of online gaming, toxicity includes a lot of different things, actually: sexual harassment, hate speech, threats of violence, doxing. It can also be, you know, grooming, spamming, flaming. It&#8217;s actually a lot of different things.<\/p>\n\n\n\n<p>39<br>00:04:37.470 &#8211;&gt; 00:04:42.030<br>camille: And what we do at Bodyguard is that we<\/p>\n\n\n\n<p>40<br>00:04:42.190 &#8211;&gt; 00:05:02.160<br>camille: we work with actors from the gaming industry, so publishers, developers, esports players, streamers, and we try to build with them environments that are safer and inclusive using moderation. So today we know that, you know,<\/p>\n\n\n\n<p>41<br>00:05:02.170 &#8211;&gt; 00:05:11.450<br>camille: gaming is more about, you know, connecting with each other, about communities, interactions, and sometimes it leads to, you know,<\/p>\n\n\n\n<p>42<br>00:05:11.460 &#8211;&gt; 00:05:37.799<br>camille: very toxic environments, where people are insulting each other, being verbally violent to each other, and all those kinds of situations that you guys are probably aware of. And what we do is that we try to build a moderation that fits the spirit of their games, that fits the environment and the type of connection they want to make with their community, to create a safe and inclusive environment for them and for their players.<\/p>\n\n\n\n<p>43<br>00:05:37.810 &#8211;&gt; 00:05:55.660<br>Shlomo Sher: So you&#8217;re building. So you said, we&#8217;re trying to build a moderation. All right, so explain: what does that mean in your context? So the context is that today we have developed a technology that does moderation work. 
So it&#8217;s a technology that is capable of<\/p>\n\n\n\n<p>44<br>00:05:55.790 &#8211;&gt; 00:06:05.540<br>camille: finding, analyzing and moderating content in real time. So this is the technology that we developed.<\/p>\n\n\n\n<p>45<br>00:06:05.670 &#8211;&gt; 00:06:20.099<br>camille: But the thing is, today, with gaming, you know you&#8217;re not going to moderate an FPS shooter the same way you are going to moderate a Monopoly Plus or a Just Dance game. It&#8217;s<\/p>\n\n\n\n<p>46<br>00:06:20.110 &#8211;&gt; 00:06:31.140<br>camille: two different worlds, two different player bases. So what we do, what I mean by building a moderation with them, is that we use the technology that we have, we use our expertise, our knowledge<\/p>\n\n\n\n<p>47<br>00:06:31.150 &#8211;&gt; 00:06:49.840<br>camille: on moderation, and we try to find the best way to use that technology for them to build a safe environment. So the technology is, you know, very easy to modulate and to customize. It&#8217;s rule-based, so, you know, it&#8217;s really easy to tailor to<\/p>\n\n\n\n<p>48<br>00:06:49.850 &#8211;&gt; 00:07:02.350<br>camille: someone&#8217;s needs. And what we do is we use our technology that is already high performing to make sure that it fits the spirit of their game and what type of moderation they want for their games.<\/p>\n\n\n\n<p>49<br>00:07:02.460 &#8211;&gt; 00:07:05.439<br>Right? 
So I think it&#8217;s important to just<\/p>\n\n\n\n<p>50<br>00:07:05.550 &#8211;&gt; 00:07:07.509<br>to step back a little bit and say:<\/p>\n\n\n\n<p>51<br>00:07:08.830 &#8211;&gt; 00:07:11.930<br>traditionally, most moderation is done by humans.<\/p>\n\n\n\n<p>52<br>00:07:12.120 &#8211;&gt; 00:07:20.529<br>camille: Yes, people are actually just monitoring what&#8217;s going on in the chat, or listening in on what&#8217;s happening, and then applying their<\/p>\n\n\n\n<p>53<br>00:07:21.030 &#8211;&gt; 00:07:34.809<br>camille: their own sense of what&#8217;s right and wrong, and using the tools that they have to either mute people, or ban people, or do all these things, right? So what you&#8217;re talking about is actually an AI,<\/p>\n\n\n\n<p>54<br>00:07:35.150 &#8211;&gt; 00:07:44.840<br>or, you know, it replaces the human moderator with a computer, which can do things faster and more efficiently,<\/p>\n\n\n\n<p>55<br>00:07:45.470 &#8211;&gt; 00:07:53.190<br>Shlomo Sher: and maybe more rule-based, which is interesting, because, you know, in part of our previous conversations on this topic,<\/p>\n\n\n\n<p>56<br>00:07:53.250 &#8211;&gt; 00:08:10.399<br>Shlomo Sher: you know, we always try to get to principles, right? And when you have a bunch of individuals, you know, people operate a lot of the time really on a case-by-case basis. But if you have an AI moderating conversation<\/p>\n\n\n\n<p>57<br>00:08:10.680 &#8211;&gt; 00:08:25.160<br>Shlomo Sher: as it happens, which is, you know, to me pretty incredible, right? When that happens, you need those kinds of principles, and we want to talk to you partly about that. But<\/p>\n\n\n\n<p>58<br>00:08:25.170 &#8211;&gt; 00:08:31.609<br>Shlomo Sher: before we do that, let&#8217;s talk about the whole challenge of controlling. 
Actually, sorry, before I do that, hey, Andy,<\/p>\n\n\n\n<p>59<br>00:08:32.000 &#8211;&gt; 00:08:37.950<br>Shlomo Sher: can you just make sure that, suddenly, your volume, your<\/p>\n\n\n\n<p>60<br>00:08:38.090 &#8211;&gt; 00:08:49.290<br>Shlomo Sher: mic went down. Just make sure that everything is okay. Yeah, everything&#8217;s fine. Everything&#8217;s fine. Okay, good. It looks good on my recording. So, okay, never mind. Okay. So let me go back.<\/p>\n\n\n\n<p>61<br>00:08:49.310 &#8211;&gt; 00:08:51.380<br>Shlomo Sher: But before we do,<\/p>\n\n\n\n<p>62<br>00:08:51.390 &#8211;&gt; 00:09:16.990<br>Shlomo Sher: and before we get into any of that, I want to know about the problem of toxicity itself. You know, right now companies hire moderators, human moderators, that, you know, your AI may replace, right? And they&#8217;re there to do something, right, and their job is very hard to do. Why is it so hard to control toxicity in a game in a way that&#8217;s both moral and satisfying to the players?<\/p>\n\n\n\n<p>63<br>00:09:17.550 &#8211;&gt; 00:09:30.379<br>camille: For so many reasons, actually. Because first, when the games were designed, they were designed, you know, to connect people, but they were not designed taking into account<\/p>\n\n\n\n<p>64<br>00:09:30.600 &#8211;&gt; 00:09:47.570<br>camille: toxicity and moderation. You know, it was just about leaving, like, a chat box open for everyone, so everyone can connect. It was not, you know, taking into consideration: okay, but what if someone, you know, loses, and there is a sore loser?<\/p>\n\n\n\n<p>65<br>00:09:47.740 &#8211;&gt; 00:09:55.099<br>camille: You know, all of these considerations were not taken into consideration, actually. So<\/p>\n\n\n\n<p>66<br>00:09:55.110 &#8211;&gt; 00:10:13.140<br>camille: today it&#8217;s just, you know, about creating gaming experiences to unify everyone. 
And usually these problems are very much put aside. We tend to see big changes in that now, but today it&#8217;s still happening.<\/p>\n\n\n\n<p>67<br>00:10:13.150 &#8211;&gt; 00:10:32.359<br>camille: There is also, as you said, a lack of scalability, because there are humans behind the moderation, and it&#8217;s very difficult to moderate everyone. Can you imagine how many games of League of Legends are played per hour? It&#8217;s massive. You cannot have humans behind<\/p>\n\n\n\n<p>68<br>00:10:32.370 &#8211;&gt; 00:10:33.780<br>camille: all of, you know,<\/p>\n\n\n\n<p>69<br>00:10:34.220 &#8211;&gt; 00:10:45.939<br>camille: all of the games, trying to moderate. So there is this problem of scalability, because there are so many players in the world. So this is something that is missing. There is also the<\/p>\n\n\n\n<p>70<br>00:10:45.950 &#8211;&gt; 00:11:03.759<br>camille: the question of centralization, you know. In big publishers&#8217; hierarchies, what is happening also is that you have studios that, you know, handle their projects on their own, very separately, and there is no coherence when it comes to<\/p>\n\n\n\n<p>71<br>00:11:03.820 &#8211;&gt; 00:11:18.490<br>camille: community guidelines, player base guidelines. We see today the creation of teams like diversity and inclusion, player safety teams. But they are very recent; I think maybe 5 years ago none of them<\/p>\n\n\n\n<p>72<br>00:11:18.790 &#8211;&gt; 00:11:30.120<br>camille: existed. None of them were trying to think about those problems, you know. But it is starting to change. So there is this difficulty of scalability. There is<\/p>\n\n\n\n<p>73<br>00:11:30.210 &#8211;&gt; 00:11:35.699<br>camille: also the freedom that being behind the screen offers. 
There is, like, you know,<\/p>\n\n\n\n<p>74<br>00:11:35.800 &#8211;&gt; 00:11:42.009<br>camille: this kind of: I&#8217;m kind of hidden, protected by my screen, and I will allow myself to,<\/p>\n\n\n\n<p>75<br>00:11:42.090 &#8211;&gt; 00:11:58.119<br>camille: you know, to say stuff that I wouldn&#8217;t dare say to someone&#8217;s face in the street. So there is that: we&#8217;re anonymized, like we have, you know, game handles instead of using our real names. And so we&#8217;re anonymized in that way. There&#8217;s no repercussions.<\/p>\n\n\n\n<p>76<br>00:11:58.750 &#8211;&gt; 00:12:18.429<br>camille: Exactly. So, you know, all of those factors contributed to the fact that today toxicity is very difficult to tackle. There is a lot of catch-up that needs to be done also on the publishers&#8217; and developers&#8217; side, because they are still, you know, trying to figure out how to do that. Some of them just<\/p>\n\n\n\n<p>77<br>00:12:18.440 &#8211;&gt; 00:12:35.180<br>camille: don&#8217;t want multiplayer features in their game, so that, you know, they are like, okay, I don&#8217;t want to deal with those kinds of topics, so I&#8217;m just gonna not put in multiplayer features. So it&#8217;s really difficult today to control all of this, because of all those factors.<\/p>\n\n\n\n<p>78<br>00:12:35.470 &#8211;&gt; 00:12:41.439<br>Shlomo Sher: You know, we had an episode about toxicity, and our guest essentially called it a wicked problem.<\/p>\n\n\n\n<p>79<br>00:12:41.670 &#8211;&gt; 00:12:51.990<br>Shlomo Sher: You remember, you know, a wicked problem in the sense that it&#8217;s one of the really, really, really tough problems out in the world, you know. 
So, you know,<\/p>\n\n\n\n<p>80<br>00:12:52.500 &#8211;&gt; 00:12:53.890<br>Shlomo Sher: It&#8217;s<\/p>\n\n\n\n<p>81<br>00:12:54.050 &#8211;&gt; 00:12:56.310<br>Shlomo Sher: It&#8217;s a challenge that seems<\/p>\n\n\n\n<p>82<br>00:12:57.390 &#8211;&gt; 00:13:04.670<br>Shlomo Sher: just so incredibly tough to get right. But here you are coming in with an AI,<\/p>\n\n\n\n<p>83<br>00:13:04.680 &#8211;&gt; 00:13:23.669<br>Shlomo Sher: right? And we&#8217;re very curious to see. Well, okay, I mean, what are the relative advantages or disadvantages of trying to resolve this with an AI as opposed to human monitors? I think you&#8217;ve already come up with one, which is just the scalability issue.<\/p>\n\n\n\n<p>84<br>00:13:23.680 &#8211;&gt; 00:13:33.189<br>camille: Yes. So the beauty of Bodyguard, and I&#8217;m going to preach a bit for my own church here, but the beauty with Bodyguard, like what<\/p>\n\n\n\n<p>85<br>00:13:33.240 &#8211;&gt; 00:13:50.939<br>camille: Charles, the person who created the technology, he&#8217;s been really thinking about it. He wanted to take the best of both worlds, the human and the machine. You know, it&#8217;s about taking from the AI the scalability, being able to work in real time,<\/p>\n\n\n\n<p>86<br>00:13:50.950 &#8211;&gt; 00:14:08.700<br>camille: being able to moderate massive numbers today, fast. It&#8217;s also about protecting the mental health, you know, of those teams that do moderation work, because I&#8217;ve seen a documentary about people doing moderation for Facebook, Meta, and it&#8217;s absolutely<\/p>\n\n\n\n<p>87<br>00:14:08.710 &#8211;&gt; 00:14:25.260<br>camille: horrific. They are seeing violent images, violent text every day. Even for us, sometimes, when I need to, you know, 
go see the data that we have been analyzing, I&#8217;m seeing a lot of stuff.<\/p>\n\n\n\n<p>88<br>00:14:25.270 &#8211;&gt; 00:14:39.880<br>camille: It&#8217;s really, really hard, you know. When I spend maybe just an hour reading comments, I see horrible, horrible stuff, so I cannot imagine what those people experience every day, 8&nbsp;h a day, doing this. So it&#8217;s<\/p>\n\n\n\n<p>89<br>00:14:39.890 &#8211;&gt; 00:15:03.829<br>camille: also about preserving humans&#8217; eyes, humans&#8217; mental health, from that. And there is also this idea that we are also fighting AI, especially in the case of fraud, because usually you have bots, you know, flooding, creating new ways of phishing people and all of that. So having an AI fighting another AI is also really interesting for us.<\/p>\n\n\n\n<p>90<br>00:15:03.840 &#8211;&gt; 00:15:04.730<br>camille: That&#8217;s<\/p>\n\n\n\n<p>91<br>00:15:04.960 &#8211;&gt; 00:15:19.420<br>A Ashcraft: wow. I never remotely thought about that. Yeah, I&#8217;m fascinated by it from a sort of a William Gibson point of view. Yeah, it&#8217;s out there that these AIs are fighting each other, one to protect our mental health and the other to,<\/p>\n\n\n\n<p>92<br>00:15:19.430 &#8211;&gt; 00:15:32.999<br>Shlomo Sher: I don&#8217;t know, sell us stuff, probably. Yes, rob us, you know. Right? I mean, you know, trick us into anything at all, right? Exactly. So that&#8217;s exactly what we&#8217;ve been working on.<\/p>\n\n\n\n<p>93<br>00:15:33.010 &#8211;&gt; 00:15:51.140<br>camille: With one of the editors that we&#8217;ve been working with, they had a massive issue with bots, you know, creating gray markets, trying to pull the players out of the official ecosystem, you know, so they can buy in-game currency elsewhere. 
So they had a massive issue with that,<\/p>\n\n\n\n<p>94<br>00:15:51.150 &#8211;&gt; 00:15:58.629<br>camille: because first they were spamming, and they were also ruining the economy of the game. So we had to think about: okay, how can we, you know,<\/p>\n\n\n\n<p>95<br>00:15:58.640 &#8211;&gt; 00:16:21.769<br>camille: counter that using our technology? So it was about learning how they were behaving, what the patterns were, and then creating something that would counter that. So, yes, it&#8217;s exactly that: creating AI against AI. But, as I said, there is also the human side that we cannot, you know, ignore. You cannot moderate humans just by using<\/p>\n\n\n\n<p>96<br>00:16:21.780 &#8211;&gt; 00:16:37.729<br>camille: machines. It&#8217;s something that, you know, needs to be fueled by humans, because in the end, moderating toxicity is something very subjective. It can be something really subjective. So you need to make sure that you have humans understanding what&#8217;s going on.<\/p>\n\n\n\n<p>97<br>00:16:37.740 &#8211;&gt; 00:16:45.440<br>camille: So, with the humans, as I said, it&#8217;s based on rules, and those rules are created by our linguists,<\/p>\n\n\n\n<p>98<br>00:16:45.470 &#8211;&gt; 00:17:04.609<br>camille: and what they do is that they, you know, try to find what the new trends of toxicity are, because with their human brains and their human emotions, they can understand the subtlety in toxic content, you know. Sometimes you have very toxic content that doesn&#8217;t contain any<\/p>\n\n\n\n<p>99<br>00:17:04.970 &#8211;&gt; 00:17:08.229<br>camille: swear words, and not even a little<\/p>\n\n\n\n<p>100<br>00:17:08.760 &#8211;&gt; 00:17:18.079<br>camille: fuck. Sorry. You&#8217;re allowed to use it. 
Okay, I don&#8217;t, because, you know, when you are in the moderation business you just<\/p>\n\n\n\n<p>101<br>00:17:18.160 &#8211;&gt; 00:17:35.390<br>camille: start to, you know, get really.<\/p>\n\n\n\n<p>102<br>00:17:35.400 &#8211;&gt; 00:17:40.630<br>camille: This is, you know, this is toxic. And I think it&#8217;s quite beautiful, you know, because,<\/p>\n\n\n\n<p>103<br>00:17:41.020 &#8211;&gt; 00:17:53.599<br>camille: you know, in sci-fi it&#8217;s the whole deal: when do we consider an AI as intelligent as a human? Because they don&#8217;t understand emotions. But we&#8217;re trying to<\/p>\n\n\n\n<p>104<br>00:17:53.610 &#8211;&gt; 00:18:12.750<br>camille: teach them emotions as well, like: what would a human feel if they read that sentence? And that&#8217;s, you know, that&#8217;s the human part of the technology that we do cherish, because we do feel it&#8217;s really important to keep a human side to the moderation that we perform.<\/p>\n\n\n\n<p>105<br>00:18:12.960 &#8211;&gt; 00:18:16.919<br>A Ashcraft: Yeah, because there&#8217;s a lot of nuance between, you know,<\/p>\n\n\n\n<p>106<br>00:18:17.350 &#8211;&gt; 00:18:30.269<br>camille: using the F word, you know, as some sort of slur, or something like that, and just going off when I&#8217;ve lost a game. Exactly.<\/p>\n\n\n\n<p>107<br>00:18:30.280 &#8211;&gt; 00:18:43.670<br>camille: So context is something that we really work on. It&#8217;s really important with Bodyguard, and that&#8217;s why I&#8217;m emphasizing it: we are a contextual moderation solution. We don&#8217;t work on keywords,<\/p>\n\n\n\n<p>108<br>00:18:43.680 &#8211;&gt; 00:18:56.559<br>camille: because, as you mentioned, fuck in a sentence can mean so many things. You need to understand the full context. So we work on the context. 
And, you know, another example of that is<\/p>\n\n\n\n<p>109<br>00:18:56.570 &#8211;&gt; 00:19:06.359<br>camille: something really specific to gaming. Very recently we had a lot of misogyny. I mean, I&#8217;m not saying it&#8217;s recent, it has always been there, but we are being more vocal about it,<\/p>\n\n\n\n<p>110<br>00:19:06.410 &#8211;&gt; 00:19:26.230<br>camille: and, you know, it&#8217;s like &#8220;go make me a sandwich&#8221; that a lot of people are using against female streamers to say: you have no space in here. So, in essence, &#8220;go make me a sandwich,&#8221; an AI wouldn&#8217;t understand that. But us as humans, because we have all the context and everything, we do understand that<\/p>\n\n\n\n<p>111<br>00:19:26.390 &#8211;&gt; 00:19:39.399<br>camille: this shouldn&#8217;t be said to a streamer live. And the beauty with the rule base is that we can actually say to the AI: okay, so &#8220;go make me a sandwich&#8221; in this context should be moderated.<\/p>\n\n\n\n<p>112<br>00:19:39.450 &#8211;&gt; 00:19:42.450<br>Shlomo Sher: So, just to be clear,<\/p>\n\n\n\n<p>113<br>00:19:42.930 &#8211;&gt; 00:20:01.859<br>Shlomo Sher: as I&#8217;m trying to wrap my head around this. Originally I thought this, you know: so you have an AI, and the AI also does learning, as AIs do, I&#8217;m assuming, from the context? So an AI can look at a space and essentially get feedback from the space and<\/p>\n\n\n\n<p>114<br>00:20:01.870 &#8211;&gt; 00:20:10.359<br>Shlomo Sher: develop kind of its own algorithm based on that feedback, or the AI can be fed with rules from the very beginning<\/p>\n\n\n\n<p>115<br>00:20:10.380 &#8211;&gt; 00:20:23.830<br>Shlomo Sher: and continuing rules. Yours is the latter, and the former ones, the ones where the AI 
just learns from the space, have famously led to a bunch of AIs that were sexist or racist or<\/p>\n\n\n\n<p>116<br>00:20:23.910 &#8211;&gt; 00:20:27.240<br>Shlomo Sher: so on. Exactly. Is that what you guys are trying to fix here?<\/p>\n\n\n\n<p>117<br>00:20:27.850 &#8211;&gt; 00:20:43.750<br>camille: So we are trying to fix that by not using what we would call machine learning, because machine learning is exactly that. You know, the AI is learning by itself with the database that it&#8217;s allowed to access. We don&#8217;t want that, because,<\/p>\n\n\n\n<p>118<br>00:20:43.890 &#8211;&gt; 00:21:01.330<br>camille: again, machine learning is highly inefficient for moderation, in our opinion. And again, you have so many new trends. Today people can write words without, you know, the full letters. They find new ways, you know, adding commas between the letters, or so many creative ways.<\/p>\n\n\n\n<p>119<br>00:21:01.340 &#8211;&gt; 00:21:19.190<br>camille: So using machine learning for moderation is not efficient, and that&#8217;s not what we&#8217;re using. What we use is our linguists, and they feed the AI with rules, saying: this plus this plus this in this context shouldn&#8217;t be allowed.<\/p>\n\n\n\n<p>120<br>00:21:19.200 &#8211;&gt; 00:21:30.999<br>camille: That&#8217;s why it&#8217;s so easy to customize for the different, you know, people we&#8217;re working with. Because obviously, if you say, I don&#8217;t know,<\/p>\n\n\n\n<p>121<br>00:21:31.200 &#8211;&gt; 00:21:47.609<br>camille: something under a picture of kids, and you say, oh, I&#8217;m gonna murder them, 
it&#8217;s very different from someone, under a video game post, you know, of a shooter, saying, oh, I&#8217;m gonna murder them, because it&#8217;s so different. The<\/p>\n\n\n\n<p>122<br>00:21:47.620 &#8211;&gt; 00:21:55.679<br>camille: context is very different. &#8220;I&#8217;m going to murder you&#8221; is not the same thing in a shooter&#8217;s in-game chat as &#8220;I&#8217;m going to murder you&#8221; in<\/p>\n\n\n\n<p>123<br>00:21:56.330 &#8211;&gt; 00:22:05.409<br>Shlomo Sher: in Just Dance, you know. But, by the way, so, to get some clarity: are we only talking about text-based communication, or also voice?<\/p>\n\n\n\n<p>124<br>00:22:05.420 &#8211;&gt; 00:22:18.600<br>camille: Yes, at the moment we are focusing on text. Oh, audio. Audio is something that we think about, but at the moment I think it&#8217;s too early. We already see<\/p>\n\n\n\n<p>125<br>00:22:18.630 &#8211;&gt; 00:22:29.459<br>camille: changes when it comes to moderation there, but we are still not quite there, I mean, on the market side. So maybe in the future, I hope.<\/p>\n\n\n\n<p>126<br>00:22:29.620 &#8211;&gt; 00:22:45.430<br>camille: I mean, it&#8217;s a huge jump from text to voice. I mean, it is. It&#8217;s a huge jump also in: can you do this in real time, like, right, as you do with writing? Like, you know, so many questions, so many questions. And it also sounds like you have to, because you mentioned that, you know, having a picture<\/p>\n\n\n\n<p>127<br>00:22:45.480 &#8211;&gt; 00:22:53.090<br>Shlomo Sher: in the text stream, and then the context of the picture.<\/p>\n\n\n\n<p>128<br>00:22:53.340 &#8211;&gt; 00:22:55.220<br>camille: So how crazy difficult!<\/p>\n\n\n\n<p>129<br>00:22:55.520 &#8211;&gt; 00:23:07.990<br>camille: So the way we are performing at the moment is that the rules are set per page or platform that we are protecting. 
So, for example,<\/p>\n\n\n\n<p>130<br>00:23:08.020 &#8211;&gt; 00:23:12.790<br>camille: we don&#8217;t cross information between pictures and text yet,<\/p>\n\n\n\n<p>131<br>00:23:12.870 &#8211;&gt; 00:23:25.310<br>camille: but what we do is that, oh, we know that this page, this Twitter account that we are protecting, is aiming for, like, a young audience. So we know that those kinds of<\/p>\n\n\n\n<p>132<br>00:23:25.320 &#8211;&gt; 00:23:37.660<br>camille: sentences shouldn&#8217;t be found in this space. So that&#8217;s how we do it. But one day, maybe, I hope that we&#8217;ll be able to cross-reference as much information as possible.<\/p>\n\n\n\n<p>133<br>00:23:38.020 &#8211;&gt; 00:23:45.510<br>Shlomo Sher: Right? I mean, yeah, there&#8217;s the scalability of this. There&#8217;s just so much data involved here in real time.<\/p>\n\n\n\n<p>134<br>00:23:45.540 &#8211;&gt; 00:23:59.160<br>Shlomo Sher: You know, the other part, and I want to get obviously to the actual thoughts about moderation and all that, and the principles you guys are using, and all that. But one bit of clarity: you said linguists, which is interesting, right? Because<\/p>\n\n\n\n<p>135<br>00:23:59.170 &#8211;&gt; 00:24:06.050<br>Shlomo Sher: they&#8217;re obviously studying the language, and you&#8217;re also dealing with multiple languages at the same time.<\/p>\n\n\n\n<p>136<br>00:24:06.060 &#8211;&gt; 00:24:23.769<br>Shlomo Sher: How is that? 
And mind you, I don&#8217;t know how that&#8217;s done by humans either, right? Because, you know, you could be in a text chat where people will chat in, like, Arabic and Mandarin and Spanish and English in one stream, and I&#8217;ve never understood how a human being could,<\/p>\n\n\n\n<p>137<br>00:24:23.780 &#8211;&gt; 00:24:30.060<br>Shlomo Sher: I mean, you know, could know so many languages and be able to effectively moderate something like this.<\/p>\n\n\n\n<p>138<br>00:24:30.350 &#8211;&gt; 00:24:43.439<br>camille: Well, it&#8217;s a lot of work. So at the moment we have 6 premium languages that we cover, 6 different languages. So we have English. We have both<\/p>\n\n\n\n<p>139<br>00:24:43.620 &#8211;&gt; 00:24:57.860<br>camille: English from the UK and from the US, because culturally there is a very different type of toxicity in each English. We also have Portuguese, Spanish, German,<\/p>\n\n\n\n<p>140<br>00:24:57.870 &#8211;&gt; 00:25:04.529<br>camille: French. So these are the 6 premium languages that we cover today, and these are the highest performing<\/p>\n\n\n\n<p>141<br>00:25:04.570 &#8211;&gt; 00:25:09.850<br>camille: at the moment. But for the other languages it&#8217;s another challenge, because,<\/p>\n\n\n\n<p>142<br>00:25:09.900 &#8211;&gt; 00:25:11.580<br>camille: first of all, I mean,<\/p>\n\n\n\n<p>143<br>00:25:11.960 &#8211;&gt; 00:25:21.729<br>camille: we cannot hire a linguist for all those languages to cover. 
Not yet, at least. We are not that big a company that we can hire a linguist for every language.<\/p>\n\n\n\n<p>144<br>00:25:21.780 &#8211;&gt; 00:25:31.009<br>camille: But we are working on, you know, being able to moderate all these different languages by using a translation module<\/p>\n\n\n\n<p>145<br>00:25:31.100 &#8211;&gt; 00:25:35.180<br>camille: that will then pass the content through<\/p>\n\n\n\n<p>146<br>00:25:35.270 &#8211;&gt; 00:25:37.929<br>camille: to the Bodyguard technology. And<\/p>\n\n\n\n<p>147<br>00:25:38.310 &#8211;&gt; 00:25:42.020<br>camille: by doing this, of course, we will not<\/p>\n\n\n\n<p>148<br>00:25:42.140 &#8211;&gt; 00:25:46.730<br>camille: moderate Arabic languages the same way, with the same<\/p>\n\n\n\n<p>149<br>00:25:46.770 &#8211;&gt; 00:25:56.050<br>camille: quality, as we do with English at the moment, because we don&#8217;t have the linguists. But our aim is to be able to protect as many people<\/p>\n\n\n\n<p>150<br>00:25:56.070 &#8211;&gt; 00:26:08.080<br>camille: as possible, so it&#8217;s already a first step, and it&#8217;s already what is done on the market today in terms of moderation anyway. But for today, what we want to do is to<\/p>\n\n\n\n<p>151<br>00:26:08.190 &#8211;&gt; 00:26:25.099<br>camille: protect as many people as we can. So it means that we will have to adjust, we will have to work really hard to make sure that those languages are protected to the best standards possible. But that&#8217;s how it&#8217;s working at the moment. 
But again, you know, it&#8217;s like<\/p>\n\n\n\n<p>152<br>00:26:25.460 &#8211;&gt; 00:26:27.110<br>camille: so<\/p>\n\n\n\n<p>153<br>00:26:27.190 &#8211;&gt; 00:26:29.420<br>camille: so much data<\/p>\n\n\n\n<p>154<br>00:26:29.460 &#8211;&gt; 00:26:50.550<br>Shlomo Sher: to analyze, so much work to do,<\/p>\n\n\n\n<p>155<br>00:26:50.560 &#8211;&gt; 00:27:07.630<br>Shlomo Sher: you know? And British English, which is not even counting Australia yet, right? And if we&#8217;re talking about Arabic, I mean, you could have one set of norms in Morocco and a very different one in, you know, Lebanon, right?<\/p>\n\n\n\n<p>156<br>00:27:07.640 &#8211;&gt; 00:27:20.170<br>Shlomo Sher: That&#8217;s a lot of expertise needed, whether you&#8217;re doing it as humans or as AI.<\/p>\n\n\n\n<p>157<br>00:27:20.180 &#8211;&gt; 00:27:28.109<br>camille: Sub-languages, you know, like a very specific way to speak a language in certain regions. It&#8217;s getting really, really difficult.<\/p>\n\n\n\n<p>158<br>00:27:29.210 &#8211;&gt; 00:27:30.680<br>Shlomo Sher: Okay, okay.<\/p>\n\n\n\n<p>159<br>00:27:30.750 &#8211;&gt; 00:27:49.259<br>Shlomo Sher: Now let&#8217;s jump to the decisions that are actually made. So, you know, you guys are a moderation software, essentially a moderation team. But what some people call moderation, other people just call censorship.<\/p>\n\n\n\n<p>160<br>00:27:49.370 &#8211;&gt; 00:27:54.260<br>Shlomo Sher: So is moderation censorship? Is this a limitation of free speech?<\/p>\n\n\n\n<p>161<br>00:27:55.880 &#8211;&gt; 00:28:06.060<br>camille: It is. 
It is not, because moderation, it&#8217;s not the way I see it, or the way we see it at Bodyguard, because first,<\/p>\n\n\n\n<p>162<br>00:28:06.190 &#8211;&gt; 00:28:17.509<br>camille: you choose to moderate. The platforms that call us for help, they choose to moderate. There is also<\/p>\n\n\n\n<p>163<br>00:28:17.740 &#8211;&gt; 00:28:34.330<br>camille: the application that you can download. People choose to have moderation on their own page. So this is not censorship. It&#8217;s self-protection, self-preservation. There is also something that we say quite often, and I will repeat it again and again:<\/p>\n\n\n\n<p>164<br>00:28:34.680 &#8211;&gt; 00:28:44.419<br>camille: the freedom of some people stops where the suffering of others begins. Which means, you know, sometimes you have<\/p>\n\n\n\n<p>165<br>00:28:44.590 &#8211;&gt; 00:28:52.830<br>camille: loud people that allow themselves to be very toxic to one another, and others that, because they are so afraid of,<\/p>\n\n\n\n<p>166<br>00:28:53.000 &#8211;&gt; 00:28:58.670<br>camille: you know, receiving those insults, receiving that toxic content, they decide to<\/p>\n\n\n\n<p>167<br>00:28:58.840 &#8211;&gt; 00:29:13.040<br>camille: actually silence themselves, and not to participate in the conversation, and not to enjoy the game. I&#8217;m one of these people, to be honest. As a woman in gaming, sometimes I,<\/p>\n\n\n\n<p>168<br>00:29:13.320 &#8211;&gt; 00:29:17.090<br>camille: I almost never use the voice chat,<\/p>\n\n\n\n<p>169<br>00:29:17.270 &#8211;&gt; 00:29:27.629<br>camille: because I know that I&#8217;m gonna get targeted, or I&#8217;m gonna get insulted. So, you know, I actually censor myself<\/p>\n\n\n\n<p>170<br>00:29:27.740 &#8211;&gt; 00:29:46.560<br>camille: because I&#8217;m fearing toxicity. So what we want to do, 
and there is something that we work really hard on at Bodyguard, is that we want to give a voice to everyone. We want to make sure that everyone on platforms, on social media, is able to<\/p>\n\n\n\n<p>171<br>00:29:46.570 &#8211;&gt; 00:29:54.550<br>camille: participate in the conversation, that they enjoy the experience as much as anyone else would,<\/p>\n\n\n\n<p>172<br>00:29:54.560 &#8211;&gt; 00:30:07.009<br>camille: and also, we do not moderate criticism. It&#8217;s not something that we do. We put a lot of effort in, and when we start working with clients, we are very<\/p>\n\n\n\n<p>173<br>00:30:07.020 &#8211;&gt; 00:30:21.370<br>camille: clear with them: criticism will not be moderated. Toxic content, violent content will be moderated to protect your teams, to protect you. But criticism will not. So it&#8217;s really important<\/p>\n\n\n\n<p>174<br>00:30:21.380 &#8211;&gt; 00:30:27.759<br>camille: to see moderation as actually a tool of freedom, you know, to allow everyone<\/p>\n\n\n\n<p>175<br>00:30:28.160 &#8211;&gt; 00:30:36.220<br>camille: to express themselves, to participate in the conversation, not to be afraid to be out there. Because, you know, the minorities today, they are<\/p>\n\n\n\n<p>176<br>00:30:36.500 &#8211;&gt; 00:30:52.980<br>camille: victims of toxicity. I&#8217;ve seen a very recent study, and the numbers are terrifying. We need to give everyone a voice, and that also contributes to, you know, diversity on social media,<\/p>\n\n\n\n<p>177<br>00:30:52.990 &#8211;&gt; 00:31:03.059<br>camille: on those platforms. 
So to me and to Bodyguard, we see it as a tool for freedom more than censorship.<\/p>\n\n\n\n<p>178<br>00:31:03.340 &#8211;&gt; 00:31:07.529<br>A Ashcraft: Yeah, I like your first point. I think it&#8217;s really interesting that<\/p>\n\n\n\n<p>179<br>00:31:09.360 &#8211;&gt; 00:31:12.070<br>the things that get posted to my page,<\/p>\n\n\n\n<p>180<br>00:31:12.780 &#8211;&gt; 00:31:19.620<br>A Ashcraft: whether I post them or somebody else posts them, are ultimately my responsibility. And so in some ways they are my speech.<\/p>\n\n\n\n<p>181<br>00:31:19.670 &#8211;&gt; 00:31:23.620<br>A Ashcraft: So even if Shlomo posts something to my page, and I don&#8217;t like it, I can delete it,<\/p>\n\n\n\n<p>182<br>00:31:23.640 &#8211;&gt; 00:31:29.090<br>camille: and that&#8217;s not censorship, because it&#8217;s my page, and people are gonna see that as my own. Yes.<\/p>\n\n\n\n<p>183<br>00:31:29.290 &#8211;&gt; 00:31:30.040<br>A Ashcraft: Right.<\/p>\n\n\n\n<p>184<br>00:31:30.330 &#8211;&gt; 00:31:36.339<br>Shlomo Sher: Camille, so to piggyback on that, just so I got the technology right,<\/p>\n\n\n\n<p>185<br>00:31:36.410 &#8211;&gt; 00:31:40.320<br>Shlomo Sher: did you say that this is more of a client-<\/p>\n\n\n\n<p>186<br>00:31:40.360 &#8211;&gt; 00:31:50.780<br>based thing, where the client turns it on? So that, rather than moderating the speech in the entire conversation, this is something I can<\/p>\n\n\n\n<p>187<br>00:31:50.820 &#8211;&gt; 00:32:10.710<br>Shlomo Sher: decide to turn on or turn off for myself, in terms of protecting myself, rather than moderating the entire conversation. Does that make sense, that each person can decide whether to engage with it or not?<\/p>\n\n\n\n<p>188<br>00:32:10.720 &#8211;&gt; 00:32:15.669<br>camille: So there are like 2 sides. 
So there is, like, the<\/p>\n\n\n\n<p>189<br>00:32:15.850 &#8211;&gt; 00:32:25.000<br>camille: the app that is now available that anyone can use, and it&#8217;s just, you know, about moderating so you don&#8217;t see the content. But clients, they actually choose what they want<\/p>\n\n\n\n<p>190<br>00:32:25.020 &#8211;&gt; 00:32:25.850<br>camille: to<\/p>\n\n\n\n<p>191<br>00:32:26.120 &#8211;&gt; 00:32:36.979<br>camille: appear on their page. So I just want to make one thing clear: we always moderate content that is<\/p>\n\n\n\n<p>192<br>00:32:38.660 &#8211;&gt; 00:32:49.290<br>camille: against the law, like racism, homophobia, all those kinds, you know, of content that are against the law. They will always, always be moderated.<\/p>\n\n\n\n<p>193<br>00:32:49.410 &#8211;&gt; 00:33:00.109<br>camille: If the client would say, oh, I want, you know, the n word to stay on the page, it&#8217;s a big no, no. Those things are not against the law in the United States.<\/p>\n\n\n\n<p>194<br>00:33:00.410 &#8211;&gt; 00:33:05.969<br>camille: They are. They are in Europe. So in Europe, we decide. I mean,<\/p>\n\n\n\n<p>195<br>00:33:06.200 &#8211;&gt; 00:33:08.310<br>camille: we are, you know, we are very<\/p>\n\n\n\n<p>196<br>00:33:08.780 &#8211;&gt; 00:33:21.240<br>camille: engaged with what we do. We do believe that those behaviors shouldn&#8217;t happen. So that&#8217;s also where the ethical part of it comes, you know,<\/p>\n\n\n\n<p>197<br>00:33:21.250 &#8211;&gt; 00:33:34.610<br>camille: into the game: we make strong choices at Bodyguard to always, always moderate those kinds of content. They will always be moderated. For the other parts, like, you know, very specific<\/p>\n\n\n\n<p>198<br>00:33:34.620 &#8211;&gt; 00:33:45.069<br>camille: topics, then they can decide what they want to do with the content. 
If they want to be permissive with their moderation, they can; if they want to be very strict with their moderation, they can.<\/p>\n\n\n\n<p>199<br>00:33:45.190 &#8211;&gt; 00:33:49.109<br>Yeah, I think Shlomo is thinking that, like,<\/p>\n\n\n\n<p>200<br>00:33:49.600 &#8211;&gt; 00:33:54.290<br>he could download it and use it to protect himself from<\/p>\n\n\n\n<p>201<br>00:33:54.390 &#8211;&gt; 00:34:10.379<br>Shlomo Sher: stuff on Facebook, or on a website, or something like that. No, when I said client based, I meant it in the technical sense, so in the sense that the user is using client software.<\/p>\n\n\n\n<p>202<br>00:34:10.389 &#8211;&gt; 00:34:14.970<br>Shlomo Sher: So let&#8217;s say I&#8217;m playing a game, and I have the option on my end<\/p>\n\n\n\n<p>203<br>00:34:14.989 &#8211;&gt; 00:34:33.119<br>Shlomo Sher: to have the moderation on or off. Let&#8217;s say I&#8217;m playing League of Legends, and I have the ability to turn Bodyguard on for myself, so that I&#8217;m not hearing, I should say not seeing,<\/p>\n\n\n\n<p>204<br>00:34:33.130 &#8211;&gt; 00:34:42.390<br>Shlomo Sher: right, so it&#8217;s moderating the speech that I would be seeing, as opposed to everybody&#8217;s speech. Do you see what I mean, Camille? And Andy, that&#8217;s where I thought you were going<\/p>\n\n\n\n<p>205<br>00:34:42.460 &#8211;&gt; 00:34:49.049<br>A Ashcraft: with what you were talking about, my page, right. I was thinking like I am, 
I am League of Legends,<\/p>\n\n\n\n<p>206<br>00:34:49.179 &#8211;&gt; 00:34:52.699<br>Shlomo Sher: and so I hire Bodyguard<\/p>\n\n\n\n<p>207<br>00:34:53.400 &#8211;&gt; 00:34:56.249<br>to moderate the things<\/p>\n\n\n\n<p>208<br>00:34:57.290 &#8211;&gt; 00:35:00.949<br>Shlomo Sher: that are being posted onto League of Legends.<\/p>\n\n\n\n<p>209<br>00:35:01.260 &#8211;&gt; 00:35:08.929<br>Shlomo Sher: I see. But if that was the case, we would say, well, League of Legends is censoring the conversation, right? It&#8217;s owned by League of Legends.<\/p>\n\n\n\n<p>210<br>00:35:09.000 &#8211;&gt; 00:35:28.739<br>Shlomo Sher: Right? Well, you could say they can, right? They can do what they want, right? But you still might say that, whereas if it was a client-based kind of software, you might say it&#8217;s an option that someone who wants moderation can turn on<\/p>\n\n\n\n<p>211<br>00:35:28.880 &#8211;&gt; 00:35:30.139<br>Shlomo Sher: or off,<\/p>\n\n\n\n<p>212<br>00:35:30.240 &#8211;&gt; 00:35:34.839<br>Shlomo Sher: right. So then it becomes an individual-person kind of situation.<\/p>\n\n\n\n<p>213<br>00:35:34.950 &#8211;&gt; 00:35:42.809<br>camille: So it&#8217;s an individual case for the app that we are offering. But when we work with clients, they are<\/p>\n\n\n\n<p>214<br>00:35:42.950 &#8211;&gt; 00:35:49.650<br>camille: the users, you know. If I, as a user, want to play League of Legends, I cannot, you know,<\/p>\n\n\n\n<p>215<br>00:35:49.670 &#8211;&gt; 00:35:54.139<br>camille: say, I want to do that. 
I don&#8217;t want Bodyguard, because you are under<\/p>\n\n\n\n<p>216<br>00:35:54.250 &#8211;&gt; 00:35:55.109<br>camille: the<\/p>\n\n\n\n<p>217<br>00:35:56.520 &#8211;&gt; 00:36:07.240<br>camille: community guidelines that have been set by Riot in that case, and if Riot says we moderate all those types of content, then, you know,<\/p>\n\n\n\n<p>218<br>00:36:07.420 &#8211;&gt; 00:36:11.479<br>camille: it&#8217;s there. It&#8217;s their choice.<\/p>\n\n\n\n<p>219<br>00:36:11.500 &#8211;&gt; 00:36:23.860<br>camille: But in any case, the big part of the work that we&#8217;re doing with them is actually to find, you know, the right balance with them, because obviously we don&#8217;t want to<\/p>\n\n\n\n<p>220<br>00:36:24.430 &#8211;&gt; 00:36:43.389<br>camille: fall into censorship, because it&#8217;s really easy to fall into censorship with moderation. It&#8217;s a lot of work with them before we integrate the solution, to make sure that it&#8217;s the right balance, that it&#8217;s not, you know, moderating what shouldn&#8217;t be moderated.<\/p>\n\n\n\n<p>221<br>00:36:43.400 &#8211;&gt; 00:36:58.750<br>camille: But at least for homophobia and racism, it should be gone, and I don&#8217;t think that should be disputed by anyone. But these are like the minimum. I&#8217;d say there is a minimum of things that we would always moderate anyway.<\/p>\n\n\n\n<p>222<br>00:36:59.290 &#8211;&gt; 00:37:07.899<br>Shlomo Sher: Okay. So now let&#8217;s get to the big challenges, then, and the big challenge is the challenge of moderation itself,<\/p>\n\n\n\n<p>223<br>00:37:08.010 &#8211;&gt; 00:37:12.120<br>Shlomo Sher: right, and I&#8217;m wondering here if there is. 
So,<\/p>\n\n\n\n<p>224<br>00:37:12.710 &#8211;&gt; 00:37:20.540<br>Shlomo Sher: if you have human moderators, right, I&#8217;m assuming, and I don&#8217;t know how this works, so this is me guessing, and<\/p>\n\n\n\n<p>225<br>00:37:20.550 &#8211;&gt; 00:37:48.389<br>Shlomo Sher: Andy and Camille, you guys can correct me on this, because I&#8217;m coming at it as just a gamer, right? I mean, I just play games, right? I don&#8217;t really know how this works. I&#8217;m assuming, let&#8217;s say, a company like Riot would traditionally have a moderation team, and they would essentially have meetings and decide for the individual moderators what it is that they should be looking for, and how it is that they should deal with those things. And then you have<\/p>\n\n\n\n<p>226<br>00:37:48.400 &#8211;&gt; 00:37:55.560<br>Shlomo Sher: a policy of also what happens, right? So one is, you know,<\/p>\n\n\n\n<p>227<br>00:37:56.000 &#8211;&gt; 00:38:01.450<br>Shlomo Sher: are they going to actually interrupt? I don&#8217;t know how human moderators can do this in real time,<\/p>\n\n\n\n<p>228<br>00:38:01.480 &#8211;&gt; 00:38:07.619<br>Shlomo Sher: right. In real time it&#8217;s very, very difficult.<\/p>\n\n\n\n<p>229<br>00:38:07.750 &#8211;&gt; 00:38:11.420<br>camille: I&#8217;ve worked for a big publisher for years, when I was<\/p>\n\n\n\n<p>230<br>00:38:11.440 &#8211;&gt; 00:38:17.079<br>camille: working in anti-fraud, and what I&#8217;ve seen is that it&#8217;s not moderated at all.<\/p>\n\n\n\n<p>231<br>00:38:17.240 &#8211;&gt; 00:38:23.039<br>camille: What is happening is exactly that: it&#8217;s not moderated. It&#8217;s<\/p>\n\n\n\n<p>232<br>00:38:23.270 &#8211;&gt; 00:38:41.660<br>camille: kept and analyzed afterwards. 
And then we take sanctions, and then, if the player is not happy with that, he can complain, and then a human checks on what has been happening, and then the human takes a decision. So it&#8217;s actually,<\/p>\n\n\n\n<p>233<br>00:38:41.670 &#8211;&gt; 00:39:00.769<br>camille: I think, in my opinion, not the right way to do it, for so many reasons. First of all, you don&#8217;t moderate in real time. So someone shoots an insult, you have read it, it&#8217;s too late, the damage is done. You&#8217;ve read it, you feel bad, you want to quit the game, so it&#8217;s really bad for your player retention.<\/p>\n\n\n\n<p>234<br>00:39:00.820 &#8211;&gt; 00:39:09.810<br>camille: Second, if the player who has been toxic is banned like 2 days afterwards,<\/p>\n\n\n\n<p>235<br>00:39:09.830 &#8211;&gt; 00:39:28.560<br>camille: what&#8217;s the point? It&#8217;s too late. He probably doesn&#8217;t even remember the fuck off that he sent 2 days before. So there is that, and then they are allowed to complain, which I think is fair. They have the right, you know, to disagree with the decision. But then you have a human moderator that checks,<\/p>\n\n\n\n<p>236<br>00:39:28.570 &#8211;&gt; 00:39:44.029<br>camille: that reads the chat logs and makes a decision according to his subjective view, without participating in the game, and then makes a decision and says, okay, maybe we should lift or remove the ban.<\/p>\n\n\n\n<p>237<br>00:39:44.040 &#8211;&gt; 00:39:57.440<br>camille: So I do think it&#8217;s, 
it&#8217;s not a good experience for the players, for the toxic players, for the moderators, for the publishers, for so many reasons. I think we need to stop.<\/p>\n\n\n\n<p>238<br>00:39:57.670 &#8211;&gt; 00:40:04.089<br>camille: I think that&#8217;s why real-time moderation is so important, you know, because if we can prevent all of this<\/p>\n\n\n\n<p>239<br>00:40:04.360 &#8211;&gt; 00:40:05.129<br>from<\/p>\n\n\n\n<p>240<br>00:40:05.510 &#8211;&gt; 00:40:20.260<br>camille: happening, you know, just by hiding the comment, just by hiding the insults: first, you don&#8217;t have people leaving the game because they have been insulted. You don&#8217;t have people leaving the game because they have been banned. You don&#8217;t have to ask moderators to read<\/p>\n\n\n\n<p>241<br>00:40:20.270 &#8211;&gt; 00:40:31.190<br>camille: toxic content on the company&#8217;s time, you know, just to read and to make a decision based on their subjective view. And I think there&#8217;s one thing left out<\/p>\n\n\n\n<p>242<br>00:40:31.200 &#8211;&gt; 00:40:44.310<br>Shlomo Sher: that you mentioned earlier that I&#8217;ve never thought of before, which is the whole bot situation and phishing schemes. You know, this is the kind of thing that human moderation after the fact is definitely not going to be able to do anything about.<\/p>\n\n\n\n<p>243<br>00:40:44.520 &#8211;&gt; 00:40:51.389<br>camille: It&#8217;s not possible. It&#8217;s not scalable for moderators. You are not going to catch<\/p>\n\n\n\n<p>244<br>00:40:51.620 &#8211;&gt; 00:41:00.759<br>camille: everyone by doing this, right. This whole conversation is making me think about these Facebook groups that I&#8217;m in, that all have human moderators,<\/p>\n\n\n\n<p>245<br>00:41:00.880 &#8211;&gt; 00:41:04.220<br>and one of the interesting things, 
there&#8217;s one group,<\/p>\n\n\n\n<p>246<br>00:41:04.620 &#8211;&gt; 00:41:13.169<br>and I&#8217;m in a variety of different groups on a variety of different topics, right? And there&#8217;s one group where for some reason the moderators are constantly fighting with<\/p>\n\n\n\n<p>247<br>00:41:13.430 &#8211;&gt; 00:41:15.550<br>the group.<\/p>\n\n\n\n<p>248<br>00:41:15.740 &#8211;&gt; 00:41:21.319<br>Shlomo Sher: Yeah, and I feel like it&#8217;s become a game in and of itself.<\/p>\n\n\n\n<p>249<br>00:41:21.950 &#8211;&gt; 00:41:31.829<br>camille: Okay, right. This group is very large. It&#8217;s a fan base for a popular thing, and it&#8217;s just attracting people who are like, oh, these moderators<\/p>\n\n\n\n<p>250<br>00:41:31.930 &#8211;&gt; 00:41:32.569<br>I<\/p>\n\n\n\n<p>251<br>00:41:32.890 &#8211;&gt; 00:41:38.279<br>Shlomo Sher: are under threat. Yeah, the moderators feel like they&#8217;re under siege, right?<\/p>\n\n\n\n<p>252<br>00:41:38.440 &#8211;&gt; 00:41:44.759<br>Shlomo Sher: And is this a voluntary position, or is this,<\/p>\n\n\n\n<p>253<br>00:41:44.850 &#8211;&gt; 00:41:51.299<br>Shlomo Sher: you know, official moderators, right? No, this is just the people who put the page up<\/p>\n\n\n\n<p>254<br>00:41:51.490 &#8211;&gt; 00:41:53.910<br>trying to moderate the page,<\/p>\n\n\n\n<p>255<br>00:41:54.420 &#8211;&gt; 00:41:59.620<br>it feels like. And it&#8217;s the only group, out of all the groups that I&#8217;ve been in, that feels like that.<\/p>\n\n\n\n<p>256<br>00:41:59.860 &#8211;&gt; 00:42:04.930<br>And I wonder if you see this? Do people start to try to fight the moderation?<\/p>\n\n\n\n<p>257<br>00:42:05.480 &#8211;&gt; 00:42:13.000<br>Shlomo Sher: Yeah, or let me piggyback on that right now. 
Let&#8217;s say you&#8217;ve got an AI that&#8217;s working in real time,<\/p>\n\n\n\n<p>258<br>00:42:13.210 &#8211;&gt; 00:42:28.229<br>Shlomo Sher: right? Do you see people trying to, like, oh, look at that, it censored me, let me try to get around that AI? At least that AI is not going to feel battered like Andy&#8217;s moderators.<\/p>\n\n\n\n<p>259<br>00:42:28.470 &#8211;&gt; 00:42:42.339<br>camille: That&#8217;s, you know, but it becomes a game now, right? That&#8217;s the thing. I think it&#8217;s quite funny to see them try to beat the AI. That&#8217;s what makes it challenging, you know, because they are actually also giving us<\/p>\n\n\n\n<p>260<br>00:42:42.350 &#8211;&gt; 00:43:00.009<br>camille: new keys to understanding what toxicity means, like, oh, what&#8217;s going to be their next move? I think that&#8217;s why we love this job, you know. I think it&#8217;s so interesting. But it&#8217;s also the worst part of the job, that it&#8217;s a cat-and-mouse situation. It&#8217;s always going to be like that,<\/p>\n\n\n\n<p>261<br>00:43:00.020 &#8211;&gt; 00:43:04.700<br>camille: with fraud especially. They&#8217;re always gonna have, you know,<\/p>\n\n\n\n<p>262<br>00:43:04.790 &#8211;&gt; 00:43:21.330<br>camille: one move. They&#8217;re always going to be one move ahead of you, just to counter the AI. But it&#8217;s quite interesting, and that&#8217;s where we need to act fast, and that&#8217;s also why moderation is so difficult, because, you know, it&#8217;s not one technology<\/p>\n\n\n\n<p>263<br>00:43:21.340 &#8211;&gt; 00:43:40.959<br>camille: that will fix everything. It&#8217;s not just one type of moderation that will fix everything. That&#8217;s why we always, you know, update the technology, keep up with the people, we are working on making sure, 
you know, that we don&#8217;t miss new trends, because there are new trends, like the thing that I mentioned earlier. You see people<\/p>\n\n\n\n<p>264<br>00:43:41.090 &#8211;&gt; 00:44:06.869<br>camille: creating letters with symbols. You know, for example, for the n word you have, you know, a bar and a V, and that creates an N, and then they are able to write the full word. They are already, you know, pretty creative people, and that&#8217;s where we always need to be on it, to keep up with everything. But obviously they will try to counter the AI, and it&#8217;s what<\/p>\n\n\n\n<p>265<br>00:44:06.960 &#8211;&gt; 00:44:11.140<br>makes it quite interesting, but also very challenging.<\/p>\n\n\n\n<p>266<br>00:44:13.440 &#8211;&gt; 00:44:15.410<br>Shlomo Sher: Okay.<\/p>\n\n\n\n<p>267<br>00:44:15.500 &#8211;&gt; 00:44:21.019<br>Shlomo Sher: Speaking of that challenge, all right, so a lot of this is,<\/p>\n\n\n\n<p>268<br>00:44:21.550 &#8211;&gt; 00:44:26.389<br>Shlomo Sher: besides how fascinating all of this is, you know,<\/p>\n\n\n\n<p>269<br>00:44:26.740 &#8211;&gt; 00:44:37.090<br>Shlomo Sher: for more controversial stuff, I mean, you know, people a lot of times really disagree about what is offensive, what is racist, what is sexist, what is homophobic, right? What is true?<\/p>\n\n\n\n<p>270<br>00:44:37.290 &#8211;&gt; 00:44:43.290<br>Shlomo Sher: All right, just true or not true, right? You know, just to<\/p>\n\n\n\n<p>271<br>00:44:43.440 &#8211;&gt; 00:44:57.329<br>Shlomo Sher: say that, you know, there is no trans, there is only, you know, men and women, and these are fixed, 
these are fixed positions, male, female, right?<\/p>\n\n\n\n<p>272<br>00:44:57.470 &#8211;&gt; 00:45:06.999<br>Shlomo Sher: That is a position that is taken by some people to be transphobic, or to reject a non-binary existence,<\/p>\n\n\n\n<p>273<br>00:45:07.040 &#8211;&gt; 00:45:21.880<br>Shlomo Sher: or even intersex. Other people will say no, that&#8217;s just true about the world, right? And there&#8217;s lots of things like this that some people might say are racist, some people might say sexist, other people say no, that&#8217;s just true. When<\/p>\n\n\n\n<p>274<br>00:45:21.930 &#8211;&gt; 00:45:40.320<br>Shlomo Sher: you have human moderators, right, you said this is subjective, so they look at it and they analyze it subjectively. Instead, you guys have linguistic experts, is that how you put it? Okay, so what is that? What does that mean? And how do they deal with the fact that people sometimes disagree?<\/p>\n\n\n\n<p>275<br>00:45:40.330 &#8211;&gt; 00:45:56.659<br>Shlomo Sher: And what recourse is there to say, look, I was talking about something real and true? It&#8217;s funny, because the context of a game is very different than the context of Twitter, and I think that matters, right? It matters where you&#8217;re talking, and maybe<\/p>\n\n\n\n<p>276<br>00:45:57.110 &#8211;&gt; 00:46:08.000<br>Shlomo Sher: the kind of freedom you need to talk about in a place like Facebook or Twitter versus in the middle of a game might be very different. But still, how do your experts handle things like that?<\/p>\n\n\n\n<p>277<br>00:46:08.420 &#8211;&gt; 00:46:18.770<br>camille: There are a lot of fights and heated conversations. No, actually, it&#8217;s diversity and inclusion. These are my,<\/p>\n\n\n\n<p>278<br>00:46:19.220 &#8211;&gt; 00:46:22.119<br>camille: I&#8217;m going to say 2, because I&#8217;m not going to count. 
And<\/p>\n\n\n\n<p>279<br>00:46:22.170 &#8211;&gt; 00:46:37.039<br>camille: my 2 favorite words in the world. They are so important. We need to make sure, and it&#8217;s a big challenge within Bodyguard, it&#8217;s true, and we don&#8217;t always have the right answer, how we make sure that<\/p>\n\n\n\n<p>280<br>00:46:37.090 &#8211;&gt; 00:46:43.549<br>camille: our technology fits the needs of today, and that we are, like, you know, protecting as many people<\/p>\n\n\n\n<p>281<br>00:46:43.600 &#8211;&gt; 00:46:49.339<br>camille: as possible. We try. It comes first with, you know, recruiting<\/p>\n\n\n\n<p>282<br>00:46:49.790 &#8211;&gt; 00:46:55.620<br>camille: diverse people, making sure that we don&#8217;t have only, you know, white female or<\/p>\n\n\n\n<p>283<br>00:46:56.080 &#8211;&gt; 00:47:14.630<br>camille: white male people on the team, making sure, you know, that we have conversations, that we talk about things. It&#8217;s really, really important, because when Charles, the creator of the technology, created the technology, he was just alone in his room, you know, trying to think about<\/p>\n\n\n\n<p>284<br>00:47:14.650 &#8211;&gt; 00:47:20.160<br>camille: what people would dare to say behind the screen that they wouldn&#8217;t say, you know,<\/p>\n\n\n\n<p>285<br>00:47:20.270 &#8211;&gt; 00:47:23.720<br>camille: in front of other people. But, you know, he has only his<\/p>\n\n\n\n<p>286<br>00:47:24.030 &#8211;&gt; 00:47:32.940<br>camille: white male prism, you know, he&#8217;s just seeing that through his own experience. 
So that&#8217;s where it&#8217;s really,<\/p>\n\n\n\n<p>287<br>00:47:33.120 &#8211;&gt; 00:47:36.840<br>camille: and I&#8217;m going to really stress this, really important to<\/p>\n\n\n\n<p>288<br>00:47:37.100 &#8211;&gt; 00:47:55.519<br>camille: have people surrounding you that have different experiences, that, you know, are coming from different places, that do understand other people&#8217;s point of view, to make sure that the technology is always reflecting that. Very recently one of our<\/p>\n\n\n\n<p>289<br>00:47:56.030 &#8211;&gt; 00:48:03.639<br>camille: experts submitted a test to us. It was just a massive survey of like 100<\/p>\n\n\n\n<p>290<br>00:48:03.700 &#8211;&gt; 00:48:06.500<br>camille: sentences in French,<\/p>\n\n\n\n<p>291<br>00:48:06.510 &#8211;&gt; 00:48:24.709<br>camille: and he said, okay, so now you&#8217;re gonna do what a moderator would do. You&#8217;re gonna take a decision on those sentences, and we had to classify them by saying, oh, this is insulting, this is neutral, and this is supportive. It was the hardest, really the hardest thing I&#8217;ve ever done<\/p>\n\n\n\n<p>292<br>00:48:24.720 &#8211;&gt; 00:48:30.009<br>camille: in my whole career at Bodyguard, because, you know, I was just trying to put myself<\/p>\n\n\n\n<p>293<br>00:48:30.250 &#8211;&gt; 00:48:44.240<br>camille: out of my own experiences, because I&#8217;m going to be more sensitive about, you know, certain topics, but not so much about others, because of my experiences, because of who I am, my identity. So it was really difficult to put, you know,<\/p>\n\n\n\n<p>294<br>00:48:44.600 &#8211;&gt; 00:49:03.150<br>camille: myself out of my own body and experience, and to try to answer this as objectively as possible. It was impossible, and you cannot do it with just one person clicking and saying, I&#8217;m gonna remove, 
I&#8217;m gonna say yes. It has to be a collective work. We are<\/p>\n\n\n\n<p>295<br>00:49:03.160 &#8211;&gt; 00:49:21.530<br>camille: very aware that we are not the police. We are not the justice system. We are just here to try and empower people with our technology, empower them with our knowledge of moderation and what toxicity means, trying to put the technology in the right hands with the right guidelines.<\/p>\n\n\n\n<p>296<br>00:49:21.540 &#8211;&gt; 00:49:26.589<br>camille: And it&#8217;s really tough. Honestly, it&#8217;s a work of every day,<\/p>\n\n\n\n<p>297<br>00:49:26.920 &#8211;&gt; 00:49:32.729<br>camille: questioning yourself, questioning what you&#8217;re doing, making sure that you&#8217;re always doing<\/p>\n\n\n\n<p>298<br>00:49:32.990 &#8211;&gt; 00:49:34.899<br>camille: what you think is the right thing.<\/p>\n\n\n\n<p>299<br>00:49:35.230 &#8211;&gt; 00:49:40.029<br>All right. So let me ask a question. This comes from my white guy,<\/p>\n\n\n\n<p>300<br>00:49:40.310 &#8211;&gt; 00:49:43.219<br>my 50-year-old white guy hat. 
Right?<\/p>\n\n\n\n<p>301<br>00:49:43.450 &#8211;&gt; 00:49:44.350<br>So<\/p>\n\n\n\n<p>302<br>00:49:44.460 &#8211;&gt; 00:49:46.609<br>A Ashcraft: I have typed something<\/p>\n\n\n\n<p>303<br>00:49:46.660 &#8211;&gt; 00:49:47.830<br>into a chat<\/p>\n\n\n\n<p>304<br>00:49:47.970 &#8211;&gt; 00:49:49.450<br>that I think is fine,<\/p>\n\n\n\n<p>305<br>00:49:49.760 &#8211;&gt; 00:49:52.750<br>but<\/p>\n\n\n\n<p>306<br>00:49:53.100 &#8211;&gt; 00:49:57.740<br>it turns out somebody that I can&#8217;t imagine is going to find it offensive.<\/p>\n\n\n\n<p>307<br>00:49:59.140 &#8211;&gt; 00:50:01.159<br>What happens to me?<\/p>\n\n\n\n<p>308<br>00:50:03.680 &#8211;&gt; 00:50:07.839<br>camille: What happens to you is, it&#8217;s another,<\/p>\n\n\n\n<p>309<br>00:50:08.260 &#8211;&gt; 00:50:17.640<br>camille: it&#8217;s really up to, first, the people who decided to put the moderation on, what they want to do with you.<\/p>\n\n\n\n<p>310<br>00:50:17.710 &#8211;&gt; 00:50:26.220<br>camille: To me, banning is not the right answer. Silencing is not the right answer, because again, I think there are degrees in<\/p>\n\n\n\n<p>311<br>00:50:26.240 &#8211;&gt; 00:50:30.279<br>camille: what you would say, in toxicity. This is something that<\/p>\n\n\n\n<p>312<br>00:50:30.310 &#8211;&gt; 00:50:31.700<br>camille: we do,<\/p>\n\n\n\n<p>313<br>00:50:31.750 &#8211;&gt; 00:50:39.560<br>camille: that we do care about, you know, the severity, because again, saying, you know, something really offensive and threatening someone to death<\/p>\n\n\n\n<p>314<br>00:50:39.640 &#8211;&gt; 00:50:41.789<br>camille: are really different.<\/p>\n\n\n\n<p>315<br>00:50:41.900 &#8211;&gt; 00:50:46.909<br>camille: But at first, my comment itself just never gets<\/p>\n\n\n\n<p>316<br>00:50:47.000 &#8211;&gt; 00:50:53.510<br>camille: published, right? 
It never gets published, and you don&#8217;t know it, first thing, right? And I don&#8217;t know. Okay, and you don&#8217;t know.<\/p>\n\n\n\n<p>317<br>00:50:54.000 &#8211;&gt; 00:50:55.470<br>camille: In<\/p>\n\n\n\n<p>318<br>00:50:55.590 &#8211;&gt; 00:51:00.879<br>camille: an ideal world, to me, I would like to see<\/p>\n\n\n\n<p>319<br>00:51:01.020 &#8211;&gt; 00:51:13.449<br>camille: other steps behind this. Maybe, you know, an explanation, or just something informative as to why your comment wouldn&#8217;t appear. Like, okay, this is<\/p>\n\n\n\n<p>320<br>00:51:13.460 &#8211;&gt; 00:51:25.790<br>camille: not going to appear because, according to, I don&#8217;t know, maybe this law, or, you know, this context, you shouldn&#8217;t be using it. You know, to start informing people, because a lot of<\/p>\n\n\n\n<p>321<br>00:51:26.280 &#8211;&gt; 00:51:39.699<br>camille: a lot of toxic comments come from a desire to hurt other people, not from understanding what those words mean, out of ignorance. And I don&#8217;t want people to be toxic<\/p>\n\n\n\n<p>322<br>00:51:39.710 &#8211;&gt; 00:51:50.900<br>camille: at all, but I think being toxic out of ignorance, because you haven&#8217;t, you know, tried to put yourself in other people&#8217;s shoes, is to me probably one of<\/p>\n\n\n\n<p>323<br>00:51:50.970 &#8211;&gt; 00:51:53.249<br>camille: the worst things. Like,<\/p>\n\n\n\n<p>324<br>00:51:53.490 &#8211;&gt; 00:52:04.569<br>camille: let&#8217;s fight.
Ignorance is, I think, one of the things where I would like to see, you know, extra steps after muting the comment.<\/p>\n\n\n\n<p>325<br>00:52:04.650 &#8211;&gt; 00:52:15.970<br>Shlomo Sher: You know, this is one of the things I like about Reddit. Reddit moderators are usually very clear about which rule you&#8217;ve broken, and they tell you, you know, these are the rules, and you broke rule number 2.<\/p>\n\n\n\n<p>326<br>00:52:15.980 &#8211;&gt; 00:52:35.550<br>Shlomo Sher: But even then, I mean, a lot of times you could potentially have a back and forth, you know: I didn&#8217;t break rule number 2. How exactly is it supposed to apply? And maybe this is the sort of thing where, oh my God, as a moderator it would just drive me crazy if I had to have these arguments with people. Right. But at least the AI could, hopefully,<\/p>\n\n\n\n<p>327<br>00:52:35.830 &#8211;&gt; 00:52:51.610<br>Shlomo Sher: because it takes almost no work, give you a bunch of details right there as to the thinking that led it there. At least that&#8217;s a possibility, if the company decides to provide people with that kind of feedback, and it seems only fair to provide you with the kind of feedback you&#8217;re talking about.<\/p>\n\n\n\n<p>328<br>00:52:51.850 &#8211;&gt; 00:53:04.880<br>camille: Yeah, that was one of the things I was talking to my manager about last time: you know, the question of the dick pics. So I don&#8217;t know how it is in the US, but in France now we can<\/p>\n\n\n\n<p>329<br>00:53:05.130 &#8211;&gt; 00:53:16.549<br>Shlomo Sher: get sued for such a thing. You can, you know, pay a fine for sending dick pics. Yeah, if you&#8217;re like, if you&#8217;re like...<\/p>\n\n\n\n<p>330<br>00:53:16.820 &#8211;&gt; 00:53:28.839<br>Shlomo Sher: No. But yeah, that&#8217;s really interesting. In the US, I mean, you know,
yeah, I&#8217;ve never heard of anything like that. It would be protected under free speech, I&#8217;m assuming, unless you&#8217;re<\/p>\n\n\n\n<p>331<br>00:53:29.050 &#8211;&gt; 00:53:57.539<br>camille: Well, I guess I don&#8217;t know, I&#8217;m not a lawyer, I&#8217;m gonna walk that one back. Well, in France, it is. You can get fined for sending a dick pic to someone, because it would be considered sexual harassment, and it&#8217;s condemned by law in France. So, you know, when someone is sending a dick pic, maybe not just sending it, like, maybe moderating it and saying, oh, did you know actually that sending a dick pic can cost you like \u20ac3,000, because it is condemned by law?<\/p>\n\n\n\n<p>332<br>00:53:57.550 &#8211;&gt; 00:54:00.859<br>camille: And that I would like to see, because I&#8217;m pretty sure that<\/p>\n\n\n\n<p>333<br>00:54:00.970 &#8211;&gt; 00:54:19.399<br>camille: for all these authors sending dick pics, it will quite change a lot of things. The same thing for, you know, racism: like saying, you know, sending racist comments can cost you that much money, you can get condemned, you can go to prison. I think that would change things, because<\/p>\n\n\n\n<p>334<br>00:54:19.430 &#8211;&gt; 00:54:35.020<br>camille: I think even, and even more for the younger audiences that sometimes send those words because they see it, so they just, you know, mimic the toxic behaviors, and they don&#8217;t realize how severe this can be. And I think that would, you know,<\/p>\n\n\n\n<p>335<br>00:54:35.050 &#8211;&gt; 00:54:51.079<br>camille: be like a cold shower to that. And, you know, again, trying to educate people. Let&#8217;s educate people. Let&#8217;s not be punitive and just ban people. Let&#8217;s educate people.
I think it&#8217;s the best way to go forward with that.<\/p>\n\n\n\n<p>336<br>00:54:51.480 &#8211;&gt; 00:55:02.070<br>Shlomo Sher: It&#8217;s interesting that, on the one hand, you&#8217;re educating them on the rules, and how exactly they&#8217;re not meeting the rules. On the other hand, hopefully, you&#8217;re also educating them on<\/p>\n\n\n\n<p>337<br>00:55:02.120 &#8211;&gt; 00:55:03.410<br>Shlomo Sher: how to become,<\/p>\n\n\n\n<p>338<br>00:55:03.460 &#8211;&gt; 00:55:08.939<br>Shlomo Sher: you know, better people, decent human beings. That would be good,<\/p>\n\n\n\n<p>339<br>00:55:08.990 &#8211;&gt; 00:55:28.539<br>Shlomo Sher: hopefully. But sorry, go ahead. I do understand why companies choose not to, though, because they don&#8217;t want to get into that conversation, right? And people are rules lawyers, right? Yes, they will poke at the rules and poke at the rules and poke at the rules until<\/p>\n\n\n\n<p>340<br>00:55:28.740 &#8211;&gt; 00:55:31.170<br>camille: That&#8217;s why.<\/p>\n\n\n\n<p>341<br>00:55:31.720 &#8211;&gt; 00:55:36.979<br>camille: That&#8217;s why it&#8217;s so difficult, you know, because on our side, we have to educate<\/p>\n\n\n\n<p>342<br>00:55:37.120 &#8211;&gt; 00:55:48.310<br>camille: the people on the Internet. But we also have to educate the actors that have the power to make decisions and, you know, to make safe spaces, because they don&#8217;t<\/p>\n\n\n\n<p>343<br>00:55:48.320 &#8211;&gt; 00:56:18.299<br>camille: sometimes really realize what&#8217;s happening in their community, or they&#8217;re like, oh, you know, a bit of competitiveness is good for the game. I&#8217;m like, yes, okay, but maybe threatening someone with rape is not, you know, competitive. It&#8217;s just something more. And, you know, there is also: yes, but we want to create a seamless experience for the players.
You know, we want competitiveness, we want people to fight around the games. And I&#8217;m like, yes, I do understand, but maybe protect your players as well, because<\/p>\n\n\n\n<p>344<br>00:56:18.310 &#8211;&gt; 00:56:24.380<br>camille: it&#8217;s just, you know, so much more important. Like, do not forget that people still<\/p>\n\n\n\n<p>345<br>00:56:24.400 &#8211;&gt; 00:56:46.189<br>camille: kill themselves because they are harassed online. So this is really something that we try to educate people about. Like, we do understand gaming can be very competitive, and it&#8217;s a business of its own, but let&#8217;s be more human, maybe, for 2023 as well. Trying to, you know,<\/p>\n\n\n\n<p>346<br>00:56:46.430 &#8211;&gt; 00:56:54.549<br>camille: protect the players, the people, the kids, the young players that are behind the screens, is something that I would like to see a bit more.<\/p>\n\n\n\n<p>347<br>00:56:56.000 &#8211;&gt; 00:57:07.940<br>Shlomo Sher: We&#8217;re right on it. Okay, then it seems like a good place to kind of wrap things up, where I ask you our last question and all that.<\/p>\n\n\n\n<p>348<br>00:57:07.980 &#8211;&gt; 00:57:09.569<br>camille: Yes.<\/p>\n\n\n\n<p>349<br>00:57:09.940 &#8211;&gt; 00:57:24.410<br>Shlomo Sher: All right. Do you want to take a minute to think about what you want to say? Our final question is going to be, you know: what do you want to leave our audience with? And, you know, I think I&#8217;m ready. That&#8217;s fine. Okay, yes.<\/p>\n\n\n\n<p>350<br>00:57:24.480 &#8211;&gt; 00:57:28.669<br>Shlomo Sher: All right, Camille, we&#8217;re just about done. What do you want to leave our audiences with?<\/p>\n\n\n\n<p>351<br>00:57:29.600 &#8211;&gt; 00:57:44.399<br>camille: First of all, let&#8217;s make 2023 the year of safety, of diversity and inclusion.
I&#8217;ve read a study very recently that 87% of 18- to 24-year-olds<\/p>\n\n\n\n<p>352<br>00:57:44.410 &#8211;&gt; 00:57:59.240<br>camille: have felt cyber violence in their life. Let&#8217;s make it 0 in 2023. You are not alone. People like us are trying our best to make gaming a better place. So we&#8217;re here, we&#8217;ve got your back.<\/p>\n\n\n\n<p>353<br>00:57:59.780 &#8211;&gt; 00:58:01.909<br>Shlomo Sher: All right. Sounds good.<\/p>\n\n\n\n<p>354<br>00:58:02.010 &#8211;&gt; 00:58:08.950<br>Shlomo Sher: Let&#8217;s do it in 2023. All right, Camille, thank you for coming to the show. Thank you so much for having me.<\/p>\n\n\n\n<p>355<br>00:58:09.040 &#8211;&gt; 00:58:10.339<br>Shlomo Sher: All right! Play nice, everyone!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>[Release Date: February 14, 2023]&nbsp;Toxicity in online gaming is an incredibly complex problem to solve. Teams of moderators often seem hopelessly outmatched by the amount<\/p>\n","protected":false},"author":1,"featured_media":1021,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_FSMCFIC_featured_image_caption":"","_FSMCFIC_featured_image_nocaption":"","_FSMCFIC_featured_image_hide":"","_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[4],"tags":[221,220,223,222],"class_list":["post-1020","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-the-podcast","tag-camille-guillemot","tag-game-moderation","tag-league-of-legends","tag-toxicity-in-games"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/ethicsandvideogames.com\/wp-content\/uploads\/2023\/01\/62-1-1-1.png?fit=1200%2C628&ssl=1","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=\/wp\/v2\/
posts\/1020","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1020"}],"version-history":[{"count":4,"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=\/wp\/v2\/posts\/1020\/revisions"}],"predecessor-version":[{"id":1072,"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=\/wp\/v2\/posts\/1020\/revisions\/1072"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=\/wp\/v2\/media\/1021"}],"wp:attachment":[{"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1020"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1020"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ethicsandvideogames.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1020"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}