Episode 66 – Video Game Design Lessons from Moral Psychology  (with Paul Formosa and Malcolm Ryan)

[Release Date: April 11, 2023] How do players morally engage with games? What can user experience research and moral psychology tell us about how players experience and think about ethical decisions in games? We chat with philosopher Paul Formosa and game designer Malcolm Ryan about their ongoing collaborative research exploring these questions.

SHOW TRANSCRIPT

00:05:14.680 –> 00:05:22.970
Shlomo Sher: All right. Welcome, everybody. We're here with Paul Formosa, a professor of philosophy, head of the Department of Philosophy, and co-director of the Centre for Agency, Values and Ethics.

60
00:05:23.020 –> 00:05:30.880
Shlomo Sher: You know, I should have stopped before we did this, Paul, and asked you guys: how do you pronounce it? Is it Macquarie?

61
00:05:30.970 –> 00:05:38.230
Shlomo Sher: Yeah, Macquarie. Okay, anything else I need to know how to pronounce? No, we're good. Alright.

62
00:05:38.460 –> 00:05:49.509
Shlomo Sher: All right. Welcome, everybody. We're here with Paul Formosa, a professor of philosophy, head of the Department of Philosophy, and co-director of the Centre for Agency, Values and Ethics at Macquarie University in Sydney,

63
00:05:49.520 –> 00:06:16.329
Shlomo Sher: Australia. Paul has published widely on topics in moral and political philosophy, with a recent focus on ethical issues raised by new technology, such as video games and AI. He also collaborates regularly with colleagues from a range of different disciplines outside of philosophy, one of whom is our other guest today, Malcolm Ryan. Malcolm Ryan is course director of the Game Design and Development program in the School of Computing at Macquarie University in Sydney, Australia.

64
00:06:16.340 –> 00:06:23.919
Shlomo Sher: By the way, we love our Aussies. We've had so many on this show. You guys do some really interesting stuff.

65
00:06:24.020 –> 00:06:37.090
Shlomo Sher: Malcolm has published in game design, virtual reality, and artificial intelligence. His current research focuses on how players make ethical decisions in video games, in collaboration with colleagues from philosophy, psychology, and creative writing.

66
00:06:37.270 –> 00:06:50.279
Shlomo Sher: He blogs about video game ethics research at moralityplay.org. So both of you really do collaborate with people from other fields, which is fantastic, and you're here to talk to us together,

67
00:06:50.360 –> 00:06:53.550
Shlomo Sher: right? The research you do together

68
00:06:53.560 –> 00:07:18.580
Shlomo Sher: is really grounded in moral psychology and user experience, or UX, research. They're interested in how players engage morally with games, how they exercise their ethical thinking to make decisions in games, and how they experience these kinds of decisions. The hope is that this research will lead to richer, more engaging ethical content in games, both for entertainment and education. And wow, that was a very long introduction.

69
00:07:18.590 –> 00:07:24.910
Shlomo Sher: Welcome, and Paul, welcome to the show. Yeah, thank you. Thank you very much for having us.

70
00:07:26.040 –> 00:07:42.409
Shlomo Sher: Okay. So our episode is about how what you've learned about moral psychology can help video games better engage with ethics. So, to set things up clearly for listeners: what is moral psychology?

71
00:07:43.290 –> 00:08:02.359
Malcolm Ryan: Shall I take that one? So essentially, moral psychology is the study of, I like to think of it as, how we do morality when we're doing morality. So as thinking creatures, when we're being moral, we've got various processes going on in our minds, and moral psychology is trying to work out how we do what we do when we're being moral.

72
00:08:02.440 –> 00:08:12.219
Shlomo Sher: And so it's different from... sorry, someone's starting up a mower outside. That's very timely.

73
00:08:12.230 –> 00:08:22.990
Malcolm Ryan: It's different from philosophical ethics, I think, in that it's not asking, sort of theoretically, what is ethics, or what is right and wrong. It's more asking: okay, you're sitting down, you're making an ethical decision.

74
00:08:23.040 –> 00:08:26.180
Malcolm Ryan: What is going on in your head when you're doing this?

75
00:08:26.210 –> 00:08:33.750
Malcolm Ryan: And I'm really interested in it, because I think, you know, these are the questions we as game designers think about all the time: how does a player

76
00:08:34.240 –> 00:08:43.710
Malcolm Ryan: think about what they're doing when they're playing our game? So in this case, the game is a moral problem of some sort.

77
00:08:43.780 –> 00:08:47.739
Paul Formosa: What would I add to that?

78
00:08:47.750 –> 00:09:07.780
Paul Formosa: Well, I guess we'll get into it as we go along. But look, I agree with Malcolm's kind of definition there. And when we think about moral psychology and games, it's pretty straightforward: how are people making those moral decisions in games? What sorts of things influence those decisions, for example, things like morality meters, or the design of the game,

79
00:09:07.790 –> 00:09:23.530
Paul Formosa: the different characters they interact with, and so on. And the other thing we also want to think a little about in moral psychology is moral development and moral engagement. So, you know, when we're thinking about moral capacities, we're also thinking about how people can change, or get better or worse,

80
00:09:23.560 –> 00:09:39.030
Paul Formosa: and we can also think about how, through engagement with games, we might be able to make those a lot better or worse, and what sorts of different design strategies or techniques will interact or engage with those moral capacities. That's the sort of thing we've been trying to look at in our space.

81
00:09:39.080 –> 00:09:41.770
Shlomo Sher: Okay. So

82
00:09:41.960 –> 00:09:51.589
Malcolm Ryan: Okay, so I'll go. So, sort of historically, both in philosophical ethics and also in the psychology, there have sort of been two

83
00:09:51.670 –> 00:10:04.170
Malcolm Ryan: contrasting approaches to thinking about how we do this. One is the more rational: okay, morality is about thinking about certain rules and standards, you know, and working out how to apply those.

84
00:10:04.180 –> 00:10:11.729
Malcolm Ryan: And so it's historically a very sort of rationalist approach to how we do morality: we sit down and we solve moral problems.

85
00:10:11.760 –> 00:10:31.130
Malcolm Ryan: The alternative approach is sort of thinking about morality more as character, or more as sort of intrinsic, intuitive things. And some people will go so far as to say, you know, all that reasoning is just post hoc rationalization of what we emotionally want or don't want. And I would just say,

86
00:10:31.140 –> 00:10:44.510
Malcolm Ryan: modern moral psychology seems to sort of prefer what's called a dual process model, which is like the thinking-fast-and-slow model that people are familiar with from Kahneman's research, and so forth,

87
00:10:44.520 –> 00:10:52.189
Malcolm Ryan: where we do have these rational processes, and sometimes we sit down and reason through our moral responses.

88
00:10:52.240 –> 00:11:10.560
Malcolm Ryan: But we also have these intuitive moral responses and automatic moral responses, and in a lot of what we're doing in real life, we don't have the opportunity to sit down and do the moral math. We just make decisions automatically. And so this is sort of the model that informs what we're doing:

89
00:11:10.570 –> 00:11:25.729
Malcolm Ryan: thinking about, yeah, we do have these times when we sit down and we really nut out a moral choice, but we also have times where we're just sort of reacting morally to things. And so we're using that sort of psychological model, and then there are sort of

90
00:11:25.740 –> 00:11:31.329
Malcolm Ryan: more detailed theories about the different components of that, and what we're doing when we're doing that.

91
00:11:31.910 –> 00:12:00.499
Shlomo Sher: Okay. Games obviously can present you with lots of different opportunities to engage both of those ways of thinking. Though when most people think of how video games try to engage players with ethics, I think what they typically tend to think about is morality meters, or big moral dilemmas with a lot at stake. You know, my favorites are from the Mass Effect games, which give you options to choose from, and then you get rated

92
00:12:00.510 –> 00:12:17.009
Shlomo Sher: for your actions as Paragon or Renegade, or good or evil, or something like that, and they affect your character's moral alignment on some sort of morality meter. Your research shows that this sort of focus is too narrow, right? That there are other aspects of our moral expertise,

93
00:12:17.020 –> 00:12:35.560
Shlomo Sher: and your notes use this phrase "moral expertise" as if it's clear, as if we know what that means, that you engage with. So before we talk about moral expertise: what exactly are morality meters? And are there really substantially different types of morality meters?

94
00:12:36.050 –> 00:12:51.879
Malcolm Ryan: So, I mean, some sort of moral accounting in games has been around since at least the early Ultima games, where you get points for goodness or evilness in the game.

95
00:12:51.930 –> 00:13:00.669
Malcolm Ryan: I guess the morality meter really first appeared as a visible on-screen mechanic in Knights of the Old Republic.

96
00:13:00.850 –> 00:13:13.759
Malcolm Ryan: I think that was the first game that had an explicit meter on the screen telling you how many points of good or evil you had, and as you made decisions in the game that was reflected. And so, essentially,

97
00:13:13.770 –> 00:13:31.970
Malcolm Ryan: you know, there are lots of variations on this, and lots of games have some moral accounting going on, either behind the scenes or explicitly presented to you. But when we're talking about morality meters, mostly, and especially in the research we're doing, we mean a very visible, on-screen,

98
00:13:31.980 –> 00:13:40.690
Malcolm Ryan: you know, very explicit meter showing you: yes, this decision resulted in you getting 10 points of good, this decision resulted in you getting

99
00:13:40.770 –> 00:13:54.230
Malcolm Ryan: negative 10 points, or 10 points of evil, and your meter starts somewhere at 0 and moves towards good or evil as the game assesses your moral character, in some sense, based on your choices.

100
00:13:54.300 –> 00:13:59.829
Malcolm Ryan: And there are a lot of games that do have this as a very explicit mechanic in the game.
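To make the mechanic concrete, here is a minimal sketch, in Python, of the kind of explicit on-screen meter being described: it starts at zero, each labelled choice shifts it by a visible amount, and the running total stands in for the game's judgment of your character. The class name, point values, and choice labels are illustrative assumptions, not the actual implementation of any game discussed here.

```python
# Minimal sketch of an explicit on-screen morality meter of the kind described.
# Names, point values, and choice labels are illustrative assumptions.

class MoralityMeter:
    def __init__(self, minimum=-100, maximum=100):
        self.value = 0  # starts neutral, between "evil" and "good"
        self.minimum = minimum
        self.maximum = maximum

    def apply(self, delta):
        """Shift the meter after a choice, e.g. +10 'good' or -10 'evil'."""
        self.value = max(self.minimum, min(self.maximum, self.value + delta))

    def label(self):
        if self.value > 0:
            return "good"
        if self.value < 0:
            return "evil"
        return "neutral"


# Each choice displays its meter effect up front, as the guests describe.
choices = {"return the money": +15, "steal the money": -15}

meter = MoralityMeter()
meter.apply(choices["steal the money"])
print(meter.value, meter.label())  # -15 evil
```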

101
00:14:00.080 –> 00:14:03.860
A Ashcraft: Would you count the giant beast in Black & White

102
00:14:04.080 –> 00:14:06.260
A Ashcraft: as a morality meter?

103
00:14:06.270 –> 00:14:24.480
Malcolm Ryan: Yeah, there's definitely a kind of moral accounting going on there, and it's showing it. Similarly, I think Fable does it much more in terms of showing your avatar changing and expressing it that way. So there are definitely more subtle ways of showing this sort of thing, but behind the scenes it's still the same:

104
00:14:24.490 –> 00:14:36.890
Malcolm Ryan: you know, there's a numeric value in the game, which is, you are currently at 20 points of good, and that is represented on the screen somehow visually. I mean, we're looking at very explicit meters because, you know, we wanted,

105
00:14:37.030 –> 00:14:42.120
Malcolm Ryan: you know, for starting out the research, we wanted to say, yeah, this is the thing, see how it affects people's

106
00:14:42.360 –> 00:14:59.079
Malcolm Ryan: decisions. But ultimately there are lots of ways this can be done. I guess a related mechanic is the kind of reputation mechanics that you see in games like Fallout: New Vegas and other games like that, where it's less explicitly about telling you

107
00:14:59.130 –> 00:15:07.400
Malcolm Ryan: your overall good or evil, and more about, well, this faction or this individual regards you as good, which

108
00:15:07.480 –> 00:15:09.280
Malcolm Ryan: allows you

109
00:15:09.340 –> 00:15:19.879
Malcolm Ryan: as a designer to present more than one axis of morality, which is again something we haven't gone into yet. Something I'm interested in exploring further is, what if we have

110
00:15:20.280 –> 00:15:39.849
Malcolm Ryan: different morality meters? So far we have sort of looked at having one morality meter, but having it express different kinds of morality. But you know, it would be interesting for the research to look at: well, you've got this meter which tells you, according to this person's standards you're a good person, but according to that person's standards you're less good, and so on.
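As a rough illustration of the contrast Malcolm is drawing, a per-faction reputation system tracks several opinions of you at once rather than one good-evil total. The faction names and numbers in this sketch are invented for the example, not taken from any of the games mentioned.

```python
# Sketch contrasting a single global meter with a per-faction reputation
# mechanic. Faction names and point values are invented for illustration.

reputation = {"townsfolk": 0, "outlaws": 0}

def apply_reputation(effects):
    """One choice can move different groups' opinions in different directions."""
    for faction, delta in effects.items():
        reputation[faction] += delta

# Handing a thief over to the police might please one group and anger another,
# so there is no single "good/evil" axis to optimise.
apply_reputation({"townsfolk": +10, "outlaws": -20})
print(reputation)  # {'townsfolk': 10, 'outlaws': -20}
```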

111
00:15:39.930 –> 00:15:48.569
Shlomo Sher: Right. To me, yeah, those two things seem dramatically different, and it makes sense that you guys separate them.

112
00:15:48.630 –> 00:15:58.039
Shlomo Sher: You know, one is judging you very explicitly. The other one is not so much judging you, but kind of letting you know that you have a reputation,

113
00:15:58.070 –> 00:16:05.980
Shlomo Sher: right, among different groups, and allows you to do reputation management rather than, you know, mess with your,

114
00:16:06.570 –> 00:16:23.950
Shlomo Sher: I don't know, overall alignment, or, depending how you look at it, overall character. So tell us about the studies that you guys did. You ran a study of morality meters, and it included creating a game specifically for that purpose. What did you learn about how users interact with morality meters?

115
00:16:24.230 –> 00:16:25.520
Malcolm Ryan: You want to talk about that, Paul?

116
00:16:25.570 –> 00:16:29.379
Paul Formosa: How about you start and talk about the game, and I'll talk about the qualitative study?

117
00:16:29.540 –> 00:16:44.370
Malcolm Ryan: Okay, yeah. So our aim was to look at this very basically, because there's a lot of conversation in the research about whether morality meters are good or bad.

118
00:16:44.380 –> 00:17:00.619
Malcolm Ryan: A lot of the concern is whether they just purely instrumentalize the morality and turn it into just a point-scoring exercise, and people don't actually think morally about their choices; they just go and, you know, choose the thing that is labeled good. And there is

119
00:17:00.720 –> 00:17:09.460
Malcolm Ryan: lots of research that is based on, you know, player ethnography or deep readings of games, but very little, you know, at scale,

120
00:17:09.480 –> 00:17:12.400
Malcolm Ryan: you know, getting lots of people to play a game and seeing what happens.

121
00:17:12.450 –> 00:17:18.989
Malcolm Ryan: And so we wanted to look into that and get some real data on whether morality meters actually change people's decisions.

122
00:17:19.130 –> 00:17:25.630
Malcolm Ryan: And we had a bunch of different kinds of ethical decisions we wanted to represent in the game, and we wanted to show whether

123
00:17:25.720 –> 00:17:32.829
Malcolm Ryan: different kinds of morality meters, telling you different kinds of morality, had an effect on the player.

124
00:17:33.320 –> 00:17:40.029
Malcolm Ryan: In order to do this, we looked at, you know, existing games, but there are very few games out there that are

125
00:17:40.280 –> 00:17:51.529
Malcolm Ryan: short enough to play in a research setting, that give us the explicit mechanics, and control over those mechanics in the way that we wanted, so we could change up the design

126
00:17:51.580 –> 00:18:07.010
Malcolm Ryan: and, you know, all those other factors, and allow us to measure the things we want to measure. And so we thought, okay, well, we really need to make our own game in order to do this properly, and that way we have the full source of the game and can make any design changes we want.

127
00:18:07.020 –> 00:18:19.970
Malcolm Ryan: And so we specifically designed this game, The Great Fire, which you can play on the web page at moralityplay.org. Actually, it's on Morality Play on itch.io;

128
00:18:20.130 –> 00:18:22.210
you can find it there.

129
00:18:22.400 –> 00:18:38.529
Malcolm Ryan: For the game, we engaged a creative writing academic from Macquarie and a local game development team, Chaos Theory Games, and they helped write and develop the game and produce a beautiful game.

130
00:18:38.940 –> 00:18:46.639
Malcolm Ryan: It's sort of a film noir game set in a cinema in sort of 1940s country Australia,

131
00:18:46.680 –> 00:18:58.159
Malcolm Ryan: and you play an usher at the cinema. It's sort of a visual-novel-style game telling the story of one day, one tragic day, at work.

132
00:18:58.430 –> 00:19:13.120
Malcolm Ryan: And as such, you have a lot of difficult moral decisions to make along the way, and we were able to then, you know, record exactly what players were doing in those moral decisions, and present a variety of different morality meters there,

133
00:19:13.130 –> 00:19:42.879
Malcolm Ryan: showing, you know, different things. So we had one sort of morality meter where the obviously, intuitively good choices were all labeled as good, and the obviously, intuitively evil choices were labeled as evil, although we did have some choices that are actually closer to being trolley-problem dilemmas, where it wasn't clearly good or evil either way. And so we were interested in seeing how changing that meter around, recommending some things as good and some things as evil, would change

134
00:19:42.890 –> 00:19:44.060
Malcolm Ryan: the choices players made.

135
00:19:44.690 –> 00:19:58.949
Malcolm Ryan: Hmm, yeah. And so, by making our own game, we could then run an experiment. Actually, we ran lots of experiments; the nice thing about having our own game is we can keep coming up with another experiment idea and doing a different version of it.

136
00:19:59.020 –> 00:20:04.420
Malcolm Ryan: But for the first one, which Paul will talk about, we ran a qualitative study where we got, I can't remember,

137
00:20:04.720 –> 00:20:15.719
Paul Formosa: around 25, I think, 25, in two groups. We had two groups play the game, basically, with one group having the intuitive meter setting. So there were, I think, 7 or 8 key choices,

138
00:20:15.730 –> 00:20:32.800
Paul Formosa: and we wanted them to be nicely structured. So we had morality-versus-self-interest type choices, in a contact and a non-contact version: in one you sort of trip over someone and steal their money, and in one the money is at a stand and you steal it. We had two trolley problems, which are basically deontological-versus-consequentialist type decisions.

139
00:20:32.810 –> 00:20:43.120
Paul Formosa: So one is a version of the standard trolley problem where you pull a lever, and the other is a version of the one where you push the large man off the bridge; in our case you have to kick a chair out from under somebody.

140
00:20:43.220 –> 00:20:59.959
Paul Formosa: And then we have a non-moral choice, a coin toss, which should be 50-50, so we wanted to see, okay, does the meter impact something like that. And we had another choice at the end, which sort of came up organically, about getting revenge on the evil person who does all this damage, or you could hand him over to the police.

141
00:21:00.280 –> 00:21:16.290
Paul Formosa: And so, as Malcolm said, one group got what we consider the intuitive meter: doing morality over self-interest, doing the moral thing, counts as good, so stealing would count as evil and not stealing counts as good. And the other group had that flipped around, so stealing says good, not stealing says evil,

142
00:21:16.520 –> 00:21:30.829
Paul Formosa: and likewise with the others. Now, of course, with the trolley problems it's much more ambiguous, you know; although most people think you should pull the lever, not everyone agrees. And then, we should probably explain what the trolley problem is, just in case somebody doesn't know what it is.

143
00:21:30.840 –> 00:21:45.490
Paul Formosa: Yeah. So trolley problems are widely studied in moral psychology; there are a lot of studies on them, and they come out of Philippa Foot's work in philosophy. And basically the idea is: imagine there's a runaway train, it's hurtling down the track, and there's a lever there.

144
00:21:45.500 –> 00:21:54.430
Paul Formosa: There are three people on the track. If you don't do anything, the train is going to run over and kill those three people. If you pull this lever, it'll turn the train onto another path, and it'll kill one person.

145
00:21:54.560 –> 00:21:59.979
Paul Formosa: What do you do? Well, most people think, oh, in that case, pull the lever, save the three; unfortunately, you've killed one.

146
00:22:00.250 –> 00:22:01.110
Paul Formosa: Now

147
00:22:01.160 –> 00:22:08.080
Paul Formosa: there's a variation, what's called the fat man or large man version, where you imagine there's a bridge. So, same thing, there's a runaway train.

148
00:22:08.150 –> 00:22:33.809
Paul Formosa: Once again there are three people on the track, but this time, instead of pulling a lever to stop the train killing the three, you have to push a large man off a bridge. Now, you can't jump off yourself, you're too small, or something like that, but the large man will somehow stop the train. I don't know how it's supposed to happen; you're supposed to not weigh enough yourself, it's about mass, and, well, his intestines will get into the wheels, I really don't know. This is where the hand-waving comes in.

149
00:22:33.820 –> 00:22:38.730
Paul Formosa: But anyway, most people think, well, actually, no, it'd be wrong to push this large guy off

150
00:22:38.740 –> 00:22:56.790
Paul Formosa: the bridge. And then the question is, what's the difference? Right, in both cases one person dies and three people are saved, so why is it okay to pull the lever but not to push the large guy? And there are lots of answers people give: in one case you're using the large man as a means, and in the other case, you know, it's just a foreseen side effect but not an intended consequence, and there's the doctrine of double effect, and all sorts of

151
00:22:56.800 –> 00:23:10.570
Paul Formosa: philosophical issues, which aren't that interesting here. What we wanted, though, was to put equivalent versions into our game, and the reason, you know, that they're interesting is because there are competing moral concerns. Right, you know,

152
00:23:10.580 –> 00:23:21.680
Paul Formosa: it's good to save the three, but is it all right to, you know, use up-close personal violence in one case, or impersonal levers in the other case?

153
00:23:21.820 –> 00:23:23.399
Paul Formosa: So basically, what we found

154
00:23:23.430 –> 00:23:30.039
Paul Formosa: was really interesting. Now, of course, it's only a small study, that needs to be kept in mind, and Malcolm will talk about a bigger study afterwards.

155
00:23:30.730 –> 00:23:43.640
Paul Formosa: But basically, we went through and coded the various themes that people brought up, and we found four main sorts of responses when we asked them about how they, you know, thought about or engaged with the meter.

156
00:23:43.870 –> 00:23:50.029
Paul Formosa: And, well, two of them were about indifference to or rejecting the meter. So about half the people

157
00:23:50.050 –> 00:24:04.840
Paul Formosa: just said, I ignored it, didn't pay attention to it, or deliberately put it aside or rejected it, or something like that. So about half the people in both our groups, whether the meter was intuitive or not intuitive, rejected it or were indifferent to it. So I think that's an interesting finding,

158
00:24:04.920 –> 00:24:13.949
Paul Formosa: that half of people just didn't engage with the meter at all, didn't particularly like it, didn't want it influencing their decisions. They wanted to make decisions on their own merits, not, you know, do what the meter told them to do.

159
00:24:14.640 –> 00:24:24.470
Paul Formosa: But the, I think, more interesting finding is that in our intuitive group the most common thing was treating the meter as a guide, as a moral guide. So people would say things like,

160
00:24:25.060 –> 00:24:40.880
Paul Formosa: I didn't just follow it, but it made me sort of stop and think, or, you know, without that I would have just kind of done whatever, but it made me think seriously about what was going on, or it got me thinking about this. So they actually treated it as a kind of useful moral guide.

161
00:24:40.930 –> 00:24:49.389
Shlomo Sher: And so was the idea that they know they're being judged in some way, because they know it's going to affect where they are on the meter? Is that the idea?

162
00:24:50.190 –> 00:25:07.619
Paul Formosa: Okay. So, I mean, these are all self-report things; this is what people tell us. What's actually going on we'll get to in a second, when we look at the quantitative study, where we can actually see, is it making a difference or not. But what people told us was, you know, that they would take it into account when making decisions and thinking about what to do, and so on.

163
00:25:08.050 –> 00:25:10.449
Paul Formosa: So that was the most common response when it was the intuitive meter.

164
00:25:10.670 –> 00:25:21.320
Paul Formosa: But when we look at the non-intuitive meter group, it flipped around: that was the least common thing, and the most common thing was treating the meter as a score.

165
00:25:21.700 –> 00:25:25.200
Paul Formosa: So they said things like, oh,

166
00:25:25.540 –> 00:25:37.120
Paul Formosa: I thought it was the wrong thing to do, but it said it was 15 good, so I did it; or, I was already on 90, and so I did it because I wanted to get to 100 on my meter. Or other people would have different sorts of

167
00:25:37.140 –> 00:25:44.739
Paul Formosa: strategies; they might say, oh, I want to sort of keep my meter about halfway, so this one was plus 15 and would end up getting me back to about halfway, so I picked that.

168
00:25:45.350 –> 00:26:02.780
Paul Formosa: So you can think of these two approaches as a kind of much more instrumental relationship to the meter, and a kind of more, I guess, intrinsic relationship, where it's getting you to think about morality. So it's really interesting: when the meter was non-intuitive, people just instrumentalized

169
00:26:02.790 –> 00:26:08.419
Paul Formosa: the meter a lot more. It was just another mechanic, another score to be optimized or do what you want with.

170
00:26:08.500 –> 00:26:26.180
Paul Formosa: It was kind of meaningless to them, morally; it wasn't promoting reflection or anything like that. Whereas the intuitive meter was tending to do that much more often. So I thought that was a really interesting finding, that it did seem to lead to this more reflective play if it was intuitive, and instrumental play if it was not intuitive.

171
00:26:26.190 –> 00:26:31.099
A Ashcraft: So the difference between the intuitive meter and the non-intuitive meter is...

172
00:26:32.070 –> 00:26:43.820
Malcolm Ryan: The non-intuitive meter is flipped around, so that every choice you might intuitively think of as good was labeled evil, and vice versa. Stealing money was good; you got points of good

173
00:26:44.000 –> 00:26:58.669
Malcolm Ryan: for kicking a dog. One of the most controversial choices amongst all the players was, there's just a harmless dog sitting there, and it says, do you want to kick the dog or not kick the dog? And, you know,

174
00:26:58.740 –> 00:27:28.269
Malcolm Ryan: surprisingly, of all the choices, that was the one where people just said, no, there's no way I'm kicking the dog. There's a choice at the end about whether you want to shoot a guy, and people were much happier with that idea than they were with just kicking the dog. Although there was one person in the study who played with the counterintuitive meter and just followed it for every single choice, because when I asked him, you know, why did you steal the money, he said, because the meter says it was good,

175
00:27:28.300 –> 00:27:47.170
Malcolm Ryan: and just did the thing. So, I mean, a lot of our results are, in general, you know, very generalized, but there are some people who react very differently to this thing. Some people completely ignored it and said, no, I paid no attention to the meter at all, I just did my own thing, and some people went, yeah,

176
00:27:47.270 –> 00:27:56.989
Malcolm Ryan: I did what the meter told me; that was the whole game for me.
A Ashcraft: Do you suppose that some of that has to do with, if it's a non-intuitive meter,

177
00:27:57.090 –> 00:28:01.859
A Ashcraft: it's creating some sort of, like, you know,

178
00:28:02.120 –> 00:28:04.679
A Ashcraft: friction, like I have to resolve

179
00:28:04.800 –> 00:28:14.790
A Ashcraft: the difference between, or like the cognitive dissonance between, what I think is good and what this game is telling me is good?

180
00:28:15.490 –> 00:28:29.210
Paul Formosa: Yeah, that's definitely something we found in the next study, which we should tell you about in a second. So the first thing we found is that we also looked at reaction time, and we found that the first time they got that counterintuitive label, there was a very long reaction time, because people think, whoa, what's going on, and try to process it.

181
00:28:29.250 –> 00:28:49.180
Paul Formosa: We also had, in that second study which Malcolm will talk about in a minute, a mixed meter, which was intuitive up until the trolley problems and then flipped around. And so that was trying to create exactly what you said, trying to get at that cognitive dissonance: trying to build some trust and then flipping it around and looking at the impacts.

182
00:28:49.190 –> 00:29:09.079
Malcolm Ryan: Yeah, so like Paul said, we ran four versions: baseline, with no meter at all, just to see how people would normally play the game; the intuitive meter, which recommended everything you would normally think of as good as good; the counterintuitive meter, where everything you would normally think of as good was labeled evil; and then the mixed meter, which started off,

183
00:29:09.090 –> 00:29:20.400
Malcolm Ryan: for the first sort of five or six decisions, doing the intuitive thing: don't kick the dog, don't steal the money, everything like that. And then they got to the trolley problems, and we switched it over. And so we said, you know, what happens if we switch these?

184
00:29:20.480 –> 00:29:35.039
Paul Formosa: I should say what the rating was in the trolley problems; actually, that would probably help. So in the first, the standard trolley problem, the lever case, we said that the intuitive thing, the good thing to do, is to pull the lever. In the

185
00:29:35.090 –> 00:29:48.589
Paul Formosa: large man version, the push-off-the-bridge version, which in our case is that you have to kick a chair out from under somebody, which would hang them, most people tend to think that it's wrong to push the large guy off the bridge, or in this case kick the chair away. So the intuitive thing in that case was

186
00:29:48.720 –> 00:29:50.200
Paul Formosa: not to kill the one person.
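For readers who want the experimental setup at a glance, here is an illustrative sketch of the four meter conditions just described (baseline, intuitive, counterintuitive, mixed). The specific choice names, point values, and the exact flip point for the mixed condition are assumptions made for the example, not the study's actual parameters.

```python
# Illustrative sketch of the four meter conditions: baseline, intuitive,
# counterintuitive, and mixed. Choice names, values, and the flip point
# for the "mixed" condition are assumptions for the example.

INTUITIVE_DELTAS = {          # positive = shown as "good", negative = "evil"
    "don't kick the dog": +15,
    "kick the dog": -15,
    "return the money": +15,
    "steal the money": -15,
    "pull the lever (trolley)": +15,
    "kick the chair (footbridge)": -15,
}

TROLLEY_CHOICES = {"pull the lever (trolley)", "kick the chair (footbridge)"}

def meter_delta(choice, condition):
    """What the on-screen meter shows for this choice, or None if no meter."""
    base = INTUITIVE_DELTAS[choice]
    if condition == "baseline":
        return None                 # no meter shown at all
    if condition == "intuitive":
        return base                 # labels match common intuitions
    if condition == "counterintuitive":
        return -base                # every label flipped
    if condition == "mixed":
        # intuitive for the early choices, flipped once the dilemmas arrive
        return -base if choice in TROLLEY_CHOICES else base
    raise ValueError(f"unknown condition: {condition}")

print(meter_delta("kick the dog", "counterintuitive"))  # 15 (labelled good)
```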

187
00:29:50.590 –> 00:29:53.789
Malcolm Ryan: And the baseline results, without the meters, reflected

188
00:29:53.940 –> 00:30:12.129
Malcolm Ryan: this throughout: people would mostly do the intuitively good thing. There were still 10% of people who kicked the dog, and one of the things that people did talk about in the qualitative study was, well, it's just a game, I wanted to see what would happen, things like that. Or otherwise people would also say, it's a game,

189
00:30:12.410 –> 00:30:32.150
Malcolm Ryan: I'm going to get rewarded for being good, or I'm going to get punished for being evil. So people had expectations of how the game would respond. We designed the game to be, you know, a very linear narrative. I mean, for people playing at home: yeah, your choices really matter. But we didn't really, you know; it was only the meter that really told you anything about your choices. Sorry, you had a question.

190
00:30:32.180 –> 00:30:40.570
Shlomo Sher: Yeah, yeah. It's interesting with a setup like that, knowing that people are going to interpret the usefulness of the meter differently:

191
00:30:40.900 –> 00:30:45.350
Shlomo Sher: how do you get good results when you know that, for some people,

192
00:30:45.910 –> 00:30:55.300
Shlomo Sher: you know, some people will take the meter seriously, and other people will essentially use it as a mechanic that they think they're just supposed to follow in order to do well in the game?

193
00:30:55.310 –> 00:31:13.260
Malcolm Ryan: Yeah. So we deliberately avoided telling them anything about the meter, and none of the instructions mentioned it at all. What we really wanted to try to measure is, well, how do people respond to this at all? We're not telling you you should do the good thing. There are no game instructions at all; there's nothing in the game that says,

194
00:31:13.270 –> 00:31:20.349
Malcolm Ryan: this is what you should be doing. All there is is a bar at the top of the screen with the word "good" at one end and the word "evil" at the other end, and it moves.

195
00:31:20.540 –> 00:31:24.489
Malcolm Ryan: And so it's very minimal; people put

196
00:31:24.550 –> 00:31:36.930
Malcolm Ryan: whatever meaning they expect onto that. And some of our players would have been, you know, people who had played games before and would recognize that as a sort of thing, and that came through in the qualitative study. There were different people who

197
00:31:37.000 –> 00:31:46.179
Malcolm Ryan: had played this sort of game before and knew what this meant. But there were also people who were just responding to it as, yeah, there's a thing on the screen that's telling me that I'm good.

198
00:31:47.230 –> 00:32:04.600
Paul Formosa: You know, what does that mean? And sorry, just to add really quickly: in those big choices where the meter was affected, it did come up with the choice. It would say, you know, kick the dog: plus 15 good; don't kick the dog: minus 15, or whatever it was. So it was clear, before you made the choice, that it would affect the meter in this way.

199
00:32:04.640 –> 00:32:24.400
Malcolm Ryan: Yeah, for the sake of the experiment we wanted to be up front about that. It was something we sort of went back and forth on in the design. We figured it was necessary; a lot of games have these things hidden, but

200
00:32:24.410 –> 00:32:41.809
Malcolm Ryan: in those games you're doing the same sorts of behaviors over and over again, so you do learn what morality is attached to things, or you can go spoil yourself, you know, reading spoilers about how many karma points something is worth. We wanted to be up front about it, to let you know what the effect was going to be, because there wasn't enough game

201
00:32:41.820 –> 00:32:46.110
Malcolm Ryan: to learn to predict what the meter might be rewarding you for.

202
00:32:46.530 –> 00:33:06.070
Malcolm Ryan: The interesting thing in the baseline data was that, I mean, our expectations were validated: people mostly chose the intuitively good options for everything except for the large man, or footbridge, version of the problem, which in our game was

203
00:33:06.080 –> 00:33:12.900
Malcolm Ryan: your boss. There's a... I'm spoiling the game here, so if you want to, go play the game and come back.

204
00:33:12.910 –> 00:33:28.539
Malcolm Ryan: But in the game there's a crazy guy on the loose, killing people and setting fire to things and whatever. And there's one scene where he sets it up so your boss is standing on a chair with a noose around his neck, and

205
00:33:28.550 –> 00:33:42.039
Malcolm Ryan: he tells you there's a bomb wired to another room where other people will die, and he says, you know, if you don't kick the chair now and kill your boss, I will blow up the room and kill the three other people.

206
00:33:42.630 –> 00:34:00.609
Malcolm Ryan: And in the baseline data we got exactly 50-50: 50% of people chose to kill their boss and save three lives, and 50% of people chose to do the opposite, to say no, they wouldn't kill him, and then the three people died.

207
00:34:00.620 –> 00:34:14.039
Malcolm Ryan: And I thought that was really interesting, because we had, you know, a real moral dilemma there that didn't have an intuitively good thing to do. And this was where we could really see, well, is the meter going to affect that, is the meter going to sway people's opinions one way or another?

208
00:34:14.050 –> 00:34:21.639
Malcolm Ryan: And the data was interesting, in that it depended on the type of the meter, and it depended on the type of decision.

209
00:34:22.179 –> 00:34:23.219
Malcolm Ryan: So,

210
00:34:23.320 –> 00:34:42.130
Malcolm Ryan: for the early decisions, where there was an intuitively good choice, people largely weren't affected by the meter. Giving them a meter which said you should steal money, you should kick the dog, like Paul said, there was this reaction time where people were like, wait, what? I don't understand why that's good. But then,

211
00:34:42.139 –> 00:35:00.790
Malcolm Ryan: very quickly, people just seemed to learn to ignore the meter and say, I'm going to do the good thing anyway. I'm not going to steal money just because the meter says it's good; I'm not going to kick a dog just because the meter says it's good. We sort of validated some existing research which says, yeah, people play good regardless of what you tell them to do in most of these games.

212
00:35:00.810 –> 00:35:06.520
Malcolm Ryan: And so that was, you know, in line with previous research.

213
00:35:06.800 –> 00:35:10.629
Malcolm Ryan: But then we came to the trolley problems, and we said, well, okay,

214
00:35:10.720 –> 00:35:14.039
Malcolm Ryan: what do people do here? And again, if the meter had been telling you...

215
00:35:14.070 –> 00:35:23.160
Malcolm Ryan: Obviously it may have been telling you good things all along. Or, if the meter had been telling you evil things all along, if the meter was saying kick the dog and saying steal the money,

216
00:35:23.360 –> 00:35:30.499
Malcolm Ryan: the meter had no effect on the trolley problems; it was still 50-50 in that case. So people had just learnt that the meter

217
00:35:30.550 –> 00:35:33.420
Malcolm Ryan: is not a moral guide, I'm going to ignore it,

218
00:35:33.440 –> 00:35:37.599
Malcolm Ryan: and people seemed to make choices the same way as before.

219
00:35:37.690 –> 00:35:42.130
Malcolm Ryan: But if the meter had been telling you good things all along,

220
00:35:42.180 –> 00:35:51.900
Malcolm Ryan: then you get to that choice. And we had two versions of the meter at that point: one which had been telling you good things and said, okay, kicking the chair is good, and one which had been telling you good things and said, not kicking the chair is good.

221
00:35:52.050 –> 00:36:03.169
Malcolm Ryan: And that seemed to sway people. People seemed to go, all right, okay, so the meter has been this trustworthy guide, it's been matching my personal morality, although we don't have

222
00:36:03.360 –> 00:36:19.530
Malcolm Ryan: any way to assess whether that's actually the case, but at least it's been telling me things that are intuitively good. And at that point there was a significant difference between the people who were told, yeah, kicking the chair is the good thing, and the people who were told not kicking the chair is the good thing. So when the meter was acting that way,

223
00:36:19.590 –> 00:36:25.549
Malcolm Ryan: it didn't seem like their behavior was consistent with people just trying to optimize the meter.

224
00:36:25.590 –> 00:36:44.959
Malcolm Ryan: It seemed much more like people were taking this as a point of data in their decision-making, in their moral choice, a second opinion: here's one that's hard for me, let me look and see what the meter says. And that was consistent with some of the verbal feedback we were getting from people in

225
00:36:44.970 –> 00:36:46.259
Malcolm Ryan: the qualitative study.

226
00:36:46.390 –> 00:36:54.949
Malcolm Ryan: What I'm interested in exploring, and what we didn't think to do in this experiment, although we have done it in later experiments, is to ask them after the fact:

227
00:36:54.970 –> 00:37:08.170
Malcolm Ryan: what did you use to make your decisions? And we've asked that in a similar study we've done since, which we haven't published yet, where, rather than telling you this is good and this is evil, we've told you,

228
00:37:08.340 –> 00:37:25.410
Malcolm Ryan: most people chose this: 60% of people chose this, 60% of people chose that, a classic kind of Walking Dead-style thing, but again presented on the screen when you're making the choice. And again, and I haven't completed this, it's sort of preliminary data, but that seems to also sway people's opinion.

229
00:37:25.420 –> 00:37:41.779
Malcolm Ryan: If you say 60% of people chose to kick the chair, people will favor that choice; if you say people didn't, people favor that choice. And after that study we asked them in a survey, what was the foundation for your decision-making? What factors did you use? Was it

230
00:37:41.790 –> 00:37:57.610
Malcolm Ryan: your personal morality, you know, trying to do the good thing on the meter, trying to do the evil thing on the meter, trying to just do things for people you like in the game, trying to progress the narrative of the game? Lots of different kinds of motivations that we've sort of encountered.

231
00:37:57.730 –> 00:38:03.539
Malcolm Ryan: And by and large people said, my personal morality was the thing that drove my decisions in the game.

232
00:38:04.040 –> 00:38:14.590
Malcolm Ryan: And we asked, did you try to optimize the meter in any way? And they said, no, not at all, I was not influenced by the meter; they strongly disagreed with the idea that they were influenced by the meter,

233
00:38:14.880 –> 00:38:21.319
Malcolm Ryan: which is interesting, because clearly they were influenced by the meter. And so

234
00:38:21.370 –> 00:38:31.689
Malcolm Ryan: we're doing more variations of this study, and this is something that I want to look into a bit more deeply: how much do people think they're being influenced by these factors, versus how much are they actually being influenced?

235
00:38:31.770 –> 00:38:48.670
Malcolm Ryan: And so the next study that we want to do is an eye-tracking study. We want to see, actually, how much attention they are paying to the meter at any point in time while playing the game.

236
00:38:48.680 –> 00:39:07.740
Shlomo Sher: I mean, the bandwagon effect has got to play a role in all this. Although, you weren't changing the numbers in real time as people played, so they couldn't see themselves in it?
Malcolm Ryan: Good question. Yeah, no, no. So the data was all based on the previous study, and then adjusted appropriately

237
00:39:07.750 –> 00:39:19.370
Malcolm Ryan: to give the versions that we wanted for the decisions we wanted to test.

238
00:39:19.860 –> 00:39:22.530
Shlomo Sher: Okay, let me.

239
00:39:22.740 –> 00:39:25.389
Shlomo Sher: I want to go now to

240
00:39:25.600 –> 00:39:33.569
Shlomo Sher: making the kind of big moral judgments that we're used to in video games. Right, guys, just to make sure: are we done with the morality meter part?

241
00:39:33.860 –> 00:39:34.490
Okay.

242
00:39:34.690 –> 00:40:03.480
Shlomo Sher: So, okay, let's go back to talking about the big moral judgments that people make in video games. Right? So my moral engagement with the game here is that I get to choose whether to do X or Y. So I decide whether to save the Krogan race in Mass Effect, or risk the possibility that they'll unleash their wrath upon the galaxy later on, or I get to decide whether to euthanize my best friend in Life is Strange, as she's asking me to do, or leave her to live the rest of her life as a quadriplegic.

243
00:40:03.490 –> 00:40:14.310
Shlomo Sher: So every player is familiar with these kinds of decisions. Hard decisions like these are supposed to engage us with morality. From what you guys found, do they actually work as intended?

244
00:40:15.000 –> 00:40:20.470
Malcolm Ryan: So to go back to the moral psychology stuff we talked about at the beginning

245
00:40:20.620 –> 00:40:33.590
Malcolm Ryan: one of the problems, I think, especially with this sort of morality meter research, and also with these very big binary choices, do you do the good thing or the evil thing, or this sort of choice between things,

246
00:40:33.700 –> 00:40:45.459
Malcolm Ryan: is that when we look at how people actually make moral choices in real life, there's a model that we tend to follow, drawing from moral psychology, called the four component model,

247
00:40:45.610 –> 00:40:50.300
Malcolm Ryan: from moral psychologist James Rest and colleagues, and a lot of people following in this tradition,

248
00:40:50.370 –> 00:40:54.079
Malcolm Ryan: who say that there are really four different sorts of

249
00:40:54.520 –> 00:40:55.700
Malcolm Ryan: components,

250
00:40:55.880 –> 00:40:59.710
Malcolm Ryan: processes that are going on when we're doing ethics,

251
00:40:59.790 –> 00:41:07.890
Malcolm Ryan: and they are: moral motivation, or moral focus, which is, you know, your drive to do the moral thing in the first place;

252
00:41:07.920 –> 00:41:15.870
Malcolm Ryan: moral sensitivity, which is your ability to read a situation, see that it is a moral situation, because moral situations don't

253
00:41:16.000 –> 00:41:26.160
Malcolm Ryan: present themselves to us as, here is a moral thing, and to see what the salient factors are, and, you know, to read the world in a moral way;

254
00:41:26.200 –> 00:41:45.350
Malcolm Ryan: moral judgment, which is then making the decision, deciding what it is that you're supposed to do, using, you know, whatever sorts of factors influence your moral choice; and then moral action, which is to go out and actually put that action into effect in the world. And usually, you know, being moral is not an easy thing,

255
00:41:45.410 –> 00:41:46.889
Malcolm Ryan: and

256
00:41:46.930 –> 00:42:06.939
Malcolm Ryan: the problem with a lot of games, from this model, is that we really reduce that down to moral judgment. We sort of eliminate moral sensitivity by saying, here is a moral choice; it's heavily signposted in the game: there is a moral question here, here are the two options you can take, and you just have to sort of choose between those options.

257
00:42:06.950 –> 00:42:11.190
Malcolm Ryan: And then we also sort of remove moral action, because often it's just like, okay,

258
00:42:11.210 –> 00:42:13.529
Malcolm Ryan: make the dialogue choice that

259
00:42:13.550 –> 00:42:17.260
Malcolm Ryan: chooses that thing, and then you don’t have to put it into effect.

260
00:42:17.310 –> 00:42:30.610
Malcolm Ryan: Right, it's one click either way. Yeah, one click, I do that thing, and it's clear what I'm going to do. There's nothing hard about enacting the choice; well, the choice itself might be difficult, but once you've made the choice and clicked, it's done.

261
00:42:30.640 –> 00:42:40.570
Malcolm Ryan: And so what we're interested in is that some of these big choices do present themselves in those ways, and traditionally we've had, you know, very much this focus on:

262
00:42:40.670 –> 00:42:43.370
Malcolm Ryan: here is an option, what do you do?

263
00:42:44.670 –> 00:42:59.039
Malcolm Ryan: I'm interested in thinking about this as design advice, looking into those other kinds of moral components that players are bringing to bear, and thinking about how to design for those. And so, thinking about how to design for moral sensitivity:

264
00:42:59.080 –> 00:43:09.370
Malcolm Ryan: how to let the player sort of read the morally relevant things in the world themselves, rather than handing them the moral problem, which some of those bigger choices do. I mean, Life is Strange doesn't;

265
00:43:09.760 –> 00:43:16.579
Malcolm Ryan: you know, it's more in the narrative, about, well, what are the moral questions here. It doesn't hand you, like,

266
00:43:16.620 –> 00:43:33.490
Malcolm Ryan: the way you often have two people saying, you should do this, and, you should do that, and you have to choose. You know, when you have a rich narrative and you have more going on, people read in lots of different factors. We found this in our game: one of the last decisions you make in the game is choosing whether or not to kill the bad guy,

267
00:43:33.510 –> 00:43:36.890
Malcolm Ryan: or whether to hand him over to the police.

268
00:43:36.920 –> 00:43:52.090
Malcolm Ryan: And people would report... The name of the town was Mayhem. One player said, just from the very first introductory text, which presents the name of the town as Mayhem, I didn't think

269
00:43:52.100 –> 00:43:59.449
Malcolm Ryan: that in a town called Mayhem the police would be trustworthy, and so I thought I should take the law into my own hands. And so they read

270
00:43:59.480 –> 00:44:14.019
Malcolm Ryan: those other elements of the narrative into that moral choice. And so we're very capable of taking in much bigger and richer concerns than, as designers, we often feel like we need to lay out: here

271
00:44:14.320 –> 00:44:19.829
Malcolm Ryan: are the arguments for the one side, here are the arguments for the other side, now choose.

272
00:44:20.140 –> 00:44:26.730
Malcolm Ryan: And the sort of lens of moral sensitivity suggests, well, look, you know, give the player...

273
00:44:26.790 –> 00:44:45.909
Malcolm Ryan: respect the player's moral sensitivity, respect the fact that they can read the details of a larger situation and make up their own mind, and bring to bear factors that we as designers may never even have considered a part of that decision. And I think big narrative games like Life is Strange really do that, in that it's not just,

274
00:44:45.920 –> 00:44:51.660
Malcolm Ryan: here is a moral choice. It's not your Mass Effect, where you walk up to a couple of strangers who ask you to,

275
00:44:51.680 –> 00:45:06.970
Malcolm Ryan: solve this moral problem for us, here it is. This is a person you've been living with and, you know, being friends with for hours of play already; you have this established relationship, and there's a lot going on in this one decision that you're making at this point in time.

276
00:45:07.410 –> 00:45:28.930
Paul Formosa: I think it's useful... oh, sorry. I just think it's useful, then, to break it back down into those different components. Right? So Life is Strange, for example: moral action is still very easy, just click this, equip that. But things like moral focus, well, there are lots of reasons why you might care about morality, there are lots of competing interests; you've got your relationships and all the rich narrative that's been going on.

277
00:45:28.940 –> 00:45:38.939
Paul Formosa: And sensitivity likewise. You know, it doesn't just sort of flag, this is a moral issue or not a moral issue; it's usually your relationship with the other characters, what's a moral issue and not a moral issue, how does that relate?

278
00:45:39.020 –> 00:45:57.480
Paul Formosa: So you can think about it like this: sensitivity could be quite difficult, what exactly is morally relevant here? That's going to be quite challenging. Moral focus: you know, you've got personal relationships, and how does that work with morality? Once again, that can make moral focus quite difficult. Judgment is obviously a difficult thing to do. But then moral action is pretty easy: you just click this button or that button.

279
00:45:57.490 –> 00:46:11.960
Paul Formosa: So you can think about how different ways of designing games can make some of those components easier or harder, or engage those skills in different sorts of ways. And it's not like every game has to do all those things; it's fine that moral action is easy in this game, but these other ones might be hard.

280
00:46:11.970 –> 00:46:29.360
Paul Formosa: So it's just a matter of thinking through: how does a game engage with these different aspects, and how easy or hard does the designer want engaging with those to be? But I guess the important thing is thinking more broadly than, here's something morally relevant, I want to make a judgment; that's just one thing you can do, but there are always other things too.

281
00:46:29.790 –> 00:46:41.570
A Ashcraft: In the follow-up questions, when they were giving you verbal feedback, did anybody talk about making decisions because they thought it would make the game go longer?

282
00:46:43.270 –> 00:46:54.060
Malcolm Ryan: Hmm, no, I don't remember that. There were definitely people who were thinking it would influence the future of the game. But there is certainly...

283
00:46:54.620 –> 00:47:06.940
Malcolm Ryan: Yeah, there's some idea of this. In fact, this is a study I was recently pondering with a student: what if we made a study where you made a sequence of moral decisions, but you also had

284
00:47:07.030 –> 00:47:09.590
Malcolm Ryan: a combat scene in between them, and

285
00:47:09.730 –> 00:47:21.180
Malcolm Ryan: there was a factor in the moral decision which made you more likely to win or lose the combat? And if you lost the combat you lost the game, and the game would stop, because that's the punishment in most games, ultimately:

286
00:47:21.330 –> 00:47:27.410
Malcolm Ryan: stopping you from playing the game. You know, the reward is there is more game, and the punishment is there isn't: you lost.

287
00:47:27.450 –> 00:47:37.890
Malcolm Ryan: I think about it the other way; I think about it as a reward. Yeah, the best reward you can give a player is more game. Yeah.

288
00:47:38.140 –> 00:47:45.270
Malcolm Ryan: Yeah. And so I would be very interested in doing that study of saying, okay, here is a moral choice, but there is also a pragmatic cost:

289
00:47:45.800 –> 00:48:04.510
Malcolm Ryan: Do you do the thing that carries a risk? Maybe every choice is a case of: do the good thing, but take a 5% chance of losing the next combat, or a 10% chance, or whatever it is. How willing are you to weigh up that chance of losing the game against doing the moral thing?

290
00:48:04.640 –> 00:48:21.339
Malcolm Ryan: Nobody really expressed that in the study that we looked at, but it's certainly a factor that's unique to making moral choices in games versus making moral choices in real life. Although I guess many of our moral choices in real life also affect our ability to continue

291
00:48:21.380 –> 00:48:28.460
Malcolm Ryan: in the relationship, or continue in the job, or continue in whatever the "game" is that we're playing that the moral choice comes up in.

292
00:48:28.760 –> 00:48:33.199
Malcolm Ryan: But there is this very specific thing in a game where

293
00:48:33.650 –> 00:48:35.199
Malcolm Ryan: You feel the need to

294
00:48:35.300 –> 00:48:48.089
Malcolm Ryan: play more game; the objective of the game is to keep playing it and to finish it in some sense. We haven't examined that, and it hasn't come up in a study so far, but it would certainly be an interesting

295
00:48:48.210 –> 00:48:49.839
Malcolm Ryan: factor to examine.

296
00:48:49.950 –> 00:48:52.629
Malcolm Ryan: and something that I would love to run an experiment on.
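
A minimal sketch of the experiment Malcolm is imagining, where each "good" option carries a stated chance of losing the next combat, and losing ends the run. The choice wording, the percentages, and the overall structure are assumptions for illustration, not the actual study design.

```python
# Sketch of the proposed study mechanic: each moral choice offers a
# "good" option with a stated risk of losing the next combat, which
# ends the game. Probabilities and wording are illustrative only.
import random

def present_choice(good_label: str, selfish_label: str, risk: float) -> bool:
    """Return True if the participant picks the good (risky) option."""
    answer = input(f"[1] {good_label} (+{int(risk * 100)}% chance of losing "
                   f"the next fight)\n[2] {selfish_label}\n> ")
    return answer.strip() == "1"

def run_trial(risk_per_good_choice: float = 0.05) -> None:
    lose_chance = 0.0
    choices = [("Share your rations", "Keep them for yourself"),
               ("Tell the truth", "Lie to protect yourself")]
    for good, selfish in choices:
        if present_choice(good, selfish, risk_per_good_choice):
            lose_chance += risk_per_good_choice  # morality costs you odds
        # Combat between decisions: losing ends the run (the real "punishment").
        if random.random() < lose_chance:
            print("You lost the fight. Game over - no more game for you.")
            return
    print("You survived to the end of the scenario.")

if __name__ == "__main__":
    run_trial()
```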

297
00:48:52.720 –> 00:49:12.219
Paul Formosa: Sorry, I was just going to say we did get some comments close to that. Some people talked about choosing what would make the story more interesting. At the beginning, with stealing the old guy's money, they thought it would be kind of boring if they didn't steal it. So some people picked it because they thought it would be narratively more interesting.

298
00:49:12.230 –> 00:49:21.619
A Ashcraft: So it wasn't always about how you could lengthen it, but how you could make it more interesting. Right, and if I didn't know how long the game was, I might try to play the meter in

299
00:49:21.700 –> 00:49:34.280
A Ashcraft: such a way that the game extended. Like, I'm going to try to keep it in the middle, right? Because the game might end if I push it to one end or the other.

300
00:49:34.790 –> 00:50:03.639
Malcolm Ryan: There certainly were people interested in that. There were people who, in the early choices, did something immoral just to try it. And then we did a focus group discussion, so there were two or three people talking, and they were so embarrassed by making that choice. They were the people who reported those things, but they said, "I didn't realize where it was going. I thought it was just a game. I thought I could just steal this money and it would be fine."

301
00:50:03.650 –> 00:50:16.259
Malcolm Ryan: And the guy you steal the money from comes back later in the game and actually helps you survive, and things like that. And they said, "Really, if I'd known what kind of game it was, where it was going, I would never have made that choice."

302
00:50:16.270 –> 00:50:31.730
Malcolm Ryan: So there's a point at which it turned for them from being "just a game", something where I can do what I like and explore things, to "this is a story that I'm invested in, and I care about the characters". I think, as a designer, that's sort of where it

303
00:50:31.850 –> 00:50:45.949
Malcolm Ryan: comes back to that question of moral focus: how do we, as designers, get people to engage? It's very tempting in a game to just game it, right? Most of our games are amoral spaces where we don't ask the player to even think about the morality of what they're doing, and that's fine.

304
00:50:46.050 –> 00:50:54.739
Malcolm Ryan: Games are meant to be sandboxes where we can mess around, but we also want in some games to say, no. This is a game where your morality matters.

305
00:50:55.170 –> 00:51:13.299
Malcolm Ryan: And so that's a moral focus question: how do we get you into the game in a way that says you need to be thinking about the morality of your choices in this game? Whereas, you know, in your average shooter you're not going to go, "Oh my God, should I shoot this person or not?" Games generally don't bring that in.

306
00:51:13.780 –> 00:51:31.410
Shlomo Sher: You know, I really like the idea of focusing more on moral sensitivity, right? Because there are tons of choices that lead to ethical reflection, and some where you can put the controller down and think about it for half an hour before deciding. But it's all supposed to be about reasons.

307
00:51:31.420 –> 00:51:42.339
Shlomo Sher: And I'm curious. I'm trying to remember Life Is Strange, since we were talking about that. I think there is a scenario where, let's say, there's a kid in the quad who seems, you know, lonely,

308
00:51:42.480 –> 00:51:59.159
Shlomo Sher: and you have the opportunity, though the game isn't going to prompt you, to, let's say, come up and talk to that fellow student. Right? That's an opportunity that might test your moral sensitivity, or it might just get you to be the kind of player that checks out every option,

309
00:51:59.170 –> 00:52:16.289
Shlomo Sher: right? But I'm curious whether you guys know some games that you think were well designed to explore the moral sensitivity that players might have for a situation. And if you do, is the idea then to,

310
00:52:16.300 –> 00:52:29.370
Shlomo Sher: because I can see a situation where you encounter characters, and things will happen in your interaction that they think are morally important, but your character might not have realized it. And later on they could essentially, in conversation,

311
00:52:29.380 –> 00:52:43.339
Shlomo Sher: come back to you, perhaps upset, perhaps happy, because of how things morally turned out according to how they saw it, while you might not have had any idea. I'm just curious if you guys have any ideas of how to do the moral sensitivity part well.

312
00:52:43.380 –> 00:52:46.569
Malcolm Ryan: So, well, I mean,

313
00:52:46.620 –> 00:53:04.850
Malcolm Ryan: something that I find very interesting is that most of our games do signpost "here is a moral choice". We've been thinking a little bit about how you step back: how do you design a game that doesn't tell the player they're making moral choices?

314
00:53:04.860 –> 00:53:22.669
Malcolm Ryan: And part of that comes back to making the game more systemic and more generic in the choices that you're making, so that it's not one scripted choice where "this is the moral choice, and these are the options". Instead, here is a system with a number of choices that you're making along the way, and

315
00:53:23.040 –> 00:53:32.450
Malcolm Ryan: there is a moral arc to that gameplay, but there isn't necessarily a moment where you're making the moral choice. You're making a bunch of choices, and

316
00:53:32.600 –> 00:53:34.280
Malcolm Ryan: they have moral impact. And

317
00:53:34.590 –> 00:53:35.229
Malcolm Ryan: And

318
00:53:35.330 –> 00:53:53.590
Malcolm Ryan: so we actually did an investigation looking at Papers, Please in this regard, as a more systemic, systems-driven game where there is ethics to what you're doing in the game. Well, there are some scripted moments where you're clearly making a scripted choice, but there's also just an overall

319
00:53:53.710 –> 00:53:59.819
Malcolm Ryan: moral arc to what you're doing, and whether or not you feel like you came away as a good person in that game.

320
00:54:00.310 –> 00:54:01.229
Malcolm Ryan: And

321
00:54:01.280 –> 00:54:10.070
Malcolm Ryan: And so this is a very interesting, different way of approaching morality in games: rather than saying "here is a moral choice", it's "here is

322
00:54:10.600 –> 00:54:28.989
Malcolm Ryan: a system which has moral consequences". Similar games: Frostpunk does that well, I think. Again, it's more of a management sim that you're playing there, but overall the choices you make have moral impact, and there are ways of reading that into it. And those

323
00:54:29.280 –> 00:54:38.420
Malcolm Ryan: sorts of games then require more of you as a player, to bring moral focus to the game, because it's not in your face saying: here is a moral choice,

324
00:54:38.600 –> 00:54:43.460
Malcolm Ryan: do you euthanize your friend? It's: here is a society you have to run.

325
00:54:43.520 –> 00:54:45.409
Malcolm Ryan: There are competing interests

326
00:54:45.520 –> 00:54:47.720
Malcolm Ryan: At the end of the day, are you a good person?

327
00:54:47.820 –> 00:54:50.450
Malcolm Ryan: And also

328
00:54:50.460 –> 00:55:08.399
Malcolm Ryan: it also challenges you on moral action, because it says: yeah, you want to do the right thing, you want to do the good thing. In Papers, Please you want to let through the people you want to be nice to, but you have to actually be good at the game to do that. You can't just go, "I'm going to do the good thing and get good points." It's like,

329
00:55:08.410 –> 00:55:13.870
Malcolm Ryan: "I really want to do the good thing, but I'm so bad at this game that I'm on the verge of losing, and I can't.

330
00:55:14.220 –> 00:55:23.319
Malcolm Ryan: My child is sick, and whatever, and I need to actually get better at the game in order to be allowed to exercise my morality within the game."

331
00:55:23.400 –> 00:55:24.970
Malcolm Ryan: So I think

332
00:55:25.040 –> 00:55:41.990
A Ashcraft: It seems like you would want to have low-stakes decisions early on, just to get people used to the kinds of decisions that they're making, so they can explore the kinds of agency they have in the game and the kinds of decisions that are going to come.

333
00:55:42.000 –> 00:55:56.459
A Ashcraft: I think about the Black Mirror episode Bandersnatch, the choose-your-own-adventure kind of thing. I don't know if you all have seen it; it's great. The first two choices are utterly meaningless,

334
00:55:57.090 –> 00:56:08.229
A Ashcraft: but they get you used to: oh, this is how I make choices, because this is a different medium, and this is what I can do, this is how it works.

335
00:56:08.350 –> 00:56:12.780
A Ashcraft: And so you make a couple of choices that are not meaningful at all

336
00:56:12.820 –> 00:56:16.970
A Ashcraft: before it gets into anything that's meatier than that.

337
00:56:17.410 –> 00:56:22.309
Paul Formosa: Yeah, I think that's a really important point, and it comes back to something we talked about at the start around moral expertise.

338
00:56:22.450 –> 00:56:41.239
Paul Formosa: Too often we don't treat morality as something that can be difficult, hard or easy; we just throw people in. But think about how a game always starts with some tutorials on how to use the systems, and the opponents start off easy and get harder because you're supposed to progress and get better. We often don't think about morality that way in games; we just throw people in. But

339
00:56:41.250 –> 00:57:01.100
Paul Formosa: what this sort of thing suggests, and something we've written about as well, is that once you think about morality as an expertise or a skill, you can start off with simpler types of dilemmas where there's an obvious answer and you get some kind of feedback, and then ramp up the morality in the same way you might ramp up

340
00:57:01.190 –> 00:57:07.849
Paul Formosa: the toughness of bosses, or something like that. So yeah, I think that's definitely right. It's about thinking through

341
00:57:07.880 –> 00:57:18.240
Paul Formosa: all the different aspects. You might ramp up sensitivity, or action, but do that gradually, so that the player gets experience and gets better at it.
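
A toy sketch of the "ramp morality like you ramp bosses" idea Paul describes: dilemmas ordered by how much ambiguity they introduce, served in the way tutorial-to-boss difficulty usually is. The dilemma list, the ambiguity scores, and the gating rule are invented for illustration.

```python
# Toy illustration of ramping moral difficulty the way games ramp combat:
# start with dilemmas that have an obvious answer and clear feedback,
# then add ambiguity. The dilemmas and the ambiguity scores are invented.
from dataclasses import dataclass

@dataclass
class Dilemma:
    description: str
    ambiguity: int  # 0 = tutorial-obvious, higher = more genuinely contested

CURRICULUM = sorted([
    Dilemma("Return a dropped wallet to its owner", 0),
    Dilemma("Cover for a friend who was late to work", 1),
    Dilemma("Report a colleague whose shortcut endangers others", 2),
    Dilemma("Choose between loyalty to a friend and an impartial rule", 3),
], key=lambda d: d.ambiguity)

def next_dilemma(player_level: int) -> Dilemma:
    """Serve the hardest dilemma the player has 'unlocked', like boss gating."""
    unlocked = [d for d in CURRICULUM if d.ambiguity <= player_level]
    return unlocked[-1]

print(next_dilemma(player_level=1).description)
```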

342
00:57:18.260 –> 00:57:37.649
Paul Formosa: And once again, Papers, Please is kind of good like that, in that there are all these little decisions you have to make all the time, and it's not obvious. It's really exploring this idea of the banality of evil: how just having standard motives, like "I want to do my job and provide for my family", can lead you to be involved in evil actions.

343
00:57:37.660 –> 00:57:40.390
Paul Formosa: And that's why I think the sensitivity is

344
00:57:40.580 –> 00:57:42.570
Paul Formosa: intentionally difficult in that game, because

345
00:57:42.600 –> 00:57:50.589
Paul Formosa: you've got to feed your family, it's cold at night, you've got little money, so you want to focus on that. But then there are people pleading for their lives coming through.

346
00:57:50.660 –> 00:57:53.219
Paul Formosa: So you've kind of got to juggle that, and

347
00:57:53.460 –> 00:58:11.090
Paul Formosa: it's so busy sometimes. You've got to be very quick, and the game makes you focus on processing the passports, so it's almost pushing you to just ignore the dialogue and quickly process, process. But of course then you're missing the moral point as well. So in that sense it does a really good job. Another

348
00:58:11.100 –> 00:58:15.129
Paul Formosa: one I quite like, more about moral action, is in The Walking Dead, the first one.

349
00:58:15.310 –> 00:58:22.149
Paul Formosa: A lot of the choices are just simple, do this or do that, but there's this really nice one where Lee is trying to stop, I think, Kenny.

350
00:58:22.170 –> 00:58:33.439
Paul Formosa: He's driving the train, he's really upset; I think his son has just got infected, or something like that. And the dialogue choice is not just "get him to stop without violence" or "use violence". It's that you have to make about eight or nine

351
00:58:33.490 –> 00:58:36.299
Paul Formosa: dialogue decisions in a row,

352
00:58:36.370 –> 00:58:44.589
Paul Formosa: and they all require quite a degree of emotional intelligence: what's going to anger him, what's going to calm him down, what's going to frustrate him, what's not?

353
00:58:44.640 –> 00:58:55.190
Paul Formosa: And, like I said, you've got to make however many choices in a row, and only when you've done that will you be able to talk him down without violence. So instead of just selecting "talk him down without violence", you've actually got to have

354
00:58:55.200 –> 00:59:08.690
Paul Formosa: moral skill, in this case moral action skills around emotional sensitivity and things like that. We can imagine other things, like communication skills or leadership skills, being needed to actually achieve a moral goal rather than just clicking a box. So I think that's another nice example.

355
00:59:08.700 –> 00:59:23.110
Shlomo Sher: It's interesting how much of this is engaging in personal relationships with people and thinking in terms of personal relationships, instead of these kinds of trolley-problem examples, which are really abstract, or really about

356
00:59:23.510 –> 00:59:25.390
Shlomo Sher: dealing with,

357
00:59:25.560 –> 00:59:32.920
Shlomo Sher: you know, people that you don't have, let's say, relationships with, right? Because digging into these relationships kind of

358
00:59:33.700 –> 00:59:43.930
Shlomo Sher: connects with seeing a person from a variety of dimensions, hopefully. And I'm curious how things like that stack up against interpersonal morality,

359
00:59:43.940 –> 01:00:02.079
Shlomo Sher: sorry, against impartial morality. Right? So you have this interpersonal morality, let's say with Kenny in the first Walking Dead. He's your friend: how should you treat your friend at a point where you think he's not doing the right thing? And this is a nice thing, I think, in another dilemma in that first one:

360
01:00:02.390 –> 01:00:17.499
Shlomo Sher: you have loyalty to him as a friend, and you're aware that as a friend you have certain obligations to him, right? And you're aware that he's very upset, but he also wants to do something that seems,

361
01:00:17.510 –> 01:00:22.700
Shlomo Sher: from the point of view of impartial morality, and I don't remember exactly what it is, unjust in some way,

362
01:00:22.730 –> 01:00:27.500
Shlomo Sher: right. And I thought that contrast was really nice.

363
01:00:27.640 –> 01:00:36.409
Shlomo Sher: But again, in that contrast you need to make a moral judgment, but it's not just a moral judgment. It's also: how do you manage the moral expectations of your friends,

364
01:00:36.560 –> 01:00:41.479
Shlomo Sher: right? And managing moral expectations, I think, was the more interesting part.

365
01:00:42.600 –> 01:00:55.790
Paul Formosa: Yeah, and that pulls on a bunch of different skills. One, you might think, is moral focus: am I to prioritize my friendship over morality? But there's also moral sensitivity: are there some

366
01:00:56.170 –> 01:01:14.980
Paul Formosa: other kinds of moral dimensions to my relationship with this other person? It's not clearly "this is moral, this is not moral". So what are the morally salient issues? Is the fact that I'm a friend, that I have this deep relationship with him, morally salient? And how does that weigh up against other morally salient features? And I think this actually is

367
01:01:15.020 –> 01:01:21.270
Paul Formosa: a really good feature of games. We talked about this idea of games as a sandbox, but we can also think of them as a way to explore

368
01:01:21.300 –> 01:01:25.980
Paul Formosa: ethics, a way to explore morality. And that's one of those cases where there are

369
01:01:26.270 –> 01:01:37.389
Paul Formosa: genuinely different, competing ethical considerations. And so it's about what you prioritize and what you're sensitive to, as well as the judgment, and implementing it as well.

370
01:01:37.410 –> 01:01:44.500
Paul Formosa: And I think it's nice that we can explore those, and again the game doesn't have to tell you which one of those is the wrong one or not moral; it can be partly up to you.

371
01:01:44.690 –> 01:01:51.889
Paul Formosa: But I just want to quickly touch on another thing you mentioned, the interpersonal relations. I think that when we think about morality day to day,

372
01:01:52.150 –> 01:02:08.990
Paul Formosa: it's pretty much usually interpersonal stuff. That's the bread and butter of morality for us; we're not all facing these big life-and-death decisions all the time. Maybe some people face them some of the time, but they're not the sort of everyday moral decisions we have to face. And so I think

373
01:02:09.000 –> 01:02:15.389
Paul Formosa: getting those into games as well, thinking about how to put them in games and letting people explore them, is also an

374
01:02:15.420 –> 01:02:17.780
Paul Formosa: important aspect too.

375
01:02:18.420 –> 01:02:20.899
Malcolm Ryan: So there’s another

376
01:02:21.790 –> 01:02:32.990
Malcolm Ryan: another theory in moral psychology, moral foundations theory, which is based in the idea of intuitive morality, and it says there are

377
01:02:33.000 –> 01:02:48.499
Malcolm Ryan: five, maybe six, different sorts of moral priorities that different people rate at different levels of importance. As far as I can remember them, there are concerns about justice and fairness, concerns about care versus harm,

378
01:02:48.520 –> 01:03:08.349
Malcolm Ryan: concerns about purity, authority, loyalty to your in-group, and liberty and freedom. And there's evidence that different groups in different societies will put priority on some of those over others.

379
01:03:08.360 –> 01:03:10.370
In Western society,

380
01:03:10.840 –> 01:03:20.640
Malcolm Ryan: more liberal people put priority on fairness, care, and liberty, and more conservative people put relatively more priority on authority and

381
01:03:20.930 –> 01:03:25.589
Malcolm Ryan: loyalty and purity, which is interesting. I think

382
01:03:25.850 –> 01:03:37.630
Malcolm Ryan: there's some evidence that people do respond in games in line with these sorts of foundations. And there are measures to say: what are your foundations, what are your priorities?

383
01:03:38.020 –> 01:03:47.489
Malcolm Ryan: And you can look at how people respond if you present these kinds of choices in games. But I also think they make really interesting material for game designers, to say, okay, I want to design a choice.

384
01:03:47.610 –> 01:03:48.569
Malcolm Ryan: We lean

385
01:03:48.600 –> 01:04:05.949
Malcolm Ryan: heavily on the trolley problem a lot of the time, these utilitarian versus sort of deontological choices. But there are other kinds of dilemmas that challenge us in more interesting ways, more natural kinds of dilemmas. We very rarely have these "how many people do I save" kinds of choices in real life,

386
01:04:06.040 –> 01:04:15.509
Malcolm Ryan: but we do often have choices between "do I do the caring thing, or do I do the fair thing?" As a teacher I encounter this all the time. There's a student

387
01:04:15.520 –> 01:04:28.599
Malcolm Ryan: who, for whatever reason, hasn't done their assignment and comes to you with some sob story, and I'm faced with a moral dilemma. Do I care for this person? The caring thing to do right now would be to say,

388
01:04:28.650 –> 01:04:32.500
Malcolm Ryan: "that doesn't matter, just do the work and I'll give you the mark."

389
01:04:32.830 –> 01:04:40.519
Malcolm Ryan: The fair thing is to treat all students equally, and there are other students who didn’t come to me with those problems, and they just

390
01:04:40.550 –> 01:04:43.589
Malcolm Ryan: submitted the best work that they could within the deadline.

391
01:04:43.800 –> 01:04:58.549
Malcolm Ryan: and the fair thing would be to treat them all equally and say to this student: no, I can't do that. And so, I think, as designers, these moral foundations are a really good way of thinking about much more natural, everyday kinds of decisions.

392
01:04:58.770 –> 01:05:18.400
Malcolm Ryan: Do I do the thing that looks after my friends, or do I obey the person who's in authority over me? Do I do the thing that's going to make me feel uncomfortable, or do I do the caring thing, whatever the circumstance is?

393
01:05:19.590 –> 01:05:28.910
Malcolm Ryan: And so I think that gives us much more material as designers than these sorts of big utilitarian versus deontological philosophical moral choices.

394
01:05:29.730 –> 01:05:32.649
A Ashcraft: Though there are,

395
01:05:32.670 –> 01:05:43.249
A Ashcraft: I believe, these online "take this quiz and get your moral foundation ratings" kinds of things, right? Are they of any value?

396
01:05:43.330 –> 01:05:59.399
Malcolm Ryan: Well, I mean, they are. The big one is actually made by the researchers who do this work, the Moral Foundations Questionnaire, which you can take online. And that is actually one of the most validated:

397
01:05:59.450 –> 01:06:12.480
Malcolm Ryan: there's more data validating that as a survey than basically any other psychological instrument I've investigated in this field. They have massive worldwide studies of it. And so,

398
01:06:12.560 –> 01:06:22.869
Malcolm Ryan: yeah, like any of these things, it's very much based on self-report. It asks which of these things you prioritize, and so it is

399
01:06:22.910 –> 01:06:28.249
Malcolm Ryan: to some degree representative of your self-image of yourself as a moral person, and maybe not more than that.

400
01:06:28.480 –> 01:06:35.280
Malcolm Ryan: One of the interesting questions there is: does your behavior actually reflect what you say on the questionnaire?

401
01:06:35.340 –> 01:06:44.849
Shlomo Sher: I'm curious if you could take, you know... I was a huge fan of Ultima IV when I was in middle school,

402
01:06:44.970 –> 01:07:01.980
Shlomo Sher: and I was 13 when I played it. And I'm thinking, because in Ultima IV you're engaging in actions that are then correlated to virtues, and you get points, and there are morality systems for the virtues.

403
01:07:01.990 –> 01:07:20.240
Shlomo Sher: It would be interesting if someone could do something like that, but instead of these morality meters, you're shifted to some sort of... because at the beginning of Ultima IV you're asked these small dilemmas that create what your persona is in the game, where your moral meter starts.

404
01:07:20.250 –> 01:07:39.299
Shlomo Sher: I wonder if something like that could be done in a game that incorporates these core values in some sort of test that sets up your character in the game in accordance with whatever value system you have. I've been dying to see anything like Ultima IV for years,

405
01:07:39.390 –> 01:07:41.950
Shlomo Sher: just because it's such a personal favorite.
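
An illustration of the Ultima IV-style setup Shlomo is describing, but keyed to the moral foundations Malcolm listed: each opening dilemma answer nudges one foundation, and the resulting profile seeds the character. The questions, foundation keys, and weights are invented for the sketch; this is not a description of any existing game.

```python
# Sketch of an Ultima IV-style opening quiz, except the answers seed a
# moral-foundations profile instead of a single virtue. The questions,
# foundation keys, and weights are invented for illustration.
FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "purity", "liberty"]

OPENING_DILEMMAS = [
    ("A friend broke the rules to help a stranger. Do you report them?",
     {"y": "authority", "n": "loyalty"}),
    ("Rations are short. Split them evenly rather than favour the weakest?",
     {"y": "fairness", "n": "care"}),
]

def seed_profile() -> dict[str, int]:
    """Ask the opening dilemmas and return the starting moral profile."""
    profile = {f: 0 for f in FOUNDATIONS}
    for question, mapping in OPENING_DILEMMAS:
        answer = ""
        while answer not in mapping:
            answer = input(f"{question} (y/n) ").strip().lower()
        profile[mapping[answer]] += 1  # each answer nudges one foundation
    return profile

if __name__ == "__main__":
    start = seed_profile()
    print("Your character starts with this moral profile:", start)
```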

406
01:07:42.360 –> 01:07:49.019
Malcolm Ryan: This relates back to the morality meter research. We talked earlier about the idea of reputation as well, as being

407
01:07:49.040 –> 01:07:53.890
Malcolm Ryan: sort of competing meters, or different moral standards in the world,

408
01:07:53.930 –> 01:08:01.539
Malcolm Ryan: and a study that I again want to do when I get time. But we have to make a new game, and making a game takes forever.

409
01:08:01.550 –> 01:08:19.889
Malcolm Ryan: But I want to make a game which pits these different moral foundations against each other, and presents you with, well, maybe just another meter, but a meter that has more axes, that shows you: you're sitting here in the moral world at the moment, and this choice is going to give you points in

410
01:08:20.380 –> 01:08:22.670
Malcolm Ryan: justice, and this choice is going to give you points in...

411
01:08:22.910 –> 01:08:24.699
Malcolm Ryan: Okay.

412
01:08:24.740 –> 01:08:43.929
Malcolm Ryan: And the idea there is that I'm interested in exploring moral role play. Sometimes we go, "I'm going to play a good character" or "I'm going to play an evil character"; we do settle into a game saying "I'm going to play evil", and that's moral role play. It's not reflecting my values; it's "I want to be a villain."

413
01:08:43.939 –> 01:08:58.029
Malcolm Ryan: But I'm interested in games which give you a richer set: okay, I'm going to role-play a person who's really caring and really kind, or I'm going to role-play a person who really cares about justice, and that's the

414
01:08:58.080 –> 01:09:13.529
Malcolm Ryan: primary driver. And maybe it's less a reflection of my personal morality and more a choice of a character that I want to play in the game, using these meters as a guide for moral role play rather than as a measurement of my morality. Excuse me.
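
A minimal sketch of the multi-axis meter Malcolm describes: each choice carries deltas on several foundation axes, and the same running profile can double as a role-play guide by measuring how far the player has drifted from the persona they chose. The axis names, deltas, and distance measure are assumptions for illustration, not a design he has published.

```python
# Minimal sketch of a morality meter with one axis per moral foundation.
# Each choice applies deltas to several axes; comparing the running
# profile to a chosen "persona" turns the meter into a role-play guide.
# Axis names, deltas, and the distance measure are illustrative only.
AXES = ["care", "fairness", "loyalty", "authority", "purity", "liberty"]

def apply_choice(profile: dict[str, int], deltas: dict[str, int]) -> dict[str, int]:
    """Return a new profile with the choice's axis deltas applied."""
    return {axis: profile[axis] + deltas.get(axis, 0) for axis in AXES}

def distance_from_persona(profile: dict[str, int], persona: dict[str, int]) -> int:
    """How far the player's play has drifted from the role they chose."""
    return sum(abs(profile[a] - persona.get(a, 0)) for a in AXES)

profile = {a: 0 for a in AXES}
# "Turn your friend in to the magistrate": points in authority, at a cost to loyalty.
profile = apply_choice(profile, {"authority": +2, "loyalty": -1})
# Persona the player said they'd role-play: someone who values justice above all.
justice_persona = {"fairness": 3, "authority": 1}
print(distance_from_persona(profile, justice_persona))
```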

415
01:09:15.140 –> 01:09:19.719
A Ashcraft: I can say, as a big role player, I have done a lot of that.

416
01:09:19.800 –> 01:09:20.540
Malcolm Ryan: Yeah.

417
01:09:20.790 –> 01:09:38.090
Malcolm Ryan: Yeah, absolutely. I mean, even D&D, with its lawful-chaotic axis, kind of gives you different approaches to morality. You say "I'm going to be good": I can be lawful good, abiding by the rules, or I can be chaotic good, ignoring the rules and just doing whatever matters.

418
01:09:38.189 –> 01:09:39.880
Malcolm Ryan: And I think

419
01:09:39.899 –> 01:09:46.679
Malcolm Ryan: giving the player more room for that kind of moral role play, I think, will lead to more interesting and richer games, in a sense.

420
01:09:46.720 –> 01:10:03.060
Malcolm Ryan: There's a game by an Australian studio coming out this year, called Broken Roads, which I'm really keen to see, because it sort of expands this idea of the morality meter to represent different moral philosophies in your meter.

421
01:10:03.340 –> 01:10:10.699
Malcolm Ryan: I'm very excited to see it. We've had a bit of a chat with the developers about what they're doing, and

422
01:10:11.240 –> 01:10:17.580
Malcolm Ryan: yeah, I'm keen to play that game. Maybe my next research paper will be playing the game and reporting back.

423
01:10:17.690 –> 01:10:33.059
Shlomo Sher: Cool. Hey, guys, I hate to say it, because I have a bunch of other questions, but I'm looking at the clock and we've got about 13 minutes. So I want to ask you guys the last two questions, and then I really have to get out of here. The first one is really:

424
01:10:33.300 –> 01:10:34.349
Shlomo Sher: there’s a lot of

425
01:10:34.380 –> 01:10:38.679
Shlomo Sher: places to go here. I mean, I feel like,

426
01:10:38.940 –> 01:10:49.899
Shlomo Sher: the way you're talking about it, games mostly explore one out of the four dimensions of morality. And, you know,

427
01:10:50.270 –> 01:10:58.040
Shlomo Sher: what are some of the implications, if you're saying "let's look at the other three", for game designers, players, anyone interested in ethics?

428
01:11:00.350 –> 01:11:15.959
Malcolm Ryan: So, yeah, we wrote a paper. Jesse Schell wrote the Book of Lenses for game design, where he had this concept of just presenting the questions to ask yourself as a designer, not answers about how to do the design. They're very much posed as,

429
01:11:16.060 –> 01:11:17.360
Malcolm Ryan: you know, these sorts of prompts.

430
01:11:17.570 –> 01:11:26.789
Malcolm Ryan: And we wanted to follow that model and say: well, what does this four-component model give us? What are the questions it poses for us as designers to ask, to consider these kinds of factors?

431
01:11:26.930 –> 01:11:34.340
Malcolm Ryan: And we didn't want to say: look, this is the way to make a moral game, and you have to make your games better, because

432
01:11:35.160 –> 01:11:39.859
Malcolm Ryan: the moral choices in games at the moment are good. But I think we have more scope, by

433
01:11:39.900 –> 01:11:58.500
Malcolm Ryan: sitting down and looking at our game and asking: how are we engaging the player's moral sensitivity? Are we handing the moral questions to them, or are we inviting them to find them? How are we engaging the player's moral focus; what is it about our game that invites players to think about playing it morally? How are we engaging moral action? Do we just

434
01:11:58.510 –> 01:12:08.839
Malcolm Ryan: click on it and the moral thing happens, or do we have to strategize about it? Do we have to be skillful in the dialogue, or skillful in some other sense?

435
01:12:08.930 –> 01:12:19.069
Malcolm Ryan: Again, there's no one answer to these questions. For some of them we can say: oh yeah, we're totally ignoring that; moral action is not in this game.

436
01:12:19.180 –> 01:12:31.420
Malcolm Ryan: But for some we can say: we really want to dive into moral sensitivity; we want to make the moral problems rich and ambiguous and bring a lot of factors to bear on that, and we're going to do that through narrative or through whatever it might be.

437
01:12:31.630 –> 01:12:40.450
Malcolm Ryan: So this is where I think understanding the psychology really just gives us a better set of tools to look at our designs as game designers. And that, for me, is,

438
01:12:40.520 –> 01:12:41.559
Malcolm Ryan: you know, as

439
01:12:41.580 –> 01:12:43.190
Malcolm Ryan: you know, I come into this

440
01:12:43.320 –> 01:12:57.160
Malcolm Ryan: with more concern about games than about ethics, in the sense that I believe a lot of what we're doing can be used to make games that teach ethics and games that improve ethical development.

441
01:12:57.170 –> 01:13:03.609
Malcolm Ryan: But for me, moral choice is fun, and as a designer I want to make more interesting, more meaningful, more sort of

442
01:13:03.630 –> 01:13:09.960
Malcolm Ryan: adult games. Like all of our media: all of our media wrestles with morality

443
01:13:10.140 –> 01:13:13.690
Malcolm Ryan: in very interesting ways. It's a big part of what we do for entertainment,

444
01:13:13.770 –> 01:13:24.459
Malcolm Ryan: and I want to make games which wrestle with that in much more interesting ways. And I think engaging with this moral psychology gives us those lenses to look at our game and say: how can we make this better by

445
01:13:24.880 –> 01:13:27.520
Malcolm Ryan: better engaging parts of the player's moral expertise?

446
01:13:28.050 –> 01:13:35.690
Shlomo Sher: Yeah, that's great. Paul, do you want to add anything to that, out of curiosity, coming more from the ethics side than the game side?

447
01:13:36.640 –> 01:13:44.349
Paul Formosa: Not really. I mean, as I said, games are sandboxes, and I think it's really interesting the way we can

448
01:13:44.430 –> 01:13:49.470
Paul Formosa: role-play different ethical personas or explore different ethical situations.

449
01:13:49.650 –> 01:14:08.569
Paul Formosa: So I think games are useful in that way, and that then leads into the stuff Malcolm was talking about: well, if we want that, then we want to think about the ways we can engage those different aspects of our moral expertise. Or, you know, can games teach us to be better, morally? What counts as better? How do we do that?

450
01:14:08.640 –> 01:14:17.399
Paul Formosa: One thing we're actually working on, another game we haven't talked about yet, is built around cybersecurity ethics. So we have a game that tries to teach sensitivity around cybersecurity ethics.

451
01:14:17.410 –> 01:14:28.649
Paul Formosa: One question is: what are the ethical issues in cybersecurity? They're not obvious. And so one thing that game is trying to do is just focus on sensitivity: how can we make people more aware of what the ethical issues in cybersecurity are, in a gaming context?

452
01:14:28.660 –> 01:14:38.680
Paul Formosa: So there's an example where we've said: we want to improve sensitivity, and then we can design the game to try and achieve that. And so I think just being aware that there are these different components,

453
01:14:38.790 –> 01:14:58.259
Paul Formosa: being conscious of them and thinking about them. And then, as gamers, flipping that around: think about how those different components are being engaged with or not, being challenged or not, and think more broadly about how ethics is put into games and how you engage with it in games, beyond just "here's the big choice, what do you do?"

454
01:14:58.980 –> 01:15:14.199
Shlomo Sher: All right, cool. So, guys, we end our podcast with essentially this: what do you want to leave our listeners with? We ask people to do it in under one minute, and we've rarely had two people on, so,

455
01:15:14.470 –> 01:15:28.090
Shlomo Sher: which one of you guys wants to take that, to give us what you want to leave us with in under one minute? We end up using it, if it's good, in a promotional capacity.

456
01:15:29.360 –> 01:15:38.989
Malcolm Ryan: I can, sure. Hmm. All right. Let me start that again.

457
01:15:39.490 –> 01:15:50.159
Malcolm Ryan: I mean, I think moral psychology is useful to design. A lot of what we teach in game design is understanding the player: understanding

458
01:15:50.170 –> 01:16:07.069
Malcolm Ryan: what the player will do and understanding what the player will feel. These are the fundamental questions of game design: how do we design the game to get the player to do what we want them to do and feel what we want them to feel? Understanding player psychology is a big part of that. As a designer, I want to get into the player's head.

459
01:16:07.190 –> 01:16:17.340
Malcolm Ryan: So if we want to design morally engaging games, we want to get into the player's moral psychology; we want to understand how players' moral thinking works.

460
01:16:17.350 –> 01:16:37.270
Malcolm Ryan: And there's a lot we can learn from the existing moral psychology research to say: okay, here's what's going on in your player's head when they're making a moral choice in your game. And here are these different components, moral focus, moral sensitivity, moral judgment, moral action, these different components of their thinking that are going on when they're making and executing a moral

461
01:16:37.280 –> 01:16:54.589
Malcolm Ryan: choice. And as designers, if we can learn about this kind of psychology and use it to reflect on our game design, we can make games that engage that moral expertise and let players practice and play with it more. And, more interestingly, we can also learn more about how players experience that.

462
01:16:54.600 –> 01:17:04.750
Malcolm Ryan: [inaudible]

463
01:17:04.790 –> 01:17:15.739
Malcolm Ryan: I would encourage any game designers out there to go and investigate moral psychology. Come to our blog; we have lots of resources from moral psychology research and how they might relate to games.

464
01:17:16.230 –> 01:17:25.399
Malcolm Ryan: That's the blog, moralityplay.org. Then use that as a sort of lens to think about your design. And for players:

465
01:17:25.520 –> 01:17:46.830
Malcolm Ryan: yeah, think about what you're doing when you're making moral choices in games. Think about what influenced this particular choice: was I influenced by these instrumental factors, the meter, or worrying about losing the game? Or was I actually following my own moral judgment? Was I morally role-playing in this situation? And do I want to enter into this game in a way where I'm playing, you know,

466
01:17:46.840 –> 01:17:47.990
a new kind of morality.

467
01:17:48.040 –> 01:17:59.619
Malcolm Ryan: That gives you more reflection on how you're playing, what your own moral psychology is, and why you're making the choices you're making. And then maybe you can think about how that relates to real life, although...

468
01:18:00.140 –> 01:18:01.609
Oh, that’s interesting.

469
01:18:02.050 –> 01:18:14.079
Shlomo Sher: That really tailed off at the end there, but I'm going to take a clip from that. I can make that work.

470
01:18:14.090 –> 01:18:41.240
Shlomo Sher: Malcolm Ryan, Paul Formosa, thank you so much. This was just super interesting. I've got my class right after this, and I'm going to talk to them about the four dimensions here. I actually forgot what you said to call these: the four components of how we morally engage, or how we engage morality in this case, and also how we could engage morality in games.

471
01:18:41.250 –> 01:18:43.890
Shlomo Sher: And we want to thank you guys.

472
01:18:44.300 –> 01:18:53.420
Shlomo Sher: It was a real pleasure, real pleasure. Thank you very much.
