Understanding Player Interpretation: An Embodied Approach

Here is a slightly edited version of the talk I gave at the Philosophy of Computer Games conference in 2013. Today I’d like to tell you how game studies could benefit from theories of embodied cognition when thinking about players. I’ll give you an overview of what embodied cognition means and how it applies to players.

First, I’ll tell you where embodied cognition comes from and give you a somewhat loose definition of what embodied cognition is.

Second, we are going to look at some examples of experiments that show a certain kind of embodied cognition at work. These should give you an idea of what kind of processing embodied cognition actually involves, and how it affects our being in the world. There are three examples, with the last one being the most relevant to our topic.

Then we are going to move onto games. We’ll look at two specific examples of how embodiment might affect our interactions with games and how one might take embodiment into account when designing interfaces.

Let’s first look at the background, and where the theory of embodied cognition comes from.

First of all, the theory is known by a couple of names, such as the embodied mind thesis, and it is also discussed in similar, related theories, like situated cognition. It is studied, for example, in cognitive and social psychology, cognitive linguistics and neuroscience.

Please also note that I’m limiting the discussion to relatively modern examples, mostly from the eighties onward.

There have been earlier researchers who paid attention to how our bodies affect our thinking, the phenomenological tradition of Heidegger and Maurice Merleau-Ponty to name just one example.

There have also been games researchers who have noted this phenomenon in one form or another, with probably the most thorough argument presented by Gee (2008).

What connects the modern strands of research together is the claim that the nature of our mind is determined by our bodies. There are stronger and weaker versions of this claim, with varying amounts of evidence.

Different researchers connect slightly different arguments to this thesis, but these six claims are commonly found among them:

  1. cognition is situated;
  2. cognition is time-pressured;
  3. we off-load cognitive work onto the environment;
  4. the environment is part of the cognitive system;
  5. cognition is for action;
  6. off-line cognition is based on our bodies.

These come from Wilson’s (2002) review paper.

I’m not going to go through all of them in detail, but you can see how they fit together to form some of the central ideas of embodied cognition. You should also note that not all of them are equally rooted in experimental evidence: Wilson finds the fourth one to be the most suspect.

Next, let’s look at some examples of how embodiment affects our cognition in practice.

Our first example comes from a paper by Zwaan, Stanfield and Yaxley (2002). You can check their paper for more details, but I’ll give you an overview of the experiment.

They presented their participants with pictures of animals and objects, so that each participant was paired with another who was shown the same object in a different shape. Examples included pictures of an eagle with its wings outstretched or with its wings drawn in.

They presented the subjects with experimental sentences like “the ranger saw the eagle in the sky” and noted that whenever the picture matched the shape implied by the sentence (wings outstretched), the subjects were faster to recognize the objects correctly.

They argue that what this experiment shows is that instead of storing the linguistic information about eagles and eggs amodally, the subjects used perceptual simulation as part of accessing that linguistic information. Or to put it in other terms, they had a perceptual image of the object while figuring out what the word refers to.

This is called cross-domain mapping, where linguistic activation primes perceptual activation, or vice versa, and it is one example of how linguistic processing is grounded in bodily processes, in this case perception.

Let’s look at the next example, by Boroditsky and Ramscar (2002). They first primed their test subjects with one of the pictures you see on the slide. The subjects were asked to draw an arrow showing how they would maneuver the chair towards the X. Depending on which picture they got, they did so either by moving themselves towards the X, or by moving the chair towards themselves.

They were then asked the ambiguous question:

“Next Wednesday’s meeting has been moved forward two days. What day is the meeting now that it has been rescheduled?”

They answered either Friday or Monday, depending on how they understood the question. And as you might already guess, the task they performed before answering had a significant effect, priming them either for the ego-moving perspective (Friday) or the time-moving perspective (Monday). In other words, and this is important for us, the subjects used their conception of space to think about time.

Now, let’s move on to the third example, which is most directly relevant to the topic of games.

The third example is from Antle, Corness and Droumeva (2009). It was an experiment to see how embodied metaphors would help in understanding an augmented user interface.

They built a system for interacting with sound by moving in space. What they were testing was how different mappings between movement and sound parameters affect how people learn to use the system. They had two versions: one based on an embodied approach, where the parameters were mapped according to embodied metaphors, and a non-embodied version, where the mappings did not correspond to those metaphors.

When given a short time to learn the system and then given tasks to perform, there was a significant difference in how well the different groups of subjects learned to perform the tasks. The ones using the system with embodied metaphors correctly performed 80 % of the tasks, while the ones using the system with non-metaphorical mappings only managed to succeed in 20 % of the tasks.
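
To make the idea of such a mapping concrete, here is a minimal sketch in code. The movement features, sound parameters and exact correspondences are my own assumptions for illustration, not the ones reported by Antle, Corness and Droumeva; the point is only that the embodied version follows a bodily metaphor, while the non-embodied version scrambles the same information.

```python
# A hypothetical illustration of metaphorical vs. scrambled parameter mappings.
# The feature names and ranges are assumptions, not the ones used in the study.

def embodied_mapping(activity, speed, height):
    """Map movement onto sound along embodied metaphors:
    more activity is louder, faster movement is a faster tempo,
    a higher position is a higher pitch."""
    return {
        "volume": activity,  # 0.0 (still) .. 1.0 (very active)
        "tempo": speed,      # 0.0 (slow)  .. 1.0 (fast)
        "pitch": height,     # 0.0 (low)   .. 1.0 (high)
    }

def non_embodied_mapping(activity, speed, height):
    """Same inputs and outputs, but the correspondences are scrambled,
    so they no longer follow any bodily metaphor."""
    return {
        "volume": 1.0 - height,
        "tempo": activity,
        "pitch": 1.0 - speed,
    }

# Both versions expose exactly the same information; only the mapping differs.
print(embodied_mapping(activity=0.8, speed=0.3, height=0.6))
print(non_embodied_mapping(activity=0.8, speed=0.3, height=0.6))
```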

Now, you might think that this is not so surprising, and feel that the mappings described as metaphorical are intuitive and easy, while the ones described as non-metaphorical are unintuitive.

And you would be right in doing so. This is exactly the point of finding metaphorical mappings: they are intuitive and easy for us to grasp because they rely on our embodied experiences.

Designing good interfaces, and I use the term very broadly to mean ways of interacting, means using those embodied experiences to make systems intuitive for their users.

You should now have an idea of what we talk about when we talk about embodied cognition. I’ll now move on to present examples of using this framework to understand players and how they make sense of spatiality and interfaces.

If we first discuss interfaces and how they affect players’ experiences of spatiality, Igoe and O’Sullivan (2004) are unfortunately a good starting point. Regardless of how large a game is from the inside, on the outside – its interface – it is still a set of ways for the player to interact with it.

Usually and traditionally, these ways of interaction have been limited to a few standard forms. Take the keyboard: it is a tool designed not for gaming but for typing. The keyboard is a metaphoric extension of the typewriter, with all the possibilities and limitations that follow.

However good the simulation of an avatar’s body is within the game, it is still interacted with through a series of button presses or mouse clicks. The system’s understanding of a player’s body is still the finger-ear-eye hybrid presented by Igoe and O’Sullivan.

The most obvious way of improving the situation is to expand the system’s understanding of what the player’s body is like. This can be accomplished, for example, with Microsoft’s Kinect.

Here, the player’s body is mapped onto a wireframe the game can understand. Actual movement in physical space is mapped onto movement in digital space, bringing the interface much closer to the one in the experiment run by Antle, Corness and Droumeva.
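
As a rough illustration of what such a mapping amounts to, here is a minimal sketch; the joint names, coordinates and scaling are assumptions made for the example, not the actual Kinect skeleton or SDK.

```python
# A hypothetical sketch of mapping a tracked body onto a game-space avatar.
# Joint names and coordinates are assumptions, not the real Kinect skeleton.

from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # position in physical space, in metres
    y: float
    z: float

def to_game_space(joints, scale=1.0):
    """Map physical joint positions onto avatar joint positions:
    movement in physical space becomes movement in digital space."""
    return {name: (j.x * scale, j.y * scale, j.z * scale)
            for name, j in joints.items()}

# One tracked frame: the player has raised their right hand.
frame = {"head": Joint(0.0, 1.7, 2.0), "right_hand": Joint(0.4, 1.9, 1.8)}
print(to_game_space(frame))
```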

But designers don’t need to be working with systems like Kinect to take our embodiment into account. It can also be done with systems that are not as aware of our bodies as Kinect is.

Let’s first take a look at an example, The Elder Scrolls V: Skyrim. All player characters in Skyrim are right-handed. This means that, by default, they wield any weapon with the right hand.

When Skyrim is played with a mouse and keyboard, the game defaults to controls inherited from other first-person games, mostly first-person shooters. The left mouse button is used for attacking with the right hand, and the right mouse button is used for blocking and spellcasting with the left hand. Essentially, this means that the left mouse button controls the right hand and the right mouse button controls the left hand.

Later in Skyrim, it becomes possible to use both hands for spellcasting. Still, the left hand remains the default for casting spells, so the second hand is controlled with the default attack button, that is, the left mouse button.

The character holds both hands up when ready to cast spells, so you can see both hands on the screen, but when you click the button on one side, the hand on the opposite side moves.

You can see a similar situation in Borderlands 2, where one of the characters is able to draw a second gun and shoot with both at the same time. In this case, however, the designers have noticed the problem this might cause for players and added the option of switching the mouse buttons when wielding two guns at the same time.
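
As a small sketch of the crossed default mapping and the kind of option that fixes it, here is a made-up input layer rather than either game’s actual code:

```python
# Hypothetical input bindings illustrating a crossed default mapping and a
# Borderlands 2 style option to swap the buttons. Not actual game code.

DEFAULT_BINDINGS = {
    "mouse_left": "character_right_hand",   # attack / second spell hand
    "mouse_right": "character_left_hand",   # block / default spell hand
}

def hand_for(button, swap_hands=False):
    """Return which of the character's hands a mouse button controls."""
    if swap_hands:
        # With the option enabled, each button controls the hand on its own side.
        return {
            "mouse_left": "character_left_hand",
            "mouse_right": "character_right_hand",
        }[button]
    return DEFAULT_BINDINGS[button]

print(hand_for("mouse_left"))                   # crossed: controls the right hand
print(hand_for("mouse_left", swap_hands=True))  # corresponding: controls the left hand
```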

Another simple example of handling this situation better is the way the controls are laid out in the Xbox 360 version of Dishonored: the actions of the left hand are controlled with the triggers on the left side of the controller, and the actions of the right hand with the triggers on the right.

Again, you see both hands on the screen to remind you what each is used for, but this time there is a correspondence with the buttons that control those hands.

This is almost trivially simple, but compared to playing the same game with a mouse and a keyboard, it is more intuitive because it corresponds with our experiences of handedness.
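
Expressed in the same hypothetical terms as the sketch above, the controller layout builds that correspondence in from the start:

```python
# Hypothetical gamepad bindings in which each side of the controller
# controls the hand on the same side, as in the Dishonored example.

GAMEPAD_BINDINGS = {
    "left_trigger": "character_left_hand",
    "right_trigger": "character_right_hand",
}

def hand_for(trigger):
    """The side of the trigger and the side of the controlled hand always match."""
    return GAMEPAD_BINDINGS[trigger]

assert hand_for("left_trigger") == "character_left_hand"
assert hand_for("right_trigger") == "character_right_hand"
```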

Now, in conclusion, we can recap what game studies can learn from the theory of embodied cognition.

Let’s return to the first two examples, the ones about priming. Let’s take a minute to think about how simple priming effects could be used to help players reach the interpretations and choices intended by the designers.

Cross-domain mapping could be used either from linguistic to spatial processing, or vice versa, for example to guide players through an area or to highlight parts of it. And note how the information itself does not need to change, only the way it is presented, just as in the experiments discussed before.

If we acknowledge that players are affected by their embodiment, we can move on to discuss how that happens and how we might use that knowledge to design better game experiences. It may be through using priming effects, or through better acknowledging how we humans are used to interacting with the world through our bodies.

In closing, we can think for a moment about what kind of research on embodied cognition we could see in the future.

First of all, we might hope to see research into what the effects of embodied cognition specific to games, or especially prominent in games, would be.

We can also expect to see studies in cognitive science that use games as part of the experimental setup. This has happened before, and is likely to happen again.

And I’m certainly not surprised by this. If our cognition is indeed situated, time-pressured, body-based, action-oriented and done in conjunction with the environment, it is very well suited to dealing with the kinds of problems most games present.