Intersubjective Understanding in Interpreted Table Conversations for Deafblind Persons

Submitted on 25 Jan 2021 · Accepted on 13 Sep 2021

Introduction

One of the most common activities in everyday life is to sit down with others, eat, and engage in social discussion. It is both a physical and a dialogical activity; participants must handle the situated artifacts, establish mutual attention, and coordinate their talk with the other members of the group. The question raised in this study is how intersubjectivity and shared understanding can be established through tactile and haptic resources in interpreter-mediated group dialogues for deafblind persons. The authors have analyzed video recordings of table conversations among deafblind persons seated around a table at a restaurant banquet. The analysis found that message structures are made accessible through a range of semiotic resources. Further, it showed that there are multiple states of intersubjectivity: It is not only the deafblind participants who establish states of temporary intersubjectivity in dialogue with each other; the interpreters are also part of their social world.

The aim of this study is to raise awareness of multimodal strategies that can support meaning-making in communication with deafblind participants. The article offers insight into the interpreters’ professional practice and how they alternate between interpreting spoken utterances, describing the context, and facilitating different kinds of haptic signals. This insight is also relevant to other professional groups working to assist deafblind persons, for instance teachers, social workers, health workers, and personal assistants. The findings also contribute to the field of linguistic research. By exploring a specific setting in which multimodal resources are used by participants who have a combined sensory loss, we can understand language use in this particular setting, but also theoretical concepts about general structures in human communication and how participants, as a group, are “carrying out courses of action in concert with one another” (Goodwin 2013).

The deafblind persons who participated in this study were from Norway and Sweden. The interpreting service for deafblind persons in those countries involves the actions of interpreting, describing, and providing guidance (Berge & Raanes 2013; WASLI 2013). Interpreting conveys the meaning of the speakers’ utterances. Describing conveys information about the context and the ongoing activity, e.g., the arrangement of the room, who is present, what they are doing, who is talking, and the listeners’ reactions. Guidance conveys information about how to move safely in the physical environment. Because the amount of information is too great to communicate in its entirety, the interpreter must make critical choices regarding the focus of mediation, the clarification of topics, and mediation strategies. Deaf and deafblind persons are entitled to hire interpreters from a public interpreting service. In Norway, this right was established in 1981 for deaf persons and in 1986 for deafblind persons. The service is free to use in conversations connected to work, education, health, and everyday activities.

Deafblind individuals are a non-homogeneous group when it comes to sensory loss and preferred communication methods; the preferred interpretation techniques therefore vary. Persons who are primarily deaf often use signs that can be perceived through either residual sight or tactile sign language. Persons who are primarily hearing may perceive the speakers’ utterances through hearing aids when the speakers address them with a clear voice. Haptic signals are a set of conventional signals produced on the deafblind person’s body. Their purpose is to provide contextualising information about the environment and the other participants’ nonverbal expressions, such as their turn-taking signals, minimal-response signals, and emotional expressions. The deafblind person can thus use the haptic signals to understand the activity they are taking part in, and use that information to regulate their own self-presentation (Bjørge & Rehder 2015; Raanes & Berge 2017). The use of such signals has increased in many deafblind communities during the last decades, and they are often mediated simultaneously with the interpretation of utterances or the description of context (Edwards 2018; Raanes 2020a, 2020b).

Interpreting as a situated, joint activity

Different theories provide different explanations of the construction of meaning and interpretation, which in turn influence the view of interpreters and their function for the primary participants (Wadensjö 1998). The monological approach holds that meaning is tied to the speakers’ literal words. The interpreter’s responsibility is to be a neutral channel between the primary participants, and they should be as “invisible” as possible in the situation. This approach has had a strong influence on professional practice and ethical guidelines, though it has also been criticized. The sociocultural approach to interpreting, especially through the contributions of Linell (1998) and Wadensjö (1998), sees meaning-making as a situated activity, unfolding sequence by sequence in joint interaction between the participants. The interpreter is seen as an interlocutor: She or he seldom expresses their own opinions, but their understanding and choice of actions influence what information is passed on (and what is not), and their physical presence influences the seating arrangements, the rhythm of talk, and the establishment of mutual gaze.

Clearly, in interpreter-mediated dialogues there will be some barriers that are not found in settings where the participants use the same language. In the field of interpreting, there are two approaches to whether interpreters have a responsibility to reduce those barriers. The monological point of view is that it is not the interpreters’ responsibility: If the spoken words are mediated, the primary participants must solve the interaction barriers themselves. Conversely, the dialogical point of view regards this as a joint responsibility. In order to maintain the dialogue, all participants coordinate their talk and actions toward each other. The interpreters’ speaking position is in the middle of different participants and languages, and they have the opportunity to explicitly coordinate the interaction order and the message structures (Wadensjö 1998). For instance, in sign language interpreting for deaf and for deafblind persons, it is documented that interpreters use embodied signals to coordinate mutual attention, negotiate turn-taking, and exchange minimal-response signals to maintain the turn-taking in the primary participants’ dialogue (Berge & Raanes 2013; Berge 2014; Raanes & Berge 2017; Gabarro-Lopez & Mesch 2020). The interpreters’ vision of language and their role-set creates a professional framework for their performance. This requires the establishment of a shared situation definition with the primary participants.

Intersubjectivity

Human dialogues center on the collaborative construction of a shared understanding of the situation, what the participants want to achieve, and the others’ contributions. This can be described with the concept of intersubjectivity (Rommetveit 1974; Rommetveit 1985; Rommetveit 1992). When entering a conversational situation, which aspects each actor focuses upon is determined by their background knowledge, engagement, and perspective (Rommetveit 1985: 186). Whatever is established as a shared social world will necessarily be only partially shared. However, even when participants know that their perspectives differ, they take for granted that they will understand each other. For this to work, there must be a basic trust in the others’ communicative intentions, and each participant must adapt their speech to what they assume is the others’ perspective. In asymmetric encounters, communicative control is unequally shared: The participants have different possibilities to control the focus of attention, identify references, and describe the objects of their world (Rommetveit 1985: 191). Asymmetry can be found both in ordinary and in professional discourse practices, e.g., in the conversational roles of parents and children, and of teachers and pupils. This study claims that it can also be found in the relationship between the interpreter and the person with sensory loss. The professional vision should therefore be to establish practices that give deafblind participants access to information as well as some control over the situation.

Intersubjectivity and message structures

Communication facilitates a transcendence of the participants’ private worlds toward a temporarily shared social reality of what is going on. In order to make information known to each other, the topic of the dialogue must be made relevant. However, humans often talk in quite cryptic, segmented, and unfinished utterances. The talk and the words’ meaning potential must therefore be framed against the background and context of the conversation. Rommetveit (1974: 36; 2003) has focused on degrees of intersubjectivity in human communication and has constructed a model of the spatial-temporal-interpersonal coordinates of message structures. The model postulates three features that have to be established in meaning-making: a mutual understanding of the place of the dialogue, the timeline the communication refers to (before, now, afterward), and the identity of the speaker and listener. In face-to-face dialogues between two persons, the identification of “I” and “you” is immediately given when the speaker addresses the listener. However, when the participants use deictic words, talk about someone else, or refer to past or future events, the reference must be clarified.

Deafblind persons have reduced access to auditive and visual input from the situation, meaning that access to background information and language references is not immediately given. The analyses in this study illuminate the strategies interpreters use to mediate such information, and how they contribute to establishing states of temporarily shared social reality—intersubjectivity—between the primary participants.

Intersubjectivity in shared situation definitions

A situation definition is the manner in which objects and events are represented and understood by those operating in the problem-solving situation (Wertsch 1985: 159). In an activity, the interlocutors may differ in their representations of the same set of objects and events. To categorize the background information, the participants need to establish a mutual situation definition that sets their focus of attention and their goals within the activity; intersubjectivity exists when the interlocutors share some aspect of their situation definitions (Rommetveit 1974; Rommetveit 2003). Collaboration typically involves the process of defining the situation, and some sort of explicit account of the situation and action pattern is often required (Goffman 1981). Such an account can be provided by clarifying the goal of the activity, the representation of objects, and the possible action patterns for operating on them (Wertsch 1984; 1991).

Intersubjectivity as a shared focus of attention

An important perspective on intersubjectivity is how the units of speech, the attention, and the turn at talk are constructed in interaction (Iwasaki 2011). Trevarthen (1998) has studied the interaction between mothers and their infants and how they share attention, either on each other (primary intersubjectivity) or on an object (secondary intersubjectivity). The shared focus of attention contributes to achieving an intersubjective sense of the shifts of turns. This is an embodied, collaborative achievement. In spoken conversation, the participants employ embodied signals expressed through eye gaze, voice modulation, body orientation, environmentally coupled gestures, and contextual phrases. Goodwin (2007; 2013) has documented how talk, gesture, embodied action, and structure in the world are mutually coordinated when giving meaning to indexical references such as you, it, and there. By attending to each other’s body orientation and gaze, directed toward each other and toward the world that is the focus of their attention, the participants manage to organize their activities. This can be seen as an embodied participation framework (Goodwin 2000), which is central to the organization of talk and interaction.

Intersubjectivity in embodied interaction

Central keys in the embodied framework that support meaning-making are the visible signs and the organization of the participants’ bodies. Goodwin (2011) analyzes moments of intersubjectivity in the conversations of an aphasic man who can utter only three words. He describes the multimodal practices that the man and his conversation partner use to coordinate their talk together. The study reveals the complexity of cooperative semiosis by participants who use arrays of symbolic and environmental resources, including their own and each other’s bodies. In another work, Goodwin (2000) analyzed how an archeologist uses her index finger to make visible drawings in the sand to explain some findings to her student. By themselves, the talk and the actions are partial and too incomplete to accomplish meaning and a relevant next action. When joined together in local contextures of action, diverse semiotic resources augment each other and create a whole that is both greater than, and different from, any of its constituent parts (Goodwin 2000).

Earlier Research on Deafblind Interpreting

In Scandinavia, video-based studies of conversations in tactile sign language and with haptic signals have increased our knowledge of interpreting for deafblind persons. Johanna Mesch’s (1998) doctoral dissertation studied Swedish tactile sign language, while Eli Raanes’s (2006) studied Norwegian tactile sign language. They have also worked together, using cognitive theories of blending to study the meaning potentials in signs, gestures, and constructed actions (Mesch, Raanes & Ferrara 2015). Lahtinen (2008) has studied the use of haptic signals in a Finnish context, and Bjørge and Rehder (2015) have published handbooks on haptic communication from a Norwegian context. Raanes and Berge (2017) studied the function of haptic signals in interpreter-mediated group conversations. They have also studied interpreters’ work on coordinating mutual attention related to managing turn-taking (Berge & Raanes 2013) and how interpreters are sensitive to the concepts of social and inner speech related to the address in the deafblind person’s utterances (Berge 2014). Gabarro-Lopez and Mesch (2020) have studied how interpreters for deafblind persons use environmental artifacts to describe and mediate information about the context. Saltnes (2019) has studied the education of interpreters for deafblind persons, and how ethical reflection is developed in their practical work. Kermit (2020) has discussed inclusion and participation in interpreter-mediated meetings in general.

Fletcher and Guthrie (2013) and Hovland (2019) have studied disturbances in the apprehension of sensory signals, caused by the stress of collecting fragmented pieces of information due to dual sensory loss, or by Charles Bonnet Syndrome—a syndrome that causes disturbances in the perception and interpretation of visual or auditory stimuli. In the United States, Terra Edwards (2018) has studied how social communication develops in communities that are adapted to tactile communication. In Australia, Iwasaki (2020) has studied tactile Australian sign language and considered methodological issues in the collection, transcription, and analysis of CA data. Together with a team of researchers, she has studied the transition from the visual to the tactile modality by deafblind signers (Iwasaki et al. 2019; Willoughby, Manns, Iwasaki & Bartlett 2020), humorous utterances in tactile sign language (Willoughby et al. 2019), and how misunderstandings arise and are resolved (Willoughby et al. 2014; 2018). However, research on intersubjectivity and the process of constructing mutual understanding in tactile communication is still scarce, and no earlier publications have been found on interpreter-mediated communication.

Method

This research is inspired by video ethnography, which produces analyses concerned with the sequential interaction order of human talk and with how individual actions are situated and related to the other participants’ actions (Heath et al. 2002; Knoblauch, Schnettler, Raab & Soeffner 2012). We also utilize a multimodal approach, striving to analyze the simultaneous use of speech, gestures, gaze, body orientation, and actions, and to explore how ‘multiple parties are carrying out courses of actions in concert with each other’ (Streeck, Goodwin and LeBaron 2011: 1).

Our data are video recordings from a corpus of Norwegian and Swedish tactile sign language, collected by Johanna Mesch and Eli Raanes in 2019. The data collection lasted three days and involved a group of deafblind persons. From this corpus one situation was selected: a restaurant banquet in which four deafblind persons and eight interpreters took part. The restaurant recordings comprise 28 video clips, lasting 4 hours and 18 minutes in total. The project is approved by and follows the regulations of NSD (the Norwegian Centre for Research Data). All participants were informed about the purpose of the project and signed NSD’s formal informed consent forms. All names are anonymized, and instead of pictures from the video recordings, drawings are used to illustrate the persons’ actions.

The analytic work began with a thorough review of the video recordings, studying patterns of action related to how interpreters and deafblind persons work together to establish intersubjectivity, understood as a mutual understanding of the context, who was talking, and their messages. The keys in the analysis were spoken and signed utterances, bodily movements, and the use of artifacts. Detailed transcriptions were made of a selection of the video recordings, and repeated patterns of mediation strategies were identified. Some of these examples are presented in the following section.

Results

Defining the seating arrangements

The interpreters’ professional vision is that the other participants are the primary participants of the ongoing activity, and their mediation strives to lay the ground for the others’ decision-making. Deafblind persons can use information about the physical context to navigate their behaviour. Therefore, when entering a room, the interpreters often describe the physical arrangements. They then wait for the deafblind person’s initiative about where they would like to sit [citation redacted for review]. The requisite information for this kind of decision-making is the size of the room, the seating arrangements, the available chairs, and other people’s locations. The interpreters’ professional vision is also expressed in how they use embodied gestures in their guiding so that the deafblind person can understand and manage the table context. An example from the video demonstrates this:

To sit down on a chair in front of a table is an everyday human action. However, to solve the problem of sitting down on an available chair on the correct side of the table is a complicated activity. The interpreter and the deafblind person use several different information sources. Some information is mediated through the interpreter’s embodied gestures. First, the deafblind person can feel the movement of walking and stopping, which gives an embodied understanding of the spatial layout (lines 1 and 2). In the next sequence of interaction, the interpreter constructs an embodied gesture by laying their hand on top of the chair (line 3). In the deafblind community this is an established strategy of guiding, and the message potential “here is your chair” is somehow taken for granted. The deafblind person uses the embodied gesture to navigate their hand down toward the chair (lines 4 and 5). Through tactile senses they investigate the distance between the table and the chair (line 6), and use this information to sit down, facing the center of the table (line 7).

Note that there is no utterance of words or signs in this transcribed sequence. Still, a meaningful dialogue is going on between the interpreter and the deafblind person. In turns, they coordinate their actions, establish mutual attention toward specific objects, consider the other’s point of view, and construct an intersubjective understanding of the spatial arrangements. Their embodied gestures acquire meaning from the situation (Goodwin 2007), and their message potential is taken for granted (Rommetveit 1985). Their coordination is also framed by a common understanding of each other’s roles: When the interpreter and the deafblind person have established an intersubjective understanding of “here is an available chair, at the table with the other primary participants,” the interpreter steps back (line 5), and the deafblind person independently sits down. The interpreter does not take control over the deafblind person’s body. Rather, they construct embodied gestures that the deafblind person uses to define the situation and independently navigate their own actions in the problem-solving situation of entering and sitting down at the table.

For the group of deafblind participants and interpreters, the seating arrangement was mainly decided by the deafblind participants. Some were sensitive to the sunlight from the window. Others were concerned about the opportunity to speak directly with other guests:

Figure 2 illustrates the seating arrangements for two deafblind participants (seated in the middle of the picture). In some sequences they talk directly with each other in tactile sign language. In other sequences they have access to mediated information from their personal interpreter. The seating arrangement is therefore adjusted for both direct and mediated talk.

Figure 1 

Definition of where to sit. DB = deafblind.

Figure 2 

Embodied orientation due to direct or mediated talk.

Defining the table environment

The next sequence in the activity is to construct a situation definition of the table arrangements. In Figure 3, notice that the interpreter mediates a description of the table setting. However, to solve the problem of eating, the situated placement of the utensils must be identified. To do so, the deafblind participant uses tactile resources. Together, the two information sources contribute to establishing an intersubjective understanding of the table environment, as this example from the video shows:

Figure 3 

Definition of the table environment.

The mediated word “banquet table” is defined by the tactile feeling of the linen tablecloth, wine glass, and porcelain plates. With the use of tactile resources, the meaning potential in the mediated word is given meaningful structure, and an intersubjective situation definition is constructed. The tactile exploration also creates an embodied feeling of the genre and the language games: There is a genre related to being at a restaurant, and managing it lays the ground for participating in the joint activity of “talking and dining at a nice restaurant.” This language competence is somehow taken for granted: The interpreter trusts the deafblind person to manage the genre without any further guidance.

The table environment is not static. During a four-course banquet, there are several temporary situation definitions (Rommetveit 1985). For each course, the food is presented by the chef, and waiters place the plates in front of the guests. The interpreters mediate the chef’s utterances and describe the ongoing activity. To handle the problem of eating, the deafblind participants use tactile resources to define the new course and, from that understanding, decide how to use the utensils. When needed, they ask the interpreter for further clarification. Three examples from the analyzed video illustrate this cooperation:

The physical tools for eating, the texture of the food, the size of the course, but also the actions of the others define the situation and how the participants decide to handle the problem of eating. The situation definitions are constructed in cooperation between the deafblind person and the interpreter. Reduced sight makes it harder to identify the food and get it on the fork. In Figure 5, one of the primary participants is completing his main course, but some pieces of meat remain on his plate. His interpreter assumes that he wants to know this, since he has earlier expressed that he enjoyed the food. To make this information accessible, the interpreter constructs haptic signals on the deafblind person’s back.

Figure 4 

Seeking information to make definitions of the meal.

Figure 5 

Definitions through haptic signals.

The duration of this sequence is 1 minute and 33 seconds. Transcribing with the program ELAN, we counted that the interpreter’s hand touches the deafblind person’s back 14 times, constructing haptic signals for a total of 18 seconds. The first touch is when the interpreter makes the haptic signal “circle” on John’s back (line 1). This signal has different meaning potentials: a plate, a face, or a round table. The situated meaning must therefore be defined, and the most likely meaning potential is that the haptic sign refers to the plate. John moves his knife and fork, searching for the last bites. In the next sequence of interaction, the interpreter points at the left side of the circle (line 2). This haptic signal contributes to establishing mutual attention, and John starts to search for food on the left side of the plate. However, the food is not easy to find. The interpreter describes where the knife and fork should be placed with small haptic movements (lines 4, 6, 8). John finds the bites, and when he has finished eating, he turns toward his interpreter and smiles (line 9). The interpreter responds with several tactile minimal-response signals tapped on his hand, and on his back she makes the haptic signal for “finished” (line 10). This response tells him that they have fulfilled the activity goal: to find the last bites of a well-prepared meal, where each bite is appreciated.
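For readers who wish to reproduce this kind of count, annotation frequencies and durations can be computed directly from ELAN’s .eaf files, which are XML documents pairing annotations with time slots given in milliseconds. The following is a minimal sketch under the assumption of a hypothetical file name and tier name (banquet_clip.eaf, “Haptic-signals”); it illustrates the general approach, not the project’s actual transcription setup.

```python
import xml.etree.ElementTree as ET

def tier_stats(eaf_path, tier_id):
    """Count the annotations on one ELAN tier and sum their duration."""
    root = ET.parse(eaf_path).getroot()
    # EAF stores all timestamps once, in a TIME_ORDER block of TIME_SLOT elements.
    slots = {
        ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE"))
        for ts in root.find("TIME_ORDER")
        if ts.get("TIME_VALUE") is not None
    }
    count, total_ms = 0, 0
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        # Time-aligned annotations reference a start and an end time slot.
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            start = slots[ann.get("TIME_SLOT_REF1")]
            end = slots[ann.get("TIME_SLOT_REF2")]
            count += 1
            total_ms += end - start
    return count, total_ms / 1000.0

if __name__ == "__main__":
    # Hypothetical file and tier names, for illustration only.
    n, seconds = tier_stats("banquet_clip.eaf", "Haptic-signals")
    print(f"{n} haptic signals, {seconds:.1f} s in total")
```

Applied to a tier in which each haptic touch is annotated as one time-aligned segment, such a script would yield figures of the kind reported above (14 signals, 18 seconds).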

The haptic signals are a resource John uses when navigating the problem-solving activity of eating. The analysis of this excerpt illustrates that intersubjectivity requires mutual attention (Trevarthen 1998). The excerpt also illustrates how the meaning potential of haptic signals is situated: They are only meaningful when they are connected to the ongoing activity. Also illustrated here is how the interpreter’s mediation is timed and adjusted to John’s actions toward his dinner plate, and that their dialogue is driven by a mutual exchange of initiatives and responses (Linell 1998).

Defining the speakers’ identity

Two central persons in this situation are the waiter and the chef. Their responsibilities include hosting, taking orders, introducing the food, and serving. Identifying the speaker lays the ground for establishing an intersubjective understanding of the message structures (Rommetveit 1974). A responsibility among the interpreters is, therefore, to mediate information about who is doing the talking. To do so, they first introduce the speaker’s name or role, they point at the speaker, and then they mediate the speaker’s utterances. In some cases, if the speaker is being introduced for the first time, the interpreters might describe some features of their appearance. One example of this strategy is when the waiter comes to take the order for drinks:

In this excerpt, the interpreter frames the turn at talk and the message structures. The appearance of the new speaker is explicitly made clear by introducing her as “the waiter,” her features are described as “a young lady,” and her physical placement is pointed out with an embodied gesture. The deafblind person can use the pointing gesture to orient her sight and body toward the speaker. Through the interpretation of the spoken utterances, the waiter’s expectation of receiving an answer is highlighted. The question, originally addressed to the whole group, is coordinated into a direct question to “YOU.” Put together, the interpreter’s mediation strategy informs the deafblind person that the waiter, standing nearby, just now, is addressing you and is waiting for an answer. The mediation gives information about the place, the time, and the identity of the speaker, corresponding to Rommetveit’s (1974: 36) model of the basic elements needed to understand message structures.

The interpreter’s work of clarifying the identity of the speaker is further illustrated in Figure 7 when the chef arrives to present today’s menu:

Figure 6 

Embodied gestures for visual navigation.

Figure 7 

Identifying the “I” and “you.”

In lines 1 and 2, the interpreter introduces the new speaker by describing his identity as “the chef,” and she constructs an embodied gesture of pointing toward his placement. She then pauses, giving the deafblind person time to direct her gaze toward the chef (line 3). This mediation strategy lays the ground for understanding the turns of the speakers.

We can also notice that when the chef says “I have prepared a meal for you,” the interpreter points to her own chest for the reference “I.” However, this sign does not refer to the interpreter but to the chef. Grounded in earlier experiences of restaurant interactions, the interpreter takes for granted that the deafblind person understands that the utterance is the chef’s, not her own. In other sequences, when the interpreter contributes with personal utterances (for instance in Figure 5, line 10), the identity of the speaker is implicitly understood through their situated interaction.

The interpreter constructs her mediation in a way that reflects the genre of politeness in the chef’s and the waiter’s utterances. This genre is also employed by the deafblind lady when she turns her body and face toward the chef, nodding and smiling in his direction in Figure 7 (lines 3 and 8). Her embodied gestures are guided by the interpreter’s descriptions of the situation and by earlier experiences of politeness. As figures, the three of them are part of the language game that takes place at the restaurant. To enact this game, the interpreters must master the speaking genre while also keeping focus on a complex activity. This paves the way for establishing an intersubjective understanding between the primary participants, which chains their dialogical interaction (Trevarthen 1998; Linell 1998; Goodwin 2011).

This is a context with several participants. In some sequences of their dialogue, they have mutual attention on one specific person, like the chef. In other sequences, several side talks are going on: dialogues between an interpreter and a deafblind person, between two deafblind persons seated near each other, and between deafblind persons seated on either side of the table. Figure 8 illustrates a side dialogue that went on for 5 minutes and 41 seconds, between a deafblind person using tactile sign language and a deaf person seated on the other side of the table. In the front of the picture sit the deafblind person (to the right) and her interpreter (to the left); the deaf person is seated at the back of the picture:

Figure 8 

Mutual attention across the table.

The basis for being able to establish mutual attention with a participant seated on the other side of the table is the interpreter’s description of the seating arrangements: Each participant knows who is present and where they are seated. To negotiate for the other person’s attention, the deafblind lady uses an embodied gesture of waving (line 1). The interpreter coordinates her initiative by turning her gaze toward the addressed person (line 2), and they establish eye contact (line 3). Information about the establishment of eye contact is mediated through a series of tactile response signals (line 4). The deafblind person understands that she has the deaf person’s attention, and she starts to talk (line 5). During their dialogue, the interpreter both mediates their utterances and coordinates their interaction by turning her gaze toward the speaker who has the turn at talk (line 7). However, after some time, their side talk becomes the main talk around the table as several interpreters incorporate it in their interpretation. As a team, the interpreters orchestrate the primary participants’ dialogue by constantly using their professional vision to evaluate what is going on around the table and who is participating as listener or talker.

Conclusions

This article has studied sign language interpreters at work and shown that their mediation strategy is complex (Raanes & Berge 2017; Kermit 2020; Saltnes 2019). The interpreters coordinate the deafblind persons’ attention to the material world by describing the environment, the table, the food, the other participants’ placement, and the ongoing activity. They interpret the primary participants’ utterances, and they coordinate the interaction by framing the message structures and the speakers’ identities. They are also involved in the negotiation process of establishing mutual attention and chaining the turns at talk. Thanks to their contribution, the deafblind participants, despite being unable to see or hear one another, can navigate the ongoing activity and decide on their dialogical contributions. When leaving the restaurant, they have all experienced being part of a group enjoying a meal at a fine dining restaurant.

In the Norwegian and Swedish context, interpreters for deafblind persons work in both formal and informal settings. This study has analyzed video data from an informal setting, where a group of deafblind people joins a dinner party and their dialogue builds up naturally. In this setting, the interpreters contribute not by sharing their own opinions, but by framing the meaning potential in the spoken utterances and in the sequences of interaction. Their work of linguistic and interactional coordination is in line with the understanding of interpreting as a sociocultural and situated activity to which all participants contribute (Linell 1998; Wadensjö 1998). Not many studies of deafblind persons’ communication have dealt with multiparty conversations or used sociocultural theories as a point of departure. This study found the approach useful both for understanding theoretical concepts and for describing professional practice.

The multimodal communication methods analyzed in this study were developed within the deafblind communities. The study can, therefore, increase our understanding of how to adapt communication with people who have sensory loss. The analysis presented in this article also complements linguistic theories about intersubjectivity and the construction of shared understanding in general. The intersubjective perspective on the process of framing and making meaning has been under-communicated in interpreting studies. By bringing in the dialogical theoretical basis of Linell (1998), Rommetveit (1974; 2003), Goodwin (2011), and Goffman (1981), this study contributes to the understanding of how meaning is built up as a joint activity in situated contexts through embodied signals. In this perspective, the analyzed data represent a crossing point between theoretical and empirical knowledge.

A new contribution from this study is the concept of “environmental intersubjectivity.” The analysis has highlighted that contextual information is necessary for deafblind persons not only when handling the activity of eating, but also for understanding the different participants’ utterances. The chef’s, the waiter’s, the other deafblind guests’, and the interpreters’ utterances are understood in light of their contextual roles and the ongoing activity around the table. The analysis has also shown that several states of intersubjectivity are established. Some states unfold between the interpreter and the deafblind person they are interpreting for, while other states are shared between the primary participants.

The analysis has also documented the simultaneous use of different semiotic resources. However, one can also notice that interpreter-mediated dialogues for deafblind persons have their own rhythm: It takes some extra time to describe the context and to let the deafblind person orient themselves in the physical environment. In this setting, all the primary participants were deafblind and used to coordinating their interactions according to their disability and their preferred communication method. Additionally, the interpreters adjust their mediation to this demand, pausing so that the deafblind persons can see, feel, and handle the artifacts. The need for pauses and timing of talk might, however, be a new aspect for those who are not experienced in talking with deafblind persons.

The interpreters’ actions are framed by their professional vision, which highlights that the deafblind interlocutors are the primary participants of the activity. The deafblind participants have the information they need to express their meanings, they use different genres (politeness, humor, decision-making), and they know when the floor is ready for their speech. This is only possible if the interpreters and the deafblind participants cooperate (Raanes & Berge 2017). With inspiration from Bakhtin (1981), we can say that it is a multi-voiced situation, where the voice of the deafblind persons is ventriloquized through the interpreters’ voices.

Having the access needed to independently handle the environment can contribute to a deafblind person’s feeling of empowerment. To sit at a restaurant table, to eat and talk with friends, might not be an everyday life experience. For instance, to be informed about the other participants’ locations and to be able to decide one’s own seating at the table is a different experience from being placed in a random chair. Likewise, to have information about the courses of the meal and to be able to eat the food yourself is a different experience from being fed, or served food that has been cut into pieces by a helper. Also, to be able to identify the chef and the waiter, and turn politely toward them, lays the ground for participating in this genre and its language games.

The interpreters’ multimodal strategy of mediating situated information, and their professional vision of adapting their services so that the deafblind persons remain the primary participants, is an approach that other professions may beneficially adopt.

Acknowledgements

Thanks to all participants, to the restaurant for creating the dinner party exclusively for our group, and to the two anonymous reviewers for their fruitful comments on the text.

Competing Interests

The authors have no competing interests to declare.

References

Bakhtin, Mikhail M. 1981. The Dialogic Imagination. Translated by C. Emerson and M. Holquist. Austin: University of Texas Press. 

Berge, Sigrid Slettebakk. 2014. “Social and private speech in an interpreted meeting of deafblind persons.” Interpreting 16(1): 81–105. DOI: https://doi.org/10.1075/intp.16.1.05ber 

Berge, Sigrid Slettebakk, and Eli Raanes. 2013. “Coordinating the Chain of Utterances: An Analysis of Communicative Flow and Turn Taking in an Interpreted Group Dialogue for Deaf-Blind Persons.” Sign Language Studies 13(3): 350–371. DOI: https://doi.org/10.1353/sls.2013.0007 

Bjørge, Hildebjørg, and Katrine Rehder. 2015. Haptic Communication. Sands Point, New York: Helen Keller National Center. 

Edwards, Terra. 2018. “Re-Channeling Language: The Mutual Restructuring of Language and Infrastructure among Deafblind People at Gallaudet University.” Journal of Linguistic Anthropology 28(3): 273–292. DOI: https://doi.org/10.1111/jola.12199 

Fletcher, Paula C., and Dawn M. Guthrie. 2013. “The Lived Experiences of Individuals with Acquired Deafblindness: Challenges and the Future.” International Journal of Disability, Community & Rehabilitation 12(1). 

Gabarro-Lopez, Silvia and Johanna Mesch. 2020. “Conveying Environmental Information to Deafblind People: A Study of Tactile Sign Language Interpreting.” Frontiers in Education 5: 157. DOI: https://doi.org/10.3389/feduc.2020.00157 

Goffman, Erving. 1981. Forms of Talk. Philadelphia: University of Pennsylvania Press. 

Goodwin, Charles. 2000. “Action and Embodiment within Situated Human Interaction.” Journal of Pragmatics 32: 1489–1522. DOI: https://doi.org/10.1016/S0378-2166(99)00096-X 

Goodwin, Charles. 2007. “Environmentally Coupled Gestures.” In Gesture and the Dynamic Dimension of Language, edited by S. Duncan, J. Cassel and E. Levy, 195–212. Amsterdam: John Benjamins. DOI: https://doi.org/10.1075/gs.1.18goo 

Goodwin, Charles. 2011. “Contextures of Action.” In Embodied Interaction: Language and Body in the Material World, edited by J. Streeck, C. Goodwin and C. LeBaron, 182–193. New York: Cambridge University Press. 

Goodwin, Charles. 2013. “The Co-operative, Transformative Organization of Human Action and Knowledge.” Journal of Pragmatics 46(1): 8–23. DOI: https://doi.org/10.1016/j.pragma.2012.09.003 

Heath, Christian, Paul Luff, Dirk Vom Lehn, Jon Hindmarsh and Jason Cleverly. 2002. “Crafting Participation: Designing Ecologies, Configuring Experience.” Visual Communication 1(1), 9–33. DOI: https://doi.org/10.1177/147035720200100102 

Hovland, Line. 2019. CBS: Charles Bonnet Syndrom. Report. Drammen: Eikholt nasjonalt ressurssenter for døvblinde. 

Iwasaki, Shimako. 2011. “The Multimodal Mechanics of Collaborative Unit Construction in Japanese Conversation.” In Embodied Interaction: Language and the Body in the Material World, edited by J. Streeck, C. Goodwin and C. LeBaron, 106–120. New York: Cambridge University Press. 

Iwasaki, Shimako, Meredith Bartlett, Howard Manns and Louisa Willoughby. 2019. “The Challenges of Multimodality and Multi-sensoriality: Methodological Issues in Analyzing Tactile Signed Interaction.” Journal of Pragmatics 143: 215–227. DOI: https://doi.org/10.1016/j.pragma.2018.05.003 

Kermit, Patrick. 2020. “Introduction.” In Ethics in Public Service Interpreting, by M. Phelan, M. Rudvin, H. Skaaden, and P. Kermit. New York: Routledge. 

Knoblauch, Hubert, Bernt Schnettler, Jürgen Raab, and Hans-Georg Soeffner. 2012. Video Analysis: Methodology and Methods. Qualitative Audiovisual Data Analysis in Sociology. 3rd rev. ed. Vienna: Peter Lang. 

Lahtinen, Riitta. 2008. Haptics and Haptemes: A Case Study of Developmental Process in Social-haptic Communication of Acquired Deafblind People. Essex: A1 Management UK. 

Linell, Per. 1998. Approaching Dialogue: Talk, Interaction and Contexts in Dialogical Perspective. Amsterdam: John Benjamins. DOI: https://doi.org/10.1075/impact.3 

Mesch, Johanna. 1998. Teckenspråk i taktil form: Turtagning och frågor i dövblindas samtal på teckenspråk [Sign language in tactile mode: Turntaking and questions in deafblinds’ conversations in sign language]. Doctoral dissertation. Stockholm University, Stockholm. 

Raanes, Eli. 2020a. “Access to Interaction and Context Through Situated Descriptions: A Study of Interpreting for Deafblind Persons.” Frontiers in Psychology 11: 1–15. DOI: https://doi.org/10.3389/fpsyg.2020.573154 

Raanes, Eli. 2020b. “Use of Haptic Signals in Interaction with Deaf-Blind Persons.” In The Second International Symposium on Signed Language Interpretation and Translation Research: Selected Papers, edited by Danielle I. J. and Emily Shaw, 58–79. Washington, DC: Gallaudet University Press. 

Raanes, Eli, and Johanna Mesch. 2019. “Dataset: Parallel Corpus of Tactile Norwegian Sign Language and Tactile Swedish Sign Language.” Norwegian University of Science and Technology (NTNU)/Stockholm University. 

Raanes, Eli, and Sigrid Slettebakk Berge. 2017. “Sign Language Interpreters’ Use of Haptic Signs in Interpreted Meetings with Deafblind Persons.” Journal of Pragmatics 107: 91–104.
