Chapter V

Results of the Protocol Study

 

In this chapter I focus on the part of the protocol reported in Appendix A. This protocol was the second of the three protocols obtained from these subjects. Recall that the first protocol was intended as an introduction to the problem, and that I expected that by the second protocol the subjects would have a good notion of the task and how to accomplish it.

 

Modeling the Conversation

 

Having transcribed the protocol as described in Chapter IV, I then developed the illocutions of the acts noted in the transcripts through interpretation of (1) the specifics of the verbal and non‑verbal behaviors associated with the act, (2) the known context of the conversation, and (3) the context of the conversation reasonably imputable to the subjects. In Cohen’s (1984) terms, I derived the illocutionary acts as a rational strategy of action, given attributions of the participants’ beliefs, goals, and expectations at the point in the discourse at which the illocutionary acts actually occurred. Note that the apparent subjectiveness of this method is both unavoidable and unobjectionable. It is unavoidable in a qualitative study which seeks to model otherwise unobtainable mental states; post‑hoc elicitation of subjects’ mental states about comprehension and production is likely to produce confabulations, because the states and acts are generally unconscious ones. The subjectiveness is unobjectionable because there really is no single objective account of the conversation. Although the conversants are repairing a shared model of the conversation, the sharedness is a subjective quality; they do not actually share the verbatim states. Therefore, an interpretation of the interaction which plausibly explains the interchange can be taken as both valid and useful. This approach is consistent with recent work in understanding language as action situated in its context (Suchman, 1987). In this view, the aim of research into language as action is not to produce formal models but to explore the relation of knowledge and action to the particular circumstances in which knowing and acting occur. This approach requires changes in the methodology of research on purposeful linguistic action:

 

The first [change] is a fundamental change in perspective, such that the contingency of action on a complex world of objects, artifacts, and other actors, located in space and time, is no longer treated as an extraneous problem with which the individual actor must contend, but rather is seen as the essential resource that makes knowledge possible and gives action its sense. The second change is a renewed commitment to grounding theories of action in empirical evidence: that is, to building generalizations inductively from records of particular, naturally occurring activities, and maintaining the theory’s accountability to that evidence. Finally, and perhaps most importantly, this approach assumes that the coherence of action is not adequately explained by either preconceived cognitive schema or institutionalized social norms. Rather, the organization of situated action is an emergent property of moment‑by‑moment interactions between actors, and between actors and the environment of their action. (Suchman, 1987, p. 179)

 

Accordingly, based on the hierarchy of illocutionary acts in conversation presented in Figure 2 and its elaboration into a taxonomy in Chapter III, I determined an initial interpretation of the illocution of the conversants’ acts. This set of acts and their mapping onto conversational phenomena were refined in successive passes back over the protocol. The final set of acts is listed in Appendix B, Predicate Representations.

 

Modeling of Meta-Locutionary Acts

 

From the transcribed and coded protocol, I developed a representation of one conversant’s model of the conversation. That is, as described in Chapter II, the conversants are viewed as jointly constructing a conversation. Each conversant checks his or her model of the constructed conversation against the actual evidence of the conversation, namely the utterances themselves in their context. Although the terms may be slightly confusing here, the model which I developed of A’s model is, in effect, a hypothesis as to a set of beliefs which would permit a rational agent to achieve this particular coherent conversation under the contextual circumstances. Thus, beginning from some reasonably inferred initial state, A’s model can be updated act by act to reflect additions to and deletions from the state of the conversation. Figure 10 shows the initial second of the protocol, the initial state of A’s model (after B’s act Bi1, which is directed to the experimenter and is not further considered here), the changes in A’s model resulting from A and B’s conversational acts, and the state of A’s model after A’s act Ai2.9 The initial state reflects A’s immediate domain knowledge and her immediate domain goal. The changes in A’s model due to the acts of both A and B represent A’s perceptions of the effects on the shared conversational model. Note that a similar set of perceptions, changes, and states can be constructed for B. Both A’s and B’s models, though, should be viewed as representing what A and B respectively believe the state of the conversation to be. Thus from an initial state of (i) A’s not knowing the first letter of the sequence and (ii) A’s wanting B to inform her of the first letter, the sequence of acts Bi2, Ai1, Ai2 results in the state of A’s model being that (i) A does not know the first letter of the sequence, (ii) A wants B to inform her of the first letter, (iii) it is A’s turn in the conversation, and (iv) A has informed B that A does not know the first letter.
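
This act-by-act updating can be made concrete with a small illustration. The following is a minimal sketch, assuming predicates are written as plain strings and using an illustrative function name (apply_act); the simulation discussed in Chapter VI uses a richer rule representation.

# Minimal sketch of act-by-act updating of A's model (cf. Figure 10).
# Predicates are plain strings here; the Chapter VI simulation uses a
# richer representation.

def apply_act(state, additions=(), deletions=()):
    """Return a new conversational state with the act's effects applied."""
    new_state = [p for p in state if p not in deletions]
    new_state.extend(p for p in additions if p not in new_state)
    return new_state

# Initial state of A's model, after B's act Bi1.
state = ["not(know(A,<first letter>))",
         "wants(A,inform(B,A,<first letter>))"]

# Bi2: give-turn(B,A)
state = apply_act(state, additions=["accedes(B,turn(A))"])
# Ai1: acknowledge-turn(A)
state = apply_act(state, additions=["turn(A)"],
                  deletions=["accedes(B,turn(A))"])
# Ai2: inform(A,B,{?X = doesn't have first letter})
state = apply_act(state,
                  additions=["informed(A,B,not(know(A,<first letter>)))"])

print(state)   # the state of A's model after act Ai2, as in Figure 10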

 

Using this technique, the state of A’s model of the conversation was determined after each act identified in the transcribed segment of the protocol. This account of A’s model is set out in Appendix C.

 

 

 

T  00.0   00.2   00.4   00.6   00.8   01.0...  

=============================================

Av                   blank...................

Ap (turn'g-hUBeB)hDB... 

Ai               Ai1 Ai2

Bv 'kay....................

Bp (hAeA,rlhU)   rlhD,together,leaning-back 

Bi Bi1           Bi2   

 

 

Bi1:  {indicating that conversation can start}

      = not(know(A,<first letter>))

      = wants(A,inform(B,A,<first letter>))

Bi2:  give‑turn(B,A)

      + accedes(B,turn(A))

Ai1:  acknowledge-turn(A)

      ‑ accedes(B,turn(A))

      + turn(A)

Ai2:  inform(A,B,{?X = doesn't have first letter})

      + informed(A,B,not(know(A,<first letter>)))

 

 

not(know(A,<first letter>))

wants(A,inform(B,A,<first letter>))

turn(A)

informed(A,B,not(know(A,<first letter>)))

 

Figure 10. Illocutionary interpretation of the protocol. The figure contains the first second of the protocol, the initial state of A’s model (after B’s act Bi1, which is directed to the experimenter and is not further considered here), the changes in A’s model resulting from A and B’s conversational acts, and the state of A’s model after A’s act Ai2. “=” denotes a predicate initially part of the state, “+” denotes addition of a predicate to the state, and “‑” denotes removal of a predicate from the state.

 

Conversational Operators

 

From the illocutionary acts and the state‑model representations of the conversational structure, I initiated development of a set of operators for the acts, as delineated in Appendix D. The representation used here and in Appendix D is a simplification, for purposes of clarity, of the more complex form of rule actually used in the simulation discussed in Chapter VI. An example of one of the conversational operators is presented in Figure 11. The left‑hand‑side “IF” part of the operator represents a set of felicity conditions for execution of the operator. The right‑hand “THEN” clause of the operator identifies the act (or acts) to be taken if the “IF” conditions are satisfied. These acts constitute the conversant’s contribution to the structure of the conversation. The changes in the conversant’s account of the mutual model of the conversation are noted in the “EFFECTS” part of the operator.

 

 

 

IF    not(mutually‑known(me,Other,wants(me,Act)))

      request(me,Other,Act)

      turn(me)

THEN  repeat(request(me,Other,Act))

EFFECTS     ‑ not(mutually‑known(me,Other,wants(me,Act)))

      + mutually‑known(me,Other,wants(me,Act))

 

Figure 11. Operator repeat‑act‑1.  The left‑hand‑side clauses are matched against the conversant’s model. If all clauses are true and the operator is executed, the acts “repeat” and “give-turn” are performed, with their variables instantiated as matched in the “IF” clauses. The conversant’s model is modified by the accordingly instantiated “EFFECTS” clauses.
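
To make the operator format concrete, one possible encoding of repeat‑act‑1 is sketched below. The dictionary layout, the clause strings, and the treatment of Other and Act as variables to be bound by a matcher are assumptions of this sketch, not the rule form actually used in the simulation of Chapter VI.

# Illustrative encoding of operator repeat-act-1 (cf. Figure 11).
# A matcher would bind the variables Other and Act against clauses in
# the conversant's model before the THEN and EFFECTS clauses are used.
repeat_act_1 = {
    "name": "repeat-act-1",
    "if": [                                            # felicity conditions
        "not(mutually-known(me,Other,wants(me,Act)))",
        "request(me,Other,Act)",
        "turn(me)",
    ],
    "then": [                                          # act(s) to perform
        "repeat(request(me,Other,Act))",
    ],
    "effects": {                                       # changes to the model
        "delete": ["not(mutually-known(me,Other,wants(me,Act)))"],
        "add":    ["mutually-known(me,Other,wants(me,Act))"],
    },
}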

 

 

Thus, given the state of A’s model of the conversation just after act Bi6, the “IF” part of the operator matches clauses 7 and 2 of the model, plus act Ai4 of the conversation. This instantiates the operator as shown in act Ai5. The instantiated “EFFECTS” clauses are then applied to the state, resulting in the new state following the act. The details of the process are set out in Figure 12. Examples of meta-locutionary conversational operators developed from analysis of the protocols are presented in Appendix D.

 

 

 

A's state after act Bi6:

      (1)  wants(A,inform(B,A,<first letter>))

      (2)  turn(A)

      (3)  informed(A,B,not(know(A,<first letter>)))

      (4)  mutually-known(wants(B,turn(B)))

      (5)  wants(B,clarify(A,?X))

      (6)  mutually-known(not(know(A,<first letter>)))

      (7)  not(mutually-known(wants(A,inform(B,A,<first letter>))))

A's act Ai5:

      repeat(A,request(A,B,inform(B,A,{?Y = first letter})))

      give‑turn(A,B)

Effects on A's model:

      ‑ not(mutually-known(wants(A,inform(B,A,<first letter>))))

      ‑ mutually-known(wants(B,turn(B)))

      ‑ turn(A)

      + mutually-known(wants(A,inform(B,A,<first letter>)))

      + turn(B)

A's State after act Ai5:

      (1)  wants(A,inform(B,A,<first letter>))

      (3)  informed(A,B,not(know(A,<first letter>)))

      (5)  wants(B,clarify(A,?X))

      (6)  mutually-known(not(know(A,<first letter>)))

      (8)  mutually-known(wants(A,inform(B,A,<first letter>)))

      (9)  turn(B).

Figure 12. Application of operator repeat-act-1 after act Bi6. The enumerated facts represent the state of A’s active memory.  At the beginning of the figure, A’s state of understanding of the conversation is portrayed immediately after B takes act Bi6. Based on this state, then, A takes the acts which make up act Ai5. The effects of these acts on A’s model of the conversation are then set out. The figure concludes with A’s new model of the conversation, after application of the effects of act Ai5.  Note that the changes in conversants’ turns are effected through a simultaneous application of a give‑turn operator.
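
The state transition portrayed in Figure 12 can also be traced with a short sketch. The clauses below are already instantiated, as in the figure; the set operations stand in for the pattern matching and effect application that the actual simulation performs, and the variable names are illustrative only.

# Sketch of applying repeat-act-1 (plus the simultaneous give-turn
# operator) after act Bi6, following Figure 12.

state_after_Bi6 = {
    "wants(A,inform(B,A,<first letter>))",
    "turn(A)",
    "informed(A,B,not(know(A,<first letter>)))",
    "mutually-known(wants(B,turn(B)))",
    "wants(B,clarify(A,?X))",
    "mutually-known(not(know(A,<first letter>)))",
    "not(mutually-known(wants(A,inform(B,A,<first letter>))))",
}

# Instantiated effects of A's act Ai5.
deletions = {
    "not(mutually-known(wants(A,inform(B,A,<first letter>))))",
    "mutually-known(wants(B,turn(B)))",
    "turn(A)",
}
additions = {
    "mutually-known(wants(A,inform(B,A,<first letter>)))",
    "turn(B)",
}

state_after_Ai5 = (state_after_Bi6 - deletions) | additions
for clause in sorted(state_after_Ai5):
    print(clause)      # reproduces A's state after act Ai5 in Figure 12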

 

From these operators, conversational plans can be developed. Such plans are not intended to describe multiple turns but rather to show how different operators can be combined through a planning process to achieve complex acts within a turn. For example, if the conditions for application of repeat‑act‑1 were present except for “turn(me),” then an operator such as T1 might be applied to facilitate repeat-act-1. I want to stress here that the use of operators which can be embodied in rules does not mean that I have proposed a planning system. In the set of operators which were developed from the protocol study and adapted into the simulation, none relied on backchaining or chained through more than a single cycle. Rather, the operators were constructed to model (1) situated responses to local context, (2) goal-directed inference reflecting intentionality, and (3) understanding. Conversations resulting from application of the operators as written achieve their interactive behavior entirely through recognition of situations and the consequent posting of goals. Suchman (1987) has observed that real conversations are not planned top-down, although they can usually be parsed post hoc into plan-like structures:

 

While the organization ... of any interaction can be analyzed post hoc into a hierarchical structure of topics and subtopics, or routines and subroutines, the coherence that the structure represents is actually achieved moment by moment, as a local, collaborative, sequential accomplishment. This stands in marked contrast to the assumptions of students of discourse to the effect that the actual enactment of interaction is the behavioral realization of a plan. Instead, every instance of coherent interaction is an essentially local production, accomplished collaboratively in real time .... (Suchman, 1987, p. 94)

 

Accordingly, the operators proposed here are to be interpreted as constituting local behaviors which, in the aggregate, produce coherent linguistic action. In the implementation of the model as a rule‑based system, as discussed in Chapter VI, the felicity of every operator is assessed on each cycle. Of course conversations have structure; they are intentional processes. In the model proposed here, intentionality is represented by the posting of goals in active memory. Whether a goal is subsequently attained is not a function of the operator that posted the goal. Rather, its achievement or abandonment is a consequence of the matching and execution of later instantiations of operators in response to the local conditions then existing. A person may enter into a conversation intending to ask after the health of the other party’s spouse but never attain that goal because other, more urgent matters capture the conversational focus. The person is not stuck with a stack‑based planning model which would prevent concluding the conversation without mentioning the other’s spouse. Such goals are simply abandoned in the face of later local situations. The meta‑locutionary model, then, despite relationships among operators, has more of an opportunistic control structure than that of traditional planning systems. The overall structure of a conversation certainly exists in the sense that a given initial--or subsequent, for that matter--state of a conversant’s memory plus his set of operators creates, in effect, an expectation as to the conversation’s path. Yet the coherence of the interaction does not depend on that specific path being followed. Each change in the state as a result of interaction creates a new implicit path for the conversation (which could be the same as the old path if the interaction follows expectations). It is also important to note that the conversants’ acts noted in the protocol are complex: they are effected through multiple simultaneous acts at different conversational levels. For example, A’s act Ai5 shown in Figure 12 consists of acts at both the repair and turn‑taking levels.
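
This opportunistic control regime can be sketched as a simple recognize‑act cycle: on every cycle the felicity conditions of all operators are tested against active memory, and each operator whose conditions hold contributes its acts and effects. The function names and the set-of-clauses memory are assumptions of this sketch rather than the implementation presented in Chapter VI.

# Hedged sketch of the opportunistic control regime. Operators are
# dictionaries with "if", "then", and "effects" entries (as in the
# earlier sketch); active memory is a set of ground clause strings.

def felicitous(operator, memory):
    """All of the operator's IF clauses are present in active memory."""
    return all(clause in memory for clause in operator["if"])

def cycle(operators, memory):
    """One cycle: every felicitous operator fires. Goals posted in memory
    may later be achieved or simply abandoned as the situation changes."""
    acts = []
    for op in operators:
        if felicitous(op, memory):
            acts.extend(op["then"])
            memory = (memory - set(op["effects"]["delete"])) \
                     | set(op["effects"]["add"])
    return acts, memory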

 

Application of the Model

 

The modeling techniques described above permitted development of plausible explanations of a number of interesting aspects of the conversation. Here I analyze a portion of the protocol in terms of meta-locutionary acts involving reference, turn-taking, and repair. The relevant section of the transcript is presented in Figure 13. The analysis suggests that even in such an apparently simple domain as the joint-recall task, the conversants encounter coherence difficulties which require relatively complex meta-locutionary interaction to resolve. Further, the conversants’ speech acts demonstrate an information‑systemic rationale for the use of indirect speech acts; this arises from the conversants’ mutual responsibility for maintenance of the conversational belief structure.

 

 

T  00.0   00.2   00.4   00.6   00.8   01.0   01.2   01.4   01.6   01.8...

=========================================================================

Av                   blank.................................

Ap (turning‑hUBeB) hDB...

Ai                 Ai1Ai2

Bv 'kay....................

Bp (hAeA,rlhU)   rlhD,together,leaning‑back    mO.......

Bi Bi1           Bi2                           Bi4

 

 

T  02.0   02.2   02.4   02.6   02.8   03.0   03.2   03.4   03.6   03.8...

=========================================================================

Av                 is...the.....first...one....so....what...was.the.first

Ap        eBlink

Ai        Ai3        Ai4                                ....Ai5

Bv                                               "O"...........

Bp                                mO

Bi                                Bi5             Bi6

 

 

T  04.0   04.2   04.4   04.6   04.8   05.0   05.2   05.4   05.6   05.8...

=========================================================================

Av ....one........                                  OK...........

Ap                                                       eBlink

Ai                                                  Ai6  Ai6a

Bv                             "O"......                       an'.I.....

Bp                  eBlink

Bi                  Bi7        Bi8                             Bi10

 

 

T  06.0   06.2   06.4   06.6   06.8   07.0   07.2   07.4   07.6   07.8...

=========================================================================

Av            "I"..................          "S"...............   "U"....

Ap         hNod                        hNod/eBlink            hNod/eBlink

Ai         Ai7Ai8                      Ai9   Ai10                  Ai11

Bv ...                                                            "U"..

Bp                                hNod,hNod/eBlink.                hNod

Bi                                Bi10 Bi11                        Bi12

 

Figure 13.  Partial transcript of experimental protocol. This conversation is the beginning of the second trial for subjects A and B.

 

 

 

Reference/Information Acts

 

One level of the taxonomy of meta‑locutionary acts presented in Chapter III was the reference/information level. I begin the analysis of the protocol section by looking at various acts at this level.

 

Request

 

A’s act Ai2, about 0.5 seconds on the transcript timeline in Figure 13, is an act of requesting that B provide information about the first symbol in A’s sequence, a blank. This interpretation is plausible in light of the following factors: (1) We believe that A understands the experimental task of joint recall because she competently performed the task in the first trial. Accordingly, she will have as a goal confirming the entire sequence as mutual. (2) Moreover, we believe that A knows that her first symbol is a blank (because she says so). This leads to the reasonable assumption that A has developed, as a subgoal, obtaining from B the letter which corresponds to the blank. Therefore, A’s act is probably something like request(A,B,{first letter is “blank”}).10 After act Ai2, A’s active‑memory model of the conversation will then be something like11

 

not(know(A,<first letter>))

goal(A,assert(B,A,<first letter>))

turn(B)

mutually‑known(A,B,not(know(A,<first letter>))).

 

Assert

 

At Bi6, B’s act--in response to A’s act Ai4--is “assert(B,A,{first letter is “O”}).” This interpretation is plausible in light of three factors. First, B apparently understands the experimental task and can be construed as having the goal that both conversants have mutual knowledge of the first letter of the sequence. Second, A’s request (combining Ai2 and Ai3) was made as part of the conversation, so the conversants have directly copresent knowledge of the request; it would be plausible in the conversational context, then, for B to have, as a specific goal, responding to A’s request. Third, B’s first letter is, in fact, “O.” His act at Bi6, then, can be seen as an assertion which provides information about a conversational referent, namely the first letter of the sequence. B’s active‑memory model of the conversation is now probably something like

 

asserted(B,{first letter is O})

turn(B).

 

Acknowledge

 

At Bi8, B has (again) asserted that the first letter was “O.” At A’s next act, Ai6, she confirms that she understood that the first letter was “O.” Her act is probably something like acknowledge(A,{first letter is O}), confirm‑mutual(A,B,{first letter is O}). Note that the second part of the act is at the mutual‑knowledge‑repair level, for reasons discussed with respect to repair below.

 

Another example of an acknowledge act is at Ai7. B has, at Bi10, just said “an’ I ...” Thus, as A knows that the next letter is in fact “I,” she acknowledges B’s assertion with her act Ai7, which might be coded as acknowledge(A,assert(B,{next letter is I})).12

 

Turn‑Taking Acts

 

The protocol contains examples of situations both where turn‑taking occurred and where it might have occurred. At Ai3, A could have given a turn but does not do so. The context for this act is that B has invited A to speak, A says “Blank,” B indicates that he wants to take a turn, and then A blinks. At this point A does not give the turn to B, though, since she goes on to extend her verbal utterance. In the meta‑locutionary interpretation of the protocol, B’s act Bi4 constitutes a control act for the conversation, and A’s subsequent act Ai3 is an acknowledgment of this.13 Note, though, that this acknowledgment, as lexicalized, is more of a place‑holder than a full‑blown expression of comprehension. This act is, in a sense, a vestigial one. That is, it marks a place in the conversation where A would have interposed an indication of miscomprehension had she failed to understand B’s act. As A in fact did understand the act, no indication of miscomprehension was needed. It aids the conversational process, though, for A to mark this place in the conversation because, absent such a marker, B may be forced to wait an indefinite period for the miscomprehension indicator before he continues. Accordingly, A’s act at Ai3 can be seen either as an acknowledgment of B’s act at Bi4 or as holding her turn--because B didn’t respond for almost a second after A said “Blank.”

 

Give-turn

 

A’s act at Ai2, where she utters “Blank,” can be viewed as a complex act. It is not only a request, as discussed above, but also the giving of a turn. The fact that A expects B to take the turn after Ai2 is apparent from A’s subsequent pause between 01.6 and 02.5 seconds on the protocol timeline. A’s complete act at Ai2, then, is probably something like request(A,B,{first letter is “blank”}), give‑turn(A,B). Giving the turn follows naturally from A’s making a request of B.

 

Similarly, A’s act at Ai6a can also be viewed as something like give‑turn(A,B). Here A has just acknowledged at Ai6 that she understands B’s assertion that the first letter is “O.” In the absence of a turn‑taking or turn‑giving act, the control of the conversation would appear to rest with A. B’s subsequent picking up of the conversation at that point is evidence that the turn has been given to him.

 

Acts Repairing the Mutual Model

 

As was seen to be the case with turn‑taking, there were contexts where an act occurred and other contexts where an act might have occurred but did not. Similar circumstances occur with respect to acts which repair the conversants’ mutual model of their interaction. The experimental protocol is replete with occasions where one or both of the conversants is in a position to detect divergence of their models of the conversation. In some cases, repair acts are undertaken; in others, repair is apparently deemed not necessary. The following discussion examines examples of both cases.

 

Clarification Repairs Made

 

A’s acts at Ai4 and Ai5 can both be viewed most plausibly as acts of repair of the mutual model of the conversation. At Ai2, A has--she believes--made a request for B to assert the first letter of the sequence because she has a blank. She waits, she acknowledges that B seems to want a turn, and yet B doesn’t say anything. At Ai4, then, A elaborates her utterance to make explicit the informational content of her act at Ai2.

 

At about 3.3 seconds into the conversation, at act Bi6, B asserts that the first letter of the sequence is “O.” A, however, goes on at act Ai5 to request explicitly that B tell her the first letter of the sequence. Why, then, does A take act Ai5?

 

Conversant A naturally enough does not have access either to B’s actual intentions in saying “O” or to any genuinely objective account of the shared conversational structure. Rather, she depends on her own interpretative faculties to place B’s utterance coherently into their conversation. In this case, A’s action can be accounted for by realizing that she interpreted B’s utterance “O” as “Oh”--that is, as acknowledging A’s statement at Ai4 but not telling her what the letter actually is. The effect of this misunderstanding can be traced through A’s model of the conversation. Before B’s act Bi6, A’s model is:

 

wants(A,inform(B,A,<first letter>))

turn(A)

asserted(A,B,not(know(A,<first letter>)))

mutually-known(wants(B,turn(B)))

wants(B,clarify(A,?X))

mutually-known(not(know(A,<first letter>))).

 

At Bi6, B’s act (from B’s point of view) is “assert(B,A,{first letter is “O”}),” but A hears “Oh” and thinks that B’s act is “mutually-known(A,B,not(know(A,<first letter>))),” but not “acknowledge(B,request(A,B,assert(B,A,{first letter}))).” As a consequence, A updates her model of the conversation as follows:

 

‑ not(know(A,<first letter>))

+ mutually-known(not(know(A,<first letter>)))

+ not(mutually-known(wants(A,assert(B,A,<first letter>)))).

 

Applying these changes, A’s model is now:

 

wants(A,assert(B,A,<first letter>))

turn(A)

asserted(A,B,not(know(A,<first letter>)))

mutually-known(wants(B,turn(B)))

wants(B,clarify(A,?X))

mutually-known(not(know(A,<first letter>)))

not(mutually‑known(wants(A,assert(B,A,<first letter>)))).

 

Note that A and B’s models have diverged significantly at this point, since B’s model of the conversation, as discussed above with respect to assert acts, presumably contains something like

 

asserted(B, A,{first letter is “O”})

mutually‑known(A,B,{first letter is “O”}).

 

In A’s version of the conversation, though, the state is that A has told B that her first letter is blank, B has indicated that he has understood this, but B has not told her what the letter was. A thus concludes that B must not have interpreted her utterance at Ai4 as a request. This leads her to then repeat her request in act Ai5. When B in turn repeats “O,” A presumably realizes their misunderstanding and updates her model accordingly.

 

It could be argued that B’s long pause (between 01.5 and 02.5 seconds of the transcript) before A again begins speaking represents his non‑comprehension of A’s act at Ai2. This interpretation, however, is implausible in light of the express negotiation of interaction patterns which had just taken place a few minutes before in the previous “training” protocol. It is consistent with the negotiated forms of interaction from the first protocol for B to understand that A had a blank for her first letter. If anything, B’s pause should be ascribed to an expectation that A was going to continue recounting her sequence.

It may also be that physical constraints on cognition determine some aspects of A’s utterance at Ai5. If she hears B say “O” while she is producing the utterance, she may simply finish her utterance. In either case, though, B hears A make a clarification of her original request.

 

Implications for Computational Views of Indirect Speech Acts

 

The modeling and explication of this exchange also has some implications for the theory of indirect speech acts. Unlike indirect acts resulting from politeness (as discussed in Chapter I), here we see evidence of indirection arising from coupling a desire for efficiency with the creation of the conversation as a shared structure. A’s act at Ai2, where she opens the verbal part of the conversation with “Blank,” is indirect. Although her direct act is, on its face, an assertion, she clearly intends the utterance to function as a request. B certainly interprets it as a request. Thus if the underlying logical structure of A’s act were really request(A,B,inform(B,A,{?Y = first letter})), why didn’t she just make a direct request or question along the lines of “What’s the first letter?”

 

The reason for A’s indirection comes from the meta‑character of A’s discourse interaction. She is engaged in the process of building with B a mutual model of the context. For both A and B to have mutual knowledge of the true character of the sequence, A needs to tell B that she has blank for her first letter. But, if this is the case, why then did A not say something like “I have a blank so what’s the first letter?” Here the reason appears to be efficiency. It is more efficient just to assert “I have a blank” because this utterance also functions as an indirect speech act constituting a request. A’s striving for efficiency is apparent from the form of her actual locution, which is simply “Blank.” When A misapprehends that B has perceived only her direct act and has failed to perceive the indirect act, she then produces the request as a direct act: “... so what is the first letter?”

 

This analysis thus suggests that some indirect speech acts may result from meta‑locutionary attempts to maintain the shared model of the conversation while interacting efficiently. This is a systematic, as opposed to a socially conventional, motivation for indirect speech acts. That is, while some acts appear to establish social relationships or conform to social expectations of politeness, the indirect act taken by A here appears to arise directly from the nature of language as a medium for interaction.

 

Clarification Repairs Not Made

 

In the preceding discussion, I suggested that at Bi6 B intends to tell A that the first letter is “O” but A understands B as meaning “Oh” and simply confirming that he understood her. Having just gone through this miscommunication and minor repair, B is again coïncidentally misunderstood by A in his act Bi10. Interestingly, the misunderstanding is precisely the converse of what they had just experienced. At Ai6a, A gives the turn back to B. B then says “an’ I ...” and A immediately confirms at Ai7 and Ai8 that the next letter is indeed “I.” A’s further recitation of the succeeding letters is evidence that she believes it to be mutually understood that the second letter is “I.” At Bi10, then, it is probable that A interprets B’s act as something like inform(B,A,{next letter is “I”}). As a result, A’s model of the conversation is plausibly in this state:

 

mutually-known(A,B,{first letter is O})

turn(B)

mutually-known(A,B,{second letter is I})

 

However, A’s interpretation, which then leads to her acts at Ai7 and Ai8, is mistaken. Obviously, she does not have direct access to B’s sequence and therefore does not know that B’s second letter is a blank! In other words, if we were following B’s model instead of A’s, B’s act at Bi10 is actually the start of something like “inform(B,A,{my next letter is a blank}).” Lexicalized, the full locution of B’s aborted utterance would probably have been “and I ... have a blank.” Through coïncidence, A’s second letter turned out to have been “I,” so she heard B’s utterance not as a word but as a letter. Note that this is virtually the exact converse of her misunderstanding about “O.”

 

In this case, though, no repair utterances ensue. This appears to be the result of fairly rapid inference on B’s part. A stretches out her locution confirming “I,” and after 0.5 second B nods twice rapidly and blinks (perhaps in astonishment). B does not need to repair the conversation because at that point he knows that the knowledge that “I” is the second letter is mutual. At this point, then, the conversants have different models of what they think the discourse is, but only B knows this. He is satisfied to continue confirming the recitation of the subsequent letters because the future path of the conversation is not likely to be adversely affected. Eventually the conversants complete their joint recall of the sequence without A ever learning that B’s second symbol was a blank.

 

Consequences of Repairs

 

The attenuated acknowledgment at Ai3 is quite a contrast to the explicit verbal acknowledgment given by A after B’s act Bi8. When at Bi8 B repeats “O,” A responds at Ai6 with “OK.” In both cases A is effectively confirming the mutuality of the shared model of the conversation. Why, then, are the acts she takes so different? The answer, as suggested in the model set out in Appendix C, is that in Ai6 A was coping with the consequences of the preceding misunderstanding. If this had been a response to B’s first utterance “O” at Bi6, a normally vestigial response might well have been appropriate, since in this case there would have been no miscomprehension. The actual circumstances at Ai6, though, were that miscomprehension had in fact been communicated by A, so a place‑marker response could be seen as ambiguous; an explicit affirmative response was thus called for.

 

These differences can be seen in the particulars of the acts which A takes at Ai3 and Ai6. At Ai3, A’s act is simply “assert(A,B,comprehend(A,take‑turn(B))).” At Ai6, her act is necessarily more complex: “acknowledge(A,{first letter is O})” and “confirm‑mutual(A,B,{first letter is O}).” It is the addition of the confirm‑mutual act which results in the explicit acknowledgment.

 

The Computational Utility of Meta-Locutionary Acts

 

In looking at examples of various meta-locutionary acts at different conversational levels, I have tried to point out how explication of the interaction in terms of these acts provides a plausible, rational basis for reconstruction of the conversants’ linguistic behaviors. In considering turn‑taking, for example, the analysis suggests that meta-locutionary acts at the turn-taking level can be applied to a representation of the conversational context to produce an adequate explanation of control. These qualities might be extended to actual control of interaction through appropriate implementation of the operators. In Chapter VI, then, I present a partial implementation of the model in order to show that the theory is sufficient to produce behaviors that, under simulated conditions similar to those encountered and created by the actual conversants, reasonably replicate the kind of interaction observed in the protocol.

 

 

9. In Figure 9 and in the discussion, these notational conventions are followed: Indices: T time, Av A's verbal, Bv B's verbal, Ap A's physical, Bp B's physical, Ai A's illocution, Bi B's illocution; Body: h head, e eyes, a arm, ah hand, r right, l left, f finger, m mouth; Directions: U up, D down, L left, R right; Referents: A conversant, B conversant, O object, E experimenter; Actions: W forehead-wrinkles, C forehead-clear, O open-mouth, S close-mouth; Notation: () previous state, ... duration.

 

10. Actually, Ai2 is somewhat more complex in that it also has a turn-taking component. The turn-taking aspects of the act are discussed later in this section.

 

11. The notation used in these examples is more abstract than that used in the simulation. Conversants no doubt keep track of additional information about these states, including temporal information. For purposes of understandability, the memorial predicates used here are simplified to make clear their most relevant features.

 

12. It turns out that B’s act Bi10 could not have been assert(B,<next letter is I>) because he did not know the next letter was “I.” I examine the consequences of this in the discussion of repair acts below. Nevertheless, A’s act Ai7 is still a valid act, because it depends on her interpretation of what B meant at Bi10.

 

13. The analysis here of A’s blink at Ai3 as an act of acknowledgment does not imply that blinks can be uniformly interpreted as having this meaning. First, the meaning of action is provided by its context; a blink under different circumstances will communicate different information to the other conversant--even “I've got some dust in my eye.” Second, actions may be involuntary concomitants of communication. See the discussion of intention and action in Chapter III and the discussion of lexicalization in Chapter VI.