

Computational Grounding: An Overview of Common Ground Applications in Conversational Agents

Maria Di Maro
p. 133-156

Abstract

This work reviews the literature on grounding in conversational agents, one of the pragmatic aspects adopted to ensure better communicative efficiency in dialogue systems. The paper starts with a general description of the theory of grounding. As far as its computational implications are concerned, grounding phenomena are first framed within the common grounding processes described in terms of grounding acts. Secondly, they are considered within the argumentation-related framework in which already grounded information is processed. Open issues and application gaps are finally highlighted.



1. Introduction

In Stalnaker’s words, “when speakers speak, they presuppose things and what they presuppose guides both what they choose to say and how they intend what they say to be interpreted. To presuppose something means to take it for granted as background information – as common ground among the participants during their conversation” (Stalnaker 2002, 701). In fact, communication is a joint activity in which two speakers must share information or, in other words, must have a common ground, i.e., mutual knowledge, mutual beliefs, and mutual assumptions, as the foundation for mutual understanding (Clark and Brennan 1991). To coordinate on this process, speakers need to update, check, or revise their common ground through a process that constantly evolves over time. The importance of focusing on this communicative process reflects the need to bridge the gap left in the study and development of dialogue systems by the lack of insights into the application of pragmatics to conversational agents. Although pragmatics is very important in dialogue, as it is one of the aspects governing interpretation, understanding, and efficiency, its computational application is mainly focused on the study and identification of speech acts (Leech 2003). Furthermore, in the last ten years, semantics has been a more investigated topic than pragmatics within the dialogue systems field, especially as far as the correct recognition of the received intent is concerned, as shown by the publications on dialogue systems (Figure 1).

Figure 1: Number of Google Scholar results on publications about dialogue systems applying semantics versus pragmatics from 2010 to 2020 [retrieved on 30/04/2021].

On the other hand, as far as pragmatics is concerned, research on Common Ground has started to thrive in the last ten years (Figure 2). Nevertheless, a more in-depth analysis of pragmatic phenomena related to Common Ground construction and consistency checks in human-machine interaction, such as Clarification Requests, appears to be a missing spot in the research on dialogue systems.

Figure 2: Number of Google Scholar results on publications about clarification requests and common ground from 2010 to 2020 [retrieved on 30/04/2021].

Different scholars (Bousquet-Vernhettes, Privat, and Vigouroux 2003; Beun and Eijk 2004; Purver 2004a; Roque and Traum 2009; Hough, Zarrieß, and Schlangen 2017; Müller, Paul, and Li 2021) have highlighted the urgency of including pragmatic aspects in their systems to improve the communication process. This need results from the users’ need to interact with an agent capable of cooperating in communicative actions.

This survey aims both at presenting a literature review on grounding theories and their application in dialogue systems, and at pointing out pragmatic aspects which still need to find a computational model. The paper is organised as follows: in the next section the theory of grounding is summarised; in section 2.1, the grounding acts reported in (Traum 1994) are explained; in section 3, computational applications of the theory are reported, starting from the aforementioned grounding acts and moving to more general works; finally, open issues concerned with the processing of grounded information are presented.

2. Grounding the Grounding Process

Stalnaker (Stalnaker 2002) defined the notion of common ground as the sum of the interlocutors’ mutual, common, or joint beliefs and knowledge. The importance of cooperation for a successful conversation has been pointed out since Grice (Grice 1975), and in Grice (Grice 1989, 65) the term common ground was introduced in relation to communicative processes. In fact, participants in a conversation must have grounded knowledge in order to understand each other. Common ground, as Clark (Clark 2015) acknowledged, can be of four main types: personal, local, communal, and specialised. Personal Common Ground is established by collecting information over time through communicative exchanges with an interlocutor, and it can be considered a record of shared experiences with that person. A part of Personal Common Ground is Local Common Ground, which is tied to a piece of information obtained from a single exchange with a known or unknown interlocutor. According to Clark (Clark 2015), information of this type can be, for instance, the opening hours of a shop, train timetables, and so on. Communal Common Ground refers to the information shared with people belonging to the same community, that is to say, people that share general knowledge, knowledge about social background, education (schools attended, levels of education attained), religion, nationality, and language(s). Within a larger community, a smaller one can be found: Specialised Common Ground pertains to people that share particular areas of expertise in some domain of knowledge, such as colleagues, friends, or acquaintances. It is marked by the specialised vocabulary of that specific domain, such as medicine, law, and so on.

The process of grounding takes place in dialogue when the interlocutors update their common ground by accumulating information in the perceived common ground. In Clark and Schaefer (Clark and Schaefer 1989), the classical model of grounding is illustrated: dialogue participants reach mutual belief by checking mutual understanding. This is accomplished through contributions, that is, the communicative actions collected through dialogue. Contributions can be divided into a presentation phase and an acceptance phase. During the presentation phase, the utterance is presented, whereas in the acceptance phase the utterance is accepted by the interlocutor as understood. The acceptance or refusal of the utterance is signalled via diverse types of feedback. The refusal, for instance, can depend on different aspects, such as acoustic, semantic, or intentional misunderstanding. According to Allwood et al. (Allwood, Nivre, and Ahlsén 1992, 4–5), feedback is indeed a linguistic mechanism which enables interlocutors to exchange information about four different basic communicative functions: i) contact (i.e., feedback expressing the will and/or ability to continue the interaction); ii) perception (i.e., feedback referring to the will and/or ability to perceive the message); iii) understanding (i.e., feedback about the will and/or ability to understand the message); iv) attitudinal reactions (i.e., feedback referring to the will and/or ability to react and respond appropriately). According to Clark and Brennan (Clark and Brennan 1991), the first main form of positive evidence for acceptance are acts of acknowledgement (the complete classification of grounding acts is detailed in Section 2.1), in particular: i) back-channel responses, which include continuers such as uh huh or yeah, used to signal that the utterance has been understood and that there is no need to initiate a repair in the next turn; ii) assessments (e.g., gosh, really), which are usually produced without taking the turn. A second form of positive evidence is the initiation of the relevant next turn: suppose A is trying to ask B a question; if B understands it, the answer will be expected in the next turn. Questions and answers constitute adjacency pairs. In other words, once the first part of the adjacency pair is uttered, the second part is considered conditionally relevant for the next turn. The third and most basic form of positive evidence is the continued attention provided by an attentive listener. In conversation, people monitor their partner from time to time and immediately adapt to their feedback. If A utters something and notices that B was not paying attention, A could assume that B did not understand. B must show that he is paying attention through different social signals, like eye gaze or other communicative feedback. A can, therefore, use phatic expressions (e.g., Are you listening?, You know what I mean?) to understand whether B is following, or she can elicit attentive listener feedback in B. On the other hand, B could also want to show his attention by using communicative feedback. Positive evidence of understanding, thus, is provided by communicative feedback and comes with attention that is unbroken or undisturbed (Buschmeier 2018; Buschmeier and Kopp 2018).
Furthermore, according to (Clark 1996, 147–48), these actions are processed following the concepts of upward completion, i.e., in a ladder of actions, it is only possible to complete actions from the bottom level up through any level in the ladder, and downward evidence, i.e., in a ladder of actions, evidence that one level is complete is also evidence that all levels below it are complete.
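To make these two principles concrete, the following minimal sketch operationalises them over a four-level ladder. The level names follow Clark's four levels of joint action (discussed again in section 3); the list-based representation and the function names are our own illustration, not an implementation from the surveyed literature.

```python
# A minimal sketch, assuming Clark's four-level action ladder; the
# representation is illustrative, not taken from the surveyed systems.

LADDER = ["attendance", "identification", "recognition", "consideration"]

def completed_up_to(completed_levels):
    """Upward completion: actions can only be complete from the bottom level
    up through some level; return the highest contiguously completed level."""
    highest = None
    for level in LADDER:
        if level not in completed_levels:
            break
        highest = level
    return highest

def downward_evidence(evidenced_level):
    """Downward evidence: evidence that one level is complete is also
    evidence that all levels below it are complete."""
    idx = LADDER.index(evidenced_level)
    return LADDER[: idx + 1]

# Evidence at the recognition level (e.g., a relevant answer) also counts
# as evidence of attendance and identification.
print(downward_evidence("recognition"))
# ['attendance', 'identification', 'recognition']
```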

As argued by Clark and Schaefer (Clark and Schaefer 1989), the strength of evidence that B has understood A can depend on several factors, including the complexity of the presentation, the importance of its understanding, and the closeness among the participants. Moreover, since the acceptance phase can be recursive, as B’s acceptance of A’s presentation needs to be accepted as well, in Traum (Traum 1999) the Strength of Evidence Principle, introduced in Clark and Schaefer (Clark and Schaefer 1989, 268), is preferred to avoid recursion. This principle states that “the participants expect that, if evidence e0 is needed for accepting presentation u0, and e1 for accepting presentation of e0, then e1 will be weaker than e0” (Traum 1999, 2). In other words, the evidence is stronger when the need for acceptance is higher. The authors exemplified the principle as follows: A presents a book identification number, f, six, two; B accepts it by displaying it verbatim, f, six, two; then A accepts B’s acceptance by using weaker evidence like yes. Lastly, B accepts A’s evidence by proceeding to the next contribution. The traditional version of this principle exhorts speakers not to expend any more effort than they need to get their addressees to understand them. Grice (Grice 1975) used two maxims of the cooperative principle to account for the communicative effort: according to the maxim of quantity, the speaker must not make their contribution more informative than is required, and, according to the maxim of manner, they must also be brief and avoid prolixity. In detail, the general principle of least collaborative effort introduced by Clark and Wilkes-Gibbs (Clark and Wilkes-Gibbs 1986) was used by the authors to criticise the general speaker economy principle (Brown 1958), which does not always represent the right strategy for grounding. As claimed by Clark and Wilkes-Gibbs (Clark and Wilkes-Gibbs 1986), there are three main problems with this principle: i) time pressure: speakers tend to limit the effort spent planning an utterance, which can result in incorrect productions; ii) errors that a speaker can make while speaking, which need to be repaired; iii) ignorance of the personal knowledge and beliefs of the interlocutor, which can cause improper utterances. Instead, the authors focus on the minimisation of the collaborative effort, as “speakers and addressees try to minimise collaborative effort, the work both speakers and addressees do from the initiation of the referential process to its completion” (Clark and Wilkes-Gibbs 1986, 26).
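Stated compactly, with notation that is ours rather than Traum's, the principle requires the strength of acceptance evidence to decrease strictly along the chain of acceptances:

\[
\mathrm{accepts}(e_{i+1}, e_i) \;\Rightarrow\; \mathrm{strength}(e_{i+1}) < \mathrm{strength}(e_i), \qquad i \ge 0,
\]

where \(e_0\) is the evidence needed for accepting the original presentation \(u_0\). Since strength strictly decreases at every step, the chain of acceptances bottoms out (e.g., in simply proceeding to the next contribution) instead of recursing indefinitely.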

From a more cognitive point of view, grounding, referred to as an explicit signal of cooperation in dialogue, is also represented in the cooperation model of communication reported by Tomasello (Tomasello 2010) (Figure 3): the communicator C has individual goals, such as the goals and values pursued in their life. If, for any reason, C feels that the recipient R can be of any help in the achievement of some goals, C will produce specific acts which will bring R to do something, know something, or share something. This is represented by C’s social intention, which is expressed through communication. Therefore, a communicative act (verbal or non-verbal) is mutually manifested in the joint attentional frame. C’s communicative intention is consequently shared. C can also draw R’s attention to some referential situation in the external world (referential intention), designed to lead R to infer social intentions via processes of cooperative reasoning (Huang 2017, 282). On the other hand, R attempts first to identify the referent, typically within the space of the common ground, and secondly to infer the social intention, also by relating it to the common ground. Then, assuming that R understands C’s social intention, R can decide whether or not to cooperate as expected (Tomasello 2010; Huang 2017).

Figure 3: Summary of the cooperative model of human communication (C = communicator; R = recipient); Source: Tomasello (2010); All rights belong to their respective owners.

Whereas the cognitive and linguistic aspects of grounding are by now fairly well understood, its computational applications can be prone to diverse difficulties. Pragmatics can sometimes be subjective, contextual, and ambiguous, and its phenomena can be described through one-to-many and many-to-many relationships. Their computational modelling is, therefore, challenging, although different scholars have worked on some aspects, as will be summarised in this work. In the next sections, we will focus on grounding acts, as described in (Traum 1994), and on how they can be mapped onto the research approaches described by different scholars. This work is, therefore, intended as a schematic literature review of some aspects of grounding that can function as a guide and lead to new studies, as research gaps are also highlighted.

Table 1: Conversation Act Types (Traum 1999, adapted); UU and DU stand respectively for ‘utterance unit’ and ‘discourse unit’

Discourse Level | Act Type         | Sample Acts
Sub UU          | Turn-taking      | take-turn, keep-turn, release-turn, assign-turn
UU              | Grounding        | Initiate, Continue, Acknowledgement, Repair, Cancel, RequestRepair, RequestAcknowledgement
DU              | Core Speech Acts | Inform, YesNoQuestions, Check, Evaluate, Suggest, Request, Accept, Reject
Multiple DUs    | Argumentation    | Elaborate, Summarise, Clarify, Q&A, Convince, Find-Plan

2.1 Grounding acts

Traum (Traum 1994) provided a computational model of grounding. In his theory, he introduced a description of the so-called grounding acts, which are speech acts used to ground the traditional illocutionary speech acts (Austin 1975; Searle 1985). In other words, they correspond to “the actions performed in producing particular utterances which contribute to this groundedness” (Traum 1994, 31). In particular, he accounted for the protocol determining, for any sequence of grounding acts, whether the content of the communicated utterances is grounded or not. In Table 1, his conversation act types are presented, among which the grounding acts are listed.
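As a first flavour of how such a protocol can be implemented, the sketch below models grounding acts as transitions over the state of a discourse unit. Traum's actual protocol distinguishes several more states and conditions its transitions on which participant (initiator or responder) performs the act; the four states and the transition rules here are a deliberate simplification of ours.

```python
from enum import Enum, auto

class GroundingAct(Enum):
    """The grounding act types of Traum (1994); cf. Table 1."""
    INITIATE = auto()
    CONTINUE = auto()
    ACKNOWLEDGEMENT = auto()
    REPAIR = auto()
    CANCEL = auto()
    REQUEST_REPAIR = auto()
    REQUEST_ACKNOWLEDGEMENT = auto()

class DUState(Enum):
    """Simplified discourse-unit states; Traum's protocol distinguishes more."""
    OPEN = auto()      # material presented, not yet grounded
    PENDING = auto()   # a repair or an acknowledgement has been requested
    GROUNDED = auto()  # content has entered the common ground
    DEAD = auto()      # unit cancelled; content remains ungrounded

def update(state: DUState, act: GroundingAct) -> DUState:
    """One illustrative transition over the state of a discourse unit."""
    if act is GroundingAct.CANCEL:
        return DUState.DEAD
    if act is GroundingAct.ACKNOWLEDGEMENT:
        return DUState.GROUNDED
    if act in (GroundingAct.REQUEST_REPAIR,
               GroundingAct.REQUEST_ACKNOWLEDGEMENT):
        return DUState.PENDING
    return DUState.OPEN  # Initiate, Continue, and Repair (re)open the unit

# E.g., Initiate, RequestRepair, Repair, Acknowledgement grounds the unit.
state = DUState.OPEN
for act in (GroundingAct.REQUEST_REPAIR, GroundingAct.REPAIR,
            GroundingAct.ACKNOWLEDGEMENT):
    state = update(state, act)
print(state)  # DUState.GROUNDED
```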

Each of the grounding acts considered is described as follows:

Initiate

This act is the initial utterance of a discourse unit and usually corresponds to the first utterance of the presentation phase (Clark and Schaefer 1989).

Continue

This represents the continuation of a previous act performed by the same speaker. A continue is expressed in a separate utterance unit, but is syntactically and conceptually part of the same act.

Acknowledgement

An act of acknowledgement is used to claim or demonstrate understanding of a previous utterance. It may be a repetition or paraphrase of all or part of the utterance, an explicit signal of comprehension such as ok or uh huh, or an implicit signalling of understanding. Typical cases of implicit acknowledgement are answers to questions. Acknowledgements are also referred to by some as confirmations (Cohen and Levesque 1991) or acceptances (Clark and Schaefer 1989). Traum (Traum 1994) prefers the term ‘acknowledgement’ for a signal of understanding, whereas ‘acceptance’ refers to a core speech act signalling agreement with a proposed domain plan.

Repair

A repair is used to change the content of the current discourse unit. It may correspond either to a correction or to the addition of material. Both will change the interpretation of the speaker’s intention. Repair acts should not be confused with domain clarifications. Repairs are concerned merely with the grounding of content. Domain clarifications, on the other hand, which modify grounded content, are considered argumentation acts (Traum 1994). As we will see in the next sections, this particular kind of act, which processes grounded information, can have interesting computational applications.

Cancel

This act closes the current discourse unit as ungrounded. Rather than repairing the current unit, a Cancel abandons it; the speaker’s intention must therefore be expressed, if at all, in a new discourse unit.

RequestRepair

A request for a repair is, conversely, uttered by the interlocutor. It is equivalent to a next-turn repair initiator or clarification request (Schegloff, Jefferson, and Sacks 1977). Often a RequestRepair can be distinguished from a Repair or Acknowledgement only by intonation. Implicit requests have also been studied (Schettino, Di Maro, and Cutugno 2020).

RequestAcknowledgement

This act is used as an attempt to elicit an Acknowledgement act from the other agent. It invokes a discourse obligation on the listener to respond with either the requested acknowledgement or an explicit refusal or postponement (e.g., a follow-up repair or a repair request). Starting from the description of grounding acts, in the next section we will explore the studies that concentrated on their computational modelling, or on some of their aspects, in dialogue systems.

3. Computational Grounding

This section reports on pragmatics applied to dialogue modelling and automatic text processing. This branch of computational pragmatics, especially when applied to conversational agents, mostly deals with corpus data, context models, and algorithms for context-dependent utterance generation and interpretation (Huang 2017, 326). Nevertheless, conversational agents should be able to process not only local but also global structures of dialogues (Airenti, Bara, and Colombetti 1993). Whereas local structures are involved with linguistic rules (e.g., speech acts, turn-taking, etc.), which can be derived from corpus analysis, global structures refer to the conversation flow, that is, the dialogue’s action plan and how this is mutually known by dialogue participants (e.g., opening, closing, etc.). Cognitive pragmatics looks at these global structures as derived from behavioural games, which in turn derive from grounding processes (Bara 1999). Different authors have started including these processes in their dialogue system architectures, especially as far as evaluating and updating common ground with the human partner is concerned. For instance, Roque and Traum (Roque and Traum 2009) developed a dialogue system that tracks information grounded in the previous conversation. As a consequence, the dialogue system is capable of selecting its utterances using different types of evidence of the user’s understanding (i.e., whether the dialogue system has just submitted material or the user has also acknowledged it, repeated it back, or even used it in a subsequent utterance) (Müller, Paul, and Li 2021).
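A minimal sketch of such evidence tracking, in the spirit of Roque and Traum (2009), is given below. The four levels paraphrase the evidence types just mentioned; the level names, their ordering as integers, and the confirmation policy are illustrative assumptions of ours, not the authors' model.

```python
# Hedged sketch: tracking how strongly a piece of material is grounded,
# based on the kinds of user evidence mentioned above. Names and ordering
# are our own illustration.

EVIDENCE_STRENGTH = {"submitted": 0, "acknowledged": 1,
                     "repeated_back": 2, "used": 3}

class GroundedMaterial:
    def __init__(self, content):
        self.content = content
        self.level = "submitted"   # the system has just presented it

    def observe(self, user_evidence):
        """Upgrade the groundedness level; never downgrade it."""
        if EVIDENCE_STRENGTH[user_evidence] > EVIDENCE_STRENGTH[self.level]:
            self.level = user_evidence

    def needs_confirmation(self):
        """E.g., trigger a RequestAcknowledgement while evidence is weak."""
        return EVIDENCE_STRENGTH[self.level] < EVIDENCE_STRENGTH["acknowledged"]

m = GroundedMaterial("the bridge is at grid 42")
m.observe("repeated_back")
print(m.level, m.needs_confirmation())  # repeated_back False
```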

Using grounding strategies in conversational agents has led to interesting implementations. One aspect which has not yet been investigated concerns the mechanisms of grounding between humans and dialogue systems. Experimental investigations have mostly studied “how users evaluate the interaction, instead of studying interaction mechanisms” (Müller, Paul, and Li 2021, 3). For instance, Roque and Traum (Roque and Traum 2009) performed a user study in which subjects interacted with their system and rated how much they felt the system understood them, put effort into understanding them, and gave appropriate responses. Conversely, what most studies do not ask is how a specific dialogue principle, such as the use of a particular type of request, is used by a system to affect user behaviour. Therefore, to learn more about the mechanisms of human–machine dialogue, it is important to turn to more basic experimental research methods (Müller, Paul, and Li 2021).

With the purpose of providing a structured view of the application of grounding in dialogue systems, we take the classification presented in Traum (Traum 1994), and summarised in section 2.1, as a point of departure to understand which aspects of grounding have been modelled over time. As we will see, some of them are more investigated than others, while other new aspects have been considered. In Table 2, the studies in which grounding acts are modelled are reported.

Table 2: Computational grounding acts: state of the art

Grounding Act          | References
Initiate               | (Dahlbäck and Jönsson 1998)
Continue               | (Schlangen and Skantze 2011), (Visser et al. 2012), (Visser et al. 2014)
Acknowledgement        | (Skantze, House, and Edlund 2006), (Wang, Lee, and Marsella 2013), (Visser et al. 2012, 2014), (Eshghi et al. 2015), (Buschmeier 2018), (Buschmeier and Kopp 2018), (Schlangen 2019)
Repair                 | (Swerts, Litman, and Hirschberg 2000), (Skantze 2008), (Hough and Purver 2012), (Marge and Rudnicky 2015), (Purver, Hough, and Howes 2018), (Di Maro et al. 2019), (Marge and Rudnicky 2019)
Cancel                 | N/A
RequestRepair          | (Gabsdil 2003), (Rodríguez and Schlangen 2004), (Purver 2004a), (Schlangen 2004), (Purver 2006), (Stoyanchev, Liu, and Hirschberg 2014), (Müller, Paul, and Li 2021)
RequestAcknowledgement | (Misu et al. 2011), (Buschmeier and Kopp 2014)

Initiate

In the LINLIN dialogue model (Dahlbäck and Jönsson 1998), the initiative is defined as the move whose aim is to introduce a goal. It can have different functions: update, question, answer, discourse opening, discourse continuation, discourse ending. The initiation act in dialogue systems is described in terms of the presentation phase: in which form the material is presented and which function it serves. This act, as reported in Clark and Schaefer (Clark and Schaefer 1989), introduces something that has to be grounded, via implicit and explicit feedback, for the exchange to proceed. Since this act can also, in some respects, overlap with other types of grounding acts, such as acknowledgement, as also reported in (Clark and Schaefer 1989), specific details are given later in this section, where the corresponding grounding acts are dealt with in more detail.

Continue

For the continue act, defined as the continuation of a previous act by the same speaker, we can account for the studies on incrementality in dialogue. Dialogue processing is, indeed, incremental: processing starts before the input is completed (Kilger and Finkler 1995). Systems designed for incremental processing can process user input, with or without intermediate feedback, before the system output is generated. Incrementality is, for this reason, a research aspect comparable to what has been studied for continue grounding acts. Here, the grounding aspect lies in the fact that the previous act is considered understood and grounded, in that no repair is needed, and therefore the current speaker can go on with the contribution it refers to. In (Schlangen and Skantze 2011), a model for incremental processing architectures is presented. In their model, this act corresponds to the incremental unit, which is the “minimal amount of characteristic input”. An incremental processing module is composed of a left buffer, a processor, and a right buffer, as represented in Figure 4. The authors also point out, for future application, the necessity of connecting such a model for incremental processing and grounding of interpretations with models of dialogue-level grounding in the information-state update tradition (Larsson and Traum 2000). For example, the study of self-correction could be a starting point for connecting sub-utterance processing and discourse-level processing (Ginzburg, Fernández, and Schlangen 2007). Visser et al. (Visser et al. 2012, 2014) define incremental understanding in terms of pairs of frames generated every 200 milliseconds, where the first frame is a prediction of the meaning of the complete user utterance, although not yet fully uttered, whereas the second frame is the sub-frame of what the user has said so far. Here, feedback of different kinds is analysed before the completion of the utterance.

Figure 4: Speech recognition as an example of incremental processing; Source: Schlangen and Skantze (2011); All rights belong to their respective owners.

Acknowledgement

The use of acknowledgement feedback in human-machine interaction has been deeply investigated. Skantze et al. (Skantze, House, and Edlund 2006), for instance, investigate feedback produced both by the user and by the system. These were used along with other types of feedback, categorising the subjects’ responses based on pragmatic meaning. In Wang et al. (Wang, Lee, and Marsella 2013), a Listener Feedback Model for virtual agents in multi-party conversations is presented, for which the importance of using such systems is underlined. The use of understanding feedback was also studied in incremental models, as signals used to update the grounding state (Visser et al. 2012, 2014; Eshghi et al. 2015). In (Buschmeier and Kopp 2018; Buschmeier 2018), acknowledgement acts are studied as attentiveness markers: “artificial conversational agents should have the capability to use such a mechanism, too, because it would allow them to approach potential or upcoming problems in understanding (and other listening related communicative functions) before they become more serious and require costly repair actions” (Buschmeier and Kopp 2018, 1220). Acknowledgement acts are important for collaborative goals, as also pointed out in Schlangen (Schlangen 2019) and, more generally, in Benotti and Blackburn (Benotti and Blackburn 2021).

Repair

The repair act is aimed at grounding information which may not be clear to either the user or the system. Purver et al. (Purver, Hough, and Howes 2018, 426), indeed, describe repair as one of “the primary strategies by which interaction participants achieve and maintain shared understanding”. This set of strategies is specifically used to highlight and/or resolve miscommunications or potential miscommunications. In Di Maro et al. (Di Maro et al. 2021), different miscommunication scenarios are listed. Starting from the four basic communicative functions of Allwood et al. (Allwood, Nivre, and Ahlsén 1992), the communication levels of contact, perception, understanding, and intention were defined. At each level, one or more problems can occur, triggered by specific linguistic or informational issues. According to the type of problem, a specific repair strategy can be used to ensure that the grounding process is successful. For instance, in Marge and Rudnicky (Marge and Rudnicky 2015, 2019), recovery strategies were studied in three different scenarios: referential ambiguity (more than one possible action), impossible-to-execute (zero possible actions), and executable (one possible action). In Hough and Purver (Hough and Purver 2012, 143), repair acts are incrementally generated “in line with psycholinguistic evidence of preference for locality and the availability of access to the semantics of repaired material”. Prosodic features of repairs have also been investigated, both from the perspective of users (Di Maro et al. 2019) and of machines (Swerts, Litman, and Hirschberg 2000).
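The three scenarios studied by Marge and Rudnicky reduce to a count over candidate interpretations, as in the following sketch; the strategy labels are our own placeholders, not the authors' terminology.

```python
# Illustrative decision rule for the three recovery scenarios of
# Marge and Rudnicky (2015, 2019).

def recovery_strategy(candidate_actions):
    """Pick a recovery strategy from the set of candidate interpretations."""
    if not candidate_actions:
        return "signal-failure"            # impossible-to-execute: zero actions
    if len(candidate_actions) == 1:
        return "execute-and-acknowledge"   # executable: ground by acting
    return "ask-to-disambiguate"           # referential ambiguity: >1 action

print(recovery_strategy(["go to the red box", "go to the blue box"]))
# ask-to-disambiguate
```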

Cancel

Among the investigated breakdown recovery strategies, the cancel act appears not to have been much explored. In fact, although a speaker may leave the interaction without giving any further explanation and without trying to repair, thus without modifying the common ground, when the system does not understand them, studies focusing on how a system refrains from trying to recover the interaction are hard to find. This act could be more interestingly investigated when its adoption is triggered by the analysis of multiple discourse units. In this scenario, when the modification of the common ground concerns a previous discourse unit, the repair could imply a higher cost or effort. As a consequence, the dialogue can go on without accepting the last utterance, and a re-planning is therefore needed. A parallel can be drawn, for example, with a car’s satellite navigation system that prefers to recompute the route when repairing the misunderstood action would be too difficult or impossible for the driver. Nevertheless, actions on multiple discourse units are studied in Traum (Traum 1994) as argumentation acts, where negotiation is important. On the other hand, as far as the last action is concerned, the problem is usually followed not by a cancellation but by a repair.

RequestRepair

This act is investigated in dialogue systems especially in terms of clarification requests. Clarification requests (CRs) are one of the pragmatic tools used in conversation to prove, ensure, and maintain the mutual understanding of the communicated message between the interlocutors. Purver (Purver 2004a, 2004b, 2006) stated that interlocutors initiate a CR when a problem in processing the previous utterance occurs. For this reason, CRs are also called anaphoric feedback, as they refer to what has previously been uttered. Furthermore, CRs are considered meta-communicative tools as well, since they function as an acknowledgement of the level of understanding of an utterance (Ginzburg and Macura 2005). The use of CRs is also described in terms of cognitive-pragmatic instruments adopted for grounding purposes. As pointed out by scholars such as Clark (Clark 1996), to succeed in their joint activity, interlocutors need to ground what has been communicated. Among the scholars who set out to categorise different types of CRs, Purver (Purver 2004b) classified CRs according to form and reading, where form refers to the surface form, such as when i) an element from the previous utterance is used in the request (reprise), ii) an element from the previous utterance is used in combination with a wh-interrogative pronoun (wh-reprise), or iii) a reformulation or a generic question is adopted (non-reprise). Reading, conversely, refers to the problematic item the request questions, such as a constituent or a clause. This classification established a precise way to describe how CRs can be automatically recognised or selected by a system and opened the way to further investigations, also concerning the causes and problems triggering the initiation of such requests. For instance, Rodríguez and Schlangen (Rodríguez and Schlangen 2004) introduced the notion of the problem causing the instantiation of a CR. In fact, different kinds of problems, such as acoustic or lexical ones, can determine the adoption of a different, informative CR.

Clarification is thus a fundamental part of the grounding process. Through the pragmatic tool of CRs, the interlocutors can maintain the mutual understanding of the communicated message during a conversation. Clarifications are usually uttered in a context of miscommunication. Following Hirst et al. (Hirst et al. 1994), miscommunication can be partitioned into three different types: misunderstanding, non-understanding, and misconception.

Misunderstandings are not immediately detected, since the hearer thinks that what has been understood is the right message, but it is not the one the speaker intended to convey.

The second type of miscommunication is non-understanding, which occurs when the hearer finds the message uttered by the speaker ambiguous or, as Gabsdil (Gabsdil 2003) noticed, when the hearer is uncertain about the interpretation given to the message. In this case, even the form in which the requests are formulated can vary. Uncertain interpretations can coarsely be associated with single polar questions, whereas ambiguous understanding is more likely to result in alternative questions or wh-questions. Furthermore, non-understanding in general can occur on several different communicative levels, ranging from establishing contact between the dialogue partners to the intended meaning or function of the utterance in context, as previously pointed out. Clark (Clark 1996) listed four basic levels of communication in a framework that represents the interaction as a joint activity of the dialogue participants: i) execution/attendance, ii) presentation/identification, iii) signal/recognition, iv) proposal/consideration. As Gabsdil (Gabsdil 2003) pointed out, on the lowest level, dialogue participants establish a communication channel, which is then used to present and identify signals on level two. On level three, these signals are interpreted, before their communicative function is evaluated on the proposal/consideration level. The framework of joint actions requires that dialogue participants coordinate their individual actions on all of those levels. Gabsdil (Gabsdil 2003) combined the causes of non-understanding with Clark’s four levels of communication, giving some examples and organising a coarse-grained classification of clarifications. Connected to these levels, two main readings for clarifications were proposed by Ginzburg and Cooper (Ginzburg and Cooper 2001). Their “clausal reading” can be related to the presentation/identification level and their “constituent reading” to the signal/recognition level. Clausal readings are used “to confirm the content of a particular sub-utterance” (Ginzburg and Cooper 2001, 1) and can roughly be paraphrased as “Are you asking/asserting that X?” or “For which X are you asking/asserting that Y?”. Constituent readings, on the other hand, “elicit an alternative description or ostension to the content (referent or predicate etc.) intended by the original speaker of the reprised sub-utterance” (Ginzburg and Cooper 2001, 1).

Misconceptions, finally, occur when the “hearer’s most likely interpretation suggests that beliefs about the world are unexpectedly out of alignment with the speaker’s” (Hirst et al. 1994). Clarifications in response to misconceptions usually convey extra-linguistic information like surprise or astonishment.

1. http://www.sfb360.uni-bielefeld.de/

As already anticipated, CRs can occur in different forms and readings. The correlation between the form and function of CRs has also been investigated by Rodríguez and Schlangen (Rodríguez and Schlangen 2004), who presented a multidimensional classification of CR forms and a fine-grained correlation between them and their functions. The study was carried out on a corpus of German task-oriented dialogues, the “Bielefeld Corpus”,1 which contains 22 dialogues consisting of about 3,962 turns and 36,000 words. In the experimental setup, a dialogue participant was supposed to give instructions to the interlocutor to build a model plane. The authors pointed out some features used to describe the surface form of CRs. For the attribute Mood, the possible values are declarative, polar question, alternative question, wh-question, imperative, and others; for Completeness, particles (Pardon?), partial fragments, or complete sentences; for Relation, literal repetition of the unclear part, the addition of material to the repetition, reformulation of the problematic utterance, or independent (i.e., no part of the utterance is repeated or reformulated); finally, for Boundary tone, rising or falling intonation.
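These surface-form features lend themselves to a straightforward encoding. In the sketch below, the attribute names and value sets paraphrase Rodríguez and Schlangen (2004) as summarised above, while the dataclass representation and the example instance are our own choices.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class CRForm:
    """Surface-form features of a clarification request."""
    mood: Literal["declarative", "polar_question", "alternative_question",
                  "wh_question", "imperative", "other"]
    completeness: Literal["particle", "partial_fragment", "complete_sentence"]
    relation: Literal["literal_repetition", "repetition_plus_addition",
                      "reformulation", "independent"]
    boundary_tone: Literal["rising", "falling"]

# E.g., "The RED one?" uttered with rising intonation:
cr = CRForm(mood="polar_question", completeness="partial_fragment",
            relation="literal_repetition", boundary_tone="rising")
```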

Rodríguez and Schlangen (Rodríguez and Schlangen 2004) laid the foundation for the identification of problems that could cause misunderstanding, taking into account the CR readings proposed earlier, but trying to define them in a more fine-grained way. The authors devised a multidimensional classification scheme where form and function are meta-features taking sub-features as attributes. They start from the models of Clark (Clark 1996) and Allwood (Allwood 1995) concerning the four levels of communication mentioned before, adding other types of sub-levels. Each of those levels is a possible locus for communication problems. This dimension specifies the extent and severity of the problem. The extent, as the authors argued, describes whether a specific CR points to a problematic element in the problematic utterance or not. The severity, on the other hand, describes which action the CR initiator requests from the interlocutor: the CR initiator can ask for a reformulation of the problematic utterance, probably triggered by a complete understanding failure, or they can ask for confirmation of a previous hypothesis of which they are not certain. The scholars also classified the answers to CRs, which can be i) yes/no answers, ii) repetitions or reformulations of the unclear element, iii) elaborations of the problematic utterance with the addition of new elements, iv) word definitions, or, lastly, v) no reaction at all. As a consequence, the satisfaction of CR initiators with the reaction of the CR addressee can be classified as happy or unhappy, according to the right or wrong interpretation of the CR.

Stoyanchev et al. (Stoyanchev, Liu, and Hirschberg 2014) point out how important it is for communicative efficiency in human-machine interaction that clarification requests be not generic but targeted, i.e., based on contextual information. For instance, in Müller et al. (Müller, Paul, and Li 2021), rephrasing strategies are used to ask for confirmation of correctness before grounding the received information.

As will be pointed out at the end of this section, other types of misunderstanding repair strategies to be considered are better classified as argumentation acts over grounded information, a field of research that is becoming worth exploring.

RequestAcknowledgement

In the “Media Equation” (Reeves and Nass 1996), it is hypothesised that “people will give more spontaneous back-channels to a spoken dialogue system that makes more spontaneous back-channel-inviting cues than a spoken dialogue system that makes less spontaneous ones”. Based on this hypothesis, Misu et al. (Misu et al. 2011) presented the basis for a dialogue system capable of eliciting back-channels from users. For this purpose, they constructed a dialogue-style TTS which makes use of back-channel-inviting cues, whose application resulted in more spontaneous user back-channels, which are informative for the system. Similarly, Buschmeier and Kopp (Buschmeier and Kopp 2014) defined when the system should elicit feedback from the user in order to avoid undesirable dialogue states. In fact, the system needs feedback when i) its belief about the user’s mental state is not informative enough; ii) its belief about the user’s mental state has not changed in a long time; iii) its belief about the user’s mental state differs from a desired one deriving from a previous communicative action by the agent. In Buschmeier and Kopp (Buschmeier and Kopp 2018), the same result as in Misu et al. (Misu et al. 2011) was reported: participants provided more feedback with an attentive listener agent, that is, with an agent capable of a) interpreting communicative listener feedback from users, b) adapting its production to the users’ needs, whose interpretation is based on the processed feedback, and c) eliciting feedback through feedback elicitation cues when needed. Such feedback is moreover important for other grounding acts, such as initiate.
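The three elicitation conditions just listed can be rendered as a simple decision rule; in the sketch below, the attribute names and thresholds are illustrative assumptions of ours, not values from Buschmeier and Kopp (2014).

```python
# Sketch of the three feedback elicitation conditions listed above;
# names and thresholds are illustrative assumptions.

def should_elicit_feedback(belief_uncertainty, turns_since_update,
                           believed_state, desired_state,
                           uncertainty_threshold=0.8, staleness_threshold=5):
    not_informative = belief_uncertainty > uncertainty_threshold  # condition i
    stale = turns_since_update > staleness_threshold              # condition ii
    diverging = believed_state != desired_state                   # condition iii
    return not_informative or stale or diverging

# The agent's belief about the user is too uncertain: elicit feedback.
print(should_elicit_feedback(0.9, 1, "understood", "understood"))  # True
```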

In this section, we focused on drawing a parallel between theory and application, by describing some works on dialogue systems which explicitly applied grounding acts in dialogue. As a take-home message, it can be pointed out that the theory was diversely adapted to the available technology, and several new methodologies were implemented. A perfect mapping between theory and application has not yet been reached. Some aspects of grounding were therefore more investigated than others, and some others became crucial. In general, the importance of the grounding process has been variously highlighted, from uncertainty signalling (Fernández et al. 2007; Hough and Schlangen 2017), to different degrees of grounding (Roque and Traum 2008; Roque 2009; Petukhova et al. 2015), to the use of grounding in dialogue system evaluation (Curry, Hastie, and Rieser 2017; Zou 2020). Research on dialogue systems, in fact, has always underlined the need to test and evaluate their functionality and performance. Nevertheless, the evaluation of dialogue systems has always been a problematic task to carry out. When Turing (Turing 1950) suggested the imitation game as a possible evaluation of the intelligence a machine can show, he was thinking of replacing the question of whether a machine is able to think with the question of whether it can imitate. The concept of thinking has always been difficult to define; the imitation game, instead, poses a valid and answerable question. To answer this question positively, the evaluator should not be able to tell the difference between the machine and the human interlocutor, in that the machine has succeeded in imitating intelligent human-like behaviour. Here, the concept of intelligence needs some in-depth consideration. Gottfredson (Gottfredson 1997, 13) defined intelligence as the “ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience”. As we may easily see, this definition goes well beyond what a machine can achieve by imitating some behaviours. If the aim is not only to reproduce but also to evaluate some intelligent aspects a machine could have, we may need to adopt different tests. Therefore, the Turing test, although sometimes still used, rather represents the desirable goal of an intelligent agent showing human-like behaviours than an evaluation tool for system performance. At the same time, the question that could be raised here is whether we really want a system to be completely indistinguishable from human beings, and why. Conversely, we might want systems capable of showing specific intelligent features which might be suitable for artificial beings only. Similarly to Turing, Schatzmann et al. (Schatzmann, Georgila, and Young 2005) evaluate two aspects: i) how human-like the system’s responses are; ii) how well the user models cover the variety of the user population in the training data. Even here, what is missing is a shareable framework to carry out this evaluation and an in-depth description of how the system actually works.

However we imagine our dialogue system to be, the evaluation should rather consider some specific traits of what we call intelligence or, better, in this case, interactional intelligence. By interactional intelligence, we mean the ability to recognise intentions, beliefs, and attitudes towards the dialogic exchange, and the ability to respond appropriately (Levinson 1995; Buschmeier 2018). As we will see in section 4, this capability makes the system argumentation-skilled. For goal-oriented dialogue systems, task completeness, dialogue length, and user satisfaction are usually taken into account. On the other hand, for general-purpose dialogue systems, approaches like next utterance classification and word perplexity are preferred (Serban et al. 2018). To the present day, fully satisfactory automatic evaluation metrics for dialogue systems do not exist. Nevertheless, the combination of different methodologies could lead to better results. Grounding acts can, in this sense, also be used as a methodology to evaluate dialogue system performance. More specifically, Curry et al. (Curry, Hastie, and Rieser 2017) report a comparison between systems using explicit feedback and systems using implicit feedback. In Zou (Zou 2020), on the other hand, evaluation techniques are compared and faults are highlighted, in that not “all aspects of dialogue from naturalness and coherence to long-term engagement and flow” are captured. One possible evaluation metric could consider usability principles (Dix et al. 2003), namely learnability, flexibility, and robustness. Specifically, as far as the robustness principle is concerned, that is, the level of support that the system provides to the user in completing and assessing a task successfully, dialogue systems can make their internal states observable through verbal or non-verbal interaction, thus via grounding acts. In more detail, when problems occur in information processing, the observable character of such states can be utilised to recover from the problems. In section 4, the need for this type of analysis will be detailed further.

3.1 Latest datasets for grounding acts

In order to make the modelling of the grounding process possible, dialogue datasets are needed. In past years, many dialogue datasets have been collected to study grounding and grounding-related problems (Serban et al. 2018). Nevertheless, the latest corpus collections are particularly important to mention, as they are mostly concerned with collecting large amounts of data to be used to train dialogue systems with machine learning, which indeed needs more data compared to past collections. Different techniques can be used to model and train dialogue systems: whereas some use online learning (Liu and Mazumder 2021), reinforcement learning (Pietquin 2007; Young et al. 2010), probabilistic reasoning (Skantze 2007; Stoyanchev, Lison, and Bangalore 2016; Rossignol, Pietquin, and Ianotto 2010), or graph representations (Liu and Mei 2020; Mi et al. 2020; Chaudhuri, Rony, and Lehmann 2021), many grounding phenomena are learned and modelled in conversational agents via machine learning algorithms. It is important to point out that grounding can be better observed in spontaneous conversation, as eliciting it can be easier for some aspects (e.g., feedback) than for others. For this purpose, in the past, there has been work on agents interacting with humans through improvisation (Bruce et al. 2000; Martin, Harrison, and Riedl 2016; Winston and Magerko 2017). Nevertheless, there are not many corpora collecting such spontaneous dialogues, and the available ones are also far too small for machine learning purposes (Busso and Narayanan 2008). In the SPOLIN corpus (Cho and May 2020), 6,760 English improv dialogues, comprising 90,000 turn pairs, have been collected. The improvisational theatre dialogues considered here are important for grounding purposes, as in this form of theatre everything is performed without a script, scenery, or another established environment; for this reason, everything must be grounded via interaction. The specific aim of this dataset was to study yes-and turns, where an acknowledgement act is combined with a new relevant next contribution. Similarly, different common grounding phenomena, like the ones described in Traum (Traum 1994), are observable in the collection presented in Udagawa and Aizawa (Udagawa and Aizawa 2019, 2020), comprising 6,760 dialogues, whose aim is to be adopted in the training of end-to-end dialogue systems. End-to-end dialogue systems, in fact, are usually based on neural networks (Shang, Lu, and Li 2015; Vinyals and Le 2015; Sordoni et al. 2015; Dodge et al. 2016; Serban et al. 2016) and need large amounts of data. For the same purposes, Chen et al. (Chen et al. 2021) collected 10K human-to-human dialogues containing 55 distinct user intents. The scarcity of appropriate dialogue corpora for grounding applications in dialogue systems across languages can still be considered the Achilles’ heel of data-driven research, such as machine learning-based approaches.

4. “What the heck are you saying?” Corrective dialogues and grounded information

As reported in the previous sections, different scholars have highlighted the urgency of including grounding processing in their systems, within which the argumentation of grounded information needs more investigation. In this section, the attention will be focused on grounding-related corrective dialogues. In this context, the argumentative nature of some such dialogues, in the form of Common Ground Inconsistencies, will also be taken into account.

Among the most investigated grounding aspects, corrective dialogues have drawn much attention, as their adoption improves the communication process. This results from the users’ need to interact with an agent capable of cooperating in communicative actions. Human interlocutors constantly contribute with questions, answers, and feedback (Beun and Eijk 2004). For instance, a corrective dialogue is a particular type of dialogue occurring when: i) the user notices an error in the system and corrects it; ii) the user changes their mind; iii) the user’s beliefs are in contradiction with the system’s beliefs and expectations. In the first two cases, the corrective dialogue is initiated by the user (it corresponds to the grounding act of Repair), whereas in the last case it is initiated by the system (it corresponds to the grounding act of RequestRepair) (Bousquet-Vernhettes, Privat, and Vigouroux 2003). One example of corrective dialogue in human-machine interaction is the one presented in Beun and van Eijk (Beun and Eijk 2004). The authors focused on a particular communicative problem related to conceptual discrepancies between a computer system and its user. Starting from the assumption that both the system and its user have a mental representation of a domain, the mental representation of the system, e.g., the ontology, contains conceptualisations that are made explicit in a formal language. Despite their possible incompleteness and inaccuracy, this information can be used to trace the system’s reasoning about concepts, items, and their properties. Most importantly, this representation also allows the detection of conceptual discrepancies, arising when the system observes that the user applies an incorrect action to a particular object. The authors also stated that, although feedback of different kinds is now generally used in such systems, there is still no accurate mathematical theory of natural communicative behaviours and of their computational application to human-machine interaction, especially as far as conceptual discrepancies are concerned. What is still missing is, therefore, a reference model guiding the adoption of a specific type, content, and form of the feedback that has to be generated in a particular situation (Beun and Eijk 2004).

While conceptual discrepancies can concern the last dialogue state, whose inaccuracy can lead to a RequestRepair followed by a Repair or directly to a Repair act, some inconsistencies can also refer to a previous stage of the interaction, as in Khouzaimi et al. (Khouzaimi, Laroche, and Lefevre 2015). In this case, Traum (Traum 1994) considers the consequent acts as argumentation acts, as already grounded information is now being negotiated. The linguistic activity of argumentation is pragmatically regulated by a sequence of purposive speech acts in conflict (Walton and Godden 2006), as it represents the discussion of opposing ideas to find the truth, namely dialectics. Dialectics in dialogue systems can be framed in the field of formal and computational argumentation, where two main research topics are listed: argumentation-based inference and argumentation-based dialogue. Argumentation-based inference concentrates on establishing which conclusions can be drawn from incomplete or inconsistent information. Argumentation-based inference models work similarly to Hegel’s dialectic, since they investigate statements from a logical point of view without considering multiple participants. Historically, the first to describe an Abstract Argumentation Framework was Dung (Dung 1995), while Pollock (Pollock 1987) first established the basis for formal argumentation-based inference.
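To give an operational flavour of such a framework, the sketch below implements Dung-style notions directly from their standard definitions: a set of arguments is conflict-free if no member attacks another, it defends an argument if it counter-attacks all of that argument's attackers, and the grounded extension is the least fixed point of the characteristic function. The toy example is our own.

```python
# A minimal sketch of Dung's (1995) abstract argumentation framework.
# Arguments are plain strings; `attacks` is a set of (attacker, target) pairs.

def conflict_free(S, attacks):
    """True if no argument in S attacks another argument in S."""
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, attacks):
    """True if every attacker of `a` is attacked by some member of S."""
    attackers = {x for (x, y) in attacks if y == a}
    return all(any((c, b) in attacks for c in S) for b in attackers)

def admissible(S, attacks):
    """Conflict-free and self-defending."""
    return conflict_free(S, attacks) and all(defends(S, a, attacks) for a in S)

def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function F(S)."""
    S = set()
    while True:
        new = {a for a in args if defends(S, a, attacks)}
        if new == S:
            return S
        S = new

# Toy example: a attacks b, b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(grounded_extension(args, attacks))
# {'a', 'c'}: a is unattacked, and a defends c by attacking b
```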

Argumentation-based inference differs from argumentation-based dialogue in that the former is a formal method applied by a single entity to decide about the truth of an argument, whereas argumentation-based dialogue considers problems arising from dialogues among different agents. In such cases, information is, in fact, distributed among the agents, who may or may not be willing to share it at different points in time due to individual strategies and goals. A solid theoretical framework for argumentation-based dialogue is, in fact, still missing because of the complexity of the phenomenon in question: “the study of argumentation-based dialogue consists of a variety of different approaches and individual systems, all exciting work but with few unifying accounts or general frameworks” (Prakken 2017, 53). Among the types of dialogue studied in argumentation-based studies, we mention persuasion, negotiation, information seeking, deliberation, inquiry, and quarrel (Walton 1984; Walton and Krabbe 1995). These classes, however, are not meant to be absolute, as multiple goals may be present during a single dialogue. Among those listed, persuasion dialogues appear to have been studied the most (Yuan, Moore, and Grierson 2004; Prakken 2008). As far as deliberation dialogues are concerned, collaboration here takes place to find an optimal solution to a problem for which the involved agents do not yet have a solution. For this type of dialogue, an interesting result was found: in the case of a two-agent system adhering strictly to the communication protocol, forming claims on the basis of the agents’ knowledge and adopting a collaborative attitude, it was demonstrated that the agreed solution is always acceptable to both parties (Black and Atkinson 2010). This results from employing argumentation, whose usefulness in dialogue systems designed for deliberation was demonstrated in Kok et al. (Kok et al. 2010).

The problem that characterises argumentation-based dialogue with respect to argumentation-based inference is, therefore, the presence of different agents in the setting. This introduces multiple, not necessarily aligned, bodies of knowledge and, possibly, conflicting goals in the pursuit of a solution to a problem. The pragmatic strategies adopted in such situations are to be investigated, as they are generally concerned with grounded information. Based on the analysis of map-task dialogues, whose structure can be compared to deliberation dialogues in terms of goals, an argumentation-based act trigger was identified (Di Maro et al. 2021), namely Common Ground Inconsistencies, which can lead the interlocutor to the adoption of clarification requests as the corresponding argumentation-based act. Similarly to the aforementioned conceptual discrepancies, Common Ground Inconsistencies refer to problems with grounded information.

Figure 5: Representation of the Common Ground CRs elicitation scenario.

2. We are aware that Clarification Requests are generally used to correctly update the common ground. (...)

In Figure 5, a Common Ground Inconsistency scenario eliciting a Common Ground clarification request (Common Ground CR) is displayed. With Common Ground CR we refer to clarification requests with an argumentative function. In fact, they do not help the speaker ground a piece of information; rather, they refer to previous discourse units, where that piece of information was already grounded. In the current state of the dialogue, new evidence clashes with the grounded one, and therefore the Common Ground CR is uttered2 (Di Maro 2021; Di Maro, Origlia, and Cutugno 2021a). As in Figure 5, in the mind of the female agent A, the Communal Common Ground is stored to guide the process of accumulating information in the Personal Common Ground. The pieces of information \(i_1, i_2, i_3, \ldots, i_n\) are communicated by the male agent B to A, and sequentially stored in her Personal Common Ground. When B utters a new piece of information \(i_z\), this is represented as a new candidate item for the Personal Common Ground. This representation generates a bias/evidence conflict (Domaneschi, Romero, and Braun 2017), in that the presence of the new item \(i_z\) in the Personal Common Ground clashes with the presence of another item \(i_3\), whose validity is now questioned. This conflict represents a Common Ground Inconsistency and is translated into the Common Ground CR \(\neg i_3?\), whose form, function, and illocutionary effect are reported in Di Maro et al. (Di Maro, Origlia, and Cutugno 2021b). As also highlighted in Di Maro et al. (Di Maro et al. 2021), polar questions are especially important to express Common Ground Inconsistencies, in that their epistemic stance is clearly expressed compared to other types of questions (Domaneschi, Romero, and Braun 2017). Finally, differently from other CRs, Common Ground CRs do not necessarily refer to the immediately previous utterance, but to previously (correctly or wrongly) grounded information.
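The scenario of Figure 5 can be caricatured in a few lines of code: grounded items are stored, and a new candidate that contradicts a stored item triggers a Common Ground CR instead of being silently grounded. The string-based negation convention and the question template below are our own simplifications, not the model of the cited works.

```python
# Toy sketch of the Common Ground Inconsistency scenario of Figure 5.
# Propositions are plain strings; negation is marked by a "not " prefix.

class PersonalCommonGround:
    def __init__(self):
        self.items = set()          # i1, i2, i3, ... already grounded

    @staticmethod
    def negate(p):
        return p[4:] if p.startswith("not ") else "not " + p

    def add(self, candidate):
        """Ground the candidate silently, or return a Common Ground CR if
        it clashes with a previously grounded item (bias/evidence conflict)."""
        clashing_item = self.negate(candidate)
        if clashing_item in self.items:
            return f"Common Ground CR: isn't it the case that {clashing_item}?"
        self.items.add(candidate)
        return None

cg = PersonalCommonGround()
cg.add("the meeting is on Monday")             # i3 is grounded
print(cg.add("not the meeting is on Monday"))  # iz clashes: CR questioning i3
```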

5. Conclusion

46In Human-Machine interaction, the study and application of pragmatic aspects has covered only a few phenomena, although their importance has been recognised in various linguistic studies. On the one hand, error handling and requests for clarification have always had a central role, since the system’s correct understanding and consequent task completion are the desired goals. On the other hand, back-channels and acknowledgement feedback have also been investigated to ensure grounding. Although commercial systems try to identify possible mistakes caused by users or by technological limits, their ability to understand the real cause of a problem, signal it adequately, and let the human user correct it is still a frontier not exhaustively explored. The complexity of possible misunderstandings and conflicting situations makes it necessary to study the communicative strategies used to efficiently handle the related interaction problems.

47As mentioned at the beginning of this work, the aim pursued here was also to stimulate further investigation and application of pragmatics, and especially grounding, in conversational agents, by underlining application gaps. In fact, semantics has been a more investigated topic than pragmatics within the dialogue systems field, where, on the pragmatic side, speech act modelling has drawn most of the attention. Furthermore, although CRs and corrective dialogues are widely studied in linguistics, their application in dialogue systems is still limited, especially when they refer to already grounded information. Further investigation is therefore needed into grounding-related problems concerning dialogue states that correspond not to the current dialogue state but to previous steps of the dialogue history. This could, moreover, expand the study of argumentation-based dialogues, leading to the foundation of a shared theoretical framework.

48The author would like to thank the reviewers for their comments and help. This work was also supported by the Italian PON I&C 2014-2020 within the BRILLO project, no. F/190066/01-02/X44.


Bibliography

Gabriella Airenti, Bruno Giuseppe Bara, and Marco Colombetti. 1993. “Conversation and Behavior Games in the Pragmatics of Dialogue.” Cognitive Science 17 (2): 197–256.

Jens Allwood. 1995. “An Activity Based Approach to Pragmatics.” In Abduction, Belief and Context in Dialogue: Studies in Computational Pragmatics. John Benjamins.

Jens Allwood, Joakim Nivre, and Elisabeth Ahlsén. 1992. “On the Semantics and Pragmatics of Linguistic Feedback.” Journal of Semantics 9: 1–26.

John Langshaw Austin. 1975. How to Do Things with Words. Vol. 88. Oxford University Press.

Bruno Giuseppe Bara. 1999. Pragmatica Cognitiva: I Processi Mentali Della Comunicazione. Bollati Boringhieri.

Luciana Benotti and Patrick Blackburn. 2021. “Grounding as a Collaborative Process.” In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 515–31. Online.

Robbert-Jan Beun and Rogier M. van Eijk. 2004. “Conceptual Discrepancies and Feedback in Human-Computer Interaction.” In Proceedings of the Conference on Dutch Directions in Hci, 13. Amsterdam, The Netherlands: Association for Computing Machinery, New York, NY, United States.

Elizabeth Black and Katie Atkinson. 2010. “Agreeing What to Do.” In International Workshop on Argumentation in Multi-Agent Systems, 12–30. Toronto, Canada: Springer.

Caroline Bousquet-Vernhettes, Régis Privat, and Nadine Vigouroux. 2003. “Error Handling in Spoken Dialogue Systems: Toward Corrective Dialogue.” In ISCA Tutorial and Research Workshop on Error Handling in Spoken Dialogue Systems. Château d’Oex, Vaud, Switzerland.

Roger Brown. 1958. “How Shall a Thing Be Called?” Psychological Review 65 (1): 14.

Allison Bruce, Jonathan Knight, Samuel Listopad, Brian Magerko, and Illah R. Nourbakhsh. 2000. “Robot Improv: Using Drama to Create Believable Agents.” In Proceedings 2000 Icra. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065), 4:4002–8. San Francisco, CA, USA: IEEE.

Hendrik Buschmeier. 2018. “Attentive Speaking. From Listener Feedback to Interactive Adaptation.” PhD thesis, Bielefeld, Germany: Faculty of Technology, Bielefeld University. https://doi.org/10.4119/unibi/2918295.

Hendrik Buschmeier and Stefan Kopp. 2014. “When to Elicit Feedback in Dialogue: Towards a Model Based on the Information Needs of Speakers.” In International Conference on Intelligent Virtual Agents, 71–80. Boston: Springer.

Hendrik Buschmeier and Stefan Kopp. 2018. “Communicative Listener Feedback in Human-Agent Interaction: Artificial Speakers Need to Be Attentive and Adaptive.” In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, 1213–21. AAMAS ’18. Stockholm, Sweden: International Foundation for Autonomous Agents and Multiagent Systems. http://dl.acm.org/citation.cfm?id=3237383.3237880.

Carlos Busso and Shrikanth S. Narayanan. 2008. “Scripted Dialogs Versus Improvisation: Lessons Learned About Emotional Elicitation Techniques from the Iemocap Database.” In Ninth Annual Conference of the International Speech Communication Association (Isca). Brisbane, Australia.

Debanjan Chaudhuri, Md Rashad Al Hasan Rony, and Jens Lehmann. 2021. “Grounding Dialogue Systems via Knowledge Graph Aware Decoding with Pre-Trained Transformers.” In European Semantic Web Conference, 323–39. Online: Springer.

Derek Chen, Howard Chen, Yi Yang, Alex Lin, and Zhou Yu. 2021. “Action-Based Conversations Dataset: A Corpus for Building More in-Depth Task-Oriented Dialogue Systems.” In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Online.

Hyundong Cho and Jonathan May. 2020. “Grounding Conversations with Improvised Dialogues.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2398–2413. Online.

Eve V. Clark. 2015. “Common Ground.” In The Handbook of Language Emergence, 328–53. Chichester, UK: Wiley. https://doi.org/10.1002/9781118346136.ch15.

Herbert H. Clark. 1996. Using Language. Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/CBO9780511620539.

Herbert H. Clark and Susan E. Brennan. 1991. “Grounding in Communication.” In Perspectives on Socially Shared Cognition, edited by Lauren B. Resnick, John M. Levine, and Stephanie D. Teasley, 222–33. Washington, DC, USA: American Psychological Association.

Herbert H. Clark and Edward F. Schaefer. 1989. “Contributing to Discourse.” Cognitive Science 13 (2): 259–94.

Herbert H. Clark and Deanna Wilkes-Gibbs. 1986. “Referring as a Collaborative Process.” Cognition 22 (1): 1–39.

Philip R. Cohen and Hector J. Levesque. 1991. “Confirmations and Joint Action.” In Proceeding of the International Joint Conferences on Artificial Intelligence Organization (Ijcai), 951–59. Sydney, Australia.

Amanda Cercas Curry, Helen Hastie, and Verena Rieser. 2017. “A Review of Evaluation Techniques for Social Dialogue Systems.” In Proceedings of the 1st Acm Sigchi International Workshop on Investigating Social Interactions with Artificial Agents, 25–26. Glasgow, UK.

Nils Dahlbäck and Arne Jönsson. 1998. “A Coding Manual for the Linköping Dialogue Model.” Unpublished Manuscript.

Maria Di Maro. 2021. “‘Shouldn’t I Use a Polar Question?’ Proper Question Forms Disentangling Inconsistencies in Dialogue Systems.” Unpublished Dissertation, Università degli Studi di Napoli Federico II.

Maria Di Maro, Hendrik Buschmeier, Stefan Kopp, and Francesco Cutugno. 2021. “Clarification Requests Negotiating Personal Common Ground.” In Proceedings of the Xprag.it (Poster). Online.

Maria Di Maro, Antonio Origlia, and Francesco Cutugno. 2021a. “Common Ground Inconsistencies in Dialogue Systems: Conflict Patterns Implied by Polar Question Forms (Submitted).” Speech Communication.

Maria Di Maro, Antonio Origlia, and Francesco Cutugno. 2021b. “PolarExpress: Polar Question Forms Expressing Bias-Evidence Conflicts in Italian.” International Journal of Linguistics 13 (4): 14–35.

Maria Di Maro, Jana Voße, Francesco Cutugno, and Petra Wagner. 2019. “Perception Breakdown Recovery in Computer-Directed Dialogues.” In Proceedings of the First International Seminar on the Foundations of Speech (Sefos 2019). Sønderborg, Denmark.

Alan Dix, Alan John Dix, Janet Finlay, Gregory D. Abowd, and Russell Beale. 2003. Human-Computer Interaction. Pearson Education.

Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. 2016. “Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems.” In Proceedings of the International Conference on Learning Representations. San Juan, Puerto Rico.

Filippo Domaneschi, Maribel Romero, and Bettina Braun. 2017. “Bias in Polar Questions: Evidence from English and German Production Experiments.” Glossa: A Journal of General Linguistics 2 (1).

Phan Minh Dung. 1995. “On the Acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and N-Person Games.” Artificial Intelligence 77 (2): 321–57.

Arash Eshghi, Christine Howes, Eleni Gregoromichelaki, Julian Hough, and Matthew Purver. 2015. “Feedback in Conversation as Incremental Semantic Update.” In Proceedings of the 11th International Conference on Computational Semantics, 261–71. London, UK.

Raquel Fernández, Andrea Corradini, David Schlangen, and Manfred Stede. 2007. “Towards Reducing and Managing Uncertainty in Spoken Dialogue Systems.” In Proceedings of the 7th International Workshop on Computational Semantics (Iwcs07). Tilburg, Netherlands.

Malte Gabsdil. 2003. “Clarification in Spoken Dialogue Systems.” In Proceedings of the 2003 Aaai Spring Symposium. Workshop on Natural Language Generation in Spoken and Written Dialogue, 28–35. Palo Alto, California.

Jonathan Ginzburg and Robin Cooper. 2001. “Resolving Ellipsis in Clarification.” In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, 236–43. Toulouse, France.

Jonathan Ginzburg, Raquel Fernández, and David Schlangen. 2007. “Unifying Self-and Other-Repair.” In Decalog 2007: Proceedings of the Eleventh Workshop on the Semantics and Pragmatics of Dialogue. Rovereto, Italy: Rotooffset Paganella.

Jonathan Ginzburg and Zoran Macura. 2005. “The Emergence of Metacommunicative Interaction: Some Theory, Some Practice.” In Proceedings of the 2nd International Symposium on the Emergence and Evolution of Linguistic Communication, 35–40. Hatfield, UK.

Linda S. Gottfredson. 1997. “Mainstream Science on Intelligence: An Editorial with 52 Signatories, History, and Bibliography.” Intelligence 24: 13–23.

Herbert Paul Grice. 1975. “Logic and Conversation.” In Speech Acts, 41–58. Brill.

Herbert Paul Grice. 1989. Studies in the Way of Words. Harvard University Press.

Graeme Hirst, Susan McRoy, Peter Heeman, Philip Edmonds, and Diane Horton. 1994. “Repairing Conversational Misunderstandings and Non-Understandings.” Speech Communication 15 (3-4): 213–29.

Julian Hough and Matthew Purver. 2012. “Processing Self-Repairs in an Incremental Type-Theoretic Dialogue System.” In Proceedings of the 16th Semdial Workshop on the Semantics and Pragmatics of Dialogue, 19–21. Paris, France.

Julian Hough and David Schlangen. 2017. “It’s Not What You Do, It’s How You Do It: Grounding Uncertainty for a Simple Robot.” In 2017 12th Acm/Ieee International Conference on Human-Robot Interaction (Hri), 274–82. Vienna, Austria: IEEE.

Julian Hough, Sina Zarrieß, and David Schlangen. 2017. “Grounding Imperatives to Actions Is Not Enough: A Challenge for Grounded Nlu for Robots from Human-Human Data.” In GLU 2017 International Workshop on Grounding Language Understanding, 88–91. Stockholm, Sweden. https://doi.org/10.21437/GLU.2017-18.

Yan Huang. 2017. The Oxford Handbook of Pragmatics. Oxford University Press.

Hatim Khouzaimi, Romain Laroche, and Fabrice Lefevre. 2015. “Turn-Taking Phenomena in Incremental Dialogue Systems.” In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 1890–5. Lisbon, Portugal.

Anne Kilger and Wolfgang Finkler. 1995. “Incremental Generation for Real-Time Applications.” Tech. Rep. RR-95-11. Saarbrücken, Germany: Deutsches Forschungszentrum Für Künstliche Intelligenz.

Eric M. Kok, John-Jules Ch. Meyer, Henry Prakken, and Gerard A. W. Vreeswijk. 2010. “A Formal Argumentation Framework for Deliberation Dialogues.” In International Workshop on Argumentation in Multi-Agent Systems, 31–48. Toronto, Canada: Springer.

Staffan Larsson and David R. Traum. 2000. “Information State and Dialogue Management in the Trindi Dialogue Move Engine Toolkit.” Natural Language Engineering 6 (3-4): 323–40.

Geoffrey Leech. 2003. “Pragmatics and Dialogue.” In The Oxford Handbook of Computational Linguistics. Oxford University Press.

Stephen C. Levinson. 1995. “Interactional Biases in Human Thinking.” In Social Intelligence and Interaction, 221–60. Cambridge University Press.

Bing Liu and Sahisnu Mazumder. 2021. “Lifelong and Continual Learning Dialogue Systems: Learning During Conversation.” Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-2021).

Bing Liu and Chuhe Mei. 2020. “Lifelong Knowledge Learning in Rule-Based Dialogue Systems.” arXiv Preprint arXiv:2011.09811.

Matthew Marge and Alexander Rudnicky. 2015. “Miscommunication Recovery in Physically Situated Dialogue.” In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 22–31. Prague, Czech Republic.

Matthew Marge and Alexander I. Rudnicky. 2019. “Miscommunication Detection and Recovery in Situated Human–Robot Dialogue.” ACM Transactions on Interactive Intelligent Systems (TiiS) 9 (1): 1–40.

Lara J. Martin, Brent Harrison, and Mark O Riedl. 2016. “Improvisational Computational Storytelling in Open Worlds.” In International Conference on Interactive Digital Storytelling, 73–84. Los Angeles, CA, USA: Springer.

Jinpeng Mi, Jianzhi Lyu, Song Tang, Qingdu Li, and Jianwei Zhang. 2020. “Interactive Natural Language Grounding via Referring Expression Comprehension and Scene Graph Parsing.” Frontiers in Neurorobotics 14.

Teruhisa Misu, Etsuo Mizukami, Yoshinori Shiga, Shinichi Kawamoto, Hisashi Kawai, and Satoshi Nakamura. 2011. “Toward Construction of Spoken Dialogue System That Evokes Users’ Spontaneous Backchannels.” In Proceedings of the Sigdial 2011 Conference, 259–65. Portland, Oregon.

Romy Müller, Dennis Paul, and Yijun Li. 2021. “Reformulation of Symptom Descriptions in Dialogue Systems for Fault Diagnosis: How to Ask for Clarification?” International Journal of Human-Computer Studies 145: 102516.

Volha Petukhova, Harry Bunt, Andrei Malchanau, and Ramkumar Aruchamy. 2015. “Experimenting with Grounding Strategies in Dialogue.” In Proceedings of the 19th Workshop on the Semantics and Pragmatics of Dialogue (Semdial 2015 goDIAL). Gothenburg, Sweden.

Olivier Pietquin. 2007. “Learning to Ground in Spoken Dialogue Systems.” In 2007 Ieee International Conference on Acoustics, Speech and Signal Processing-Icassp’07, 4:IV–165. Honolulu, HI, USA: IEEE.

John L. Pollock. 1987. “Defeasible Reasoning.” Cognitive Science 11 (4): 481–518.

Henry Prakken. 2008. “A Formal Model of Adjudication Dialogues.” Artificial Intelligence and Law 16 (3): 305–28.

Henry Prakken. 2017. “Historical Overview of Formal Argumentation.” IfCoLog Journal of Logics and Their Applications 4 (8): 2183–2262.

Matthew Purver. 2004a. “Clarie: The Clarification Engine.” In Proceedings of the 8th Workshop on the Semantics and Pragmatics of Dialogue (Catalog), 77–84. Barcelona, Spain.

Matthew Purver. 2004b. “The Theory and Use of Clarification Requests in Dialogue.” PhD thesis, London, UK: King’s College, University of London.

Matthew Purver. 2006. “Clarie: Handling Clarification Requests in a Dialogue System.” Research on Language and Computation 4 (2-3): 259–88.

Matthew Purver, Julian Hough, and Christine Howes. 2018. “Computational Models of Miscommunication Phenomena.” Topics in Cognitive Science 10 (2): 425–51.

Byron Reeves and Clifford Nass. 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People. Cambridge university press Cambridge, UK.

Kepa Joseba Rodríguez and David Schlangen. 2004. “Form, Intonation and Function of Clarification Requests in German Task-Oriented Spoken Dialogues.” In Proceedings of the 8th Workshop on the Semantics and Pragmatics of Dialogue. Barcelona, Catalonia, Spain.

Antonio Roque. 2009. Dialogue Management in Spoken Dialogue Systems with Degrees of Grounding. University of Southern California.

Antonio Roque and David Traum. 2008. “Degrees of Grounding Based on Evidence of Understanding.” In Proceedings of the 9th Sigdial Workshop on Discourse and Dialogue, 54–63. Columbus, Ohio, USA.

Antonio Roque and David Traum. 2009. “Improving a Virtual Human Using a Model of Degrees of Grounding.” In Twenty-First International Joint Conference on Artificial Intelligence. Pasadena, California, USA.

Stéphane Rossignol, Olivier Pietquin, and Michel Ianotto. 2010. “Simulation of the Grounding Process in Spoken Dialog Systems with Bayesian Networks.” In International Workshop on Spoken Dialogue Systems Technology, 110–21. Shizuoka, Japan: Springer.

Jost Schatzmann, Kallirroi Georgila, and Steve Young. 2005. “Quantitative Evaluation of User Simulation Techniques for Spoken Dialogue Systems.” In 6th Sigdial Workshop on Discourse and Dialogue. Lisbon, Portugal.

Emanuel A. Schegloff, Gail Jefferson, and Harvey Sacks. 1977. “The Preference for Self-Correction in the Organization of Repair in Conversation.” Language 53 (2): 361–82.

Loredana Schettino, Maria Di Maro, and Francesco Cutugno. 2020. “Silent Pauses as Clarification Trigger.” In Laughter and Other Non-Verbal Vocalisations Workshop: Proceedings (2020). Bielefeld, Germany.

David Schlangen. 2004. “Causes and Strategies for Requesting Clarification in Dialogue.” In Proceedings of the 5th Sigdial Workshop on Discourse and Dialogue at Hlt-Naacl 2004, 136–43. Cambridge, Massachusetts, USA.

David Schlangen. 2019. “Grounded Agreement Games: Emphasizing Conversational Grounding in Visual Dialogue Settings.” Computing Research Repository (CoRR).

David Schlangen and Gabriel Skantze. 2011. “A General, Abstract Model of Incremental Dialogue Processing.” Dialogue & Discourse 2 (1): 83–111.

John R. Searle. 1985. Expression and Meaning: Studies in the Theory of Speech Acts. Cambridge University Press.

Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, and Joelle Pineau. 2018. “A Survey of Available Corpora for Building Data-Driven Dialogue Systems: The Journal Version.” Dialogue & Discourse 9 (1): 1–49.

Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. “Building End-to-End Dialogue Systems Using Generative Hierarchical Neural Network Models.” In Conference of the Association for the Advancement of Artificial Intelligence, 16:3776–84. Phoenix, Arizona, USA.

Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. “Neural Responding Machine for Short-Text Conversation.” In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Beijing, China.

Gabriel Skantze. 2007. “Making Grounding Decisions: Data-Driven Estimation of Dialogue Costs and Confidence Thresholds.” In Proceedings of the 8th Sigdial Workshop on Discourse and Dialogue, 206–10. Antwerp, Belgium.

Gabriel Skantze. 2008. “Galatea: A Discourse Modeller Supporting Concept-Level Error Handling in Spoken Dialogue Systems.” In Recent Trends in Discourse and Dialogue, 155–89. Springer.

Gabriel Skantze, David House, and Jens Edlund. 2006. “User Responses to Prosodic Variation in Fragmentary Grounding Utterances in Dialog.” In Ninth International Conference on Spoken Language Processing. Pittsburgh, Pennsylvania, USA.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. “A Neural Network Approach to Context-Sensitive Generation of Conversational Responses.” In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Denver, Colorado.

Robert Stalnaker. 2002. “Common Ground.” Linguistics and Philosophy 25 (5/6): 701–21.

Svetlana Stoyanchev, Pierre Lison, and Srinivas Bangalore. 2016. “Rapid Prototyping of Form-Driven Dialogue Systems Using an Open-Source Framework.” In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 216–19. Los Angeles.

Svetlana Stoyanchev, Alex Liu, and Julia Hirschberg. 2014. “Towards Natural Clarification Questions in Dialogue Systems.” In AISB Symposium on Questions, Discourse and Dialogue. Vol. 20. Goldsmiths College, University of London, UK.

Marc Swerts, Diane Litman, and Julia Hirschberg. 2000. “Corrections in Spoken Dialogue Systems.” In Sixth International Conference on Spoken Language Processing. Beijing, China.

Michael Tomasello. 2010. Origins of Human Communication. MIT press.

David R. Traum. 1994. “A Computational Theory of Grounding in Natural Language Conversation.” PhD thesis, Rochester, NY: Department of Computer Science, University of Rochester.

David R. Traum. 1999. “Computational Models of Grounding in Collaborative Systems.” In Psychological Models of Communication in Collaborative Systems: Papers from the Aaai Fall Symposium, 124–31. North Falmouth, MA, USA.

A. M. Turing. 1950. “Computing Machinery and Intelligence.” Mind LIX (236): 433–60. https://doi.org/10.1093/mind/LIX.236.433.

Takuma Udagawa and Akiko Aizawa. 2019. “A Natural Language Corpus of Common Grounding Under Continuous and Partially-Observable Context.” In Proceedings of the Aaai Conference on Artificial Intelligence, 33, 01:7120–7. Hilton Hawaiian Village, Honolulu, Hawaii, USA.

Takuma Udagawa and Akiko Aizawa. 2020. “An Annotated Corpus of Reference Resolution for Interpreting Common Grounding.” In Proceedings of the Aaai Conference on Artificial Intelligence, 34, 05:9081–9. Hilton New York Midtown, New York, USA.

Oriol Vinyals and Quoc Le. 2015. “A Neural Conversational Model.” arXiv Preprint arXiv:1506.05869.

Thomas Visser, David Traum, David DeVault, and Rieks op den Akker. 2012. “Toward a Model for Incremental Grounding in Spoken Dialogue Systems.” In Proceedings of the 12th International Conference on Intelligent Virtual Agents. Santa Cruz, CA, USA.

Thomas Visser, David Traum, David DeVault, and Rieks op den Akker. 2014. “A Model for Incremental Grounding in Spoken Dialogue Systems.” Journal on Multimodal User Interfaces 8 (1): 61–73.

Douglas Walton and David M. Godden. 2006. “The Impact of Argumentation on Artificial Intelligence.” In Considering Pragma-Dialectics: A Festschrift for Frans H. van Eemeren on the Occasion of His 60th Birthday, edited by Peter Houtlosser and Agnes van Rees, 287–99.

Douglas Walton and Erik C. W. Krabbe. 1995. Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. SUNY press.

Douglas N. Walton. 1984. Logical Dialogue-Games. University Press of America, Lanham, Maryland.

Zhiyang Wang, Jina Lee, and Stacy Marsella. 2013. “Multi-Party, Multi-Role Comprehensive Listening Behavior.” Autonomous Agents and Multi-Agent Systems 27 (2): 218–34.

Lauren Winston and Brian Magerko. 2017. “Turn-Taking with Improvisational Co-Creative Agents.” In Proceedings of the Aaai Conference on Artificial Intelligence and Interactive Digital Entertainment. Vols. 13, 1. Snowbird, Utah, USA.

Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. “The Hidden Information State Model: A Practical Framework for Pomdp-Based Spoken Dialogue Management.” Computer Speech & Language 24 (2): 150–74.

Tangming Yuan, David Moore, and Alec Grierson. 2004. “Human-Computer Debate, a Computational Dialectics Approach.” Unpublished Doctoral Dissertation, Leeds Metropolitan University.

Yiqian Zou. 2020. “An Experimental Evaluation of Grounding Strategies for Conversational Agents.” Master’s Thesis, Department of Philosophy, Linguistics and Theory of Science, University of Gothenburg.


Notes

1 http://www.sfb360.uni-bielefeld.de/

2 We are aware that Clarification Requests are generally used to correctly update the common ground. Nevertheless, the term Common Ground CR refers here, as in the mentioned studies, to requests used to check what is already stored in the common ground.


About the author

Maria Di Maro

Interdepartmental Center for Advances in Robotic Surgery. Department of Electrical Engineering and Information Technology. University of Naples “Federico II”, Italy. E-mail: maria.dimaro2@unina.it


Copyright

CC-BY-NC-ND-4.0

The text only may be used under licence CC BY-NC-ND 4.0. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
