- 1 Sign languages display several meanings distributed on several articulators at the same time, unlik (...)
- 2 The location (LOC) is the place where the hand is placed at the beginning of a sign.
1All SLs and over 90% of all Vocal Languages (VLs) have not yet developed or acquired a writing system. Linguistic study, however, requires collecting, grouping and classifying data. To do so, one needs to use a typeface that represents language in a graphic form (Slobin et al. 2001). The aim of this paper is to present Typannot, a typographic system able to: 1) transcribe the multi-linearity1 of SLs; and 2) solve the time-consuming issue of SL transcription. In this paper, we will exemplify the principles of Typannot through the parameter of handshape (HS), although the system is also able to represent other SL parameters: orientation, location2 (LOC), movement, and facial expression.
- 3 In SL, the distance between the thumb and the index finger carries meaning. This handshape seems to (...)
2To start, we will bring to light the stakes and issues in SL that call for a new tool. The first issue is that SLs are multi-linear3 languages that challenge the VL-centric conception of representation (see §2). Some Sign Language transcription systems (TranSys) have already been proposed and we will present them through the perspective of translation/identification, on the one hand, and transcription of forms on the other hand (see §3). The following review of existing SL TranSys provides cues to understanding the gap between the purpose of each TranSys and the reality of SLs and their notations. The second issue pertains to the transcription scope, i.e. data searchability: transcriptions are chiefly made for database queries. What is the structure of the information and how can we design a system to represent the combination of its components in an exhaustive way (see §4)?
3Next, we will present the four design principles that we developed as working guidelines: genericity, readability, modularity, scriptability. Through an interdisciplinary reflection involving linguists, designers, and developers, we define the junctions between the different working frames, from linguistic modelization to a systematic typographic system (see §5). We applied these principles to design an initial typeface that represents the parameter of handshape (HS) (see §6). With the typographic system in place, we had to develop our own typesetting tool and to propose input options that utilize current technology such as OpenType format, gesture capture, and recognition solutions (see §7). Apart from the representation of the forms of SL’s parameters, another problem we tackle is the reduction of transcription time.
4Finally, we will present a corpus in French Sign Language (LSF) that has been transcribed entirely using Typannot HS generic characters. We also annotated the LOC and orientation of the palm in a graphemic form. Some preliminary results will be presented (see §8).
5VLs with an “oral-only” tradition share the phono-acoustic channel with VLs having an “also written” tradition, so it is possible to represent them using adaptations of existing phonographic systems. SLs, however, are based on the gestural-visual channel, which requires creating a different system of representation. One consequence for SL concerns the constraints of the linearity of writing: unlike VL writing, which is mono-linear because speech production itself is mono-linear (phonemes are sequentially disposed [Linell 2004]), the visuo-gestural channels of SL production are multi-linear. To date, no existing writing system has represented multi-linearity: from inscription (traditionally done with some kind of nib: pen, quill, brush, etc.) to the reading phase, all writing practices are built around, or structured by, VL mono-linearity, even in ideographic systems. Traditional writing systems are clearly inappropriate for addressing SL’s iconic and multi-linear nature, considering that SL’s “oral” articulators (fingers, hands, forearms, arms) both inform the sign as a gesture at a phonological level and transfer meaning at a semantic level. The problem of SL multi-linearity is not a reading issue but, more deeply, an inscription issue: a conductor is able to read a symphonic score despite the significant number of staff lines, but the composer is not able to write several “voices” simultaneously. VLs are traditionally hand-written with a tool (e.g., a pen or a brush) that delivers a line by moving a single point, whereas SL signs are “drawn” in space with several articulators (fingers, hands), as if signers were using their body as a writing tool.
6Luckily, computers split the inscription support (hard disk) from the reading support (screen). This separation opens up a new way of writing: not drawing (pen) but typing (keyboard). Typographic technology can automate the selection of specific glyphs using a combination of keystrokes or selections in a graphic user interface (GUI). The HS, or any other SL parameter, can also be captured as a whole by a motion capture system and transformed into a glyph. Thus, the traditional single-tip writing tools can be replaced by a gestural form of input: the glyph is selected rather than produced, in what is an essentially multi-punctual process.
7If the multi-linearity of SL production can be captured by a motion capture system, the diversity of the features intrinsic to a HS (the selection of fingers, their shapes, the angles, the relationships between the fingers; see §6 for details), or to any other parameter, should also be embodied in the generated glyph. Hence the necessity to solve both issues: the inscription itself and the encoding of the corresponding information (see §4).
8In recent years, modern audiovisual technologies have allowed the creation of large SL corpora (Blanck et al. 2010, Braffort 2016), granting more in-depth exploration of how SLs function. But existing technologies cannot overcome the lack of an effective representation system for SL. To work around that problem, researchers use glosses or type font systems (Johnston 2008; Fenlon et al. 2015). Both solutions have pros and cons.
9The so-called “glosses” are mono-linear verbal labels (in the researcher’s VL) providing the (supposed) meaning of every sign by doing a sign-by-sign translation from SL to VL. On the one hand, they allow fast, easy and searchable transcriptions of large amounts of data, thanks to their use of specific video-tagging annotation tools (like ELAN - Crasborn & Sloetjes 2008 - or iLex - Hanke 2002); on the other hand, they are influenced by VL syntax and semantics and do not, therefore, provide any information about the sign form. The use of glosses prevents the identification of form-meaning patterns and may conceal SL-specific phenomena. To avoid some of these problems, Johnston (2008) has developed the system “ID-gloss” (see §3.1).
- 4 For an overview see Bianchini (2012), Boyes-Braem (2012) and Crasborn (2015).
10Typographic systems4 consider signs on the basis of their form. Most of them are inspired by the pioneering notation developed by Stokoe (1960) and, like this first typographic system, they offer just a mono-linear view of SL (cf. Figure 1), dividing signs into four main parameters: HS, orientation, LOC, and movement of the hand; only a few typographic systems also describe facial expression and body posture. Typographic systems are easy to read and are not influenced by VLs, but they are usually tedious to learn and use because of the large number of characters and their weak iconicity, and some of them do not ensure searchability. At present, two different typographic systems circulate more broadly than others: HamNoSys (Prillwitz 1989; see §3.2), developed for research purposes, and SignWriting (Sutton 1995; see §3.3), mostly dedicated to education. The different aims pursued by Prillwitz and Sutton mean that HamNoSys is particularly suited to computer-assisted linguistic research and focuses mainly on conveying the form of signs, while SignWriting is more effective for handwriting and tends to convey the meaning of signs as effectively and quickly as possible.
Figure 1. The American Sign Language sign for Goldilocks
in Stokoe Notation, HamNoSys and SignWriting
Source: www.signwriting.org
11For Johnston (2008), “an ID-gloss is the (English) word that is consistently used to label a sign within the corpus, regardless of the meaning of that sign in a particular context or whether it has been systematically modified in some way”. ID-glosses are not sign translations but highly standardized labels, made to analyze lexical signs (Johnston 2011). Like traditional glosses, they are mono-linear representations, but one fundamental difference is that an ID-gloss must be linked to a SignDatabase, which shows the shape of the reference sign and all its form variations. Via the SignDatabase, a researcher can decide whether to associate a sign with an existing ID-gloss or to create a new one. To build an effective SignDatabase, it is mandatory to have a system for representing signs.
12A solution for linking ID-glosses with sign shapes is to use the Hamburg Notation System (a.k.a. HamNoSys; Prillwitz 1989) within the iLex annotation tool. HamNoSys is a mono-linear typographic system derived from Stokoe’s notation, developed for lexicographic purposes with the aim of becoming a kind of International Phonetic Alphabet (IPA) for SLs. HamNoSys characters describe the four manual parameters of SL (and, marginally, some facial expressions) in a quite compositional way (cf. Figure 2). HamNoSys performs well in digital environments: it has been codified under the Unicode standard and ensures machine readability, scriptability and searchability. But this computational ease of use does not translate into user-friendliness: even though its individual characters are quite iconic, learning, writing and reading a HamNoSys string is difficult because of the strings’ complexity and low overall iconicity.
Figure 2. HamNoSys linear organization
Source: Smith 2013
- 5 The purpose of SW is to allow the "writing" of SL, that is, to express concepts directly in writte (...)
13SignWriting (SW; Sutton 1995) is a typographic system developed to write5 (not to transcribe) SLs; it is made up of over 35,000 characters, which enables both manual and non-manual parameters to be written. Visually, SW is different from any other existing typeface, because it is the only one that tries to take into account SL’s multi-linearity, using highly iconic characters placed in a bi-dimensional vignette that represents the signing space (cf. Figure 1): these features make it very legible and quite easy to learn, but handwriting can be laborious. SW has been developed for educational and cultural purposes; for this reason, even though dedicated SW software exists (e.g., SignMaker), it does not meet researchers’ need for searchability. In 2017, SW entered the Unicode standard, but non-dedicated software (like MS Word) cannot support the bi-dimensional layout of SW, which must be converted into a mono-linear string and, in so doing, loses all the information conveyed by the use of space.
14Therefore, except in a few corpora (Efthimiou et al. 2010; Hanke et al. 2012), SL parameters are not really annotated, or only partially. One of the reasons is the time needed to do this, even though it has rarely been quantified (Colletta et al. 2009:57). This time-consuming activity could be reduced if each glyph inserted into the annotation software carried several pieces of information (in ELAN, for instance, this would help reduce the number of tiers used to annotate sign forms to one).
15In the next section, we will present our typographic system (one for each parameter: HS, LOC, movement, and facial expression), in which every glyph embeds layered information for sign representation by way of generic characters (see §4.1). This system allows both a linear organization (like a text) and a multi-linear organization (like a score) of the information within the textual space. This formal transcription allows the concatenation of several layers of features: glyphs, as collections of bricks of information, can thus each represent one parameter. This concatenation renders the features searchable, despite the fact that we cannot see their forms in the glyph, and allows data queries into more than what is visibly transcribed (see §4.2). In the last sub-section (see §4.3), we present in detail the four principles upon which our approach is based. The general architecture of Typannot will be explained in §5, using the most complete parameter: HS.
16Three layers of information are embedded within each SL sign: 1) the parameter (1st parametric layer); 2) the parts (2nd parts layer); 3) the combination of features (3rd featural layer). For instance, handshape (HS) is a parameter and therefore an example of the 1st parametric layer. The fingers, of which all five could be selected, are an example of the 2nd parts layer. The 3rd featural layer holds the features of those parts: in our example, we could say that all five fingers (2nd parts layer) of the HS (1st parametric layer) are “open”; this angle of the fingers is represented by the 3rd featural layer.
Figure 3. Three layers of information in SL signs
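To make this layered structure concrete, the following minimal sketch (in Python, the language already used for the project’s Robofont tooling) models the three layers as nested data structures; the class and field names are illustrative, not the actual Typannot data model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Feature:              # 3rd featural layer, e.g. an angle value
    category: str           # e.g. "angle", "shape", "contact"
    value: str              # e.g. "open", "flat"

@dataclass
class Part:                 # 2nd parts layer, e.g. one finger
    name: str               # e.g. "thumb", "index"
    features: List[Feature]

@dataclass
class Parameter:            # 1st parametric layer, e.g. the handshape (HS)
    name: str               # "HS", "LOC", "movement", ...
    parts: List[Part]

# The example from the text: a HS in which all five fingers are "open".
hs = Parameter(
    name="HS",
    parts=[Part(name=f, features=[Feature("angle", "open")])
           for f in ("thumb", "index", "middle", "ring", "pinky")],
)
print(len(hs.parts))        # 5
```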
17The Typannot type fonts are built up from the 1st parametric layer. The 2nd parts layer contains glyphs and each glyph is constructed by using a combination of generic characters (type font character sensu Unicode), which correspond to the 3rd featural layer.
18To build the list of generic features necessary for Typannot development, two approaches have been used: 1) if a well-established list of items already exists, we follow a phonological approach; 2) if, for any reason, the list of items is missing or insufficient, we establish it by following a formal approach, analyzing the anatomical or kinesiological characteristics of the parameter. The first approach has been used for HS, where an initial list of 237 items (inspired by Eccarius & Brentari’s 2008 study of HSs in 9 SLs) forms the core from which the features available in the Typannot HS description are extracted. For movement and LOC, however, we used the second approach and built an ad hoc list of features.
19Another challenge for an SL TranSys is to follow a systematic framework at the glyphic level in order to retain the integrity of the information that it encodes. The goal is to be able to query the various layers of description that make up a sign, rather than simply analyzing their syntactic or semantic functions. Analyzing SL at low (phonetic) levels opens up the structure of the parameters and, through them, that of the sign. Our practice of systematic low-level transcription advances the exhaustive transcription of sign components. Eventually, corpora that are described in such detail intrinsically hold more value, since they can be queried at more levels by more users. Presently, with our typographic system, we annotate the 3rd featural layer and we are able to search all three layers. By comparison, each tier in the ELAN software can carry only one piece of information; in some cases, the same information could therefore only be transcribed inefficiently, using six entire tiers per hand.
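As a hedged illustration of this kind of multi-level query (the formula format and feature labels below are invented for the example, not the actual Typannot encoding), a single annotation tier holding complete formula strings can be filtered at the featural, parts or parametric level:

```python
# Each annotation cell holds one complete formula string; the
# "category=value" rendering and the labels are illustrative only.
annotations = [
    "hand=right|finger=thumb|shape=flat|angle=open",
    "hand=right|finger=index|shape=curved|angle=closed",
    "hand=left|finger=thumb|shape=flat|angle=closed",
]

def query(cells, **features):
    """Return every annotation containing all the requested feature values."""
    return [c for c in cells
            if all(f"{cat}={val}" in c for cat, val in features.items())]

print(query(annotations, finger="thumb", shape="flat"))
# -> the two thumb annotations, retrieved from a single tier,
#    with no dedicated "finger" or "shape" tier in ELAN
```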
20As already discussed (see §4.1), Typannot is based on three layers of information for each parameter described. Those layers allow translating the intrinsic dimensions of the SL sign, as it is perceived, into a systematic representation framework that can be grounded in two elementary perspectives: 1) the technical and typographical context of transcription, formed by the pragmatic and specific conditions in which the design process of a TranSys is conceived; 2) the modelization and analytical vocations inherent to the work of transcription, which imply a special reflection on the way data can be viewed, assembled and searched. We identified four principles that address the fundamental requirements for our TranSys: genericity, readability, modularity and scriptability.
21Transcription fundamentally differs from representation because it requires achieving discreteness. The purpose of transcription goes beyond recognition and comprehension, which can be seen as standard tasks. To design a TranSys we need to decompose language down to its lowest distinctive units, paradoxically making it unrecognizable from the natural perspective of language. This change of dimension can be compared to a phonemic reduction and constitutes an abstract representation of the parts of a sign. In order to move away from a phonetic dimension (where forms are seen from an “oral” perspective), we have to create a conceptual level of representation that identifies every single element of a sign: in simpler words, it is not “how it looks from outside” but “what it is made of inside” that interests us. For example, the fingers as perceived in a HS are described by six generic features (3rd featural layer): hand, finger, event, shape, angle, contact (see §5). This concept allows a constructivist approach to transcription that is built on the articulatory nature of gesture. Genericity can thus be understood as a level of reduction of information that allows the system to offer a symbolic and systematic space able to model any production of a parameter for any SL.
22A TranSys requires not only a model that systematically characterizes each parameter but also a coherent internal organization. While the body parts have their own physical space and natural organization, transcription must follow the linear space of the writing form in which most data are input, i.e. a text. This approach requires a second radical shift of perspective: conceptualizing a linear architecture in which the units of information can be consistently and logically assembled, i.e. a syntax. Generic notions also help organize information within this textual space; as we have defined categories of generic information (e.g., fingers, shape, angle, events, etc.), we are able to assign them an invariable position and organization in an exhaustive string of data representing the globality of a parameter. This syntax is essential to ensure data integrity and compatibility when transcriptions are communicated beyond the individual annotator’s sphere, compared to each other, or collaboratively edited. Thus, bricks of information are assembled in a so-called nomenclature that will be an essential criterion when it comes to reading, writing and, of course, searching.
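The sketch below illustrates, under simplifying assumptions, what such a fixed-order nomenclature could look like for one finger: the six generic categories mentioned above (hand, finger, event, shape, angle, contact) are serialized in an invariable order; the textual “category=value” rendering is ours, not the actual Typannot syntax, which is given in Table 1.

```python
# Hypothetical fixed order of the generic categories for one finger;
# the actual Typannot nomenclature is defined in Table 1.
CATEGORY_ORDER = ["hand", "finger", "event", "shape", "angle", "contact"]

def serialize(description: dict) -> str:
    """Assemble the bricks of information into one exhaustive, fixed-order string."""
    return "|".join(f"{cat}={description.get(cat, '-')}" for cat in CATEGORY_ORDER)

index_open = {"hand": "right", "finger": "index", "shape": "flat", "angle": "open"}
print(serialize(index_open))
# hand=right|finger=index|event=-|shape=flat|angle=open|contact=-
```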
23But how can Typannot guarantee that such a level of exhaustivity remains readable for the annotator (see §3 for compromises made by SignWriting or HamNoSys)? And how can readability be defined when talking about SLs?
- 6 The term refers to a system capable of reproducing and maintaining itself and is linked to Maturana (...)
24Specialists of VL writing systems have heated debates on how to define readability, and most of their positions are characterized by a verbo-centric point of view (e.g. the writing system is the representation of the phonemes in a Latin typeface). To step out of this traditional attitude, we will turn to the pioneering work of the French neurophysiologist Jacques Paillard, who offers a clear and relevant definition of the concepts of readability and legibility. In Paillard’s view (1974), an essential part of a living organism’s activity is devoted to collecting information in order to continuously adapt itself to the changing environment, thus in-forming6 (Maturana & Varela 1980) its activity. He describes the conditions of constitution of information in these terms:
There is no such thing as usable information without an organized structure, extractable from the space-time environment in which it arises as a singular and distinguishable element. In the “eyes” of the organism, it is the spatial and temporal organization of the accessible bits that carry the in-forming qualities of any informational food. The “signifying” nature of this informational food is only granted on the ground of the level of regularity, stability and reproducibility achieved in such an organization. (Paillard 1974, 9)
- 7 Readability refers to the way letters are arranged to form a readable word, phrase or text. Legibil (...)
25Writing, as a graphical information system based on inscriptions produced by the organism’s own activity, can achieve meaning through a process of organization and distinction based on visuo-spatial regularities operated from a visuo-gestural modality. From this reflection we can outline two interesting ideas for our issues: 1) distinctivity (discreteness) is at the basis of the “informational food” structure and 2) this structure has a regular and stable organization in time and space. Here we can recognize two universal tenets that characterize all existing writing systems (whether they are alphabetic, syllabic or ideogrammatic) and can be regarded as defining the notions of readability and legibility7 (Mc Monnies 1999).
26In writing systems, “letters” are achieved through the management of formal parameters (e.g., proportion, orientation, partition, disposition, repetition, etc.); in the case of SLs, the production of such regularities could imply the integration of the enacted structure of the language (as lived and perceived in the stable frame of reference that is the body) into the graphical space of a writing system. Although SLs cannot claim a long record of writing evolution, they do have this inherent frame of reference that is stable and regular: one’s “own” body and the “lived” experience of language that is perceived through it.
27In order to meet the readability criterion, Typannot must take into account both the phonological and logographical format of information. This is a unique opportunity to challenge the modal and semiotic rupture that traditionally occurs between speech and writing (voco-acoustic vs. visuo-gestural). Indeed, writing does involve reduction and conventions but, in the case of SL, it can do so by following the fruitful relation between two extremes of writing forms: the “image” of reality as an analogical movement toward the referred idea (the signed body through a logographical perspective), and the modularity of a graphical decomposition of information (the parts and variations that compose that signed body through a phonological perspective).
28To conclude, an SL TranSys needs to display the same information in two formats: 1) a generic form that visualizes the bits of information in a syntax that confers stability and regularity on the code architecture from the perspective of corporal and gestural models; 2) a composed form that integrates symbolic translations of the parts in an analogic space of representation (an image of the signed body). For that reason, we designed both forms of representation and give Typannot users the ability to seamlessly and consistently view/read one or the other while retaining data integrity. Progress in typographic technology allows us to design this dual perspective using the OpenType font format and its functionalities (like ligatures), which are nowadays widely implemented in all text-editing environments. This way, the principles of genericity and readability can be achieved within the limited typographical dimension of the glyph. This readable logographic form is also an ongoing ethical commitment of our team to provide more accessible tools for both linguists and signers.
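By way of illustration only (a stand-in for the real OpenType substitution tables shipped in the Typannot fonts, with invented glyph names), the sketch below mimics what a ligature substitution does: a sequence of generic characters is displayed as one composed glyph, while the underlying generic string remains what is stored and searched.

```python
# Invented glyph names; the real composed glyphs are produced by the
# Typannot OpenType fonts through ligature substitutions.
LIGATURES = {
    ("gen.finger.index", "gen.shape.flat", "gen.angle.open"): "composed.index_flat_open",
}

def shape_run(glyph_run):
    """Greedy, simplified stand-in for an OpenType ligature lookup."""
    out, i = [], 0
    while i < len(glyph_run):
        for seq, liga in LIGATURES.items():
            if tuple(glyph_run[i:i + len(seq)]) == seq:
                out.append(liga)            # the composed form is displayed...
                i += len(seq)
                break
        else:
            out.append(glyph_run[i])
            i += 1
    return out

run = ["gen.finger.index", "gen.shape.flat", "gen.angle.open"]
print(shape_run(run))   # ['composed.index_flat_open']
# ...while the stored generic characters remain searchable.
```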
29Nevertheless, the issue of retaining information in both forms should not hide the fact that our multi-level phonographic decomposition generates a large number of possible combinations. Such a logographic format, based on extremely large variation pools, raises the question of “imbrication”, and thus of modularity in a non-linear construction space. For this format to actually function in a variable environment like SL, we need to devise a systematic framework. How can we define modularity, and how can Typannot use it to solve the question of massive combination possibilities?
30The Chinese writing system is an interesting example of massive glyphic combination. At the core of its structure is a small set of radicals (214) that have phono-semantic values. These elementary components are assembled into more complex characters to represent words with related meanings or sounds. This principle of modularity can be adopted to translate the visual structure of the body and gesture into the Typannot glyphic framework. The generic levels of description of a parameter can actually be visualized in a set of modules that share visual and spatial analogies with the SL sign (see the example of HS in §5).
31During Typannot development, we searched for the best forms to represent SL features while assessing their ability to be assembled and remain legible. Such conventions cannot be formed arbitrarily. The Typannot modular system is the result of a back-and-forth between designers (who carefully evaluated all the solutions they proposed) and future users (taking into account the way they perceive and understand the system), and it remains open.
32But designing the Typannot “modules” solves only half of the problem of representing all the combinations of parts and features for every single SL parameter. Designing the thousands (or millions) of possible combinations by hand is impossible, so it is mandatory to find a procedure to automate this work. In our glyphic framework, the design space is constant because it is a conventionalized projection of the body and its features. We could therefore write an algorithm that puts each module in the right place and in the right order, solving the glyphic integration problem. Our algorithm runs in Robofont, font-production software scripted in the Python language. It allows the automatic generation of all possible combinations inside a single parameter. At present, we have already generated all possible combinations for HS. The same procedure will be used for all the other parameters represented by Typannot.
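A heavily simplified sketch of this kind of generation step is given below (plain Python, without the Robofont-specific drawing calls, and with made-up feature values and coordinates): each feature combination yields one glyph description whose modules are placed at fixed positions in the constant design space.

```python
from itertools import product

# Made-up feature values; the real lists come from the Typannot HS model.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
SHAPES = ["flat", "curved"]
ANGLES = ["open", "semi-closed", "closed"]

# Fixed positions of the modules in the glyph's design space (illustrative units).
SLOT_X = {"thumb": 0, "index": 100, "middle": 200, "ring": 300, "pinky": 400}

def build_glyph(finger, shape, angle):
    """Describe one composed glyph: which module goes where."""
    return {
        "name": f"hs.{finger}.{shape}.{angle}",
        "components": [
            {"module": f"mod.{shape}", "x": SLOT_X[finger], "y": 0},
            {"module": f"mod.{angle}", "x": SLOT_X[finger], "y": 150},
        ],
    }

glyphs = [build_glyph(f, s, a) for f, s, a in product(FINGERS, SHAPES, ANGLES)]
print(len(glyphs))  # 30 combinations generated automatically
# In the real workflow, phonological and kinesiological rules then filter out
# impossible combinations before the glyphs are drawn in Robofont.
```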
33Last but not least, we need to consider how the user will input this new type of graphical representation of SLs. Although a writing system inherently points toward the act of writing, should we still consider it through the perspective of the pen and the hand that controls it? Or maybe this traditional modality is not as relevant here?
- 8 A previous research project of the GestualScript team, called Photocalligraphy, investigated the re (...)
34In this article we explore how the act of writing and the semiotic modalities of language could be articulated, as they both share a visuo-gestural dimension. Earlier we presented the principles that allow the system to depart from the naturally perceived perspective of orality and achieve genericity, readability and modularity. This working process also brought us back to the intrinsic dimension of the SL sign: the corporal experience allowed by visual analogy. In a way, the act of tracing those analogical glyphs and modules is ultimately a gestural activity that will help signers relate to a form of writing that goes beyond the image and reinstalls them in their language, as an experience felt through the body and gestures8. We hypothesize that such an analogy could trigger the construction of a new kind of relation between language and writing (even if it is necessary to relativize such a practice in light of the scientific context of transcription): the inscription of the signer’s linguistic experience into a typographic representation of SL from an intrinsic perspective (rather than an external point of view). In short, it has more to do with “what and how one does” than “how one looks”.
35Still, transcription and analysis of SL corpora are essentially carried out in a digital environment rather than through an analogical modality like handwriting. This brings our attention to the issue of digital input methods. Each established writing system comes with its keyboard layout. Even Chinese writing has its own keyboards, and users have gone so far as to emulate traditional entry by tracing characters with a finger on a trackpad. To create the Typannot input interface, we decided our task was not to map our system onto a classical mechanical keyboard. Starting from a “blank sheet”, we explored new solutions like graphical user interfaces and motion capture systems (see §7).
36Typannot can transcribe, at a low (phonological) level, every existing SL (142, according to Simons et al. 2018). As already mentioned, Typannot’s peculiarities are: a) it takes into consideration the parametric, parts and featural layers of sign information (see §4.1); b) it is based on four underlying principles (genericity, readability, modularity and scriptability; see §4.3 for a detailed explanation); c) it is built on three levels of representation (a graphemic formula, a set of generic characters, and a typographic font made of composed forms).
37To explain how the three layers of representation work together, we will continue using the HS parameter. Note that the Typannot framework developed for HS may also be applied to all other manual and non-manual parameters (at the moment, we are developing LOC, movement and facial expression, but more parameters may come).
- 9 Note that the greatest difference between the work done for the HS and that for the other paramete (...)
38The graphemic formula is an ordered list of features that are relevant for the description of SLs. Since HS is the most investigated parameter, lists of occurring HSs already exist, and several researchers have already proposed a phonological analysis of this parameter (e.g., Liddell 1990; Brentari 1998). To build its list of generic information on HS, Typannot also uses a phonological point of view (§4.1): starting from Eccarius and Brentari’s (2008) analysis of HS features, Typannot retains 21 features that are relevant for the analysis and groups mutually exclusive features into categories (i.e., event, shape, etc.; see Table 2)9.
39The generic characters are the translation of the selected features (i.e., 21 for HS) into graphical forms; to compose a complete HS, these characters are then arranged in a linear way following strict syntactic rules (see Table 1). At present, to ensure portability and data queries in every software environment and operating system, each representation system needs to be recognizable by the Unicode Standard: to be sure to comply with all the Unicode Consortium requirements, Typannot has been developed from the very beginning to abide by its guidelines. Furthermore, thanks to its genericity and modularity, Typannot requires the formal recognition of just the generic characters (i.e., only 21 slots for HS) while, for example, SW needs 261 slots for HS.
Table 1. Syntax of the HandShape description
* means optional; {…} means repeat
Table 2. Typannot glyphs for HS
Figure 4. Example of a HS described with generic characters and represented by a glyph in composed form. Queries can be run on the generic characters or on glyphs
40To ensure easy readability and scriptability and the integration of several pieces of information into a single glyph (see §4.2), Typannot also provides a composed version of every HS described. The composed form (see Figure 4) is an iconic but highly standardized representation of the most salient features of the HS; it is automatically generated within an OpenType font through typographic ligatures. Nevertheless, permuting every generic character within the descriptive syntax leads to countless possibilities that, instead of allowing a deeper knowledge of HS forms, would blur the data into useless differentiations. For that reason, the Typannot team has issued some rules to narrow the list of possible HSs (going from billions of possibilities to fewer than 30,000 HSs), relying on phonological and kinesiological indications.
41Thanks to the algorithm used in Robofont (see §4.3.3), Typannot HSs have been “translated” into a font that will soon be downloadable (for free) for every operating system. The typefaces come with a dedicated virtual keyboard, which will enable users to select different parameters in order to combine Typannot glyphs (Boutet et al. 2018). The ongoing design and development of the Typannot Keyboard (see Figure 5) focuses on creating an accessible and user-friendly tool to write Typannot in a fast and easy way. The Typannot Keyboard is based on three interfaces, some of them still under development, each one having its own peculiarities and serving different purposes and work modes.
Figure 5. Typannot Keyboard homepage
42The parametric interface (see Figures 6 & 7) displays generic glyphs and leads the user to select, step by step, different parameters in order to create a HS and its corresponding glyph.
- 10 The AUTOCOMPLETE field displays signs that are close to the current input. GROUP refers to a select (...)
Figure 6. Typannot Keyboard parametric interface: structure of the interface10
Figure 7. Typannot Keyboard parametric interface: example of the composition of a glyph
43When configurations are selected, a 3D model and a glyph of the HS are displayed, allowing users to understand and verify the glyph combination; if needed, changes can be made directly using the configuration buttons or the formula line. The formula line gives feedback to the user and can be displayed as text (as in Figure 7) or switched to generic characters. When the combination matches the HS to be transcribed, the glyph is ready to be sent to different software. Typannot’s main purpose is to transcribe SL in the ELAN annotation tool, like any other typeface, but the keyboard can also be used with any other text software (Word, TextEdit, Google Docs, etc.).
44The gestural interface, using motion capture (MoCap, based on the Leap Motion device; Avola et al. 2014), is under development. It will enable users to transcribe glyphs by positioning their hand, in the desired HS, above a Leap Motion sensor that automatically captures its shape. When the right HS has been achieved, the corresponding glyph can be sent, in a single click, to any annotation or text-editing software. Some further development and adjustments need to be done before this technology can be fully operational, but a first set of tests enabled us to confirm a significant time reduction in transcribing corpora. It is worth noting that this interface paves the way to solving the multi-linearity issue of SLs by turning the signer’s body (or hand, for HS) into the “pen” with which signs are written.
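The core of such a gestural input can be sketched as follows (a minimal illustration, not the actual Leap Motion integration; the captured angles and thresholds are invented): raw flexion angles reported by a MoCap device are quantized into generic angle features, which can then be serialized and sent to the annotation software exactly like keyboard input.

```python
# Hypothetical flexion angles (in degrees) as a MoCap device might report them;
# the real pipeline would read them from the Leap Motion sensor.
captured = {"thumb": 10, "index": 15, "middle": 95, "ring": 160, "pinky": 170}

def quantize(angle_deg):
    """Map a raw flexion angle onto a generic angle feature (illustrative thresholds)."""
    if angle_deg < 30:
        return "open"
    if angle_deg < 120:
        return "semi-closed"
    return "closed"

features = {finger: quantize(a) for finger, a in captured.items()}
print(features)
# {'thumb': 'open', 'index': 'open', 'middle': 'semi-closed',
#  'ring': 'closed', 'pinky': 'closed'}
```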
45The semi-composed interface, also under development, offers the possibility to click directly on parts of a composed character, each one corresponding to more than one generic character (e.g., it will be possible to input, in a single click instead of two, the information “flat + semi-closed”, corresponding to two generic characters). The semi-composed characters, based on a glyphic solution, work like shortcuts and offer a very efficient tool without requiring any additional device.
46We are testing the different interfaces and updating functionalities and design to make them as user-friendly and efficient as possible. So far, our work has mainly focused on HS, but as soon as the three interfaces are fully operational for HS, the other parameters will be implemented too.
47The use of Typannot for the HS (complete typeface) and for the LOC parameter (only the graphemic formula) reveals some preliminary results concerning the presence of praxic gestures (the way we handle objects) in LSF (French Sign Language) and, therefore, the way praxis influences the form of symbolic gestures. Following Napier (1956), we differentiate between: 1) the precision grip posture, characterized by an opposition of the thumb (TOpp) with at least one finger, inducing an extension of the wrist; and 2) the power grip posture, characterized by a non-opposition of the thumb (TNOpp) with the other fingers and no specific extension of the hand. (Note that if a precision grip is used to seize light objects, an extension of the wrist is possible; this is not the case with power grips.)
48Beyond the influence of handling on symbolic gestures, a praxic approach (Siblot 1997) raises questions about the conditions in which sense is produced. Gestures, whether praxic or symbolic, share a common ground. SLs are the only languages that use the same medium to handle the world and to represent it. From this point of view, these languages are uniquely suited to a pragmatic approach alongside a simulated action framework (Hostetter & Alibali 2010), especially at a phonological level.
49For LOC, the stationary criterion of the extension of the wrist has been retained to differentiate the precision grip (with TOpp) from the power grip (with TNOpp). We expect that TOpp HSs should present an extension of the wrist, whereas TNOpp HSs should show no specific extension of the hand. This assumption is applied to an excerpt of LSF, of 1’38” duration, extracted from the LS-Colin corpus (Braffort et al. 2001), transcribed with the complete Typannot typeface for HS and with the graphemic formula for LOC. The latter codes the relative location of each segment (hand, forearm, arm) according to an intrinsic frame of reference (Levinson 1996, Boutet 2010).
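The kind of count reported in Table 3 can be sketched as follows (with invented example tokens, not the LS-Colin figures): each transcribed token pairs the thumb feature drawn from the HS formula with the wrist feature (extension or flexion) derived from the LOC formula, and the distribution is a simple cross-tabulation.

```python
from collections import Counter

# Invented tokens, not the LS-Colin data: each pair combines the thumb feature
# from the HS formula with the wrist feature derived from the LOC formula.
tokens = [
    ("TOpp", "extension"), ("TOpp", "extension"), ("TOpp", "flexion"),
    ("TNOpp", "flexion"), ("TNOpp", "extension"), ("TNOpp", "flexion"),
]

distribution = Counter(tokens)
for (thumb, wrist), count in sorted(distribution.items()):
    print(f"{thumb:6} {wrist:10} {count}")
# TNOpp  extension  1
# TNOpp  flexion    2
# TOpp   extension  2
# TOpp   flexion    1
```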
Table 3. Distribution of the extension/flexion of the hand according to the opposition of the thumb for all the encountered HSs. The dotted line represents the linear trendline.
50Despite the fact that some of the HSs gathered in the analysis have nothing to do with a grip and nevertheless display a TOpp or a TNOpp, these preliminary results show that the a priori association between thumb opposition and the relative location of the wrist, caused by praxic situations of differentiated grips, does have an impact on symbolic gestures such as signs. This result opens up a question about iconicity that is based not only on the visual representation of an image (McNeill 1992, Cuxac 2000), but also on physical interaction with the world (objects, entities, events) through our body. As far as we know, Typannot is the only TranSys that allows this kind of investigation.
51Typannot, a typographic system made to transcribe SLs (with one typeface for every parameter of SL: HS, orientation, LOC, movement and facial expression), focuses on the formal notation of all existing SLs. This would-be lingua franca (or, better, scripta franca) builds upon at least two preceding TranSys: HamNoSys and SignWriting. Like them, its general organization relies on SL parameters. Nevertheless, differences appear in the approach we used to design Typannot. The four principles underlying the creation of our typefaces allow the easy writing (scriptability) of concatenated information (genericity), corresponding to low-level features (modularity), into a highly readable glyph (readability). These principles are present in all Typannot typographic systems, which are based on three finely integrated levels of representation: a graphemic formula collects the features at a phonological level, when a closed list of items exists, or at a physiological level for the other parameters; on these graphemic lexemes, and according to the syntax used to express the formula, generic characters compose the core of the typefaces; last but not least, the composed forms are the readable glyphs that ligate the generic characters. All these levels are searchable, whether in composed or generic form.
52A virtual keyboard is required to compose these three levels of representation. Input can be done using one of the three user interfaces developed by the Typannot team: the parametric interface provides access to the generic characters and their compositions; the gestural interface enables users to inscribe the HS directly by placing the hand in the right HS, resolving the multi-linearity issue of SL and the difficulty of writing with a multi-point tool; the semi-composed interface proposes pre-combined generic characters to save time during transcription.
53The HS parameter is an example of how Typannot works and how it will be able to innovate SL linguistics research. Our design methodology is well established and now only needs to be applied to all the other SL parameters. At present, HS is complete; we have developed the graphemic formulas for LOC and movement, and we are well advanced on the generic characters for facial expression. For these parameters, we also have first versions of their generic characters, but further tests are needed to confirm their design. Once the design of a typeface is completed, it will be integrated into the Typannot Keyboard. The work on the parametric and semi-composed interfaces will be simple; for the gestural interface, it will be necessary to find the correct MoCap device (already done for the movement and LOC parameters) and to transform these data into generic characters. To complete the use of the Typannot system for transcription, we have already conceived an ELAN template that allows linguists to transcribe with all Typannot typefaces (using only the graphemic formula for those parameters that are not yet fully developed). To conclude, Typannot is an ongoing project that works at different semiological and practical levels, requiring several areas of expertise (hence our team of linguists, designers and computer engineers), and which will allow continued SL analysis to evolve in comprehensive and innovative directions.