[Figure: manual annotation versus automatic annotation (Bigbee et al. 2001)]
It is unfortunate that there is still today an enormous gap between the community of linguists and phoneticians on the one hand and that of engineers and computer scientists on the other. Each community needs the other and, in an ideal world, linguists would provide theoretical frameworks and data which are useful to engineers, while engineers would provide tools which are useful to linguists. The exchange between the two communities, however, is in practice very slow.
(D.J. Hirst 2006: 198)
When multiple annotations are integrated into a single data set, inter-relationships between the annotations can be explored both qualitatively (by using database queries that combine levels) and quantitatively (by running statistical analyses or machine learning algorithms). However, when such multi-layer corpora are to be created with existing task-specific annotation tools, a new problem arises: output formats of the annotation tools can differ considerably.
(Chiarcos et al. 2008)
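To make the idea of a cross-level query concrete, the sketch below combines two annotation tiers in a few lines of Python. The tuple-based tier representation, the function names and the example labels are assumptions chosen for illustration, not the data model of any particular annotation tool.

```python
# A minimal sketch of a cross-level query over a multi-layer annotation set.
# Each tier is a list of (start, end, label) tuples; this representation is
# an assumption for the example, not a specific tool's format.

def overlaps(a_start, a_end, b_start, b_end):
    """True if the two time intervals share any portion of time."""
    return a_start < b_end and b_start < a_end

def query_cooccurring(tier_a, tier_b, label_b):
    """Return annotations of tier_a that temporally overlap an
    annotation of tier_b carrying the label label_b."""
    hits = [b for b in tier_b if b[2] == label_b]
    return [a for a in tier_a
            if any(overlaps(a[0], a[1], b[0], b[1]) for b in hits)]

# Example query: which tokens co-occur with a rising pitch movement?
tokens  = [(0.00, 0.35, "so"), (0.35, 0.90, "really"), (0.90, 1.20, "yes")]
prosody = [(0.20, 0.90, "rise"), (0.90, 1.20, "fall")]
print(query_cooccurring(tokens, prosody, "rise"))
# -> [(0.0, 0.35, 'so'), (0.35, 0.9, 'really')]
```

A real query engine would of course work over files rather than in-memory lists, which is exactly why the differing output formats mentioned in the quote become a problem.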
With the help of multimodal corpora searches, the investigation of the temporal alignment (synchronized co-occurrence, overlap or consecutivity) of gesture and talk has become possible.
(Abuczki and Baiat Ghazaleh 2013)
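Such a search reduces to classifying the temporal relation between a gesture interval and a speech interval. The sketch below makes the three-way distinction from the quote explicit; the function name, the tolerance parameter and its default value are assumptions introduced here, not part of any cited tool.

```python
# A sketch classifying the temporal relation between a gesture interval and
# a speech interval as co-occurrence, overlap, or consecutivity.
# The 40 ms tolerance is an assumed value for the example.

def temporal_relation(gesture, speech, tolerance=0.04):
    """Label how a (start, end) gesture interval relates in time to a
    (start, end) speech interval, with a small tolerance in seconds."""
    g_start, g_end = gesture
    s_start, s_end = speech
    if abs(g_start - s_start) <= tolerance and abs(g_end - s_end) <= tolerance:
        return "co-occurrence"   # synchronized: same start and end
    if g_start < s_end and s_start < g_end:
        return "overlap"         # the intervals share some portion of time
    if min(abs(g_end - s_start), abs(s_end - g_start)) <= tolerance:
        return "consecutivity"   # one immediately follows the other
    return "none"

print(temporal_relation((1.00, 1.50), (1.02, 1.49)))  # co-occurrence
print(temporal_relation((1.00, 1.50), (1.30, 2.00)))  # overlap
print(temporal_relation((1.00, 1.50), (1.52, 2.00)))  # consecutivity
```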
The Automatic Annotator time-aligns descriptive data, organized into tiers such as Phonetics, Prosody, Syntax and Discourse, with the recorded signal. The expected result is time-aligned data for all annotated levels, such as Phonetics, Prosody, Gestures, Syntax and Discourse, as sketched below.
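As a rough illustration of what such a time-aligned, multi-level result might look like in memory, the sketch below stores one interval list per level and retrieves every label covering a given time point. The tier names follow the text, but the intervals and labels are invented for the example.

```python
# A minimal sketch of a time-aligned, multi-level annotation result.
# All values below are illustrative, not output of an actual annotator.

annotated = {
    "Phonetics": [(0.00, 0.12, "h"), (0.12, 0.30, "e"), (0.30, 0.55, "l")],
    "Prosody":   [(0.00, 0.55, "rise")],
    "Syntax":    [(0.00, 0.55, "INTJ")],
    "Discourse": [(0.00, 0.55, "greeting")],
}

def labels_at(tiers, t):
    """All annotation labels, per level, covering time point t (seconds)."""
    return {name: [lab for s, e, lab in ivs if s <= t < e]
            for name, ivs in tiers.items()}

print(labels_at(annotated, 0.20))
# -> {'Phonetics': ['e'], 'Prosody': ['rise'], 'Syntax': ['INTJ'],
#     'Discourse': ['greeting']}
```

Because every level is anchored to the same time axis, queries across levels (as in the examples above) need no further alignment step.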