
Digital Humanities in the Web 3.0 Era

Finding Relatedness: pathways for detecting textual relatedness in the medieval scholastic corpus

Jeffrey C. Witt

Abstract

To show the importance of first preparing historical editions as textual data, leaving presentation (whether in print or on the Web) as a secondary, downstream task, this article identifies the beneficial research results that can be obtained through computational analysis once such a corpus of textual data is at hand. Focusing on the deep intertextuality characteristic of the medieval scholastic corpus, it reviews three distinct methods for detecting different forms of textual relatedness within the corpus: n-gram intersections, document embeddings, and convolution. In each case, particular attention is paid to how the availability of a domain-specific knowledge graph helps us both to prepare the corpus properly for analysis and to visualize the results in ways that advance research. These results include observing trends in citation practices across different genres and sub-genres of the corpus, automatically grouping questions by similarity, and detecting sustained, uncited textual reuse.


Full text

1. Prologue: Digital Editions as Textual Data

Despite a growing interest in what are often loosely termed “digital scholarly editions,” we are still, as a field, far away from revolutionizing the way we document, preserve, and publish the historical record. This is especially true in the field of medieval scholasticism, where the new methodology is in no way mainstream, much less a scientific expectation. Quite the opposite: peer review of a scholarly digital edition today often means proving that the presented form of the digital edition can look something like a traditional print output. In contrast, the peer review of a traditional print edition carries no expectation that the edition in question meet the standards required for its data to be made genuinely machine accessible.

One obstacle to the full adoption of such a paradigm shift lies in the imprecision inherent in the phrase “digital edition”. For many, the phrase “digital scholarly edition” brings to mind a dynamic or interactive presentation of a text, for example, some kind of website. In this regard, the question of how those digital pixels found their way to the screen in front of the reader is as irrelevant as how the ink got printed on the page. The assumption here is that the end product of scholarly editorial work is the text we can read.

Accordingly, the choice between a digital or analogue edition is not relevant to the question of scholarly quality or scientific import. It is simply a matter of convenience: a question of where and how I want to read this text. It has nothing to do with the scientific quality of the text or the ability of the scholarly output to advance or hinder the field.

This line of thinking leads to a natural next question, which I have not infrequently received after describing the steps necessary to produce a digital edition. Is all this so-called “extra work” really worth it? The skeptic notes that they have already learned how to use one word processing application; why should they learn another? A reasonable workflow for producing a readable text already exists; why should they invest time in learning something different? And, if someone really wants to read the produced text on a computer or in a web browser, isn’t it more than sufficient to create a digital PDF version of the print rendering and then use the web as a vehicle to publish it? If “digital edition” merely means a presented text that can be read on an electronic device, then the skeptic has a strong point.

Indeed, the phrase “digital edition” has a certain fuzziness precisely because it carries with it the assumptions of the old paradigm. The notion of an “edition” conjures up something that someone reads. But herein lies the crux of the shift. We need to stop thinking about a scholarly edition as something that humans read and start thinking of it as something that gets processed. An edition needs to be understood first and foremost as textual data, prepared not for immediate consumption but for machine processing.

When an edition is scientifically prepared as textual data, it can be further processed to achieve many different outcomes. The edition can and will continue to be presented for human consumption in both print and electronic forms. But the edition is not confined or limited to these presentations. Instead, the same processable edition can be dissected, analyzed, manipulated, re-organized, compared, contrasted, and connected.

The flexibility of an edition of textual data means that, while offering a host of new possibilities, it can still achieve all the outcomes expected of traditional publication. Digital editions are often faulted for being less sustainable than printed editions. But this objection is moot when “digital edition” means textual data, since an edition of textual data can easily be presented as a physical book and thereby retain whatever level of durability printed books are assumed to possess. This is not the case if one’s conception of a digital edition is a dynamic website that merely presents the text. On this view, the dynamic electronic display is seen as something in tension with the printed edition, forcing the scholar to choose whether they want to create a traditional printed edition or a website. If such a dichotomy were real, the sustainability question would be a real concern. But when an edition is understood as textual data, the dichotomy is only an illusion creating a false binary. Well-prepared textual data can easily and simultaneously be presented as a printed text and as a website.

The fixation on presentation that leads to the false binary between print and digital editions also obscures how much more can be done with an edition beyond its presentation for human consumption. And our inability to see these possibilities makes it difficult to see the loss to scientific progress that results when an edition is locked in its presentation (whether that be a printed text or an electronic display) and is no longer machine accessible.

Thus, to change the paradigm and show that the preparation of textual data is worth it, we need to demonstrate not only that human-readable presentations can be easily produced from textual data but that, when enough machine-accessible data is present, computer-assisted techniques offer real value to our scientific inquiry. At the outset of this paradigm shift, demonstrating this value faces a unique hurdle. In a great many cases, the value of these techniques (both simple and complex information retrieval methods) becomes truly visible only when applied at scale. But therein lies the challenge. Reaching a critical mass of data requires the production and assemblage of that textual data. Thus, advocates of the new methodology are eager to show the promise of new methods but are stymied by a lack of quality data at sufficient scale. In turn, skeptics remain unimpressed because the outcomes are still provisional, precisely because skeptics have not yet been persuaded to help generate textual data. The result is a kind of vicious spiral wherein advocates can only speak of promise and potential, while skeptical data-creators remain at a distance because that promise has not yet materialized.

Breaking from this spiral will be gradual; machine-accessible and interoperable data will be collected slowly, and at first there may be few insights to show. But over time these results will begin to emerge. And as these initial results become known, one expects that the value of creating editions as textual data will become increasingly obvious to the point of becoming a field standard and scientific expectation. When this happens, the path up the spiral will accelerate. As the value becomes clearer, the commitment to creating editions of textual data according to adopted field standards will increase. Accordingly, the demonstrable value of machine-assisted analysis will only grow.

At present, we are still early in the climb. But we are not at the very beginning either. After many years, the mass of assembled medieval scholastic editions that are machine-accessible and interoperable is growing. The corpus is by no means close to complete, but the scale is becoming sufficiently large to illustrate the promise of various methods in much more detail. And in some cases, it is already possible to point to genuine discoveries or to new visualizations that improve the transparency of what may have been known generally but was previously difficult to see in detail.

To generate further interest in this new way of creating scholarly editions, we need to document the kinds of methods that can be applied to an aggregated corpus. And, as much as possible, we need to identify research results that could not have been achieved without a sufficiently large scale of machine-accessible data and accompanying forms of computer analysis.

This is the goal of the present article. What follows is a tour through several techniques designed to detect different kinds of passage relatedness within a large corpus. In documenting the application of these methods to the medieval scholastic corpus, I hope not only to highlight emerging results, but also to argue for the vital importance of changing our thinking about scholarly editions: from presentations of data to the scientific preparation of data itself. If such a shift were to occur in the field of medieval studies, the scale at which these methods could be applied would explode, and accordingly, so would the outcomes.

2. Introduction: The Pursuit of Textual Relatedness

  • 1 For a high-level discussion of intertextuality, see Graham Allen (2011), Intertextuality, 2nd ed., (...)
  • 2 See Michael Stenskjær Christensen, Jeffrey C. Witt & Ueli Zahnd (2021), “Re-Conceiving the Christia (...)

Documenting forms of textual relatedness is a highly valued task in the field of intellectual history1. From recording quotations and identifying sources to tracking down influence and grouping text passages via elaborate indices, textual scholars have long been preoccupied with such tasks and continue to see great value in the endeavor. This is nowhere truer than in the modern study of medieval scholasticism, where the intertextuality inherent in a commentary tradition is one of the most notable characteristics of the genre2.

Computational techniques substantially assist the scholar in detecting forms of textual relatedness. Adopting such techniques allows us to conduct this traditional work in much more efficient ways. It also allows us to do this work in much more exhaustive ways, enabling us to replace the anecdotal reports compiled from a single editor’s interest with comprehensive reports that a researcher can filter according to their needs.

In what follows, I want to describe the application of three useful methods of analysis and document the results we are beginning to see when these methods are applied to a corpus of sufficient scale. Each method speaks to a different kind of relatedness, showing that no one method is sufficient. Thus, selecting the right method depends on the research question at hand.

  • 3 See Jose Manuel Gomez-Perez, Ronald Denaux & Andres Garcia-Silva (2020), A Practical Guide to Hybri (...)

To begin, I offer a short overview (Section 3) of the machine-accessible corpus in use and the knowledge graph that stands behind this corpus, both of which are required to apply these methods at scale. A knowledge graph plays an important role in preparing the corpus for the application of statistical analysis and for visualizing those results in meaningful ways. As the power of AI and statistical methods of analysis grows, much is being written about the role that knowledge graphs will continue to play in order to make these methods successful3. This, as we will see, remains true in all of the methods described below. Thus, wherever possible, I will try to stress where the knowledge graph of corpus metadata is at work to assist in the process. Upon completing this short introduction to the corpus, I will turn to each method in question (Sections 4, 5 and 6), identifying the distinct kind of textual relatedness it aims to detect, how it works, and the positive results we are beginning to see.

3. The SCTA Corpus and Knowledge Graph

The text corpus used in the examples below has been assembled and disseminated via the Scholastic Commentaries and Texts Archive (SCTA)4. The SCTA has a three-fold mission:

  1. To develop field standards for editing textual data.

  2. To aggregate data created according to these standards.

  3. To publish aggregated data for open and creative reuse by the community.

Following these principles, it is the goal of the SCTA to aggregate the entire medieval scholastic corpus and make both the text and the knowledge graph constructed through its aggregation freely available for use. It is precisely because texts are created as textual data encoded according to the standards maintained by the SCTA5 that they can be automatically crawled and the assertions drawn from this data can be reliably stored in a knowledge graph6. The resulting graph records detailed assertions about the component parts and nested hierarchies of exceptionally complicated medieval scholastic texts. It also records details about authors, textual genres, and a host of other text categorizations. Finally, it retains links to other knowledge graphs, such as Wikidata, whereby more widely known information, such as an author’s dates and birthplace, can be incorporated into the dataset. Each night a crawler runs across known texts; any detected changes are updated in the graph. All data is immediately available for consumption via a public SPARQL endpoint or customized API7.

  • 8 The CSV API endpoint (https://scta.info/csv/scta) is one example, but other API endpoints, both sta (...)
  • 9 The ability to carve up and re-construct texts at different hierarchy levels calls to mind Ted Nels (...)
  • 10 For more on this dynamic ability to traverse the corpus network, see Jeffrey C. Witt (2023), “Trans (...)

An immediate use of the SPARQL endpoint is that researchers can request versions of different slices of the corpus in various forms (e.g., XML, CSV, plaintext) suited to different forms of analysis8. The ability to slice the corpus and dynamically compose and recompose textual fragments into “documents” of different sizes based on corpus metadata is a critical requirement for many of the use cases described below9. It is precisely this ability that larger aggregators, such as Google Books, archive.org, or institutional libraries (which exist without access to such a detailed and domain-specific knowledge graph), lack. Finally, as we will see, the knowledge graph plays a key role in our ability to transparently inspect the results of large-scale analysis in highly granular ways. For example, several similarity matrices will be presented below to help interpret results; these matrices include many colored dots indicating points of textual similarity. While static in their presentation here, the knowledge graph makes it possible for these dots to remain connected to the entire corpus. This means these displays are actually dynamic, and it is possible for a user to click on a dot — i.e., a point of similarity — and immediately call up the paragraphs that stand behind these results, view and compare the respective transcriptions, and even inspect the manuscript images that underlie these transcriptions10.

In sum, the SCTA knowledge graph offers us two key advantages: 1) the ability to re-construct the corpus into different kinds of documents (and document sizes) depending on the research questions; and 2) the ability to break down, organize, and re-group results in meaningful ways.

4. Detecting Sources, Influences, and Quotation Trends with N-Grams

Textual relatedness is a broad category because texts can be related in a variety of ways. And depending on the research goal in question, a scholar may be very interested in one kind of relatedness but not in another. Further, one kind of relatedness can blur our ability to focus on the kind of relatedness currently at issue.

One kind of relatedness that has been of interest to historians from early on is simple source identification. Tracking, marking, and indexing canonical quotations used by authors has long been a traditional task of the textual editor. The modern expectation that a critical editor create a corresponding apparatus fontium and the desire of readers to have access to such a list of sources — via an index — are clear evidence of this recognized value.

  • 11 See my article, Jeffrey C. Witt (2023), “Transparency and Discovery” on why we need a new abstract (...)

But manual approaches, unassisted by computation, are painstaking and time-consuming. Even when the source of a quotation is obvious, the labor of identifying the quotation and its “coordinates” (page and lines) in a specific edition is immense. Worse yet, this labor, once done, must be constantly repeated. If one editor finds a quotation of Augustine’s De Trinitate cited within a text, tracks down and then records its material coordinates, the next editor who finds this same quotation in another text must repeat the same laborious steps of source discovery and documentation11.

  • 12 For a description of this inversion, see: Michael Stenskjær Christensen (2021), “Re-Conceiving the (...)

Finally, none of these steps can be inverted. That is, if one wanted to create an index of all the future uses of a quotation by Augustine — or, beyond that, begin to track trends in the uses or disuses of a given quotation — the index and apparatus fontium of each future text would have to be manually scoured and documented. Machine-accessible texts and citations would allow this inversion to happen automatically with no additional labor12.

The value of such a reverse index to intellectual history shows how imperative it is that the scientific expectation for a scholarly edition become an expectation for machine-accessible data rather than a presented text. Without machine-accessible data, creating such a reverse index is a cost-prohibitive labor. With such data, it is effortless and instantaneous.

Detecting citations at scale and tracking their usage through a corpus is an exercise in tracking a specific kind of relatedness: namely, we are usually looking for repeated, near-verbatim fragments of text. These verbatim fragments distinguish this kind of relatedness from mere conceptual similarity or a discussion around a common topic, which is a different kind of similarity that requires its own approach. (See Section 5 below.)

  • 13 For a description of n-grams, see Justin Grimmer, Margaret E. Roberts, & Brandon M. Stewart (2022), (...)
  • 14 On cosine similarity see, Justin Grimmer et al. (2022), Text as Data, p. 72.

The use of n-grams for comparison is a well-worn technique for capturing this kind of relatedness13. N-grams are strings of words of length n contained within the target document. A sentence like “the cat is on the mat” contains four n-grams of size 3: “the-cat-is”, “cat-is-on”, “is-on-the”, “on-the-mat”. By breaking a text into its composite n-grams, we can create a feature vector that retains some of the sequential information that might otherwise be lost by merely looking at the individual words in the document. This sequential information helps us find passages re-using identical phrases and not just a shared vocabulary. A typical approach to detecting document similarity might involve counting the different n-grams within each document and then computing the cosine similarity14, which would take the sizes of the documents into account when computing the number of n-grams shared between them. In my approach, I wanted to report cases of shared n-grams regardless of the variable document sizes. The goal here is to detect the presence of common citations embedded within a document: a small document and a large document might both quote a common source, and I want to identify such a match as “related.”
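To make this concrete, here is a minimal Python sketch that reproduces the example above; the ngrams helper is my own illustrative naming, not SCTA code:

```python
def ngrams(text, n=4):
    # Split the text into tokens and collect every contiguous
    # run of n tokens as a single hyphen-joined n-gram.
    tokens = text.lower().split()
    return {"-".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

print(ngrams("the cat is on the mat", n=3))
# -> {'the-cat-is', 'cat-is-on', 'is-on-the', 'on-the-mat'}
```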

There is, however, still the issue of noise: n-grams belonging to common phrases that are shared between documents but do not indicate a common source. To reduce unwanted noise, I carefully selected a few parameters.

First, I set the n-gram size to 4. Substantially raising the n-gram size (for example, to 5 or 6) will significantly reduce the noise coming from small technical phrases, but it has the drawback of occluding quotation matches that vary due to small changes in the quotation (e.g., an inversion of two words or slight word variations due to minor transcription differences or OCR errors). Setting the n-gram size lower (for example, to 2 or 3) will help catch common citations that include these kinds of variations, but it will add back significant noise by allowing matches of two- or three-word phrases common to the language. In the end, I settled on an n-gram size of 4 as a useful middle ground.

A second parameter was the intersection threshold of shared n-grams used in my definition of “similarity”. One could, for example, say that two documents are similar if they share just one 4-gram. But sharing a single n-gram of size 4 is not a high-probability indicator of a shared quotation; more likely it suggests the use of a phrase common to the language or discussion at hand. As such, setting the intersection threshold at 1 introduces quite a bit of noise. Through trial and error, I alighted on 4 shared n-grams as a useful threshold for finding meaningful similarity while reducing unwanted noise as much as possible.

A third parameter is document size. In this case, I wanted to detect similarity at the paragraph level. Thus, I used the knowledge graph to divide the entire corpus into separate documents, one for each paragraph.
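Taken together, the three parameters yield a simple decision rule; a minimal sketch, re-using the hypothetical ngrams helper above:

```python
NGRAM_SIZE = 4  # parameter 1: size of each n-gram
THRESHOLD = 4   # parameter 2: minimum number of shared n-grams

def are_related(paragraph_a, paragraph_b):
    # Parameter 3 (document size) is handled upstream: each "document"
    # passed here is a single paragraph carved out via the knowledge graph.
    shared = ngrams(paragraph_a, NGRAM_SIZE) & ngrams(paragraph_b, NGRAM_SIZE)
    return len(shared) >= THRESHOLD
```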

  • 15 See Justin Grimmer et al. (2022), Text as Data, p. 61. Figure 1 below shows a simplified example of (...)

With this list of documents, I used the n-gram size and intersection threshold parameters to construct a similarity matrix for all documents within the corpus. The first step in the process is to vectorize each document so that it is represented by the presence or absence of all unique 4-grams in the corpus. This form of one-hot encoding results in a very large sparse vector for each document or paragraph (illustrated in a simplified form below in Figure 1)15.

Fig. 1- Example vector representing the presence (1) or absence (0) of a given n-gram in document A

In Figure 1, a 1 is used to indicate that this n-gram is present in the document one or more times; a 0 means that it is not present. To construct the similarity matrix (S), where similarity means paragraphs that share 4 or more 4-grams, we simply need to take the dot product of the vectors representing each document (as illustrated in Figure 2).

Fig. 2- Example of computing the number of shared 4-grams in documents A and B

The result of this calculation in Figure 2 is 7 (computed by adding up the values of the comparison vector). Thus, the two documents, A and B, would be considered “similar”, since this exceeds the intersection threshold of 4. This similarity is then recorded in the similarity matrix (S) by a 1, i.e., S(a,b) = 1.
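As a minimal numpy illustration of this step, with toy vectors chosen so that, as in Figure 2, the dot product comes out to 7:

```python
import numpy as np

# Toy one-hot vectors: each position marks the presence (1) or absence (0)
# of one of the corpus's unique 4-grams in documents A and B.
doc_a = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 1])
doc_b = np.array([1, 1, 1, 1, 1, 0, 1, 1, 1, 1])

shared = int(doc_a @ doc_b)      # dot product = number of shared 4-grams
S_ab = 1 if shared >= 4 else 0   # record similarity when the threshold is met
print(shared, S_ab)              # -> 7 1
```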

In practice, an efficient way of implementing this calculation and displaying the results is to index all of the unique 4-grams in the corpus within the knowledge graph itself. These n-grams can then be linked to all their containing paragraphs via a property called “isFoundIn” (e.g., ngram isFoundIn document). Once these n-grams and their relationships are indexed, we can quickly query for all paragraphs whose count of shared 4-grams meets the desired threshold.

  • 16 This query is a memory intensive operation. For this reason, n-gram queries of this kind are not ab (...)

For the SCTA knowledge graph, a simplified example SPARQL query can be seen in Figure 316.

Fig. 3- A simplified example SPARQL query illustrating how the n-gram intersection threshold can be used to query for all “related” documents

The result of this query is a list of matched paragraphs (?start and ?target) that share 4 or more common 4-grams.
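Since the query in Figure 3 is reproduced only as an image, the following Python sketch shows roughly what such a query might look like when sent to a public endpoint. The isFoundIn property comes from the description above, but the endpoint URL and the vocabulary prefix are my own assumptions:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint address and vocabulary prefix.
sparql = SPARQLWrapper("https://scta.info/sparql")
sparql.setQuery("""
PREFIX scta: <http://scta.info/property/>
SELECT ?start ?target (COUNT(?ngram) AS ?shared)
WHERE {
  ?ngram scta:isFoundIn ?start .
  ?ngram scta:isFoundIn ?target .
  FILTER (?start != ?target)
}
GROUP BY ?start ?target
HAVING (COUNT(?ngram) >= 4)  # intersection threshold of 4
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
```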

The resulting similarity matrix and its visualizations are useful for at least two reasons, both of which I will illustrate below. First, they allow us to assist the work of editors in the discovery and confirmation of sources and to detect the wider influence of a quotation within a larger corpus. Second, they allow us to reveal useful trends (customizable to different research parameters) about the uses of sources at a macro-scale. Such trends could, for example, without an ounce of additional effort, allow an editor to see whether their author’s use of a source is a rare anomaly and deviation from tradition or a traditional citation conforming to standard practice.

Below I offer a description of the visualizations created from the n-gram similarity matrix and how they contribute to each of these outcomes.

4.1. Detecting Sources and Influences

The graphic below (Figure 4) represents one possible visualization of the similarity matrix resulting from a modified version of the SPARQL query introduced above (Figure 3). In this example, every paragraph appearing in Augustine’s De Trinitate is represented along the X-axis in sequential order, moving from left to right. The knowledge graph is used here to retain the sequential order of the paragraphs within the larger text and list them accordingly. Along the Y-axis, we can see every paragraph, within a corpus of 500,000+ paragraphs, that has been identified as similar to at least one of the paragraphs within Augustine’s De Trinitate. Paragraphs that have no similarity with any paragraph in the De Trinitate are excluded. Again using the metadata stored in the corpus knowledge graph, these similar paragraphs are arranged along the Y-axis in successive order within their containing texts, and the containing texts are in turn arranged in chronological order.

Fig. 4- Similarity matrix of Augustine’s De Trinitate (X-axis) compared to all paragraphs (arranged in chronological order, top to bottom) in the corpus with a detected similarity match (Y-axis)

The chronological arrangement of matched paragraphs is critical for moving from recognizing mere similarity to recognizing sources and distinguishing these from influence. Using the date of the main text in question (in this example, Augustine’s De Trinitate), we can create a visual marker separating those texts that were written before Augustine’s De Trinitate (in red) from those that were written after (in blue). A researcher may start from an interest in a particular paragraph or book within the De Trinitate. In such a case, they can navigate across the X-axis to find the section of interest and then scan down the Y-axis. If an earlier red dot appears, they can immediately observe that, in the paragraph in question, Augustine is quoting from a previous authority. If a column of red appears, Augustine is not only quoting from a previous authority but is likely invoking a traditional authority that has been frequently cited by others before him, with the highest red dot (the earliest in the chronology) likely being the original source of the common citation.
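As a rough sketch of how such a view can be produced, assuming the matches (with paragraph positions and work dates) have already been retrieved from the knowledge graph; the variable names and sample data here are hypothetical:

```python
import matplotlib.pyplot as plt

SOURCE_DATE = 420  # illustrative date for the source text

# Hypothetical match records: (x, y, date), where x is the paragraph's
# position in De Trinitate, y is the matched paragraph's position in the
# chronologically sorted corpus, and date is the matched work's date.
matches = [(12, 3, 380), (12, 850, 1155), (40, 900, 1250), (41, 901, 1252)]

for x, y, date in matches:
    # Red: the matched text predates the source (a likely authority);
    # blue: it postdates the source (likely influence).
    plt.scatter(x, y, s=4, c="red" if date < SOURCE_DATE else "blue")

plt.gca().invert_yaxis()  # earliest works at the top, as in Figure 4
plt.xlabel("Paragraphs of De Trinitate (sequential order)")
plt.ylabel("Matched paragraphs (chronological order)")
plt.show()
```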

Similarly, following any blue dots, a researcher can quickly ascertain the influence of a particular paragraph in the future scholastic tradition. A single blue dot in a column, without any previous red dots, indicates a future text that shows similarity with the Augustine text because it is quoting something likely original to Augustine. A column with repeated blue dots — again without any previous red dots — acts as a kind of heat map to reveal that not only was this Augustinian text quoted by a future author, but it was quoted by several authors and has an outsized influence on the future tradition.

Figure 5 below is a zoomed-in view that offers clear examples of these blue streaks, which indicate various places within the De Trinitate that have been particularly influential. The highlighted column, for example, identifies a point in Augustine’s text that shows similarity to several future texts but lacks any detected similarity to previous texts. Thus, it is highly likely that the reason for the similarity is that future authors are repeatedly quoting a passage original to Augustine.

Fig. 5- Zoomed-in similarity matrix of Augustine’s De Trinitate (X-axis) compared to all paragraphs (arranged in chronological order, top to bottom) in the corpus with a detected similarity match (Y-axis)

Finally, we should note that the ability to apply and visualize n-gram similarity at scale allows us to approach the corpus from a new entry point. In the above examples, we imagined a researcher beginning with a particular paragraph or passage in mind; perhaps they are already familiar with the text and suspect that the paragraph will be highly influential. In such a case, they can use the method to confirm what they already suspect. But the macro-view also allows the researcher to see what they do not expect. One need not come to the visualization with a particular paragraph in mind. Instead, one can begin by simply looking for paragraphs that have unique sources (a single red dot) or a paragraph that shows disproportional influence (a long and heavy blue streak). In this way, one can come to the corpus without much previous knowledge and quickly learn about important points of interest. Likewise, the seasoned researcher can go beyond the confirmation of their own expectations and encounter unexpected and unanticipated trends.

4.2. Detecting Quotation Trends

The interest in both confirming trends and finding unexpected ones leads us to a second benefit that can be achieved through combining n-gram similarity and the corpus knowledge graph. This benefit is the ability to go beyond mere source and influence detection and recognize citation trends and deviations from these trends within comparable works.

  • 17 For an introduction to Lombard’s Sentences, see Philipp W. Rosemann (2007), The Story of a Great Me (...)

This kind of trend detection is especially useful within a continuous and sustained commentary tradition. A prime example is the centuries-long medieval tradition of commenting on the Sentences of Peter Lombard17. The Sentences of Peter Lombard is an encyclopedia-like work, meaning that it is broken into distinct parts (4 books), each of which is broken down further into several distinctions treating a vast array of topics. Each distinction, therefore, is analogous to an encyclopedia article that represents a view on a given topic at a particular point in time.

  • 18 See, for example, the list of known commentaries compiled by Friedrich Stegmüller (1947-1948) in Re (...)

The Sentences commentary tradition is a practice of commenting on all (or some) of these distinctions. Today there are more than a thousand known extant commentaries spanning at least five centuries18. The stability of topics associated with each distinction and the regularity with which commentaries were written on these distinctions create the potential to observe how ideas and authorities on a given topic endured or changed over time.

We can illustrate this potential once more with the text of Augustine’s De Trinitate. In this case, instead of looking at the similarity of this text within the entire corpus, we can isolate and restrict the similarity matrix to the text of Peter Lombard alone. Immediately, in Figure 6, we can see how different parts of Augustine’s De Trinitate (again arranged across the X-axis) dominate within different distinctions of Lombard’s Sentences. Here only paragraphs in Lombard’s Sentences with a detected similarity match are displayed along the Y-axis in descending sequential order.

Fig. 6- Similarity matrix of Augustine’s De Trinitate (X-axis) compared to paragraphs in Lombard’s Sentences with a detected similarity match (Y-axis)

We can see that Book 1 of Augustine’s De Trinitate dominates and clusters in distinctions 1 and 2 of Book 1 of Lombard’s Sentences (label 1). Augustine’s Book 3 sees concentrated usage in Lombard’s Book 2, distinctions 7 and 8 (label 2). The middle books of Augustine’s De Trinitate (Books 4-8) are used heavily in Lombard, Book 1, distinctions 15 through 38 (label 3), while Books 9 and following get much less usage, until finally we see another instance of concentrated usage of Book 15 (label 4).

While the above picture shows the frequency of use of distinct books of Augustine’s text, we might want to reverse this to see how the use of Augustine’s text is distributed throughout the entire text of Lombard. This can be seen by locating all paragraphs of the Lombard text across the X-axis and all paragraphs in Augustine’s De Trinitate with a detected similarity match along the Y-axis. This matrix, seen in Figure 7, shows us quite clearly that, while Lombard relies heavily on Augustine’s De Trinitate, his use of the work as a whole is largely constrained to the distinctions appearing in Book 1 of the Sentences. In contrast, there is much less use of the De Trinitate in Books 2-4 (albeit with a few notable exceptions).

Fig. 7- Similarity matrix of Lombard’s Sentences (X-axis) compared to paragraphs in Augustine’s De Trinitate with a detected similarity match (Y-axis)

The ability to see these citation trends within the text of Lombard allows us to set a baseline of standard practice within the larger commentary tradition. Using the metadata stored in the text knowledge graph, we can reconstruct the similarity matrix to include not only the Lombard text but also subsequent commentaries arranged chronologically. Such a view can immediately reveal citation patterns that continue from commentary to commentary. We can see in Figure 8, for example, that the heavy use of De Trinitate, Books 4-8, in Lombard’s Sentences (represented by the top row in Figure 8) tends to continue in all subsequent commentaries (represented by each subsequent row). However, we can also see that parts of Augustine’s De Trinitate cited by Lombard begin to fall out of fashion in later commentaries (seen by comparing labels 1 and 2).

Fig. 8- Similarity matrix of Augustine’s De Trinitate (X-axis) compared to paragraphs in distinct Sentences commentaries with a detected similarity match, grouped by commentary and arranged chronologically (Y-axis)

Or conversely, in the zoomed-in view seen in Figure 9, it is possible to see passages that were not cited by Lombard but began to be added to the commentary tradition by subsequent commentators. These patterns reveal areas of innovation and deviation from tradition.

Fig. 9- Zoomed-in similarity matrix of Augustine’s De Trinitate (X-axis) compared to paragraphs in distinct Sentences commentaries with a detected similarity match, grouped by commentary and arranged chronologically (Y-axis)

Continuing to make use of knowledge graph metadata, several variations of the above visualizations can be made, tailored to more nuanced research questions. Two more examples will suffice to make the point. Figure 10 below is designed to show how Augustine’s text is used within each distinction throughout the commentary tradition. This means that texts belonging to distinction 1 in each commentary across many centuries have been detached from their containing text and combined into a new text composed of all commentaries on just distinction 1, while each paragraph within the grouping remains arranged in chronological order. The same has been done for distinctions 2, 3, and so on. The resulting matrix allows us to see which books of Augustine’s De Trinitate habitually dominate as an authority in each distinction. The visualization in Figure 10 shows, for example, that it is quite customary in distinction 8 to quote from De Trinitate, Books 5 through 8 (label 1). But it also shows us innovation. Off to the right (label 2), later in the chronological sequence, we see an author commenting on distinction 8 but beginning to quote from De Trinitate, Book 15. This is a potential innovation within the tradition.

Fig. 10- Similarity matrix of Augustine’s De Trinitate (X-axis) compared to all paragraphs in all Sentences commentaries with a detected similarity match, grouped by distinction (Y-axis)

Finally, a scholar of the Sentences commentary tradition would likely be interested not only in the uses of Augustine, but also in patterns of citation and textual similarity to the main text being commented on, namely Lombard’s Sentences. For this use case, we can construct the matrix accordingly. Along the X-axis we can arrange each paragraph of Lombard’s Sentences and then compare all commentaries grouped by their distinction number. The result in Figure 11 shows us largely what we expect. Namely, in commentaries on distinction 1 (the third row in Figure 11, labeled “distinction 1”), we see clusters of citations of passages from Lombard’s distinction 1 (label 1), and so on for each distinction. This conformity to expectation not only gives us confidence in the reliability of the overall method but should also give us confidence about results that are unexpected. Namely, at various points we can see trends within a given distinction group to begin quoting from other, unexpected distinctions of Lombard’s Sentences. For example, in distinction 1, we can see a pattern of citing from distinction 3 (label 2) and distinction 8 (label 3).

Fig. 11- Similarity matrix of Lombard’s Sentences (X-axis) compared to all paragraphs in all Sentences commentaries with a similarity match in Lombard’s Sentences, grouped by distinction (Y-axis)

5. Document-Embeddings for Broad Similarity Detection

Relatedness through citations of sources, or passage similarity due to common sources, is only one kind of relatedness. As such, it hardly exhausts what can be achieved through a commitment to making the corpus available as machine-accessible data.

Another kind of relatedness we might be interested in is conceptual similarity (or dissimilarity) between passages. For example, beyond noting that two passages use a common source, we might want to know why. While several texts may quote a particular Bible verse, there is likely more than one conceptual reason that a text uses this quotation. Here we might want some help sorting these uses into different categories.

Perhaps a given text invokes a Bible verse to support a position on God’s omnipresence while another invokes that same citation to help illuminate a discussion about the nature of matter. In this case, grouping larger textual chunks into categories based on conceptual similarity is highly desirable. Moreover, traditional editions from the 16th to the 20th century have repeatedly demonstrated the perceived value of such groupings in the form of elaborate scholia. In Figures 12-15, directly underneath the title and title question, we can see a list of related passages that the editors believe might be of interest to the reader.

Fig. 12- Thomas Aquinas, Summa Theologiae, II-II, q. 43, a. 8 (Opera Omnia Iussu Leonis XIII, vol. 8 [Rome, 1895], p. 329)

Fig. 13- Thomas of Strasbourg, I Sent., prol. q. 1 (Genoa 1585, fol. 2v)

Fig. 14- Gerard of Siena, I Sent., d. 38, q. 1 (Padua 1598, p. 573)

Fig. 15- Durandus of St. Pourcain, I Sent., prol. q. 1 (Lyon 1563, fol. 2r)
  • 19 See Justin Grimmer, Margaret E. Roberts, & Brandon M. Stewart (2022), Text as Data, Princeton, NJ, (...)

There are several methods in information retrieval that can help us construct such scholia. Here I detail how the use of “document-embeddings”19 can produce impressive results when trained on a corpus that is curated for its particular kind of content (e.g., medieval scholastic texts) and sufficiently large.

  • 20 Quoc Le & Tomas Mikolov, “Distributed Representations of Sentences and Documents,” p. 1 (introducti (...)
  • 21 Quoc Le & Tomas Mikolov, “Distributed Representations of Sentences and Documents,” p. 1 (introducti (...)

At a high level, document-embeddings work by building a low-dimensional feature vector that represents an entire document. During training, a document vector and word vectors are used in combination: together they are used to predict the next word in a series of words within the target document. Le and Mikolov write: “At prediction time, the paragraph vectors are inferred by fixing the word vectors and training the new paragraph vector until convergence20”. That is, during training, the weights in the document vector are modified as the system tries to increase the accuracy of the next-word prediction. This process is repeated many times, and each time the weights within the document vector are refined and improved. The result is a document vector that impressively characterizes the document and its relationship to other documents. Given our earlier discussion, it is notable that this approach has some significant advantages over the one-hot encoded n-gram approach. Namely, while still capturing information latent in the sequencing of document words, it does not suffer from the same kind of “data sparsity and high dimensionality”21.

A computational approach like this, which can capture the semantic subtleties of a document in a low-dimensional space, enables us to produce the scholia depicted above at scale while allowing the results to be filtered by the user’s needs rather than the editor’s interest or knowledge. Indeed, with remarkable accuracy, the document-embeddings can reproduce the manually constructed scholia of the great nineteenth- and twentieth-century critical editions.

Embedding training, in this case, was done using the “doc2vec” algorithm implemented in the Python Gensim library22. But, once again, exploitation of the knowledge graph played an important role. The choice of document size is a critical parameter when generating document-embeddings. In the case of replicating the above scholia, the comparison is typically made by matching texts at the question or article level. Thus, we used the knowledge graph to reconstruct the corpus into distinct documents for each question or article within a larger text. The result is a corpus of 23,433 documents containing 38,205,513 words.

Several further parameters can be customized during training. In this case, training was set to construct vectors of size 200 per document, which means each document is represented by a vector of 200 dimensions or weights. Only words appearing 10 or more times in the corpus were included in training. And training was set to cycle for 100 iterations23. Using the resulting vectors, we can compute the similarity between all documents and programmatically construct the kind of scholia seen above based on the ranking of those results.
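A minimal Gensim sketch of this training setup, using the parameters just described; the documents mapping, reconstructed from the knowledge graph, is a hypothetical variable:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# documents: {scta_id: full text of one question/article}, as carved
# out of the corpus via the knowledge graph (hypothetical variable).
corpus = [TaggedDocument(words=text.lower().split(), tags=[scta_id])
          for scta_id, text in documents.items()]

model = Doc2Vec(corpus,
                vector_size=200,  # 200 weights per document vector
                min_count=10,     # ignore words appearing fewer than 10 times
                epochs=100)       # 100 training iterations

# Rank the documents most similar to a given question.
print(model.dv.most_similar("TAca84-d1e1158-Dd1e347", topn=10))
```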

  • 24 For more on tf-idf see Justin Grimmer, et al. (2022), Text as Data, p. 75-77. See also, the sklea (...)

For documenting the quality of the results, the assertions of similarity between the Summa Theologiae and the Sentences commentary of Aquinas made by the nineteenth-century editors of the Leonine edition are particularly helpful (see Figure 12). Over several volumes, the editors have asserted hundreds of parallels between the two works at the article level. Some of these parallels are obvious, for example when the question titles are nearly identical. And in such cases an n-gram search or term frequency–inverse document frequency (tf-idf) approach would probably suffice24. But even in obvious cases, questions are frequently rephrased and re-articulated in ways that make it difficult for simple searches to identify parallels. And for every obvious case, there are several more cases where the parallels are not at all obvious. Consider, for example, the following two questions, which were matched by the Leonine editors:

“Utrum lex naturalis contineat plura praecepta vel unum tantum / Whether the natural law contains many commands or only one (Summa Theologiae, I-II, q. 94, a. 2; TAca84-d1e1158-Dd1e347)”

“Utrum habere plures uxores sit contra legem naturae / Whether having many wives is against the law of nature (Sent. IV, d. 33, q. 1, a. 1; ta-l4d33q1-Dd1e131)”

While the two titles share some common words, there are no common n-grams, and the entire conceptual focus of the two questions seems different. Nevertheless, the editors, based on their knowledge of the two questions and all the other questions included in both works, have singled out the latter as a top recommended match for the former. In such cases, one must admire and respect the painstaking and time-consuming effort of the Leonine editors to first recognize and then record these related discussions.

In order to test the quality of the similarity results generated through document-embeddings, I manually recorded the assertions of similarity made by the Leonine editors using the SCTA Ids that correspond to the texts being related. So, when the Leonine editors assert a parallel between a source text (e.g., Summa Theologiae, I-II, q. 94, a. 2) and a target text (e.g., Sent. IV, d. 33, q. 1, a. 1), this was recorded as an assertion of similarity between a source Id (e.g., TAca84-d1e1158-Dd1e347) and a target Id (e.g., ta-l4d33q1-Dd1e131). With these nineteenth-century assertions now machine-actionable, I can consider, for any source document (e.g., an article within the Summa Theologiae), how highly the document-embeddings model ranks the target document (e.g., an article from the Sentences commentary) in terms of similarity.

A fragment of the output of this report can be seen below in Figure 16. The value of “matchFiltered” represents how high the document-embeddings ranked the Sentences commentary article asserted to be similar by the Leonine editors among other questions within the Sentences commentary. A 0 indicates a top match. “matchUnfiltered” is the total rank compared to all documents within the entire SCTA corpus, and “matchPerc” represents the match similarity computed by the embeddings. The “mean” represents the average similarity ranking computed by the model for the matches asserted by the Leonine editors.

Fig. 16- Sample results of match rank testing via document-embeddings
  • 25 I adjust for extreme outliers here because, as shown below, there are places where the actual asser (...)

In an example experiment, testing 641 matches, the results show that, after adjusting for extreme outliers (taking the 90th percentile of results)25, the parallel asserted by the Leonine editors is, on average, the 7th highest recommended document among articles within the Sentences commentary. In other words, results like this mean that, with zero manual effort, the Leonine editors’ recommended matches would, on average, appear on the first page of results in a typical recommendation service (10 results to a page). Further, while the average rank is 7.83, the median score is 1, indicating that most of the recommendations ranked much higher than the average. Indeed, 41% (265 of 641) of recorded matches were the computer’s top recommendation, and 66% (426 of 641) were within its top 5 recommended matches.
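A minimal sketch of how these summary statistics can be computed, assuming a list of ranks has been extracted from the report (0 meaning a top match; the input variable is hypothetical):

```python
import numpy as np

# ranks[i] = position at which the model ranked the Leonine editors'
# asserted match for source document i (0 = top recommendation).
ranks = np.array(ranks_from_report)

trimmed = ranks[ranks <= np.percentile(ranks, 90)]  # drop extreme outliers
print("mean rank (90th percentile):", trimmed.mean())
print("median rank:", np.median(ranks))
print("top-1 rate:", (ranks == 0).mean())
print("top-5 rate:", (ranks < 5).mean())
```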

With this level of accuracy, outliers cannot automatically be viewed as imperfections in the model. Rather, they begin to raise questions about the original assertion. Consider the following two questions. In Summa Theologiae, I-II, q. 18, a. 3, the question reads:

“Utrum actio hominis sit bona vel mala ex circumstantia / Whether a human action is good or evil from circumstance” (TAca84-d1e765-Dd1e638)

The Leonine editors suggest a match with Aquinas’ Sentences commentary, II, d. 26, a. 5, which reads:

“Utrum gratia dividatur convenienter in gratiam operantem et cooperantem / Whether grace is fittingly divided in operating and cooperating grace” (ta-l2d26q1-Dd1e345)

But as we can see below in Figure 17, in the entry “TAca84-d1e765-Dd1e638===ta-l2d26q1-Dd1e345”, the computer ranks this match as only the 1036th most similar. This extremely low ranking, combined with the lack of an obvious connection between the two questions, makes one wonder if this assertion was a typo or some kind of printing error. Manual inspection and guesswork around a likely source of human error suggest that “26” may have been a misprint of “36”, meaning the intended match may actually have been the Sentences commentary, Book 2, distinction 36, question 1, article 5, which reads:

“Utrum distinctio bonorum sit conveniens / Whether the distinction of goods is fitting” (ta-l2d36q1-Dd1e369)

Modifying the match accordingly produced a much better result, as seen below (Figure 17) in the entry “TAca84-d1e765-Dd1e638===ta-l2d36q1-Dd1e369”. Here the computer identifies distinction 36, question 1, article 5 as the 12th most similar.

Fig. 17- Comparison of document-embedding’s similarity ranking of the Leonine editors’ asserted similarity relation (TAca84-d1e765-Dd1e638===ta-l2d26q1-Dd1e345) to the corrected similarity assertion (TAca84-d1e765-Dd1e638===ta-l2d36q1-Dd1e369)

In sum, the use of document-embeddings points to the tangible research benefits that follow from a commitment to making the scholastic corpus available as textual data. With remarkable accuracy and speed, we can reproduce findings that the field already acknowledges as valuable but previously could only accomplish through time-consuming manual effort. Moreover, computer-assisted results improve the accuracy of human findings and suggest new parallels of importance that can easily go unnoticed by human editors.

6. Detecting Sustained Textual Reuse with N-Grams and Convolution

  • 26 On ideas of originality and authorship in the scholastic tradition, see Francesco del Punta (1998), (...)

While extremely useful and desirable, neither source similarity nor conceptual similarity exhausts the kinds of textual relatedness that might interest a scholar of intellectual history. In fact, in some cases, they can even obscure the reasons why similarity is detected. In many notable cases, scholastic thinkers thought of themselves as compilers rather than as original authors26. Given this role as compilers, source similarity and conceptual similarity can frequently occur not just because authors are interested in a similar topic or are quoting common authorities, but because one author is systematically borrowing from another. This often happens without reference or attribution, sometimes in small clusters, sometimes in larger extended sequences.

  • 27 See note 25 above.

We can illustrate the challenge of finding this kind of relatedness by looking back at a few of the search results returned by the document-embedding method described above. The results below (Figure 18) show a side-by-side comparison of the ranked matches via document-embeddings (on the right) and the ranked matches via the tf-idf method27 (on the left). The tf-idf approach works by representing each document as a vector of n-grams (again, in this case I have chosen n-gram size 4) weighted by their frequency in the document relative to their frequency in the corpus at large. In this case, similarity depends on the detection of meaningful verbatim 4-grams and not just conceptual similarity.
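A minimal scikit-learn sketch of this tf-idf baseline, assuming the same question-level documents as above (the docs list is hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Represent each document by its tf-idf-weighted word 4-grams.
vectorizer = TfidfVectorizer(analyzer="word", ngram_range=(4, 4))
X = vectorizer.fit_transform(docs)  # docs: list of document strings

# Similarity of document 0 to every document, ranked highest first.
sims = cosine_similarity(X[0], X).ravel()
ranking = sims.argsort()[::-1]
```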

As discussed above and visible here, in most cases the document-embedding approach significantly outperforms the tf-idf approach when looking for mere conceptual similarity. But we can also see that there are some matches detected by tf-idf (lines 5 and 9) that are more highly ranked than in the document-embedding approach. As this case illustrates, there can be at least two reasons why something is conceptually similar: either because it discusses the same topic with different words and phrases, or because it borrows verbatim fragments and thus of necessity shows conceptual similarity. The document-embedding method blurs this difference and does not offer an easy way to separate the two kinds of similarity.

Fig. 18- Comparison of tf-idf (left) to document-embedding (right) similarity rankings
  • 28 For more on convolution and its implementation using the Python library SciPy, see: https://docs.sc (...)

An additional challenge lies in identifying texts that share verbatim fragments not merely because, for example, they are citing common authorities, but because one text is systematically and consecutively borrowing from the other. Thus, we need a method that takes an n-gram matching approach but only reports places where this reuse is consecutive and sustained from paragraph to paragraph. To detect this kind of similarity, the adoption and implementation of a method called “convolution”, commonly used for feature detection in computer vision, has proved remarkably effective28.

  • 29 Note, for example, that, while Justin Grimmer, et al. (2022), in Text as Data, surveys a very wide (...)

It is not at first obvious that a method designed to detect visual patterns would be useful for detecting this kind of textual relatedness29. However, when the results of the n-gram similarity method (described above in Section 4) are viewed from a very high level, with the paragraph blocks of both texts arranged in sequential order, certain visual patterns emerge: notably, distinct diagonal clusters.

79Consider the similarity matrix seen below in Figure 19, representing the text of Albert the Great’s Summa Theologiae compared against the corpus at large. While many points of similarity are detected between individual paragraphs (due to the reuse of common Bible verses, authorities, or technical phrases), there are also clear diagonal patterns. See, for example, labels 1, 2, and 3.

Fig. 19- Similarity matrix of Albert the Great’s Summa Theologiae (X-axis) compared to all paragraphs in the corpus with a detected similarity match (Y-axis). Labels 1-3 point to examples of diagonal clusters visible to the naked eye

80These diagonal shapes point to a unique kind of sustained textual similarity. They indicate first that, starting at a given paragraph i, similarity has been detected in paragraph j. This by itself is not very telling. Lots of isolated paragraphs include verbatim textual reuse, usually (as we have seen above) because one paragraph is quoting the other or both paragraphs are quoting from a common source. What is unique is that in these cases, after finding a match, we find a further match when we move to the next successive paragraph in each text. Said differently: let S represent the resulting similarity matrix of all paragraphs of two texts compared against one another, and let S(i, j) represent the first point of intersection of any two paragraphs. When we move to the next paragraph, i+1, we once again encounter detected similarity at S(i+1, j+1). And when we move another step forward, to i+2, we find further detected similarity at S(i+2, j+2). We can see this illustrated below in Figure 20.

Fig. 20- Illustration of the diagonal pattern formed through textual re-use in three successive paragraphs

81The odds that a similarity match at a successive step (S(i+1, j+1)) is accidental, or simply due to the quotation of a common authority, are considerably lower than the odds of just two paragraphs showing similarity. When this happens three times in a row (S(i, j), S(i+1, j+1), S(i+2, j+2)), we can assert with a very high degree of confidence that the pattern is emerging not because one text is merely quoting from the other, or both are quoting a common authority, but because one is compiling from the other.
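To see the logic of this three-in-a-row test in miniature, consider the following sketch, in which S is a small hypothetical binary similarity matrix (a 1 marks a detected n-gram match between two paragraphs); the convolution method introduced next generalizes exactly this check.

# Naive scan for three consecutive diagonal matches: S[i, j],
# S[i+1, j+1], and S[i+2, j+2] all detected as similar.
import numpy as np

S = np.zeros((10, 10), dtype=int)
S[4, 2] = S[5, 3] = S[6, 4] = 1   # a sustained run of reuse
S[1, 8] = 1                       # an isolated match (e.g., a shared quote)

rows, cols = S.shape
for i in range(rows - 2):
    for j in range(cols - 2):
        if S[i, j] and S[i + 1, j + 1] and S[i + 2, j + 2]:
            print(f"sustained reuse starting at S[{i}, {j}]")
# -> reports only the run starting at S[4, 2]; the isolated match is ignored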

82Once we know the meaning that lies behind these diagonal patterns, a method of visual feature detection like convolution is very effective at singling out these diagonal clusters from all the other isolated similarity matches that have been detected. The convolution method itself is applied as follows: we create a filter matrix with 1s in the positions that represent the feature we want to detect. The precise shape of this filter is likewise an adjustable parameter. In this case, I have used a 6x6 identity matrix, with 1s down the diagonal, as seen in Figure 21.

Fig. 21- Example 6x6 filter matrix designed to capture diagonal patterns in the larger similarity matrix

83Applying the filter involves running it across the full similarity matrix (null values included) and detecting 6x6 windows whose dot product30 with the filter exceeds a certain threshold. Setting the threshold at 6 in this example would identify only those index positions that contain an exact replica of the convolution filter. Setting the threshold at a slightly lower level, like 3, provides more flexibility, allowing us to detect general diagonal patterns even if there is a gap at a given point in the diagonal. A threshold of 3 would match several variations, four examples of which are given in Figure 22.

Fig. 22- Examples of sub-matrices in the larger similarity matrix whose dot product when multiplied by the filter matrix would result in a score of 3 or higher

84This flexibility gives us a greater ability to detect the kind of reuse that we often see in compilers; a compiler will frequently borrow a sentence but then add some kind of modification or interpretative clarification before borrowing another sentence. So, in many cases we don’t see perfectly sustained reuse, but a start and stop approach full of brief intermittent interruptions. The large filter matrix with a lower threshold is perfectly suited to capture this sustained but frequently interrupted textual reuse.
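As a concrete sketch of this procedure, the code below applies scipy.signal.convolve2d (the SciPy function cited in note 28) with a 6x6 identity filter and a threshold of 3, as described above. The toy matrix and all variable names are illustrative assumptions, not the production implementation.

# Detect sustained (but possibly interrupted) diagonal reuse by
# convolving a binary similarity matrix with a 6x6 identity filter.
import numpy as np
from scipy.signal import convolve2d

# Toy similarity matrix: a diagonal run of reuse with one interruption,
# plus an isolated match elsewhere.
S = np.zeros((20, 20), dtype=int)
for k in range(8):
    S[5 + k, 3 + k] = 1
S[11, 9] = 0      # the interruption: a paragraph the compiler rewrote
S[2, 15] = 1      # an isolated match (e.g., a common authority)

# 6x6 identity filter (Figure 21). Because the identity matrix is
# unchanged by the 180-degree flip that convolution applies, convolving
# with it is the same as sliding it directly across S.
kernel = np.eye(6, dtype=int)

# Each output cell is the dot product of the filter with the 6x6 window
# of S that it covers ("valid" keeps only fully overlapping windows).
response = convolve2d(S, kernel, mode="valid")

# A threshold of 6 demands a perfect diagonal; 3 tolerates gaps like
# the one introduced above (Figure 22).
threshold = 3
for i, j in zip(*np.where(response >= threshold)):
    print(f"diagonal cluster starting near S[{i}, {j}], score {response[i, j]}")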

85In closing, we can survey some of the early results and genuine research gains that have resulted from the application of this method31. Consider once more the similarity matrix already shown above in Figure 19, representing the text of Albert the Great’s Summa Theologiae. (Shown again below as Figure 23.)

Fig. 23- Similarity matrix of Albert the Great’s Summa Theologiae (X-axis) compared to all paragraphs in the corpus with a detected similarity match (Y-axis). Labels 4-5 point to diagonal clusters that, unlike 1-3, are difficult to discern and easy to overlook from such a high-level perspective

86As noted, some patterns of sustained clusters of textual reuse are visible even to the naked eye (e.g. labels 1-3). But manually cataloguing all these clusters would be unnecessarily time consuming and prone to error. Further, the noise created by other types of similarity, together with the extremely high-level perspective required to see the totality of matches, makes it difficult to pick out meaningful clusters visually. Consider the barely perceptible cluster patterns identified by label 4 and label 5. When taking such a wide view, these clusters are scarcely visible.

87The method of convolution, however, can very successfully move through the resulting similarity matrices of all texts compared against all others and report very targeted passages of interest (the slices of the corresponding matrix where the successive reuse occurs). Once detected, we can again use the knowledge graph to restrict the frame to the targeted point of comparison and view the match with full transparency. As seen in Figure 24 and Figure 25, the convolution method has allowed us to detect clusters from a very obscure author writing at the beginning of the 15th century, Lambertus de Monte, clusters that in the previous Figure 23 (labels 4 and 5 respectively) were barely visible, if at all. Using the coordinates reported by the convolution, we can restrict the frame and clearly see a significant pattern of textual reuse that would otherwise be very easy to overlook. The result is a dynamic and transparent window of similarity that not only shows us the result but allows us to inspect it, view the actual text of a given paragraph, and see how the passages compare.

88Briefly, we ought to note the research value of the above result. A survey of the manuscript known to contain the Sentences commentary of Lambertus de Monte was published by Meliado and Negri in 201132. In their article, Meliado and Negri are at pains to show that the text of Lambertus proves that there was an active early Neo-Albertist school at Paris in the late 14th and early 15th centuries which previous scholarship had overlooked. Their argument focuses on looking at the debates in the principia of his commentary and showing a conceptual similarity between the positions taken by Lambertus and those known to be held by Albert and his followers. Their argument also stresses the presence of important excerpts from texts of Albert in the margins of the manuscript. What they did not note, however, (because it is extremely difficult to see without a systematic comparison of the Lambertus text to every other text in the corpus) is that throughout much of the body of the commentary, Lambertus is successively lifting passages from Albert’s Summa Theologiae, making slight to moderate modifications, and then presenting the result as his own answer to the proposed question. This level of textual reuse is perhaps the most persuasive evidence for Lambertus’s status as a disciple of Albert the Great. But without machine-accessible data, its presence — and the extent of it — is hard to see. With machine-accessible data and comparison via convolution, the connection becomes obvious and transparent.

89The success of the convolution method can also be seen by comparing these results to the document-embedding results. Traversing the questions contained within the Summa Theologiae of Albert the Great (the text represented on the X-axis in Figures 19 and 23) and requesting the 20 most similar questions in the corpus will certainly provide useful results. But because there are many reasons for text similarity, the causes behind these high rankings are blurred together and not easily separated.
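For comparison, a top-20 query like the one behind Figure 26 can be issued against a trained Doc2Vec model with Gensim (see notes 22-23). This is a minimal sketch assuming the model was saved to disk and that documents were tagged with their SCTA short ids during training; the file name is a hypothetical placeholder.

# Sketch: report the 20 most similar documents to a given question,
# assuming a Doc2Vec model trained on the corpus (notes 22-23).
from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec.load("scta-doc2vec.model")  # hypothetical file name

# Assumes documents were tagged with SCTA short ids during training.
for tag, score in model.dv.most_similar("Almn78-a48811", topn=20):
    print(tag, round(score, 3))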

90Consider the document-embedding similarity results in Figure 26 for Albert’s Summa Theologiae, Pars 1, Tractatus 18, Quaestio 70 (Almn78-a48811) — the question highlighted above in Figure 25 from which Lambertus de Monte has systematically lifted passages.

Fig. 26- Document-embeddings report of the 20 most similar documents to Almn78-a48811. Bolded entries point to texts NOT written by Albert the Great

91The corresponding text from Lambertus (gU87nn-d1e597) is indeed listed as the 5th most similar, but we can also see that stylistic similarity between works by the same author heavily obscures the results. The similarity list is dominated by texts from Albert’s Summa Theologiae (all entries beginning with “Almn78”) and his Sentences commentary (all entries beginning with “alalal”). The convolution method, therefore, can be a helpful way to separate one type of similarity from the other. Further, while the text by Lambertus (5:gU87nn-d1e597) is the top recommendation among works not by Albert, its percentage of similarity is only marginally different from two chapters by Alexander of Hales (7:ahsh-l1p1i1t2q3t2c3, 13:ahsh-l1p1i1t2q3t3m1c4), two questions by Peter of Tarantasia (10:pdt7y6-e92541, 12:pdt7y6-e91088), and one chapter by Peter Lombard (14:pl-l1d37c1). Importantly (as seen below in the similarity matrices for each of the mentioned results, Figures 27–32), while the texts of Lambertus de Monte (Figure 27) and Peter Lombard (Figure 32) show sustained successive reuse (the latter being a case of Albert lifting successive passages from Lombard), the texts from Alexander of Hales and Peter of Tarantasia, despite being ranked higher than the text of Lombard, show no successive reuse and few if any paragraphs deemed similar by n-gram intersection. The document-embedding approach, however, offers us no way to sort these matches into separate categories of relatedness.

92The superiority of the convolution method for separating cases of successive reuse from cases of stylistic and conceptual similarity is visible in the results reported by convolution for the same text of Albert the Great (Almn78-a48811) discussed above. Instead of many results clustered around a high percentage of similarity (see Figure 26), the convolution method reports only three results.

Fig. 33- Convolution report of cluster detection for Almn78-a48811

93First, it clearly recognizes and reports the significant case of reuse by Lambertus de Monte (Figure 33, Result 1). Second, it reports Albert’s reuse of Peter Lombard (Figure 33, Result 2). So, in our example, it has effectively reported the two cases of interest, while ignoring the conceptually similar texts that do not contain the kind of relatedness we aim to find. Finally, it reports a third result (Figure 33, Result 3), visualized below in Figure 34, which is a match between a question in Albert’s Summa Theologiae and his Sentences commentary that was not in the top 20 results reported by the document-embedding method (Figure 26).

Fig. 34- Diagonal clusters between Almn78-a48811===alalal-c43683 detected by convolution method, but not listed in top 20 most similar results reported by document-embedding method

94This final example is notable because this is a clear case of sustained dependency between the two texts, but unlike in our other examples it is not listed anywhere in the top 20 document-embedding results. Despite being a clear instance of the kind of text similarity we want to find, the result is crowded out and obscured by other kinds of textual relatedness.

7. Conclusion

95The goal of this tour through several methods and examples of both possible and actual research outcomes was first and foremost to make an argument for a new urgency, within the field of textual scholarship generally and the study of the scholastic Middle Ages in particular, to prioritize the creation of digital editions as textual data rather than as textual presentations. The outcomes surveyed here are intended to illustrate what is already becoming possible through a commitment to creating a corpus of machine-accessible data at scale. Even with the corpus in its current, inchoate form, we are already beginning to generate genuine advances: new discoveries as well as corrections and improvements to previous research.

96Given the current pace of development in the fields of natural language processing and information retrieval, as well as the heavy financing behind new AI-powered tools, one can only imagine how much these computational methods will be able to assist future historical research. But these future outcomes will depend on the existence of a well-constructed corpus, accompanied by a robust knowledge graph of domain-specific metadata. The Scholastic Commentaries and Texts Archive is one attempt to maintain such a corpus. Through the SCTA’s organization of the scholastic corpus to date, this article has tried to show some of the earliest fruits that are now within grasp. With increased participation in the form of financing, data contribution, and community governance, who knows how far we can go. The sky’s the limit.

Bibliography

Allen, Graham (2011), Intertextuality, 2nd ed., London, Routledge.

Burns, Patrick J., James A. Brofos, Kyle Li, Pramit Chaudhuri & Joseph P. Dexter (2021), “Profiling of Intertextuality in Latin Literature Using Word Embeddings,” in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, p. 4900-4907.

Christensen, Michael Stenskjær, Jeffrey C. Witt, & Ueli Zahnd (2021), “Re-Conceiving the Christian Scholastic Corpus with the Scholastic Commentaries and Texts Archive,” in Digital Humanities and Christianity, edited by Tim Hutchings & Claire Clivaz, Berlin, De Gruyter, p. 47-76. https://doi.org/10.1515/9783110574043-003.

de Boer, Jan-Hendryk (2018), “Kommentar,” in Universitäre Gelehrtenkultur vom 13.–16. Jahrhundert. Ein interdisziplinäres Quellen- und Methodenhandbuch, ed. Jan-Hendryk de Boer, Marian Füssel & Maximilian Schuh, Stuttgart, Franz Steiner, p. 265-318.

del Punta, Francesco (1998), “The Genre of Commentaries in the Middle Ages and its Relation to the Nature and Originality of Medieval Thought,” in Was ist Philosophie des Mittelalters? ed. Andreas Speer & Jan A. Aertsen, Berlin, De Gruyter, p. 138-151.

Gomez-Perez, Jose Manuel, Ronald Denaux & Andres Garcia-Silva (2020), A Practical Guide to Hybrid Natural Language Processing: Combining Neural Models and Knowledge Graphs for NLP, Cham, Springer International Publishing. https://doi.org/10.1007/978-3-030-44830-1.

Grimmer, Justin, Margaret E. Roberts & Brandon M. Stewart (2022), Text as Data, Princeton, NJ, Princeton University Press.

Le, Quoc & Tomas Mikolov (2014), “Distributed Representations of Sentences and Documents,” https://arxiv.org/pdf/1405.4053v2.pdf.

Meliado, Mario & Silvia Negri (2011), “Neues zum Pariser Albertismus des frühen 15. Jahrhunderts: der magister Lambertus de Monte und die Handschrift Brussel, Koninklijke Bibliotheek, Ms. 760”, Bulletin de Philosophie Médiévale 53, p. 349-84.

Nelson, Ted (1981), Literary Machines: The report on, and of, Project Xanadu concerning word processing, electronic publishing, hypertext, thinkertoys, tomorrow’s intellectual revolution, and certain other topics including knowledge, education and freedom, Sausalito, CA, Mindful Press.

Řehůřek, Radim & Petr Sojka (2010), “Software Framework for Topic Modelling with Large Corpora” in Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, ELRA.

Rosemann, Philipp W. (2007), The Story of a Great Medieval Book: Peter Lombard’s Sentences, Toronto, Broadview Press.

Stegmüller, Friedrich (1947-1948), Repertorium Commentariorum in Sententias Petri Lombardi, 2 vols., Würzburg, Schöningh.

Witt, Jeffrey C. (2018), “Digital Scholarly Editions and API Consuming Applications,” Digital Scholarly Editions as Interfaces 12, p. 219-47.

Witt, Jeffrey C. (2023), “Transparency and Discovery: Using a Text-Image Network to Study Manuscripts and Text Transmission,” Journal of Data Mining and Digital Humanities. On the Way to the Future of Digital Manuscript Studies (special issue). https://doi.org/10.46298/jdmdh.10225.

Notes

1 For a high-level discussion of intertextuality, see Graham Allen (2011), Intertextuality, 2nd ed., London, Routledge. For a more specific example of researchers interested in intertextuality using machine analysis, see Patrick J. Burns, James A. Brofos, Kyle Li, Pramit Chaudhuri & Joseph P. Dexter (2021), “Profiling of Intertextuality in Latin Literature Using Word Embeddings,” in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Association for Computational Linguistics), p. 4900–4907.

2 See Michael Stenskjær Christensen, Jeffrey C. Witt & Ueli Zahnd (2021), “Re-Conceiving the Christian Scholastic Corpus with the Scholastic Commentaries and Texts Archive,” in Digital Humanities and Christianity, edited by Tim Hutchings and Claire Clivaz, Berlin, De Gruyter, p. 48, which characterizes the distinctive nature of the scholastic tradition as follows: “It is this fundamental commitment to compendia and commentaries that makes scholasticism a complex intellectual tradition to study, a tradition that is best understood as a community project (instead of a loose conglomerate of individual contributions). The genre of commentaries epitomizes the scholastics’ commitment to other texts and authors.” Consider also the “scholia” (discussed below in Section 5) frequently accompanying scholastic editions as an example of this enduring interest in the intertextual nature of the scholastic tradition.

3 See Jose Manuel Gomez-Perez, Ronald Denaux & Andres Garcia-Silva (2020), A Practical Guide to Hybrid Natural Language Processing: Combining Neural Models and Knowledge Graphs for NLP, Cham, Springer International Publishing, https://doi.org/10.1007/978-3-030-44830-1.

4 See https://scta.info. Note that text identifiers used throughout the examples and figures below are SCTA short ids. They can be de-referenced by prepending the SCTA URI prefix “http://scta.info/resource/”. For more published work on the SCTA, its underlying data models, and its applications, see Jeffrey C. Witt (2018), “Digital Scholarly Editions and API Consuming Applications”, Digital Scholarly Editions as Interfaces 12, p. 219-47; Michael Stenskjær Christensen, Jeffrey C. Witt & Ueli Zahnd (2021), “Re-Conceiving the Christian Scholastic Corpus with the Scholastic Commentaries and Texts Archive,” p. 47-76, https://doi.org/10.1515/9783110574043-003; and Jeffrey C. Witt (2023), “Transparency and Discovery: Using a Text-Image Network to Study Manuscripts and Text Transmission,” Journal of Data Mining and Digital Humanities. On the Way to the Future of Digital Manuscript Studies (special issue). https://doi.org/10.46298/jdmdh.10225.

5 For current and always in-progress documentation, see: https://community.scta.info/pages/docs. See especially the TEI customization docs that govern how textual data is originally prepared: https://community.scta.info/pages/lombardpress-schema-critical.html and https://community.scta.info/pages/lombardpress-schema-diplomatic.html. Consistency and conformity to these specifications is a big reason why such a large corpus of textual data can be regularly crawled and a consistent and coherent knowledge graph extracted from that data.

6 A useful entry point into the knowledge graph is https://scta.info/resource/scta. This is a “workGroup” resource that links to all texts (“toplevel expressions”) indexed within the graph. “Toplevel expressions” refer to the first hierarchical level in a given text, which is usually further subdivided into parts or chapters, then into sections, and finally into paragraphs.

7 The public SPARQL endpoint can be accessed here: http://sparql.scta.info/ds/query; see https://community.scta.info/pages/technical-overview.html for information on how to access and use.

8 The CSV API endpoint (https://scta.info/csv/scta) is one example, but other API endpoints, both stable and in-progress are described here: https://community.scta.info/pages/technical-overview.html.

9 The ability to carve up and re-construct texts at different hierarchy levels calls to mind Ted Nelson’s original vision of Xanalogical Storage, wherein a digital corpus is stored first as a pool of re-usable and granular text fragments. Individual fragments of a larger text can then be called upon at will from this pool and recomposed to meet different purposes. The SCTA knowledge graph — which creates a de-referenceable record for each granular piece of the corpus (e.g. paragraph, quotation, name, even an individual word) and identifies its place within multiple competing and overlapping text hierarchies — is a critical tool for achieving the results envisioned by Nelson. See Ted Nelson (1981), Literary Machines: The report on, and of, Project Xanadu concerning word processing, electronic publishing, hypertext, thinkertoys, tomorrow’s intellectual revolution, and certain other topics including knowledge, education and freedom, Sausalito, CA, Mindful Press, esp. chapter 0, p. 0/6.

10 For more on this dynamic ability to traverse the corpus network, see Jeffrey C. Witt (2023), “Transparency and Discovery: Using a Text-Image Network to Study Manuscripts and Text Transmission.”

11 See my article, Jeffrey C. Witt (2023), “Transparency and Discovery” on why we need a new abstract textual coordinate system so that this mapping to material coordinates can happen instantly and automatically.

12 For a description of this inversion, see: Michael Stenskjær Christensen (2021), “Re-Conceiving the Christian Scholastic Corpus,” p. 52.

13 For a description of n-grams, see Justin Grimmer, Margaret E. Roberts & Brandon M. Stewart (2022), Text as Data, Princeton, NJ, Princeton University Press, p. 51; for their use in comparison tasks, see esp. c. 7, p. 70-77.

14 On cosine similarity see, Justin Grimmer et al. (2022), Text as Data, p. 72.

15 See Justin Grimmer et al. (2022), Text as Data, p. 61. Figure 1 below shows a simplified example of the vector using only 12 example n-grams. In actuality, the resulting vector is much larger. At the time of writing, the corpus consists of 25,796,762 unique 4-grams which constitutes the full length of the one-hot encoded vector for each document.

16 This query is a memory-intensive operation. For this reason, n-gram queries of this kind cannot be run directly on the SCTA public SPARQL endpoint. Many of the examples below are realized through a version of the above query (restricted to targeted sub-graphs to ease the demand on memory), which is sent to an alternative SPARQL endpoint. The point here is to illustrate the basic logic of a query that could be implemented on any graph organized in a similar way.

17 For an introduction to Lombard’s Sentences, see Philipp W. Rosemann (2007), The Story of a Great Medieval Book: Peter Lombard’s Sentences, Toronto, Broadview Press.

18 See, for example, the list of known commentaries compiled by Friedrich Stegmüller (1947-1948) in Repertorium Commentariorum in Sententias Petri Lombardi, Würzburg, Schöningh. See also A Digital Repertory of Commentaries on Peter Lombard’s Sentences, https://drcs.zahnd.be/, which, starting from Stegmüller’s catalogue, tries to keep a record of all known commentaries.

19 See Justin Grimmer, Margaret E. Roberts & Brandon M. Stewart (2022), Text as Data, Princeton, NJ, Princeton University Press, p. 86-87 for a discussion of document-level embeddings.

20 Quoc Le & Tomas Mikolov, “Distributed Representations of Sentences and Documents,” p. 1 (introduction). https://arxiv.org/pdf/1405.4053v2.pdf.

21 Quoc Le & Tomas Mikolov, “Distributed Representations of Sentences and Documents,” p. 1 (introduction). Recall that the resulting one-hot encoded n-gram vectors resulted in vectors of length greater than 25 million, and the value for most of these features in most documents was 0. In contrast, the document-embedding method can significantly reduce the dimensionality. In the case discussed here, the training was instructed to compute vectors of size 200.

22 For Gensim’s implementation see: https://radimrehurek.com/gensim/models/doc2vec.html and Radim Řehůřek & Petr Sojka (2010), “Software Framework for Topic Modelling with Large Corpora,” in Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks (ELRA). For a description of the underlying algorithms implemented by Gensim, see Quoc Le & Tomas Mikolov, “Distributed Representations of Sentences and Documents.”

23 The resulting SCTA corpus vectors (versioned as v1.0.0) are published here: https://github.com/scta/corpus-doc-embeddings/tree/v1.0.0. One can also find there more details on training, model versions, implementation details, and examples.

24 For more on tf-idf, see Justin Grimmer, et al. (2022), Text as Data, p. 75-77. See also the sklearn TfidfVectorizer: https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html, and the brief discussion of tf-idf below in Section 6.

25 I adjust for extreme outliers here because, as shown below, there are places where the actual assertions made by the Leonine editors may be wrong. In such cases the extreme distance from 0 should not be factored into the overall average. There are other cases where the asserted match is between an article in the Summa Theologiae and a very specific reply nested deep within an article in the Sentences commentary which may not reflect the main concern of the overall article. In such cases, the document-embedding score, which is focused on overall conceptual similarity, may produce a lower ranking that distorts the overall effectiveness of matching articles in the Summa Theologiae to articles in the Sentences commentary.

26 On ideas of originality and authorship in the scholastic tradition, see Francesco del Punta (1998), “The Genre of Commentaries in the Middle Ages and Its Relation to the Nature and Originality of Medieval Thought,” in Was ist Philosophie des Mittelalters? ed. Andreas Speer & Jan A. Aertsen, Berlin, De Gruyter, p. 138-151, and Jan-Hendryk de Boer (2018), “Kommentar,” in Universitäre Gelehrtenkultur vom 13.–16. Jahrhundert. Ein interdisziplinäres Quellen- und Methodenhandbuch, ed. Jan-Hendryk de Boer, Marian Füssel & Maximilian Schuh, Stuttgart, Franz Steiner, p. 265-318.

27 See note 25 above.

28 For more on convolution and its implementation using the Python library SciPy, see: https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve2d.html. The Wikipedia article on convolution https://en.wikipedia.org/wiki/Convolution offers a useful starting place for learning about convolution generally and contains several links to useful resources.

29 Note, for example, that, while Justin Grimmer, et al. (2022), in Text as Data, survey a very wide range of text-analysis methods, including an entire chapter on “Clustering” (c. 12, p. 123-146), convolution as a method does not appear to be a focus and is not listed in the index of the book.

30 See Figure 2 above for an illustration of this dot product calculation.

31 One important finding using this method has already been published in my article Witt (2023), “Transparency and Discovery”. There I document a previously unknown case of text reuse, in which the fourteenth-century Augustinian Peter Gracilis lifts passages from the earlier Franciscan Andreas de Novo Castro. These reused passages were detected via the same method described here.

32 Mario Meliado & Silvia Negri (2011), “Neues zum Pariser Albertismus des frühen 15. Jahrhunderts: der magister Lambertus de Monte und die Handschrift Brussel, Koninklijke Bibliotheek, Ms. 760”, Bulletin de Philosophie Médiévale 53, p. 349-84.

List of illustrations

Title Fig. 1- Example vector representing the presence (1) or absence (0) of a given n-gram in document A
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-1.jpg
File image/jpeg, 31k
Title Fig. 2- Example of computing the number of shared 4-grams in documents A and B
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-2.jpg
File image/jpeg, 50k
Title Fig. 3- An example, simplified, SPARQL query illustrating how the n-gram intersection threshold can be used to query for all “related” documents
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-3.jpg
File image/jpeg, 77k
Title Fig. 4- Similarity matrix of Augustine’s De Trinitate (X-axis) compared to all paragraphs (arranged in chronological order, top to bottom) in the corpus with a detected similarity match (Y-axis)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-4.png
File image/png, 148k
Title Fig. 5- Zoomed-in similarity matrix of Augustine’s De Trinitate (X-axis) compared to all paragraphs (arranged in chronological order, top to bottom) in the corpus with a detected similarity match (Y-axis)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-5.png
File image/png, 39k
Title Fig. 6- Similarity matrix of Augustine’s De Trinitate (X-axis) compared to paragraphs in Lombard’s Sentences with a detected similarity match (Y-axis)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-6.png
File image/png, 92k
Title Fig. 7- Similarity matrix of Lombard’s Sentences (X-axis) compared to paragraphs in Augustine’s De Trinitate with a detected similarity match (Y-axis)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-7.png
File image/png, 127k
Title Fig. 8- Similarity matrix of Augustine’s De Trinitate (X-axis) compared to paragraphs in distinct Sentences commentaries with a detected similarity match, grouped by commentary and arranged chronologically (Y-axis)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-8.png
File image/png, 133k
Title Fig. 9- Zoomed-in similarity matrix of Augustine’s De Trinitate (X-axis) compared to paragraphs in distinct Sentences commentaries with a detected similarity match, grouped by commentary and arranged chronologically (Y-axis)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-9.jpg
File image/jpeg, 81k
Title Fig. 10- Similarity matrix of Augustine’s De Trinitate (X-axis) compared to all paragraphs in all Sentences commentaries with a detected similarity match, grouped by distinction (Y-axis)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-10.png
File image/png, 106k
Title Fig. 11- Similarity matrix of Lombard’s Sentences (X-axis) compared to all paragraphs in all Sentences commentaries with a similarity match in Lombard’s Sentences, grouped by distinction (Y-axis)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-11.png
File image/png, 92k
Title Fig. 12- Thomas Aquinas, Summa Theologiae, II-II, q. 43, a. 8 (Opera Omnia Iussu Leonis XIII, vol. 8 [Rome, 1895], p. 329)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-12.png
File image/png, 1.3M
Title Fig. 13- Thomas of Strasbourg, I Sent., prol. q. 1 (Genoa 1585, fol. 2v)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-13.png
File image/png, 1.4M
Title Fig. 14- Gerard of Siena, I Sent., d. 38, q. 1 (Padua 1598, p. 573)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-14.png
File image/png, 1.7M
Title Fig. 15- Durandus of St. Pourcain, I Sent., prol. q. 1 (Lyon 1563, fol. 2r)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-15.png
File image/png, 950k
Title Fig. 16- Sample results of match rank testing via document-embeddings
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-16.png
File image/png, 135k
Title Fig. 17- Comparison of document-embedding’s similarity ranking of the Leonine editors’ asserted similarity relation (TAca84-d1e765-Dd1e638===ta-l2d26q1-Dd1e345) to the corrected similarity assertion (TAca84-d1e765-Dd1e638===ta-l2d36q1-Dd1e369)
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-17.png
File image/png, 28k
Title Fig. 18- Comparison of tf-idf (left) to document-embedding (right) similarity rankings
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-18.jpg
File image/jpeg, 133k
Title Fig. 19- Similarity matrix of Albert the Great’s Summa Theologiae (X-axis) compared to all paragraphs in the corpus with a detected similarity match (Y-axis). Labels 1-3 point to examples of diagonal clusters visible to the naked eye
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-19.png
File image/png, 218k
Title Fig. 20- Illustration of the diagonal pattern formed through textual re-use in three successive paragraphs
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-20.jpg
File image/jpeg, 34k
Title Fig. 21- Example 6x6 filter matrix designed to capture diagonal patterns in the larger similarity matrix
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-21.jpg
File image/jpeg, 42k
Title Fig. 22- Examples of sub-matrices in the larger similarity matrix whose dot product when multiplied by the filter matrix would result in a score of 3 or higher
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-22.jpg
File image/jpeg, 91k
Title Fig. 23- Similarity matrix of Albert the Great’s Summa Theologiae (X-axis) compared to all paragraphs in the corpus with a detected similarity match (Y-axis). Labels 4-5 point to diagonal clusters that, unlike 1-3, are difficult to discern and easy to overlook from such a high-level perspective
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-23.png
File image/png, 222k
Title Fig. 24-25
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-24.jpg
File image/jpeg, 193k
Title Fig. 26- Document-embeddings report of the 20 most similar documents to Almn78-a48811. Bolded entries point to texts NOT written by Albert the Great
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-25.jpg
File image/jpeg, 164k
Title Fig. 27-32
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-26.jpg
File image/jpeg, 256k
Title Fig. 33- Convolution report of cluster detection for Almn78-a48811
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-27.jpg
File image/jpeg, 59k
Title Fig. 34- Diagonal clusters between Almn78-a48811===alalal-c43683 detected by convolution method, but not listed in top 20 most similar results reported by document-embedding method
URL http://journals.openedition.org/methodos/docannexe/image/10987/img-28.png
File image/png, 77k

To cite this article

Electronic reference

Jeffrey C. Witt, “Finding Relatedness: pathways for detecting textual relatedness in the medieval scholastic corpus”, Methodos [Online], 24 | 2024, published online 16 October 2024, accessed 6 February 2025. URL: http://journals.openedition.org/methodos/10987; DOI: https://doi.org/10.4000/12xql

Author

Jeffrey C. Witt

Loyola University Maryland

Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 license. All other elements (illustrations, imported supplementary files) are “All rights reserved” unless otherwise stated.
