
Digital Humanities in the Web 3.0 Era

Machines, Symbolic AI, and the Semantic Web:
What They Are and Why They Matter in the Humanities

Les machines, l’IA symbolique et le Web sémantique :
Ce qu’elles sont et pourquoi elles comptent pour les sciences humaines
Roberta Padlina


  • 1 For both quotations see George B. Dyson, Turing's Cathedral: The Origins of the Digital Universe, (...)

“This species of device is so radically new that many of its uses will become clear only after it’s been put into operation.” It’s what he said to me. ‘Cause he understood. He knew the real challenge was not building the thing but asking it the right questions in a language intelligible to the machine. And he was the only one who spoke that language.
(Benjamín Labatut, The MANIAC)
“Let the whole outside world consist of a long paper tape.”
(John von Neumann, 1948)1

Introduction

The Old Dream of Intelligent Machines

  • 2 Alan Turing, “On Computable Numbers with an Application to the Entscheidungsproblem”, Proceedings (...)
  • 3 John von Neumann (1945), First Draft of a Report on the EDVAC, Contract No. W-670-ORD-4926 betwee (...)
  • 4 “It turns out that the recent success of deep learning is due less to new breakthrough in AI than (...)

1In 1936, Alan Turing conceived and formally described an automatic universal machine, and in 1951, he posed the question of whether digital computers could think2. In 1945, John von Neumann provided the first concrete description of an electronic digital programmable machine, building on Turing’s work3. This description served as the blueprint for all subsequent machines and laid the foundation of modern computers. Eight decades have passed since the dawn of the digital universe and the technological revolution that followed. It is remarkable to consider the profound impact this transformation has had on nearly every aspect of our lives, especially when contrasted with the relatively limited changes in the underlying technology itself since then4.

  • 5 “The whole thinking process is still rather mysterious to us, but I believe that the attempt to m (...)
  • 6 This is why Turing replaced the original question of whether machines can think ‒ “too meaningles (...)
  • 7 “Beyond the narrow framing put forward by both technology companies and the doctrine of human uni (...)

2The dream of artificial intelligence (AI) and thinking machines, which has a much longer history, has gone through many winters and springs but remains with us, eadem sed aliter – the same, yet different. As AI progresses, it also refines our understanding of intelligence itself5, though its precise nature and the means of distinguishing between genuine and simulated intelligence remain elusive6. If there is one lesson we should have learned by now, it is that intelligence is not an exclusively human capacity but rather a more-than-human faculty7.

The Explosive Success of Current AI and Related Issues

  • 8 “I argue that AI is neither artificial nor intelligent. Rather, artificial intelligence is both e (...)

3In 2024, AI is primarily associated with machine learning (ML) and large language models (LLMs). Generative and deep learning-based AI applications, such as AlphaGo, ChatGPT, and Midjourney, have propelled the field forward spectacularly and made an immediate global impact. This is not the first wave of enthusiasm to swell and later cool, nor will it be the last. Nevertheless, the illusion of human-like thoughts and actions performed by machines is stronger than ever. Unfortunately, the majority of people hold a dangerously misguided perception of AI, largely shaped by a dominant, self-reinforcing rhetoric that promotes the idea of AI as a disembodied, neutral and autonomous intelligence disconnected from the material world8. This illusion of intelligence and autonomy is further reinforced by the possibility of interacting with these programs in ordinary natural language, a feature that is both extremely comfortable and convenient. It is important to remember that when the illusion and the sense of ease are great, so too are the risks.

  • 9 It is important to acknowledge the existence of alternative open-source and transparent solutions (...)
  • 10 “That’s what happens, it would seem, when the development of AI is led primarily by venture-funde (...)
  • 11 AlphaGo Zero, for example, played 4.9 million games of Go against itself in just 72 hours. Cf. Da (...)

4Several significant issues have been identified in relation to these novel forms of AI9. Here, I will outline only four of these concerns. First, they are “black boxes”: unintelligible not only to laypeople but even to their developers. There is a lack of comprehension and control over these technologies, and those who shape them and determine their proprietary access are often competitive, extractive, and harmful Big Tech companies led by foolish billionaires who fail to demonstrate methodological prudence, do not employ scientific criteria, and offer no transparency10. As a result, AI is perceived as an arcane technology with its own mysterious processes, leaving individuals feeling powerless and unarmed against it. This perception is reinforced by the immense computational power of machines, which greatly exceeds human capacity and enables them to perform complex calculations while generating and processing an enormous number of possible combinations11. The necessity of explainable AI (XAI) is becoming increasingly apparent, as it is vital for users to understand and trust the outcomes generated by such systems.

  • 12 “There are indications however that it is possible to make the machine display intelligence at th (...)
  • 13 “Let’s start by telling the truth: machines don’t learn. What a typical « learning machine » does (...)
  • 14 Cf. Andriy Burkov (2019), The Hundred-Page Machine Learning Book, p. 131.
  • 15 “The most efficient search of an unmapped territory takes the form of a random walk”, George B. D (...)
  • 16 “It is probably wise to include a random element in a learning machine [...]. A random element is (...)
  • 17 “What is this Monte Carlo method? Very roughly, the idea is to replace a given precise mathematic (...)

5A second issue is that, at certain points, these technologies begin to lose their way and become delirious, producing hallucinations. These hallucinations are particularly challenging to eliminate entirely, as they are inherently linked to the statistical and probabilistic nature of the systems involved12. Since they are based on language models rather than knowledge models, ChatGPT and similar systems lack the capacity to “truly” comprehend ‒ or to comprehend as humans do ‒ the input they receive13. Chatbots never respond with “I don’t know”; instead they fabricate sentences word by word, attempting to fill gaps by guessing the next word based on statistical probabilities and proceeding through trial and error14. This approach enables them to tinker with responses and generate content that resembles previously encountered material. The key to their power lies in incorporating elements of randomness and in the idea that a random search can be more efficient than a systematic one15. As Turing observed, introducing randomness is fundamental to enabling machines to learn autonomously, thereby reducing reliance on predefined rules set by humans16. This idea was realized in the Monte Carlo method, which John von Neumann developed together with Stanisław Ulam: a technique for making predictions and solving complex, intractable problems without tackling them directly through exact analytical calculation. Instead, Monte Carlo relies on random choices and a series of approximations that converge on the correct answer17. Rather than computing all possible combinations of a complex problem that resists analytical or mathematical approaches, it is easier to simulate the situation numerous times, using randomness to select and explore potential paths. To obtain success probabilities, one need only observe and count the number of positive outcomes.
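
To make the recipe concrete, here is a minimal Monte Carlo sketch in Python (an illustration of the general technique, not of any specific system cited above): it estimates π by drawing random points in the unit square and counting how many land inside the quarter circle, that is, simulate, randomize, and count the positive outcomes.

    import random

    def estimate_pi(trials: int = 1_000_000) -> float:
        """Monte Carlo estimate of pi: sample random points in the unit
        square and count those that fall inside the quarter circle."""
        hits = 0
        for _ in range(trials):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:  # point lies inside the quarter circle
                hits += 1
        return 4 * hits / trials      # the hit ratio converges to pi/4

    print(estimate_pi())  # ~3.141..., accuracy grows with the number of trials

No step computes π analytically; the answer emerges statistically from repeated random trials, which is exactly the trade of exactness for tractability described above.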

  • 18 “Training sets raise complex questions from ethical, methodological, and epistemological perspect (...)
  • 19 “As Vannevar Bush foresaw, machines have enormous appetites. But how and what they are fed has an (...)
  • 20 “Anything and everything online was primed to become a training set for AI”, Kate Crawford (2021) (...)
  • 21 “It's no secret: deep learning requires big data. Big in the sense of the million-plus labeled tr (...)
  • 22 George B. Dyson (2012), Turing’s Cathedral, p. 312, “One approach is to start with the questions, (...)
  • 23 “We are training Google’s image recognition algorithms for free. Again, the myth of AI as afforda (...)
  • 24 It has become so normalized across the industry to take and use whatever is available that few s (...)
  • 25 “We become more like the machines we envisage, in ways which, in the present, have profoundly neg (...)

6A third issue pertaining to mainstream AI is the many biases deeply embedded in the data fed to these systems18. ML programs, predominantly inductive in nature and therefore reliant on specific examples, require vast amounts of data (the more, the better19) for training and to perform their intended functions. Typically, three main sources of data are used: synthetic (artificially generated) data, data from the Web and social media, and data provided by the users of these services. The first two sources are unreliable in quality, as they are either inherently fake or largely consist of misinformation, subjective opinions, or inaccuracies20. The third source is all of us. When we use ChatGPT or Google, we tend to assume that we, the users, are the only party asking questions and receiving answers. However, this is not the case: users also provide the responses these systems seek21. Such systems learn from us (from our questions and reactions) without our awareness. Despite the extensive data and information these AI systems have accumulated, they still lack a genuine understanding of its meaning. To uncover this meaning, they rely on our questions and their connection to the answers we select, echoing “the strategy that Turing had in mind: gathering all available answers, inviting all possible questions, and mapping the results”22. We are currently caught in a vicious cycle, with teachers unaware of their roles and learners unable to understand what they are handling or to distinguish between truth and falsehood. Without mutual understanding, these tools and their users profoundly influence one another. This outcome is the result of a deliberate strategy: the AI industry has succeeded in positioning us, its users, simultaneously as buyers, (unpaid) workers, and products of its business23. It is, admittedly, a stroke of (evil) genius on their part but also a rather shortsighted move on ours, as we allow ourselves to be beguiled solely for the sake of the comfort these tools afford24. We are all contributing to the success of these companies, enabling their CEOs to pursue delusional and harmful visions at the expense of the planet and everyone else25.

  • 26 AI encompasses ML, which, in turn, includes artificial neural networks (ANNs). Deep learning is a (...)
  • 27 “Some people – generally mathematicians – promoted mathematical logic and deductive reasoning as (...)
  • 28 Cf. Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali, p. 20.
  • 29 George B. Dyson (2012), Turing’s Cathedral, p. 318, “As Marcus notes, while we humans attribute t (...)
  • 30 “I expect that digital computing machines will eventually stimulate a considerable interest in s (...)

7A final, more specific risk in today’s race for AI is that the current success of ML relies heavily on subsymbolic AI and may overshadow the importance of, and thus divert funding from, its counterpart (the other side of AI): machine reasoning (MR) and symbolic AI, also known as GOFAI (good old‑fashioned artificial intelligence). These are the two main AI research programs, exhibiting opposing approaches. Subsymbolic or connectionist AI, inspired by neuroscience, adopts a bottom‑up approach and simulates the low-level brain structures of human neural networks26. In contrast, symbolic AI takes a top-down approach, focusing on simulating high-level symbolic representations of the mind and conscious cognitive processes, such as abstraction, reasoning, and decision-making27. In the top-down approach, the programmer explicitly designs all the processes in the algorithm, whereas in the bottom-up approach, the algorithm learns gradually on its own – ideally from data alone and without the intervention of humans or their theoretical and conceptual mediation28. In order to achieve optimal results with ChatGPT and similar technologies, there is a growing emphasis on “prompt engineering.” The aim of this approach is to enhance the quality of user input given in plain text and natural language, thereby reinforcing the perception that machines can understand this input as we do29. In contrast, symbolic AI and MR rely on formal symbolic languages that allow the specification of all necessary symbols and rules to perform step-by-step reasoning based on logical deduction. This symbolic language is the original language and mode of reasoning of the machine30, and it is crucial that humans also make an effort to understand and learn it:

  • 31 James Bridle (2022), Ways of Being, p. 169. Cf. “The languages we have developed thus far to comm (...)

Once again, in imagining better ways of living with non-humans, computational or biological, we must be attentive to their own ways of speaking and making meaning, and not simply insist that they learn to speak, and think, and behave, in the ways that we do31.

  • 32 “it is possible to teach a machine by punishments and rewards to obey orders given in some langua (...)

8ML offers the convenience of direct expression in natural languages, yet it does not guarantee the accuracy of its results. In contrast, MR requires the use of a formal language but ensures that the output will be correct (provided the initial data and rules are accurate). Additionally, MR relies on knowledge models that provide context for interpreting the meaning of data, enabling machines to grasp it more effectively. Subsymbolic AI and ML alone are not yet sufficiently reliable for scientific purposes and research. By contrast, symbolic AI offers significant transparency (and consequently trustworthiness), because knowledge models are human-readable and explainable (unlike the inner workings of artificial neural networks) and all processes are explicitly outlined step by step. It is therefore essential to complement ML with MR and to combine and integrate the two AI paradigms (symbolic and subsymbolic) to enhance AI32.
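
A toy contrast, sketched in Python under invented names: the fragment below performs machine reasoning in miniature, and its conclusion is guaranteed correct precisely when the explicit facts and the explicit rule are correct.

    # Machine reasoning in miniature: explicit facts, an explicit rule,
    # and a conclusion that is traceable step by step. Names are invented.
    facts = {("Socrates", "is_a", "Human")}

    def rule_humans_are_mortal(kb):
        """If X is_a Human, then X is_a Mortal."""
        return {(s, "is_a", "Mortal") for (s, p, o) in kb
                if p == "is_a" and o == "Human"}

    facts |= rule_humans_are_mortal(facts)
    print(("Socrates", "is_a", "Mortal") in facts)  # True, by deduction

Unlike a language model’s statistical guess, every conclusion here can be traced back to the facts and the rule that produced it.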

Symbolic AI and the Semantic Web

  • 33 Such systems are known to be efficient for domain-specific and local solutions but are typically (...)

9Symbolic AI began with Leibniz’s dream and evolved into knowledge- and rule-based systems for automatic deduction and theorem proving, expert systems for solving domain-specific problems33, and knowledge representation, engineering, and management systems. The vision of the Semantic Web (SW) is deeply rooted in this tradition of symbolic AI. Although still a work in progress with many improvements needed, the SW is currently one of the most advanced knowledge representation paradigms and effective technologies available to researchers, as it enables the creation of comprehensive and reusable knowledge graphs.

The Digital in the Humanities: Benefits of Semantic Web Technologies

  • 34 The digital, computational approach to humanistic material has led to new comparisons and method (...)
  • 35 “The powers of computers derive as much from their ability to copy as from their ability to compu (...)

10There are undeniable benefits for scientific research, including the humanities, in adopting the digital paradigm and the SW as a methodological and conceptual framework34. Implementing SW technologies enables research data and results to be expressed in a symbolic and formal way, benefiting both humans and machines while offering a significant return on investment. The SW combines the strengths of formal systems, along with the power of symbolic notation and reasoning, with the advantages of the Web’s decentralized and distributed architecture in a global context. This synergy is most evident in the two areas where the SW (and machines in general) excel and demonstrate their capabilities most clearly: communication and computation (or reasoning)35.

Communication

  • 36 “before attempting to translate our data into the rigorous language of symbols, it is above all t (...)
  • 37 Nevertheless, the same concepts can always be expressed through different symbolic representation (...)

11The use of formal symbolic languages allows us to purify natural language expressions and clarify their logical structures, providing a scientific tool in the form of a precise and rigorous logical language that eliminates ambiguities and shortcuts, leaving nothing to guesswork36. Following the principle of contrast, each symbol corresponds to one and only one concept: entities deemed equivalent must be represented by the same symbol, while entities with different properties must be represented by distinct symbols37. The first and crucial step is to assign a symbol (an identifier) to anything that requires discussion or investigation, ensuring it is distinct, directly accessible and manipulable.
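
A minimal sketch of this principle in Python, with invented placeholder URIs: homonyms receive distinct identifiers, while different names for the same entity share one, so statements attach to unambiguous symbols rather than to ambiguous words.

    # One symbol per concept (all URIs are invented examples):
    # the ambiguous word "Paris" splits into two identifiers, while
    # "Paris" and "Lutetia", naming the same city, share one.
    IDS = {
        ("Paris", "French city"):   "http://example.org/id/Paris_France",
        ("Paris", "Trojan prince"): "http://example.org/id/Paris_Troy",
        ("Lutetia", "French city"): "http://example.org/id/Paris_France",
    }

    assert IDS[("Paris", "French city")] == IDS[("Lutetia", "French city")]
    assert IDS[("Paris", "French city")] != IDS[("Paris", "Trojan prince")]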

  • 38 “The web is an amazing access platform. Content published on the web becomes almost immediately g (...)
  • 39 Cf. Umberto Eco (1975), Trattato di semiotica generale, p. 112 and 377. Cf. also Fabio Ciotti (20 (...)
  • 40 “We will increasingly operate in a world of networked and linked collection and descriptions. Wit (...)
  • 41 “The humanities have become increasingly fragmented over the last two centuries–unlike the scienc (...)
  • 42 “Scientific observation is not merely pure description of separate facts. Its main goal is to vie (...)

12The logical and universal nature of machines and their languages enables us to effectively manage human diversity – linguistic, cultural, and disciplinary. Machines and SW languages function as a lingua franca independent of natural languages. A further distinctive benefit of the SW is that it is built upon and grounded in the World Wide Web, whose impact is evident to all38. The significant power of the SW’s formal resources lies in their connectivity and reusability within a global semantic space39, a conceptual network that maps different representations and models. The SW provides a common framework that facilitates data integration and information linkage across disparate sources and contexts. It enables the integration of all forms of data (from unstructured to structured and everything in between) and any kind of media40. This framework fosters interdisciplinarity, methodological integration and conceptual unification, supporting a transparent and holistic perspective that transcends modern academic compartmentalization and fragmentation41. This is crucial, as scientific observation should not be limited to merely describing separate entities or facts but should incorporate the event described within as many relationships as possible42.
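
What such integration looks like in practice can be suggested with a small Python sketch over invented data: two datasets produced independently become one queryable graph simply because they use the same identifier for the same entity.

    # Two independently produced datasets (invented names); shared
    # identifiers make integration a plain set union.
    library = {("ex:Erasmus", "ex:authorOf", "ex:PraiseOfFolly")}
    archive = {("ex:Erasmus", "ex:correspondedWith", "ex:ThomasMore"),
               ("ex:PraiseOfFolly", "ex:firstPublished", "1511")}

    graph = library | archive  # data integration as union of statements

    # A cross-source query: everything now known about Erasmus.
    print([t for t in graph if t[0] == "ex:Erasmus"])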

13In the SW, research data becomes open and FAIR43: thanks to unique identifiers and rich descriptions, data is easily findable; expressed in formal languages, it is not only transparent to humans but also accessible to machines (machine-readable); being self‑explanatory, machine-interpretable, and based on standard logical models, it is inherently and semantically interoperable; and, ultimately, all of this makes it directly reusable44.

  • 45 “The exercise helps you to check your own reasoning; you can be your own critic. What is more, as (...)
  • 46 The transformation of ordinary language into symbols presents greater difficulties. One must (...)

14One of the most important aspects of formalizing data with SW technologies is that we do not merely digitize it; we also make the implicit semantics (meaning) of data and documents, as well as the tacit knowledge of scholars about them, explicit. Formalization compels us to clarify, thereby revealing and making visible ‒ not only to others but also to ourselves ‒ our reasoning and assumptions45. It also exposes shortcomings or flaws in the data and helps prevent the creation of oversimplified assumptions and taxonomies46.

15The digital paradigm underscores the need for structured data and representations, as well as models that precisely describe the meaning of data. A purely data-driven approach is insufficient and must be complemented by conceptual knowledge and theories. The SW provides modeling languages and logic-based knowledge models, known as ontologies, which allow us to capture and express complex semantics. These ontologies facilitate the symbolic representation of research domain theories, defining the fixed characteristics and objects of a domain, as well as the principles governing their interactions.
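
To suggest what such an ontology amounts to technically, here is a deliberately tiny sketch in Python (the vocabulary is invented): one property whose domain and range fix which kinds of things it may connect, and a check that data respects the model.

    # A toy ontology: a property constrained by domain and range,
    # and a conformance check. All names are invented examples.
    ontology = {"ex:authorOf": {"domain": "ex:Person", "range": "ex:Work"}}
    types = {"ex:Erasmus": "ex:Person", "ex:PraiseOfFolly": "ex:Work"}

    def conforms(s, p, o):
        """Check a statement against the ontology's domain/range rules."""
        spec = ontology.get(p)
        return (spec is not None
                and types.get(s) == spec["domain"]
                and types.get(o) == spec["range"])

    print(conforms("ex:Erasmus", "ex:authorOf", "ex:PraiseOfFolly"))  # True
    print(conforms("ex:PraiseOfFolly", "ex:authorOf", "ex:Erasmus"))  # False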

  • 47 “« Theories are nets », wrote Novalis, « and only he who casts will catch ». Theories are nets, a (...)
  • 48 “Ideas about how we should think are locked into our culture. It’s a problem exacerbated by techn (...)
  • 49 “No schema is ever complete, no taxonomy ever finished – and that’s fine, providing the systems w (...)
  • 50 Kate Crawford (2021), Atlas of AI, p. 132.

16Models, with their unifying and predictive power, are of critical importance47. However, we must be cautious not to become prisoners of our own conceptual models and mental schemas: no model fully represents the reality it claims to depict. Yet, models can shape ‒ either restrict or expand ‒ our perception of reality in ways that may not be immediately apparent48. It is therefore crucial to recognize the partiality and continuous revisability of models. The SW enables us to subject our models to an ongoing process of open, transparent, collaborative, and consensus-based revision49. Classification, as an act of power ‒ “the power to decide which differences make a difference50” ‒ must be carefully regulated and progressively refined collectively by the scholarly community.

  • 51 Usually a single signifier conveys diverse, interlaced contents, and therefore what (...)
  • 52 “This redundancy prevented bits being lost in transit”, George B. Dyson (2012), Turing’s Cathedra (...)
  • 53 Cf. Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 147 and 160. “It turns (...)

17Moreover, no single explanatory framework can fully account for the many layers of our complex reality and the multiple connections within it. There are always various ways to interpret the same data, and data can be combined in more than one way. It is therefore essential to link data to its historical and cultural context and to describe it through multiple, multidimensional models at varying levels of abstraction and detail51. In the digital world, redundancy is not a drawback but an advantage52. Semantically rich representations and basic descriptions should complement each other, and data should be transformed and presented in multiple formats, enabling multimodal access to a multiplicity of representations and perspectives53.

Computation

18Symbolic formal languages also serve as the foundation of machines’ computational abilities and reasoning power. As mentioned above, the SW provides tools to express and model not only data and information (or metadata) but also the knowledge associated with a given domain. This capability enables the construction of an inference apparatus to draw deductions (and other forms of inference) from knowledge graphs using ontological characteristics and logical rules. By formally capturing our knowledge and the logical rules that govern reasoning within a domain, we can apply logical and mathematical calculus, as well as various computational methods, to our data. In doing so, the SW bridges a concrete, descriptive approach with a general, normative one.

19When data is directly accessible to machines and therefore machine-processable, a wide range of procedures can be automated: transforming and processing data (thereby reducing manual errors), validating data quality and consistency, generating complex analytics and visualizations, and performing machine reasoning. For example, missing information can be deduced and added based on ontologies and rules through a process known as forward chaining.
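
The following Python sketch shows forward chaining in miniature, with an invented vocabulary loosely imitating rdf:type and rdfs:subClassOf: two generic rules are applied repeatedly until no new statement can be derived, materializing information that was previously only implicit.

    # Forward chaining: apply rules until a fixpoint is reached.
    triples = {
        ("ex:Manuscript", "subClassOf", "ex:Document"),
        ("ex:Document",   "subClassOf", "ex:InformationObject"),
        ("ex:Codex42",    "type",       "ex:Manuscript"),
    }

    def forward_chain(kb):
        kb = set(kb)
        while True:
            derived = set()
            for (a, p, b) in kb:
                for (c, q, d) in kb:
                    if b == c and p == q == "subClassOf":
                        derived.add((a, "subClassOf", d))  # transitivity
                    if b == c and p == "type" and q == "subClassOf":
                        derived.add((a, "type", d))        # type inheritance
            if derived <= kb:   # fixpoint: nothing new was deduced
                return kb
            kb |= derived

    for t in sorted(forward_chain(triples) - triples):
        print(t)  # the deduced, previously missing statements

Production reasoners implement the same idea with far better indexing, but the logic is exactly this: deduce, add, repeat.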

  • 54 “You reduce the text to a few elements, and abstract them, and construct a new, artificial object (...)

20In sum, machines support both analysis and synthesis, aiding in the testing and advancement of conjectures and hypotheses. Adopting the digital paradigm and SW technologies facilitates the fulfillment of scientific criteria and requirements such as openness, clarity, explainability, consistency, validity, verification, and reproducibility. These technologies also promote efficient knowledge exchange and collaboration ‒ both among humans and between humans and machines ‒ while supporting intellectual history and humanistic research through content exploration, distant reading, and network analysis54.

The Current Place of SW Technologies in the Humanities

  • 55 The Linked Open Data movement itself appears to adopt a somewhat relaxed approach toward the form (...)

21Despite their advantages and potential, SW technologies are neither widely adopted nor methodologically integrated into humanities research and teaching. There remains a widespread reluctance toward digital experimentation and approximation, with only a minority of humanistic scholars (albeit a growing one) investing in and committing to formal computational methods. Furthermore, even when Digital Humanities projects adopt these technologies, they are sometimes employed with a relaxed and overly simplified approach. This can lead to the implementation of loosely defined, less formal, and less precise semantics, which may be counterproductive and introduce inconsistencies into the global system55.

  • 56 What precisely is new in its latest guise of this problem on the Web of Linked Data is that this (...)

22It is regrettable that genuine interdisciplinary collaborations between humanities scholars and experts in information technology and computer science are not more prevalent56. In general, the latter often show little interest in the nuances and intricacies of humanistic discourses, while the former frequently do not consider it important to understand how machines function. This disconnect has resulted in confusion and uncertainty about how best to advance Digital Humanities effectively.

  • 57 An unwillingness to admit the possibility that mankind can have any rivals in intellectual power (...)
  • 58 “The essential argument against formal specification is their difficulty. Formal specifications, (...)

23The use of machines for tasks traditionally considered the exclusive domain of human intelligence has always been met with skepticism and resistance (even mathematicians initially failed to recognize their value)57. Today, two primary objections persist in the humanities regarding the implementation of computational methods in general and SW technologies in particular: first, they are perceived as neither suitable nor useful; and second, they are considered overly complex and difficult to understand58.

Suitability of Computational Methods for the Humanities

  • 59 “Why is computer so important to mankind? We might have felt at one time that calculating would m (...)
  • 60 “Another argument that one used to hear against formal specifications was that they only applied (...)
  • 61 “A variant of Lady Lovelace's objection states that a machine can 'never do anything really new'” (...)

24The first objection asserts that machines and computer-based research are unnecessary and unsuited to the humanities, claiming that traditional methods, which make data understandable only to humans, are sufficient. The main argument is that humanistic objects and discourses (unlike those in the natural, economic, and social sciences) defy digital analysis and resist quantification and mathematization59. Formal methods are often dismissed as empty formalisms and arid schemes, devoid of meaningful impressions60. Humanistic topics are viewed as lying beyond the limits of formal logic, which is deemed incapable of capturing their qualitative nuances. This skepticism is further reinforced by the claim that deductive inference is non-ampliative (i.e., it adds nothing new), as conclusions are already implicitly contained within their premises61.

  • 62 “All this work helped to show many people in a variety of scientific disciplines the tremendous b (...)

25I firmly believe that machines and computational methods are both useful and well-suited to the humanities62. The benefits previously discussed in relation to communication clearly extend to the communication and sharing of humanities research data. Nonetheless, it is not always clear that the computational benefits are equally applicable.

  • 63 It is often thought that the scientific and humanistic cultures are opposed in their methods and (...)
  • 64 “Although it is certainly true that the computer can solve very many problems in areas that can b (...)
  • 65 Cf. Alexandre D. Aleksandrov, Andreï N. Komogorov & Mikhaïl A. Lavrent’ev (1974), Le matematiche. (...)
  • 66 “Language and number serve as instrumental aids to the processes of reasoning”, George Boole (185 (...)

26It is a mistake to assume that the capabilities of computable numbers and digital machines cannot be applied to research in the humanities63. First, numbers are not merely numbers; they are informational entities and abstract symbols that, through computing machines, can be applied to and used to represent anything64. Numbers themselves are symbols that have been gradually refined through an increasing degree of abstraction: initially tied to concrete objects, they evolved into the concept of abstract numbers and, eventually, to the idea of numbers in general, encompassing any and all possible numbers. The sequence of natural numbers leads to another powerful abstraction: the notion that numbers can extend beyond all limits, bringing the concept of infinity into mathematics65. Millennia of mathematical development have greatly abstracted the concept of number beyond its purely quantitative meaning, allowing it to represent complex relationships. Natural numbers are deeply connected to our symbolic and linguistic capabilities, and the entire power of mathematics stems from the ability to represent numbers using arbitrary symbols66:

  • 67 Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 79.

How did Homo sapiens alone ever move beyond approximation? The uniquely human ability to devise symbolic numeration systems was probably the most crucial factor. […] Linguistic symbols parse the world into discrete categories. Hence, they allow us to refer to precise numbers and to separate them categorically from their closest neighbors. [...] It is this transition from an approximate to a symbolic representation of numbers [...] a transition that occurs both in cultural history and in the mind of any child who acquires the language of numbers67.

  • 68 “Logic is conversant with two kinds of relations, – relations among things, and relations among f (...)
  • 69 “The terms narrow and weak are used to contrast with strong, human-level, general, or full-blown (...)
  • 70 “That is the fact that interhuman communication is far less rigidly constrained than human-machin (...)
  • 71 “To our minds it is clearest when several steps are telescoped together, to form one single sente (...)
  • 72 “Jumping to conclusions on the basis of limited evidence is so important to an understanding of i (...)
  • 73 “Logics and statistics should be primarily, although not exclusively, viewed as the basic tools o (...)
  • 74 “Deductive rules are non-ampliative, but that does not mean that they play no useful role in know (...)

27Secondly, computers are not only grounded in numbers but, even more fundamentally, in logic: they are not “merely” digital computing machines but also logical machines. Their origins lie in the interplay between mathematics and logic, mediated by philosophy and shaped by ontological commitments68. While machines do not possess the general intelligence of humans, they can still be extremely useful, particularly in areas where humans are less efficient69. Natural languages are full of ambiguities, with words carrying multiple meanings that also fluctuate over time70. Human intuitions are frequently flawed and biased, and we are champions at taking shortcuts71 and jumping to conclusions72. By contrast, machines excel at the faultless and tireless execution of long series of logical steps, flawlessly processing vast amounts of data in an integrated manner. The combination of logical and statistical methods enables machines to serve as a fundamental tool in heuristic investigations73. Furthermore, although the conclusion of a deductive argument is already contained within its premises, it may not be immediately evident or obvious to everyone; in this weaker sense, deductive rules too can be seen as ampliative, since they make the conclusion explicitly visible74.

  • 75 “The necessity for using the intuition is then greatly reduced by setting down formal rules for c (...)

28It is valuable to combine human intuition and common sense with machine rigor and patience: digital computational dynamics should support the human interpretative process75. Rather than aiming to create entirely autonomous AI, it would be more beneficial to foster a dialogue between humans and machines, establishing a virtuous feedback loop in which both parties are engaged and aligned. This collaborative model is the essence of the human-in-the-loop (HITL) design principle, which contrasts with a completely autonomous system that does not rely on human supervision (human-out-of-the-loop). Under the HITL approach, the machine should be capable of acting and making decisions independently, yet it can also consult and seek guidance from a human, who, in turn, supervises and intervenes as needed to assist the machine.
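
A minimal sketch of the HITL pattern in Python (the names and the confidence threshold are illustrative assumptions, not a prescribed design): the machine decides on its own when it is confident and defers to a human supervisor otherwise.

    # Human-in-the-loop in miniature: autonomous when confident,
    # deferring to a human otherwise. All names are illustrative.
    THRESHOLD = 0.85

    def classify(item):
        """Stand-in for any model that returns (label, confidence)."""
        return ("manuscript", 0.62)  # deliberately low for the demo

    def label(item):
        suggestion, confidence = classify(item)
        if confidence >= THRESHOLD:
            return suggestion            # the machine acts alone
        answer = input(f"Suggested '{suggestion}' for {item}; confirm or correct: ")
        return answer or suggestion      # the human supervises and corrects

    print(label("folio_17r.jpg"))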

  • 76 “Structural complexity that certainly resists analysis, but does not shirk from it”, Umberto Eco (1975) (...)
  • 77 “The patterns found can consist of a regularity (often with exceptions) but they can also consist (...)
  • 78 “Since Panini specifies a clear procedure for his grammar, which he expresses as a system of rule (...)
  • 79 “Abstraction, in some form, underlies all of our concepts [...] Abstraction is closely linked to (...)
  • 80 “As the cognitive scientist Robert French phrased it, abstraction and analogy are all about perce (...)

29Humanistic discourse exhibits a complexity that “certainly resists analysis, but does not shirk from it76”. In both the humanities and the natural sciences, there is a quest for regularities, proportions, analogies, hidden similarities, underlying patterns and structures, and, ultimately, laws77. Moreover, as Bod illustrated, procedural systems of rules are also present in the humanities and hold great importance there78. Abstraction and generalization are also essential aspects of humanistic research79: concepts emerge through abstraction, analysis, and the generalization of particular cases. Identifying the relevant and invariant aspects while setting aside unimportant details allows us to uncover regularities, recurring common patterns, and essential structures, making these visible and comprehensible80.

  • 81 A cultural unit cannot, however, be identified solely through the series of its own (...)
  • 82 “The Infinity Principle: To shed light on any continuous shape, object, motion, process, or pheno (...)
  • 83 “Thus, calculus proceeds in two phases: cutting and rebuilding. In mathematical terms, the cuttin (...)

30In the humanities, too, knowledge is acquired through a divide et impera strategy: a progressive refinement of successive, increasingly detailed, and deeper distinctions. The continuum we perceive (whether of reality, a discourse, or an artwork) is segmented, resulting in the construction of analytical structures (cultural and semantic units) that are labeled and referenced. Continuous information blocks are differentiated into discrete representations, which are then organized into a structure, a systematic organization of the material continuum81. The transition from the continuous and concrete to the discrete and abstract is a fundamental aspect of all knowledge82. The more effectively we parse the world into discrete objects and abstractly identify these entities, the better machines (with their ability to calculate and manipulate discrete elements) can support research. However, this represents only one half of the knowledge process. The logic of division must be complemented by a logic of recomposition. The next step is to recombine the elements (the semantic units) in a multitude of ways, creating a foundation upon which further knowledge can be built. This is precisely what occurs with digital computation (and mathematical calculus83). The universal languages of numbers and logic enable machines to function as numerical synthesis mechanisms, facilitating the powerful analysis of analog (continuous) realities within an abstract representation space. Anything codified with symbols can be manipulated and recombined in numerous ways through formal rules and operations. The digital machine, with its analytical and combinatorial power, enables the construction of new, manipulable informational objects.

Complexity of Computational Methods

31The second objection is more understandable and relevant: digital methods and SW technologies are undeniably complex and challenging to learn and use. However, those who avoid tackling complexity often end up creating complicated (or even chaotic) solutions. Adopting the digital paradigm presents many demanding challenges. There is no doubt that the effort required, both in terms of conceptual design and practical implementation, is substantial and necessitates continuous commitment, as well as a clear strategy and effective management. For larger projects, it also demands extensive discussions and consensus-based decision-making.

  • 84 In reality, in natural languages cultural units are rarely formally univocal entities and (...)

32It must also be acknowledged that reducing data to pure numerical information requires complex theories to explain how this can be achieved. Symbolic representations are more easily produced than numerical approximations, but a domain must first be sufficiently organized through pre-formal analysis in order to be suitable for symbolic and formal operations. In natural languages, cultural units are rarely formally unambiguous entities. They are often fuzzy concepts: concepts with vague boundaries that are subject to gradation84.

  • 85 “Formal logic can only take account of relations which are formally expressed (VI.16); and it may (...)
  • 86 On the contrary, in an ecdotic procedure shaped by computing, the transcription from a manuscript (...)
  • 87 “The use of a modern computing machine is based on the user's ability to develop and formulate th (...)

33The challenge is one of translation: machines require all data and processes to be explicitly, unambiguously, rigorously, and completely formulated, with no steps omitted85. On the one hand, we must build knowledge models to encode data into symbolic formal representations, enabling machines to access and manipulate it meaningfully86. On the other hand, we must also provide well-defined and robust rules of transformation and inference to apply to the data87.

  • 88 “Representations try to capture objects in their entirety and can be further transformed into pub (...)

34It seems that a certain misconception about the role of the digital in the humanities still persists. The prevailing view holds that digitization is primarily or exclusively aimed at enhancing human usability. The focus is placed on the presentation of data in narrative or graphical terms for human interpretation, rather than on representation in formal form for machine processing ‒ in other words, the emphasis is on “mere” digitization rather than formalization88. Consequently, much of the meaning conveyed by humanist scholars in digital data remains accessible only to other humans in plain text, without a structured form that is both machine-readable and functional for data processing. Within this limited perspective, machines are seen as convenient tools ‒ or sometimes as burdens ‒ but ultimately as mere mediums or channels for human communication.

  • 89 “[There are] two characteristic moments in the impact of computing on the world: 1) in a first (...)
  • 90 In the same way as developers strive to build user-friendly digital tools and solutions.
  • 91 “In fact, the computing machine that reads (so to speak) closes the circle of the use of (...)
  • 92 “So it is important that the high-level program, while comfortable for the human, still should be (...)

35Conversely, I maintain that it is important to also consider the machine as a target user, whose requirements must be acknowledged and taken into account89. In other words, we must be machine-friendly90 and adopt its characteristic linguistic symbolic codes to enable the machine to make the most of our data and knowledge91. While enhancing human usability is important, we must also ensure that the formal expressive power of the system is not sacrificed for the sake of human convenience92. To achieve optimal outcomes when working with a machine, it is essential to place it at the center; benefits for humans will naturally follow. Indeed, when information is encoded in digital form, it is important to ensure that the machine can capture as much of its meaning as possible to correctly interpret and use it. If meaning is not lost in translation, the machine can take care of translating it back for an infinite variety of human-targeted uses.

  • 93 “Who seeks for methods without having a definite problem in mind seeks for the most part in vain” (...)
  • 94 Richard P. Feynman (1996), Lectures on Computation, ed. by Anthony J.G. Hey & Robin W. Allen, Bos (...)
  • 95 To weaken the demand for exactitude in order to produce a simpler and more adequate simulation: this (...)

36Another important point is that, before attempting to obtain answers from the machine, it is essential to precisely define the question to be posed or the problem to be solved, as well as to determine the most effective way to translate them for the machine93: “converting questions to effective procedures is pretty much equivalent to getting them into a form whereby computers can handle them94.” The main challenge lies in ensuring that the machine fully understands what we want. Hasty application of formal analysis without sufficiently preparing the subject can lead to errors, so caution is necessary when automatically deriving conclusions. Furthermore, to some extent, researchers must also accept “weakening the demand for exactitude to produce a simpler and more adequate simulation: this is a kind of paradox as well as a tacit and decisive cunning of the science of calculation95.”

  • 96 “Evans identified two central questions: how to represent the line figures and how to define the tra (...)
  • 97 “The chief practical difficulty of this inquiry will consist, not in the application of the metho (...)

37The conceptualization and representation of a problem, along with the detailed knowledge required for the machine to process and solve it, is no trivial task96. The challenge lies in encoding the data and programming the subsequent computation, both of which are as much a science as they are an art97. While this requires significant effort, it is well worth it: if we succeed in constructing a symbolic formal system that is isomorphic to the portion (and level) of reality we aim to describe, we can fully leverage the computational power of the machine.

  • 98 “This was done in part by thinking things through logically, but also perhaps more importantly by (...)

38It is essential to recognize the necessity of a close collaboration between experts in information theory and computer science and domain specialists. Computer scientists can assist with the translation and formalization of data and knowledge, as well as the automation of data processes and procedures for addressing problems. Moreover, such collaboration is a fruitful avenue for advancing both fields. Computer science is not only concerned with problem-solving but also with problem discovery. Programmers are constantly seeking problems to test and refine their algorithms98. The humanities represent a particularly intriguing application area for computer science, as they pose critical challenges in representation and modeling: data is often incomplete, ambiguous, or contradictory, and there are frequently multiple conflicting interpretations.

  • 99 “Digital computers are able to answer most – but not all – questions stated in finite, unambiguou (...)

39To realize the vision of Digital Humanities, it is crucial to avoid a divide between technical and scholarly components. Instead, a genuine effort must be made to foster true collaboration by integrating and embracing digital principles and challenges within domain-specific areas. This requires a virtuous co-design process involving humanist scholars, computer scientists and engineers99.

Back to Turing and von Neumann!100

  • 100 “Back to Leibniz!” is the title of a 1932 article by Norbert Wiener, a major figure of cybernetic (...)
  • 101 “The machine's processes are mosaics of very simple standard parts, but the designs can be of gre (...)

40I believe the two objections we have mentioned reveal a deeper misunderstanding of what a machine is, how it works, and what it can and cannot do. A machine is a complex system designed to manage and interact with other complex systems. Yet, surprisingly, it is entirely built upon very simple components, combined according to a precise logic and based on a few fundamental ideas and principles. Complexity arises from the infinite ways these parts interact and organize themselves, enabling a universality of application101.

  • 102 “Doing digital preservation requires a foundational understanding of the structure and the nature (...)

41To realize the full potential of a technology, it is crucial to clearly understand its capabilities and limitations. The goal of the remainder of this paper is to provide an introduction to the theoretical foundations, underlying principles, characteristics, possibilities, and limits of, first, digital computing machines (Section 2) and, second, SW technologies (Section 3). The aim is to raise at least awareness, and ideally an understanding, of their power and potential102.

42We often forget the teachings of past masters, yet they remain highly relevant and have the important merit of being expressed in simpler terms, unclouded by technical jargon or highly specialized discussions. The initial formulation of a theory is typically more accessible than its later, more sophisticated developments, and it sheds light on the theory’s objectives and significance. For this reason, in Section 2, we will revisit the initial motivations and fundamental principles of digital computing machines.

The Humanities in the Digital: Benefits of a Humanistic Perspective

43To conclude this first section, allow me to emphasize the urgent need for humanist researchers to take a more active role in constructing and shaping the digital universe.

  • 103 “It seems that the data is the place where the editorial content is stored, where the editorial p (...)
  • 104 Researcher should also be aware of the importance of copyleft: the practice of granting the right (...)

44First, scholars should take greater responsibility for the digital and formal representation of their research data and models103. Embracing the principle that “sharing is caring,” they should be absolutely clear and transparent about reuse possibilities, release their content using open standards and under open licenses104 or place it directly in the public domain.

  • 105 “Arrangement and description is the process by which collections are made discoverable, intelligi (...)
  • 106 Also for the humanities, « the Semantic Web » represents « the future », and computer science (...)

45Scholars should also pay close attention to the form of their content. The process of arranging and describing research collections according to modern digital archival principles should begin with the researchers themselves105. In Digital Humanities and data science projects, as well as in digital research infrastructures that integrate different research sources and datasets, an enormous amount of work and resources is devoted to cleaning and preparing data before any transformation or analysis can take place. It is in the researcher’s interest to ensure that these manipulations align with and do not alter the meaning of their data. Scholars should take responsibility for and actively participate in the development of appropriate symbolic representations for their research data and problems, as they alone can identify the critical logical relationships and structures within their domains106.

  • 107 “This separation of ethical questions away from the technical reflects a wider problem in the fie (...)
  • 108 If ethics is the logic of right acting, logic is the ethics of thinking. [...] To speak with (...)
  • 109 “We should be afraid. Not of intelligent machines. But of machines making decisions that they do (...)
  • 110 Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali, p. 115.
  • 111 “The philosopher Achille Mbembé [...] writes: « It is about extraction, capture, the cult of data (...)

46Finally, humanist scholars have an ethical duty to contribute constructively to the development of AI in order to counter the AI industry’s ethics washing and competition107. The issue of AI now concerns everyone in society, and scholars in the humanities must be actively involved in current discussions, addressing the challenges and problems posed by machines and AI, and advocating for the responsible use of language and data. “If ethics is the logic of right acting, logic is the ethics of thinking108.” Humanists should participate in the regulation of AI, contributing their expertise to help prevent undesirable drifts in the field109. Just as we should apply the power of digital machines to our research problems, we should also apply the power of our disciplines to AI. As Ippoliti points out, we must recognize that the notion of the algorithm itself carries social power, a power that must be carefully managed and constrained110. Ethical and philosophical reflection is essential for the responsible development of AI111. Humanists thus have an important role to play in these matters and should not hold back from engaging.

The Machine: Its Origins and Fundamental Ideas

“The title « On Computable Numbers » (rather than « On Computable Functions ») signaled a fundamental shift. Before Turing, things were done to numbers. After Turing, numbers began doing things.”
(George Dyson, Turing’s Cathedral, p. 250)

  • 112 Cf. “The integration of linguistics and logic was one of the major attainments of twentieth-centu (...)

47The history of digital computing machines dates back quite far and has developed within a theoretical framework where many disciplines met, creating intricate interplays: philosophy, logic, mathematics, linguistics, semiotics, philology, statistics, cognitive science and engineering112. One way to perceive and untangle these connections, along with the merging of perspectives and objectives, is to acknowledge the fundamental ideas and logical concepts underlying our modern computer. What is a digital computing machine, and how does it work and “think”?

Computability: Numbers and Logic. The First Dreamers

  • 113 “If I were to choose a patron saint for cybernetics out of the history of science, I should have to (...)
  • 114 “Leibniz referred to such a system of characters as a characteristic. Unlike the alphabetic symbo (...)
  • 115 Two centuries ago, Leibniz set out the project of creating a universal writing, in which (...)
  • 116 “Fascinated by the Aristotelian division of concepts into fixed « categories, » Leibniz thought (...)
  • 117 “And so there is put forward here a certain new and marvelous calculus, which in all our reasonings (...)

48The first to dream of mechanical, automatically reasoning machines and to foresee the enormous potential of binary digital computing was Gottfried Wilhelm Leibniz113. Leibniz was the first to envision giving reasoning and logic the form of a mathematical calculus, expressed in a universal symbolic language (lingua characteristica universalis114) built upon an alphabet of 0s and 1s115. His idea was to view reasoning as a form of logical deductive computation, performed with symbols for concepts instead of numbers116. This symbolic language enables the construction of a calculus ratiocinator: a logical calculus with rules that express the logical relations among concepts. Leibniz also realized that these rules could be mechanized, allowing a machine to automatically perform the calculations. To solve a reasoning problem, one would simply say, Calculemus!117

  • 118 “In 1666 Leibniz wrote the work De arte combinatoria, which was considered to be a continuation o (...)
  • 119 “the rules of deduction could then be reduced to manipulations of these symbols, via what Leibniz (...)

49Leibniz understood that automating reasoning required an exact logic, which could only be expressed in an unambiguous language: an ideographic writing system with a graphic symbol for each primitive concept118. Beginning with Leibniz, subsequent pioneers of symbolic AI embraced the idea of thinking as computation, emphasizing the crucial importance of symbolic notation for calculus and the interpretation of logical inferences as mechanical manipulations of symbols119.

  • 120 “These mathematical tables were calculated by hand, and the mistakes were simply the result of hu (...)

50Then came Charles Babbage, who, irritated by the numerous errors in the hand-calculated mathematical tables of his time, began dreaming of a reliable and flexible computing machine. He started designing (but never completed) the Difference Engine No. 1 and No. 2, and, more significantly, the Analytical Engine: a programmable machine equipped with memory, a calculation unit, and programs written on punched cards120. Had it been completed, the Analytical Engine could have been the first general purpose (or all-purpose or universal) automatic calculating machine.

51Both Leibniz and Babbage were far ahead of their times, lacking the technologies needed to realize their dreams. It was not until the 1930s and 1940s that the old dream began to take shape, thanks to the work of three mathematicians and logicians. During this brief period, Alan Turing conceptualized his abstract, theoretical machine and mathematically proved that the old dream was indeed possible; Kurt Gödel uncovered its limitations ‒ specifically, the limitations of any formal system rich enough to express number theory; and John von Neumann translated Turing’s abstract concept into a practical, concrete physical realization, defining its logical design and architecture. But before we get to that, there are still a few important events that we need to mention.

Boolean Algebra

  • 121 “There is not only a close analogy between the operations of the mind in general reasoning and it (...)
  • 122 “The design of the following treatise is to investigate the fundamental laws of those operations (...)

52After Leibniz, the close nexus between mathematics and logic, language, and computation was further explored by George Boole. Boole’s groundbreaking idea was to treat logic (the laws of thought) as a form of algebra, now known as Boolean algebra121. He recognized that algebra could be used to mimic the laws and processes of the human mind, thereby encoding human reasoning and bringing it under the framework of mathematical laws122.

  • 123 Alexandre D. Aleksandrov, Andreï N. Komogorov & Mikhaïl A. Lavrent’ev (1974), Le matematiche, p. (...)
  • 124 The term “algorithm” was also derived from his name.

53If arithmetic is the science of numbers, algebra is the science of symbols. The transition from arithmetic to algebra was made possible through the inclusion of letters in mathematical formulas to handle arbitrary numbers. Algebra is essentially the study of arithmetic operations considered from an abstract and general perspective, dissociated from concrete numbers. It was the search for general solutions that gave rise to the literal symbolism of algebra123. The term “algebra” originates from the title of the book by Muḥammad ibn Mūsā al-Khwārizmī124, Kitāb al-Jabr wa-l-Muqābala (The Compendious Book on Calculation by Completion and Balancing), which dealt with solving linear and quadratic equations. Algebra is characterized, first and foremost, by its method: the use of letters and literal expressions, transformed according to well-defined rules. As a science of formal operations, algebra focuses on the formulation of general rules governing the transformation of symbolic expressions and the solution of equations. In an algebraic equation, symbols can be moved from one side of the equation to the other (changing their sign) or eliminated on both sides, following formal rules. The meaning of the letters is irrelevant; what matters are the rules for operating on these symbols.

  • 125 George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematic (...)

54In mathematical algebra, letters represent ordinary numbers, and the rules for transforming them are those of arithmetic. In contrast, Boolean algebra is an algebra of things and propositions. For Boole, reasoning consists primarily of two types of propositions: those expressing relations between things (primary propositions) and those expressing relations between propositions (secondary propositions). To construct his algebra of logic and represent logical relations and propositions in a symbolic notation and algebraic form, Boole needed to define rules (or laws) for manipulating logical concepts, analogous to the algebraic rules for manipulating numbers. Boole observed that “there is to a considerable extent an exact agreement in the laws by which the two classes of operations are conducted125.”

  • 126 “As the combination of two literal symbols in the form xy expresses the whole of that class of ob (...)
  • 127 “We have seen (II.9) that the symbols of Logic are subject to the special law, x2 = x. Now of the (...)
  • 128 George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematic (...)

55The correspondence between algebra and logic is nearly perfect. The one crucial difference is that in logic, but not in ordinary mathematics, a fundamental special law holds: x·x = x or, equivalently, x² = x126. Remarkably, this law is also valid in mathematics, but only in two specific cases: when x is either 0 or 1127! En passant, this fundamental law also underpins the principle of non-contradiction: “what has been commonly regarded as the fundamental axiom of metaphysics is but the consequence of a law of thought, mathematical in its form128.”
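
The special status of 0 and 1 can be checked directly. A one-line derivation (a standard algebraic manipulation, added here for illustration, not Boole’s own notation) shows that, in ordinary algebra, the idempotence law forces exactly these two values:

    x² = x  ⟹  x² − x = 0  ⟹  x(x − 1) = 0  ⟹  x = 0 or x = 1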

  • 129 George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematic (...)
  • 130 “We might undoubtedly have established the theory of Primary Propositions upon the simple notion (...)

56Boole also needed to provide an interpretation for the letters used as variables. These symbols represented classes, that is, groups or collections of things, which are now referred to as sets. From this set‑theoretic perspective, the symbol 0 represented Nothing, while 1 represented the Universe129. Thus, 1−x would represent the class of things that are not x, so that x+(1−x)=1130.

  • 131 Cf. “But nobody had shown with Shannon’s clarity and rigor that electrical engineers could use al (...)

57With Boole’s algebra of logic, logic became even more symbolic and formal. Boole focused his work on the calculus ratiocinator envisioned by Leibniz: Boolean algebra is a calculus (a system of rules for calculation) in which the abstract symbolic language serves as an instrument for this calculus of reasoning. Today, Boolean algebra is widely applied in computer science, particularly in the design of logic gates and complex electrical circuits131.

Mathematical Logic

  • 132 “Mathematical logic, also called « logistic », « symbolic logic », the « algebra of logic », and, (...)
  • 133 “Is there a limit to what we can, in principle, compute? [...] Ironically, it turns out that all (...)

58The union of mathematics and logic, mediated by philosophy, was not only the driving force behind the development of calculating machines but also led to the creation of a new branch of mathematics: mathematical logic132. This field gained prominence during the foundational crisis of mathematics, an intense debate over the nature and foundations of mathematics that began at the turn of the twentieth century133.

  • 134 Paolo Zellini (2022), Discreto e continuo, p. 67.

59After George Cantor founded set theory, a new and broad theoretical landscape opened up, though it brought with it paradoxes rooted in recursive and self-referential reasoning. Cantor proved the existence of two distinct types of infinities: the infinity of natural numbers and the larger infinity of real numbers. Cantor’s argument is based on two key concepts: cardinality and countability. Natural numbers can be used in two ways: as cardinals (one, two, three), i.e., for counting; and as ordinals (first, second, third), i.e., for ranking. A cardinal number (or cardinality) represents the number of elements in a set: a set with three elements has cardinality 3. Two sets have the same cardinality (i.e., they are equinumerous) if they contain the same number of elements; they can then be placed into a bi-univocal correspondence, or one-to-one matching. Something is countable (or numerable) if it can be put into bi-univocal correspondence with the set of natural numbers134. Cantor demonstrated that it is impossible to establish such a correspondence between the discrete set of natural numbers and the continuous set of real numbers: real numbers are uncountable, or more than countable. Natural numbers and real numbers thus have different cardinalities: the (transfinite) cardinal number of the set of natural numbers is denoted by ℵ0 (aleph-null), while the cardinality of the set of real numbers is denoted by C (for continuum). Cantor’s continuum hypothesis states that there are no intermediate cardinal numbers between these two cardinalities.

  • 135 Martin Davis (2018), The Universal Computer, p. 57-59. “The essence of the diagonal method is the (...)

60Cantor’s demonstration, which allowed him to compare two infinite sets, is based on a brilliant trick of his own invention: the diagonal method135. This same method would later be used by both Gödel and Turing in their proofs of the incompleteness and undecidability of formal mathematical systems. The trick relies on the fact that it is possible to determine whether the set of natural numbers and the set of real numbers have the same cardinality without knowing their exact cardinal numbers or counting their elements, simply by attempting to establish a one-to-one correspondence between the elements of the two sets. It is not necessary to complete this matching process ‒ which would be impossible ‒ to show that a new real number, not included in the matching list, can always be constructed. By taking the first digit of the first number and changing it, the second digit of the second number and changing it, and so on ‒ proceeding diagonally through the matrix ‒ a new real number is generated that differs from every listed number in at least one digit!
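
Because the diagonal construction is itself an effective recipe, a few lines of code can mimic it against any claimed enumeration. The following sketch is our own illustration (the names and the choice of replacement digits are arbitrary); each listed real in [0, 1] is modeled as a function from digit positions to decimal digits:

    def diagonal_number(listed_reals):
        # Given an (attempted) enumeration of reals in [0, 1], each represented
        # as a function from digit positions to decimal digits, build a new real
        # whose n-th digit differs from the n-th digit of the n-th listed real.
        def new_digit(n):
            d = listed_reals(n)(n)         # the n-th digit of the n-th listed number
            return 5 if d != 5 else 6      # change it to something else
        return new_digit

    # A toy "enumeration": the n-th real has every digit equal to n % 10.
    toy_list = lambda n: (lambda k: n % 10)
    anti = diagonal_number(toy_list)
    print([anti(n) for n in range(8)])     # [5, 5, 5, 5, 5, 6, 5, 5]

Whatever enumeration is supplied, the constructed number disagrees with the n-th listed real at its n-th digit, so it can appear nowhere in the list.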

  • 136 In contrast to propositional logic, predicate logic allows for the analysis of the internal struc (...)
  • 137 In first-order logic quantification is restricted to individuals and doesn’t apply to sets of ind (...)
  • 138 Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for (...)

61After Cantor came Frege, who is regarded as the founder of modern logic and who created the first formal system for predicate logic136, along with a comprehensive theory of quantification. This system corresponds, more or less, to first-order logic137. Pursuing an explicit, unambiguous, and rigorous formulation, Frege followed Leibniz’s vision of a universal symbolic language and devised a new language with formal expressive power: his Begriffsschrift, an ideography or concept writing, described by Frege as «a formula language, modeled upon that of arithmetic, for pure thought». This language focuses solely on conceptual content: “a system of notation directly appropriate to objects themselves138.”

  • 139 “The most immediate point of contact between my formula language and that of arithmetic is the wa (...)
  • 140 Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for (...)

62Frege’s goal was to construct a logical system whose axioms and syntactic rules would capture the nature and properties of numbers in purely formal terms. In such a system, derivations are carried out exclusively according to the form of the expressions: symbols are manipulated based on formal rules (‘rules for the use of our signs’) rather than their meaning139. The symbolic notation enables one to perceive the structure of a formula and facilitates symbol manipulation. For Frege, the use of letters (variables) confers generality on theorems: “I therefore divide all signs that I use into those by which we may understand different objects and those that have a completely determinate meaning. The former are letters, and they will serve chiefly to express generality. But no matter how indeterminate the meaning of a letter, we must insist that throughout a given context the letter retain the meaning once given to it140.”

  • 141 “Thus it is that the great mediaeval controversy over universals has flared up anew in the modern (...)

63This approach, called logicism, was central to Frege’s program, which aimed to demonstrate that mathematics could be grounded in logic. Quine later associated logicism with realism, the idea that abstract entities, such as mathematical objects, have objective existence independent of the human mind141. It must also be noted that Frege’s perspective was the reverse of Boole’s: Boole sought to reduce reasoning and logic to algebraic symbolism and formal calculus, effectively making logic a branch of mathematics. Frege, on the other hand, believed that logic should serve as the foundation of all reasoning, including mathematical reasoning. For Frege, mathematics was a part of logic.

  • 142 Cf. <https://plato.stanford.edu/entries/frege-theorem/#S2>.

64However, Frege’s system was inconsistent, and the inconsistency arose from his use of Cantor’s set theory to define numbers. Frege introduced the notion of the extension of a concept (Begriffsumfang), assuming that any property or concept could be associated with its extension: the set of elements that possess that property or fall under that concept. Like Cantor, Frege viewed a number as a property of a set (its cardinality) and thus also as its extension. A specific number could then be defined as the collection of all sets that can be placed in bi-univocal correspondence with one another. For example, the number two is defined as the collection of all sets containing exactly two objects. In Frege’s system, a concept is a set associated with the elements it contains (the set’s extension)142.

  • 143 In particular, this was achieved by introducing a “theory of types” to prevent circular assertion (...)
  • 144 “The present work has two main objects. One of these, the proof that all pure mathematics deals e (...)

65Bertrand Russell identified an inconsistency in reasoning about sets of sets and formulated the famous paradox named after him, which drove Frege to despair: is the set of all sets that are not members of themselves a member of itself? To eliminate this and other paradoxes143, Russell, together with Alfred North Whitehead, published Principia Mathematica (1910‑1913). In this work, they also attempted ‒ but ultimately failed ‒ to reduce classical mathematics to, and deduce it from, a system of symbolic logic144. Building on Frege’s logicist program, they adopted a clearer symbolic notation and simplified the formalization of arithmetic using the axioms defined by Giuseppe Peano.

Hilbert’s Program

  • 145 The idea of formalism was to separate mathematical concepts from their meaning and view them as a (...)
  • 146 The axiomatization of a theory is the formulation of its fundamental properties and truths from w (...)
  • 147 “It was really Hilbert's stroke of genius to understand that formalization is the proper techniqu (...)

66Another mathematician, David Hilbert, sought to establish the foundations of mathematics, but with a different strategy, using a formalized system of axioms. His program is known as formalism145. At various international mathematical congresses, Hilbert posed several open problems and challenged his fellow mathematicians to solve them. In particular, Hilbert aimed to create a unified axiomatic formal system encompassing all fundamental, valid concepts and principles of mathematics. Such a system would express these principles as a finite set of axioms146 for natural numbers (like Peano’s arithmetic) and enable the derivation of all deducible mathematical truths through a finite series of logical steps, defined by a finite set of formal rules (such as those in Frege’s system or Principia Mathematica)147.

  • 148 Cf. Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico del (...)
  • 149 “7.71. « Complete system in a wide sense » for: « axiomatic system which contains all the true sent (...)
  • 150 There is a subtle but crucial distinction between systems that are externally correct, meaning th (...)
  • 151 7.61. « Non-contradictory system » for: « axiomatic system whose rules of deduction do not allow (...)
  • 152 “By the Entscheidungsproblem of a system of symbolic logic is here understood the problem to find (...)
  • 153 Trying all possibilities or combinations only works if the solution exists, as it will eventually (...)
  • 154 Propositional logic, for example, is decidable because a method exists ‒ the truth-table ‒ that c (...)

67To serve as the foundation of mathematics, this formal system had to be coherent, complete, and decidable148. A system is complete if every true statement (or valid formula or theorem) in the domain (here, number theory) can be derived or deduced from the axioms using only the system’s rules149. It is coherent (consistent and correct150) if no falsehood or contradiction can ever be derived from its axioms and rules151. Finally, it is decidable if there is a method (a procedure or test) that, given any possible statement of the system, can determine whether the statement is provable or not within the system. A statement is undecidable if it is neither provable nor refutable within the system. The question of decidability later became known as the Entscheidungsproblem or decision problem152. Solving the decision problem requires a decision procedure: a reliable and effective method or set of instructions that, in a finite number of steps and a finite amount of time153, can determine whether a statement is valid154.

  • 155 “This conviction of the solvability of every mathematical problem is a powerful incentive to the (...)

68Hilbert was particularly focused on proving the completeness of logic and both the completeness and coherence of mathematical analysis (the theory of real numbers). He was highly optimistic, believing there were no unsolvable problems in mathematics, as captured in his famous motto: Wir müssen wissen. Wir werden wissen (“We must know. We will know”)155. Although Hilbert’s program ultimately failed, his ambitious vision paved the way for research into the concept of computability and the creation of the digital computing machine.

Formal systems

  • 156 “One should guard against confusing axiomatization and formalization. […] Examples of axiomatized (...)
  • 157 “Before leaving the subject of computability, I want to make some remarks about the related topi (...)
  • 158 Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 16.

69A formal system, such as first-order logic, is the rigorous and complete formalization of an axiomatic system. It consists of a formal symbolic language and a deductive apparatus. The language comprises an alphabet ‒ a finite set of signs or symbols (including symbols for variables, constants, logical connectives, and quantifiers, as well as auxiliary symbols) ‒ used to form the language’s terms and sentences156. A formal system includes production (or formation) rules that specify how symbols can be combined, corresponding to the syntax and formal grammar of the language157. An expression is simply a graphical sign or group of graphical signs. These finite strings (or sequences) of symbols may be gibberish (meaningless upon interpretation), or, if they adhere to the system’s production rules, they are well-formed expressions that may be assigned meaning. The set of well-formed expressions constitutes the language of the formal system. A sentence is a well-formed expression that can stand or be asserted independently within the system. If an expression contains variables, these must be replaced with constants for the expression to become a sentence158. The meaning of a sentence corresponds to a (true or false) proposition. Axioms and theorems are both laws of the system, i.e. sentences asserted within the system that represent propositions considered to be true.

  • 159 “Axiomatic System for « the set of expressions falling into two classes such that the elements of (...)
  • 160 Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 27.

70The deductive apparatus of a formal system is a system of calculus that includes a finite and consistent set of axioms along with deductive rules of inference, which indicate how to derive new valid sentences from axioms and other already accepted sentences within the system159. An important principle is that “all axioms and all rules of deduction of the system must be stated explicitly; all other asserted sentences must be deduced explicitly160.” Axioms are assumed to be valid without proof, while theorems must be proved. The derivation (or proof) of a theorem is an explicit, step-by-step description of how the theorem is produced, starting from the axioms and applying the rules of inference.

  • 161 Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 27. “Turing suggests that any puzzl (...)
  • 162 Rules of Transformation, by which these laws can be developed so as to yield still further laws” (...)

71What is essential to understand is that a formal system is an “axiomatic system whose rules concern exclusively the graphical form of expressions and all of whose axioms and rules are explicitly formulated161.” From a general perspective, a formal system consists of strings (sequences of graphical characters and signs) and rules for transforming one string into another. Theorems are strings of symbols that can be derived from other strings of symbols (the axioms) according to the rules of manipulation, which specify how new symbol combinations can be produced from previously obtained combinations162. A formal system is both self-contained ‒ a closed world ‒ and discrete, meaning there are no uncertainties or half-measures: everything is precisely determined.
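
To make this typographical picture concrete, here is a tiny string-rewriting system in Python (the alphabet and rules are invented for illustration, in the spirit of Hofstadter’s toy systems): one axiom, two purely graphical rules, and a breadth-first generation of theorems.

    # A tiny formal system: one axiom and two purely typographical rules.
    AXIOM = 'A'

    def rules(s):
        yield s + 'B'           # Rule 1: any theorem may be extended with 'B'
        if s.endswith('B'):
            yield s + s         # Rule 2: a theorem ending in 'B' may be doubled

    def theorems(max_len):
        # Generate every theorem up to a given length, breadth-first.
        seen, frontier = {AXIOM}, [AXIOM]
        for s in frontier:
            for t in rules(s):
                if len(t) <= max_len and t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return sorted(seen, key=len)

    print(theorems(4))          # e.g. ['A', 'AB', 'ABB', 'ABAB', 'ABBB']

Nothing in the rules refers to what ‘A’ or ‘B’ might mean: derivation is pure manipulation of graphical form.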

The End of Hilbert’s Program

  • 163 Gödel, Kurt (1931), “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter S (...)
  • 164 “Naturally, mathematics can be formalized in many different ways, by choosing different systems (...)

72The results achieved by Kurt Gödel and Alan Turing marked the end of Hilbert’s program. In 1931, Gödel published his two famous incompleteness theorems163, establishing the fundamental limits of any formal axiomatic deductive system sufficiently powerful to express ordinary arithmetic. The first incompleteness theorem states that any164 coherent formal system is necessarily incomplete, meaning it contains truths that cannot be proven within the system, i.e., undecidable statements. The second theorem, which arises from the first, asserts that such a formal system cannot prove its own coherence.

Gödel Proof and Gödel-numbering

  • 165 “The Epimenides paradox is a one-step Strange Loop, like Escher's Print Gallery. But how does it (...)
  • 166 “Is G a TNT-theorem? If so, then it must assert a truth. But what in fact does G assert? Its own (...)
  • 167 “But it cannot be emphasized too strongly that this undecidability is only with respect to provab (...)

73Gödel achieved these results in an ingenious way: constructing a statement within the formal system that is both true and unprovable within the system. This statement, let’s call it G, translates an ancient philosophical paradox rooted in self-reference and negation: the liar paradox165. G refers to itself and asserts “This statement cannot be proven”. Now, is G a theorem expressing a truth? If G is a theorem, it can be proven; but since G asserts precisely that it cannot be proven, the system then proves a falsehood and is therefore flawed. If G cannot be proven, then it is true but unprovable, rendering the system incomplete. In other words, if G is a theorem, it is true, but its truth is that it is not a theorem, and therefore it is false. Conversely, if G is not a theorem, then it is unprovable, but then what it asserts is true! Thus, G is formally undecidable (neither G nor its negation is provable), and the system is incomplete since it misses this truth166. Gödel demonstrated that there exist true propositions (intuitively provable as true from outside the system167) that are expressible but not provable within the system. He thereby proved that no mathematical formal system can be both coherent and complete.

  • 168 All of the following explanation is derived from Piergiorgio Odifreddi (2021), Il dio della logic (...)

74Gödel employed two powerful techniques to construct the self-referential and negative statement G: Cantor’s diagonal method and the so-called Gödel-numbering168. To translate the liar’s paradox (“This sentence is false”) into the language of arithmetic, Gödel began by associating to each arithmetic formula with a single variable (argument) an infinite sequence of truth values. Each truth value in the sequence indicates whether the number corresponding to its position makes the formula true (V) or false (F). For example, the formula “x is even” corresponds to the sequence:

x:            0   1   2   3   …
“x is even”:  V   F   V   F   …

75Gödel then imagined organizing these formulas alphabetically, along with their corresponding truth-value sequences, into a table with infinitely many rows (one for each formula) and infinitely many columns (one for each natural number). Each cell of the table contains the truth value (V or F) indicating whether the number corresponding to the position number of the column satisfies the formula corresponding to the row. The result is a structure like this:

              0   1   2   3   …
formula 1:    V   V   V   F   …
formula 2:    V   F   V   V   …
formula 3:    F   F   V   F   …
⋮
formula n:    V   F   F   V   …
⋮

76Gödel then assumed the existence of an arithmetic formula A(n,x) with two arguments, n and x, corresponding to the entire table: A(n,x) asserts that the formula at position n in the alphabetical order is true for the argument x. The diagonal of the table corresponds to A(x,x), while its negation corresponds to the antidiagonal. This negation is itself a one-argument arithmetic formula and therefore occupies a specific position n in the alphabetical order of one-argument formulas. But then, for this particular n, A(n,x) expresses precisely the fact that the negation of A(x,x) is true, that is, that A(x,x) is false. In particular, A(n,n) expresses the statement that A(n,n) is false. This makes A(n,n) a formal arithmetic version of the liar’s paradox, saying of itself that it is false, thereby rendering the system inconsistent. As a result, the arithmetic formula A cannot exist. In this way, Gödel demonstrated that there is no arithmetic definition of arithmetic truth within a formal language. This result is the so-called theorem on the undefinability of truth in arithmetic, later proved by Alfred Tarski.

  • 169 “If provability in a formal system for arithmetic is definable within the system itself, but trut (...)
  • 170 “The property of a formula, that it is provable, is a purely combinatorial (formal) one, in that (...)

77Summing up, Gödel established an enumeration of arithmetic formulas and assumed that metamathematical statements about their truth could be translated into mathematical statements about the numbers corresponding to these formulas. However, this approach failed for the liar’s paradox. Surprisingly, though, what was not possible for the concept of truth worked for another meta-mathematical concept that Gödel could express in the language of arithmetic: demonstrability (as distinguished from truth169). As he stated, “To the concept « provable formula » corresponds the set of « numbers of provable formulas »170.”

  • 171 Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della f (...)
  • 172 Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della f (...)

78This formed the foundation of Gödel’s original proof of the incompleteness theorem, which «derives incompleteness from the difference between truth and provability. An alternative, more direct demonstration could proceed by assuming completeness (the equivalence of truth and provability) and then deducing a contradiction»171. The formula at the heart of the theorem remains the same (“This statement cannot be proven”), but now it also conveys, “This statement is false”. “Since a correct system cannot prove falsehood, the formula is not provable; otherwise, it would be false. But because it is not provable and asserts precisely that it is not, the formula is true. Thus, Gödel provided an explicit example of a formula that is true but not provable within a correct formal system172.”

  • 173 “That one can not capture all of mathematics in one formal system already follows according to Ca (...)
  • 174 “Typographical rules for manipulating numerals are actually arithmetical rules for operating on n (...)
  • 175 Kurt Gödel (1995), “[On undecidable sentences] (*1931?)”, in Kurt Gödel, Collected Works, Volume (...)
  • 176 “Viewed from the outside, these systems involve relationships among strings of symbols. On the in (...)

79In short, Gödel, using the diagonal method and Gödel-numbering, first enabled self-reference within formal systems. He then defined the meta-mathematical notion of demonstrability or provability in mathematical terms within the system. Gödel thus created a numerical code in which each arithmetic formula is assigned a unique number, which can then be used as a value on a different (meta) level173. What is crucial is that not only is there a bi-univocal correspondence (a one-to-one matching) between formulas and numbers, but, more importantly, there is a true isomorphism174 between the formulas of number theory and natural numbers. Meta-mathematical concepts like demonstrability are “arithmetical in the precise sense adopted above. That this is the case rests, in the end, on the circumstance that these metamathematical concepts involve only certain combinatorial relationships among the formulas, [relationships] which (under a suitable choice of the association [between formulas and numbers]) are directly mirrored among the associated numbers175.” The Gödel-numbering code ensures not only that the two systems contain the same number of elements, but also that their internal properties and relations are preserved. As a result, numbers can have two different meanings and can be interpreted on two distinct levels: within the mathematical language itself or at its meta-level. Meta-statements, which are typically made outside the system (in a different language), can now be expressed within the system (using the same language). Thus, the language of mathematics can be used to make meta-assertions about mathematics itself176:

  • 177 Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 18.

Gödel had the insight that a statement of number theory could be about a statement of number theory (possibly even itself), if only numbers could somehow stand for statements. The idea of a code, in other words, is at the heart of his construction. In the Gödel Code, usually called “Gödel-numbering”, numbers are made to stand for symbols and sequences of symbols. That way, each statement of number theory, being a sequence of specialized symbols, acquires a Gödel number, something like a telephone number or a license plate, by which it can be referred to. And this coding trick enables statements of number theory to be understood on two different levels, as statements of number theory, and also as statements about statements of number theory177.
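
A toy version of the coding makes the mechanism concrete. In the sketch below (the alphabet and helper names are our own; Gödel’s actual system coded a much richer formal language), the k-th symbol of a formula becomes the exponent of the k-th prime, so unique factorization guarantees both that distinct formulas receive distinct numbers and that a formula can be recovered from its number:

    # Toy Gödel-numbering: a formula (a string of symbols) becomes one natural number.
    SYMBOLS = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6, 'x': 7}

    def primes():
        # Generate 2, 3, 5, 7, ... by trial division.
        n, found = 2, []
        while True:
            if all(n % p for p in found):
                found.append(n)
                yield n
            n += 1

    def godel_number(formula):
        # Encode the k-th symbol as the exponent of the k-th prime.
        g, gen = 1, primes()
        for symbol in formula:
            g *= next(gen) ** SYMBOLS[symbol]
        return g

    print(godel_number('0=0'))   # 2**1 * 3**3 * 5**1 = 270

Arithmetic statements about numbers such as 270 can then double as statements about the formulas those numbers encode.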

Alan Turing and the Abstract Machine

  • 178 Alan Turing, “On Computable Numbers”, p. 231.
  • 179 “« On Computable Numbers » is regarded as the founding publication of the modern science of compu (...)

80In his 1936 paper On Computable Numbers, with an Application to the Entscheidungsproblem, Alan Turing demonstrated that the “Hilbertian Entscheidungsproblem can have no solution178.” More significantly, he recognized that the question of decidability is intimately connected to the concept of computability and the notion of an effective procedure. His profound insights into the nature of computation led him to conceive and describe his abstract, theoretical digital computing machine179.

Effective Procedures and Computability: the Church-Turing Thesis

  • 180 “It is possible to produce the effect of a computing machine by writing down a set of rules of pr (...)

81Before it referred to a calculating machine, the term “computer” described a human (often a woman) calculator equipped with pencil and paper or a desk calculator. This is where Turing begins his reasoning, establishing an initial analogy: a machine is analogous to a human computer, envisioned as an obedient clerk who rigorously follows a systematic method or effective procedure (what he calls a “rule of thumb”) to solve calculations step by step180. This involves adhering exactly to a fixed and finite set of instructions. Turing demonstrated that all functions computable by a human mind can also be computed by a machine: any effective procedure that can be conceived can be executed by a machine.

  • 181 Paolo Zellini (2022), Discreto e continuo, p. 264 (my translation). “An effective procedure is a (...)
  • 182 Richard P. Feynman (1996), Lectures on Computation, p. 54.
  • 183 “In 1936, however, the unexpected happened. Independently of one another, and almost simultaneously (...)

82Thus, “effectively calculable” or “computable” means that there exists a calculational procedure ‒ a finite sequence of steps or ordered rules ‒ that can be followed to perform the computation using a finite number of basic operations. An algorithm is the equivalent of an effective procedure: “a fully defined and effective discrete process, i.e. capable of reaching a conclusive result in a finite number of steps181.” An algorithm corresponds to the specification of a finite sequence of operations (instructions) that can be applied mechanically and thus that a machine can execute. Computability, therefore, is “the ability of a certain type of machine to perform a computation182” and “calculable means programmable on a computer183.”

  • 184 “In a recent paper Alonzo Church has introduced an idea of « effective calculability », which is e (...)
  • 185 Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della f (...)
  • 186 Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della f (...)
  • 187 “One of my conclusion was that the idea of a « rule of thumb » process and a « machine process » (...)

83This correspondence between human and machine computational abilities became known as the Church-Turing Thesis, which has various formulations184. Alonzo Church, independently of Turing, also explored the nature of computation and introduced his newly invented λ-calculus as a formal definition of the intuitive notion of effective calculability185. Prior to their work, Gödel had discussed recursive functions, whose values can be computed by a "finite procedure" using only two simple rules: replacing a variable with a number and substituting an expression with another whose equality had already been established186. Church’s colleague, Stephen Kleene, demonstrated that Gödel’s recursive functions and Church’s λ-definable functions are exactly the same, and that both correspond precisely to those functions that are effectively calculable. Turing further proved that every effective computation can be performed by a Turing machine (his own notion of computability), establishing that Turing computability is also equivalent to general recursiveness and λ-definability187.

84What is crucial in all this is that it became possible to provide an absolute and general definition of the notion of a formal system using any of the equivalent formulations of computability. Gödel favored Turing’s formulation in terms of mechanical calculation:

  • 188 Kurt Gödel (1986), “Postscriptum (3 June 1964)”, in Kurt Gödel, Collected Works, Volume i, p. 370 (...)

A formal system can simply be defined to be any mechanical procedure for producing formulas, called provable. […] the concept of formal system, whose essence it is that reasoning is completely replaced by mechanical operations on formulas188.

  • 189 “For instance if Gödel's theorem is to be used we need in addition to have some means of describi (...)
  • 190 “Gödel later generalized this result, pointing out that « due to A. M. Turing's work, a precise (...)

85Thus, the machine ‒ the computer itself ‒ is a formal system189! For this reason, Gödel’s incompleteness theorems also establish the limitations of digital machines, as machines are indeed formal systems190.

Turing Machines and Universal Turing Machines

  • 191 To see a Turing machine in action, several simulators are available online, such as <https://morp (...)

86Turing envisioned many different machines, each capable of performing a specific task or solving a particular problem by executing an effective procedure or algorithm. The components of a Turing machine are remarkably simple: a scanner that can read and write symbols, and an infinite paper tape divided into squares, each containing either a blank space or a symbol. The scanner can examine only one square at a time (the “scanned square”) and perform basic operations: move left or right along the tape, read, compare, erase, copy, or print a symbol191. The symbols typically consist of binary digits along with special markers to denote the start and end of binary strings, marking the boundaries of specific data items on the tape.

  • 192 “Turing introduced two fundamental assumptions: discreteness of time and discreteness of state of (...)

87The Turing machine operates based on an instruction table (the “table of rules”), which precisely and exhaustively specifies the machine’s actions for every possible configuration it may encounter. This table defines the transition to the next state, determined by the specific combination of the current internal state of the machine and the scanned symbol. Each instruction in the table consists of five components (known as “quintuples”): the current state of the machine, the scanned symbol, the symbol to be written, the direction of movement (left or right), and the resultant state. Thus, a complete description of any Turing machine can be provided in a finite number of words192.
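
A Turing machine is simple enough that a complete simulator fits in a dozen lines of Python. The sketch below is our own minimal reconstruction (the dictionary-based tape and the function names are invented); the sample rule table is the first example machine of Turing’s 1936 paper, which prints 0 and 1 on alternate squares:

    # Each rule is a quintuple: (state, scanned symbol) -> (write, move, next state).
    def run(rules, state, steps):
        tape, head = {}, 0                       # a dict models the unbounded tape
        for _ in range(steps):
            scanned = tape.get(head, ' ')        # squares are blank by default
            write, move, state = rules[(state, scanned)]
            tape[head] = write
            head += 1 if move == 'R' else -1
        return ''.join(tape.get(i, ' ') for i in range(min(tape), max(tape) + 1))

    rules = {('b', ' '): ('0', 'R', 'c'),        # Turing's first example machine:
             ('c', ' '): (' ', 'R', 'e'),        # it prints the sequence
             ('e', ' '): ('1', 'R', 'f'),        # 0 1 0 1 0 1 ... on alternate squares
             ('f', ' '): (' ', 'R', 'b')}
    print(run(rules, 'b', 12))                   # -> 0 1 0 1 0 1

Note that the rule table is itself just data: serialized and encoded, it becomes the machine’s standard description and, ultimately, its description number.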

  • 193 “We shall have a description of the machine in the form of an Arabic numeral. The integer represe (...)

88The instruction table functions as a planned sequence of events that governs the machine’s operations. The instructions are stored in the machine’s memory (the tape) alongside the data to be processed. In a Turing machine, a program is written as a long, one-dimensional stream of symbols that form the instructions. Each instruction (quintuple) is then converted (encoded) into a sequence of letters, and the entire instruction table is represented as a long stream of letters separated by colons. This is known as the “standard description” (S.D.) of the machine. The standard description is then transformed into a single number by replacing the letters with numerals, resulting in the “description number” (D.N.) of the machine193.

  • 194 Alan Turing, “On Computable Numbers”, p. 241-242, “When we have decided what machine we wish to i (...)

89Turing machines were designed to solve specific problems, with each machine dedicated to a particular task. However, Turing introduced another groundbreaking and fundamental idea: the Universal Turing machine. This machine, when provided with the description number (D.N.) of any specialized Turing machine and its initial input, could simulate the operation of that machine. The Universal Turing machine is capable of reading and interpreting any machine description stored in its memory, executing the corresponding instructions, and functioning exactly as the described specialized machine, producing identical results. The Universal Turing machine can thus perform anything that any other Turing machine can! This single machine can, in principle, compute everything that is computable194. With this innovation, Turing successfully abstracted and encapsulated the fundamental essence of a computing machine.

Stored-Program Machine

  • 195 “« On Computable Numbers » is the birthplace of the fundamental principle of the modern computer, (...)
  • 196 “The difference is that in the Universal Turing machine, but not the Analytical Engine, there is (...)
  • 197 “The plugged pattern can be changed from one problem to another, but–at least in the simplest arr (...)
  • 198 Note the important difference between this mode of control and the plugged one, described earlie (...)

90The fact that the Universal Turing machine could be programmed to perform any possible computation makes it an all-purpose (or general-purpose) computing machine. Its universal nature is tied to the concept of the stored-program machine195: coded instructions ‒ programs or algorithms ‒ can be loaded and stored in the machine’s memory just like the data those programs operate on196. This represented a revolutionary new way of controlling machines. Previously, machines had to be physically rewired to alter their behavior ‒ manually setting switches and rerouting cables. The physical interconnections themselves defined the program to solve a problem: control was hardwired or plugged197. With a stored-program machine, the machine’s behavior was determined by programming code rather than changing its physical structure198. This meant that a machine could be programmed by loading coded sequences of instructions (programs) into its memory (memory-stored control).

  • 199 “Before Turing, the general supposition was that in dealing with such machines the three categori (...)
  • 200 “What we want is a machine that can learn from experience. The possibility of letting the machine (...)

91This innovation also underscores that the distinction between hardware and software is not clear-cut but blurred199. The Universal Turing machine is designed so that nothing in its hardware needs to be modified to produce a new result or solve a new problem; a new program (software) can simply be introduced (loaded) into the machine. Crucially, this enabled another breakthrough: the machine could autonomously modify its own instructions, allowing it to learn and improve its actions on its own200.

Computable Numbers, Undecidable Problems

  • 201 “This is a very simple idea, but is of the utmost importance. The idea of the iterative cycle of (...)
  • 202 “The fundamental point of Turing's analysis has to do with infinite sequences of binary digits”, (...)
  • 203 Alan Turing, “On Computable Numbers”, p. 230. “Just as any set of typographical rules generates a (...)

92The notion of computability is closely tied to recursion and approximation201: algorithms are effective because they rely on purely constructive and iterative procedures that are applied numerous times, producing a (possibly) infinite sequence of successive approximations202. Turing defines computable numbers as “the real numbers whose expressions as a decimal are calculable by finite means”, and further explains that, by his definition, “a number is computable if its decimal can be written down by a machine203.”
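
In this sense √2, for instance, is a computable number: a fixed, finite set of instructions produces as many of its decimal digits as desired. A minimal sketch (our own illustration, using only integer arithmetic):

    def sqrt2_digits(n):
        # First n decimal digits of sqrt(2), by finite means: at step k, find
        # the largest integer p such that p * p <= 2 * 10**(2 * k).
        digits, p = [], 1                  # p holds the digits found so far
        for k in range(1, n + 1):
            p *= 10                        # shift: one more decimal place
            while (p + 1) ** 2 <= 2 * 10 ** (2 * k):
                p += 1
            digits.append(p % 10)
        return digits

    print(sqrt2_digits(8))                 # [4, 1, 4, 2, 1, 3, 5, 6]

Each digit is reached in a finite number of steps, which is exactly Turing’s criterion of calculability “by finite means”.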

93Not all numbers are computable. The set of numbers that can be calculated using algorithmic procedures forms a discrete, countable set. In contrast, as Cantor demonstrated, the continuum of real numbers is uncountable. Consequently, the set of computable numbers does not exhaust the set of real numbers (the continuum).

  • 204 Cf. Martin Davis (2018), The Universal Computer, p. 140-141.
  • 205 Martin Davis (2018), The Universal Computer, p. 141. “We are now in a position to show that the E (...)
  • 206 “Turing was able to construct, by a method similar to Gödel's, functions that could be given a fi (...)
  • 207 “The fact of the matter is that there is no systematic method of testing puzzles to see whether t (...)

94Just as there are non-computable numbers, there are also undecidable problems. Returning to Hilbert’s program, Turing formally demonstrated that no effective procedure, mechanical method, algorithm, or Turing machine could ever solve the Entscheidungsproblem. Turing’s demonstration in his 1936 paper is presented in rather obscure terms, so I will leave the formal details aside204. What is important to understand is that identifying even one mathematical problem that cannot be solved by an algorithm (or a Turing machine) is enough to prove that the Entscheidungsproblem cannot be solved205. Turing used Cantor’s diagonal method to identify such an unsolvable problem206. This result means that no algorithm can decide whether a mathematical statement can be deduced from the axioms of a formal system of mathematics. In other words, a general decision procedure to determine whether a problem is solvable, or whether a sentence or theorem is provable within a formal axiomatic system, does not exist207.
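
The best-known such problem today is the halting problem: no program can decide, for every program, whether that program eventually halts. The sketch below renders the diagonal argument as hypothetical Python (the names are ours, and the decider halts is assumed to exist only for the sake of contradiction; no such total function can actually be written):

    # Suppose, for contradiction, that halts(f) were a total function returning
    # True exactly when calling f() eventually terminates.
    def defeat(halts):
        def g():
            if halts(g):         # if the decider predicts that g halts...
                while True:      # ...then g loops forever;
                    pass
            # ...otherwise g halts at once. Either way, halts(g) is wrong.
        return g

Whatever verdict the supposed decider returns about g, the program g does the opposite, so the decider cannot exist.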

  • 208 Alan Turing, “On Computable Numbers”, p. 241. Cf. “We are always able to obtain from the rules of (...)
  • 209 Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della f (...)
  • 210 “So there are some computational problems (e.g. determining whether a UTM will halt) that cannot (...)

95After Gödel demonstrated that no consistent formal system of arithmetic is complete, nor can it prove its own consistency, Turing further showed that such a system is also undecidable. The question of decidability is closely linked to that of completeness: a formal system that is both coherent and complete will also be decidable, as it would indeed be possible to automatically and systematically generate all its theorems and proofs based solely on the system’s axioms and rules208. To decide a given formula, one would simply wait for it (or its negation) to be generated automatically. “Completeness ensures that one of the two cases happens, and consistency ensures that only one happens”. Symmetrically, a formal system that is coherent and undecidable is necessarily incomplete. Consequently, “the undecidability of a formal system excludes any possible decision method209.” These are the fundamental limits of computability and computation210.

John von Neumann

  • 211 Initially a follower of Hilbert’s program, von Neumann shifted from pure to applied mathematics a (...)
  • 212 “Julian Bigelow, von Neumann’s chief engineer, recollected: The person who really... pushed the w (...)
  • 213 “The machine described in the paper (variously known as the IAS, or Princeton, or von Neumann machi (...)

96Finally, John von Neumann211 brought the dream to life: he was the driving force behind the concrete implementation and physical realization of the Universal Turing Machine as the first high‑speed, automatic, electronic, stored-program, all-purpose digital computing machine212. In 1945, von Neumann wrote the First Draft of a Report on the EDVAC, in which he outlined the logical design and fundamental architecture of the machine. This design, known as the “von Neumann architecture,” remains the underlying logic of most modern computers213. Notably, it is also thanks to von Neumann that the digital machine has remained in the public domain.

  • 214 John von Neumann (1945), First Draft of a Report on the EDVAC, Contract No. W-670-ORD-4926 betwee (...)
  • 215 Jack B. Copeland (ed.) (2004), The Essential Turing, p. 27.
  • 216 “« Random access » meant that all individual memory locations – collectively constituting the mac (...)

97The von Neumann architecture is remarkably simple, comprising just five components or “organs”: a central arithmetical (CA) unit for performing basic arithmetic operations; a (logical) central control (CC) unit for executing instructions in the correct sequence; a memory (M) unit for storing both numerical and instructional material (data and algorithms) needed for solving computational problems; and input (I) and output (O) channels for communicating with humans214. Von Neumann not only defined the logical organization of the machine but also laid the groundwork for programming it, creating the first modern code and program215. He also contributed to the critical innovation of the high-speed random-access (parallel) storage matrix (or memory, known as RAM)216. Turing’s machine operated with a one-dimensional tape, requiring a sequential search to locate the next instruction, which could involve scanning the tape from end to end. Random-access memory, by contrast, makes every part of the memory directly accessible, with no need to traverse it:

  • 217 George B. Dyson (2012), Turing’s Cathedral, p. x.

Turing’s model of universal computation was one-dimensional: a string of symbols encoded on a tape. Von Neumann’s implementation of Turing’s model was two-dimensional: the address matrix underlying all computers in use today. The landscape is now three-dimensional, yet the entire Internet can still be viewed as a common tape shared by a multitude of Turing’s Universal Machines217.

Digital Machines

  • 218 However, hybrid machines exist, and a digital machine can always simulate the behavior of an anal (...)
  • 219 “In an analog machine each number is represented by a suitable physical quantity, whose value, me (...)
  • 220 “That the machine is digital however has more subtle significance. It means firstly that numbers (...)

98The term “digital” contrasts with “analog,” and whether a machine is one or the other depends on how quantities ‒ and thus numbers, on which the machine operates ‒ are represented within it218. Quantities can be measured and expressed either as physical and spatial values (as continuous information, uninterrupted and capable of taking any value) or in numerical form (as discrete information, which varies in distinct “jumps” rather than flowing continuously)219. Analog machines are limited in terms of practicability, precision, and generality220.

  • 221 It is challenging to succinctly and precisely define what a number is. Put simply, a number is an (...)
  • 222 “According to some experts in cognitive archaeology, such as Karenleigh Overmann (but others as well), this (...)

99The word “digital” originates from the English term “digit” (a numeral in a numbering system221), which itself derives from the Latin “digitus,” referring to fingers, as we traditionally count in base ten using our ten fingers222. To digitize something means to convert it into numerical form. In a digital machine, everything is represented as numbers:

  • 223 George B. Dyson (2012), Turing’s Cathedral, p. 250.

The title “On Computable Numbers” (rather than “On Computable Functions”) signaled a fundamental shift. Before Turing, things were done to numbers. After Turing, numbers began doing things. By showing that a machine could be encoded as a number, and a number decoded as a machine, “On Computable Numbers” led to numbers (now called “software”) that were “computable” in a way that was entirely new223.

  • 224 “bits can represent words, pictures, sounds, music, and movies as well as product codes, film spe (...)

100As abstract symbols and informational entities, numbers serve as the first lingua franca or universal language of the machine, allowing us to express a myriad of problems and concepts. Through the machine, we can translate any information into numbers and harness the immense power of computation224. Assigning, labeling, identifying, or encoding something with a number is a fundamental and powerful act ‒ Gödel knows the extraordinary possibilities that arise once you begin numbering!

The Binary System

  • 225 “It is customary to use the symbols « 0 » and « 1 » as the names of the two states, but any two d (...)

101In the machine, numbers are expressed using a specific numbering system: the binary code system. Binary numbers consist of sequences of only two digits, 0s and 1s, known as binary digits or bits. Initially, these bits are not interpreted as numbers but rather as mere signs or symbols225. These symbols are then effectively combined to represent numbers within the machine, and these numbers, in turn, represent other things ‒ more specifically, two primary types of entities: data and instructions. For anything to be processed by the machine, it must first be converted into binary digits.
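
Any piece of data illustrates the point. In the minimal sketch below (standard Python; the example values are our own), the two characters of the string "AI" become two bytes, i.e., sixteen binary digits, which can just as well be read as a single number:

    text = "AI"
    bits = [format(b, '08b') for b in text.encode('ascii')]
    print(bits)                      # ['01000001', '01001001']
    number = int(''.join(bits), 2)   # the same sixteen bits, read as one number
    print(number)                    # 16713

Whether those sixteen bits are “two letters” or “one number” is entirely a matter of interpretation: inside the machine there are only the bits.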

  • 226 Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Esse (...)

102Why is the binary system so important for digital machines? It is the simplest possible numbering and information system, consisting of only two components or states – two clear and mutually exclusive alternatives that leave no room for ambiguity. The machine’s magic and power arise from the simple yet profound symmetry of its “all-or-none” systems, which operate with only two possible values, signals, or states: binary arithmetic (0/1), classical logic (true/false), and electricity (on/off). The synergy of these all-or-none systems unleashes the full power of the machine226.

  • 227 Cf. “Hence, instead of determining the measure of formal agreement of the symbols of Logic with t (...)
  • 228 “The power of mathematics in applications usually lies in revealing similarities or even identiti (...)

103This bi-univocal correspondence and perfect isomorphism between these three systems ensures that all meaning they represent is unambiguous and can be transmitted completely, without loss, from one system to another227. Binary signals and symbols are not only easier to transmit but also simpler to interpret correctly. Furthermore, electrical signals are both “perceptible” and intelligible to the machine228. In summary, two-state devices are significantly more elegant and reliable than those with multiple states.

Logical Control

  • 229 “To enable the machine to compute, i.e. to operate on these numbers according to a predetermined (...)
  • 230 [The basic arithmetical operations] all are patterns of alternative actions, organized in highly (...)
  • 231 “Reducing logical reasoning to formal rules is an endeavor going back to Aristotle. It was the un (...)
  • 232 The logical nature of the digital sum becomes even clearer when the binary (rather than decimal) (...)

104The first and most obvious task a machine can perform with its binary numbers is mathematical operations229. However, basic arithmetic operations are, in fact, constructed from combinations of iterative sequences of fundamental logical operations230. Computation inherently has a logical structure231, and this becomes especially clear when using the binary system, where the rules for arithmetic are significantly simpler than those in base ten232.
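
The classic illustration is the half adder, which reduces one-bit addition to two logical operations: the sum digit is an exclusive-or, the carry digit a conjunction. A minimal sketch (the function name is ours):

    # One-bit addition built from logical operations alone.
    def half_adder(a, b):
        return a ^ b, a & b              # (sum digit, carry digit)

    for a in (0, 1):
        for b in (0, 1):
            print(a, '+', b, '->', half_adder(a, b))
    # 1 + 1 -> (0, 1): sum 0 with carry 1, i.e. binary 10

Chaining such adders, bit by bit, yields addition of numbers of any size; multiplication and the rest follow by iteration.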

  • 233 A logic gate is a fundamental component of digital circuits that performs a basic logical operati (...)
  • 234 All of these gates are examples of « switching functions », which take as input some binary-valu (...)
  • 235 “That syllogism, conversion, &c., are not the ultimate processes of Logic. It will be shown in th (...)

105Logic, the science of reasoning and valid inference, provides control within the machine. Like numbers, logic is a universal language, and its universality stems from its formal nature. In von Neumann’s architecture, the logical Central Control unit supervises the sequence of operations, ensuring they are executed in the correct order. The machine processes one instruction at a time, operating in a serial rather than parallel mode. Logic is the cornerstone of the machine: as mentioned, mathematical operations can be broken down into logical instructions. Moreover, logical design governs all computational activities and any other operations the machine can perform. On the physical level, machine circuits execute instructions based on logical principles. For example, an and logic gate233, which performs the logical operation of conjunction, has two inputs and one output: the output is triggered only when both inputs are simultaneously active234. The elegance of the system lies in its simplicity: with just two symbols (0 and 1), you can represent and express almost anything; and with just two logical operators (conjunction and negation) and their corresponding logic gates, you can construct all other logical operators and gates!235 These simple, foundational components can generate staggering complexities and unlock an infinite space of possibilities.
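
This functional completeness is easy to verify in code. The sketch below (the names are our own) builds disjunction from conjunction and negation via De Morgan’s law, and exclusive-or from the operators so obtained:

    def NOT(a):    return 1 - a
    def AND(a, b): return a * b
    def OR(a, b):  return NOT(AND(NOT(a), NOT(b)))    # a or b = not(not a and not b)
    def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

    print([OR(a, b)  for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
    print([XOR(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]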

  • 236 “Beyond the capability to execute the basic operation singly, a computing machine must be able to (...)
  • 237 “Any computing machine that is to solve a complex mathematical problem must be « programmed » for (...)
  • 238 “Probably the most important idea involved in instruction tables is that of standard subsidiary t (...)

106In machines, logic serves simultaneously as a unifying foundational framework, a model of computation, and a programming language236. Algorithms or programs specify how to carry out a task via control functions and series of successive, recursive, or conditional operations237. So-called routines and subroutines – collections of autonomous instructions for very specific tasks – function as reusable components within larger programs. These routines encapsulate processes frequently used across different programs, allowing them to be combined and reused in more complex operations238.

Numerical addresses

  • 239 Richard P. Feynman (1996), Lectures on Computation, p. 14. “For example, one of the instructions wa (...)

107As in natural languages, the symbols of the machine’s binary alphabet (1 and 0) are organized into “words”, which are then combined to form “sentences”. In the context of machine language, these sentences are called instructions, each of which directs the machine to perform a basic action, such as reading from a memory location or calculating the sum of values stored in two memory registers. Each individual instruction is assigned a unique number, known as an opcode (short for “operation code”), which identifies the type of operation to be performed239.

  • 240 “The architectural principle that a pair of 5-bit coordinates (25 = 32) uniquely identified one o (...)
  • 241 “Obtain a piece of information almost immediately by « dialling » the position of this informatio (...)
  • 242 “They took us by the hand and explained how numbers could live in houses with addresses...”, Fred (...)
  • 243 “An order must indicate which basic operation is to be performed, from which memory registers the i (...)

108As we discussed in relation to the stored-program machine and RAM, the machine must be able to retrieve and store information (data and programs) as efficiently as possible, with direct access to any part of its memory240. To enable this, each memory location is assigned a unique numerical address. Consequently, every word (data) and every order (instruction) in the machine not only has an identifying number but is also assigned a unique address241. It is as though these words and instructions reside in “houses with addresses242”, and they are identified by these numerical addresses. In summary, data, instructions, and their respective locations in memory are all represented numerically, ensuring the machine can locate and process them seamlessly243.
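
A toy stored-program machine ties these threads together: a single memory array holds instructions and data alike, every cell has a numerical address, and each instruction begins with a numeric opcode. In the sketch below the opcodes and memory layout are invented purely for illustration:

    def execute(memory):
        pc = 0                                    # program counter: address of the next instruction
        while True:
            op, a, b, dest = memory[pc:pc + 4]    # fetch one four-number instruction
            if op == 0:                           # opcode 0: HALT
                return memory
            if op == 1:                           # opcode 1: ADD
                memory[dest] = memory[a] + memory[b]
            elif op == 2:                         # opcode 2: MUL
                memory[dest] = memory[a] * memory[b]
            pc += 4

    #          ADD 12,13 -> 14 | MUL 14,14 -> 15 | HALT       | data at addresses 12-15
    program = [1, 12, 13, 14,    2, 14, 14, 15,    0, 0, 0, 0,  3, 4, 0, 0]
    print(execute(program)[15])                   # (3 + 4) ** 2 = 49

Because the program resides in the same addressable memory as its data, replacing those numbers reprograms the machine; nothing physical changes.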

Communication: Codes, Languages and Information

Communication and Information Theory

  • 244 “Now it could be objected here that a coded message, unlike an uncoded message, does not express (...)
  • 245 “Whether we regard signs as the representatives of things and of their relations, or as the repre (...)

109To express our thoughts and emotions, transmit messages and information, and convey meaning in general, we primarily use languages but also many other codes, often unconsciously244. Language assigns a symbolic and referential value to sequences of signs and expressions (both oral and written)245. Semiotics is the science that studies signs and symbols used in communication (the theory of sign production) and signification (theory of codes). Communicating a message is an act of signification, relying on a system of signification (a code) that triggers an interpretative response.

  • 246 “A code is a system of signification that couples present entities with absent entities”, Umberto (...)
  • 247 “It is therefore necessary (as has been done) to conceive of the code as a double entity that establishes (...)

110In communication theory, a sender encodes the meaning he wants to express in a message, while the receiver must know and use the same code to interpret (or decode) the message and extract its meaning. A code is a system of signals or symbols given certain meanings, enabling their interpretation in relation to the objects they denote or to other symbolic systems246. Beyond symbols, a code also includes rules that dictate how these symbols can be combined and manipulated so that they remain meaningful247. The term “code” derives from codex, which originally meant “book” and later “body of laws”; today, it refers to a collection of correspondences that make the interpretation of signs and symbols possible. To encode something means to express or represent it using the symbols and rules of an agreed, conventional (though not arbitrary) code system.

  • 248 “Information theory is usually thought of as « sending information from here to there » (transmis (...)
  • 249 Although the text uses the colorful words « information, » « transmission, » and « coding, » a c (...)
  • 250 Claude E. Shannon (1948), “A Mathematical Theory of Com (...)

111The field of coding is closely linked with information theory (the mathematical theory of communication), as both deal with the representation, transformation, and transmission of information (across space and time248) via abstract symbols249. Information theory, founded in 1948 by Claude E. Shannon, is an abstract mathematical theory that studies possible representations based on the binary code. In this context, the bit (binary digit) serves as the fundamental unit and basic building block of information250. One bit can only represent two codes, but each additional bit doubles the number of possible codes:

  • 251 Charles Petzold (2000), Code, p. 70.

The thing about the bit is that it conveys very little information. A bit of information is the tiniest amount of information possible. Anything less than a bit is no information at all. But because a bit represents the smallest amount of information possible, more complex information can be conveyed with multiple bits251.

  • 252 “The two main problems of representation are the following. 1. Channel encoding: How to represent (...)
  • 253 “Try and decode this message: 011010110. You can't do it! At least, not uniquely. You do not know (...)
  • 254 information in the sense (a,i) [the mathematical theory of information as a structural theory (...)
  • 255 In a sense, the amount of information in a message reflects how much surprise we feel at receivi (...)

112The primary focus of information theory is on how to transmit information without loss and how to process it correctly. It mainly addresses the limits and boundary conditions of what is achievable in these processes. Two key challenges arise when encoding and transmitting information: avoiding errors or noise and increasing efficiency252. The transformation must preserve as much information as possible, while transmission should minimize ambiguity and reduce the likelihood of misunderstanding253. Shannon also developed a measure to determine the amount of information in a message, based on the probability of the message within the space of all possible messages254. Surprisingly, the amount of information in a message does not depend solely on the message itself but also on the receiver’s knowledge and the degree of surprise the message produces!255

  • 256 “Whenever we talk about bits, we often talk about a certain number of bits. The more bits we have (...)
  • 257 Cf. “Each decimal digit, in turn, is represented by a system of « markers ». […] A marker which c (...)
  • 258 “The essential concept here is that information represents a choice among two or more possibiliti (...)
  • 259 <https://home.unicode.org/>.
  • 260 Cf. <https://www.unicode.org/faq/>.

113To encode a message in binary code, we need to determine the minimum number of bits required256. For example, the English language uses approximately 30 different symbols (the 26 letters and a few punctuation marks)257. To uniquely identify each symbol with a number, we need at least five bits (2⁵ = 32)258. Today, the standard for character encoding is Unicode259, a universal character repertoire that assigns a unique numerical code (called a code point) to every character across all writing systems within a 21-bit code space (providing 2’097’152 possible values). Depending on the encoding format, each character is represented either as a sequence of one to four 8-bit bytes (UTF-8), one or two 16-bit code units (UTF-16), or a single 32-bit code unit (UTF-32)260.
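These variable widths can be observed directly. A small Python sketch (illustrative only) prints, for a few characters, the code point and the space each character occupies in the three formats:

# Code point and storage size of a few characters in the three Unicode formats.
for ch in ("A", "é", "€", "𝒜"):
    print(
        ch,
        hex(ord(ch)),                      # Unicode code point
        len(ch.encode("utf-8")),           # UTF-8: 1 to 4 bytes
        len(ch.encode("utf-16-le")) // 2,  # UTF-16: 1 or 2 16-bit code units
        len(ch.encode("utf-32-le")) // 4,  # UTF-32: always 1 32-bit code unit
    )
# "A" fits in one UTF-8 byte, "é" needs two, "€" three, and "𝒜" four;
# only "𝒜" (outside the Basic Multilingual Plane) needs two UTF-16 units.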

A Hierarchy of Lower and Higher Languages

  • 261 The computer and the human being (it is an obvious point) do not use a shared communicative code. The problem of (...)
  • 262 “Such an analysis [of CC], however, is dependent upon a precise knowledge of the system of orders (...)
  • 263 “The work of transcription-edition would thus consist in the re-encoding of all the (...)

114For communication to occur, the code must be shared between the sender and receiver. How, then, can we communicate with machines, which do not understand our human languages and codes and cannot directly decode or process them?261 The solution lies in the use of specialized machine‑readable codes that machines can understand and operate on: machine or programming languages262. In computing, a code is a system of symbols and rules, used to convert (encode) information into data (representations) and programs (instructions) for the machine. Encoding is always an act of translation and transcription from one code system to another263.

  • 264 “details such as how the instruction of codes are represented or exactly how things are set out i (...)
  • 265 “It must be mentioned, however, that computer programming was originally done on an even lower le (...)
  • 266 The many levels in a complex computer system have the combined effect of « cushioning » the user (...)

115Within a machine, there exists an entire hierarchy of codes and languages, operating at different levels of increasing abstraction264. At the lowest level is the binary code, or machine language, where there is no abstraction265: programs consist of a numerical representation of a finite set of basic instructions or operations directly understood by the machine. Writing programs in binary, however, is virtually impossible for most humans due to its complete lack of readability. To bridge this gap, more abstract, higher-level codes were developed, allowing programmers to ignore the details of binary syntax and work in languages closer to human expressions. These programming languages were introduced to simplify the process of writing computer programs. As such, they are designed primarily for human readability and convenience rather than for machine use266.

  • 267 The idea of assembly language is to « chunk » the individual machine language instructions, so t (...)
  • 268 And here is the vital point: someone can write, in machine language, a translation program. This (...)

116Just above machine language are medium-level languages like Assembly, where numeric instruction codes (opcodes) are represented with mnemonics, that is, human-readable names. At this level, there is still very little abstraction, as there is typically a one-to-one correspondence between assembly language instructions and machine language instructions267. However, even this intermediate language must be translated into machine language by a program called an Assembler268.
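What an assembler does can be sketched in a few lines of Python; the four-instruction set below is entirely hypothetical and exists only to show the one-to-one mapping from mnemonics to opcodes.

# A hypothetical instruction set: each mnemonic names exactly one opcode.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011, "HALT": 0b1111}

def assemble(line):
    # Translate "MNEMONIC [address]" into a numeric (opcode, operand) pair.
    mnemonic, *operand = line.split()
    return (OPCODES[mnemonic], int(operand[0]) if operand else 0)

program = ["LOAD 12", "ADD 13", "STORE 14", "HALT"]
print([assemble(line) for line in program])
# -> [(1, 12), (2, 13), (3, 14), (15, 0)]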

  • 269 It was clear that a most powerful addition to any programming language would be the ability to d (...)
  • 270 “Of course, we already went « up » a bit when we summarized operations by instructions such as «  (...)
  • 271 The next level of the hierarchy carries much further the extremely powerful idea of using the co (...)
  • 272 “All our interactions with digital information are mediated through layers of platforms. […] This (...)

117Moving further up, there are high-level languages (such as PHP, Python, Java, or JavaScript), which are much closer to human natural languages. Due to their greater abstraction, these languages must first be either compiled or interpreted before the program can be executed by the machine269. Compilers and interpreters are special translation programs that convert high-level language code into machine language270. Today, the conversion from a higher-level language to machine language is typically performed automatically by the machine itself271. Finally, even user interfaces can be considered a form of very high-level language272.

  • 273 “It is striking how tight the connection is between progress in computer science (particularly Ar (...)
  • 274 “The actual code for a problem is that sequence of coded symbols... that has to be placed into th (...)
  • 275 “The instructions which govern these operation must be given to the device in absolutely exhausti (...)
  • 276 “For today's computers to perform a complex task, we need a precise and complete description of h (...)
  • 277 This step therefore entails assuming the well-known characteristics of univocity, coherence (...)

118Different programming languages exhibit varying levels of expressive power, and each language is more or less suited to specific programming tasks or solving particular problems273. However, more important than the choice of a given language is the way or style in which we communicate with the machine. A program is essentially a set of instructions that the central processing unit (CPU) uses to perform a given task. The program code expresses the logical definition of the problem to be solved274. Preparing a problem for the machine to solve involves writing out the sequence of operations that characterize the problem and translating these into coded instructions that the machine can understand. The instructions required to solve the problem must be provided «in absolutely exhaustive detail»275, specifying the problem and how to solve it in precise and rigorous terms, explicitly defining every step in the logic of the program. Machines are intolerant of errors, omissions, and shortcuts, and even minor missteps can cause them to falter276. Therefore, when writing for the machine, one must exercise foresight and maintain meticulous, self-contained rigor277.

Structures in Space and Sequences in Time

  • 278 “It was realized that the computer really processed information, not just numbers”, Herman H. Gol (...)
  • 279 It must be noted that time in the machine is not the same as time outside the machine. “No time i (...)

119In a digital machine, everything is represented as numbers. However, as we have seen, these numbers are not merely numerical values but also carriers of information278. What kind of information do they represent? There are two fundamental types of numbers within the machine: numbers that mean things and numbers that do things. The former are structures in space (data), while the latter are sequences in time (instructions, programs, or algorithms)279. Data represents the information used and produced during computation, while algorithms are the processes or means by which the machine acts upon that data.

  • 280 “A distinction which is made in Artificial Intelligence is that between procedural and declarativ (...)
  • 281 “Although we can try to draw a clear line between program and data, the distinction is somewhat a (...)
  • 282 “You can see that a [shift] register like this takes a sequential piece of information and turns (...)
  • 283 The fundamental, indivisible unit of information is the bit. The fundamental, indivisible unit o (...)

120The difference between data and algorithms can be linked to the distinction between declarative and procedural knowledge, that is knowledge of facts versus knowledge of how-to’s. Procedural knowledge is not explicitly stored or localized; rather, it is “spread around,” implicitly represented and defined by the logical patterns set by algorithms280. Typically, syntactic knowledge is procedurally embedded in programs, while semantic knowledge is static. However, it is crucial to understand that there is no clear-cut distinction between the two281: sequences in time can be transformed into structures in space, and vice versa282. The machine can translate in both directions ‒ between structure and sequence283. In other words, the same knowledge can either be made explicit and expressed as a static, exhaustive list or repertoire, or it can be implicitly characterized through a system of generative schemes and rules, enabling the dynamic construction of that same list or repertoire.
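A deliberately small Python sketch (illustrative only) shows the two forms of the same knowledge, the first six square numbers: once as an explicit structure in space, once as a rule generating a sequence in time.

# Declarative form: an explicit, static repertoire of facts.
squares_as_data = [0, 1, 4, 9, 16, 25]

# Procedural form: a rule that generates the same repertoire on demand.
def squares_as_rule(n):
    for i in range(n):
        yield i * i

# Translating between structure and sequence in both directions:
print(list(squares_as_rule(6)) == squares_as_data)   # True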

  • 284 “Here is a case which demonstrates that, despite the theoretical equivalence of data and programs (...)
  • 285 “A key factor in the re-usability of data is the extent to which it is well structured. The more (...)

121Nevertheless, even though there is a theoretical equivalence between data and algorithms, maintaining a practical balance between the two requires careful consideration of their implications and consequences284. The pairing of data structures and algorithmic sequences is a cornerstone of data engineering. Data structures must represent information in a manner that allows it to be effectively processed by an algorithm. Algorithms, used to define research problems and procedures, must suit the data they will process. Generally, the more granular, fine-grained, and precise (structured and unambiguous) the data is, the easier it is to design an algorithm capable of manipulating it285. Every algorithm operates in accordance with a specific and precise data model.

The Semantic Web

“In the act of dividing the continuous distance into two halves one point is treated as two, since we make it a starting-point and a finishing-point: and this same result is also produced by the act of reckoning halves as well as by the act of dividing into halves.”
(Aristoteles, Physics, 263 a 23-25)

The World Wide Web

  • 286 The servers are active processes that reply to requests [...] clients are browser processes”, Be (...)
  • 287 “But the gift of the Web wasn’t only informational: by its very existence it gave us new tools to (...)

122Before the Semantic Web, there was the Web ‒ or, more precisely, the World Wide Web ‒ a system designed as a network of interconnected information in the form of textual documents, specifically HTML web pages. These pages are accessed and transferred using the HyperText Transfer Protocol (HTTP), which defines the language through which clients (browsers) and servers communicate286. The Web itself is built on the foundational infrastructure of the Internet, a global network of interconnected computers287. The Internet operates based on the Transmission Control Protocol/Internet Protocol (TCP/IP), a framework that ensures reliable communication between computers.

123Thirty-five years ago, in 1989, Sir Tim Berners-Lee created the Web while working at CERN288. In 1993, CERN released the Web’s code and specifications into the public domain289. The following year, in 1994, the first international conference on the Web was held, and the World Wide Web Consortium (W3C), an international public-interest non-profit organization, was established at MIT in collaboration with CERN. Since then, the W3C has served as the primary authority on Web standards, with the mission of leading the Web to its full potential290.

124HTML (HyperText Markup Language), the language used to convert text documents into web pages that browsers can interpret and display, is an application of SGML (Standard Generalized Markup Language). SGML is a document description language that uses tags to semantically mark and structure parts of a document. The syntax and semantics of HTML elements and attributes are defined in the W3C HTML standards specification. The latest version, HTML5, is the result of a collaborative effort between the W3C and the Web Hypertext Application Technology Working Group (WHATWG)291.

125At the heart of the Web and HTML is the hypertext link. An HTML document contains links that connect to other parts of the same document, to other documents within the same website (internal links), or even to entirely different websites (external links). These links make documents easily browsable and navigable through a simple user interface:

  • 292 Berners-Lee, Tim (1990), Proposal for a Hypertext Project, p. 2

The texts are linked together in a way that one can go from one concept to another to find information one wants. The network of links is called a web. […] the texts are known as nodes. The process of proceeding from node to node is called navigation292.

  • 293 There is some terminological overlap between Web 3.0 and Web3, the latter being mainly associated (...)

126The Web has evolved through several stages or phases, often referred to by version numbers: Web 1.0, Web 2.0, and today, Web 3.0. Web 1.0 was “read-only”: it featured a small number of content creators, such as individuals or companies creating homepages, and a large majority of content consumers (users). This phase primarily consisted of static pages containing human-readable information served from server file systems. With its second version, the Web became a “read-write” platform, enabling users to actively interact with and contribute to the Web by creating blogs, wikis, or participating on social platforms. Web 2.0 is less centralized and more community-focused, emphasizing participation, social interaction, and dynamic content. The current Web 3.0 represents the “read-write-execute” phase: the use of Web content is no longer limited to humans, but is also directly accessible to and processable by machines. Machines can not only read and write Web content but also understand, use, and execute it. The Semantic Web is an integral part of Web 3.0293.

The Semantic Web Vision

  • 294 Tim Berners-Lee (1998a), “Semantic Web Road Map”, Design Issues. URL: https://www.w3.org/DesignIs (...)
  • 295 Tim Berners-Lee, James Hendler & Ora Lassila (2001), “The Semantic Web: A new form of web content (...)

127The Semantic Web (SW) was also envisioned by Sir Tim Berners-Lee294. In 2001, together with James Hendler and Ora Lassila, he published a seminal article in Scientific American titled “The Semantic Web”295, whose subtitle announced that “a new form of Web content that is meaningful to computers will unleash a revolution of new possibilities”.

  • 296 Tim Berners-Lee et al. (2001), “The Semantic Web: A new form of web content that is meaningful to (...)

128The SW is first of all a vision: the transformation of the Web from a collection of documents created for human consumption into a Web of data, whose meaning is also accessible to machines. Content is not merely encoded in a machine-readable form but also made machine-comprehensible and directly processable296. The SW extends the Web paradigm by enabling not only documents but also data, along with its semantics and logic, to be portable and shareable. Its main goal is to construct a semantic knowledge representation system that is decentralized, distributed, heterogeneous yet unifying, global, and formal.

129The SW represents one of the most advanced technologies for the representation, exchange, and retrieval of data and knowledge on the Web. It is aligned with the tradition of knowledge-based expert systems, emphasizing how information can be modeled to ensure interoperability, reuse, and robustness within a distributed environment. SW languages are specifically designed to express complex knowledge. By focusing on expressivity and reasoning, the SW not only provides a foundational language for expressing propositions but also includes mechanisms to define axioms and logical rules, enabling advanced reasoning capabilities.

130The most significant aspect of the SW is its provision of logical tools to express not only data and information (metadata) but also knowledge in a formal and explicit manner. Logic serves to resolve ambiguity, replacing it with formalism. The explicit and self-descriptive formal semantics ensures data transparency and disambiguation of information. Admittedly, formalization requires considerable effort, commitment and collaboration from both domain experts and SW specialists. However, the reusability of formal data models, combined with their ability to enable semantic interoperability and reasoning, offers an immense return on investment. As previously discussed (in Section 1), the adoption of SW technologies entails numerous advantages and opens up vast possibilities – most notably, machine reasoning. Despite this, the SW has yet to reach its full potential. It is not yet universally adopted as a standard, and some of its components still require further formalization and standardization ‒ an area where humanistic scholars can and should contribute to advancing the SW community’s efforts.

Linked Open Data

131The SW is closely associated with the Linked Open Data (LOD) publishing paradigm. This paradigm has gained increasing prominence within the contexts of the Open Science movement, open government data (OGD) and open research data (ORD). Today, it is more evident than ever that the outputs of scientific research, along with the underlying research data, should be open297 and linked298.

  • 299 “Technically speaking, Linked Data refers to data published on the Web in such a way that it is m (...)

132The core idea of LOD is to leverage SW technologies to publish structured data on the Web and to use SW links to interconnect data from diverse sources. This approach establishes explicit, meaningful connections and relationships between otherwise separate datasets and across repositories299. Publishing data according to these principles enhances their interoperability, facilitates their exchange, and significantly improves their durability and long-term usability.

  • 300 <https://5stardata.info/>.
  • 301 1. Use URIs as names for things. 2. Use HTTP URIs so that people can look up those names. 3. Whe (...)

133The implementation of LOD follows a five-step deployment program known as the Five-Star LOD model300, initially outlined by Tim Berners-Lee301. LOD is online, machine-readable, open, identified and linked. This model emphasizes the progressive improvement of data openness and interconnectivity, with each step building upon the previous one:

1341. Publish data on the Web: make the data available online under an open license, in any format (e.g. PDF).
2. Publish data as machine-readable structured data: ensure the data can be read and processed by machines (e.g. Excel).
3. Publish data in an open, non-proprietary format: use formats that are free from restrictions (e.g. CSV).
4. Use Uniform Resource Identifiers (URIs) to identify entities in your data: assign unique URIs to entities so they can be easily referenced (RDF).
5. Link your data to other datasets: create meaningful connections to other data, providing additional context and enabling a richer network of information (LOD).

  • 302 Cf. Tim Berners-Lee (2006), “Linked Data”, <https://handbook.opendata.swiss/de/content/glossar/bi (...)
  • 303 Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 32, Gabriel Müller & Ueli Zahnd ( (...)
  • 304 Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, in Fabio Ciotti & Gianfranco Cr (...)
  • 305 Raul Mordenti (2001), Informatica e critica dei testi, p. 35; Peter Robinson (2013), “Towards a T (...)

135The adoption of the LOD paradigm is important for advancing research in the Humanities and enhancing its impact. A notable example of this is the development of so-called semantic editions ‒ digital scholarly editions created using SW models and technologies. This approach requires rethinking and encoding texts as LOD302, treating them as data and relationships303. The text is no longer perceived solely as a standalone work or document, but as a collection of structured and interlinked data, information, and knowledge. More specifically, it becomes a dynamic and mobile, non-hierarchical object composed of elements situated within a dense network of intra-, inter-, and extra-textual relationships304. Semantic editions offer a way to reconcile the tension between text and apparatus, as well as between the conceptualization of the text as a work (the Text) and as a document (the testes, or witnesses). This tension is particularly pronounced in the case of medieval texts305.

  • 306 Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, p. 305, Peter Boot & Marijn Koo (...)
  • 307 an « excellent critical edition » is one that « offers the necessary and sufficient materials for an (...)

136To create a semantic edition, the text must first be “exploded” (decomposed) into its constituent fragments, parts, aspects, and relationships. Each element is assigned and identified with a unique URI, allowing it to be reassembled into the text while also connecting it to other data and information within a broader network of references in the context of a general knowledge graph306. This decomposition facilitates countless recompositions and recombinations from various perspectives and focuses. By making textual data concretely and unambiguously addressable at the most granular level possible – identifying the smallest units of analysis – it becomes feasible to describe and annotate texts and documents with precision across any level of granularity and abstraction. SW technologies enable the precise identification of textual phenomena and places, which can then be combined or processed independently. This allows for the description of diverse contexts, hierarchies, and levels from multiple perspectives. For this reason, semantic editions have the potential to be truly excellent, that is, providing “the necessary and sufficient materials for another critical edition of the same work conducted according to different criteria307.”

  • 308 Cf. Jörg Wettlaufer (2018), “Der nächste Schritt? Semantic Web und digitale Editionen”, in Roland (...)

137SW is not yet a standard for digital scholarly editions, and user-friendly tools or fully developed models remain limited. Nevertheless, given its enormous potential, initiatives in this direction are becoming increasingly numerous308.

The SW Stack

138Concretely, the SW consists of a set of open, formal languages and models recommended by the W3C309. These are often depicted as a stack (also referred to as a layer cake) of successive layers310. While this is a convenient graphical representation, it should not be interpreted as normative311.

Fig. 1: The Semantic Web Stack (source: <https://www.w3.org/2007/Talks/0130-sb-W3CTechSemWeb/#(24)>)

139The layers can be broadly categorized into three levels of standardization:

1401. The bottom layer corresponds to well-established and standardized Web technologies (URI/IRI and XML).
2. The middle layers contain SW-specific technologies that have been standardized (RDF, RDFS, OWL, SPARQL, RIF).
3. The upper layers represent SW technologies that are still in the process of being formalized and standardized (Logic, Proof, Trust).

141The final layer, user interfaces and applications, is not strictly part of SW technologies but plays a crucial role in making the SW accessible and usable for both humans and machines. Each layer builds on those beneath it, with increasing expressivity as one moves higher up the stack. While neighboring layers can interact, such interaction is not mandatory. All layers are necessary to fully realize the vision of the Semantic Web.

URI: global identification

142Among Web technologies, one stands out as crucial for the SW: the Uniform Resource Identifier (URI). URIs serve as global identifiers, enabling the unique identification of any resource on the Web. The most familiar and widely used type of URI is the URL (Uniform Resource Locator), which is primarily tied to the Web’s document-centric view, providing access to a document by indicating its location on the Web. In contrast, URIs do not merely provide addresses but function as actual identifiers. Another variant, the Uniform Resource Name (URN), serves as a persistent identifier that remains valid even if the resource’s location changes. Additionally, there is the Internationalized Resource Identifier (IRI)312, an extended version of the URI. Unlike URIs, which are restricted to the ASCII (American Standard Code for Information Interchange) character set, IRIs support a broader range of characters, accommodating non-ASCII characters for international use. In short, IRI is a superset of URI and URI encompasses both URL and URN, which are disjoint categories313.

143What is crucial is that, by receiving a global identifier, a string becomes a thing: a sequence of characters is transformed into a distinct entity. This transformation enables meaningful properties and relationships to be associated with the entity. The concept mirrors Gödel-numbering and machine numerical addresses: assigning a unique number, address, or identifier to something makes it directly accessible to machines and manipulable according to defined rules. The combination of granular encoding and unique identifiers unlocks the full potential of machine processing. In the SW, every resource is represented and addressed with a URI, making it the foundational element of the entire SW framework.

RDF: Knowledge Representation and Data Interchange

144The Resource Description Framework (RDF)314 forms the core foundation of the SW. RDF serves as a basic grammar, an abstract data model, and a logic framework for information and knowledge representation, as well as for data interchange.

145RDF relies on two fundamental structures: the triple and the graph. An RDF triple is a single assertion or statement composed of three parts: Subject-Predicate-Object. A set of triples forms a graph, a network-like structure consisting of a set of vertices (nodes) and a set of edges (links) that connect pairs of nodes. Specifically, an RDF graph is a directed, labeled graph, representing an (unordered) set of RDF assertions. RDF graphs are typically stored in specialized databases called Triple Stores.

146RDF is an assertional logic, a formal language designed to express propositions. RDF triples, as assertions or sentences conveying propositions, function as standalone complex signs that support logical discourse. This approach contributes to the SW’s “simplicity” by being relatively “verbose” and aligned with the ways humans think and communicate. However, SW is also inherently complex, as it does not permit shortcuts. It is an abstract declarative-oriented system that allows assertions about any subject matter.

147In the RDF grammar, there are only three disjoint types of terms that can serve as the Subject, Predicate, or Object of a statement (and thus become nodes in the graph): named resources (named nodes), anonymous resources (blank nodes), and datatyped literals. Named resources are identified through URIs, the global identifiers for Web resources. Anonymous resources, or blank nodes, represent unidentified resources; more specifically, they act as implicitly existentially quantified variables denoting the existence of a resource without an assigned URI. Blank nodes are scoped locally to the RDF graph where they appear. Datatyped literals are defined by their content or literal value: they are simple strings of characters associated with a data type (such as text, number, or date), which can also include language specifications.

  • 315 “Definition of RDF Triple: Assume that I is the set of all IRI references, B (an infinite) set of (...)
  • 316 Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 99.

148The Subject, Predicate, and Object in an RDF triple must denote named resources represented by URIs, but there are two exceptions: the Subject can also be a blank node, and the Object can either be a blank node or a literal315. The Predicate expresses a binary relation of the Subject, referred to as a property, while the Object represents the value of that relation. When the Object is a URI, the proposition establishes an Object Property, which relates two resources or entities. Conversely, if the Object is a character string, it forms a Datatype Property, which connects an entity or resource to its attributes or properties. This model ensures that most Subjects, all Predicates, as well as many Objects in RDF assertions, are named resources on the Web, that is uniquely identified and always accessible. Crucially, these resources can be retrieved directly by clicking on their URI to access the corresponding web page representing the resource. This capability eliminates semantic ambiguity and vagueness associated with names and labels, providing certainty and verifiability about the entities and concepts being referenced316.

  • 317 Tim Berners-Lee (1999a), “The Semantic Web as a language of logic”, Design Issues. URL: https://w (...)
  • 318 Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, p. 280, Fabio Ciotti (2012), “W (...)

149The logical structure of RDF, embodied in the triple that represents an assertion or proposition, possesses immense expressive power317. This structure enables us to describe any resource and represent any connection between resources. RDF empowers us with the ability to engage in “propositioning”: expressing defined thoughts or propositions in a logical discourse that machines can understand and process318. By combining predicative linguistic articulation with the global identification and reference mechanism of URIs, we can construct a global semantic space, a semantic network of interconnected assertions.

  • 319 “To publish data on the Web, the items in a domain of interest must first be identified. These ar (...)

150It is crucial to understand that the resources identified through URIs and used to form RDF triples or assertions are not limited to data or documents on the Web. Rather, they represent the actual things that exist in the world ‒ things that those documents describe. These resources can vary widely in type and ontological status, encompassing material entities, historical and cultural artifacts, abstract concepts, propositions, facts or events. By convention, and slightly simplifying, a reference to a URI implies a reference to the object it identifies319.

  • 320 Tim Berners-Lee (1998b), “Using XML for Data”.

151What is most significant is that not only entities but also relationships (Predicates) are treated as resources with unique and global identifiers, making them first-class objects320. On the Web, everything is connected, but traditional hypertext links are merely generic links. In contrast, RDF links between resources are typed links or qualified relations, meaning they explicitly define the nature of the connection. This allows us to precisely specify and encode the semantics of relationships.

152The following are simple examples of three RDF triples taken from DBpedia321:

153dbr:Aristotle rdf:type foaf:Person .
dbr:Physics_(Aristotle) dbp:title "Physics"@en ;
    dbp:author "Aristotle"@en .

154The first triple asserts that Aristotle is a person; the second that Aristotle’s work Physics has the English title “Physics”; and the third (whose Subject, thanks to the semicolon, is carried over from the second) that its author’s name is the literal value “Aristotle” in English. Instead of using the complete URI, it is common practice to use abbreviated forms with prefixes for namespaces. For example, the prefix dbr: represents <http://dbpedia.org/resource/>, and dbp: corresponds to <http://dbpedia.org/property/>. The property rdf:type is an RDF property indicating that the subject is an instance of the class specified by the object.

Syntaxes

155RDF can be serialized in various formats. In the early days of the SW, the primary syntax used was XML (eXtensible Markup Language)322. Today, other standards are available for serialization, including N-Triples, Terse RDF Triple Language (Turtle)323, Notation3 (N3) language324 and JSON‑LD325 (commonly used in APIs).
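For a sense of how these serializations differ, here is the same single assertion, that Aristotle is a person, written first in N-Triples and then in Turtle (a sketch reusing the DBpedia and FOAF namespaces cited above):

# N-Triples: one triple per line, full URIs, closed by a dot
<http://dbpedia.org/resource/Aristotle> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .

# Turtle: prefix declarations first, then compact prefixed names
@prefix dbr:  <http://dbpedia.org/resource/> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

dbr:Aristotle rdf:type foaf:Person .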

Figure 2: Venn diagram showing the relations between W3C languages and their syntactical features (source: <https://nie-ine.github.io/e-editiones/semantic-web-technology-introduction#notation-3-language>)

156The most human-friendly syntax is Turtle, while full Notation3 (N3) offers greater expressive power. As shown in the Venn diagram in Figure 2, which illustrates the relationships between these languages and their syntactical features, Turtle is a subset of N3. Notably, N3 includes symbols for logical implication as well as for universal and existential quantification. N3 is a superset of RDF that supports declarative programming with features such as rules, graph quoting, graph description, and various built-ins. However, N3 is not yet an official W3C standard, but only a W3C specification.

SPARQL Protocol and RDF Query Language

157The SPARQL Protocol and RDF Query Language326 is an RDF query language based on graph pattern matching, designed to retrieve and manipulate data stored in RDF format327.

158A SPARQL query contains a set of triple patterns, collectively called a basic graph pattern. These triple patterns resemble RDF triples, except that the Subject, Predicate, or Object can include variables. A triple pattern matches a portion of an RDF graph when variables in the pattern can be substituted with RDF terms from the graph.

159A SPARQL query consists of two main parts:

1601. An outer clause that begins with a keyword (select, describe, ask or construct) and specifies the search variables, which are denoted with a leading question mark.
2. A where clause that specifies the search pattern to match against the RDF data graph.

161The result of a SPARQL query is a sequence of solutions: all possible matches where the variables in the query are bound to corresponding RDF terms in the graph. SPARQL queries can be refined with filters to narrow the results, and more complex queries can be constructed using conjunctions, disjunctions, and optional or alternative patterns. The optional feature is particularly significant because RDF graphs cannot be assumed to be complete. If an optional pattern does not match (i.e., no bindings are found), the solution is not discarded.

162In the following simple SPARQL query, we search for all triples in which Aristotle appears as the Subject:

163PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?p ?o
WHERE { dbr:Aristotle ?p ?o . }
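Filters and optional patterns can refine such a query. The following sketch (the properties dbo:influencedBy, dbo:birthDate, and rdfs:label are assumptions based on DBpedia’s vocabularies) retrieves thinkers influenced by Aristotle, keeps only English labels, and adds birth dates where available without discarding solutions that lack one:

PREFIX dbr:  <http://dbpedia.org/resource/>
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?thinker ?name ?birthDate
WHERE {
  ?thinker dbo:influencedBy dbr:Aristotle ;      # basic graph pattern with variables
           rdfs:label ?name .
  OPTIONAL { ?thinker dbo:birthDate ?birthDate } # solution kept even if unmatched
  FILTER ( lang(?name) = "en" )                  # narrow the results
}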

RDFS: RDF Schema

164At the semantic layer of the SW stack, there are two main levels328. The first semantic level is represented by RDF Schema (RDFS)329, which enables the creation of taxonomies. RDFS extends RDF by providing a data-modeling vocabulary that allows the expression of basic intensional ontological properties. In particular, it makes it possible to define hierarchies between classes and properties (specifying sub-classes and sub-properties) and to specify the domain (the class of the subject) and the range (the class of the object) of a predicate.
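A minimal Turtle sketch may make this concrete (the ex: namespace is hypothetical, chosen only for illustration): it declares a small hierarchy of classes and properties and constrains the domain and range of one predicate.

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .          # hypothetical namespace

ex:Philosopher rdfs:subClassOf    ex:Person .  # class hierarchy
ex:wrote       rdfs:subPropertyOf ex:created ; # property hierarchy
               rdfs:domain        ex:Person ;  # subjects of ex:wrote are persons
               rdfs:range         ex:Work .    # objects of ex:wrote are works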

OWL: Ontologies

  • 330 <https://www.w3.org/OWL/>.
  • 331 Willem N. Borst (1997), Construction of Engineering Ontologies, Institute for Telematica and Info (...)
  • 332 Tim Berners-Lee et al. (2001), “The Semantic Web”, “Pure logic is ontologically neutral. It makes (...)
  • 333 Dino Buzzetti (2011), “Oltre il rappresentare: le potenzialità del markup”, p. 41.

165The second and far more expressive level of semantics is represented by the Web Ontology Language (OWL 2)330, a language designed to create ontologies. In the context of the SW, an ontology is a formal data model that implements a conceptual model specifying concepts (classes, which are abstract groups or categories of objects), the relationships between these concepts (properties), and axioms (which impose constraints on term values) relevant to a given domain. An ontology has been famously defined as «a formal specification of a shared conceptualisation»331. While RDF provides a powerful but generic expressive capability ‒ enabling the unambiguous and global identification and definition of any type of concept or binary relation ‒ the precise nature and meaning of these classes and relations are defined within ontologies332. Ontologies allow us to enhance the dimensionality of a knowledge graph, precisely classifying entities and defining specific attributes and relationships. As a result, all concepts and their various dimensions are interconnected in an abstract, integrated, and deeply structured semantic system333.

  • 334 Cf. Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergen (...)

166OWL ontologies are grounded in set theory and model theory. When modeling with ontologies, it is essential to understand the fundamental distinction between categories (classes) and individuals (instances of those classes). In a knowledge base or knowledge graph, this distinction translates into two different types of statements: TBox and ABox statements. TBox (terminological box) statements are universally quantified assertions about concepts, such as the general properties of classes and relationships (roles). In contrast, ABox (assertional box) statements are specific assertions about individual entities, compliant with the TBox. ABox statements describe, for instance, that a particular individual entity is an instance of one or more classes defined in the TBox334. In this framework, OWL ontologies represent the TBox of a knowledge base, while RDF data corresponds to the ABox.
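A compact Turtle sketch (again with a hypothetical ex: namespace) shows the two kinds of statements side by side: the TBox declares the terminology, the ABox asserts facts about individuals in compliance with it.

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix ex:  <http://example.org/> .           # hypothetical namespace

# TBox: terminological, universally quantified knowledge
ex:Philosopher rdf:type owl:Class .
ex:Work        rdf:type owl:Class .
ex:authorOf    rdf:type owl:ObjectProperty .

# ABox: assertions about individual entities
ex:Aristotle   rdf:type ex:Philosopher ;
               ex:authorOf ex:Physics .
ex:Physics     rdf:type ex:Work .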

  • 335 Cf. Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 98.
  • 336 “When you represent information as a DLG [directed labelled graph] the nodes don’t actually conta (...)
  • 337 Reification is also used to describe RDF triples.
  • 338 Data acquire the value of knowledge when they are interconnected with other data, when their (...)

167Ontologies are knowledge organization systems designed to express and represent knowledge in an explicit, formal, self-descriptive, shared, and consensual manner335. They enable us to represent the fixed and context-independent properties of a domain and the principles governing the interactions of entities within that domain. Ontologies should not be seen merely as data models but as (partial) descriptions and models of reality. Through them, we articulate the ontological commitments and mappings of reality inherent in our theories, while the ontological distinctions help clarify the fundamental structures of reality. For instance, relationships play a crucial role in creating meaning and representing knowledge336. As noted earlier, in the SW, relationships (properties or predicates) are treated as first-class objects and assigned URIs as identifiers. Moreover, a common practice is to reify relationships ‒ turning them into “things” (res) i.e. classes ‒ allowing them to be analyzed in detail and connected to other entities337. Ontologies thus provide the tools to explicitly and precisely codify the nature of relationships, enhancing our ability to represent and reason about complex systems338.

168In general, and compared to RDFS, OWL provides a richer vocabulary for describing properties and classes. It allows for the specification of logical and semantic characteristics of classes and properties, such as equivalence, symmetry, generalizations, specializations, abstractions, cardinality, and more. Additionally, it supports the definition of domain-specific concepts, relationships, properties, and logical rules. Ontologies enable the representation of reality in its multiple dimensions (conceptual, visual, material), at varying levels (of abstraction, granularity), and linked in diverse ways. The ability to define both concrete, specific concepts and abstract, generic ones allows for the creation of overarching structures and the identification of increasingly rich connections. Ontologies facilitate the recognition of the plurality of real and objective aspects within the unity of a thing and the identification of patterns at all levels. Even though these aspects and patterns cannot exist independently, they must nevertheless be distinguished. This is exemplified in the LRMoo (Library Reference Model, formerly FRBRoo, Functional Requirements for Bibliographic Records) ontology339, which describes four semantic levels (from the most abstract to the most concrete) characterizing a text:

1691. Work: the abstract work encompassing all concepts related to an original idea.
2. Expression: the intellectual or artistic realization of a work in the form of an identifiable, intangible object.
3. Manifestation (Product Type or Singleton): the physical embodiment or instance of the expression of a work, produced either serially or uniquely.
4. Item: a single exemplar of a manifestation.

  • 340 Tim Berners-Lee (1998b), “Using XML for Data”, Tim Berners-Lee et al. (2001), “The Semantic Web”,(...)
  • 341 Dino Buzzetti (2011), “Oltre il rappresentare: le potenzialità del markup”, p. 49.

170The expressive power of ontologies is further amplified by the fact that anyone can create their own ontology ‒ defining unique concepts and relationships ‒ within the global semantic space340. This allows for the expression of any statement, but for it to be meaningful and interoperable, it must also adhere to common and shared ontologies. These shared ontologies, being more generic and abstract, act as a unifying glue, enabling the overlap and interconnection of various discourses, even among conflicting interpretative models341. Furthermore, ontologies must be grounded in primitive elements or representations ‒ definitions of the fundamental categories and relationships within a domain. The process of defining such primitive representations is referred to as ontological engineering.

171There are different types of ontologies with distinct objectives: some are highly generic, while others are domain-specific or tailored to specific applications. A significant ontology for the humanities and cultural heritage field is CIDOC-CRM (Conceptual Reference Model)342. This event‑centric model, which has become an ISO standard (ISO 21127), provides a common and extensible semantic framework. Ontologies can be linked and extended, but it is crucial to prioritize reusing and mapping to existing models. Various modeling strategies and methodologies can be employed, making ontology management a critical aspect of the entire process. Ontologies should not be developed as isolated, stand-alone models; instead, they should be constructed within a coherent network of extensively linked ontologies operating at varying levels of abstraction.

OWL Semantics and Profiles

172OWL has two alternative semantics, which provide different ways of assigning meaning to OWL ontologies343.

  • 344 SROIQ is a highly expressive Description Logic, extending the SHOIN Description Logic with featur (...)

173The first is OWL 2 Direct Semantics (referred to as OWL 2 DL), which assigns meaning directly to the ontological structures. This semantics is compatible with the model-theoretic semantics of SROIQ344 Description Logic (DL). Description Logics are a family of logics designed for knowledge representation in terms of concepts, roles, individuals, and their relationships. They are fragments of first-order logic, offering reduced expressivity but with the advantage of being decidable. As discussed in Section 2, decidability means that, for any statement expressed in the logic language, there is an effective method to determine whether the statement is a consequence of a set of valid formulas. A statement is decidable if it (or its negation) can be demonstrated within the framework of a given theory. The goal of Description Logics is to balance expressive power with computability (or decidability).

174The second semantics is RDF-Based Semantics (referred to as OWL 2 Full). This semantics assigns meaning directly to RDF graphs and only indirectly to ontological structures. It was created to establish a semantic connection between OWL and RDF. While it is fully compatible with RDF semantics345, OWL 2 Full is known to be undecidable.

175In addition to these semantics, OWL 2 also has different profiles, which are syntactic subsets or sub-languages of OWL 2346. These profiles are less expressive but more efficient and better suited for specific application scenarios. OWL 2 EL is optimized for working with large datasets, OWL 2 QL for efficient querying and OWL 2 RL for scalable reasoning.

Open World Assumption

176Before moving to the next SW layer of reasoning, it is important to understand a key logical assumption underlying the SW: the Open World Assumption (OWA). The OWA means that an assertion that cannot be derived is not necessarily false, and no conclusions can be drawn from the absence of an assertion. In contrast, the Closed World Assumption (CWA) considers any statement that cannot be derived as false (negation as failure). For example, if a knowledge graph records only that Aristotle wrote the Physics, then under the OWA the question of whether he also wrote the Poetics simply remains open, whereas under the CWA it would be answered in the negative.

177The OWA is particularly relevant in the context of the decentralized Web environment, where it is impossible to assume that all information is contained within a single, complete knowledge graph. New information might exist or emerge elsewhere on the Web, and thus, it cannot be assumed that knowledge graphs are ever complete. RDF, RDFS, and OWL operate under the OWA, where something that is not known to be true is simply unknown, not false. Consequently, no conclusions can be drawn from the absence of derivation.

  • 347 Dörthe Arndt (2019), Notation3 as the unifying logic for the semantic web, PhD thesis in Technolo (...)

178However, cases of negation can still be addressed in the open world using the so-called scoped negation as failure. This means “drawing conclusions from the fact that something cannot be derived from a fixed dataset explicitly named. Even if we find out later that somewhere in the Semantic Web a fact x is declared to be true, the statement that we cannot derive that fact x from a fixed knowledge base K keeps being valid347.”

  • 348 Tim Berners-Lee et al. (2001), “The Semantic Web”.
  • 349 Dean Allemang & James Hendler (2011), Semantic Web for the Working Ontologist, Oxford, Elsevier L (...)

179The OWA aligns with the vision of open and distributed knowledge, supporting a global but decentralized space where new information may continually emerge. It upholds the principles that “anything can link to anything348” and “anyone can say anything about any topic349.”

Reasoning: Logic and Rules

180In the SW, automatic and formal machine reasoning is based on both ontologies (ontological reasoning) and rules (rule-based reasoning). While ontologies offer static or declarative knowledge, rules allow for procedural knowledge to be implemented.

181Ontologies provide the axioms for the reasoning engine to work with. This is why we have to be careful about what we actually (formally) express in our ontologies, since this could have important logical consequences. Therefore, when creating ontologies, theoretical and terminological precision should be sought, as it is indispensable for logical analysis.

182In the SW, it is possible to derive new knowledge not only from the data and ontologies but also by using rule-based reasoning. In rule-based systems, a rule consists of a condition and a conclusion. If some condition holds in the data, then the conclusion is processed. For example, the rule for the sub-class relation is as follows: the condition states that “A is a sub-class of B” and “x is an instance of A”; the conclusion infers that “x is an instance of B”. In Notation3350:

183{?A rdfs:subClassOf ?B. ?x rdf:type ?A} => {?x rdf:type ?B}.
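Applied to concrete data, the rule behaves as follows (an illustrative sketch; ex: is a hypothetical namespace):

# Given these two triples in the data...
ex:Philosopher rdfs:subClassOf ex:Person .
ex:Aristotle   rdf:type        ex:Philosopher .

# ...the rule's condition matches with ?A = ex:Philosopher,
# ?B = ex:Person and ?x = ex:Aristotle, and the reasoner derives:
ex:Aristotle   rdf:type        ex:Person .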

184In the SW, there are various inference formats for rules: some are standardized, such as the Rule Interchange Format (RIF), while others, like Notation3 (N3), are more expressive and powerful but not yet standardized351. N3 extends RDF and supports declarative programming with features such as rules (using the implication symbol), graph quoting (with curly brackets), graph description, and various built-ins. Rule-based reasoning in Notation3 Logic (N3Logic)352 is more performant and efficient than reasoning based on Description Logics. In N3, it is even possible to write and apply rules that produce new rules as their consequences (known as rule-producing rules).

185While N3Logic is highly powerful, it also has certain limitations. Its semantics is only informally defined, leaving room for interpretation and leading to inconsistencies in implementations across different reasoners. The discrepancies primarily arise from implicit universal quantification in nested formulas. In N3, bound variables can be used without explicitly stating their universal or existential quantifiers, as these are implicitly assumed. However, the meaning of such variables is not precisely defined, resulting in differing interpretations.

186Reasoning capabilities are provided by reasoners (or reasoning engines) such as the EYE Reasoner353, a theorem prover that can be extended with domain-specific rules. These reasoners can generate proofs, compute the deductive closure of a graph (the set of all triples that can be logically derived from it), and return the results of queries.

187Machine reasoning has a wide range of applications and purposes. It can be used to check the consistency and quality of data as well as the validity of inferences. It allows for highly precise data querying and enables the analysis of complex datasets. Reasoners can compute subsumption relationships between concepts (verifying whether one expression always denotes a subset of the objects denoted by another expression) and determine whether certain assertions (inclusion or instance assertions) are logically implied by a knowledge base. Furthermore, they can infer new data from existing data, deriving implicit knowledge and drawing new conclusions. Machine reasoning facilitates the implementation of any kind of computation or data processing. Data transformation processes, such as ETL (extract-transform-load) operations, are effectively forms of inference: they involve automatically transforming strings of symbols (data representations) into other, ideally equivalent, strings by applying a set of logical formal rules. The power of machine reasoning lies in its expressiveness and flexibility, allowing different rule sets to be applied in parallel and simultaneously, enabling highly sophisticated and dynamic reasoning capabilities.

The Rest of the Stack: Logic, Proof and Trust

  • 354 Cf. Berners-Lee, Tim, Dan Connolly, Lalana Kagal, Yosi Scharf & Jim Hendler (2008), “N3Logic: A l (...)

188In the SW layer cake, above the layers for querying, ontologies, and rules, lies the layer of Unifying Logic, a logical framework that should connect the other layers, enabling interoperability between them. The Unifying Logic aims to semantically combine Description Logics and rule-based reasoning into a cohesive logic system to address significant challenges, such as ensuring decidability and reconciling the open world and closed world assumptions. A potential candidate for this Unifying Logic is Notation3 Logic354.

189The proof layer provides formal proofs for the derivations performed during reasoning. For instance, N3 reasoners can generate proofs using the SWAP proof vocabulary355. Together, the layers of proof and cryptography form the foundation of the trust layer.

Issues and Further Developments

  • 356 Cf. Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergen (...)
  • 357 “[The Semantic Web] is not merely another data model, but also includes reflections on semiotics, (...)

190The SW is not without its criticisms, particularly regarding its complexity, as well as issues of a logical and formal nature356. Additionally, it faces concrete and practical limitations. The SW remains a work in progress, with certain components not yet fully formalized or definitively standardized. Addressing the remaining challenges requires concerted efforts and close collaboration among specialists from diverse disciplines, reflecting the multifaceted nature of the problems to be solved357.

  • 358 “But the question what ontology actually to adopt still stands open, and the obvious counsel is t (...)

191The first challenge lies in the undeniable complexity of the entire SW framework. This complexity makes it difficult, on the one hand, to attract humanist scholars to integrate the SW into their research and, on the other, to represent knowledge with the formal characteristics and rules the SW requires. Concerns about usability and applicability are widespread. Attaching further layers of information to triples and graphs, such as provenance or certainty, is neither straightforward nor simple, yet it is particularly important in the humanities. There is also a notable lack of user-friendly interfaces and tools. Ontologies, especially highly generic ones, are not always easy to read or interpret, and striking the right balance between generic and discipline- or project-specific solutions (between consensus and expressivity) is a delicate task, further complicated by the divergence of existing solutions358. It is important to recognize that RDF data and OWL ontologies are primarily designed for machine processing rather than human consumption. Nevertheless, this does not mean we can disregard usability concerns; efforts should be made to include shortcuts and other accommodations for human convenience without compromising the machine-oriented design.

  • 359 as developers strive to provide the structure and organization beyond the just linking of data, (...)

192Alongside usability, there is also the critical issue of misuse. How can we ensure that ontologies are constructed and combined in a consistent manner, especially when they operate at different levels of abstraction and employ varying terminologies?359

  • 360 Cf. Harry Halpin et al. (2010), “When owl:sameAs isn’t the Same: An Analysis of Identity Links on (...)

193Other challenges stem from the limitations of the SW framework’s formal expressive power. Issues related to the semantic interpretation of quantification and negation, as well as the definition of identity and sameness, remain unresolved360. Additionally, there is a growing demand to support more powerful types of logic, such as fuzzy, modal, and non-monotonic logics, which would enable the representation of knowledge that is imprecise, uncertain, approximate, or subject to change.
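
The identity problem, in particular, is easy to reproduce. A hedged sketch with invented resources:

    @prefix : <http://example.org/#> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    # owl:sameAs asserts strict identity: every statement about one
    # resource now holds of the other. Linking a person to a document
    # *about* that person in this way is a category error, yet a
    # reasoner will dutifully propagate all properties across the link.
    :erasmus owl:sameAs :encyclopediaArticleOnErasmus .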

194These are far from trivial problems, but ongoing developments offer promising solutions. For instance, RDF-star361 extends the RDF graph model by allowing statements about statements, enabling the annotation of triples and graphs with additional information, such as provenance362. Similarly, RDF Surfaces (a sublanguage of N3) has been designed to implement classical first-order logic with negation in RDF, inspired by Pat Hayes’s vision and Charles Sanders Peirce’s existential graphs (a diagrammatic notation for logical expressions)363.
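
The RDF-star mechanism can be sketched briefly (Turtle-star syntax, with invented resources and an invented :certainty property): the quoted triple between << >> becomes the subject of further statements.

    @prefix : <http://example.org/#> .
    @prefix dcterms: <http://purl.org/dc/terms/> .
    @prefix prov: <http://www.w3.org/ns/prov#> .

    # Annotating a statement with its provenance and a certainty value.
    << :letter42 dcterms:creator :erasmus >>
        prov:wasDerivedFrom :edition1908 ;
        :certainty :disputed .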

195To address usability, the Linked Art community has proposed extending the LOD paradigm to Linked Open Usable Data364. Furthermore, new tools are emerging to enhance usability and applicability. Examples include LIFT365, a Python-based tool for transforming TEI XML editions into knowledge graphs; Geovistory366, a virtual research environment for humanities and social sciences; and LEAF-Writer367, a web-based semantic editor.

196In conclusion, much work remains to be done, and humanist scholars have a crucial role to play in advancing the SW’s potential and addressing its challenges.


Bibliographie

Aleksandrov, Alexandre D., Andreï N. Kolmogorov & Mikhaïl A. Lavrent’ev (1974), Le matematiche. Analisi, algebra e geometria analitica, Turin, Bollati Boringhieri, 2010.

Allemang, Dean & James Hendler (2011), Semantic Web for the Working Ontologist, Oxford, Elsevier LTD.

Arndt, Dörthe (2019), Notation3 as the unifying logic for the semantic web, PhD thesis in Technology and Engineering, Ghent University, Ghent. URL: https://biblio.ugent.be/publication/8634507 [consulted on 06/12/2024].

Bartha, Paul F. A. (2010), By Parallel Reasoning. The Construction and Evaluation of Analogical Arguments, Oxford, Oxford University Press.

Berners-Lee, Tim (1990), Proposal for a Hypertext Project. URL: https://cds.cern.ch/record/2639699/files/Proposal_Nov-1990.pdf [consulted on 06/12/2024].

Berners-Lee, Tim (1998a), “Semantic Web Road Map”, Design Issues. URL: https://www.w3.org/DesignIssues/Semantic.html [consulted on 06/12/2024].

Berners-Lee, Tim (1998b), “Using XML for Data”, Design Issues. URL: https://www.w3.org/DesignIssues/XML-Semantics.html [consulted on 06/12/2024].

Berners-Lee, Tim (1999a), “The Semantic Web as a language of logic”, Design Issues. URL: https://www.w3.org/DesignIssues/Logic.html [consulted on 06/12/2024].

Berners-Lee, Tim (1999b), “The Semantic Toolbox”, Design Issues. URL: https://www.w3.org/DesignIssues/Toolbox.html [consulted on 06/12/2024].

Berners-Lee, Tim (2006), “Linked Data”, Design Issues. URL: https://www.w3.org/DesignIssues/LinkedData.html [consulted on 06/12/2024].

Berners-Lee, Tim, James Hendler & Ora Lassila (2001), “The Semantic Web: A new form of web content that is meaningful to computers will unleash a revolution of new possibilities”, Scientific American, 284 (5), p. 34-43. URL: http://www.sciam.com/article.cfm?id=the-semantic-web [consulted on 06/12/2024].

Berners-Lee, Tim, Dan Connolly, Lalana Kagal, Yosi Scharf & Jim Hendler (2008), “N3Logic: A logical framework for the World Wide Web”, Theory and Practice of Logic Programming, 8 (3), p. 249-269. DOI: https://doi.org/10.1017/S1471068407003213 [consulted on 06/12/2024].

Bernstein, Abraham, James Hendler & Natalya Noy (2016), “A New Look at the Semantic Web”, Communications of the ACM, 59 (9), p. 1-5. DOI: https://doi.org/10.1145/2890489 [consulted on 06/12/2024].

Bocheński, Józef M. (1959), A Precis of Mathematical Logic, Dordrecht, D. Reidel.

Bod, Rens (2013), A New History of the Humanities: The Search for Principles and Patterns from Antiquity to the Present, Oxford, Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199665211.001.0001 [consulted on 06/12/2024].

Boole, George (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, Cambridge, Cambridge University Press, 2009.

Boot, Peter & Marijn Koolen (2021), “Connecting TEI Content Into an Ontology of the Editorial Domain”, in Elena Spadini, Francesca Tomasi & Georg Vogeler (eds.), Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing, Band, Schriften des Instituts für Dokumentologie und Editorik, p. 9-29.

Borst, Willem N. (1997), Construction of Engineering Ontologies, PhD thesis in Engineering, Institute for Telematica and Information Technology/University of Twente, Enschede. URL: https://ris.utwente.nl/ws/portalfiles/portal/6036649/t0000004.pdf [consulted on 06/12/2024].

Bridle, James (2022), Ways of Being. Beyond Human Intelligence, Dublin, Allen Lane.

Burkov, Andriy (2019), The Hundred-Page Machine Learning Book, Quebec, Themlbook.

Buzzetti, Dino (2011), “Oltre il rappresentare: le potenzialità del markup”, in Lorenzo Perilli & Domenico Fiormonte (eds.), La macchina nel tempo. Studi di informatica umanistica in onore di Tito Orlandi, Firenze, Le Lettere, p. 39-62.

Buzzetti, Dino (2012), “Cos’è, oggi, l’informatica umanistica? L’impatto della tecnologia”, in Fabio Ciotti & Gianfranco Crupi (eds.), Dall’Informatica umanistica alle culture digitali. In memoria di Giuseppe Gigliozzi, Roma, Quaderni DigiLab/Università Sapienza di Roma, p. 103‑132.

Cellucci, Carlo (2013), Rethinking Logic: Logic in Relation to Mathematics, Evolution, and Method, New York, Springer.

Ciotti, Fabio (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, in Fabio Ciotti & Gianfranco Crupi (eds.), Dall’Informatica umanistica alle culture digitali. In memoria di Giuseppe Gigliozzi, Roma, Quaderni DigiLab/Università Sapienza di Roma, p. 243‑276.

Copeland, Jack B. (ed.) (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life: Plus The Secrets of Enigma, Oxford, Oxford University Press. DOI: https://doi.org/10.1093/oso/9780198250791.001.0001 [consulted on 06/12/2024].

Cools, Hans & Roberta Padlina, “Formal Semantics for Scholarly Editions”, in Elena Spadini, Francesca Tomasi & Georg Vogeler (eds.), Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing, Band, Schriften des Instituts für Dokumentologie und Editorik, p. 97-124.

Crawford, Kate (2021), Atlas of AI. Power, Politics, and the Planetary Costs of Artificial Intelligence, New Haven/London, Yale University Press.

Crupi, Gianfranco (2012), “Universo bibliografico e semantic web”, in Fabio Ciotti & Gianfranco Crupi (eds.), Dall’Informatica umanistica alle culture digitali. In memoria di Giuseppe Gigliozzi, Roma, Quaderni DigiLab/Università Sapienza di Roma, p. 277‑306.

Davis, Martin (ed.) (1965), The Undecidable, New York, Raven.

Davis, Martin (2018), The Universal Computer. The Road from Leibniz to Turing, Abingdon‑on‑Thames, CRC Press.

Dehaene, Stanislas (2011), The Number Sense. How the Mind Creates Mathematics, Oxford, Oxford University Press.

Dreyfus, Hubert L. (1992), What Computers Still Can’t Do: A Critique of Artificial Reason, Cambridge, MIT Press.

Dyson, George B. (2012), Turing’s Cathedral. The Origins of the Digital Universe, London, Penguin Group.

Eco, Umberto (1975), Trattato di semiotica generale, Milan, Bompiani.

Ferrara, Silvia (2021), Il salto. Segni, figure, parole: viaggio all’origine dell’immaginazione, Milan, Feltrinelli.

Ferrarini, Edoardo (2007), “La trascrizione dei testimoni manoscritti: metodi di filologia computazionale”, in Arianna Ciula & Francesco Stella (eds.), Digital philology and medieval texts, Pisa, Pacini, p. 103-120.

Feynman, Richard P. (1996), Lectures on Computation, ed. by Anthony J.G. Hey & Robin W. Allen, Boston, Addison-Wesley Publishing Company.

Frege, Gottlob (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, in Jean van Heijenoort (ed.), Frege and Gödel. Two Fundamental Texts in Mathematical Logic, Cambridge, Harvard University Press, p. 1-82, 1970.

Gödel, Kurt (1931), “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I”, Monatshefte für Mathematik und Physik, 38 (1931), p. 173-198.

Gödel, Kurt (2003), Collected Works, Volume V, ed. by Solomon Feferman, John W. Dawson Jr, Warren Goldfarb, Charles Parsons & Wilfried Sieg, Oxford, Clarendon Press.

Gödel, Kurt (1995), “[On undecidable sentences] (*1931?)”, in Kurt Gödel, Collected Works, Volume III, ed. by Solomon Feferman, John W. Dawson Jr, Warren Goldfarb, Charles Parsons & Robert M. Solovay, Oxford, Oxford University Press, p. 30‑35.

Gödel, Kurt (1986), “Postscriptum (3 June 1964)”, in Kurt Gödel, Collected Works, Volume I, ed. by Solomon Feferman, John W. Dawson Jr, Stephen C. Kleene, Gregory H. Moore, Robert M. Solovay & Jean van Heijenoort, New York, Oxford University Press, p. 369-371.

Godfrey-Smith, Peter (2016), Other Minds. The Octopus and the Evolution of Intelligent Life, London, Collins.

Goldstine, Herman H. (1972), The Computer from Pascal to von Neumann, Princeton, Princeton University Press.

Gruber, Thomas R. (1994), “Towards Principles for the Design of Ontologies Used for Knowledge Sharing”, International Journal of Human-Computer Studies, 43, p. 907-928.

Halpin, Harry, Ivan Herman & Patrick J. Hayes (2010), “When owl:sameAs isn’t the Same: An Analysis of Identity Links on the Semantic Web”, in David Wood, Stefan Decker & Ivan Herman (eds.), Proceedings of W3C Workshop: RDF Next Steps 2010. URL: https://www.w3.org/2009/12/rdf-ws/papers/ws21 [consulted on 06/12/2024].

Hamming, Richard W. (1986), Coding and Information Theory, Upper Saddle River, Prentice‑Hall.

Heath, Tom & Christian Bizer (2011), Linked Data: Evolving the Web into a Global Data Space, San Francisco, Morgan & Claypool. DOI: https://doi.org/10.2200/S00334ED1V01Y201102WBE001 [consulted on 06/12/2024].

Hilbert, David (2005), “Mathematical Problems (1900b)”, in William B. Ewald, From Kant to Hilbert: A Source Book in the Foundations of Mathematics, Oxford, Oxford University Press, p. 1096-1105.

Hochstenbach, Patrick, Mathijs van Noort, Dörthe Arndt, Rebekka Martens, Jos De Roo, Ruben Verborgh, Pieter Bonte & Femke Ongenae (2024), “RDF Surfaces: Enabling Classical Negation on the Semantic Web”, arXiv. DOI: https://doi.org/10.48550/arXiv.2406.10659 [consulted on 06/12/2024].

Hofstadter, Douglas R. (1999), Gödel, Escher, Bach: An Eternal Golden Braid, New York, Basic Books.

Ippoliti, Emiliano (2023), Guida critica alle intelligenze artificiali. Potenzialità e limiti in una prospettiva filosofica, Milan, Egea.

Kahneman, Daniel (2012), Thinking, Fast and Slow, London, Penguin Book.

Leibniz, Gottfried W. (1666), Dissertatio de arte combinatoria, Berlin, De Gruyter, 1923.

Lurija, Aleksandr (1976), “Romantic science: Unimagined portraits, (Moscow 1976)”, in Aleksandr Lurija, Viaggio nella mente di un uomo che non dimenticava nulla, Roma, Armando, p. 108-116.

Meyer, Bertrand (1991), Introduction to the Theory of Programming Languages, London, Prentice Hall.

Mitchell, Melanie (2019), Artificial Intelligence: A Guide for Thinking Humans, London, Pelican Books.

Monticelli, Roberta (de) (2006), Esercizi di pensiero per apprendisti filosofi, Turin, Bollati Boringhieri.

Mordenti, Raul (2001), Informatica e critica dei testi, Roma, Bulzoni.

Moretti, Franco (2003), “Graphs, Maps, Trees: Abstract Models for Literary History”, New Left Review, 24. URL: https://newleftreview.org/issues/ii24/articles/franco-moretti-graphs-maps-trees-1 [consulted on 06/12/2024].

Müller, Gabriel & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, in Maarten J.F.M. Hoenen (ed.), Past and Future: Medieval Studies Today, Turnhout, TEMA, p. 49-79.

Odifreddi, Piergiorgio (2005), Penna, pennello e bacchetta. Le tre invidie del matematico, Bari, Laterza.

Odifreddi, Piergiorgio (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, Milan, Tascabili degli Editori Associati.

Owens, Trevor (2018), The Theory and Craft of Digital Preservation, Baltimore, Johns Hopkins University Press.

Peano, Giuseppe (1894), “Notations de Logique Mathématique. Introduction au Formulaire de mathématiques”, in Giuseppe Peano, Opere scelte, Volume II, Rome, Cremonese, 1958, p. 123‑176.

Peano, Giuseppe (1896), “Introduction au tome II du « Formulaire de mathématiques »”, in Giuseppe Peano, Opere scelte, Volume II, Rome, Cremonese, 1958, p. 196-200.

Petzold, Charles (2000), Code. The Hidden Language of Computer Hardware and Software, Washington, Microsoft Press.

Quine, Willard V. O. (1963), From a Logical Point of View, New York, Harper & Row.

Robinson, Peter (2013), “Towards a Theory of Digital Editions”, Variants: The Journal of the European Society for Textual Scholarship, 10, p. 105-131.

Rota, Gian-Carlo (1986), “In Memoriam of Stan Ulam. The Barrier of Meaning”, Physica, 22D, p. 1-3.

Sahle, Patrick (2016), “What is a Scholarly Digital Edition”, in Matthew Driscoll & Elena Pierazzo (eds.), Digital Scholarly Editing. Theories and Practices, Cambridge, OpenBook Publishers, p. 19-39.

Shannon, Claude E. (1948), “A Mathematical Theory of Communication”, The Bell System Technical Journal, 27 (3), p. 379-423. DOI: https://doi.org/10.1002/j.1538-7305.1948.tb01338.x [consulted on 06/12/2024].

Silver, David, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel & Demis Hassabis (2017), “Mastering the game of Go without human knowledge”, Nature, 550, p. 354-359.

Singh, Simon (1999), The Code Book, The Science of Secrecy from Ancient Egypt to Quantum Cryptography, New York, Anchor Books.

Smullyan, Raymond M. (1961), Theory of Formal Systems, Princeton, Princeton University Press.

Snapper, Ernst (1979), “The Three Crises in Mathematics: Logicism, Intuitionism and Formalism”, Mathematics Magazine, 52 (4), p. 207-216. DOI: https://doi.org/10.2307/2689412 [consulted on 06/12/2024].

Sowa, John F. (2000), “Ontology, Metadata, and Semiotics”, in Bernhard Ganter & Guy W. Mineau (eds.), Conceptual Structures: Logical, Linguistic, and Computational Issues, Berlin, Springer, p. 55-81. DOI: https://doi.org/10.1007/10722280_5 [consulted on 06/12/2024].

Stone, Deborah (2020), Counting: How We Use Numbers to Decide What Matters, New York, Liveright Publishing Corporation.

Strogatz, Steven (2019), Infinite Powers. How Calculus Reveals the Secrets of the Universe, New York, Eamon Dolan Book.

Tomaszuk, Dominik (2016), “Inference rules for RDF(S) and OWL in N3Logic”, arXiv. DOI: https://doi.org/10.48550/arXiv.1601.02650 [consulted on 06/12/2024].

Verborgh, Ruben & Jos De Roo (2015), “Drawing Conclusions from Linked Data on the Web. The EYE Reasoner”, IEEE Software, May/June (3). URL: https://josd.github.io/Papers/EYE.pdf [consulted on 06/12/2024].

Vogeler, Georg (2021), “« Standing-off Tree and Graphs »: On the Affordance of Technologies for the Assertive Edition”, in Elena Spadini, Francesca Tomasi & Georg Vogeler (eds.), Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing, Band, Schriften des Instituts für Dokumentologie und Editorik, p. 73-94.

von Neumann, John (1945), First Draft of a Report on the EDVAC, Contract No. W-670-ORD-4926 between the U.S. Army Ordnance Department and the University of Pennsylvania, Moore School of Electrical Engineering, University of Pennsylvania, June 30.

von Neumann, John (1951), “Various Techniques Used in Connection with Random Digits”, Journal of Research of the National Bureau of Standards, 3 (36-38), p. 768-770. URL: https://mcnp.lanl.gov/pdf_files/InBook_Computing_1961_Neumann_JohnVonNeumannCollectedWorks_VariousTechniquesUsedinConnectionwithRandomDigits.pdf [consulted on 06/12/2024].

von Neumann, John (2012), The Computer & the Brain, New Haven/London, Yale University Press.

Wettlaufer, Jörg (2018), “Der nächste Schritt? Semantic Web und digitale Editionen”, in Roland S. Kamzelak & Tim Steyer (eds.), Digital Metamorphose: Digital Humanities und Editionswissenschaft, Wolfenbüttel, Zeitschrift für digitale Geisteswissenschaften. DOI: https://doi.org/10.17175/sb002_007 [consulted on 06/12/2024].

Wiener, Norbert (1985), Cybernetics: or Control and Communication in the Animal and the Machine, Cambridge, M.I.T. Press.

Yu, Liyang (2011), Developer’s Guide to the Semantic Web, Berlin, Springer.

Zellini, Paolo (2022), Discreto e continuo. Storia di un errore, Milan, Adelphi.


Notes

1 For both quotations see George B. Dyson, Turing's Cathedral: The Origins of the Digital Universe, London, Penguin Group, p. 87 and 64 respectively.

2 Alan Turing, “On Computable Numbers with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society, 42 (1936-7), p. 230-265; Alan Turing (1951), “Can Digital Computers Think?”, in Jack B. Copeland (ed) (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life: Plus The Secrets of Enigma, Oxford, Oxford University Press, p. 482-486.

3 John von Neumann (1945), First Draft of a Report on the EDVAC, Contract No. W-670-ORD-4926 between the U.S. Army Ordnance Department and the University of Pennsylvania, Moore School of Electrical Engineering, University of Pennsylvania, June 30.

4 “It turns out that the recent success of deep learning is due less to new breakthrough in AI than to the availability of huge amounts of data (thank you, internet!) and very fast parallel computer hardware”, Melanie Mitchell (2019), Artificial Intelligence: A Guide for Thinking Humans, London, Pelican Books, p. 101, “The address matrix that began, in 1951, with a single 40-floor hotel, with 1,024 rooms on every floor, has now expanded to billions of 64-floor hotels with billions of rooms, yet the contents are still addressed by numerical coordinates that have to be specified exactly, or everything comes to a halt”, George B. Dyson (2012), Turing’s Cathedral. The Origins of the Digital Universe, London, Penguin Group, p. 309, “Yet we still face the same questions that were asked in 1953. Turing’s question was what it would take for machines to begin to think. Von Neumann's question was what it would take for machines to begin to reproduce”, George B. Dyson (2012), Turing’s Cathedral, p. 10.

5 “The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 486, “All of a sudden the idiosyncrasies, the weaknesses and powers, the vagaries and vicissitudes of human thought were hinted at by the newfound ability to experiment with alien, yet hand-tailored forms of thought – or approximations of thought. As a result, we have acquired, in the last twenty years or so, a new kind of perspective on what thought is, and what it is not”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach: An Eternal Golden Braid, New York, Basic Books, p. 337.

6 This is why Turing replaced the original question of whether machines can think ‒ “too meaningless to deserve discussion”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 449 ‒ with an operational criterion: the famous imitation game. To pass this Turing test, a machine must be able to: “answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine” (Jack B. Copeland (ed) (2004), The Essential Turing, p. 484). “as soon as one can see the cause and effect working themselves out in the brain, one regards it as not being thinking, but a sort of unimaginative donkey-work. From this point of view one might be tempted to define thinking as consisting of « those mental processes that we don't understand ». If this is right then to make a thinking machine is to make one which does interesting things without our really understanding quite how it is done” Alan Turing, “Can Automatic Calculating Machines Be Said to Think?”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 500, “The paradox of artificial intelligence is that any system simple enough to be understandable is not complicated enough to behave intelligently, and any system complicated enough to behave intelligently is not simple enough to understand”, George B. Dyson (2012), Turing’s Cathedral, p. 263, “There is a related « Theorem » about progress in AI: once some mental function is programmed, people soon cease to consider it as an essential ingredient of « real thinking ». The ineluctable core of intelligence is always in that next thing which hasn't yet been programmed. [...] « AI is whatever hasn't been done yet. »” Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 600, “As Claude Shannon wrote presciently in 1950, a machine that can surpass humans at chess « will force us either to admit the possibility of mechanized thinking or to further restrict our concept of thinking. » The latter happened”, Melanie Mitchell (2019), Artificial Intelligence, p. 196.

7 “Beyond the narrow framing put forward by both technology companies and the doctrine of human uniqueness (the idea that, among all beings, human intelligence is singular and pre-eminent) exists a whole realm of other ways of thinking and doing intelligence”, James Bridle (2022), Ways of Being. Beyond Human Intelligence, Dublin, Allen Lane, p. 10, “Systems of intelligent, computational ability – mycorrhizal networks, slime moulds and ant colonies, to name a few – have always existed in the natural world, but we had to recreate them in our labs and workshops before we were capable of recognizing them elsewhere. This is technological ecology in practice. We need the mental models provided by our technology, the words we make up for its concepts and metaphors, in order to describe and properly understand that analogous processes are already at play in the more-than-human-world”, James Bridle (2022), Ways of Being, p. 194. The octopus and Physarum polycephalum (aka “Blob”) are examples of animals and organisms that display remarkable intelligence. Cf. Peter Godfrey-Smith (2016), Other Minds. The Octopus and the Evolution of Intelligent Life, London, Collins, and also “logic [as a problem solving capacity and means of discovery] is not peculiar to human beings. Biological evolution has endowed not only human beings, but virtually all organisms, with a natural logic through which they manage to survive”, Carlo Cellucci (2013), Rethinking Logic: Logic in Relation to Mathematics, Evolution, and Method, New York, Springer, p. 365.

8 “I argue that AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures”, Kate Crawford (2021), Atlas of AI. Power, Politics, and the Planetary Costs of Artificial Intelligence, New Haven/London, Yale University Press, p. 8, “One of the less recognized facts of artificial intelligence is how many underpaid workers are required to help build, maintain, and test AI systems. This unseen labor takes many forms – supply-chain work, on-demand crowdwork, and traditional service-industry jobs. […] Sometimes workers are directly asked to pretend to be an AI system”, Kate Crawford (2021), Atlas of AI, p. 63-65, “As the social anthropologist F. G. Bailey observed, the technique of « obscuring by mystification » is often employed in public settings to argue for a phenomenon’s inevitability. We are told to focus on the innovative nature of the method rather than on what is primary: the purpose of the thing itself. Above all, enchanted determinism obscures power and closes off informed public discussion, critical scrutiny, or outright rejection”, Kate Crawford (2021), Atlas of AI, p. 214.

9 It is important to acknowledge the existence of alternative open-source and transparent solutions, such as those shared on the Hugging Face platform (https://huggingface.co/), which lie beyond the most well-known and mainstream options discussed here. However, these solutions tend to be less immediately accessible, and their overall impact is comparatively limited.

10 “That’s what happens, it would seem, when the development of AI is led primarily by venture-funded technology companies. The definition of intelligence which is framed, endorsed and ultimately constructed in machines is a profit-seeking, extractive one”, James Bridle (2022), Ways of Being, p. 9-10, “Their learned responses are that of a corporate intelligence, evolving within the arid, airless ecology of neoliberal capitalism, tech company boardrooms and ever-increasing financial and social disparities. If we wish them to evolve differently, we will need to address and alter this ecology”, James Bridle (2022), Ways of Being, p. 67, “The AI industry has fostered a kind of ruthless pragmatism, with minimal context, caution, or consent-driven data practices while promoting the idea that the mass harvesting of data is necessary and justified for creating systems of profitable computational « intelligence »”, Kate Crawford (2021), Atlas of AI, p. 95.

11 AlphaGo Zero, for example, played 4.9 million games of Go against itself in just 72 hours. Cf. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel & Demis Hassabis (2017), “Mastering the game of Go without human knowledge”, Nature, 550, p. 354-359, “AI game engines are designed to play millions of games, run statistical analyses to optimize for winning outcomes, and then play millions more. These programs produce surprising moves uncommon in human games for a straightforward reason: they can play and analyze far more games at a far greater speed than any human can. This is not magic; it is statistical analysis at scale”, Kate Crawford (2021), Atlas of AI, p. 215, “« We are not scanning all those books to be read by people », an engineer [of Google] revealed to me after lunch. « We are scanning them to be read by an AI. » The AI that is reading all these books is also reading everything else – including most of the code written by human programmers over the past sixty years”, George B. Dyson (2012), Turing’s Cathedral, p. 312-313.

12 “There are indications however that it is possible to make the machine display intelligence at the risk of its making occasional serious mistakes”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 374, “if a machine is expected to be infallible, it cannot also be intelligent”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 394.

13 “Let’s start by telling the truth: machines don’t learn. What a typical « learning machine » does, is finding a mathematical formula, which, when applied to a collection of inputs (called « training data »), produces the desired outputs. This mathematical formula also generates the correct outputs for most other inputs (distinct from the training data) on the condition that those inputs come from the same or a similar statistical distribution as the one the training data was drawn from. Why isn’t that learning? Because if you slightly distort the inputs, the output is very likely to become completely wrong”, Andriy Burkov (2019), The Hundred-Page Machine Learning Book, Quebec, Themlbook, p. xvii, “This lack of understanding is clearly revealed by the un-humanlike errors these systems can make; by their difficulties with abstracting and transferring what they have learned; by their lack of commonsense knowledge; and by their vulnerability to adversarial attacks. The barrier of meaning between AI and human-level intelligence still stands today”, Melanie Mitchell (2019), Artificial Intelligence, p. 307, Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali. Potenzialità e limiti in una prospettiva filosofica, Milan, Egea, p. 56.

14 Cf. Andriy Burkov (2019), The Hundred-Page Machine Learning Book, p. 131.

15 “The most efficient search of an unmapped territory takes the form of a random walk”, George B. Dyson (2012), Turing’s Cathedral, p. 198. Conventional computers cannot generate true randomness; it must be supplied externally through methods such as cards, dice, a decaying nucleus, or random number tables. Cf. “Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number – there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method”, John von Neumann (1951), “Various Techniques Used in Connection with Random Digits”, Journal of Research of the National Bureau of Standards, 3 (36-38), p. 768-770. URL: https://mcnp.lanl.gov/pdf_files/InBook_Computing_1961_Neumann_JohnVonNeumannCollectedWorks_VariousTechniquesUsedinConnectionwithRandomDigits.pdf [consulted on 06/12/2024], p. 768.

16 “It is probably wise to include a random element in a learning machine [...]. A random element is rather useful when we are searching for a solution of some problem […] Since there is probably a very large number of satisfactory solutions the random method seems to be better than the systematic”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 463, “it is certain that a machine which is to imitate a brain must appear to behave as if it had free will, and it may well be asked how this is to be achieved. One possibility is to make its behavior depend on something like a roulette wheel or a supply of radium. The behavior of these may perhaps be predictable, but if so, we do not know how to do the prediction”, Alan Turing, “Can Digital Computers Think?”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 484, “[Turing] suggested incorporating a random-number generator to create what he referred to as a « learning machine, » granting the computer the ability to take a guess and then either reinforce or discard the consequent results. If guesses were applied to modifications in the computer’s own instructions, a machine could then learn to teach itself”, George B. Dyson (2012), Turing’s Cathedral, p. 261.

17 “What is this Monte Carlo method? Very roughly, the idea is to replace a given precise mathematical procedure by one involving random processes. […] By doing this probabilistic experiment enough times we can approach the correct volume with arbitrarily high probability”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, Princeton, Princeton University Press, p. 296-297, “Monte Carlo originated as a form of emergency first aid, in answer to the question: What to do until the mathematician arrives? « The idea was to try out thousands of such possibilities and, at each stage, to select by chance, by means of a “random number” with suitable probability, the fate or kind of event, to follow it in a line, so to speak, just considering all branches »”, George B. Dyson (2012), Turing’s Cathedral, p. 191, “Today's search engines, long descended from their ENIAC-era ancestors, still bear the imprint of their Monte Carlo origins: random search paths being accounted for, statistically, to accumulate increasingly accurate results. The genius of Monte Carlo – and its search-engine descendants – lies in the ability to extract meaningful solutions, in the face of overwhelming information, by recognizing that meaning resides less in the data at the end points and more in the intervening path”; George B. Dyson (2012), Turing’s Cathedral, p. 198‑199, “In this way, rather than trying to represent and solve a whole problem at once – to capture and dominate it – the Monte Carlo method seeks to actively explore and draw conclusions about a problem, a very different, and more naturalistic, approach”, James Bridle (2022), Ways of Being, p. 225, “it uses its roll-outs to collect statistics on how many times a given move actually leads to a win or loss. The more roll-outs the algorithm runs, the better its statistics”, Melanie Mitchell (2019), Artificial Intelligence, p. 205-206.

18 “Training sets raise complex questions from ethical, methodological, and epistemological perspectives”, Kate Crawford (2021), Atlas of AI, p. 119, “To make such predictions, machine learning systems are seeking to classify entirely relational things into fixed categories and are rightly critiqued as scientifically and ethically problematic”, Kate Crawford (2021), Atlas of AI, p. 146, “This epistemological flattening of complexity into clean signal for the purposes of prediction is now a central logic of machine learning”, Kate Crawford (2021), Atlas of AI, p. 146, “Bias is a symptom of a deeper affliction: a far-ranging and centralizing normative logic that is used to determine how the world should be seen and evaluated”, Kate Crawford (2021), Atlas of AI, p. 221, “A predictive algorithm is a Hall of Mirrors. A mirror reflects what’s in front of it. If the people in front of it behave in biased ways, the mirror will « behave » in the same biased ways. The mirror doesn’t know that it’s arresting more blacks than whites under the same circumstances or hiring more men than women. It knows only what the police do or what employers do. It assumes they know what makes them successful, so it tells them to keep on doing more of the same”, Deborah Stone (2020), Counting: How We Use Numbers to Decide What Matters, New York, Liveright Publishing Corporation, p. 58.

19 “As Vannevar Bush foresaw, machines have enormous appetites. But how and what they are fed has an enormous impact on how they will interpret the world, and the priorities of their masters will always shape how that vision is monetized”, Kate Crawford (2021), Atlas of AI, p. 121, “Intuitively, the larger is the set of training examples, the more unlikely that the new examples will be dissimilar to (and lie on the plot far from) the examples used for training”, Andriy Burkov (2019), The Hundred-Page Machine Learning Book, p. 7. Essentially, machine and deep learning algorithms extract structures from input data by identifying statistical regularities or correlations and encoding these structures in network parameters. However, they can easily mistake noise in the data for a signal, producing numerous correlations that do not convey any “hidden” information or meaning. Moreover, Calude and Longo have shown that as the amount of data increases, so does the prevalence of spurious correlations. This is a consequence of the under‑determination of similarity (anything can be considered to be similar to anything else in a certain respect). Cf. Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali, p. 75-76, “Ora un'ultima ambiguità leagata all'idea di somiglianza sta nel fatto che – a livello di fenomeni molto elementari come alto|basso, destra|sinistra o lungo|largo – ogni cosa rassomiglia a qualsiasi altra cosa. Il che significa che vi sono certe caratteristiche formali così generiche da appartenere a quasi tutti i fenomeni e che possono essere considerate iconiche di ogni altro fenomeno”, Umberto Eco (1975), Trattato di semiotica generale, Milan, Bompiani, p. 278.

20 “Anything and everything online was primed to become a training set for AI”, Kate Crawford (2021), Atlas of AI, p. 106.

21 “It's no secret: deep learning requires big data. Big in the sense of the million-plus labeled training images in ImageNet. Where does all this data come from? The answer is, of course, you – and probably everyone you know. [...] Have you ever identified a picture in order to prove to a website that you're not a robot? Your identification might have helped Google tag an image for use in training its image search system”, Melanie Mitchell (2019), Artificial Intelligence, p. 97.

22 George B. Dyson (2012), Turing’s Cathedral, p. 312, “One approach is to start with the questions, and search for the answers. Another approach is to start with the answers and search for the questions. Because it is easier (and more economical) to collect answers (which are already encoded) than to ask questions (which have to be encoded), the first step would be to crawl through the matrix and collect the meaningful strings. [...] Human beings and machines have already done much of the work, filing away meaningfully encoded strings since the beginning of the digital universe and, since the dawn of the Internet, giving them unique numerical addresses. [...] The result is an indexed list (within your machine's « state of mind, » to use Turing's language) of a significant fraction of the meaningful answers in the Digital Universe. With two huge deficiencies: you don't have any questions – you have only answers – and you have no clue where the meaning is. Where do you go to get the questions, and how do you find where the meaning is? If, as Turing imagined, you have the mind of a child, you ask people, you guess, and you learn from your mistakes. You invite people to submit questions – keeping track of all submissions – and, starting with simple template-matching, suggest possible answers from your indexed list. People click more frequently on the results that provide more meaningful answers, and with simple bookkeeping, meaning, and the map between questions and answers, begins to accumulate over time. Are we searching the search engine, or are the search engines searching us?”, George B. Dyson (2012), Turing’s Cathedral, p. 263-264.

23 “We are training Google’s image recognition algorithms for free. Again, the myth of AI as affordable and efficient depends on layers of exploitation, including the extraction of mass unpaid labor to fine-tune the AI systems of the richest companies on earth. Contemporary forms of artificial intelligence are neither artificial nor intelligent. We can – and should – speak instead of the hard physical labor of mine workers, the repetitive factory labor on the assembly line, the cybernetic labor in the cognitive sweatshops of outsourced programmers, the poorly paid crowdsourced labor of Mechanical Turk workers, and the unpaid immaterial work of everyday users”, Kate Crawford (2021), Atlas of AI, p. 69.

24 “It has become so normalized across the industry to take and use whatever is available that few stop to question the underlying politics”, Kate Crawford (2021), Atlas of AI, p. 93, “There has been a general failure to address the ways in which the instruments of knowledge in AI reflect and serve the incentives of a wider extractive economy”, Kate Crawford (2021), Atlas of AI, p. 135, “Questions about who gets to do that rewriting of reality, which decisions are made along the way, and who gains from it, are all too often missed and forgotten in the excitement”, James Bridle (2022), Ways of Being, p. 21-22, “optimism is highly valued, socially and in the market; people and firms reward the providers of dangerously misleading information more than they reward truth tellers. One of the lessons of the financial crisis that led to the Great Recession is that there are periods in which competition, among experts and among organizations, creates powerful forces that favor a collective blindness to risk and uncertainty”, Daniel Kahneman (2012), Thinking, Fast and Slow, London, Penguin Book, p. 262.

25 “We become more like the machines we envisage, in ways which, in the present, have profoundly negative effects on our relationships with one another and with the wider world”, James Bridle (2022), Ways of Being, p. 10.

26 AI encompasses ML, which, in turn, includes artificial neural networks (ANNs). Deep learning is a variant of ANNs that incorporates multiple layers. The term “deep” refers to the number (depth) of layers in the neural network.

27 “Some people – generally mathematicians – promoted mathematical logic and deductive reasoning as the language of rational thought. Others championed inductive methods in which programs extract statistics from data and use probabilities to deal with uncertainty. Still others believed firmly in taking inspiration from biology and psychology to create brain-like programs”, Melanie Mitchell (2019), Artificial Intelligence, p. 7-8. “it's important to understand a philosophical split that occurred early in the AI research community: the split between so-called symbolic and subsymbolic AI. […] A symbolic AI program's knowledge consists of words or phrases (the « symbols »), typically understandable to a human, along with rules by which the program can combine and process these symbols in order to perform its assigned task”, Melanie Mitchell (2019), Artificial Intelligence, p. 7-8, “subsymbolic approach to AI took inspiration from neuroscience”, Melanie Mitchell (2019), Artificial Intelligence, p. 12.

28 Cf. Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali, p. 20.

29 George B. Dyson (2012), Turing’s Cathedral, p. 318, “As Marcus notes, while we humans attribute to the program a certain understanding of what we consider basic concepts [...] the program actually has no such concepts”, Melanie Mitchell (2019), Artificial Intelligence, p. 215, “these systems don't comprehend the meaning of what we ask them”, Melanie Mitchell (2019), Artificial Intelligence, p. 279, “We tend to anthropomorphize AI systems: we impute human qualities to them and end up overestimating the extent to which these systems can actually be fully trusted”, Melanie Mitchell (2019), Artificial Intelligence, p. 368.

30 “I expect that digital computing machines will eventually stimulate a considerable interest in symbolic logic and mathematical philosophy. The language in which one communicates with these machines, i.e. the language of instruction tables, forms a sort of symbolic logic. The machine interprets whatever it is told in a quite definite manner without any sense of humor or sense of proportion. Unless in communicating with it one says exactly what one means, trouble is bound to result. Actually one could communicate with these machines in any language provided it was an exact language, i.e. in principle one should be able to communicate in any symbolic logic, provided that the machine were given instruction tables which would enable it to interpret that logical system. This should mean that there will be much more practical scope for logical systems than there has been in the past”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 392.

31 James Bridle (2022), Ways of Being, p. 169. Cf. “The languages we have developed thus far to communicate with machines are themselves pidgins: simplified forms which are native to neither party. It requires effort on both sides to fit our ideas inside them; neither side expresses itself well, but each thinks itself superior. Linguists call this the “double illusion”: humans think they are speaking computer, computers think they are speaking human, and neither is very satisfied”, James Bridle (2022), Ways of Being, p. 169.

32 “it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g. a symbolic language. These orders are to be transmitted through the « unemotional ». The use of this language will diminish greatly the number of punishments and rewards required”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 461, “Over the last six decades of AI research, people have repeatedly debated the relative advantages and disadvantages of symbolic and subsymbolic approaches. Symbolic systems can be engineered by humans, be imbued with human knowledge, and use human-understandable reasoning to solve problems. […] While there have been some attempts to construct hybrid systems that integrate subsymbolic and symbolic methods, none have yet led to any striking success”, Melanie Mitchell (2019), Artificial Intelligence, p. 34‑35.

33 Such systems are known to be efficient for domain-specific and local solutions but are typically limited in scope and not suited for general, global solutions that can be applied to other problems or domains.

34 “The digital, computational approach to humanistic material has led to new comparisons and methods of analysis. [...] The digital humanities are bringing not just new understanding but also new questions that have never been asked before. This movement is drastically changing humanistic practice”, Rens Bod (2013), A New History of the Humanities: The Search for Principles and Patterns from Antiquity to the Present, Oxford, Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199665211.001.0001 [consulted on 06/12/2024], p. 362.

35 “The powers of computers derive as much from their ability to copy as from their ability to compute”, George B. Dyson (2012), Turing’s Cathedral, p. 283.

36 “before attempting to translate our data into the rigorous language of symbols, it is above all things necessary to ascertain the intended import of the word we are using. But this necessity cannot be regarded as an evil by those who value correctness of thought, and regard the right employment of language as both its instrument and its safeguard”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, Cambridge, Cambridge University Press, 2009, p. 60-61, “I found the inadequacy of language to be an obstacle; no matter how unwieldy the expressions I was ready to accept, I was less and less able, as the relations became more and more complex, to attain the precision that my purpose required. This deficiency led me to the idea of the present ideography. Its first purpose, therefore, is to provide us with the most reliable test of the validity of a chain of inferences and to point out every presupposition that tries to sneak in unnoticed, so that its origin can be investigated. [...] Everything necessary for a correct inference is expressed in full, but what is not necessary is generally not indicated; nothing is left to guesswork. In this I faithfully follow the example of the formula language of mathematics”, Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, in Jean van Heijenoort (ed.), Frege and Gödel. Two Fundamental Texts in Mathematical Logic, Cambridge, Harvard University Press, 1970, p. 5-6. “un linguaggio reso più limpido e sorvegliato, anche più magro forse, nel passaggio attraverso l'arte della logica. […] Degli strumenti umani che offre la logica dunque non solo non bisogna avere paura, ma anzi occorre considerali come preziosissimi (e localmente migliorabili) strumenti di controllo del pensare, vale a dire ausilio all'espressione trasparente e non ambigua dei pensieri”, Roberta de Monticelli (2006), Esercizi di pensiero per apprendisti filosofi, Turin, Bollati Boringhieri, p. 28-29.

37 Nevertheless, the same concepts can always be expressed through different symbolic representations. Cf. “It is important not to get the idea, from the rather strict nature of all the formal systems we have seen, that the isomorphism between symbols and real things is a rigid, one-to-one mapping, like the strings which link a marionette and the hand guiding it”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 338.

38 “The web is an amazing access platform. Content published on the web becomes almost immediately globally accessible. It is hard to adequately underscore how massive an impact this has had and is still having on access to cultural heritage material”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, Baltimore, John Hopkins University Press, p. 163, “But the gift of the Web wasn’t only informational: by its very existence it gave us new tools to identify and understand networks themselves”, James Bridle (2022), Ways of Being, p. 81.

39 Cf. Umberto Eco (1975), Trattato di semiotica generale, p. 112 and 377. Cf. also Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, in Fabio Ciotti & Gianfranco Crupi (eds.), Dall’Informatica umanistica alle culture digitali. In memoria di Giuseppe Gigliozzi, Roma, Quaderni DigiLab/Università Sapienza di Roma, p. 266, Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, in Elena Spadini, Francesca Tomasi & Georg Vogeler (eds.), Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing, Band, Schriften des Instituts für Dokumentologie und Editorik, p. 120‑121.

40 “We will increasingly operate in a world of networked and linked collection and descriptions. With more and more content made available online, a click away from the holdings of any number of institutions, individual institutional collections have become part of a linked global collection. In this context, more and more digital materials will be described and will also describe each other”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 130.

41 “The humanities have become increasingly fragmented over the last two centuries – unlike the sciences, where the opposite seems to have taken place”, Rens Bod (2013), A New History of the Humanities, p. 4.

42 “Scientific observation is not merely pure description of separate facts. Its main goal is to view an event from as many perspectives as possible. The eye of science does not probe « a thing, » an event isolated from other things or events. Its real object is to see and understand the way a thing or event relates to other things or events”, Aleksandr Lurija (1976), “Romantic science: Unimagined portraits, (Moscow 1976)”, in Aleksandr Lurija, Viaggio nella mente di un uomo che non dimenticava nulla, Roma, Armando, p. 108-116.

43 Cf. <https://www.force11.org/group/fairgroup/fairprinciples> and <https://www.go-fair.org/fair-principles/>.

44 Cf. Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 99-100.

45 “The exercise helps you to check your own reasoning; you can be your own critic. What is more, as we shall see, making the reasoning explicit can aid in the creative process by bringing out weaknesses and suggesting remedies”, Paul F. A. Bartha (2010), By Parallel Reasoning. The Construction and Evaluation of Analogical Arguments, Oxford, Oxford University Press, p. 5.

46 “La transformation du langage ordinaire en symboles présente des difficultés plus grandes. Il faut d'abord analyser complètement les propositions qu'on veut écrire en symboles. Mais cette analyse a son avantage; combien de fois la proposition se transforme-t-elle en une identité, ou on y découvre des inexactitudes, des lacunes, des ambiguïtés!”, Giuseppe Peano (1896), “Introduction au tome II du « Formulaire de mathématiques »”, in Giuseppe Peano, Opere scelte, Volume II, Rome, Cremonese, 1958, p. 198, “Da punto di vista teorico, la trascrizione come codifica fa interrogare analiticamente e esplicitamente su cosa stiamo codificando (o ricodificando o decodificando), a scomporre questo qualcosa in elementi discreti, a ordinare in modo sequenziale le nostre operazioni, a evitare ambiguità, contraddizioni ridondanze; e ci costringe, soprattutto, a formulare tutto ciò in modo rigoroso, senza poter ricorrere più alle (comodissime) tassonomie semiclandestine del buon senso, « tolleranti e bonarie »”, Raul Mordenti (2001), Informatica e critica dei testi, Roma, Bulzoni, p. 29, “The errors of a theory are rarely found in what it asserts explicitly; they hide in what it ignores or tacitly assumes”, Daniel Kahneman (2012), Thinking, Fast and Slow, London, Penguin Book, p. 274-275, “a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws”, Daniel Kahneman (2012), Thinking, Fast and Slow, London, Penguin Book, p. 277.

47 “« Theories are nets », wrote Novalis, « and only he who casts will catch ». Theories are nets, and we should learn to evaluate them for the empirical data they allow us to process and understand: for how they concretely change the way we work rather than as ends in themselves. Theories are nets; and there are so many interesting creatures that await to be caught, if only we try”, Franco Moretti (2003), “Graphs, Maps, Trees: Abstract Models for Literary History”, New Left Review, 24. URL: https://newleftreview.org/issues/ii24/articles/franco-moretti-graphs-maps-trees-1 [consulted on 06/12/2024], p. 63.

48 “Ideas about how we should think are locked into our culture. It’s a problem exacerbated by technology. Once a way of seeing the world has been moulded into a tool it’s very hard to think otherwise: « When all you have is a hammer, everything looks like a nail » as the saying goes”, James Bridle (2022), Ways of Being, p. 175, “All computers are simulators. They contain abstract models of aspects of the world, which we set in motion – and then immediately forget that they’re models. We take them for the world itself”, James Bridle (2022), Ways of Being, p. 207.

49 “No schema is ever complete, no taxonomy ever finished – and that’s fine, providing the systems we put in place for interpreting and applying those schemas are open, transparent, comprehensible and renegotiable”, James Bridle (2022), Ways of Being, p. 111, Umberto Eco (1975), Trattato di semiotica generale, p. 182.

50 Kate Crawford (2021), Atlas of AI, p. 132.

51 “Usually a single signifier conveys diverse and interlaced contents, and therefore what is called a « message » is most often a text whose content is a discourse on several levels”, Umberto Eco (1975), Trattato di semiotica generale, p. 86, “it is important to determine whether there is enough context, so that someone in the future can make sense of the digital objects”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 119.

52 “This redundancy prevented bits being lost in transit”, George B. Dyson (2012), Turing’s Cathedral, p. 137, “redundancy was the clue to the correct way to organize automata made with unreliable components”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 280.

53 Cf. Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 147 and 160. “It turns out that it is extremely useful to have the same information in several different forms for different purposes. [...] It suggests that there are advantages to being able to switch back and forth between procedural and declarative representations”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 616-617.

54 “You reduce the text to a few elements, and abstract them, and construct a new, artificial object. A model. And at this point you start working at a « secondary » level, removed from the text: a map, after all, is always a look from afar – or is useless, like Borges's map of the empire. Distant reading, I have called this work elsewhere; where distance is however not an obstacle, but a specific form of knowledge: fewer elements, hence a sharper sense of their overall interconnection. Shapes, relations, structures. Patterns”, Franco Moretti (2003), “Graphs, Maps, Trees: Abstract Models for Literary History”, p. 94.

55 The Linked Open Data movement itself appears to adopt a somewhat relaxed approach toward the formal aspects of the SW. Cf. “In Linked Data, the use of owl:sameAs is ubiquitous in « inter-linking » data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regards to its interactions with inference”, Harry Halpin, Ivan Herman & Patrick J. Hayes (2010), “When owl:sameAs isn’t the Same: An Analysis of Identity Links on the Semantic Web”, in David Wood, Stefan Decker & Ivan Herman (eds.), Proceedings of W3C Workshop: RDF Next Steps 2010. URL: https://www.w3.org/2009/12/rdf-ws/papers/ws21 [consulted on 06/12/2024], p. 1, “Contrary to popular belief in some circles, formal semantics are not a silver bullet. Just because a construct in a knowledge representation language is prescribed a behaviour using formal semantics does not necessarily mean that people will follow those semantics when actually using that language « in the wild »”, Harry Halpin, Ivan Herman & Patrick J. Hayes (2010), “When owl:sameAs isn’t the Same: An Analysis of Identity Links on the Semantic Web”, p. 1, “A loosely defined system of metadata may be adequate for finding information, but inadequate for any further processing. As Phipps observed, superficial agreements about vocabulary may hide complexities that make interoperability impossible”, John F. Sowa (2000), “Ontology, Metadata, and Semiotics”, in Bernhard Ganter & Guy W. Mineau (eds.), Conceptual Structures: Logical, Linguistic, and Computational Issues, Berlin, Springer, p. 55-81. DOI: https://doi.org/10.1007/10722280_5 [consulted on 06/12/2024], p. 6. “The digital world appears to be a cool rule-bound universe of logic and order. In practice it is a world of partially completed and conflicting specifications, files that were never valid but seem to work fine in the application most people use them in, and, deep down, information that is made up of physical markings on tangible media”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 52-53.

56 “What precisely is new in its latest guise of this problem on the Web of Linked Data is that this is the first time the problem is being encountered by different individuals attempting to independently knit their knowledge representations together using the same standardized language. Much of the supposed « crisis » over the proliferation of owl:sameAs in Linked Data can be traced to the fact that these uses of owl:sameAs tend to be mutually incompatible, and almost always violate the rather strict logical semantics of identity demanded by owl:sameAs”, Harry Halpin, Ivan Herman & Patrick J. Hayes (2010), “When owl:sameAs isn’t the Same: An Analysis of Identity Links on the Semantic Web”, p. 1.
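The tension documented in notes 55 and 56 is easy to reproduce. The following minimal sketch – Python with the rdflib library, hypothetical URIs – shows why a casual owl:sameAs link is so consequential: under the OWL semantics of identity, every statement about either resource implicitly becomes a statement about both, whether or not the publishers intended it.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDFS

    EX = Namespace("http://example.org/")  # hypothetical namespace
    g = Graph()

    # Two datasets describe what their publishers take to be the same author.
    g.add((EX.author_datasetA, RDFS.label, Literal("J. W. von Goethe")))
    g.add((EX.author_datasetB, EX.birthYear, Literal(1749)))

    # The ubiquitous inter-linking triple:
    g.add((EX.author_datasetA, OWL.sameAs, EX.author_datasetB))

    # A plain triple store keeps the two descriptions apart; only an OWL
    # reasoner would propagate properties across the sameAs link – which is
    # exactly where mutually incompatible uses of owl:sameAs become visible.
    for triple in g:
        print(triple)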

57 “An unwillingness to admit the possibility that mankind can have any rivals in intellectual power. This occurs as much amongst intellectual people as amongst others: they have more to lose”, Alan Turing, “Intelligent Machinery”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 410-411. Turing anticipated numerous objections and prejudices against intelligent machines. One of the most famous objections was put forth by Ada Lovelace, who referenced Charles Babbage’s machine: « The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform ». Cf. Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 392-393, “The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the working out of consequences from data and general principles”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 456.

58 “The essential argument against formal specifications is their difficulty. Formal specifications, it is said, are hard to learn, hard to write, hard to read”, Bertrand Meyer (1991), Introduction to the Theory of Programming Languages, London, Prentice Hall, p. 4.

59 “Why is the computer so important to mankind? We might have felt at one time that calculating would make up only a tiny part of human activity. Computing would seem, in the post-Galilean era, to have one foot in the physical sciences and the other in the accounting world. For this reason the average person might expect that little of his intellectual activity need be spent in computing”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 344.

60 “Another argument that one used to hear against formal specifications was that they only applied to toy examples, and failed to describe full-size, realistic languages”, Bertrand Meyer (1991), Introduction to the Theory of Programming Languages, p. 4.

61 “A variant of Lady Lovelace's objection states that a machine can 'never do anything really new'”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 454. In response, Turing stated that it was impossible to ascertain the outcome of a particular code without executing it. “Certainly the machine can only do what we do order it to perform, anything else would be a mechanical fault. But there is no need to suppose that, when we give it its orders we know what we are doing, what the consequences of these orders are going to be. One does not need to be able to understand how these orders lead to the machine's subsequent behaviour, any more than one needs to understand the mechanism of germination when one puts a seed in the ground. The plant comes up whether one understands or not”, Alan Turing, “Can Digital Computers Think?”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 485. “There is an old saw which says, « Computers can only do what you tell them to do. » This is right in one sense, but it misses the point: you don't know in advance the consequences of what you tell a computer to do; therefore its behavior can be as baffling and surprising and unpredictable to you as that of a person. [...] There is another sense in which this old saw is rusty. This involves the fact that as you program in ever higher-level languages, you know less and less precisely what you've told the computer to do! Layers and layers of translation may separate the « front end » of a complex program from the actual machine language instructions. At the level you think and program, your statements may resemble declaratives and suggestions more than they resemble imperatives or commands”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 306.

62 “All this work helped to show many people in a variety of scientific disciplines the tremendous breadth of applicability of the electronic computer. It certainly was seminal and, I believe, it was vital in conditioning scientists to accept and welcome the computer as a basic new tool both for the experimentalist and the theoretician”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 299. “If asked what is really the gist of the matter in our still ongoing change from analogue to digital media ‒ what « the real revolution » is ‒ my answer, at least, would be transmedialisation. The shift from media orientation to data orientation with its focus on abstraction, modelling and multi-purpose representations can be shown particularly clearly for the field of scholarly editions”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 32.

63 “It is often thought that the scientific and humanistic cultures are opposed in their methods and aims. According to the prejudices of inattentive observers, the former is concerned with public, universal, objective, quantitative, unitary experience, and its language is precise and rational, made of ideas and concepts. The latter looks instead to private, particular, subjective, qualitative, manifold experience, and its language is ambiguous and emotional, made of images and stories. These prejudices are deeply shaken by the observation that science and art, that is, the respective spearheads of the two cultures, are complementary and not contradictory visions of the world, both external and internal. [...] The most explicit proof of the compatibility between science and art is found precisely in mathematics, which provides both with a common instrument for expressing their essential aspects”, Piergiorgio Odifreddi (2005), Penna, pennello e bacchetta. Le tre invidie del matematico, Bari, Laterza, p. 53.

64 “Although it is certainly true that the computer can solve very many problems in areas that can be rendered into a mathematical form, this is a rather sterile and not very useful definition since it suggests largely scientific and engineering applications far removed from the man in the street. It is therefore better to recognize that what a computer really deals with is not just numbers alone but rather with information broadly. It does not just operate on numbers; rather, it transforms information and communicates it. Perhaps the greatest importance of the stored-program concept lies precisely at this point. As we have seen, information – the instructions characterizing a problem – can be coded into numerical form and then altered at will at the computer as the computation proceeds. This may well be the genesis of the idea of encoding information into digital form and then transforming it as desired. Herein lies the key to the importance of electronic computers. Their universality makes them as useful for sorting information as for multiplying numbers. […] In sum, the importance of the computer to society lies not only in its superb ability to do very complex tasks of an abstruse mathematical nature but also in its ability to alter profoundly the communication and transformation of all sorts of information. It is the latter capacity that has been so useful to the humanists and the sociologist as well as to the businessman”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 344-345, “In the late 1950s Newell and Simon proved that computers could do more than calculate. They demonstrated that a computer's strings of bits could be made to stand for anything, including features of the real world, and that its programs could be used as rules for relating these features”, Hubert L. Dreyfus (1992), What Computers Still Can't Do: A Critique of Artificial Reason, Cambridge, MIT Press, p. x, “Some learning algorithms only work with numerical feature vectors. When some feature in your dataset is categorical, like « colors » or « days of the week, » you can transform such a categorical feature into several binary ones”, Andriy Burkov (2019), The Hundred-Page Machine Learning Book, p. 44. “Physically, a bit is just a magnetic « switch » that can be in either of two positions. You could call the two positions « up » and « down », or « x » and « o », or « 1 » and « 0 »... The third is the usual convention. It is perfectly fine, but it has the possibly misleading effect of making people think that a computer, deep down, is storing numbers. This is not true. A set of thirty-six bits does not have to be thought of as a number any more than two bits has to be thought of as the price of an ice cream cone. Just as money can do various things depending on how you use it, so a word in a memory can serve many functions. [...] How a word in memory is to be thought of depends entirely on the role that this word plays in the program which uses it. It may, of course, play more than one role–like a note in a canon”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 288-289.
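Burkov's remark in note 64 can be made concrete with a minimal sketch in Python (the feature values are invented for illustration): a categorical feature is replaced by several binary ones, one per category, so that « colors » become exactly the kind of numerical vector a learning algorithm can consume.

    def one_hot(value, categories):
        """Return a binary vector with a single 1 marking the position of value."""
        return [1 if value == category else 0 for category in categories]

    colors = ["red", "green", "blue"]  # hypothetical categories
    print(one_hot("green", colors))    # -> [0, 1, 0]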

65 Cf. Alexandre D. Aleksandrov, Andreï N. Kolmogorov & Mikhaïl A. Lavrent’ev (1974), Le matematiche. Analisi, algebra e geometria analitica, Turin, Bollati Boringhieri, 2010, p. 19-21.

66 “Language and number serve as instrumental aids to the processes of reasoning”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 2, “Indeed, numbers acquire a life of their own, devoid of any direct reference to concrete sets of objects. The scaffolding of mathematics can then rise, ever higher, ever more abstract”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, Oxford, Oxford University Press, p. xix, “Obviously, what distinguishes us from other animals is our ability to use arbitrary symbols for numbers, such as words or Arabic digits. These symbols consist of discrete elements that can be manipulated in a purely formal way, without any fuzziness”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 62, “language eases the computation and communication of precise numerical quantities”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 75, “A system of symbolic numerals, however, seems essential in order to go beyond this evolutionarily ancient system and to perform exact calculations”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 263, “In well-trained adults, at least, parietal cortex is the place where quantities and symbols meet. Education provides us with a shared neuronal code for numerosities and symbols”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 271.

67 Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 79.

68 “Logic is conversant with two kinds of relations, – relations among things, and relations among facts. But as facts are expressed by propositions, the latter species of relation may, at least for the purposes of Logic, be resolved into a relation among propositions”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 7, “As pointed out, the nervous system is based on two types of communications: those which do not involve arithmetical formalisms, and those which do, i.e. communication of orders (logical ones) and communication of numbers (arithmetical ones). The former may be described as language proper, the latter as mathematics”, John von Neumann (2012), The Computer & the Brain, New Haven/London, Yale University Press, p. 82, “The foundations of any mathematical construction are grounded on fundamental intuition such as notions of set, number, space, time, or logic. These are almost never questioned, so deeply do they belong to the irreducible representations concocted by our brain. Mathematics can be characterized as the progressive formalization of these intuitions. Its purpose is to make them more coherent, mutually compatible, and better adapted to our experience of the external world”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 228.

69 “The terms narrow and weak are used to contrast with strong, human-level, general, or full-blown AI (sometimes called AGI, or artificial general intelligence) – that is, the AI that we see in movies, that can do most everything we humans can do, and possibly much more”, Melanie Mitchell (2019), Artificial Intelligence: A Guide for Thinking Humans, p. 40-41, “But essentially everyone in AI research agrees that core « commonsense » knowledge and the capacity for sophisticated abstraction and analogy are among the missing links required for future progress in AI”, Melanie Mitchell (2019), Artificial Intelligence: A Guide for Thinking Humans, p. 322.

70 “That is the fact that interhuman communication is far less rigidly constrained than human-machine communication. For instance, we often produce meaningless sentence fragments as we search for the best way to express something, we cough in the middle of sentences, we interrupt each other, we use ambiguous descriptions and « improper » syntax, we coin phrases and distort meanings – but our message still gets through, mostly. With programming languages, it has generally been the rule that there is a very strict syntax which has to be obeyed one hundred per cent of the time; there are no ambiguous words or constructions”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 297, “The amazing thing about language is how imprecisely we use it and still manage to get away with it”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 674.

71 “To our minds it is clearest when several steps are telescoped together, to form one single sentence”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 60, “We can't possibly keep track of everything happening around us, so the brain takes mental shortcuts. Shortcuts enable us to make rough judgments, but often they distort our counting”, Deborah Stone (2020), Counting: How We Use Numbers to Decide What Matters, New York, Liveright Publishing Corporation, p. 40.

72 “Jumping to conclusions on the basis of limited evidence is so important to an understanding of intuitive thinking”, Daniel Kahneman (2012), Thinking, Fast and Slow, London, Penguin Book, p. 86, “Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance”, Daniel Kahneman (2012), Thinking, Fast and Slow, London, Penguin Book, p. 201.

73 “Logics and statistics should be primarily, although not exclusively, viewed as the basic tools of « information theory »”, John von Neumann (2012), The Computer & the Brain, p. 2, “In principle the computer can also be used as a heuristic tool to study a large class of related situations in the hopes of finding any regularities that may exist”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 87. Rens Bod (2013), A New History of the Humanities, p. 298-299.

74 “Deductive rules are non-ampliative, but that does not mean that they play no useful role in knowledge. Since the conclusion makes explicit all or part of what is contained in the premises, establishing that the conclusion is plausible facilitates the comparison of the premises with experience”, Carlo Cellucci (2013), Rethinking Logic: Logic in Relation to Mathematics, Evolution, and Method, New York, Springer, p. 306.

75 “The necessity for using the intuition is then greatly reduced by setting down formal rules for carrying out inferences which are always intuitively valid”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 135, “The computer is a logical, mathematical system, upon which higher-level statistical, probabilistic systems, such as human language and intelligence, could possibly be built”, George B. Dyson (2012), Turing’s Cathedral, p. 278.

76 “A structural complexity that certainly resists analysis, but does not escape it”, Umberto Eco (1975), Trattato di semiotica generale, p. 341, “The purpose of the theory of codes was to show that languages, even if they do not have an exact logic, have at least some logic. And probably the problem is not that of finding a logic, if by logic one means only a strictly axiomatized theory. It is a matter of finding a semiotic theory”, Umberto Eco (1975), Trattato di semiotica generale, p. 219, note 4.

77 “The patterns found can consist of a regularity (often with exceptions) but they can also consist of a system of rules such as a grammar, or a system of interpretations, and they may even be similar to 'laws' such as the sound shift laws in linguistics and the laws of harmony in music”, Rens Bod (2013), A New History of the Humanities, p. 9, “Yet I will argue that seeking and finding patterns is timeless and ubiquitous, not only when observing nature but also when examining texts, art, poetry, theatre, languages, and music. Just as in all other scholarship, it is about trying to make a meaningful distinction between fortuitous and non-fortuitous pattern”, Rens Bod (2013), A New History of the Humanities, p. 10, “regularities found in previous observations impose themselves on new observations. In this regard there is no essential difference between the study of scientific phenomena and humanities phenomena”, Rens Bod (2013), A New History of the Humanities, p. 72, “Both humanists and scientists search for underlying patterns, which they try to express in logical, procedural or mathematical formalizations”, Rens Bod (2013), A New History of the Humanities, p. 355, “The quest for patterns represents an uninterrupted constant in humanistic research and is being investigated increasingly often with the aid of cognitive and digital approaches”, Rens Bod (2013), A New History of the Humanities, p. 362.

78 “Since Panini specifies a clear procedure for his grammar, which he expresses as a system of rules, we will designate his method as the procedural system of rules principle”, Rens Bod (2013), A New History of the Humanities, p. 16, “Medieval logic was rule-based and procedural virtually all over the world, with the syllogism as the most important reasoning pattern. If scholars believed they could discover a formal system of rules somewhere, it was apparently in the structure of human reasoning”, Rens Bod (2013), A New History of the Humanities, p. 129.

79 “Abstraction, in some form, underlies all of our concepts [...] Abstraction is closely linked to analogy making. [...] « the perception of a common essence between two things »”, Melanie Mitchell (2019), Artificial Intelligence, p. 319‑320, “In short, analogies, most often made unconsciously, are what underlie our abstraction abilities and the formation of concepts”, Melanie Mitchell (2019), Artificial Intelligence, p. 320.

80 “As the cognitive scientist Robert French phrased it, abstraction and analogy are all about perceiving « the subtlety of sameness. » To discover this subtle sameness, you need to determine which attributes of the situation are relevant and which you can ignore”, Melanie Mitchell (2019), Artificial Intelligence, p. 332.

81 “A cultural unit cannot, however, be identified only through the series of its own interpretants. It must be defined as POSITED in a system of other cultural units that are opposed to it or circumscribe it. A cultural unit « exists » only insofar as another is defined that is opposed to it. It is only the relation among the various elements of a system of cultural units that subtracts from each term what is carried by the others”, Umberto Eco (1975), Trattato di semiotica generale, p. 108, “the subject of every semiosic activity is nothing other than the result of the historical and social segmentation of the universe, the very segmentation that the inquiry into the nature of the global semantic space has made evident. In the theory of codes this subject presents itself as a way of seeing the world; to know it, one can only see it as a way of segmenting the universe and of associating expression units with content units, in a labour in the course of which these historical-systematic concretions are ceaselessly made and unmade”, Umberto Eco (1975), Trattato di semiotica generale, p. 377, “See here how a quantitative history of literature is also a profoundly formalist one–especially at the beginning and at the end of the research process. At the end, because it must account for the data; and at the beginning, because a formal concept is usually what makes quantification possible in the first place: since a series must be composed of homogeneous objects, a morphological category is needed ‒ « novel », « anti‑Jacobin novel », « comedy », etc. – to establish such homogeneity”, Franco Moretti (2003), “Graphs, Maps, Trees: Abstract Models for Literary History”, p. 86, note 14, “Every man will tend to segregate a mass of moving matter as a unit, separate from the static background, and to pay it particular attention”, Rens Bod (2013), A New History of the Humanities, p. 62. Cf. also “Ultimately, the database nature of new media, its inherent indexicality, means that this decision about structure and order are far more fluid than they are for artifactual physical objects. We need to figure out how to best chunk this data and process it to extract the meaningful information that will make it usable and discoverable now and in the future”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 146.

82 “The Infinity Principle: To shed light on any continuous shape, object, motion, process, or phenomenon – no matter how wild and complicated it may appear – reimagine it as an infinite series of simpler parts, analyze those, and then add the results back together to make sense of the original whole”, Steven Strogatz (2019), Infinite Powers. How Calculus Reveals the Secrets of the Universe, New York, Eamon Dolan Book, p. xvi.

83 “Thus, calculus proceeds in two phases: cutting and rebuilding. In mathematical terms, the cutting process always involves infinitely fine subtraction, which is used to quantify the differences between the parts. Accordingly, this half of the subject is called differential calculus. The reassembly process always involves infinite addition, which integrates the parts back into the original whole. This half of the subject is called integral calculus”, Steven Strogatz (2019), Infinite Powers, p. xv.
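Strogatz's « cutting and rebuilding » (notes 82-83) can be followed on the simplest textbook case – a standard worked example, not drawn from the source: cut the area under y = x² on [0, 1] into n strips, add the strips up, and let n grow without bound.

    \[
    \int_0^1 x^{2}\,dx
      = \lim_{n\to\infty}\sum_{i=1}^{n}\left(\frac{i}{n}\right)^{2}\frac{1}{n}
      = \lim_{n\to\infty}\frac{n(n+1)(2n+1)}{6n^{3}}
      = \frac{1}{3}.
    \]

The infinitely fine subtraction lies in the strip width 1/n (the differential half); the infinite addition that reassembles the strips into the whole area is the integral half.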

84 “In reality, in natural languages cultural units are rarely formally univocal entities, and they are often what the logic of natural languages now calls « fuzzy concepts », or blurred sets (Lakoff, 1972)”, Umberto Eco (1975), Trattato di semiotica generale, p. 119.

85 “Formal logic can only take account of relations which are formally expressed (VI.16); and it may thus, in particular instances, become necessary to express, in a formal manner, some connexion among the premises which, without actual statement, is involved in the very meaning of the language employed”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 204, “a characteristic of the computing instrument itself, of its paradoxical limit, which we could formulate as follows: « Computing can satisfactorily solve only computational problems, that is, problems posed computationally ». This is the « defect » of computing, its intrinsic limit; but it is a truly paradoxical limit, because it turns into an element of unlimited pervasiveness. In order to use the computer, and to take best advantage of its extraordinary possibilities, one is, sooner or later, forced to modify the entire process”, Raul Mordenti (2001), Informatica e critica dei testi, p. 24.

86 “On the contrary, in an ecdotic procedure marked by computing, transcription from a manuscript represents the strong, indeed decisive, moment, not only because it is the most costly moment in terms of person-hours but above all because it constitutes the absolutely crucial moment of encoding, that is, of feeding into the machine the information on which all subsequent treatments and manipulations will depend”, Raul Mordenti (2001), Informatica e critica dei testi, p. 29.

87 “The use of a modern computing machine is based on the user's ability to develop and formulate the necessary complete codes for any given problem that the machine is supposed to solve”, John von Neumann (2012), The Computer & the Brain, p. 71.

88 “Representations try to capture objects in their entirety and can be further transformed into publications. This already indicates a possible distinction between representation and presentation which will be discussed later. For the moment it is important that representation is a necessity for an edition. Critical engagement without representation is not an edition – but an examination, a catalogue or a description”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 24, “today information resources are being created without primarily thinking of them in terms of publication. We are less looking forward to the layout and functionality of the presentation, but start with the decoding and encoding of what is actually there. We create information resources that are guided by abstract models and abstract descriptions of the objects at hand. The dogma of our current markup strategies is the separation or rather translation from form to content. Thus, we do not just transform our textual witnesses from one (material) media and form into another (digital) media and form. Rather, we try to encode structures and meaning of documents and texts beyond their mediality”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 32, “Here we see a transition from the edition as a media product to the edition as a modelled information resource that can be presented in media but is about the abstract representation of knowledge in the first place”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 32.

89 “[There are] two characteristic moments in the impact of computing on the world: 1) in a first phase the computer is understood/used as a machine useful for solving old problems typical of the given epistemic order; 2) in a second phase the computer is finally understood as a generator of unprecedented problems within an entirely new epistemic order (determined by the very use of computing)”, Raul Mordenti (2001), Informatica e critica dei testi, p. 22.

90 In the same way as developers strive to build user-friendly digital tools and solutions.

91 “Indeed, the computing machine that reads (so to speak) closes the circle of the use of computing in philology. With it, it becomes evident that what is at stake is no longer a partial use, that is, the aid of computing (still understood as a tool) to overcome the quantitative difficulties of the old procedures (a phase well represented, in philology, by the first attempts to computerize stemmatics in the 1970s). Now, if the machine « can read », then it is evidently also a matter of writing for the machine and, more generally, of taking the machine into account when one writes; and thus of taking the machine into account above all when one produces that strongest and (in the proper sense) foundational form of writing which is the edition of a text”, Raul Mordenti (2001), Informatica e critica dei testi, p. 45-46.

92 “So it is important that the high-level program, while comfortable for the human, still should be unambiguous and precise”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 298.

93 “Who seeks for methods without having a definite problem in mind seeks for the most part in vain”, David Hilbert (2005), “Mathematical Problems (1900b)”, in William B. Ewald, From Kant to Hilbert: A Source Book in the Foundations of Mathematics, Oxford, Oxford University Press, p. 1101.

94 Richard P. Feynman (1996), Lectures on Computation, ed. by Anthony J.G. Hey & Robin W. Allen, Boston, Addison-Wesley Publishing Company, p. 54. “The inevitable conclusion to which the analysis carried out in this text leads us is that the construction of theories, hypotheses and models (theory-building) by human beings not only has not been and cannot be dismissed by the Big Data Revolution, but remains essential in order to make the most of what machines can offer”, Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali, p. 125.

95 “To weaken the claim to exactness in order to produce a simpler and more adequate simulation of it: this is a sort of paradox, as well as a tacit and decisive cunning of the science of computation”, Paolo Zellini (2022), Discreto e continuo. Storia di un errore, Milan, Adelphi, p. 237. “Ultimately, even when the idiolect of a work has been identified to the highest degree, infinite nuances remain, at the level of the pertinentization of the lower levels of the expressive continuum, which will never be completely resolved, because often not even the author is conscious of them. This does not mean that they cannot be analysed, but it certainly means that their analysis is destined to deepen from reading to reading, and the interpretive process takes on the aspect of an infinite approximation […] There is, however, a philosophical laziness in labelling as « intuition » everything that requires a very thorough analysis in order to be described with sufficient approximation”, Umberto Eco (1975), Trattato di semiotica generale, p. 340‑341.

96 “Evans identified two central questions: how to represent the line figures and how to define the transformation rules. Like contemporary case-based reasoning advocates, Evans recognized the importance of describing his figures in a standardized way. If the program (rather than the program user) were really to do the work, there should be no leeway for arbitrary choices in the representation”, Paul F. A. Bartha (2010), By Parallel Reasoning, p. 64. “While the electronic computer produced a revolution by increasing incredibly the speed of processing data, it still left a large task for the human: the task of programming the problems to be run. It is therefore not at all surprising that from the start great emphasis was put upon methods for alleviating the burden of the programmer”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 333.

97 “The chief practical difficulty of this inquiry will consist, not in the application of the method to the premises once determined, but in ascertaining what the premises are. In what are regarded as the most rigorous examples of reasoning applied to metaphysical questions, it will occasionally be found that different trains of thought are blended together; that particular but essential parts of the demonstration are given parenthetically, or out of the main course of the argument; that the meaning of a premiss may be in some degree ambiguous; and, not unfrequently, that arguments, viewed by the strict laws of formal reasoning, are incorrect or inconclusive. The difficulty of determining and distinctly exhibiting the true premises of a demonstration may, in such cases, be very considerable. […] The necessity of a rigorous determination of the real premises of a demonstration ought not to be regarded as an evil; especially as, when the task is accomplished, every source of doubt or ambiguity is removed”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 185-186. “It should be noted that this connection-pattern can be set up at will – indeed, this is the means by which the problem to be solved, i.e. the intention of the user, is impressed on the machine”, John von Neumann (2012), The Computer & the Brain, p. 11, “the art of computing consists to no small degree of measures to keep this effect down [the amplification of errors introduced by earlier operations]”, John von Neumann (2012), The Computer & the Brain, p. 28.

98 “This was done in part by thinking things through logically, but also perhaps more importantly by coding a large number of problems. Through this procedure real difficulties emerged and helped illustrate general problems that were then solved”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 267.

99 “Digital computers are able to answer most – but not all – questions stated in finite, unambiguous terms. They may, however, take a very long time to produce an answer (in which case you build faster computers) or it may take a very long time to ask the question (in which case you hire more programmers). Computers have been getting better and better at providing answers–but only to questions that programmers are able to ask. What about questions that computers can give useful answers to but that are difficult to define? In the real world, most of the time, finding an answer is easier than defining the question”, George B. Dyson (2012), Turing’s Cathedral, p. 262-263.

100 “Back to Leibniz!” is the title of a 1932 article by Norbert Wiener, a major figure of cybernetics: Norbert Wiener (1985), Cybernetics: or Control and Communication in the Animal and the Machine, Cambridge, M.I.T. Press.

101 “The machine's processes are mosaics of very simple standard parts, but the designs can be of great complexity, and it is not obvious where the limit is to the patterns of thought they could imitate”, Alan Turing, “Can Automatic Calculating Machines Be Said to Think?”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 500.

102 “Doing digital preservation requires a foundational understanding of the structure and the nature of digital information and media. [...] first, all digital information is material. Second, the database is an essential media form for understanding the logic of digital information systems. Third, digital information is best understood as existing in and through a nested set of platforms”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 34, “In this context, like so many others, to be able to decide on how to collect content and ultimately to arrange and describe it requires significant technical understanding of how the underlying system functions and has been designed”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 146.

103 “It seems that the data is the place where the editorial content is stored, where the editorial processes are recorded and the editorial knowledge is kept. The most important task for the editor is the creation of information as rich, accurate and reliable data. The creation of online publications or print spin-offs from this data may be left to other specialists such as publishing houses, web agencies or media designers”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 36-37. Computer science teaches us that noise should be removed from data as early as possible to prevent it from accumulating along the way. Cf. “[Bigelow’s] Maxim 5 added that « if noise is ever to be filtered from signal, it must be done at the earliest possible stage rather than after the two are tangled with other noises and signals »”, George B. Dyson (2012), Turing’s Cathedral, p. 112.

104 Researchers should also be aware of the importance of copyleft: the practice of granting the right to freely distribute and modify a work, while requiring that the same rights be preserved (and not restricted) in derivative works created from that original work.

105 “Arrangement and description is the process by which collections are made discoverable, intelligible, and legible to their future users”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 129.

106 “For the humanities too, « the Semantic Web » represents « the future », and humanities computing is called upon to provide « formal representations of the documentary legacy of humankind (the human record) », that is, representations suited to automatic processing. Indeed, as John Unsworth further recalls, « these representations – ontologies, schemas, knowledge representations, whatever one wishes to call them – should be produced by people trained in the humanities. And the discipline that produces them requires a humanistic education, combined with a knowledge of elements of mathematics, logic, engineering and computer science. […] There is a great deal of work to be done – and certainly not all of it of a technical nature. In the construction of this great map of knowledge a large part will consist of collaborative work (social work), consensus building, compromise. But this activity too will have to be entrusted to people who know how consensus can be reached and expressed in a computational medium »”, Dino Buzzetti (2012), “Cos’è, oggi, l’informatica umanistica? L’impatto della tecnologia”, in Fabio Ciotti & Gianfranco Crupi (eds.), Dall’Informatica umanistica alle culture digitali. In memoria di Giuseppe Gigliozzi, Roma, Quaderni DigiLab/Università Sapienza di Roma, p. 127-128, Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 102.

107 “This separation of ethical questions away from the technical reflects a wider problem in the field, where the responsibility for harm is either not recognized or seen as beyond the scope of the research”, Kate Crawford (2021), Atlas of AI, p. 117, “Simply leaving regulation up to AI practitioners would be as unwise as leaving it solely up to government agencies”, Melanie Mitchell (2019), Artificial Intelligence, p. 150.

108 “If ethics is the logic of right action, logic is the ethics of thinking. [...] To speak with accuracy is a mode of responsible action. To make assertions is to take on the commitment of upholding their truth”, Roberta de Monticelli (2006), Esercizi di pensiero per apprendisti filosofi, p. 11.

109 “We should be afraid. Not of intelligent machines. But of machines making decisions that they do not have the intelligence to make. I am far more afraid of machine stupidity than of machine intelligence. Machine stupidity creates a tail risk. […] Or as the AI researcher Pedro Domingos so memorably put it, « People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world. »”, Melanie Mitchell (2019), Artificial Intelligence, p. 368.

110 Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali, p. 115.

111 “The philosopher Achille Mbembé [...] writes: « It is about extraction, capture, the cult of data, the commodification of human capacity for thought and the dismissal of critical reason in favour of programming…. Now more than ever before, what we need is a new critique of technology, of the experience of technical life. »”, Kate Crawford (2021), Atlas of AI, p. 225-226, “Suddenly what were once only academic questions have started to matter very much in the real world”, Melanie Mitchell (2019), Artificial Intelligence, p. 322, “Perhaps AI will take away truck-driving jobs, but because of the need to develop AI ethics, the field will create new positions for moral philosophers”, Melanie Mitchell (2019), Artificial Intelligence, p. 358. Cf. also “the AI-problem, the problem of giving a description of intelligent behaviour that would be precise enough to make a computer simulation possible. Everyone nowadays seems to be concerned with this problem, I said. Neurophysiologists and psychologists who work on the problem of perception are joining forces with computer scientists, and even the philosophers are hoping against hope to be taken seriously, and thereby to get a salary raise”, Gian-Carlo Rota (1986), “In Memoriam of Stan Ulam. The Barrier of Meaning”, Physica, 22D, p. 1.

112 Cf. “The integration of linguistics and logic was one of the major attainments of twentieth-century humanities. It has had an enormous effect on other disciplines both within and outside the humanities, from musicology to literary studies and from cognitive psychology to artificial intelligence”, Rens Bod (2013), A New History of the Humanities, p. 297.

113 “If I were to choose a patron saint for cybernetics out of the history of science, I should have to choose Leibniz. The philosophy of Leibniz centers about two closely related concepts – that of a universal symbolism and that of a calculus of reasoning. From these are descended the mathematical notation and the symbolic logic of the present day. Now, just as the calculus of arithmetic lends itself to a mechanization progressing through the abacus and the desk computing machine to the ultra-rapid computing machines of the present day, so the calculus ratiocinator of Leibniz contains the germs of the machina ratiocinatrix, the reasoning machine”, Norbert Wiener (1985), Cybernetics, p. 12, “[Leibniz’s] four great accomplishments to the field of computing: his initiation of the field of formal logics; his construction of a digital machine; his understanding of the inhuman quality of calculation and the desirability as well as the capability of automating this task; and, lastly, his very pregnant idea that the machine could be used for testing hypotheses”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 9, “Leibniz believed, following Hobbes and in advance of Hilbert, that a consistent system of logic, language, and mathematics could be formalized by means of an alphabet of unambiguous symbols manipulated according to mechanical rules. […] With his logical calculus, or calculus ratiocinator, Leibniz took the first steps towards his vision of a « universal symbolistic » in which all truths of reason would be reduced to a kind of calculus. […] he proposed a universal coding in which primary concepts would be represented by prime numbers–an all-encompassing mapping between numbers and ideas”, George B. Dyson (2012), Turing’s Cathedral, p. 103-104.
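The prime-number coding that Dyson attributes to Leibniz (note 113) can be sketched in a few lines of Python. The concept table below is an invented illustration, not Leibniz's own: primitive concepts receive primes, composite ideas receive products, and conceptual containment reduces to divisibility.

    # Hypothetical table: primes stand for primitive concepts.
    primitives = {"rational": 2, "animal": 3, "mortal": 5}

    # A composite idea is the product of its primitives: "rational animal".
    human = primitives["rational"] * primitives["animal"]  # 6

    def contains(composite, concept):
        """A composite idea contains a primitive iff its prime divides the code."""
        return composite % primitives[concept] == 0

    print(contains(human, "rational"))  # True
    print(contains(human, "mortal"))    # False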

114 “Leibniz referred to such a system of characters as a characteristic. Unlike the alphabetic symbols which had no meaning, the examples just mentioned were, for him, a real characteristic in which each symbol represented some definite idea in a natural and appropriate way. What was needed, Leibniz maintained, was a universal characteristic, a system of symbols that was not only real, but which also encompassed the full scope of human thought”, Martin Davis (2018), The Universal Computer. The Road from Leibniz to Turing, Abingdon‑on‑Thames, CRC Press, p. 10‑11.

115 “Leibniz set out, two centuries ago, the project of creating a universal writing, in which all composite ideas would be expressed by means of conventional signs for the simple ideas, according to fixed rules”, Giuseppe Peano (1894), “Notations de Logique Mathématique. Introduction au Formulaire de mathématiques”, in Giuseppe Peano, Opere scelte, Volume II, Rome, Cremonese, 1958, p. 3, “That zero and one were sufficient for logic as well as arithmetic was established by Gottfried Wilhelm Leibniz in 1679, following the lead given by Thomas Hobbes in his Computation, or Logique of 1656. « By ratiocination, I mean computation, » Hobbes had announced. « Now to compute, is either to collect the sum of many things that are added together, or to know what remains when one thing is taken out of another. Ratiocination, therefore is the same with Addition or Subtraction »”, George B. Dyson (2012), Turing’s Cathedral, p. 4.

116 “Fascinated by the Aristotelian division of concepts into fixed « categories, » Leibniz thought of what he came to call his « wonderful idea »: he would seek a special « alphabet » whose elements represented not sounds, but concepts. A language based on such an alphabet should make it possible to determine by symbolic calculation which sentences written in the language were true and what logical relationships existed among them”, Martin Davis (2018), The Universal Computer, p. 3.

117 “Thus there is put forward here a certain new and wonderful calculus, which has a place in all our reasonings and which proceeds no less accurately than Arithmetic or Algebra. With its help controversies can always be brought to an end, insofar as they can be determined from the data, merely by putting pen to paper, so that it will suffice for two disputants, setting verbal quarrels aside, to say to each other: let us calculate; for then it will be just as if two Arithmeticians were disputing about some error of calculation”, Gottfried W. Leibniz (1666), Dissertatio de arte combinatoria, Berlin, De Gruyter, 1923. « A kind of universal Language or Writing, but infinitely different from all those projected hitherto; for the characters, and the very words, would there direct Reason; and errors, except those of fact, would there be only errors of calculation », Giuseppe Peano (1896), “Introduction au tome II du « Formulaire de mathématiques »”, in Giuseppe Peano, Opere scelte, Volume II, Rome, Cremonese, 1958, p. 1.

118 “In 1666 Leibniz wrote the work De arte combinatoria, which was considered to be a continuation of Llull's project to discover truths through the exhaustive combination of concepts”, Rens Bod (2013), A New History of the Humanities, p. 195.

119 “the rules of deduction could then be reduced to manipulations of these symbols, via what Leibniz called a calculus ratiocinator, what nowadays might be called a symbolic logic”, Martin Davis (2018), The Universal Computer, p. 12. “According to Leibniz, however, in order to be an effective method of discovery, the axiomatic method must be converted into a formal axiomatic method. This involves setting up a formal language capable of expressing all thoughts, and a system of formal deductive rules capable of representing all human reasoning. [...] Such language will be based on the fact that « all human ideas can be resolved into a few ones as primitive ideas. » The latter are ideas « which are conceived per se, and from whose combination all our other ideas arise. » They are like « a sort of alphabet of human thought. » [...] One will then be able to assign characters to the primitive ideas and form new characters for all other ideas by means of combinations of such characters, which will « have among themselves the relation that the ideas have among themselves. »” Carlo Cellucci (2013), Rethinking Logic, p. 169, “On the other hand, Leibniz calls the system of formal deductive rules, capable of representing all human reasoning, « calculus of reasoning » [calculus ratiocinator]. Such a calculus is necessary because, to « avoid being left wandering in a labyrinth, » the « mind must be guided by some (as it were) sensible thread. » The mind is « unable to embrace distinctly many things at the same time, » only by means of a calculus of reasoning will it be able to « do without imagination, using signs in place of things. » The calculus of reasoning will be « like a sort of general algebra » that will « give the means to perform reasoning by calculation »”, Carlo Cellucci (2013), Rethinking Logic, p. 170.

120 “These mathematical tables were calculated by hand, and the mistakes were simply the result of human error. This caused Babbage to exclaim, « I wish to God these calculations had been executed by steam! » This marked the beginning of an extraordinary endeavor to build a machine capable of faultlessly calculating the tables to a high degree of accuracy. [...] The scientific tragedy was that Babbage's machine would have been a stepping-stone to the Analytical Engine, which would have been programmable. Rather than merely calculating a specific set of tables, the Analytical Engine would have been able to solve a variety of mathematical problems depending on the instructions that it was given. In fact, the Analytical Engine provided the template for modern computers. The design included a « store » (a memory) and a « mill » (processor), which would allow it to make decisions and repeat instructions, which are equivalent to the « if… then » and « loop » commands in modern programming”, Simon Singh (1999), The Code Book, The Science of Secrecy from Ancient Egypt to Quantum Cryptography, New York, Anchor Books, p. 64-65. Cf. Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 17-18 and illustration 4 following p. 120.

121 “There is not only a close analogy between the operations of the mind in general reasoning and its operations in the particular science of Algebra, but there is to a considerable extent an exact agreement in the laws by which the two classes of operations are conducted”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 6, “Whence it is that the ultimate laws of Logic are mathematical in their form; why they are, except in a single point, identical with the general laws of Number; and why in that particular point they differ”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 11.

122 “The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic and construct its method”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 1. “George Boole’s great achievement was to demonstrate once and for all that logical deduction could be developed as a branch of mathematics”, Martin Davis (2018), The Universal Computer, p. 30.

123 Alexandre D. Aleksandrov, Andreï N. Kolmogorov & Mikhaïl A. Lavrent’ev (1974), Le matematiche, p. 88.

124 The term “algorithm” was also derived from his name.

125 George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 6.

126 “As the combination of two literal symbols in the form xy expresses the whole of that class of objects to which the names or qualities represented by x and y are together applicable, it follows that if the two symbols have exactly the same signification, their combination expresses no more than either of the symbols taken alone would do. In such case we should therefore have xy = x. As y is, however, supposed to have the same meaning as x, we may replace it in the above equation by x, and we thus get xx = x. Now in common Algebra the combination xx is more briefly represented by x². Let us adopt the same principle of notation here; for the mode of expressing a particular succession of mental operations is a thing in itself quite as arbitrary as the mode of expressing a single idea or operation (II.3). In accordance with this notation, then, the above equation assumes the form x² = x, and is, in fact, the expression of a second general law of those symbols by which names, qualities, or descriptions, are symbolically represented”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 31.

127 “We have seen (II.9) that the symbols of Logic are subject to the special law, x² = x. Now of the symbols of Number there are but two, viz. 0 and 1, which are subject to the same formal law. We know that 0² = 0, and that 1² = 1; and the equation x² = x, considered as algebraic, has no other roots than 0 and 1”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 37-38, “The literal symbols of Logic are universally subject to the law whose expression is x² = x. Of the symbols of Number there are two only, 0 and 1, which satisfy this law. But each of these symbols is also subject to a law peculiar to itself in the system of numerical magnitude, and this suggests the inquiry, what interpretations must be given to the literal symbols of Logic, in order that the same peculiar and formal laws may be realized in the logical system also”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 46-47, “the fundamental law of duality (2) Chap. II., whose expression is x² = x, or, x(1 - x) = 0; a law, which while it serves to distinguish the system of thought in Logic from the system of thought in the science of quantity, gives to the processes of the former a completeness and a generality which they could not otherwise possess”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 166, “His remark about a « special law to which the symbols of quantity are not subject » is very important: this law in effect is that x² = x for every x in his system. Now in numerical terms this equation or law has as its only solution 0 and 1. This is why the binary system plays so vital a role in modern computers: their logical parts are in effect carrying out binary operations”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 37.

128 George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 50. Cf. also “That axiom of metaphysicians which is termed the principle of contradiction, and which affirms that it is impossible for any being to possess a quality, and at the same time not to possess it, is a consequence of the fundamental law of thought, whose expression is x² = x. Let us write this equation in the form x – x² = 0, whence we have x(1 – x) = 0”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 49, “Thus it is a consequence of the fact that the fundamental equation of thought is of the second degree, that we perform the operation of analysis and classification, by division into pairs of opposites, or, as it is technically said, by dichotomy”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 50-51.

129 George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities. “In Boole's system 1 denotes the entire realm of discourse, the set of all objects being discussed, and 0 the empty set”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 37, “Nothing and Universe are the two limits of class extension, for they are the limits of the possible interpretations of general names”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 76.

130 “We might undoubtedly have established the theory of Primary Propositions upon the simple notion of space, in the same way as that of secondary propositions has been established upon the notion of time”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 174-175, “the real ground upon which the symbol 1 represents in primary propositions the universe of things, and not the space they occupy, is, that the sign of identity = connecting the members of the corresponding equations, implies that the things which they represent are identical, not simply that they are found in the same portion of space. Let it in like manner be affirmed, that the reason why the symbol 1 in secondary propositions represents, not the universe of events, but the eternity in whose successive moments and periods they are evolved, is, that the same sign of identity connecting the logical members of the corresponding equations implies, not that the events which those members represent are identical, but that the times of their occurrence are the same”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 176. “This led Boole to the principle that the algebra of logic was precisely what ordinary algebra would become if restricted to the two values 0 and 1. However, to make sense of this, it became necessary to reinterpret the symbols 0 and 1 as classes. A clue is provided by the behaviors of 0 and 1, respectively, with respect to ordinary multiplication: 0 times any number is 0; 1 times any number is that very number. In symbols, 0x = 0, 1x = x”, “Boole’s method of relating secondary propositions to his algebra of classes was to bring time into the picture. With each proposition Boole would in effect associate the class of instants of time for which that proposition was true. To say that proposition X is true, Boole would write X = 1 meaning that the class of instants in which the proposition is true encompasses the entire time span under consideration. Likewise, X = 0 would express that X is false, because there are no instants of time in which X is true. Given a proposition X&Y which expresses the truth of both X and Y, the set of instants in which it is true is just the set intersection XY. Finally, for a proposition if X then Y to be true, what is required is that any time that X is true, Y is also true, that is that there is no time when X is true and Y is false. As an equation: X(1 − Y) = 0”, Martin Davis (2018), The Universal Computer, p. 24 and p. 188, n. 25.
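
Boole's two-valued algebra lends itself to direct experiment. The following sketch (Python, offered purely as an editorial illustration; the helper name implies is an invention of this note) represents propositions by the numbers 0 and 1, renders conjunction as multiplication, checks the law x² = x, and tests Davis's equation X(1 − Y) = 0 against the classical truth table of "if X then Y":

```python
# Boole's algebra restricted to the two numerical values 0 and 1.
values = (0, 1)

# The fundamental law of duality, x² = x, holds for 0 and 1 alone.
for x in values:
    assert x * x == x  # idempotency: xx = x

# Davis's reading of "if X then Y": there is no instant at which X is
# true and Y false, i.e. X(1 - Y) = 0.
def implies(x, y):
    return x * (1 - y) == 0

for x in values:
    for y in values:
        print(f"X={x}, Y={y}: X(1 - Y) = 0 is {implies(x, y)}")
# Only X=1, Y=0 yields False, exactly as in the truth table of "if X then Y".
```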

131 Cf. “But nobody had shown with Shannon’s clarity and rigor that electrical engineers could use all the tools of Boolean algebra to design circuits with switches. In particular, if you can simplify a Boolean expression that describes a network, you can simplify the network accordingly”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 257. Charles Petzold (2000), Code. The Hidden Language of Computer Hardware and Software, Washington, Microsoft Press, p. 103.
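
Shannon's observation, that simplifying a Boolean expression simplifies the corresponding switching network, can be checked exhaustively for any small network. In the sketch below, the network (A AND B) OR (A AND NOT B) is an invented example; Boolean algebra reduces the expression to A alone, and the program verifies the equivalence over every switch setting:

```python
from itertools import product

# Hypothetical switching network: (A AND B) OR (A AND NOT B).
# Boolean algebra simplifies the expression to A alone, so a single
# switch can replace the three-gate network.
original   = lambda a, b: (a and b) or (a and not b)
simplified = lambda a, b: a

# Exhaustive check over all switch settings: the two networks agree
# everywhere, which is what licenses the physical simplification.
assert all(original(a, b) == simplified(a, b)
           for a, b in product([False, True], repeat=2))
print("networks equivalent")
```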

132 “Mathematical logic, also called « logistic », « symbolic logic », the « algebra of logic », and, more recently, simply « formal logic », is the set of logical theories elaborated in the course of the last century with the aid of an artificial notation and a rigorously deductive method”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, Dordrecht, D. Reidel, p. 1, “But what has contributed most to the solution of the problem is the new and important science called mathematical Logic, which studies the formal properties of the operations and relations of logic. [...] By combining the signs of Algebra and of Logic, one can represent all the relations of logic with a small number of signs that have a precise meaning and are subject to well-determined rules. Consequently, by introducing signs to denote the ideas of Algebra or of Geometry, one can state entirely in symbols the propositions of these sciences”, Giuseppe Peano (1894), “Notations de Logique Mathématique”, p. 3‑4 (my translation).

133 “Is there a limit to what we can, in principle, compute? [...] Ironically, it turns out that all this was discussed long before computers were built! Computer science, in a sense, existed before the computer. It was a very big topic for logicians and mathematicians in the thirties”, Richard P. Feynman (1996), Lectures on Computation, p. 52.

134 Paolo Zellini (2022), Discreto e continuo, p. 67.

135 Martin Davis (2018), The Universal Computer, p. 57-59. “The essence of the diagonal method is the fact of using one integer in two different ways – or, one could say, using one integer on two different levels – thanks to which one can construct an item which is outside of some predetermined list”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 423.

136 In contrast to propositional logic, predicate logic allows for the analysis of the internal structure of propositions, rather than merely their connections.

137 In first-order logic quantification is restricted to individuals and doesn’t apply to sets of individuals, unlike in Frege's broader theory.

138 Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, p. 6. “His solution was to develop his Begriffsschrift as an artificial language with mercilessly precise rules of grammar, or as one says, of syntax. This made it possible to exhibit logical inferences as purely mechanical operations, so-called rules of inference, having reference only to the patterns in which symbols are arranged. It was also the first example of a formal artificial language constructed with a precise syntax. From this point of view, the Begriffsschrift was the ancestor of all computer programming languages in common use today”, Martin Davis (2018), The Universal Computer, p. 40.

139 “The most immediate point of contact between my formula language and that of arithmetic is the way in which letters are employed”, Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, p. 6.

140 Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, p. 11, “This indeterminacy makes it possible to use letters to express the universal validity of propositions”, Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, p. 10.

141 “Thus it is that the great mediaeval controversy over universals has flared up anew in the modern philosophy of mathematics. The issue is clearer now than of old, because we now have a more explicit standard whereby to decide what ontology a given theory or form of discourse is committed to: a theory is committed to those and only those entities to which the bound variables of the theory must be capable of referring in order that the affirmation made in the theory be true. […] the fundamental cleavages among modern points of view on foundations of mathematics do come down pretty explicitly to disagreements as to the range of entities to which the bound variables should be permitted to refer. […] The three main mediaeval points of view regarding universals are designated by historians as realism, conceptualism, and nominalism. Essentially these same three doctrines reappear in twentieth-century surveys of the philosophy of mathematics under the new names logicism, intuitionism, and formalism”, Willard V. O. Quine (1963), From a Logical Point of View, New York, Harper & Row, p. 13-14.

142 Cf. <https://plato.stanford.edu/entries/frege-theorem/#S2>.

143 In particular, this was achieved by introducing a “theory of types” to prevent circular assertions. This theory divides objects into distinct types and organizes them into different orders or levels: a set of a given type can only contain objects or sets of a lower type.

144 “The present work has two main objects. One of these, the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental logical concepts, and that all its propositions are deducible from a very small number of fundamental logical principles”, Bertrand Russell, Principles of Mathematics (New York, 1950), quoted by Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 172. Cf. Willard V. O. Quine (1963), From a Logical Point of View, p. 81.

145 The idea of formalism was to separate mathematical concepts from their meaning and to view them as abstract, meaningless symbols or expressions, manipulable according to given formal, syntactic rules. For this reason, formalism is associated with nominalism, according to which abstract entities do not exist anywhere, neither in reality nor in the mind, but are just names. Cf. “the formalist keeps classical mathematics as a play of insignificant notations. This play of notations can still be of utility – whatever utility it has already shown itself to have as a crutch for physicists and technologists. But utility need not imply significance, in any literal linguistic sense. […] For an adequate basis for agreement among mathematicians can be found simply in the rules which govern the manipulation of the notations – these syntactical rules being, unlike the notations themselves, quite significant and intelligible”, Willard V. O. Quine (1963), From a Logical Point of View, p. 15, “Levels are not cleanly separated, as the formalist version of what mathematics is would have one believe. The formalist philosophy claims that mathematicians only deal with abstract symbols, and that they couldn't care less whether those symbols have any applications to or connections with reality. But this is quite a distorted picture. Nowhere is this clearer than in metamathematics. If the theory of numbers is itself used as an aid in gaining factual knowledge about formal systems, then mathematicians are tacitly showing that they believe these ethereal things called « natural numbers » are actually part of reality – not just figments of the imagination”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 458. Besides logicism and formalism, there was also a third school: intuitionism, founded by L.E.J. Brouwer. Intuitionists (or constructivists) held that mathematical objects exist only to the extent that they can be understood and constructed by human thought. This view was associated with “conceptualism,” which posits that abstract entities exist only in the mind and not in the external world.

146 The axiomatization of a theory is the formulation of its fundamental properties and truths from which all others can be deduced.

147 “It was really Hilbert's stroke of genius to understand that formalization is the proper technique to tackle such foundational questions. What he taught us can be put roughly as follows. Suppose that T is an axiomatized theory which has been formalized in terms of the first order language L. This language has such a precise syntax that it itself can be studied as a mathematical object. One can ask for instance: « Can one possibly run into contradictions if one proceeds entirely formally within L, using only the axioms of T and those of classical logic, all of which have been expressed in L? » If one can prove mathematically that the answer to this question is « no », one has there a mathematical proof that the theory T is free of contradictions! This is basically what the famous « Hilbert program » was all about. [...] In short, the formalists tried to create a mathematical technique by means of which one could prove that mathematics is free of contradictions. This was the original purpose of formalism”, Ernst Snapper (1979), “The Three Crises in Mathematics: Logicism, Intuitionism and Formalism”, Mathematics Magazine, 52 (4), p. 207-216, DOI: https://doi.org/10.2307/2689412 [consulted on 06/12/2024], p. 214.

148 Cf. Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, Milano, Tascabili degli Editori Associati, p. 14 and p. 63-64.

149 “7.71. « Complete system in a wide sense » for: « axiomatic system which contains all the true sentences of a given domain ». It can also be said that no sentence of a given domain is true if it is not derivable in the system. 7.72. « Complete system in a strict sense » for: « axiomatic system in which each sentence which is not a law is the negation of one of its laws »”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 28. A “law” is a sentence asserted in the system, either an axiom (not derived in the system) or a theorem (deduced from the axioms by application of rules).

150 There is a subtle but crucial distinction between systems that are externally correct, meaning they do not prove falsehoods, and those that are internally coherent, meaning they do not prove contradictions. Cf. Piergiorgio Odifreddi (2005), Penna, pennello e bacchetta, p. 81.

151 “7.61. « Non-contradictory system » for: « axiomatic system whose rules of deduction do not allow a sentence to be deduced along with the negation of this sentence ». 7.62. In a complete system which is contradictory any sentence can be deduced”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 28. This is the famous principle ex falso quodlibet.

152 “By the Entscheidungsproblem of a system of symbolic logic is here understood the problem to find an effective method by which, given any expression Q in the notation of the system, it can be determined whether or not Q is provable in the system”, Alonzo Church, “A Note on the Entscheidungsproblem”, quoted by Jack B. Copeland (ed.) (2004), The Essential Turing, p. 45. Cf. George B. Dyson (2012), Turing’s Cathedral, p. 245-247.

153 Trying all possibilities or combinations only works if the solution exists, as it will eventually be found (perhaps after a long time). However, if no solution exists, the procedure will continue indefinitely, rendering it ineffective. “This gets to the crux of the matter of what should count as a « test ». Of prime importance is a guarantee that we will get our answer in a finite length of time. If there is a test for theoremhood, a test which does always terminate in a finite amount of time, then that test is called a decision procedure for the given formal system”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 40-41.

154 Propositional logic, for example, is decidable because a method exists ‒ the truth-table ‒ that can be used to test whether an arbitrary propositional formula is satisfiable. A formula is satisfiable if there is at least one interpretation that makes the formula true; it is valid if it is true under every interpretation. Cf. “Hilbert had also sought explicit calculational procedures by means of which it would always be possible to determine, given some premises and a proposed conclusion, written in the notation of what has come to be called « first-order logic » whether Frege’s rules would enable that conclusion to be derived from those premises”, Martin Davis (2018), The Universal Computer, p. 127, “actually, Hilbert did not put the Entscheidungsproblem in quite that way: he asked for a procedure to determine whether a given expression of first order logic is valid in every possible interpretation”, Martin Davis (2018), The Universal Computer, p. 203, n. 8, “The calculus of reasoning will provide a universal decision procedure. By means of this procedure, human beings will « always be able to know whether it is possible to decide the question on the basis of the knowledge which is already given to them. » This will be useful even when the question is decided negatively, because « it is important at least to know that what is sought cannot be found by the means available to us. »”, Carlo Cellucci (2013), Rethinking Logic, p. 171.
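
The truth-table method mentioned here is short enough to write down in full. The following sketch (an editorial illustration, not drawn from any cited source) enumerates every interpretation of a propositional formula and reports both satisfiability and validity:

```python
from itertools import product

def truth_table_test(formula, variables):
    """Decide satisfiability and validity of a propositional formula by
    brute-force enumeration of all interpretations: the truth-table
    method. Returns the pair (satisfiable, valid)."""
    rows = [formula(**dict(zip(variables, vals)))
            for vals in product([False, True], repeat=len(variables))]
    return any(rows), all(rows)

# Example: (p -> q) or p, written with Python connectives.
sat, valid = truth_table_test(lambda p, q: (not p or q) or p, ["p", "q"])
print(sat, valid)  # True True: satisfiable under some interpretation,
                   # and true under every interpretation (valid)
```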

155 “This conviction of the solvability of every mathematical problem is a powerful incentive to the worker. We hear within us the perpetual call: There is the problem. Seek its solution. You can find it by pure reason, for in mathematics there is no ignorabimus”, David Hilbert, Mathematical Problems (1900b), quoted in Jack B. Copeland (ed.) (2004), The Essential Turing, p. 47.

156 “One should guard against confusing axiomatization and formalization. […] Examples of axiomatized theories are Euclidean plane geometry with the usual Euclidean axioms, arithmetic with the Peano axioms, ZF [Zermelo–Fraenkel set theory] with its nine axioms, etc. […] Suppose then that some axiomatized theory T is given. Restricting ourselves to first order logic, « to formalize T » means to choose an appropriate first order language for T. The vocabulary of a first order language consists of five items, four of which are always the same and are not dependent on the given theory T. These four items are the following: (1) A list of denumerably many variables […] (2) Symbols for the connectives of everyday speech, say ¬ for « not, » ∧ for « and, » ∨ for the inclusive « or, » → for « if then, » and ↔ for « if and only if » […] (3) The equality sign = [...] (4) The two quantifiers, the « for all » quantifier ∀ and the « there exist » quantifier ∃ […] Since T is an axiomatized theory, it has so called « undefined terms. » One has to choose an appropriate symbol for every undefined term of T and these symbols make up the fifth item”, Ernst Snapper (1979), “The Three Crises in Mathematics: Logicism, Intuitionism and Formalism”, p. 213.

157 “Before leaving the subject of computability, I want to make some remarks about the related topic of « grammars ». In mathematics, as in linguistics, a grammar is basically a set of rules for combining the elements of a language, only the language is a mathematical one (such as arithmetic or algebra). It is possible to misapply these rules”, Richard P. Feynman (1996), Lectures on Computation, p. 91.

158 Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 16.

159 “« Axiomatic system » for: « the set of expressions falling into two classes such that the elements of the second are derived from the first by the application of explicitly formulated rules »”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 26, “An axiomatic system contains terms, sentences, and laws, rules of definition for terms, rules of formation for sentences, and rules of deduction for laws”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 26.

160 Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 27.

161 Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 27. “Turing suggests that any puzzle can be re‑expressed as a substitution puzzle […] and the rules of the puzzle, whatever they are, are to be represented in terms of permissible substitutions of groups of letters for other groups of letters. […] The axioms – which are simply strings of mathematical symbols – form the starting position. The theorem – another string of symbols – is the winning position. The rules of the puzzle are substitutions that enable strings of mathematical symbols to be transformed into other strings [...] Turing calls the substitution formulation of any puzzle its « normal form »”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 577.
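
Such substitution puzzles can be simulated directly. As a concrete illustration, the sketch below implements the MIU system from Hofstadter's Gödel, Escher, Bach (cited above): the axiom MI is the starting position, the four rules are permissible substitutions of groups of letters, and theorems are ground out mechanically, level by level:

```python
def miu_successors(s):
    """Apply Hofstadter's four MIU rules to a string, returning every
    string derivable in one step (one permissible substitution)."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                    # rule 1: xI -> xIU
    if s.startswith("M"):
        out.add("M" + s[1:] * 2)            # rule 2: Mx -> Mxx
    for i in range(len(s)):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: UU -> (nothing)
    return out

# Grinding out theorems from the axiom "MI", level by level, is exactly
# the mechanical generation of laws from axioms described above.
theorems, frontier = {"MI"}, {"MI"}
for _ in range(4):
    frontier = set().union(*(miu_successors(s) for s in frontier)) - theorems
    theorems |= frontier
print(sorted(theorems, key=len)[:10])
```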

162 “Rules of Transformation, by which these laws can be developed so as to yield still further laws”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 22.

163 Kurt Gödel (1931), “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I”, Monatshefte für Mathematik und Physik, 38, p. 173-198. To be precise, Gödel first demonstrated the completeness of predicate logic, while the completeness of propositional logic had already been demonstrated by Emil Post.

164 “Naturally, mathematics can be formalized in many different ways, by choosing different systems of axioms. One might therefore suspect that the two preceding results depend on the particular system chosen, but this is not the case. It can be shown that the two results hold for any formal system, provided that it contains the arithmetic of the integers in its usual formulation and that it does not prove falsehoods: that is, that its axioms do not lead to results that are refutable on intuitive grounds”, Piergiorgio Odifreddi (2005), Penna, pennello e bacchetta, p. 18 (my translation).

165 “The Epimenides paradox is a one-step Strange Loop, like Escher's Print Gallery. But what does it have to do with mathematics? That is what Gödel discovered. His idea was to use mathematical reasoning in exploring mathematical reasoning itself. This notion of making mathematics « introspective » proved to be enormously powerful, and perhaps its richest implication was the one Gödel found: Gödel's Incompleteness Theorem”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 17.

166 “Is G a TNT-theorem? If so, then it must assert a truth. But what in fact does G assert? Its own nontheoremhood. Thus from its theoremhood would follow its nontheoremhood: a contradiction. Now what about G being a nontheorem? This is acceptable, in that it doesn't lead to a contradiction. But G's nontheoremhood is what G asserts–hence G asserts a truth. And since G is not a theorem, there exists (at least) one truth which is not a theorem of TNT”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 448, “The fascinating thing is that any such system digs its own hole; the system's own richness brings about its own downfall. The downfall occurs essentially because the system is powerful enough to have self-referential sentences. [...] It seems that with formal systems there is an analogous critical point. Below that point, a system is « harmless » and does not even approach defining arithmetical truth formally; but beyond the critical point, the system suddenly attains the capacity for self-reference, and thereby dooms itself to incompleteness”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 470.

167 “But it cannot be emphasized too strongly that this undecidability is only with respect to provability inside the system. From our outside viewpoint, it is clear that U is true”, Martin Davis (2018), The Universal Computer, p. 97.

168 All of the following explanation is derived from Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 88-91.

169 “If provability in a formal system for arithmetic is definable within the system itself, but truth is not, then provability and truth are not the same thing. So either there are provable formulae that are not true, or there are true formulae that are not provable. In the first case the system is not correct, because it proves some falsehood, and in the second case it is not complete, because it does not prove some truth. Put another way, any formal system for arithmetic that is correct is incomplete, if it allows provability in the system to be defined internally”, Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 101 (my translation). “The keystone in Gödel’s proof of the existence of undecidable propositions is the fact that provability in PM can be expressed in PM itself”, Martin Davis (2018), The Universal Computer, p. 98.

170 “The property of a formula, that it is provable, is a purely combinatorial (formal) one, in that it does not depend on the meaning of the symbols. That a formula A is provable within a specified system simply means that there is a finite sequence of formulas that begins with some axioms of the system and ends with A, and which, in addition, has the property that each formula of the sequence arises from some of the preceding ones by application of a rule of inference (where as rules of inference, in essence, only the substitution rule and the rule of implication come into play, which refer merely to simple combinatorial properties of formulas). The class of provable formulas may therefore be traced back to simple arithmetical concepts”, Gödel in a letter to Zermelo, 12 October 1931 (quoted by Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 103), in Kurt Gödel (2003), Collected Works, Vol. v, p. 42.

171 Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 104 (my translation).

172 Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 104 (my translation).

173 “That one can not capture all of mathematics in one formal system already follows according to Cantor's diagonal procedure, but nevertheless it remains conceivable that one could at least formalize certain subsystems of mathematics completely (in the syntactic sense). My proof shows that that is also impossible if the subsystem contains at least the concepts of addition and multiplication of whole numbers”, Gödel to Zermelo (12 October 1931, quoted by Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 103), in Kurt Gödel (2003), Collected Works, Vol. v, p. 429. Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 419. “One of the interesting things Gödel did was to designate each provable theorem by a sequence of integers with a corresponding situation for remarks about the theorem. This provides a numerical algorithm for each theorem and puts us in the field of numerical computation”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 174, “As an informal introduction to the motivation for Gödel numbering, suppose we have a theory which talks about positive integers. We are interested in « keeping track » of the sentences of the theory within the theory itself; this cannot be done directly since the theory talks about numbers and not about sentences. We can, however, achieve the same result indirectly by assigning a number to each sentence (the so-called « Gödel number » of the sentence) and then translating any statement about the sentences to the corresponding statement about their Gödel numbers”, Raymond M. Smullyan (1961), Theory of Formal Systems, p. 11.

174 “Typographical rules for manipulating numerals are actually arithmetical rules for operating on numbers. This simple observation is at the heart of Gödel's method, and it will have an absolutely shattering effect. It tells us that once we have a Gödel-numbering for any formal system, we can straightaway form a set of arithmetical rules which complete the Gödel isomorphism. The upshot is that we can transfer the study of any formal system – in fact the study of all formal systems – into number theory”, “The whole point of Gödel-numbering is that it shows how, even without formalizing quotation, one can get self-reference: through a code”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 264 and p. 738-739.
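
A toy Gödel numbering makes the mechanism palpable. In the sketch below (the symbol codes and the prime-exponent scheme are illustrative choices, not Gödel's own), a string of symbols with codes c1, c2, ..., cn becomes the single number 2^c1 · 3^c2 · 5^c3 · ..., so that statements about formulas become statements about numbers, and the formula can be recovered by factoring:

```python
# A toy Gödel numbering: each symbol gets a small code, and a formula
# becomes one natural number via prime-power encoding.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
CODE = {"0": 1, "s": 2, "=": 3, "+": 4, "(": 5, ")": 6}
DECODE = {v: k for k, v in CODE.items()}

def godel_number(formula):
    """Encode a string of symbols as 2^c1 * 3^c2 * 5^c3 * ..."""
    n = 1
    for p, symbol in zip(PRIMES, formula):
        n *= p ** CODE[symbol]
    return n

def godel_decode(n):
    """Recover the formula by reading off prime exponents in order."""
    symbols = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e == 0:
            break
        symbols.append(DECODE[e])
    return "".join(symbols)

g = godel_number("0+0=0")            # one number encodes the whole formula
print(g, godel_decode(g) == "0+0=0")  # 3056130 True
```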

175 Kurt Gödel (1995), “[On undecidable sentences] (*1931?)”, in Kurt Gödel, Collected Works, Volume iii, p. 33 (quoted by Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 98‑99).

176 “Viewed from the outside, these systems involve relationships among strings of symbols. On the inside, these systems can express propositions about various mathematical objects including natural numbers. Moreover, it isn’t difficult to think of ways that strings of symbols can be coded by natural numbers. Aha! By using such codes, the outside can be brought inside”, Martin Davis (2018), The Universal Computer, p. 95, “The crucial step in Gödel’s proof was his demonstration that the property of a natural number of being the code of a proposition provable in PM is itself expressible in PM. Using this fact, Gödel could construct propositions in PM that to one who knew the specific code being used could be seen to express the assertion that some proposition is not provable in PM. That is, he was able to construct propositions A that, read via the encoding, assert that some proposition B is not provable in PM”, Martin Davis (2018), The Universal Computer, p. 96, “And this is not an accidental feature of TNT; it happens because the architecture of any formal system can be mirrored inside N (number theory)”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 270. “Gödel's string G, and a Bach fugue: they both have the property that they can be understood on different levels”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 285.

177 Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 18.

178 Alan Turing, “On Computable Numbers”, p. 231.

179 “« On Computable Numbers » is regarded as the founding publication of the modern science of computing. It contributed vital ideas to the development, in the 1940s, of the electronic stored-programme digital computer”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 6.

180 “It is possible to produce the effect of a computing machine by writing down a set of rules of procedure and asking a man to carry them out. Such a combination of a man with written instructions will be called a « Paper Machine ». A man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 416. “A systematic method – sometimes also called an effective method and a mechanical method – is any mathematical method of which all the following are true: the method can, in practice or in principle, be carried out by a human computer working with paper and pencil; the method can be given to the human computer in the form of a finite number of instructions; the method demands neither insight nor ingenuity on the part of the human being carrying it out; the method will definitely work if carried out without errors; the method produces the desired result in a finite number of steps; or, if the desired result is some infinite sequence of symbols (e.g. the digital expansion of π), then the method produces each individual symbol in the sequence in some finite number of steps”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 42.

181 Paolo Zellini (2022), Discreto e continuo, p. 264 (my translation). “An effective procedure is a set of rules telling you, moment by moment, what to do to achieve a particular end; it is an algorithm”, Richard P. Feynman (1996), Lectures on Computation, p. 52.

182 Richard P. Feynman (1996), Lectures on Computation, p. 54.

183 “In 1936, however, the unexpected happened. Independently of one another, and almost simultaneously, Turing and Post published two articles [...] in which they arrived at the definition that would convince everyone: computable means programmable on a computer. Since the computer obviously did not yet exist, in order to give their definition Turing and Post had to invent it. And they did so using the tools that Gödel had provided them in his 1931 article: in particular arithmetization, which with hindsight turned out to be nothing other than the digitalization so much talked about today”, Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 153 (my translation).

184 “In a recent paper Alonzo Church has introduced an idea of « effective calculability », which is equivalent to my « computability »”, Alan Turing, “On Computable Numbers”, p. 231. Cf. <https://plato.stanford.edu/entries/church-turing/>, “The importance of Turing's proposal is this. If the proposal is correct – i.e. if the Church-Turing thesis is true – then talk about the existence or non-existence of systematic methods can be replaced throughout mathematics and logic by talk about the existence or non-existence of Turing-machine programmes”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 43, “But further, Turing asserted that if anything could be done by an effective procedure, it could be done by his Universal machine, and vice versa”, Richard P. Feynman (1996), Lectures on Computation, p. 54-55, “For the Church-Turing Thesis is certainly one of the most important concepts in the philosophy of mathematics, brains, and thinking. Actually, like tea, the Church-Turing Thesis can be given in a variety of different strengths. [...] Church-Turing Thesis, Standard Version: Suppose there is a method which a sentient being follows in order to sort numbers into two classes. Suppose further that this method always yields an answer within a finite amount of time, and that it always gives the same answer for a given number. Then: Some terminating FlooP program (i.e., some general recursive function) exists which gives exactly the same answers as the sentient being's method does”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 561.

185 Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 104‑105.

186 Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 152. “In connection with metamathematical concepts (like being the code number of a proof), Gödel had introduced a class of functions defined on the natural numbers that he had called recursive. He chose this name because functions belonging to this class were typically defined by specifying their value for an initial input value, and then specifying how, knowing the value of the function for a given input value, to specify the value of the function for the next input value. He remarked in these lectures that the recursive functions had the important property that their values could be computed by a « finite procedure, » or as we would say, by an algorithm. He went further and suggested that the class of recursive functions could be extended to a larger class, still embodying the idea of using recursion that would include all functions defined on the natural numbers whose values could be calculated by an algorithm. And, as a step in that direction, he defined a class of functions he called « general recursive »”, Martin Davis (2018), The Universal Computer, p. 106.

187 “One of my conclusions was that the idea of a « rule of thumb » process and a « machine process » were synonymous”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 378, “The normal form principle for puzzles closely parallels the Church-Turing thesis, which says that given any systematic method, we can find a corresponding Turing machine that is equivalent to it”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 577.

188 Kurt Gödel (1986), “Postscriptum (3 June 1964)”, in Kurt Gödel, Collected Works, Volume i, p. 370, quoted by Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 154-155: “Gödel did both in the Postscriptum (1964) to his 1934 lectures, choosing to use Turing's and Post's formulation in terms of mechanical computation, which he had found particularly convincing: A formal system is simply a mechanical procedure for producing formulas, called provable, and its essence is that reasoning is completely replaced by mechanical operations carried out on the formulas” (my translation).

189 “For instance if Gödel's theorem is to be used we need in addition to have some means of describing logical systems in terms of machines, and machines in terms of logical systems”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 450. “Gödel's famous incompleteness theorem of 1931 is, however, importantly less general than the above statement, since it concerns only one particular systematic method of proving mathematical theorems, the system set out by Whitehead and Russell in Principia Mathematica [...] Gödel did later generalize his result of 1931 to all formal systems (containing a certain amount of arithmetic), but emphasized the importance that Turing's work played in this generalization. Gödel said in 1964: [D]ue to A. M. Turing's work, a precise and unquestionably adequate definition of the general concept of formal system can now be given... Turing's work gives an analysis of the concept of « mechanical procedure » (alias « algorithm » or « computation procedure » or « finite combinatorial procedure »). ... A formal system can simply be defined to be any mechanical procedure for producing formulas, called provable formulas”, Martin Davis (ed.) (1965), The Undecidable, p. 71-73. “thanks to Turing's abstract logical work, von Neumann knew that by making use of coded instructions stored in memory, a single machine of fixed structure could in principle carry out any task for which an instruction table can be written”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 22, “These ideas are all closely related to Gödel's beautiful result whereby mathematical logics are reduced to a kind of computation theory (above, p. 173). Indeed, he showed that the basic concepts of logics are recursive, which is equivalent to saying they can be computed on a Turing machine”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 275. “The notion of formal system and mechanical operation are intimately connected; either can be defined in terms of the other. If we should first define a mechanical operation directly (e.g., in terms of Turing machines), we would then define a « formal » system as one whose set of theorems could be generated by such a machine (that is to say, the machine grinds out all the theorems, one after another, but never grinds out a non-theorem). Alternatively (following the lines of Post), we can first define a formal system directly and define an operation to be « mechanical » or « recursive » if it is computable in some formal systems”, Raymond M. Smullyan (1961), Theory of Formal Systems, p. 1.

190 “Gödel later generalized this result, pointing out that « due to A. M. Turing's work, a precise and unquestionably adequate definition of the general concept of formal system can now be given », with the consequence that incompleteness can « be proved rigorously for every consistent formal system containing a certain amount of finitary number theory ». The definition made possible by Turing's work is this (in Gödel's words): A formal system can simply be defined to be any mechanical procedure for producing formulas, called provable formulas”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 48. “There are a number of results of mathematical logic which can be used to show that there are limitations to the powers of discrete-state machines. The best known of these results is known as Gödel's theorem, and shows that in any sufficiently powerful logical system statements can be formulated which can neither be proved nor disproved within the system, unless possibly the system itself is inconsistent”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 450.

191 To see a Turing machine in action, several simulators are available online, such as <https://morphett.info/turing/turing.html>.
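
A machine of this kind can also be sketched in a few lines of code. The instruction table below (a hypothetical binary-increment machine, written for this note rather than drawn from Turing) maps each (state, scanned symbol) pair to a write-move-next-state triple, the same scheme the simulators linked above animate:

```python
def run_turing_machine(table, tape, state="q0", head=0, steps=1000):
    """A minimal Turing machine: 'table' maps (state, symbol) to
    (write, move, next_state), mirroring Turing's instruction table."""
    tape = dict(enumerate(tape))  # sparse tape; blank cells read "_"
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Hypothetical table: add 1 to a binary number. The head walks to the
# rightmost digit, then carries back leftwards.
table = {
    ("q0", "0"): ("0", "R", "q0"),
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "_"): ("_", "L", "q1"),   # reached the end: start carrying
    ("q1", "1"): ("0", "L", "q1"),   # 1 + carry = 0, carry continues
    ("q1", "0"): ("1", "L", "halt"), # 0 + carry = 1, done
    ("q1", "_"): ("1", "L", "halt"), # overflow: write a new leading 1
}
print(run_turing_machine(table, "1011"))  # -> "1100" (11 + 1 = 12)
```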

192 “Turing introduced two fundamental assumptions: discreteness of time and discreteness of state of mind. To a Turing machine, time exists not as a continuum, but as a sequence of changes of state. Turing assumed a finite number of possible states at any given time. « If we admitted an infinity of states of mind, some of them will be “arbitrarily close” and will be confused, » he explained. « The restriction is not one which seriously affects computation, since the use of more complicated states of mind can be avoided by writing more symbols on the tape. » […] Each step in the relationship between tape and Turing machine is determined by an instruction table listing all possible internal states, all possible external symbols, and, for every possible combination, what to do (write or erase a symbol, move right or left, change the internal state) in the event that combination comes up. The Turing machine follows instructions and never makes mistakes. Complicated behavior does not require complicated states of mind. [...] Behavioral complexity is equivalent whether embodied in complex states of mind (m-configurations) or complex symbols (or strings of simple symbols) encoded on the tape”, George B. Dyson (2012), Turing’s Cathedral, p. 248.

193 “We shall have a description of the machine in the form of an Arabic numeral. The integer represented by this numeral may be called a description number (D.N) of the machine. The D.N determine the S.D and the structure of the machine uniquely”, Alan Turing, “On Computable Numbers”, p. 240-241.

194 Alan Turing, “On Computable Numbers”, p. 241-242, “When we have decided what machine we wish to imitate we punch a description of it on the tape of the universal machine. This description explains what the machine would do in every configuration in which it might find itself. The universal machine has only to keep looking at this description in order to find out what it should do at each stage. Thus the complexity of the machine to be imitated is concentrated in the tape and does not appear in the universal machine proper in any way. If we take the properties of the universal machine in combination with the fact that machine processes and rule of thumb processes are synonymous we may say that the universal machine is one which, when supplied with the appropriate instructions, can be made to do any rule of thumb process”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 383, “A digital computer is a universal machine in the sense that it can be made to replace any machine of a certain very wide class. […] It will replace any rival design of calculating machine, that is to say any machine into which one can fit data and which will later print out results. In order to arrange for our computer to imitate a given machine it is only necessary to programme the computer to calculate what the machine in question would do under given circumstances, and in particular what answers it would print out. The computer can then be made to print out the same answers. […] It should be noticed that there is no need for there to be any increase in the complexity of the computer used. [...] this may appear paradoxical, but the explanation is not difficult. The imitation of a machine by a computer requires not only that we should have made the computer, but that we should have programmed it appropriately. The more complicated the machine to be imitated the more complicated must the programme be”, Alan Turing, “Can Digital Computers Think?”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 482-483, “The UTM's [Universal Turing Machine] internal program then takes this information and mimics the action of the original machine. [...] what is impressive about that UTM is that all we have to do is give it a list of quintuplets and some initial data”, Richard P. Feynman (1996), Lectures on Computation, p. 68.

195 “« On Computable Numbers » is the birthplace of the fundamental principle of the modern computer, the idea of controlling the machine's operations by means of a programme of coded instructions stored in the computer's memory”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 6, “operating in accordance with this table of instructions, the universal machine is able to carry out any tasks for which an instruction table can be written. The trick is to put an instruction table–programme–for carrying out the desired task onto the tape of the universal machine. […] Turing's greatest contributions to the development of the modern computer were: the idea of controlling the function of a Computing Machine by storing a programme of symbolically encoded instructions in the machine’s memory. His demonstration (in section 7 of « On computable numbers ») that, by this means, a single machine of fixed structure is able to carry out every computation that can be carried out by any Turing machine whatsoever, i.e. is universal”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 15.

196 “The difference is that in the Universal Turing machine, but not the Analytical Engine, there is no fundamental distinction between programme and data. It is the absence of such a distinction that marks off a stored-programme computer from a programme-controlled computer. As Gandy put the point, Turing’s ‘universal machine is a stored-program machine [in that], unlike Babbage’s all-purpose machine, the mechanisms used in reading a program are of the same kind as those used in executing it’”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 30.

197 “The plugged pattern can be changed from one problem to another, but–at least in the simplest arrangement–it is fixed for the entire duration of a problem”, John von Neumann (2012), The Computer & the Brain, p. 16-17.

198 “Note the important difference between this mode of control and the plugged one, described earlier: there the control sequence points were real, physical objects, and their plugged connections expressed the problem. Now the orders are ideal entities, stored in the memory, and it is thus the contents of this particular segment of the memory that express the problem. Accordingly, this mode of control is called « memory-stored control. »”, John von Neumann (2012), The Computer & the Brain, p. 19-20.

199 “Before Turing, the general supposition was that in dealing with such machines the three categories – machine, program, and data – were entirely separate entities. The machine was a physical object; today we would call it hardware. The program was the plan for doing a computation, perhaps embodied in punched cards or connections of cables in a plugboard. Finally, the data was the numerical input. Turing’s universal machine showed that the distinctness of these three categories is an illusion. A Turing machine is initially envisioned as a machine with mechanical parts, hardware. But its code on the tape of the universal machine functions as a program, detailing the instructions to the universal machine needed for the appropriate computation to be carried out. Finally, the universal machine in its step-by-step actions sees the digits of a machine code as just more data to be worked on”, Martin Davis (2018), The Universal Computer, p. 143.

200 “What we want is a machine that can learn from experience. The possibility of letting the machine alter its own instructions provides the mechanism for this”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 393, “Since the orders that exercise the entire control are in the memory, a higher degree of flexibility is achieved [...] Indeed, the machine, under the control of its orders, can extract numbers (or orders) from the memory, process them (as numbers!), and return them to the memory (to the same or to other locations); i.e. it can change the contents of the memory […] Hence it can, in particular, change the orders (since these are in the memory!)–the very orders that control its actions. Thus all sorts of sophisticated order-systems become possible, which keep successively modifying themselves and hence also the computational processes that are likewise under their control. In this way more complex processes than mere iterations become possible”, John von Neumann (2012), The Computer & the Brain, p. 20, “One of the most important reasons for storing instructions in the memory is the fact that one needs to modify them. The most obvious such modification is to change the address in an instruction. Only in this way can a sub-routine be useful in many different parts of a problem or in many different problems. […] This ability to modify the addresses of instructions is not merely aesthetically elegant, it is absolutely fundamental”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 265.
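
The point of these quotations, that orders stored in memory can themselves be processed as numbers, can be miniaturized. The toy machine below is a deliberate caricature, not any historical instruction set: instructions and data share one memory, and instruction 1 repeatedly patches the address field of instruction 0, so that a single short loop sweeps a whole block of data:

```python
# A toy stored-program machine (an illustrative sketch). Instructions
# and data live in one memory, so instruction 1 can rewrite the address
# field of instruction 0: the address modification Goldstine calls
# "absolutely fundamental".
memory = {
    0: ["ADD", 10],                # acc += memory[10] (10 will be patched)
    1: ["INCADDR", 0],             # memory[0][1] += 1 -> self-modification
    2: ["JUMPIFNEQ", (0, 13, 0)],  # if memory[0]'s address != 13, goto 0
    3: ["HALT", None],
    10: 7, 11: 11, 12: 23,         # the data block to be summed
}

acc, pc = 0, 0
while True:
    op, arg = memory[pc]
    pc += 1
    if op == "ADD":
        acc += memory[arg]
    elif op == "INCADDR":
        memory[arg][1] += 1        # the running program edits itself
    elif op == "JUMPIFNEQ":
        addr, value, target = arg
        if memory[addr][1] != value:
            pc = target
    elif op == "HALT":
        break
print(acc)  # 41 = 7 + 11 + 23: one short loop swept the whole block
```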

201 “This is a very simple idea, but is of the utmost importance. The idea of the iterative cycle of instruction”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 389, “The great power of the computer lies in its ability to iterate repeatedly the same short description of a basic mathematical process”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 343. “The functions in Gödel’s original class of recursive functions (renamed primitive recursive functions by Kleene) were built by a succession of such recursive definitions”, Martin Davis (2018), The Universal Computer, p. 201, n. 27, “It is the residual that is the true engine of the algorithm, because at each step it triggers the series of operations needed to compute an ever more accurate approximation of the solution. The residual is thus, together with the structure of the iteration, the true regulating element of the whole procedure. It also contains the information needed to stop the computation at the required level of precision, according to a scheme that does not differ much from the feedback principle, in which Norbert Wiener would see a possible ground of affinity between living organisms and artificial mechanisms”, Paolo Zellini (2022), Discreto e continuo, p. 212 (my translation).
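
Zellini's residual-driven iteration is visible in the oldest of examples. In the sketch below (Heron's method for square roots, chosen here only as an illustration), the residual x² − a both regulates the next correction and decides when to stop:

```python
def heron_sqrt(a, tolerance=1e-12):
    """Approximate the square root of a non-negative number a by
    iteration; the residual drives and halts the computation."""
    x = a if a > 1 else 1.0      # any positive starting guess will do
    while True:
        residual = x * x - a     # how far the current guess is off
        if abs(residual) < tolerance:
            return x             # the residual stops the computation...
        x = (x + a / x) / 2      # ...and regulates the next correction

print(heron_sqrt(2))  # 1.4142135623730951
```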

202 “The fundamental point of Turing's analysis has to do with infinite sequences of binary digits”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 274.

203 Alan Turing, “On Computable Numbers”, p. 230. “Just as any set of typographical rules generates a set of theorems, a corresponding set of natural numbers will be generated by repeated applications of arithmetical rules. These producible numbers play the same role inside number theory as theorems do inside any formal system. Of course, different numbers will be producible, depending on which rules are adopted. « Producible numbers » are only producible relative to a system of arithmetical rules. [...] Note that the producible numbers (in any given system) are defined by a recursive method: given numbers which are known to be producible, we have rules telling how to make more producible numbers”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 264.

204 Cf. Martin Davis (2018), The Universal Computer, p. 140-141.

205 Martin Davis (2018), The Universal Computer, p. 141. “We are now in a position to show that the Entscheidungsproblem cannot be solved. Let us suppose the contrary. Then there is a general (mechanical) process for determining whether Un(M) is provable. By Lemmas 1 and 2, this implies that there is a process for determining whether M ever prints 0, and this is impossible, by §8. Hence the Entscheidungsproblem cannot be solved”, Alan Turing, “On Computable Numbers”, p. 262.

206 “Turing was able to construct, by a method similar to Gödel's, functions that could be given a finite description but could not be computed by finite means. One of these was the halting function: given the number of a Turing machine and the number of an input tape, it returns either the value 0 or the value 1 depending on whether the computation will ever come to a halt. Turing calls the configurations that halt « circular » and the configurations that keep going indefinitely « circle free », and demonstrated that the unsolvability of the halting problem implies the unsolvability of a broad class of similar problems, including the Entscheidungsproblem. Contrary to Hilbert's expectations, no mechanical procedure can be counted on to determine the provability of any given mathematical statement in a finite number of steps”, George B. Dyson (2012), Turing’s Cathedral, p. 248-249, “Turing’s method makes use of his proof that no computing machine can solve the printing problem. He showed that if a Turing machine could tell, of any given statement, whether or not the statement is provable in FOPC, then a Turing machine could tell, of any given Turing machine, whether or not it ever prints ‘0’. Since, as he had already established, no Turing machine can do the latter, it follows that no Turing machine can do the former. The final step of the argument is to apply Turing's thesis: if no Turing machine can perform the task in question, then there is no systematic method for performing it”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 52.
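
The diagonal argument behind this unsolvability can be phrased as a short program. The sketch below is a sketch of the reasoning, not a real decider: halts is a stand-in stub, since the whole point is that no correct implementation of it can exist:

```python
def halts(program, argument):
    """Pretend decider for the halting problem (an assumption made
    only for the sake of the argument; any fixed answer will do)."""
    return True

def paradox(program):
    # Do the opposite of whatever the alleged decider predicts
    # about running 'program' on its own text.
    if halts(program, program):
        while True:
            pass        # loop forever, refuting the prediction "halts"
    return "halted"     # halt, refuting the prediction "loops"

# If halts(paradox, paradox) returns True, then paradox(paradox) loops,
# so the decider was wrong; if it returns False, paradox(paradox) halts,
# and the decider was wrong again. Either way, no correct halts exists.
print(halts(paradox, paradox))  # the stub's claim, necessarily unreliable
```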

207 “The fact of the matter is that there is no systematic method of testing puzzles to see whether they are solvable or not. […] But it is not merely that the test has never been found. It has been proved that no such tests ever can be found”, Alan Turing, “Solvable and Unsolvable Problems”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 582.

208 Alan Turing, “On Computable Numbers”, p. 241. Cf. “We are always able to obtain from the rules of a formal logic a method of enumerating the propositions proved by its means. We then imagine that all proofs take the form of a search through this enumeration for the theorem for which a proof is desired. In this way ingenuity is replaced by patience”, Alan Turing, “Systems of Logic Based on Ordinals”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 193.
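
The enumeration Turing describes, patience in place of ingenuity, is easy to sketch: generate every string over an alphabet in order of length and wait for the one sought. As noted above (note 153), this is only a semi-decision procedure; it succeeds whenever the target exists and runs forever when it does not. The target string here is an arbitrary stand-in for “the theorem for which a proof is desired”:

```python
from itertools import count, product

def enumerate_strings(alphabet="ab"):
    """Yield every string over the alphabet, in order of length."""
    for length in count(1):
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)

target = "baa"  # an arbitrary stand-in for the sought theorem
for n, candidate in enumerate(enumerate_strings()):
    if candidate == target:
        print(f"found after {n + 1} candidates")  # ingenuity -> patience
        break
```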

209 Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 146.

210 “So there are some computational problems (e.g. determining whether a UTM will halt) that cannot be solved by any Turing machine. This is Turing's main result”, Richard P. Feynman (1996), Lectures on Computation, p. 88.

211 Initially a follower of Hilbert’s program, von Neumann shifted from pure to applied mathematics after Gödel’s results and went on to accomplish extraordinary feats. Beyond his contributions to computing and mathematical physics, he created (based on an idea of Stan Ulam) the Monte Carlo method, developed game theory and the theory of economic behavior with Oskar Morgenstern, and established the theoretical foundations of self-replication (biological, mechanical, and digital), predating the discovery of DNA’s function. Cf. “Viewing the problem of self-replication and self-reproduction through the lens of formal logic and self-referential systems, von Neumann applied the results of Gödel and Turing to the foundations of biology”, George B. Dyson (2012), Turing’s Cathedral, p. 285.

212 “Julian Bigelow, von Neumann’s chief engineer, recollected: The person who really... pushed the whole field ahead was von Neumann, because he understood logically what [the stored-programme concept] meant in a deeper way than anybody else...The reason he understood it is because, among other things, he understood a good deal of the mathematical logic which was implied by the idea, due to the work of A. M. Turing”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 23.

213 “The machine described in the paper (variously known as the IAS, or Princeton, or von Neumann machine) was constructed and copied (never exactly), and the copies copied…”, Paul Armer, quoted by Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 256.

214 John von Neumann (1945), First Draft of a Report on the EDVAC, Contract No. W-670-ORD-4926 between the U.S. Army Ordnance Department and the University of Pennsylvania, Moore School of Electrical Engineering, University of Pennsylvania, June 30, p. 7-10.

215 Jack B. Copeland (ed.) (2004), The Essential Turing, p. 27.

216 “« Random access » meant that all individual memory locations – collectively constituting the machine’s internal « state of mind » – were equally accessible at any time. « High speed » meant that the memory was accessible at the speed of light”, George B. Dyson (2012), Turing’s Cathedral, p. 5.

217 George B. Dyson (2012), Turing’s Cathedral, p. x.

218 However, hybrid machines exist, and a digital machine can always simulate the behavior of an analog machine.

219 “In an analog machine each number is represented by a suitable physical quantity, whose value, measured in some pre-assigned unit, is equal to the number in question”, John von Neumann (2012), The Computer & the Brain, p. 3, “In a decimal digital machine each number is represented in the same way as in conventional writing or printing, i.e. as a sequence of decimal digits”, John von Neumann (2012), The Computer & the Brain, p. 6, “We may call a machine « discrete » when it is natural to describe its possible states as a discrete set, the motion of the machine occurring by jumping from one state to another. The states of « continuous » machinery on the other hand form a continuous manifold, and the behaviour of the machine is described by a curve on this manifold”, Alan Turing, “Intelligent Machinery”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 412, “Machines of this class are generically described as digital or arithmetical. The former name clearly calls attention to the quantities employed and the latter to the processes performed on these quantities. The premier device in this category is the abacus, the simplest form of digital computer still used in many places throughout the world. Analog machines are very different. They are often described as being continuous or measurement. They are rather difficult to explain and describe. Here the former name again calls attention to the quantities employed and the latter to the processes performed on them. In all cases analog machines depend upon the representation of numbers as physical quantities such as lengths of rods, direct current voltages, etc. Usually they are developed for a fairly specific purpose. […] The designer of an analog device decides what operations he wishes to perform and then seeks a physical apparatus whose laws of operation are analogous to those he wishes to carry out. He next builds the apparatus and solves his problem by measuring the physical, and hence continuous, quantities involved in the apparatus”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 39-40, “Given a mathematical formula, they were, in principle at least, able to invent a machine exactly describable by the formula. This is what the analog computer is. […] [the digital approach] is the realization that a machine can be built to imitate the human method of calculating: to count and to build up the elementary operations – addition, subtraction, multiplication, division – by counting”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 140-142.

220 “That the machine is digital however has more subtle significance. It means firstly that numbers are represented by sequences of digits which can be as long as one wishes. One can therefore work to any desired degree of accuracy”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 378. “In contrast to the physical processes that mediate this process in analog machines, in this case rules of strict and logical character control this operation”, John von Neumann (2012), The Computer & the Brain, p. 9. “Differential analyzers were not « digital » devices operating on numbers digit by digit. Rather numbers were represented by physical quantities that could be measured (like electric currents or voltages) and components were linked together to emulate the desired mathematical operations. These « analog » machines were limited in their accuracy by that of the instruments used for the measurements. The ENIAC was a digital device, the first electronic machine able to deal with the same kind of mathematical problems as differential analyzers. Its designers built it of components functionally similar to those in differential analyzers, relying on the capabilities of vacuum tube electronics for greater speed and accuracy”, Martin Davis (ed.) (1965), The Undecidable, p. 157.

221 It is challenging to succinctly and precisely define what a number is. Put simply, a number is an arithmetic value representing numerosity or quantity, used in counting and calculation. Numbers can be represented (denoted) by numerals, either as words (“ten”) or as digits (“10”), which are graphical signs. Initially, numbers were perceived as properties of collections or sets of objects, but over time, the concept evolved to become increasingly abstract.

222 “According to some experts in cognitive archaeology, such as Karenleigh Overmann (but others as well), these fingers are traces of numbers: they are sequential, they are intentional, they are counted. […] Fingers and numbers naturally go well together. They engage the parietal lobes, which integrate tactile, visual, and spatial sensations, the play of the fingers between time and space. It is no coincidence that fingers are the first thing children use to count. It is literally the manipulation of numbers”, Silvia Ferrara (2021), Il salto, p. 27-28.

223 George B. Dyson (2012), Turing’s Cathedral, p. 250.

224 “bits can represent words, pictures, sounds, music, and movies as well as product codes, film speeds, movie ratings, an invasion of the British army, and the intentions of one’s beloved. But most fundamentally, bits are numbers. All that needs to be done when bits represent other information is to count the number of possibilities. This determines the number of bits that are needed so that each possibility can be assigned a number”, Charles Petzold (2000), Code. The Hidden Language of Computer Hardware and Software, Washington, Microsoft Press, p. 85, “An arbitrary Turing machine T will come with an arbitrary set of possible symbols, but with thought you should be able to see that we can always label the distinct symbols by binary numbers and work with this”, Richard P. Feynman (1996), Lectures on Computation, p. 82.

225 “It is customary to use the symbols « 0 » and « 1 » as the names of the two states, but any two distinct symbols (marks), such as a circle and a cross, will do. It is often useful to think of the 0 and 1 as only a pair of arbitrary symbols, not as numbers”, Richard W. Hamming (1986), Coding and Information Theory, Upper Saddle River, Prentice-Hall, p. 7.

226 Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 384-385, “There are several reasons for this choice [of the binary system], the outstanding ones of which are these: the greater simplicity and speed with which the elementary operation can be performed; the fact that electronic circuitry and technology tends to be binary in character; and the fact that the control portions of a computer are not arithmetical but rather logical in nature – logic is a binary system”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 260, “Binary arithmetic has a simpler and more one-piece logical structure than any other, particularly than the decimal one”, John von Neumann (1945), First Draft of a Report on the EDVAC, p. 6. The correspondence between binary arithmetic, logic, and electricity is beautifully explained by Petzold.

227 Cf. “Hence, instead of determining the measure of formal agreement of the symbols of Logic with those of Number generally, it is more immediately suggested to us to compare them with symbols of quantity admitting only of the values 0 and 1. Let us conceive, then, of an Algebra in which the symbols x, y, z, &c., admit indifferently of the values 0 and 1, and of these values alone. The laws, the axioms, and the processes, of such an Algebra will be identical in their whole extent with the laws, the axioms, and the processes of an Algebra of Logic. Difference of interpretation will alone divide them”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 37-38, “This turns our abstract mathematical problem into a matter of real world « mechanics »”, Richard P. Feynman (1996), Lectures on Computation, p. 20.

228 “The power of mathematics in applications usually lies in revealing similarities or even identities between previously unknown material and well-established material. [...] this then made knowledge about heat immediately transferable to electric cables”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 86.

229 “To enable the machine to compute, i.e. to operate on these numbers according to a predetermined plan, it is necessary to provide organs (or components) that can perform on these representative quantities the basic operations of mathematics”, John von Neumann (2012), The Computer & the Brain, p. 3.

230 “[The basic arithmetical operations] all are patterns of alternative actions, organized in highly repetitive sequences, and governed by strict and logical rules”, John von Neumann (2012), The Computer & the Brain, p. 10, “In other words, the « arithmetical depth » of the necessary operations is usually quite great. Note that the « logical depth » is still greater, and by a considerable factor – that is, if, e.g. the four species of arithmetic are broken down into the underlying logical steps (cf. above), each one of them is a long logical chain by itself”, John von Neumann (2012), The Computer & the Brain, p. 27, “Note that the first‑mentioned logical operations [sense coincidences, combine stimuli, and possibly sense anticoincidences] are the elements from which the arithmetical ones are built up”, John von Neumann (2012), The Computer & the Brain, p. 30.

231 “Reducing logical reasoning to formal rules is an endeavor going back to Aristotle. It was the underlying basis for Leibniz’s dream of a universal computational language. And it underlay Turing’s achievement in showing that all computation could be carried out on his universal machines. Computation and logical reasoning are indeed two sides of the same coin. This insight is used not only to make it possible to program computers to perform a bewildering variety of tasks, but indeed in the very way that computers are designed and built”, Martin Davis (2018), The Universal Computer, p. 168.

232 “The logical nature of the digital sum becomes even clearer when the binary (rather than decimal) system is used. […] Multiplication: the primarily logical character is even more obvious – and the structure more involved”, John von Neumann (2012), The Computer & the Brain, p. 9; cf. Richard P. Feynman (1996), Lectures on Computation, p. 6. Arithmetic multiplication corresponds to the logical connective and, where the result is true (1) only when both terms are true: 0 × 0 = 0, 0 × 1 = 0, 1 × 0 = 0, 1 × 1 = 1.
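
The identity between bit multiplication and the connective and can be checked mechanically. A minimal sketch in Python (our own illustration, not drawn from the works cited above):

```python
# The truth table of binary multiplication coincides with that of AND:
# the result is 1 exactly when both inputs are 1.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} × {b} = {a * b}    {a} AND {b} = {int(bool(a) and bool(b))}")
```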

233 A logic gate is a fundamental component of digital circuits that performs a basic logical operation on one or more binary inputs and produces a single binary output.

234 “All of these gates are examples of « switching functions », which take as input some binary-valued variables and compute some binary function. Claude Shannon was the first to apply the rules of Boolean algebra to switching networks in his MIT Master's thesis in 1937. Such switching functions can be implemented electronically with basic circuits called, appropriately enough, « gates ». The presence of an electrical signal on a wire is a « 1 » (or « true »), the absence a « 0 » (or « false »). […] The simplest operation of all is an « identity » or « do-nothing » operation. This is just a wire coming into a box and then out again, with the same signal on it. [...] The next simplest is a box which « negates » the incoming signal. If the input is a 1, then the output will be 0, and vice versa”, Richard P. Feynman (1996), Lectures on Computation, p. 23-24.

235 “That syllogism, conversion, &c., are not the ultimate processes of Logic. It will be shown in this treatise that they are founded upon, and are resolvable into, ulterior and more simple processes which constitute the real elements of method in Logic”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 10. “The active organs are the following. First, organs which perform the basic logical actions: sense coincidences, combine stimuli, and possibly sense anticoincidences (no more than this is necessary, although sometimes organs for more complex logical operations are also provided)”, John von Neumann (2012), The Computer & the Brain, p. 29-30, “« And » and « or » are the basic operations of logic. Together with « no » (the logical operation of negation) they are a complete set of basic logical operations–all other logical operations, no matter how complex, can be obtained by suitable combinations of these”, John von Neumann (2012), The Computer & the Brain, p. 54, “Now I've been very happy to say that with a so-called « complete set » of operators, you can do anything, that is, build any logical function”, Richard P. Feynman (1996), Lectures on Computation, p. 40, “There are two operations in this system which we may call + and x, or we may say or and and. It is most fortunate for us that all logics can be comprehended in so simple a system, since otherwise the automation of computation would probably not have occurred – or at least not when it did”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 37-38. “One of the nice games you can play with logic gates is trying to find out which is the best set to use for a specific purpose, and how to express other operators in terms of this best set. [...] Suffice it to say that the set AND, OR and NOT is complete; with these operators, one can build absolutely any switching function. To tempt you to go further with all this cute stuff, I will note that there exist single operators that are complete!”, Richard P. Feynman (1996), Lectures on Computation, p. 25.
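
The completeness claim at the end of the note – that single operators exist from which every logical function can be built – can be illustrated with NAND. A minimal Python sketch; the function names are ours, chosen for the example:

```python
def nand(a: int, b: int) -> int:
    """NAND: 0 only when both inputs are 1."""
    return 1 - (a & b)

# NOT, AND and OR, each built from NAND alone.
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

# Verify against Python's built-in bitwise operators.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b) and or_(a, b) == (a | b)
```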

236 “Beyond the capability to execute the basic operations singly, a computing machine must be able to perform them according to the sequence – or rather, the logical pattern – in which they generate the solution of the mathematical problem that is the actual purpose of the calculation in hand”, John von Neumann (2012), The Computer & the Brain, p. 11.

237 “Any computing machine that is to solve a complex mathematical problem must be « programmed » for this task. This means that the complex operation of solving that problem must be replaced by a combination of the basic operations of the machine. Frequently it means something even more subtle: approximation of that operation ‒ to any desired (prescribed) degree ‒ by such combinations”, John von Neumann (2012), The Computer & the Brain, p. 5.

238 “Probably the most important idea involved in instruction tables is that of standard subsidiary tables. Certain processes are used repeatedly in all sorts of different connections, and we wish to use the same instructions, from the same part of the memory every time. Thus we may use interpolation for the calculation of a great number of different functions, but we shall always use the same instruction table for interpolation. We have only to think out how this is to be done once, and forget then how it is done. Each time we want to do an interpolation we have only to remember the memory position where this table is kept, and make the appropriate reference in the instruction table which is using the interpolation”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 389. “We call the coded sequence of a problem a routine and one which is formed with the purpose of possible substitution into other routines, a subroutine”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 336.

239 “For example, one of the instructions was « put the contents of memory M into register A ». The computer doesn't speak English, so we have to encode this command into a form it can understand; in other words, into a binary string. This is the opcode, or instruction number, and its length clearly determines how many different instructions we can have. If the op code is a four-digit binary number, then we can have 2⁴ = 16 different instructions [...] The second part of the instruction is the instruction address, which tells the computer where to go to find what it has to load into A; that is, memory address M”, Richard P. Feynman (1996), Lectures on Computation, p. 14, “Orders for CC to instruct CA to carry out one of its ten specific operations enumerated in 11.4. [...] We designate this operation by the numbers 0, 1, 2, ..., 9 [...] and thereby place ourselves in the position to refer to any one of them by its number”, John von Neumann (1945), First Draft of a Report on the EDVAC, p. 40.
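
The division of an instruction word into opcode and address can be made concrete. A hedged Python sketch of a hypothetical 14-bit format (a 4-bit opcode and a 10-bit address, echoing the figures quoted above; the mnemonics and layout are invented for illustration):

```python
OPCODES = {0b0001: "LOAD", 0b0010: "STORE", 0b0011: "ADD"}  # invented mnemonics

def decode(word: int) -> tuple[str, int]:
    opcode = word >> 10            # top 4 bits: which operation (2**4 = 16 possible)
    address = word & 0b1111111111  # bottom 10 bits: where in memory (2**10 = 1024)
    return OPCODES.get(opcode, "UNKNOWN"), address

# « ADD the contents of memory location 5 »:
print(decode(0b0011_0000000101))  # -> ('ADD', 5)
```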

240 “The architectural principle that a pair of 5-bit coordinates (2⁵ = 32) uniquely identified one of the 1,024 memory locations containing a string (or « word ») of 40 bits. In 24 microseconds, any specified 40-bit string of code could be retrieved”, George B. Dyson (2012), Turing’s Cathedral, p. 6.

241 “Obtain a piece of information almost immediately by « dialling » the position of this information in the store”, Alan Turing, “Intelligent Machinery”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 415, “all existing machines and memories use « direct addressing, » which is to say that every word in the memory has a numerical address of its own that characterizes it and its position within the memory (the total aggregate of all hierarchic levels) uniquely. This numerical address is always explicitly specified when the memory word is to be read or written. […] there is never any ambiguity about the address, and the place is designated”, John von Neumann (2012), The Computer & the Brain, p. 37-38, “Accomplishment of the desired time-sequential process on a given computing apparatus turns out to be largely a matter of specifying sequences of addresses of items which are to interact”, Bigelow quoted in George B. Dyson (2012), Turing’s Cathedral, p. 275, “The 40 Selectron tubes constituted a 32-by-32-by-40-bit matrix containing 1,024 40-bit strings of code, with each string assigned a unique identity number, or numerical address, in a manner reminiscent of how Gödel had assigned what are now called Gödel numbers to logical statements in 1931. By manipulating the 10-bit addresses, it was possible to manipulate the underlying 40-bit strings–containing any desired combination of data, instructions, or additional addresses, all modifiable by the progress of the program being executed at the time”, George B. Dyson (2012), Turing’s Cathedral, p. 105-106.

242 “They took us by the hand and explained how numbers could live in houses with addresses...”, Frederic C. Williams, “Early Computers at Manchester University”, p. 328 quoted by Jack B. Copeland (ed.) (2004), The Essential Turing, p. 371, “Every word in memory has a distinct location, like a house on a street; and its location is called its address. [...] Hence the « pointer » part of an instruction is the numerical address of some word(s) in memory. There are no restrictions on the pointer, so an instruction may even « point » at itself, so that when it is executed, it causes a change in itself to be made”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 289.

243 “An order must indicate which basic operation is to be performed, from which memory registers the inputs of that operation are to come, and to which memory register its output is to go. [...] Note that this presupposed that all memory registers are numbered serially – the number of a memory register is called its « address. » [...] It is convenient to number the basic operations, too. Then an order simply contains the number of its operations and the addresses of the memory registers referred to above, as a sequence of decimal digits (in a fixed order)”, John von Neumann (2012), The Computer & the Brain, p. 18, “an order is usually « physically » the same thing as a number [...] each order is stored in the memory, in a definite memory register, that is to say, at a definite address”, John von Neumann (2012), The Computer & the Brain, p. 19.

244 “Now it could be objected here that a coded message, unlike an uncoded message, does not express anything on its own – it requires knowledge of the code. But in reality there is no such thing as an uncoded message. There are only messages written in more familiar codes, and messages written in less familiar codes. If the meaning of a message is to be revealed, it must be pulled out of the code by some sort of mechanism, or isomorphism. It may be difficult to discover the method by which the decoding should be done; but once that method has been discovered, the message becomes transparent as water. When a code is familiar enough, it ceases appearing like a code; one forgets that there is a decoding mechanism. The message is identified with its meaning”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 267.

245 “Whether we regard signs as the representatives of things and of their relations, or as the representatives of the conceptions and operations of the human intellect, in studying the laws of signs, we are in effect studying the manifested laws of reasoning. […] The elements of which all language consists are signs or symbols. Words are signs”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 24-25.

246 “A code is a system of signification that couples present entities with absent entities”, Umberto Eco (1975), Trattato di semiotica generale, p. 19, “We therefore propose to define as a sign anything that, on the basis of a previously accepted social convention, can be understood as something that stands for something else”, Umberto Eco (1975), Trattato di semiotica generale, p. 27, “there are precise conventions on the basis of which certain graphic expressions have a meaning and therefore convey a portion of content”, Umberto Eco (1975), Trattato di semiotica generale, p. 250.

247 “It is therefore necessary (as has been done) to conceive of the code as a double entity that establishes, on the one hand, semantic correlations and, on the other, rules of syntactic combinability”, Umberto Eco (1975), Trattato di semiotica generale, p. 130, “We use the word « all » in a few ways which are defined by the thought processes of reasoning. That is, there are rules which our usage of « all » obeys. We may be unconscious of them, and tend to claim we operate on the basis of the meaning of the word; but that, after all, is only a circumlocution for saying that we are guided by rules which we never make explicit. We have used words all our lives in certain patterns, and instead of calling the patterns « rules », we attribute the courses of our thought processes to the « meanings » of words. That discovery was a crucial recognition in the long path towards the formalization of number theory”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 60.

248 “Information theory is usually thought of as « sending information from here to there » (transmission of information), but this is exactly the same as « sending information from now to then » (storage of information). Both situations occur constantly when handling information. Clearly, the encoding of information for efficient storage as well as reliable recovery in the presence of « noise » is essential in computer science”, Richard W. Hamming (1986), Coding and Information Theory, p. xi.

249 Although the text uses the colorful words « information, » « transmission, » and « coding, » a close examination will reveal that all that is actually assumed is an information source of symbols s1, s2, ..., sq.”; Richard W. Hamming (1986), Coding and Information Theory, p. 1, “Logically speaking, coding theory leads to information theory, and information theory provides bounds on what can be done by suitable encoding of the information. Thus the two theories are intimately related”, Richard W. Hamming (1986), Coding and Information Theory, p. 4.

250 Claude E. Shannon (1948), “A Mathematical Theory of Communication”, The Bell System Technical Journal, 27 (3), p. 379-423. DOI: https://doi.org/10.1002/j.1538-7305.1948.tb01338.x [consulted on 06/12/2024]. “« Any difference that makes a difference » is how cybernetician Gregory Bateson translated Shannon's definition into informal terms. To a digital computer, the only difference that makes a difference is the difference between a zero and a one”, George B. Dyson (2012), Turing’s Cathedral, p. 3.

251 Charles Petzold (2000), Code, p. 70.

252 “The two main problems of representation are the following. 1. Channel encoding: How to represent the source symbols so that their representations are far apart in some suitable sense. As a result, in spite of small changes (noise) in their representations, the altered symbols can, at the receiving end, be discovered to be wrong and even possibly corrected. This is sometimes called « feed forward » error control. 2. Source encoding: How to represent the source symbols in a minimal form for purpose of efficiency”, Richard W. Hamming (1986), Coding and Information Theory, p. 2, “The central idea of error detection and correction is that the meaningful messages must be kept far apart (in the space of probable errors) if we are to handle errors successfully. If two of the possible messages are not far enough apart, one message can be carried by an error (or errors) into the other, or carried at least so close that at the receiving end we will make a mistake in identifying the source”, Richard W. Hamming (1986), Coding and Information Theory, p. 49, “using longer-than-minimum names is probably a very wise idea – but it is unlikely that this computation will convince many people to do so! Minimal-length names are the source of much needless confusion and waste of time, yours and the machine's. […] We have given the fundamental nature of error detection and error correction for white noise, namely the minimum distance between message points that must be observed”, Richard W. Hamming (1986), Coding and Information Theory, p. 50.
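
Hamming's notion of keeping meaningful messages « far apart » is measured by the Hamming distance, i.e. the number of positions in which two codewords differ. A minimal Python sketch (our own illustration):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length codewords differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Codewords at distance 3: any single-bit error still leaves the received
# word closest to the codeword actually sent, so the error can be corrected.
print(hamming_distance("0000000", "0000111"))  # -> 3
```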

253 “Try and decode this message: 011010110. You can't do it! At least, not uniquely. You do not know whether it is 01-1-01-01-10 or 011-01-01-10 or 01-101-01-10 or another possibility. There is an ambiguity due to the fact that the symbols can run into each other. A good, uniquely decodable symbol choice is necessary to avoid this”, Richard P. Feynman (1996), Lectures on Computation, p. 127.
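
Feynman's ambiguity can be reproduced directly: with a badly chosen (non-prefix-free) code, one and the same bit string admits several parses. A small Python sketch with invented codewords:

```python
CODE = {"a": "01", "b": "1", "c": "011", "d": "10"}  # invented, not prefix-free

def parses(bits: str) -> list[list[str]]:
    """All ways of decoding a bit string with the codewords above."""
    if not bits:
        return [[]]
    results = []
    for symbol, word in CODE.items():
        if bits.startswith(word):
            for rest in parses(bits[len(word):]):
                results.append([symbol] + rest)
    return results

for p in parses("011010110"):
    print(p)  # more than one decoding: the symbols « run into each other »
```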

254 “information in sense (a,i) [the mathematical theory of information as a structural theory of the statistical properties of the source] is not so much what « is said » as what « can be » said. It represents the freedom of choice available for the possible selection of an event and is therefore a statistical property of the source”, Umberto Eco (1975), Trattato di semiotica generale, p. 64.

255 “In a sense, the amount of information in a message reflects how much surprise we feel at receiving it. [...] In this respect, information is as much a property of your own knowledge as anything in the message. To clarify this point, consider someone sending you two duplicate messages: a message, then a copy. Every time you receive a communication from him, you get it twice. [...] We might say, well, the information in the two messages must be the sum of that in each [...] But this would be wrong. There is still only one message, the first, and information only comes from this first half. This illustrates how « information » is not simply a physical property of a message: it is a property of the message and your knowledge about it”, Richard P. Feynman (1996), Lectures on Computation, p. 119, “Shannon defined the information in a message to be the base two logarithm of the probability of that message appearing. Note how this ties in with our notion of information as « surprise »: the less likely the message to appear, the greater the information it carries”, Richard P. Feynman (1996), Lectures on Computation, p. 122. Cf. Umberto Eco (1975), Trattato di semiotica generale, p. 196-197.
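
Shannon's measure mentioned by Feynman – information as the (negative) base-two logarithm of a message's probability – in a minimal Python sketch (our own illustration):

```python
import math

def information_bits(p: float) -> float:
    """Information carried by a message of probability p, in bits."""
    return math.log2(1 / p)

print(information_bits(0.5))    # a fair coin flip: 1 bit
print(information_bits(1 / 8))  # one of 8 equally likely messages: 3 bits
print(information_bits(1.0))    # a certain message: 0 bits, no « surprise »
```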

256 “Whenever we talk about bits, we often talk about a certain number of bits. The more bits we have, the greater the number of different possibilities we can convey”, Charles Petzold (2000), Code, p. 75.

257 Cf. “Each decimal digit, in turn, is represented by a system of « markers ». […] A marker which can appear in ten different forms suffices by itself to represent a decimal digit. A marker which can appear in two different forms only will have to be used so that each decimal digit corresponds to a whole group. (A group of three two-valued markers allow 8 combinations; this is inadequate. A group of four such markers allows 16 combinations; this is more than adequate. Hence, groups of at least four markers must be used per decimal digit)”, John von Neumann (2012), The Computer & the Brain, p. 6.
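
Von Neumann's arithmetic of markers is easy to restate: a group of n two-valued markers allows 2ⁿ combinations, so a decimal digit needs at least four. A minimal Python sketch (our own illustration):

```python
import math

digits = 10
bits_needed = math.ceil(math.log2(digits))  # 2**3 = 8 is inadequate, 2**4 = 16 suffices
print(bits_needed)  # -> 4

# In this spirit, binary-coded decimal gives each digit its own group of 4 bits.
print([format(d, "04b") for d in range(10)])
```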

258 “The essential concept here is that information represents a choice among two or more possibilities. For example, when we talk to another person, every word we speak is a choice among all the words in the dictionary. If we numbered all the words in the dictionary from 1 through 351,482, we could just as accurately carry on conversations using the numbers rather than words”, Charles Petzold (2000), Code, p. 72.

259 <https://home.unicode.org/>.

260 Cf. <https://www.unicode.org/faq/>.

261 “The computer and the human being (it is an obvious point) do not use a shared communicative code. The problem of the computer storage of any text, therefore, is always a problem of encoding, since it is a matter of translating that text, whatever it may be, in such a way that it is readable by the machine, of transposing the textual information, as it is said, into Machine Readable Form (MRF)”, Edoardo Ferrarini (2007), “La trascrizione dei testimoni manoscritti: metodi di filologia computazionale”, in Arianna Ciula & Francesco Stella (eds.), Digital philology and medieval texts, Pisa, Pacini, p. 104.

262 “Such an analysis [of CC], however, is dependent upon a precise knowledge of the system of orders used in controlling the device, since the function of CC is to receive these orders, to interpret them, and then either to carry them out, or to stimulate properly those organs which will carry them out. It is therefore our immediate task to provide a list of the orders which control the device, i.e. to describe the code to be used in the device, and to define the mathematical and logical meaning and the operational significance of its code words”, John von Neumann (1945), First Draft of a Report on the EDVAC, p. 37, “A computing machine is controlled, as I pointed out above, by codes, sequences of symbols–usually binary symbols–i.e. by strings of bits. In any set of instructions that govern the use of a particular computing machine it must be made clear which strings of bits are orders and what they are supposed to cause the machine to do”, John von Neumann (2012), The Computer & the Brain, p. 72, “The problem [of devising codes] is of a practical nature and is closely allied to that connected with the choice of the elementary operations in the arithmetic organ. The code for a machine is in reality the vocabulary or totality of words or orders that the machine can « understand » and « obey »”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 258, “It is the totality of orders that makes up the language a machine understands; it is usually referred to as machine language. This is in modern parlance the most primitive or lowest level language of machines. Let us therefore ask a little about the structure of this language. By 1 July 1952 the Institute computer had a basic vocabulary of 29 instructions. Each such order consisted, in general, of ten binary digits to express a memory location – 2¹⁰ = 1024 – and 10 additional ones to express the specific operation”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 333.

263 “The work of transcription-edition would therefore consist in the re-encoding of all the pertinent elements of the sign system MSc into the sign system MEcm; and the norms called upon to preside, analytically and conventionally, over the transcription would function as an intercode (or metacode) of equivalence between the two systems, guaranteeing the scientific character of the re-encoding operation”, Raul Mordenti (2001), Informatica e critica dei testi, p. 76, “Using the same method of logical substitution by which a Turing machine can be instructed to interpret successively higher-level languages – or by which Gödel was able to encode metamathematical statements within ordinary arithmetic – it was possible to design Turing machines whose coded instructions addressed physical components, not merely locations, and whose output could be translated into physical objects, not just zeros and ones”, George B. Dyson (2012), Turing’s Cathedral, p. 284.

264 “details such as how the instruction codes are represented or exactly how things are set out in memory are not needed to use the instructions. This is the first and most elementary step in a series of hierarchies. We want to be able to maintain such ignorance consistently. In other words, we only want to have to think about the lower details once and then design things so that the next guy who comes along and wants to use your structure does not have to worry about the lower level details”, Richard P. Feynman (1996), Lectures on Computation, p. 3, “Multiple levels of translation separate the languages now used by computer programmers from the machine language by which the instructions are carried out”, George B. Dyson (2012), Turing’s Cathedral, p. 241.

265 “It must be mentioned, however, that computer programming was originally done on an even lower level, if possible, than that of machine language – namely, connecting wires to each other, so that the proper operations were « hard-wired » in”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 290.

266 “The many levels in a complex computer system have the combined effect of « cushioning » the user, preventing him from having to think about the many lower-level goings-on which are most likely totally irrelevant to him anyway”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 296, “One of the major goals of the drive to higher levels has always been to make as natural as possible the task of communicating to the computer what you want it to do”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 297.

267 “The idea of assembly language is to « chunk » the individual machine language instructions, so that instead of writing the sequence of bits « 010111000 » when you want an instruction which adds one number to another, you simply write ADD, and then instead of giving the address in binary representation, you can refer to the word in memory by a name”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 290.

268 “And here is the vital point: someone can write, in machine language, a translation program. This program, called an assembler, accepts mnemonic instruction names, decimal numbers, and other convenient abbreviations which a programmer can remember easily, and carries out the conversion into the monotonous but critical bit-sequences. After the assembly language program has been assembled (i.e., translated), it is run–or rather, its machine language equivalent is run”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 291.
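
The assembler Hofstadter describes can be caricatured in a few lines: mnemonics are translated one for one into binary machine words. A hedged Python sketch reusing the invented 14-bit format from note 239:

```python
MNEMONICS = {"LOAD": 0b0001, "STORE": 0b0010, "ADD": 0b0011}  # invented opcodes

def assemble(program: list[tuple[str, int]]) -> list[str]:
    """Translate (mnemonic, address) pairs into 14-bit machine words."""
    return [format((MNEMONICS[op] << 10) | addr, "014b") for op, addr in program]

# ADD the word at address 6 to the one loaded from address 5, store at 7.
for word in assemble([("LOAD", 5), ("ADD", 6), ("STORE", 7)]):
    print(word)
```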

269 “It was clear that a most powerful addition to any programming language would be the ability to define new higher‑level entities in terms of previously known ones, and then to call them by name. […] Unlike the case with assembly language, there is no straightforward one-to-one correspondence between statements in Algol and machine language instructions. To be sure, there is still a type of mapping from Algol into machine language, but it is far more « scrambled » than that between assembly language and machine language”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 292-293.

270 “Of course, we already went « up » a bit when we summarized operations by instructions such as « Clear A », and so on. This sort of shorthand is introduced for our benefit, and programs written in it cannot be understood directly by the machine itself. Such « assembly language » programs have to be translated into a « machine language » that the computer can understand, and this is done by a program called an « assembler ». The next level up, where we have multiplication and variables and so on, needs another program to translate these « high-level » programs into Assembly Language. These translation programs are called « compilers » or « interpreters ». The difference between them is in when the translation is done. An interpreter works out what to do step by step, as the program runs, interpreting each successive instruction in terms of the cruder language. A compiler takes the program as a whole and converts it all into assembly or machine language before the program is run”, Richard P. Feynman (1996), Lectures on Computation, p. 18, “Clearly, one can keep going up in level, putting together new algorithms, programming languages, adding the ability to manipulate « files » containing programs and data, and so on. Nowadays it is possible for most people to actually work at these higher levels using high-level languages to program their machines”, Richard P. Feynman (1996), Lectures on Computation, p. 19. “a compiler can be written in assembly language, and an assembler in machine language”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 294. “Perhaps we should pause to say a few words about the meaning of the concepts interpreter and compiler. Both are programs written in languages which are not understood by a computer. Both must therefore be translated by the computer into a machine-language program before they can be carried out. In the case of the interpreter, the translation or decoding of each statement is done every time that statement is read; with the compiler the decoding of each statement is done a priori, and from there on the computer deals only with the machine-language program”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 339.

271 “The next level of the hierarchy carries much further the extremely powerful idea of using the computer itself to translate programs from a high level into lower levels”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 292.

272 “All our interactions with digital information are mediated through layers of platforms. […] This includes, but is not limited to, operating systems, programming languages, file formats, software applications for creating and rendering content, encoding schemes, compression algorithms, and exchange protocols”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 41.

273 “It is striking how tight the connection is between progress in computer science (particularly Artificial Intelligence) and the development of new languages. A clear trend has emerged in the last decade: the trend to consolidate new types of discoveries in new languages. One key for the understanding and creation of intelligence lies in the constant development and refinement of the languages in terms of which processes for symbol manipulation are describable. […] It is not that each higher level extends the potential of the computer; the full potential of the computer already exists in its machine language instruction set. It is that the new concepts in a high-level language suggest directions and perspectives by their very nature. […] The « space » of all possible programs is so huge that no one can have a sense of what is possible. Each higher-level language is naturally suited for exploring certain regions of « program space »; thus the programmer, by using that language, is channeled into those areas of program space. He is not forced by the language into writing programs of any particular type, but the language makes it easy for him to do certain kinds of things. [...] This shows how a notational system can play a significant role in shaping the final product”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 299.

274 “The actual code for a problem is that sequence of coded symbols... that has to be placed into the... memory in order to cause the machine to perform the desired and planned sequence of operations, which amounts to solving the problem in question. […] Coding a problem for the machine would merely be what its name indicates: Translating a meaningful text... from one language (the language of mathematics, in which the planner will have conceived the problem, or rather the numerical procedure by which he has decided to solve the problem) into another language (that one of our code)”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 268. “How can you compare a program written in APL, with one written in Algol? Certainly not by matching them up line by line. You will again chunk these programs in your mind, looking for conceptual, functional units which correspond. Thus, you are not comparing hardware, you are not comparing software – you are comparing « etherware » – the pure concepts which lie back of the software. There is some sort of abstract « conceptual skeleton » which must be lifted out of low levels before you can carry out a meaningful comparison of two programs in different computer languages, of two animals, or of two sentences in different natural languages”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 381.

275 “The instructions which govern these operation must be given to the device in absolutely exhaustive detail. They include all numerical information which is required to solve the problem under consideration [...] All these procedures require the use of some code to express the logical and the algebraical definition of the problem under consideration, as well as the necessary numerical material”, John von Neumann (1945), First Draft of a Report on the EDVAC, p. 1.

276 “For today's computers to perform a complex task, we need a precise and complete description of how to do that task in terms of a sequence of simple basic procedures ‒ the « software » ‒ and we need a machine to carry out these procedures in a specifiable order ‒ this is the « hardware ». This instructing has to be exact and unambiguous. In life, of course, we never tell each other exactly what we want to say; we never need to, as context, body language, familiarity with the speaker, and so on, enable us to « fill in the gaps » and resolve any ambiguities in what is said. Computers, however, can't yet « catch on » to what is being said, the way a person does. They need to be told in excruciating detail exactly what to do. Perhaps one day we will have machines that can cope with approximate task descriptions, but in the meantime we have to be very prissy about how we tell computers to do things”, Richard P. Feynman (1996), Lectures on Computation, p. 2-3, “If the computer is to be reliable, then it is necessary that it should understand, without the slightest chance of ambiguity, what it is supposed to do”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 297.

277 “This passage therefore entails the assumption of the well-known characteristics of univocity, coherence, non-contradiction, and non-redundancy that are connected with the use of the computing tool. On these characteristics of the re-encoding (and on them alone) depends the possibility of using the machine's extraordinary capacities to produce, automatically and in a very short time, concordances and frequencies, indices locorum, co-occurrences, statistical and stylistic analyses, indexes of names, etc., as well as to carry out, in an instant, automatic and systematic corrections across the entire text”, Raul Mordenti (2001), Informatica e critica dei testi, p. 48-49.

278 “It was realized that the computer really processed information, not just numbers”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 8-9.

279 It must be noted that time in the machine is not the same as time outside the machine. “No time is there. Sequence is different from time. (Julian Bigelow, 1999)”, George B. Dyson (2012), Turing’s Cathedral, p. 294, “In our universe, we measure time with clocks, and computers have a « clock speed, » but the clocks that govern the digital universe are very different from the clocks that govern ours. In the digital universe, clocks exist to synchronize the translation between bits that are stored in memory (as structures in space) and the bits that are communicated by code (as sequences in time). They are clocks more in the sense of regulating escapement than in the sense of measuring time”, George B. Dyson (2012), Turing’s Cathedral, p. 299, “« No clocks. You don't need clocks. You only need counters. There's a difference between a counter and a clock. A clock keeps track of time. A modern general purpose computer keeps track of events. » This distinction separates the digital universe from our universe, and is one of the few distinctions left”, George B. Dyson (2012), Turing’s Cathedral, p. 300, “The Turing machine thus embodies the relationship between an array of symbols in space and a sequence of events in time”, George B. Dyson (2012), Turing’s Cathedral, p. 248.

280 “A distinction which is made in Artificial Intelligence is that between procedural and declarative types of knowledge. A piece of knowledge is said to be declarative if it is stored explicitly, so that not only the programmer but also the program can « read » it as if it were in an encyclopedia or an almanac. This usually means that it is encoded locally, not spread around. By contrast, procedural knowledge is not encoded as facts – only as programs. [...] Thus procedural knowledge is usually spread around in pieces, and you can't retrieve it, or « key » on it. It is a global consequence of how the program works, not a local detail. In other words, a piece of purely procedural knowledge is an epiphenomenon. […] In between the declarative and procedural extremes, there are all possible shades”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 363, “A large amount of work in AI has nevertheless gone into systems in which the bulk of the knowledge is stored in specific places – that is, declaratively. It goes without saying that some knowledge has to be embodied in programs; otherwise one would not have a program at all, but merely an encyclopedia. The question is how to split up knowledge between program and data. Not that it is always easy to distinguish between program and data, by any means”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 616.

281 “Although we can try to draw a clear line between program and data, the distinction is somewhat arbitrary. Carrying this line of thought further, we find that not only are program and data intricately woven together, but also the interpreter of programs, the physical processor, and even the language are included in this intimate fusion. Therefore, although it is possible (to some extent) to draw boundaries and separate out the levels, it is just as important – and just as fascinating – to recognize the level-crossings and mixings”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 547.

282 “You can see that a [shift] register like this takes a sequential piece of information and turns it into parallel information”, Richard P. Feynman (1996), Lectures on Computation, p. 50, “Turing machines, which by definition are structures that can be encoded as sequences”, George B. Dyson (2012), Turing’s Cathedral, p. 290. “« The importance of structure to how logical processes take place is beginning to diminish as the complexity of the logical process increases. » Bigelow then pointed out that the significance of Turing's 1936 result was « to show in a very important, suggestive way how trivial structure really is. » Structure can always be replaced by code”, George B. Dyson (2012), Turing’s Cathedral, p. 274-275.

283 “The fundamental, indivisible unit of information is the bit. The fundamental, indivisible unit of digital computation is the transformation of a bit between its two possible forms of existence: a structure (memory) or a sequence (code)”, George B. Dyson (2012), Turing’s Cathedral, p. 124, “A digital universe – whether 5 kilobytes or the entire Internet – consists of two species of bits: differences in space, and differences in time. Digital computers translate between these two forms of information – structure and sequence – according to definite rules. Bits that are embodied as structure (varying in space, invariant across time) we perceive as memory, and bits that are embodied as sequence (varying in time, invariant across space) we perceive as code. Gates are the intersections where bits span both worlds at the moments of transition from one instant to the next”, George B. Dyson (2012), Turing’s Cathedral, p. 3.

284 “Here is a case which demonstrates that, despite the theoretical equivalence of data and programs, in practice the choice of one over the other has major consequences”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 630.

285 “A key factor in the re-usability of data is the extent to which it is well structured. The more regular and well-defined the structure of the data the more easily people can create tools to reliably process it for reuse”, Tom Heath & Christian Bizer (2011), Linked Data: Evolving the Web into a Global Data Space, San Francisco, Morgan & Claypool. DOI: https://doi.org/10.2200/S00334ED1V01Y201102WBE001 [consulted on 06/12/2024].

286 “The servers are active processes that reply to requests [...] clients are browser processes”, Tim Berners-Lee (1990), Proposal for a Hypertext Project. URL: https://cds.cern.ch/record/2639699/files/Proposal_Nov-1990.pdf [consulted on 06/12/2024], p. 4.

287 “But the gift of the Web wasn’t only informational: by its very existence it gave us new tools to identify and understand networks themselves”, James Bridle (2022), Ways of Being, p. 81.

288 Cf. <https://www.home.cern/science/computing/birth-web/short-history-web>; “HyperText is a way to link and access information of various kinds as a web of nodes in which the user can browse at will”, Tim Berners-Lee (1990), Proposal for a Hypertext Project, p. 1.

289 Cf. <https://cds.cern.ch/record/1164399/?ln=it>.

290 <https://www.w3.org/about/>.

291 <https://html.spec.whatwg.org/>.

292 Tim Berners-Lee (1990), Proposal for a Hypertext Project, p. 2.

293 There is some terminological overlap between Web 3.0 and Web3, the latter being mainly associated with blockchain technology.

294 Tim Berners-Lee (1998a), “Semantic Web Road Map”, Design Issues. URL: https://www.w3.org/DesignIssues/Semantic.html [consulted on 06/12/2024], a collection of personal notes by Tim Berners-Lee that explain the architectural and philosophical principles underlying the Web.

295 Tim Berners-Lee, James Hendler & Ora Lassila (2001), “The Semantic Web: A new form of web content that is meaningful to computers will unleash a revolution of new possibilities”, Scientific American, 284 (5), p. 34-43. URL: http://www.sciam.com/article.cfm?id=the-semantic-web [consulted on 06/12/2024].

296 Tim Berners-Lee et al. (2001), “The Semantic Web: A new form of web content that is meaningful to computers will unleash a revolution of new possibilities”, p. 34; Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, p. 263.

297 Cf. <https://opendefinition.org/>.

298 “That a bundle of data is « linked » means that any links or references in it are made explicit (for humans and for machines)”, Gabriel Müller & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, in Maarten J.F.M. Hoenen (ed.), Past and Future: Medieval Studies Today, Turnhout, TEMA, p. 62.

299 “Technically speaking, Linked Data refers to data published on the Web in such a way that it is machine readable, its meaning is explicitly defined, it is linked to other external datasets, and it can in turn be linked to from external datasets as well. Conceptually, Linked Data refers to a set of best practices for publishing and connecting structured data on the Web”, Liyang Yu (2011), Developer’s Guide to the Semantic Web, p. 409. Cf. “Increasingly, one set of objects can serve as metadata for another set”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 87, “Many digital objects index, describe, and annotate each other. […] This linked set of connections becomes a powerful form of context”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 156, “Herein we see a key point about digital objects: They describe not only themselves in machine readable ways but also each other. Further, every bit of metadata points in every direction”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 145.

300 <https://5stardata.info/>.

301 “1. Use URIs as names for things. 2. Use HTTP URIs so that people can look up those names. 3. When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL). 4. Include links to other URIs so that they can discover more things”, Tim Berners-Lee (2006), “Linked Data”, Design Issues. URL: https://www.w3.org/DesignIssues/LinkedData.html [consulted on 06/12/2024].
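
A minimal sketch of the four principles with the Python library rdflib (the resource, names, and property values are invented for illustration): an HTTP URI names a thing, RDF provides useful information about it, and an outgoing link points into an external dataset (here DBpedia).

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/people/")  # invented namespace

g = Graph()
g.bind("foaf", FOAF)

alice = EX["alice"]                           # principles 1-2: an HTTP URI as name
g.add((alice, RDF.type, FOAF.Person))         # principle 3: useful RDF on lookup
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, FOAF.based_near,                # principle 4: a link to another dataset
       URIRef("http://dbpedia.org/resource/Basel")))

print(g.serialize(format="turtle"))
```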

302 Cf. Tim Berners-Lee (2006), “Linked Data”, <https://handbook.opendata.swiss/de/content/glossar/bibliothek/linked-open-data.html>.

303 Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 32, Gabriel Müller & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, p. 50 and p. 66.

304 Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, in Fabio Ciotti & Gianfranco Crupi (eds.), Dall’Informatica umanistica alle culture digitali. In memoria di Giuseppe Gigliozzi, Roma, Quaderni DigiLab/Università Sapienza di Roma, p. 292, Gabriel Müller & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, p. 70.

305 Raul Mordenti (2001), Informatica e critica dei testi, p. 35; Peter Robinson (2013), “Towards a Theory of Digital Editions”, Variants: The Journal of the European Society for Textual Scholarship, 10, p. 105 and p. 126.

306 Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, p. 305, Peter Boot & Marijn Koolen (2021), “Connecting TEI Content Into an Ontology of the Editorial Domain”, in Elena Spadini, Francesca Tomasi & Georg Vogeler (eds.), Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing, Band, Schriften des Instituts für Dokumentologie und Editorik, p. 9-10, Gabriel Müller & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, p. 66, Georg Vogeler (2021), “« Standing-off Tree and Graphs »: On the Affordance of Technologies for the Assertive Edition”, in Elena Spadini, Francesca Tomasi & Georg Vogeler (eds.), Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing, Band, Schriften des Instituts für Dokumentologie und Editorik, p. 78-79.

307 An « excellent critical edition » is one that « offers the materials necessary and sufficient for another critical edition of the same work carried out according to different criteria », De Robertis quoted by Raul Mordenti (2001), Informatica e critica dei testi, p. 67, “to provide, by means of the transcription, the greatest possible quantity of data for reading (indeed, for the various and multiple readings), rather than to provide the definitive reading”, Raul Mordenti (2001), Informatica e critica dei testi, p. 80. Cf. “Data, even from distributed sources may fuel various editions, differing in scope and distributed over place and time. Editorial content is transformed into modules or even more fine granular sets or particles of addressable, linkable and integratable objects”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 36.

308 Cf. Jörg Wettlaufer (2018), “Der nächste Schritt? Semantic Web und digitale Editionen”, in Roland S. Kamzelak & Timo Steyer (eds.), Digitale Metamorphose: Digital Humanities und Editionswissenschaft, Wolfenbüttel, Zeitschrift für digitale Geisteswissenschaften. DOI: https://doi.org/10.17175/sb002_007 [consulted on 06/12/2024], Georg Vogeler (2021), “« Standing-off Tree and Graphs »”, p. 87, Scholastic Commentaries and Texts Archive (<https://scta.info>); Paolo Bufalini’s Notebook (<https://projects.dharc.unibo.it/bufalini-notebook/>).

309 <https://www.w3.org/2001/sw/wiki/Main_Page>.

310 <https://en.wikipedia.org/wiki/Semantic_Web_Stack>; <https://www.w3.org/2000/Talks/1206-xml2k-tbl/slide10-0.html>; <https://www.w3.org/2007/Talks/0130-sb-W3CTechSemWeb/#(24)>; <https://smiy.wordpress.com/2011/01/10/the-common-layered-semantic-web-technology-stack/>.

311 Cf. Pat Hayes’s criticism of the SW stack: <https://videolectures.net/videos/iswc09_hayes_blogic>.

312 <https://en.wikipedia.org/wiki/Internationalized_Resource_Identifier>.

313 Cf. <https://datatracker.ietf.org/doc/html/rfc3986#section-1.1.3>.

314 Cf. <https://www.w3.org/RDF/>; <https://www.w3.org/1999/02/22-rdf-syntax-ns>; <https://en.wikipedia.org/wiki/Resource_Description_Framework>.

315 “Definition of RDF Triple: Assume that I is the set of all IRI references, B (an infinite) set of blank nodes, L the set of literals. An RDF triple t is defined as a triple t = <s, p, o> where s ∈ I ∪ B is called the subject, p ∈ I is called the predicate and o ∈ I ∪ B ∪ L is called the object”, Dominik Tomaszuk (2016), “Inferences rules for RDF(S) and OWL in N3Logic”, arXiv. DOI: https://doi.org/10.48550/arXiv.1601.02650 [consulted on 06/12/2024], p. 1.
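The definition translates directly into code. A small illustration with the Python library rdflib, in which the subject and predicate are IRIs, the object is a literal, and a blank node stands in for an element of B (the example URIs are invented):

    # t = <s, p, o>: s from I ∪ B, p from I, o from I ∪ B ∪ L (rdflib).
    from rdflib import Graph, URIRef, BNode, Literal

    s = URIRef("http://example.org/id/manuscript-42")  # s ∈ I
    p = URIRef("http://purl.org/dc/terms/title")       # p ∈ I
    o = Literal("De anima")                            # o ∈ L

    g = Graph()
    g.add((s, p, o))
    # A blank node (an element of B) may appear as subject or object:
    g.add((s, URIRef("http://purl.org/dc/terms/creator"), BNode()))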

316 Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 99.

317 Tim Berners-Lee (1999a), “The Semantic Web as a language of logic”, Design Issues. URL: https://www.w3.org/DesignIssues/Logic.html [consulted on 06/12/2024], Tim Berners-Lee (1999b), “The Semantic Toolbox”, Design Issues. URL: https://www.w3.org/DesignIssues/Toolbox.html [consulted on 06/12/2024], Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, p. 283. For reasons of computational efficiency and logical decidability, it is often necessary to limit expressiveness to specific parts or subsystems composed of consistent and reliable data. Cf. Tim Berners-Lee (1999b), “The Semantic Toolbox”.

318 Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, p. 280, Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, p. 257.

319 “To publish data on the Web, the items in a domain of interest must first be identified. These are the things whose properties and relationships will be described in the data, and may include Web documents as well as real‑world entities and abstract concepts. [...] Where URIs identify real-world objects, it is essential to not confuse the objects themselves with the Web documents that describe them. It is, therefore, common practice to use different URIs to identify the real-world object and the document that describes it, in order to be unambiguous”, Tom Heath & Christian Bizer (2011), Linked Data: Evolving the Web into a Global Data Space, San Francisco, Morgan & Claypool. DOI: https://doi.org/10.2200/S00334ED1V01Y201102WBE001 [consulted on 06/12/2024], p. 9-10.

320 Tim Berners-Lee (1998b), “Using XML for Data”.

321 <https://dbpedia.org/>.

322 <https://www.w3.org/TR/xml/>; <https://en.wikipedia.org/wiki/XML>.

323 <https://www.w3.org/TR/turtle/>; <https://en.wikipedia.org/wiki/Turtle_(syntax)>.

324 <https://w3c.github.io/N3/spec/>; <https://en.wikipedia.org/wiki/Notation3>.

325 <https://json-ld.org/>.

326 <https://www.w3.org/TR/sparql11-query/>; <https://en.wikipedia.org/wiki/SPARQL>.
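As note 327 below stresses, such data is queried rather than read. A small sketch of a SPARQL SELECT over an in-memory graph, again with the Python library rdflib (the data is invented):

    # Parsing a few Turtle triples and querying them with SPARQL (rdflib).
    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix ex:  <http://example.org/id/> .
        @prefix dct: <http://purl.org/dc/terms/> .
        ex:manuscript-42 dct:title "De anima" ;
                         dct:creator ex:aristotle .
    """, format="turtle")

    rows = g.query("""
        PREFIX dct: <http://purl.org/dc/terms/>
        SELECT ?work ?title WHERE { ?work dct:title ?title . }
    """)
    for work, title in rows:
        print(work, title)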

327 “The logic of computational media is, by and large, the logic of the database. Where the index or the codex is a valuable metaphor for the order and structure of a book, as new media studies scholarships suggests, the database is and should be approached as the foundational metaphor for digital media. From this perspective, there is no persistent « first row » in a database; instead the presentation and sorting of digital information is based on the query posed to the data”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 35, “The search function is one of the core aspects of the database logic. We don't read databases. We query them”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 35.

328 For the specification of RDF formal semantics see <https://www.w3.org/TR/rdf11-mt/>.

329 <https://www.w3.org/TR/rdf-schema/>.

330 <https://www.w3.org/OWL/>.

331 Willem N. Borst (1997), Construction of Engineering Ontologies, Institute for Telematica and Information Technology/University of Twente, Enschede, p. 12. Cf. also Thomas R. Gruber (1994), “Towards Principles for the Design of Ontologies Used for Knowledge Sharing”, International Journal Human-Computer Studies, 43, p. 907-928.

332 Tim Berners-Lee et al. (2001), “The Semantic Web”, “Pure logic is ontologically neutral. It makes no presuppositions about what exists or may exist in any domain or any language for talking about the domain. To represent knowledge about a specific domain, it must be supplemented with an ontology that defines the categories of things in that domain and the terms that people use to talk about them. The ontology defines the words of a natural language, the predicates of predicate calculus, the concept and relation types of conceptual graphs, the classes of an object-oriented language, or the tables and fields of a relational database”, John F. Sowa (2000), “Ontology, Metadata, and Semiotics”, p. 3.
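On the Semantic Web such an ontology is itself stated in RDF, using the RDFS and OWL vocabularies. A purely illustrative fragment in rdflib (the class and property names are invented):

    # A tiny domain ontology: categories and the relations between them.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, OWL

    EX = Namespace("http://example.org/ontology/")
    g = Graph()
    g.add((EX.Manuscript, RDF.type, OWL.Class))
    g.add((EX.Codex, RDF.type, OWL.Class))
    g.add((EX.Codex, RDFS.subClassOf, EX.Manuscript))   # category hierarchy
    g.add((EX.copiedBy, RDF.type, OWL.ObjectProperty))  # a domain relation
    g.add((EX.copiedBy, RDFS.domain, EX.Manuscript))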

333 Dino Buzzetti (2011), “Oltre il rappresentare: le potenzialità del markup”, p. 41.

334 Cf. Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, p. 260.

335 Cf. Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 98.

336 “When you represent information as a DLG [directed labelled graph] the nodes don’t actually contain any information: it’s all in the connections”, Tim Berners-Lee (1998a), “Semantic Web Road Map”, “Entities obtain meaning based on the way, and in the extent to which, they are related to other entities via properties. Databases often contain implicit, condensed, or shortcut semantics. In order to be explicit, it is important that these semantics are unravelled”, Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 107.

337 Reification is also the mechanism by which RDF triples themselves can be described, i.e. a statement can be made the subject of further statements.

338 “Data acquire the value of knowledge when they are interconnected with other data, when their interconnection produces explosive network effects. And the Copernican revolution of linked data consists precisely in the fact that the link, the instrument connecting documents in the traditional web, acquires, in the context of the semantic web, a primary semantic role, a predicative function that gives meaning to the data themselves, since it represents and expresses the different types of relation they can entertain”, Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, p. 305.

339 <https://cidoc-crm.org/lrmoo>.

340 Tim Berners-Lee (1998b), “Using XML for Data”, Tim Berners-Lee et al. (2001), “The Semantic Web”, Gabriel Müller & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, p. 63.

341 Dino Buzzetti (2011), “Oltre il rappresentare: le potenzialità del markup”, p. 49.

342 <https://cidoc-crm.org/>.

343 Cf. <https://www.w3.org/TR/owl2-overview/#Semantics>.

344 SROIQ is a highly expressive Description Logic, extending the SHOIN Description Logic with features like reflexive and transitive roles, qualified cardinality constraints, and more. Cf. <https://en.wikipedia.org/wiki/Description_logic>.
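Two purely illustrative axioms in description-logic notation, showing features of this family (a complex role inclusion and a qualified cardinality restriction; the vocabulary is invented):

    \[ \mathit{hasPart} \circ \mathit{hasPart} \sqsubseteq \mathit{hasPart} \]
    \[ \mathit{Codex} \sqsubseteq\; {\geq} 2\; \mathit{hasPart}.\mathit{Quire} \]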

345 <https://www.w3.org/TR/rdf11-mt/>.

346 <https://www.w3.org/TR/owl2-overview/#Profiles>; <https://www.w3.org/TR/owl2-profiles/>.

347 Dörthe Arndt (2019), Notation3 as the unifying logic for the semantic web, PhD thesis in Technology and Engineering, Ghent University, Ghent. URL: https://biblio.ugent.be/publication/8634507 [consulted on 06/12/2024], p. 12.

348 Tim Berners-Lee et al. (2001), “The Semantic Web”.

349 Dean Allemang & James Hendler (2011), Semantic Web for the Working Ontologist, Oxford, Elsevier LTD, p. 6.

350 <https://eulersharp.sourceforge.net/2003/03swap/rdfs-subClassOf.html>.
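The entailment encoded by this rule can be reproduced in a few lines with the third-party Python libraries rdflib and owlrl (the class names are invented):

    # RDFS entailment: m1 is a Codex, Codex is a subclass of Manuscript,
    # so after computing the RDFS closure, m1 is also a Manuscript.
    import owlrl
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.Codex, RDFS.subClassOf, EX.Manuscript))
    g.add((EX.m1, RDF.type, EX.Codex))

    owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)
    print((EX.m1, RDF.type, EX.Manuscript) in g)  # True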

351 <https://w3c.github.io/N3/spec/>; <https://www.w3.org/TeamSubmission/n3/>. Cf. also <https://nie-ine.github.io/e-editiones/n3-rule-based-machine-reasoning>.

352 <https://www.w3.org/DesignIssues/N3Logic>.

353 <https://github.com/eyereasoner/EyeClient>; <https://josd.github.io/eye/>; Ruben Verborgh & Jos De Roo (2015), “Drawing Conclusions from Linked Data on the Web. The EYE Reasoner”, IEEE Software, May/June (3). URL: https://josd.github.io/Papers/EYE.pdf [consulted on 06/12/2024]. See also the “Online course Semantic Web Reasoning With EYE” at <https://n3.restdesc.org/>.
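In N3 itself, the same subclass entailment is a single explicit rule, which a reasoner such as EYE can apply to any dataset. A sketch (the prefix is the standard RDFS namespace):

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    { ?C rdfs:subClassOf ?D . ?X a ?C . } => { ?X a ?D . } .

The online course cited above walks through invocations of this kind, passing data files and rule files to the reasoner together with a query goal.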

354 Cf. Tim Berners-Lee, Dan Connolly, Lalana Kagal, Yosi Scharf & Jim Hendler (2008), “N3Logic: A logical framework for the World Wide Web”, Theory and Practice of Logic Programming, 8 (3), p. 249-269. DOI: https://doi.org/10.1017/S1471068407003213 [consulted on 06/12/2024], Dörthe Arndt (2019), Notation3 as the unifying logic for the semantic web.

355 <http://www.w3.org/2000/10/swap/reason>.

356 Cf. Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, p. 261.

357 “[The Semantic Web] is not merely another data model, but also includes reflections on semiotics, semantics, linguistics (in relation to different natural languages), logic, and IT”, Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 101.

358 “But the question what ontology actually to adopt still stands open, and the obvious counsel is tolerance and an experimental spirit”, Willard V. O. Quine (1963), From a Logical Point of View, p. 19.

359 “[A]s developers strive to provide the structure and organization beyond the just linking of data, they are not making very much use of the formal semantics that were standardized in the semantic web languages. Modern semantic approaches leverage vastly distributed, heterogeneous data collection with needs-based, lightweight data integration. These approaches take advantage of the coexistence of a myriad of different, sometimes contradictory, ontologies of varying levels of detail without assuming all-encompassing or formally correct ontologies. In addition, we are beginning to see the increased use of textual data that is available on the web, in hundreds of languages, to train artificially intelligent agents [...] These projects are increasingly leveraging the semantic markup that is available on the web”, Abraham Bernstein, James Hendler & Natalya Noy (2016), “A New Look at the Semantic Web”, Communications of the ACM, 59 (9). DOI: https://doi.org/10.1145/2890489 [consulted on 06/12/2024], p. 2.

360 Cf. Harry Halpin et al. (2010), “When owl:sameAs isn’t the Same: An Analysis of Identity Links on the Semantic Web”. “Sameness can be quite subtle”, Melanie Mitchell (2019), Artificial Intelligence, p. 337. Cf. also “To each of these ways of determining the point there corresponds a particular name. Hence the need for a sign for identity of content rests upon the following consideration: the same content can be completely determined in different ways; but that in a particular case two ways of determining it really yield the same result is the content of a judgment. [...] The judgment, however, requires for its expression a sign for identity of content, a sign that connects these two names. From this it follows that the existence of different names for the same content is not always merely an irrelevant question of form; rather, that there are such names is the very heart of the matter if each is associated with a different way of determining the content”, Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”.
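The mechanics, and the subtlety, are easy to demonstrate: once two URIs are declared owl:sameAs, OWL reasoning makes every statement about the one hold of the other, whether or not the modeller intended full identity. A sketch with the Python libraries rdflib and owlrl (the URIs are invented):

    # After OWL-RL expansion, the label attaches to both names -- which is
    # exactly why careless sameAs links are risky.
    import owlrl
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.turing_the_logician, OWL.sameAs, EX.turing_the_runner))
    g.add((EX.turing_the_logician, RDFS.label, Literal("Alan Turing")))

    owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)
    print((EX.turing_the_runner, RDFS.label, Literal("Alan Turing")) in g)  # True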

361 <https://www.w3.org/2021/12/rdf-star.html>.

362 <https://www.w3.org/community/rdf-dev/2022/01/26/provenance-in-rdf-star/>.
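RDF-star’s quoted triples make such statement-level provenance directly expressible. A sketch of the Turtle-star syntax (invented data; library and tool support still varies):

    @prefix ex:  <http://example.org/> .
    @prefix dct: <http://purl.org/dc/terms/> .

    # The quoted triple << ... >> is itself the subject of a statement.
    << ex:manuscript-42 dct:creator ex:aristotle >> ex:assertedBy ex:editor-1 .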

363 <https://w3c-cg.github.io/rdfsurfaces/>; “RDF lacks the capability to express negated statements in a generic way. As a result, exchanging negative information on a Web scale is thus far restricted to specific cases and predefined statements. The ability to negate (virtually) any RDF statement allows for a comprehensive way to refute, deny or otherwise invalidate claims on a Web scale. Via an intermediate step of a diagrammatic approach to logical expressions called Peirce graphs, we introduce RDF Surfaces, an extension of RDF that incorporates the concept of classical negation, known from first-order logic. Overall, RDF Surfaces provides an abstract, visual approach to negation within the Semantic Web, offering a more general and widely applicable approach than previous attempts at incorporating negation”, Patrick Hochstenbach, Mathijs van Noort, Dörthe Arndt, Rebekka Martens, Jos De Roo, Ruben Verborgh, Pieter Bonte & Femke Ongenae, “RDF Surfaces: Enabling Classical Negation on the Semantic Web”, arXiv. DOI: https://doi.org/10.48550/arXiv.2406.10659 [consulted on 06/12/2024].

364 <https://linked.art/loud/>.

365 <https://linked-data-from-tei.readthedocs.io/en/latest/>.

366 <https://www.geovistory.org/>.

367 <https://www.leaf-vre.org/docs/features/about-lw>.

How to cite this article

Electronic reference

Roberta Padlina, « Machines, Symbolic AI, and the Semantic Web: What They Are and Why They Matter in the Humanities », Methodos [Online], 24 | 2024, online since 16 December 2024, accessed on 06 February 2025. URL: http://journals.openedition.org/methodos/11206; DOI: https://doi.org/10.4000/12xqo

Author

Roberta Padlina

University of Geneva

Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 licence. All other elements (illustrations, imported attached files) are “All rights reserved”, unless otherwise stated.
