Notes
For both quotations see George B. Dyson (2012), Turing’s Cathedral: The Origins of the Digital Universe, London, Penguin Group, p. 87 and 64 respectively.
Alan Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society, 42 (1936-7), p. 230-265; Alan Turing (1951), “Can Digital Computers Think?”, in Jack B. Copeland (ed) (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life: Plus The Secrets of Enigma, Oxford, Oxford University Press, p. 482-486.
John von Neumann (1945), First Draft of a Report on the EDVAC, Contract No. W-670-ORD-4926 between the U.S. Army Ordnance Department and the University of Pennsylvania, Moore School of Electrical Engineering, University of Pennsylvania, June 30.
“It turns out that the recent success of deep learning is due less to new breakthroughs in AI than to the availability of huge amounts of data (thank you, internet!) and very fast parallel computer hardware”, Melanie Mitchell (2019), Artificial Intelligence: A Guide for Thinking Humans, London, Pelican Books, p. 101, “The address matrix that began, in 1951, with a single 40-floor hotel, with 1,024 rooms on every floor, has now expanded to billions of 64-floor hotels with billions of rooms, yet the contents are still addressed by numerical coordinates that have to be specified exactly, or everything comes to a halt”, George B. Dyson (2012), Turing’s Cathedral. The Origins of the Digital Universe, London, Penguin Group, p. 309, “Yet we still face the same questions that were asked in 1953. Turing’s question was what it would take for machines to begin to think. Von Neumann's question was what it would take for machines to begin to reproduce”, George B. Dyson (2012), Turing’s Cathedral, p. 10.
“The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 486, “All of a sudden the idiosyncrasies, the weaknesses and powers, the vagaries and vicissitudes of human thought were hinted at by the newfound ability to experiment with alien, yet hand-tailored forms of thought – or approximations of thought. As a result, we have acquired, in the last twenty years or so, a new kind of perspective on what thought is, and what it is not”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach: An Eternal Golden Braid, New York, Basic Books, p. 337.
This is why Turing replaced the original question of whether machines can think ‒ “too meaningless to deserve discussion”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 449 ‒ with an operational criterion: the famous imitation game. To pass this Turing test, a machine must be able to “answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine” (Jack B. Copeland (ed) (2004), The Essential Turing, p. 484), “as soon as one can see the cause and effect working themselves out in the brain, one regards it as not being thinking, but a sort of unimaginative donkey-work. From this point of view one might be tempted to define thinking as consisting of « those mental processes that we don't understand ». If this is right then to make a thinking machine is to make one which does interesting things without our really understanding quite how it is done”, Alan Turing, “Can Automatic Calculating Machines Be Said to Think?”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 500, “The paradox of artificial intelligence is that any system simple enough to be understandable is not complicated enough to behave intelligently, and any system complicated enough to behave intelligently is not simple enough to understand”, George B. Dyson (2012), Turing’s Cathedral, p. 263, “There is a related « Theorem » about progress in AI: once some mental function is programmed, people soon cease to consider it as an essential ingredient of « real thinking ». The ineluctable core of intelligence is always in that next thing which hasn't yet been programmed. [...] « AI is whatever hasn't been done yet. »”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 600, “As Claude Shannon wrote presciently in 1950, a machine that can surpass humans at chess « will force us either to admit the possibility of mechanized thinking or to further restrict our concept of thinking. » The latter happened”, Melanie Mitchell (2019), Artificial Intelligence, p. 196.
“Beyond the narrow framing put forward by both technology companies and the doctrine of human uniqueness (the idea that, among all beings, human intelligence is singular and pre-eminent) exists a whole realm of other ways of thinking and doing intelligence”, James Bridle (2022), Ways of Being. Beyond Human Intelligence, Dublin, Allen Lane, p. 10, “Systems of intelligent, computational ability – mycorrhizal networks, slime moulds and ant colonies, to name a few – have always existed in the natural world, but we had to recreate them in our labs and workshops before we were capable of recognizing them elsewhere. This is technological ecology in practice. We need the mental models provided by our technology, the words we make up for its concepts and metaphors, in order to describe and properly understand that analogous processes are already at play in the more-than-human world”, James Bridle (2022), Ways of Being, p. 194. The octopus and Physarum polycephalum (aka “Blob”) are examples of animals and organisms that display remarkable intelligence. Cf. Peter Godfrey-Smith (2016), Other Minds. The Octopus and the Evolution of Intelligent Life, London, Collins, and also “logic [as a problem solving capacity and means of discovery] is not peculiar to human beings. Biological evolution has endowed not only human beings, but virtually all organisms, with a natural logic through which they manage to survive”, Carlo Cellucci (2013), Rethinking Logic: Logic in Relation to Mathematics, Evolution, and Method, New York, Springer, p. 365.
“I argue that AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures”, Kate Crawford (2021), Atlas of AI. Power, Politics, and the Planetary Costs of Artificial Intelligence, New Haven/London, Yale University Press, p. 8, “One of the less recognized facts of artificial intelligence is how many underpaid workers are required to help build, maintain, and test AI systems. This unseen labor takes many forms – supply-chain work, on-demand crowdwork, and traditional service-industry jobs. […] Sometimes workers are directly asked to pretend to be an AI system”, Kate Crawford (2021), Atlas of AI, p. 63-65, “As the social anthropologist F. G. Bailey observed, the technique of « obscuring by mystification » is often employed in public settings to argue for a phenomenon’s inevitability. We are told to focus on the innovative nature of the method rather than on what is primary: the purpose of the thing itself. Above all, enchanted determinism obscures power and closes off informed public discussion, critical scrutiny, or outright rejection”, Kate Crawford (2021), Atlas of AI, p. 214.
It is important to acknowledge the existence of alternative open-source and transparent solutions, such as those shared on the Hugging Face platform (https://huggingface.co/), which lie beyond the most well-known and mainstream options discussed here. However, these solutions tend to be less immediately accessible, and their overall impact is comparatively limited.
“That’s what happens, it would seem, when the development of AI is led primarily by venture-funded technology companies. The definition of intelligence which is framed, endorsed and ultimately constructed in machines is a profit-seeking, extractive one”, James Bridle (2022), Ways of Being, p. 9-10, “Their learned responses are that of a corporate intelligence, evolving within the arid, airless ecology of neoliberal capitalism, tech company boardrooms and ever-increasing financial and social disparities. If we wish them to evolve differently, we will need to address and alter this ecology”, James Bridle (2022), Ways of Being, p. 67, “The AI industry has fostered a kind of ruthless pragmatism, with minimal context, caution, or consent-driven data practices while promoting the idea that the mass harvesting of data is necessary and justified for creating systems of profitable computational « intelligence »”, Kate Crawford (2021), Atlas of AI, p. 95.
AlphaGo Zero, for example, played 4.9 million games of Go against itself in just 72 hours. Cf. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel & Demis Hassabis (2017), “Mastering the game of Go without human knowledge”, Nature, 550, p. 354-359, “AI game engines are designed to play millions of games, run statistical analyses to optimize for winning outcomes, and then play millions more. These programs produce surprising moves uncommon in human games for a straightforward reason: they can play and analyze far more games at a far greater speed than any human can. This is not magic; it is statistical analysis at scale”, Kate Crawford (2021), Atlas of AI, p. 215, “« We are not scanning all those books to be read by people », an engineer [of Google] revealed to me after lunch. « We are scanning them to be read by an AI. » The AI that is reading all these books is also reading everything else – including most of the code written by human programmers over the past sixty years”, George B. Dyson (2012), Turing’s Cathedral, p. 312-313.
“There are indications however that it is possible to make the machine display intelligence at the risk of its making occasional serious mistakes”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 374, “if a machine is expected to be infallible, it cannot also be intelligent”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 394.
“Let’s start by telling the truth: machines don’t learn. What a typical « learning machine » does, is finding a mathematical formula, which, when applied to a collection of inputs (called « training data »), produces the desired outputs. This mathematical formula also generates the correct outputs for most other inputs (distinct from the training data) on the condition that those inputs come from the same or a similar statistical distribution as the one the training data was drawn from. Why isn’t that learning? Because if you slightly distort the inputs, the output is very likely to become completely wrong”, Andriy Burkov (2019), The Hundred-Page Machine Learning Book, Quebec, Themlbook, p. xvii, “This lack of understanding is clearly revealed by the un-humanlike errors these systems can make; by their difficulties with abstracting and transferring what they have learned; by their lack of commonsense knowledge; and by their vulnerability to adversarial attacks. The barrier of meaning between AI and human-level intelligence still stands today”, Melanie Mitchell (2019), Artificial Intelligence, p. 307, Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali. Potenzialità e limiti in una prospettiva filosofica, Milan, Egea, p. 56.
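To make Burkov's point concrete, here is a minimal sketch (invented data, numpy only, not drawn from any of the works cited above) of a « learning machine » as a fitted formula: a polynomial matched to a handful of training points reproduces them almost exactly, yet returns absurd values as soon as the input drifts slightly outside the training distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.05, x_train.size)

# "Learning" here is just finding a degree-9 polynomial that matches the samples.
coeffs = np.polyfit(x_train, y_train, deg=9)
formula = np.poly1d(coeffs)

print("max error on training inputs:", np.max(np.abs(formula(x_train) - y_train)))
# Inputs only slightly outside the training range already produce absurd outputs.
for x in (1.05, 1.2, 1.5):
    print(f"f({x}) = {formula(x):.2f}   (true sin value: {np.sin(2 * np.pi * x):.2f})")
```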
Cf. Andriy Burkov (2019), The Hundred-Page Machine Learning Book, p. 131.
“The most efficient search of an unmapped territory takes the form of a random walk”, George B. Dyson (2012), Turing’s Cathedral, p. 198. Conventional computers cannot generate true randomness; it must be supplied externally through methods such as cards, dice, a decaying nucleus, or random number tables. Cf. “Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number – there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method”, John von Neumann (1951), “Various Techniques Used in Connection with Random Digits”, Journal of Research of the National Bureau of Standards, 3 (36-38), p. 768-770. URL: https://mcnp.lanl.gov/pdf_files/InBook_Computing_1961_Neumann_JohnVonNeumannCollectedWorks_VariousTechniquesUsedinConnectionwithRandomDigits.pdf [consulted on 06/12/2024], p. 768.
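By way of illustration, here is a minimal sketch of the kind of purely « arithmetical method » von Neumann was warning about: his own middle-square generator (a well-known historical example, not described in the passage quoted above). It is entirely deterministic, merely imitates randomness, and eventually degenerates into short cycles.

```python
def middle_square(seed: int, count: int):
    """Yield pseudo-random four-digit numbers with von Neumann's middle-square method."""
    value = seed
    for _ in range(count):
        squared = value * value            # square the current value (up to eight digits)
        value = (squared // 100) % 10000   # keep the middle four digits as the next value
        yield value

# Deterministic: the same seed always produces the same "random" sequence.
print(list(middle_square(5735, 10)))
```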
“It is probably wise to include a random element in a learning machine [...]. A random element is rather useful when we are searching for a solution of some problem […] Since there is probably a very large number of satisfactory solutions the random method seems to be better than the systematic”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 463, “it is certain that a machine which is to imitate a brain must appear to behave as if it had free will, and it may well be asked how this is to be achieved. One possibility is to make its behavior depend on something like a roulette wheel or a supply of radium. The behavior of these may perhaps be predictable, but if so, we do not know how to do the prediction”, Alan Turing, “Can Digital Computers Think?”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 484, “[Turing] suggested incorporating a random-number generator to create what he referred to as a « learning machine, » granting the computer the ability to take a guess and then either reinforce or discard the consequent results. If guesses were applied to modifications in the computer’s own instructions, a machine could then learn to teach itself”, George B. Dyson (2012), Turing’s Cathedral, p. 261.
“What is this Monte Carlo method? Very roughly, the idea is to replace a given precise mathematical procedure by one involving random processes. […] By doing this probabilistic experiment enough times we can approach the correct volume with arbitrarily high probability”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, Princeton, Princeton University Press, p. 296-297, “Monte Carlo originated as a form of emergency first aid, in answer to the question: What to do until the mathematician arrives? « The idea was to try out thousands of such possibilities and, at each stage, to select by chance, by means of a “random number” with suitable probability, the fate or kind of event, to follow it in a line, so to speak, just considering all branches »”, George B. Dyson (2012), Turing’s Cathedral, p. 191, “Today's search engines, long descended from their ENIAC-era ancestors, still bear the imprint of their Monte Carlo origins: random search paths being accounted for, statistically, to accumulate increasingly accurate results. The genius of Monte Carlo – and its search-engine descendants – lies in the ability to extract meaningful solutions, in the face of overwhelming information, by recognizing that meaning resides less in the data at the end points and more in the intervening path”; George B. Dyson (2012), Turing’s Cathedral, p. 198‑199, “In this way, rather than trying to represent and solve a whole problem at once – to capture and dominate it – the Monte Carlo method seeks to actively explore and draw conclusions about a problem, a very different, and more naturalistic, approach”, James Bridle (2022), Ways of Being, p. 225, “it uses its roll-outs to collect statistics on how many times a given move actually leads to a win or loss. The more roll-outs the algorithm runs, the better its statistics”, Melanie Mitchell (2019), Artificial Intelligence, p. 205-206.
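As a concrete illustration of the idea described by Goldstine (a toy example, not taken from the sources quoted above), the following sketch replaces an exact computation of an area with a repeated probabilistic experiment: random points are thrown into a square, and the fraction landing inside the quarter circle approaches π/4 as the number of trials grows.

```python
import random

def estimate_pi(samples: int) -> float:
    """Monte Carlo estimate of pi: the 'probabilistic experiment' repeated many times."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:       # the point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples       # area ratio, scaled up to the full circle

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10} samples -> pi is roughly {estimate_pi(n):.4f}")
```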
“Training sets raise complex questions from ethical, methodological, and epistemological perspectives”, Kate Crawford (2021), Atlas of AI, p. 119, “To make such predictions, machine learning systems are seeking to classify entirely relational things into fixed categories and are rightly critiqued as scientifically and ethically problematic”, Kate Crawford (2021), Atlas of AI, p. 146, “This epistemological flattening of complexity into clean signal for the purposes of prediction is now a central logic of machine learning”, Kate Crawford (2021), Atlas of AI, p. 146, “Bias is a symptom of a deeper affliction: a far-ranging and centralizing normative logic that is used to determine how the world should be seen and evaluated”, Kate Crawford (2021), Atlas of AI, p. 221, “A predictive algorithm is a Hall of Mirrors. A mirror reflects what’s in front of it. If the people in front of it behave in biased ways, the mirror will « behave » in the same biased ways. The mirror doesn’t know that it’s arresting more blacks than whites under the same circumstances or hiring more men than women. It knows only what the police do or what employers do. It assumes they know what makes them successful, so it tells them to keep on doing more of the same”, Deborah Stone (2020), Counting: How We Use Numbers to Decide What Matters, New York, Liveright Publishing Corporation, p. 58.
“As Vannevar Bush foresaw, machines have enormous appetites. But how and what they are fed has an enormous impact on how they will interpret the world, and the priorities of their masters will always shape how that vision is monetized”, Kate Crawford (2021), Atlas of AI, p. 121, “Intuitively, the larger is the set of training examples, the more unlikely that the new examples will be dissimilar to (and lie on the plot far from) the examples used for training”, Andriy Burkov (2019), The Hundred-Page Machine Learning Book, p. 7. Essentially, machine and deep learning algorithms extract structures from input data by identifying statistical regularities or correlations and encoding these structures in network parameters. However, they can easily mistake noise in the data for a signal, producing numerous correlations that do not convey any “hidden” information or meaning. Moreover, Calude and Longo have shown that as the amount of data increases, so does the prevalence of spurious correlations. This is a consequence of the under-determination of similarity (anything can be considered to be similar to anything else in a certain respect). Cf. Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali, p. 75-76, “Now, a final ambiguity tied to the idea of similarity lies in the fact that – at the level of very elementary phenomena such as high|low, right|left or long|wide – everything resembles everything else. This means that there are certain formal characteristics so generic that they belong to almost all phenomena and can be considered iconic of any other phenomenon”, Umberto Eco (1975), Trattato di semiotica generale, Milan, Bompiani, p. 278.
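The growth of spurious correlations with the size of the data can be illustrated with a toy simulation (random numbers only, not an implementation of Calude and Longo's argument): the more purely random features we generate, the more of them happen to correlate « strongly » with an equally random target.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100
target = rng.normal(size=n_samples)            # a purely random "outcome"

for n_features in (10, 1_000, 100_000):
    features = rng.normal(size=(n_features, n_samples))   # purely random "predictors"
    # Pearson correlation of every feature with the target, computed in one pass.
    f_centered = features - features.mean(axis=1, keepdims=True)
    t_centered = target - target.mean()
    corrs = (f_centered @ t_centered) / (
        np.linalg.norm(f_centered, axis=1) * np.linalg.norm(t_centered)
    )
    strong = int(np.sum(np.abs(corrs) > 0.3))
    print(f"{n_features:>7} random features -> {strong} spurious 'strong' correlations "
          f"(max |r| = {np.abs(corrs).max():.2f})")
```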
“Anything and everything online was primed to become a training set for AI”, Kate Crawford (2021), Atlas of AI, p. 106.
“It's no secret: deep learning requires big data. Big in the sense of the million-plus labeled training images in ImageNet. Where does all this data come from? The answer is, of course, you – and probably everyone you know. [...] Have you ever identified a picture in order to prove to a website that you're not a robot? Your identification might have helped Google tag an image for use in training its image search system”, Melanie Mitchell (2019), Artificial Intelligence, p. 97.
George B. Dyson (2012), Turing’s Cathedral, p. 312, “One approach is to start with the questions, and search for the answers. Another approach is to start with the answers and search for the questions. Because it is easier (and more economical) to collect answers (which are already encoded) than to ask questions (which have to be encoded), the first step would be to crawl through the matrix and collect the meaningful strings. [...] Human beings and machines have already done much of the work, filing away meaningfully encoded strings since the beginning of the digital universe and, since the dawn of the Internet, giving them unique numerical addresses. [...] The result is an indexed list (within your machine's « state of mind, » to use Turing's language) of a significant fraction of the meaningful answers in the Digital Universe. With two huge deficiencies: you don't have any questions – you have only answers – and you have no clue where the meaning is. Where do you go to get the questions, and how do you find where the meaning is? If, as Turing imagined, you have the mind of a child, you ask people, you guess, and you learn from your mistakes. You invite people to submit questions – keeping track of all submissions – and, starting with simple template-matching, suggest possible answers from your indexed list. People click more frequently on the results that provide more meaningful answers, and with simple bookkeeping, meaning, and the map between questions and answers, begins to accumulate over time. Are we searching the search engine, or are the search engines searching us?”, George B. Dyson (2012), Turing’s Cathedral, p. 263-264.
“We are training Google’s image recognition algorithms for free. Again, the myth of AI as affordable and efficient depends on layers of exploitation, including the extraction of mass unpaid labor to fine-tune the AI systems of the richest companies on earth. Contemporary forms of artificial intelligence are neither artificial nor intelligent. We can – and should – speak instead of the hard physical labor of mine workers, the repetitive factory labor on the assembly line, the cybernetic labor in the cognitive sweatshops of outsourced programmers, the poorly paid crowdsourced labor of Mechanical Turk workers, and the unpaid immaterial work of everyday users”, Kate Crawford (2021), Atlas of AI, p. 69.
“It has become so normalized across the industry to take and use whatever is available that few stop to question the underlying politics”, Kate Crawford (2021), Atlas of AI, p. 93, “There has been a general failure to address the ways in which the instruments of knowledge in AI reflect and serve the incentives of a wider extractive economy” Kate Crawford (2021), Atlas of AI, p. 135, “Questions about who gets to do that rewriting of reality, which decisions are made along the way, and who gains from it, are all too often missed and forgotten in the excitement”, James Bridle (2022), Ways of Being, p. 21-22, “optimism is highly valued, socially and in the market; people and firms reward the providers of dangerously misleading information more than they reward truth tellers. One of the lessons of the financial crisis that led to the Great Recession is that there are periods in which competition, among experts and among organizations, creates powerful forces that favor a collective blindness to risk and uncertainty”, Daniel Kahneman (2012), Thinking, Fast and Slow, London, Penguin Book, p. 262.
“We become more like the machines we envisage, in ways which, in the present, have profoundly negative effects on our relationships with one another and with the wider world”, James Bridle (2022), Ways of Being, p. 10.
AI encompasses machine learning (ML), which, in turn, includes artificial neural networks (ANNs). Deep learning is a variant of ANNs that incorporates multiple layers. The term “deep” refers to the number (depth) of layers in the neural network.
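A minimal sketch (random weights, purely illustrative) of what this note describes: the « depth » of a network is simply the number of stacked layers through which an input is passed.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

layer_sizes = [4, 8, 8, 8, 2]   # four weight layers -> a (modestly) "deep" network
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)          # each hidden layer: linear map followed by a nonlinearity
    return x @ weights[-1]       # final layer left linear

print("depth (number of weight layers):", len(weights))
print("output for one input vector:", forward(rng.normal(size=4)))
```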
“Some people – generally mathematicians – promoted mathematical logic and deductive reasoning as the language of rational thought. Others championed inductive methods in which programs extract statistics from data and use probabilities to deal with uncertainty. Still others believed firmly in taking inspiration from biology and psychology to create brain-like programs”, Melanie Mitchell (2019), Artificial Intelligence, p. 7-8. “it's important to understand a philosophical split that occurred early in the AI research community: the split between so-called symbolic and subsymbolic AI. […] A symbolic AI program's knowledge consists of words or phrases (the « symbols »), typically understandable to a human, along with rules by which the program can combine and process these symbols in order to perform its assigned task”, Melanie Mitchell (2019), Artificial Intelligence, p. 7-8, “subsymbolic approach to AI took inspiration from neuroscience”, Melanie Mitchell (2019), Artificial Intelligence, p. 12.
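The contrast can be caricatured in a few lines of code (an invented toy task, not an example from Mitchell): the symbolic program states its knowledge as human-readable words and rules, while the subsymbolic one encodes a comparable decision only as numerical weights.

```python
def symbolic_is_spam(message: str) -> bool:
    # Knowledge as explicit, inspectable symbols and rules.
    suspicious_words = {"winner", "free", "prize"}
    return any(word in message.lower() for word in suspicious_words)

def subsymbolic_is_spam(features, weights=(2.0, 1.5, -0.5), bias=-1.0) -> bool:
    # Knowledge as weights over numerical features (e.g. word counts);
    # no individual number is meaningful to a human on its own.
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return score > 0

print(symbolic_is_spam("You are a WINNER of a free prize"))
print(subsymbolic_is_spam((2, 1, 0)))
```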
Cf. Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali, p. 20.
George B. Dyson (2012), Turing’s Cathedral, p. 318, “As Marcus notes, while we humans attribute to the program a certain understanding of what we consider basic concepts [...] the program actually has no such concepts”, Melanie Mitchell (2019), Artificial Intelligence, p. 215, “these systems don't comprehend the meaning of what we ask them”, Melanie Mitchell (2019), Artificial Intelligence, p. 279, “We tend to anthropomorphize AI systems: we impute human qualities to them and end up overestimating the extent to which these systems can actually be fully trusted”, Melanie Mitchell (2019), Artificial Intelligence, p. 368.
“I expect that digital computing machines will eventually stimulate a considerable interest in symbolic logic and mathematical philosophy. The language in which one communicates with these machines, i.e. the language of instruction tables, forms a sort of symbolic logic. The machine interprets whatever it is told in a quite definite manner without any sense of humor or sense of proportion. Unless in communicating with it one says exactly what one means, trouble is bound to result. Actually one could communicate with these machines in any language provided it was an exact language, i.e. in principle one should be able to communicate in any symbolic logic, provided that the machine were given instruction tables which would enable it to interpret that logical system. This should mean that there will be much more practical scope for logical systems than there has been in the past”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 392.
James Bridle (2022), Ways of Being, p. 169. Cf. “The languages we have developed thus far to communicate with machines are themselves pidgins: simplified forms which are native to neither party. It requires effort on both sides to fit our ideas inside them; neither side expresses itself well, but each thinks itself superior. Linguists call this the “double illusion”: humans think they are speaking computer, computers think they are speaking human, and neither is very satisfied”, James Bridle (2022), Ways of Being, p. 169.
“it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g. a symbolic language. These orders are to be transmitted through the « unemotional » channels. The use of this language will diminish greatly the number of punishments and rewards required”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed) (2004), The Essential Turing, p. 461, “Over the last six decades of AI research, people have repeatedly debated the relative advantages and disadvantages of symbolic and subsymbolic approaches. Symbolic systems can be engineered by humans, be imbued with human knowledge, and use human-understandable reasoning to solve problems. […] While there have been some attempts to construct hybrid systems that integrate subsymbolic and symbolic methods, none have yet led to any striking success”, Melanie Mitchell (2019), Artificial Intelligence, p. 34-35.
Such systems are known to be efficient for domain-specific and local solutions but are typically limited in scope and not suited for general, global solutions that can be applied to other problems or domains.
“The digital, computational approach to humanistic material has led to new comparisons and methods of analysis. [...] The digital humanities are bringing not just new understanding but also new questions that have never been asked before. This movement is drastically changing humanistic practice”, Rens Bod (2013), A New History of the Humanities: The Search for Principles and Patterns from Antiquity to the Present, Oxford, Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199665211.001.0001 [consulted on 06/12/2024], p. 362.
“The powers of computers derive as much from their ability to copy as from their ability to compute”, George B. Dyson (2012), Turing’s Cathedral, p. 283.
“before attempting to translate our data into the rigorous language of symbols, it is above all things necessary to ascertain the intended import of the word we are using. But this necessity cannot be regarded as an evil by those who value correctness of thought, and regard the right employment of language as both its instrument and its safeguard”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, Cambridge, Cambridge University Press, 2009, p. 60-61, “I found the inadequacy of language to be an obstacle; no matter how unwieldy the expressions I was ready to accept, I was less and less able, as the relations became more and more complex, to attain the precision that my purpose required. This deficiency led me to the idea of the present ideography. Its first purpose, therefore, is to provide us with the most reliable test of the validity of a chain of inferences and to point out every presupposition that tries to sneak in unnoticed, so that its origin can be investigated. [...] Everything necessary for a correct inference is expressed in full, but what is not necessary is generally not indicated; nothing is left to guesswork. In this I faithfully follow the example of the formula language of mathematics”, Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, in Jean van Heijenoort (ed.), Frege and Gödel. Two Fundamental Texts in Mathematical Logic, Cambridge, Harvard University Press, 1970, p. 5-6. “a language made clearer and more carefully watched over, perhaps even leaner, in its passage through the art of logic. […] The human instruments that logic offers should therefore not only not be feared; on the contrary, they must be regarded as most precious (and locally improvable) instruments for controlling thought, that is, as aids to the transparent and unambiguous expression of thoughts”, Roberta de Monticelli (2006), Esercizi di pensiero per apprendisti filosofi, Turin, Bollati Boringhieri, p. 28-29.
Nevertheless, the same concepts can always be expressed through different symbolic representations. Cf. “It is important not to get the idea, from the rather strict nature of all the formal systems we have seen, that the isomorphism between symbols and real things is a rigid, one-to-one mapping, like the strings which link a marionette and the hand guiding it”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 338.
“The web is an amazing access platform. Content published on the web becomes almost immediately globally accessible. It is hard to adequately underscore how massive an impact this has had and is still having on access to cultural heritage material”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, Baltimore, Johns Hopkins University Press, p. 163, “But the gift of the Web wasn’t only informational: by its very existence it gave us new tools to identify and understand networks themselves”, James Bridle (2022), Ways of Being, p. 81.
Cf. Umberto Eco (1975), Trattato di semiotica generale, p. 112 and 377. Cf. also Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, in Fabio Ciotti & Gianfranco Crupi (eds.), Dall’Informatica umanistica alle culture digitali. In memoria di Giuseppe Gigliozzi, Roma, Quaderni DigiLab/Università Sapienza di Roma, p. 266, Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, in Elena Spadini, Francesca Tomasi & Georg Vogeler (eds.), Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing, Schriften des Instituts für Dokumentologie und Editorik, p. 120-121.
“We will increasingly operate in a world of networked and linked collection and descriptions. With more and more content made available online, a click away from the holdings of any number of institutions, individual institutional collections have become part of a linked global collection. In this context, more and more digital materials will be described and will also describe each other”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 130.
“The humanities have become increasingly fragmented over the last two centuries–unlike the sciences, where the opposite seems to have taken place”, Rens Bod (2013), A New History of the Humanities, p. 4.
“Scientific observation is not merely pure description of separate facts. Its main goal is to view an event from as many perspectives as possible. The eye of science does not probe « a thing, » an event isolated from other things or events. Its real object is to see and understand the way a thing or event relates to other things or events”, Aleksandr Lurija (1976), “Romantic science: Unimagined portraits” (Moscow 1976), in Aleksandr Lurija, Viaggio nella mente di un uomo che non dimenticava nulla, Roma, Armando, p. 108-116.
Cf. <https://www.force11.org/group/fairgroup/fairprinciples> and <https://www.go-fair.org/fair-principles/>.
Cf. Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 99-100.
“The exercise helps you to check your own reasoning; you can be your own critic. What is more, as we shall see, making the reasoning explicit can aid in the creative process by bringing out weaknesses and suggesting remedies”, Paul F. A. Bartha (2010), By Parallel Reasoning. The Construction and Evaluation of Analogical Arguments, Oxford, Oxford University Press, p. 5.
“The transformation of ordinary language into symbols presents greater difficulties. One must first completely analyse the propositions one wishes to write in symbols. But this analysis has its advantage; how many times does the proposition turn out to be an identity, or reveal inaccuracies, gaps, ambiguities!”, Giuseppe Peano (1896), “Introduction au tome II du « Formulaire de mathématiques »”, in Giuseppe Peano, Opere scelte, Volume II, Rome, Cremonese, 1958, p. 198, “From a theoretical point of view, transcription as encoding makes us ask analytically and explicitly what we are encoding (or re-encoding or decoding), break this something down into discrete elements, order our operations sequentially, and avoid ambiguities, contradictions and redundancies; and it forces us, above all, to formulate all of this rigorously, without being able to fall back any longer on the (very convenient) semi-clandestine taxonomies of common sense, « tolerant and good-natured »”, Raul Mordenti (2001), Informatica e critica dei testi, Roma, Bulzoni, p. 29, “The errors of a theory are rarely found in what it asserts explicitly; they hide in what it ignores or tacitly assumes”, Daniel Kahneman (2012), Thinking, Fast and Slow, London, Penguin Book, p. 274-275, “a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws”, Daniel Kahneman (2012), Thinking, Fast and Slow, London, Penguin Book, p. 277.
“« Theories are nets », wrote Novalis, « and only he who casts will catch ». Theories are nets, and we should learn to evaluate them for the empirical data they allow us to process and understand: for how they concretely change the way we work rather than as ends in themselves. Theories are nets; and there are so many interesting creatures that await to be caught, if only we try”, Franco Moretti (2003), “Graphs, Maps, Trees: Abstract Models for Literary History”, New Left Review, 24. URL: https://newleftreview.org/issues/ii24/articles/franco-moretti-graphs-maps-trees-1 [consulted on 06/12/2024], p. 63.
“Ideas about how we should think are locked into our culture. It’s a problem exacerbated by technology. Once a way of seeing the world has been moulded into a tool it’s very hard to think otherwise: « When all you have is a hammer, everything looks like a nail » as the saying goes”, James Bridle (2022), Ways of Being, p. 175, “All computers are simulators. They contain abstract models of aspects of the world, which we set in motion – and then immediately forget that they’re models. We take them for the world itself”, James Bridle (2022), Ways of Being, p. 207.
“No schema is ever complete, no taxonomy ever finished – and that’s fine, providing the systems we put in place for interpreting and applying those schemas are open, transparent, comprehensible and renegotiable”, James Bridle (2022), Ways of Being, p. 111, Umberto Eco (1975), Trattato di semiotica generale, p. 182.
Kate Crawford (2021), Atlas of AI, p. 132.
“Usually a single signifier conveys diverse and interwoven contents, and therefore what is called a « message » is most often a text whose content is a discourse on several levels”, Umberto Eco (1975), Trattato di semiotica generale, p. 86, “it is important to determine whether there is enough context, so that someone in the future can make sense of the digital objects”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 119.
“This redundancy prevented bits being lost in transit”, George B. Dyson (2012), Turing’s Cathedral, p. 137, “redundancy was the clue to the correct way to organize automata made with unreliable components”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 280.
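A minimal sketch of the principle behind these two remarks (a generic majority-vote scheme, not the specific mechanisms Dyson or Goldstine describe): each bit is transmitted three times over an unreliable channel, and a majority vote at the receiving end absorbs most of the occasional corruption.

```python
import random

def send_with_redundancy(bits, flip_probability=0.05):
    """Send each bit three times over a noisy channel and decode by majority vote."""
    received = []
    for bit in bits:
        copies = [bit if random.random() > flip_probability else 1 - bit
                  for _ in range(3)]                      # three unreliable transmissions
        received.append(1 if sum(copies) >= 2 else 0)     # majority vote
    return received

message = [random.randint(0, 1) for _ in range(10_000)]
decoded = send_with_redundancy(message)
errors = sum(a != b for a, b in zip(message, decoded))
print(f"residual errors after majority vote: {errors} out of {len(message)}")
```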
Cf. Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 147 and 160. “It turns out that it is extremely useful to have the same information in several different forms for different purposes. [...] It suggests that there are advantages to being able to switch back and forth between procedural and declarative representations”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 616-617.
“You reduce the text to a few elements, and abstract them, and construct a new, artificial object. A model. And at this point you start working at a « secondary » level, removed from the text: a map, after all, is always a look from afar – or is useless, like Borges's map of the empire. Distant reading, I have called this work elsewhere; where distance is however not an obstacle, but a specific form of knowledge: fewer elements, hence a sharper sense of their overall interconnection. Shapes, relations, structures. Patterns”, Franco Moretti (2003), “Graphs, Maps, Trees: Abstract Models for Literary History”, p. 94.
The Linked Open Data movement itself appears to adopt a somewhat relaxed approach toward the formal aspects of the SW. Cf. “In Linked Data, the use of owl:sameAs is ubiquitous in « inter-linking » data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regards to its interactions with inference”, Harry Halpin, Ivan Herman & Patrick J. Hayes (2010), “When owl:sameAs isn’t the Same: An Analysis of Identity Links on the Semantic Web”, in David Wood, Stefan Decker & Ivan Herman (eds.), Proceedings of W3C Workshop: RDF Next Steps 2010. URL: https://www.w3.org/2009/12/rdf-ws/papers/ws21 [consulted on 06/12/2024], p. 1, “Contrary to popular belief in some circles, formal semantics are not a silver bullet. Just because a construct in a knowledge representation language is prescribed a behaviour using formal semantics does not necessarily mean that people will follow those semantics when actually using that language « in the wild »”, Harry Halpin, Ivan Herman & Patrick J. Hayes (2010), “When owl:sameAs isn’t the Same: An Analysis of Identity Links on the Semantic Web”, p. 1, “A loosely defined system of metadata may be adequate for finding information, but inadequate for any further processing. As Phipps observed, superficial agreements about vocabulary may hide complexities that make interoperability impossible”, John F. Sowa (2000), “Ontology, Metadata, and Semiotics”, in Bernhard Ganter & Guy W. Mineau (eds.), Conceptual Structures: Logical, Linguistic, and Computational Issues, Berlin, Springer, p. 55-81. DOI: https://doi.org/10.1007/10722280_5 [consulted on 06/12/2024], p. 6. “The digital world appears to be a cool rule-bound universe of logic and order. In practice it is a world of partially completed and conflicting specifications, files that were never valid but seem to work fine in the application most people use them in, and, deep down, information that is made up of physical markings on tangible media”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 52-53.
“What precisely is new in its latest guise of this problem on the Web of Linked Data is that this is the first time the problem is being encountered by different individuals attempting to independently knit their knowledge representations together using the same standardized language. Much of the supposed « crisis » over the proliferation of owl:sameAs in Linked Data can be traced to the fact that these uses of owl:sameAs tend to be mutually incompatible, and almost always violate the rather strict logical semantics of identity demanded by owl:sameAs”, Harry Halpin, Ivan Herman & Patrick J. Hayes (2010), “When owl:sameAs isn’t the Same: An Analysis of Identity Links on the Semantic Web”, p. 1.
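For readers unfamiliar with the construct under discussion, the following sketch (using the Python rdflib library and invented example.org URIs) shows how easily such a link is asserted: the single triple below formally declares the two resources to be one and the same individual under OWL semantics, even when the publishing dataset only intends « roughly the same ».

```python
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
city_in_dataset_a = URIRef("http://example.org/datasetA/Paris")
city_in_dataset_b = URIRef("http://example.org/datasetB/2988507")

# Under OWL semantics this makes the two URIs interchangeable in every triple.
g.add((city_in_dataset_a, OWL.sameAs, city_in_dataset_b))

print(g.serialize(format="turtle"))
```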
“An unwillingness to admit the possibility that mankind can have any rivals in intellectual power. This occurs as much amongst intellectual people as amongst others: they have more to lose”, Alan Turing, “Intelligent Machinery”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 410-411. Turing anticipated numerous objections and prejudices against intelligent machines. One of the most famous objections was put forth by Ada Lovelace, who referenced Charles Babbage’s machine: «The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform». Cf. Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 392-393, “The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the working out of consequences from data and general principles”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 456.
“The essential argument against formal specifications is their difficulty. Formal specifications, it is said, are hard to learn, hard to write, hard to read”, Bertrand Meyer (1991), Introduction to the Theory of Programming Languages, London, Prentice Hall, p. 4.
“Why is the computer so important to mankind? We might have felt at one time that calculating would make up only a tiny part of human activity. Computing would seem, in the post-Galilean era, to have one foot in the physical sciences and the other in the accounting world. For this reason the average person might expect that little of his intellectual activity need to be spent in computing”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 344.
“Another argument that one used to hear against formal specifications was that they only applied to toy examples, and failed to describe full-size, realistic languages”, Bertrand Meyer (1991), Introduction to the Theory of Programming Languages, p. 4.
“A variant of Lady Lovelace's objection states that a machine can 'never do anything really new'”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 454. In response, Turing stated that it is impossible to ascertain the outcome of a particular program without executing it. “Certainly the machine can only do what we do order it to perform, anything else would be a mechanical fault. But there is no need to suppose that, when we give it its orders we know what we are doing, what the consequences of these orders are going to be. One does not need to be able to understand how these orders lead to the machine's subsequent behavior, any more than one needs to understand the mechanism of germination when one puts a seed in the ground. The plant comes up whether one understands or not”, Alan Turing, “Can Digital Computers Think?”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 485. “There is an old saw which says, « Computers can only do what you tell them to do. » This is right in one sense, but it misses the point: you don't know in advance the consequences of what you tell a computer to do; therefore its behavior can be as baffling and surprising and unpredictable to you as that of a person. [...] There is another sense in which this old saw is rusty. This involves the fact that as you program in ever higher-level languages, you know less and less precisely what you've told the computer to do! Layers and layers of translation may separate the « front end » of a complex program from the actual machine language instructions. At the level you think and program, your statements may resemble declaratives and suggestions more than they resemble imperatives or commands”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 306.
“All this work helped to show many people in a variety of scientific disciplines the tremendous breadth of applicability of the electronic computer. It certainly was seminal and, I believe, it was vital in conditioning scientists to accept and welcome the computer as a basic new tool both for the experimentalist and the theoretician”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 299. “If asked what is really the gist of the matter in our still ongoing change from analogue to digital media ‒ what « the real revolution » is ‒ my answer, at least, would be transmedialisation. The shift from media orientation to data orientation with its focus on abstraction, modelling and multi-purpose representations can be shown particularly clearly for the field of scholarly editions”, Patrick Sahle (2016), “What is a Scholarly Digital Edition?”, p. 32.
“It is often thought that the scientific and humanistic cultures are opposed in their methods and aims. According to the prejudices of inattentive observers, the former is concerned with public, universal, objective, quantitative, unitary experience, and its language is precise, rational, made of ideas and concepts. The latter instead looks to private, particular, subjective, qualitative, manifold experience, and its language is ambiguous, emotional, made of images and stories. These prejudices are thrown deeply into crisis by the observation that science and art, that is, the respective spearheads of the two cultures, are complementary and not contradictory visions of the world, both external and internal. [...] The most explicit proof of the compatibility between science and art is found precisely in mathematics, which provides both with a common instrument for expressing their essential aspects”, Piergiorgio Odifreddi (2005), Penna, pennello e bacchetta. Le tre invidie del matematico, Bari, Laterza, p. 53.
“Although it is certainly true that the computer can solve very many problems in areas that can be rendered into a mathematical form, this is a rather sterile and not very useful definition since it suggests largely scientific and engineering applications far removed from the man in the street. It is therefore better to recognize that what a computer really deals with is not just numbers alone but rather with information broadly. It does not just operate on numbers; rather, it transforms information and communicates it. Perhaps the greatest importance of the stored-program concept lies precisely at this point. As we have seen, information – the instructions characterizing a problem – can be coded into numerical form and then altered at will at the computer as the computation proceeds. This may well be the genesis of the idea of encoding information into digital form and then transforming it as desired. Herein lies the key to the importance of electronic computers. Their universality makes them as useful for sorting information as for multiplying numbers. […] In sum, the importance of the computer to society lies not only in its superb ability to do very complex tasks of an abstruse mathematical nature but also in its ability to alter profoundly the communication and transformation of all sorts of information. It is the latter capacity that has been so useful to the humanists and the sociologist as well as to the businessman”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 344-345, “In the late 1950s Newell and Simon proved that computers could do more than calculate. They demonstrated that a computer's strings of bits could be made to stand for anything, including features of the real world, and that its programs could be used as rules for relating these features”, Hubert L. Dreyfus (1992), What Computers Still Can't Do: A Critique of Artificial Reason, Cambridge, MIT Press, p. x, “Some learning algorithms only work with numerical feature vectors. When some feature in your dataset is categorical, like « colors » or « days of the week, » you can transform such a categorical feature into several binary ones”, Andriy Burkov (2019), The Hundred-Page Machine Learning Book, p. 44. “Physically, a bit is just a magnetic « switch » that can be in either of two positions. You could call the two positions « up » and « down », or « x » and « o », or « 1 » and « 0 »... The third is the usual convention. It is perfectly fine, but it has the possibly misleading effect of making people think that a computer, deep down, is storing numbers. This is not true. A set of thirty-six bits does not have to be thought of as a number any more than two bits has to be thought of as the price of an ice cream cone. Just as money can do various things depending on how you use it, so a word in a memory can serve many functions. [...] How a word in memory is to be thought of depends entirely on the role that this word plays in the program which uses it. It may, of course, play more than one role–like a note in a canon”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 288-289.
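The re-encoding Burkov mentions at the end of this passage is easy to make concrete; here is a minimal sketch (invented categories) of turning a categorical feature into several binary ones, so that the same information is carried by nothing but numbers.

```python
colors = ["red", "green", "blue"]

def one_hot(value: str, categories=colors):
    """Encode a categorical value as a list of binary indicators (one-hot encoding)."""
    return [1 if value == c else 0 for c in categories]

for sample in ["green", "red", "blue"]:
    print(sample, "->", one_hot(sample))
```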
Cf. Alexandre D. Aleksandrov, Andreï N. Kolmogorov & Mikhaïl A. Lavrent’ev (1974), Le matematiche. Analisi, algebra e geometria analitica, Turin, Bollati Boringhieri, 2010, p. 19-21.
“Language and number serve as instrumental aids to the processes of reasoning”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 2, “Indeed, numbers acquire a life of their own, devoid of any direct reference to concrete sets of objects. The scaffolding of mathematics can then rise, ever higher, ever more abstract”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, Oxford, Oxford University Press, p. xix, “Obviously, what distinguishes us from other animals is our ability to use arbitrary symbols for numbers, such as words or Arabic digits. These symbols consist of discrete elements that can be manipulated in a purely formal way, without any fuzziness”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 62, “language eases the computation and communication of precise numerical quantities”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 75, “A system of symbolic numerals, however, seems essential in order to go beyond this evolutionarily ancient system and to perform exact calculations”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 263, “In well-trained adults, at least, parietal cortex is the place where quantities and symbols meet. Education provides us with a shared neuronal code for numerosities and symbols”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 271.
Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 79.
“Logic is conversant with two kinds of relations, – relations among things, and relations among facts. But as facts are expressed by propositions, the latter species of relation may, at least for the purposes of Logic, be resolved into a relation among propositions”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 7, “As pointed out, the nervous system is based on two types of communications: those which do not involve arithmetical formalisms, and those which do, i.e. communication of orders (logical ones) and communication of numbers (arithmetical ones). The former may be described as language proper, the latter as mathematics”, John von Neumann (2012), The Computer & the Brain, New Haven/London, Yale University Press, p. 82, “The foundations of any mathematical construction are grounded on fundamental intuition such as notions of set, number, space, time, or logic. These are almost never questioned, so deeply do they belong to the irreducible representations concocted by our brain. Mathematics can be characterized as the progressive formalization of these intuitions. Its purpose is to make them more coherent, mutually compatible, and better adapted to our experience of the external world”, Stanislas Dehaene (2011), The Number Sense. How the Mind Creates Mathematics, p. 228.
“The terms narrow and weak are used to contrast with strong, human-level, general, or full-blown AI (sometimes called AGI, or artificial general intelligence) – that is, the AI that we see in movies, that can do most everything we humans can do, and possibly much more”, Melanie Mitchell (2019), Artificial Intelligence: A Guide for Thinking Humans, p. 40-41, “But essentially everyone in AI research agrees that core « commonsense » knowledge and the capacity for sophisticated abstraction and analogy are among the missing links required for future progress in AI”, Melanie Mitchell (2019), Artificial Intelligence: A Guide for Thinking Humans, p. 322.
“That is the fact that interhuman communication is far less rigidly constrained than human-machine communication. For instance, we often produce meaningless sentence fragments as we search for the best way to express something, we cough in the middle of sentences, we interrupt each other, we use ambiguous descriptions and « improper » syntax, we coin phrases and distort meanings – but our message still gets through, mostly. With programming languages, it has generally been the rule that there is a very strict syntax which has to be obeyed one hundred per cent of the time; there are no ambiguous words or constructions”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 297, “The amazing thing about language is how imprecisely we use it and still manage to get away with it”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 674.
“To our minds it is clearest when several steps are telescoped together, to form one single sentence”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 60, “We can't possibly keep track of everything happening around us, so the brain takes mental shortcuts. Shortcuts enable us to make rough judgments, but often they distort our counting”, Deborah Stone (2020), Counting: How We Use Numbers to Decide What Matters, New York, Liveright Publishing Corporation, p. 40.
“Jumping to conclusions on the basis of limited evidence is so important to an understanding of intuitive thinking”, Daniel Kahneman (2012), Thinking, Fast and Slow, London, Penguin Book, p. 86, “Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance”, Daniel Kahneman (2012), Thinking, Fast and Slow, London, Penguin Book, p. 201.
“Logics and statistics should be primarily, although not exclusively, viewed as the basic tools of « information theory »”, John von Neumann (2012), The Computer & the Brain, p. 2, “In principle the computer can also be used as a heuristic tool to study a large class of related situations in the hopes of finding any regularities that may exist”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 87. Rens Bod (2013), A New History of the Humanities, p. 298-299.
“Deductive rules are non-ampliative, but that does not mean that they play no useful role in knowledge. Since the conclusion makes explicit all or part of what is contained in the premises, establishing that the conclusion is plausible facilitates the comparison of the premises with experience”, Carlo Cellucci (2013), Rethinking Logic: Logic in Relation to Mathematics, Evolution, and Method, New York, Springer, p. 306.
“The necessity for using the intuition is then greatly reduced by setting down formal rules for carrying out inferences which are always intuitively valid”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 135, “The computer is a logical, mathematical system, upon which higher-level statistical, probabilistic systems, such as human language and intelligence, could possibly be built”, George B. Dyson (2012), Turing’s Cathedral, p. 278.
“Complessità strutturale che resiste certo all'analisi, ma non vi si sottrae”, Umberto Eco (1975), Trattato di semiotica generale, p. 341, “Il proposito della teoria dei codici era di mostrare che i linguaggi, se pure non hanno logica esatta, hanno almeno una qualche logica. E probabilmente il problema non è quello di trovare una logica, se per logica si intende solo una teoria strettamente assiomatizzata. Si tratta di trovare una teoria semiotica”, Umberto Eco (1975), Trattato di semiotica generale, p. 219, nota 4.
“The patterns found can consist of a regularity (often with exceptions) but they can also consist of a system of rules such as a grammar, or a system of interpretations, and they may even be similar to 'laws' such as the sound shift laws in linguistics and the laws of harmony in music”, Rens Bod (2013), A New History of the Humanities, p. 9, “Yet I will argue that seeking and finding patterns is timeless and ubiquitous, not only when observing nature but also when examining texts, art, poetry, theatre, languages, and music. Just as in all other scholarship, it is about trying to make a meaningful distinction between fortuitous and non-fortuitous pattern”, Rens Bod (2013), A New History of the Humanities, p. 10, “regularities found in previous observations impose themselves on new observations. In this regard there is no essential difference between the study of scientific phenomena and humanities phenomena”, Rens Bod (2013), A New History of the Humanities, p. 72, “Both humanists and scientists search for underlying patterns, which they try to express in logical, procedural or mathematical formalizations”, Rens Bod (2013), A New History of the Humanities, p. 355, “The quest for patterns represents an uninterrupted constant in humanistic research and is being investigated increasingly often with the aid of cognitive and digital approaches”, Rens Bod (2013), A New History of the Humanities, p. 362.
“Since Panini specifies a clear procedure for his grammar, which he expresses as a system of rules, we will designate his method as the procedural system of rules principle”, Rens Bod (2013), A New History of the Humanities, p. 16, “Medieval logic was rule-based and procedural virtually all over the world, with the syllogism as the most important reasoning pattern. If scholars believed they could discover a formal system of rules somewhere, it was apparently in the structure of human reasoning”, Rens Bod (2013), A New History of the Humanities, p. 129.
“Abstraction, in some form, underlies all of our concepts [...] Abstraction is closely linked to analogy making. [...] « the perception of a common essence between two things »”, Melanie Mitchell (2019), Artificial Intelligence, p. 319‑320, “In short, analogies, most often made unconsciously, are what underlie our abstraction abilities and the formation of concepts”, Melanie Mitchell (2019), Artificial Intelligence, p. 320.
“As the cognitive scientist Robert French phrased it, abstraction and analogy are all about perceiving « the subtlety of sameness. » To discover this subtle sameness, you need to determine which attributes of the situation are relevant and which you can ignore”, Melanie Mitchell (2019), Artificial Intelligence, p. 332.
“Una unità culturale non può essere però identificata soltanto attraverso la serie dei propri interpretanti. Essa deve essere definita come POSTA in un sistema di altre unità culturali che vi si oppongono o la circoscrivono. Un'unità culturale « esiste » solo in quanto ne viene definita un'altra che vi si oppone. È solo la relazione tra i vari elementi di un sistema di unità culturali che sottrae a ciascuno dei termini ciò che è portato dagli altri”, Umberto Eco (1975), Trattato di semiotica generale, p. 108, “il soggetto di ogni attività semiosica non è altro che il risultato della segmentazione storica e sociale dell'universo, quello stesso che l'indagine sulla natura dello spazio semantico globale ha reso evidente. Questo soggetto si presenta nella teoria dei codici come un modo di vedere il mondo; per conoscerlo non si può che vederlo come un modo di segmentare l'universo e di associare unità espressive a unità di contenuto, in un lavoro nel corso del quale queste concrezioni storico-sistematiche si fanno e si sfanno senza posa”, Umberto Eco (1975), Trattato di semiotica generale, p. 377, “See here how a quantitative history of literature is also a profoundly formalist one–especially at the beginning and at the end of the research process. At the end, because it must account for the data; and at the beginning, because a formal concept is usually what makes quantification possible in the first place: since a series must be composed of homogeneous objects, a morphological category is needed ‒ « novel », « anti‑Jacobin novel », « comedy », etc. – to establish such homogeneity”, Franco Moretti (2003), “Graphs, Maps, Trees: Abstract Models for Literary History”, p. 86, note 14, “Every man will tend to segregate a mass of moving matter as a unit, separate from the static background, and to pay it particular attention”, Rens Bod (2013), A New History of the Humanities, p. 62. Cf. also “Ultimately, the database nature of new media, its inherent indexicality, means that this decision about structure and order are far more fluid than they are for artifactual physical objects. We need to figure out how to best chunk this data and process it to extract the meaningful information that will make it usable and discoverable now and in the future”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 146.
“The Infinity Principle: To shed light on any continuous shape, object, motion, process, or phenomenon – no matter how wild and complicated it may appear – reimagine it as an infinite series of simpler parts, analyze those, and then add the results back together to make sense of the original whole”, Steven Strogatz (2019), Infinite Powers. How Calculus Reveals the Secrets of the Universe, New York, Eamon Dolan Book, p. xvi.
“Thus, calculus proceeds in two phases: cutting and rebuilding. In mathematical terms, the cutting process always involves infinitely fine subtraction, which is used to quantify the differences between the parts. Accordingly, this half of the subject is called differential calculus. The reassembly process always involves infinite addition, which integrates the parts back into the original whole. This half of the subject is called integral calculus”, Steven Strogatz (2019), Infinite Powers, p. xv.
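As a minimal worked instance of the cutting-and-rebuilding that Strogatz describes (my own example, not his): the area under y = x² over [0, 1] is cut into n rectangles of width 1/n and then reassembled in the limit,
\[ \int_0^1 x^{2}\,dx \;=\; \lim_{n\to\infty}\sum_{k=1}^{n}\Big(\frac{k}{n}\Big)^{2}\frac{1}{n} \;=\; \lim_{n\to\infty}\frac{(n+1)(2n+1)}{6n^{2}} \;=\; \frac{1}{3}, \]
the differential step providing the infinitely fine cuts and the integral step the infinite re-addition.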
“In realtà nei linguaggi naturali le unità culturali di rado sono entità formalmente univoca e spesso sono ciò che la logica dei linguaggi naturali chiama oggi « fuzzy concepts », o insiemi sfumati (Lakoff, 1972)”, Umberto Eco (1975), Trattato di semiotica generale, p. 119.
“Formal logic can only take account of relations which are formally expressed (VI.16); and it may thus, in particular instances, become necessary to express, in a formal manner, some connexion among the premises which, without actual statement, is involved in the very meaning of the language employed”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 204, “una caratteristica dello stesso strumento informatico, dal suo paradossale limite che potremmo così formulare: « L’informatica può risolvere in modo soddisfacente solo problemi informatici, cioè posti in modo informatico ». È questo il « difetto » dell’informatica, il suo intrinseco limite; ma si tratta di un limite davvero paradossale, perché si rovescia in un elemento di illimitata pervasività. Per poter usare il computer, per giovarsi al meglio delle sue straordinarie possibilità, si è, prima o poi, costretti a modificare l’intero processo”, Raul Mordenti (2001), Informatica e critica dei testi, p. 24.
“Al contrario, in una procedura ecdotica segnata dall'informatica la trascrizione da un manoscritto rappresenta il momento forte, anzi decisivo, non solo perché è il momento più costoso in termini di tempo/uomo ma soprattutto perché configura il momento crucialissimo della codifica, cioè dell'immissione nella macchina dell'informazione da cui dipenderanno tutti i successivi trattamenti e manipolazioni”, Raul Mordenti (2001), Informatica e critica dei testi, p. 29.
“The use of a modern computing machine is based on the user's ability to develop and formulate the necessary complete codes for any given problem that the machine is supposed to solve”, John von Neumann (2012), The Computer & the Brain, p. 71.
“Representations try to capture objects in their entirety and can be further transformed into publications. This already indicates a possible distinction between representation and presentation which will be discussed later. For the moment it is important that representation is a necessity for an edition. Critical engagement without representation is not an edition – but an examination, a catalogue or a description”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 24, “today information resources are being created without primarily thinking of them in terms of publication. We are less looking forward to the layout and functionality of the presentation, but start with the decoding and encoding of what is actually there. We create information resources that are guided by abstract models and abstract descriptions of the objects at hand. The dogma of our current markup strategies is the separation or rather translation from form to content. Thus, we do not just transform our textual witnesses from one (material) media and form into another (digital) media and form. Rather, we try to encode structures and meaning of documents and texts beyond their mediality”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 32, “Here we see a transition from the edition as a media product to the edition as a modelled information resource that can be presented in media but is about the abstract representation of knowledge in the first place”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 32.
“[Vi sono] due momenti caratteristici dell’impatto dell’informatica con il mondo: 1) in una prima fase il computer viene inteso/usato come macchina utile per la risoluzione di vecchi problemi tipici dell’assetto epistemico dato; b) in una seconda fase il computer viene finalmente inteso come generatore di problemi inediti in un assetto epistemico del tutto nuovo (determinato dallo stesso uso dell’informatica)”, Raul Mordenti (2001), Informatica e critica dei testi, p. 22.
In the same way as developers strive to build user-friendly digital tools and solutions.
“Infatti la macchina informatica che legge (per così dire) chiude il cerchio dell'utilizzazione dell'informatica in filologia. Con essa diventa evidente che non si tratta più di un'utilizzazione parziale, cioè dell'ausilio dell'informatica (intesa ancora come strumento) per superare le difficoltà quantitative delle antiche procedure (una fase questa che è bene rappresentata, per la filologia, dai primi tentativi di informatizzare la stemmatica degli anni Settanta). Ora, se la macchina « sa leggere », si tratta invece evidentemente anche di scrivere per la macchina, e, più in generale, di tener conto della macchina quando si scrive; ma dunque di tenere conto della macchina soprattutto quando si produce quella forma fortissima e (in senso proprio) fondamentale di scrittura che è l'edizione di un testo”, Raul Mordenti (2001), Informatica e critica dei testi, p. 45-46.
“So it is important that the high-level program, while comfortable for the human, still should be unambiguous and precise”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 298.
“Who seeks for methods without having a definite problem in mind seeks for the most part in vain”, David Hilbert (2005), “Mathematical Problems (1900b)”, in William B. Ewald, From Kant to Hilbert: A Source Book in the Foundations of Mathematics, Oxford, Oxford University Press, p. 1101.
Richard P. Feynman (1996), Lectures on Computation, ed. by Anthony J.G. Hey & Robin W. Allen, Boston, Addison-Wesley Publishing Company, p. 54. “La conclusione inevitabile a cui ci conduce l'analisi svolta in questo testo è che la costruzione di teorie, ipotesi e modelli (il theory-building) da parte degli esseri umani non solo non è stata e non può essere dismessa dalla Big Data Revolution, ma rimane essenziale per sfruttare al meglio ciò che le macchine possono offrire”, Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali, p. 125.
“Indebolire la pretesa di esattezza per produrne una simulazione più semplice e adeguata: è questo una sorta di paradosso come pure una tacita e risolutiva astuzia della scienza del calcolo”, Paolo Zellini (2022), Discreto e continuo. Storia di un errore, Milan, Adelphi, p. 237. “In definitiva, anche quando un idioletto d'opera sia identificato al massimo grado, rimangono infinite sfumature, a livello della pertinentizzazione dei livelli inferiori del continuum espressivo, che non saranno mai completamente risolte, perché spesso neppure l'autore ne è cosciente. Ciò non significa che non siano analizzabili, ma significa certamente che la loro analisi è destinata ad approfondirsi di lettura in lettura e il processo interpretativo assume l'aspetto di una approssimazione infinita […] Vi è però una pigrizia filosofica nell'etichettare come « intuizione » tutto ciò che richiede una analisi molto approfondita per essere descritto con sufficiente approssimazione”, Umberto Eco (1975), Trattato di semiotica generale, p. 340‑341.
“Evans identified two central questions: how to represent the line figures and how to define the transformation rules. Like contemporary case-based reasoning advocates, Evans recognized the importance of describing his figures in a standardized way. If the program (rather than the program user) were really to do the work, there should be no leeway for arbitrary choices in the representation”, Paul F. A. Bartha (2010), By Parallel Reasoning, p. 64. “While the electronic computer produced a revolution by increasing incredibly the speed of processing data, it still left a large task for the human: the task of programming the problems to be run. It is therefore not at all surprising that from the start great emphasis was put upon methods for alleviating the burden of the programmer”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 333.
“The chief practical difficulty of this inquiry will consist, not in the application of the method to the premises once determined, but in ascertaining what the premises are. In what are regarded as the most rigorous examples of reasoning applied to metaphysical questions, it will occasionally be found that different trains of thought are blended together; that particular but essential parts of the demonstration are given parenthetically, or out of the main course of the argument; that the meaning of a premiss may be in some degree ambiguous; and, not unfrequently, that arguments, viewed by the strict laws of formal reasoning, are incorrect or inconclusive. The difficulty of determining and distinctly exhibiting the true premises of a demonstration may, in such cases, be very considerable. […] The necessity of a rigorous determination of the real premises of a demonstration ought not to be regarded as an evil; especially as, when the task is accomplished, every source of doubt or ambiguity is removed”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 185-186. “It should be noted that this connection-pattern can be set up at will – indeed, this is the means by which the problem to be solved, i.e. the intention of the user, is impressed on the machine”, John von Neumann (2012), The Computer & the Brain, p. 11, “the art of computing consists to no small degree of measures to keep this effect down [the amplification of errors introduced by earlier operations]”, John von Neumann (2012), The Computer & the Brain, p. 28.
“This was done in part by thinking things through logically, but also perhaps more importantly by coding a large number of problems. Through this procedure real difficulties emerged and helped illustrate general problems that were then solved”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 267.
“Digital computers are able to answer most – but not all – questions stated in finite, unambiguous terms. They may, however, take a very long time to produce an answer (in which case you build faster computers) or it may take a very long time to ask the question (in which case you hire more programmers). Computers have been getting better and better at providing answers–but only to questions that programmers are able to ask. What about questions that computers can give useful answers to but that are difficult to define? In the real world, most of the time, finding an answer is easier than defining the question”, George B. Dyson (2012), Turing’s Cathedral, p. 262-263.
“Back to Leibniz!” is the title of a 1932 article by Norbert Wiener, a major figure of cybernetics. Cf. Norbert Wiener (1985), Cybernetics: or Control and Communication in the Animal and the Machine, Cambridge, M.I.T. Press.
“The machine's processes are mosaics of very simple standard parts, but the designs can be of great complexity, and it is not obvious where the limit is to the patterns of thought they could imitate”, Alan Turing, “Can Automatic Calculating Machines Be Said to Think?”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 500.
“Doing digital preservation requires a foundational understanding of the structure and the nature of digital information and media. [...] first, all digital information is material. Second, the database is an essential media form for understanding the logic of digital information systems. Third, digital information is best understood as existing in and through a nested set of platforms”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 34, “In this context, like so many others, to be able to decide on how to collect content and ultimately to arrange and describe it requires significant technical understanding of how the underlying system functions and has been designed”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 146.
“It seems that the data is the place where the editorial content is stored, where the editorial processes are recorded and the editorial knowledge is kept. The most important task for the editor is the creation of information as rich, accurate and reliable data. The creation of online publications or print spin-offs from this data may be left to other specialists such as publishing houses, web agencies or media designers”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 36-37. Computer science teaches us that noise should be removed from data as early as possible to prevent it from accumulating along the way. Cf. “[Bigelow’s] Maxim 5 added that « if noise is ever to be filtered from signal, it must be done at the earliest possible stage rather than after the two are tangled with other noises and signals »”, George B. Dyson (2012), Turing’s Cathedral, p. 112.
Researchers should also be aware of the importance of copyleft: the practice of granting the right to freely distribute and modify a work, while requiring that the same rights be preserved (and not restricted) in derivative works created from that original work.
“Arrangement and description is the process by which collections are made discoverable, intelligible, and legible to their future users”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 129.
“Anche per le discipline umanistiche « il Web semantico » rappresenta « il futuro » e l'informatica umanistica è chiamata a fornire « rappresentazioni formali del lascito documentario dell'umanità (human record) », adatte cioè ad essere elaborate automaticamente. Infatti, come ancora ricorda John Unsworth, « queste rappresentazioni – ontologie, schemi, rappresentazioni della conoscenza, comunque le si voglia chiamare – dovrebbero essere prodotte da persone formate negli studi umanistici. E la disciplina che li produce richiede una formazione umanistica, unita ad una conoscenza di elementi di matematica, logica, ingegneria e informatica. […] C'è una grande quantità di lavoro da fare – e indubbiamente non tutto di natura tecnica. Nella costruzione di questa grande mappa del sapere una grande parte sarà costituita da lavoro collaborativo (social work), creazione del consenso, compromesso. Ma anche quest'attività dovrà essere affidata a persone che sappiano come il consenso possa essere raggiunto ed espresso in un medium di natura computazionale »”, Dino Buzzetti (2012), “Cos’è, oggi, l’informatica umanistica? L’impatto della tecnologia”, in Fabio Ciotti & Gianfranco Crupi (eds.), Dall’Informatica umanistica alle culture digitali. In memoria di Giuseppe Gigliozzi, Roma, Quaderni DigiLab/Università Sapienza di Roma, p. 127-128; Hans & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 102.
“This separation of ethical questions away from the technical reflects a wider problem in the field, where the responsibility for harm is either not recognized or seen as beyond the scope of the research”, Kate Crawford (2021), Atlas of AI, p. 117, “Simply leaving regulation up to AI practitioners would be as unwise as leaving it solely up to government agencies”, Melanie Mitchell (2019), Artificial Intelligence, p. 150.
“Se l'etica è la logica dell'agire giusto, la logica è l'etica del pensare. [...] Parlare con giustezza è un modo dell'agire responsabile. Fare asserzioni è assumersi l'impegno di sostenere la loro verità”, Roberta de Monticelli (2006), Esercizi di pensiero per apprendisti filosofi, p. 11.
“We should be afraid. Not of intelligent machines. But of machines making decisions that they do not have the intelligence to make. I am far more afraid of machine stupidity than of machine intelligence. Machine stupidity creates a tail risk. […] Or as the AI researcher Pedro Domingos so memorably put it, « People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world. »”, Melanie Mitchell (2019), Artificial Intelligence, p. 368.
Emiliano Ippoliti (2023), Guida critica alle intelligenze artificiali, p. 115.
“The philosopher Achille Mbembé [...] writes: « It is about extraction, capture, the cult of data, the commodification of human capacity for thought and the dismissal of critical reason in favour of programming…. Now more than ever before, what we need is a new critique of technology, of the experience of technical life. »”, Kate Crawford (2021), Atlas of AI, p. 225-226, “Suddenly what were once only academic questions have started to matter very much in the real world”, Melanie Mitchell (2019), Artificial Intelligence, p. 322, “Perhaps AI will take away truck-driving jobs, but because of the need to develop AI ethics, the field will create new positions for moral philosophers”, Melanie Mitchell (2019), Artificial Intelligence, p. 358. Cf. also “the AI-problem, the problem of giving a description of intelligent behaviour that would be precise enough to make a computer simulation possible. Everyone nowadays seems to be concerned with this problem, I said. Neurophysiologists and psychologists who work on the problem of perception are joining forces with computer scientists, and even the philosophers are hoping against hope to be taken seriously, and thereby to get a salary raise”, Gian-Carlo Rota (1986), “In Memoriam of Stan Ulam. The Barrier of Meaning”, Physica, 22D, p. 1.
Cf. “The integration of linguistics and logic was one of the major attainments of twentieth-century humanities. It has had an enormous effect on other disciplines both within and outside the humanities, from musicology to literary studies and from cognitive psychology to artificial intelligence”, Rens Bod (2013), A New History of the Humanities, p. 297.
“If I were to choose a patron saint for cybernetics out of the history of science, I should have to choose Leibniz. The philosophy of Leibniz centers about two closely related concepts – that of a universal symbolism and that of a calculus of reasoning. From these are descended the mathematical notation and the symbolic logic of the present day. Now, just as the calculus of arithmetic lends itself to a mechanization progressing through the abacus and the desk computing machine to the ultra-rapid computing machines of the present day, so the calculus ratiocinator of Leibniz contains the germs of the machina ratiocinatrix, the reasoning machine”, Norbert Wiener (1985), Cybernetics, p. 12, “[Leibniz’s] four great accomplishments to the field of computing: his initiation of the field of formal logics; his construction of a digital machine; his understanding of the inhuman quality of calculation and the desirability as well as the capability of automating this task; and, lastly, his very pregnant idea that the machine could be used for testing hypotheses”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 9, “Leibniz believed, following Hobbes and in advance of Hilbert, that a consistent system of logic, language, and mathematics could be formalized by means of an alphabet of unambiguous symbols manipulated according to mechanical rules. […] With his logical calculus, or calculus ratiocinator, Leibniz took the first steps towards his vision of a « universal symbolistic in which all truths of reason would be reduced to a kind of calculus ». […] he proposed a universal coding in which primary concepts would be represented by prime numbers–an all-encompassing mapping between numbers and ideas”, George B. Dyson (2012), Turing’s Cathedral, p. 103-104.
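A reconstruction often used to illustrate this prime-number coding (the specific assignments are only illustrative, not a quotation from Leibniz): if « animal » is coded by 2 and « rational » by 3, the composite concept « rational animal », i.e. « man », receives the code 2 × 3 = 6; since prime factorisation is unique, the primitive constituents of any composite concept can always be recovered from its number.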
“Leibniz referred to such a system of characters as a characteristic. Unlike the alphabetic symbols which had no meaning, the examples just mentioned were, for him, a real characteristic in which each symbol represented some definite idea in a natural and appropriate way. What was needed, Leibniz maintained, was a universal characteristic, a system of symbols that was not only real, but which also encompassed the full scope of human thought”, Martin Davis (2018), The Universal Computer. The Road from Leibniz to Turing, Abingdon‑on‑Thames, CRC Press, p. 10‑11.
“Leibniz a énoncé, il y a deux siècles, le projet de créer une écriture universelle, dans laquelle toutes les idées composées fussent exprimées au moyen de signes conventionnels des idées simples, selon des règles fixes”, Giuseppe Peano (1894), “Notations de Logique Mathématique. Introduction au Formulaire de mathématiques”, in Giuseppe Peano, Opere scelte, Volume II, Rome, Cremonese, 1958, p. 3, “That zero and one were sufficient for logic as well as arithmetic was established by Gottfried Wilhelm Leibniz in 1679, following the lead given by Thomas Hobbes in his Computation, or Logique of 1656. « By ratiocination, I mean computation, » Hobbes had announced. « Now to compute, is either to collect the sum of many things that are added together, or to know what remains when one thing is taken out of another. Ratiocination, therefore is the same with Addition or Subtraction »”, George B. Dyson (2012), Turing’s Cathedral, p. 4.
“Fascinated by the Aristotelian division of concepts into fixed « categories, » Leibniz thought of what he came to call his « wonderful idea »: he would seek a special « alphabet » whose elements represented not sounds, but concepts. A language based on such an alphabet should make it possible to determine by symbolic calculation which sentences written in the language were true and what logical relationships existed among them”, Martin Davis (2018), The Universal Computer, p. 3.
“Itaque profertur hic calculus quidam novus et mirificus, qui in omnibus nostris ratiocinationibus locum habet, et qui non minus accurate procedit quam Arithmetica aut Algebra. Quo adhibito semper terminari possunt controversiae quantum ex datis eas determinari possibile est, manu tantum ad calamum admoto, ut sufficiat duos disputantes omissis verborum concertationibus sibi invicem dicere: calculemus, ita enim perinde ac si duo Arithmetici disputarent de quodam calculi errore”, Gottfried W. Leibniz (1666), Dissertatio de arte combinatoria, Berlin, De Gruyter, 1923. « Une manière de Langue ou d'Écriture universelle, mais infiniment différente de toutes celles qu'on a projetées jusqu'ici ; car les caractères, et les paroles mêmes, y dirigeroient la Raison ; et les erreurs exceptées celles de fait, n'y seroient que des erreurs de calcul », Giuseppe Peano (1896), “Introduction au tome II du « Formulaire de mathématiques »”, in Giuseppe Peano, Opere scelte, Volume II, Rome, Cremonese, 1958, p. 1.
“In 1666 Leibniz wrote the work De arte combinatoria, which was considered to be a continuation of Llull's project to discover truths through the exhaustive combination of concepts”, Rens Bod (2013), A New History of the Humanities, p. 195.
“the rules of deduction could then be reduced to manipulations of these symbols, via what Leibniz called a calculus ratiocinator, what nowadays might be called a symbolic logic”, Martin Davis (2018), The Universal Computer, p. 12. “According to Leibniz, however, in order to be an effective method of discovery, the axiomatic method must be converted into a formal axiomatic method. This involves setting up a formal language capable of expressing all thoughts, and a system of formal deductive rules capable of representing all human reasoning. [...] Such language will be based on the fact that « all human ideas can be resolved into a few ones as primitive ideas. » The latter are ideas « which are conceived per se, and from whose combination all our other ideas arise. » They are like « a sort of alphabet of human thought. » [...] One will then be able to assign characters to the primitive ideas and form new characters for all other ideas by means of combinations of such characters, which will « have among themselves the relation that the ideas have among themselves. »” Carlo Cellucci (2013), Rethinking Logic, p. 169, “On the other hand, Leibniz calls the system of formal deductive rules, capable of representing all human reasoning, « calculus of reasoning » [calculus ratiocinator]. Such a calculus is necessary because, to « avoid being left wandering in a labyrinth, » the « mind must be guided by some (as it were) sensible thread. » The mind is « unable to embrace distinctly many things at the same time, » only by means of a calculus of reasoning will it be able to « do without imagination, using signs in place of things. » The calculus of reasoning will be « like a sort of general algebra » that will « give the means to perform reasoning by calculation »”, Carlo Cellucci (2013), Rethinking Logic, p. 170.
“These mathematical tables were calculated by hand, and the mistakes were simply the result of human error. This caused Babbage to exclaim, « I wish to God these calculations had been executed by steam! » This marked the beginning of an extraordinary endeavor to build a machine capable of faultlessly calculating the tables to a high degree of accuracy. [...] The scientific tragedy was that Babbage's machine would have been a stepping-stone to the Analytical Engine, which would have been programmable. Rather than merely calculating a specific set of tables, the Analytical Engine would have been able to solve a variety of mathematical problems depending on the instructions that it was given. In fact, the Analytical Engine provided the template for modern computers. The design included a « store » (a memory) and a « mill » (processor), which would allow it to make decisions and repeat instructions, which are equivalent to the « if… then… » and « loop » commands in modern programming”, Simon Singh (1999), The Code Book, The Science of Secrecy from Ancient Egypt to Quantum Cryptography, New York, Anchor Books, p. 64-65. Cf. Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 17-18 and illustration 4 following p. 120.
“There is not only a close analogy between the operations of the mind in general reasoning and its operations in the particular science of Algebra, but there is to a considerable extent an exact agreement in the laws by which the two classes of operations are conducted”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 6, “Whence it is that the ultimate laws of Logic are mathematical in their form; why they are, except in a single point, identical with the general laws of Number; and why in that particular point they differ”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 11.
“The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic and construct its method”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 1. “George Boole’s great achievement was to demonstrate once and for all that logical deduction could be developed as a branch of mathematics”, Martin Davis (2018), The Universal Computer, p. 30.
Alexandre D. Aleksandrov, Andreï N. Kolmogorov & Mikhaïl A. Lavrent’ev (1974), Le matematiche, p. 88.
The term “algorithm” was also derived from his name.
George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 6.
“As the combination of two literal symbols in the form xy expresses the whole of that class of objects to which the names or qualities represented by x and y are together applicable, it follows that if the two symbols have exactly the same signification, their combination expresses no more than either of the symbols taken alone would do. In such case we should therefore have xy = x. As y is, however, supposed to have the same meaning as x, we may replace it in the above equation by x, and we thus get xx = x. Now in common Algebra the combination xx is more briefly represented by x². Let us adopt the same principle of notation here; for the mode of expressing a particular succession of mental operations is a thing in itself quite as arbitrary as the mode of expressing a single idea or operation (II.3). In accordance with this notation, then, the above equation assumes the form x² = x, and is, in fact, the expression of a second general law of those symbols by which names, qualities, or descriptions, are symbolically represented”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 31.
“We have seen (II.9) that the symbols of Logic are subject to the special law, x² = x. Now of the symbols of Number there are but two, viz. 0 and 1, which are subject to the same formal law. We know that 0² = 0, and that 1² = 1; and the equation x² = x, considered as algebraic, has no other roots than 0 and 1”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 37-38, “The literal symbols of Logic are universally subject to the law whose expression is x² = x. Of the symbols of Number there are two only, 0 and 1, which satisfy this law. But each of these symbols is also subject to a law peculiar to itself in the system of numerical magnitude, and this suggests the inquiry, what interpretations must be given to the literal symbols of Logic, in order that the same peculiar and formal laws may be realized in the logical system also”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 46-47, “the fundamental law of duality (2) Chap. II., whose expression is x² = x, or, x(1 - x) = 0; a law, which while it serves to distinguish the system of thought in Logic from the system of thought in the science of quantity, gives to the processes of the former a completeness and a generality which they could not otherwise possess”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 166, “His remark about a « special law to which the symbols of quantity are not subject » is very important: this law in effect is that x² = x for every x in his system. Now in numerical terms this equation or law has as its only solution 0 and 1. This is why the binary system plays so vital a role in modern computers: their logical parts are in effect carrying out binary operations”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 37.
George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 50. Cf. also “That axiom of metaphysicians which is termed the principle of contradiction, and which affirms that it is impossible for any being to possess a quality, and at the same time not to possess it, is a consequence of the fundamental law of thought, whose expression is x² = x. Let us write this equation in the form x – x² = 0, whence we have x(1 – x) = 0”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 49, “Thus it is a consequence of the fact that the fundamental equation of thought is of the second degree, that we perform the operation of analysis and classification, by division into pairs of opposites, or, as it is technically said, by dichotomy”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 50-51.
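Spelling out the algebra these passages refer to: Boole's law of idempotency can be rewritten as
\[ x^{2} = x \;\Longrightarrow\; x - x^{2} = 0 \;\Longrightarrow\; x(1 - x) = 0, \]
an equation whose only numerical roots are 0 and 1; read in terms of classes, x(1 − x) = 0 says that nothing belongs both to the class x and to its complement 1 − x, which is Boole's algebraic form of the principle of contradiction.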
George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities. “In Boole's system 1 denotes the entire realm of discourse, the set of all objects being discussed, and 0 the empty set”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 37, “Nothing and Universe are the two limits of class extension, for they are the limits of the possible interpretations of general names”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 76.
“We might undoubtedly have established the theory of Primary Propositions upon the simple notion of space, in the same way as that of secondary propositions has been established upon the notion of time”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 174-175, “the real ground upon which the symbol 1 represents in primary propositions the universe of things, and not the space they occupy, is, that the sign of identity = connecting the members of the corresponding equations, implies that the things which they represent are identical, not simply that they are found in the same portion of space. Let it in like manner be affirmed, that the reason why the symbol 1 in secondary propositions represents, not the universe of events, but the eternity in whose successive moments and periods they are evolved, is, that the same sign of identity connecting the logical members of the corresponding equations implies, not that the events which those members represent are identical, but that the times of their occurrence are the same”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 176. “This led Boole to the principle that the algebra of logic was precisely what ordinary algebra would become if restricted to the two values 0 and 1. However, to make sense of this, it became necessary to reinterpret the symbols 0 and 1 as classes. A clue is provided by the behaviors of 0 and 1, respectively, with respect to ordinary multiplication: 0 times any number is 0; 1 times any number is that very number. In symbols, 0x = 0, 1x = x”, “Boole’s method of relating secondary propositions to his algebra of classes was to bring time into the picture. With each proposition Boole would in effect associate the class of instants of time for which that proposition was true. To say that proposition X is true, Boole would write X = 1 meaning that the class of instants in which the proposition is true encompasses the entire time span under consideration. Likewise, X = 0 would express that X is false, because there are no instants of time in which X is true. Given a proposition X&Y which expresses the truth of both X and Y, the set of instants in which it is true is just the set intersection XY. Finally, for a proposition if X then Y to be true, what is required is that any time that X is true, Y is also true, that is that there is no time when X is true and Y is false. As an equation: X(1 − Y) = 0”, Martin Davis (2018), The Universal Computer, p. 24, Martin Davis (2018), The Universal Computer, p. 188, n. 25.
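A small worked case along the lines of Davis's reconstruction (the propositions are my own illustrations): let X stand for « it rains » and Y for « the ground gets wet ». The conditional « if X then Y » is the equation X(1 − Y) = 0; asserting X sets X = 1, so
\[ 1 \cdot (1 - Y) = 0 \;\Longrightarrow\; Y = 1, \]
and modus ponens has been carried out as ordinary algebra restricted to the two values 0 and 1.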
Cf. “But nobody had shown with Shannon’s clarity and rigor that electrical engineers could use all the tools of Boolean algebra to design circuits with switches. In particular, if you can simplify a Boolean expression that describes a network, you can simplify the network accordingly”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 257. Charles Petzold (2000), Code. The Hidden Language of Computer Hardware and Software, Washington, Microsoft Press, p. 103.
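A minimal sketch, in modern terms, of the simplification Shannon's result licenses (the circuit is a toy example of mine, not taken from Goldstine or Petzold): two parallel branches of series switches, A·B and A·(not B), always behave like the single switch A, so the network can be replaced accordingly; the exhaustive check below is nothing more than a two-variable truth table.

    from itertools import product

    def network(a, b):
        # two parallel branches of series switches: (A and B) or (A and not B)
        return (a and b) or (a and not b)

    def simplified(a, b):
        # Boolean simplification: A·B + A·(not B) = A·(B + not B) = A
        return a

    # exhaustive check over all switch positions: the two circuits always agree
    assert all(network(a, b) == simplified(a, b)
               for a, b in product([False, True], repeat=2))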
“Mathematical logic, also called « logistic », « symbolic logic », the « algebra of logic », and, more recently, simply « formal logic », is the set of logical theories elaborated in the course of the last century with the aid of an artificial notation and a rigorously deductive method”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, Dordrecht, D. Reidel, p. 1, « Mais ce qui a le plus contribué à la solution du problème, c'est la nouvelle et importante science qu'on appelle Logique mathématique, et qui étudie les propriétés formelles des opérations et des relations de logique. [...] Par la combinaison des signes d'Algèbre et de Logique, on peut représenter toutes les relations de logique avec peu de signes, ayant une signification précise, et assujettis à des règles bien déterminées. En conséquence, en introduisant des signes pour indiquer les idées de l'Algèbre, ou de la Géométrie, on peut énoncer complètement en symboles les propositions de ces sciences », Giuseppe Peano (1894), “Notations de Logique Mathématique”, p. 3‑4.
“Is there a limit to what we can, in principle, compute? [...] Ironically, it turns out that all this was discussed long before computers were built! Computer science, in a sense, existed before the computer. It was a very big topic for logicians and mathematicians in the thirties”, Richard P. Feynman (1996), Lectures on Computation, p. 52.
Paolo Zellini (2022), Discreto e continuo, p. 67.
Martin Davis (2018), The Universal Computer, p. 57-59. “The essence of the diagonal method is the fact of using one integer in two different ways – or, one could say, using one integer on two different levels – thanks to which one can construct an item which is outside of some predetermined list”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 423.
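A finite sketch of the diagonal construction (an illustration of the idea only, not taken from Davis or Hofstadter): given any square list of rows of 0s and 1s, the row obtained by flipping the i-th bit of the i-th row differs from every listed row in at least one position, and therefore lies outside the list.

    def diagonal(rows):
        # flip the bit on the diagonal: the result differs from row i at position i
        return [1 - rows[i][i] for i in range(len(rows))]

    rows = [[0, 1, 0],
            [1, 1, 1],
            [0, 0, 0]]
    print(diagonal(rows))  # [1, 0, 1], which is not among the listed rows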
In contrast to propositional logic, predicate logic allows for the analysis of the internal structure of propositions, rather than merely their connections.
In first-order logic quantification is restricted to individuals and doesn’t apply to sets of individuals, unlike in Frege's broader theory.
Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, p. 6. “His solution was to develop his Begriffsschrift as an artificial language with mercilessly precise rules of grammar, or as one says, of syntax. This made it possible to exhibit logical inferences as purely mechanical operations, so-called rules of inference, having reference only to the patterns in which symbols are arranged. It was also the first example of a formal artificial language constructed with a precise syntax. From this point of view, the Begriffsschrift was the ancestor of all computer programming languages in common use today”, Martin Davis (2018), The Universal Computer, p. 40.
“The most immediate point of contact between my formula language and that of arithmetic is the way in which letters are employed”, Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, p. 6.
Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, p. 11, “This indeterminacy makes it possible to use letters to express the universal validity of propositions”, Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”, p. 10.
“Thus it is that the great mediaeval controversy over universals has flared up anew in the modern philosophy of mathematics. The issue is clearer now than of old, because we now have a more explicit standard whereby to decide what ontology a given theory or form of discourse is committed to: a theory is committed to those and only those entities to which the bound variables of the theory must be capable of referring in order that the affirmation made in the theory be true. […] the fundamental cleavages among modern points of view on foundations of mathematics do come down pretty explicitly to disagreements as to the range of entities to which the bound variables should be permitted to refer. […] The three main mediaeval points of view regarding universals are designated by historians as realism, conceptualism, and nominalism. Essentially these same three doctrines reappear in twentieth-century surveys of the philosophy of mathematics under the new names logicism, intuitionism, and formalism”, Willard V. O. Quine (1963), From a Logical Point of View, New York, Harper & Row, p. 13-14.
Cf. <https://plato.stanford.edu/entries/frege-theorem/#S2>.
In particular, this was achieved by introducing a “theory of types” to prevent circular assertions. This theory divides objects into distinct types and organizes them into different orders or levels: a set of a given type can only contain objects or sets of a lower type.
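For illustration (a textbook reconstruction, not drawn from the works cited here): without type restrictions one may form Russell's set
\[ R = \{\, x \mid x \notin x \,\}, \qquad \text{whence} \qquad R \in R \;\leftrightarrow\; R \notin R; \]
under the theory of types the expression x ∈ x is simply ill-formed, since a set of a given type may contain only members of a lower type, so the circular definition can never be written down in the first place.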
“The present work has two main objects. One of these, the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental logical concepts, and that all its propositions are deducible from a very small number of fundamental logical principles”, Bertrand Russell, Principles of Mathematics (New York, 1950), quoted by Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 172. Cf. Willard V. O. Quine (1963), From a Logical Point of View, p. 81.
The idea of formalism was to detach mathematical concepts from their meaning and to treat them as abstract, meaningless symbols or expressions that are manipulated according to explicitly given formal, syntactic rules. For this reason it is associated with nominalism, the view that abstract entities do not exist anywhere, neither in reality nor in the mind, but are merely names. Cf. “the formalist keeps classical mathematics as a play of insignificant notations. This play of notations can still be of utility – whatever utility it has already shown itself to have as a crutch for physicists and technologists. But utility need not imply significance, in any literal linguistic sense. […] For an adequate basis for agreement among mathematicians can be found simply in the rules which govern the manipulation of the notations – these syntactical rules being, unlike the notations themselves, quite significant and intelligible”, Willard V. O. Quine (1963), From a Logical Point of View, p. 15, “Levels are not cleanly separated, as the formalist version of what mathematics is would have one believe. The formalist philosophy claims that mathematicians only deal with abstract symbols, and that they couldn't care less whether those symbols have any applications to or connections with reality. But this is quite a distorted picture. Nowhere is this clearer than in metamathematics. If the theory of numbers is itself used as an aid in gaining factual knowledge about formal systems, then mathematicians are tacitly showing that they believe these ethereal things called « natural numbers » are actually part of reality – not just figments of the imagination”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 458. Besides logicism and formalism, there was also a third school: intuitionism, founded by L.E.J. Brouwer. Intuitionists (or constructivists) held that mathematical objects exist only to the extent that they can be understood and constructed by human thought. This view was associated with “conceptualism,” which posits that abstract entities exist only in the mind and not in the external world.
The axiomatization of a theory is the formulation of its fundamental properties and truths from which all others can be deduced.
“It was really Hilbert's stroke of genius to understand that formalization is the proper technique to tackle such foundational questions. What he taught us can be put roughly as follows. Suppose that T is an axiomatized theory which has been formalized in terms of the first order language L. This language has such a precise syntax that it itself can be studied as a mathematical object. One can ask for instance: « Can one possibly run into contradictions if one proceeds entirely formally within L, using only the axioms of T and those of classical logic, all of which have been expressed in L? » If one can prove mathematically that the answer to this question is « no », one has there a mathematical proof that the theory T is free of contradictions! This is basically what the famous « Hilbert program » was all about. [...] In short, the formalists tried to create a mathematical technique by means of which one could prove that mathematics is free of contradictions. This was the original purpose of formalism”, Ernst Snapper (1979), “The Three Crises in Mathematics: Logicism, Intuitionism and Formalism”, Mathematics Magazine, 52 (4), p. 207-216. DOI: https://doi.org/10.2307/2689412 [consulted on 06/12/2024], p. 214.
Cf. Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, Milano, Tascabili degli Editori Associati, p. 14 and p. 63-64.
“7.71. « Complete system in a wide sense » for: « axiomatic system which contains all the true sentences of a given domain ». It can also be said that no sentence of a given domain is true if it is not derivable in the system. 7.72. « Complete system in a strict sense » for: « axiomatic system in which each sentence which is not a law is the negation of one of its laws »”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 28. A “law” is a sentence asserted in the system, either an axiom (not derived in the system) or a theorem (deduced from the axioms by application of rules).
There is a subtle but crucial distinction between systems that are externally correct, meaning they do not demonstrate falsehoods, and those that are internally coherent, meaning they do not demonstrate contradictions. Cf. Piergiorgio Odifreddi (2005), Penna, pennello e bacchetta, p. 81.
“7.61. « Non-contradictory system » for: « axiomatic system whose rules of deduction do not allow a sentence to be deduced along with the negation of this sentence ». 7.62. In a complete system which is contradictory any sentence can be deduced”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 28. This is the famous principle ex falso quodlibet.
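A standard two-step derivation of ex falso quodlibet, given here only to make the mechanism explicit (it is not Bocheński's own example): from P infer P ∨ Q by addition; from P ∨ Q and ¬P infer Q by disjunctive syllogism,
\[ P \;\vdash\; P \lor Q, \qquad P \lor Q,\ \neg P \;\vdash\; Q, \]
so as soon as both P and ¬P are laws of the system, every sentence Q becomes a law.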
“By the Entscheidungsproblem of a system of symbolic logic is here understood the problem to find an effective method by which, given any expression Q in the notation of the system, it can be determined whether or not Q is provable in the system”, Alonzo Church, “A Note on the Entscheidungsproblem”, quoted by Jack B. Copeland (ed.) (2004), The Essential Turing, p. 45, George B. Dyson (2012), Turing’s Cathedral, p. 245-247.
Trying all possibilities or combinations only works if the solution exists, as it will eventually be found (perhaps after a long time). However, if no solution exists, the procedure will continue indefinitely, rendering it ineffective. “This gets to the crux of the matter of what should count as a « test ». Of prime importance is a guarantee that we will get our answer in a finite length of time. If there is a test for theoremhood, a test which does always terminate in a finite amount of time, then that test is called a decision procedure for the given formal system”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 40-41.
Propositional logic, for example, is decidable because a method exists ‒ the truth-table ‒ that can be used to test whether an arbitrary propositional formula is satisfiable. A formula is satisfiable if there is at least one interpretation that makes the formula true; it is valid if it is true under every interpretation. Cf. “Hilbert had also sought explicit calculational procedures by means of which it would always be possible to determine, given some premises and a proposed conclusion, written in the notation of what has come to be called « first-order logic » whether Frege’s rules would enable that conclusion to be derived from those premises”, Martin Davis (2018), The Universal Computer, p. 127, “actually, Hilbert did not put the Entscheidungsproblem in quite that way: he asked for a procedure to determine whether a given expression of first order logic is valid in every possible interpretation”, Martin Davis (2018), The Universal Computer, p. 203, n. 8, “The calculus of reasoning will provide a universal decision procedure. By means of this procedure, human beings will « always be able to know whether it is possible to decide the question on the basis of the knowledge which is already given to them. » This will be useful even when the question is decided negatively, because « it is important at least to know that what is sought cannot be found by the means available to us. »”, Carlo Cellucci (2013), Rethinking Logic, p. 171.
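A minimal sketch of the truth-table decision procedure (the formulas and names below are my own illustrations): enumerate every assignment of truth values to the variables; a formula is satisfiable if at least one row makes it true and valid if every row does.

    from itertools import product

    def satisfiable(formula, n_vars):
        # at least one row of the truth table makes the formula true
        return any(formula(*row) for row in product([False, True], repeat=n_vars))

    def valid(formula, n_vars):
        # every row of the truth table makes the formula true
        return all(formula(*row) for row in product([False, True], repeat=n_vars))

    print(satisfiable(lambda p: p and not p, 1))  # False: a contradiction has no satisfying row
    print(valid(lambda p: p or not p, 1))         # True: an excluded-middle instance holds in every row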
“This conviction of the solvability of every mathematical problem is a powerful incentive to the worker. We hear within us the perpetual call: There is the problem. Seek its solution. You can find it by pure reason, for in mathematics there is no ignorabimus”, David Hilbert, Mathematical Problems (1900b) quoted in Jack B. Copeland (ed.) (2004), The Essential Turing, p. 47.
“One should guard against confusing axiomatization and formalization. […] Examples of axiomatized theories are Euclidean plane geometry with the usual Euclidean axioms, arithmetic with the Peano axioms, ZF [Zermelo–Fraenkel set theory] with its nine axioms, etc. […] Suppose then that some axiomatized theory T is given. Restricting ourselves to first order logic, « to formalize T » means to choose an appropriate first order language for T. The vocabulary of a first order language consists of five items, four of which are always the same and are not dependent on the given theory T. These four items are the following: (1) A list of denumerably many variables […] (2) Symbols for the connectives of everyday speech, say ¬ for « not, » ∧ for « and, » ∨ for the inclusive « or, » → for « if then, » and ↔ for « if and only if » […] (3) The equality sign = [...] (4) The two quantifiers, the « for all » quantifier ∀ and the « there exist » quantifier ∃ […] Since T is an axiomatized theory, it has so called « undefined terms. » One has to choose an appropriate symbol for every undefined term of T and these symbols make up the fifth item”, Ernst Snapper (1979), “The Three Crises in Mathematics: Logicism, Intuitionism and Formalism”, p. 213.
“Before leaving the subject of computability, I want to make some remarks about the related topic of « grammars ». In mathematics, as in linguistics, a grammar is basically a set of rules for combining the elements of a language, only the language is a mathematical one (such as arithmetic or algebra). It is possible to misapply these rules”, Richard P. Feynman (1996), Lectures on Computation, p. 91.
Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 16.
“Axiomatic System for « the set of expressions falling into two classes such that the elements of the second are derived from the first by the application of explicitly formulated rules »”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 26, “An axiomatic system contains terms, sentences, and laws, rules of definition for terms, rules of formation for sentences, and rules of deduction for laws”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 26.
Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 27.
Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 27. “Turing suggests that any puzzle can be re‑expressed as a substitution puzzle […] and the rules of the puzzle, whatever they are, are to be represented in terms of permissible substitutions of groups of letters for other groups of letters. […] The axioms – which are simply strings of mathematical symbols – form the starting position. The theorem–another string of symbols – is the winning position. The rules of the puzzle are substitutions that enable streams of mathematical symbols to be transformed into other strings [...] Turing calls the substitution formulation of any puzzle its « normal form »”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 577.
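Turing's “substitution puzzle” normal form can be illustrated with a toy rewriting system: a starting string plays the role of the axioms, a set of permissible substitutions plays the role of the rules, and the strings reachable from the start are the “theorems”. The particular strings and rules below are invented purely for illustration.

```python
# A toy substitution puzzle: which strings are reachable from the "axiom"?
RULES = [("A", "AB"), ("BBB", "C")]     # invented substitution rules

def successors(string):
    """All strings obtainable from `string` by one permissible substitution."""
    out = []
    for old, new in RULES:
        i = string.find(old)
        while i != -1:
            out.append(string[:i] + new + string[i + len(old):])
            i = string.find(old, i + 1)
    return out

reached, frontier = {"A"}, {"A"}
for _ in range(3):                       # three rounds of substitutions
    frontier = {t for s in frontier for t in successors(s)} - reached
    reached |= frontier
print(sorted(reached))                   # ['A', 'AB', 'ABB', 'ABBB']
```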
“Rules of Transformation, by which these laws can be developed so as to yield still further laws”, Józef M. Bocheński (1959), A Precis of Mathematical Logic, p. 22.
Gödel, Kurt (1931), “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I”, Monatshefte für Mathematik und Physik, 38 (1931), p. 173-198. To be precise, Gödel first demonstrated the completeness of predicate logic, while the completeness of propositional logic had already been demonstrated by Emil Post.
“Naturally, mathematics can be formalized in many different ways, by choosing different systems of axioms. One might therefore suspect that the two preceding results depend on the particular system chosen, but this is not so. It can be shown that the two results hold for any formal system, provided that it contains the arithmetic of the integers in its usual formulation, and that it does not prove falsehoods: that is, that its axioms do not lead to results that are refutable on intuitive grounds”, Piergiorgio Odifreddi (2005), Penna, pennello e bacchetta, p. 18 (my translation).
“The Epimenides paradox is a one-step Strange Loop, like Escher's Print Gallery. But how does it have to do with mathematics? That is what Gödel discovered. His idea was to use mathematical reasoning in exploring mathematical reasoning itself. This notion of making mathematics « introspective » proved to be enormously powerful, and perhaps its richest implication was the one Gödel found: Gödel's Incompleteness Theorem”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 17.
“Is G a TNT-theorem? If so, then it must assert a truth. But what in fact does G assert? Its own nontheoremhood. Thus from its theoremhood would follow its nontheoremhood: a contradiction. Now what about G being a nontheorem? This is acceptable, in that it doesn't lead to a contradiction. But G's nontheoremhood is what G asserts–hence G asserts a truth. And since G is not a theorem, there exists (at least) one truth which is not a theorem of TNT”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 448, “The fascinating thing is that any such system digs its own hole; the system's own richness brings about its own downfall. The downfall occurs essentially because the system is powerful enough to have self-referential sentences. [...] It seems that with formal systems there is an analogous critical point. Below that point, a system is « harmless » and does not even approach defining arithmetical truth formally; but beyond the critical point, the system suddenly attains the capacity for self-reference, and thereby dooms itself to incompleteness”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 470.
“But it cannot be emphasized too strongly that this undecidability is only with respect to provability inside the system. From our outside viewpoint, it is clear that U is true”, Martin Davis (2018), The Universal Computer, p. 97.
All of the following explanation is derived from Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 88-91.
“If provability in a formal system for arithmetic is definable within the system itself, but truth is not, then provability and truth are not the same thing. So either there are provable formulae that are not true, or there are true formulae that are not provable. In the first case the system is not correct, because it proves some falsehood, and in the second case it is not complete, because it does not prove some truth. Put another way, any formal system for arithmetic that is correct is incomplete, provided it allows provability in the system to be defined within the system itself”, Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 101 (my translation). “The keystone in Gödel’s proof of the existence of undecidable propositions is the fact that provability in PM can be expressed in PM itself”, Martin Davis (2018), The Universal Computer, p. 98.
“The property of a formula, that it is provable, is a purely combinatorial (formal) one, in that it does not depend on the meaning of the symbols. That a formula A is provable within a specified system simply means that there is a finite sequence of formulas that begins with some axioms of the system and ends with A, and which, in addition, has the property that each formula of the sequence arises from some of the preceding ones by application of a rule of inference (where as rules of inference, in essence, only the substitution rule and the rule of implication come into play, which refer merely to simple combinatorial properties of formulas). The class of provable formulas may therefore be traced back to simple arithmetical concepts”, Gödel in a letter to Zermelo, 12 October 1931 (quoted by Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 103), in Kurt Gödel (2003), Collected Works, Vol. v, p. 42.
Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 104 (my translation).
Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 104 (my translation).
“That one can not capture all of mathematics in one formal system already follows according to Cantor's diagonal procedure, but nevertheless it remains conceivable that one could at least formalize certain subsystems of mathematics completely (in the syntactic sense). My proof shows that that is also impossible if the subsystem contains at least the concepts of addition and multiplication of whole numbers”, Gödel to Zermelo (12 October 1931, quoted by Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 103), in Kurt Gödel (2003), Collected Works, Vol. v, p. 429. Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 419. “One of the interesting things Gödel did was to designate each provable theorem by a sequence of integers with a corresponding situation for remarks about the theorem. This provides a numerical algorithm for each theorem and put us in the field of numerical computation”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 174, “As an informal introduction to the motivation for Gödel numbering, suppose we have a theory which talks about positive integers. We are interested in « keeping track » of the sentences of the theory within the theory itself; this cannot be done directly since the theory talks about numbers and not about sentences. We can, however, achieve the same result indirectly by assigning a number to each sentence (the so-called « Gödel number » of the sentence) and then translating any statement about the sentences to the corresponding statement about their Gödel numbers”, Raymond M. Smullyan (1961), Theory of Formal Systems, p. 11.
“Typographical rules for manipulating numerals are actually arithmetical rules for operating on numbers. This simple observation is at the heart of Gödel's method, and it will have an absolutely shattering effect. It tells us that once we have a Gödel-numbering for any formal system, we can straightaway form a set of arithmetical rules which complete the Gödel isomorphism. The upshot is that we can transfer the study of any formal system – in fact the study of all formal systems – into number theory”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 264, “The whole point of Gödel-numbering is that it shows how, even without formalizing quotation, one can get self-reference: through a code”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 738-739.
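The idea can be made concrete with a toy Gödel numbering: assign each symbol a fixed code and encode a string of symbols as a product of prime powers. This is not Gödel's exact scheme, only a sketch of how statements about strings become statements about numbers; the symbol table below is invented for illustration.

```python
# A toy Gödel numbering: a string of symbols becomes a single natural number.
SYMBOL_CODES = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}  # illustrative

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (enough for short strings)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    """Encode a string of symbols as a product of prime powers."""
    g = 1
    for p, symbol in zip(primes(), formula):
        g *= p ** SYMBOL_CODES[symbol]
    return g

print(godel_number("S0=0+S0"))  # one number encoding the whole string
```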
Kurt Gödel (1995), “[On undecidable sentences] (*1931?)”, in Kurt Gödel, Collected Works, Volume iii, p. 33 (quoted by Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 98‑99).
“Viewed from the outside, these systems involve relationships among strings of symbols. On the inside, these systems can express propositions about various mathematical objects including natural numbers. Moreover, it isn’t difficult to think of ways that strings of symbols can be coded by natural numbers. Aha! By using such codes, the outside can be brought inside”, Martin Davis (2018), The Universal Computer, p. 95, “The crucial step in Gödel’s proof was his demonstration that the property of a natural number of being the code of a proposition provable in PM is itself expressible in PM. Using this fact, Gödel could construct propositions in PM that to one who knew the specific code being used could be seen to express the assertion that some proposition is not provable in PM. That is, he was able to construct a proposition A that, read via the encoding, asserts that some proposition B is not provable in PM”, Martin Davis (2018), The Universal Computer, p. 96, “And this is not an accidental feature of TNT; it happens because the architecture of any formal system can be mirrored inside N (number theory)”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 270. “Gödel's string G, and a Bach fugue: they both have the property that they can be understood on different levels”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 285.
Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 18.
Alan Turing, “On Computable Numbers”, p. 231.
“« On Computable Numbers » is regarded as the founding publication of the modern science of computing. It contributed vital ideas to the development, in the 1940s, of the electronic stored-programme digital computer”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 6.
“It is possible to produce the effect of a computing machine by writing down a set of rules of procedure and asking a man to carry them out. Such a combination of a man with written instructions will be called a « Paper Machine ». A man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 416. “A systematic method – sometimes also called an effective method and a mechanical method – is any mathematical method of which all the following are true: the method can, in practice or in principle, be carried out by a human computer working with paper and pencil; the method can be given to the human computer in the form of a finite number of instructions; the method demands neither insight nor ingenuity on the part of the human being carrying it out; the method will definitely work if carried out without errors; the method produces the desired result in a finite number of steps; or, if the desired result is some infinite sequence of symbols (e.g. the digital expansion of π), then the method produces each individual symbol in the sequence in some finite number of steps”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 42.
Paolo Zellini (2022), Discreto e continuo, p. 264 (my translation). “An effective procedure is a set of rules telling you, moment by moment, what to do to achieve a particular end; it is an algorithm”, Richard P. Feynman (1996), Lectures on Computation, p. 52.
Richard P. Feynman (1996), Lectures on Computation, p. 54.
“In 1936, however, the unexpected happened. Independently of each other, and almost simultaneously, Turing and Post published two articles [...] in which they arrived at the definition that would convince everyone: computable means programmable on a computer. Since the computer obviously did not yet exist, in order to give their definition Turing and Post had to invent it. And they did so using the tools that Gödel had given them in his 1931 article: in particular, arithmetization, which with hindsight turned out to be nothing other than the digitalization we hear so much about today”, Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 153 (my translation).
“In a recent paper Alonzo Church has introduced an idea of « effective calculability », which is equivalent to my « computability »”, Alan Turing, “On Computable Numbers”, p. 231. Cf. <https://plato.stanford.edu/entries/church-turing/>, “The importance of Turing's proposal is this. If the proposal is correct – i.e. if the Church-Turing thesis is true – then talk about the existence or non-existence of systematic methods can be replaced throughout mathematics and logic by talk about the existence or non-existence of Turing-machine programmes”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 43, “But further, Turing asserted that if anything could be done by an effective procedure, it could be done by his Universal machine, and vice versa”, Richard P. Feynman (1996), Lectures on Computation, p. 54-55, “For the Church-Turing Thesis is certainly one of the most important concepts in the philosophy of mathematics, brains, and thinking. Actually, like tea, the Church-Turing Thesis can be given in a variety of different strengths. [...] Church-Turing Thesis, Standard Version: Suppose there is a method which a sentient being follows in order to sort numbers into two classes. Suppose further that this method always yields an answer within a finite amount of time, and that it always gives the same answer for a given number. Then: Some terminating FlooP program (i.e., some general recursive function) exists which gives exactly the same answers as the sentient being's method does”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 561.
Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 104‑105.
Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 152. “In connection with metamathematical concepts (like being the code number of a proof), Gödel had introduced a class of functions defined on the natural numbers that he had called recursive. He chose this name because functions belonging to this class were typically defined by specifying their value for an initial input value, and then specifying how, knowing the value of the function for a given input value, to specify the value of the function for the next input value. He remarked in these lectures that the recursive functions had the important property that their values could be computed by a « finite procedure, » or as we would say, by an algorithm. He went further and suggested that the class of recursive functions could be extended to a larger class, still embodying the idea of using recursion that would include all functions defined on the natural numbers whose values could be calculated by an algorithm. And, as a step in that direction, he defined a class of functions he called « general recursive »”, Martin Davis (2018), The Universal Computer, p. 106.
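The flavour of definition by recursion that Davis describes can be shown in a few lines of Python: the value at 0 is given outright, and the value at n + 1 is obtained from the value at n. The function names are illustrative, not Gödel's notation.

```python
# Definition by recursion on the successor: value at 0, then n -> n + 1.
def add(m, n):
    return m if n == 0 else add(m, n - 1) + 1     # add(m, n+1) = add(m, n) + 1

def mul(m, n):
    return 0 if n == 0 else mul(m, n - 1) + m     # mul(m, n+1) = mul(m, n) + m

print(add(3, 4), mul(3, 4))  # 7 12
```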
“One of my conclusions was that the idea of a « rule of thumb » process and a « machine process » were synonymous”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 378, “The normal form principle for puzzles closely parallels the Church-Turing thesis, which says that given any systematic method, we can find a corresponding Turing machine that is equivalent to it”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 577.
Kurt Gödel (1986), “Postscriptum (3 June 1964)”, in Kurt Gödel, Collected Works, Volume i, p. 370, quoted by Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 154-155: “Gödel did both in the Postscriptum (1964) to his 1934 lectures, choosing to adopt Turing's and Post's formulation in terms of mechanical computation, which he found particularly convincing: A formal system is simply a mechanical procedure for producing formulas, called provable, and its essence is that reasoning is completely replaced by mechanical operations performed on formulas” (my translation).
“For instance if Gödel's theorem is to be used we need in addition to have some means of describing logical systems in terms of machines, and machines in terms of logical systems”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 450. “Gödel's famous incompleteness theorem of 1931 is, however, importantly less general than the above statement, since it concerns only one particular systematic method of proving mathematical theorems, the system set out by Whitehead and Russell in Principia Mathematica [...] Gödel did later generalize his result of 1931 to all formal systems (containing a certain amount of arithmetic), but emphasized the importance that Turing's work played in this generalization. Gödel said in 1964: [D]ue to A. M. Turing's work, a precise and unquestionably adequate definition of the general concept of formal system can now be given... Turing's work gives an analysis of the concept of « mechanical procedure » (alias « algorithm » or « computation procedure » or « finite combinatorial procedure »). ... A formal system can simply be defined to be any mechanical procedure for producing formulas, called provable formulas”, Martin Davis (ed.) (1965), The Undecidable, p. 71-73. “thanks to Turing's abstract logical work, von Neumann knew that by making use of coded instructions stored in memory, a single machine of fixed structure could in principle carry out any task for which an instruction table can be written”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 22, “These ideas are all closely related to Gödel's beautiful result whereby mathematical logics are reduced to a kind of computation theory (above, p. 173). Indeed, he showed that the basic concepts of logics are recursive, which is equivalent to saying they can be computed on a Turing machine”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 275. “The notion of formal system and mechanical operation are intimately connected; either can be defined in terms of the other. If we should first define a mechanical operation directly (e.g., in terms of Turing machines), we would then define a « formal » system as one whose set of theorems could be generated by such a machine (that is to say, the machine grinds out all the theorems, one after another, but never grinds out a non-theorem). Alternatively (following the lines of Post), we can first define a formal system directly and define an operation to be « mechanical » or « recursive » if it is computable in some formal systems”, Raymond M. Smullyan (1961), Theory of Formal Systems, p. 1.
“Gödel later generalized this result, pointing out that « due to A. M. Turing's work, a precise and unquestionably adequate definition of the general concept of formal system can now be given », with the consequence that incompleteness can « be proved rigorously for every consistent formal system containing a certain amount of finitary number theory ». The definition made possible by Turing's work is this (in Gödel's words): A formal system can simply be defined to be any mechanical procedure for producing formulas, called provable formulas”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 48. “There are a number of results of mathematical logic which can be used to show that there are limitations to the powers of discrete-state machines. The best known of these results is known as Gödel's theorem, and shows that in any sufficiently powerful logical system statements can be formulated which can neither be proved nor disproved within the system, unless possibly the system itself is inconsistent”, Alan Turing, “Computing Machinery and Intelligence”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 450.
To see a Turing machine in action, several simulators are available online, such as <https://morphett.info/turing/turing.html>.
“Turing introduced two fundamental assumptions: discreteness of time and discreteness of state of mind. To a Turing machine, time exists not as a continuum, but as a sequence of changes of state. Turing assumed a finite number of possible states at any given time. « If we admitted an infinity of states of mind, some of them will be “arbitrarily close” and will be confused, » he explained. « The restriction is not one which seriously affects computation, since the use of more complicated states of mind can be avoided by writing more symbols on the tape. » […] Each step in the relationship between tape and Turing machine is determined by an instruction table listing all possible internal states, all possible external symbols, and, for every possible combination, what to do (write or erase a symbol, move right or left, change the internal state) in the event that combination comes up. The Turing machine follows instructions and never makes mistakes. Complicated behavior does not require complicated states of mind. [...] Behavioral complexity is equivalent whether embodied in complex states of mind (m-configurations) or complex symbols (or strings of simple symbols) encoded on the tape”, George B. Dyson (2012), Turing’s Cathedral, p. 248.
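A minimal simulator of the kind of instruction table described above can be written in a few lines; the sample table, which merely appends a 1 to a block of 1s, is invented for illustration (the online simulators cited in the previous note offer richer examples).

```python
# A minimal Turing-machine simulator: for every (state, symbol) pair the
# table says what to write, where to move, and which state to enter next.
def run(table, tape, state="start", steps=100):
    cells = dict(enumerate(tape))   # sparse tape; "_" is the blank symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

TABLE = {
    ("start", "1"): ("1", "R", "start"),   # scan right over the 1s
    ("start", "_"): ("1", "R", "halt"),    # write one more 1, then halt
}
print(run(TABLE, "111"))  # 1111
```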
“We shall have a description of the machine in the form of an Arabic numeral. The integer represented by this numeral may be called a description number (D.N) of the machine. The D.N determine the S.D and the structure of the machine uniquely”, Alan Turing, “On Computable Numbers”, p. 240-241.
Alan Turing, “On Computable Numbers”, p. 241-242, “When we have decided what machine we wish to imitate we punch a description of it on the tape of the universal machine. This description explains what the machine would do in every configuration in which it might find itself. The universal machine has only to keep looking at this description in order to find out what it should do at each stage. Thus the complexity of the machine to be imitated is concentrated in the tape and does not appear in the universal machine proper in any way. If we take the properties of the universal machine in combination with the fact that machine processes and rule of thumb processes are synonymous we may say that the universal machine is one which, when supplied with the appropriate instructions, can be made to do any rule of thumb process”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 383, “A digital computer is a universal machine in the sense that it can be made to replace any machine of a certain very wide class. […] It will replace any rival design of calculating machine, that is to say any machine into which one can fit data and which will later print out results. In order to arrange for our computer to imitate a given machine it is only necessary to programme the computer to calculate what the machine in question would do under given circumstances, and in particular what answers it would print out. The computer can then be made to print out the same answers. […] It should be noticed that there is no need for there to be any increase in the complexity of the computer used. [...] this may appear paradoxical, but the explanation is not difficult. The imitation of a machine by a computer requires not only that we should have made the computer, but that we should have programmed it appropriately. The more complicated the machine to be imitated the more complicated must the programme be”, Alan Turing, “Can Digital Computers Think?”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 482-483, “The UTM's [Universal Turing Machine] internal program then takes this information and mimics the action of the original machine. [...] what is impressive about that UTM is that all we have to do is give it a list of quintuplets and some initial data”, Richard P. Feynman (1996), Lectures on Computation, p. 68.
“« On Computable Numbers » is the birthplace of the fundamental principle of the modern computer, the idea of controlling the machine's operations by means of a programme of coded instructions stored in the computer's memory”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 6, “operating in accordance with this table of instructions, the universal machine is able to carry out any tasks for which an instruction table can be written. The trick is to put an instruction table–programme–for carrying out the desired task onto the tape of the universal machine. […] Turing's greatest contributions to the development of the modern computer were: the idea of controlling the function of a Computing Machine by storing a programme of symbolically encoded instructions in the machine’s memory. His demonstration (in section 7 of « On computable numbers ») that, by this means, a single machine of fixed structure is able to carry out every computation that can be carried out by any Turing machine whatsoever, i.e. is universal”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 15.
“The difference is that in the Universal Turing machine, but not the Analytical Engine, there is no fundamental distinction between programme and data. It is the absence of such a distinction that marks off a stored-programme computer from a programme-controlled computer. As Gandy put the point, Turing’s ‘universal machine is a stored-program machine [in that], unlike Babbage’s all-purpose machine, the mechanisms used in reading a program are of the same kind as those used in executing it’”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 30.
“The plugged pattern can be changed from one problem to another, but–at least in the simplest arrangement–it is fixed for the entire duration of a problem”, John von Neumann (2012), The Computer & the Brain, p. 16-17.
“Note the important difference between this mode of control and the plugged one, described earlier: there the control sequence points were real, physical objects, and their plugged connections expressed the problem. Now the orders are ideal entities, stored in the memory, and it is thus the contents of this particular segment of the memory that express the problem. Accordingly, this mode of control is called « memory-stored control. »”, John von Neumann (2012), The Computer & the Brain, p. 19-20.
“Before Turing, the general supposition was that in dealing with such machines the three categories – machine, program, and data – were entirely separate entities. The machine was a physical object; today we would call it hardware. The program was the plan for doing a computation, perhaps embodied in punched cards or connections of cables in a plugboard. Finally, the data was the numerical input. Turing’s universal machine showed that the distinctness of these three categories is an illusion. A Turing machine is initially envisioned as a machine with mechanical parts, hardware. But its code on the tape of the universal machine functions as a program, detailing the instructions to the universal machine needed for the appropriate computation to be carried out. Finally, the universal machine in its step-by-step actions sees the digits of a machine code as just more data to be worked on”, Martin Davis (2018), The Universal Computer, p. 143.
“What we want is a machine that can learn from experience. The possibility of letting the machine alter its own instructions provides the mechanism for this”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 393, “Since the orders that exercise the entire control are in the memory, a higher degree of flexibility is achieved [...] Indeed, the machine, under the control of its orders, can extract numbers (or orders) from the memory, process them (as numbers!), and return them to the memory (to the same or to other locations); i.e. it can change the contents of the memory […] Hence it can, in particular, change the orders (since these are in the memory!)–the very orders that control its actions. Thus all sorts of sophisticated order-systems become possible, which keep successively modifying themselves and hence also the computational processes that are likewise under their control. In this way more complex processes than mere iterations become possible”, John von Neumann (2012), The Computer & the Brain, p. 20, “One of the most important reasons for storing instructions in the memory is the fact that one needs to modify them. The most obvious such modification is to change the address in an instruction. Only in this way can a sub-routine be useful in many different parts of a problem or in many different problems. […] This ability to modify the addresses of instructions is not merely aesthetically elegant, it is absolutely fundamental”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 265.
“This is a very simple idea, but is of the utmost importance. The idea of the iterative cycle of instruction”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 389, “The great power of the computer lies in its ability to iterate repeatedly the same short description of a basic mathematical process”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 343. “The functions in Gödel’s original class of recursive functions (renamed primitive recursive functions by Kleene) were built by a succession of such recursive definitions”, Martin Davis (2018), The Universal Computer, p. 201, n. 27, “It is the residual that is the true engine of the algorithm, because at each step it triggers the series of operations needed to compute an ever more accurate approximation of the solution. The residual is therefore, together with the structure of the iteration, the true regulating element of the whole procedure. It also contains the information needed to stop the computation at the required level of precision, according to a scheme that does not differ much from the principle of feedback, in which Norbert Wiener would see a possible ground of affinity between living organisms and artificial mechanisms”, Paolo Zellini (2022), Discreto e continuo, p. 212 (my translation).
“The fundamental point of Turing's analysis has to do with infinite sequences of binary digits”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 274.
Alan Turing, “On Computable Numbers”, p. 230. “Just as any set of typographical rules generates a set of theorems, a corresponding set of natural numbers will be generated by repeated applications of arithmetical rules. These producible numbers play the same role inside number theory as theorems do inside any formal system. Of course, different numbers will be producible, depending on which rules are adopted. « Producible numbers » are only producible relative to a system of arithmetical rules. [...] Note that the producible numbers (in any given system) are defined by a recursive method: given numbers which are known to be producible, we have rules telling how to make more producible numbers”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 264.
Cf. Martin Davis (2018), The Universal Computer, p. 140-141.
Martin Davis (2018), The Universal Computer, p. 141. “We are now in a position to show that the Entscheidungsproblem cannot be solved. Let us suppose the contrary. Then there is a general (mechanical) process for determining whether Un(M) is provable. By Lemmas 1 and 2, this implies that there is a process for determining whether M ever prints 0, and this is impossible, by §8. Hence the Entscheidungsproblem cannot be solved”, Alan Turing, “On Computable Numbers”, p. 262.
“Turing was able to construct, by a method similar to Gödel's, functions that could be given a finite description but could not be computed by finite means. One of these was the halting function: given the number of a Turing machine and the number of an input tape, it returns either the value 0 or the value 1 depending on whether the computation will ever come to a halt. Turing calls the configurations that halt « circular » and the configurations that keep going indefinitely « circle free, » and demonstrated that the unsolvability of the halting problem implies the unsolvability of a broad class of similar problems, including the Entscheidungsproblem. Contrary to Hilbert's expectations, no mechanical procedure can be counted on to determine the provability of any given mathematical statement in a finite number of steps”, George B. Dyson (2012), Turing’s Cathedral, p. 248-249, “Turing’s method makes use of his proof that no computing machine can solve the printing problem. He showed that if a Turing machine could tell, of any given statement, whether or not the statement is provable in FOPC, then a Turing machine could tell, of any given Turing machine, whether or not it ever prints ‘0’. Since, as he had already established, no Turing machine can do the latter, it follows that no Turing machine can do the former. The final step of the argument is to apply Turing's thesis: if no Turing machine can perform the task in question, then there is no systematic method for performing it”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 52.
“The fact of the matter is that there is no systematic method of testing puzzles to see whether they are solvable or not. […] But it is not merely that the test has never been found. It has been proved that no such tests ever can be found”, Alan Turing, “Solvable and Unsolvable Problems”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 582.
Alan Turing, “On Computable Numbers”, p. 241. Cf. “We are always able to obtain from the rules of a formal logic a method of enumerating the propositions proved by its means. We then imagine that all proofs take the form of a search through this enumeration for the theorem for which a proof is desired. In this way ingenuity is replaced by patience”, Alan Turing, “Systems of Logic Based on Ordinals”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 193.
Piergiorgio Odifreddi (2021), Il dio della logica. Vita geniale di Kurt Gödel, matematico della filosofia, p. 146.
“So there are some computational problems (e.g. determining whether a UTM will halt) that cannot be solved by any Turing machine. This is Turing's main result”, Richard P. Feynman (1996), Lectures on Computation, p. 88.
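The diagonal argument behind this result can be sketched in a few lines of Python. Given any candidate halting decider, the construction below produces a program on which the decider must be wrong; the stub decider shown is only a placeholder, since the point is precisely that no correct one can exist.

```python
# The classical reductio: any candidate "halts" decider can be defeated.
def candidate_halts(program, data):
    return True   # a placeholder decider; any candidate fails the same way

def defeat(halts):
    """Build a program that does the opposite of what the decider predicts."""
    def paradox(program):
        if halts(program, program):
            while True:        # loop forever if the decider predicts halting
                pass
        return "halted"        # halt at once if it predicts looping
    return paradox

paradox = defeat(candidate_halts)
# The candidate claims paradox halts on itself, yet paradox(paradox) would
# then loop forever; a decider answering False would be wrong the other way.
print(candidate_halts(paradox, paradox))  # True, but the prediction is wrong
```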
Initially a follower of Hilbert’s program, von Neumann shifted from pure to applied mathematics after Gödel’s results and went on to accomplish extraordinary feats. Beyond his contributions to computing and mathematical physics, he created (based on an idea of Stan Ulam) the Monte Carlo method, developed game theory and the theory of economic behavior with Oskar Morgenstern, and established the theoretical foundations of self-replication (biological, mechanical, and digital), predating the discovery of DNA’s function. Cf. “Viewing the problem of self-replication and self-reproduction through the lens of formal logic and self-referential systems, von Neumann applied the results of Gödel and Turing to the foundations of biology”, George B. Dyson (2012), Turing’s Cathedral, p. 285.
“Julian Bigelow, von Neumann’s chief engineer, recollected: The person who really... pushed the whole field ahead was von Neumann, because he understood logically what [the stored-programme concept] meant in a deeper way than anybody else...The reason he understood it is because, among other things, he understood a good deal of the mathematical logic which was implied by the idea, due to the work of A. M. Turing”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 23.
“The machine described in the paper (variously known as the IAS, or Princeton, or von Neumann machine) was constructed and copied (never exactly), and the copies copied…”, Paul Armer quoted by, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 256.
John von Neumann (1945), First Draft of a Report on the EDVAC, Contract No. W-670-ORD-4926 between the U.S. Army Ordnance Department and the University of Pennsylvania, Moore School of Electrical Engineering, University of Pennsylvania, June 30, p. 7-10.
Jack B. Copeland (ed.) (2004), The Essential Turing, p. 27.
“« Random access » meant that all individual memory locations – collectively constituting the machine's internal « state of mind » – were equally accessible at any time. « High speed » meant that the memory was accessible at the speed of light”, George B. Dyson (2012), Turing’s Cathedral, p. 5.
George B. Dyson (2012), Turing’s Cathedral, p. x.
However, hybrid machines exist, and a digital machine can always simulate the behavior of an analog machine to any desired degree of accuracy.
“In an analog machine each number is represented by a suitable physical quantity, whose value, measured in some pre-assigned unit, is equal to the number in question”, John von Neumann (2012), The Computer & the Brain, p. 3, “In a decimal digital machine each number is represented in the same way as in conventional writing or printing, i.e. as a sequence of decimal digits”, John von Neumann (2012), The Computer & the Brain, p. 6, “We may call a machine « discrete » when it is natural to describe its possible states as a discrete set, the motion of the machine occurring by jumping from one state to another. The states of « continuous » machinery on the other hand form a continuous manifold, and the behaviour of the machine is described by a curve on this manifold”, Alan Turing, “Intelligent Machinery”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 412, “Machines of this class are generically described as digital or arithmetical. The former name clearly calls attention to the quantities employed and the latter to the processes performed on these quantities. The premier device in this category is the abacus, the simplest form of digital computer still used in many places throughout the world. Analog machines are very different. They are often described as being continuous or measurement. They are rather difficult to explain and describe. Here the former name again calls attention to the quantities employed and the latter to the processes performed on them. In all cases analog machines depend upon the representation of numbers as physical quantities such as lengths of rods, direct current voltages, etc. Usually they are developed for a fairly specific purpose. […] The designer of an analog device decides what operations he wishes to perform and then seeks a physical apparatus whose laws of operation are analogous to those he wishes to carry out. He next builds the apparatus and solves his problem by measuring the physical, and hence continuous, quantities involved in the apparatus”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 39-40, “Given a mathematical formula, they were, in principle at least, able to invent a machine exactly describable by the formula. This is what the analog computer is. […] [the digital approach] is the realization that a machine can be built to imitate the human method of calculating: to count and to build up the elementary operations – addition, subtraction, multiplication, division – by counting”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 140-142.
“That the machine is digital however has more subtle significance. It means firstly that numbers are represented by sequences of digits which can be as long as one wishes. One can therefore work to any desired degree of accuracy”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 378. “In contrast to the physical processes that mediate this process in analog machines, in this case rules of strict and logical character control this operation”, John von Neumann (2012), The Computer & the Brain, p. 9. “Differential analyzers were not « digital » devices operating on numbers digit by digit. Rather numbers were represented by physical quantities that could be measured (like electric currents or voltages) and components were linked together to emulate the desired mathematical operations. These « analog » machines were limited in their accuracy by that of the instruments used for the measurements. The ENIAC was a digital device, the first electronic machine able to deal with the same kind of mathematical problems as differential analyzers. Its designers built it of components functionally similar to those in differential analyzers, relying on the capabilities of vacuum tube electronics for greater speed and accuracy”, Martin Davis (ed.) (1965), The Undecidable, p. 157.
It is challenging to succinctly and precisely define what a number is. Put simply, a number is an arithmetic value representing numerosity or quantity, used in counting and calculation. Numbers can be represented (denoted) by numerals, either as words (“ten”) or as digits (“10”), which are graphical signs. Initially, numbers were perceived as properties of collections or sets of objects, but over time, the concept evolved to become increasingly abstract.
“According to some experts in cognitive archaeology, such as Karenleigh Overmann (among others), these fingers are traces of numbers: they are sequential, they are intentional, they are counted. […] Fingers and numbers naturally go well together. They involve the parietal lobes, which integrate tactile, visual and spatial sensations, the play of the fingers between time and space. It is no coincidence that fingers are the first thing children use to count. It is literally the manipulation of numbers”, Silvia Ferrara (2021), Il salto, p. 27-28 (my translation).
George B. Dyson (2012), Turing’s Cathedral, p. 250.
“bits can represent words, pictures, sounds, music, and movies as well as product codes, film speeds, movie ratings, an invasion of the British army, and the intentions of one’s beloved. But most fundamentally, bits are numbers. All that needs to be done when bits represent other information is to count the number of possibilities. This determines the number of bits that are needed so that each possibility can be assigned a number”, Charles Petzold (2000), Code. The Hidden Language of Computer Hardware and Software, Washington, Microsoft Press., p. 85, “An arbitrary Turing machine T will come with an arbitrary set of possible symbols, but with thought you should be able to see that we can always label the distinct symbols by binary numbers and work with this”, Richard P. Feynman (1996), Lectures on Computation, p. 82.
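The counting rule in the Petzold passage above can be stated as a one-line computation: to give each of N possibilities its own number you need about log₂(N) bits. The example values below are arbitrary.

```python
# How many bits are needed to assign each of N possibilities a number?
from math import ceil, log2

def bits_needed(possibilities):
    return max(1, ceil(log2(possibilities)))

print(bits_needed(2), bits_needed(16), bits_needed(1024))  # 1 4 10
```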
“It is customary to use the symbols « 0 » and « 1 » as the names of the two states, but any two distinct symbols (marks), such as a circle and a cross, will do. It is often useful to think of the 0 and 1 as only a pair of arbitrary symbols, not as numbers”, Richard W. Hamming (1986), Coding and Information Theory, Upper Saddle River, Prentice‑Hall., p. 7.
Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 384-385, “There are several reasons for this choice [of the binary system], the outstanding ones of which are these: the greater simplicity and speed with which the elementary operation can be performed; the fact that electronic circuitry and technology tends to be binary in character; and the fact that the control portions of a computer are not arithmetical but rather logical in nature – logics is a binary system”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 260, “Binary arithmetic has a simpler and more one-piece logical structure than any other, particularly than the decimal one”, John von Neumann (1945), First Draft of a Report on the EDVAC, p. 6. The correspondence between binary arithmetic, logic, and electricity is beautifully explained by Petzold.
Cf. “Hence, instead of determining the measure of formal agreement of the symbols of Logic with those of Number generally, it is more immediately suggested to us to compare them with symbols of quantity admitting only of the values 0 and 1. Let us conceive, then, of an Algebra in which the symbols x, y, z, &c., admit indifferently of the values 0 and 1, and of these values alone. The laws, the axioms, and the processes, of such an Algebra will be identical in their whole extent with the laws, the axioms, and the processes of an Algebra of Logic. Difference of interpretation will alone divide them”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 37-38, “This turns our abstract mathematical problem into a matter of real world « mechanics »”, Richard P. Feynman (1996), Lectures on Computation, p. 20.
“The power of mathematics in applications usually lies in revealing similarities or even identities between previously unknown material and well-established material. [...] this then made knowledge about heat immediately transferable to electric cables”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 86.
“To enable the machine to compute, i.e. to operate on these numbers according to a predetermined plan, it is necessary to provide organs (or components) that can perform on these representative quantities the basic operations of mathematics”, John von Neumann (2012), The Computer & the Brain, p. 3.
“[The basic arithmetical operations] all are patterns of alternative actions, organized in highly repetitive sequences, and governed by strict and logical rules”, John von Neumann (2012), The Computer & the Brain, p. 10, “In other words, the « arithmetical depth » of the necessary operations is usually quite great. Note that the « logical depth » is still greater, and by a considerable factor – that is, if, e.g. the four species of arithmetic are broken down into the underlying logical steps (cf. above), each one of them is a long logical chain by itself”, John von Neumann (2012), The Computer & the Brain, p. 27, “Note that the first‑mentioned logical operations [sense coincidences, combine stimuli, and possibly sense anticoincidences] are the elements from which the arithmetical ones are built up”, John von Neumann (2012), The Computer & the Brain, p. 30.
“Reducing logical reasoning to formal rules is an endeavor going back to Aristotle. It was the underlying basis for Leibniz’s dream of a universal computational language. And it underlay Turing’s achievement in showing that all computation could be carried out on his universal machines. Computation and logical reasoning are indeed two sides of the same coin. This insight is used not only to make it possible to program computers to perform a bewildering variety of tasks, but indeed in the very way that computers are designed and built”, Martin Davis (2018), The Universal Computer, p. 168.
“The logical nature of the digital sum becomes even clearer when the binary (rather than decimal) system is used. […] Multiplication: the primarily logical character is even more obvious – and the structure more involved”, John von Neumann (2012), The Computer & the Brain, p. 9, Richard P. Feynman (1996), Lectures on Computation, p. 6. Arithmetic multiplication corresponds to the logical connective and, where the result is true (1) only when both terms are true: 0 × 0 = 0, 0 × 1 = 0, 1 × 0 = 0, 1 × 1 = 1.
A logic gate is a fundamental component of digital circuits that performs a basic logical operation on one or more binary inputs and produces a single binary output.
“All of these gates are examples of « switching functions », which take as input some binary-valued variables and compute some binary function. Claude Shannon was the first to apply the rules of Boolean algebra to switching networks in his MIT Master's thesis in 1937. Such switching functions can be implemented electronically with basic circuits called, appropriately enough, « gates ». The presence of an electrical signal on a wire is a « 1 » (or « true »), the absence a « 0 » (or « false »). […] The simplest operation of all is an « identity » or « do-nothing » operation. This is just a wire coming into a box and then out again, with the same signal on it. [...] The next simplest, namely, a box which « negates » the incoming signal. If the input is a 1, then the output will be 0, and vice versa”, Richard P. Feynman (1996), Lectures on Computation, p. 23-24.
“That syllogism, conversion, &c., are not the ultimate processes of Logic. It will be shown in this treatise that they are founded upon, and are resolvable into, ulterior and more simple processes which constitute the real elements of method in Logic”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 10. “The active organs are the following. First, organs which perform the basic logical actions: sense coincidences, combine stimuli, and possibly sense anticoincidences (no more than this is necessary, although sometimes organs for more complex logical operations are also provided)”, John von Neumann (2012), The Computer & the Brain, p. 29-30, “« And » and « or » are the basic operations of logic. Together with « no » (the logical operation of negation) they are a complete set of basic logical operations–all other logical operations, no matter how complex, can be obtained by suitable combinations of these”, John von Neumann (2012), The Computer & the Brain, p. 54, “Now I've been very happy to say that with a so-called « complete set » of operators, you can do anything, that is, build any logical function”, Richard P. Feynman (1996), Lectures on Computation, p. 40, “There are two operations in this system which we may call + and x, or we may say or and and. It is most fortunate for us that all logics can be comprehended in so simple a system, since otherwise the automation of computation would probably not have occurred – or at least not when it did”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 37-38. “One of the nice games you can play with logic gates is trying to find out which is the best set to use for a specific purpose, and how to express other operators in terms of this best set. [...] Suffice it to say that the set AND, OR and NOT is complete; with these operators, one can build absolutely any switching function. To tempt you to go further with all this cute stuff, I will note that there exist single operators that are complete!”, Richard P. Feynman (1996), Lectures on Computation, p. 25.
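As a small illustration of how the arithmetical operations are “built up” from the logical ones, here is a one-bit full adder written only in terms of AND, OR and NOT (XOR being itself composed from them). The Python function names are illustrative.

```python
# A one-bit full adder composed from the complete set {AND, OR, NOT}.
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum bit, carry-out bit)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

print(full_adder(1, 1, 0))  # (0, 1): 1 + 1 = 10 in binary
```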
“Beyond the capability to execute the basic operation singly, a computing machine must be able to perform them according to the sequence – or rather, the logical pattern – in which they generate the solution of the mathematical problem that is the actual purpose of the calculation in hand”, John von Neumann (2012), The Computer & the Brain, p. 11.
“Any computing machine that is to solve a complex mathematical problem must be « programmed » for this task. This means that the complex operation of solving that problem must be replaced by a combination of the basic operations of the machine. Frequently it means something even more subtle: approximation of that operation ‒ to any desired (prescribed) degree ‒ by such combinations”, John von Neumann (2012), The Computer & the Brain, p. 5.
“Probably the most important idea involved in instruction tables is that of standard subsidiary tables. Certain processes are used repeatedly in all sorts of different connections, and we wish to use the same instructions, from the same part of the memory every time. Thus we may use interpolation for the calculation of a great number of different functions, but we shall always use the same instruction table for interpolation. We have only to think out how this is to be done once, and forget then how it is done. Each time we want to do an interpolation we have only to remember the memory position where this table is kept, and make the appropriate reference in the instruction table which is using the interpolation”, Alan Turing, “Lecture on the Automatic Computing Engine”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 389. “We call the coded sequence of a problem a routine and one which is formed with the purpose of possible substitution into other routines, a subroutine”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 336.
Richard P. Feynman (1996), Lectures on Computation, p. 14, “For example, one of the instructions was « put the contents of memory M into register A ». The computer doesn't speak English, so we have to encode this command into a form it can understand; in other words, into a binary string. This is the opcode, or instruction number, and its length clearly determines how many different instructions we can have. If the op code is a four-digit binary number, then we can have 2⁴ = 16 different instructions [...] The second part of the instruction is the instruction address, which tells the computer where to go to find what it has to load into A; that is, memory address M”, Richard P. Feynman (1996), Lectures on Computation, p. 14, “Orders for CC to instruct CA to carry out one of its ten specific operations enumerated in 11.4. [...] We designate this operation by the numbers 0, 1, 2, ..., 9 [...] and thereby place ourselves in the position to refer to any one of them by its number”, John von Neumann (1945), First Draft of a Report on the EDVAC, p. 40.
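A toy version of the opcode-plus-address format that Feynman describes: a 4-bit operation number packed together with a memory address into a single binary word. The mnemonics, field widths, and opcode values below are invented for illustration, not those of any actual machine.

```python
# Encode and decode a toy instruction word: 4-bit opcode + 12-bit address.
OPCODES = {"LOAD_A": 0b0001, "STORE_A": 0b0010, "ADD": 0b0011}  # illustrative
ADDRESS_BITS = 12

def encode(mnemonic, address):
    """Pack an instruction as (opcode << ADDRESS_BITS) | address."""
    return (OPCODES[mnemonic] << ADDRESS_BITS) | address

def decode(word):
    """Split a word back into (opcode, address)."""
    return word >> ADDRESS_BITS, word & ((1 << ADDRESS_BITS) - 1)

word = encode("LOAD_A", 42)        # "put the contents of memory 42 into A"
print(bin(word), decode(word))     # 0b1000000101010 (1, 42)
```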
“The architectural principle that a pair of 5-bit coordinates (2⁵ = 32) uniquely identified one of the 1,024 memory locations containing a string (or « word ») of 40 bits. In 24 microseconds, any specified 40-bit string of code could be retrieved”, George B. Dyson (2012), Turing’s Cathedral, p. 6.
“Obtain a piece of information almost immediately by « dialling » the position of this information in the store”, Alan Turing, “Intelligent Machinery”, Jack B. Copeland (ed.) (2004), The Essential Turing, p. 415, “all existing machines and memories use « direct addressing, » which is to say that every word in the memory has a numerical address of its own that characterizes it and its position within the memory (the total aggregate of all hierarchic levels) uniquely. This numerical address is always explicitly specified when the memory word is to be read or written. […] there is never any ambiguity about the address, and the place is designated”, John von Neumann (2012), The Computer & the Brain, p. 37-38, “Accomplishment of the desired time-sequential process on a given computing apparatus turns out to be largely a matter of specifying sequences of addresses of items which are to interact”, Bigelow quoted in George B. Dyson (2012), Turing’s Cathedral, p. 275, “The 40 Selectron tubes constituted a 32-by-32-by-40-bit matrix containing 1,024 40-bit strings of code, with each string assigned a unique identity number, or numerical address, in a manner reminiscent of how Gödel had assigned what are now called Gödel numbers to logical statements in 1931. By manipulating the 10-bit addresses, it was possible to manipulate the underlying 40-bit strings–containing any desired combination of data, instructions, or additional addresses, all modifiable by the progress of the program being executed at the time”, George B. Dyson (2012), Turing’s Cathedral, p. 105-106.
“They took us by the hand and explained how numbers could live in houses with addresses...”, Frederic C. Williams, “Early Computers at Manchester University”, p. 328 quoted by Jack B. Copeland (ed.) (2004), The Essential Turing, p. 371, “Every word in memory has a distinct location, like a house on a street; and its location is called its address. [...] Hence the « pointer » part of an instruction is the numerical address of some word(s) in memory. There are no restrictions on the pointer, so an instruction may even « point » at itself, so that when it is executed, it causes a change in itself to be made”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 289.
“An order must indicate which basic operation is to be performed, from which memory registers the inputs of that operation are to come, and to which memory register its output is to go. [...] Note that this presupposed that all memory registers are numbered serially – the number of a memory register is called its « address. » [...] It is convenient to number the basic operations, too. Then an order simply contains the number of its operations and the addresses of the memory registers referred to above, as a sequence of decimal digits (in a fixed order)”, John von Neumann (2012), The Computer & the Brain, p. 18, “an order is usually « physically » the same thing as a number [...] each order is stored in the memory, in a definite memory register, that is to say, at a definite address”, John von Neumann (2012), The Computer & the Brain, p. 19.
“Now it could be objected here that a coded message, unlike an uncoded message, does not express anything on its own – it requires knowledge of the code. But in reality there is no such thing as an uncoded message. There are only messages written in more familiar codes, and messages written in less familiar codes. If the meaning of a message is to be revealed, it must be pulled out of the code by some sort of mechanism, or isomorphism. It may be difficult to discover the method by which the decoding should be done; but once that method has been discovered, the message becomes transparent as water. When a code is familiar enough, it ceases appearing like a code; one forgets that there is a decoding mechanism. The message is identified with its meaning”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 267.
“Whether we regard signs as the representatives of things and of their relations, or as the representatives of the conceptions and operations of the human intellect, in studying the laws of signs, we are in effect studying the manifested laws of reasoning. […] The elements of which all language consists are signs or symbols. Words are signs”, George Boole (1854), An Investigation of the Laws of Thought, on which are founded The Mathematical Theories of Logic and Probabilities, p. 24-25.
“Un codice è un sistema di significazione che accoppia entità presenti a entità assenti”, Umberto Eco (1975), Trattato di semiotica generale, p. 19, “Proponiamo quindi di definire come segno tutto ciò che, sulla base di una convenzione sociale previamente accettata, possa essere inteso come qualcosa che sta al posto di qualcos'altro”, Umberto Eco (1975), Trattato di semiotica generale, p. 27, “vi sono precise convenzioni in base alle quali certe espressioni grafiche hanno un significato e quindi veicolano una porzione di contenuto”, Umberto Eco (1975), Trattato di semiotica generale, p. 250.
“Occorre dunque (come è stato fatto) concepire il codice come una doppia entità che stabilisce da un lato correlazioni semantiche e dall'altro regole di combinabilità sintattica”, Umberto Eco (1975), Trattato di semiotica generale, p. 130, “We use the word « all » in a few ways which are defined by the thought processes of reasoning. That is, there are rules which our usage of « all » obeys. We may be unconscious of them, and tend to claim we operate on the basis of the meaning of the word; but that, after all, is only a circumlocution for saying that we are guided by rules which we never make explicit. We have used words all our lives in certain patterns, and instead of calling the patterns « rules », we attribute the courses of our thought processes to the « meanings » of words. That discovery was a crucial recognition in the long path towards the formalization of number theory”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 60.
“Information theory is usually thought of as « sending information from here to there » (transmission of information), but this is exactly the same as « sending information from now to then » (storage of information). Both situations occur constantly when handling information. Clearly, the encoding of information for efficient storage as well as reliable recovery in the presence of « noise » is essential in computer science”, Richard W. Hamming (1986), Coding and Information Theory, p. xi.
“Although the text uses the colorful words « information, » « transmission, » and « coding, » a close examination will reveal that all that is actually assumed is an information source of symbols s1, s2, ..., sq.”; Richard W. Hamming (1986), Coding and Information Theory, p. 1, “Logically speaking, coding theory leads to information theory, and information theory provides bounds on what can be done by suitable encoding of the information. Thus the two theories are intimately related”, Richard W. Hamming (1986), Coding and Information Theory, p. 4.
Claude E. Shannon (1948), “A Mathematical Theory of Communication”, The Bell System Technical Journal, 27 (3), p. 379-423. DOI: https://doi.org/10.1002/j.1538-7305.1948.tb01338.x [consulted on 06/12/2024]. “« Any difference that makes a difference » is how cybernetician Gregory Bateson translated Shannon's definition into informal terms. To a digital computer, the only difference that makes a difference is the difference between a zero and a one”, George B. Dyson (2012), Turing’s Cathedral, p. 3.
Charles Petzold (2000), Code, p. 70.
“The two main problems of representation are the following. 1. Channel encoding: How to represent the source symbols so that their representations are far apart in some suitable sense. As a result, in spite of small changes (noise) in their representations, the altered symbols can, at the receiving end, be discovered to be wrong and even possibly corrected. This is sometimes called « feed forward » error control. 2. Source encoding: How to represent the source symbols in a minimal form for purpose of efficiency”, Richard W. Hamming (1986), Coding and Information Theory, p. 2, “The central idea of error detection and correction is that the meaningful messages must be kept far apart (in the space of probable errors) if we are to handle errors successfully. If two of the possible messages are not far enough apart, one message can be carried by an error (or errors) into the other, or carried at least so close that at the receiving end we will make a mistake in identifying the source”, Richard W. Hamming (1986), Coding and Information Theory, p. 49, “using longer-than-minimum names is probably a very wise idea – but it is unlikely that this computation will convince many people to do so! Minimal-length names are the source of much needless confusion and waste of time, yours and the machine's. […] We have given the fundamental nature of error detection and error correction for white noise, namely the minimum distance between message points that must be observed”, Richard W. Hamming (1986), Coding and Information Theory, p. 50.
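Hamming's « minimum distance between message points » can be computed directly; a small sketch (the code words below are invented for illustration) in which the minimum pairwise Hamming distance d guarantees that up to d − 1 bit errors are detectable:

    from itertools import combinations

    def hamming_distance(a: str, b: str) -> int:
        # number of positions at which two equal-length bit strings differ
        return sum(x != y for x, y in zip(a, b))

    codewords = ["0000000", "1110100", "0111010", "1001110"]   # hypothetical code
    d_min = min(hamming_distance(a, b) for a, b in combinations(codewords, 2))
    print("minimum distance:", d_min, "- errors detectable:", d_min - 1)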
“Try and decode this message: 011010110. You can't do it! At least, not uniquely. You do not know whether it is 01-1-01-01-10 or 011-01-01-10 or 01-101-01-10 or another possibility. There is an ambiguity due to the fact that the symbols can run into each other. A good, uniquely decodable symbol choice is necessary to avoid this”, Richard P. Feynman (1996), Lectures on Computation, p. 127.
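The ambiguity disappears once no code word is a prefix of another; a sketch (the symbols and code words are invented) of greedy decoding with a prefix-free code:

    # Prefix-free code: no code word begins another, so decoding is unambiguous.
    code = {"0": "a", "10": "b", "110": "c", "111": "d"}

    def decode(bits: str) -> str:
        out, buffer = [], ""
        for bit in bits:
            buffer += bit
            if buffer in code:          # a complete code word has been read
                out.append(code[buffer])
                buffer = ""
        if buffer:
            raise ValueError("trailing bits do not form a code word")
        return "".join(out)

    print(decode("0101100"))            # -> abca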
“l'informazione nel senso (a,i) [teoria matematica dell'informazione come una teoria strutturale delle proprietà statistiche della fonte] non è tanto quello che « viene detto » quanto quel che « può essere » detto. Rappresenta la libertà di scelta disponibile per la possibile selezione di un evento e quindi è una proprietà statistica della fonte”, Umberto Eco (1975), Trattato di semiotica generale, p. 64.
“In a sense, the amount of information in a message reflects how much surprise we feel at receiving it. [...] In this respect, information is as much a property of your own knowledge as anything in the message. To clarify this point, consider someone sending you two duplicate messages: a message, then a copy. Every time you receive a communication from him, you get it twice. [...] We might say, well, the information in the two messages must be the sum of that in each [...] But this would be wrong. There is still only one message, the first, and information only comes from this first half. This illustrates how « information » is not simply a physical property of a message: it is a property of the message and your knowledge about it”, Richard P. Feynman (1996), Lectures on Computation, p. 119, “Shannon defined the information in a message to be the base two logarithm of the probability of that message appearing. Note how this ties in with our notion of information as « surprise »: the less likely the message to appear, the greater the information it carries”, Richard P. Feynman (1996), Lectures on Computation, p. 122. Cf. Umberto Eco (1975), Trattato di semiotica generale, p. 196-197.
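Shannon's measure of surprise can be written out directly: in the standard formulation, a message of probability p carries log2(1/p) = −log2(p) bits, so rarer messages carry more information and a duplicate of an already received message adds none. A minimal sketch:

    import math

    def information_bits(p: float) -> float:
        # Shannon self-information: the base-2 logarithm of the inverse probability
        return -math.log2(p)

    for p in (0.5, 0.25, 0.01):
        print(p, round(information_bits(p), 2))   # 1.0, 2.0 and about 6.64 bits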
“Whenever we talk about bits, we often talk about a certain number of bits. The more bits we have, the greater the number of different possibilities we can convey”, Charles Petzold (2000), Code, p. 75.
Cf. “Each decimal digit, in turn, is represented by a system of « markers ». […] A marker which can appear in ten different forms suffices by itself to represent a decimal digit. A marker which can appear in two different forms only will have to be used so that each decimal digit corresponds to a whole group. (A group of three two-valued markers allow 8 combinations; this is inadequate. A group of four such markers allows 16 combinations; this is more than adequate. Hence, groups of at least four markers must be used per decimal digit)”, John von Neumann (2012), The Computer & the Brain, p. 6.
“The essential concept here is that information represents a choice among two or more possibilities. For example, when we talk to another person, every word we speak is a choice among all the words in the dictionary. If we numbered all the words in the dictionary from 1 through 351,482, we could just as accurately carry on conversations using the numbers rather than words”, Charles Petzold (2000), Code, p. 72.
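Petzold's figure can be checked against von Neumann's remark in the preceding note: n two-valued markers distinguish 2ⁿ possibilities, so 19 bits suffice for 351,482 numbered words, and 4 bits suffice for one decimal digit. A quick sketch:

    import math

    words_in_dictionary = 351_482                     # Petzold's figure
    bits_needed = math.ceil(math.log2(words_in_dictionary))
    print(bits_needed, 2**bits_needed)                # 19 bits cover 524,288 possibilities
    print(math.ceil(math.log2(10)))                   # 4 two-valued markers per decimal digit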
<https://home.unicode.org/>.
Cf. <https://www.unicode.org/faq/>.
“Il computer e l'uomo (è un'ovvietà) non usano un codice comunicativo condiviso. Il problema della memorizzazione informatica di un qualsiasi testo, dunque, è sempre un problema di codifica, poiché si tratta di tradurre quel testo qualque in modo che sia leggibile alla macchina, di trasporre l'informazione testuale, come si dice, in Machine Readable Form (MRF)”, Edoardo Ferrarini (2007), “La trascrizione dei testimoni manoscritti: metodi di filologia computazionale”, in Arianna Ciula & Francesco Stella (eds.), Digital philology and medieval texts, Pisa, Pacini, p. 104.
“Such an analysis [of CC], however, is dependent upon a precise knowledge of the system of orders used in controlling the device, since the function of CC is to receive these orders, to interpret them, and then either to carry them out, or to stimulate properly those organs which will carry them out. It is therefore our immediate task to provide a list of the orders which control the device, i.e. to describe the code to be used in the device, and to define the mathematical and logical meaning and the operational significance of its code words”, John von Neumann (1945), First Draft of a Report on the EDVAC, p. 37, “A computing machine is controlled, as I pointed out above, by codes, sequences of symbols–usually binary symbols–i.e. by strings of bits. In any set of instructions that govern the use of a particular computing machine it must be made clear which strings of bits are orders and what they are supposed to cause the machine to do”, John von Neumann (2012), The Computer & the Brain, p. 72, “The problem [of devising codes] is of a practical nature and is closely allied to that connected with the choice of the elementary operations in the arithmetic organ. The code for a machine is in reality the vocabulary or totality of words or orders that the machine can « understand » and « obey »”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 258, “It is the totality of orders that makes up the language a machine understands; it is usually referred to as machine language. This is in modern parlance the most primitive or lowest level language of machines. Let us therefore ask a little about the structure of this language. By 1 July 1952 the Institute computer had a basic vocabulary of 29 instructions. Each such order consisted, in general, of ten binary digits to express a memory location – 2¹⁰ = 1024 – and 10 additional ones to express the specific operation”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 333.
“Il lavoro della trascrizione-edizione consisterebbe dunque nella ri-codificazione di tutti gli elementi pertinenti del sistema segnico MSc nel sistema segnico MEcm; e le norme chiamate a presiedere, in modo analitico e convenzionale, alla trascrizione fungerebbero da intercodice (o metacodice) di equivalenza tra i due sistemi, garantendo la scientificità dell'operazione di ricodificazione”, Raul Mordenti (2001), Informatica e critica dei testi, p. 76, “Using the same method of logical substitution by which a Turing machine can be instructed to interpret successively higher-level languages – or by which Gödel was able to encode metamathematical statements within ordinary arithmetic – it was possible to design Turing machines whose coded instructions addressed physical components, not merely locations, and whose output could be translated into physical objects, not just zeros and ones”, George B. Dyson (2012), Turing’s Cathedral, p. 284.
“details such as how the instruction codes are represented or exactly how things are set out in memory are not needed to use the instructions. This is the first and most elementary step in a series of hierarchies. We want to be able to maintain such ignorance consistently. In other words, we only want to have to think about the lower details once and then design things so that the next guy who comes along and wants to use your structure does not have to worry about the lower level details”, Richard P. Feynman (1996), Lectures on Computation, p. 3, “Multiple levels of translation separate the languages now used by computer programmers from the machine language by which the instructions are carried out”, George B. Dyson (2012), Turing’s Cathedral, p. 241.
“It must be mentioned, however, that computer programming was originally done on an even lower level, if possible, than that of machine language – namely, connecting wires to each other, so that the proper operations were « hard-wired » in”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 290.
“The many levels in a complex computer system have the combined effect of « cushioning » the user, preventing him from having to think about the many lower-level goings-on which are most likely totally irrelevant to him anyway”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 296, “One of the major goals of the drive to higher levels has always been to make as natural as possible the task of communicating to the computer what you want it to do”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 297.
“The idea of assembly language is to « chunk » the individual machine language instructions, so that instead of writing the sequence of bits « 010111000 » when you want an instruction which adds one number to another, you simply write ADD, and then instead of giving the address in binary representation, you can refer to the word in memory by a name”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 290.
“And here is the vital point: someone can write, in machine language, a translation program. This program, called an assembler, accepts mnemonic instruction names, decimal numbers, and other convenient abbreviations which a programmer can remember easily, and carries out the conversion into the monotonous but critical bit-sequences. After the assembly language program has been assembled (i.e., translated), it is run–or rather, its machine language equivalent is run”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 291.
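A toy sketch of what such an assembler does (the mnemonics, opcodes and symbolic names below are invented, not a historical instruction set): it maps mnemonic instruction names and named memory words onto the bit sequences the machine actually executes.

    # Hypothetical 4-bit opcodes and a symbol table of named memory addresses.
    OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}
    SYMBOLS = {"X": 12, "Y": 13, "RESULT": 14}

    def assemble(line: str) -> str:
        mnemonic, operand = line.split()
        word = (OPCODES[mnemonic] << 10) | SYMBOLS[operand]   # 4-bit opcode + 10-bit address
        return f"{word:014b}"

    program = ["LOAD X", "ADD Y", "STORE RESULT"]
    print([assemble(line) for line in program])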
“It was clear that a most powerful addition to any programming language would be the ability to define new higher‑level entities in terms of previously known ones, and then to call them by name. […] Unlike the case with assembly language, there is no straightforward one-to-one correspondence between statements in Algol and machine language instructions. To be sure, there is still a type of mapping from Algol into machine language, but it is far more « scrambled » than that between assembly language and machine language”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 292-293.
“Of course, we already went « up » a bit when we summarized operations by instructions such as « Clear A », and so on. This sort of shorthand is introduced for our benefit, and programs written in it cannot be understood directly by the machine itself. Such « assembly language » programs have to be translated into a « machine language » that the computer can understand, and this is done by a program called an « assembler ». The next level up, where we have multiplication and variables and so on, needs another program to translate these « high-level » programs into Assembly Language. These translation programs are called « compilers » or « interpreters ». The difference between them is in when the translation is done. An interpreter works out what to do step by step, as the program runs, interpreting each successive instruction in terms of the cruder language. A compiler takes the program as a whole and converts it all into assembly or machine language before the program is run”, Richard P. Feynman (1996), Lectures on Computation, p. 18, “Clearly, one can keep going up in level, putting together new algorithms, programming languages, adding the ability to manipulate « files » containing programs and data, and so on. Nowadays it is possible for most people to actually work at these higher levels using high-level languages to program their machines”, Richard P. Feynman (1996), Lectures on Computation, p. 19. “a compiler can be written in assembly language, and an assembler in machine language”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 294. “Perhaps we should pause to say a few words about the meaning of the concepts interpreter and compiler. Both are programs written in languages which are not understood by a computer. Both must therefore be translated by the computer into a machine-language program before they can be carried out. In the case of the interpreter, the translation or decoding of each statement is done every time that statement is read; with the compiler the decoding of each statement is done a priori, and from there on the computer deals only with the machine-language program”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 339.
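Feynman's distinction can be sketched with a made-up three-statement « high-level » language (everything below is illustrative, not any real system): the interpreter decodes each statement every time it is reached, while the « compiler » first translates the whole program into a lower-level form (here, plain Python source) and only then runs it.

    # A hypothetical three-statement "high-level" language.
    program = [("SET", "a", 2), ("ADD", "a", 3), ("PRINT", "a")]

    def interpret(program):
        # decode and execute each statement as it is reached
        env = {}
        for kind, name, *rest in program:
            if kind == "SET":
                env[name] = rest[0]
            elif kind == "ADD":
                env[name] += rest[0]
            elif kind == "PRINT":
                print(env[name])

    def compile_to_python(program):
        # translate the whole program into lower-level code before running any of it
        lines = []
        for kind, name, *rest in program:
            if kind == "SET":
                lines.append(f"{name} = {rest[0]}")
            elif kind == "ADD":
                lines.append(f"{name} = {name} + {rest[0]}")
            elif kind == "PRINT":
                lines.append(f"print({name})")
        return "\n".join(lines)

    interpret(program)                    # -> 5
    exec(compile_to_python(program))      # the translated form is run as a whole -> 5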
“The next level of the hierarchy carries much further the extremely powerful idea of using the computer itself to translate programs from a high level into lower levels”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 292.
“All our interactions with digital information are mediated through layers of platforms. […] This includes, but is not limited to, operating systems, programming languages, file formats, software applications for creating and rendering content, encoding schemes, compression algorithms, and exchange protocols”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 41.
“It is striking how tight the connection is between progress in computer science (particularly Artificial Intelligence) and the development of new languages. A clear trend has emerged in the last decade: the trend to consolidate new types of discoveries in new languages. One key for the understanding and creation of intelligence lies in the constant development and refinement of the languages in terms of which processes for symbol manipulation are describable. […] It is not that each higher level extends the potential of the computer; the full potential of the computer already exists in its machine language instruction set. It is that the new concepts in a high-level language suggest directions and perspectives by their very nature. […] The « space » of all possible programs is so huge that no one can have a sense of what is possible. Each higher-level language is naturally suited for exploring certain regions of « program space »; thus the programmer, by using that language, is channeled into those areas of program space. He is not forced by the language into writing programs of any particular type, but the language makes it easy for him to do certain kinds of things. [...] This shows how a notational system can play a significant role in shaping the final product”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 299.
“The actual code for a problem is that sequence of coded symbols... that has to be placed into the... memory in order to cause the machine to perform the desired and planned sequence of operations, which amounts to solving the problem in question. […] Coding a problem for the machine would merely be what its name indicates: Translating a meaningful text... from one language (the language of mathematics, in which the planner will have conceived the problem, or rather the numerical procedure by which he has decided to solve the problem) into another language (that one of our code)”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 268. “How can you compare a program written in APL, with one written in Algol? Certainly not by matching them up line by line. You will again chunk these programs in your mind, looking for conceptual, functional units which correspond. Thus, you are not comparing hardware, you are not comparing software – you are comparing « etherware » – the pure concepts which lie back of the software. There is some sort of abstract « conceptual skeleton » which must be lifted out of low levels before you can carry out a meaningful comparison of two programs in different computer languages, of two animals, or of two sentences in different natural languages”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 381.
“The instructions which govern these operation must be given to the device in absolutely exhaustive detail. They include all numerical information which is required to solve the problem under consideration [...] All these procedures require the use of some code to express the logical and the algebraical definition of the problem under consideration, as well as the necessary numerical material”, John von Neumann (1945), First Draft of a Report on the EDVAC, p. 1.
“For today's computers to perform a complex task, we need a precise and complete description of how to do that task in terms of a sequence of simple basic procedures ‒ the « software » ‒ and we need a machine to carry out these procedures in a specifiable order ‒ this is the « hardware ». This instructing has to be exact and unambiguous. In life, of course, we never tell each other exactly what we want to say; we never need to, as context, body language, familiarity with the speaker, and so on, enable us to « fill in the gaps » and resolve any ambiguities in what is said. Computers, however, can't yet « catch on » to what is being said, the way a person does. They need to be told in excruciating detail exactly what to do. Perhaps one day we will have machines that can cope with approximate task descriptions, but in the meantime we have to be very prissy about how we tell computers to do things”, Richard P. Feynman (1996), Lectures on Computation, p. 2-3, “If the computer is to be reliable, then it is necessary that it should understand, without the slightest chance of ambiguity, what it is supposed to do”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 297.
“Questo passaggio comporta pertanto l'assunzione delle note caratteristiche di univocità, coerenza, non contraddittorietà, non ridondanza, che sono connesse all'uso dello strumento informatico. Da tali caratteristiche della ricodifica (e da esse soltanto) dipende la possibilità di utilizzare le straordinarie capacità della macchina di produrre automaticamente e in tempi brevissimi concordanze e frequenze, indices locorum, co-occorrenze, analisi statistiche e stilistiche, indici dei nomi etc., nonché di compiere in un istante correzioni automatiche e sistematiche sull'intero testo”, Raul Mordenti (2001), Informatica e critica dei testi, p. 48-49.
“It was realized that the computer really processed information, not just numbers”, Herman H. Goldstine (1972), The Computer from Pascal to von Neumann, p. 8-9.
It must be noted that time in the machine is not the same as time outside the machine. “No time is there. Sequence is different from time. (Julian Bigelow, 1999)”, George B. Dyson (2012), Turing’s Cathedral, p. 294, “In our universe, we measure time with clocks, and computers have a « clock speed, » but the clocks that govern the digital universe are very different from the clocks that govern ours. In the digital universe, clocks exist to synchronize the translation between bits that are stored in memory (as structures in space) and the bits that are communicated by code (as sequences in time). They are clocks more in the sense of regulating escapement than in the sense of measuring time”, George B. Dyson (2012), Turing’s Cathedral, p. 299, “«No clocks. You don't need clocks. You only need counters. There's a difference between a counter and a clock. A clock keeps track of time. A modern general purpose computer keeps track of events. » This distinction separates the digital universe from our universe, and is one of the few distinctions left”, George B. Dyson (2012), Turing’s Cathedral, p. 300, “The Turing machine thus embodies the relationship between an array of symbols in space and a sequence of events in time”, George B. Dyson (2012), Turing’s Cathedral, p. 248.
“A distinction which is made in Artificial Intelligence is that between procedural and declarative types of knowledge. A piece of knowledge is said to be declarative if it is stored explicitly, so that not only the programmer but also the program can « read » it as if it were in an encyclopedia or an almanac. This usually means that it is encoded locally, not spread around. By contrast, procedural knowledge is not encoded as facts – only as programs. [...] Thus procedural knowledge is usually spread around in pieces, and you can't retrieve it, or « key » on it. It is a global consequence of how the program works, not a local detail. In other words, a piece of purely procedural knowledge is an epiphenomenon. […] In between the declarative and procedural extremes, there are all possible shades”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 363, “A large amount of work in AI has nevertheless gone into systems in which the bulk of the knowledge is stored in specific places – that is, declaratively. It goes without saying that some knowledge has to be embodied in programs; otherwise one would not have a program at all, but merely an encyclopedia. The question is how to split up knowledge between program and data. Not that it is always easy to distinguish between program and data, by any means”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 616.
“Although we can try to draw a clear line between program and data, the distinction is somewhat arbitrary. Carrying this line of thought further, we find that not only are program and data intricately woven together, but also the interpreter of programs, the physical processor, and even the language are included in this intimate fusion. Therefore, although it is possible (to some extent) to draw boundaries and separate out the levels, it is just as important – and just as fascinating – to recognize the level-crossings and mixings”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 547.
“You can see that a [shift] register like this takes a sequential piece of information and turns it into parallel information”, Richard P. Feynman (1996), Lectures on Computation, p. 50, “Turing machines, which by definition are structures that can be encoded as sequences”, George B. Dyson (2012), Turing’s Cathedral, p. 290. “« The importance of structure to how logical processes take place is beginning to diminish as the complexity of the logical process increases. » Bigelow then pointed out that the significance of Turing's 1936 result was « to show in a very important, suggestive way how trivial structure really is. » Structure can always be replaced by code”, George B. Dyson (2012), Turing’s Cathedral, p. 274-275.
“The fundamental, indivisible unit of information is the bit. The fundamental, indivisible unit of digital computation is the transformation of a bit between its two possible forms of existence: a structure (memory) or a sequence (code)”, George B. Dyson (2012), Turing’s Cathedral, p. 124, “A digital universe – whether 5 kilobytes or the entire Internet – consists of two species of bits: differences in space, and differences in time. Digital computers translate between these two forms of information – structure and sequence – according to definite rules. Bits that are embodied as structure (varying in space, invariant across time) we perceive as memory, and bits that are embodied as sequence (varying in time, invariant across space) we perceive as code. Gates are the intersections where bits span both worlds at the moments of transition from one instant to the next”, George B. Dyson (2012), Turing’s Cathedral, p. 3.
“Here is a case which demonstrates that, despite the theoretical equivalence of data and programs, in practice the choice of one over the other has major consequences”, Douglas R. Hofstadter (1999), Gödel, Escher, Bach, p. 630.
“A key factor in the re-usability of data is the extent to which it is well structured. The more regular and well-defined the structure of the data the more easily people can create tools to reliably process it for reuse”, Tom Heath & Christian Bizer (2011), Linked Data: Evolving the Web into a Global Data Space, San Francisco, Morgan & Claypool. DOI: https://doi.org/10.2200/S00334ED1V01Y201102WBE001 [consulted on 06/12/2024].
“The servers are active processes that reply to requests [...] clients are browser processes”, Tim Berners-Lee (1990), Proposal for a Hypertext Project. URL: https://cds.cern.ch/record/2639699/files/Proposal_Nov-1990.pdf [consulted on 06/12/2024], p. 4.
“But the gift of the Web wasn’t only informational: by its very existence it gave us new tools to identify and understand networks themselves”, James Bridle (2022), Ways of Being, p. 81.
Cf. <https://www.home.cern/science/computing/birth-web/short-history-web>; “HyperText is a way to link and access information of various kinds as a web of nodes in which the user can browse at will”, Tim Berners-Lee (1990), Proposal for a Hypertext Project, p. 1.
Cf. <https://cds.cern.ch/record/1164399/?ln=it>.
<https://www.w3.org/about/>.
<https://html.spec.whatwg.org/>.
Tim Berners-Lee (1990), Proposal for a Hypertext Project, p. 2.
There is some terminological overlap between Web 3.0 and Web3, the latter being mainly associated with blockchain technology.
Tim Berners-Lee (1998a), “Semantic Web Road Map”, Design Issues. URL: https://www.w3.org/DesignIssues/Semantic.html [consulted on 06/12/2024], a collection of personal notes by Tim Berners-Lee that explain the architectural and philosophical principles underlying the Web.
Tim Berners-Lee, James Hendler & Ora Lassila (2001), “The Semantic Web: A new form of web content that is meaningful to computers will unleash a revolution of new possibilities”, Scientific American, 284 (5), p. 34-43. URL: http://www.sciam.com/article.cfm?id=the-semantic-web [consulted on 06/12/2024].
Tim Berners-Lee et al. (2001), “The Semantic Web: A new form of web content that is meaningful to computers will unleash a revolution of new possibilities”, p. 34; Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, p. 263.
Cf. <https://opendefinition.org/>.
“That a bundle of data is « linked » means that any links or references in it are made explicit (for humans and for machines)”, Gabriel Müller & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, in Maarten J.F.M. Hoenen (ed.), Past and Future: Medieval Studies Today, Turnhout, TEMA, p. 62.
“Technically speaking, Linked Data refers to data published on the Web in such a way that it is machine readable, its meaning is explicitly defined, it is linked to other external datasets, and it can in turn be linked to from external datasets as well. Conceptually, Linked Data refers to a set of best practices for publishing and connecting structured data on the Web”, Liyang Yu (2011), Developer’s Guide to the Semantic Web, p. 409. Cf. “Increasingly, one set of objects can serve as metadata for another set”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 87, “Many digital objects index, describe, and annotate each other. […] This linked set of connections becomes a powerful form of context”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 156, “Herein we see a key point about digital objects: They describe not only themselves in machine readable ways but also each other. Further, every bit of metadata points in every direction”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 145.
<https://5stardata.info/>.
“1. Use URIs as names for things. 2. Use HTTP URIs so that people can look up those names. 3. When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL). 4. Include links to other URIs so that they can discover more things”, Tim Berners-Lee (2006), “Linked Data”, Design Issues. URL: https://www.w3.org/DesignIssues/LinkedData.html [consulted on 06/12/2024].
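A sketch of the four principles using the rdflib Python library (assuming rdflib is installed; the example resource and its link to DBpedia are invented): the thing is named with an HTTP URI, described with standard vocabularies, and linked to an external URI from which more can be discovered.

    from rdflib import Graph, URIRef, Literal
    from rdflib.namespace import RDFS, OWL

    g = Graph()
    person = URIRef("http://example.org/id/alan-turing")            # 1.-2. an HTTP URI names the thing
    g.add((person, RDFS.label, Literal("Alan Turing")))             # 3. useful information via standards
    g.add((person, OWL.sameAs,
           URIRef("http://dbpedia.org/resource/Alan_Turing")))      # 4. a link to another URI
    print(g.serialize(format="turtle"))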
Cf. Tim Berners-Lee (2006), “Linked Data”, <https://handbook.opendata.swiss/de/content/glossar/bibliothek/linked-open-data.html>.
Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 32, Gabriel Müller & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, p. 50 and p. 66.
Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, in Fabio Ciotti & Gianfranco Crupi (eds.), Dall’Informatica umanistica alle culture digitali. In memoria di Giuseppe Gigliozzi, Roma, Quaderni DigiLab/Università Sapienza di Roma, p. 292, Gabriel Müller & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, p. 70.
Raul Mordenti (2001), Informatica e critica dei testi, p. 35; Peter Robinson (2013), “Towards a Theory of Digital Editions”, Variants: The Journal of the European Society for Textual Scholarship, 10, p. 105 and p. 126.
Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, p. 305, Peter Boot & Marijn Koolen (2021), “Connecting TEI Content Into an Ontology of the Editorial Domain”, in Elena Spadini, Francesca Tomasi & Georg Vogeler (eds.), Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing, Band, Schriften des Instituts für Dokumentologie und Editorik, p. 9-10, Gabriel Müller & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, p. 66, Georg Vogeler (2021), “« Standing-off Tree and Graphs »: On the Affordance of Technologies for the Assertive Edition”, in Elena Spadini, Francesca Tomasi & Georg Vogeler (eds.), Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing, Band, Schriften des Instituts für Dokumentologie und Editorik, p. 78-79.
“« edizione critica eccellente » quella che « offre i materiali necessari e sufficienti per un’altra edizione critica della stessa opera condotta secondo criteri differenti »”, De Robertis quoted by Raul Mordenti (2001), Informatica e critica dei testi, p. 67, “fornire per mezzo della trascrizione la maggior quantità possibile di dati per la lettura (anzi, per le varie e molteplici letture) e non invece di fornire la lettura definitiva”, Raul Mordenti (2001), Informatica e critica dei testi, p. 80. Cf. “Data, even from distributed sources may fuel various editions, differing in scope and distributed over place and time. Editorial content is transformed into modules or even more fine granular sets or particles of addressable, linkable and integratable objects”, Patrick Sahle (2016), “What is a Scholarly Digital Edition”, p. 36.
Cf. Jörg Wettlaufer (2018), “Der nächste Schritt? Semantic Web und digitale Editionen”, in Roland S. Kamzelak & Tim Steyer (eds.), Digital Metamorphose: Digital Humanities und Editionswissenschaft, Wolfenbüttel, Zeitschrift für digitale Geisteswissenschaften. DOI: https://doi.org/10.17175/sb002_007 [consulted on 06/12/2024], Georg Vogeler (2021), “« Standing-off Tree and Graphs »”, p. 87, Scholastic Commentaries and Texts Archive (<https://scta.info>); Paolo Bufalini’s Notebook (<https://projects.dharc.unibo.it/bufalini-notebook/>).
<https://www.w3.org/2001/sw/wiki/Main_Page>.
<https://en.wikipedia.org/wiki/Semantic_Web_Stack>; <https://www.w3.org/2000/Talks/1206-xml2k-tbl/slide10-0.html>; <https://www.w3.org/2007/Talks/0130-sb-W3CTechSemWeb/#(24)>; <https://smiy.wordpress.com/2011/01/10/the-common-layered-semantic-web-technology-stack/>.
Cf. Pat Hayes’s criticism of the SW stack: <https://videolectures.net/videos/iswc09_hayes_blogic>.
<https://en.wikipedia.org/wiki/Internationalized_Resource_Identifier>.
Cf. <https://datatracker.ietf.org/doc/html/rfc3986#section-1.1.3>.
Cf. <https://www.w3.org/RDF/>; <https://www.w3.org/1999/02/22-rdf-syntax-ns>; <https://en.wikipedia.org/wiki/Resource_Description_Framework>.
“Definition of RDF Triple: Assume that I is the set of all IRI references, B (an infinite) set of blank nodes, L the set of literals. An RDF triple t is defined as a triple t = <s,p,o> where s ∈ I ∪ B is called the subject, p ∈ I is called the predicate and o ∈ I ∪ B ∪ L is called the object”, Dominik Tomaszuk (2016), “Inference rules for RDF(S) and OWL in N3Logic”, arXiv. DOI: https://doi.org/10.48550/arXiv.1601.02650 [consulted on 06/12/2024], p. 1.
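The quoted definition can be transcribed almost literally in Python (a sketch; the class names are mine): the subject must belong to I ∪ B, the predicate to I, and the object to I ∪ B ∪ L.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class IRI:
        value: str        # element of I (IRI references)

    @dataclass(frozen=True)
    class BNode:
        label: str        # element of B (blank nodes)

    @dataclass(frozen=True)
    class Literal:
        value: str        # element of L (literals)

    def triple(s, p, o):
        # t = <s, p, o> with s ∈ I ∪ B, p ∈ I, o ∈ I ∪ B ∪ L
        assert isinstance(s, (IRI, BNode))
        assert isinstance(p, IRI)
        assert isinstance(o, (IRI, BNode, Literal))
        return (s, p, o)

    t = triple(IRI("http://example.org/turing"),
               IRI("http://xmlns.com/foaf/0.1/name"),
               Literal("Alan Turing"))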
Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 99.
Tim Berners-Lee (1999a), “The Semantic Web as a language of logic”, Design Issues. URL: https://www.w3.org/DesignIssues/Logic.html [consulted on 06/12/2024], Tim Berners-Lee (1999b), “The Semantic Toolbox”, Design Issues. URL: https://www.w3.org/DesignIssues/Toolbox.html [consulted on 06/12/2024], Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, p. 283. For reasons of computational efficiency and logical decidability, it is often necessary to limit expressiveness to specific parts or subsystems composed of consistent and reliable data. Cf. Tim Berners-Lee (1999b), “The Semantic Toolbox”.
Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, p. 280, Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, p. 257.
“To publish data on the Web, the items in a domain of interest must first be identified. These are the things whose properties and relationships will be described in the data, and may include Web documents as well as real‑world entities and abstract concepts. [...] Where URIs identify real-world objects, it is essential to not confuse the objects themselves with the Web documents that describe them. It is, therefore, common practice to use different URIs to identify the real-world object and the document that describes it, in order to be unambiguous”, Tom Heath & Christian Bizer (2011), Linked Data: Evolving the Web into a Global Data Space, San Francisco, Morgan & Claypool. DOI: https://doi.org/10.2200/S00334ED1V01Y201102WBE001 [consulted on 06/12/2024], p. 9-10.
Tim Berners-Lee (1998b), “Using XML for Data”.
<https://dbpedia.org/>.
<https://www.w3.org/TR/xml/>; <https://en.wikipedia.org/wiki/XML>.
<https://www.w3.org/TR/turtle/>; <https://en.wikipedia.org/wiki/Turtle_(syntax)>.
<https://w3c.github.io/N3/spec/>; <https://en.wikipedia.org/wiki/Notation3>.
<https://json-ld.org/>.
<https://www.w3.org/TR/sparql11-query/>; <https://en.wikipedia.org/wiki/SPARQL>.
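A minimal SPARQL sketch with the rdflib Python library (assuming rdflib is installed; the data and the query are invented): a SELECT query returns bindings for the variables of its graph pattern.

    from rdflib import Graph, URIRef, Literal
    from rdflib.namespace import FOAF

    g = Graph()
    g.add((URIRef("http://example.org/id/alan-turing"), FOAF.name, Literal("Alan Turing")))

    results = g.query("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?person ?name WHERE { ?person foaf:name ?name . }
    """)
    for person, name in results:
        print(person, name)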
“The logic of computational media is, by and large, the logic of the database. Where the index or the codex is a valuable metaphor for the order and structure of a book, as new media studies scholarship suggests, the database is and should be approached as the foundational metaphor for digital media. From this perspective, there is no persistent « first row » in a database; instead the presentation and sorting of digital information is based on the query posed to the data”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 35, “The search function is one of the core aspects of the database logic. We don't read databases. We query them”, Trevor Owens (2018), The Theory and Craft of Digital Preservation, p. 35.
For the specification of RDF formal semantics see <https://www.w3.org/TR/rdf11-mt/>.
<https://www.w3.org/TR/rdf-schema/>.
<https://www.w3.org/OWL/>.
Willem N. Borst (1997), Construction of Engineering Ontologies, Institute for Telematics and Information Technology/University of Twente, Enschede, p. 12. Cf. also Thomas R. Gruber (1994), “Toward Principles for the Design of Ontologies Used for Knowledge Sharing”, International Journal of Human-Computer Studies, 43, p. 907-928.
Tim Berners-Lee et al. (2001), “The Semantic Web”, “Pure logic is ontologically neutral. It makes no presuppositions about what exists or may exist in any domain or any language for talking about the domain. To represent knowledge about a specific domain, it must be supplemented with an ontology that defines the categories of things in that domain and the terms that people use to talk about them. The ontology defines the words of a natural language, the predicates of predicate calculus, the concept and relation types of conceptual graphs, the classes of an object-oriented language, or the tables and fields of a relational database”, John F. Sowa (2000), “Ontology, Metadata, and Semiotics”, p. 3.
Dino Buzzetti (2011), “Oltre il rappresentare: le potenzialità del markup”, p. 41.
Cf. Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, p. 260.
Cf. Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 98.
“When you represent information as a DLG [directed labelled graph] the nodes don’t actually contain any information: it’s all in the connections”, Tim Berners-Lee (1998a), “Semantic Web Road Map”, “Entities obtain meaning based on the way, and in the extent to which, they are related to other entities via properties. Databases often contain implicit, condensed, or shortcut semantics. In order to be explicit, it is important that these semantics are unravelled”, Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 107.
In RDF, reification is also used to make statements about triples themselves: a triple is represented as a resource so that further triples can describe it.
“I dati acquistano valore di conoscenza quando sono interconnessi con altri dati, quando la loro interconnessione produce deflagranti effetti di rete. E la rivoluzione copernicana dei linked data consiste proprio nel fatto che il link, strumento di collegamento tra documenti nel web tradizionale, acquista, nel contesto del semantic web, un ruolo semantico primario, una funzione predicativa che dà significato ai dati stessi, poiché rappresenta ed esprime i differenti tipi di relazione che essi possono intrattenere”, Gianfranco Crupi (2012), “Universo bibliografico e semantic web”, p. 305.
<https://cidoc-crm.org/lrmoo>.
Tim Berners-Lee (1998b), “Using XML for Data”, Tim Berners-Lee et al. (2001), “The Semantic Web”, Gabriel Müller & Ueli Zahnd (2021), “Open Scholasticism. Editing Networks of Thought in the Digital Age”, p. 63.
Dino Buzzetti (2011), “Oltre il rappresentare: le potenzialità del markup”, p. 49.
<https://cidoc-crm.org/>.
Cf. <https://www.w3.org/TR/owl2-overview/#Semantics>.
SROIQ is a highly expressive Description Logic: it extends the SHOIN Description Logic with, among other features, complex role inclusion axioms, reflexive and irreflexive roles, and qualified cardinality restrictions. Cf. <https://en.wikipedia.org/wiki/Description_logic>.
<https://www.w3.org/TR/rdf11-mt/>.
<https://www.w3.org/TR/owl2-overview/#Profiles>; <https://www.w3.org/TR/owl2-profiles/>.
Dörthe Arndt (2019), Notation3 as the unifying logic for the semantic web, PhD thesis in Technology and Engineering, Ghent University, Ghent. URL: https://biblio.ugent.be/publication/8634507 [consulted on 06/12/2024], p. 12.
Tim Berners-Lee et al. (2001), “The Semantic Web”.
Dean Allemang & James Hendler (2011), Semantic Web for the Working Ontologist, Oxford, Elsevier LTD, p. 6.
<https://eulersharp.sourceforge.net/2003/03swap/rdfs-subClassOf.html>.
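The rule behind that link, if ?x has type ?A and ?A is a subclass of ?B then ?x has type ?B, can be sketched as a small forward-chaining step over plain Python tuples (an illustration with invented toy triples, not a real reasoner):

    RDF_TYPE, SUBCLASS_OF = "rdf:type", "rdfs:subClassOf"

    triples = {
        ("ex:Phaedo", RDF_TYPE, "ex:Dialogue"),
        ("ex:Dialogue", SUBCLASS_OF, "ex:Text"),
        ("ex:Text", SUBCLASS_OF, "ex:Work"),
    }

    def saturate(triples):
        # apply the rdfs:subClassOf entailment rule until nothing new is derived
        inferred = set(triples)
        while True:
            new = {(x, RDF_TYPE, b)
                   for (x, p, a) in inferred if p == RDF_TYPE
                   for (a2, q, b) in inferred if q == SUBCLASS_OF and a2 == a}
            if new <= inferred:
                return inferred
            inferred |= new

    for t in sorted(saturate(triples)):
        print(t)   # derives ex:Phaedo rdf:type ex:Text and ex:Phaedo rdf:type ex:Work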
<https://w3c.github.io/N3/spec/>; <https://www.w3.org/TeamSubmission/n3/>. Cf. also <https://nie-ine.github.io/e-editiones/n3-rule-based-machine-reasoning>.
<https://www.w3.org/DesignIssues/N3Logic>.
<https://github.com/eyereasoner/EyeClient>; <https://josd.github.io/eye/>; Ruben Verborgh & Jos De Roo (2015), “Drawing Conclusions from Linked Data on the Web. The EYE Reasoner”, IEEE Software, May/June (3). URL: https://josd.github.io/Papers/EYE.pdf [consulted on 06/12/2024]. See also the “Online course Semantic Web Reasoning With EYE” at <https://n3.restdesc.org/>.
Cf. Tim Berners-Lee, Dan Connolly, Lalana Kagal, Yosi Scharf & Jim Hendler (2008), “N3Logic: A logical framework for the World Wide Web”, Theory and Practice of Logic Programming, 8 (3), p. 249-269. DOI: https://doi.org/10.1017/S1471068407003213 [consulted on 06/12/2024], Dörthe Arndt (2019), Notation3 as the unifying logic for the semantic web.
<http://www.w3.org/2000/10/swap/reason>.
Cf. Fabio Ciotti (2012), “Web semantico, linked data e studi letterari: verso una nuova convergenza”, p. 261.
“[The Semantic Web] is not merely another data model, but also includes reflections on semiotics, semantics, linguistics (in relation to different natural languages), logic, and IT”, Hans Cools & Roberta Padlina, “Formal Semantics for Scholarly Editions”, p. 101.
“But the question what ontology actually to adopt still stands open, and the obvious counsel is tolerance and an experimental spirit”, Willard V. O. Quine (1963), From a Logical Point of View, p. 19.
“as developers strive to provide the structure and organization beyond the just linking of data, they are not making very much use of the formal semantics that were standardized in the semantic web languages. Modern semantic approaches leverage vastly distributed, heterogeneous data collection with needs-based, lightweight data integration. These approaches take advantage of the coexistence of a myriad of different, sometimes contradictory, ontologies of varying levels of detail without assuming all-encompassing or formally correct ontologies. In addition, we are beginning to see the increased use of textual data that is available on the web, in hundreds of languages, to train artificially intelligent agents [...] These projects are increasingly leveraging the semantic markup that is available on the web”, Abraham Bernstein, James Hendler & Natalya Noy (2016), “A New Look at the Semantic Web”, Communications of the ACM, 59 (9). DOI: https://doi.org/10.1145/2890489 [consulted on 06/12/2024], p. 2.
Cf. Harry Halpin et al. (2010), “When owl:sameAs isn’t the Same: An Analysis of Identity Links on the Semantic Web”. “Sameness can be quite subtle”, Melanie Mitchell (2019), Artificial Intelligence, p. 337. Cf. also “To each of these ways of determining the point there corresponds a particular name. Hence the need for a sign for identity of content rests upon the following consideration: the same content can be completely determined in different ways; but that in a particular case two ways of determining it really yield the same result is the content of a judgment. [...] The judgment, however, requires for its expression a sign for identity of content, a sign that connects these two names. From this it follows that the existence of different names for the same content is not always merely an irrelevant question of form; rather, that there are such names is the very heart of the matter if each is associated with a different way of determining the content”, Gottlob Frege (1879), “Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought”.
<https://www.w3.org/2021/12/rdf-star.html>.
<https://www.w3.org/community/rdf-dev/2022/01/26/provenance-in-rdf-star/>.
<https://w3c-cg.github.io/rdfsurfaces/>; “RDF lacks the capability to express negated statements in a generic way. As a result, exchanging negative information on a Web scale is thus far restricted to specific cases and predefined statements. The ability to negate (virtually) any RDF statement allows for a comprehensive way to refute, deny or otherwise invalidate claims on a Web scale. Via an intermediate step of a diagrammatic approach to logical expressions called Peirce graphs, we introduce RDF Surfaces, an extension of RDF that incorporates the concept of classical negation, known from first-order logic. Overall, RDF Surfaces provides an abstract, visual approach to negation within the Semantic Web, offering a more general and widely applicable approach than previous attempts at incorporating negation”, Patrick Hochstenbach, Mathijs van Noort, Dörthe Arndt, Rebekka Martens, Jos De Roo, Ruben Verborgh, Pieter Bonte & Femke Ongenae, “RDF Surfaces: Enabling Classical Negation on the Semantic Web”, arXiv. DOI: https://doi.org/10.48550/arXiv.2406.10659 [consulted on 06/12/2024].
<https://linked.art/loud/>.
<https://linked-data-from-tei.readthedocs.io/en/latest/>.
<https://www.geovistory.org/>.
<https://www.leaf-vre.org/docs/features/about-lw>.