
1. Understanding the Rise of Artificial Intelligence

The Third Age of Artificial Intelligence

Nicolas Miailhe and Cyrus Hodes
p. 6-11

Abstract

If the definitional boundaries of Artificial Intelligence (AI) remain contested, experts agree that we are witnessing a revolution. “Is this time different?” is the question they anxiously debate when comparing the socio-economic impact of the AI revolution with that of the industrial revolutions of the 19th and 20th centuries. This Schumpeterian wave may prove to be a creative destruction, raising incomes, enhancing quality of life for all, and generating previously unimagined jobs to replace those that are automated. Or it may turn out to be a destructive creation leading to mass unemployment, abuses, or loss of control over decision-making processes. Which way it goes depends on the velocity and magnitude of the development and diffusion of AI technologies, a point over which experts diverge widely.


Introduction


The definition of “Artificial Intelligence” is not easy and remains contested1, especially given science’s inability to pin down a definition of “intelligence” accepted by all. Definitions abound and generally overlap in pointing to ‘agents’ (programs running on computer systems) able to learn, adapt and deploy themselves successfully in dynamic and uncertain environments. Intelligence in that sense intersects with autonomy and adaptability, through the ability to learn from a dynamic environment.

Defining Artificial Intelligence

The intersection of Big Data, machine learning and cloud computing


To understand the current renaissance of what we frame as “Artificial Intelligence,” a field as old as computer science itself, we need to turn to the convergence of three trends: i) Big Data, ii) machine learning and iii) cloud super-computing. In that sense, the rise of AI is really a manifestation of the digital revolution. One of its central laws, predicted in 1965 by Gordon Moore, co-founder of chip manufacturer Intel, tells us that computing power doubles every two years, on average, at constant cost2. This exponential growth has resulted from continued technoscientific prowess in miniaturization, bringing about the age of micro- and, now, nano-computing with ever-increasing power, and with it the possibility of smartphones and the “Internet of Things.”
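
As a back-of-envelope check on this doubling law, the short Python sketch below compounds the 1970s figure from note 2 over roughly 45 years. The start and end years are illustrative assumptions, not figures from the text.

    # Moore's law as cited above: computing power doubles roughly every
    # two years at constant cost. The starting point (~92,000 instructions
    # per second) is taken from note 2; the horizon is an assumption.
    start_year, end_year = 1975, 2020
    ips_start = 92_000  # instructions per second, 1970s processor

    doublings = (end_year - start_year) / 2
    ips_end = ips_start * 2 ** doublings

    print(f"{doublings:.1f} doublings -> {ips_end:,.0f} instructions/second")
    # 22.5 doublings -> ~5.5e11, i.e. hundreds of billions of
    # instructions per second, consistent with note 2's "billions".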


Coupled with the development of Internet communication protocols and machine virtualization, the digital revolution then made highly and easily scalable supercomputing capabilities available on the cloud. From that point, the exponentially growing flow of high-resolution data3 produced day after day by connected humans and machines could be processed by algorithms.

These conditions finally made possible the flourishing of an old branch of computer science called machine learning,4 in which algorithms automatically extract complex patterns from very large data sets, via either supervised or unsupervised learning.5 The convergence of two branches of machine learning in particular has demonstrated impressive results over the past five years: deep learning6 and reinforcement learning.
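
As a hedged illustration of the two learning regimes named above, the Python sketch below contrasts them on a standard toy dataset using scikit-learn; it is a minimal demonstration, not a description of any system discussed in this article.

    # Unsupervised vs. supervised learning on a small image dataset.
    from sklearn.datasets import load_digits
    from sklearn.cluster import KMeans                    # unsupervised
    from sklearn.linear_model import LogisticRegression   # supervised
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)  # 8x8 images of handwritten digits

    # Unsupervised: group the images into 10 clusters without ever
    # seeing the labels y.
    clusters = KMeans(n_clusters=10, n_init=10).fit_predict(X)

    # Supervised: learn a mapping from images to their labeled digits.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    print("supervised test accuracy:", clf.score(X_te, y_te))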

AI vs. Robotics


To better understand Artificial Intelligence as an interdisciplinary field, it is useful to draw and analyze its boundary with robotics. In both cases, we refer to ‘machines’ (since an algorithm is a robot, hence the shortened word ‘bot’ for conversational computer programs); but while robotics is mostly material in its manifestations, operating at the intersection of mechanical engineering, electrical engineering, and computer science, artificial intelligence is mostly7 immaterial and virtual in its manifestations. To simplify for analytical purposes, one can say that, in an “autonomous machine,” the AI is the intelligence, referring to cognitive functions, while robotics refers to motor functions.

Indeed, the boundary between cognitive and motor functions is porous, since mobility requires sensing and knowing the environment. For example, advances in machine learning have played a crucial role in computer vision. That said, relying on materiality as a differentiating criterion is useful because it carries major industrial consequences affecting the growth potential of autonomous machines: the more complex the motor functions, the slower the growth, and vice versa. The most popular symbols of the convergence between AI and robotics are self-driving cars and humanoid robots.

AI vs. Neurosciences

To hone our understanding of the state of AI today and where it could go in the future, we then need to turn to its relation with the interdisciplinary field of neurosciences. The renaissance of AI since 2011 is mostly attributed to the success of a branch of machine learning called “deep artificial neural networks” (also called deep learning), supported by another branch called “reinforcement learning.” Both branches claim to loosely emulate the way the brain processes information, in the way that they learn through pattern recognition.
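
To make that “loose emulation” concrete, here is a minimal sketch of a single artificial neuron, the building block that deep networks stack by the millions; all numbers are toy values chosen for illustration.

    # An artificial neuron: a weighted sum of inputs passed through a
    # nonlinearity, a drastic simplification of its biological namesake.
    import numpy as np

    def neuron(inputs, weights, bias):
        return np.tanh(np.dot(weights, inputs) + bias)

    x = np.array([0.5, -1.2, 0.3])   # incoming signals
    w = np.array([0.8, 0.1, -0.4])   # learned connection strengths
    print(neuron(x, w, bias=0.1))

    # A deep network stacks many layers of such units; "learning" means
    # adjusting the weights and biases to reduce error on training data.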

It is crucial not to exaggerate the current state of convergence between AI and neurosciences. To date, the extremely complex biochemical processes that run the human brain remain far beyond the reach of science. In short, the human brain largely remains a “black box,” and neuroscience knows how the brain functions mainly by correlating inputs and outputs. As such, there is not much for designers of algorithms to emulate, especially given that machine learning still operates exclusively in the realm of statistics, and does so on silicon-based computer systems that are radically different from biological brains. A more meaningful convergence between the fields of AI and neuroscience is expected to unfold later this century, as we break into the “black box” and come to understand the human brain in greater depth.


Owing to the very different evolutionary trajectories followed by artificial intelligence and our biological brains, two consequential differences should be singled out. First, humans can reliably develop pattern recognition and generalize transferable knowledge from very few occurrences, but in general we struggle to replicate and transfer learning processes across subjects. Machines, on the contrary, require very large data sets8 to achieve pattern recognition, and struggle to generalize knowledge; however, they excel at transferring and replicating pattern recognition at scale once it is achieved. Facial recognition is the best-known example. Second, while autonomous machines that combine the most advanced AI and robotics techniques are still poor at reproducing very basic non-cognitive motor functions mastered by most animals (for example, walking or hand manipulation), they are proving increasingly adept at outperforming humans in a number of complex cognitive functions, for example, image recognition in radiology and computationally intensive tasks.

Artificial ‘Narrow’ Intelligence vs. Artificial ‘General’ Intelligence


The penultimate boundary we need to explore to better delineate and understand what we mean by artificial intelligence is the frontier between Artificial Narrow Intelligence (ANI, also called “weak” AI) and Artificial General Intelligence (AGI, also called “strong” AI). For a majority of experts, AGI refers to an autonomous machine’s ability to perform any intellectual task that a human can perform. This implies generalizing and abstracting learning across various cognitive functions. Transferring learning autonomously and nimbly from one domain to another has so far happened only in embryonic form9.

According to experts, the most advanced artificial intelligence systems available today, such as the famous IBM Watson10 or Google’s AlphaGo11, are still “narrow” (weak), in the sense that they operate strictly within the confines of the scenarios for which they were programmed. Even if they are capable of generalizing pattern recognition, for instance transferring knowledge learned through image recognition into speech recognition12, we are still very far from the versatility of a human mind. This is expected to change with the convergence of machine learning and neurosciences in the coming decades, but experts disagree profoundly over the probability and timeline of the march towards AGI: some say it will never happen; some say it will take one hundred years or more; some say thirty; and some say ten13.
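
The sketch below illustrates the routine, within-domain form of such transfer, in which a network pre-trained on one image task is reused for another by retraining only its final layer. PyTorch/torchvision and the 5-class target task are assumptions made for illustration; the cross-domain transfer (image to speech) mentioned above remains far more exploratory.

    # Transfer learning: reuse generic visual features, retrain the head.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the generic pattern-recognition layers learned on ImageNet.
    for p in model.parameters():
        p.requires_grad = False

    # Replace the classification head for a hypothetical 5-class task.
    model.fc = nn.Linear(model.fc.in_features, 5)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    # ...then train as usual: only the new head's weights are updated.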

Beyond the discord among experts, relying on the frontier between narrow and general artificial intelligence is problematic because of its very benchmark for measurement: human intelligence. Since we still have an imperfect understanding of the complex processes driving the brain and the way human intelligence and consciousness manifest themselves, relying excessively on that boundary to gauge the transformative impact of the rise of AI could be risky. It could expose us to major blind spots, with supposed “advances” masking major socio-economic externalities that we need to anticipate in order to adapt. We recommend further research to delineate that boundary and to map its surroundings, and their evolution, more precisely.

Beyond their disagreements, experts broadly agree on two points. First, the socio-economic impacts of the current rise of ANI will bring about serious consequences, generating new opportunities, new risks, and new challenges. Second, the advent of an AGI later this century would amplify these consequences by at least an order of magnitude. More research is needed to map and understand what these consequences would be, and how they would play out socially and economically.

The unresolved question of consciousness, and speculations over the possibility of an intelligence explosion

The final boundary we need to explore to map the future terrain of AI is that of consciousness. Here, there is a broad consensus among experts: neither the most advanced AI systems currently in existence, nor those expected to be developed in the coming decades, exhibit consciousness. Machines (programs running on connected and sensing computer systems) are not aware of themselves, and this “functionality” may never be possible. But, again, a word of caution: since science is still far from having explained the mysteries of animal sentience and human consciousness, that boundary remains more fragile than it seems.


Finally, one speculative but highly consequential long-term scenario constantly appears in mainstream media and across the expert community: the “technological singularity.” According to this hotly contested scenario, popularized by the inventor, futurist, and now Director of Engineering at Google, Ray Kurzweil, the rise of AI could lead to an “intelligence explosion” as early as 2045. It would result from the emergence of an Artificial Super Intelligence (ASI): a recursively self-improving AI progressing exponentially, which could follow relatively quickly (within a few decades or less) upon the advent of an Artificial General Intelligence (AGI). If this scenario were to unfold, it would naturally carry potentially existential consequences for mankind and intelligent life.14 We recommend nurturing a reasoned debate across the expert community, and society at large, over the possibilities and consequences of an ASI, to enable responsible investment choices and risk management. Framing the conversation in the right way will be critical: here, transparency and moderation will be key.


To be clear, the analysis we will carry out in the remainder of this article excludes the AGI and ASI scenarios. To narrow the definition further for practical analytical purposes, “Artificial Intelligence” will henceforth mean machine-learning algorithms, which combine various techniques (e.g. deep learning) and are associated with sensors and other computer programs and algorithms. These sense,15 comprehend,16 and act17 on the world, learning from experience and adapting over time.
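
The toy Python loop below schematizes this working definition; the environment, the learner, and the reward signal are all illustrative placeholders standing in for the real sensors, inference engines, and actuators of notes 15 to 17.

    import random

    class Environment:
        """Toy world: the agent must guess a hidden bit each step."""
        def __init__(self):
            self.hidden = random.randint(0, 1)
        def observe(self):
            return self.hidden ^ (random.random() < 0.1)  # noisy sensing
        def step(self, decision):
            reward = 1 if decision == self.hidden else 0
            self.hidden = random.randint(0, 1)
            return reward

    class Model:
        """Trivial learner: trust the (noisy) observation."""
        def predict(self, obs):
            return int(obs)
        def update(self, obs, decision, reward):
            pass  # a real system would adjust its parameters here

    def run(env, model, steps=1000):
        total = 0
        for _ in range(steps):
            obs = env.observe()                  # sense (note 15)
            decision = model.predict(obs)        # comprehend (note 16)
            reward = env.step(decision)          # act (note 17)
            model.update(obs, decision, reward)  # learn from experience
            total += reward
        return total / steps

    print("average reward:", run(Environment(), Model()))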

Contemporary dynamics and main players

AI pervasiveness

Unlimited access to supercomputing on the cloud — a market estimated to reach $70 billion in 201518 — and continued growth in big data, which has posted a compound annual growth rate of more than 50 percent since 2010,19 are the two key macro-trends powering the rise of Artificial Intelligence. AI systems are already profoundly changing the way we live, work, and socialize. Already on the market are virtual personal assistants, recommendation engines, self-driving cars, surveillance systems, crop-prediction tools, smart grids, drones, banking and trading systems, and gene-sequencing machines. More and more multinationals are shifting their business models to revolve around data and predictive analytics in order to capture the productivity gains generated by the rise of AI.

This revolution is fueled, on the one hand, by the quest for technological solutions to pressing global challenges, including climate change, growth and development, security, and demography, challenges which increasingly unfold in urban environments. On the other hand, it is spurred by continuing international strategic competition, whereby nation-states fund science and early innovation in pursuit of technological dominance, which private global players then scale up, competing with one another to become the “go-to” platforms. Though the ambiguity of the definitional boundaries of “Artificial Intelligence” constrains the ability to generate a robust classification or ranking of the most advanced countries in the field of AI, capabilities in computer science and Information & Communication Technologies (ICT) can be used as a proxy. Accordingly, the U.S., China, Russia, Japan, South Korea, the U.K., France, Germany, and Israel are emerging as the dominant players in AI. Given their techno-scientific capabilities and their large market size, India and Brazil should also figure in this leading group, even if they have yet to translate potential into reality.

The role of governments

National governments have historically played, and will continue to play, a key role in spurring the rise of AI, through higher-education and research & development budgets for defense, security, healthcare, and science and technology (e.g. computer science, neuroscience, ICT), through infrastructure spending (especially transport, energy, healthcare, and finance), and through pro-innovation policies. AI is increasingly perceived as a source of technological dominance in an information age where the cyber and physical worlds merge into hybrids, and more and more countries have released, or are in the process of releasing, national AI strategies.

In the U.S., where the term Artificial Intelligence was coined and which has been a pioneer in the field since its inception in the 1950s, the Obama Administration led an inter-agency initiative last year on “Preparing for the Future of Artificial Intelligence.”20 This high-level initiative culminated with the release of a “National Artificial Intelligence Research & Development Strategic Plan,”21 as well as two reports.22 Historically, the U.S. Defense Advanced Research Projects Agency (DARPA), and more recently the Intelligence Advanced Research Projects Activity (IARPA), have provided long-term, high-risk investment in AI, playing an instrumental role in most of the field’s techno-scientific breakthroughs. Last year, the U.S. Department of Defense (DoD) unveiled its “Third Offset” strategy23 with a total five-year investment of $18 billion24. To maintain technological dominance, this macro-strategy plans to bring AI and autonomous systems to the forefront of all U.S. digital battle networks and operational, planning and support processes. DoD’s operational goal is to make such processes faster and more efficient. In January 2017, a report published by a group of elite scientists that advises the U.S. Government on sensitive technoscientific matters confirmed the strategic importance of the rise of AI for defense capabilities25.

Meanwhile, the Chinese Government unveiled an ambitious three-year national AI plan in May 2016, formulated jointly by the National Development and Reform Commission, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and the Cyberspace Administration of China. The government envisions creating a $15 billion market by 2018 by investing in research and supporting the development of the Chinese AI techno-industrial base. Anecdotally, the country surpassed the U.S. last year in the number of papers published annually on “deep learning”26, and the rate of increase was remarkably steep, reflecting how quickly China’s research priorities have shifted.


Beyond the U.S. and China, Japan, South Korea,27 France,28 the U.K.,29 and Germany are also in the process of developing specific plans and strategies for AI, robotics, and other complementary sectors.

The platform business

From a business perspective, we seem to be heading towards a global oligopoly in which AI is controlled by some ten U.S. (Google, Apple, Facebook, Amazon, Microsoft, and IBM) and Chinese (Baidu, Alibaba, Tencent, Xiaomi) multinationals.

In this competition played out on the global stage, the key success factor is no longer the length of computer code but the size of databases. As of now, AI needs to see millions of pictures of animals or cars to achieve actionable pattern recognition. Facebook has effectively relied on the nearly ten billion images published every day by its users to continuously improve its visual-recognition algorithms. Similarly, Google DeepMind has relied heavily on YouTube video clips to train its AI image-recognition software. In a way, consumers are used as commodities, training AI systems through their behaviors and interactions.

The efficiency of AI systems has also relied on specific microprocessors, which play an increasing role in cloud IT infrastructure. For example, the training phase of deep neural networks has tended to rely on “Graphics Processing Units” (GPUs), processors initially designed for video games which have grown more powerful over the years30. For the implementation phase, the digital giants tend to develop dedicated processors: Google, for instance, developed the “Tensor Processing Unit” (TPU), while Microsoft has repurposed “Field Programmable Gate Arrays” (FPGAs).
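
A minimal sketch of this division of labor follows, assuming PyTorch as a generic stand-in toolchain (the firms named above use their own in-house stacks): train where a GPU is available, then freeze and export the network for deployment on inference hardware.

    import torch
    import torch.nn as nn

    # Training phase: heavy parallel arithmetic, the GPU's strength.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10)).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(256, 64, device=device)
    y = torch.randint(0, 10, (256,), device=device)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

    # Implementation phase: freeze and export for dedicated inference hardware.
    scripted = torch.jit.script(model.cpu().eval())
    scripted.save("model_for_inference.pt")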

These digital giants are building ecosystems around an “AI tap” that they control, and intense competition is underway to become the “go-to” AI platforms hosting consumers’ and businesses’ data. Selling AI through the “software-as-a-service” (SaaS) business model is the route that Google and IBM appear to have adopted. The start-up landscape is also very active in this area. According to CB Insights, the value of AI mergers & acquisitions (M&A) increased from $160 million in 2012 to over $658 million in 2016, while disclosed funding rose from $589 million to over $5 billion over the same period.31 Nearly 62 percent of the deals in 2016 went to U.S. start-ups, down from 79 percent in 2012,32 followed by U.K., Israeli, Indian, and Canadian start-ups, in that order. The AI market is expected to reach $40 to $70 billion by 2020, depending on definitional boundaries.33

Because machine-learning algorithms require vast amounts of data to achieve efficient pattern recognition, the critical mass of consumer markets appears to be a crucial enabler of the establishment of AI techno-industrial bases, in tandem with technoscientific capabilities.


Notes

1 There is no standardized and globally accepted definition for what AI is. “The choice of the very name “artificial intelligence” is a perfect example: if the mathematician John McCarthy used these words to propose the Dartmouth Summer Research Project – the workshop of summer 1956 that many consider as the kick-off of the discipline – it was as much to set it apart from related research, such as automata theory and cybernetics, as to give it a proper definition […]. There are actually many definitions for artificial intelligence. A first great group of definitions could be called “essentialist”, aiming at defining the end-goal a system has to show to enter the category […]. Besides this – and often complementarily – are the definitions one could call “analytical”, which means they unfold a list of required abilities to create artificial intelligence, in part or in whole. […]” Tom Morisse, “AI’s New New Age”, Fabernovel, February 2017, https://en.fabernovel.com/insights/tech-en/ais-new-new-age. See also U.K. Government Office for Science, “Artificial Intelligence: opportunities and implications for the future of decision-making”, 2016 (page 6), https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/566075/gs-16-19-artificial-intelligence-ai-report.pdf

2 The first processors in the 1970s could carry out about 92,000 instructions per second. The processor in an average modern smartphone can carry out billions of instructions per second.

3 IBM estimates that 90 percent of the world’s data has been created in the last two years. Looking at various application platforms, experts estimate that Spotify holds 10 Petabytes of data in storage (1 Petabyte = 1 million Gigabytes); eBay 90 PB; Facebook 300 PB; and Google 15,000 PB. For reference, the human brain has an estimated storage capacity of 2.5 Petabytes. https://royalsociety.org/topics-policy/projects/machine-learning/machine-learning-infographic/

4 Short explanatory infographic from the Royal Society: https://royalsociety.org/topics-policy/projects/machine-learning/machine-learning-infographic/

5 “There are many different kinds of algorithm used in machine learning. The key distinction between them is whether their learning is ‘unsupervised’ or ‘supervised’. Unsupervised learning presents a learning algorithm with an unlabeled set of data – that is, with no ‘right’ or ‘wrong’ answers – and asks it to find structure in the data, perhaps by clustering elements together – for example, examining a batch of photographs of faces and learning how to say how many different people there are. Google’s News service uses this technique to group similar news stories together, as do researchers in genomics looking for differences in the degree to which a gene might be expressed in a given population, or marketers segmenting a target audience. Supervised learning involves using a labelled data set to train a model, which can then be used to classify or sort a new, unseen set of data (for example, learning how to spot a particular person in a batch of photographs). This is useful for identifying elements in data (perhaps key phrases or physical attributes), predicting likely outcomes, or spotting anomalies and outliers. Essentially this approach presents the computer with a set of ‘right answers’ and asks it to find more of the same. Deep Learning is a form of supervised learning.” U.K. Government Office for Science, “Artificial Intelligence: opportunities and implications for the future of decision-making”, 2016 (page 6).

6 Short explanatory video here from the Royal Society:
https://www.youtube.com/watch?v=bHvf7Tagt18

7 AI refers to a program running on a computer, either embedded or on the cloud. It thus carries a very concrete material manifestation which we tend to forget at times.

8 As a matter of comparison, a child needs to be exposed to five to ten images of an elephant to be able to recognize an ‘elephant’, while a deep neural network requires over a million images.

9 See the emerging field of “transfer learning”, perceived by an increasing number of experts, including Google DeepMind, as a potential path to accelerated progress in the coming decades. See for example https://hackernoon.com/transfer-learning-and-the-rise-of-collaborative-artificial-intelligence-41f9e2950657#.n5aboetnm and https://medium.com/@thoszymkowiak/deepmind-just-published-a-mind-blowing-paper-pathnet-f72b1ed38d46#.6fnivpish

10 See https://www.ibm.com/cognitive/

11 See https://deepmind.com/research/alphago/

12 See https://hackernoon.com/transfer-learning-and-the-rise-of-collaborative-artificial-intelligence-41f9e2950657#.n5aboetnm

13 A detailed study of AI timeline surveys carried out by AI Impacts in 2015 concluded: “If we collapse a few slightly different meanings of ‘human-level AI’: median estimates for when there will be a 10% chance of human-level AI are all in the 2020s (from seven surveys); median estimates for when there will be a 50% chance of human-level AI range between 2035 and 2050 (from seven surveys); of three surveys in recent decades asking for predictions but not probabilities, two produced median estimates of when human-level AI will arrive in the 2050s, and one in 2085. One small, informal survey asking about how far we have come rather than how far we have to go implies over a century until human-level AI, at odds with the other surveys. Participants appear to mostly be experts in AI or related areas, but with a large contingent of others. Several groups of survey participants seem likely to over-represent people who are especially optimistic about human-level AI being achieved soon.” See http://aiimpacts.org/ai-timeline-surveys/

14 For more information, see Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.

15 Computer vision and audio processing, for example, are able to actively perceive the world around them by acquiring and processing images, sounds and speech. Facial and speech recognition are two typical applications.

16 Natural language processing and inference engines can enable analysis of the information collected. Language translation is a typical application.

17 An AI system can take cognitive action like decision-making (e.g. credit application or tumor diagnostic) or undertake actions in the physical world (e.g. from assisted braking to full auto-pilot in cars).

18 https://www.accenture.com/us-en/_acnmedia/PDF-33/Accenture-Why-AI-is-the-Future-of-Growth.pdf

19 https://www.accenture.com/us-en/_acnmedia/PDF-33/Accenture-Why-AI-is-the-Future-of-Growth.pdf

20 https://obamawhitehouse.archives.gov/blog/2016/05/03/preparing-future-artificial-intelligence

21 https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf

22 Executive Office of the U.S. President, “Preparing for the Future of Artificial Intelligence”, October 2016, and “Artificial Intelligence, Automation and the Economy”, December 2016.

23 DEPSECDEF, http://www.defense.gov/News/Speeches/Speech-View/Article/606641/the-third-us-offset-strategyand-its-implications-for-partners-and-allies. The “First Offset Strategy” refers to the development of nuclear weapons, the “Second Offset Strategy” to precision guided munitions.

24 Mackenzie Eaglen, “What is the Third Offset Strategy”, Real Clear Defense, February 2016. Note: this $18 billion five-year investment goes far beyond Artificial Intelligence.
http://www.realcleardefense.com/articles/2016/02/16/what_is_the_third_offset_strategy_109034.html

25 JASON, The MITRE Corporation, Report on Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, January 2017. https://fas.org/irp/agency/dod/jason/ai-dod.pdf

26 https://www.washingtonpost.com/news/the-switch/wp/2016/10/13/china-has-now-eclipsed-us-in-ai-research/

27 The South Korean government announced in March 2016 an $863 million five-year R&D investment in AI. http://www.nature.com/news/south-korea-trumpets-860-million-ai-fund-after-alphago-shock-1.19595

28 France’s government announced in January 2017 that it is working on a National AI Strategy to be published in March 2017. http://www.gouvernement.fr/en/franceia-the-national-artificial-intelligence-strategy-is-underway

29 The U.K. Government announced in January 2017 that AI would be at the center of its post-Brexit “Modern Industrial Strategy”. http://www.cbronline.com/news/verticals/central-government/modern-industrial-strategy-theresa-may-bets-ai-robotics-5g-uks-long-term-future/. See also U.K. Government Office for Science, “Artificial Intelligence: opportunities and implications for the future of decision-making”, 2016 (page 6),
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/566075/gs-16-19-artificial-intelligence-ai-report.pdf

30 http://www.nvidia.com/object/what-is-gpu-computing.html. See also JASON, Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, op. cit. (pp. 7 & 15).

31 CB Insights, “The 2016 AI Recap: Startups See Record High In Deals And Funding”, January 2017, https://www.cbinsights.com/blog/artificial-intelligence-startup-funding/ . Important note: these figures don’t include the Chinese market.

32 Ibid.

33 http://techemergence.com/valuing-the-artificial-intelligence-market-2016-and-beyond/ ; and https://www.bofaml.com/content/dam/boamlimages/documents/PDFs/robotics_and_ai_condensed_primer.pdf


References

Bibliographical reference

Nicolas Miailhe and Cyrus Hodes, “The Third Age of Artificial Intelligence”, Field Actions Science Reports, Special Issue 17 | 2017, 6-11.

Electronic reference

Nicolas Miailhe and Cyrus Hodes, “The Third Age of Artificial Intelligence”, Field Actions Science Reports [Online], Special Issue 17 | 2017, online since 31 December 2017. URL: http://journals.openedition.org/factsreports/4383


About the authors

Nicolas Miailhe

Nicolas Miailhe is the co-founder and President of “The Future Society at Harvard Kennedy School” under which he also co-founded and co-leads the “AI Initiative”. A recognized strategist, social entrepreneur, and thought-leader, he advises multinationals, governments and international organizations. Nicolas is a Senior Visiting Research Fellow with the Program on Science, Technology and Society (STS) at HKS. His work centers on the governance of emerging technologies. He also specializes in urban innovation and civic engagement. Nicolas has ten years of professional experience in emerging markets such as India, working at the nexus of innovation, high technology, government, industry and civil society.

Cyrus Hodes

Cyrus Hodes is passionate about drastically disruptive technologies, such as Artificial Intelligence, robotics, nanotech, biotech, genetics, IT and cognitive sciences as well as their cross-pollination and impacts on society. He is currently leading a robotics (Autonomous Guided Vehicles) startup and a biotech venture. In 2015, Cyrus co-founded the AI Initiative under The Future Society to help shape the governance of AI. Cyrus is a member of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.


Copyright

CC-BY-4.0

The text only may be used under licence CC BY 4.0. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
