
GIS in the Era of Big Data

Michael F. Goodchild

Abstract

Big Data can be defined by volume, velocity, and variety. Techniques for dealing with volume have a long history in geography, but velocity and variety raise new issues. A rich background of geographic knowledge will always be essential to effective use of GIS. Spatial prediction offers new and unique opportunities, while the consumerization of GIS raises many technical, social, and educational questions.


Full text

Introduction

Geographic information systems (GIS) first became widely available to geographers in the 1980s, and have since become an indispensable part of teaching and research. They are designed to capture information about the planet; to make such information easy to visualize, analyze, and share; and to support a wide range of modeling and theory-building (Longley et al., 2015). Big Data (capitalized here in order to emphasize that more is involved than simply large volumes of data) is a more recent trend that has become popular in both science and commerce. In this short paper I discuss some of the issues that arise at the juncture of GIS and Big Data: on what Big Data can mean for our understanding of Earth as the home of humanity, that is, geography; on what new developments will be needed in GIS; and on how Big Data will change the world of quantitative geography in fundamental ways.

What is Big Data?

It is customary to define Big Data in terms of the three Vs: volume, velocity, and variety. Big Data involves unprecedented volumes of data, such as are now becoming available from sensors, commercial transactions, social media, online publications, and so forth. We are said to be experiencing an “exaflood”, that is, data volumes on the order of 10^18 bytes, and to be “drinking from a firehose”. The scientific community has already established internationally standard prefixes for 10^21 (zetta) and 10^24 (yotta), and it seems likely that additional ones will be needed soon. Data are also becoming available more rapidly, and we are already familiar with real-time data on aircraft positions and traffic congestion in smart-phone apps. Big Data also has variety, given the multitude of sources that may be available on a single topic and the difficulty of integrating them into a single result about which one can be confident.

If Big Data is only about volume then geographers in particular might reasonably be unimpressed. Landsat, launched in the early 1970s, was able to produce far more data than could easily be stored, let alone analyzed, and special high-capacity tape drives had to be constructed. Even today, despite vast increases in computing speed and storage capacity, it is still true that our capacity to acquire geographic information is orders of magnitude greater than our capacity to examine, visualize, analyze, or make sense of it. Instead, geographers have adopted a variety of techniques for dealing with the potentially infinite complexity of geographic information. First, we generalize and abstract, typically by ignoring spatial detail or by creating regions that we assume to be uniform. Second, we divide and conquer, partitioning the world into manageable pieces. In the case of Landsat Thematic Mapper, for example, much research is conducted on single scenes of approximately 3000 by 3000 pixels; such an analysis would need to be repeated roughly 50,000 times to cover the entire planet.
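The order of magnitude of that last figure can be checked with a back-of-the-envelope calculation. The short Python sketch below assumes a nominal 30 m ground resolution for a 3000-by-3000-pixel scene; actual Landsat scenes are somewhat larger, so the count is only indicative.

```python
# Rough estimate of how many single-scene analyses would cover the planet.
# The 30 m pixel size is an assumption made here for illustration only.
scene_pixels = 3000                                   # pixels on one side of a scene
pixel_size_km = 0.03                                  # assumed 30 m ground resolution
scene_area_km2 = (scene_pixels * pixel_size_km) ** 2  # about 8,100 km^2 per scene

earth_surface_km2 = 5.1e8                             # total surface area of the Earth
scenes_needed = earth_surface_km2 / scene_area_km2

print(f"Area per scene: {scene_area_km2:,.0f} km^2")
print(f"Scenes to cover the planet: {scenes_needed:,.0f}")  # on the order of tens of thousands
```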

Divide and conquer has two major disadvantages that are potentially addressed by greater computing capacity. Larger chunks, even the entire planet, can be analyzed at once, avoiding the hazards of generalizing from research on a single scene, or even a well-chosen sample of scenes. More subtly, the long-distance interactions that must be ignored when analyzing a single scene can now be modeled. In atmospheric science, for example, it becomes possible to model the teleconnections that exist over long distances and play an important role in weather patterns, and similarly in economic geography where long-distance interactions are now readily analyzed. Thus increased computing capacity potentially offers the ability to analyze in greater detail or finer scale, and the capacity to model some of the interactions that we have had to assume away or ignore in the past.

Velocity raises questions that are perhaps more fundamental in their potential impact on geographical science. In the past much data production has followed its own timetable. Census data became available on a regular basis, in many countries every ten years. Remotely sensed data had to be downlinked, processed, and distributed, in a process that might take weeks. Yet today data are available from satellites, ground-based sensors, and social media in near-real time, offering the potential of almost immediate discoveries and predictions. But science has always preferred discoveries that are true everywhere, at all times. It would not be interesting if the special theory of relativity were true only on Tuesdays, for example. Moreover science has never placed great emphasis on prediction, seeing it as possibly useful but as a by-product of the more highly valued discovery and theory-building. Traditional science, governed by proposal-writing, peer review, data collection, analysis, and eventual publication, has thus mostly proceeded at its own somewhat leisurely pace, except when motivated by emergencies such as the Zika virus, or the Manhattan Project of World War II. It seems that the impact of velocity on geographical science will be a shift of emphasis toward problem-solving and prediction, and away from fundamental discovery – towards what would arguably be a more pragmatic and useful geographical science.

The third V, variety, is perhaps the most disruptive and promising of the three. Traditionally geographers have relied on single, authoritative sources of data: human geographers on the census, physical geographers on remote sensing and digital elevation models. Collecting data has often been costly, so the focus of publicly available data has generally been on data types that are least likely to change, and thus be useful for as long as possible. Of the seven data types deemed foundational by the US National Spatial Data Infrastructure (https://www.fgdc.gov/framework), for example, all are slow to change (topography, streets, rivers, land ownership, geodetic control, administrative boundaries) except one, orthoimagery. But Big Data offers entirely new data sources that have never been available before, and the average citizen is now able to make entirely new kinds of maps, based on data collected and compiled using cheap GPS-enabled devices, free software, and free basemaps.

The obvious problem with many of these new data sources is quality, or the lack of it. It has often been suggested that a fourth V might be added, standing for validity or veracity, but unfortunately, and unlike the first three, this fourth V would be a property that Big Data lacks rather than possesses. Big Data are often undocumented, lacking in metadata, and without clearly identified provenance.

An example might be helpful at this point. The current consensus regarding the elevation of Mt Everest is 8848 m above sea level. This figure is based on an agreed definition of sea level and of the meter, but the exact process by which it was determined is not readily available; instead, we trust the authorities that provided the figure, and believe they are appropriately qualified. Now consider a different figure, the time it will take me to drive from my house to Seattle airport if I leave now. I have numerous sources of estimates: Google, my own experience, current weather conditions, the GPS in my car, Waze. None are well documented, and all have associated and unquantified uncertainties. How can I obtain a reliable figure from such a morass of unreliable information?

If we are willing to make the reasonable assumption that a combination of sources will be more reliable than any single source, then the issue of variety boils down to one of integration or synthesis: how to emulate the process by which experts synthesized the available information on the elevation of Mt Everest. But experts are slow and expensive, compared to a version of synthesis that can be automated, using techniques of machine learning and artificial intelligence.

To date the literature on synthesis of geographic information is very sparse. Fusion of remote-sensing sources is well recognized and frequently used, and techniques have been developed for the conflation of various kinds of vector datasets, notably street centerline data (e.g., Li and Goodchild, 2011). But what is needed is something much more powerful and comprehensive. To return to Mt Everest, what is needed is a set of techniques for weighting and combining all of the relevant information from an Internet search – spot GPS heights, photogrammetric sources, barometric measurements, and the early visual estimates from the plains of India – into a single estimate with an associated level of uncertainty. Sui (2009) has argued that synthesis is “the new analysis”, a new priority for quantitative geography in the era of Big Data.
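One elementary statistical ingredient of such a synthesis is the precision-weighted (inverse-variance) mean, which combines independent estimates into a single value with an associated uncertainty. The Python sketch below is only a minimal illustration of that weighting idea, not the comprehensive technique the paragraph calls for, and the estimates and standard deviations in it are invented.

```python
import numpy as np

# Hypothetical, independent estimates of an elevation (metres), each with a
# guessed standard deviation expressing how much the source is trusted.
estimates = np.array([8848.0, 8850.5, 8844.0, 8848.9])  # e.g. survey, GPS, photogrammetry, barometric
std_devs  = np.array([   0.5,    2.0,   10.0,    1.5])

weights = 1.0 / std_devs**2                              # inverse-variance weights
combined = np.sum(weights * estimates) / np.sum(weights) # precision-weighted mean
combined_sd = np.sqrt(1.0 / np.sum(weights))             # uncertainty of the combined estimate

print(f"Combined estimate: {combined:.2f} m +/- {combined_sd:.2f} m")
```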

The Promise of Data Science

Discussions of Big Data often make reference to the Fourth Paradigm (Hey, Tansley, and Tolle, 2009), a much-heralded era in which science is driven by data, prediction is central, and theory is no longer science’s most prized objective. Instead, we are urged to “let the data speak for themselves”. Miller and I (2014) discuss the prospects for a geographical science driven by data.

While there are certainly kinds of data that perfectly represent the truth – my name, age, date of birth, street address, etc. – all geographic information is subject to uncertainty. Coordinates, which all GIS data must possess, are established by measurement, and like all measurements they inherit the uncertainties of the measurement process. Many kinds of geographic information omit fine-scale detail, or variation within assumed-uniform areas. Many kinds, including data on soils, land use, or land cover, classify land according to class definitions that are inherently vague. More than three decades of research have identified the sources of uncertainty, devised metrics, and developed models that allow uncertainties to be simulated and their effects determined.
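A common way to make that last point concrete, in the spirit of the simulation models mentioned above, is Monte Carlo perturbation of coordinates. The sketch below assumes independent Gaussian errors on each vertex of a hypothetical parcel (a simplification; realistic error models also include spatial correlation between vertices) and measures the effect on the computed area.

```python
import numpy as np

rng = np.random.default_rng(42)

def polygon_area(xy):
    """Shoelace formula for a simple polygon given as an (n, 2) array of vertices."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# A hypothetical rectangular parcel, coordinates in metres.
parcel = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 60.0], [0.0, 60.0]])

sigma = 2.0        # assumed standard error of each coordinate, in metres
n_sims = 10_000

# Perturb every vertex independently and recompute the area each time.
areas = np.array([
    polygon_area(parcel + rng.normal(0.0, sigma, parcel.shape))
    for _ in range(n_sims)
])

print(f"Nominal area: {polygon_area(parcel):,.0f} m^2")
print(f"Simulated mean: {areas.mean():,.0f} m^2, std: {areas.std():,.0f} m^2")
```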

It follows, unfortunately, that there are always differences between a geographic database and the real world it purports to represent. If we let the data speak for themselves, we must be aware at the same time that they are not necessarily speaking for geography. Thus caution is always needed in interpreting the results of an analysis of geographic information, and that sense of caution must be present in the mind of the analyst, since the GIS has no way of knowing the magnitude of the differences or their impacts. GIS is best seen as a way of augmenting the geographic skills of the analyst, rather than replacing them. GIS is thus intimately linked to geography, and very risky when placed in the hands of someone with no understanding of the geographic world.

Moreover the problem solved by the GIS is often subtly different from the problem as it exists in the mind of the user, introducing another reason for caution. For example, the query “Find me the least-cost path from A to B over this cost surface” will actually be executed by searching for a set of moves between neighboring cells in a raster. This form of analysis, often termed a spread function, limits the moves to each cell’s eight immediate neighbors. As a result, if the function is executed on a uniform-cost surface, the lines of equal cost from the origin A will not be the circles that the user might naïvely expect, but octagons aligned with the raster. Yet the octagons are entirely artifacts of the algorithm, and have no meaning in the real world.
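The effect is easy to reproduce. The sketch below computes accumulated cost over a uniform raster with a standard shortest-path search, allowing moves to the eight neighbors and charging diagonal steps the square root of two (one common convention; GIS implementations differ in detail), then prints the cells lying near a chosen iso-cost value. The band that appears is an octagon aligned with the grid, not a circle.

```python
import heapq
import math

def cost_distance(n, origin):
    """Accumulated cost from `origin` over an n x n uniform-cost raster,
    using moves to the eight neighboring cells (diagonal steps cost sqrt(2))."""
    dist = [[float("inf")] * n for _ in range(n)]
    dist[origin[0]][origin[1]] = 0.0
    pq = [(0.0, origin)]
    moves = [(dr, dc, math.hypot(dr, dc))
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r][c]:
            continue                      # stale queue entry
        for dr, dc, step in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and d + step < dist[nr][nc]:
                dist[nr][nc] = d + step
                heapq.heappush(pq, (d + step, (nr, nc)))
    return dist

n = 41
dist = cost_distance(n, (n // 2, n // 2))

# Mark cells whose accumulated cost lies near 15 units: the band is an
# octagon aligned with the raster, not the circle a user might expect.
for row in dist:
    print("".join("#" if abs(d - 15) <= 0.75 else "." for d in row))
```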

Spatial Prediction

As noted earlier, much of the excitement over Big Data, especially in the commercial world, stems from many well-publicized successes in prediction. What might be the role of prediction in a GIS context? As the word’s etymology suggests, prediction is closely bound to the temporal dimension, so it is not immediately obvious that it has an equivalent in the spatial world of GIS. A simple expedient is to suggest that Big Data has a role to play in what we might term spatial prediction, or the prediction of where rather than when. In keeping with current practice in the GIS literature, we assume that spatial may also imply temporal, and hence spatial prediction may be prediction of both where and when.

Spatial prediction has some history in the GIS literature, for example in the use of weights-of-evidence to predict significant gold deposits (Bonham-Carter, 1994) or likely sites of Mayan ruins in the Yucatan Peninsula (Ford, Clarke, and Raines, 2009). But as we move into the Big Data era it seems likely that spatial prediction will play a more important role, in answering questions such as “Where will the next major outbreaks of flu occur?”, “What is the estimated value of a commercial property at this location?”, or “Where will this development have the least environmental impact?” Many forms of GIS analysis have been used to answer such questions in the past, for example by combining various layers of data into suitability scores. But much more could be done to develop tools and techniques that assemble the appropriate data, calibrate suitable models, and return answers.
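As an illustration of the last point, the sketch below performs a simple weighted overlay of the kind the paragraph mentions. The layers, their rescaling, and the weights are all invented for the example; it is not a calibrated model, only the mechanics of combining layers into a suitability score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical raster layers on a common grid, filled with random values
# purely for illustration: slope (degrees), distance to roads (km), and a
# land-cover score already expressed on a 0-1 scale.
slope = rng.uniform(0, 45, size=(200, 200))
dist_to_road = rng.uniform(0, 10, size=(200, 200))
landcover_score = rng.uniform(0, 1, size=(200, 200))

def rescale(layer, invert=False):
    """Min-max rescale a layer to 0-1; invert when small values are 'better'."""
    scaled = (layer - layer.min()) / (layer.max() - layer.min())
    return 1.0 - scaled if invert else scaled

# Invented weights expressing relative importance; they sum to 1.
weights = {"slope": 0.5, "road": 0.3, "landcover": 0.2}

suitability = (weights["slope"] * rescale(slope, invert=True)          # gentler slopes preferred
               + weights["road"] * rescale(dist_to_road, invert=True)  # closer to roads preferred
               + weights["landcover"] * rescale(landcover_score))

print("Most suitable cell (row, col):",
      np.unravel_index(suitability.argmax(), suitability.shape))
```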

The Consumerization of GIS

Many of the new sources of data that are fuelling interest in Big Data originate with citizens, through social media and processes of crowdsourcing. At the same time many GIS tools that were previously the exclusive domain of professionals, including tools for wayfinding, map-making, and locating points of interest, are being eagerly adopted by the general public. This is the world of what Turner has termed neogeography (Turner, 2006), a reorientation of GIS and mapping and a blurring of the distinction between amateur and professional. GIS tools are being made available as smart-phone apps, ported to the Cloud, and made accessible through user interfaces that are much simpler and less demanding than in the past.

Yet many aspects of GIS, including the use of coordinates to define location, are sharply distinct from the ways in which humans learn and reason about space. The consumerization of GIS is placing new emphasis on named places, as the basis for knowledge about the world and its communication between individuals. Recently there has been much interest in the concept of a platial technology in which named places constitute the basic elements of knowledge instead of coordinates. Clearly many tasks become more difficult, including estimation of distance and direction. But other tasks become much simpler, especially the sharing of an individual’s knowledge and the creation of sketch-maps for guidance.

Conclusion

My purpose in this short paper has been to explore the juncture of Big Data and GIS, and to point to some of the many ways in which the emergence of Big Data is stimulating new thinking about GIS, pointing to new research directions, and creating exciting opportunities for new tools and products. Although the perspective here has been largely technical in nature, Big Data raises serious questions of ethics, especially over privacy, and especially when data are geographically enabled. There are also many interesting issues for educators regarding curriculum and the all-important question of what the average citizen will need to know in order to survive and flourish in a world of Big Data.


Bibliographie

Bonham-Carter, G., 1994, Geographic Information Systems for Geoscientists: Modelling with GIS, New York: Pergamon.

Ford, A., K.C. Clarke, G. Raines, 2009, "Modeling settlement patterns of the Late Classic Maya civilization with Bayesian methods and geographic information systems", Annals of the Association of American Geographers, Vol.99, No.3, 1–25.

Hey, A., S. Tansley, and K. Tolle, 2009, The Fourth Paradigm: Data-Intensive Scientific Discovery, Redmond, WA: Microsoft Research.

Li, L. and M.F. Goodchild, 2011, "An optimisation model for linear feature matching in geographical data conflation", International Journal of Image and Data Fusion, Vol.2, No.4, 309–328.

Longley, P.A., M.F. Goodchild, D.J. Maguire, and D.W. Rhind, 2015, Geographic Information Science and Systems, Hoboken, NJ: Wiley.

Miller, H.J. and M.F. Goodchild, 2014, "Data-driven geography", GeoJournal, Vol.80, No.4, 449-461. DOI: 10.1007/s10708-014-9602-6.

Sui, D.Z., 2009, "Mashup and the spirit of GIS and geography", GeoWorld No.12, 15-17.

Turner, A., 2006, Introduction to Neogeography, Sebastopol, CA: O’Reilly.


To cite this article

Electronic reference

Michael F. Goodchild, "GIS in the Era of Big Data", Cybergeo: European Journal of Geography [Online], 1996-2016, online since 25 April 2016, accessed 12 December 2024. URL: http://journals.openedition.org/cybergeo/27647


Author

Michael F. Goodchild

Professor of Geography Emeritus
University of California, Santa Barbara, CA 93106-4060, USA
good@geog.ucsb.edu


Copyright

CC-BY-4.0

The text alone may be used under the CC BY 4.0 license. All other elements (illustrations, imported supplementary files) are "All rights reserved", unless otherwise stated.
