1Environmental and social problems causing concern at a global scale have resulted in the development of a broad range of sustainability standards. Although not exclusively ‘private’, these have for the most part been developed by a range of private actors, especially within the NGO and business sectors. Many factors drive the uptake of certification schemes: many consumers now share stronger post-materialistic concerns (Inglehart, 1997); firms seek to build legitimacy and supply chain power (Bernstein & Cashore, 2007); upstream producers seek preferable prices and markets for their products (Jaffee, 2007; Neilson & Pritchard, 2009); NGOs seek funding, publicity and influence (Schwesinger Berlie, 2010); and a whole range of actors attempt to reduce their risk and exposure as they find themselves within uncertain socio-ecological and socio-economic systems (Beck, 1992; Giddens, 2002). Indeed, over 450 ‘ecolabels’ across 25 industry sectors are now recorded by the Ecolabel Index1, a global directory of sustainability labels.
2This field has a new and developing lexicon, which benefits from clarification. This paper follows Matus (2010: 80). A standard lists ‘specifications and/or criteria for the manufacture, use, and/or attributes of a product, process, or service’. Certification is the ‘process, often performed by a third party, of verifying that a product, process or service adheres to a given set of standards and/or criteria’. Labelling is the ‘method of providing information on the attributes, often unobservable, for a product, process or service’. While this paper is primarily concerned with agricultural commodity certification, the terms and discussion are more widely generalisable.
3Currently, global sustainability standards largely consist of technology-based indicators. Technology-based standards prescribe certain technology—“knowledge of how to fulfil certain human purposes in a specifiable and reproducible way” (Brooks, 1980: 66). The standard represents a hypothesis—the technology is expected to promote certain desired outcomes. These hypotheses are often drawn from ‘best practices’ and guidance. Their expected consequences stem from evidence ranging from anecdotes to large-scale randomised control trials.
4There is increasing interest in using performance-based metric standards in place of technology-based indicators within the context of sustainability. These attempt to measure outcomes directly, rather than proxying them with practices. Bonsucro, one of the first global performance-based metric standards, will be discussed below. Other initiatives, such as the Sustainable Apparel Coalition in the clothing sector, are also developing sustainability metrics, although these have yet to crystallise into standards.
5What are the benefits and pitfalls of performance-based metric standards in the field of sustainability? First, we present the case of Bonsucro as an emerging performance-based standard. Then, we draw on diverse strands of literature to point to five fields in which metric standards are promising: flexibility in application, provision of information, creating dynamic standards, enabling adaptive management, and harmonisation of policy instruments.
6Bonsucro, initially the Better Sugarcane Initiative, is a certifier and standard-setter for sugarcane and derivative products. Founded in 2008, it awards certificates to mills that meet its Production Standard, with the agricultural systems and lands that supply the raw cane falling under the same certificate. Bonsucro certifies a total of 4.08% of the world’s surface sugarcane production.2 The secretariat is based in London, given the time zones spanned by sugarcane-producing countries, with additional ground staff in Brazil. The secretariat organises the training and accreditation of recognised auditing organisations. Certification is also required for those who handle certified produce along the supply chain, called ‘chain of custody’ certification.
7Bonsucro is primarily a performance-based certification, requiring facilities seeking certification to input 237 data points about their activities. Most of these are metric, although some of them are ‘Y/N’ inputs familiar from technology-based standards. Some Y/N questions are technology-based (e.g. ‘Sulfitation Process Used?’), while others cover less quantifiable issues (e.g. ‘The right to use the land and water can be demonstrated and is not legitimately contested by local communities with demonstrable rights’; ‘Availability of sufficient drinking water to each worker present in the mill’). The breakdown of metric to non-metric data points, as well as the general structure of the certification process, is shown in Figure 1, along with (non-exhaustive) examples of some of the points in each section.
8Compliance with the standard is determined using a calculator built with Microsoft Excel that is distributed to mills seeking certification. This calculator takes the provided data points, which are independently audited for veracity, and enters them into formulae to calculate 82 criteria. Certification is awarded if the 16 core criteria are met, in addition to 80% of the total individual criteria.
Figure 1. The structure of input data and indicators in the Bonsucro Production Standard
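To make the decision logic concrete, the following is a minimal sketch, not the official Bonsucro Calculator (whose formulae live in the distributed Excel workbook): it assumes the 82 criteria have already been evaluated from the audited data points and applies the certification rule described above, namely that all core criteria and at least 80% of all criteria must be met. The criterion names are hypothetical.

```python
# Illustrative sketch only (not the official Bonsucro Calculator): the
# certification decision rule described in the text, assuming criteria
# results have already been computed from the audited data points.

def certification_decision(criteria: dict[str, bool], core: set[str],
                           pass_share: float = 0.80) -> bool:
    """Award certification if every core criterion is met and at least
    `pass_share` of all criteria (core and non-core) are met."""
    if not all(criteria[name] for name in core):
        return False
    met = sum(criteria.values())
    return met / len(criteria) >= pass_share

# Hypothetical example with three criteria, one of them core.
example = {"water_use_per_tonne_ok": True,
           "ghg_per_tonne_ok": False,
           "land_rights_uncontested": True}
print(certification_decision(example, core={"land_rights_uncontested"}))
```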
9Technology-based standards confer a number of benefits. Many poor outcomes do indeed result from particular common and unremedied practices, and simply proposing and implementing better ones can be an easy step to a desirable socio-ecological situation. Training auditors and outreach staff to take a technology-based view—looking at and advising on practices on the ground—is often cheaper than examining outcomes, which can be methodologically complex. Many facilities seeking certification are also more comfortable with a technology-based view of the world, and find intangible sustainability impacts difficult to understand, explain and integrate.
10However, technology-based standards also generate their own issues. Their proposed causal mechanisms, which may be more complex than anticipated, may not hold in different contexts. Top-down command-and-control measures can result in unforeseen consequences for welfare and ecosystems, as rigid measures can restrict the natural variability of socio-ecological systems that helps generate resilience against shocks (Holling & Meffe, 1996). Practices that are only tested or developed in certain contexts may have different consequences elsewhere. Limited generalisability makes the strength, reliability and even the directionality of outcomes unclear in the heterogeneous contexts usually faced by global standards. And where practices are tested, they tend to be considered in isolation rather than in the synergistic combinations found in the real world.
11Technological prescription reduces firms’ flexibility to select the least burdensome method to achieve a desired aim in their particular situation (Gunningham, 1996). The imposition of given approaches can cause resentment by those unable to adapt them to their needs (Bardach & Kagan, 1982). Moreover, prescription is difficult to reconcile with the local co-creation of practices. Local co-creation is considered important for many normative and instrumental reasons: key understandings of complex socio-ecological systems are often embedded in ‘local knowledge’ (Berkes & Folke, 2002; Gadgil et al., 1998; Gadgil et al., 2003; Lansing & Kremer, 1993), natural resource management has a significant cultural dimension (Ostrom & Nagendra, 2006), and complex sustainability problems appear to necessitate integrating society in a ‘transdisciplinary’ mode (Lang et al., 2012).
12Technology-based standards innovate only as fast as the standard itself is updated, while performance-based standards innovate as fast as those seeking certification can innovate. Prescribed practices exhibit lags before incorporating innovation, as standards have to be renegotiated and reformed, which is both a technical and a political process. In contrast, performance-based standards give producers the flexibility to use the best-fitting technology or practice they have available (Ribaudo & Caswell, 1999). It does not matter how the targets are achieved, as long as they are achieved. Yet it can also be argued that while performance-based standards provide the ‘pull’ to achieve outcomes, they do not provide the positive ‘push’. Innovative practices are not adopted simply because they become understood or available: technology-based standards may produce the awareness or capacity to utilise them, while performance-based standards shift awareness- and capacity-building onto the shoulders of the body being certified.
13Supporting the flexibility of performance-based standards is not an easy task, and if not done carefully it could even undermine the benefits: the standard-setting body is absolved of the requirement to establish stable and globally relevant causal pathways between practice and outcome, but the new-found flexibility this gives facilities still needs to be managed and supported. Innovation is thought to be fostered by shared cognitive spaces (Nonaka, 1994) and inter-organisational networks (Powell & Grodal, 2006). To ask facilities lacking these to innovate and apply solutions without guidance could be problematic. In order to generate and understand locally relevant sustainable practices, facilities appear to require networks of knowledge and expertise connecting them to other regional practitioners and experts (Carolan, 2006). On the other hand, if guidance is provided, these same facilities may fall back on it heavily (Coglianese et al., 2003; Gunningham, 1996). In the absence of such knowledge production and dissemination, the guidelines requested may come to act like binding rules, removing the benefits of performance-based regulation (Bardach & Kagan, 1982).
- 3 It is important not to confuse a standard with a certifier. Rainforest Alliance, for example, has d (...)
14The required technical knowledge for the enforcement of a performance-based standard makes spanning several sectors difficult. In agriculture for example, single-commodity initiatives are able to cover all aspects of production to a much greater level of detail than multi-crop standards. It is hard to imagine a single standard that could set production performance targets for multiple crops in different climates in countries with different production realities.3 In this sense, the flexibility of a performance-based standard does not extend beyond a single good or service. In the case of expansion of performance-based standards, a key challenge will involve attempting to ensure that standards across sectors are coherent, similarly rigorous, and complementary, while recognising their necessary heterogeneity. This will entail combining numerous views, values and approaches. Given that producing standards and governance for a single sector requires a broad base of supporters and constant consultation with multiple stakeholders, recognising dispersed expertise, the necessity of local adaptation, openness to new knowledge and innovation (Abbott & Snidal, 2009), the challenge of ‘orchestrating’ these standards in inclusive and well-informed ways is daunting indeed.
15The creation of audited metrics for a facility, capturing indicators that there might otherwise be few incentives to measure, has direct and indirect benefits beyond meeting standards. Agencies extending credit are increasingly interested in environmental degradation and corporate social responsibility. Social and environmental performance conveys important non-financial information increasingly used to evaluate creditworthiness, which can result in lower financing costs for more socially and environmentally responsible facilities (Attig et al., 2013). Credible, externally audited data can serve this extra purpose, and can be a base for developing indices built on common metrics, such as the Dow Jones Sustainability Index. Currently such indices tend to be built through the coding of sustainability reports (e.g. Morhardt et al., 2002). While these have been increasingly standardised in format through the adoption of guidelines such as those of the Global Reporting Initiative, harmonising the methodologies of the underlying measurements would allow for less subjective comparisons of performance.
16The collated information itself can be directly used as a decision support tool as well as to assess standard compliance. Significant findings, especially in the field of energy use, report that if an indicator is measured and observed by those who have influence over it, it is better managed (Faruqui et al., 2010). When it comes to actually improving compliance and the propensity for desired outcomes, it appears that the best approach is a combination of enforcement—through auditing and the awarding or removing of certificates—and a ‘management mechanism’ that embraces a problem-solving approach based on capacity building (Tallberg, 2002).
17The Bonsucro Calculator, discussed above, allows users to visualise their whole production in terms of environmental, social, and economic results, as well as being informed as to whether they comply with the standard. Through being able to understand their shortcomings and advantages in relation to the requirements of Bonsucro’s metric standard, the tool may aid producers in identifying action points and prioritising policies and investments.
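The kind of decision support described here can be illustrated with a small sketch: ranking criteria by how far a facility falls short of each compliance threshold, so that action points can be prioritised. The criteria, thresholds and measured values below are hypothetical and assume, for simplicity, that lower values are always better.

```python
# A sketch of simple decision support: rank criteria by relative shortfall
# against their compliance thresholds. Criteria and numbers are hypothetical.

thresholds = {"ghg_kg_per_tonne": 50.0, "water_m3_per_tonne": 12.0,
              "accident_rate_per_1000h": 0.05}
measured   = {"ghg_kg_per_tonne": 63.0, "water_m3_per_tonne": 11.0,
              "accident_rate_per_1000h": 0.09}

# Relative shortfall (positive = non-compliant), assuming lower is better.
shortfalls = {k: (measured[k] - thresholds[k]) / thresholds[k] for k in thresholds}
for name, gap in sorted(shortfalls.items(), key=lambda kv: kv[1], reverse=True):
    status = "action needed" if gap > 0 else "compliant"
    print(f"{name}: {gap:+.0%} ({status})")
```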
18Standards systems, by virtue of collecting data across the sector, can also provide context to the individually collated and verified information. A wide variety of modern literature in behavioural economics and social psychology points to the roles of comparison, competition and social norms in ensuring better outcomes. Studies in fields such as energy usage (Allcott, 2011; Nolan et al., 2008; Schultz et al., 2007), voting (Gerber & Rogers, 2009), conservation of green spaces (Cialdini, 2003), and charitable giving (Frey & Meier, 2004) all note that providing context to individual performance or intention has a positive effect on outcomes. Informing facilities of how well they perform within a distribution of their peers, which is possible with performance-based metric standards, therefore seems likely to encourage facilities to go beyond compliance, especially if they sit at the lower end of the performance spectrum (Schultz et al., 2007). It can also identify areas of improvement where better practices and more positive impacts are already being achieved elsewhere.
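A minimal sketch of such peer benchmarking follows: it places one facility's audited value for an indicator within the distribution of its (anonymised) peers and reports a percentile. The indicator and figures are hypothetical, and lower values are assumed to be better.

```python
# A minimal sketch of peer benchmarking with anonymised, audited data.
# Indicator name and figures are hypothetical; lower values are better.

def percentile_rank(value: float, peer_values: list[float]) -> float:
    """Share of peers performing no better than `value`
    (here, lower is better, e.g. water use per tonne of cane)."""
    no_better = sum(1 for v in peer_values if v >= value)
    return 100.0 * no_better / len(peer_values)

peers = [14.2, 9.8, 11.5, 20.1, 13.3, 16.7, 10.9]   # m3 water / tonne cane
mine = 11.5
print(f"Performing at least as well as {percentile_rank(mine, peers):.0f}% of peers")
```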
19The data collected in the process of certifying facilities to a performance-based standard can also be used in different ways by other interested bodies. It can be anonymously provided to researchers to investigate relevant questions and relationships of interest. It can also be scaled up to a national or regional level to answer questions about relative aggregate performance across space or time.
20The increased range of use of data is an exercise in knowledge production. This involves creating products with the data that lie on the boundary between knowledge and action. Creating meaningful products that are acted upon, be they aggregated sustainability reports or localised sustainability decision support, requires standard systems to grapple with questions of how to imbue them with sufficient salience, credibility and legitimacy (Cash et al., 2003; Mitchell et al., 2006). Salience refers to the relevance of the product to decision-makers and stakeholders; credibility refers to the scientific adequacy of the technical evidence and arguments; and legitimacy refers to the perception that the production of information and techniques has been respectful of divergent values and understandings, and is unbiased and fair in its conduct and treatment of opposing views. Given that an increase in one of these factors can often have a negative effect on another, drawing the balance can be difficult, and requires time, expertise, and serious effort spent on ‘boundary work’ (Clark et al., 2011). Performance-based standards bring many new types of knowledge products, but in doing so, bring the need for credibility, salience and legitimacy to the fore.
21Sustainability standards have to balance the requirements of the certification on the one hand against the level of adoption by firms and the impacts on sustainability on the other (Cashore et al., 2007; Lebel, 2012). The point of maximum positive aggregate impact on sustainability within this rigour-versus-uptake trade-off is neither easily established nor well understood. Metric performance-based certification allows standards-systems to approach this trade-off from a few new directions.
22Firstly, standards can create dynamic indicator compliance criteria. This means that rather than a set standard, formulae can be applied to make the threshold for indicator compliance contingent on characteristics of the facility. Furthermore, it also creates opportunities to link facility performance to the surrounding environment. Facility emissions, social efforts and the like are not outcomes in themselves—they too are proxies for actual impacts. Embedding performance in this way creates measurements of what Veleva et al. (2001) call indicators of sustainable systems. While this represents a steep methodological challenge, it also creates new possibilities for assessment, intervention and change.
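As an illustration only, the sketch below shows what a dynamic compliance threshold could look like: the allowed water use per tonne of cane tightens as a local water-scarcity index rises. The indicator, coefficients and scarcity index are hypothetical and not drawn from Bonsucro or any other existing standard.

```python
# A hedged illustration of a dynamic compliance threshold: rather than a
# single fixed limit, the threshold is a formula over facility and context
# characteristics. Indicator, coefficients and index are hypothetical.

def water_use_threshold(baseline_m3_per_tonne: float,
                        local_scarcity_index: float) -> float:
    """Tighten the allowed water use per tonne of cane as local water
    scarcity rises (index scaled 0 = abundant, 1 = extreme)."""
    return baseline_m3_per_tonne * (1.0 - 0.5 * local_scarcity_index)

def complies(measured: float, baseline: float, scarcity: float) -> bool:
    return measured <= water_use_threshold(baseline, scarcity)

print(complies(measured=12.0, baseline=15.0, scarcity=0.6))  # 12.0 <= 10.5 -> False
```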
23Secondly, rules more relevant to the topic of the standard can be applied in order to make a decision on certification. Currently, standards generally certify facilities that meet core criteria, plus a given proportion of other criteria in the standard, which may or may not be weighted. However, this is only one way of many in which multi-criteria decisions can be undertaken. A whole array of potential aggregation functions for multi-criteria decision-making exist (for an overview, see Campanella & Ribeiro, 2011). For example, it is possible to build an indicator that averages values below a certain threshold differently from values above. Developments of more nuanced decision rules for certification can incorporate findings about the multiple and interlinked underpinnings behind social development (Nussbaum, 2001; Sen, 1999) and poverty (Barrett et al., 2011; Carter & Barrett, 2006), as well as interlinked (Gunderson & Holling, 2002) and networked (Janssen et al., 2006) environmental systems, in addition to the links becoming increasingly apparent between social and economic performance (Ambec & Lanoie, 2008; Porter & Kramer, 2011).
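The alternative alluded to above, treating values below a threshold differently from values above, can be sketched as a simple asymmetric weighted average in which shortfalls count more heavily than strengths, so that strong performance in one area cannot fully compensate for weak performance in another. The weights and scores are illustrative only.

```python
# A sketch of one alternative aggregation function: sub-threshold scores
# are weighted more heavily than above-threshold scores. Weights and
# scores are illustrative, not taken from any existing standard.

def asymmetric_average(scores: list[float], threshold: float = 0.6,
                       shortfall_weight: float = 3.0) -> float:
    """Weighted mean in which scores below `threshold` count
    `shortfall_weight` times as much as scores above it."""
    weights = [shortfall_weight if s < threshold else 1.0 for s in scores]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

criteria_scores = [0.9, 0.85, 0.4, 0.95]          # normalised 0..1
print(round(asymmetric_average(criteria_scores), 3))  # 0.65 vs simple mean 0.775
```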
24However, dynamic standards and decision rules also come with some key caveats. While such innovations are arguably better at promoting sustainability and at remaining relevant to the socio-ecological and socio-economic systems the standard attempts to govern, they are difficult to understand intuitively. In particular, it is difficult for consumers or facilities seeking certification to judge the stringency of such an indicator or standard if the formulae are not easy to calculate mentally. Standards like this may confuse those seeking to meet them, as it is potentially more difficult to aim at a ‘moving target’. It will be up to the standard system not only to explain but also to ‘sell’ these techniques in a way that does not undermine the perceived credibility of the standard.
25Furthermore, the value-laden nature of the decision rules underpinning performance-based standards can make them difficult both to calculate and to renegotiate in a way considered legitimate by stakeholders. Quantification of the social dimension of sustainability, for example, is both methodologically underdeveloped and considered by many to be a political stance in and of itself. Reaching agreement on this front is challenging during standard-setting, and renegotiating to create dynamic standards is likely to present political challenges. Whether such metrics can be calculated reliably and audited repeatedly is a further considerable challenge. Language and gender, for example, have been pointed to as important issues that can prevent auditors from understanding the social realities of the facilities they inspect (van der Wal, 2011). Such unresolved issues of consistent measurement cast doubt on the immediate feasibility of dynamic standards in some areas.
26Adaptive management entails treating all policy decisions as hypotheses, using them to test outcomes and assumptions, and revising strategies and underlying beliefs (Gunderson et al., 1995). Metric performance-based standards allow for many new opportunities to learn from data in order to ensure the integrity of certified facilities and the rigour of the standard itself.
27Through looking at certified data, standard statistical methods for outlier testing can identify anomalous data, which could flag concerns about facilities or audits. Through grouping methods such as principal component analysis, or classification methods such as machine-learning algorithms, constellations of low or high performance can be discerned. This can help to identify persistent areas of strength or weakness with regards to the sustainability of certified facilities, which can lead to the creation of relevant extension or outreach work to address struggles or learn from high-flyers. On an aggregated scale, spatial and temporal performance can be examined in order to better understand the dynamics of performance across nations and regions, or throughout time.
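A minimal sketch of this kind of analysis, assuming the audited indicator values are assembled into a facilities-by-indicators matrix, might look as follows; the data here are random placeholders and the three-sigma flagging rule is just one of many possible outlier tests.

```python
# A minimal sketch: flag anomalous submissions and reduce the indicator
# space with PCA. The data are random placeholders, not audit data.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))          # 50 facilities, 10 audited indicators

# Any indicator more than 3 standard deviations from the cross-facility
# mean is worth a second look by the standard system or its auditors.
z = (X - X.mean(axis=0)) / X.std(axis=0)
flagged = np.unique(np.argwhere(np.abs(z) > 3)[:, 0])
print("Facilities flagged for review:", flagged)

# Project facilities onto two components to look for 'constellations'
# of high or low performance across indicators.
components = PCA(n_components=2).fit_transform(z)
print("First two components of facility 0:", components[0])
```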
- 4 See the ISEAL Code of Good Practice for Setting Social and Environmental Standards: http://www.isea (...)
28Data can also contribute to standard revision. Revision of a standard is advisable as a best practice for many reasons4. All sustainability standards codify a certain view of sustainability, a concept that by definition is fluid and changing (Robinson, 2004). As societies’ views on both the scientific and social underpinnings of the idea develop, so should sustainability standards. In addition, the subject of certification is also changing—some new social, environmental or economic problems may emerge, and some old ones may become less relevant. Given the changing and varied nature of socio-ecological systems, it is unlikely that a static approach will work for more than a brief period of time in one specific context (Blann et al., 2003). A standard must also position itself in the marketplace at some position on the trade-off between rigour and uptake (Cashore et al., 2004; Lebel, 2012), ideally to aim for a maximum aggregate impact on sustainability.
29Data such as this can be used to identify how rigorous a standard is. Some indicators may turn out to be too difficult to meet for facilities, while others may turn out to be too easy—especially if data on actual practice was scarce at the initial standard-setting. Facilities with certain characteristics, such as those from a certain country, may easily meet indicators that facilities with different characteristics struggle with. This might indicate the need for a heterogeneous standard in order to ensure improvement across the board, especially when the relevant problems of sustainability differ across the world. Some indicators may be easily met by all facilities, perhaps indicating that they can be dropped and potentially replaced by another relevant issue where measurement is desired. None of this is prescriptive, but alongside stakeholder discussions on standard revision, it provides an extra source of information to focus deliberation and revision on where it seems warranted.
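One simple, hypothetical way to surface such patterns is to compute per-indicator pass rates broken down by a facility characteristic such as country, as sketched below; the indicator names, countries and records are invented for illustration.

```python
# A sketch of how compliance data could inform standard revision:
# per-indicator pass rates by country. All records are hypothetical.

from collections import defaultdict

records = [  # (country, indicator, passed)
    ("BR", "ghg_per_tonne", True),  ("BR", "drinking_water", True),
    ("IN", "ghg_per_tonne", False), ("IN", "drinking_water", True),
    ("IN", "ghg_per_tonne", False), ("BR", "ghg_per_tonne", True),
]

totals, passes = defaultdict(int), defaultdict(int)
for country, indicator, passed in records:
    totals[(country, indicator)] += 1
    passes[(country, indicator)] += passed

for key in sorted(totals):
    rate = passes[key] / totals[key]
    print(f"{key[0]} {key[1]}: {rate:.0%} pass rate")
```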
30However, this can be difficult to implement for a variety of reasons. Incumbent managers appear to perceive adaptive systems as a threat to current management practice (Walters, 1997). In addition, evaluative learning that accepts failure, and that confronts, questions and challenges the assumptions that preceded it, requires especially strong leadership throughout the process (Argyris, 1976). Not only learning from data but also acting on it requires a well-designed organisation in addition to a well-designed standard.
31Several sustainability certification schemes have synergised or otherwise interacted with public policy. Leadership in Energy and Environmental Design (LEED), a green building certification, has become incorporated into many public procurement and building codes, thus superseding national regulation (SCSKASC, 2012). Forest Stewardship Council certification is required by the Guatemalan Government for forestry companies that operate in the Mayan Biosphere reserve (UNCTAD, 2011). Programmes such as the Clean Development Mechanism Gold Standard, or Bonsucro EU Standard, ‘raise the bar’ on environmental criteria for projects or commodities while also meeting public policy requirements such as the CDM credit programme or the EU Renewable Energy Directive respectively (Fortin & Richardson, 2013; SCSKASC, 2012). Collecting data for standards can also allow that data to be available to comply with policy at other levels and for other purposes, thus economising on the costs of collection and auditing.
32Performance-based standards can both ‘upload’ and ‘download’ metrics and methodologies to and from public policy. Commonly used methodologies can be ‘downloaded’ from those developed in the public sector—as the Bonsucro EU Standard has adapted the greenhouse gas methodology from the EU Renewable Energy Directive. While metric standards are not yet widespread, we can see an example of technology-based ‘uploading’ in the case of organic certification, as most organic standards started off as private initiatives before becoming more publicly governed (Bendell et al., 2011), earning a degree of enforceability through the courts as a result of both legislative inclusion and private contracting (Webb & Morrison, 2004). While, under EU law, public procurement tenders cannot require a certain certificate per se, they can require sustainability criteria for which a certificate is ‘proof of compliance’. In areas where contracting and monitoring is underdeveloped, this is a method through which the structure and requirements of private standards can enter public policy (D’Hollander & Marx, 2014). Developing a pluralism of initial methodologies through competing sustainability standards can allow different methods of measuring impacts to be evaluated and chosen by actors, including policymakers (Smith & Fischlein, 2010).
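As a hedged illustration of such ‘downloading’, the EU Renewable Energy Directive expresses the greenhouse gas saving of a biofuel relative to a fossil fuel comparator along the lines of (E_fossil - E_biofuel) / E_fossil; the sketch below applies that general form with placeholder figures rather than values taken from the Directive or from Bonsucro.

```python
# A hedged illustration of 'downloading' a public methodology: the general
# form of the Renewable Energy Directive's GHG-saving comparison. The
# figures below are placeholders, not values from the Directive or Bonsucro.

def ghg_saving(e_biofuel: float, e_fossil_comparator: float) -> float:
    """Fractional GHG saving of the biofuel versus the fossil comparator,
    both expressed in g CO2-eq per MJ of fuel."""
    return (e_fossil_comparator - e_biofuel) / e_fossil_comparator

print(f"{ghg_saving(e_biofuel=33.0, e_fossil_comparator=83.8):.0%} saving")
```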
33Both the field of sustainability and the practice of sustainability standards are still emerging, and there is much we do not yet know. With regards to performance-based metric standards, several areas for future research are especially striking.
34With regard to the flexibility performance-based standards offer in moving facilities toward sustainability, there is a surplus of theory and a dearth of empirical evidence on the on-the-ground reality of efficiency gains and on the potential barriers to understanding and building practices around metrics. Current studies tend to be drawn from the healthcare and local government literatures (e.g. De Bont & Grit, 2011; Kelman & Friedman, 2009; Moynihan & Pandey, 2010), and their generalisability to a completely different context, such as a farm, is unclear. Can knowledge and practices be easily disseminated across users of the standard? Is there a role here for new communication technologies, or stakeholder methodologies? We are already seeing a transnational organisational field of sustainability professionals (Dingwerth & Pattberg, 2009); can a transnational organisational field of on-the-ground sustainable practitioners emerge?
35Information provision must also prove itself empirically to be useful in a transition toward sustainable outcomes. At the facility scale, it is important to know whether there are positive effects from such provision, whether they are significant, and whether they last. What benefit can the information provide in practice to facilities beyond certification, and is this valued by financial markets or policy-makers? What synergies can be created between public policy and aggregate information from private schemes?
36Much research will also be needed on the new forms that standards can take. Can we build well-founded guidelines enabling the use of decision rules that better fit the systems they are trying to govern? Is it possible to strike a balance that allows for new decision rules while maintaining adequate understanding and communication among the end-users of the standards? Through what type of processes can we develop legitimate decision rules for value-laden social, economic and environmental issues? Building on work done in technology assessment (e.g. Guston & Sarewitz, 2002; Schot & Rip, 1997), how can we adequately involve stakeholders in technical matters such as decision rules and thresholds, in order to help legitimise their development? Many standards have informally reported difficulties in pushing through incremental revisions, as the certified user base sees them as extra compliance costs. How can a dynamic standard be created politically, given that users and stakeholders have to agree to the ‘ratcheting up’ of necessary investment?
37Measurement itself is a research topic for many reasons. First, what is the role of technology in measurement? Standards systems are just at the beginning of investigating the role that sensors and crowdsourcing can play in creating credible standards and reliable data. ISEAL Alliance, for example, has begun to scope the area with a recent research tender. Secondly, social measurement in standards is significantly more challenging than technical measurement. Social measurement encodes values, is methodologically hard, and is difficult to audit robustly. Quantification may miss important dimensions of exclusion, disempowerment, discrimination and the like. Future research may wish to consider the differences between technology-based and performance-based approaches to social impact, and suggest ways forward.
38Learning from data requires guidelines in order to make methods accessible to all standards systems. This may be less about research than about sharing tools and practices between systems. More work is required to establish whether spatially heterogeneous or homogeneous standards have a higher impact on sustainable outcomes in the long term, which can inform how data are used in revision processes. As with all areas of adaptive management, concrete examples and guidelines on how to strengthen organisations to allow for ‘double-loop learning’ are required in order to turn thought into practice (Argyris, 1976).
39Given the novelty still surrounding metric standards and sustainability standards in general, case studies on synergies between policy instruments are still emerging, and more of those are needed to get a larger picture of how standards shape and are shaped by their institutional environments. What are the opportunities and challenges in harmonising private and public governance? In such synergies, what are the power dynamics, and how is accountability best ensured?
- 5 Nike Inc. Press Release, November 30, 2010: Nike releases environmental design tool. Retrieved from (...)
40Performance-based standards open up a wide range of new opportunities for sustainability standards. Naturally, with these opportunities also come challenges and pitfalls. Metric standards may bring rewards that go beyond the prescription of good practices, but they also require serious investment in building relevant methods and skills. Nike, for example, spent US$6 million in-house on its open-source Considered Design Index and Environmental Apparel Design Tool, as no relevant metric system was on the market and available to purchase5. A one-size-fits-all approach is unlikely to be possible, and standards systems are likely to require considerable bespoke effort to fit metrics to their needs, which will only be possible with capacity building in areas of data science and technical stakeholder engagement.
41In summary, there is much still to be done. However, if sustainability standards began as consumer-facing labels and are moving more and more towards business-to-business models, then the next step could very well be a move to more widely applicable and useful metrics, which can be communicated to buyers as well as being intrinsically useful to decision-makers. Certification systems must keep improving and reinventing themselves to keep up with changing socio-ecological systems and markets, and innovation and discovery in this field will be vital to their ability to enact long-term transformative change.