Crowdsourcing trust using Impact Framework attestations

October 30, 2024

The Impact Framework project provides a lightweight client for processing manifest files. Those files are structured in a specific way that simultaneously organizes observations about a system under investigation, represents the topology of that system, and acts as executable code for generating an impact metric (SCI score, carbon footprint, or something else).
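For a flavour of the structure, here is a heavily simplified manifest skeleton (illustrative only - the field names and plugin configuration are approximate, so check the IF documentation for the real schema):

name: demo-manifest
description: a minimal, illustrative example
initialize:
  plugins:
    sci:
      method: Sci
      path: '@grnsft/if-plugins'
tree:
  children:
    server-1:
      pipeline:
        - sci
      inputs:
        - timestamp: 2023-08-06T00:00
          duration: 3600
          energy: 0.005
          carbon: 2.5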

One of the powerful aspects of this is that it turns environmental reporting into something collaborative - in the sense that anyone can re-execute a manifest file to verify correct execution and critique the models, coefficients and assumptions - and agile - in the sense that manifests work best with more granular data and can be used to forecast and test mitigation strategies, yielding metrics that work well as sustainability KPIs.

But how can you trust what's in a manifest file?

This is a big question that also generalizes to any other environmental reports. Here, I'll try to lay out some of my thinking around this and explain some prototype tooling, called if-attest, that I created for attesting to manifests as a route to crowdsourcing trustworthiness.

Why transparency != trust

One of the main benefits of an IF manifest file over other types of environmental audit is the transparency encouraged by the file structure. The gold standard is for a manifest to include all the input observations that were made about a system along with the pipeline of calculations that were executed over them to yield the final impact metrics. This means anyone can audit the procedures, challenge them or fork the manifest and make their own calculations.

However, there are some problems with relying on transparency as a route to trust.

  1. You can be transparently dishonest/wrong

Just because you've exposed your data and calculation pipeline doesn't mean the data is correct or that the pipeline choices you made were appropriate. There's no way to tell from the manifest alone whether a given set of observations is incorrect, whether through fraud or honest mistakes. The only real protection against this is direct access to the system under observation, which realistically is hardly ever going to be available.

  2. Transparency isn't always possible

Full transparency is the gold standard for manifest files, but there will be many cases where it simply isn't possible. For example, a fully transparent manifest could expose business-sensitive information, or even leak data that poses security risks. Some data might be embargoed or otherwise legally restricted. We have to have a way for organizations that can't be fully transparent to still participate in the Impact Framework community. Full transparency can't be a hard requirement if we want broad engagement.

  3. There's an expertise barrier for critiquing a manifest

Critiquing a manifest requires nontrivial expertise: you have to grok the context of the system being observed, evaluate whether a particular processing pipeline is appropriate, understand the trade-offs and assumptions that have been made and assess the design decisions that were taken. Simply releasing a manifest to the world, even if fully transparent, doesn't necessarily translate into trustworthiness, because many observers will not have the necessary expertise and experience to evaluate it. There's also a time requirement - a proper deep dive into a manifest can be a substantial task.

Crowdsourcing trust using attestations

So, how can we trust manifests?

One option is to have a model based on some authority, perhaps an accredited auditor or an organization, that signs off on manifests. However, this recreates some of the problems with the legacy system, in that it centralizes authority with a few operators that can be opaque and vulnerable to a range of incentives. When there are only a few accredited auditors, they are able to charge exorbitantly for their service, meaning trust becomes a luxury available to those who can afford it, rather than something inherent and observable about the manifest.

Another option is to have some algorithmic way of approving manifests. However, manifest quality is subjective and highly content-dependent. It's very difficult to define a set of unambiguous steps that an algorithm could take to distinguish trustworthy from untrustworthy manifests.

Another option is a completely open review process, maybe a forum, where the merits and deficiencies of manifests are discussed in the open, but this would probably generate a lot of noisy and ambiguous commentary from which a reliable trust signal would be hard to isolate.

The most promising option I've come up with is an attestation model, where individual reviewers "attest" to a given manifest, effectively staking some reputation on the veracity and quality of its contents. Anyone can attest to a manifest using the if-attest tooling bundled with Impact Framework (although please note it is NOT merged into main, as it is just a prototype that I'm sharing for experimentation; it is not at all production ready). The attestations are small data objects that are signed by the attester. They include some summary information about the manifest - its aggregated SCI score, total carbon, a measure of its "quality", and an integer representing the depth of audit the reviewer is willing to attest to. They also include a manifest hash - a string of bytes that can only possibly be generated by that specific file - so that the attestation can only possibly refer to one specific manifest.

The trustworthiness of a manifest is then inferred from the attestations it has attracted.

How does this work? Using the magic of public/private key cryptography.

What is an attestation?

You can think of an attestation as a trust token - it represents some amount of staked reputation that you as an end user can use to make your own judgments about how trustworthy a manifest is.

For example, maybe you don't trust a manifest because some of the data is redacted. What if the manifest provider exposed the redacted data to a trusted third party under an NDA who then re-executed the manifest with all the original raw data and attested that the SCI score and total carbon were correct? What about if they attested at a level that meant that they fully audited the input data, including independently observing the target system themselves? What if that attester was the GSF? Or an audit firm? Or a government agency? What if ten diverse attesters all attested to the data?

Each attestation is a signal of additional trustworthiness - some signals are stronger than others, but collectively they build up an overall trust profile, similar to product reviews. There might not be a firm boundary between trustworthy and untrustworthy, just as there's no fixed amount of money that separates rich from not rich, but attestations are signals that accumulate to form a trust picture for a given manifest.

More concretely, the attestation is a small data object. In my prototype system, each attestation conforms to the following schema:

'start': the first timestamp captured by the manifest
'end': the last timestamp captured by the manifest
'hash': the unique keccak256 hash for the manifest file
'if': the IF version used to compute the manifest
'verified': true or false, showing whether `if-check` returned a success response
'sci': the computed SCI score, always in gCO2e/functional unit
'energy': the aggregated energy value for the manifest in kWh
'carbon': the aggregated carbon value for the manifest in gCO2eq
'level': the audit level being attested to
'quality': the data quality score
'functionalUnit': the functional unit used to calculate the SCI
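To make that concrete, here's how the payload might look as a TypeScript type (a sketch for illustration only - the prototype encodes these fields for EAS rather than defining this exact interface):

// Illustrative shape of the data an attestation commits to (not the prototype's actual types).
interface ManifestAttestation {
  start: string;          // first timestamp captured by the manifest
  end: string;            // last timestamp captured by the manifest
  hash: string;           // keccak256 hash of the manifest file, 0x-prefixed hex
  if: string;             // IF version used to compute the manifest
  verified: boolean;      // whether if-check returned a success response
  sci: number;            // computed SCI score in gCO2e per functional unit
  energy: number;         // aggregated energy in kWh
  carbon: number;         // aggregated carbon in gCO2eq
  level: number;          // audit level being attested to (1-5)
  quality: number;        // data quality score (1-5)
  functionalUnit: string; // functional unit used to calculate the SCI
}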

Each field is assigned a value from the manifest file, and the attester signs the object with their private key.

Anyone can then recover the attester's public key from the signed attestation.

As long as the human or organization holding the keys has shared their public key, this gives cryptographic assurance that they really attested to a given manifest.

The hash of the manifest is a 32-byte value, rendered as a string of hex characters, that can only be produced by applying a given hash function to a specific file - if anything changes about the file, the hash changes too. It means that an attestation can only refer to one specific instance of a manifest - you can't port an attestation from one manifest to another. This protects against equivocation (publishing multiple manifests and swapping out which one you present as the truthful one) and attestation washing (having an attestation generated for one manifest and claiming it was for another).
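As a bare-bones sketch of the mechanics using ethers.js (the real tooling signs a full EIP-712 attestation object via EAS, but this shows the hash-sign-recover loop):

import { readFileSync } from "node:fs";
import { Wallet, keccak256, verifyMessage } from "ethers";

// Hash the exact bytes of the manifest: change one byte and the hash changes.
const manifestHash = keccak256(readFileSync("manifest.yaml"));

// The attester signs the hash with their private key...
const attester = new Wallet(process.env.PRIVATE_KEY!);
const signature = await attester.signMessage(manifestHash);

// ...and anyone holding the signature can recover the signing address
// and compare it to the attester's published public address.
const recovered = verifyMessage(manifestHash, signature);
console.log(recovered === attester.address); // true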

This means that an organization might not be able to share a fully transparent manifest, but they might be able to selectively expose a transparent manifest to trusted third parties under NDA, and those third parties can publicly share an attestation, so at least the wider community gets something that represents some (maybe small) amount of evidence that the values in the manifest summary can be trusted.

You get to set your own bar on trustworthiness - maybe you value GSF attestations very highly and seeing that specific attestation is enough for you. Maybe you'll only trust a manifest when it has X number of attestations. Sometimes, an attestation might be worth more than full transparency, because your confidence in your ability to independently evaluate a particular manifest is low. The point is that attestations provide a flexible way to build up a trust portfolio for manifests, for free, in a way that anyone can participate in without requiring any permissions.

What do quality and audit level mean?

Not much in this prototype - they are just integers from 1-5 that refer to the quality of the manifest data and the depth of audit a reviewer was willing to attest to. In parallel to developing the software tooling for attestations, it will be necessary to create a trust framework that defines these metrics explicitly and unambiguously. I envisage the data quality being something similar to the quality rubric used in the Product Carbon Footprint (you can see my draft mapping from PCF to IF in the IF discussion forum here) and the audit level could look something like the following:

level-1: only checks manifest executes correctly
level-2: checks plugin choices conform to best practices
level-3: all config conforms to best practices
level-4: underlying code independently reviewed and tested
level-5: measurements independently verified by attester

This needs more work! However, enabling reviewers to attest to specific properties of the manifest feels important, because it allows the reviewer to vouch for particular things they are pretty certain about, rather than having to understand everything about a manifest to contribute anything to its trust profile.

if-attest

if-attest is an Impact Framework command line tool that allows users to "attest" to a manifest using a cryptographic signature and then either save the attestation locally or post it to a public blockchain so it is available forever. This is available in the if-attest branch of Impact Framework - you can check it out and play with it today, but please note that it is a prototype: there are some very rough edges and it is certainly not close to production-ready!

The concept is as follows (sketched in code after the list):

  1. User receives a computed manifest file
  2. User wants to verify the manifest is correct and, if it is, attest to it at a level defined by a CLI argument.
  3. if-attest runs if-check over the given manifest to confirm it was executed correctly.
  4. If if-check is successful, if-attest grabs summary information from the manifest and computes the manifest hash (a string of bytes that can only be generated by hashing that precise file)
  5. if-attest compiles all the manifest information into an attestation object, optionally connects to the chosen blockchain, signs the attestation and sends it in a transaction to store it onchain. Alternatively, the attestation is created and saved locally as a .txt file.
  6. Anyone can then look up an onchain attestation to see that some specific attester (e.g. GSF, some trusted auditor) has attested to its validity. You can selectively share your offchain attestation, and its signature can still be verified by anyone locally in the browser.
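In pseudo-TypeScript, the happy path looks roughly like this (all function names here are hypothetical stand-ins, not the prototype's actual API):

// Hypothetical outline of the if-attest flow described above.
async function attest(manifestPath: string, level: number, onchain: boolean) {
  // Step 3: re-execute and verify the manifest.
  const ok = await runIfCheck(manifestPath);
  if (!ok) throw new Error("if-check failed: refusing to attest");

  // Step 4: grab summary data and bind the attestation to this exact file.
  const summary = extractSummary(manifestPath); // sci, carbon, energy, ...
  const hash = keccak256(readFileSync(manifestPath));

  // Step 5: sign, then either post onchain or save locally.
  const attestation = await signAttestation({ ...summary, hash, level });
  return onchain ? await postToChain(attestation) : saveLocally(attestation);
}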

Attestations in if-attest

Attestations are just snippets of data that conform to some predefined schema and are signed by a private key. They can then be stored locally, shared via your own network (post them on your website, if you wish) or posted to a public blockchain.

We are using the Ethereum Attestation Service to bootstrap our attestations.
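For example, creating and signing an offchain EAS attestation with the EAS SDK looks roughly like this (a sketch based on the SDK's documented API; the schema string is a hypothetical subset of the real one, and the contract address and schema UID are the Sepolia values from the example below):

import { readFileSync } from "node:fs";
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";
import { Wallet, JsonRpcProvider, ZeroHash, keccak256 } from "ethers";

const EAS_ADDRESS = "0xC2679fBD37d54388Ce493F1DB75320D236e1815e"; // EAS contract on Sepolia
const SCHEMA_UID = "0x11fdca810433efc2d5b9fe8305b39669e8d0feb81f699a767fe48ce26fcf6a6c";

const signer = new Wallet(process.env.PRIVATE_KEY!, new JsonRpcProvider(process.env.RPC_URL));
const eas = new EAS(EAS_ADDRESS);
eas.connect(signer);

// Hypothetical subset of the attestation schema, for illustration.
const encoder = new SchemaEncoder("bytes32 hash, uint8 level, uint8 quality");
const data = encoder.encodeData([
  { name: "hash", value: keccak256(readFileSync("manifest.yaml")), type: "bytes32" },
  { name: "level", value: 1, type: "uint8" },
  { name: "quality", value: 3, type: "uint8" },
]);

const offchain = await eas.getOffchain();
const attestation = await offchain.signOffchainAttestation(
  {
    recipient: signer.address,
    expirationTime: 0n,
    time: BigInt(Math.floor(Date.now() / 1000)),
    revocable: true,
    schema: SCHEMA_UID,
    refUID: ZeroHash,
    data,
  },
  signer
);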

Here's an example of a raw attestation. If you run if-attest configured to create local attestations, you'll get a text file that looks like this:

{"sig":{"version":2,"uid":"0x047d38b6d175fe8a36a597863b7d8d94939aa2fd4b4f19831229c95f5eda5604","domain":{"name":"EAS Attestation","version":"0.26","chainId":"11155111","verifyingContract":"0xC2679fBD37d54388Ce493F1DB75320D236e1815e"},"primaryType":"Attest","message":{"version":2,"recipient":"0xc8317137B5c511ef9CE1762CE498FE16950EF42d","expirationTime":"0","time":"1729173300","revocable":true,"schema":"0x11fdca810433efc2d5b9fe8305b39669e8d0feb81f699a767fe48ce26fcf6a6c","refUID":"0x0000000000000000000000000000000000000000000000000000000000000000","data":"0x000000000000000000000000000000000000000000000000000000000000016000000000000000000000000000000000000000000000000000000000000001a020d034e94040bf5b40ff76f9568155cacbc7ed0d488a8fcea8dc37bd24b0d0dd00000000000000000000000000000000000000000000000000000000000001e00000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000001e0000000000000000000000000000000000000000000000000000000000000014000000000000000000000000000000000000000000000000000000000000000f0000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000002200000000000000000000000000000000000000000000000000000000000000010323032332d30382d30365430303a3030000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010323032332d30382d30365430303a3030000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000005302e372e30000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b736974652d766973697473000000000000000000000000000000000000000000","salt":"0x0951f85ad71286a55562608cfba8cad023820a78cce8fd5df2a1498af0812505"},"types":{"Attest":[{"name":"version","type":"uint16"},{"name":"schema","type":"bytes32"},{"name":"recipient","type":"address"},{"name":"time","type":"uint64"},{"name":"expirationTime","type":"uint64"},{"name":"revocable","type":"bool"},{"name":"refUID","type":"bytes32"},{"name":"data","type":"bytes"},{"name":"salt","type":"bytes32"}]},"signature":{"v":27,"r":"0xea4ac79da011dd364b69cfb45cce6e9b5d71444861d0bb44456d7e50f433f981","s":"0x4106232e9af816c465657e10d39612c958f5b6f552d637c9cff5469ee7634333"}}, "signer":"0xc8317137B5c511ef9CE1762CE498FE16950EF42d"}

OK, it's not super human readable. This is because the manifest data is hex-encoded. But everything you need to verify the signature and recover the manifest summary data is here.

Onchain vs offchain attestations

Having created an attestation, you can choose to post it to a blockchain or to store it "offchain". Offchain can mean storing it in your own database, storing it on some decentralized storage platform such as IPFS, or archiving it locally. They are just signed pieces of data.

Pushing the attestation to a public blockchain has some benefits. First, you do not have to store and serve the data yourself: a network of blockchain node operators bears the cost of maintaining the network, so you are effectively outsourcing your server maintenance to them, and anyone can access your attestation from anywhere, at any time. Second, it's a good way to signal to the world that you are committed to sharing your data - once it's on the blockchain, you can't take it down, hide it, amend it or decide to be selective about who you share it with.

There are widespread concerns about the environmental impacts of blockchains, but for our purposes it is reasonable to use a small proof-of-stake or proof-of-authority network that has a negligible carbon footprint. You can target any EVM-compatible blockchain you wish, as long as the EAS contracts are deployed there.

Offchain attestations allow you more flexibility about what you share, when and with whom. The attestation itself is identical to the one you would post on the blockchain, but you have control over public access to it. There might be business, security, legal or privacy reasons for keeping your attestations off the public blockchain. We have tried to make the attestations as privacy-preserving and compact as possible to encourage more open sharing of attestations. You can still verify the signatures of offchain attestations.

Verifying attestations

For now, you can simply drag and drop the Attestation.txt file that if-attest creates for you into this online verifier - if the signature is valid it will decode the attestation and show you the details in the browser. Later I may build the verification into the CLI.
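If you'd rather check a local attestation yourself, the file contains everything you need. Here's a minimal sketch with ethers.js, assuming the file layout shown earlier:

import { readFileSync } from "node:fs";
import { verifyTypedData } from "ethers";

// Parse the attestation file and pull out the EIP-712 payload.
const att = JSON.parse(readFileSync("Attestation.txt", "utf8"));
const { domain, types, message, signature } = att.sig;

// Recover the address that signed the typed data and compare it
// to the address the file claims as the signer.
const recovered = verifyTypedData(domain, types, message, signature);
console.log(recovered.toLowerCase() === att.signer.toLowerCase() ? "valid" : "INVALID");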

Onchain attestations are verified by default, as the nodes that re-execute transactions check the signatures as part of their normal workflow.

Try it out

There are usage instructions for if-attest in the feature README. You can use if-attest by installing IF and running the if-attest command. It looks like this:

npm run if-attest -- -m <path-to-computed-manifest> --level 1

You'll see some logs in the console explaining what is happening, and then an Attestation.txt file will be saved locally in your IF repository.

You can configure if-attest to push the attestation to the Sepolia blockchain (an Ethereum testnet) by adding --blockchain true to your command, but you'll also need some additional environment variables to be set up and a connection to a node provider.
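For example (illustrative - see the feature README for the exact environment variables required):

npm run if-attest -- -m <path-to-computed-manifest> --level 1 --blockchain true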

Remember - this is a prototype - it's shared for experimentation and discussion only!

Summary

Here I've laid out a possible strategy for creating trust profiles for Impact Framework manifest files using cryptographic attestations and provided some prototype tooling. I'm not necessarily saying this is the right approach, but it feels interesting enough to explore further. It's also the most promising approach I've come up with so far that works for open-source builder communities as well as organizations, and it neatly balances transparency against preserving privacy and redacting data where necessary. It's all about gathering evidence, even if imperfect, that paints a general picture of the trustworthiness of a manifest. At the end of the day, accounting for carbon emissions correctly has its various technical challenges, but the root issue is that environmental reporting is built on foundations of scepticism and distrust.