Verifiable impact — making green claims auditable with Azure Confidential Ledger
Carbon credits and biodiversity offsets only work if someone else can check the evidence later. A pattern for anchoring AI-generated verifications in a tamper-evident ledger.
- Azure Confidential Ledger
- Azure AI Foundry
- Impact verification
- Tamper evidence
- Social good
Trust is the bottleneck, not accuracy
If you have followed the carbon-credit market at all over the last few years, you have probably noticed a pattern. The big investigative stories — Verra’s rainforest projects, the Zimbabwean Kariba credits, various cookstove schemes — were not breaking because the maths was wrong. The maths, where anyone had published it, was usually fine. They were breaking because nobody outside the project could independently check the measurements that fed the maths.
Impact verification has a structural trust problem. The party making the claim is almost always the party gathering the evidence. And by the time a journalist or an auditor shows up three years later, the underlying telemetry has either been overwritten, aggregated away, or was never recorded in a form anyone else could read.
AI makes this harder, not easier. If the number going into your impact report was generated by a vision model scoring a drone photograph, or by an LLM grading a community survey, there is now a new question: “did the model produce this number, or did someone produce the number and attribute it to the model?” An “AI-verified” tag, by itself, solves nothing. It just shifts the trust burden sideways.
ConservAxion — the clean-energy and biodiversity impact-verification platform I have been building in KwaZulu-Natal — is a Microsoft Foundry application, and its whole reason to exist is to treat that trust gap as a first-class product feature. The pattern that makes that possible is the combination of Foundry for the validations and Azure Confidential Ledger for the record that the validations happened. This article is a walk through that pattern, in enough detail that you can lift it into a different impact-verification domain without redoing the design work.
Azure Confidential Ledger in one breath
Azure Confidential Ledger is the managed version of the Confidential Consortium Framework (CCF). It runs inside Intel SGX enclaves, is append-only, and every write returns a transactionId. Every transactionId can be exchanged for a cryptographic receipt that any third party — an auditor, a journalist, a sceptical funder — can verify offline against the ledger’s published identity. They do not need access to the ledger, they do not need a copy of your data, and they do not need to trust you. They just need the receipt and the ledger’s public certificate.
It is deliberately not a blockchain. There is no proof-of-work, no global consensus, no gas token, no community to bribe. It is a single-tenant (or small-consortium) append-only log that you can trust Microsoft to keep running, and whose contents are protected by hardware attestation rather than by economic incentive. For impact-verification applications running in one country with one operator, that trade-off is exactly right — you want cryptographic tamper-evidence without taking on the operational weight of a distributed ledger nobody asked for.
What goes on the ledger, and what does not
The single biggest design decision, and the one that trips up most first-time ledger integrators, is what you actually commit. Beginners reach for “write the whole record.” They end up with a ledger full of PII, raw telemetry streams, photo metadata, and free-text survey comments, and they either lose sleep over GDPR / POPIA or have to tear the whole design up a year in.
The right answer is to commit the smallest possible fingerprint of the claim, and leave the rich record in your operational store. In ConservAxion the ledger entry for an individual impact credit looks like this:
ledger_tx: dict[str, Any] = {
    "creditId": credit_id,
    "deviceId": payload.get("deviceId"),
    "timestamp": payload.get("timestamp", now_iso),
    "metric": payload.get("kWh_or_metric"),
    "validationHash": validation_hash,
    "programme": payload.get("programme", "cetth_solar"),
}
Six fields. No donor identity, no photo content, no raw reading series, no location to the nearest metre. What anchors the record is validationHash — a SHA-256 digest of the full validation result that the Foundry model produced, computed over a deterministic JSON serialisation of the payload:
validation_hash = hashlib.sha256(
    json.dumps(payload.get("validationResult", {}), sort_keys=True)
    .encode()
).hexdigest()
The sort_keys=True matters. An auditor re-computing the hash from the full record pulled out of Cosmos DB has to produce exactly the same bytes; otherwise the hash they compute will not match the hash on the ledger and the verification fails. Deterministic serialisation turns “the same JSON” into a bit-exact concept. You do not want to discover mid-audit that you forgot to sort keys.
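A minimal, self-contained demonstration of why canonical serialisation matters. Note that passing `separators` additionally pins whitespace, which is a hardening step beyond the snippet above (json’s defaults are stable within one Python version, but fixing the separators removes the ambiguity entirely):

```python
import hashlib
import json

def fingerprint(obj: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON serialisation."""
    return hashlib.sha256(
        json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

# Same logical content, different key order:
a = {"metric": 4.2, "deviceId": "sol-017"}
b = {"deviceId": "sol-017", "metric": 4.2}

assert fingerprint(a) == fingerprint(b)                      # canonical form matches
assert fingerprint(a) != fingerprint({**a, "metric": 4.3})   # any edit changes the hash
```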
The full credit record — with every field, including the validationResult, the deviceId, the square ID, the donor link — lives in Cosmos DB. It carries three pointers back to the ledger:
credit_record: dict[str, Any] = {
    # ...
    "ledgerTransactionId": ledger_receipt.get("transactionId"),
    "ledgerCollectionId": ledger_receipt.get("collectionId"),
    "ledgerContentHash": ledger_receipt.get("contentHash"),
    # ...
}
Those three fields are the bridge. Anyone reading the Cosmos record can fetch the ledger receipt for that transaction, re-compute the hash from the record they were given, and confirm that what they are reading is what was committed.
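The recompute-and-compare step can be sketched in a few lines. The record shape here is illustrative, using the field names from the article; a real record would carry many more fields, all of which sit outside the hashed validationResult:

```python
import hashlib
import json

def verify_bridge(cosmos_record: dict) -> bool:
    """Re-compute the validation hash from the full operational record
    and compare it with the hash that was committed to the ledger."""
    recomputed = hashlib.sha256(
        json.dumps(cosmos_record["validationResult"], sort_keys=True).encode()
    ).hexdigest()
    return recomputed == cosmos_record["ledgerContentHash"]

# Illustrative record: the ledgerContentHash is what the write path committed.
result = {"score": 0.93, "verdict": "plausible"}
record = {
    "creditId": "credit-001",
    "validationResult": dict(result),
    "ledgerTransactionId": "2.145",
    "ledgerContentHash": hashlib.sha256(
        json.dumps(result, sort_keys=True).encode()
    ).hexdigest(),
}

assert verify_bridge(record)                 # untouched record verifies
record["validationResult"]["score"] = 0.99
assert not verify_bridge(record)             # any post-hoc edit fails the check
```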
Collections are the unit of audit scope
Every ledger entry goes into a named collection. In ConservAxion we key them by programme:
collection_id = f"credits-{transaction.get('programme', 'default')}"
So a solar energy credit writes to credits-cetth_solar; a biodiversity credit writes to credits-biodiversity. Collections do two useful things for audit. First, they let an auditor enumerate every credit written under a given programme by walking a single collection — you do not have to expose the whole ledger to prove the integrity of one programme. Second, they give you a natural retention / access-control boundary: a funder auditing the solar programme does not need, and should not get, a handle on biodiversity records.
The separate ProofLedger helper in the codebase generalises the pattern. It takes a compact proof record, writes it unconditionally to a Cosmos container — so you always have a local, queryable audit trail — and mirrors to Confidential Ledger when the ACL_ENDPOINT app setting is configured. Collections there are keyed by event type (photo-validation, satellite-assessment, and so on), which lets an auditor scope to a specific verification pipeline rather than to a specific funding programme. Two different audit axes, the same underlying primitive.
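The shape of that helper can be sketched with injected writers in place of the real Cosmos and ledger clients. Everything here is illustrative — `record_proof` and its parameter names are hypothetical, not the actual ProofLedger API — but the control flow is the one described above: local write always, ledger mirror only when configured:

```python
from typing import Any, Callable, Optional

def record_proof(
    proof: dict[str, Any],
    cosmos_write: Callable[[dict[str, Any]], None],
    ledger_write: Optional[Callable[[dict[str, Any], str], dict[str, Any]]] = None,
) -> dict[str, Any]:
    """Always persist the proof locally; mirror it to Confidential Ledger
    only when a ledger writer is configured (i.e. ACL_ENDPOINT is set)."""
    collection_id = f"proofs-{proof.get('eventType', 'default')}"
    cosmos_write(proof)                  # unconditional local audit trail
    if ledger_write is not None:         # mirrored only when ACL is configured
        return ledger_write(proof, collection_id)
    return {"transactionId": "pending-ledger-unavailable",
            "collectionId": collection_id}

# Usage with an in-memory stand-in for the Cosmos container:
local: list[dict[str, Any]] = []
receipt = record_proof({"eventType": "photo-validation", "hash": "ab12"},
                       cosmos_write=local.append)
assert receipt["transactionId"] == "pending-ledger-unavailable"
assert local[0]["eventType"] == "photo-validation"
```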
Graceful degradation — because the ledger will, eventually, be unavailable
The most important non-obvious lesson from running this in pilot is that your user journey cannot block on ledger availability. Azure Confidential Ledger has excellent SLAs, but the wider story includes resource-provider latency, networking, and — for pilots — the awkward early phase where the ACL_ENDPOINT setting is literally not yet populated because the resource itself is still being provisioned as the funder conversation accelerates.
The credit-write path handles that by treating the ledger write as a best-effort step with an explicit reconciliation path:
ledger_receipt: dict[str, Any] = {
    # Sentinel receipt, overwritten only if the ledger write succeeds.
    "transactionId": "pending-ledger-unavailable",
    "collectionId": f"credits-{ledger_tx.get('programme', 'default')}",
    "contentHash": validation_hash,
}
try:
    ledger = _get_ledger()
    ledger_receipt = ledger.write_transaction(ledger_tx)
except Exception:
    # Queue for the reconciliation worker; the user journey continues.
    cosmos.queue_failed_ledger_write({
        "creditId": credit_id,
        "ledger_tx": ledger_tx,
    })
If the ledger write fails — or if the endpoint is not configured at all — the code still writes the credit record to Cosmos, still updates the square, still sends the donor their notification email. The credit record carries the sentinel "pending-ledger-unavailable" as its ledgerTransactionId, and the full payload lands in a FailedLedgerWrites container with status: pending, attempts: 0. A timer-triggered worker sweeps that container on a schedule, retries the ledger write, and when it succeeds, rewrites the credit record with the real transactionId.
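The worker itself is mechanically simple. This is an illustrative sketch with in-memory stand-ins for the FailedLedgerWrites container and the credit store — the function name, the `max_attempts` cap, and the store shapes are all hypothetical, not the actual codebase:

```python
from typing import Any, Callable

def sweep_failed_ledger_writes(
    pending: list[dict[str, Any]],
    credits: dict[str, dict[str, Any]],
    ledger_write: Callable[[dict[str, Any]], dict[str, Any]],
    max_attempts: int = 5,
) -> None:
    """Retry each queued ledger write; on success, replace the sentinel
    transactionId on the credit record and retire the queue item."""
    for item in list(pending):
        if item.get("attempts", 0) >= max_attempts:
            continue  # leave for alerting / manual review
        try:
            receipt = ledger_write(item["ledger_tx"])
        except Exception:
            item["attempts"] = item.get("attempts", 0) + 1
            continue
        credits[item["creditId"]]["ledgerTransactionId"] = receipt["transactionId"]
        pending.remove(item)

# Usage with a ledger stub that succeeds immediately:
credits = {"c1": {"ledgerTransactionId": "pending-ledger-unavailable"}}
pending = [{"creditId": "c1", "ledger_tx": {"creditId": "c1"}, "attempts": 0}]
sweep_failed_ledger_writes(pending, credits, lambda tx: {"transactionId": "2.58"})
assert credits["c1"]["ledgerTransactionId"] == "2.58"
assert pending == []
```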
The product-level invariant is: a credit always has either a real ledger receipt or a pointer to a queued reconciliation. It never has nothing. An auditor looking at the credit six months later sees the real transaction ID and can verify it. A donor looking at their dashboard five seconds after their square was adopted does not see a spinner that may or may not resolve. Both constituencies get the guarantee they actually need.
How an auditor verifies the chain
The reader-side story is what makes the whole pattern valuable. An auditor hands a transactionId to the ledger via get_receipt:
receipt = self._client.get_receipt(transaction_id=transaction_id)
What comes back is a cryptographic receipt — a payload containing the signed Merkle tree root that the transaction was committed under, the path from the transaction to that root, the ledger’s signing key identity, and the necessary metadata for offline verification. Concretely, an auditor armed with that receipt and the ledger’s published identity certificate — which ConservAxion exposes on its admin surface so it does not have to be fetched at audit time — can verify the transaction was committed, in the order the ledger claims, without asking the ledger or the platform operator anything further.
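The Merkle-path part of that check can be illustrated with a greatly simplified sketch. Real CCF receipts also bind write-set digests, node identity, and a signature over the root, all of which this omits; it shows only the fold-siblings-to-the-root step that anchors a transaction in the tree:

```python
import hashlib

def verify_merkle_path(leaf_hash: str,
                       path: list[tuple[str, str]],
                       root: str) -> bool:
    """Fold sibling hashes along the path from the leaf and compare the
    result to the (signed) root. `side` says which side the sibling is on."""
    node = bytes.fromhex(leaf_hash)
    for side, sibling in path:
        sib = bytes.fromhex(sibling)
        node = hashlib.sha256(sib + node if side == "left" else node + sib).digest()
    return node.hex() == root

# Build a two-leaf tree by hand and verify leaf 0 against its root:
def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

leaf0, leaf1 = h(b"tx-0"), h(b"tx-1")
root = h(leaf0 + leaf1).hex()
assert verify_merkle_path(leaf0.hex(), [("right", leaf1.hex())], root)
assert not verify_merkle_path(leaf1.hex(), [("right", leaf1.hex())], root)
```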
The “verify this credit” button on the donor’s certificate view does exactly this, in-browser. It fetches the receipt, re-computes the hash from the Cosmos record, walks the Merkle path, and checks the signatures against the cached ledger identity. If any part of the check fails — a field was changed in Cosmos after the fact, the receipt is for a different transaction, the ledger identity has been rotated in a way the client did not know about — the button renders red. The promise to the donor is operational, not rhetorical: if it says green, a journalist or an auditor can reproduce that green from the same artefacts, without trusting the platform.
What this is not
It is worth being explicit about the limits of the pattern, because the adjacent concepts sound similar and the distinctions matter.
Confidential Ledger is not a blockchain. There is no proof-of-work, no token, no public mempool, no community of validators. The tamper-evidence comes from CCF’s governance protocol running inside SGX, signed by keys whose identity Microsoft attests to. If you already have a trust relationship with Microsoft as a cloud operator — which you do, if you are on Azure at all — then CCF’s threat model inherits that trust and adds cryptographic protection against operator tampering inside the ledger surface itself. That is usually the right threat model for an impact-verification platform running in one legal jurisdiction, and it comes without the operational weight of a public chain.
It is not a replacement for Cosmos DB. You do not query the ledger. Collections are enumerable, but slowly, and the contents are designed for small committed payloads rather than rich document storage. The operational record stays in your operational store; the ledger is there to anchor the claim that the operational record was not altered after the fact.
It does not make your AI correct. Nothing does. A Foundry vision model that misclassifies an invasive species photo will write a wrong validation result, that wrong result will be hashed, and the hash will be committed to an unimpeachable ledger. What the ledger guarantees is that the wrong answer cannot be silently edited to become a right one, and — equally important — that a right answer cannot be silently edited to become a convenient one. That is a weaker guarantee than “the AI is correct,” but it is exactly the guarantee the market actually needs, and it is the one that existing impact-credit schemes have most visibly failed to provide.
Why this fits the Foundry story
Azure AI Foundry sits next to Confidential Ledger in the confidential-computing surface area of Azure. In ConservAxion the connection is operational: Foundry models decide whether a reading is plausible, a photo is of what the custodian claims it is of, or an NDVI assessment shows genuine regrowth. The ledger commits to the decision. Together they let the platform claim: “Here is what the model said. Here is the timestamp it said it. Here is cryptographic proof neither was changed after the fact, verifiable by a third party with a single HTTP call.”
That is the shape of a credible AI-driven impact claim. It is also the shape of a credible AI-driven claim in any domain with a long-horizon audit — clinical-trial adjudication, financial conduct review, provenance for training data used in regulated applications. The specifics of what goes in the hash change. The pattern — small deterministic fingerprint on the ledger, rich record in the operational store, graceful-degradation reconciliation queue, in-browser verification on the reader side — does not.
If you are building on Foundry and your product has to be trusted by someone who is not currently in the room, the ledger is cheaper, simpler, and more defensible than any of the more exotic trust primitives you might be tempted to reach for. Start there.