Anyone can generate an alternative chain of SHA-256 hashes. Perhaps you should consider timestamping, e.g. https://opentimestamps.org/. As for what the regulation says, I haven't looked, but perhaps it doesn't require the system to be actually tamper-proof.
The library is deliberately scoped as tamper-evident, not tamper-proof; it detects modification but does not prevent wholesale chain reconstruction by someone with storage access. The design assumes defence-in-depth: S3 Object Lock (Compliance mode) at the infrastructure layer, hash chain verification at the application layer.
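To make the tamper-evident property concrete, here is a minimal sketch of hash-chain verification (illustrative only; the function names and record layout are made up, not the library's actual API):

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash the previous link together with a canonical encoding of the record."""
    payload = prev_hash + json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def verify_chain(entries: list[dict], genesis: str = "0" * 64) -> bool:
    """Walk the chain and confirm every stored hash matches a recomputation."""
    prev = genesis
    for entry in entries:
        if entry["hash"] != entry_hash(prev, entry["record"]):
            return False  # tamper-evident: a modified record breaks the link here
        prev = entry["hash"]
    return True
```

This detects any in-place edit, but as the parent points out, someone with write access to storage could recompute the whole chain from the genesis hash, which is exactly why the infrastructure layer (Object Lock) has to carry the tamper-resistance.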
External timestamping (OpenTimestamps, RFC 3161) would definitely add independent temporal anchoring and is worth considering as an optional feature. From what I can see, Article 12 does not currently prescribe specific cryptographic mechanisms (though of course the assurance level would increase with one).
On the regulatory question: Article 12 requires "automatic recording" that enables monitoring and reconstruction, and current regulatory guidance does not require tamper-proof storage, only trustworthy, auditable records. The hash chain plus immutable storage is designed to meet that bar, but the point you raise is a good one.
Good one. QQ: if you store it hash-chained, how are you handling GDPR erasure requests? Isn't GDPR supposed to require erasure within 30 days, rather than 180? Do you recreate the chain, or do some sort of pseudonymization, or something else?
voxic11 is right that the AI Act creates a legal obligation that provides a lawful basis for processing under GDPR Article 6(1)(c).
To add to that, Article 17(3)(b) specifically carves out an exemption to the right to erasure where retention is necessary to comply with a legal obligation.
(So the defence works at both levels; you have a lawful basis to retain, and erasure requests don’t override it during the mandatory retention period).
That said, GDPR data minimisation (Article 5(1)(c)) still constrains what you log.
The library addresses this at write time today: the pii config lets you SHA-256-hash inputs/outputs before they hit the log and apply regex redaction patterns, so personal data need never enter the chain in the first place.
This enables the pattern of “Hash by default, only log raw where necessary for Article 12”.
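That write-time pattern looks roughly like this (a hedged sketch; the redaction patterns and field names are hypothetical, not the library's real pii config):

```python
import hashlib
import re

# Hypothetical redaction patterns -- the real pii config will differ.
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(text: str) -> str:
    """Replace known PII shapes with placeholders before logging."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def log_fields(prompt: str, output: str, hash_raw: bool = True) -> dict:
    """Hash by default; only keep (redacted) raw text when explicitly required."""
    if hash_raw:
        return {
            "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        }
    return {"input": redact(prompt), "output": redact(output)}
```

The hashes still let you prove which input produced which output (by rehashing a candidate) without the plaintext ever entering the chain.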
For cases where raw content must be logged (e.g., full decision reconstruction for a regulator), we're planning a dual-layer storage approach. The hash chain would cover a structural envelope (timestamps, decision ID, model ID, parameters, latency, hash pointers) while the actual PII-bearing content (input prompts, output text) would live in a separate referenced object.
Erasure would then mean deleting the content object, and the chain would stay intact because it never hashed the raw content directly.
The regulator would also therefore see a complete, tamper-evident chain of system activity.
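A minimal sketch of the dual-layer idea (the store is a stand-in dict and all names are assumptions, since this part isn't built yet):

```python
import hashlib
import json
import uuid

content_store: dict[str, str] = {}  # stand-in for a separate object store (e.g. S3)

def write_record(prev_hash: str, metadata: dict, raw_content: str) -> dict:
    """Chain only a structural envelope; raw content lives in a referenced object."""
    content_id = str(uuid.uuid4())
    content_store[content_id] = raw_content
    envelope = {
        **metadata,
        "content_ref": content_id,
        "content_sha256": hashlib.sha256(raw_content.encode()).hexdigest(),
    }
    payload = prev_hash + json.dumps(envelope, sort_keys=True)
    return {"envelope": envelope,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def erase_content(record: dict) -> None:
    """GDPR erasure: drop the content object; the chained envelope is untouched."""
    content_store.pop(record["envelope"]["content_ref"], None)
```

Note the envelope still carries the content's SHA-256, so even after erasure you can prove what the deleted content's digest was, without being able to recover it.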
Thanks both for the replies.
Can't you make it simpler: encrypt the data, store the encryption key separately, and move the raw data to cold storage. If a user wants to erase, delete the encryption key, avoiding a massive recompute from the cold store. Do you think this is a better approach? It's not efficient, but at large scale (petabytes) this could work.
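For illustration, the per-record key-deletion pattern looks something like this (the XOR keystream below is a toy stand-in so the sketch stays self-contained; a real system would use authenticated encryption such as AES-GCM, and the dicts stand in for a KMS and cold storage):

```python
import hashlib
import os

key_store: dict[str, bytes] = {}     # per-record keys, kept apart from the data
cold_storage: dict[str, bytes] = {}  # encrypted blobs

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy cipher for illustration only -- NOT real cryptography.
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def store(record_id: str, plaintext: bytes) -> None:
    key = os.urandom(32)  # one key per record (or per data subject)
    key_store[record_id] = key
    cold_storage[record_id] = _keystream_xor(key, plaintext)

def read(record_id: str) -> bytes:
    return _keystream_xor(key_store[record_id], cold_storage[record_id])

def shred(record_id: str) -> None:
    # Crypto-shredding: deleting the key renders the ciphertext unrecoverable.
    del key_store[record_id]
```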
Developers make mistakes, though: if they miss encrypting something due to a bug in the code and then want to fix it, the hash chaining will be a problem.
IMO what you’re describing is essentially crypto-shredding.
It would definitely work (and when dealing with petabyte levels of data the simplicity of only having to delete the key is convenient).
We’re leaning toward the dual-layer separation I described, though (metadata separate from content), mainly because crypto-shredding makes every read (including regulatory reconstruction) depend on a key store.
In my view that’s a significant dependency for an audit log whose whole purpose is reliable reconstructability, whereas dual-layer lets the chain stand on its own.
Your point about developer mistakes is fair. It applies to dual-layer too, as your example shows, but I’d say crypto-shredding isn’t immune to mistakes either: deleting the key only works if the key and plaintext never accidentally leaked elsewhere (logs, backups, etc.).