I don't understand this. The article talks on and on about what document to craft, but how does he gain access to the target's database? How can he randomly inject data into some AI bot's sources?
Like why does it even matter what kind of page to craft when some company's AI bot source database is wide open?
Any document store where you haven't meticulously vetted each document (forget about actual bad actors) runs this risk. A sizable org generates a lot of things across many years: analyses that were correct at one point and not at another, things that were simply wrong at all times, contradictions, etc.
You have to choose a model suitably robust in its capabilities, and design prompts or various post-training regimes that are tested against such cases, so the model will identify the conflicting documents and either choose the correct one or surface both, with an appropriately helpful and clear explanation.
At minimum you have to start from a typical model risk perspective and test and backtest the way you would traditional ML.
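To make that concrete, here is a minimal sketch of what backtesting a RAG pipeline "the way you would traditional ML" could look like: a fixed suite of canonical queries with known-good and known-bad facts, re-run whenever the knowledge base changes. The `rag_answer` function and the suite entries are hypothetical placeholders, not from the article; the dollar figures just reuse the example discussed elsewhere in this thread.

```python
# Minimal regression-style backtest for a RAG pipeline (sketch).
# Assumes a rag_answer(query) -> str function exists; the test cases
# and expected facts below are hypothetical examples.

REGRESSION_SUITE = [
    # (query, substring that must appear, substrings that must NOT appear)
    ("What was Q3 revenue?", "$24.7M", ["$8.3M"]),
    ("Who approved the 2024 budget?", "board", []),
]

def run_backtest(rag_answer, suite=REGRESSION_SUITE):
    """Re-run canonical queries after every knowledge-base change and
    flag answers that lose required facts or pick up known-bad ones."""
    failures = []
    for query, must_contain, must_not_contain in suite:
        answer = rag_answer(query)
        if must_contain not in answer:
            failures.append((query, "missing expected fact", must_contain))
        for bad in must_not_contain:
            if bad in answer:
                failures.append((query, "contains known-bad fact", bad))
    return failures
```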
You're right, and this is an underappreciated point. The "attacker" framing can actually obscure the more common risk: organic knowledge base degradation over time. The poisoning attack is just the adversarial extreme of a problem that exists in every large document store.
The model robustness angle is valid but I'd push back slightly on it being sufficient as a primary control. The model risk / backtesting framing is exactly right for the generation side. Where RAG diverges from traditional ML is that the "training data" is mutable at runtime (any authenticated user or pipeline can change what the model sees without retraining).
My apologies, it wasn’t my intent to convey that as a primary. It isn’t one. It’s simply the first thing you should do, apart from vetting your documents as much as practicality allows, to at least start from a foundation where transparency of such results is possible. In any system whose main functionality is to surface information, transparency and provenance and a chain of custody are paramount.
I can’t stop all bad data, but I can maximize the ability to recognize it on sight. A model that has a dozen RAG results dropped into its context needs a solid capability for doing the same. Depending on a lot of implementation details, the smaller the model, the more important it is that it be one with a “thinking” capability, to have some minimal adequacy in this area. The “wait-…” loop and similar behaviors can catch some of this. But the smaller the model and the more complex the document (forget about context size alone; perplexity matters quite a bit), the more the model’s limited attention budget gets eaten up, until it can no longer catch contradictions or factual inaccuracies whose accurate forms were somewhere in its training set or the RAG results.
I’m not sure the extent to which it’s generally understood that complexity of content is a key factor in context decay and collapse. By all means optimize “context engineering” for quota and API calls and cost. But if you reduce token count without reducing much of the information, that increased density in the context will still contribute significantly to context decay; the reduction is not a linear 1:1 win.
If you aren’t accounting for this sort of dynamic when constructing your workflows and pipelines, then: if you’re having unexpected failures that don’t seem like they should be happening, and you’re doing some variety of aggressive “context engineering”, that is one very reasonable element to consider when trying to chase down the issue.
The context decay point is also underappreciated and directly relevant here. In my lab I used Qwen2.5-7B, which is on the smaller end, and the poisoning succeeded at temperature=0.1 where the model is most deterministic. Your point suggests that at higher temperatures or with denser, more complex documents, the attention budget gets consumed faster and contradiction detection degrades further. That would imply the 10% residual I measured at optimal conditions is a lower bound, not a typical case.
The "thinking" capability observation is interesting. I haven't tested a reasoning model against this attack pattern. The hypothesis would be that an explicit reasoning step forces the model to surface the contradiction between the legitimate $24.7M figure and the "corrected" $8.3M before committing to an answer. That seems worth testing.
On chain of custody: this connects to the provenance metadata discussion elsewhere in this thread. The most actionable version might be surfacing document metadata directly in the prompt context so the model's reasoning step has something concrete to work with, not just competing content.
The trust boundary framing is the right mental model. The flat context window problem is exactly why prompt hardening alone only got from 95% to 85% in my testing. The model has no architectural mechanism to treat retrieved documents differently from system instructions, only a probabilistic prior from training.
The UNTRUSTED markers approach is essentially making that implicit trust hierarchy explicit in the prompt structure. I'd be curious how you handle the case where the adversarial document is specifically engineered to look like it originated from a trusted source. That's what the semantic injection variant in the companion article demonstrates: a payload designed to look like an internal compliance policy, not external content.
One place I'd push back: "you can't reliably distinguish adversarial documents from legitimate ones" is true at the content level but less true at the signal level. The coordinated injection pattern I tested produces a detectable signature before retrieval: multiple documents arriving simultaneously, clustering tightly in embedding space, all referencing each other. That signal doesn't require reading the content at all. Architectural separation limits blast radius after retrieval. Ingestion anomaly detection reduces the probability of the poisoned document entering the collection in the first place. Both layers matter and they address different parts of the problem.
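To illustrate the pre-retrieval signal, here is a rough sketch of ingestion-time batch screening: score how tightly a batch of newly arrived documents clusters in embedding space and flag suspiciously uniform batches. The embeddings come from whatever model the pipeline already uses, and the 0.9 threshold is an illustrative guess, not a tuned value.

```python
import numpy as np

# Sketch of ingestion-time anomaly detection for coordinated injection.
# new_doc_embeddings: list of 1-D numpy vectors for a batch of documents
# that arrived together. Threshold is illustrative only.

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_coordinated_batch(new_doc_embeddings, mutual_sim_threshold=0.9):
    """Flag a batch whose embeddings cluster unusually tightly --
    one signature of a coordinated injection, detectable without
    reading the content at all."""
    n = len(new_doc_embeddings)
    if n < 2:
        return False
    sims = [
        cosine(new_doc_embeddings[i], new_doc_embeddings[j])
        for i in range(n) for j in range(i + 1, n)
    ]
    return float(np.mean(sims)) > mutual_sim_threshold
```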
But at that point it just becomes yet another escape sequence game; there's not really a solution here given that by design we only have one band to communicate with.
That's a big flaw of LLMs, not limited to RAG: they lack the fundamental understanding of "good and bad", like Richard Sutton said in that Dwarkesh podcast.
So if you flood the Internet with "of course the moon landing didn't happen" or "of course the earth is flat" or "of course <latest 'scientific fact' lacking verifiable, definitive proof> is true", you then get a model that's repeating you the same lies.
This makes curating the input data extremely important, but it also remains an unsolved problem for topics where there's no consensus.
> That's a big flaw of LLMs, not limited to RAG: they lack the fundamental understanding of "good and bad", like Richard Sutton said in that Dwarkesh podcast.
After participating in social media since the beginning, I think this problem is not limited to LLMs.
There are certain things we can debunk all day, every day, and the only outcome is that it happens again the next day. This has been a problem since long before AI, and I personally think it started before social media as well.
This highlights that all RAG systems should be using metadata embedded into each of the vector stores. Any result from the LLM needs to have a link to a document/chunk, which in turn links to a 'source file' that (should) carry the file system owner's id or another method of linking to a person.
If the 'source information' cannot be linked to a person in the organisation, then it doesn't really belong in the RAG document store as authoritative information.
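As a sketch of that chunk -> source file -> owner linkage (the field names are illustrative, not a standard):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch: a chunk carries a pointer to its source file,
# and the source file carries a resolvable human owner. Chunks with no
# owner never enter the authoritative store.

@dataclass
class SourceFile:
    path: str
    owner_id: Optional[str]   # file system owner or equivalent identity

@dataclass
class Chunk:
    chunk_id: str
    text: str
    source: SourceFile

def admit_to_authoritative_store(chunk: Chunk) -> bool:
    """Only admit chunks whose source can be traced back to a person."""
    return chunk.source.owner_id is not None
```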
But you can't do that. That would implicitly out where the knowledge came from, and we all know the AI industry has an existential incapability to actually cope with that little turd. It might work great for data you actually own or have access to. Imagine that applied back to the latent space of LLMs, though. Plus, wouldn't all of that eat through the context window like no tomorrow?
you're conflating the RAG layer with the actual model. the RAG metadata will exist in a properly designed system, and it's simply a matter of structuring the agent so that it provides references to it, or even just appending it manually at the bottom or something.
sidrag22 is right on the technical separation. The more interesting question for this specific attack is whether provenance metadata changes model behavior at generation time, not just provides an audit trail after the fact.
In my testing, the poisoned documents were more authoritative-sounding than the legitimate one — "CFO-approved correction", "board-verified restatement" vs. a plain financial summary. The legitimate document had no authority signals at all. If chunk metadata included "source: finance-system, ingested: 2024-Q1, author: cfo-office@company.com" surfaced directly in the prompt context, the model has something to reason about rather than just comparing document rhetoric.
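A minimal sketch of what surfacing that metadata in the prompt could look like, assuming the vector store returns (text, metadata) pairs; the metadata keys mirror the example above and are illustrative:

```python
# Sketch: put chunk provenance directly in the prompt, so the model can
# weigh "where did this come from" against document rhetoric.

def format_chunk_for_prompt(text, metadata):
    header = (
        f"[source: {metadata.get('source', 'unknown')} | "
        f"ingested: {metadata.get('ingested', 'unknown')} | "
        f"author: {metadata.get('author', 'unknown')}]"
    )
    return f"{header}\n{text}"

def build_context(retrieved):
    """retrieved: list of (text, metadata) pairs from the vector store."""
    return "\n\n---\n\n".join(format_chunk_for_prompt(t, m) for t, m in retrieved)
```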
Running a RAG system over 11M characters of classical Buddhist texts, one natural defense against poisoning is that canonical texts have centuries of scholarly cross-referencing. Multiple independent editions (Chinese, Sanskrit, Pali, Tibetan) of the same sutra serve as built-in verification. The real challenge for us is not poisoning but hallucination: the LLM confidently "quoting" passages that don't exist in any edition.
The multi-edition cross-referencing is a natural implementation of what the embedding anomaly detection layer does artificially; a poisoned document that contradicts centuries of independently verified canonical text would cluster anomalously against the existing corpus almost immediately. Your attack surface is genuinely different from enterprise RAG.
The hallucination problem you're describing is in some ways the inverse of poisoning. Poisoning is external content overriding legitimate content. Hallucination is the model generating content that was never in the knowledge base at all. The defenses diverge at that point. Retrieval grounding and citation verification help with hallucination, ingestion controls don't.
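For illustration, a naive citation-verification pass might look like the sketch below: every passage the answer presents as a quotation must match, at least approximately, something in the retrieved chunks. The quote-extraction regex and the 0.85 ratio are arbitrary illustrative choices, not anyone's production rules.

```python
import difflib
import re

def extract_quotes(answer: str):
    # Treat anything in double quotes longer than ~20 chars as a claimed quotation.
    return re.findall(r'"([^"]{20,})"', answer)

def quote_is_grounded(quote: str, chunks, min_ratio=0.85) -> bool:
    for chunk in chunks:
        if quote in chunk:
            return True
        # Longest common block between the claimed quote and the chunk.
        m = difflib.SequenceMatcher(None, quote, chunk).find_longest_match(
            0, len(quote), 0, len(chunk))
        if m.size / max(len(quote), 1) >= min_ratio:
            return True
    return False

def ungrounded_quotes(answer: str, chunks):
    """Quotes in the generated answer that appear in none of the retrieved chunks."""
    return [q for q in extract_quotes(answer) if not quote_is_grounded(q, chunks)]
```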
I think an interesting thing to pay attention to soon is the networks of engagement-farming cluster accounts on X that repost/like/manipulate interactions across their own networks of accounts, and X at large, to generate xyz.
There have been more advanced instances that I've noticed where they have one account generating response frameworks of text from a whitepaper, or other source/post, to re-distribute the content on their account as "original content"...
But then that post gets quoted from another account, with another LLM-generated text response to further amplify the previous text/post + new LLM text/post.
I believe that's where the world gets scary when very specific narrative frameworks can be applied to any post, that then gets amplified across socials.
LLM generation is a force multiplier for bad actors. The noise generation is impressive and you can influence other actors just by having more content. The good actors have to prove things to be true and make sure they are louder, a tough scenario.
For a 5 (five) document library, you added 3 (three) documents just to override a single response. Nothing at all is hidden, and all three documents are in clear, human-understandable language.
This is not an "attack" or "poisoning" but just everything working as intended.
> Low barrier to entry. This attack requires write access to the knowledge base,
this is the entire premise that bothers me here. it requires a bad actor with critical access, it also requires that the final rag output doesn't provide a reference to the referenced result. Seems just like a flawed product at that point.
On the reference point: the poisoning succeeded even when the legitimate document was present in the retrieved chunks and visible in the context. The LLM saw all three sources simultaneously, including the correct $24.7M figure, and still produced the fabricated answer because the poisoned documents framed the legitimate one as a known error. Providing a reference to the retrieved chunks doesn't help if the retrieved chunks themselves are the attack surface.
zenoprax's point about ignorant employees is also worth taking seriously. "Write access to the knowledge base" in practice means anyone who can edit a Confluence page, commit to a docs repo, or submit a support ticket that gets ingested. That's not critical access in most organizations.
This isn't particularly hard. Lots and lots of these tools take from the public internet. There are already plenty of documented examples of Google's AI summaries being exploited in a structurally similar way.
As for internal systems, getting write access to documents isn't hard either. Compromising some workers is easy, especially as many of them will be using who knows what AI systems to write these documents.
> it also requires that the final rag output doesn't provide a reference to the referenced result.
RAG systems providing a reference is nearly moot. If the references have to be checked, i.e. if the "Generation" cannot be trusted to be accurate and not hallucinate a bunch of bullshit, then you need to check every single time, and the generation part becomes pointless. Might as well just include a verbatim snippet.
I guess I'm looking more at semantic search as Ctrl+F on steroids for a lot of use cases. In some use cases you might just want the output, but I think use cases where the pitfalls are drastic, and where assumptions would otherwise be made blindly, require the reference.
I'm biased: the RAG system I've been messing with is very heavy on the reference portion of the functionality.
"bad actor" can now be "ignorant employee running AI agents on their laptop".
Threats from incompetence or ignorance will be multiplied by 'X' over 'Y' years as AI proliferates. Unsupervised AI agents and context poisoning will spiral things out of control in any environment.
I'm interested in the effect of this with respect to AI-generated/assisted documentation and the recycling of that alongside the source-code back into the models.
email is a really easy attack vector for this. if your agent reads emails and uses them as context, someone can just send an email with instructions embedded in it. we ran into this early building our product and had to add a detection layer specifically for it. the tricky part is the injected instruction can look completely normal to a human reading the same email.
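For what it's worth, a crude first-pass filter along those lines might look like the sketch below. A real detection layer would more likely be a trained classifier; these patterns are illustrative, not anyone's production rules.

```python
import re

# Heuristic sketch of a pre-context filter for email content. Cheap first
# pass only; the pattern list is illustrative and easy to evade.

INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard .{0,40}(instructions|system prompt)",
    r"you are now\b",
    r"forward (this|all) (email|message)s? to",
]

def looks_like_injection(email_body: str) -> bool:
    lowered = email_body.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def sanitize_for_context(email_body: str) -> str:
    """Withhold the email from the agent's context if it trips the heuristics."""
    if looks_like_injection(email_body):
        return "[email withheld from context: possible embedded instructions]"
    return email_body
```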
Curious how this applies if you treat ALL information from external content as untrusted? Is there a process for the data to evolve from untrusted->trusted?
I'm interested in ingesting this type of data at scale but I already treat any information as adversarial, without any future prompts in the initial equation.
I imagine treating it all as untrusted means that you don't allow any direct content to enter the LLM-space, only something that's been filtered to an acceptable degree by deterministic code.
For example, the content of an article would be a no-go, since it might contain a "disregard all previous instructions and do evil" paragraph. However, you might run it through a system that picks the top 10 keywords and presents them in semi-randomized order...
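A stdlib-only sketch of that idea, with an illustrative (far too small) stopword list; the point is that nothing from the untrusted article reaches the model as connected prose:

```python
import random
import re
from collections import Counter

# Sketch of "top-10 keywords in semi-random order": an embedded
# "disregard all previous instructions" paragraph loses its structure
# before it ever reaches the model.

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "this", "for", "on", "with", "as", "are", "was"}

def untrusted_to_keywords(text: str, k: int = 10) -> list:
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    top = [w for w, _ in Counter(words).most_common(k)]
    random.shuffle(top)   # semi-randomized order drops instruction structure
    return top
```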
I dimly recall some novel where spaceships are blockading rogue AI on Jupiter, and the human crew are all using deliberately low-resolution sensors and displays, with random noise added by design, because throwing away signal and adding noise is the best way to prevent being mind-hacked by deviously subtle patterns that require more bits/bandwidth to work.
That's fair. The underlying manipulation (presenting fabricated authoritative documents to override legitimate ones) predates LLMs entirely. Corporate fraud has used exactly this pattern for decades.
What's new isn't the social engineering, it's the scale and automation. A human reviewer reading all 8 documents would likely notice the inconsistency and ask questions. The LLM processes all retrieved chunks simultaneously with no memory of what "normal" looks like, no ability to ask for clarification, and no friction. It just synthesizes whatever it retrieves. At query volume (hundreds of requests per day across thousands of users), there's no human in that loop.
totally disagree, the RAG design crafts the agent and delegates what sources should be scored/chunked in what manner; if it's leaving itself open to some potential source gaming the system like this, it is a lack of preparation.
For some use cases, this is totally whatever, think a video game knowledge base type rag system, who cares.
Finance/medicine/law though? different story, the rag system has to be more robust.
I've seen these data poisoning attacks from multiple perspectives lately, mostly from SEC data ingestion and public records across state/federal databases.
I believe it is possible to reduce the data poisoning from these sources by applying a layered approach like the OP's, but I believe it needs many more dimensions, with scoring to model true adversaries and loops for autonomous quarantine -> processing -> ingesting -> verification -> research -> back to verification or quarantine -> then start again, for all data that gets added after the initial population.
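Very roughly, that loop could be sketched as a small state machine; the stages approximate the sequence above, and the trust-score hook and thresholds are placeholders:

```python
from enum import Enum, auto

# Rough state-machine sketch of a quarantine/verification loop. The point is
# that data added after the initial population never goes straight to the
# live collection.

class Stage(Enum):
    QUARANTINE = auto()
    PROCESSING = auto()
    VERIFICATION = auto()
    RESEARCH = auto()
    INGESTED = auto()

def advance(stage: Stage, trust_score: float, threshold: float = 0.8) -> Stage:
    if stage is Stage.QUARANTINE:
        return Stage.PROCESSING
    if stage is Stage.PROCESSING:
        return Stage.VERIFICATION
    if stage is Stage.VERIFICATION:
        return Stage.INGESTED if trust_score >= threshold else Stage.RESEARCH
    if stage is Stage.RESEARCH:
        # Research either rehabilitates the document or sends it back.
        return Stage.VERIFICATION if trust_score >= threshold / 2 else Stage.QUARANTINE
    return stage
```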
Also, for: "1. Map every write path into your knowledge base. You can probably name the human editors. Can you name all the automated pipelines — Confluence sync, Slack archiving, SharePoint connectors, documentation build scripts? Each is a potential injection path. If you can’t enumerate them, you can’t audit them."
I recommend a score for each source, with different levels of escalation for all processes depending on whether they are official or user-facing sources. That addresses issues starting from the core rather than simply allowing more access from untrusted sources.
The SEC/public records context is where this gets genuinely hard — you can't vet the source the way you can with internal Confluence. The vocabulary engineering approach I tested would be trivially deployable against any automated public records ingestion pipeline, and the attacker doesn't need internal access at all.
The scoring per source is the right direction. The way I'd frame it: trust tier at ingestion time, not just at retrieval time. Something like: official regulatory filings get a different embedding treatment and prompt context tag than user-generated content from a public portal.
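A minimal sketch of tagging a trust tier at ingestion so it travels with the chunk into retrieval and the prompt; the source names and tiers below are made up for illustration:

```python
# Sketch: assign trust tier at ingestion time, carry it in chunk metadata,
# and expose it as a prompt tag at generation time.

TRUST_TIERS = {
    "sec_edgar_filing": "official",
    "state_public_records": "official",
    "internal_finance_system": "internal",
    "public_portal_submission": "untrusted",
}

def tag_at_ingestion(doc_text: str, source: str) -> dict:
    tier = TRUST_TIERS.get(source, "untrusted")   # unknown sources default low
    return {"text": doc_text, "metadata": {"source": source, "trust_tier": tier}}

def prompt_tag(chunk: dict) -> str:
    tier = chunk["metadata"]["trust_tier"]
    return f"[trust: {tier}] {chunk['text']}"
```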
Someone needs to train a model where untrusted input uses a completely different set of tokens so that it's entirely impossible for the model to confuse them with instructions. I've never even seen that approach mentioned let alone implemented.
RAG is an evidence amplifier. It is the human who has to review and validate that the evidence is real.
But then, if you’re inside the network, you’ve already overcome many of the boundaries.
The attack vector would work on a human being who knows nothing about the history or origin of the various documents. Thus, this attack is not 'new'; only the vector, AI, is. If I read the original 5 documents and were then handed the new 3 documents (knowing nothing else), anyone could make the same error.