How are you handling prompt injection across multi-step agent workflows?

(msukhareva.substack.com)

3 points | by AnViF 1 hour ago

1 comment

  • bhagyeshsp 1 hour ago
    This is interesting. I want to understand how much of a latency does Omega Walls introduce to the workflow? Or is it recommended for long-running workflows where a couple of additional seconds of latency is fine?

    I've built a simple AI-chat interface for one of my products, and I experienced a latency of 6000 ms after adding a router LLM to selectively inject context into the main LLM. Dropped that idea completely.

    EDIT: my bad, just read the term "multi-step agent workflows" in the title. Still, some idea about the added latency would be helpful.
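
    The router pattern the commenter describes can be sketched roughly as below. This is a minimal illustration, not their implementation: the model calls are simulated with `time.sleep`, and the latency figures, context buckets, and function names are all assumptions chosen to show why the extra router round trip adds seconds to every request.

    ```python
    import time

    # Assumed per-call latencies (illustrative only):
    ROUTER_LATENCY_S = 0.8   # small router model classifying the query
    MAIN_LATENCY_S = 1.5     # main model generating the answer

    # Hypothetical context buckets the router chooses between.
    CONTEXT_STORE = {
        "billing": "Billing FAQ and refund policy...",
        "tech": "API docs and troubleshooting guide...",
    }

    def router_llm(query: str) -> str:
        """Simulated router call: picks which context bucket to inject."""
        time.sleep(ROUTER_LATENCY_S)
        return "billing" if "refund" in query.lower() else "tech"

    def main_llm(query: str, context: str) -> str:
        """Simulated main-model call with the selected context prepended."""
        time.sleep(MAIN_LATENCY_S)
        return f"[answer grounded in: {context[:20]}...]"

    def answer(query: str) -> tuple[str, float]:
        start = time.perf_counter()
        bucket = router_llm(query)                    # extra sequential round trip
        reply = main_llm(query, CONTEXT_STORE[bucket])
        return reply, time.perf_counter() - start

    reply, elapsed = answer("How do I get a refund?")
    print(f"{reply} (total {elapsed:.1f}s)")
    ```

    Because the router and main calls run sequentially, total latency is the sum of both; that is the structural cost the commenter hit, independent of which models are used.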