This is interesting. I want to understand how much latency Omega Walls introduces into the workflow. Or is it intended for long-running workflows where a couple of extra seconds of latency is acceptable?
I built a simple AI-chat interface for one of my products and saw about 6000 ms of added latency after introducing a router LLM to selectively inject context into the main LLM. I dropped that idea completely.
EDIT: my bad, I just noticed the term "multi-agent workflow" in the title. Still, some idea of the added latency would be helpful.