These companies all wax on about how important context engineering is yet not one of them has released acceptable tooling for end users to visualize and understand the context window as it grows and shrinks during a session. Best Claude code can do? Warn you when you hit 80% full
try /context in Claude Code
A very crude tool. A good start, maybe, but it does not give us any information about the message part of the context, the part that matters.
We can't really do much with the information that x amount is reserved for MCP, tool calling or the system prompt.
> We can't really do much with the information that x amount is reserved for MCP, tool calling or the system prompt.
I actually think this is pretty useful information. It helps you evaluate whether an MCP server is worth the context cost, and it gives you a feel for how much context certain tool calls use up. I feel like there's a way to change the system prompt, so that helps you evaluate whether what you've got there is worth it too.
Sure, it's useful, once.
What we need is a way to manage the dynamic part of the context without just starting from zero each time.
My theory is that you will never get this from a frontier model provider because, as alluded to in a sibling thread, context window management is actually a good chunk of the secret sauce that makes these things effective, and companies do not want to give that up.
The article doesn't really give helpful advice here, but please don't vibe this.
Create evals from previous issues and current tests. Use DSPy on prompts. Create hypotheses for the value of different context packs, and run an eval matrix to see what actually works and what doesn't. Instrument your agents with OTel and stratify failure cases to understand where your agents are breaking.
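To make the eval-matrix idea concrete, here's a minimal sketch in Python. The run_agent() wrapper, the context packs, and the cases are all hypothetical placeholders; the point is just the variant-by-case grid, not the specifics:

```python
# Minimal eval-matrix sketch: context-pack variants x eval cases.
# run_agent(), the packs, and the cases are hypothetical placeholders.
from collections import defaultdict

CONTEXT_PACKS = {
    "baseline": [],                              # no extra context
    "readme_only": ["README.md"],                # hypothesis: docs help
    "readme_plus_tests": ["README.md", "tests/"],
}

EVAL_CASES = [
    # (task prompt, substring the answer must contain) -- built from past issues
    ("Fix the off-by-one in pagination", "page_size"),
    ("Add retry logic to the HTTP client", "backoff"),
]

def run_agent(task: str, context_files: list[str]) -> str:
    """Placeholder: swap in a real call to your agent with the given context pack."""
    return ""

results = defaultdict(list)
for pack_name, files in CONTEXT_PACKS.items():
    for task, expected in EVAL_CASES:
        results[pack_name].append(expected in run_agent(task, files))

for pack_name, outcomes in results.items():
    print(f"{pack_name}: {sum(outcomes)}/{len(outcomes)} passed")
```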
OTel meaning OpenTelemetry? Does it have special capabilities for tracking agents?
Yes, there is an OTel standard for agent traces. You can instrument agents that don't natively support OTel via Bifrost.
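For flavor, a rough sketch of hand-instrumenting an agent step with the OpenTelemetry Python API. The gen_ai.* attribute names follow the GenAI semantic conventions, but the span name, model name, and call_llm() are made up for illustration:

```python
# Manual agent-span instrumentation sketch (opentelemetry-api).
# Without an SDK/exporter configured, the tracer is a no-op.
from opentelemetry import trace

tracer = trace.get_tracer("my-agent")  # tracer name is arbitrary

def agent_step(prompt: str) -> str:
    with tracer.start_as_current_span("agent.llm_call") as span:
        span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
        response = call_llm(prompt)  # hypothetical LLM call
        span.set_attribute("gen_ai.usage.output_tokens", len(response.split()))
        return response
```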
How hard is DSPy to set up?
Isn't it a programming language type thing?
Can you even integrate that into an existing codebase easily?
It's pretty straightforward; different optimizers have different requirements. Some require example inputs/outputs, others will just optimize on whatever you've got. You can use Codex/Claude Code to set it up in order to bootstrap quickly; they're decent at it.
Does DSPy support structured outputs?
Yes, I was using it for structured outputs before the dedicated structured outputs got their act together.
Yes, using signatures with types.
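For example, a minimal typed-signature sketch, assuming a recent DSPy release; the model name and fields are just placeholders:

```python
# Typed DSPy signature sketch: output fields are parsed into the annotated types.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any supported model works

class ExtractIssue(dspy.Signature):
    """Extract structured fields from a bug report."""
    report: str = dspy.InputField()
    severity: int = dspy.OutputField(desc="1 (low) to 5 (critical)")
    affected_files: list[str] = dspy.OutputField()

extract = dspy.Predict(ExtractIssue)
result = extract(report="Crash in parser.py when the config file is empty")
print(result.severity, result.affected_files)
```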
I find you can give it a task and the full context in your first message, and also (a) ask what files are needed to understand and complete the task, and (b) ask if there's anything ambiguous about the task/question. Then, when you get the response, create a new chat with just the files it recommends, with the ambiguities explained in the first message. Sometimes you need a couple of rounds of this.
Then you will have a good starting point, with less chance of running out of space before solving the task.
If you can't give it full context at the beginning, you can give it a tree listing of the files involved, and maybe a couple of READMEs (if there are any), and ask it to work out which files are needed, giving it a couple of files at a time, at its suggestion.
I've been playing around with Apple's Foundation Models; their on-device LLM has a 4k context window. That's really been an interesting exercise in context engineering, coming from others like Claude and GPT. I think those larger context windows have made me take context engineering for granted.
It's kind of useful, but I suppose they just admit that failure rate increases with large context windows. My guess is that's what happened at the presentation of those Meta glasses, where the model would not do what was asked.
Another interesting thought is that long-horizon tasks need different tooling, and with the shift to long-running tasks you can use cheaper models as well. None of the big providers have good tools for that at the moment, so the only thing they can say is: fix your contexts, but keep using our models.
I think "output engineering" is equally as important, and steering with grammar (structured output with json schema or CFGs directly) is a huge win there I find:
https://platform.openai.com/docs/guides/function-calling#con...
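As a concrete sketch of the JSON-schema flavor, here's roughly what it looks like against the OpenAI chat completions API; the model name and schema are placeholders, and the parameter shape should be checked against your SDK version:

```python
# Constrain the model's output to a JSON schema (strict structured outputs).
from openai import OpenAI

client = OpenAI()

schema = {
    "name": "triage_result",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "category": {"type": "string", "enum": ["bug", "feature", "question"]},
            "confidence": {"type": "number"},
        },
        "required": ["category", "confidence"],
        "additionalProperties": False,
    },
}

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Triage this report: 'App crashes on startup'"}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(resp.choices[0].message.content)  # conforms to the schema in strict mode
```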
Oh yeah, this is huge! I instruct agents to do a few things in this vein that are big improvements:
1. Have agents emit chatter in a structured format: hypotheses, evidence, counterfactuals, invariants, etc. Fully natural-language agent chatter is shit for observability, and if you have structured agent output you can actually run script hooks that are very powerful in response to agent input/output.
2. Have agents summarize key evidence from tool calls, then just drop the tool-call output from context (you can give them a tool to retrieve the old value without recomputation: cache tool output in Redis and give them a key to retrieve it later if needed; rough sketch below). Tool calls dominate context bloat, and once you've extracted the evidence, the original tool output is very low value.
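A minimal sketch of that cache-and-summarize pattern, assuming a local Redis instance and a hypothetical summarize() step; the key scheme is made up:

```python
# Stash raw tool output in Redis; keep only a summary plus a retrieval key in context.
import hashlib
import redis

r = redis.Redis()  # assumes a local Redis instance

def compact_tool_result(tool_name: str, raw_output: str, summarize) -> dict:
    key = f"tool:{tool_name}:{hashlib.sha256(raw_output.encode()).hexdigest()[:12]}"
    r.set(key, raw_output, ex=3600)          # keep the raw output around for an hour
    return {
        "summary": summarize(raw_output),    # short evidence stays in context
        "cache_key": key,                    # agent can fetch the full output later
    }

def retrieve_tool_output(cache_key: str) -> str:
    """Tool exposed to the agent to re-read cached output without recomputation."""
    value = r.get(cache_key)
    return value.decode() if value else "expired"
```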
I think any meaningful context engineering strategies will be trade secrets.
Why do you think that?
Competitive edge. Some agents will be better than others, and therefore worth paying for. For example, if one writes an AI trading agent, there's no reason to share it, similar to how it is at the moment with regular trading algos.
I’m not saying it won’t eventually be known, but not in these initial stages.
The only thing separating Claude, Gemini and ChatGPT is their context and prompt engineering, assuming the frontier models belong to the same class of capability. You can absolutely release a competitor to these things that could perform better for certain things (or even all things, if you introduce brand new context engineering ideas), if you wanted to.
No, I mean why do you think that effective context engineering will remain a black art, instead of becoming something with standard practices that work well for most use cases?
I can’t say it will remain a black art because the tech itself creates new paradigms constantly. An LLM can be fine tuned with context engineering examples, similar to Chain Of Thought tuning, and that’s how we get a reasoning loop. With enough fine tuning, we could get a similar context loop, in which case those keeping things hidden will be washed away with new paradigms.
Even if someone fine-tuned an LLM with this type of data, DeepSeek has shown that they can just use a teacher-student strategy to steal from whatever model you trained (exfiltrate your value-add, which is how they stole from OpenAI). Stealing is already a thing in this space, so don't be shocked if over time you see a lot more protectionism (something we already see geopolitically on the hardware front).
I don’t know what’s going to happen, but I can confidently say that if humans are involved at this stage, there will absolutely be some level of information siloing, and stealing.
——
But to directly answer your question:
"… instead of becoming something with standard practices that work well for most use cases?"
In no uncertain terms, the answer is because of money.
Idk if trade secrets really exist in a world where engineers at every level hop between the same x companies every other Monday.
Maybe, but we're getting to a place where each LLM call is cheaper and faster and has a larger context, so it may not matter long term.
Context is often not the only issue. Really the issue is attention: context is a factor in how well the LLM keeps its attention on the broad scope of a task, but anecdotally you can easily observe the thing forget or go off the rails when only a fraction of the context window is being used. Oftentimes it's effective to just say "don't ever go above 20% of the max".
Some of that is, or at least was, down to the training: extending the context window but not training on sufficiently long data or using weak evaluation metrics caused issues. More recent models have been getting better, though long context performance is still not as good as short context performance, even if the definition of "short context" has been greatly extended.
RoPE is great and all, but doesn't magically give 100% performance over the lengthened context; that takes more work.
Imagine where we would be if academia or open source had this train of thought.
No algorithms, no Linux, no open protocols, maybe not even the internet.
Sure, it’s a horrible attitude. With that said, there is a time and place for everything. At the very beginning of AI, which is where we are, it’s not necessarily evil to carve out your advantages and share later.