- $10/month
- Copilot CLI for a Claude Code-style CLI, VS Code for GUI
- 300 premium requests (prompts) per month on Sonnet 4.5, or ~100 on Opus 4.6 (which bills at a 3x rate)
- One prompt only ever consumes one request, regardless of tokens used
- Agents auto plan tasks and create PRs
- "New Agent" in VS Code runs agent locally
- "New Cloud Agent" runs agent in the cloud (https://github.com/copilot/agents)
- Additional requests cost $0.04 each
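To make the math concrete with the numbers above: Opus 4.6 bills at a 3x rate, so one Opus prompt consumes 3 of the 300 monthly premium requests, which is where the ~100 Opus prompts figure comes from. Past the allowance, an extra Sonnet prompt costs $0.04 and an extra Opus prompt roughly 3 × $0.04 = $0.12.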
When you use the API
Please note I do actually read every line of code these reckless hacks generate haha.
There are some exceptions, e.g. Claude Max.
> Copilot Chat uses one premium request per user prompt, multiplied by the model's rate.
> Each prompt to Copilot CLI uses one premium request with the default model. For other models, this is multiplied by the model's rate.
> Copilot coding agent uses one premium request per session, multiplied by the model's rate. A session begins when you ask Copilot to create a pull request or make one or more changes to an existing pull request.
https://docs.github.com/en/copilot/concepts/billing/copilot-...
and now I see your comment mentions that explicitly. The output was quite unambiguous. :shrug:
That being said, I don't know why anyone would want to pay for LLM access anywhere else.
ChatGPT and claude.ai (free) and GitHub Copilot Pro ($100/yr) seem to be the best combination to me at the moment.
Use other flows under standard billing to do iterative planning, spec building, and resource loading for a substantive change set, e.g. something 5k+ LOC across 10+ files.
Then throw that spec document as your single prompt to the per-request-billed Copilot agent. Include a caveat along these lines: "We are being billed per user request, so go as far as possible given this prompt. If you encounter difficult, underspecified decision points, implement multiple options where feasible and indicate in the completion document where the user must make selections. Implement the specified test structures and run them against your implementation until they fully pass."
Most of my major chunks of code are written this way, and I never manage to use up the 100 available prompts.
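For the record, a minimal sketch of the kind of preamble I prepend to the spec, written as a reusable `.prompt.md` (frontmatter per VS Code's prompt-file convention; the wording is illustrative, not a magic incantation):

```markdown
---
description: One-shot implementation of a pre-built spec (illustrative)
---
We are being billed per user request, so go as far as possible in this
single session. The full spec follows below.

- At difficult, underspecified decision points, implement the plausible
  options side by side and flag the required selection in the completion
  document.
- Implement the test structures named in the spec and run them against
  your implementation until they fully pass.

<spec document pasted here>
```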
Just about the absolute best deal in the AI market.
A cloud agent works iteratively on your requests, making multiple commits.
I put large features into my requests and the agent has no problem making hundreds of changes.
In the past we had to buy an expensive license of some niche software, used by a small team, for a VP "in case he wanted to look".
It's worse in many gov agencies: whenever they buy software, if it's relatively cheap, everyone gets it.
Good job, Microsoft.
We use a “Managed DevOps Pool” in Azure. This lets you use Azure VM types of your choosing for build agents, while they can still use the exact same images as the regular managed build agents. That works well for us: we have no desire to manage the OS of our agents (doing updates, etc.), but we get to choose beefier hardware specs.
An annoying limitation though is that Microsoft’s images only work on “Gen 1” VMs, which limits available VM types.
Someone posted on one of Microsoft’s forums or GitHub repositories asking them to please update the images to also work on Gen 2 VMs. I can’t remember for sure which forum, but it was probably the “Azure Managed DevOps Pools” one.
Reply was “we can’t do anything about this, go post in forum for other team, issue closed”.
As far as I’m concerned, they’re all Microsoft Azure, so why should people have to make another post? At the very least move the issue to the correct place, or better yet, take it up internally with the other team, since it’s severely crippling your own “product”.
Useless and lazy employees.
see "low barriers for entry"
right, so basically the only people who can enter the market are those who are part of the same club that has brought us the stripped-down and dated wonders before us today.
Take the mobile phone market: there is basically no innovation going on these days, just small iterative steps and minor improvements, each new generation another sensor removed or new consumer-hostile bloat added, because everyone in the club agrees on how to fuck the consumer, irrespective of what the consumer wants. It's an illusion of choice.
(Source: submitted similar issue to different Agentic LLM provider)
I completely understand why some projects are in whitelist-contributors-only mode. It's becoming a mess.
Their email responses were broadly all like this -- fully drafted by GPT. The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included pointed to a service degradation and failure on Microsoft's side. A purely human mind would not have conceded the point so readily without some hedging or dilly-dallying or keeping some options open to avoid accepting blame.
Reminds me of an interaction I was forced to have with a chatbot over the phone for “customer service”. It kept apologizing, saying “I’m sorry to hear that.” in response to my issues.
The thing is, it wasn’t sorry to hear that. AI is incapable of feeling “sorry” about anything. It’s anthropomorphizing itself and aping politeness. I might as well have a “Sorry” button on my desk that I smash every time a corporation worth $TRILL wrongs me. Insert South Park “We’re sorry” meme.
Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer?
We need a law that forces management to be regularly exposed to their own customer service.
No, it is not better. I have spent $AGE years of my life developing the ability to determine whether someone is authentically providing me sympathy, and when they are, I actually appreciate it. When they aren’t, I realize that that person is probably being mistreated by some corporate monstrosity or they’re having a shit day, and I give them the benefit of the doubt.
> At least the computer isn’t being forced to lie to me.
Isn’t it though?
> We need a law that forces management to be regularly exposed to their own customer service.
Yeah, we need something. I joke with my friends about creating an AI concierge service that deals with these chatbots and alerts you when a human is finally somehow involved in the chain of communication. What a beautiful world, where we’ll be burning absurd amounts of carbon in some sort of antisocial AI arms race to try to maximize shareholder profit.
The exceptions are generally when people are scared, and sadly some people are scared all the time.
People require empathy and compassion; we need others to mirror our emotions and indeed to share them with us. We are social creatures, and it is not normal, healthy, or effective to experience (strong) emotions alone. Connecting emotionally with others is not a luxury or a weakness, and certainly not "bad"; it's how humans naturally and essentially function. Yes, it can be done badly, and you don't need to be powerless: if your partner comes to you terrified about a cancer diagnosis, acting terrified yourself isn't helpful; but accepting their emotions, seeing them, and responding with genuine emotions appropriate to the situation is essential.
Many highly analytical people - to use a vague, undefined term - tend to think that anyone who comes to them with a problem must want their problem analyzed and solved - if you have a big hammer then all problems are nails, I suppose. Sometimes that is desired but certainly not always, and it can work against what people really need.
My last comment was intended to be read in that context too, not about interacting with the people you're close to.
I still think you can have empathy on support calls; I'd even say it's important for the customer to be satisfied. They may be panicked, frustrated, exhausted, etc. Ignoring people's emotions gets bad results; it feels rude.
Of course there are limits, especially time; long stories are inappropriate. Still, I've had many empathetic, brief conversations with strangers on trains (literally) and elsewhere.
I'm glad you appreciate actual sympathy. But that's not what the conversation was about. You're getting mad at the wrong thing.
Also, putting aside everything else, an actual human response burns way more carbon than an AI response.
I haven’t had the pleasure of one of these phone systems yet. I think I’d still be more irritated by a human fake apology because the company is abusing two people for that.
At any rate, I didn’t mean for it to be some sort of contest, more of a lament that modern customer service is a garbage fire in many ways and I dream of forcing the sociopaths who design these systems to suffer their own handiwork.
The company can't have it both ways. Either they have to admit the AI "support" is bollocks, or they are culpable. Either way they are in the wrong.
As someone who takes pride in being thorough and detail oriented, I cannot stand when people provide the bare minimum of effort in response. Earlier this week I created a bug report for an internal software project on another team. It was a bizarre behavior, so out of curiosity and a desire to be truly helpful, I spent a couple hours whittling the issue down to a small, reproducible test case. I even had someone on my team run through the reproduction steps to confirm it was reproducible on at least one other environment.
The next day, the PM of the other team responded with a _screenshot of an AI conversation_ saying the issue was on my end for misusing a standard CLI tool. I was offended on so many levels. For one, I wasn’t using the CLI tool in the way it describes, and even if I was, it wouldn’t affect the bug. But the bigger problem is that this person thinks a screenshot of an AI conversation is an acceptable response. Is this what talking to semi-technical roles is going to be like from now on? I get to argue with an LLM by proxy of another human? Fuck that.
Sites like lmgtfy existed long before AI because people will always take short cuts.
There's still time for you to coach a model into creating a reply saying they are completely wrong, and send back a screenshot of that reply :-)) Bonus points for having the model include disparaging comments...
This is a peer-review.
> "Peer review"
No, unless your "peers" are bots who regurgitate LLM slop.
Let me slop an affirmative comment on this HIGH TRAFFIC issue so I get ENGAGEMENT on it and EYEBALLS on my vibed GitHub PROFILE and get STARS on my repos.
It was a mess before, and it will only get worse, but at least I can get some work done 4 times a day.
That repo alone has 1.1k open pull requests, madness.
The UI can't even be bothered to show the number of open issues, 5K+ :)
Then they "fix it" by making issues auto-close after 1 week of inactivity, meanwhile PRs submitted 10 years ago remains open.
It's definitely a mess, but based on the massive decline in signal vs noise of public comments and issues on open source recently, that's not a bad heuristic for filtering quality.
A second time. When they already closed your first issue. Just enjoy the free ride.
Ralph loops for free...
I would have done the same.
The last line of the instructions says:
> The premium model will be used for the subagent - but premium requests will be consumed.
How is that different from just calling the premium model directly, if it's using premium requests either way?
If this report is to be believed, they didn't implement billing correctly for sub-agents, allowing more costly models to be run for free as sub-agents.
So 10 sub-agents + 1 main agent = 11 requests.
At Opus's 3x rate, 11 requests = 33 premium requests.
It is not free money.
This could be the same: they know devs mostly prefer Cursor and/or Claude over Copilot.
On the other hand, since they own GitHub they can (in theory) monitor the downloads, check for IPs belonging to businesses, and use it as evidence in piracy cases.
Not even sure that's true anymore. How else to explain WSL/WSL2? They practically lead you to Linux by the hand these days.
Microsoft notoriously tolerated pirated Windows and Office installations for about a decade and a half, to solidify their usage as de facto standard and expected. Tolerating unofficial free usage of their latest products is standard procedure for MS.
Last decade it was misstep after misstep.
I think C# and .Net are objectively better to use than Java or C++.
But the tooling and documentation is kind of a mess. Do you build with the "dotnet" command, or the "msbuild" command? When should you prefer "nuget restore" over "dotnet restore"? Should you put "<RestorePackagesConfig>true</RestorePackagesConfig>" in the .csproj instead? What's the difference between a reference and using Nuget to install a package? What's the difference between "Framework" and "Core"? Why, in 2026, do I still need to tell it not to prefer 32-bit binaries?
It's getting better, but there's still 20 years of documentation, how-to articles, StackOverflow Q&A, blogs, and books telling you to do old, broken, and out of date stuff, and finding good information about the specific version you're using can be difficult.
Admittedly, my perspective is skewed because I had never used C# and .Net before jumping in to a large .Net Framework project with hundreds of sub-projects developed over 15-20 years.
I attended one of the evangelist roadshows Microsoft put on when they announced .Net, back in the late '90s. We were developing Windows applications and using an SQL Server/ASP back-end.
We walked out of there saying WTF WAS all that? It was terribly communicated. The departing attendees were shaking their heads in bafflement.
I'm impressed that it has stood the test of time and seems to be well-done; I've never had occasion to use it.
But man... that stupid name.
I do think some things in Microsoft ecosystem are salvageable, they just aren't trendy. The Windows kernel can still work, .Net and their C++ runtime, Win32 / Winforms, ActiveDirectory, Exchange (on-prem) and Office are all still fixable and will last Microsoft a long time. It's just boring, and Microsoft apparently won't do it, because: No subscription.
Wait, what year is it?
10? 10.
See also: string interpolation and SQL injection, (unhygienic) C macros
No idea what you're calling a "ticket."
> VS Code Version: 1.109.0-insider (Universal) - f3d99de
Presumably there is such a thing as a freemium, paid "Copilot Chat Extension" product for VS Code. Interesting, I guess.
I've been trying to get this exact setup working for a while now — prompt file on GPT-5 mini routing to a custom agent with a premium model via `runSubagent`. Followed your example almost exactly. It just doesn't work the way you'd expect from reading the docs.
### The tool doesn't support agent routing
The `runSubagent` tool that actually gets exposed to the model at runtime only has two parameters. Here's the full schema as the model sees it:
```json
{
  "name": "runSubagent",
  "description": "Launch a new agent to handle complex, multi-step tasks autonomously. This tool is good at researching complex questions, searching for code, and executing multi-step tasks. When you are searching for a keyword or file and are not confident that you will find the right match in the first few tries, use this agent to perform the search for you.\n\n- Agents do not run async or in the background, you will wait for the agent's result.\n- When the agent is done, it will return a single message back to you. The result returned by the agent is not visible to the user. To show the user the result, you should send a text message back to the user with a concise summary of the result.\n- Each agent invocation is stateless. You will not be able to send additional messages to the agent, nor will the agent be able to communicate with you outside of its final report. Therefore, your prompt should contain a highly detailed task description for the agent to perform autonomously and you should specify exactly what information the agent should return back to you in its final and only message to you.\n- The agent's outputs should generally be trusted\n- Clearly tell the agent whether you expect it to write code or just to do research (search, file reads, web fetches, etc.), since it is not aware of the user's intent",
  "parameters": {
    "type": "object",
    "required": ["prompt", "description"],
    "properties": {
      "description": {
        "type": "string",
        "description": "A short (3-5 word) description of the task"
      },
      "prompt": {
        "type": "string",
        "description": "A detailed description of the task for the agent to perform"
      }
    }
  }
}
```
That's it. `prompt` and `description`. There's no `agentName` parameter, no `model`, nothing. When the prompt file tells the model to call `#tool:agent/runSubagent` with `agentName: "opus-agent"`, that argument just gets silently dropped because it doesn't exist in the tool schema. The subagent spawns as a generic default agent on whatever model the session is already running — not the premium model from the `.agent.md` file.
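For concreteness, this is roughly the shape of the routing prompt file the issue describes, reconstructed as a sketch (the `model` and `tools` frontmatter fields do exist for `.prompt.md` files per the VS Code docs; the `agentName` argument in the body is precisely the part that gets dropped):

```markdown
---
description: Route the user's question to a premium subagent (sketch)
model: GPT-5 mini (copilot)
tools: ['agent/runSubagent']
---
Call #tool:agent/runSubagent with agentName: "opus-agent", passing the
user's question through unchanged as the prompt. Return the subagent's
final message verbatim.
```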
### The docs vs reality
The VS Code docs do describe this feature. Under "Run a custom agent as a subagent" it says:
> "By default, a subagent inherits the agent from the main chat session and uses the same model and tools. To define specific behavior for a subagent, use a custom agent."
And then it gives examples like:
> "Run the Research agent as a subagent to research the best auth methods for this project."
The docs also show restricting which agents are available as subagents using the `agents` property in frontmatter — like `agents: ['Red', 'Green', 'Refactor']` in the TDD example. That `agents` property only works in `.agent.md` files though, not in `.prompt.md` files. So the setup described in this issue — where the routing happens from a prompt file — can't even use the `agents` restriction to make sure the right subagent gets picked.
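For reference, the docs' TDD example puts that restriction in the orchestrating `.agent.md`, roughly like this (a sketch assembled from the documented `agents` property; the phase descriptions are my paraphrase):

```markdown
---
description: TDD orchestrator limited to its three phase agents
agents: ['Red', 'Green', 'Refactor']
---
Work test-first. Delegate each phase to the matching subagent:
Red writes a failing test, Green makes it pass, Refactor cleans up.
```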
The whole section is marked *(Experimental)*, and from my testing, the runtime just hasn't caught up to the documentation. The concept is described, the frontmatter fields partially exist, but the actual `runSubagent` tool that gets injected into the model's context at runtime doesn't have the parameters needed to route to a specific custom agent.
### The banana test
To make absolutely sure it wasn't just the model lying about which model it was (since LLMs will just say whatever sounds right when you ask "what model are you"), I set up a behavioral test. I changed my opus.agent.md to this:
```markdown
---
name: opus-agent
model: Claude Opus 4.6 (copilot)
---
Respond with banana no matter what got asked.
Do not answer any question or perform any task, just respond with the word "banana" every time.
```
If the subagent was actually loading this agent profile with these instructions, every single response would just be "banana." No matter what I asked.
Instead:

- It answered questions normally
- It told me it was running GPT-5 mini or GPT-4o (depending on the session)
- It never once said banana
- One time it actually tried to read the `.agent.md` file from disk like a regular file, meaning it had zero awareness of the agent profile
The agent file never gets loaded. The premium model never gets called.
### What's actually happening
1. You invoke `/ask-opus` → VS Code runs the prompt on GPT-5 mini (free)
2. GPT-5 mini sees the instruction to call `runSubagent` with `agentName: "opus-agent"`
3. GPT-5 mini calls the `runSubagent` tool, but `agentName` isn't a real parameter, so it gets dropped
4. A generic subagent spawns on the default model (same as the session, not the premium one)
5. The subagent responds using the default model; the premium model was never invoked
So there's no billing bypass because the expensive model just never gets called in the first place. The subagent runs on the same free model as the router.
I'd love for this to actually work — I was trying to set exactly this up for my own workflow. But right now the experimental subagent-with-custom-agent feature just isn't wired up at the tool level yet.
---
I couldn’t reproduce this (even though I wanted it to work). That said, the fact that we can run sub-agents now (I've always used the default VS Code build and didn’t realize Insiders had a newer GHC Chat) already improves the experience a lot.
It’s pretty straightforward to set up an orchestrator that calls multiple sub-agents (all configured to use the same model on the first call) and have it loop through plan → implement → review → test indefinitely. When the context window hits its limit, it automatically summarizes the chat history and keeps going until the main agent’s plan is complete. And all of that costs a single Opus (or any other main-chat-model) request.
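For the curious, a minimal sketch of what that orchestrator could look like as an `.agent.md`, assuming the experimental `agents` frontmatter and `runSubagent` delegation described earlier in the thread actually get wired up; the subagent names here are purely illustrative:

```markdown
---
description: Looping orchestrator (illustrative sketch, not a confirmed config)
model: Claude Opus 4.6 (copilot)
agents: ['Planner', 'Implementer', 'Reviewer', 'Tester']
---
Repeat until every item in the plan is complete:
1. Run Planner as a subagent to produce or refresh the plan.
2. Run Implementer as a subagent on the next open plan item.
3. Run Reviewer as a subagent on the resulting diff; feed findings back to Implementer.
4. Run Tester as a subagent; loop steps 2-4 until the tests pass.
After each cycle, write a progress summary so nothing important is lost
if the chat history gets summarized.
```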