How stdio Works in VS MCP Bridge

In the VS MCP Bridge architecture, stdio is the first transport boundary, not the whole system.

That detail matters because it is easy to hear “MCP server over stdio” and assume the AI client is somehow talking directly to Visual Studio. It is not. In this codebase, stdio carries requests into the local MCP server process, and then a separate local bridge hop moves those requests into the VSIX.

This post walks through exactly how that works in the current implementation.

The short version

The runtime path today is:

AI client
  -> MCP over stdio
VsMcpBridge.McpServer
  -> JSON over named pipe
VsMcpBridge.Vsix
  -> Visual Studio services / DTE / editor state

So stdio is the transport between the AI-facing side and the MCP server process. It is not the transport between the MCP server and Visual Studio.

Where stdio is enabled

The stdio transport is configured in the MCP host bootstrap:

builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithTools<VsTools>();

That configuration lives in the VsMcpBridge.McpServer project, inside McpServerHost.Configure(...). The important line is WithStdioServerTransport().

That single choice tells the MCP host to communicate through standard input and standard output instead of through HTTP, a socket, or some custom transport.

What stdio means here

Standard input and standard output are just process streams.

  • stdin is how another process writes requests into this server process
  • stdout is how this server process writes responses back out

That makes stdio a good fit for local AI tooling because the MCP server can behave like a small worker process. An AI client launches it, keeps the process alive, writes protocol messages to stdin, and reads results from stdout.

In other words, the MCP server does not need to expose a listening port. It just needs to stay attached to the calling client through those streams.
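
To make that concrete, here is a minimal sketch of how a host application could drive the server as a stdio worker process. The executable name and the request payload are illustrative, and a real MCP client also performs the initialize handshake before listing or calling tools.

using System.Diagnostics;

// Minimal sketch only: the executable name and request payload are illustrative,
// and the initialize handshake a real MCP client performs first is omitted.
var startInfo = new ProcessStartInfo("VsMcpBridge.McpServer.exe")
{
    RedirectStandardInput = true,   // requests are written to the server's stdin
    RedirectStandardOutput = true,  // responses are read from the server's stdout
    UseShellExecute = false
};

using var server = Process.Start(startInfo)!;

// One newline-delimited protocol message in, one response line back out.
string requestJson = "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\"}";
await server.StandardInput.WriteLineAsync(requestJson);
await server.StandardInput.FlushAsync();

string? responseJson = await server.StandardOutput.ReadLineAsync();
Console.WriteLine(responseJson);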

What the entry point actually does

The program entry point is intentionally small:

var builder = Host.CreateApplicationBuilder(args);
McpServerHost.Configure(builder);

await builder.Build().RunAsync();

This is useful because it keeps the startup story clear:

  1. Create the host builder.
  2. Register logging, pipe client, MCP server support, transport, and tools.
  3. Build and run the host.

Once the host runs, the stdio transport is live and the tool surface is advertised to the AI client.

What stdio does not do

It is just as important to understand what stdio is not doing.

It is not:

  • calling Visual Studio APIs directly
  • loading inside the Visual Studio process
  • owning DTE or editor access
  • applying edits in the IDE

Those responsibilities stay on the VSIX side.

The MCP server process stays outside Visual Studio and acts as the AI-facing adapter. That separation is one of the main architectural boundaries in the repo.

How tool calls cross the stdio boundary

Once stdio is active, the MCP host exposes the tool class registered with WithTools<VsTools>().

That tool class contains methods such as:

  • vs_get_active_document
  • vs_get_selected_text
  • vs_list_solution_projects
  • vs_get_error_list
  • vs_propose_text_edit

From the AI client’s perspective, these are MCP tools. The client sends a request over stdio, the MCP host resolves the tool method, and the method executes inside the VsMcpBridge.McpServer process.

But the tool method still does not talk to Visual Studio directly.

Where the request goes after stdio

Inside VsTools, the tool methods forward work through an injected IPipeClient.

That client creates a local named-pipe connection to the VSIX-hosted server:

using var pipe = new NamedPipeClientStream(".", _pipeName, PipeDirection.InOut, PipeOptions.Asynchronous);
await pipe.ConnectAsync(timeout: 5000, cancellationToken);

So the full transport chain is layered:

  1. The AI client calls an MCP tool over stdio.
  2. The MCP host routes that call to a VsTools method.
  3. The tool method uses PipeClient to connect to the VSIX over a named pipe.
  4. The VSIX executes the Visual Studio-specific work.
  5. The result comes back through the pipe.
  6. The MCP server returns that result over stdout to the AI client.

This is why stdio is only the first half of the bridge story.
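
To visualize steps 2 and 3, here is a hedged sketch of what a forwarding tool method can look like. The IPipeClient shape, method names, and command string are assumptions for illustration; the repo's actual signatures may differ.

// Illustrative shapes only; the repo's actual IPipeClient and VsTools signatures may differ.
public interface IPipeClient
{
    // Sends one serialized command across the named pipe and returns the raw response.
    Task<string> SendCommandAsync(string command, object? payload, CancellationToken cancellationToken);
}

public sealed class VsTools
{
    private readonly IPipeClient _pipeClient;

    public VsTools(IPipeClient pipeClient) => _pipeClient = pipeClient;

    // Invoked by the MCP host when the AI client calls vs_get_active_document.
    public Task<string> GetActiveDocumentAsync(CancellationToken cancellationToken)
    {
        // No Visual Studio API calls here: the work is forwarded across the pipe,
        // and the VSIX-side service does the host-specific part.
        return _pipeClient.SendCommandAsync("GetActiveDocument", payload: null, cancellationToken);
    }
}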

Why this split is useful

The design keeps two concerns separated cleanly:

  • MCP-facing protocol work lives in the MCP server process
  • Visual Studio host work lives in the VSIX

That separation gives the system several advantages:

  • the MCP host remains lightweight and local-tool friendly
  • Visual Studio API access stays inside the Visual Studio process
  • the AI-facing side does not need direct knowledge of DTE or VS SDK details
  • transport debugging becomes easier because stdio and named-pipe issues can be reasoned about separately

It also aligns with the safety model already present in the bridge. Even edit-oriented requests become proposals first, and the host stays responsible for approval and apply behavior.

Why logging discipline matters with stdio

One practical consequence of using stdio for MCP is that stdout must stay clean for protocol traffic. If the process writes arbitrary log lines to stdout, it can corrupt the client-server conversation.

That is why the MCP host uses an app-data-backed logger rather than casually printing operational logs into the standard output stream.

This is a subtle but important design rule: once stdout is part of the protocol surface, it stops being a safe place for ad hoc diagnostics.
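
As an illustration of that rule, here is a minimal sketch of a file-backed logger in the spirit of the app-data approach described above. The class name and file layout are assumptions, not the repo's AppDataFolderLogger.

using System;
using System.IO;

// Minimal sketch, not the repo's AppDataFolderLogger: the point is that nothing
// here writes to Console.Out, because stdout is reserved for MCP protocol traffic.
public sealed class FileBackedLogger
{
    private readonly string _logPath;
    private readonly object _gate = new();

    public FileBackedLogger(string appName = "VsMcpBridge")
    {
        var folder = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            appName);
        Directory.CreateDirectory(folder);
        _logPath = Path.Combine(folder, "mcp-server.log");
    }

    public void Log(string message)
    {
        lock (_gate)
        {
            File.AppendAllText(_logPath, $"{DateTime.UtcNow:O} {message}{Environment.NewLine}");
        }
    }
}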

What to remember when studying this code

If you are learning the system, keep these boundaries in mind:

  • Program.cs starts the host.
  • McpServerHost.Configure(...) wires up stdio and the tool surface.
  • VsTools defines what the AI can ask for.
  • PipeClient bridges from the MCP server process into the VSIX.
  • The VSIX owns the actual Visual Studio operations.

Once you see those layers, the implementation becomes much easier to reason about. The server is not doing everything at once. It is playing one focused role in a larger transport chain.

Takeaway

In VS MCP Bridge, stdio is the process-to-process transport that lets an AI client speak MCP to the local server host. The server then forwards the actual IDE work through a separate named-pipe boundary into the VSIX.

So the cleanest mental model is:

stdio gets into the bridge
named pipes get into Visual Studio

That is the current implementation boundary, and it is one of the key ideas that makes the system easier to evolve without collapsing the MCP layer and the Visual Studio host layer into one process.

Understanding a Named Pipe Listener

Named Pipe Listener

Status: Derived / Educational

In the VS MCP Bridge architecture, the Visual Studio side of the system does not sit and wait for a prompt from an AI tool. It waits for a structured bridge request.

That waiting point is implemented as a named pipe listener.

A named pipe is an inter-process communication channel provided by the operating system. It allows two local processes to exchange messages without exposing a network port. One side creates the pipe and waits for a client connection. The other side connects and sends a request.

In this project, the VSIX creates that listener inside Visual Studio. Once the package finishes startup and composes its services, it starts the pipe server and begins waiting for work. That means the Visual Studio host is ready even before any MCP tool call arrives.

Where the listener starts in this repository

In VS MCP Bridge, the listener is started from the VSIX package during package initialization.

That matters because the pipe server is part of host startup, not something the user manually launches from the tool window. The relevant shape in the code looks like this:

protected override async Task InitializeAsync(CancellationToken cancellationToken, IProgress<ServiceProgressData> progress)
{
    _serviceProvider = new ServiceCollection()
        .AddVsMcpBridgeServices(this)
        .AddMvpVmServices()
        .BuildServiceProvider();

    _pipeServer = _serviceProvider.Resolve<IPipeServer>();
    _pipeServer.Start();
}

Inside the shared bridge layer, PipeServer.Start() spins up a dedicated listener thread and begins waiting for pipe clients on the fixed pipe name VsMcpBridge.
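
The exact implementation lives in the shared bridge layer, but the general shape is roughly the sketch below: Start() creates a background listener thread, and that thread owns the accept loop. Names and details are illustrative; the real accept loop is shown later in this post.

using System.IO.Pipes;
using System.Threading;

// Illustrative sketch of the general shape only; the repo's PipeServer differs in detail.
public sealed class PipeServer
{
    public const string PipeName = "VsMcpBridge";

    private readonly CancellationTokenSource _cts = new();

    public void Start()
    {
        // A dedicated background thread owns the accept loop, so package
        // initialization is not blocked while the server waits for clients.
        var listener = new Thread(() => ListenLoop(_cts.Token))
        {
            IsBackground = true,
            Name = "VsMcpBridge.PipeListener"
        };
        listener.Start();
    }

    public void Stop() => _cts.Cancel();

    private static void ListenLoop(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            using var pipe = new NamedPipeServerStream(
                PipeName, PipeDirection.InOut,
                NamedPipeServerStream.MaxAllowedServerInstances,
                PipeTransmissionMode.Byte, PipeOptions.Asynchronous);

            pipe.WaitForConnection();   // blocks until the MCP server process connects
            HandleConnection(pipe);     // the real code hands each connection off asynchronously
        }
    }

    private static void HandleConnection(NamedPipeServerStream pipe)
    {
        // Placeholder: read one request line, dispatch it, write one response line.
    }
}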

What “listener” means here

A listener is simply the side that opens the pipe and blocks until a client connects. In this design, the listener lives in the VSIX because the VSIX owns access to the Visual Studio automation and editor APIs.

The important detail is that the listener is not waiting for natural-language text. It is waiting for a well-formed request envelope that has already passed through the MCP server process.

That request is not “chat text.” It is a serialized command envelope with fields such as:

  • Command
  • RequestId
  • Payload

The payload then holds the typed request body for the specific operation being executed.
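
As a hedged illustration, the envelope can be pictured as a small serializable record like the one below. The property names, payload type, and sample JSON line are assumptions; the repo's actual contract may differ.

using System.Text.Json;

// Illustrative shape only; the repo's actual envelope contract may differ.
public sealed record PipeMessage
{
    public string Command { get; init; } = "";
    public string RequestId { get; init; } = "";
    public JsonElement? Payload { get; init; }   // typed request body for the specific operation
}

// One request travels as a single serialized line, for example:
// {"Command":"GetSelectedText","RequestId":"2f6c...","Payload":{}}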

Why use a named pipe here

The named pipe exists to connect two local processes:

  • the MCP server process
  • the Visual Studio VSIX host

This is a good fit because both processes run on the same machine and the communication needs to stay local to the host. The MCP server does not need direct access to Visual Studio internals. Instead, it forwards requests to the VSIX, which performs the host-side work and returns a response.

That separation is one of the architectural boundaries that keeps the design easier to reason about. The MCP-facing process stays outside Visual Studio, while the Visual Studio-specific work stays inside it.

What happens when the VSIX starts listening

Once the listener is running, the Visual Studio side is alive but idle. That idle state means:

  • the package is loaded
  • the services are composed
  • the pipe server is available
  • no bridge request has arrived yet

This is an important mental model for the project. Architecturally, the listener starts during package initialization, not when the user opens the tool window UI.

In current real-world validation, however, keeping the VS MCP Bridge tool window open has still mattered in practice for successful end-to-end runs. That means there is a gap between the clean architecture model and the runtime working mode that has actually been proven so far. For study purposes, keep both ideas in mind:

  • the listener itself is a package-startup concern
  • the current workflow still depends on some UI/bootstrap conditions being satisfied

How a request reaches the listener

A typical flow looks like this:

  1. An AI client sends an MCP tool request to the MCP server process.
  2. The MCP server receives that request over stdio.
  3. The MCP server connects to the named pipe exposed by the VSIX.
  4. The VSIX listener accepts the connection and reads the request.
  5. The Visual Studio-side service layer executes the requested operation.
  6. The VSIX sends the response back through the pipe.
  7. The MCP server returns the result to the AI client.

In other words, the named pipe listener is the handoff point where the external MCP-facing process crosses into the Visual Studio host boundary.

What the client side actually does

The MCP-facing process is not the listener. It is the pipe client. In this repository, the client lives in VsMcpBridge.McpServer.

At startup, the MCP server is configured to speak MCP over stdio:

builder.Services
    .AddSingleton<ILogger, AppDataFolderLogger>()
    .AddSingleton<IPipeClient, PipeClient>()
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithTools<VsTools>();

And the process entry point is only a small host bootstrap:

var builder = Host.CreateApplicationBuilder(args);
McpServerHost.Configure(builder);

await builder.Build().RunAsync();

When a tool is invoked, the client connects to the named pipe, sends one serialized line, and waits for one serialized response line:

using var pipe = new NamedPipeClientStream(".", _pipeName, PipeDirection.InOut, PipeOptions.Asynchronous);
await pipe.ConnectAsync(timeout: 5000, cancellationToken);

await writer.WriteLineAsync(JsonSerializer.Serialize(envelope, JsonOptions));
var responseJson = await reader.ReadLineAsync(cancellationToken);

This is one of the most useful details to understand: stdio is the transport between the AI client and the MCP server process, but the named pipe is the transport between the MCP server process and Visual Studio.

What the listener actually does

On the Visual Studio side, the listener waits for a client connection, then hands that connection off for request handling:

pipe = new NamedPipeServerStream(
    PipeName,
    PipeDirection.InOut,
    NamedPipeServerStream.MaxAllowedServerInstances,
    PipeTransmissionMode.Byte,
    PipeOptions.Asynchronous);

pipe.WaitForConnection();
_ = Task.Run(() => HandleConnectionAsync(pipe, ct), CancellationToken.None);

That means the listener thread is responsible for accepting connections, while the actual request work happens in a separate async handler.

Inside that handler, the server reads the incoming line, deserializes the envelope, dispatches by command, and writes the response back across the same pipe connection.
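
A hedged sketch of that handler shape, reusing the illustrative PipeMessage envelope from earlier in this post; the repo's actual handler, error handling, and response types differ.

// Illustrative shape only; the repo's handler, error handling, and response types differ.
private async Task HandleConnectionAsync(NamedPipeServerStream pipe, CancellationToken ct)
{
    using var reader = new StreamReader(pipe, leaveOpen: true);
    using var writer = new StreamWriter(pipe, leaveOpen: true) { AutoFlush = true };

    // One request line in...
    var requestJson = await reader.ReadLineAsync();
    if (string.IsNullOrWhiteSpace(requestJson))
        return;

    var envelope = JsonSerializer.Deserialize<PipeMessage>(requestJson);

    // ...dispatch by command (the switch in the next section plays this role)...
    var response = await DispatchAsync(envelope!, ct);

    // ...and one response line back out over the same connection.
    await writer.WriteLineAsync(JsonSerializer.Serialize(response));
}

private Task<object> DispatchAsync(PipeMessage envelope, CancellationToken ct)
{
    // Stand-in for the command dispatch shown below.
    return Task.FromResult<object>(new { Success = true, envelope.RequestId });
}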

How dispatch works in practice

Once the server has a PipeMessage, it does not try to interpret prose. It routes the request by command name:

VsResponseBase response = envelope.Command switch
{
    PipeCommands.GetActiveDocument => await _vsService.GetActiveDocumentAsync(),
    PipeCommands.GetSelectedText => await _vsService.GetSelectedTextAsync(),
    PipeCommands.ListSolutionProjects => await _vsService.ListSolutionProjectsAsync(),
    PipeCommands.GetErrorList => await _vsService.GetErrorListAsync(),
    PipeCommands.ProposeTextEdit => await DispatchProposeEditAsync(envelope),
    _ => new VsResponseBaseUnknown { Success = false, ErrorMessage = $"Unknown command: {envelope.Command}" }
};

This is an important learning point for the current bridge: the named-pipe listener is not a generic conversational endpoint. It is a command dispatcher sitting at the boundary of the Visual Studio host.

Where Visual Studio work actually happens

The pipe server does not directly call DTE or editor APIs. Instead, it forwards the request into the host service layer through IVsService.

That service layer is where operations such as these actually happen:

  • get active document
  • get selected text
  • list solution projects
  • read the Error List
  • create approval-gated edit proposals

That separation is useful because it keeps transport concerns in the pipe server and host-specific Visual Studio work in the service layer.
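
A hedged sketch of what that service boundary can look like as an interface. The member names mirror the operations above, and VsResponseBase mirrors the dispatch switch shown earlier, but the repo's actual types and signatures may differ.

// Illustrative only; the repo's actual IVsService members and response types may differ.
public abstract class VsResponseBase
{
    public bool Success { get; set; }
    public string? ErrorMessage { get; set; }
}

public interface IVsService
{
    Task<VsResponseBase> GetActiveDocumentAsync();
    Task<VsResponseBase> GetSelectedTextAsync();
    Task<VsResponseBase> ListSolutionProjectsAsync();
    Task<VsResponseBase> GetErrorListAsync();

    // Creates an approval-gated edit proposal; it does not apply the edit itself.
    Task<VsResponseBase> ProposeTextEditAsync(string filePath, string proposedText);
}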

A concrete end-to-end mental model

If you want one simple way to visualize the whole path, use this:

  1. An AI tool calls an MCP tool such as vs_get_active_document.
  2. The MCP server receives the tool call over stdio.
  3. VsTools forwards the request into PipeClient.
  4. PipeClient connects to the VsMcpBridge named pipe and sends a serialized request.
  5. PipeServer in the VSIX accepts the connection.
  6. PipeServer dispatches the command to IVsService.
  7. VsService performs the Visual Studio operation.
  8. The response goes back through the pipe to the MCP server.
  9. The MCP server returns the final tool result to the AI client over stdio.

Why this matters

If you are debugging startup, it helps to remember that the named pipe listener is one of the two waiting points in the system. The MCP server waits on stdio. The VSIX waits on the named pipe. The actual bridge only becomes active when those two waiting points are connected by a tool call.

If you are debugging the current system, a few practical checkpoints are especially valuable:

  • did the VSIX package initialize
  • did PipeServer.Start() run
  • is the pipe name the expected VsMcpBridge
  • did the client connect within the timeout window
  • did the server receive a non-empty JSON line
  • did the command dispatch to a known PipeCommands value
  • did the Visual Studio service operation succeed

Takeaway

A named pipe listener is simply the Visual Studio-side endpoint that waits for local inter-process requests. In VS MCP Bridge, it exists so the VSIX can safely own Visual Studio operations while a separate MCP server process handles the AI-facing protocol.

The most accurate short version for this repository is:

stdio gets into the bridge, and the named pipe gets into Visual Studio.

Understanding AI Chat Sessions, Models, and Agents

One of the easiest mistakes to make with modern AI tools is assuming that a chat is a persistent intelligence that continues thinking between messages. That is not how these systems work. Once that clicks, a lot of confusing behavior suddenly makes sense.

In particular, it explains why an interrupted coding session can feel like a complete reset. The model may be the same, but if the working context is gone, the interaction can come back feeling almost like a different assistant with different priorities.

A Chat Is Not a Persistent Mind

A chat session is better understood as a temporary context window wrapped around a stateless model. On each turn, the application gathers the available instructions, prior messages, and any other relevant inputs, then sends that bundle to the model for a response.

[Temporary Context] + [Stateless Model] = One Response

The important consequence is that the model does not carry forward its own memory in the human sense. It only sees what is included in the current request.

A chat is a temporary context window, not a continuous intelligence.
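
A deliberately simplified sketch (not any vendor's actual API) of what one turn looks like from the application's point of view: assemble a context bundle, make one stateless call, get one response.

// Simplified sketch, not any vendor's actual API. The "memory" is only
// whatever the application packs into this request.
var context = new List<string>
{
    "System: You are a coding assistant for this solution.",
    "User (earlier): We agreed to keep edits approval-gated.",
    "Assistant (earlier): Understood. Proposals only, no silent writes.",
    "User (now): What file am I editing?"
};

// One stateless call: context in, one response out. Drop the earlier lines
// from the bundle and those decisions no longer exist for the model.
string response = await CallModelAsync(string.Join("\n", context));

static Task<string> CallModelAsync(string prompt)
{
    // Placeholder for whatever model endpoint the application actually uses.
    return Task.FromResult($"(model output for a {prompt.Length}-character context)");
}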

Why Context Loss Feels So Jarring

When a machine goes to sleep, an app resets, or a session is silently recreated, the working context may no longer be available. When that happens, the model does not “remember” the flow you established earlier in the day.

That means earlier decisions may disappear, terminology alignment may vanish, and previously agreed objectives may no longer be present. The model then has to infer intent from scratch using only the information now available.

What changed in that situation? The underlying model may not have changed at all. The missing piece is usually the session context that previously carried your goals, constraints, and decisions.

Models Do Not Carry Goals Forward

It is useful to be precise here: a model does not retain goals unless those goals are reintroduced through the current context. If yesterday’s architecture decisions or this morning’s coding priorities are not present in the prompt path, they effectively no longer exist from the model’s point of view.

That is why recovered work often depends on external artifacts such as notes, design documents, code, or checklists. Those durable materials restore the context that the session itself lost.


Separating the Main Terms

Model

The model, or LLM, is the reasoning engine. It generates output from the input it receives. By itself, it is stateless.

Session Manager

The session manager tracks the active conversation and helps assemble the prior messages and instructions that become the next request.

Chat Interface

The chat interface is the user-facing application layer. It is what sends messages, displays responses, and manages the user experience.

Agent

An agent is a higher-level system that combines a model with tools, memory, and a control loop. That control loop may follow a pattern such as plan, act, observe, and repeat.

  • A model can exist without being part of an agent.
  • A chat application may use agent-like behavior without being a fully durable agent.
  • Persistent memory and tool orchestration are what make an agent feel more autonomous over time.

Switching Models Is Not the Same as Losing Context

These two situations can look similar from the outside, but they are fundamentally different.

Same Context + Different Model = Same memory, different reasoning style
Same Model + Lost Context = Same reasoning engine, no working memory

When users switch between models, the session manager may keep the same conversation history intact. That continuity often masks the difference between models because the new model inherits the same constraints, decisions, and terminology from the preserved chat history.

But if context is lost, even the same model can behave like it has changed personalities simply because it is no longer grounded by the earlier session.

How ChatGPT, Codex, and Copilot Relate

Applications such as ChatGPT, Codex, and Copilot are more than thin user interfaces, but they are not different species of intelligence either. They are better understood as different application layers built from the same core kinds of components.

[Application Layer]
        ↓
[Session Manager / Context Builder]
        ↓
[Model]
        ↓
[Output]

What differs is the orchestration around the model.

ChatGPT

General-purpose chat systems usually emphasize conversation continuity, flexible prompting, and broader interaction patterns. They are built to sustain a user-driven loop over many turns.

Codex

Task-oriented coding systems often emphasize execution toward an engineering objective. They may feel more agent-like because they operate in a tighter task loop with code, files, and iterative refinement.

Copilot

IDE-embedded assistants often rely much more heavily on immediate local context such as the current file, cursor position, or surrounding code. Their interaction loop is shorter and more inference-driven than conversation-driven.

What “Inference” Really Means

Inference simply refers to running the model on the current input to produce output. In a tool like Copilot, that often looks like:

Read local context → call model → return suggestion

That is why an embedded coding assistant can feel fast and focused while also feeling less conversational. It is often doing much less session management and much more immediate context-to-output inference.

A Cleaner Mental Model

For practical engineering work, this framing is useful:

  • Model = brain
  • Context = working memory
  • Session manager = short-term memory holder
  • Application = system wiring and control loop

The core intelligence can be similar across products. What changes is the wiring, the memory path, and the tool orchestration around it.

The Engineering Takeaway

For real development work, the safest assumption is that chat context is ephemeral. Documents, code, tests, logs, and checkpoints should remain the source of truth.

That design mindset is not only useful when using AI tools. It is the same discipline that makes software systems more reliable: avoid hidden dependence on transient state, and preserve the information that matters in durable artifacts.

Signal Map for the Current Bridge

This post continues the developer ramp-up series for VS MCP Bridge. The earlier posts explained why runtime validation matters, how to turn it into a playbook, and why the bridge needs enough evidence to avoid becoming a black box. This post makes that more concrete by mapping the current implementation to the signals developers can already observe today and the signals that still need improvement.

The goal is simple: identify what the bridge already exposes, what is usable but still weak, and what is still missing if validation is going to stay practical.

Why A Signal Map Matters

A validation playbook is only as useful as the signals behind it.

If developers are told to validate host startup, request flow, proposal creation, approval handling, and recovery, the app needs to provide enough evidence for each of those steps to be interpreted. Otherwise the playbook becomes a checklist with no reliable feedback loop.

That is why a signal map matters. It translates theory into current reality.

Bucket 1: Signals The Bridge Already Has

The current bridge already appears to expose a useful starting set of signals.

From the current implementation and documentation, the bridge already has evidence in areas such as:

  • package and bootstrap logging
  • pipe server lifecycle logging
  • request identifiers in the pipe flow
  • proposal lifecycle state
  • visible approval UI
  • approve and reject actions
  • success and failure paths around edit application

This matters because it means the app is not starting from a blank slate. The bridge already has the beginnings of a practical evidence model.

What These Existing Signals Help Prove

Those existing signals are already useful for a number of current questions:

  • did the package initialize
  • did the pipe server start
  • did a request receive an identifier
  • did a proposal get created
  • did the user see an approval prompt
  • did approve or reject happen
  • did edit application succeed or fail

That means the bridge can already support more operational reasoning than a true black box would allow.

Bucket 2: Signals That Exist But May Need Tightening

This is probably the most important bucket for the current stage.

In many systems, the real problem is not that signals are entirely absent. It is that the signals exist, but do not yet form a clear enough narrative to support easy diagnosis.

In the current bridge, that likely applies to areas such as:

  • logs that exist but may not yet tell one clean end-to-end story
  • request identifiers that exist but may not appear at every important boundary
  • UI state that is visible but may not always make active versus stale state obvious
  • failure signals that exist but may not clearly indicate whether recovery has happened

This is where a bridge often becomes either easy to validate or frustrating to debug.

Why Tightening Matters

Signals do not need to be perfect to be useful, but they do need to be coherent.

If a request starts in one layer with an identifier but loses that identity later in the path, the signal weakens. If a proposal appears in the UI but the logs do not clearly show its lifecycle, the signal weakens. If failure is visible but recovery is not, the signal weakens.

So the current bridge likely does not need a large new diagnostics subsystem. It more likely needs signal tightening at the boundaries that already exist.

Bucket 3: Signals That Are Still Missing

The final bucket is the one that will probably drive future implementation work.

For the current bridge, the likely missing signals are not futuristic or academic. They are the ones that make day-to-day validation harder than it should be.

Typical gaps in the current scope would be:

  • not enough distinction between full startup success and partial startup failure
  • not enough correlation between MCP tool call, pipe request, proposal, and final outcome
  • not enough evidence that stale proposal state has been cleared after reject or restart
  • not enough explicit separation between proposal creation and actual edit application

These are the kinds of gaps that do not always prevent functionality, but do make triage slower and confidence lower.

How This Maps To The Validation Playbook

The value of the signal map becomes clearer when it is paired with the validation playbook.

For example:

  • host startup validation depends on strong startup signals
  • listener readiness depends on pipe lifecycle signals
  • request validation depends on correlated request-flow signals
  • proposal validation depends on both logs and visible UI signals
  • recovery validation depends on explicit evidence that old state is no longer active

So each validation step needs a signal counterpart. If that counterpart is weak or missing, the step becomes harder to trust.

Why This Is A Good Stopping Point Before More Coding

This topic is important because it creates a natural bridge back into implementation work.

Once the team agrees on:

  • what evidence already exists
  • what evidence is too weak
  • what evidence is missing

the next coding work becomes easier to prioritize. Instead of “improve diagnostics” in the abstract, the work can become focused changes such as:

  • carry request identifiers further through the flow
  • make proposal state transitions more explicit
  • improve startup success and failure markers
  • make restart and recovery signals easier to interpret

That is a much better way to resume implementation.

Takeaway

The current bridge already has useful signals, but practical validation depends on more than signal existence. It depends on signal quality, continuity, and clarity.

The current work is therefore not just about adding more logs. It is about strengthening the specific signals that make the current bridge understandable when it succeeds, when it fails, and when it recovers.

Next In The Series

The next useful topic is likely the first evidence-hardening slice: choosing one concrete diagnostics gap in the current implementation and fixing it all the way through the flow.

Evidence Model for the Current Bridge

Evidence

Source of Truth: /docs/ARCHITECTURE.md (or relevant doc)
Status: Derived / Educational

This post continues the developer ramp-up series for VS MCP Bridge. The earlier posts explained how the bridge starts, how prompts become tool workflows, how runtime validation should be approached, and how to turn validation into a practical playbook. This post focuses on the next question: what evidence does the current bridge need so it does not become a black box when something fails?

The short answer is that validation only works if the system produces enough signal to explain what happened, where it happened, and whether the failure is still active or already cleared.

Why Evidence Matters

A bridge can have a clean architecture and still be painful to operate if it behaves like a black box at runtime.

Without enough evidence, every failure starts to look the same:

  • the tool did not work
  • the UI looks wrong
  • the host might be broken
  • the session might still be poisoned

That kind of uncertainty makes triage expensive. It also makes the system feel less trustworthy than it really is.

Evidence is what turns a vague failure into something localizable and explainable.

What Evidence Should Answer

For the current bridge, the evidence model does not need to become a full observability platform. It just needs to answer a few practical questions:

  • did startup complete correctly
  • did the request reach the bridge
  • which command or tool was running
  • did host execution begin
  • did UI state update
  • did the request succeed, fail, or stop at approval
  • is the failure still active, or has the system recovered

If the current app can answer those questions, then the bridge is no longer a black box in practice.

The Four Evidence Sources

For the current bridge slice, the most useful evidence comes from four places:

  • startup logs
  • request lifecycle logs
  • UI signals
  • outcome signals

That is enough for the current stage.

Startup Logs

Startup logs tell you whether the host came alive correctly.

At minimum, they should make it possible to see:

  • package initialization started
  • dependency injection composition completed
  • pipe server started
  • package initialization completed

Without these signals, host startup is mostly guesswork. That makes every later failure harder to interpret because you cannot be sure the bridge was really healthy to begin with.

Request Lifecycle Logs

Request lifecycle logs tell you whether a tool call actually moved through the bridge.

At minimum, a meaningful request should leave evidence for:

  • request received
  • command or tool name
  • request identifier
  • dispatch started
  • dispatch completed or failed

This is the signal set that helps you distinguish one layer from another.

For example, it helps answer questions like:

  • did the MCP tool call happen at all
  • did the pipe connection succeed
  • did the host service actually run
  • did the failure happen before or after dispatch

Without this layer of evidence, most failures collapse into one generic “the bridge is broken” story.
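
A hedged sketch of what that lifecycle evidence can look like in code, reusing the illustrative envelope and service shapes from the earlier listener post. The message format, logger delegate, and single service call are stand-ins, not the repo's current output.

// Illustrative fragment only; the message format and logger are stand-ins.
// The point is that one request identifier ties the whole lifecycle together.
private async Task<VsResponseBase> DispatchWithLifecycleLoggingAsync(
    PipeMessage envelope, IVsService vsService, Action<string> log)
{
    var id = envelope.RequestId;
    log($"[{id}] request received: {envelope.Command}");

    try
    {
        log($"[{id}] dispatch started");
        var response = await vsService.GetErrorListAsync();   // stand-in for the real command switch
        log($"[{id}] dispatch completed: Success={response.Success}");
        return response;
    }
    catch (Exception ex)
    {
        log($"[{id}] dispatch failed: {ex.Message}");
        throw;
    }
}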

UI Signals

UI signals matter because the bridge does not end at transport. It also has a visible approval workflow.

For the current app, useful UI evidence includes:

  • the tool window opened
  • log text updated
  • an approval prompt appeared
  • the prompt cleared after approve or reject

These are not merely presentation details. They are part of the evidence model because they tell you whether the host-side workflow became visible correctly.

Outcome Signals

The final evidence source is what the system says happened as a result.

For the current bridge, useful outcomes include:

  • read tool returned data
  • proposal created
  • proposal approved
  • proposal rejected
  • edit applied
  • edit failed

This matters because the bridge has more than one kind of successful behavior. A request can succeed by returning data, or it can succeed by producing a proposal that still awaits approval. Those are different outcomes, and the evidence should make that distinction obvious.

Evidence Keeps State Understandable

One of the most important jobs of evidence is to make state visible enough that developers can tell the difference between:

  • a request that is active
  • a request that completed
  • a proposal that is waiting for approval
  • a failure that already stopped
  • stale state that should no longer matter

This is how evidence helps prevent the “poisoned session” feeling that some AI tools can create. If you cannot tell whether the system is actively failing or just displaying stale state, the tool feels unreliable even when the underlying issue may be small.

How Evidence Supports The Validation Playbook

The validation playbook from the previous post only works if each step has observable signals behind it.

That mapping looks like this:

  • host startup depends on startup logs
  • listener readiness depends on pipe startup and waiting logs
  • MCP tool validation depends on request lifecycle logs
  • proposal validation depends on visible approval prompts and proposal state signals
  • recovery validation depends on evidence that old state is cleared and new requests are independent

So the validation playbook and the evidence model are tightly connected. One defines the steps, and the other makes the steps interpretable.

Why This Also Shapes Implementation

That is why evidence is not just a logging preference. It influences design.

If the app needs to be testable and triageable, then it needs:

  • clear startup boundaries
  • clear request identifiers
  • clear proposal lifecycle transitions
  • clear separation between active and stale state

That means the evidence model quietly tells the implementation what it must expose in order to be supportable.

Takeaway

For the current bridge, evidence is what prevents the system from becoming a black box.

It is what makes failures localizable, recoveries understandable, and runtime behavior explainable. Without it, validation becomes guesswork. With it, the bridge becomes something developers can trust, diagnose, and improve step by step.

Next In The Series

The next useful topic is likely the concrete signal map for the current implementation: what the existing logs, request identifiers, and UI state already provide today, and what is still missing if the bridge is going to be easy to validate in practice.

Validation Playbook for the Current Bridge Slice

Playbook

Source of Truth: /docs/ARCHITECTURE.md (or relevant doc)
Status: Derived / Educational

This post continues the developer ramp-up series for VS MCP Bridge. The earlier posts explained how the bridge starts, how prompts become tool workflows, where async and UI-thread correctness matter, and why runtime validation and clean recovery are important. This post turns that into a practical next step: a validation playbook for the current bridge slice.

The goal is not to create a large test framework. The goal is to provide a small, repeatable order of checks that proves the current bridge works end to end and provides enough evidence when it does not.

Why A Playbook Helps

A validation playbook is useful for more than manual testing. It also acts as design guidance for the implementation.

If the playbook says a developer should be able to verify something, but the current app does not expose enough signal to verify it, that is a real implementation gap. In that way, the playbook becomes a design pressure on the bridge.

It pushes the system toward:

  • observable startup
  • clear request and response tracing
  • predictable approval flow
  • clean restart and recovery behavior

So this is not only a checklist. It is also a practical definition of what “operationally understandable” should mean for the current bridge.

Keep The Playbook In Order

The most important rule is to validate the bridge in the same order the runtime depends on it.

That means:

  1. validate the host first
  2. validate the bridge listener
  3. validate the MCP side
  4. validate one simple read-only request
  5. validate proposal flow
  6. validate approval and recovery

This keeps debugging focused. If an early step fails, later steps are not meaningful yet.

Step 1: Validate Host Startup

Start with the Visual Studio host.

At this point, you want to confirm that:

  • the VSIX loads in the experimental instance
  • the package initializes
  • the tool window can open
  • startup logs show expected initialization flow

This step matters because every other test depends on the host actually being alive.

Healthy signals:

  • startup happens once
  • no repeated initialization loop appears
  • no obvious package-load failure is present in logs

Step 2: Validate Bridge Listener Readiness

Once the host is alive, confirm that the named-pipe listener is ready.

You are looking for evidence that:

  • the pipe server started
  • the listener is waiting for connections
  • the host is not repeatedly restarting or faulting in the background

This is still a host-side validation step. If the listener is not ready, MCP-side testing is premature.

Step 3: Validate MCP Server Startup

Now move to the AI-facing side.

At this stage, confirm that:

  • VsMcpBridge.McpServer launches
  • stdio transport is active
  • tools are registered and visible
  • the client can see the tool surface without immediate failure

This proves that the outer boundary is alive before a real host request is attempted.

Step 4: Validate One Simple Read-Only Tool

The first full end-to-end proof should be the simplest possible tool.

Good candidates are:

  • vs_get_active_document
  • vs_list_solution_projects

At this point, you want to prove that:

  • the MCP call reaches VsTools
  • the pipe client connects
  • the pipe server accepts the request
  • the host service executes the request
  • the response returns to the caller

This is the first moment where the bridge has demonstrated a real round trip.

Step 5: Validate One Richer Read

After the simplest read succeeds, try a request that depends more on current host context.

Good candidates are:

  • vs_get_selected_text
  • vs_get_error_list

This helps catch cases where the bridge is alive, but the host state or context handling is not what you expected.

Step 6: Validate Proposal Creation

Only after read-only calls behave cleanly should you test edit proposal flow.

At this stage, confirm that:

  • vs_propose_text_edit creates a proposal
  • proposal state reaches the approval workflow
  • the tool window shows the correct approval prompt
  • the prompt corresponds to the request that was just made

This is where transport validation becomes UI-workflow validation.

Step 7: Validate Approve And Reject

Both approval outcomes matter.

Reject should prove that:

  • the prompt clears correctly
  • the rejected proposal does not remain active
  • the UI does not continue acting as if approval is pending

Approve should prove that:

  • the correct proposal is applied
  • the edit path succeeds or fails clearly
  • the visible state matches the actual result

This is the step that confirms the approval flow is operational, not just visible.

Step 8: Validate Failure And Recovery

This step is especially important because failure loops are often where trust is lost.

You want to confirm that after a failure:

  • the failure is visible
  • the system stops cleanly
  • a new request can still be made
  • restart returns the system to a predictable state

What you do not want is:

  • the same failing action replaying without user intent
  • stale approval UI returning unexpectedly
  • restarting the visible app while hidden state keeps the failure loop alive

If the bridge fails once and then recovers cleanly, that is still a usable system. If it fails once and gets stuck repeating itself, that is a much more serious problem.

What To Watch While Testing

The playbook only works if the system exposes enough signal to interpret each step.

Useful evidence includes:

  • startup logs
  • request and response logs
  • request identifiers
  • tool results in the client
  • tool window behavior
  • clear approval and rejection outcomes

If the app does not expose enough information to distinguish where a failure happened, that is a gap worth addressing.

Why This Also Informs Design

That is why this playbook is more than documentation. It quietly defines expectations for the app itself.

If developers should be able to validate each stage, then the app needs:

  • clear startup diagnostics
  • bounded request handling
  • predictable approval state transitions
  • restart behavior that clears transient failure state

In that sense, validation is not just about checking the implementation. It is also shaping what a good implementation looks like.

Takeaway

For the current bridge, the validation playbook should stay small, ordered, and concrete.

Start with the host. Confirm the listener. Start the MCP side. Prove one simple request. Then move to proposal flow, approval handling, and failure recovery. That sequence gives the current bridge a practical path to confidence without creating unnecessary process overhead.

Next In The Series

The next useful topic is likely the evidence model itself: what the current logs, UI signals, and request identifiers need to look like so that each validation step is easy to interpret and debug.

VS MCP Bridge Blog Series: Part 4

Source of Truth: /docs/ARCHITECTURE.md (or relevant doc)
Status: Derived / Educational

Runtime Validation, Failure Loops, and Clean Recovery

This post continues the developer ramp-up series for VS MCP Bridge. The earlier posts explained how the bridge starts, how prompts become tool workflows, and where async and UI-thread correctness matter in the current implementation. This post focuses on the next practical question: how do we validate that the bridge really works end to end, and how do we avoid getting stuck in failure loops?

The short answer is that runtime validation is not only about proving success. It is also about proving that failures stop cleanly and that the system can recover into a known-good state.

Why Runtime Validation Matters

A bridge like this can look correct in the codebase and still fail at runtime because it crosses multiple boundaries:

  • MCP client to MCP server
  • MCP server to named pipe
  • named pipe to host service
  • host service to approval UI
  • approval UI to edit application

Unit tests can validate pieces of that path, but only runtime validation proves that the full chain works in the real host environment.

For the current app, the important question is simple:

does one real request make it all the way through the system and back?

Success Is Not Enough

There is another reason runtime validation matters: some AI-assisted tools fail badly when they do not recover cleanly. Instead of one request failing once, the system can get trapped in a loop of repeated failures, stale state, or repeated retries that survive longer than the user expects.

That kind of experience is frustrating because restarting the visible app does not always clear the underlying problem. A request may still be queued, stale UI state may remain visible, or a background component may keep retrying something that should have stopped.

So the current bridge should not only prove that success works. It should also prove that:

  • failure stops cleanly
  • old state does not keep reasserting itself
  • restart returns the system to a predictable state

The Small Validation Sequence

For the current bridge, runtime validation does not need to be complicated. The smallest meaningful proof sequence is:

  1. VSIX loads and starts the bridge.
  2. MCP server starts and advertises tools.
  3. One read-only request completes end to end.
  4. One edit proposal reaches the approval UI.
  5. One approved edit applies correctly.

If those five things work reliably, the bridge has a real end-to-end story in the current implementation.

Step 1: Validate VSIX Startup

The first proof is that the host side of the bridge actually comes alive.

What needs to be true:

  • the VSIX loads in the experimental instance
  • the package initializes
  • dependency injection composition succeeds
  • the named-pipe listener starts

If this step fails, the rest of the bridge is irrelevant because nothing on the host side is reachable.

Healthy signals include:

  • startup happens once
  • logs show normal initialization flow
  • restarting Visual Studio does not create duplicate or conflicting runtime state

Step 2: Validate MCP Server Startup

The second proof is on the MCP-facing side.

What needs to be true:

  • VsMcpBridge.McpServer launches
  • stdio transport is active
  • tools are registered and visible
  • the MCP client can invoke them

This proves that the external tool surface is alive before host integration is even tested.

Healthy signals include:

  • tool registration happens once
  • tool calls either succeed or fail clearly
  • a failed call does not permanently poison the session

Step 3: Validate One Read-Only Tool End To End

This is the first full bridge proof.

A good candidate is something simple and read-only, such as:

  • vs_get_active_document
  • vs_list_solution_projects

What needs to be true:

  • the MCP tool call reaches VsTools
  • PipeClient connects successfully
  • PipeServer accepts and dispatches the request
  • VsService executes the host logic
  • the response returns to the MCP caller

This is where the architecture stops being theoretical and starts being real.

Step 4: Validate Proposal Flow

After a read-only request succeeds, the next meaningful proof is the approval path.

What needs to be true:

  • vs_propose_text_edit creates a proposal
  • proposal state reaches the approval workflow
  • the tool window shows the correct approval prompt
  • UI state updates happen correctly

This validates more than transport. It validates the handoff from bridge logic into the visible host workflow.

Step 5: Validate Edit Application

The final proof in the current slice is that an approved edit actually applies.

What needs to be true:

  • the correct proposal is approved
  • the edit application path runs correctly
  • the document changes as expected
  • success or failure is visible in diagnostics

That completes the current end-to-end scenario.

What Evidence To Look For

Runtime validation should not rely on guesswork. At each stage, the system should provide enough evidence to show what happened.

Useful evidence includes:

  • startup logs
  • request and response logs
  • request identifiers
  • visible approval prompts
  • clear success or failure messages

Logs are especially important when something goes wrong. If a request fails, you want to know whether the failure was in the MCP layer, the pipe layer, the host service layer, or the approval/application layer.

Failure Must Stop Cleanly

This is where runtime validation becomes more than a success checklist.

A healthy bridge should not enter an endless retry pattern or resurrect stale work after a visible restart. If a request fails, the failure should be bounded and understandable.

Healthy failure behavior looks like this:

  • one request fails once
  • the failure is visible
  • the system remains usable for the next request
  • restarting the host clears transient runtime state

Unhealthy failure behavior looks like this:

  • the same request keeps retrying without user intent
  • stale proposal or approval state returns unexpectedly
  • restarting the visible app does not clear the failure loop

That distinction is important because bad recovery behavior can make a tool feel far less trustworthy than a simple one-time failure.

The Practical Goal

For the current app, the runtime-validation goal can be stated very simply:

prove that the bridge can succeed cleanly,
fail cleanly,
and recover cleanly

That is enough for the current stage. It proves the bridge is not only wired up, but also usable and debuggable in the real host environment.

Takeaway

Runtime validation is where the bridge moves from code structure to operational confidence.

At this stage, the goal is not to invent a massive validation framework. It is to prove the current chain works, to confirm that approvals and edits behave predictably, and to make sure failures do not trap the user in stale or looping state.

That is the difference between a bridge that merely exists in source and a bridge that is safe to trust in practice.

Next In The Series

The next useful topic is likely the validation playbook itself: what commands to run, what screens to open, what logs to watch, and what order to test the current bridge slice in a real environment.

VS MCP Bridge Blog Series: Part 3

Source of Truth: /docs/ARCHITECTURE.md (or relevant doc)
Status: Derived / Educational

Async Work, Approval Flow, and UI Thread Safety

This post continues the developer ramp-up series for VS MCP Bridge. The earlier posts explained how the bridge starts and how prompts become MCP tool workflows. This one stays grounded in the current implementation and answers a practical question: where does async behavior matter right now, and what has to stay on the UI thread?

The short answer is that the bridge can do background transport work, but host access and UI state updates still need to be controlled carefully.

Keep The Scope Small

It is easy to turn asynchronous design into a huge theoretical topic. For the current bridge, that would be unnecessary.

At the current stage, the important concerns are only these:

  • requests can arrive asynchronously
  • Visual Studio API access still has thread requirements
  • UI-facing state must be updated safely
  • approval flow must remain simple and predictable

That is enough to reason about the current system without drifting into future complexity.

Background Transport Is Fine

The bridge transport does not need to run on the UI thread.

In the current design, the pipe server listens in the background and handles incoming requests asynchronously. That is the correct shape. A request can arrive from the MCP server without blocking the host UI while the bridge waits for input.

So the first rule is simple:

transport work can happen in the background

This is healthy and expected. The bridge would feel unnecessarily fragile if every incoming request had to begin on the UI thread.

Visual Studio API Access Is Different

Even though transport work can happen in the background, the moment the bridge needs to touch Visual Studio services, the threading rules change.

That is why the current design is strongest when VsService acts as the strict host boundary. This is the layer that talks to DTE and other Visual Studio APIs, so it is the right place to be disciplined about thread access.

For the current app, this is the key rule:

background request arrives
  -> bridge dispatches request
  -> host service switches to the proper UI thread if required
  -> Visual Studio work happens there

This keeps the async transport side separate from the host-specific execution side.
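
Inside the VSIX, that thread switch typically uses the Visual Studio threading helpers. The sketch below shows the pattern; the surrounding method and the DTE parameter are illustrative stand-ins for the repo's host service code.

using Microsoft.VisualStudio.Shell;

// Illustrative stand-in for a host service method; only the thread-switch pattern matters here.
public async Task<string?> GetActiveDocumentPathAsync(EnvDTE.DTE dte, CancellationToken cancellationToken)
{
    // The request arrived on a background (pipe) thread.
    // Hop to the Visual Studio main thread before touching DTE or editor state.
    await ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync(cancellationToken);

    // Safe to use DTE and other Visual Studio services from here.
    return dte.ActiveDocument?.FullName;
}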

UI State Must Be Updated Deliberately

The next place where thread correctness matters is the UI state itself.

The bridge updates visible state when it:

  • writes log information to the tool window
  • shows approval prompts
  • updates viewmodel-bound properties

Those updates should not happen directly from an arbitrary background callback. They need to be marshaled back to the UI thread of the current host.

In the current design, that is exactly why the presenter and thread-helper abstractions matter. The presenter is not just a convenience layer. It is the place where the workflow can safely cross back into UI-facing state.

The practical rule is:

background result is ready
  -> presenter marshals to UI thread
  -> viewmodel state is updated
  -> UI reflects the change

This keeps the shared workflow logic compatible with both hosts while respecting the thread model of each one.

Approval Flow Should Stay Simple

The approval path is where asynchronous bridge activity meets user control.

At the current stage, the goal is not to build a large workflow engine. The goal is simply to make sure that when a proposal is created:

  • the user sees a clear approval prompt
  • the prompt reflects the correct proposal
  • approval or rejection behaves predictably

That means the bridge should keep the approval experience obvious and controlled. There is no need to over-design multi-proposal concurrency until runtime evidence says it is necessary.

For the current bridge, the important outcome is not theoretical scalability. It is that the approval experience remains understandable.

Logs Still Matter

Even in the current slice, logs can become confusing if background work and UI actions interleave without enough clarity.

This is not the biggest architectural risk in the system, but it does matter during runtime validation. If log messages from request handling, proposal creation, approval, and application appear in a confusing order, debugging becomes harder than it should be.

So one practical takeaway is that async correctness is not just about avoiding crashes. It is also about making runtime behavior legible enough to diagnose.

The Current Mental Model

If we stay strictly within the current implementation, the model is fairly clean:

request arrives in the background
  -> bridge handles transport work asynchronously
  -> host service switches to the right UI thread when required
  -> presenter switches UI-facing updates to the UI thread
  -> approval state stays clear and predictable

That is enough for the current bridge stage.

There may be deeper concurrency concerns later, but those belong to a later stage of the project. Right now, the immediate job is simply to keep host access, UI updates, approval flow, and logs correct in the slice that already exists.

One More Practical Point

The project now has both a VSIX host and a standalone WPF host. As the bridge evolves, that means thread-safe workflow changes should be made with both hosts in mind.

In practice, that means if a shared workflow assumption changes in the VSIX path, the corresponding behavior in VsMcpBridge.App should also be reviewed so the two hosts do not silently drift apart.

The details of their thread-dispatch mechanisms may differ, but the higher-level workflow should remain aligned where the feature is intended to be shared.

Takeaway

For the current bridge, async design does not need to be overcomplicated.

The important rule is simply that transport can be asynchronous, but host access and UI state changes must still be deliberate and thread-correct. If the bridge does that well, the current implementation remains understandable, stable, and safe enough for the next stage of MCP work.

Next In The Series

The next useful topic is likely runtime validation: how to verify that MCP tool calls, named-pipe dispatch, proposal creation, and approval flow all work end to end in the real host environment.

VS MCP Bridge Blog Series: Part 2

How Prompts Become MCP Tool Workflows

This post continues the developer ramp-up series for VS MCP Bridge. The first post explained the bootstrap flow from VSIX startup to the first tool call. This one focuses on a different question: how does a natural-language prompt actually turn into bridge activity?

The short answer is that the bridge does not process prompts directly. It processes tool calls that an AI decides to make while answering a prompt.

The Core Distinction

A prompt is a user request in natural language.

A tool call is a structured operation the AI chooses to invoke.

A tool result is what the bridge returns from that operation.

A completed action is the final outcome the user cares about, which may or may not happen immediately.

That means these are not equivalent concepts:

  • prompt
  • tool call
  • tool result
  • completed action

This distinction matters because the VS MCP Bridge sits behind the AI's decision layer. The user talks to the AI. The AI decides whether it needs bridge capabilities. Only then does the bridge get involved.

What The AI Actually Sees

The AI does not magically inspect Visual Studio just because the user asks a question. Instead, the MCP server advertises a set of tools, and the AI chooses from that set when it needs real IDE state.

In the current repo, that tool surface includes operations such as:

  • vs_get_active_document
  • vs_get_selected_text
  • vs_list_solution_projects
  • vs_get_error_list
  • vs_propose_text_edit

So the bridge is not acting as a prompt processor. It is acting as a capability surface that the AI may decide to use.

Prompt To Tool Call

Suppose the user asks:

What file am I editing?

The AI may decide it needs actual editor state. If so, it can call vs_get_active_document. The bridge returns the active file path, language, and content, and the AI uses that tool result to produce its answer.

So the flow is:

  1. User sends a prompt.
  2. The AI reasons about what it needs.
  3. The AI may choose a tool call.
  4. The bridge returns a tool result.
  5. The AI uses that result in its final response.

The important takeaway is that the user's prompt is broad and human, while the tool call is narrow and structured.
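
On the wire, step 3 is an ordinary MCP request written to the server's stdin. Assuming the standard JSON-RPC framing that MCP uses, the message is roughly this shape (the id and formatting are illustrative):

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "vs_get_active_document",
    "arguments": {}
  }
}

The MCP host resolves that name to the matching method on the registered tool class, and the tool result flows back over stdout as the response to the same id.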

Tool Result Is Not Always The Final Outcome

Read-only tools are straightforward. They return information directly. If the AI asks for the active document or the error list, the returned result is usually the useful outcome.

Edit-oriented tools are different.

In the current bridge design, vs_propose_text_edit does not immediately write a file. It returns a diff proposal and triggers approval flow inside the host.

That means the AI can request an edit proposal, but the host still controls whether the change is applied.

The cleanest summary of this behavior is:

MCP proposes.
Visual Studio approves.
VSIX applies.

This is a strong separation of concerns because it gives the AI a way to suggest changes without granting direct, silent mutation of the IDE state.
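
To make that split concrete, the tool result for vs_propose_text_edit can be imagined as something like the record below. This is a hypothetical shape for illustration; the repo defines its own proposal types.

// Hypothetical result shape: nothing here implies the file has changed yet.
public sealed record TextEditProposalResult(
    string ProposalId,   // identifies the pending proposal in logs and follow-up calls
    string FilePath,     // file the proposal targets
    string Diff,         // diff shown to the user in the approval UI
    string Status);      // e.g. "PendingApproval" until the host decides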

How The AI Knows Which Tools Exist

The AI knows which tools exist because the MCP server advertises them.

That means tool discovery is explicit. The model is not guessing capabilities from the codebase alone. It is learning them from the MCP server's declared tool list, including:

  • tool names
  • tool descriptions
  • parameter names
  • parameter descriptions

This has an important design consequence: tool metadata influences model behavior.

If a tool name is vague, or its description fails to explain whether it is read-only, proposal-oriented, or approval-gated, the AI may use it badly or not use it at all.

That means tool definitions are not just API declarations. They are also part of the model-facing interaction design.
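
A sketch of what that model-facing metadata can look like, assuming the attribute-based registration style of the MCP C# SDK (McpServerToolType, McpServerTool, and System.ComponentModel Description attributes); the exact wording and members of the real VsTools class live in the repo.

using System;
using System.ComponentModel;
using System.Threading.Tasks;
using ModelContextProtocol.Server;

[McpServerToolType]
public class VsTools
{
    // The model reads these descriptions to decide when and how to call each tool.
    [McpServerTool(Name = "vs_get_selected_text")]
    [Description("Read-only. Returns the text currently selected in the active Visual Studio editor.")]
    public Task<string> GetSelectedTextAsync()
        => throw new NotImplementedException(); // body elided: forwards over the named pipe

    [McpServerTool(Name = "vs_propose_text_edit")]
    [Description("Creates a diff proposal. The edit is applied only after the user approves it in Visual Studio.")]
    public Task<string> ProposeTextEditAsync(
        [Description("Full path of the file to change.")] string filePath,
        [Description("Replacement text for the proposed edit.")] string newText)
        => throw new NotImplementedException(); // body elided
}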

Why Tool Descriptions Matter

The AI relies heavily on tool descriptions to decide:

  • whether a tool is relevant
  • when to call it
  • how to populate arguments
  • what kind of outcome to expect

A bridge can be protocol-correct and still feel weak if its tool surface is unclear. A stronger bridge has:

  • clear tool names
  • strong descriptions
  • obvious safety boundaries
  • predictable results

That is why MCP work is partly protocol engineering and partly capability design.

One Prompt Can Trigger Multiple Tool Calls

Another important point is that one prompt does not necessarily map to one tool call.

In practice, one user request can turn into a short AI workflow with multiple tool calls in sequence.

For example, if the user asks:

Please fix the selected code.

A reasonable tool sequence might be:

  1. Call vs_get_selected_text.
  2. Possibly call vs_get_active_document for file path or surrounding context.
  3. Generate revised text.
  4. Call vs_propose_text_edit.
  5. Report that a proposal was created and approval is still required.

This means the bridge is not only serving isolated commands. It is supporting small, model-driven workflows inside a single conversational turn.

Why This Matters To The Bridge Design

Once you see the bridge this way, several design choices become easier to understand.

  • Read-only tools matter because they often prepare the AI for a later action-oriented call.
  • Request IDs matter because a larger workflow may involve multiple related calls.
  • Approval-gated tools matter because they insert a deliberate safety checkpoint into the workflow.
  • Tool output quality matters because each result may shape the model's next step.

So the bridge is really supporting a layered workflow:

single user prompt
  -> AI reasoning
  -> one or more tool calls
  -> data returned or proposal created
  -> optional approval step
  -> final user-facing answer

That is a more accurate mental model than imagining a one-prompt-to-one-command system.

Takeaway

If the first blog post established how the bridge starts, this post explains how the bridge is actually used.

The key idea is that the user talks in prompts, but the bridge works in tools. The AI sits in the middle and decides when those tools should be invoked, how they should be chained together, and how their results should be turned back into useful responses.

In other words, the MCP bridge is not a prompt handler. It is an AI-facing capability layer that supports controlled access to IDE state and approval-gated actions.

Next In The Series

The next useful topic is how the bridge keeps those model-driven workflows safe and host-correct, especially once multiple asynchronous operations, approval prompts, and UI-thread updates are happening over time.

VS MCP Bridge Blog Series: Part 1

From VSIX Startup to the First MCP Tool Call

This post is the first in a short developer ramp-up series for the VS MCP Bridge project. Its purpose is to make the bootstrap flow easier to understand, especially in a codebase that is intentionally decoupled.

The question this post answers is simple: what actually happens from the moment the VSIX starts until the bridge is waiting for work?

The Short Version

The system has two waiting points, not one.

  • The VSIX starts inside Visual Studio and opens a named pipe listener.
  • The MCP server starts as a separate process and waits on stdio.
  • An AI client sends an MCP tool request over stdio to the MCP server.
  • The MCP server forwards that request to the VSIX over the named pipe.
  • The VSIX executes the Visual Studio-side operation and returns a response.

The important mental model is that the VSIX is not waiting for a natural-language prompt. It is waiting for a bridge request from the MCP server.

Step 1: Visual Studio Loads the VSIX Package

The entry point is the package class, VsMcpBridgePackage. It is an AsyncPackage that auto-loads when a solution exists and can initialize in the background.

At startup, the package does four important things:

  1. Registers the command that can open the bridge tool window.
  2. Builds the dependency injection container.
  3. Resolves shared and VSIX-specific services.
  4. Starts the named pipe server.

This is the first major clarification for new developers: the bridge transport is started during package initialization, not when the tool window opens.
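
A compressed sketch of that startup shape, assuming the standard AsyncPackage plumbing; ShowBridgeWindowCommand, BuildServiceProvider, and IPipeServer are illustrative names rather than the repo's exact members.

[PackageRegistration(UseManagedResourcesOnly = true, AllowsBackgroundLoading = true)]
[ProvideAutoLoad(UIContextGuids80.SolutionExists, PackageAutoLoadFlags.BackgroundLoad)]
public sealed class VsMcpBridgePackage : AsyncPackage
{
    protected override async Task InitializeAsync(
        CancellationToken cancellationToken, IProgress<ServiceProgressData> progress)
    {
        // 1. Command registration touches VS services, so hop to the UI thread first.
        await JoinableTaskFactory.SwitchToMainThreadAsync(cancellationToken);
        await ShowBridgeWindowCommand.InitializeAsync(this);

        // 2-3. Compose the container and resolve shared and VSIX-specific services.
        var services = BuildServiceProvider();

        // 4. Start the named pipe server so the MCP server process can connect later.
        services.GetRequiredService<IPipeServer>().Start();
    }
}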

Step 2: Dependency Injection Assembles the Runtime

The package itself stays relatively thin. Most of the real behavior is composed through dependency injection.

During startup, the service collection registers components such as:

  • logging
  • unhandled exception capture
  • approval workflow state
  • Visual Studio service access
  • edit application
  • tool window presenter and view model
  • named pipe server

That means the startup path can feel hard to follow in the debugger unless you remember that the package is mainly a composition root.
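
A hypothetical shape for that composition root, using Microsoft.Extensions.DependencyInjection; every type name below is illustrative, chosen only to mirror the list above.

var services = new ServiceCollection();

// Cross-cutting infrastructure.
services.AddSingleton<IBridgeLogger, BridgeLogger>();
services.AddSingleton<IUnhandledExceptionCapture, UnhandledExceptionCapture>();

// Bridge workflow state and Visual Studio access.
services.AddSingleton<IApprovalWorkflow, ApprovalWorkflow>();
services.AddSingleton<IVisualStudioServices, VisualStudioServices>();
services.AddSingleton<IEditApplier, EditApplier>();

// UI composition and transport.
services.AddSingleton<BridgeToolWindowPresenter>();
services.AddSingleton<BridgeToolWindowViewModel>();
services.AddSingleton<IPipeServer, NamedPipeServer>();

var provider = services.BuildServiceProvider();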

Step 3: The VSIX Starts Listening on the Named Pipe

Once the pipe server is resolved, the package calls Start(). The pipe server spins up a background thread and begins waiting for a client connection on the VsMcpBridge pipe.

At this point, the Visual Studio side of the bridge is alive but idle.

That idle state means:

  • the package is loaded
  • the services are composed
  • the transport is available
  • no MCP request has arrived yet
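
A minimal sketch of that idle listener, assuming System.IO.Pipes and line-delimited JSON envelopes; the repo's actual framing, error handling, and dispatcher (DispatchAsync below) are its own.

// Runs on a background thread started from Start(); handles one connection at a time.
private async Task ListenAsync(CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        using var pipe = new NamedPipeServerStream(
            "VsMcpBridge", PipeDirection.InOut, 1,
            PipeTransmissionMode.Byte, PipeOptions.Asynchronous);

        // Idle state: the VSIX sits here until the MCP server process connects.
        await pipe.WaitForConnectionAsync(token);

        using var reader = new StreamReader(pipe);
        using var writer = new StreamWriter(pipe) { AutoFlush = true };

        // One request envelope in, one response envelope out.
        var requestJson = await reader.ReadLineAsync();
        var responseJson = await DispatchAsync(requestJson);   // hypothetical dispatcher
        await writer.WriteLineAsync(responseJson);
    }
}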

Step 4: The Tool Window Is Optional at Startup

The bridge tool window is not required for the package to start listening. The command to show it is registered during initialization, but the actual window is created only when the user opens it.

When the tool window is created, it resolves the presenter and view model, binds the passive WPF control, and initializes the UI state.

The initial UI is effectively idle:

  • a log area with a placeholder message
  • proposal entry fields
  • an approval bar that remains inactive until needed

This is another useful clarification: tool window creation and bridge transport startup are related, but they are not the same event.

Step 5: The MCP Server Waits on Stdio

The project also contains a separate process, VsMcpBridge.McpServer. This process is configured as an MCP server that uses stdio for its transport.

That means an AI client can launch the MCP server process and communicate with it by writing requests to standard input and reading responses from standard output.

So the architecture now has two separate waiting states:

  • the VSIX is waiting on a named pipe connection
  • the MCP server is waiting on stdio for an MCP request

Step 6: The First Tool Call Arrives

When a user submits a prompt in an AI client, nothing special happens in the bridge unless the AI chooses to call one of the MCP tools.

If it does, the flow looks like this:

  1. The AI client sends an MCP request over stdio.
  2. VsMcpBridge.McpServer receives that request.
  3. The selected tool uses the pipe client to connect to the VSIX named pipe.
  4. The VSIX pipe server accepts the connection and reads the request envelope.
  5. The request is dispatched to the Visual Studio service layer.
  6. The VSIX produces a response and sends it back through the pipe.
  7. The MCP server returns the result over stdio to the AI client.

This is the point where the system stops being idle and starts doing useful work.
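
Step 3 of that flow, seen from the MCP server side, can be pictured roughly as the pipe client below. It assumes line-delimited JSON over the same VsMcpBridge pipe name; the repo's envelope format and client type are its own.

// Hypothetical pipe client used by the tool methods in VsMcpBridge.McpServer.
public async Task<string> SendRequestAsync(string requestJson)
{
    using var pipe = new NamedPipeClientStream(
        ".", "VsMcpBridge", PipeDirection.InOut, PipeOptions.Asynchronous);

    // Connect to the pipe server the VSIX started during package initialization.
    await pipe.ConnectAsync(5000);

    using var writer = new StreamWriter(pipe) { AutoFlush = true };
    using var reader = new StreamReader(pipe);

    await writer.WriteLineAsync(requestJson);   // request envelope to the VSIX
    return await reader.ReadLineAsync();        // response envelope back from the VSIX
}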

Read-Only Calls vs. Edit Proposals

Read-only operations such as reading the active document or listing solution projects are straightforward. The VSIX performs the operation and returns data.

Edit proposals are different. The bridge does not directly write files when the MCP tool is called. Instead, it creates a diff proposal, stores approval state, and updates the tool window so the user can approve or reject the change.

That design keeps Visual Studio control and file mutation inside the host while still allowing the AI side to suggest changes.

Why This Architecture Exists

At first glance, it can seem odd that both stdio and named pipes exist in the same design. The reason is that they solve two different problems:

  • stdio is the transport between an AI client and the MCP server process
  • named pipes are the transport between the MCP server process and the Visual Studio host

This split keeps Visual Studio API access inside the VSIX and keeps the MCP-facing process separate.

Takeaway

If you are trying to understand startup, the cleanest mental model is this:

Visual Studio starts VSIX
  - package initializes
  - services are composed
  - named pipe server starts listening

AI client starts MCP server
  - MCP server starts
  - stdio transport attaches and waits for requests

User submits a prompt
  - AI may call an MCP tool
  - MCP server forwards request to VSIX
  - VSIX executes the host-side work
  - response flows back to the AI client

In other words, the bridge is really two connected listeners that become active before the first tool call arrives.

Next In The Series

The next post should answer a natural follow-up question: why stdio is used at all, what it is good at, and why it should not be treated as a multi-client shared bus.