Rockford Lhotka’s Blog
blog.lhotka.net.web.brid.gy
VP, Open Source Creator, Author, Speaker

[bridged from https://blog.lhotka.net/ on the web: https://fed.brid.gy/web/blog.lhotka.net ]
2026 - A New Year
I’ve been writing about AI over the past few months: building MCP servers, learning how to use AI effectively to build and maintain software, and more. I’m at the point now where I no longer have any doubt that AI is useful in software development. Getting there requires some learning (the “vibe coding” thing is hype), but once a human understands how to work collaboratively with AI, the productivity boost can be 2x up to 20x depending on the task. (That 2-20x range is not just from my experience; it also reflects the experiences of numerous other developers I have polled.)

I was recently in a conversation about why AI seems to provide such a powerful boost for software development, and yet seems anemic in many other scenarios. My personal thought is that this is yet another example, like so many over my career, where software developers are able to apply technology to our own domain _because we understand our own domain_, and so we have intimate knowledge of how we’d want to change or improve it. If history is any guide, we’ll use the lessons learned in recreating our own problem domain to apply AI to other domains in the future. I very much suspect this is what will happen.

Just look at the radical improvement of models and related tooling for software development over the past few months. We’ve had new models that are substantially better at understanding and creating code and related assets. Perhaps more important is the improvement in the tooling that enables LLMs to interact with code, understand context, and assist developers in a more meaningful way.

The model improvements should impact all domains, and probably do. The deficit is in the tooling that would allow a model to interact with a PowerPoint deck, or a spreadsheet, or whatever external apps or systems a user might be working with. Most tools have been written to be used by humans, and they often also have APIs that are designed for use by other software. None of the existing interfaces or tooling is designed for use by an AI, and that limits the ability of AI to use those systems the way it can interact with modern software development tooling like VS Code.

I hope this happens in 2026: that we take the lessons learned in building so many different software development tools like VS Code and Cursor and apply them to other domains, enabling AI to interact with a wider range of tools and systems in a meaningful way. At the very least, I wish I could have AI create and modify PowerPoint decks at a level similar to what it can do with code today.
blog.lhotka.net
December 30, 2025 at 11:20 PM
Copilots Everywhere!
Microsoft has done its customers no favors by reusing the “Copilot” name for multiple products. I honestly don’t know how many different Copilots are out there now. The big three, and the focus of this post, are:

1. **GitHub Copilot** - Provides access to developer-focused AI within various GitHub environments, including vscode and Visual Studio.
2. **M365 Copilot** - Provides access to AI-driven productivity tools and services within Microsoft 365, such as email, calendar, and Office tools like Excel. Focused on professional productivity and collaboration.
3. **Microsoft Copilot** - A consumer-focused AI assistant that sits over the top of the latest GPT model. It is available as an app on nearly every platform and device, and is designed to be used in a variety of contexts, including personal and professional settings.

A lot of people don’t know that there are at least three different Copilots out there. I think it’s important to clarify the differences between them, as they serve different purposes and have different use cases.

### GitHub Copilot

GitHub Copilot is available within the GitHub web experience, Visual Studio Code (vscode), and Visual Studio. The purpose of GitHub Copilot is to boost the productivity of developers and related IT professionals. It can answer questions, provide code snippets, and in Agent mode even create code or execute commands on your PC or servers. This is not a consumer-focused AI assistant. It is designed to be used by developers and IT professionals to enhance their productivity in a development environment.

### M365 Copilot

M365 Copilot is available within Microsoft 365, which includes Office tools like Excel, email, calendar, and more. It is designed to enhance productivity and collaboration for professionals. In my experience, it is most useful for answering questions that require access to your email, calendar, SharePoint, and OneDrive within M365. In theory it can help with tasks in Word, Excel, and PowerPoint, but in practice I have found it to be nearly useless for this purpose. Generally speaking, this Copilot can’t manipulate existing documents, and so you can’t use it to boost your productivity in a meaningful way.

> Perhaps this is because I’m used to using _GitHub_ Copilot, which is amazing in terms of its ability to create and edit code files, and perform related tasks.

### Microsoft Copilot

Microsoft Copilot is a consumer-focused AI assistant that is available as an app on nearly every platform and device. It is designed to be used in a variety of contexts, including personal and professional settings. You can think of this Copilot as a competitor to Gemini, ChatGPT, and Claude. It is a general-purpose AI assistant that can help with a wide range of tasks, from answering questions and providing information to helping with writing, coding, and more. If you have an M365 _Family_ subscription, you have access to more advanced features, but the basic version is available to anyone with a Microsoft account.

You can use Microsoft Copilot to discuss nearly any topic, ranging from recipes and cooking, to automotive repair, to personal finance, to writing software, and more. It is a versatile tool that can be used in a variety of contexts. It is not _focused_ on writing software and doesn’t have access to your work email, calendar, or files. It can have access to your _personal_ OneDrive and Google Drive files and your outlook.com and gmail.com email and calendars.

### Summary

I use all three of these Copilots daily, usually many times every day.
I use GitHub Copilot to write code, answer questions about code, and perform related tasks in a development environment. I use M365 Copilot to answer questions and perform tasks within Microsoft 365, such as accessing my email, calendar, and files. I use Microsoft Copilot for a wide range of tasks, from answering questions and providing information, to helping with guidance on how to repair things around my house. Microsoft’s various Copilots are each useful in their own way. I wrote this post to help clarify the differences between them and to help others understand how they can use these tools effectively in their own workflows.
blog.lhotka.net
December 12, 2025 at 8:45 PM
What is an MCP?
Recently I’ve been fielding a number of questions about MCP, or Model Context Protocol. So I thought I’d write a quick post to explain what it is, and why it’s important.

When someone asks me “what is an MCP”, it is clear that they aren’t asking about the MCP protocol, but rather what an MCP server is, and why it matters, and how it can be implemented. Does it need to be AI, or just used by AI?

At its core, MCP is nothing more than a protocol like REST or GraphQL. It’s a way for different software systems to communicate with each other, specifically in the context of AI models. It is a client-server style protocol, where the client (often an AI model) makes requests to the server (a tool) to perform specific actions or retrieve information.

Another way to look at it is that “an MCP Server” is an endpoint that exposes this MCP protocol. The server can be called by any client that understands the MCP protocol, including AI models. Yet a third way people talk about MCP is that “an MCP” is an implementation of some software that does something useful - which happens to expose its behavior as an MCP Server endpoint via the MCP protocol.

## A Tiny Bit of History

Not long ago (like 2 years ago), AI models had no real way to interact with the “real world”. You could ask them questions, and they could generate a response, but they couldn’t take actions or access up-to-date information. About two years ago, OpenAI introduced the idea of “function calling” in their API. This allowed developers to define specific functions that the AI model could call to perform actions or retrieve information. This was a big step forward, but it was still limited in scope.

Around the same time, other companies started to explore similar ideas. For example, LangChain introduced the concept of “tools” that AI models could use to interact with external systems. These tools could be anything from simple APIs to complex workflows. Building on these ideas, the concept of MCP emerged from Anthropic as a standardized way for AI models to interact with external systems. MCP defines a protocol for how AI models can make requests to tools, and how those tools should respond.

### COMunication

Excuse the poor pun, but to me, MCP feels a lot like the old COM (Component Object Model) protocol from Microsoft days of old. COM was a way for different software components to communicate with each other, regardless of the programming language they were written in. Like COM, MCP has an `IUnknown`-style entry point called `tools/list`, where clients can query for a list of available tools. Each tool is basically a method that can be called by the client. The client can also pass parameters to the tools, and receive results back.

In this post though, I really don’t want to get into the details of the MCP protocol itself. It is enough to know that it is a client-server or RPC (Remote Procedure Call) style protocol that allows AI models to make imperative (and generally synchronous) calls to external systems in a standardized way.

## Why is MCP Important?

What most people _really_ want to know is “what is an MCP” in the context of building AI systems. Why should I care about MCP?

An MCP server should be viewed as an API that is designed to be called by AI models. The idea is to allow an AI model to do things like:

* Access up-to-date information (e.g. weather, news, stock prices)
* Perform actions (e.g. book a flight, send an email)
* Interact with other software systems (e.g.
databases, CRMs)

The goal is to provide AI models with a way to access current information, and to take actions in the real world. This is important because it allows AI models to be more useful and practical in real-world applications. This changes an AI from being an isolated “chatbot” into the center of a broader system that can interact with the world around it.

## MCP vs Traditional APIs

The trick here is that an MCP server _is not the same as a traditional API_. A traditional API is designed to be called by other applications. Often this means that the API is designed with a specific use case or workflow in mind. The client application is expected to know how to use the API, and what data to send and receive. In most cases, it is assumed that the client software will be written to call different API methods in the correct order, and handle any errors or exceptions that may occur.

An MCP server, on the other hand, is designed to be called by an AI model. This means that the API needs to be designed in a way that is easy for the AI model to understand and use. The AI model may not have any prior knowledge of the API, so the API needs to be self-describing and intuitive. It also means that the client (the AI model) is non-deterministic, or probabilistic. The AI model may not always call the API methods in the correct order, or may send unexpected data. The MCP server needs to be able to handle these situations gracefully, and provide useful feedback to the AI model. Can you imagine taking an inexperienced human, handing them a traditional API spec, and expecting them to use it correctly? That’s what it’s like for an AI model trying to use a traditional API.

## Implementing an MCP Server

Implementing an MCP server involves a few key steps:

1. Figure out the scope and purpose of the MCP server. What actions will it need to perform? What information will it need to provide?
2. Define the tools and methods that the MCP server will expose. This involves identifying the actions that the AI model will need to perform, and designing the API methods to support those actions.
3. Implement the MCP protocol. This involves creating the endpoints that will handle requests from the AI model, and implementing the logic to process those requests and return results.
4. Test the MCP server with the inspector and then an AI model.
5. Iterate and improve. Based on feedback from the AI model, you may need to refine the API methods, improve error handling, or add new features to the MCP server.

### Determining the Scope

As I write this, the MCP specification just turned one year old. As a result, there are not a lot of established best practices for designing MCP servers yet. However, I think it’s important to start with a clear understanding of the scope and purpose of the MCP server. We can hopefully draw on established principles from Domain Driven Design (DDD), object-oriented design, component-based design, and microservices architecture to help guide us. All of these, properly done, have the concept of scope or bounded context. The idea is to define clear boundaries around the functionality that the MCP server will provide, and to ensure that the API methods are cohesive and focused. For example, an MCP server might be designed to manage a specific domain, such as booking travel, managing customer relationships, or processing payments. The API methods would then be focused on the actions and information relevant to that domain.
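To make the wire protocol concrete, here is a rough sketch of the `tools/list` exchange mentioned earlier. The JSON-RPC method names and general response shape follow the MCP specification, but the booking tool itself is a hypothetical example for a travel-domain server like the one just described, not a real implementation.

```jsonc
// Client request: discover the available tools (the MCP analog of the COM IUnknown query)
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// Server response: each tool is self-describing, with a name, description, and input schema
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "book_flight",
        "description": "Books a flight for a traveler. Use this after the traveler has confirmed dates and destination.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "travelerName": { "type": "string" },
            "origin": { "type": "string", "description": "IATA airport code" },
            "destination": { "type": "string", "description": "IATA airport code" },
            "departureDate": { "type": "string", "format": "date" }
          },
          "required": ["travelerName", "origin", "destination", "departureDate"]
        }
      }
    ]
  }
}
```

The client then invokes a tool with a `tools/call` request that names the tool and supplies arguments matching the schema. Notice how much of the “documentation” lives in the descriptions; that text is what the AI model actually reads when deciding which tool to use and how to use it.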
### Tools and Methods

Once the scope is defined, the next step is to identify the tools and methods that the MCP server will expose. This involves breaking down the functionality into discrete actions that the AI model can perform. The granularity of these tools will depend on the specific use case and the needs of the AI model. In general, I would expect them to be at a higher level of abstraction compared to a traditional API or microservice. For example, instead of exposing low-level CRUD operations for managing customer data, an MCP server might expose higher-level tools for “Create Customer Profile”, “Update Customer Preferences”, or “Retrieve Customer Purchase History”. These tools would encapsulate the underlying complexity of the operations, making it easier for the AI model to use them effectively.

### Implement the MCP Protocol

Actually implementing the MCP protocol by hand would involve understanding JSON-RPC and how it is used by MCP. Fortunately, there are packages available for most programming platforms and languages that already implement the protocol, allowing developers to focus on building the actual tools and methods. The most obvious choice is the ModelContextProtocol packages on GitHub. Rather than learning and implementing the protocol itself, use packages like these to focus on building the server and tools.

### Testing

Testing an MCP server directly from an AI model can be challenging, especially in the early stages of development. Fortunately, there are tools available that can help with this process. A common option is the MCP Inspector. The MCP Inspector is a tool that allows developers to interactively test and debug MCP servers. It provides a user interface for exploring the available tools and methods, making requests, and viewing responses. Once you know that your MCP server is working correctly with the Inspector, you can then test it with an actual AI model. This will help ensure that the AI model can effectively use the MCP server to perform the desired actions and retrieve the necessary information.

### Iterate and Improve

The odds of getting the MCP server and its tools right on the first try are very low. Remember that your _consumer_ is an AI agent, which is probabilistic and is in some ways more like a naïve human than a traditional software application. Once you are able to interact with the MCP server using an AI model, you will likely discover areas for improvement. This could include refining the API methods, improving error handling, or adding new features to better support the needs of the AI model. In particular, look at:

* Ways to improve the descriptions of the tools, methods, and parameters to make them clearer and more intuitive for the AI model.
* Adding examples of how to use the tools and methods effectively.
* Enhancing error messages to provide more useful feedback to the AI model when something goes wrong.
* Changing the output of the tools to be more (or less) structured, depending on what the AI model seems to handle better.

### Logging

I didn’t mention this in the steps above, but logging is very important when building an MCP server. You need to be able to see what requests are being made by the AI model, what parameters are being sent, and what responses are being returned. These days most good server systems use OpenTelemetry for logging and tracing. This is a good idea for MCP servers as well, as it allows you to collect detailed information about the interactions between the AI model and the MCP server.
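As a rough sketch (not from the original post), wiring OpenTelemetry into an ASP.NET Core host that serves an MCP endpoint can be as small as the following. It assumes the OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.AspNetCore, OpenTelemetry.Instrumentation.Http, and OpenTelemetry.Exporter.OpenTelemetryProtocol packages; the service name is just an example.

```csharp
using OpenTelemetry.Logs;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService("my-mcp-server"))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()   // a span per incoming MCP request
        .AddHttpClientInstrumentation()   // spans for outgoing calls made by the tools
        .AddOtlpExporter());              // endpoint comes from OTEL_EXPORTER_OTLP_ENDPOINT

builder.Logging.AddOpenTelemetry(logging => logging.AddOtlpExporter());

var app = builder.Build();

// ... map the MCP endpoints / tools here ...

app.Run();
```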
As with any logging, be mindful of privacy and security concerns. Avoid logging sensitive information, and ensure that any logged data is stored securely.

## Security and Identity

Similarly, I didn’t mention security and identity in the steps above, but these are also very important considerations when building an MCP server. Usually this occurs at a couple of levels:

* Is the client authorized to access the MCP server at all?
* Does the client represent a specific user or identity, and if so, what permissions does that identity have?

Of every topic in this post, security and identity is the most complex, and the one that is most likely to evolve over time. As of this writing (end of 2025), there are few established best practices for handling security and identity in MCP servers.

## What is in an MCP Server?

Everything I’ve said so far is great, but still doesn’t answer one of the underlying questions: is an MCP server AI, or just used by AI? The answer is: it depends.

### Getting Data

In many cases, an MCP tool will be used by the AI model to get up-to-date information from external systems. In these cases, it is probably best to write traditional code that accesses databases, APIs, or other data sources, and exposes that data via an MCP server. Because context is critical for AI models, a tool like this should provide the requested data, often with additional context to help the AI model understand how to use the data.

#### Retrieval-Augmented Generation

You may encounter the term “RAG” or retrieval-augmented generation in this context. This is a technique where an AI model retrieves relevant information from an external source (like a database or document store) to provide context for generating a response. An MCP tool that provides up-to-date information can be a key part of a RAG system. Technically RAG uses AI models to do the work, but these AI models aren’t what most of us think of as “AI systems”. They are specialized models that are focused on encoding text into vector arrays.

### Performing Actions

In other cases, an MCP tool might change or update data in external systems. Again, this is probably best done with traditional code that performs the necessary actions, and exposes those actions via an MCP server. Remember that most traditional APIs are not designed to be called by AI models, so the MCP server should provide a higher-level abstraction that is easier for the AI model to understand and use. Here too, context is critical. Usually this context comes from the descriptions of the tools, methods, and parameters.

### Using AI Inside an MCP Server

In some cases though, an MCP server might be implemented using AI itself. For example, you might expose some complex functionality to other AI models, where the logic is too complex to implement with traditional code. In these cases, you might use an AI model to process the requests and generate the responses. For example, you might have a business process that involves complex and specialized knowledge and judgement. In this case, you might use an AI model tuned for that specific domain. This allows a non-specialized AI model, like a chatbot or something, to request that the specialized “sub-agent” perform the complex task on its behalf.

#### Specialized Sub-Agents

There are different techniques for building a specialized AI model for use in an MCP server implementation. These include:

* Using prompt engineering (including system prompts) to guide the model’s behavior.
* Implementing retrieval-augmented generation (RAG) to provide the model with access to relevant information.
* Fine-tuning a base model with domain-specific data.

I’ve listed these techniques in order of increasing complexity. Prompt engineering is the simplest, and fine-tuning is the most complex. In most cases, I would recommend starting with prompt engineering, and only moving to more complex techniques if necessary. The goal is to provide the AI model with enough context and guidance to perform the desired actions effectively.

## Conclusion

When someone asks me “what is an MCP”, they are wondering what an MCP server is, and why it matters, and how it can be implemented. In this post I’ve tried to answer those questions, and provide some guidance. At least as we understand such things at the end of 2025.
blog.lhotka.net
December 6, 2025 at 2:03 AM
Agent vs Agentic
From Copilot: _The term “agent” typically refers to an entity that can act independently and make decisions based on its programming or objectives. In contrast, “agentic” describes the quality or characteristic of being an agent, often emphasizing the capacity for self-directed action and autonomy._

In practice though, the term “agent” is highly overloaded. Over the years, “agent” has been used to describe software programs, types of microservice, human roles, and now some aspects of AI systems. Less common, until now, was the term “agentic”. What seems to be emerging is the idea that an “agentic” system has one or more “agents” that are more advanced than simple “agents”. Yeah, that makes everything more clear!

## What is an Agent?

There are many ways people build agents today, and many types of software are called agents. For example, inside some AI chat software, an agent might be defined as a pre-built set of prompts, and possibly access to specific tools via MCP (model context protocol). Such an agent might be focused on building a specific type of software, or finding good flights and hotels for a trip. In any case, these types of agents are built into a chat experience, and are triggered by user requests.

For a recent client project, we wrote an agent that was triggered by user requests, and routed those requests to other agents. Those other agents (subagents?) vary quite a lot in implementation and complexity. One of them, for example, was _also_ an agent that routed user requests to various APIs to find information. Another exposed a set of commands over an existing REST API. What they all have in common is that they are triggered by user requests, and they do not have any autonomy or ability to act without user input.

Sometimes people talk about an agent as being able to act autonomously. To trigger on events other than direct user requests. I think this is where the term “agentic” starts to make more sense.

## What is Agentic?

In my mind, I differentiate between agents that are triggered by user requests, and those that can act autonomously. The latter I would call “agentic”. Or at least autonomous agents enable the creation of agentic systems. And I guess that’s the key here: there’s a difference between simple agents that respond to user requests, and more complex agents that can act on their own, make decisions, and pursue goals without direct user input.

An agentic system has at least one, but probably many, agents that can operate autonomously. These agents can perceive their environment, make decisions based on their programming and objectives, and take actions to achieve specific goals. This is not to say that there won’t _also_ be simpler agents and tools within an agentic system. In fact, an agentic system might be composed of many different types of agents, some of which are simple and user-triggered, and others that are more complex and autonomous. And others that are just tools used by the agents, probably exposed via MCP.

## How Big is an Autonomous Agent?

Given all that, the next question to consider is the “size” or scope of an autonomous agent. Here I think we can draw on things like the Single Responsibility Principle, Domain Driven Design (DDD), and microservices architecture. An autonomous agent should probably have a well-defined scope or bounded context, where it has clear responsibilities and can operate independently of other agents. I tend to think about it in terms of a human “role”.
Most human workers have many different roles or responsibilities as part of their job. Some of those roles are things that could be automated with powerful enough software. Others require a level of judgement to go along with any automation. Still others require empathy, creativity, or other human qualities that are hard to replicate with software.

### Building Automation

One good use of AI is to have it build software that automates specific tasks. In this case, an autonomous agent might be responsible for understanding a specific domain or task, and then generating code to automate that task. The AI is not involved in the actual task, just in understanding the task and building the automation. This is, in my view, a good use of AI, because it leverages the strengths of AI (pattern recognition, code generation, etc.) to create tools. The tools are almost certainly more cost effective to operate than AI itself. Not just in terms of money, but also in terms of the overall ethical concerns around AI usage (power, water, training data).

### Decision Making

As I mentioned earlier though, some roles require judgement and decision making. In these cases, an autonomous agent might be responsible for gathering information, analyzing options, and making decisions based on its programming and objectives. This is probably done in combination with automation. So AI might be used to create automation for parts of the task that are repetitive and well-defined, while the autonomous agent focuses on the more complex aspects that require judgement.

Earlier I discussed the ambiguity around the term agent, and you can imagine how this scenario involves different types of agent:

* Simple agents that are triggered by user requests to gather information or perform specific tasks.
* Autonomous agents that can analyze the gathered information and make decisions based on predefined criteria.
* Automation tools that are created by AI to handle repetitive tasks.

What we’ve created here is an agentic system that leverages different types of agents and automation to achieve a specific goal. That goal is a single role or responsibility, which should have clear boundaries and scope.

## Science Fiction Inspiration

The idea of autonomous agents is not new. It was explored by Isaac Asimov in his Robot series, where robots are designed to act autonomously and make decisions based on the Three Laws of Robotics. More recent examples come from the works of Iain Banks, Neal Asher, Alastair Reynolds, and many others. In these stories, autonomous agents (often called AIs or Minds) are capable of complex decision making, self-improvement, and even creativity.

These fictional portrayals often explore the ethical and societal implications of autonomous agents, which is an important consideration as we move towards more advanced AI systems in the real world. Some of these authors explore utopian visions of AI, while others focus on dystopian outcomes. Both perspectives are valuable, as they highlight the potential benefits and risks associated with autonomous agents. I think there’s real value in looking at these materials for terminology that can help us better understand and communicate about the evolving landscape of AI and autonomous systems. Yes, we’ll end up creating new terms because that’s how language works, but a lot of the concepts like agent, sub-agent, mind, sub-mind, and more are already out there.

## Conclusion

Today the term “agent” is overloaded, overused, and ambiguous.
Collectively we need to think about how to better define and communicate about different types of agents, especially as AI systems become more complex and capable. The term “agentic” seems to be less overloaded, and is useful for describing systems that have one or more autonomous agents in the mix. These autonomous agents can perceive their environment, make decisions, and take actions to achieve specific goals. We are at the beginning of this process, and this phase of every new technology involves chaos. It will be fun to learn lessons and see how the industry and terminology evolves over time.
blog.lhotka.net
December 2, 2025 at 9:49 PM
AI Skeptic to AI Pragmatist
A few months ago I was an AI skeptic. I was concerned that AI was similar to Blockchain, in that it was mostly hype, with little practical application outside of a few niche use cases. I still think AI is overhyped, but having intentionally used LLMs and AI agents to help build software, I have moved from skeptic to pragmatist.

I still don’t know that AI is “good” in any objective sense. It consumes a lot of water and electricity, and the training data is often sourced in ethically questionable ways. This post isn’t about that, as people have written extensively on the topic. The thing is, I have spent the past few months intentionally using AI to help me do software design and development, primarily via Copilot in VS Code and Visual Studio, but also using Cursor and a couple other AI tools. The point of this post is to talk about my experience with actually using AI successfully, and what I’ve learned along the way.

In my view, as an industry and society we need to discuss the ethics of AI. That discussion needs to move past a starting point that says “AI isn’t useful”, because it turns out that AI can be useful, if you know how to use it effectively. Therefore, the discussion needs to acknowledge that AI is a useful tool, and _then_ we can discuss the ethics of its use.

## The Learning Curve

The first thing I learned is that AI is not magic. You have to learn how to use it effectively, and that takes time and effort. It is also the case that the “best practices” for using AI are evolving as we use it, so it is important to interact with others who are also using AI to learn from their experiences.

For example, I started out trying to just “vibe code” with simple prompts, expecting the AI to just do the right thing. AI is non-deterministic though, and the same prompt can generate different results each time, depending on the random seed used by the AI. It is literally a crap shoot. To get any reasonable results, it is necessary to provide context to the AI beyond expressing a simple desire. There are various patterns for doing this. The one I’ve been using is this:

1. Who am I? (e.g. “You are a senior software engineer with 10 years of experience in C# and .NET.”)
2. Who are you? (e.g. “You are an AI assistant that helps software engineers write high-quality code.”)
3. Who is the end user? (e.g. “The end users are financial analysts who need to access market data quickly and reliably.”)
4. What are we building? (e.g. “You are building a RESTful API that provides access to market data.”)
5. Why are we building it? (e.g. “The API will help financial analysts make better investment decisions by providing them with real-time market data.”)
6. How are we building it? (e.g. “You are using C#, .NET 8, and SQL Server to build the API.”)
7. What are the constraints? (e.g. “The API must be secure, scalable, and performant.”)

You may provide other context as well, but this is a good starting point. What this means is that your initial prompt for starting any work will be fairly long - at least one sentence per item above, but in many cases each item will be a paragraph or more. Subsequent prompts in a session can be shorter, because the AI will have that context. _However_, AI context windows are limited, so if your session gets long (enough prompts and responses), you may need to re-provide context. To that point, it is sometimes a good idea to save your context in a document, so you can reference that file in subsequent requests or sessions.
This is easy to do in VS Code or Visual Studio, where you can reference files in your prompts.

## A Mindset Shift

Notice that I sometimes use the term “we” when talking to the AI. This is on purpose, because I have found that it is best to think of the AI as a collaborator, rather than a tool. This mindset shift is important, because it changes the way you interact with the AI.

> Don’t get me wrong - I don’t think of the AI as a person - it really _is a tool_. But it is a tool that can collaborate with you, rather than just a tool that you use.

When you think of the AI as a collaborator, you are more likely to provide it with the context it needs to do its job effectively. You are also more likely to review and refine its output, rather than just accepting it at face value.

## Rate of Change

Even in the short time I’ve been actively using AI, the models and tools have improved significantly. New features are being added all the time, and the capabilities of the models are expanding rapidly. If you evaluated AI a few months ago and decided it wasn’t useful for a scenario, it might well be able to handle that scenario now. Or not. My point is that you can’t base your opinion on a single snapshot in time, because the technology is evolving so quickly.

## Effective Use of AI

AI itself can be expensive to use. We know that it consumes a lot of water and electricity, so minimizing its use is important from an ethical standpoint. Additionally, many AI services charge based on usage, so minimizing usage is also important from a cost standpoint. What this means to me is that it is often best to use AI to build deterministic tools that can then be used without AI. Rather than using AI for repetitive tasks during development, I often use AI to build bash scripts or other tools that can then be used to perform those tasks without AI.

Also, rather than manually typing in all my AI instructions and context over and over, I store that information in files that can be referenced by the AI (and future team members who might need to maintain the software). I do find that the AI is very helpful for building these documents, especially Claude Sonnet 4.5.

GitHub Copilot will automatically use a special file you can put in your repo’s root: /.github/copilot-instructions.md

It also turns out that you can put numerous files in an `instructions` folder under `.github`, and Copilot will use all of them. This is great for organizing your instructions into multiple files.

The copilot-instructions.md file can contain any instructions you want Copilot to use when generating code. I have found this to be very helpful for providing context to Copilot without having to type it in every time. Not per-prompt instructions, but overall project instructions. It is a great place to put the “Who am I?”, “Who are you?”, “Who is the end user?”, “What are we building?”, “Why are we building it?”, “How are we building it?”, and “What are the constraints?” items mentioned above. You can also use this document to tell Copilot to use specific MCP servers, or to avoid using certain ones. This is useful if you want to ensure that your code is only generated using servers and tools that you trust. A sketch of such a file is shown below.

## Prompt Rules

Feel free to use terms like “always” or “never” in your prompts. These aren’t foolproof, because AI is non-deterministic, but they do help guide the AI’s behavior. For example, you might say “Always use async/await for I/O operations” or “Never use dynamic types in C#”. This helps the AI understand your coding standards and preferences.
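Pulling these ideas together, here is a rough sketch of what a `.github/copilot-instructions.md` file might contain for the hypothetical market-data API example used earlier. The project details are invented for illustration, and the wording is just one way to phrase project-level instructions; note how the “always/never” style rules fit naturally here too.

```markdown
# Copilot instructions (hypothetical example project)

## Context
- You are an AI assistant helping senior C#/.NET engineers.
- We are building a RESTful API that provides market data to financial analysts.
- The API must be secure, scalable, and performant.

## How we build
- Use C#, .NET 8, and SQL Server.
- Always use async/await for I/O operations.
- Never use dynamic types in C#.
- Always add unit tests for new public methods.

## Tools
- Prefer the team's approved MCP servers for framework-specific questions.
```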
Avoid being passive or unclear in your prompts. Instead of saying “It would be great if you could…”, say “Please do X”. This clarity helps the AI understand exactly what you want. If you are asking a question, be explicit that you are asking a question and looking for an answer, otherwise the AI (in agent mode) might just try to build code or other assets based on your question, thinking it was a request.

GitHub Copilot allows you to put markdown files with pre-built prompts into a `prompts` folder under `.github`. You can then reference these prompts to have Copilot use them, via the standard `#` file reference syntax. This is a great way to standardize prompts across your team.

## Switch Models as Needed

You may find that different AI models work better for various purposes or tasks. For example, I often use Claude Sonnet 4.5 for writing documentation, because it seems to produce clearer and more concise text than other models. However, I often use GPT-5-Codex for code generation, because it seems to understand code better.

> That said, I know other people who do the exact opposite, so your mileage may vary. The key is to experiment with different models and see which ones work best for your specific needs.

The point is that you don’t have to stick with a single model for everything. You can switch models as needed to get the best results for your specific tasks. In GitHub Copilot there is a cost to using premium models, with a multiplier. So (at the moment) Sonnet 4.5 is a 1x multiplier, while Haiku is a 0.33x multiplier. Some of the older and weaker models are 0x (free). So you can balance cost and quality by choosing the appropriate model for your task.

## Agent Mode vs Ask Mode

GitHub Copilot has two primary modes of operation: Agent Mode and Ask Mode. Other tools often have similar concepts. In Ask mode the AI responds to your prompts in the chat window, and doesn’t modify your code or take other actions. The vscode and Visual Studio UIs usually allow you to _apply_ the AI’s response to your code, but you have to do that manually. In Agent mode the AI can modify your code, create files, and take other actions on your behalf. This is more powerful, but also more risky, because the AI might make changes that you don’t want or expect.

I’d recommend starting with Ask mode until you are comfortable with the AI’s capabilities and limitations. Once you are comfortable, you can switch to Agent mode for more complex tasks. Agent mode is a _massive_ time saver as a developer! By default, Agent mode does prompt you for confirmation in most cases, and you can disable those prompts over time to loosen the restrictions as you become more comfortable with the AI.

## Don’t Trust the AI

The AI can and _will_ make mistakes. Especially if you ask it to do something complex, or look across a large codebase. For example, I asked Copilot to create a list of all the classes that implemented an interface in the CSLA .NET codebase. It got most of them, but not all of them, and it included some that didn’t implement the interface. I had to manually review and correct the list. I think it might have been better to ask the AI to give me a grep command or something that would do a search for me, rather than trying to have it do the work directly. However, I often have the AI look at a limited set of files and it is almost always correct. For example, asking the AI for a list of properties or fields in a single class is usually accurate.
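To make that “give me a grep command” idea concrete, a deterministic search is easy to have the AI write and is fully verifiable. Here is a rough sketch; the interface name is hypothetical and the pattern is deliberately simple, so it may need tuning for a real codebase.

```bash
# Find C# classes whose base list appears to include IBusinessBase (hypothetical name).
# -r recurse, -n show line numbers, --include limits to C# files; the regex only looks at
# class declarations, so false positives and misses are still possible.
grep -rn --include='*.cs' -E 'class\s+\w+[^{]*:\s*[^{]*\bIBusinessBase\b' .
```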
## Use Git Commit like “Save Game”

I’ve been a gamer for most of my life, and one thing I’ve learned from gaming is the concept of “save games”. In many games, you can save your progress at any point, and then reload that save if you make a mistake or want to try a different approach. This is true for working with AI as well. Before you ask the AI to make significant changes to your code, make a git commit. This way, if the AI makes changes that you don’t want or expect, you can easily revert to the previous state. I find myself making a commit any time I get compiling code, passing tests, or any other milestone - even a small one. This is how you can safely experiment with AI without fear of losing your work.

> I’m not saying push to the server or do a pull request (PR) every time - just a local commit is sufficient for this purpose.

Sometimes the AI will go off the rails and make a mess of your code. Having a recent commit allows you to quickly get back to a known good state.

## Create and Use MCP Servers

As you might imagine, I use CSLA .NET a lot. Because CSLA is open source and its code is part of the AI training data, Copilot generally knows all about CSLA. The problem is that the training data covers everything from CSLA 1.0 to the current version - so decades of changes. This means that when you ask Copilot to help you with CSLA code, it might give you code that is out of date.

I’ve created an MCP server for CSLA that has information about CSLA 9 and 10. If you add this MCP server to your Copilot settings, and ask questions about CSLA, you will get answers that are specific to CSLA 9 and 10, rather than older versions. This is the sort of thing you can put into your `/.github/copilot-instructions.md` file to ensure that everyone on your team is using the same MCP servers. The results of the AI when using an MCP server like this are _substantially_ better than without it.

If you are using AI to help with a specific framework or library, consider creating an MCP server for that framework or library. You can also build your own MCP server for your organization, project, or codebase. Such a server can provide code snippets, patterns, documentation, and other information specific to your context, which can greatly improve the quality of the AI’s output. I wrote a blog post about building a simple MCP server.

## Conclusion

AI is a useful tool for software development, but it is not magic. You have to learn how to use it effectively, and you have to be willing to review and refine its output. By thinking of the AI as a collaborator, providing it with context, and using MCP servers, you can get much better results. As an industry, we need to move past the idea that AI isn’t useful, and start discussing how to use it ethically and effectively. Only then can we fully understand the implications of this technology and make informed decisions about its use.
blog.lhotka.net
November 25, 2025 at 10:22 PM
.NET Terminology
I was recently part of a conversation thread online, which reinforced the naming confusion that exists around the .NET (dotnet) ecosystem. I thought I’d summarize my responses to that thread, as it surely can be confusing to a newcomer, or even someone who blinked and missed a bit of time, as things change fast.

## .NET Framework

There is the Microsoft .NET Framework, which is tied to Windows and has been around since 2002 (give or take). It is now considered “mature” and is at version 4.8. We all expect that’s the last version, as it is in maintenance mode. I consider .NET Framework (netfx) to be legacy.

## Modern .NET

There is modern .NET (dotnet), which is cross-platform and isn’t generally tied to any specific operating system. I suppose the term “.NET” encompasses both, but most of us who write and speak in this space tend to use “.NET Framework” for legacy, and “.NET” for modern .NET.

The .NET Framework and modern .NET both have a bunch of sub-components that have their own names too. Subsystems for talking to databases, creating various types of user experience, and much more. Some are tied to Windows, others are cross platform. Some are legacy, others are modern. It is important to remember that modern .NET is cross-platform and you can develop and deploy to Linux, Mac, Android, iOS, Windows, and other operating systems. It also supports various CPU architectures, and isn’t tied to x64.

## Modern Terminology

The following table tries to capture most of the major terminology around .NET today.

Tech | Status | Tied to Windows | Purpose
---|---|---|---
.NET (dotnet) 5+ | modern | No | Platform
ASP.NET Core | modern | No | Web framework
Blazor | modern | No | Web SPA framework
ASP.NET Core MVC | modern | No | Web UI framework
ASP.NET Core Razor Pages | modern | No | Web UI framework
.NET MAUI | modern | No | Mobile/Desktop UI framework
MAUI Blazor Hybrid | modern | No | Mobile/Desktop UI framework
ADO.NET | modern | No | Data access framework
Entity Framework | modern | No | Data access framework
WPF | modern | Yes | Windows UI framework
Windows Forms | modern | Yes | Windows UI framework

## Legacy Terminology

And here is the legacy terminology.

Tech | Status | Tied to Windows | Purpose
---|---|---|---
.NET Framework (netfx) 4.8 | legacy | Yes | Platform
ASP.NET | legacy | Yes | Web framework
ASP.NET Web Forms | legacy | Yes | Web UI framework
ASP.NET MVC | legacy | Yes | Web UI framework
Xamarin | legacy (deprecated) | No | Mobile UI framework
ADO.NET | legacy | Yes | Data access framework
Entity Framework | legacy | Yes | Data access framework
UWP | legacy | Yes | Windows UI framework
WPF | legacy | Yes | Windows UI framework
Windows Forms | legacy | Yes | Windows UI framework

## Messy History

Did I leave out some history? Sure, there’s the whole “.NET Core” thing, and the .NET Core 1.0-3.1 timespan, and .NET Standard (2 versions). Are those relevant in the world right now, today? Hopefully not really! They are cool bits of history, but just add confusion to anyone trying to approach modern .NET today.

## What I Typically Use

What do _I personally_ tend to use these days?
I mostly:

* Develop modern dotnet on Windows using mostly Visual Studio, but also VS Code and Rider
* Build my user experiences using Blazor and/or MAUI Blazor Hybrid
* Build my web API services using ASP.NET Core
* Use ADO.NET (often with the open source Dapper) for data access
* Use the open source CSLA .NET for maintainable business logic
* Test on Linux using Ubuntu on WSL
* Deploy to Linux containers on the server (Azure, Kubernetes, etc.)

## Other .NET UI Frameworks

Finally, I would be remiss if I didn’t mention some other fantastic cross-platform UI frameworks based on modern .NET:

* Uno Platform
* Avalonia
* OpenSilver
blog.lhotka.net
October 30, 2025 at 10:48 PM
Accessing User Identity on a Blazor Wasm Client
On the server, Blazor authentication is fairly straightforward because it uses the underlying ASP.NET Core authentication mechanism. I’ll quickly review server authentication before getting to the WebAssembly part so you have an end-to-end understanding. I should note that this post is all about a Blazor 8 app that uses per-component rendering, so there is an ASP.NET Core server hosting Blazor server pages, and there may also be pages using `InteractiveAuto` or `InteractiveWebAssembly` that run in WebAssembly on the client device.

## Blazor Server Authentication

Blazor Server components are running in an ASP.NET Core hosted web server environment. This means that they can have access to all that ASP.NET Core has to offer. For example, a server-static rendered Blazor server page can use HttpContext, and therefore can use the standard ASP.NET Core `SignInAsync` and `SignOutAsync` methods like you’d use in MVC or Razor Pages.

### Blazor Login Page

Here’s the razor markup for a simple `Login.razor` page from a Blazor 8 server project with per-component rendering:

```razor
@page "/login"
@using BlazorHolWasmAuthentication.Services
@using Microsoft.AspNetCore.Authentication
@using Microsoft.AspNetCore.Authentication.Cookies
@using System.Security.Claims
@inject UserValidation UserValidation
@inject IHttpContextAccessor httpContextAccessor
@inject NavigationManager NavigationManager

<PageTitle>Login</PageTitle>

<h1>Login</h1>

<div>
  <EditForm Model="userInfo" OnSubmit="LoginUser" FormName="loginform">
    <div>
      <label>Username</label>
      <InputText @bind-Value="userInfo.Username" />
    </div>
    <div>
      <label>Password</label>
      <InputText type="password" @bind-Value="userInfo.Password" />
    </div>
    <button>Login</button>
  </EditForm>
</div>

<div style="background-color:lightgray">
  <p>User identities:</p>
  <p>admin, admin</p>
  <p>user, user</p>
</div>

<div><p class="alert-danger">@Message</p></div>
```

This form uses the server-static form of the `EditForm` component, which does a standard postback to the server. Blazor uses the `FormName` and `OnSubmit` attributes to route the postback to a `LoginUser` method in the code block:

```razor
@code {
    [SupplyParameterFromForm]
    public UserInfo userInfo { get; set; } = new();

    public string Message { get; set; } = "";

    private async Task LoginUser()
    {
        Message = "";
        ClaimsPrincipal principal;
        if (UserValidation.ValidateUser(userInfo.Username, userInfo.Password))
        {
            // create authenticated principal
            var identity = new ClaimsIdentity("custom");
            var claims = new List<Claim>();
            claims.Add(new Claim(ClaimTypes.Name, userInfo.Username));
            var roles = UserValidation.GetRoles(userInfo.Username);
            foreach (var item in roles)
                claims.Add(new Claim(ClaimTypes.Role, item));
            identity.AddClaims(claims);
            principal = new ClaimsPrincipal(identity);
            var httpContext = httpContextAccessor.HttpContext;
            if (httpContext is null)
            {
                Message = "HttpContext is null";
                return;
            }
            AuthenticationProperties authProperties = new AuthenticationProperties();
            await httpContext.SignInAsync(
                CookieAuthenticationDefaults.AuthenticationScheme,
                principal,
                authProperties);
            NavigationManager.NavigateTo("/");
        }
        else
        {
            Message = "Invalid credentials";
        }
    }

    public class UserInfo
    {
        public string Username { get; set; } = string.Empty;
        public string Password { get; set; } = string.Empty;
    }
}
```

The username and password are validated by a `UserValidation` service. That service returns whether the credentials were valid, and if they were valid, it returns the user’s claims.
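The post doesn’t show the `UserValidation` service itself. A minimal hard-coded sketch, matching the two test identities shown on the login page (the class and method names are assumed from how the service is injected and called above), might look like this:

```csharp
namespace BlazorHolWasmAuthentication.Services;

public class UserValidation
{
    // Hard-coded demo users; a real implementation would check a user store.
    private static readonly Dictionary<string, (string Password, string[] Roles)> _users = new()
    {
        ["admin"] = ("admin", new[] { "admin", "user" }),
        ["user"] = ("user", new[] { "user" })
    };

    public bool ValidateUser(string username, string password)
        => _users.TryGetValue(username, out var user) && user.Password == password;

    public string[] GetRoles(string username)
        => _users.TryGetValue(username, out var user) ? user.Roles : Array.Empty<string>();
}
```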
The code then uses that list of claims to create a `ClaimsIdentity` and `ClaimsPrincipal`. That pair of objects represents the user’s identity in .NET. The `SignInAsync` method is then called on the `HttpContext` object to create a cookie for the user’s identity (or whatever storage option was configured in `Program.cs`).

From this point forward, ASP.NET Core code (such as a web API endpoint) and Blazor server components (via the Blazor `AuthenticationStateProvider` and `CascadingAuthenticationState`) all have consistent access to the current user identity.

### Blazor Logout Page

The `Logout.razor` page is simpler still, since it doesn’t require any input from the user:

```razor
@page "/logout"
@using Microsoft.AspNetCore.Authentication
@using Microsoft.AspNetCore.Authentication.Cookies
@inject IHttpContextAccessor httpContextAccessor
@inject NavigationManager NavigationManager

<h3>Logout</h3>

@code {
    protected override async Task OnInitializedAsync()
    {
        var httpContext = httpContextAccessor.HttpContext;
        if (httpContext != null)
        {
            var principal = httpContext.User;
            if (principal.Identity is not null && principal.Identity.IsAuthenticated)
            {
                await httpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
            }
        }
        NavigationManager.NavigateTo("/");
    }
}
```

The important part of this code is the call to `SignOutAsync`, which removes the ASP.NET Core user token, thus ensuring the current user has been “logged out” from all ASP.NET Core and Blazor server app elements.

### Configuring the Server

For the `Login.razor` and `Logout.razor` pages to work, they must be server-static (which is the default for per-component rendering), and `Program.cs` must contain some important configuration. First, some services must be registered:

```csharp
builder.Services.AddHttpContextAccessor();
builder.Services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie();
builder.Services.AddCascadingAuthenticationState();
builder.Services.AddTransient<UserValidation>();
```

The `AddHttpContextAccessor` registration makes it possible to inject an `IHttpContextAccessor` service so your code can access the `HttpContext` instance.

> ⚠️ Generally speaking, you should only access `HttpContext` from within a server-static rendered page.

The `AddAuthentication` method registers and configures ASP.NET Core authentication, in this case configured to store the user token in a cookie. The `AddCascadingAuthenticationState` method enables Blazor server components to make use of cascading authentication state. Finally, the `UserValidation` service is registered. This service is implemented by you to verify the user credentials, and to return the user’s claims if the credentials are valid.

Some further configuration is required after the services have been registered:

```csharp
app.UseAuthentication();
app.UseAuthorization();
```

### Enabling Cascading Authentication State

The `Routes.razor` component is where the user authentication state is made available to all Blazor components on the server:

```razor
<CascadingAuthenticationState>
  <Router AppAssembly="typeof(Program).Assembly"
          AdditionalAssemblies="new[] { typeof(Client._Imports).Assembly }">
    <Found Context="routeData">
      <AuthorizeRouteView RouteData="routeData" DefaultLayout="typeof(Layout.MainLayout)" />
      <FocusOnNavigate RouteData="routeData" Selector="h1" />
    </Found>
  </Router>
</CascadingAuthenticationState>
```

Notice the addition of the `CascadingAuthenticationState` element, which cascades an `AuthenticationState` instance to all Blazor server components.
Also notice the use of `AuthorizeRouteView`, which enables the use of the authorization attribute in Blazor pages, so only an authorized user can access those pages.

### Adding the Login/Logout Links

The final step to making authentication work on the server is to enhance the `MainLayout.razor` component to add links for the login and logout pages:

```razor
@using Microsoft.AspNetCore.Components.Authorization
@inherits LayoutComponentBase

<div class="page">
  <div class="sidebar">
    <NavMenu />
  </div>

  <main>
    <div class="top-row px-4">
      <AuthorizeView>
        <Authorized>
          Hello, @context!.User!.Identity!.Name
          <a href="logout">Logout</a>
        </Authorized>
        <NotAuthorized>
          <a href="login">Login</a>
        </NotAuthorized>
      </AuthorizeView>
    </div>

    <article class="content px-4">
      @Body
    </article>
  </main>
</div>

<div id="blazor-error-ui">
  An unhandled error has occurred.
  <a href="" class="reload">Reload</a>
  <a class="dismiss">🗙</a>
</div>
```

The `AuthorizeView` component is used, with the `Authorized` block providing content for a logged in user, and the `NotAuthorized` block providing content for an anonymous user. In both cases, the user is directed to the appropriate page to log in or log out.

At this point, all _server-side_ Blazor components can use authorization, because they have access to the user identity via the cascading `AuthenticationState` object. This doesn’t automatically extend to pages or components running in WebAssembly on the browser. That takes some extra work.

## Blazor WebAssembly User Identity

There is nothing built into Blazor that automatically makes the user identity available to pages or components running in WebAssembly on the client device. You should also be aware that there are possible security implications to making the user identity available on the client device. This is because any client device can be hacked, and so a bad actor could gain access to any `ClaimsIdentity` object that exists on the client device. As a result, a bad actor could get a list of the user’s claims, if those claims are on the client device.

In my experience, if developers are using client-side technologies such as WebAssembly, Angular, React, WPF, etc. they’ve already reconciled the security implications of running code on a client device, and so it is probably not an issue to have the user’s roles or other claims on the client. I will, however, call out where you can filter the user’s claims to prevent a sensitive claim from flowing to a client device.

The basic process of making the user identity available on a WebAssembly client is to copy the user’s claims from the server, and to use that claims data to create a copy of the `ClaimsIdentity` and `ClaimsPrincipal` on the WebAssembly client.

### A Web API for ClaimsPrincipal

The first step is to create a web API endpoint on the ASP.NET Core (and Blazor) server that exposes a copy of the user’s claims so they can be retrieved by the WebAssembly client.
For example, here is a controller that provides this functionality: using Microsoft.AspNetCore.Mvc; using System.Security.Claims; namespace BlazorHolWasmAuthentication.Controllers; [ApiController] [Route("[controller]")] public class AuthController(IHttpContextAccessor httpContextAccessor) { [HttpGet] public User GetUser() { ClaimsPrincipal principal = httpContextAccessor!.HttpContext!.User; if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated) { // Return a user object with the username and claims var claims = principal.Claims.Select(c => new Claim { Type = c.Type, Value = c.Value }).ToList(); return new User { Username = principal.Identity!.Name, Claims = claims }; } else { // Return an empty user object return new User(); } } } public class Credentials { public string Username { get; set; } = string.Empty; public string Password { get; set; } = string.Empty; } public class User { public string Username { get; set; } = string.Empty; public List<Claim> Claims { get; set; } = []; } public class Claim { public string Type { get; set; } = string.Empty; public string Value { get; set; } = string.Empty; } This code uses an `IHttpContextAccessor` to access `HttpContext` to get the current `ClaimsPrincipal` from ASP.NET Core. It then copies the data from the `ClaimsIdentity` into simple types that can be serialized into JSON for return to the caller. Notice how the code doesn’t have to do any work to determine the identity of the current user. This is because ASP.NET Core has already authenticated the user, and the user identity token cookie has been unpacked by ASP.NET Core before the controller is invoked. The line of code where you could filter sensitive user claims is this: var claims = principal.Claims.Select(c => new Claim { Type = c.Type, Value = c.Value }).ToList(); This line copies _all_ claims for serialization to the client. You could filter out claims considered sensitive so they don’t flow to the WebAssembly client. Keep in mind that any code that relies on such claims won’t work in WebAssembly pages or components. In the server `Program.cs` it is necessary to register and map controllers. builder.Services.AddControllers(); and app.MapControllers(); At this point the web API endpoint exists for use by the Blazor WebAssembly client. ### Getting the User Identity in WebAssembly Blazor always maintains the current user identity as a `ClaimsPrincipal` in an `AuthenticationState` object. Behind the scenes, there is an `AuthenticationStateProvider` service that provides access to the `AuthenticationState` object. On the Blazor server we generally don’t need to worry about the `AuthenticationStateProvider` because a default one is provided for our use. On the Blazor WebAssembly client, however, we must implement a custom `AuthenticationStateProvider`. For example: using Microsoft.AspNetCore.Components.Authorization; using System.Net.Http.Json; using System.Security.Claims; namespace BlazorHolWasmAuthentication.Client; public class CustomAuthenticationStateProvider(HttpClient HttpClient) : AuthenticationStateProvider { private AuthenticationState AuthenticationState { get; set; } = new AuthenticationState(new ClaimsPrincipal()); private DateTimeOffset?
CacheExpire; public override async Task<AuthenticationState> GetAuthenticationStateAsync() { if (!CacheExpire.HasValue || DateTimeOffset.Now > CacheExpire) { var previousUser = AuthenticationState.User; var user = await HttpClient.GetFromJsonAsync<User>("auth"); if (user != null && !string.IsNullOrEmpty(user.Username)) { var claims = new List<System.Security.Claims.Claim>(); foreach (var claim in user.Claims) { claims.Add(new System.Security.Claims.Claim(claim.Type, claim.Value)); } var identity = new ClaimsIdentity(claims, "auth_api"); var principal = new ClaimsPrincipal(identity); AuthenticationState = new AuthenticationState(principal); } else { AuthenticationState = new AuthenticationState(new ClaimsPrincipal()); } if (!ComparePrincipals(previousUser, AuthenticationState.User)) { NotifyAuthenticationStateChanged(Task.FromResult(AuthenticationState)); } CacheExpire = DateTimeOffset.Now + TimeSpan.FromSeconds(30); } return AuthenticationState; } private static bool ComparePrincipals(ClaimsPrincipal principal1, ClaimsPrincipal principal2) { if (principal1.Identity == null || principal2.Identity == null) return false; if (principal1.Identity.Name != principal2.Identity.Name) return false; if (principal1.Claims.Count() != principal2.Claims.Count()) return false; foreach (var claim in principal1.Claims) { if (!principal2.HasClaim(claim.Type, claim.Value)) return false; } return true; } private class User { public string Username { get; set; } = string.Empty; public List<Claim> Claims { get; set; } = []; } private class Claim { public string Type { get; set; } = string.Empty; public string Value { get; set; } = string.Empty; } } This is a subclass of `AuthenticationStateProvider`, and it provides an implementation of the `GetAuthenticationStateAsync` method. This method invokes the server-side web API controller to get the user’s claims, and then uses them to create a `ClaimsIdentity` and `ClaimsPrincipal` for the current user. This value is then returned within an `AuthenticationState` object for use by Blazor and any other code that requires the user identity on the client device. One key detail in this code is that the `NotifyAuthenticationStateChanged` method is only called in the case that the user identity has changed. The `ComparePrincipals` method compares the existing principal with the one just retrieved from the web API to see if there’s been a change. It is quite common for Blazor and other code to request the `AuthenticationState` very frequently, and that can result in a lot of calls to the web API. Even a cache that lasts a few seconds will reduce the volume of repetitive calls significantly. This code uses a 30 second cache. 
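To round this out, here is a small hypothetical sketch (not from the original post) showing how other client-side code can consume the registered provider; the `CurrentUserService` class and its method are illustrative only, but `GetAuthenticationStateAsync` is the standard API:

using Microsoft.AspNetCore.Components.Authorization;

public class CurrentUserService(AuthenticationStateProvider provider)
{
    // Returns the authenticated username, or null for an anonymous user.
    public async Task<string?> GetUsernameAsync()
    {
        var state = await provider.GetAuthenticationStateAsync();
        var identity = state.User.Identity;
        return (identity is not null && identity.IsAuthenticated) ? identity.Name : null;
    }
}

Blazor components themselves rarely need this, since they can rely on `AuthorizeView` or the cascading `AuthenticationState`, but plain services sometimes do.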
### Configuring the WebAssembly Client To make Blazor use our custom provider, and to enable authentication on the client, it is necessary to add some code to `Program.cs` _in the client project_: using BlazorHolWasmAuthentication.Client; using Marimer.Blazor.RenderMode.WebAssembly; using Microsoft.AspNetCore.Components.Authorization; using Microsoft.AspNetCore.Components.WebAssembly.Hosting; var builder = WebAssemblyHostBuilder.CreateDefault(args); builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) }); builder.Services.AddAuthorizationCore(); builder.Services.AddScoped<AuthenticationStateProvider, CustomAuthenticationStateProvider>(); builder.Services.AddCascadingAuthenticationState(); await builder.Build().RunAsync(); The `CustomAuthenticationStateProvider` requires an `HttpClient` service, and relies on the `AddAuthorizationCore` and `AddCascadingAuthenticationState` registrations to function properly. ## Summary The preexisting integration between ASP.NET Core and Blazor on the server makes server-side user authentication fairly straightforward. Extending the authenticated user identity to WebAssembly-hosted pages and components requires a little extra work: creating a controller on the server and a custom `AuthenticationStateProvider` on the client.
blog.lhotka.net
October 30, 2025 at 10:48 PM
Blazor EditForm OnSubmit behavior
I am working on the open-source KidsIdKit app and have encountered some “interesting” behavior with the `EditForm` component and how buttons trigger the `OnSubmit` event. An `EditForm` is declared similar to this: <EditForm Model="CurrentChild" OnSubmit="SaveData"> I would expect that any `button` component with `type="submit"` would trigger the `OnSubmit` handler. <button class="btn btn-primary" type="submit">Save</button> I would also expect that any `button` component _without_ `type="submit"` would _not_ trigger the `OnSubmit` handler. <button class="btn btn-secondary" @onclick="CancelChoice">Cancel</button> I’d think this was true _especially_ if that second button was in a nested component, so it isn’t even in the `EditForm` directly, but is actually in its own component, and it uses an `EventCallback` to tell the parent component that the button was clicked. ### Actual Results In Blazor 8 I see different behaviors between MAUI Hybrid and Blazor WebAssembly hosts. In a Blazor WebAssembly (web) scenario, my expectations are met. The secondary button in the sub-component does _not_ cause `EditForm` to submit. In a MAUI Hybrid scenario however, the secondary button in the sub-component _does_ cause `EditForm` to submit. I also tried this using the new Blazor 9 MAUI Hybrid plus Web template - though in this case the web version is Blazor server. In my Blazor 9 scenarios, in _both_ hosting cases the secondary button triggers the submit of the `EditForm` - even though the secondary button is in a sub-component (its own `.razor` file)! What I’m getting out of this is that we must assume that _any button_ , even if it is in a nested component, will trigger the `OnSubmit` event of an `EditForm`. Nasty! ### Solution The solution (thanks to @jeffhandley) is to add `type="button"` to all non-submit `button` components. It turns out that the default HTML for `<button />` is `type="submit"`, so if you don’t override that value, then all buttons trigger a submit. What this means is that I _could_ shorten my actual submit button: <button class="btn btn-primary">Save</button> I probably won’t do this though, as being explicit probably increases readability. And I _absolutely must_ be explicit with all my other buttons: <button type="button" class="btn btn-secondary" @onclick="CancelChoice">Cancel</button> This prevents the other buttons (even in nested Razor components) from accidentally triggering the submit behavior in the `EditForm` component.
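Pulling the pieces together, here is a minimal consolidated sketch (hypothetical markup, reusing the `CurrentChild`, `SaveData`, and `CancelChoice` names from above) of an `EditForm` with the explicit button types applied:

<EditForm Model="CurrentChild" OnSubmit="SaveData">
    <!-- Only this button should trigger OnSubmit -->
    <button class="btn btn-primary" type="submit">Save</button>
    <!-- type="button" keeps this from submitting the form, even from a nested component -->
    <button class="btn btn-secondary" type="button" @onclick="CancelChoice">Cancel</button>
</EditForm>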
blog.lhotka.net
October 30, 2025 at 10:48 PM
Do not throw away your old PCs
As many people know, Windows 10 is coming to its end of life (or at least end of support) in 2025. Because Windows 11 requires specialized hardware that isn’t built into a lot of existing PCs running Windows 10, there is no _Microsoft-based_ upgrade path for those devices. The thing is, a lot of those “old” Windows 10 devices are serving their users perfectly well, and there is often no compelling reason for a person to replace their PC just because they can’t upgrade to Windows 11. > ℹ️ If you can afford to replace your PC with a new one, that’s excellent, and I’m not trying to discourage that! However, you can still avoid throwing away your old PC, and you should consider alternatives. Throwing away a PC or laptop - like in the trash - is a _horrible_ thing to do, because PCs contain toxic elements that are bad for the environment. In many places it might actually be illegal. Besides which, whether you want to keep and continue to use your old PC or not, _someone_ can probably make good use of it. > ⚠️ If you do need to “throw away” your old PC, please make sure to turn it in to an e-waste recycling center or hazardous waste facility. I’d like to discuss some possible alternatives to throwing away or recycling your old PC. Things that provide much better outcomes for people and the environment! It might be that you can continue to use your PC or laptop, or someone else may be able to give it new life. Here are some options. ## Continue Using the PC Although you may be unable to upgrade to Windows 11, there are alternative operating systems that will breathe new life into your existing PC. The question you should ask first is: what do you do on your PC? The following may require Windows: * Windows-only software (like CAD drawing or other software) * Hard-core gaming On the other hand, if you use your PC entirely for things like: * Browsing the web * Writing documents * Simple spreadsheets * Web-based games in a browser Then you can probably replace Windows with an alternative and continue to be very happy with your PC. What are these “alternative operating systems”? They are all variations of Linux. If you’ve never heard of Linux, or have heard it is complicated and only for geeks, rest assured that there are some variations of Linux that are no more complex than Windows 10. ### “Friendly” Variations of Linux Some of the friendliest variations of Linux include: * Linux Mint (Cinnamon edition) - Linux with a desktop that is very similar to Windows * Ubuntu Desktop - Linux with its own style of graphical desktop that isn’t too hard to learn if you are used to Windows There are many others; these are just a couple that I’ve used and found to be easy to install and learn. > 🛑 Before installing Linux on your PC make sure to copy all the files you want to keep onto a thumb drive or something! Installing Linux will _entirely delete your existing hard drive_ and none of your existing files will be on the PC when you are done. Once you’ve installed Linux, you’ll need software to do the things you do today. ### Browsers on Linux Linux often comes with the Firefox browser pre-installed. Other browsers that you can install include: * Chrome * Edge I am sure other browsers are available as well. Keep in mind that most modern browsers provide comparable features and let you use nearly every web site, so you may be happy with Firefox or whatever comes pre-installed with Linux.
### Software similar to Office on Linux Finally, most people use their PC to write documents, create spreadsheets and do other things that are often done using Microsoft Office. Some alternatives to Office available on Linux include: * OneDrive - Microsoft on-line file storage and web-based versions of Word, Excel, and more * Google Docs - Google on-line file storage and web-based word processor, spreadsheet, and more * LibreOffice - Software you install on your PC that provides word processing, spreadsheets, and more. File formats are compatible with Word, Excel, and other Office tools. Other options exist, these are the ones I’ve used and find to be most common. ## Donate your PC Even if your needs can’t be met by running Linux on your old PC, or perhaps installing a new operating system just isn’t for you - please consider that there are people all over the world, including near you, that would _love_ to have access to a free computer. This might include kids, adults, or seniors in your area who can’t afford a PC (or to have their own PC). In the US, rural and urban areas are _filled_ with young people who could benefit from having a PC to do school work, learn about computers, and more. > 🛑 Before donating your PC, make sure to use the Windows 10 feature to reset the PC to factory settings. This will delete all _your_ files from the PC, ensuring that the new owner can’t access any of your information. Check with your church and community organizations to find people who may benefit from having access to a computer. ## Build a Server If you know people, or are someone, who likes to tinker with computers, there are a lot of alternative uses for an old PC or laptop. You can install Linux _server_ software on an old PC and then use that server for all sorts of fun things: * Create a file server for your photos and other media - can be done with a low-end PC that has a large hard drive * Build a Kubernetes cluster out of discarded devices - requires PCs with at least 2 CPU cores and 8 gigs of memory, though more is better Here are a couple articles with other good ideas: * Avoid the Trash Heap: 17 Creative Uses for an Old Computer * 10 Creative Things to Do With an Old Computer If you aren’t the type to tinker with computers, just ask around your family and community. It is amazing how many people do enjoy this sort of thing, and would love to have access to a free device that can be used for something other than being hazardous waste. ## Conclusion I worry that 2025 will be a bad year for e-waste and hazardous waste buildup in landfills and elsewhere around the world, as people realize that their Windows 10 PC or laptop can’t be upgraded and “needs to be replaced”. My intent in writing this post is to provide some options to consider that may breathe new life into your “old” PC. For yourself, or someone else, that computer may have many more years of productivity ahead of it.
blog.lhotka.net
October 30, 2025 at 10:48 PM
Why MAUI Blazor Hybrid
It can be challenging to choose a UI technology in today’s world. Even if you narrow it down to wanting to build “native” apps for phones, tablets, and PCs there are so many options. In the Microsoft .NET space, there are _still_ many options, including .NET MAUI, Uno, Avalonia, and others. The good news is that these are good options - Uno and Avalonia are excellent, and MAUI is coming along nicely. At this point in time, my _default_ choice is usually something called a MAUI Hybrid app, where you build your app using Blazor, and the app is hosted in MAUI so it is built as a native app for iOS, Android, Windows, and Mac. Before I get into why this is my default, I want to point out that I (personally) rarely build mobile apps that “represent the brand” of a company. Take the Marriott or Delta apps as examples - the quality of these apps and the way they work differently on iOS vs Android can literally cost these companies customers. They are a primary contact point that can irritate a customer or please them. This is not the space for MAUI Blazor Hybrid in my view. ## Common Code MAUI Blazor Hybrid is (in my opinion) for apps that need to have rich functionality, good design, and be _common across platforms_, often including phones, tablets, and PCs. Most of my personal work is building business apps - apps that a business creates to enable their employees, vendors, partners, and sometimes even customers, to interact with important business systems and functionality. Blazor (the .NET web UI framework) turns out to be an excellent choice for building business apps. Though this is a bit of a tangent, Blazor is my go-to for modernizing (aka replacing) Windows Forms, WPF, Web Forms, MVC, Silverlight, and other “legacy” .NET app user experiences. The one thing Blazor doesn’t do by itself is create native apps that can run on devices. It creates web sites (server hosted) or web apps (browser hosted) or a combination of the two. That is wonderful for a lot of scenarios, but sometimes you really need things like offline functionality or access to per-platform APIs and capabilities. This is where MAUI Hybrid comes into the picture, because in this model you build your Blazor app, and that app is _hosted_ by MAUI, and therefore is a native app on each platform: iOS, Android, Windows, Mac. That means that your Blazor app is installed as a native app (therefore can run offline), and it can tap into per-platform APIs like any other native app. ## Per-Platform In most business apps there is little (or no) per-platform difference, and so the vast majority of your app is just Blazor - C#, html, css. It is common across all the native platforms, and optionally (but importantly) also the browser. When you do have per-platform differences, like needing to interact with serial or USB port devices, or arbitrary interactions with local storage/hard drives, you can do that. And if you do that with a little care, you still end up with the vast majority of your app in Blazor, with small bits of C# that are per-platform. ## End User Testing I mentioned that a MAUI Hybrid app not only creates native apps, but can also target the browser. This is fantastic for end user testing, because it can be challenging to do testing via the Apple, Google, and Microsoft stores. Each requires app validation, on their schedule, not yours, and some have limits on the numbers of test users. > In .NET 9, the ability to create a MAUI Hybrid that also targets the browser is a pre-built template.
Previously you had to set it up yourself. What this means is that you can build your Blazor app, have your users do a lot of testing of your app via the browser, and once you are sure it is ready to go, then you can do some final testing on a per-platform basis via the stores (or whatever scheme you use to install native apps). ## Rich User Experience Blazor, with its use of html and css backed by C#, directly enables rich user experiences and high levels of interactivity. The de facto UI language is html/css, after all, and we all know how effective it can be at building great experiences in browsers - as well as native apps. There is a rapidly growing and already excellent ecosystem around Blazor, with open-source and commercial UI toolkits and frameworks available that make it easy to create many different types of user experience, including Material design and others. From a developer perspective, it is nice to know that learning any of these Blazor toolsets is a skill that spans native and web development, as does Blazor itself. In some cases you’ll want to tap into per-platform capabilities as well. The MAUI Community Toolkit is available and often provides pre-existing abstractions for many per-platform needs. Some highlights include: * File system interaction * Badge/notification systems * Images * Speech to text Between the basic features of Blazor, advanced html/css, and things like the toolkit, it is pretty easy to build some amazing experiences for phones, tablets, and PCs - as well as the browser. ## Offline Usage Blazor itself can provide a level of offline app support if you build a Progressive Web App (PWA). To do this, you create a standalone Blazor WebAssembly app that includes the PWA manifest and service worker code (in JavaScript). PWAs are quite powerful and are absolutely something to consider as an option for some offline app requirements. The challenge with a PWA is that it is running in a browser (even though it _looks_ like a native app) and therefore is limited by the browser sandbox and the local operating system. For example, iOS devices place substantial limitations on what a PWA can do and how much data it can store locally. There are commercial reasons why Apple doesn’t like PWAs competing with “real apps” in its store, and the end result is that PWAs _might_ work for you, as long as you don’t need too much local storage or too many native features. MAUI Hybrid apps are actual native apps installed on the end user’s device, and so they can do anything a native app can do. Usually this means asking the end user for permission to access things like storage, location, and other services. As a smartphone user you are certainly aware of that type of request as an app is installed. The benefit, then, is that if the user gives your app permission, your app can do things it couldn’t do in a PWA from inside the browser sandbox. In my experience, the most important of these things is effectively unlimited access to local storage for offline data that is required by the app. ## Conclusion This has been a high-level overview of my rationale for why MAUI Blazor Hybrid is my “default start point” when thinking about building native apps for iOS, Android, Windows, and/or Mac. Can I be convinced that some other option is better for a specific set of business and technical requirements? Of course!!
However, having a well-known and very capable option as a starting point provides a short-cut for discussing the business and technical requirements - to determine if each requirement is or isn’t already met. And in many cases, MAUI Hybrid apps offer very high developer productivity, the functionality needed by end users, and long-term maintainability.
blog.lhotka.net
October 30, 2025 at 10:48 PM
CSLA 2-tier Data Portal Behavior History
The CSLA data portal originally treated 2- and 3-tier differently, primarily for performance reasons. Back in the early 2000s, the data portal did not serialize the business object graph in 2-tier scenarios. That behavior still exists and can be enabled via configuration, but is not the default for the reasons discussed in this post. Passing the object graph by reference (instead of serializing it) does provide much better performance, but at the cost of being behaviorally/semantically different from 3-tier. In a 3-tier (or generally n-tier) deployment, there is at least one network hop between the client and any server, and the object graph _must be serialized_ to cross that network boundary. When different 2-tier and 3-tier behaviors existed, a lot of people did their dev work in 2-tier and then tried to switch to 3-tier. Usually they’d discover all sorts of issues in their code, because they were counting on the logical client and server using the same reference to the object graph. A variety of issues are solved by serializing the graph even in 2-tier scenarios, including: 1. Consistency with 3-tier deployment (enabling location transparency in code) 2. Preventing data binding from reacting to changes to the object graph on the logical server (nasty performance and other issues would occur) 3. Ensuring that a failure on the logical server (especially part-way through the graph) leaves the graph on the logical client in a stable/known state (see the sketch at the end of this post) There are other issues as well - and ultimately those issues drove the decision (I want to say around 2006 or 2007?) to default to serializing the object graph even in 2-tier scenarios. There is a performance cost to that serialization, but having _all_ n-tier scenarios enjoy the same semantic behaviors has eliminated so many issues and support questions on the forums that I regret nothing.
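To make the semantic difference concrete, here is a small hypothetical sketch (simplified types, not CSLA’s actual API) of why pass-by-reference 2-tier behavior differs from the serialize-and-copy default when the logical server fails part-way through an update:

// Hypothetical illustration only - simplified types, not CSLA's actual API.
var order = new Order();
try
{
    // 2-tier pass-by-reference: the logical "server" mutates the client's instance.
    FakeServer.Update(order);
}
catch (InvalidOperationException)
{
    // The call failed, yet order.Lines.Count is now 1 - the client's graph is in an
    // unknown, half-updated state. With serialize-and-copy (the default), the server
    // would have worked on a copy and the client's graph would be unchanged.
}

public class Order
{
    public List<string> Lines { get; } = new();
}

public static class FakeServer
{
    public static void Update(Order order)
    {
        order.Lines.Add("partially applied change");
        throw new InvalidOperationException("failure part-way through the update");
    }
}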
blog.lhotka.net
October 30, 2025 at 10:48 PM
Running Linux on My Surface Go
I have a first-generation Surface Go, the little 10” tablet Microsoft created to try and compete with the iPad. I’ll confess that I never used it a lot. I _tried_, I really did! But it is underpowered, and I found that my Surface Pro devices were better for nearly everything. My reasoning for having a smaller tablet was that I travel quite a lot, more back then than now, and I thought having a tablet might be nicer for watching movies and that sort of thing, especially on the plane. It turns out that the Surface Pro does that too, without having to carry a second device. Even when I switched to my Surface Studio Laptop, I _still_ didn’t see the need to carry a second device - though the Surface Pro is absolutely better for traveling in my view. I’ve been saying for quite some time that I think people need to look at Linux as a way to avoid the e-waste involved in discarding their Windows 10 PCs - the ones that can’t run Windows 11. I use Linux regularly, though usually via the command line for software development, and so I thought I’d put it on my Surface Go to gain real-world experience. > I have quite a few friends and family who have Windows 10 devices that are perfectly good. Some of those folks don’t want to buy a new PC, due to financial constraints, or just because their current PC works fine. End of support for Windows 10 is a problem for them! The Surface Go is a bit trickier than most mainstream Windows 10 laptops or desktops, because it is a convertible tablet with a touch screen and specialized (rare) hardware - as compared to most of the devices in the market. So I did some reading, and used Copilot, and found a decent (if old) article on installing Linux on a Surface Go. > ⚠️ One quick warning: Surface Go was designed around Windows, and while it does work reasonably well with Linux, it isn’t as good. Scrolling is a bit laggy, and the cameras don’t have the same quality (by far). If you want to use the Surface Go as a small, lightweight laptop I think it is pretty good; if you are looking for a good _tablet_ experience you should probably just buy a new device - and donate the old one to someone who needs a basic PC. Fortunately, Linux hasn’t evolved all that much or all that rapidly, and so this article remains pretty valid even today. ## Using Ubuntu Desktop I chose to install Ubuntu, identified in the article as a Linux distro (distribution, or variant, or version) that has decent support for the Surface Go. I also chose Ubuntu because this is normally what I use for my other purposes, and so I’m familiar with it in general. However, I installed the latest Ubuntu Desktop (version 25.04), not the older version mentioned in the article. This was a good choice, because support for the Surface hardware has improved over time - though the other steps in the article remain valid. ## Download and Set Up Media The steps to get ready are: 1. Download Ubuntu Desktop - this downloads a file with a `.iso` extension 2. Download software to create a bootable flash drive based on the `.iso` file. I used software called Rufus - just be careful to avoid the flashy (spammy) download buttons, and find the actual download link text in the page 3. Get a flash drive (either new, or one you can erase) and insert it into your PC 4. Run Rufus, and identify the `.iso` file and your flash drive 5. Rufus will write the data to the flash drive, and make the flash drive bootable so you can use it to install Linux on any PC 6. 
🛑 BACK UP ANY DATA on your Surface Go; in my case all my data is already backed up in OneDrive (and other places) and so I had nothing to do - but this process WILL BLANK YOUR HARD DRIVE! 🛑 ## Install Ubuntu on the Surface At this point you have a bootable flash drive and a Surface Go device, and you can do the installation. This is where the zdnet article is a bit dated - the process is smoother and simpler than it was back then, so just do the install like this: 1. 🛑 BACK UP ANY DATA on your Surface Go; in my case all my data is already backed up in OneDrive (and other places) and so I had nothing to do - but this process WILL BLANK YOUR HARD DRIVE! 🛑 2. Insert the flash drive into the Surface USB port (for the Surface Go I had to use an adapter from type C to type A) 3. Press the Windows key and type “reset” and choose the settings option to reset your PC 4. That will bring up the settings page where you can choose Advanced and reset the PC for booting from a USB device 5. What I found is that the first time I did this, my Linux boot device didn’t appear, so I rebooted to Windows and did step 4 again 6. The second time, an option was there for Linux. It had an odd name: Linpus (as described in the zdnet article) 7. Boot from “Linpus” and your PC will sit and spin for quite some time (the Surface Go is quite old and slow by modern standards), and eventually will come up with Ubuntu 8. The thing is, it is _running_ Ubuntu, but it hasn’t _installed_ Ubuntu. So go through the wizard and answer the questions - especially the wifi setup 9. Once you are on the Ubuntu (really Gnome) desktop, you’ll see an icon for _installing_ Ubuntu. Double-click that and the actual installation process will begin 10. I chose to have the installer totally reformat my hard drive, and I recommend doing that, because the Surface Go doesn’t have a huge drive to start with, and I want all of it available for my new operating system 11. Follow the rest of the installer steps and let the PC reboot 12. Once it has rebooted, you can remove the flash drive ## Installing Updates At this point you should be sitting at your new desktop. The first thing Linux will want to do is install updates, and you should let it do so. I laugh a bit, because people make fun of Windows updates, and Patch Tuesday. Yet all modern and secure operating systems need regular updates to remain functional and secure, and Linux is no exception. Whether automated or not, you should do regular (at least monthly) updates to keep Linux secure and happy. ## Installing Missing Features Immediately upon installation, Ubuntu 25.04 seems to have very good support for the Surface Go, including multi-touch on the screen and trackpad, use of the Surface Pen, speakers, and the external (physical) keyboard. What doesn’t work right away, at least what I found, are the cameras or any sort of onscreen/soft keyboard. You need to take extra steps for these. The zdnet article is helpful here. ### Getting the Cameras Working The zdnet article walks through the process to get the cameras working. I actually think the camera drivers are now just part of Ubuntu, but I did have to take steps to get them working, and even then they don’t have great quality - this is clearly an area where moving to Linux is a step backward. At times I found the process a bit confusing, but just plowed ahead figuring I could always reinstall Linux again if necessary. It did work fine in the end, no reinstall needed. 1. 
Install the Linux Surface kernel - which sounds intimidating, but is really just following some steps as documented in their GitHub repo; other stuff in the document is quite intimidating, but isn’t really relevant if all you want to do is get things running 2. That GitHub repo also has information about the various camera drivers for different Surface devices, and I found that to be a bit overwhelming; fortunately, it really amounts to just running one command 3. Make sure you also run these commands to give your Linux account permissions to use the camera 4. At this point I was able to follow instructions to run `cam` and see the cameras - including some other odd entries I ignored 5. And I was able to run `qcam`, which is a command that brings up a graphical app so you can see through each camera > ⚠️ Although the cameras technically work, I am finding that a lot of apps still don’t see the cameras, and in all cases the camera quality is quite poor. ### Getting a Soft or Onscreen Keyboard Because the Surface Go is _technically_ a tablet, I expected there to be a soft or onscreen keyboard. It turns out that there is a primitive one built into Ubuntu, but it really doesn’t work very well. It is pretty, but I was unable to figure out how to get it to appear via touch, which kind of defeats the purpose (I needed my physical keyboard to get the virtual one to appear). I found an article that has some good suggestions for Linux onscreen keyboard (OSK) improvements. I used what the article calls “Method 2” to install an Extension Manager, which allowed me to install extensions for the keyboard. 1. Install the Extension Manager `sudo apt install gnome-shell-extension-manager` 2. Open the Extension Manager app 3. This is where the article fell down, because the extension they suggested doesn’t seem to exist any longer, and there are numerous other options to explore 4. I installed an extension called “Touch X” which has the ability to add an icon to the upper-right corner of the screen by which you can open the virtual keyboard at any time (it can also do a cool ripple animation when you touch the screen if you’d like) 5. I also installed “GJS OSK”, which is a replacement soft keyboard that has a lot more configurability than the built-in default; you can try both and see which you prefer ## Installing Important Apps This section is mostly editorial, because I use certain apps on a regular basis, and you might use other apps. Still, you should be aware that there are a couple of ways to install apps on Ubuntu: snap and apt. The “snap” concept is specific to Ubuntu, and can be quite nice, as it installs each app into a sort of sandbox that is managed by Ubuntu. The “app store” in Ubuntu lists and installs apps via snap. The “apt” concept actually comes from Ubuntu’s parent, Debian. Since Debian and Ubuntu make up a very large percentage of the Linux install base, the `apt` command is extremely common. This is something you do from a terminal command line. Using snap is very convenient, and when it works I love it. Sometimes I find that apps installed via snap don’t have access to things like speakers, cameras, or other things. I think that’s because they run in a sandbox. I’m pretty sure there are ways to address these issues - my normal way of addressing them is to uninstall the snap and use `apt`. ### My “Important” Apps I installed apps via snap, apt, and as PWAs. #### Snap and Apt Apps Here are the apps I installed right away: 1. 
Microsoft Edge browser - because I use Edge on my Windows devices and Android phone, I want to use the same browser here to sync all my history, settings, etc. - I installed this using the default Firefox browser, then switched the default to Edge 2. Visual Studio Code - I’m a developer, and find it hard to imagine having a device without some way to write code - and I use vscode on Windows, so I’m used to it, and it works the same on Linux - I installed this as a snap via App Center 3. git - again, I’m a developer and all my stuff is on GitHub, which means using git as a primary tool - I installed this using `apt` 4. Discord - I use discord for many reasons - talking to friends, gaming, hosting the CSLA .NET Discord server - so it is something I use all the time - I installed this as a snap via App Center 5. Thunderbird Email - I’m not sold on this yet - it seems to be the “default” email app for Linux, but feels like Outlook from 10-15 years ago, and I do hope to find something a lot more modern - I installed this as a snap via App Center 6. Copilot Desktop - I’ve been increasingly using Copilot on Windows 11, and was delighted to find that Ken VanDine wrote a Copilot shell for Linux; it is in the App Center and installs as a snap, providing the same basic experience as Copilot on Windows or Android - I installed this as a snap via App Center 7. .NET SDK - I mostly develop using .NET and Blazor, and so installing the .NET software developer kit seemed obvious; Ubuntu has a snap to install version 8, but I used apt to install version 9 #### PWA Apps Once I got Edge installed, I used it to install a number of progressive web apps (PWAs) that I use on nearly every device. A PWA is an app that is installed and updated via your browser, and is a great way to get cross-platform apps. Exactly how you install a PWA will vary from browser to browser. Some have a little icon when you are on the web page; others have an “install app” option or “install on desktop” or similar. The end result is that you get what appears to be an app icon on your phone, PC, whatever - and when you click the icon the PWA app runs in a window like any other app. 1. Elk - I use Mastodon (social media) a lot, and my preferred client is Elk - fast, clean, works great 2. Bluesky - I use Bluesky (social media) a lot, and Bluesky can be installed as a PWA 3. LinkedIn - I use LinkedIn quite a bit, and it can be installed as a PWA 4. Facebook - I still use Facebook a little, and it can be installed as a PWA #### Using Microsoft 365 Office Most people want to edit documents and maybe spreadsheets on their PC. A lot of people, including me, use Word and Excel for this purpose. Those apps aren’t available on Linux - at least not directly. Fortunately there are good alternatives, including: 1. Use https://onedrive.com to create and edit documents and spreadsheets in the browser 2. Use https://office.com to access Office online if you have a subscription 3. Install LibreOffice, an open-source office productivity suite sort of like Office I use OneDrive for a lot of personal documents, photos, etc. And I use actual Office for work. The LibreOffice idea is something I might explore at some point, but the online versions of the Office apps are usually enough for casual work - which is all I’m going to do on the little Surface Go device anyway. One feature of Edge is the ability to have multiple profiles. I use this all the time on Windows, having a personal and two work profiles. 
This feature works on Linux as well, though I found it had some glitches. My default Edge profile is my personal one, so all those PWAs I installed are connected to that profile. I set up another Edge profile for my CSLA work, and it is connected to my marimer.llc email address. This is where I log into the M365 office.com apps, and I have that page installed as a PWA. When I run “Office” it opens in my work profile and I have access to all my work documents. On my personal profile I don’t use the Office apps as much, but when I do open something from my personal OneDrive, it opens in that profile. The limitation is that I can only edit documents while online, but for my purposes with this device, that’s fine. I can edit my documents and spreadsheets as necessary. ## Conclusion At this point I’m pretty happy. I don’t expect to use this little device to do any major software development, but it actually does run vscode and .NET just fine (and also Jetbrains Rider if you prefer a more powerful option). I mostly use it for browsing the web, discord, Mastodon, and Bluesky. Will I bring this with when I travel? No, because my normal Windows 11 PC does everything I want. Could I live with this as my “one device”? Well, no, but that’s because it is underpowered and physically too small. But could I live with a modern laptop running Ubuntu? Yes, I certainly could. I wouldn’t _prefer_ it, because I like full-blown Visual Studio and way too many high end Steam games. The thing is, I am finding myself leaving the Surface Go in the living room, and reaching for it to scan the socials while watching TV. Something I could have done just as well with Windows, and can now do with Linux.
blog.lhotka.net
October 30, 2025 at 10:48 PM
A Simple CSLA MCP Server
In a recent CSLA discussion thread, a user asked about setting up a simple CSLA Model Context Protocol (MCP) server. https://github.com/MarimerLLC/csla/discussions/4685 I’ve written a few MCP servers over the past several months with varying degrees of success. Getting the MCP protocol right is tricky (or was), and using semantic matching with vectors isn’t always the best approach, because I find it often misses the most obvious results. Recently, however, Anthropic published a C# SDK (and NuGet package) that makes it easier to create and host an MCP server. The SDK handles the MCP protocol details, so you can focus on implementing your business logic. https://github.com/modelcontextprotocol/csharp-sdk Also, I’ve been reading up on the idea of hybrid search, which combines traditional search techniques with vector-based semantic search. This approach can help improve the relevance of search results by leveraging the strengths of both methods. The code I’m going to walk through in this post can be easily adapted to any scenario, not just CSLA. In fact, the MCP server just searches and returns markdown files from a folder. To use it for any scenario, you just need to change the source files and update the descriptions of the server, tools, and parameters that are in the attributes in code. Perhaps a future enhancement for this project will be to make those dynamic so you can change them without recompiling the code. The code for this article can be found in this GitHub repository. > ℹ️ Most of the code was actually written by Claude Sonnet 4 with my collaboration. Or maybe I wrote it with the collaboration of the AI? The point is, I didn’t do much of the typing myself. Before getting into the code, I want to point out that this MCP server really is useful. Yes, the LLMs already know all about CSLA because CSLA is open source. However, the LLMs often return outdated or incorrect information. By providing a custom MCP server that searches the actual CSLA code samples and snippets, the LLM can return accurate and up-to-date information. ## The MCP Server Host The MCP server itself is a console app that uses Spectre.Console to provide a nice command-line interface. The project also references the Anthropic C# SDK and some other packages. It targets .NET 10.0, though I believe the code should work with .NET 8.0 or later. I am not going to walk through every line of code, but I will highlight the key parts. > ⚠️ The modelcontextprotocol/csharp-sdk package is evolving rapidly, so you may need to adapt to whatever is latest when you try to build your own. Also, all the samples in their GitHub repository use static tool methods, and I do as well. At some point I hope to figure out how to use instance methods instead, because that will allow the use of dependency injection. Right now the code has a lot of `Console.WriteLine` statements that would be better handled by a logging framework. Although the project is a console app, it does use ASP.NET Core to host the MCP server. var builder = WebApplication.CreateBuilder(); builder.Services.AddMcpServer() .WithHttpTransport() .WithTools<CslaCodeTool>(); The `AddMcpServer` method adds the MCP server services to the ASP.NET Core dependency injection container. The `WithHttpTransport` method configures the server to use HTTP as the transport protocol. The `WithTools<CslaCodeTool>` method registers the `CslaCodeTool` class as a tool that can be used by the MCP server.
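For completeness, the remainder of a minimal `Program.cs` would build the app and map the MCP endpoints. Here is a sketch, assuming the SDK’s `MapMcp` endpoint-mapping extension (provided by recent versions of the csharp-sdk’s ASP.NET Core integration - verify against the version you are using):

var app = builder.Build();

// Map the HTTP endpoints for the MCP server (extension from the csharp-sdk's ASP.NET Core package).
app.MapMcp();

app.Run();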
There is also a `WithStdioTransport` method that can be used to configure the server to use standard input and output as the transport protocol. This is useful if you want to run the server locally when using a locally hosted LLM client. The nice thing about using the modelcontextprotocol/csharp-sdk package is that it handles all the details of the MCP protocol for you. You just need to implement your tools and their methods. All the subtleties of the MCP protocol are handled by the SDK. ## Implementing the Tools The `CslaCodeTool` class is where the main logic of the MCP server resides. This class is decorated with the `McpServerToolType` attribute, which indicates that this class will contain MCP tool methods. [McpServerToolType] public class CslaCodeTool ### The Search Method The first tool is Search, defined by the `Search` method. This method is decorated with the `McpServerTool` attribute, which indicates that this method is an MCP tool method. The attribute also provides a description of the tool and what it will return. This description is used by the LLM to determine when to use this tool. My description here is probably a bit too short, but it seems to work okay. Any parameters for the tool method are decorated with the `Description` attribute, which provides a description of the parameter. This description is used by the LLM to understand what the parameter is for, and what kind of value to provide. [McpServerTool, Description("Searches CSLA .NET code samples and snippets for examples of how to implement code that makes use of #cslanet. Returns a JSON object with two sections: SemanticMatches (vector-based semantic similarity) and WordMatches (traditional keyword matching). Both sections are ordered by their respective scores.")] public static string Search([Description("Keywords used to match against CSLA code samples and snippets. For example, read-write property, editable root, read-only list.")]string message) #### Word Matching The original implementation (which works very well) uses only word matching. To do this, it gets a list of all the files in the target directory, and searches them for any words from the LLM’s `message` parameter that are 4 characters or longer. It counts the number of matches in each file to generate a score for that file. Here’s the code that gets the list of search terms from `message`: // Extract words that are 4 characters or longer from the message var searchWords = message .Split(new char[] { ' ', '\t', '\n', '\r', '.', ',', ';', ':', '!', '?', '(', ')', '[', ']', '{', '}', '"', '\'', '-', '_' }, StringSplitOptions.RemoveEmptyEntries) .Where(word => word.Length > 3) .Select(word => word.ToLowerInvariant()) .Distinct() .ToList(); Console.WriteLine($"[CslaCodeTool.Search] Extracted search words: [{string.Join(", ", searchWords)}]"); It then loops through each file and counts the number of matching words. The final result is sorted by score and then file name: var sortedResults = results.OrderByDescending(r => r.Score).ThenBy(r => r.FileName).ToList(); #### Semantic Matching More recently I added semantic matching as well, resulting in a hybrid search approach. The search tool now returns two sets of results: one based on traditional word matching, and one based on vector-based semantic similarity. The semantic search behavior comes in two parts: indexing the source files, and then matching against the message parameter from the LLM. ##### Indexing the Source Files Indexing source files takes time and effort. 
To minimize startup time, the MCP server actually starts and will work without the vector data. In that case it relies on the word matching only. After a few minutes, the vector indexing will be complete and the semantic search results will be available. The indexing is done by calling a text embedding model to generate a vector representation of the text in each file. The vectors are then stored in memory along with the file name and content. Or the vectors could be stored in a database to avoid having to re-index the files each time the server is started. I’m relying on a `vectorStore` object to index each document: await vectorStore.IndexDocumentAsync(fileName, content); This `VectorStoreService` class is a simple in-memory vector store that uses Ollama to generate the embeddings: public VectorStoreService(string ollamaEndpoint = "http://localhost:11434", string modelName = "nomic-embed-text:latest") { _httpClient = new HttpClient(); _vectorStore = new Dictionary<string, DocumentEmbedding>(); _ollamaEndpoint = ollamaEndpoint; _modelName = modelName; } This could be (and probably will be) adapted to use a cloud-based embedding model instead of a local Ollama instance. Ollama is free and easy to use, but it does require a local installation. The actual embedding is created by a call to the Ollama endpoint: var response = await _httpClient.PostAsync($"{_ollamaEndpoint}/api/embeddings", content); The embedding is just a list of floating-point numbers that represent the semantic meaning of the text. This needs to be extracted from the JSON response returned by the Ollama endpoint. var responseJson = await response.Content.ReadAsStringAsync(); var result = JsonSerializer.Deserialize<JsonElement>(responseJson); if (result.TryGetProperty("embedding", out var embeddingElement)) { var embedding = embeddingElement.EnumerateArray() .Select(e => (float)e.GetDouble()) .ToArray(); return embedding; } > 👩‍🔬 All those floating-point numbers are the magic of this whole thing. I don’t understand any of the math, but it obviously represents the semantic “meaning” of the file in a way that a query can be compared later to see if it is a good match. All those embeddings are stored in memory for later use. ##### Matching Against the Message When the `Search` method is called, it first generates an embedding for the `message` parameter using the same embedding model. It then compares that embedding to each of the document embeddings in the vector store to calculate a similarity score. All that work is delegated to the `VectorStoreService`: var semanticResults = VectorStore.SearchAsync(message, topK: 10).GetAwaiter().GetResult(); In the `VectorStoreService` class, the `SearchAsync` method generates the embedding for the query message: var queryEmbedding = await GetTextEmbeddingAsync(query); It then calculates the cosine similarity between the query embedding and each document embedding in the vector store: foreach (var doc in _vectorStore.Values) { var similarity = CosineSimilarity(queryEmbedding, doc.Embedding); results.Add(new SemanticSearchResult { FileName = doc.FileName, SimilarityScore = similarity }); } The results are then sorted by similarity score and the top K results are returned. var topResults = results .OrderByDescending(r => r.SimilarityScore) .Take(topK) .Where(r => r.SimilarityScore > 0.5f) // Filter out low similarity scores .ToList(); ##### The Final Result The final result of the `Search` method is a JSON object that contains two sections: `SemanticMatches` and `WordMatches`. 
Each section contains a list of results ordered by their respective scores. var combinedResult = new CombinedSearchResult { SemanticMatches = semanticMatches, WordMatches = sortedResults }; It is up to the calling LLM to decide which set of results to use. In the end, the LLM will use the fetch tool to retrieve the content of one or more of the files returned by the search tool. ### The Fetch Method The second tool is Fetch, defined by the `Fetch` method. This method is also decorated with the `McpServerTool` attribute, which provides a description of the tool and what it will return. [McpServerTool, Description("Fetches a specific CSLA .NET code sample or snippet by name. Returns the content of the file that can be used to properly implement code that uses #cslanet.")] public static string Fetch([Description("FileName from the search tool.")]string fileName) This method has some defensive code to prevent path traversal attacks and other things, but ultimately it just reads the content of the specified file and returns it as a string. var content = File.ReadAllText(filePath); return content; ## Hosting the MCP Server The MCP server can be hosted in a variety of ways. The simplest is to run it as a console app on your local machine. This is useful for development and testing. You can also host it in a cloud environment, such as Azure App Service or AWS Elastic Beanstalk. This allows you to make the MCP server available to other applications and services. Like most things, I am running it in a Docker container so I can choose to host it anywhere, including on my local Kubernetes cluster. For real use in your organization, you will want to ensure that the MCP server endpoint is available to all your developers from their vscode or Visual Studio environments. This might be a public IP, or one behind a VPN, or some other secure way to access it. I often use tools like Tailscale or ngrok to make local services available to remote clients. ## Testing the MCP Server Testing an MCP server isn’t as straightforward as testing a regular web API. You need an LLM client that can communicate with the MCP server using the MCP protocol. Anthropic has an npm package that can be used to test the MCP server. You can find it here: https://github.com/modelcontextprotocol/inspector This package provides a GUI or CLI tool that can be used to interact with the MCP server. You can use it to send messages to the server and see the responses. It is a great way to test and debug your MCP server. Another option is to use the MCP support built into recent vscode versions. Once you add your MCP server endpoint to your vscode settings, you can use the normal AI chat interface to ask the chat bot to interact with the MCP server. For example: call the csla-mcp-server tools to see if they work This will cause the chat bot to invoke the `Search` tool, and then the `Fetch` tool to get the content of one of the files returned by the search. Once you have the MCP server working and returning the types of results you want, add it to your vscode or Visual Studio settings so all your developers can use it. In my experience the LLM chat clients are pretty good about invoking the MCP server to determine the best way to author code that uses CSLA .NET. ## Conclusion Setting up a simple CSLA MCP server is not too difficult, especially with the help of the Anthropic C# SDK. By implementing a couple of tools to search and fetch code samples, you can provide a powerful resource for developers using CSLA .NET. 
The hybrid search approach, combining traditional word matching with vector-based semantic similarity, helps improve the relevance of search results. This makes it easier for developers to find the code samples they need. I hope this article has been helpful in understanding how to set up a simple CSLA MCP server. If you have any questions or need further assistance, feel free to reach out on the CSLA discussion forums or GitHub repository for the csla-mcp project.
blog.lhotka.net
October 30, 2025 at 10:48 PM
Unit Testing CSLA Rules With Rocks
One of the most powerful features of CSLA .NET is its business rules engine. It allows you to encapsulate validation, authorization, and other business logic in a way that is easy to manage and maintain. In CSLA, a rule is a class that implements `IBusinessRule`, `IBusinessRuleAsync`, `IAuthorizationRule`, or `IAuthorizationRuleAsync`. These interfaces define the contract for a rule, including methods for executing the rule and properties for defining the rule’s behavior. Normally a rule inherits from an existing base class that implements one of these interfaces. When you create a rule, you typically associate it with a specific property or set of properties on a business object. The rule is then executed automatically by the CSLA framework whenever the associated property or properties change. The advantage of a CSLA rule being a class, is that you can unit test it in isolation. This is where the Rocks mocking framework comes in. Rocks allows you to create mock objects for your unit tests, making it easier to isolate the behavior of the rule you are testing. You can create a mock business object and set up expectations for how the rule should interact with that object. This allows you to test the rule’s behavior without having to worry about the complexities of the entire business object. In summary, the combination of CSLA’s business rules engine and the Rocks mocking framework provides a powerful way to create and test business rules in isolation, ensuring that your business logic is both robust and maintainable. All code for this article can be found in this GitHub repository in Lab 02. ## Creating a Business Rule As an example, consider a business rule that sets an `IsActive` property based on the value of a `LastOrderDate` property. If the `LastOrderDate` is within the last year, then `IsActive` should be true; otherwise, it should be false. using Csla.Core; using Csla.Rules; namespace BusinessLibrary.Rules; public class LastOrderDateRule : BusinessRule { public LastOrderDateRule(IPropertyInfo lastOrderDateProperty, IPropertyInfo isActiveProperty) : base(lastOrderDateProperty) { InputProperties.Add(lastOrderDateProperty); AffectedProperties.Add(isActiveProperty); } protected override void Execute(IRuleContext context) { var lastOrderDate = (DateTime)context.InputPropertyValues[PrimaryProperty]; var isActive = lastOrderDate > DateTime.Now.AddYears(-1); context.AddOutValue(AffectedProperties[1], isActive); } } This rule inherits from `BusinessRule`, which is a base class provided by CSLA that implements the `IBusinessRule` interface. The constructor takes two `IPropertyInfo` parameters: one for the `LastOrderDate` property and one for the `IsActive` property. The `InputProperties` collection is used to specify which properties the rule depends on, and the `AffectedProperties` collection is used to specify which properties the rule affects. The `Execute` method is where the rule’s logic is implemented. It retrieves the value of the `LastOrderDate` property from the `InputPropertyValues` dictionary, checks if it is within the last year, and then sets the value of the `IsActive` property using the `AddOutValue` method. ## Unit Testing the Business Rule Now that we have our business rule, we can create a unit test for it using the Rocks mocking framework. 
First, we need to bring in a few namespaces:

```csharp
using BusinessLibrary.Rules;
using Csla;
using Csla.Configuration;
using Csla.Core;
using Csla.Rules;
using Microsoft.Extensions.DependencyInjection;
using Rocks;
using System.Security.Claims;
```

Next, we can use Rocks attributes to define the mock types we need for our test:

```csharp
[assembly: Rock(typeof(IPropertyInfo), BuildType.Create | BuildType.Make)]
[assembly: Rock(typeof(IRuleContext), BuildType.Create | BuildType.Make)]
```

These lines of code only need to be included once in your test project, because they are assembly-level attributes. They tell Rocks to create mock implementations of the `IPropertyInfo` and `IRuleContext` interfaces, which we will use in our unit test.

Now we can create our unit test method to test the `LastOrderDateRule`. To do this, we need to arrange the necessary mock objects and set up their expectations. Then we can execute the rule and verify that it behaves as expected. The rule has a constructor that takes two `IPropertyInfo` parameters, so we need to create mock implementations of that interface. We also need to create a mock implementation of the `IRuleContext` interface, which is used to pass information to the rule when it is executed.

```csharp
[TestMethod]
public void LastOrderDateRule_SetsIsActiveBasedOnLastOrderDate()
{
  // Arrange
  var inputProperties = new Dictionary<IPropertyInfo, object>();
  using var context = new RockContext();

  var lastOrderPropertyExpectations = context.Create<IPropertyInfoCreateExpectations>();
  lastOrderPropertyExpectations.Properties.Getters.Name()
    .ReturnValue("name")
    .ExpectedCallCount(2);
  var lastOrderProperty = lastOrderPropertyExpectations.Instance();
  var isActiveProperty = new IPropertyInfoMakeExpectations().Instance();

  var ruleContextExpectations = context.Create<IRuleContextCreateExpectations>();
  ruleContextExpectations.Properties.Getters.InputPropertyValues().ReturnValue(inputProperties);
  ruleContextExpectations.Methods.AddOutValue(Arg.Is(isActiveProperty), true);

  inputProperties.Add(lastOrderProperty, new DateTime(2025, 9, 24, 18, 3, 40));

  // Act
  var rule = new LastOrderDateRule(lastOrderProperty, isActiveProperty);
  (rule as IBusinessRule).Execute(ruleContextExpectations.Instance());

  // Assert is automatically done by Rocks when disposing the context
}
```

Notice how the Rocks mock objects have expectations set up for their properties and methods. This allows us to verify that the rule interacts with the context as expected. This is a little different from more explicit `Assert` statements, but it is a powerful way to ensure that the rule behaves correctly.

For example, notice how the `Name` property of the `lastOrderProperty` mock is expected to be called twice. If the rule does not call this property the expected number of times, the test will fail when the `context` is disposed at the end of the `using` block:

```csharp
lastOrderPropertyExpectations.Properties.Getters.Name()
  .ReturnValue("name")
  .ExpectedCallCount(2);
```

This is a powerful feature of Rocks that allows you to verify the behavior of your code without having to write explicit assertions.

The test creates an instance of the `LastOrderDateRule` and calls its `Execute` method, passing in the mock `IRuleContext`. The rule should set the `IsActive` property to true because the `LastOrderDate` is within the last year. When the test completes, Rocks will automatically verify that all expectations were met. If any expectations were not met, the test will fail.
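For contrast, here is a second test that exercises the inactive path. This one is my own sketch, not part of Lab 02; it reuses the same Rocks patterns shown above, supplying a `LastOrderDate` well over a year old and expecting `AddOutValue` to be called with `false`:

```csharp
[TestMethod]
public void LastOrderDateRule_ClearsIsActiveWhenLastOrderIsOld()
{
  // Arrange
  var inputProperties = new Dictionary<IPropertyInfo, object>();
  using var context = new RockContext();

  var lastOrderPropertyExpectations = context.Create<IPropertyInfoCreateExpectations>();
  lastOrderPropertyExpectations.Properties.Getters.Name()
    .ReturnValue("name")
    .ExpectedCallCount(2);
  var lastOrderProperty = lastOrderPropertyExpectations.Instance();
  var isActiveProperty = new IPropertyInfoMakeExpectations().Instance();

  var ruleContextExpectations = context.Create<IRuleContextCreateExpectations>();
  ruleContextExpectations.Properties.Getters.InputPropertyValues().ReturnValue(inputProperties);
  // This time the rule should report the customer as inactive.
  ruleContextExpectations.Methods.AddOutValue(Arg.Is(isActiveProperty), false);

  // A last order date well over a year in the past.
  inputProperties.Add(lastOrderProperty, new DateTime(2020, 1, 1));

  // Act
  var rule = new LastOrderDateRule(lastOrderProperty, isActiveProperty);
  (rule as IBusinessRule).Execute(ruleContextExpectations.Instance());

  // Assert is automatically done by Rocks when disposing the context
}
```

The expectation setup mirrors the first test; if the rule interacts with the mocks differently on this path, the expected call counts may need adjusting.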
This is a simple example, but it demonstrates how you can use Rocks to unit test CSLA business rules in isolation. By creating mock objects for the dependencies of the rule, you can focus on testing the rule’s behavior without having to worry about the complexities of the entire business object.

## Conclusion

CSLA’s business rules engine is a powerful feature that allows you to encapsulate business logic in a way that is easy to manage and maintain. By using the Rocks mocking framework, you can create unit tests for your business rules that isolate their behavior and ensure that they work as expected. This combination of CSLA and Rocks provides a robust and maintainable way to implement and test business logic in your applications.
blog.lhotka.net
October 30, 2025 at 10:48 PM
MCP and A2A Basics
I have been spending a lot of time lately learning about the Model Context Protocol (MCP) and Agent to Agent (A2A) protocols. And a little about a slightly older technology called the activity protocol that comes from the Microsoft bot framework.

I’m writing this blog post mostly for myself, because writing content helps me organize my thoughts and solidify my understanding of concepts. As they say with AIs, mistakes are possible, because my understanding of all this technology is still evolving. (disclaimer: unless otherwise noted, I wrote this post myself, with my own fingers on a keyboard)

## Client-Server is Alive and Well

First off, I think it is important to recognize that the activity protocol basically sits on top of REST, and so is client-server. The MCP protocol is also client-server, sitting on top of JSON-RPC. A2A _can be_ client-server, or peer-to-peer, depending on how you use it. The simplest form is client-server, with peer-to-peer providing a lot more capability, but also complexity.

## Overall Architecture

These protocols (in particular MCP and A2A) exist to enable communication between LLM “AI” agents and their environments, or other tools, or other agents.

### Activity Protocol

The activity protocol is a client-server protocol that sits on top of REST. It is primarily used for communication between a user and a bot, or between bots. The protocol defines a set of RESTful APIs for sending and receiving activities, which are JSON objects that represent a message, event, or command. The activity protocol is widely used in the Microsoft Bot Framework and is supported by many bot channels, such as Microsoft Teams, Slack, and Facebook Messenger. (that previous paragraph was written by AI - but it is pretty good)

### MCP

The Model Context Protocol is really a standard and flexible way to expand the older concept of LLM tool or function calling. The primary intent is to allow an LLM AI to call tools that interact with the environment, call other apps, get data from services, or do other client-server style interactions.

The rate of change here is pretty staggering. The idea of an LLM being able to call functions or “tools” isn’t that old. The limitation of that approach was that these functions had to be registered with the LLM in a way that wasn’t standard across LLM tools or platforms. MCP provides a standard for registration and interaction, allowing an MCP-enabled LLM to call in-process tools (via standard IO) or remotely (via HTTP).

If you dig a little into the MCP protocol, it is eerily reminiscent of COM from the 1990’s (and I suspect CORBA as well). We provide the LLM “client” with an endpoint for the MCP server. The client can ask the MCP server what it does, and also for a list of tools it provides. Much like `IUnknown` in COM. Once the LLM client has the description of the server and all the tools, it can then decide when and if it should call those tools to solve problems.

You might create a tool that deletes a file, or creates a file, or blinks a light on a device, or returns some data, or sends a message, or creates a record in a database. Really, the sky is the limit in terms of what you can build with MCP.

### A2A

Agent to Agent (A2A) communication is a newer and more flexible protocol that (I think) has the potential to do a couple of things:

1. I could see it replacing MCP, because you can use A2A for client-server calls from an LLM client to an A2A “tool” or agent. This is often done over HTTP.
2. It also can be used to implement bi-directional, peer-to-peer communication between agents, enabling more complex and dynamic interactions. This is often done over WebSockets or (better yet) queuing systems like RabbitMQ.

## Metadata Rules

In any case, the LLM that is going to call a tool or send a message to another agent needs a way to understand the capabilities and requirements of that tool or agent. This is where metadata comes into play. Metadata provides essential information about the tool or agent, such as its name, description, input and output parameters, and more.

“Metadata” in this context is human language descriptions. Remember that the calling LLM is an AI model that is generally good with language. However, some of the metadata might also describe JSON schemas or other structured data formats to precisely define the inputs and outputs. But even that is usually surrounded by human-readable text that describes the purpose of the schema or data formats.

This is where the older activity protocol falls down, because it doesn’t provide metadata like MCP or A2A. The newer protocols include the ability to provide descriptions of the service/agent, and of tool methods or messages that are exchanged.

## Authentication and Identity

In all cases, these protocols aren’t terribly complex. Even the A2A peer-to-peer isn’t that difficult if you have an understanding of async messaging concepts and protocols. What does seem to _always_ be complex is managing authentication and identity across these interactions. There seem to be multiple layers at work here:

1. The client needs to authenticate to call the service - often with some sort of service identity represented by a token.
2. The service needs to authenticate the client, so that service token is important
3. HOWEVER, the service also usually needs to “impersonate” or act on behalf of a user or another identity, which can be a separate token or credential

Getting these tokens, and validating them correctly, is often the hardest part of implementing these protocols. This is especially true when you are using abstract AI/LLM hosting environments. It is hard enough in code like C#, where you can see the token handling explicitly, but in many AI hosting platforms, these details are abstracted away, making it challenging to implement robust security.

## Summary

The whole concept of an LLM AI calling tools, then services, and then having peer-to-peer interactions has evolved very rapidly over the past couple of years, and it is _still_ evolving very rapidly. Just this week, for example, Microsoft announced the Microsoft Agent Framework that replaces Semantic Kernel and Autogen. And that’s just one example!

What makes me feel better though, is that at their heart, these protocols are just client-server protocols with some added layers for metadata. Or a peer-to-peer communication protocol that relies on asynchronous messaging patterns. While these frameworks (to a greater or lesser degree) have some support for authentication and token passing, that seems to be the weakest part of the tooling, and the hardest to solve in real-life implementations.
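As a concrete footnote to the metadata discussion above, here is a rough sketch of the kind of tool metadata an MCP server returns from a `tools/list` request: a name, a human-language description, and a JSON schema for the tool’s input. The shape shown here follows my reading of the MCP spec and is illustrative only; the tool itself is hypothetical and not taken from any particular SDK.

```csharp
using System.Text.Json;

// Illustrative tool metadata; the tool and its fields are hypothetical.
var toolMetadata = new
{
    name = "get_customer_orders",
    description = "Returns the orders for a customer, most recent first. " +
                  "Use this when the user asks about a customer's order history.",
    inputSchema = new
    {
        type = "object",
        properties = new
        {
            customerId = new { type = "string", description = "The customer's unique id" },
            maxResults = new { type = "integer", description = "Maximum number of orders to return" }
        },
        required = new[] { "customerId" }
    }
};

Console.WriteLine(JsonSerializer.Serialize(
    toolMetadata, new JsonSerializerOptions { WriteIndented = true }));
```

The point is that both the description text and the schema exist for the benefit of the calling LLM, which uses them to decide when and how to invoke the tool.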
blog.lhotka.net
October 30, 2025 at 10:48 PM
Accessing User Identity on a Blazor Wasm Client
On the server, Blazor authentication is fairly straightforward because it uses the underlying ASP.NET Core authentication mechanism. I’ll quickly review server authentication before getting to the WebAssembly part so you have an end-to-end understanding.

I should note that this post is all about a Blazor 8 app that uses per-component rendering, so there is an ASP.NET Core server hosting Blazor server pages, and there may also be pages using `InteractiveAuto` or `InteractiveWebAssembly` that run in WebAssembly on the client device.

## Blazor Server Authentication

Blazor Server components are running in an ASP.NET Core hosted web server environment. This means that they can have access to all that ASP.NET Core has to offer. For example, a server-static rendered Blazor server page can use HttpContext, and therefore can use the standard ASP.NET Core `SignInAsync` and `SignOutAsync` methods like you’d use in MVC or Razor Pages.

### Blazor Login Page

Here’s the razor markup for a simple `Login.razor` page from a Blazor 8 server project with per-component rendering:

```razor
@page "/login"
@using BlazorHolWasmAuthentication.Services
@using Microsoft.AspNetCore.Authentication
@using Microsoft.AspNetCore.Authentication.Cookies
@using System.Security.Claims
@inject UserValidation UserValidation
@inject IHttpContextAccessor httpContextAccessor
@inject NavigationManager NavigationManager

<PageTitle>Login</PageTitle>

<h1>Login</h1>

<div>
  <EditForm Model="userInfo" OnSubmit="LoginUser" FormName="loginform">
    <div>
      <label>Username</label>
      <InputText @bind-Value="userInfo.Username" />
    </div>
    <div>
      <label>Password</label>
      <InputText type="password" @bind-Value="userInfo.Password" />
    </div>
    <button>Login</button>
  </EditForm>
</div>
<div style="background-color:lightgray">
  <p>User identities:</p>
  <p>admin, admin</p>
  <p>user, user</p>
</div>
<div><p class="alert-danger">@Message</p></div>
```

This form uses the server-static form of the `EditForm` component, which does a standard postback to the server. Blazor uses the `FormName` and `OnSubmit` attributes to route the postback to a `LoginUser` method in the code block:

```razor
@code {
  [SupplyParameterFromForm]
  public UserInfo userInfo { get; set; } = new();

  public string Message { get; set; } = "";

  private async Task LoginUser()
  {
    Message = "";
    ClaimsPrincipal principal;
    if (UserValidation.ValidateUser(userInfo.Username, userInfo.Password))
    {
      // create authenticated principal
      var identity = new ClaimsIdentity("custom");
      var claims = new List<Claim>();
      claims.Add(new Claim(ClaimTypes.Name, userInfo.Username));
      var roles = UserValidation.GetRoles(userInfo.Username);
      foreach (var item in roles)
        claims.Add(new Claim(ClaimTypes.Role, item));
      identity.AddClaims(claims);
      principal = new ClaimsPrincipal(identity);

      var httpContext = httpContextAccessor.HttpContext;
      if (httpContext is null)
      {
        Message = "HttpContext is null";
        return;
      }
      AuthenticationProperties authProperties = new AuthenticationProperties();
      await httpContext.SignInAsync(
        CookieAuthenticationDefaults.AuthenticationScheme,
        principal,
        authProperties);
      NavigationManager.NavigateTo("/");
    }
    else
    {
      Message = "Invalid credentials";
    }
  }

  public class UserInfo
  {
    public string Username { get; set; } = string.Empty;
    public string Password { get; set; } = string.Empty;
  }
}
```

The username and password are validated by a `UserValidation` service. That service returns whether the credentials were valid, and if they were valid, it returns the user’s claims.
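The `UserValidation` implementation isn’t shown above. As a minimal sketch (hard-coded demo users matching the hints on the login page; a real implementation would check an identity provider or membership store), it might look something like this:

```csharp
using System.Collections.Generic;

namespace BlazorHolWasmAuthentication.Services;

public class UserValidation
{
  // Illustrative only: never hard-code credentials in real code.
  public bool ValidateUser(string username, string password)
    => (username, password) switch
    {
      ("admin", "admin") => true,
      ("user", "user") => true,
      _ => false
    };

  public List<string> GetRoles(string username)
    => username switch
    {
      "admin" => new List<string> { "admin", "user" },
      "user" => new List<string> { "user" },
      _ => new List<string>()
    };
}
```

The roles returned here are what the login page turns into role claims.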
The code then uses that list of claims to create a `ClaimsIdentity` and `ClaimsPrincipal`. That pair of objects represents the user’s identity in .NET. The `SignInAsync` method is then called on the `HttpContext` object to create a cookie for the user’s identity (or whatever storage option was configured in `Program.cs`).

From this point forward, ASP.NET Core code (such as a web API endpoint) and Blazor server components (via the Blazor `AuthenticationStateProvider` and `CascadingAuthenticationState`) all have consistent access to the current user identity.

### Blazor Logout Page

The `Logout.razor` page is simpler still, since it doesn’t require any input from the user:

```razor
@page "/logout"
@using Microsoft.AspNetCore.Authentication
@using Microsoft.AspNetCore.Authentication.Cookies
@inject IHttpContextAccessor httpContextAccessor
@inject NavigationManager NavigationManager

<h3>Logout</h3>

@code {
  protected override async Task OnInitializedAsync()
  {
    var httpContext = httpContextAccessor.HttpContext;
    if (httpContext != null)
    {
      var principal = httpContext.User;
      if (principal.Identity is not null && principal.Identity.IsAuthenticated)
      {
        await httpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
      }
    }
    NavigationManager.NavigateTo("/");
  }
}
```

The important part of this code is the call to `SignOutAsync`, which removes the ASP.NET Core user token, thus ensuring the current user has been “logged out” from all ASP.NET Core and Blazor server app elements.

### Configuring the Server

For the `Login.razor` and `Logout.razor` pages to work, they must be server-static (which is the default for per-component rendering), and `Program.cs` must contain some important configuration. First, some services must be registered:

```csharp
builder.Services.AddHttpContextAccessor();
builder.Services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie();
builder.Services.AddCascadingAuthenticationState();
builder.Services.AddTransient<UserValidation>();
```

The `AddHttpContextAccessor` registration makes it possible to inject an `IHttpContextAccessor` service so your code can access the `HttpContext` instance.

> ⚠️ Generally speaking, you should only access `HttpContext` from within a server-static rendered page.

The `AddAuthentication` method registers and configures ASP.NET Core authentication, in this case storing the user token in a cookie. The `AddCascadingAuthenticationState` method enables Blazor server components to make use of cascading authentication state. Finally, the `UserValidation` service is registered. This service is implemented by you to verify the user credentials, and to return the user’s claims if the credentials are valid.

Some further configuration is required after the services have been registered:

```csharp
app.UseAuthentication();
app.UseAuthorization();
```

### Enabling Cascading Authentication State

The `Routes.razor` component is where the user authentication state is made available to all Blazor components on the server:

```razor
<CascadingAuthenticationState>
  <Router AppAssembly="typeof(Program).Assembly"
          AdditionalAssemblies="new[] { typeof(Client._Imports).Assembly }">
    <Found Context="routeData">
      <AuthorizeRouteView RouteData="routeData" DefaultLayout="typeof(Layout.MainLayout)" />
      <FocusOnNavigate RouteData="routeData" Selector="h1" />
    </Found>
  </Router>
</CascadingAuthenticationState>
```

Notice the addition of the `CascadingAuthenticationState` element, which cascades an `AuthenticationState` instance to all Blazor server components.
Also notice the use of `AuthorizeRouteView`, which enables the use of the authorization attribute in Blazor pages, so only an authorized user can access those pages. ### Adding the Login/Logout Links The final step to making authentication work on the server is to enhance the `MainLayout.razor` component to add links for the login and logout pages: @using Microsoft.AspNetCore.Components.Authorization @inherits LayoutComponentBase <div class="page"> <div class="sidebar"> <NavMenu /> </div> <main> <div class="top-row px-4"> <AuthorizeView> <Authorized> Hello, @context!.User!.Identity!.Name <a href="logout">Logout</a> </Authorized> <NotAuthorized> <a href="login">Login</a> </NotAuthorized> </AuthorizeView> </div> <article class="content px-4"> @Body </article> </main> </div> <div id="blazor-error-ui"> An unhandled error has occurred. <a href="" class="reload">Reload</a> <a class="dismiss">🗙</a> </div> The `AuthorizeView` component is used, with the `Authorized` block providing content for a logged in user, and the `NotAuthorized` block providing content for an anonymous user. In both cases, the user is directed to the appropriate page to login or logout. At this point, all _server-side_ Blazor components can use authorization, because they have access to the user identity via the cascading `AuthenticationState` object. This doesn’t automatically extend to pages or components running in WebAssembly on the browser. That takes some extra work. ## Blazor WebAssembly User Identity There is nothing built in to Blazor that automatically makes the user identity available to pages or components running in WebAssembly on the client device. You should also be aware that there are possible security implications to making the user identity available on the client device. This is because any client device can be hacked, and so a bad actor could gain access to any `ClaimsIdentity` object that exists on the client device. As a result, a bad actor could get a list of the user’s claims, if those claims are on the client device. In my experience, if developers are using client-side technologies such as WebAssembly, Angular, React, WPF, etc. they’ve already reconciled the security implications of running code on a client device, and so it is probably not an issue to have the user’s roles or other claims on the client. I will, however, call out where you can filter the user’s claims to prevent a sensitive claim from flowing to a client device. The basic process of making the user identity available on a WebAssembly client is to copy the user’s claims from the server, and to use that claims data to create a copy of the `ClaimsIdentity` and `ClaimsPrincipal` on the WebAssembly client. ### A Web API for ClaimsPrincipal The first step is to create a web API endpoint on the ASP.NET Core (and Blazor) server that exposes a copy of the user’s claims so they can be retrieved by the WebAssembly client. 
For example, here is a controller that provides this functionality: using Microsoft.AspNetCore.Mvc; using System.Security.Claims; namespace BlazorHolWasmAuthentication.Controllers; [ApiController] [Route("[controller]")] public class AuthController(IHttpContextAccessor httpContextAccessor) { [HttpGet] public User GetUser() { ClaimsPrincipal principal = httpContextAccessor!.HttpContext!.User; if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated) { // Return a user object with the username and claims var claims = principal.Claims.Select(c => new Claim { Type = c.Type, Value = c.Value }).ToList(); return new User { Username = principal.Identity!.Name, Claims = claims }; } else { // Return an empty user object return new User(); } } } public class Credentials { public string Username { get; set; } = string.Empty; public string Password { get; set; } = string.Empty; } public class User { public string Username { get; set; } = string.Empty; public List<Claim> Claims { get; set; } = []; } public class Claim { public string Type { get; set; } = string.Empty; public string Value { get; set; } = string.Empty; } This code uses an `IHttpContextAccessor` to access `HttpContext` to get the current `ClaimsPrincipal` from ASP.NET Core. It then copies the data from the `ClaimsIdentity` into simple types that can be serialized into JSON for return to the caller. Notice how the code doesn’t have to do any work to determine the identity of the current user. This is because ASP.NET Core has already authenticated the user, and the user identity token cookie has been unpacked by ASP.NET Core before the controller is invoked. The line of code where you could filter sensitive user claims is this: var claims = principal.Claims.Select(c => new Claim { Type = c.Type, Value = c.Value }).ToList(); This line copies _all_ claims for serialization to the client. You could filter out claims considered sensitive so they don’t flow to the WebAssembly client. Keep in mind that any code that relies on such claims won’t work in WebAssembly pages or components. In the server `Program.cs` it is necessary to register and map controllers. builder.Services.AddControllers(); and app.MapControllers(); At this point the web API endpoint exists for use by the Blazor WebAssembly client. ### Getting the User Identity in WebAssembly Blazor always maintains the current user identity as a `ClaimsPrincpal` in an `AuthenticationState` object. Behind the scenes, there is an `AuthenticationStateProvider` service that provides access to the `AuthenticationState` object. On the Blazor server we generally don’t need to worry about the `AuthenticationStateProvider` because a default one is provided for our use. On the Blazor WebAssembly client however, we must implement a custom `AuthenticationStateProvider`. For example: using Microsoft.AspNetCore.Components.Authorization; using System.Net.Http.Json; using System.Security.Claims; namespace BlazorHolWasmAuthentication.Client; public class CustomAuthenticationStateProvider(HttpClient HttpClient) : AuthenticationStateProvider { private AuthenticationState AuthenticationState { get; set; } = new AuthenticationState(new ClaimsPrincipal()); private DateTimeOffset? 
CacheExpire; public override async Task<AuthenticationState> GetAuthenticationStateAsync() { if (!CacheExpire.HasValue || DateTimeOffset.Now > CacheExpire) { var previousUser = AuthenticationState.User; var user = await HttpClient.GetFromJsonAsync<User>("auth"); if (user != null && !string.IsNullOrEmpty(user.Username)) { var claims = new List<System.Security.Claims.Claim>(); foreach (var claim in user.Claims) { claims.Add(new System.Security.Claims.Claim(claim.Type, claim.Value)); } var identity = new ClaimsIdentity(claims, "auth_api"); var principal = new ClaimsPrincipal(identity); AuthenticationState = new AuthenticationState(principal); } else { AuthenticationState = new AuthenticationState(new ClaimsPrincipal()); } if (!ComparePrincipals(previousUser, AuthenticationState.User)) { NotifyAuthenticationStateChanged(Task.FromResult(AuthenticationState)); } CacheExpire = DateTimeOffset.Now + TimeSpan.FromSeconds(30); } return AuthenticationState; } private static bool ComparePrincipals(ClaimsPrincipal principal1, ClaimsPrincipal principal2) { if (principal1.Identity == null || principal2.Identity == null) return false; if (principal1.Identity.Name != principal2.Identity.Name) return false; if (principal1.Claims.Count() != principal2.Claims.Count()) return false; foreach (var claim in principal1.Claims) { if (!principal2.HasClaim(claim.Type, claim.Value)) return false; } return true; } private class User { public string Username { get; set; } = string.Empty; public List<Claim> Claims { get; set; } = []; } private class Claim { public string Type { get; set; } = string.Empty; public string Value { get; set; } = string.Empty; } } This is a subclass of `AuthenticationStateProvider`, and it provides an implementation of the `GetAuthenticationStateAsync` method. This method invokes the server-side web API controller to get the user’s claims, and then uses them to create a `ClaimsIdentity` and `ClaimsPrincipal` for the current user. This value is then returned within an `AuthenticationState` object for use by Blazor and any other code that requires the user identity on the client device. One key detail in this code is that the `NotifyAuthenticationStateChanged` method is only called in the case that the user identity has changed. The `ComparePrincipals` method compares the existing principal with the one just retrieved from the web API to see if there’s been a change. It is quite common for Blazor and other code to request the `AuthenticationState` very frequently, and that can result in a lot of calls to the web API. Even a cache that lasts a few seconds will reduce the volume of repetitive calls significantly. This code uses a 30 second cache. 
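Any client-side service or component can then ask for the current identity through the standard `AuthenticationStateProvider` abstraction. The following sketch is my own illustration (the service name and the “admin” role come from the demo users, not from the article), showing a client-side class checking a role before allowing an operation:

```csharp
using Microsoft.AspNetCore.Components.Authorization;

namespace BlazorHolWasmAuthentication.Client;

// Resolved via DI; CustomAuthenticationStateProvider is registered as
// the AuthenticationStateProvider in the client Program.cs shown below.
public class OrderPermissionService(AuthenticationStateProvider authStateProvider)
{
  public async Task<bool> CanEditOrdersAsync()
  {
    var state = await authStateProvider.GetAuthenticationStateAsync();
    // IsInRole works against the role claims copied from the server.
    return state.User.Identity is { IsAuthenticated: true }
      && state.User.IsInRole("admin");
  }
}
```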
### Configuring the WebAssembly Client

To make Blazor use our custom provider, and to enable authentication on the client, it is necessary to add some code to `Program.cs` _in the client project_:

```csharp
using BlazorHolWasmAuthentication.Client;
using Marimer.Blazor.RenderMode.WebAssembly;
using Microsoft.AspNetCore.Components.Authorization;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;

var builder = WebAssemblyHostBuilder.CreateDefault(args);

builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) });
builder.Services.AddAuthorizationCore();
builder.Services.AddScoped<AuthenticationStateProvider, CustomAuthenticationStateProvider>();
builder.Services.AddCascadingAuthenticationState();

await builder.Build().RunAsync();
```

The `CustomAuthenticationStateProvider` requires an `HttpClient` service, and relies on the `AddAuthorizationCore` and `AddCascadingAuthenticationState` registrations to function properly.

## Summary

The preexisting integration between ASP.NET Core and Blazor on the server makes server-side user authentication fairly straightforward. Extending the authenticated user identity to WebAssembly hosted pages and components requires a little extra work: creating a controller on the server and a custom `AuthenticationStateProvider` on the client.
blog.lhotka.net
October 30, 2025 at 8:47 PM
.NET Terminology
I was recently part of a conversation thread online, which reinforced the naming confusion that exists around the .NET (dotnet) ecosystem. I thought I’d summarize my responses to that thread, as it surely can be confusing to a newcomer, or even someone who blinked and missed a bit of time, as things change fast.

## .NET Framework

There is the Microsoft .NET Framework, which is tied to Windows and has been around since 2002 (give or take). It is now considered “mature” and is at version 4.8. We all expect that’s the last version, as it is in maintenance mode. I consider .NET Framework (netfx) to be legacy.

## Modern .NET

There is modern .NET (dotnet), which is cross-platform and isn’t generally tied to any specific operating system. I suppose the term “.NET” encompasses both, but most of us that write and speak in this space tend to use “.NET Framework” for legacy, and “.NET” for modern .NET.

The .NET Framework and modern .NET both have a bunch of sub-components that have their own names too. Subsystems for talking to databases, creating various types of user experience, and much more. Some are tied to Windows, others are cross platform. Some are legacy, others are modern.

It is important to remember that modern .NET is cross-platform and you can develop and deploy to Linux, Mac, Android, iOS, Windows, and other operating systems. It also supports various CPU architectures, and isn’t tied to x64.

## Modern Terminology

The following table tries to capture most of the major terminology around .NET today.

Tech | Status | Tied to Windows | Purpose
---|---|---|---
.NET (dotnet) 5+ | modern | No | Platform
ASP.NET Core | modern | No | Web Framework
Blazor | modern | No | Web SPA framework
ASP.NET Core MVC | modern | No | Web UI framework
ASP.NET Core Razor Pages | modern | No | Web UI framework
.NET MAUI | modern | No | Mobile/Desktop UI framework
MAUI Blazor Hybrid | modern | No | Mobile/Desktop UI framework
ADO.NET | modern | No | Data access framework
Entity Framework | modern | No | Data access framework
WPF | modern | Yes | Windows UI Framework
Windows Forms | modern | Yes | Windows UI Framework

## Legacy Terminology

And here is the legacy terminology.

Tech | Status | Tied to Windows | Purpose
---|---|---|---
.NET Framework (netfx) 4.8 | legacy | Yes | Platform
ASP.NET | legacy | Yes | Web Framework
ASP.NET Web Forms | legacy | Yes | Web UI Framework
ASP.NET MVC | legacy | Yes | Web UI Framework
Xamarin | legacy (deprecated) | No | Mobile UI Framework
ADO.NET | legacy | Yes | Data access framework
Entity Framework | legacy | Yes | Data access framework
UWP | legacy | Yes | Windows UI Framework
WPF | legacy | Yes | Windows UI Framework
Windows Forms | legacy | Yes | Windows UI Framework

## Messy History

Did I leave out some history? Sure, there’s the whole “.NET Core” thing, and the .NET Core 1.0-3.1 timespan, and .NET Standard (2 versions). Are those relevant in the world right now, today? Hopefully not really! They are cool bits of history, but just add confusion to anyone trying to approach modern .NET today.

## What I Typically Use

What do _I personally_ tend to use these days?
I mostly: * Develop modern dotnet on Windows using mostly Visual Studio, but also VS Code and Rider * Build my user experiences using Blazor and/or MAUI Blazor Hybrid * Build my web API services using ASP.NET Core * Use ADO.NET (often with the open source Dapper) for data access * Use the open source CSLA .NET for maintainable business logic * Test on Linux using Ubuntu on WSL * Deploy to Linux containers on the server (Azure, Kubernetes, etc.) ## Other .NET UI Frameworks Finally, I would be remiss if I didn’t mention some other fantastic cross-platform UI frameworks based on modern .NET: * Uno Platform * Avalonia * OpenSilver
blog.lhotka.net
October 30, 2025 at 8:47 PM
Blazor EditForm OnSubmit behavior
I am working on the open-source KidsIdKit app and have encountered some “interesting” behavior with the `EditForm` component and how buttons trigger the `OnSubmit` event. An `EditForm` is declared similar to this: <EditForm Model="CurrentChild" OnSubmit="SaveData"> I would expect that any `button` component with `type="submit"` would trigger the `OnSubmit` handler. <button class="btn btn-primary" type="submit">Save</button> I would also expect that any `button` component _without_ `type="submit"` would _not_ trigger the `OnSubmit` handler. <button class="btn btn-secondary" @onclick="CancelChoice">Cancel</button> I’d think this was true _especially_ if that second button was in a nested component, so it isn’t even in the `EditForm` directly, but is actually in its own component, and it uses an `EventCallback` to tell the parent component that the button was clicked. ### Actual Results In Blazor 8 I see different behaviors between MAUI Hybrid and Blazor WebAssembly hosts. In a Blazor WebAssembly (web) scenario, my expectations are met. The secondary button in the sub-component does _not_ cause `EditForm` to submit. In a MAUI Hybrid scenario however, the secondary button in the sub-component _does_ cause `EditForm` to submit. I also tried this using the new Blazor 9 MAUI Hybrid plus Web template - though in this case the web version is Blazor server. In my Blazor 9 scenarios, in _both_ hosting cases the secondary button triggers the submit of the `EditForm` - even though the secondary button is in a sub-component (its own `.razor` file)! What I’m getting out of this is that we must assume that _any button_ , even if it is in a nested component, will trigger the `OnSubmit` event of an `EditForm`. Nasty! ### Solution The solution (thanks to @jeffhandley) is to add `type="button"` to all non-submit `button` components. It turns out that the default HTML for `<button />` is `type="submit"`, so if you don’t override that value, then all buttons trigger a submit. What this means is that I _could_ shorten my actual submit button: <button class="btn btn-primary">Save</button> I probably won’t do this though, as being explicit probably increases readability. And I _absolutely must_ be explicit with all my other buttons: <button type="button" class="btn btn-secondary" @onclick="CancelChoice">Cancel</button> This prevents the other buttons (even in nested Razor components) from accidentally triggering the submit behavior in the `EditForm` component.
blog.lhotka.net
October 30, 2025 at 8:47 PM
Do not throw away your old PCs
As many people know, Windows 10 is coming to its end of life (or at least end of support) in 2025. Because Windows 11 requires specialized hardware that isn’t built into a lot of existing PCs running Windows 10, there is no _Microsoft-based_ upgrade path for those devices. The thing is, a lot of those “old” Windows 10 devices are serving their users perfectly well, and there is often no compelling reason for a person to replace their PC just because they can’t upgrade to Windows 11. > ℹ️ If you can afford to replace your PC with a new one, that’s excellent, and I’m not trying to discourage that! However, you can still avoid throwing away your old PC, and you should consider alternatives. Throwing away a PC or laptop - like in the trash - is a _horrible_ thing to do, because PCs contain toxic elements that are bad for the environment. In many places it might actually be illegal. Besides which, whether you want to keep and continue to use your old PC or not, _someone_ can probably make good use of it. > ️⚠️ If you do need to “throw away” your old PC, please make sure to turn it in to a recycling center for e-waste or hazardous waste center. I’d like to discuss some possible alternatives to throwing away or recycling your old PC. Things that provide much better outcomes for people and the environment! It might be that you can continue to use your PC or laptop, or someone else may be able to give it new life. Here are some options. ## Continue Using the PC Although you may be unable to upgrade to Windows 11, there are alternative operating systems that will breathe new life into your existing PC. The question you should ask first, is what do you do on your PC? The following may require Windows: * Windows-only software (like CAD drawing or other software) * Hard-core gaming On the other hand, if you use your PC entirely for things like: * Browsing the web * Writing documents * Simple spreadsheets * Web-based games in a browser Then you can probably replace Windows with an alternative and continue to be very happy with your PC. What are these “alternative operating systems”? They are all variations of Linux. If you’ve never heard of Linux, or have heard it is complicated and only for geeks, rest assured that there are some variations of Linux that are no more complex than Windows 10. ### “Friendly” Variations of Linux Some of the friendliest variations of Linux include: * Cinnamon Mint - Linux with a desktop that is very similar to Windows * Ubuntu Desktop - Linux with its own style of graphical desktop that isn’t too hard to learn if you are used to Windows There are many others, these are just a couple that I’ve used and found to be easy to install and learn. > 🛑 Before installing Linux on your PC make sure to copy all the files you want to keep onto a thumb drive or something! Installing Linux will _entirely delete your existing hard drive_ and none of your existing files will be on the PC when you are done. Once you’ve installed Linux, you’ll need software to do the things you do today. ### Browsers on Linux Linux often comes with the Firefox browser pre-installed. Other browsers that you can install include: * Chrome * Edge I am sure other browsers are available as well. Keep in mind that most modern browsers provide comparable features and let you use nearly every web site, so you may be happy with Firefox or whatever comes pre-installed with Linux. 
### Software similar to Office on Linux Finally, most people use their PC to write documents, create spreadsheets and do other things that are often done using Microsoft Office. Some alternatives to Office available on Linux include: * OneDrive - Microsoft on-line file storage and web-based versions of Word, Excel, and more * Google Docs - Google on-line file storage and web-based word processor, spreadsheet, and more * LibreOffice - Software you install on your PC that provides word processing, spreadsheets, and more. File formats are compatible with Word, Excel, and other Office tools. Other options exist, these are the ones I’ve used and find to be most common. ## Donate your PC Even if your needs can’t be met by running Linux on your old PC, or perhaps installing a new operating system just isn’t for you - please consider that there are people all over the world, including near you, that would _love_ to have access to a free computer. This might include kids, adults, or seniors in your area who can’t afford a PC (or to have their own PC). In the US, rural and urban areas are _filled_ with young people who could benefit from having a PC to do school work, learn about computers, and more. > 🛑 Before donating your PC, make sure to use the Windows 10 feature to reset the PC to factory settings. This will delete all _your_ files from the PC, ensuring that the new owner can’t access any of your information. Check with your church and community organizations to find people who may benefit from having access to a computer. ## Build a Server If you know people, or are someone, who likes to tinker with computers, there are a lot of alternative uses for an old PC or laptop. You can install Linux _server_ software on an old PC and then use that server for all sorts of fun things: * Create a file server for your photos and other media - can be done with a low-end PC that has a large hard drive * Build a Kubernetes cluster out of discarded devices - requires PCs with at least 2 CPU cores and 8 gigs of memory, though more is better Here are a couple articles with other good ideas: * Avoid the Trash Heap: 17 Creative Uses for an Old Computer * 10 Creative Things to Do With an Old Computer If you aren’t the type to tinker with computers, just ask around your family and community. It is amazing how many people do enjoy this sort of thing, and would love to have access to a free device that can be used for something other than being hazardous waste. ## Conclusion I worry that 2025 will be a bad year for e-waste and hazardous waste buildup in landfills and elsewhere around the world, as people realize that their Windows 10 PC or laptop can’t be upgraded and “needs to be replaced”. My intent in writing this post is to provide some options to consider that may breathe new life into your “old” PC. For yourself, or someone else, that computer may have many more years of productivity ahead of it.
blog.lhotka.net
October 30, 2025 at 8:47 PM
Why MAUI Blazor Hybrid
It can be challenging to choose a UI technology in today’s world. Even if you narrow it down to wanting to build “native” apps for phones, tablets, and PCs there are so many options. In the Microsoft .NET space, there are _still_ many options, including .NET MAUI, Uno, Avalonia, and others. The good news is that these are good options - Uno and Avalonia are excellent, and MAUI is coming along nicely. At this point in time, my _default_ choice is usually something called a MAUI Hybrid app, where you build your app using Blazor, and the app is hosted in MAUI so it is built as a native app for iOS, Android, Windows, and Mac. Before I get into why this is my default, I want to point out that I (personally) rarely build mobile apps that “represent the brand” of a company. Take the Marriott or Delta apps as examples - the quality of these apps and the way they work differently on iOS vs Android can literally cost these companies customers. They are a primary contact point that can irritate a customer or please them. This is not the space for MAUI Blazor Hybrid in my view. ## Common Code MAUI Blazor Hybrid is (in my opinion) for apps that need to have rich functionality, good design, and be _common across platforms_ , often including phones, tablets, and PCs. Most of my personal work is building business apps - apps that a business creates to enable their employees, vendors, partners, and sometimes even customers, to interact with important business systems and functionality. Blazor (the .NET web UI framework) turns out to be an excellent choice for building business apps. Though this is a bit of a tangent, Blazor is my go-to for modernizing (aka replacing) Windows Forms, WPF, Web Forms, MVC, Silverlight, and other “legacy” .NET app user experiences. The one thing Blazor doesn’t do by itself, is create native apps that can run on devices. It creates web sites (server hosted) or web apps (browser hosted) or a combination of the two. Which is wonderful for a lot of scenarios, but sometimes you really need things like offline functionality or access to per-platform APIs and capabilities. This is where MAUI Hybrid comes into the picture, because in this model you build your Blazor app, and that app is _hosted_ by MAUI, and therefore is a native app on each platform: iOS, Android, Windows, Mac. That means that your Blazor app is installed as a native app (therefore can run offline), and it can tap into per-platform APIs like any other native app. ## Per-Platform In most business apps there is little (or no) per-platform difference, and so the vast majority of your app is just Blazor - C#, html, css. It is common across all the native platforms, and optionally (but importantly) also the browser. When you do have per-platform differences, like needing to interact with serial or USB port devices, or arbitrary interactions with local storage/hard drives, you can do that. And if you do that with a little care, you still end up with the vast majority of your app in Blazor, with small bits of C# that are per-platform. ## End User Testing I mentioned that a MAUI Hybrid app can not only create native apps but that it can also target the browser. This is fantastic for end user testing, because it can be challenging to do testing via the Apple, Google, and Microsoft stores. Each requires app validation, on their schedule not yours, and some have limits on the numbers of test users. > In .NET 9, the ability to create a MAUI Hyrid that also targets the browser is a pre-built template. 
Previously you had to set it up yourself. What this means is that you can build your Blazor app, have your users do a lot of testing of your app via the browser, and once you are sure it is ready to go, then you can do some final testing on a per-platform basis via the stores (or whatever scheme you use to install native apps). ## Rich User Experience Blazor, with its use of html and css backed by C#, directly enables rich user experiences and high levels of interactivity. The defacto UI language is html/css after all, and we all know how effective it can be at building great experiences in browsers - as well as native apps. There is a rapidly growing and already excellent ecosystem around Blazor, with open-source and commercial UI toolkits and frameworks available that make it easy to create many different types of user experience, including Material design and others. From a developer perspective, it is nice to know that learning any of these Blazor toolsets is a skill that spans native and web development, as does Blazor itself. In some cases you’ll want to tap into per-platform capabilities as well. The MAUI Community Toolkit is available and often provides pre-existing abstractions for many per-platform needs. Some highlights include: * File system interaction * Badge/notification systems * Images * Speech to text Between the basic features of Blazor, advanced html/css, and things like the toolkit, it is pretty easy to build some amazing experiences for phones, tablets, and PCs - as well as the browser. ## Offline Usage Blazor itself can provide a level of offline app support if you build a Progressive Web App (PWA). To do this, you create a standlone Blazor WebAssembly app that includes the PWA manifest and worker job code (in JavaScript). PWAs are quite powerful and are absolutely something to consider as an option for some offline app requirements. The challenge with a PWA is that it is running in a browser (even though it _looks_ like a native app) and therefore is limited by the browser sandbox and the local operating system. For example, iOS devices place substantial limitations on what a PWA can do and how much data it can store locally. There are commercial reasons why Apple doesn’t like PWAs competing with “real apps” in its store, and the end result is that PWAs _might_ work for you, as long as you don’t need too much local storage or too many native features. MAUI Hybrid apps are actual native apps installed on the end user’s device, and so they can do anything a native app can do. Usually this means asking the end user for permission to access things like storage, location, and other services. As a smartphone user you are certainly aware of that type of request as an app is installed. The benefit then, is that if the user gives your app permission, your app can do things it couldn’t do in a PWA from inside the browser sandbox. In my experience, the most important of these things is effectively unlimited access to local storage for offline data that is required by the app. ## Conclusion This has been a high level overview of my rationale for why MAUI Blazor Hybrid is my “default start point” when thinking about building native apps for iOS, Android, Windows, and/or Mac. Can I be convinced that some other option is better for a specific set of business and technical requirements? Of course!! 
However, having a well-known and very capable option as a starting point provides a short-cut for discussing the business and technical requirements - to determine if each requirement is or isn’t already met. And in many cases, MAUI Hybrid apps offer very high developer productivity, the functionality needed by end users, and long-term maintainability.
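To put a little code behind the Per-Platform section above, here is one common pattern, sketched under my own assumptions (the app name, interface, and folder example are hypothetical): a small service whose platform-specific bits are isolated behind conditional compilation, while the Blazor UI that consumes it stays shared.

```csharp
namespace MyHybridApp.Services;

// Shared abstraction consumed by the Blazor UI.
public interface IDeviceStorage
{
  string GetAppDataFolder();
}

public class DeviceStorage : IDeviceStorage
{
  public string GetAppDataFolder()
  {
#if ANDROID || IOS || MACCATALYST || WINDOWS
    // In the MAUI host, FileSystem.AppDataDirectory resolves to the
    // per-app storage location for the current platform.
    return Microsoft.Maui.Storage.FileSystem.AppDataDirectory;
#else
    // In the browser-hosted (WebAssembly) build there is no local file
    // system, so fall back to an empty path (or register a different
    // implementation for the web host).
    return string.Empty;
#endif
  }
}
```

The key point is proportion: the per-platform code stays small and isolated, and everything else (the Blazor pages, components, and business logic) is the same across iOS, Android, Windows, Mac, and the browser.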
blog.lhotka.net
October 30, 2025 at 8:47 PM
Running Linux on My Surface Go
I have a first-generation Surface Go, the little 10” tablet Microsoft created to try and compete with the iPad. I’ll confess that I never used it a lot. I _tried_ , I really did! But it is underpowered, and I found that my Surface Pro devices were better for nearly everything. My reasoning for having a smaller tablet was that I travel quite a lot, more back then than now, and I thought having a tablet might be nicer for watching movies and that sort of thing, especially on the plane. It turns out that the Surface Pro does that too, without having to carry a second device. Even when I switched to my Surface Studio Laptop, I _still_ didn’t see the need to carry a second device - though the Surface Pro is absolutely better for traveling in my view. I’ve been saying for quite some time that I think people need to look at Linux as a way to avoid the e-waste involved in discarding their Windows 10 PCs - the ones that can’t run Windows 11. I use Linux regularly, though usually via the command line for software development, and so I thought I’d put it on my Surface Go to gain real-world experience. > I have quite a few friends and family who have Windows 10 devices that are perfectly good. Some of those folks don’t want to buy a new PC, due to financial constraints, or just because their current PC works fine. End of support for Windows 10 is a problem for them! The Surface Go is a bit trickier than most mainstream Windows 10 laptops or desktops, because it is a convertable tablet with a touch screen and specialized (rare) hardware - as compared to most of the devices in the market. So I did some reading, and used Copilot, and found a decent (if old) article on installing Linux on a Surface Go. > ⚠️ One quick warning: Surface Go was designed around Windows, and while it does work reasonably well with Linux, it isn’t as good. Scrolling is a bit laggy, and the cameras don’t have the same quality (by far). If you want to use the Surface Go as a small, lightweight laptop I think it is pretty good; if you are looking for a good _tablet_ experience you should probably just buy a new device - and donate the old one to someone who needs a basic PC. Fortunately, Linux hasn’t evolved all that much or all that rapidly, and so this article remains pretty valid even today. ## Using Ubuntu Desktop I chose to install Ubuntu, identified in the article as a Linux distro (distribution, or variant, or version) that has decent support for the Surface Go. I also chose Ubuntu because this is normally what I use for my other purposes, and so I’m familiar with it in general. However, I installed the latest Ubuntu Desktop (version 25.04), not the older version mentioned in the article. This was a good choice, because support for the Surface hardware has improved over time - though the other steps in the article remain valid. ## Download and Set Up Media The steps to get ready are: 1. Download Ubuntu Desktop - this downloads a file with a `.iso` extension 2. Download software to create a bootable flash drive based on the `.iso` file. I used software called Rufus - just be careful to avoid the flashy (spammy) download buttons, and find the actual download link text in the page 3. Get a flash drive (either new, or one you can erase) and insert it into your PC 4. Run rufus, and identify the `.iso` file and your flash drive 5. Rufus will write the data to the flash drive, and make the flash drive bootable so you can use it to install Linux on any PC 6. 
🛑 BACK UP ANY DATA on your Surface Go; in my case all my data is already backed up in OneDrive (and other places) and so I had nothing to do - but this process WILL BLANK YOUR HARD DRIVE! 🛑 ## Install Ubuntu on the Surface At this point you have a bootable flash drive and a Surface Go device, and you can do the installation. This is where the zdnet article is a bit dated - the process is smoother and simpler than it was back then, so just do the install like this: 1. 🛑 BACK UP ANY DATA on your Surface Go; in my case all my data is already backed up in OneDrive (and other places) and so I had nothing to do - but this process WILL BLANK YOUR HARD DRIVE! 🛑 2. Insert the flash drive into the Surface USB port (for the Surface Go I had to use an adapter from type C to type A) 3. Press the Windows key and type “reset” and choose the settings option to reset your PC 4. That will bring up the settings page where you can choose Advanced and reset the PC for booting from a USB device 5. What I found is that the first time I did this, my Linux boot device didn’t appear, so I rebooted to Windows and did step 4 again 6. The second time, an option was there for Linux. It had an odd name: Linpus (as described in the zdnet article) 7. Boot from “Linpus” and your PC will sit and spin for quite some time (the Surface Go is quite old and slow by modern standards), and eventually will come up with Ubuntu 8. The thing is, it is _running_ Ubuntu, but it hasn’t _installed_ Ubuntu. So go through the wizard and answer the questions - especially the wifi setup 9. Once you are on the Ubuntu (really Gnome) desktop, you’ll see an icon for _installing_ Ubuntu. Double-click that and the actual installation process will begin 10. I chose to have the installer totally reformat my hard drive, and I recommend doing that, because the Surface Go doesn’t have a huge drive to start with, and I want all of it available for my new operating system 11. Follow the rest of the installer steps and let the PC reboot 12. Once it has rebooted, you can remove the flash drive ## Installing Updates At this point you should be sitting at your new desktop. The first thing Linux will want to do is install updates, and you should let it do so. I laugh a bit, because people make fun of Windows updates, and Patch Tuesday. Yet all modern and secure operating systems need regular updates to remain functional and secure, and Linux is no exception. Whether automated or not, you should do regular (at least monthly) updates to keep Linux secure and happy. ## Installing Missing Features Immediately upon installation, Ubuntu 25.04 seems to have very good support for the Surface Go, including multi-touch on the screen and trackpad, use of the Surface Pen, speakers, and the external (physical) keyboard. What doesn’t work right away, at least what I found, are the cameras or any sort of onscreen/soft keyboard. You need to take extra steps for these. The zdnet article is helpful here. ### Getting the Cameras Working The zdnet article walks through the process to get the cameras working. I actually think the camera drivers are now just part of Ubuntu, but I did have to take steps to get them working, and even then they don’t have great quality - this is clearly an area where moving to Linux is a step backward. At times I found the process a bit confusing, but just plowed ahead figuring I could always reinstall Linux again if necessary. It did work fine in the end, no reinstall needed. 1. 
Install the Linux Surface kernel - which sounds intimidating, but is really just following some steps as documented in their GitHub repo; other stuff in the document is quite intimidating, but isn’t really relevant if all you want to do is get things running 2. That GitHub repo also has information about the various camera drivers for different Surface devices, and I found that to be a bit overwhelming; fortunately, it really amounts to just running one command 3. Make sure you also run these commands to give your Linux account permissions to use the camera 4. At this point I was able to follow instructions to run `cam` and see the cameras - including some other odd entries I igored 5. And I was able to run `qcam`, which is a command that brings up a graphical app so you can see through each camera > ⚠️ Although the cameras technically work, I am finding that a lot of apps still don’t see the cameras, and in all cases the camera quality is quite poor. ### Getting a Soft or Onscreen Keyboard Because the Surface Go is _technically_ a tablet, I expected there to be a soft or onscreen keyboard. It turns out that there is a primitive one built into Ubuntu, but it really doesn’t work very well. It is pretty, but I was unable to figure out how to get it to appear via touch, which kind of defeats the purpose (I needed my physical keyboard to get the virtual one to appear). I found an article that has some good suggestions for Linux onscreen keyboard (OSK) improvements. I used what the article calls “Method 2” to install an Extension Manager, which allowed me to install extensions for the keyboard. 1. Install the Extension Manager `sudo apt install gnome-shell-extension-manager` 2. Open the Extension Manager app 3. This is where the article fell down, because the extension they suggested doesn’t seem to exist any longer, and there are numerous other options to explore 4. I installed an extension called “Touch X” which has the ability to add an icon to the upper-right corner of the screen by which you can open the virtual keyboard at any time (it can also do a cool ripple animation when you touch the screen if you’d like) 5. I also installed “GJS OSK”, which is a replacement soft keyboard that has a lot more configurability than the built-in default; you can try both and see which you prefer ## Installing Important Apps This section is mostly editorial, because I use certain apps on a regular basis, and you might use other apps. Still, you should be aware that there are a couple ways to install apps on Ubuntu: snap and apt. The “snap” concept is specific to Ubuntu, and can be quite nice, as it installs each app into a sort of sandbox that is managed by Ubuntu. The “app store” in Ubuntu lists and installs apps via snap. The “apt” concept actually comes from Ubuntu’s parent, Debian. Since Debian and Ubuntu make up a very large percentage of the Linux install base, the `apt` command is extremely common. This is something you do from a terminal command line. Using snap is very convenient, and when it works I love it. Sometimes I find that apps installed via snap don’t have access to things like speakers, cameras, or other things. I think that’s because they run in a sandbox. I’m pretty sure there are ways to address these issues - my normal way of addressing them is to uninstall the snap and use `apt`. ### My “Important” Apps I installed apps via snap, apt, and as PWAs. #### Snap and Apt Apps Here are the apps I installed right away: 1. 
Microsoft Edge browser - because I use Edge on my Windows devices and Android phone, I want to use the same browser here to sync all my history, settings, etc. - I installed this using the default Firefox browser, then switched the default to Edge 2. Visual Studio Code - I’m a developer, and find it hard to imagine having a device without some way to write code - and I use vscode on Windows, so I’m used to it, and it works the same on Linux - I installed this as a snap via App Center 3. git - again, I’m a developer and all my stuff is on GitHub, which means using git as a primary tool - I installed this using `apt` 4. Discord - I use Discord for many reasons - talking to friends, gaming, hosting the CSLA .NET Discord server - so it is something I use all the time - I installed this as a snap via App Center 5. Thunderbird Email - I’m not sold on this yet - it seems to be the “default” email app for Linux, but feels like Outlook from 10-15 years ago, and I do hope to find something a lot more modern - I installed this as a snap via App Center 6. Copilot Desktop - I’ve been increasingly using Copilot on Windows 11, and was delighted to find that Ken VanDine wrote a Copilot shell for Linux; it is in the App Center and installs as a snap, providing the same basic experience as Copilot on Windows or Android - I installed this as a snap via App Center 7. .NET SDK - I mostly develop using .NET and Blazor, and so installing the .NET software developer kit seemed obvious; Ubuntu has a snap to install version 8, but I used apt to install version 9 #### PWA Apps Once I got Edge installed, I used it to install a number of progressive web apps (PWAs) that I use on nearly every device. A PWA is an app that is installed and updated via your browser, and is a great way to get cross-platform apps. Exactly how you install a PWA will vary from browser to browser. Some have a little icon when you are on the web page, others have an “install app” option or “install on desktop” or similar. The end result is that you get what appears to be an app icon on your phone, PC, whatever - and when you click the icon the PWA app runs in a window like any other app. 1. Elk - I use Mastodon (social media) a lot, and my preferred client is Elk - fast, clean, works great 2. Bluesky - I use Bluesky (social media) a lot, and Bluesky can be installed as a PWA 3. LinkedIn - I use LinkedIn quite a bit, and it can be installed as a PWA 4. Facebook - I still use Facebook a little, and it can be installed as a PWA #### Using Microsoft 365 Office Most people want to edit documents and maybe spreadsheets on their PC. A lot of people, including me, use Word and Excel for this purpose. Those apps aren’t available on Linux - at least not directly. Fortunately there are good alternatives, including: 1. Use https://onedrive.com to create and edit documents and spreadsheets in the browser 2. Use https://office.com to access Office online if you have a subscription 3. Install LibreOffice, an open-source office productivity suite sort of like Office I use OneDrive for a lot of personal documents, photos, etc. And I use actual Office for work. The LibreOffice idea is something I might explore at some point, but the online versions of the Office apps are usually enough for casual work - which is all I’m going to do on the little Surface Go device anyway. One feature of Edge is the ability to have multiple profiles. I use this all the time on Windows, having a personal and two work profiles.
This feature works on Linux as well, though I found it had some glitches. My default Edge profile is my personal one, so all those PWAs I installed are connected to that profile. I set up another Edge profile for my CSLA work, and it is connected to my marimer.llc email address. This is where I log into the M365 office.com apps, and I have that page installed as a PWA. When I run “Office” it opens in my work profile and I have access to all my work documents. On my personal profile I don’t use the Office apps as much, but when I do open something from my personal OneDrive, it opens in that profile. The limitation is that I can only edit documents while online, but for my purposes with this device, that’s fine. I can edit my documents and spreadsheets as necessary. ## Conclusion At this point I’m pretty happy. I don’t expect to use this little device to do any major software development, but it actually does run vscode and .NET just fine (and also Jetbrains Rider if you prefer a more powerful option). I mostly use it for browsing the web, discord, Mastodon, and Bluesky. Will I bring this with when I travel? No, because my normal Windows 11 PC does everything I want. Could I live with this as my “one device”? Well, no, but that’s because it is underpowered and physically too small. But could I live with a modern laptop running Ubuntu? Yes, I certainly could. I wouldn’t _prefer_ it, because I like full-blown Visual Studio and way too many high end Steam games. The thing is, I am finding myself leaving the Surface Go in the living room, and reaching for it to scan the socials while watching TV. Something I could have done just as well with Windows, and can now do with Linux.
blog.lhotka.net
October 30, 2025 at 8:47 PM
CSLA 2-tier Data Portal Behavior History
The CSLA data portal originally treated 2- and 3-tier differently, primarily for performance reasons. Back in the early 2000s, the data portal did not serialize the business object graph in 2-tier scenarios. That behavior still exists and can be enabled via configuration, but is not the default for the reasons discussed in this post. Passing the object graph by reference (instead of serializing it) does provide much better performance, but at the cost of being behaviorally/semantically different from 3-tier. In a 3-tier (or generally n-tier) deployment, there is at least one network hop between the client and any server, and the object graph _must be serialized_ to cross that network boundary. When different 2-tier and 3-tier behaviors existed, a lot of people did their dev work in 2-tier and then tried to switch to 3-tier. Usually they’d discover all sorts of issues in their code, because they were counting on the logical client and server using the same reference to the object graph. A variety of issues are solved by serializing the graph even in 2-tier scenarios, including: 1. Consistency with 3-tier deployment (enabling location transparency in code) 2. Preventing data binding from reacting to changes to the object graph on the logical server (nasty performance and other issues would occur) 3. Ensuring that a failure on the logical server (especially part-way through the graph) leaves the graph on the logical client in a stable/known state There are other issues as well - and ultimately those issues drove the decision (I want to say around 2006 or 2007?) to default to serializing the object graph even in 2-tier scenarios. There is a performance cost to that serialization, but having _all_ n-tier scenarios enjoy the same semantic behaviors has eliminated so many issues and support questions on the forums that I regret nothing.
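To make that semantic difference concrete, here is a tiny contrived sketch in plain C# (deliberately not CSLA code; the `Order` class and the JSON round-trip are invented purely for illustration). It shows why a failure part-way through a server-side update leaves a shared-reference graph half-changed, while a serialized copy leaves the client’s graph in a known state:

```csharp
using System;
using System.Text.Json;

// Contrived illustration - not CSLA code - of why a serialized copy behaves
// differently from a shared object reference when the "server" fails part-way.

var shared = new Order { Status = "New" };
try { UpdateOnServer(shared); } catch { /* swallow for the demo */ }
Console.WriteLine(shared.Status); // "Updating" - the client's graph was left half-changed

var clientCopy = new Order { Status = "New" };
var serverCopy = JsonSerializer.Deserialize<Order>(JsonSerializer.Serialize(clientCopy))!;
try { UpdateOnServer(serverCopy); } catch { /* swallow for the demo */ }
Console.WriteLine(clientCopy.Status); // "New" - the client's graph is still in a known state

static void UpdateOnServer(Order order)
{
    order.Status = "Updating";
    throw new InvalidOperationException("simulated failure part-way through the update");
}

public class Order
{
    public string Status { get; set; } = "";
}
```

Defaulting to the second behavior in every deployment is what makes the location transparency described above practical.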
blog.lhotka.net
October 30, 2025 at 8:47 PM
A Simple CSLA MCP Server
In a recent CSLA discussion thread, a user asked about setting up a simple CSLA Model Context Protocol (MCP) server. https://github.com/MarimerLLC/csla/discussions/4685 I’ve written a few MCP servers over the past several months with varying degrees of success. Getting the MCP protocol right is tricky (or was), and using semantic matching with vectors isn’t always the best approach, because I find it often misses the most obvious results. Recently, however, Anthropic published a C# SDK (and NuGet package) that makes it easier to create and host an MCP server. The SDK handles the MCP protocol details, so you can focus on implementing your business logic. https://github.com/modelcontextprotocol/csharp-sdk Also, I’ve been reading up on the idea of hybrid search, which combines traditional search techniques with vector-based semantic search. This approach can help improve the relevance of search results by leveraging the strengths of both methods. The code I’m going to walk through in this post can be easily adapted to any scenario, not just CSLA. In fact, the MCP server just searches and returns markdown files from a folder. To use it for any scenario, you just need to change the source files and update the descriptions of the server, tools, and parameters that are in the attributes in code. Perhaps a future enhancement for this project will be to make those dynamic so you can change them without recompiling the code. The code for this article can be found in this GitHub repository. > ℹ️ Most of the code was actually written by Claude Sonnet 4 with my collaboration. Or maybe I wrote it with the collaboration of the AI? The point is, I didn’t do much of the typing myself. Before getting into the code, I want to point out that this MCP server really is useful. Yes, the LLMs already know all about CSLA because CSLA is open source. However, the LLMs often return outdated or incorrect information. By providing a custom MCP server that searches the actual CSLA code samples and snippets, the LLM can return accurate and up-to-date information. ## The MCP Server Host The MCP server itself is a console app that uses Spectre.Console to provide a nice command-line interface. The project also references the Anthropic C# SDK and some other packages. It targets .NET 10.0, though I believe the code should work with .NET 8.0 or later. I am not going to walk through every line of code, but I will highlight the key parts. > ⚠️ The modelcontextprotocol/csharp-sdk package is evolving rapidly, so you may need to adapt to use whatever is latest when you try to build your own. Also, all the samples in their GitHub repository use static tool methods, and I do as well. At some point I hope to figure out how to use instance methods instead, because that will allow the use of dependency injection. Right now the code has a lot of `Console.WriteLine` statements that would be better handled by a logging framework. Although the project is a console app, it does use ASP.NET Core to host the MCP server. var builder = WebApplication.CreateBuilder(); builder.Services.AddMcpServer() .WithHttpTransport() .WithTools<CslaCodeTool>(); The `AddMcpServer` method adds the MCP server services to the ASP.NET Core dependency injection container. The `WithHttpTransport` method configures the server to use HTTP as the transport protocol. The `WithTools<CslaCodeTool>` method registers the `CslaCodeTool` class as a tool that can be used by the MCP server.
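Putting the host wiring together, a minimal `Program.cs` looks roughly like the sketch below. This is based on the csharp-sdk samples at the time of writing (and assumes the SDK’s ASP.NET Core hosting package plus the project’s implicit usings); the endpoint-mapping call shown here as `MapMcp` may change as the package evolves:

```csharp
// Minimal host sketch - method names follow current csharp-sdk samples and may shift.
var builder = WebApplication.CreateBuilder();

// Register the MCP server, expose it over HTTP, and register the tool class.
builder.Services.AddMcpServer()
    .WithHttpTransport()
    .WithTools<CslaCodeTool>();

var app = builder.Build();

// Map the MCP endpoint so HTTP clients (vscode, the MCP inspector, etc.) can reach it.
app.MapMcp();

app.Run();
```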
There is also a `WithStdioTransport` method that can be used to configure the server to use standard input and output as the transport protocol. This is useful if you want to run the server locally when using a locally hosted LLM client. The nice thing about using the modelcontextprotocol/csharp-sdk package is that it handles all the details of the MCP protocol for you. You just need to implement your tools and their methods. All the subtleties of the MCP protocol are handled by the SDK. ## Implementing the Tools The `CslaCodeTool` class is where the main logic of the MCP server resides. This class is decorated with the `McpServerToolType` attribute, which indicates that this class will contain MCP tool methods. [McpServerToolType] public class CslaCodeTool ### The Search Method The first tool is Search, defined by the `Search` method. This method is decorated with the `McpServerTool` attribute, which indicates that this method is an MCP tool method. The attribute also provides a description of the tool and what it will return. This description is used by the LLM to determine when to use this tool. My description here is probably a bit too short, but it seems to work okay. Any parameters for the tool method are decorated with the `Description` attribute, which provides a description of the parameter. This description is used by the LLM to understand what the parameter is for, and what kind of value to provide. [McpServerTool, Description("Searches CSLA .NET code samples and snippets for examples of how to implement code that makes use of #cslanet. Returns a JSON object with two sections: SemanticMatches (vector-based semantic similarity) and WordMatches (traditional keyword matching). Both sections are ordered by their respective scores.")] public static string Search([Description("Keywords used to match against CSLA code samples and snippets. For example, read-write property, editable root, read-only list.")]string message) #### Word Matching The original implementation (which works very well) uses only word matching. To do this, it gets a list of all the files in the target directory, and searches them for any words from the LLM’s `message` parameter that are 4 characters or longer. It counts the number of matches in each file to generate a score for that file. Here’s the code that gets the list of search terms from `message`: // Extract words that are 4 characters or longer from the message var searchWords = message .Split(new char[] { ' ', '\t', '\n', '\r', '.', ',', ';', ':', '!', '?', '(', ')', '[', ']', '{', '}', '"', '\'', '-', '_' }, StringSplitOptions.RemoveEmptyEntries) .Where(word => word.Length > 3) .Select(word => word.ToLowerInvariant()) .Distinct() .ToList(); Console.WriteLine($"[CslaCodeTool.Search] Extracted search words: [{string.Join(", ", searchWords)}]"); It then loops through each file and counts the number of matching words. The final result is sorted by score and then file name: var sortedResults = results.OrderByDescending(r => r.Score).ThenBy(r => r.FileName).ToList();
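The counting loop itself isn’t shown in the post. Here is an illustrative sketch of roughly what it might look like; the `WordMatcher` and `WordMatchResult` names and the markdown-folder assumption are mine, and the repository’s actual code may differ:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Illustrative sketch only - not the repository's exact code: score each markdown
// file by counting how many of the extracted search words it contains.
public record WordMatchResult(string FileName, int Score);

public static class WordMatcher
{
    public static List<WordMatchResult> ScoreFiles(string folder, IEnumerable<string> searchWords)
    {
        var results = new List<WordMatchResult>();
        foreach (var filePath in Directory.GetFiles(folder, "*.md"))
        {
            var text = File.ReadAllText(filePath).ToLowerInvariant();
            var score = searchWords.Count(word => text.Contains(word));
            if (score > 0)
                results.Add(new WordMatchResult(Path.GetFileName(filePath), score));
        }
        // The caller then sorts by Score (descending) and FileName, as shown above.
        return results;
    }
}
```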
#### Semantic Matching More recently I added semantic matching as well, resulting in a hybrid search approach. The search tool now returns two sets of results: one based on traditional word matching, and one based on vector-based semantic similarity. The semantic search behavior comes in two parts: indexing the source files, and then matching against the message parameter from the LLM. ##### Indexing the Source Files Indexing source files takes time and effort. To minimize startup time, the MCP server actually starts and will work without the vector data. In that case it relies on the word matching only. After a few minutes, the vector indexing will be complete and the semantic search results will be available. The indexing is done by calling a text embedding model to generate a vector representation of the text in each file. The vectors are then stored in memory along with the file name and content. Or the vectors could be stored in a database to avoid having to re-index the files each time the server is started. I’m relying on a `vectorStore` object to index each document: await vectorStore.IndexDocumentAsync(fileName, content); This `VectorStoreService` class is a simple in-memory vector store that uses Ollama to generate the embeddings: public VectorStoreService(string ollamaEndpoint = "http://localhost:11434", string modelName = "nomic-embed-text:latest") { _httpClient = new HttpClient(); _vectorStore = new Dictionary<string, DocumentEmbedding>(); _ollamaEndpoint = ollamaEndpoint; _modelName = modelName; } This could be (and probably will be) adapted to use a cloud-based embedding model instead of a local Ollama instance. Ollama is free and easy to use, but it does require a local installation. The actual embedding is created by a call to the Ollama endpoint: var response = await _httpClient.PostAsync($"{_ollamaEndpoint}/api/embeddings", content); The embedding is just a list of floating-point numbers that represent the semantic meaning of the text. This needs to be extracted from the JSON response returned by the Ollama endpoint. var responseJson = await response.Content.ReadAsStringAsync(); var result = JsonSerializer.Deserialize<JsonElement>(responseJson); if (result.TryGetProperty("embedding", out var embeddingElement)) { var embedding = embeddingElement.EnumerateArray() .Select(e => (float)e.GetDouble()) .ToArray(); return embedding; } > 👩‍🔬 All those floating-point numbers are the magic of this whole thing. I don’t understand any of the math, but they obviously represent the semantic “meaning” of the file in a way that a query can be compared against later to see if it is a good match. All those embeddings are stored in memory for later use. ##### Matching Against the Message When the `Search` method is called, it first generates an embedding for the `message` parameter using the same embedding model. It then compares that embedding to each of the document embeddings in the vector store to calculate a similarity score. All that work is delegated to the `VectorStoreService`: var semanticResults = VectorStore.SearchAsync(message, topK: 10).GetAwaiter().GetResult(); In the `VectorStoreService` class, the `SearchAsync` method generates the embedding for the query message: var queryEmbedding = await GetTextEmbeddingAsync(query); It then calculates the cosine similarity between the query embedding and each document embedding in the vector store: foreach (var doc in _vectorStore.Values) { var similarity = CosineSimilarity(queryEmbedding, doc.Embedding); results.Add(new SemanticSearchResult { FileName = doc.FileName, SimilarityScore = similarity }); } The results are then sorted by similarity score and the top K results are returned. var topResults = results .OrderByDescending(r => r.SimilarityScore) .Take(topK) .Where(r => r.SimilarityScore > 0.5f) // Filter out low similarity scores .ToList(); ##### The Final Result The final result of the `Search` method is a JSON object that contains two sections: `SemanticMatches` and `WordMatches`.
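As an aside, the `CosineSimilarity` helper used in the matching step above isn’t shown; it is just the standard dot-product formula over the two embedding vectors. A minimal sketch (my own version, which may differ from the repository’s code):

```csharp
// Standard cosine similarity: dot(a, b) / (|a| * |b|).
// Sketch of the helper referenced above; not necessarily the repo's exact code.
private static float CosineSimilarity(float[] a, float[] b)
{
    if (a.Length != b.Length)
        throw new ArgumentException("Embeddings must have the same length");

    float dot = 0f, magnitudeA = 0f, magnitudeB = 0f;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        magnitudeA += a[i] * a[i];
        magnitudeB += b[i] * b[i];
    }

    if (magnitudeA == 0f || magnitudeB == 0f)
        return 0f;

    return dot / (MathF.Sqrt(magnitudeA) * MathF.Sqrt(magnitudeB));
}
```

Back to the combined result returned by the `Search` method: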
Each section contains a list of results ordered by their respective scores. var combinedResult = new CombinedSearchResult { SemanticMatches = semanticMatches, WordMatches = sortedResults }; It is up to the calling LLM to decide which set of results to use. In the end, the LLM will use the fetch tool to retrieve the content of one or more of the files returned by the search tool. ### The Fetch Method The second tool is Fetch, defined by the `Fetch` method. This method is also decorated with the `McpServerTool` attribute, which provides a description of the tool and what it will return. [McpServerTool, Description("Fetches a specific CSLA .NET code sample or snippet by name. Returns the content of the file that can be used to properly implement code that uses #cslanet.")] public static string Fetch([Description("FileName from the search tool.")]string fileName) This method has some defensive code to prevent path traversal attacks and other things, but ultimately it just reads the content of the specified file and returns it as a string. var content = File.ReadAllText(filePath); return content; ## Hosting the MCP Server The MCP server can be hosted in a variety of ways. The simplest is to run it as a console app on your local machine. This is useful for development and testing. You can also host it in a cloud environment, such as Azure App Service or AWS Elastic Beanstalk. This allows you to make the MCP server available to other applications and services. Like most things, I am running it in a Docker container so I can choose to host it anywhere, including on my local Kubernetes cluster. For real use in your organization, you will want to ensure that the MCP server endpoint is available to all your developers from their vscode or Visual Studio environments. This might be a public IP, or one behind a VPN, or some other secure way to access it. I often use tools like Tailscale or ngrok to make local services available to remote clients. ## Testing the MCP Server Testing an MCP server isn’t as straightforward as testing a regular web API. You need an LLM client that can communicate with the MCP server using the MCP protocol. Anthropic has an npm package that can be used to test the MCP server. You can find it here: https://github.com/modelcontextprotocol/inspector This package provides a GUI or CLI tool that can be used to interact with the MCP server. You can use it to send messages to the server and see the responses. It is a great way to test and debug your MCP server. Another option is to use the MCP support built into recent vscode versions. Once you add your MCP server endpoint to your vscode settings, you can use the normal AI chat interface to ask the chat bot to interact with the MCP server. For example: call the csla-mcp-server tools to see if they work This will cause the chat bot to invoke the `Search` tool, and then the `Fetch` tool to get the content of one of the files returned by the search. Once you have the MCP server working and returning the types of results you want, add it to your vscode or Visual Studio settings so all your developers can use it. In my experience the LLM chat clients are pretty good about invoking the MCP server to determine the best way to author code that uses CSLA .NET. ## Conclusion Setting up a simple CSLA MCP server is not too difficult, especially with the help of the Anthropic C# SDK. By implementing a couple of tools to search and fetch code samples, you can provide a powerful resource for developers using CSLA .NET. 
The hybrid search approach, combining traditional word matching with vector-based semantic similarity, helps improve the relevance of search results. This makes it easier for developers to find the code samples they need. I hope this article has been helpful in understanding how to set up a simple CSLA MCP server. If you have any questions or need further assistance, feel free to reach out on the CSLA discussion forums or GitHub repository for the csla-mcp project.
blog.lhotka.net
October 30, 2025 at 8:47 PM
Unit Testing CSLA Rules With Rocks
One of the most powerful features of CSLA .NET is its business rules engine. It allows you to encapsulate validation, authorization, and other business logic in a way that is easy to manage and maintain. In CSLA, a rule is a class that implements `IBusinessRule`, `IBusinessRuleAsync`, `IAuthorizationRule`, or `IAuthorizationRuleAsync`. These interfaces define the contract for a rule, including methods for executing the rule and properties for defining the rule’s behavior. Normally a rule inherits from an existing base class that implements one of these interfaces. When you create a rule, you typically associate it with a specific property or set of properties on a business object. The rule is then executed automatically by the CSLA framework whenever the associated property or properties change. The advantage of a CSLA rule being a class is that you can unit test it in isolation. This is where the Rocks mocking framework comes in. Rocks allows you to create mock objects for your unit tests, making it easier to isolate the behavior of the rule you are testing. You can create a mock business object and set up expectations for how the rule should interact with that object. This allows you to test the rule’s behavior without having to worry about the complexities of the entire business object. In summary, the combination of CSLA’s business rules engine and the Rocks mocking framework provides a powerful way to create and test business rules in isolation, ensuring that your business logic is both robust and maintainable. All code for this article can be found in this GitHub repository in Lab 02. ## Creating a Business Rule As an example, consider a business rule that sets an `IsActive` property based on the value of a `LastOrderDate` property. If the `LastOrderDate` is within the last year, then `IsActive` should be true; otherwise, it should be false. using Csla.Core; using Csla.Rules; namespace BusinessLibrary.Rules; public class LastOrderDateRule : BusinessRule { public LastOrderDateRule(IPropertyInfo lastOrderDateProperty, IPropertyInfo isActiveProperty) : base(lastOrderDateProperty) { InputProperties.Add(lastOrderDateProperty); AffectedProperties.Add(isActiveProperty); } protected override void Execute(IRuleContext context) { var lastOrderDate = (DateTime)context.InputPropertyValues[PrimaryProperty]; var isActive = lastOrderDate > DateTime.Now.AddYears(-1); context.AddOutValue(AffectedProperties[1], isActive); } } This rule inherits from `BusinessRule`, which is a base class provided by CSLA that implements the `IBusinessRule` interface. The constructor takes two `IPropertyInfo` parameters: one for the `LastOrderDate` property and one for the `IsActive` property. The `InputProperties` collection is used to specify which properties the rule depends on, and the `AffectedProperties` collection is used to specify which properties the rule affects. The `Execute` method is where the rule’s logic is implemented. It retrieves the value of the `LastOrderDate` property from the `InputPropertyValues` dictionary, checks if it is within the last year, and then sets the value of the `IsActive` property using the `AddOutValue` method. ## Unit Testing the Business Rule Now that we have our business rule, we can create a unit test for it using the Rocks mocking framework.
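For context before getting into the test: in a business object, a rule like this is typically attached by overriding `AddBusinessRules`. Here is a rough sketch of that wiring; the `Customer` class and its property declarations are assumed purely for illustration and are not part of the lab code:

```csharp
using System;
using BusinessLibrary.Rules;
using Csla;

namespace BusinessLibrary;

public class Customer : BusinessBase<Customer>
{
    public static readonly PropertyInfo<DateTime> LastOrderDateProperty =
        RegisterProperty<DateTime>(nameof(LastOrderDate));
    public DateTime LastOrderDate
    {
        get => GetProperty(LastOrderDateProperty);
        set => SetProperty(LastOrderDateProperty, value);
    }

    public static readonly PropertyInfo<bool> IsActiveProperty =
        RegisterProperty<bool>(nameof(IsActive));
    public bool IsActive
    {
        get => GetProperty(IsActiveProperty);
        private set => LoadProperty(IsActiveProperty, value);
    }

    protected override void AddBusinessRules()
    {
        base.AddBusinessRules();
        // Re-run the rule whenever LastOrderDate changes; it writes its result to IsActive.
        BusinessRules.AddRule(new LastOrderDateRule(LastOrderDateProperty, IsActiveProperty));
    }
}
```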
First, we need to bring in a few namespaces: using BusinessLibrary.Rules; using Csla; using Csla.Configuration; using Csla.Core; using Csla.Rules; using Microsoft.Extensions.DependencyInjection; using Rocks; using System.Security.Claims; Next, we can use Rocks attributes to define the mock types we need for our test: [assembly: Rock(typeof(IPropertyInfo), BuildType.Create | BuildType.Make)] [assembly: Rock(typeof(IRuleContext), BuildType.Create | BuildType.Make)] These lines of code only need to be included once in your test project, because they are assembly-level attributes. They tell Rocks to create mock implementations of the `IPropertyInfo` and `IRuleContext` interfaces, which we will use in our unit test. Now we can create our unit test method to test the `LastOrderDateRule`. To do this, we need to arrange the necessary mock objects and set up their expectations. Then we can execute the rule and verify that it behaves as expected. The rule has a constructor that takes two `IPropertyInfo` parameters, so we need to create mock implementations of that interface. We also need to create a mock implementation of the `IRuleContext` interface, which is used to pass information to the rule when it is executed. [TestMethod] public void LastOrderDateRule_SetsIsActiveBasedOnLastOrderDate() { // Arrange var inputProperties = new Dictionary<IPropertyInfo, object>(); using var context = new RockContext(); var lastOrderPropertyExpectations = context.Create<IPropertyInfoCreateExpectations>(); lastOrderPropertyExpectations.Properties.Getters.Name() .ReturnValue("name") .ExpectedCallCount(2); var lastOrderProperty = lastOrderPropertyExpectations.Instance(); var isActiveProperty = new IPropertyInfoMakeExpectations().Instance(); var ruleContextExpectations = context.Create<IRuleContextCreateExpectations>(); ruleContextExpectations.Properties.Getters.InputPropertyValues().ReturnValue(inputProperties); ruleContextExpectations.Methods.AddOutValue(Arg.Is(isActiveProperty), true); inputProperties.Add(lastOrderProperty, new DateTime(2025, 9, 24, 18, 3, 40)); // Act var rule = new LastOrderDateRule(lastOrderProperty, isActiveProperty); (rule as IBusinessRule).Execute(ruleContextExpectations.Instance()); // Assert is automatically done by Rocks when disposing the context } Notice how the Rocks mock objects have expectations set up for their properties and methods. This allows us to verify that the rule interacts with the context as expected. This is a little different from more explicit `Assert` statements, but it is a powerful way to ensure that the rule behaves correctly. For example, notice how the `Name` property of the `lastOrderProperty` mock is expected to be called twice. If the rule does not call this property the expected number of times, the test will fail when the `context` is disposed at the end of the `using` block: lastOrderPropertyExpectations.Properties.Getters.Name() .ReturnValue("name") .ExpectedCallCount(2); This is a powerful feature of Rocks that allows you to verify the behavior of your code without having to write explicit assertions. The test creates an instance of the `LastOrderDateRule` and calls its `Execute` method, passing in the mock `IRuleContext`. The rule should set the `IsActive` property to true because the `LastOrderDate` is within the last year. When the test completes, Rocks will automatically verify that all expectations were met. If any expectations were not met, the test will fail. 
This is a simple example, but it demonstrates how you can use Rocks to unit test CSLA business rules in isolation. By creating mock objects for the dependencies of the rule, you can focus on testing the rule’s behavior without having to worry about the complexities of the entire business object. ## Conclusion CSLA’s business rules engine is a powerful feature that allows you to encapsulate business logic in a way that is easy to manage and maintain. By using the Rocks mocking framework, you can create unit tests for your business rules that isolate their behavior and ensure that they work as expected. This combination of CSLA and Rocks provides a robust and maintainable way to implement and test business logic in your applications.
blog.lhotka.net
October 30, 2025 at 8:47 PM
MCP and A2A Basics
I have been spending a lot of time lately learning about the Model Context Protocol (MCP) and Agent to Agent (A2A) protocols. And a little about a slightly older technology called the activity protocol that comes from the Microsoft Bot Framework. I’m writing this blog post mostly for myself, because writing content helps me organize my thoughts and solidify my understanding of concepts. As they say with AIs, mistakes are possible, because my understanding of all this technology is still evolving. (disclaimer: unless otherwise noted, I wrote this post myself, with my own fingers on a keyboard) ## Client-Server is Alive and Well First off, I think it is important to recognize that the activity protocol basically sits on top of REST, and so is client-server. The MCP protocol is also client-server, sitting on top of JSON-RPC. A2A _can be_ client-server, or peer-to-peer, depending on how you use it. The simplest form is client-server, with peer-to-peer providing a lot more capability, but also more complexity. ## Overall Architecture These protocols (in particular MCP and A2A) exist to enable communication between LLM “AI” agents and their environments, or other tools, or other agents. ### Activity Protocol The activity protocol is a client-server protocol that sits on top of REST. It is primarily used for communication between a user and a bot, or between bots. The protocol defines a set of RESTful APIs for sending and receiving activities, which are JSON objects that represent a message, event, or command. The activity protocol is widely used in the Microsoft Bot Framework and is supported by many bot channels, such as Microsoft Teams, Slack, and Facebook Messenger. (that previous paragraph was written by AI - but it is pretty good) ### MCP The Model Context Protocol is really a standard and flexible way to expand the older concept of LLM tool or function calling. The primary intent is to allow an LLM AI to call tools that interact with the environment, call other apps, get data from services, or do other client-server style interactions. The rate of change here is pretty staggering. The idea of an LLM being able to call functions or “tools” isn’t that old. The limitation of that approach was that these functions had to be registered with the LLM in a way that wasn’t standard across LLM tools or platforms. MCP provides a standard for registration and interaction, allowing an MCP-enabled LLM to call tools in-process (via standard IO) or remotely (via HTTP). If you dig a little into the MCP protocol, it is eerily reminiscent of COM from the 1990s (and I suspect CORBA as well). We provide the LLM “client” with an endpoint for the MCP server. The client can ask the MCP server what it does, and also for a list of tools it provides. Much like `IUnknown` in COM. Once the LLM client has the description of the server and all the tools, it can then decide when and if it should call those tools to solve problems. You might create a tool that deletes a file, or creates a file, or blinks a light on a device, or returns some data, or sends a message, or creates a record in a database. Really, the sky is the limit in terms of what you can build with MCP. ### A2A Agent to Agent (A2A) communication is a newer and more flexible protocol that (I think) has the potential to do a couple of things: 1. I could see it replacing MCP, because you can use A2A for client-server calls from an LLM client to an A2A “tool” or agent. This is often done over HTTP. 2.
It also can be used to implement bi-directional, peer-to-peer communication between agents, enabling more complex and dynamic interactions. This is often done over WebSockets or (better yet) queuing systems like RabbitMQ. ## Metadata Rules In any case, the LLM that is going to call a tool or send a message to another agent needs a way to understand the capabilities and requirements of that tool or agent. This is where metadata comes into play. Metadata provides essential information about the tool or agent, such as its name, description, input and output parameters, and more. “Metadata” in this context is human-language descriptions. Remember that the calling LLM is an AI model that is generally good with language. However, some of the metadata might also describe JSON schemas or other structured data formats to precisely define the inputs and outputs. But even that is usually surrounded by human-readable text that describes the purpose of the schema or data formats. This is where the older activity protocol falls down, because it doesn’t provide metadata like MCP or A2A. The newer protocols include the ability to provide descriptions of the service/agent, and of tool methods or messages that are exchanged. ## Authentication and Identity In all cases, these protocols aren’t terribly complex. Even the A2A peer-to-peer approach isn’t that difficult if you have an understanding of async messaging concepts and protocols. What does seem to _always_ be complex is managing authentication and identity across these interactions. There seem to be multiple layers at work here: 1. The client needs to authenticate to call the service - often with some sort of service identity represented by a token. 2. The service needs to authenticate the client, so that service token is important. 3. HOWEVER, the service also usually needs to “impersonate” or act on behalf of a user or another identity, which can be a separate token or credential. Getting these tokens, and validating them correctly, is often the hardest part of implementing these protocols. This is especially true when you are using abstract AI/LLM hosting environments. It is hard enough in code like C#, where you can see the token handling explicitly, but in many AI hosting platforms, these details are abstracted away, making it challenging to implement robust security. ## Summary The whole concept of an LLM AI calling tools, then services, and then having peer-to-peer interactions has evolved very rapidly over the past couple of years, and it is _still_ evolving very rapidly. Just this week, for example, Microsoft announced the Microsoft Agent Framework that replaces Semantic Kernel and AutoGen. And that’s just one example! What makes me feel better, though, is that at their heart, these protocols are just client-server protocols with some added layers for metadata. Or a peer-to-peer communication protocol that relies on asynchronous messaging patterns. While these frameworks (to a greater or lesser degree) have some support for authentication and token passing, that seems to be the weakest part of the tooling, and the hardest to solve in real-life implementations.
blog.lhotka.net
October 30, 2025 at 8:47 PM