Ron
@cnopius.bsky.social
Extending LLM Capabilities with Model Context Protocol, Part 4
From Laptop to Production: If you’ve read through the previous posts (Part 1, Part 2, Part 3) on this topic, you know that my motivation to create my own MCP was inspired by my desire to generate accurate guitar tabs–something Claude could not do reliably. In the previous post I walked through the development process, to the point where I was able to get the Tab Generator working well with Claude, but only on my desktop.
rfischer.com
January 10, 2026 at 1:51 AM
Extending LLM Capabilities with Model Context Protocol, Part 3
MCPs solve real problems—but they are far from perfect. In the first post on Model Context Protocol I laid out the general purpose and value of MCPs, along with some issues you might encounter. In the second post I showed a useful example and focused on the utility, current immature functionality, and likely future of MCPs. This time we'll focus on the development side: the choices that shaped the Tab Generator, the unexpected problems that emerged, and what I learned about making MCPs that actually work with LLMs. 
rfischer.com
December 12, 2025 at 4:22 PM
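These MCP posts describe building a Tab Generator server that Claude can call. Purely as an illustration of the plumbing involved (not the author's actual code), here is a minimal sketch of a tool-exposing MCP server, assuming the official MCP Python SDK's FastMCP helper; the render_tab tool and its crude alignment logic are hypothetical stand-ins for the real Tab Generator.

```python
# Hypothetical sketch of an MCP server exposing one tool, using the
# MCP Python SDK's FastMCP helper. Not the author's Tab Generator.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("tab-generator")

@mcp.tool()
def render_tab(notes: list[str]) -> str:
    """Render note tokens like 'e|3' or 'B|0' as an aligned six-string tab."""
    strings = ["e", "B", "G", "D", "A", "E"]
    columns = {s: [] for s in strings}
    for token in notes:
        string_name, fret = token.split("|")
        for s in strings:
            # Pad every column to the same width so the strings stay aligned.
            columns[s].append(fret.rjust(2, "-") if s == string_name else "--")
    return "\n".join(s + "|" + "-".join(columns[s]) + "-|" for s in strings)

if __name__ == "__main__":
    # FastMCP defaults to the stdio transport, which is what a local
    # desktop client can launch directly.
    mcp.run()
```

A local stdio server like this matches the "works on my desktop" stage the Part 4 post picks up from; moving it to production is the subject of that post.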
Extending LLM Capabilities with Model Context Protocol, Part 2
In Part 1, I showed how MCPs can solve fundamental LLM limitations, and how the interaction between LLMs and MCPs can still fail in subtle ways. Now I want to look at the practical reality: What's it actually like to use MCPs today? What can they do? And what should you expect when you try them? TL;DR: There are three themes that I will touch on consistently throughout this post:
rfischer.com
October 16, 2025 at 11:56 PM
Extending LLM Capabilities with Model Context Protocol, Part 1
The Guitar Tab Problem: I wanted to generate guitar tabs to help me study various guitar techniques. My LLM (Claude by Anthropic) had plenty of suggestions and could create tabs for hammer-ons, walk downs, double stops, and scales. Within minutes, I had pages of practice material that seemed musical and well-explained. But there were issues. Sometimes the tabs were fine, sometimes there were small alignment issues, sometimes they were completely unusable.
rfischer.com
October 8, 2025 at 7:22 AM
LLMs Don’t Work the Way You Think They Do, Part 2
In the previous post I shared a story about my attempts to produce guitar tabs with Claude, Anthropic’s LLM. My initial pleasure in generating dozens of practice tabs gave way to frustration as I tried to convince Claude to fix existing errors and avoid the same mistakes when generating new tabs. But my attempts to guide, instruct, or even command Claude to do as I wanted were doomed to fail.
rfischer.com
September 21, 2025 at 10:11 PM
LLMs Don’t Work the Way You Think They Do, Part 1
The Power of LLMs: I wanted to learn some particular guitar techniques, and was pleased to find that Claude, Anthropic’s LLM, could generate tabs for a variety of techniques and styles, with many options. I could specify particular areas I wanted to practice, such as hammer-ons, walk downs, double stops, and scales, or ask it to suggest a course of study and generate relevant materials.
rfischer.com
September 12, 2025 at 1:30 AM
Are LLMs Replacements for Developers or Productivity Tools? (Part 4)
In the previous posts (Part 1, Part 2, Part 3) I followed a structured approach suggested by Harper Reed to generate a greenfield (completely new) application. My secondary goal was to generate a useful application that could extract statistics from published PDF papers. But the primary goal was to determine whether the hype is true and LLMs can replace professional software engineers.
rfischer.com
July 11, 2025 at 12:14 AM
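For readers skimming this series, the target application pulls statistical values out of published papers. As a rough illustration of that task (not the code the LLMs actually generated), a minimal sketch might extract text from a PDF and pattern-match common statistical report formats; pdfplumber and the regular expression below are my assumptions.

```python
# Hypothetical sketch: scan text extracted from a PDF for statistic-like
# strings (p-values, correlations, sample sizes). Illustrative only.
import re
import pdfplumber  # assumed third-party dependency for PDF text extraction

STAT_PATTERN = re.compile(
    r"(p\s*[<=>]\s*0?\.\d+|r\s*=\s*-?0?\.\d+|n\s*=\s*\d+)", re.IGNORECASE
)

def extract_statistics(pdf_path: str) -> list[str]:
    """Return statistic-like strings found in the PDF's extracted text."""
    hits = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            text = page.extract_text() or ""
            hits.extend(STAT_PATTERN.findall(text))
    return hits

# Example with a hypothetical file name:
# print(extract_statistics("study.pdf"))
```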
Are LLMs Replacements for Developers or Productivity Tools? (Part 3)
This is the third post in a series exploring how well large language models (LLMs) can write real-world software. In Part 1, I explained the motivation behind this investigation and outlined the development process I would follow (inspired by Harper Reed). My application of choice: a program that automatically extracts statistical data from academic PDF papers. In Part 2…
rfischer.com
July 1, 2025 at 4:06 AM
Are LLMs Replacements for Developers or Productivity Tools? (Part 2)
The hype surrounding LLMs suggests that professional developers may soon be obsolete—that tools like ChatGPT and Anthropic’s Claude can now handle all your coding needs with just a few prompts. Obviously, that’s an exaggeration. But what is the real state of things? How good are LLMs at writing code today? Can they truly replace professional engineers, or are they better thought of as productivity tools?
rfischer.com
June 18, 2025 at 9:46 PM
Are LLMs Replacements for Developers or Productivity Tools? (Part 1)
Introduction: For years, each new release of a Large Language Model (LLM) has been accompanied by waves of hype—particularly around the claim that these models can write software. Headlines declare that programmers are obsolete and that AI will be generating all code within a few years. Most articles eventually back off from their attention-grabbing titles, but the core questions remain: …
rfischer.com
June 8, 2025 at 4:16 PM
Chain of Thought, Part 3
In the past two posts I have been digging deep into the Chain of Thought (CoT) prompting technique for improving responses from LLMs. Originally it was devised by LLM users seeking to improve the results returned from models by convincing the models to mimic a structured approach to problem solving. This approach works very well on logic-centric problems and adds some value across a range of queries.
rfischer.com
April 2, 2025 at 9:18 PM
Chain Of Thought, Part 2
As I wrote in the first segment of this blog, Chain of Thought (CoT) was originally a human prompting technique that proved so useful that it has been integrated into modern LLMs. Inspiration for algorithms has come from humans, animals, nature, and other sources in the past. Scientists and engineers studied various approaches, techniques, and examples they encountered, then determined how to best approximate the functionality in their favorite programming language. 
rfischer.com
March 18, 2025 at 2:26 AM
Chain of Thought, Part 1
Chain-of-Thought (CoT) prompting has emerged as a useful technique for improving the query results from Large Language Models (LLMs or simply ‘models’). Originally, humans arranged their questions in CoT form to coax better answers from LLMs. The results were so often positive that the teams creating LLMs took notice and began training this technique directly into their models. That an approach initially created by and for humans is now used to train LLMs is interesting in a number of ways.
rfischer.com
March 7, 2025 at 8:02 PM
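As a toy illustration of the prompting technique these posts discuss, here is what the difference between a plain prompt and a Chain-of-Thought prompt might look like; the wording and example question are mine, not taken from the posts.

```python
# Toy illustration of Chain-of-Thought (CoT) prompting: the same question,
# once asked plainly and once with an explicit request to reason step by step.
question = (
    "A set of guitar strings costs $12 and is discounted 25% this week. "
    "How much do three sets cost?"
)

plain_prompt = question

cot_prompt = (
    question
    + " Think step by step: first work out the discounted price of one set, "
    "then multiply by three, and only then state the final answer."
)

# Either string would be sent as the user message to an LLM; the CoT version
# nudges the model to produce intermediate reasoning before the answer.
print(plain_prompt)
print(cot_prompt)
```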
Establishing Your Engineering Culture, Part 2
A Positive Culture: In my last post I began discussing the idea of purposefully building a strong, positive engineering culture. I pointed out many roadblocks; improving a culture is not easy. So why bother? Why Bother? Investing time and effort into creating and sustaining a great engineering culture is not just a “nice-to-have” – it’s essential for the success of your team and company.
rfischer.com
January 27, 2025 at 10:45 AM