Fernando Zampieri
@fgzamp.bsky.social
Critical Care Physician. Opinions are personal. Data should be public.
We will probably reach interplanetary life before pivoting a data.table becomes an easy and logical task.
June 22, 2025 at 5:18 PM
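For readers who have not fought this particular battle, a minimal sketch of a round-trip pivot with data.table's melt()/dcast(); the data and column names are hypothetical, not from the original post:

```r
library(data.table)

# Hypothetical example: two labs measured at two visits, stored wide.
dt_wide <- data.table(
  id     = c(1, 2),
  hb_v1  = c(12.1, 10.8), hb_v2  = c(11.5, 11.0),
  cre_v1 = c(0.9, 1.4),   cre_v2 = c(1.1, 1.3)
)

# Wide -> long: one pattern per measurement, one value column per pattern.
dt_long <- melt(
  dt_wide,
  id.vars       = "id",
  measure.vars  = patterns("^hb", "^cre"),
  value.name    = c("hb", "cre"),
  variable.name = "visit"
)

# Long -> wide again, through dcast()'s formula interface.
dt_back <- dcast(dt_long, id ~ visit, value.var = c("hb", "cre"))
```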
It also depends on the time of day and how busy the servers are: you can get a great response followed by an awful one, and, of course, it matters whether you are using the free or paid version.
June 16, 2025 at 4:42 PM
Finally, acknowledge that LLMs helped you write the analysis code in your paper.
June 16, 2025 at 4:25 PM
Once your analysis looks good, you should:
1. Make sure you understand what's happening.
2. Test, test, test (a small sketch of what this can look like follows below).
3. Ask for a critical review from one or two LLMs. Go back if needed. Confirm with a statistician.
June 16, 2025 at 4:25 PM
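To make step 2 concrete, a minimal sketch of the kind of sanity checks one can run on LLM-written analysis code, assuming the hypothetical data frame df with binary outcome and intervention columns described in the prompt example further down this thread:

```r
# Basic data checks before trusting any model output.
stopifnot(all(df$outcome %in% c("alive", "dead")))
stopifnot(all(df$intervention %in% c(0, 1)))

# Refit the unadjusted model independently of the LLM's code.
fit <- glm(I(outcome == "dead") ~ intervention, data = df, family = binomial)

# The model OR must match the OR computed by hand from the 2x2 table.
tab <- table(df$intervention, df$outcome == "dead")
or_by_hand <- (tab["1", "TRUE"] / tab["1", "FALSE"]) /
              (tab["0", "TRUE"] / tab["0", "FALSE"])
stopifnot(isTRUE(all.equal(unname(exp(coef(fit)["intervention"])), or_by_hand)))
```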
b. Try another LLM. Gemini 2.5 is good, and can also write artifacts (its version is called Canvas). You will, of course, need to provide all the context to it. Just pasting your R code and saying "Fix this" won't help, unless it is a simple issue like missing brackets.
June 16, 2025 at 4:25 PM
a. Open a new conversation and project and supply the broken code. Point out where it seems to be breaking and ask for a fix. This may help. (+)
June 16, 2025 at 4:25 PM
5. Using a single LLM. While Claude is great, it sometimes derails and becomes blind to its own mistakes. The first thing you should try is to fix it yourself, but sometimes that's hard. You can then proceed with two approaches: (+)
June 16, 2025 at 4:25 PM
4. Not creating a writing style. This can be done easily in Claude. You can even add papers you have written that are open access, so Claude knows what you usually write about. (+)
June 16, 2025 at 4:25 PM
I only move to vibe coding when I am sure Claude understood what I am trying to do and I understood what it did on the first pass. It needs to grasp the idea of the project and my coding style, and I need to understand what's happening.
June 16, 2025 at 4:25 PM
3. Failing to provide context. When I start coding with Claude, I usually create a project, upload the code, and the first thing I do is ask Claude whether it understands what I am trying to accomplish with that code. Then, if it does, I ask it to clean the code up a bit. (+)
June 16, 2025 at 4:25 PM
This avoids a lot of "That's fantastic!" or "Good catch!" replies.
June 16, 2025 at 4:25 PM
For example, my default instructions for Claude include: "Do not flatter me. If I am wrong, say I am wrong. If I try to correct you and you still think you are right, debate. Do not pay me compliments. Provide direct answers whenever possible." (+)
June 16, 2025 at 4:25 PM
2. Failing to tweak the LLM to taste. In Claude, you can create projects where you add files for context and write down what you plan to achieve and how Claude should behave. In the global Claude settings, you can also provide a general guide to how you want replies phrased. (+)
June 16, 2025 at 4:25 PM
...Code should use base R and {marginaleffects}. Code should be as succinct as possible. Please confirm you understood before proceeding. Also, please make sure the code uses dead as the outcome event, not as the reference level". This will produce a much more useful output. (+)
June 16, 2025 at 4:25 PM
...I need to run a logistic regression and extract the effect as an OR and an RD. In addition, I also need a model adjusted for covariates X, Y, and Z, all continuous, and then the marginal effects for that model. Finally, I need a plot comparing the effects from the adjusted and unadjusted models...
June 16, 2025 at 4:25 PM
Consider: "Hi Claude, we will be working with R today and I need artifact for a code I plan to run. I have a data frame loaded with outcome labelled as outcome (binary, alive or dead) and intervention (binary, 0 for control and 1 for intervention)...
June 16, 2025 at 4:25 PM
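For illustration, a minimal sketch of what the code answering that prompt (quoted across the three posts above) might look like, assuming a hypothetical data frame df with columns outcome, intervention, and the covariates x, y, and z, and current {marginaleffects} syntax:

```r
library(marginaleffects)

# Code "dead" as the event so the model does not treat it as the reference.
df$dead <- as.integer(df$outcome == "dead")

# Unadjusted and covariate-adjusted logistic models.
m0 <- glm(dead ~ intervention, data = df, family = binomial)
m1 <- glm(dead ~ intervention + x + y + z, data = df, family = binomial)

# Odds ratios from the model coefficients.
or_unadj <- exp(coef(m0)["intervention"])
or_adj   <- exp(coef(m1)["intervention"])

# Average risk differences: predicted risk at intervention = 1 minus at 0.
rd_unadj <- avg_comparisons(m0, variables = list(intervention = c(0, 1)))
rd_adj   <- avg_comparisons(m1, variables = list(intervention = c(0, 1)))

# Base R plot comparing the unadjusted and adjusted risk differences.
est <- c(Unadjusted = rd_unadj$estimate, Adjusted = rd_adj$estimate)
lo  <- c(rd_unadj$conf.low, rd_adj$conf.low)
hi  <- c(rd_unadj$conf.high, rd_adj$conf.high)
plot(1:2, est, xlim = c(0.5, 2.5), ylim = range(lo, hi), xaxt = "n",
     xlab = "", ylab = "Risk difference (dead)", pch = 19)
axis(1, at = 1:2, labels = names(est))
arrows(1:2, lo, 1:2, hi, angle = 90, code = 3, length = 0.05)
abline(h = 0, lty = 2)
```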
1. Believing they are talking to Alexa or Siri. This is the most common one. Many people write short, minimal prompts as if they were asking Alexa to play a song. For example, people will ask Claude for the code for a glm, and Claude will give back generic code, if anything. (+)
June 16, 2025 at 4:25 PM
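For contrast, the kind of generic reply a bare "give me the code for a glm" prompt tends to produce (illustrative only; the placeholder names are made up):

```r
# Generic template with placeholder names the user still has to adapt.
model <- glm(y ~ x1 + x2, data = mydata, family = binomial)
summary(model)
```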
Agreed. I guess we didn't try hard enough to further validate the concept. The last 10 years have failed to deliver the next logical step forward: validating it prospectively inside a trial. The longer it takes, the more skeptical people will be. Adding more attempts to the mix is not helpful, I guess...
June 15, 2025 at 10:31 PM
That seems like a lot of legwork to say "more evidence is needed", heh?
February 13, 2025 at 7:34 AM
So nice to hear from you again, Lars!
February 12, 2025 at 5:41 AM