Grigori Guitchounts
guitchounts.bsky.social
AI + biology, venture creation @FlagshipPioneer | Neuroscientist @Harvard | Writer, pianist, runner, painter, dilettante
Human-AI collaboration hinges on anticipating others’ next moves, yet prevailing behavior-cloning or inverse-planning approaches either guzzle data or demand heavy online computation.
October 30, 2025 at 2:30 PM
This is a pretty wild one and tbh I don’t know what to make of it. Might be total b.s. or just way over my head…
October 29, 2025 at 2:30 PM
A lot of recent papers + conversation around LLM mode-collapse… Post-training alignment of LLMs with human preference data often squeezes their output into a narrow band of familiar, “safe” answers.
October 28, 2025 at 2:36 PM
AI-driven research assistants have been held back by one-off engineering, brittle pipelines, and the absence of a shared home for the myriad models, databases, and lab instruments they need to reason with the real world.
October 27, 2025 at 2:36 PM
Current attempts to surface an LLM’s doubt typically bolt a percentage or a hedging phrase onto a single answer, hiding the richer landscape of possibilities bubbling beneath.
October 24, 2025 at 2:30 PM
Gadget search, long a manual bottleneck in hardness proofs, has limited how far complexity theorists can push inapproximability bounds.
October 23, 2025 at 11:52 PM
What can pigeons teach us about collective intelligence?
August 8, 2025 at 2:30 PM
Today’s computer input devices (mice, keyboards, and the like) juggle an awkward trade-off between mobility and bandwidth, while non-invasive BCIs have remained low-throughput and calibration-hungry.
August 7, 2025 at 2:30 PM
Urban rats thrive despite control efforts, yet their behavior in the city’s tangled, noisy habitats remains largely uncharted.
August 6, 2025 at 2:30 PM
Pretrained transformers, originally designed for natural language processing, have shown potential for generalizing to other modalities with minimal fine-tuning.
August 4, 2025 at 2:34 PM
Mapping how visual areas share or guard information has been hamstrung by experiments that probe one patch of cortex at a time with a meagre buffet of hand-picked images.
August 1, 2025 at 2:31 PM
Liquid biopsies, which analyze cell-free DNA in the bloodstream, offer a promising new approach to early cancer detection by potentially identifying malignancies without invasive procedures.
July 30, 2025 at 2:30 PM
LLMs, trained on vast amounts of text data, have demonstrated surprising capabilities such as syntax learning and code generation, leading to claims of emergence where certain abilities appear suddenly as models scale up.
July 29, 2025 at 2:30 PM
Human-AI collaborations are increasingly common, which prompts a reevaluation of how we perceive the integration of non-biological resources into our cognitive processes.
July 24, 2025 at 2:30 PM
Foundation models, which aim to uncover deeper domain understanding through sequence prediction, face challenges in demonstrating whether they truly capture underlying structures.
July 22, 2025 at 2:30 PM
LLMs typically require monolithic, end-to-end training, which is resource-intensive and inflexible.
July 20, 2025 at 2:30 PM
In neuroscience, the concept of attractors is often mistakenly viewed as a complete mechanistic explanation for neural phenomena, despite lacking verification of the necessary connectivity, dynamics, and organization.
July 18, 2025 at 2:30 PM
Unified theories of cognition aim to predict human behavior across various settings, and Johnson et al. introduce Centaur, a computational model fine-tuned on the extensive Psych-101 dataset to simulate human behavior in experiments expressed in natural language.
July 17, 2025 at 2:30 PM
Cognitive dissonance, a psychological phenomenon where individuals experience discomfort from holding conflicting beliefs, was tested in OpenAI's GPT-4o to see if it would alter its stance on figures like Putin after generating essays with different perspectives.
July 16, 2025 at 2:30 PM
Steering language models, as opposed to traditional prompting, offers a novel approach to guiding narrative generation by adjusting specific features that map to complex concepts.
July 15, 2025 at 2:30 PM
Scientific reasoning models have advanced significantly, particularly in fields like math and programming, but specialized models are needed for tasks requiring specific scientific intelligence, such as chemistry.
July 14, 2025 at 2:30 PM
Biomedical research faces challenges due to fragmented workflows and the overwhelming volume of data, necessitating innovative approaches to streamline and enhance research capabilities.
July 11, 2025 at 2:30 PM
Peter Putnam, once associated with luminaries like Einstein, developed a fascinating, yet largely incomprehensible theory of the mind.
July 10, 2025 at 2:30 PM