Vitalii Kleshchevnikov, PhD
vitaliikl.bsky.social
Researcher @bayraktar_lab @teichlab @steglelab.bsky.social @sangerinstitute.bsky.social | Using models & AI to study cells, cell circuits & brains 🧠 | #SingleCell+spatial | 🌍+🇺🇦
I tried voice mode a few times and found it quite unusable for anything complex.

I mean time spent writing, reading, and responding.
November 6, 2025 at 2:17 AM
Very exciting future to come.

9/n
November 5, 2025 at 2:14 AM
Now my journey is transitioning to mapping out how TF-CRE-GRN dynamics underpin how cells know what to do (the cell plasticity landscape), their collective roles/functions/goals within the organism, and what models+data are needed to program cells and tissues to design new therapies.

8/n
November 5, 2025 at 2:14 AM
In addition to fascination with multicellular life and how brains become you, a big part of what motivates me to drive these challenging tasks forward is gaining the understanding needed to accelerate medicine - the last few months at Relation helped make this more practical.

7/n
November 5, 2025 at 2:13 AM
- working towards decoding the DNA syntax of human organogenesis: how transcription factors interact with each other & with regulatory DNA elements (cell2state, ongoing work, @steglelab.bsky.social @bayraktarlab.bsky.social @teichlab.bsky.social @mhaniffa.bsky.social @leopoldparts.bsky.social)
6/n
November 5, 2025 at 2:12 AM
This journey started with learning how proteins use peptide motifs to interact (@epetsalaki.bsky.social), building modeling foundation with @leopoldparts.bsky.social, then comprehensively mapping spatial tissue architecture (cell2location @bayraktarlab.bsky.social @steglelab.bsky.social )
5/n
November 5, 2025 at 2:09 AM
- as well as how none of that is possible without collaborative and open science.

4/n
November 5, 2025 at 2:04 AM
- learning and embracing how to understand cells with large-scale multimodal cell atlases (towards the whole organism), big biophysical AI/ML models (theory+data guide modelling and questions), and a data-driven approach to discovering organising principles (catalogue, then formalise)

3/n
November 5, 2025 at 2:03 AM
My past 8 years at the Wellcome Sanger Institute & EMBL-EBI (Bayraktar, Stegle, Teichmann, Parts, Haniffa, Petsalaki labs) have been both productive and formative

2/n
November 5, 2025 at 2:02 AM
That internal use can be covered by internal regulations and RCA/MTA as well as GDPR - and could already be quite useful, especially for large, already-collaborative organisations such as EMBL, Sanger, and the Crick.

Re purpose - I would suggest reading the full thread as it communicates examples.
November 3, 2025 at 1:37 AM
Do you mean data protection/GDPR or do you mean legal regulation of how scientists interact (eg RCA/MTA)?

This full thread is more about the vision rather than implementation - but I think staged implementation with parameter sharing in specific institutions/labs could help test this.
November 3, 2025 at 1:35 AM
There is also a risk that, by not building an AI system that helps the researcher community and mandating its use, unethical use of generative AI will break the current system - leading to AI use bans, people only trusting papers from those they know, and a halt to discovery and medical progress.
November 2, 2025 at 6:57 PM
So if you use someone’s ideas suggested by the system - the system would show funders who contributed. You also still need to be the best person for the job and capable of delivering the grant.

It is possible to design the fine-tuning steps in a way that allows attributing credit.
November 2, 2025 at 6:50 PM
Staged rollout could help understand how original ideas from one chat are used for subsequent suggestions.

bsky.app/profile/vita...

Funders will be aware that the grant was written with help from this co-pilot system and the system will link related grants.
This can be first tested with collections of labs that already collaborate, or within companies - e.g. having different LoRA fine-tuning weights for specified sets of users.

As it works more as intended, it can be rolled out globally.

It could also be useful to do an RCT to test effectiveness.
November 2, 2025 at 6:47 PM
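The per-group LoRA idea above can be sketched in a few lines. This is a minimal, purely illustrative toy (the group names, routing table, and tiny matrices are all assumptions, not any real deployment): a shared base weight matrix W is specialised per collaborating group via the standard low-rank update W + (alpha/r) * B @ A, so each group's adapter stays separate while the base model is shared.

```python
# Toy sketch (hypothetical names throughout): route each user to their
# group's LoRA adapter and apply the low-rank update to a shared base weight.

def matmul(X, Y):
    """Plain-Python matrix multiply, enough for this sketch."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def effective_weight(W, adapter, alpha=1.0):
    """LoRA update: W_eff = W + (alpha / r) * (B @ A)."""
    B, A = adapter          # B is d x r, A is r x k
    r = len(A)              # LoRA rank
    delta = matmul(B, A)    # low-rank delta, d x k
    return [[w + (alpha / r) * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Hypothetical routing table: each set of users gets its own adapter pair.
adapters = {
    "lab_consortium_A": ([[1.0], [0.0]], [[0.5, 0.0]]),  # B (2x1), A (1x2)
    "company_B":        ([[0.0], [1.0]], [[0.0, 0.5]]),
}
user_group = {"alice": "lab_consortium_A", "bob": "company_B"}

W_base = [[1.0, 0.0], [0.0, 1.0]]  # shared base weights (identity, for clarity)

def weights_for(user):
    """Resolve a user to their group's adapter and return adapted weights."""
    return effective_weight(W_base, adapters[user_group[user]])
```

In practice this per-group routing would sit on top of a real fine-tuning stack rather than hand-rolled matrices; the point is only that adapter weights, not the base model, carry each group's contribution - which is also what would make staged rollout and credit attribution tractable.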
This system will have ethical use constraints by design - because it will be designed as a co-pilot for research community. Someone writing grants by getting ChatGPT to generate most of the proposal will do worse than someone who uses our community co-pilot to help shape the project.
November 2, 2025 at 6:40 PM
But it doesn’t understand the full context of what I am doing: to get good results I need to write very detailed prompts that give whatever context I think is necessary. It also doesn’t have useful templates for more structured discussions. And I don’t want OpenAI to train a public model on my chats.
November 2, 2025 at 6:36 PM
Early on I was quite critical of ChatGPT but now it massively improves my ability to get things done. There are things it’s not good at but it’s more helpful than not and probably saved me many weeks and also helped refine many ideas - in addition to support with admin/life tasks.
November 2, 2025 at 6:31 PM
E.g. how do you get it to compare your discussions with discussions by others? How do you get it to walk people through refining their project to work better with their strengths and interests compared to those of other users? How do you prevent sharing of details that are unreasonable to share?
November 2, 2025 at 6:23 PM
Getting enough people to join could also be an issue - but I think if it works better than ChatGPT at helping you with research - people will switch.

This needs a smaller-scale pilot (maybe a collaboration with OpenAI, Anthropic or DeepMind) to understand how to create rules and constraints.
November 2, 2025 at 6:20 PM
Who pays for this? To do it well you need big money. The system needs to be really well designed to actually succeed on each point.

Big funders have a record of getting people to adopt new practices - so mass adoption could be less of an issue.

But nobody will use a tool that doesn’t work.
November 2, 2025 at 6:16 PM
Importantly, as an executive assistant, this AI can have curated templates for guiding people through common tasks (especially admin) - helping people create and submit more complete and clear requests to various support teams.
This would not replace those teams, but it would give researchers more time.
24/n
November 2, 2025 at 6:10 PM