Srividya Pattisapu
@drsissyfuss.bsky.social
PhD Student, Stony Brook University | Neuroscience, Science Communication.

Address all correspondence to Dr. SissyFuss, Valley of Despair, Dunning-Kruger Curve. Waiting for the boulder to roll back down.
But, well, there's nothing else I'd rather do. : )
July 14, 2025 at 3:09 PM
Sometimes after lab meets (my PI and lab mates have math/physics backgrounds), I call it a success if I manage to recognise all the symbols in the equations; forget understanding them.
July 14, 2025 at 3:05 PM
I can't tell you how strongly I empathise with what you feel. I went to med school, and then somehow decided that a comp neuro PhD was the logical next step.
July 14, 2025 at 3:05 PM
4/? The method requires all trials to be the same length, though. Time warping is a common way to tackle this, but what do people do if their data has, for example, oscillation-locked activity? Trimming does not align the task events, even if it preserves the oscillation structure.
May 6, 2025 at 3:07 AM
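[Editor's note] A minimal numpy sketch of the trade-off described in 4/?, with made-up trial lengths and a synthetic 8 Hz oscillation (nothing here is taken from the paper under discussion): linear warping equalises trial length but distorts the oscillation's frequency, while trimming preserves the oscillation but leaves task events misaligned.

```python
import numpy as np

dt = 0.01                                # 10 ms bins (hypothetical)
trial_lengths = [180, 200, 230]          # unequal trial lengths, in bins

# Each trial carries an ongoing 8 Hz oscillation.
trials = [np.sin(2 * np.pi * 8.0 * np.arange(n) * dt) for n in trial_lengths]

# Option 1: linear time warping -- resample every trial onto a common
# number of bins. Lengths now match, but the oscillation is stretched or
# compressed, so its apparent frequency differs across trials.
n_common = 200
warped = [np.interp(np.linspace(0, len(x) - 1, n_common), np.arange(len(x)), x)
          for x in trials]

# Option 2: trimming -- cut every trial to the shortest length. The
# oscillation keeps its true frequency, but task events that occurred at
# different times across trials no longer line up bin-for-bin.
n_min = min(trial_lengths)
trimmed = [x[:n_min] for x in trials]

print(np.stack(warped).shape, np.stack(trimmed).shape)  # (3, 200) (3, 180)
```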
3/? This paper, from what I understand, can help with this by giving an interpretable look at the components themselves, not just the reconstructed data. But from what I understand, TCA also serves a similar purpose. Why isn't it more popularly used?
May 6, 2025 at 3:07 AM
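[Editor's note] For reference, a rough sketch of what TCA amounts to in code, assuming the tensorly library (the rank, shapes, and variable names are illustrative, not from the paper): a CP decomposition of the neuron × time × trial tensor, which yields a separate neuron, temporal, and trial factor per component and so keeps the components themselves interpretable.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical data tensor: 50 neurons x 200 time bins x 80 trials.
rng = np.random.default_rng(0)
data = tl.tensor(rng.standard_normal((50, 200, 80)))

# TCA is a CP (PARAFAC) decomposition of this tensor. Each of the
# `rank` components is an outer product of three vectors: which neurons
# participate, their within-trial time course, and how the component
# waxes and wanes across trials.
weights, factors = parafac(data, rank=5)
neuron_factors, time_factors, trial_factors = factors
print(neuron_factors.shape, time_factors.shape, trial_factors.shape)
# -> (50, 5) (200, 5) (80, 5)
```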
2/? Sometimes the interesting aspect of the data is not in the component with the highest variance explained (motor variability usually dominates, while cognitive variability, for example, sits in the smaller-variance components), and there is no principled way to pick the components that matter for your task.
May 6, 2025 at 3:06 AM
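[Editor's note] A toy numpy illustration of the point in 2/? (the "motor" and "cognitive" labels and all the numbers are invented for the example): a high-variance motor-like latent and a low-variance cognitive-like latent are mixed into a population, and PCA's variance ranking lets the motor-like signal dominate while giving no principled way to single out the tiny-variance cognitive one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 4000

# Toy population: a high-variance "motor-like" latent and a
# low-variance "cognitive-like" latent, each expressed through its own
# random pattern across neurons, plus isotropic noise.
motor = 10.0 * rng.standard_normal(n_samples)
cognitive = 0.5 * rng.standard_normal(n_samples)
X = (np.outer(rng.standard_normal(n_neurons), motor)
     + np.outer(rng.standard_normal(n_neurons), cognitive)
     + 0.3 * rng.standard_normal((n_neurons, n_samples)))

# PCA via SVD of the mean-centered data: components are ordered purely
# by variance explained, so the motor-like signal dominates PC1 and the
# cognitive-like signal accounts for only a sliver of the variance.
Xc = X - X.mean(axis=1, keepdims=True)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
print(s**2 / np.sum(s**2))  # variance explained per component
```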
1/? It always left me uncomfortable that we smush the time and trial axes of the neuron*time*trial data tensor into one for some analyses. We thought this paper offered an elegant, interpretable (and perhaps more importantly, easy to execute!) way to overcome this.
May 6, 2025 at 3:05 AM
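[Editor's note] For anyone unfamiliar with the "smushing" in 1/?, this is all it is in numpy (shapes are made up): the neuron × time × trial tensor is unfolded into a neuron × (time·trial) matrix before a matrix factorization such as PCA, which is where within-trial dynamics and across-trial variability get mixed onto a single axis.

```python
import numpy as np

# Hypothetical tensor: 50 neurons x 200 time bins x 80 trials.
rng = np.random.default_rng(0)
n_neurons, n_time, n_trials = 50, 200, 80
data = rng.standard_normal((n_neurons, n_time, n_trials))

# The "smush": concatenate the time and trial axes into one long axis
# of length n_time * n_trials. Any matrix method applied to this view
# (PCA, factor analysis, ...) can no longer distinguish variability
# within a trial from variability across trials.
flattened = data.reshape(n_neurons, n_time * n_trials)
print(flattened.shape)  # (50, 16000)
```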