Ragavan Thurairatnam 🛸 NeurIPS
ragavan.bsky.social
Deep learning since 2012. Leading a new AI lab at Jack Dorsey's Block
Can you share some of those bad code examples? What model did you use?
May 18, 2025 at 3:18 AM
Reminds me of this
May 17, 2025 at 5:56 AM
From what I've heard, viral infections can sometimes actually make allergies or the immune system worse. It's possible some are beneficial, but I'm not aware of any (I'm also not an expert). As an alternative to viruses, you can also get tons of exposure to bacteria, fungi, etc. for the hygiene hypothesis
December 24, 2024 at 2:23 AM
I feel like in old-school TensorFlow 1.x it wouldn't compute the common graph nodes twice. Don't know about PyTorch though
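A toy sketch of the idea (my own illustration, not actual TF 1.x internals): a static graph builder can cache nodes by (op, inputs) so a common subexpression is only built once, whereas eager execution naturally recomputes it.

```python
# Hypothetical toy graph builder illustrating common-subexpression
# caching, roughly the behavior I'd expect from a static graph.
class Graph:
    def __init__(self):
        self._cache = {}        # (op, inputs) -> node
        self.compute_count = 0  # how many unique nodes were built

    def node(self, op, *inputs):
        key = (op, inputs)
        if key not in self._cache:
            self._cache[key] = key  # build a new unique node
            self.compute_count += 1
        return self._cache[key]

g = Graph()
a = g.node("const", 1)
b = g.node("const", 2)
s1 = g.node("add", a, b)
s2 = g.node("add", a, b)  # same subexpression: reused, not rebuilt
assert s1 is s2
assert g.compute_count == 3  # const(1), const(2), add — not 4
```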
December 19, 2024 at 5:43 PM
At NeurIPS this year, no one seemed to bat an eye at me wearing an N95 all day. I actually feel like the prevalence of N95s was higher at the conference than in the general population!
December 19, 2024 at 5:00 PM
Human-computer interaction already sounds risque enough!
December 19, 2024 at 1:13 PM
Oh nevermind, I misread your post. 90 percent of discussions, not people. My bad
December 18, 2024 at 2:43 PM
Who are you talking to such that 90% say CICO as a simple statement isn't true? From my sampling, almost everyone says it's true. There's a small percentage of doctors who have a nuanced point of view (that it's technically true, but not a good strategy, which is different than saying it's false).
December 18, 2024 at 2:15 PM
I heard visa problems for many :(
December 14, 2024 at 6:45 AM
How about Neonatal Insane Clown Unit?

Scary in multiple ways.
December 8, 2024 at 10:01 PM
I imagine it's worse for you, probably getting harassed a lot?
December 8, 2024 at 1:34 PM
Is it even true for base models 100% of the time? I can imagine with certain architectures/training, you don't actually get a pure statistical representation of the words; you get some learned function/hypothesis that is much simpler/compressed
December 5, 2024 at 4:32 PM
"I must be a robot. Why else would human women refuse to date me"
December 5, 2024 at 2:17 PM
Ah, I see. So you're saying if you ran an experiment and showed a bunch of people the same paragraph, but told half of them the paragraph was written by an LLM and told the other half it was written by a human, they would feel differently about it?

I imagine that's true. Fun experiment to try
December 4, 2024 at 12:33 AM
I thought I understood, but now I'm not sure.

I do think LLM writing is weird/hollow at times. Just sounding/looking right but not actually saying much.

But why can't future models learn the gap?
December 4, 2024 at 12:17 AM
That's cool! But I just meant Alex wrote CUDA kernels for DL before CUDAMat. Not before everyone else.

My other claim was just that Alex's CUDA implementation (for DL) was very well engineered, and his was the most efficient of the ones I'm aware of (for DL).

I see where we crossed wires now
December 1, 2024 at 11:36 PM
That's the only EyeTap project I remembered! But I didn't know they used deep learning with CUDA so early. When did they start doing that? Were the GPUs remote? Or just experiments?
December 1, 2024 at 10:44 PM
I genuinely can't remember anymore.
Catastrophic forgetting.

I do remember hearing about "Artificial Neural Networks" (the old school term no one uses anymore[?]) and ZISC when I was young and remember thinking they sounded super cool. But that's all I remember haha
December 1, 2024 at 6:24 PM
Which EyeTap project are you referring to?
December 1, 2024 at 5:45 PM
Separately, Alex was writing CUDA kernels for neural nets before CUDAMat.

Also I think Alex's code was much faster for deep conv nets.
December 1, 2024 at 4:12 PM
I'm not saying the next shift has to be for NNs. But I can imagine just like NNs were a dark horse before, there could be other old techniques that have issues/gaps that need to be addressed to trigger another shift
December 1, 2024 at 4:06 PM
What I meant by another AlexNet moment was a huge proof point and a shift to a major new algo. Deep nets were around for many years before, and GPUs for DNNs had been used as well. But AlexNet tackled a major unsolved problem and triggered a shift for everyone to move away from old techniques to new ones.
December 1, 2024 at 4:04 PM
Extreme analogy:

Testing an XOR gate vs an LLM.
You can fully specify all inputs and outputs for the XOR gate. That's hard with an LLM.

But maybe I'm taking safety a little too literally.
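To spell out the analogy (a toy illustration of the point, not anyone's actual test suite): an XOR gate's behavior can be verified exhaustively over all four input combinations, while an LLM's input space is far too large to enumerate.

```python
# XOR has only 2^2 possible inputs, so its full behavior
# can be specified and checked exhaustively.
def xor_gate(a: bool, b: bool) -> bool:
    return a != b

# The complete specification: every input/output pair.
truth_table = {
    (False, False): False,
    (False, True):  True,
    (True,  False): True,
    (True,  True):  False,
}

for (a, b), expected in truth_table.items():
    assert xor_gate(a, b) == expected  # full coverage in 4 cases
```

No such exhaustive table exists for an LLM, which is why the degree of end-to-end testing has to be so different.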
December 1, 2024 at 7:21 AM
From a system (not model) point of view, and for medical/critical-system applications, I can imagine end-to-end safety makes sense whether it's AI or not.

The difference I can imagine is that the degree of testing is very different when there's a huge combination of possible outputs (like we see in LLMs)
December 1, 2024 at 7:19 AM