Hassan
@hssndeatomized.bsky.social
I work in AI. Trying to understand the world around me a bit better.
Faces in London is like the Last Supper of misery - it's hilarious if you are an outsider, sad if you are not
June 20, 2025 at 7:57 AM
The Cramér-Rao bound is probably what you are looking for (quick sketch below)
April 2, 2025 at 12:58 AM
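A minimal sketch of the Cramér-Rao bound mentioned above, assuming a toy setting not specified in the thread (estimating the mean of a Gaussian with known variance): the Fisher information per sample is 1/σ², so no unbiased estimator of the mean can have variance below σ²/n, and the sample mean attains that floor.

```python
# Hypothetical illustration (the original question isn't shown in this thread):
# empirically checking the Cramér-Rao lower bound for estimating the mean of a
# Gaussian with known variance. Per-sample Fisher information is 1/sigma^2, so
# the bound for n i.i.d. samples is sigma^2 / n; the sample mean attains it.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 2.0, 3.0, 50, 20_000

# Simulate many datasets and look at the spread of the sample-mean estimator.
estimates = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
empirical_var = estimates.var()

crlb = sigma**2 / n  # 1 / (n * Fisher information)
print(f"empirical variance of sample mean: {empirical_var:.4f}")
print(f"Cramér-Rao lower bound:            {crlb:.4f}")
```

Both numbers should come out near sigma²/n = 0.18 in this setup.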
And don't forget, it put an end to all conflict and everything's all peaceable and civil now - the ignorance and stupidity of this article is a thing to behold.
Fuck. me.
February 13, 2025 at 6:07 PM
3/3 ...and we can't really collect at scale. Having all those things that I mention would make me much more confident, and would likely make the product much more successful once deployed.
February 3, 2025 at 8:40 PM
2/3 Don't get me wrong - we are in the midst of using LLMs to build a product that solves a major industry challenge that couldn't be solved otherwise - but I do worry quite a bit about how it will deal with data that was not in its training set at all (like complex engineering data, among others)...
February 3, 2025 at 8:40 PM
1/3 Sounds interesting - I'll have a look from your feed. My quibble would be: what kind of evidence do we have that it will be effective when you try to use it in anger? Benchmarks are good - but we need a solid theory backing them up to inspire proper confidence - reality is just far too slippery
February 3, 2025 at 8:40 PM
Yeah I don't know that we have enough of a scientific understanding of LLMs to prove anything of that nature one way or another
February 3, 2025 at 8:18 PM
Also, alternative approaches should be investigated because that's the point of scientific research - discovery! Whether it's better or not is beside the point - maybe it's better 10, 20, 30 years from now, who knows
February 3, 2025 at 8:11 PM
3/4 I am really interested in ideas that break the mold and try to address these fundamental issues. To give an idea of the kinds of papers I am thinking of: www.science.org/doi/10.1126/... and the much less well known but still intriguing arxiv.org/abs/1803.05252 and its follow-ups
[Link preview: "Human-level concept learning through probabilistic program induction" - www.science.org]
February 3, 2025 at 8:11 PM
2/4 Related to the above limitations, DL models also don't have compositionality and abstraction, which I think are major requirements for powerful and flexible cognitive processing (if they're emergent, I'd like to see a scientific argument for it)
February 3, 2025 at 8:11 PM
1/4 This may get me voted off the AI island, but DL generally speaking has some fundamental flaws - the data and resource hungriness, the lack of interpretability let alone verifiability - which are hugely undesirable IMO, and new algorithmic approaches are needed & should be investigated simply for that reason
February 3, 2025 at 8:11 PM
Do you think this is relevant to AI, or is it another interest of yours?
January 15, 2025 at 11:19 PM
Some task that requires non-trivial amounts of information processing, pattern recognition, working with levels of abstraction, and composing solutions to other cognitive or information processing tasks. Just pulled this out as a response - but I think it covers a lot of what I think is relevant.
January 15, 2025 at 11:01 PM
Sounds exciting - what's the use case you are looking at?
January 6, 2025 at 4:02 PM
Indeed - a lot of talent in there
January 4, 2025 at 8:43 PM
Ok that's very fair -

I also agree with you that the intersection of AI and other fields is exciting and seems to be at a relatively early stage at this point. In addition to neuro, other areas that I personally find interesting are climate/renewables, materials, QC, and synthetic biology
January 4, 2025 at 3:32 PM
I'd say don't do a PhD unless you are genuinely curious and excited about a topic - irrespective of the prospect of future financial gain. Otherwise you are just screwing yourself on multiple fronts. You really need to have a good understanding of what you want out of life before you sign on.
January 4, 2025 at 2:26 PM
(3/3) Now the good(ish) news: from my personal experience, PhDs are essential when you are trying to use these big models in a novel way - where you need to go beyond mere API calls and do some genuine innovating. The market for that is still pretty big and will likely continue to remain so.
January 4, 2025 at 2:19 PM
(2/3) Indeed, the number of roles at industry labs is limited, & so expecting to land one of those feels unrealistic, particularly given the number of AI PhDs. If I were in grad school now, I would've gone for something other than AI (type theory tops my list now!)
January 4, 2025 at 2:19 PM
(1/3) I find the post a bit contradictory tbh. Jobs where ML/Ops and/or API-use skills are paramount are misaligned with the AI PhD skillset. This sounds like a normal amount/type of post-grad-school anxiety - & the continued failure of grad schools in supporting students with career planning
January 4, 2025 at 2:19 PM
Ah! Got it - thanks 🙂
December 29, 2024 at 2:53 AM
Interesting - what would be some key papers/books on this procedural representation? I would have thought this is a ref. to type theories, which seem to be coming along nicely. Or are you referring to the success of neural representations in AI?
December 29, 2024 at 2:43 AM
Dipshit
December 29, 2024 at 1:05 AM