Appu Shaji
@appughar.bsky.social
CEO and Founder at Mobius Labs.

Here for discussions on various facets of AI, such as multimodality, quantisation, efficiency, and more. Some of our recent work appears at https://blog.mobiuslabs.com/
Interestingly, GluGlu activations are demonstrating significant gains on Winograd-like datasets, with performance curiously peaking as we approach the winter holiday period. 🍷 Inspired by this, we will release a new dataset called GluWine following the completion of extensive experimentation.
December 3, 2024 at 11:48 AM
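For anyone outside the pun: "GluGlu" presumably riffs on the GLU family of gated activations (GLU, GeGLU, SwiGLU), with "GluWine" pointing at Glühwein. Purely as an aside, here is a minimal numpy sketch of a SwiGLU-style gate; the "GluGlu" variant itself is the joke, not a real activation, and all names below are illustrative.

```python
# Minimal SwiGLU-style gated activation, for illustration only.
import numpy as np

def swiglu(x, W, V):
    """SwiGLU(x) = Swish(x @ W) * (x @ V), with Swish(z) = z * sigmoid(z)."""
    z = x @ W
    gate = z / (1.0 + np.exp(-z))   # Swish / SiLU applied to the gate branch
    return gate * (x @ V)           # elementwise gating of the value branch

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))         # toy batch: 4 examples, 8 features
W = rng.normal(size=(8, 16))        # gate projection (hypothetical shapes)
V = rng.normal(size=(8, 16))        # value projection
print(swiglu(x, W, V).shape)        # -> (4, 16)
```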
Agree. Long-form podcasts are often consumed while multitasking—at least for me (e.g., driving, cooking, working out, etc.)—unlike the focused attention typical of books or TV shows from yesteryear. What’s absorbed becomes a mix of the discussion and one’s mental interludes.
December 1, 2024 at 10:11 PM
Considering @roydanroy.bsky.social’s little one has finished the Harry Potter series, The Hobbit should definitely be accessible. I believe Tolkien originally wrote it for his own children. The Lord of the Rings, however, can be a bit trickier.
December 1, 2024 at 3:46 PM
The Hobbit and The Lord of the Rings series. I read The Hobbit aloud with my daughter a few years back, and I’m not sure which of us enjoyed it more. Tolkien’s use of language is really beautiful.
December 1, 2024 at 12:15 PM
The story of serious researchers fading away while nefarious actors take over as a technology becomes practical is as old as time. Are we unintentionally contributing to this cycle by staying out of it?
November 30, 2024 at 9:58 AM
100% agree on the continuum—it applies broadly across other aspects of AI (e.g., models enabling bio-weapons, surveillance, etc.). Why not redirect your efforts to safety research? The entry barrier for setting up a tracking system has become so low; ergo, tackling false positives and safety concerns is both relevant and open.
November 30, 2024 at 9:55 AM
I suggest you keep a version private/commercial. With your data moat, you might be able to raise a better valuation than these guys: techcrunch.com/2014/07/18/y...
Yo Raises $1.5M In Funding At A $10M Valuation, Investors Include Betaworks And Pete Cashmore | TechCrunch
Yo, the simple app that just sends a "yo" to your friends, has closed $1.5 million in seed funding with a $10 million valuation and is finally ready to talk about its investors. They include Betaworks...
techcrunch.com
November 29, 2024 at 1:47 PM
arxiv.org/abs/1606.04474: this was getting a lot of eyeballs a few years back, and it spun out a fair amount of meta-learning work (especially in the in-context and few-shot settings).
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how...
arxiv.org
November 29, 2024 at 10:35 AM
Is there a better estimator here?

With stochastic gradients, as the number of mini-batches (assumed i.i.d.) grows, the Central Limit Theorem kicks in, making the gradient estimate more robust. (Ergo, if you have scale, this is a sensible thing to do.)
November 29, 2024 at 7:53 AM
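To make the CLT point above concrete, here is a toy numpy sketch (my illustration, not from the post): per-sample gradients of the loss (theta − x)² with x ~ N(mu, sigma²) are i.i.d., so averaging n of them shrinks the standard error of the gradient estimate like 2·sigma/√n.

```python
# Toy check: the std of an averaged i.i.d. gradient estimate decays ~ 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
theta, mu, sigma = 1.0, 0.0, 2.0
true_grad = 2 * (theta - mu)                 # gradient of E[(theta - x)^2]

for n in [1, 10, 100, 1000]:                 # mini-batch sizes
    x = rng.normal(mu, sigma, size=(10_000, n))    # 10k batches of size n
    grad_est = (2 * (theta - x)).mean(axis=1)      # one estimate per batch
    print(f"n={n:4d}  bias={grad_est.mean() - true_grad:+.4f}  "
          f"std={grad_est.std():.4f}  clt_std={2 * sigma / np.sqrt(n):.4f}")
```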
Still remember the time when people were freaking out when #papers breached 1000 (sometime in the 2000s).

It was not uncommon for the submission servers to go down, and for everyone to get more time to iterate and submit past the deadline.
November 28, 2024 at 4:51 PM
Treat it as noise. These are low-entropy, low-utility interactions and not worth the headspace.
November 28, 2024 at 2:41 PM
Anti-censorship is something I agree with in principle, like Elon (though I don’t think he practices what he preaches). As adults, we should be able to block and disengage on our own; but platform maintainers jumping in to censor feels like going from the frying pan into the fire (or vice versa 🤔).
November 28, 2024 at 12:08 PM
This is incredibly unfortunate and sets a very bad precedent. What’s even more appalling is the complete lack of explanation for the ban.
November 28, 2024 at 11:25 AM
Let’s not make this us (ML folks) vs. them (anti-AI crowd). Many are anxious about being replaced by AI, and their frustration is often misdirected in loud, mob-like ways at the wrong targets. While we shouldn’t tolerate toxicity (thanks for the list, btw), siloing ourselves can be equally harmful.
November 28, 2024 at 5:46 AM
False negatives are also a major issue. My most cited paper, which I co-authored and which is widely used by practitioners, was rejected four times before we turned it into a technical report and finally published it in PAMI (scholar.google.de/scholar?q=sl...). We had almost given up.
Google Scholar
scholar.google.de
November 25, 2024 at 12:03 PM
Imho, peer review is a system designed to verify, reproduce, and push the boundaries of our collective scientific knowledge. My two cents: seeking out a better system that aligns with these goals is the need of the hour.
November 24, 2024 at 7:34 PM
I agree that this is, unfortunately, the era of anti-intellectualism. That said, if the glaring inefficiencies in current peer-review systems are not addressed, it will only add fuel to the fire. It is far better to hash things out collectively and proactively.
November 24, 2024 at 7:32 PM
The paths my colleagues took were through intermediate adjunct roles. Given the current climate & the pay scale differences in ML, this rarely happens.

That said, such avenues should be normalized, as they ultimately represent valid contributions to scientific knowledge and push the field forward.
November 24, 2024 at 1:12 PM
🫡
November 24, 2024 at 9:35 AM
For forward-looking ideas, there should be an equivalent platform. Limiting papers per reviewer is low-hanging fruit—I’ve seen reviewers rush 10+ papers a few hours before deadlines. Further, with arXiv around, double-blind review feels like a smokescreen; open peer or community feedback might work better.
November 24, 2024 at 9:31 AM