ari
@aryanmann.com
aryanmann.com
i like philos and computers and other things - building llms @ cohere
Agree! Practically, for most use cases it doesn't matter. If it does, you can most likely post-train it out.
May 6, 2025 at 3:41 PM
That's probably why Nathan said "in different areas." In the area of anti-China sentiment, they definitely are censored (as the post says).
May 6, 2025 at 3:32 PM
I haven't seen a performant graph-based implementation yet.
May 1, 2025 at 3:44 PM
aren't graphs only useful if there's rich relationships between entities?

wouldn't a rich vector search + reranking be better?
May 1, 2025 at 9:05 AM
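The "vector search + reranking" pipeline mentioned above can be sketched in a few lines. This is a toy illustration, not any particular library's API: the embeddings are made up, and a keyword-overlap score stands in for a real cross-encoder reranker.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# toy corpus: (text, embedding) pairs; the vectors are invented for illustration
corpus = [
    ("graphs capture entity relationships", [0.9, 0.1, 0.0]),
    ("vector search retrieves by similarity", [0.1, 0.9, 0.1]),
    ("rerankers rescore the top candidates", [0.2, 0.8, 0.3]),
]
query_emb = [0.15, 0.85, 0.2]

# stage 1: dense retrieval -- take the top-k documents by cosine similarity
top_k = sorted(corpus, key=lambda d: cosine(query_emb, d[1]), reverse=True)[:2]

# stage 2: rerank -- a real system would rescore (query, doc) pairs with a
# cross-encoder; a simple keyword-overlap count stands in for that model here
query_terms = {"vector", "search", "similarity"}

def rerank_score(text):
    return len(query_terms & set(text.split()))

reranked = sorted(top_k, key=lambda d: rerank_score(d[0]), reverse=True)
print(reranked[0][0])
```

The design point is that the cheap first stage narrows millions of documents to a handful, and the expensive reranker only sees that handful.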
can't wait till all companies merge into one
April 16, 2025 at 3:42 AM
it's one bad article vs a few decent books; i wouldn't discredit him entirely over that
March 31, 2025 at 7:06 PM
🥳🥳
March 14, 2025 at 4:40 AM
any ablations on a task to verify effectiveness?
February 22, 2025 at 11:58 AM
DEI! everyone knows you need at least 25 grammies to perform
February 11, 2025 at 8:23 AM
i have never seen genuine allyship from anyone higher than a middle manager
February 11, 2025 at 8:22 AM
python is absolutely fine. a lot of scientific computing libraries use C under the hood so it's fast too. it is user friendly & the ecosystem is becoming more reliable because of Ruff. the type system is a bit of a mess tho but no biggie
February 11, 2025 at 8:17 AM
honestly if it's not AI, it's either unmoderated or human moderators (a job which can cause psychological damage)
February 11, 2025 at 8:02 AM
organizing methodically is not the vibe
February 10, 2025 at 6:09 PM
same but can't conflate bad vibes with fascism
February 7, 2025 at 8:05 AM
yeah, the tech right ideology seems more like an e/acc thing than an EA thing
February 6, 2025 at 11:24 PM
model collapse is kinda overblown tho. you just need to put some "energy" in with synthetic data -- it can be BoN sampling, verifiers, etc.
January 29, 2025 at 10:01 AM
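The BoN-plus-verifier idea above can be made concrete with a toy sketch. Everything here is invented for illustration: a noisy stand-in "model" proposes answers, a verifier checks them, and only verified samples survive into the synthetic dataset, which is the "energy" being put in.

```python
import random

random.seed(0)

def noisy_model(x, y):
    """Stand-in generator: usually right, sometimes off by one."""
    answer = x + y
    if random.random() < 0.4:
        answer += random.choice([-1, 1])
    return answer

def verifier(x, y, answer):
    """External check that filters the generator's outputs."""
    return answer == x + y

def best_of_n(x, y, n=8):
    """Sample N candidates and keep one that passes the verifier, if any."""
    candidates = [noisy_model(x, y) for _ in range(n)]
    verified = [c for c in candidates if verifier(x, y, c)]
    return verified[0] if verified else None

# build a synthetic dataset where only verified samples survive
data = [(x, y, best_of_n(x, y)) for x, y in [(2, 3), (10, 7), (41, 1)]]
print(data)
```

Because the verifier gates every sample, the synthetic data's quality is bounded by the verifier rather than by the generator, which is why collapse from training on your own outputs isn't inevitable.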
What does this prove? Isn't it obvious a Chinese company would have these "safeguards" in place? You can probably fine-tune that behavior out with 50 examples.
January 28, 2025 at 6:48 AM
it's been independently verified multiple times at a smaller scale. a larger scale will just take more time/compute, and there's no reason to believe the RL won't scale
January 27, 2025 at 6:31 PM
yeah good point about fear vs risk. i agree there's virtually no risk, might be good to socially reinforce that
January 27, 2025 at 4:58 PM
these are just my observations from 4 years at a LAC. sure, it's mostly right wingers afraid of folks calling them out on blatant racism, etc. nonetheless, it's also well-intentioned folks afraid of saying the wrong thing.
January 27, 2025 at 6:13 AM
The issue is that it's marketed as a conscious oracle whereas it is just an impressive search tool that operates in natural language space.

There are issues with it, most of which can be attributed to humans too, but some are uniquely challenging.
January 27, 2025 at 5:20 AM
you're overestimating folks in tech
January 25, 2025 at 11:44 AM
scissors
January 23, 2025 at 5:42 AM
cancel culture (by my definition, calling out powerful people publicly) is generally a very good thing imo.

however, there is a general fear of "misspeaking" in leftist circles - i think stems partly from discussing heavier topics but also partly from a sense of moral purity used as a status symbol
January 23, 2025 at 5:40 AM
thank you for your service 🫡 i listened to a minute before Schulz's voice became unbearable
January 23, 2025 at 5:32 AM