Enjoying Bsky without understanding. Fullstack dev. Working out of NYPL when their wifi works.
As instruction count increases, instruction-following quality decreases uniformly. This means that as you give the LLM more instructions, it doesn't simply ignore the newer ("further down in the file") instructions - it begins to ignore all of them uniformly.
"""
As instruction count increases, instruction-following quality decreases uniformly. This means that as you give the LLM more instructions, it doesn't simply ignore the newer ("further down in the file") instructions - it begins to ignore all of them uniformly
"""
Is this what they call "Dickensian writing"? I can't believe this was any more decipherable in David Copperfield's time.
Here's some apps that you may want to double-check & turn off personalized ad data for:
NETFLIX: account > security & privacy > privacy & data settings > data privacy > opt out
go to Settings > Data & Privacy > Manage shared info > Personalized shopping, and toggle that shit off
I have mixed feelings about Michael Lewis (the author), but this article is like 🤌😙
gift link: wapo.st/4ggsRLV
Gift link about biases in child learning studies and how to identify them
wapo.st/3WZ9HRQ