Kanishka Misra 🌊
@kanishka.bsky.social
Assistant Professor of Linguistics and Harrington Fellow at UT Austin. Works on computational understanding of language, concepts, and generalization.
🕸️👁️: https://kanishka.website
Though I also wonder if all of this generalizes to the inverse case of meaning-seeking minimal pair stimuli
bsky.app/profile/kani...
I am 100% on board with this framing, but wondering if it’ll hold for minimal pair stimuli that are both grammatical but convey very different messages, an example from COMPS:
A robin can fly.
A penguin can fly.
(Similarly for datasets like EWoK)
November 11, 2025 at 1:32 PM
I am 100% on board with this framing, but wondering if it’ll hold for minimal pair stimuli that are both grammatical but convey very different messages, an example from COMPS:
A robin can fly.
A penguin can fly.
(Similarly for datasets like EWoK)
November 11, 2025 at 1:15 PM
Chris Potts on Twitter, circa 2023: language models are outstanding (lmao)
November 10, 2025 at 12:18 PM
Congratulations!!!
November 10, 2025 at 1:09 AM
Keeping it ambiguous as always 🥸😎
November 5, 2025 at 1:32 PM
Of course!! Thanks so much for the kind words — it’s just a bunch of modifications I made over the old Hugo academic theme!
November 4, 2025 at 2:29 PM
And here’s an alternate link: semanticsarchive.net/Archive/zg1Z... my bad!
October 24, 2025 at 1:10 AM
Thanks for explaining! Sure, calibration of common ground can potentially change expectations about naturalness. Another way this can happen is from specific constructions (e.g., “hey, wait a min”), which has been shown to have an effect in humans w/o any other context (S&K). See our exp 2 for this!
October 24, 2025 at 1:10 AM
Check out Syrett and Koev’s work (academic.oup.com/jos/article/32…) among others as evidence for human sensitivity to one being a more surprising response than the other
Also check out our second exp — we saw weakened sensitivity when there were digression triggers (like “hey wait a minute”)
October 23, 2025 at 10:22 PM
Yes — we test this in exp 2 where we use digression triggers to see if the preference for the at issue content is weakened (it is; check out our exp 2 for details)
October 23, 2025 at 10:20 PM