Shan Chen
@shan23chen.bsky.social
PhDing @AIM_Harvard @MassGenBrigham | PhD Fellow @Google | Previously @Bos_CHIP @BrandeisU
More robustness and explainability 🧐 for Health AI.
shanchen.dev
Source: t.co/mV27ZZg5MN
https://www.reddit.com/r/OpenAI/comments/1ieonxv/comment/ma9f5me/
February 1, 2025 at 4:01 AM
Yea… he does have problems portraying women in stereotypical ways; big criticism in China too
January 4, 2025 at 11:06 PM
During the Q&A session, someone stood up to her about this issue really respectfully, and her response was: “That was not based on my judgment. That was based on the student's quote saying that the school was not teaching it, which meant that it applied to a lot of people from there."
December 14, 2024 at 6:10 PM
Most of the talk discussed bad practices, but only one slide mentioned a specific group of people.
December 14, 2024 at 6:10 PM
Haha which one has more nowadays?
December 11, 2024 at 5:26 AM
Haha transformers really transformed both.
However, I feel like the division is even wider… currently, it seems like RL is taking over LM post-training, and many NLProc folks are working on new applications enabled by language models
December 11, 2024 at 5:24 AM
Imagine a world where these will be positively correlated
December 6, 2024 at 2:59 AM
Quite possible!
Here, we found some early evidence that SAE features trained on language models are still meaningful to LLaVA.
More details are in the post, and more will be shared soon!
@JackGallifant
@oldbayes.bsky.social
@daniellebitterman.bsky.social
December 5, 2024 at 8:16 PM
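The SAE-to-LLaVA transfer idea above can be sketched roughly like this. Everything here is a toy stand-in: the SAE weights are random rather than trained, the dimensions are made up, and `llava_acts` is random noise standing in for real LLaVA language-tower activations — the point is only the mechanics of reusing an LM-trained SAE encoder on another model's activations.

```python
import numpy as np

class SparseAutoencoder:
    """Minimal ReLU sparse autoencoder (toy shapes, untrained weights)."""
    def __init__(self, d_model, d_features, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(0, 0.02, (d_model, d_features))
        self.b_enc = np.zeros(d_features)
        self.W_dec = rng.normal(0, 0.02, (d_features, d_model))
        self.b_dec = np.zeros(d_model)

    def encode(self, x):
        # ReLU encoder: nonnegative, (hopefully) sparse feature activations
        return np.maximum(0, x @ self.W_enc + self.b_enc)

    def decode(self, f):
        # Linear decoder back into the model's activation space
        return f @ self.W_dec + self.b_dec

# An SAE nominally "trained" on a language model's residual stream...
sae = SparseAutoencoder(d_model=64, d_features=256)

# ...applied unchanged to activations from a multimodal model that shares
# the same language backbone (random stand-ins for LLaVA activations here).
llava_acts = np.random.default_rng(1).normal(size=(10, 64))
features = sae.encode(llava_acts)
recon = sae.decode(features)
print(features.shape, recon.shape)  # (10, 256) (10, 64)
```

In practice one would check whether the transferred features remain interpretable, e.g. whether reconstruction error on LLaVA activations stays low and whether individual features fire on semantically coherent inputs.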
More on the potential future reliance on LLM agents doing reviews and audits
November 27, 2024 at 9:46 PM
I’m terrified by the massive OpenReview data. Potentially gonna come back to bite us 🥲😥
November 27, 2024 at 5:50 PM
END/🧵 Thanks to all our awesome co-authors:
@jannahastings.bsky.social
@daniellebitterman.bsky.social
And all our awesome collaborators who are not on the right platform yet! 🦋
Happy Thanksgiving! 🍂
November 27, 2024 at 3:17 PM
5/🧵 Dive deeper into our methods, findings, and the implications of our research by checking out the full 📜 paper here: arxiv.org/abs/2405.05506
All our data can be downloaded from our website: crosscare.net
Cross-Care: Assessing the Healthcare Implications of Pre-training Data on Language Model Bias
Large language models (LLMs) are increasingly essential in processing natural languages, yet their application is frequently compromised by biases and inaccuracies originating in their training data. ...
November 27, 2024 at 3:14 PM
4.5/🧵 For the arXiv pretraining dataset, we also have an overall trend based on entity mentions! Guess which two terms account for the big bump back in 2019
November 27, 2024 at 3:13 PM
4/🧵 We've also developed a new data visualization tool, available at crosscare.net, to let researchers and practitioners explore these biases across different pretraining corpora and better understand their implications. Tools in progress! 🛠️📊
November 27, 2024 at 3:13 PM