Vaibhav
@vaibhavadlakha.bsky.social
PhD candidate @Mila and @McGill
Interested in interplay of knowledge and language
Love being outdoors!
Check out this amazing work by @karstanczak.bsky.social on rethinking LLM alignment through frameworks from multiple disciplines!
📢New Paper Alert!🚀
Human alignment balances social expectations, economic incentives, and legal frameworks. What if LLM alignment worked the same way?🤔
Our latest work explores how social, economic, and contractual alignment can address incomplete contracts in LLM alignment🧵
March 4, 2025 at 8:17 PM
Check out the new MMTEB benchmark🙌 if you are looking for an extensive, reproducible and open-source evaluation of text embedders!
I am delighted to announce that we have released 🎊 MMTEB 🎊, a large-scale collaboration working on efficient multilingual evaluation of embedding models.
This work implements >500 evaluation tasks across >1000 languages and covers a wide range of use cases and domains🩺👩💻⚖️
February 20, 2025 at 3:44 PM
#Repl4NLP will be co-located with NAACL this year in Albuquerque, New Mexico!
📣📣 Call for papers #Repl4NLP @naaclmeeting.bsky.social
Consider submitting your work - full papers, extended abstracts, or cross-submissions!
✨ Direct paper submission deadline: Jan 30, 2025
✨ ARR commitment deadline: Feb 20, 2025
More details on our website: sites.google.com/view/repl4nl...
10th Workshop on Representation Learning for NLP - Call for Papers
The 10th Workshop on Representation Learning for NLP (RepL4NLP 2025), co-located with NAACL 2025 in Albuquerque, New Mexico, invites papers of a theoretical or experimental nature describing recent ad...
December 24, 2024 at 5:03 PM
Excited to be at #NeurIPS2024 this week. Happy to meet up and chat about retrievers, RAG, embedders etc, or anything LLM-related!
December 10, 2024 at 6:06 PM
(1/2) jumping back into this! read OpenScholar by @akariasai.bsky.social et al
I am quite excited by the abilities of LLMs to assist in scientific discovery and literature review.
Restarting an old routine "Daily Dose of Good Papers" together w @vaibhavadlakha.bsky.social
Sharing my notes and thoughts here 🧵
November 29, 2024 at 4:59 PM
Reposted by Vaibhav
Restarting an old routine "Daily Dose of Good Papers" together w @vaibhavadlakha.bsky.social
Sharing my notes and thoughts here 🧵
November 23, 2024 at 12:04 AM
Honoured to be on the list! https://t.co/15CucCbxOu
November 29, 2024 at 5:14 PM
Join us and be part of an amazing research community! Feel free to reach out if you want to know more about Mila or the application process. https://t.co/Z3QT7hFAS7
November 29, 2024 at 5:14 PM
Completely agree, super well organised and executed! 👏 https://t.co/wGkts8EGAb
November 29, 2024 at 5:14 PM
Excited to welcome @COLM_conf to the city of best bagels! 🥯 Looking forward to it! https://t.co/wUxyrDr3x6
November 29, 2024 at 5:14 PM
A little teaser for LLM2Vec @COLM_conf!
Stop by Tuesday morning poster session to know how we officiated the marriage of BERTs and Llamas! 🦙 https://t.co/E3HB1mwVvv
November 29, 2024 at 5:14 PM
RIP freedom of speech! https://t.co/PXMS9xMnvH
November 29, 2024 at 5:14 PM
🚀🚀 LLMs are the new text encoders! https://t.co/4FZ2LXCPSd
November 29, 2024 at 5:14 PM
Amazing talk by @PontiEdoardo. 🙌🚀It is interesting how many different ways exist to make LLMs more efficient! https://t.co/2f2L8zLiH3
November 29, 2024 at 5:14 PM
First ever arena for embedding models! ⚔️
Excited to see how this will change evaluation in this space! 🚀 https://t.co/H4FoMJrQaA
November 29, 2024 at 5:14 PM
Looking for an emergency reviewer for EMNLP / ARR familiar with RAG and language models. Please reach out if you can review a paper in the next couple of days.
November 29, 2024 at 5:14 PM
Great to see LLM2Vec being used for multilingual machine translation! 🚀 I believe LLM2Vec will serve as the backbone of many more applications in the future! https://t.co/G18aqJ2xuv
November 29, 2024 at 5:14 PM
However, this could mean we are past the point where MTEB serves as a useful signal 👀. Improving beyond the numbers we are seeing today (by training on synthetic data) carries the risk of optimizing for the benchmark rather than building general purpose embedding models. 5/N
November 29, 2024 at 5:14 PM
Interestingly, Meta-Llama-3-8B only slightly outperforms Mistral-7B, the previously best model when combined with LLM2Vec 🤔. We might have reached a point where better base models are not sufficient to make substantial improvements on MTEB. 3/N
November 29, 2024 at 5:14 PM
In the supervised setting, applying LLM2Vec to Meta-Llama-3-8B leads to a new state-of-the-art performance (65.01) on MTEB among models trained on publicly available data only. 2/N https://t.co/UJoOTJ4L5r
November 29, 2024 at 5:14 PM
Exciting discovery! Triggers DON’T transfer universally 😮. Check out the paper for detailed experiments and analysis. https://t.co/Op7gGWBEdb
November 29, 2024 at 5:14 PM
Applying LLM2Vec costs the same as ~2 cappuccinos! https://t.co/O6iFXAJgoB
November 29, 2024 at 5:14 PM
Very nice and intuitive explanation of our work LLM2Vec by @IntuitMachine!
Using causal LLMs for representation tasks without any architecture modifications is like driving a sports car in reverse 🏎️🤯
All resources available at our project page - https://t.co/hwAiv2yrPT https://t.co/RfBNydFW9y
November 29, 2024 at 5:14 PM
Great summary of our recent LLM2Vec paper! Thanks @ADarmouni!
All resources available at our project page - https://t.co/hwAiv2yrPT https://t.co/bY4DoP5ms1
November 29, 2024 at 5:14 PM
This is going to be my new way of bookmarking papers now! https://t.co/Dbn5juFBD9
November 29, 2024 at 5:14 PM