https://krishnacharya.github.io/
🗓️ I’ll be at the KDD Workshop on Online and Adaptive Recommender Systems (OARS) — happy to chat about this work, online or in person in Toronto!
#GLoSS #KDD2025 #OARS #LLM #RecommenderSystems #SemanticSearch #DenseRetrieval #LoRA #LLaMA3
In addition, user segment-wise evaluation shows:
- Strong gains for cold-start users in Toys and Sports
- Benefits from longer user histories in Beauty
This highlights GLoSS’s robustness across interaction lengths.
#ColdStart #Personalization
📈 On the Amazon Beauty, Toys, and Sports datasets, GLoSS improves over ID-based baselines:
- Recall@5 by +33.3%, +52.8%, +15.2%
- NDCG@5 by +30.0%, +42.6%, +16.1%
GLoSS also outperforms LLM-based models (P5, GPT4Rec, LlamaRec, E4SRec) with Recall@5 gains of +4.3%, +22.8%, +29.5% on the three datasets, respectively.
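As a sketch of how these two metrics are computed (pure Python, binary relevance; the item IDs are illustrative, not from the paper):

```python
import math

def recall_at_k(ranked_items, ground_truth, k=5):
    """Fraction of a user's held-out items that appear in the top-k list."""
    hits = set(ranked_items[:k]) & set(ground_truth)
    return len(hits) / len(ground_truth)

def ndcg_at_k(ranked_items, ground_truth, k=5):
    """Binary-relevance NDCG: log-discounted gain of hits, normalized by the ideal ranking."""
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, item in enumerate(ranked_items[:k])
              if item in ground_truth)
    idcg = sum(1.0 / math.log2(rank + 2)
               for rank in range(min(len(ground_truth), k)))
    return dcg / idcg

# With one held-out item per user (leave-one-out), Recall@5 is just hit-rate@5.
print(recall_at_k(["b", "a", "c", "d", "e"], ["a"]))          # 1.0
print(round(ndcg_at_k(["b", "a", "c", "d", "e"], ["a"]), 4))  # 0.6309 (hit at rank 2)
```

Recall@k only asks whether the target appears in the top k; NDCG@k additionally rewards ranking it higher.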
Prior LLM-based recommenders often rely on lexical search methods like BM25. GLoSS instead uses dense retrieval, going beyond frequency-based token overlap to capture deeper semantic relevance.
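The distinction can be sketched in a few lines: lexical search scores by shared surface tokens, while dense retrieval ranks by similarity in an embedding space. (The 2-d "embeddings" below are made-up toy vectors, not the output of GLoSS's actual encoder.)

```python
import math
from collections import Counter

def lexical_score(query, doc):
    """Overlap of surface tokens, in the spirit of BM25 (minus IDF and length terms)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Made-up 2-d "embeddings" standing in for a real text encoder:
emb = {
    "lip gloss":   [0.90, 0.10],
    "lip balm":    [0.85, 0.20],
    "tennis ball": [0.10, 0.95],
}
query = "shiny lip color"
q_emb = [0.88, 0.15]  # hypothetical embedding of the query text

# Lexically, "tennis ball" and "lip gloss" are equally unrelated to "shiny lip color"
# unless tokens overlap; dense retrieval ranks by semantic closeness instead:
ranked = sorted(emb, key=lambda item: cosine(q_emb, emb[item]), reverse=True)
print(ranked)  # "tennis ball" lands last despite the query sharing no token with it
```

In practice the embeddings come from a trained text encoder and the sort is replaced by an approximate nearest-neighbor index, but the ranking principle is the same.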
Classic ID-based approaches like SASRec and BERT4Rec, and SemanticID-based models like TIGER, are effective, but they usually require retraining when new items are added and struggle to generalize beyond patterns seen in training data, especially without rich metadata.
GLoSS is a generative recommendation framework that integrates LLMs with semantic search (aka dense retrieval) for sequential recommendation.
#LLM #RecommenderSystems #DenseRetrieval
Among these baselines, a classic retrieval approach (BM25 over the text of the last item) performs best. I also explore how often-overlooked preprocessing pitfalls, such as failing to deduplicate exact user-item interactions, can significantly inflate metrics.
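A minimal sketch of that deduplication step (the `(user, item, timestamp)` tuple layout is an assumption for illustration, not the post's actual data schema):

```python
def deduplicate(interactions):
    """Drop exact duplicate (user, item) pairs, keeping the first occurrence.
    Left in, repeats can inflate metrics: a model may 'predict' the held-out
    item simply because the identical interaction also appears in training."""
    seen, out = set(), []
    for user, item, timestamp in interactions:
        if (user, item) not in seen:
            seen.add((user, item))
            out.append((user, item, timestamp))
    return out

rows = [("u1", "i9", 100), ("u1", "i9", 100), ("u2", "i9", 101)]
print(deduplicate(rows))  # [('u1', 'i9', 100), ('u2', 'i9', 101)]
```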
In this post, I dive into different model types (from ID-based to fully metadata-based), key preprocessing steps, the leave-one-item-out split, evaluation metrics, and four baselines that any trained recommender should aim to beat.
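For reference, the leave-one-item-out split is commonly implemented per user like this (a hedged sketch of the standard protocol, not the post's exact code):

```python
def leave_one_item_out(history):
    """Chronological leave-one-item-out split: the last item is the test target,
    the second-to-last is validation, and everything before that is training."""
    assert len(history) >= 3, "need at least 3 interactions per user"
    return history[:-2], history[-2], history[-1]

train, val, test = leave_one_item_out(["i1", "i2", "i3", "i4", "i5"])
print(train, val, test)  # ['i1', 'i2', 'i3'] i4 i5
```

At evaluation time, the model is fed the training (plus validation) prefix and scored on whether it retrieves the held-out test item.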