David Dukić
ddaviddukic.bsky.social
PhD student in NLP | TakeLab 🇭🇷 | Information extraction, representation learning & analysis | Making LLMs better one step at a time
👋🌊🇭🇷
August 18, 2025 at 10:51 AM
So, news becomes more positive as the years go by. Or does it? We trained sentiment classifiers on STONE & 24sata, then analyzed sentiment over the five periods of the TakeLab Retriever corpus. We find that positivity rises at the expense of neutrality. But negativity in news headlines also increases.
July 15, 2025 at 12:14 PM
We detect sentiment shift by swapping embeddings across periods. Using later-period embeddings in earlier periods results in increased positive sentiment. Using earlier-period embeddings in later periods results in decreased positive sentiment.
July 15, 2025 at 12:14 PM
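The swapping mechanic above can be sketched in a few lines of numpy. This is a toy illustration, not our actual classifier: it assumes a frozen linear sentiment head over mean-pooled word vectors, and the direction `w` and both embedding tables are invented values.

```python
import numpy as np

def headline_score(headline, emb, w):
    """Score a headline with a (hypothetical) linear sentiment head
    applied to the mean of its word vectors."""
    x = np.mean([emb[t] for t in headline], axis=0)
    return 1.0 / (1.0 + np.exp(-(w @ x)))  # sigmoid -> pseudo-probability of positive

# Toy sentiment direction: axis 0 reads as positive, axis 1 as negative.
w = np.array([1.0, -1.0])

# Toy embedding tables standing in for two periods' aligned embeddings.
early = {"economy": np.array([0.2, 0.6]), "growth": np.array([0.5, 0.4])}
late = {"economy": np.array([0.6, 0.2]), "growth": np.array([0.8, 0.1])}

headline = ["economy", "growth"]
s_early = headline_score(headline, early, w)
s_late = headline_score(headline, late, w)
print(s_late > s_early)  # swapping in later-period embeddings raises the score
```

The point is only the mechanic: the classifier head stays fixed while the embedding table is swapped across periods, so any change in predicted sentiment is attributable to the embeddings.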
We wondered if the trained embeddings could tell us something about the shift in sentiment. Can we detect changes in positivity and negativity just using the trained embeddings? The answer is yes!
July 15, 2025 at 12:14 PM
We identify words that change the most by their cumulative cosine distance scores within the last 25 years. For these words, we unveil the change in meaning by picking five nearest neighbors per period. We group the words into three major topics: EU, technology, and COVID.
July 15, 2025 at 12:14 PM
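Picking nearest neighbors per period boils down to cosine similarity over the (aligned) embedding matrix. A minimal numpy sketch, with a toy vocabulary and hand-made vectors in place of our trained embeddings:

```python
import numpy as np

def nearest_neighbors(emb, vocab, word, k=5):
    """Return the k nearest neighbors of `word` by cosine similarity."""
    # Row-normalize so cosine similarity reduces to a dot product.
    x = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = x @ x[vocab.index(word)]
    order = np.argsort(-sims)  # most similar first
    return [vocab[i] for i in order if vocab[i] != word][:k]

# Toy data: 5 words, 3 dimensions (illustration only).
vocab = ["eu", "union", "covid", "virus", "app"]
emb = np.array([[1.0, 0.0, 0.0],
                [0.9, 0.1, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.9, 0.1],
                [0.0, 0.0, 1.0]])

print(nearest_neighbors(emb, vocab, "covid", k=2))  # ['virus', 'union']
```

Running this per period, on that period's embedding matrix, yields the neighbor lists used to read off a word's change in meaning.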
We train embeddings using the skip-gram with negative sampling (SGNS) method from Word2Vec. We align embeddings across periods using Procrustes alignment. We validate the quality of the embeddings on two word similarity datasets.
July 15, 2025 at 12:14 PM
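The alignment step can be sketched with plain numpy via the SVD solution to the orthogonal Procrustes problem. The random matrices below stand in for two periods' embedding matrices over a shared vocabulary, not for our actual trained vectors:

```python
import numpy as np

def procrustes_align(base, other):
    """Find the orthogonal W minimizing ||other @ W - base||_F
    (SVD solution), then map `other` into the base space."""
    # base, other: (vocab_size, dim) matrices over a shared vocabulary
    u, _, vt = np.linalg.svd(other.T @ base)
    w = u @ vt
    return other @ w

rng = np.random.default_rng(0)
base = rng.normal(size=(50, 8))

# Simulate a later period: identical geometry, but rotated by a random
# orthogonal matrix (the kind of nuisance rotation alignment removes).
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
other = base @ q

aligned = procrustes_align(base, other)
print(np.allclose(aligned, base, atol=1e-6))  # True: rotation recovered
```

Because W is constrained to be orthogonal, alignment removes arbitrary rotations between training runs while preserving all cosine similarities within each period.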
We leverage the TakeLab Retriever 🐕 (retriever.takelab.fer.hr) corpus of 10 million articles from Croatian news outlets, which we split into five equal periods (2000–2024).
Semantic change is measured using the cumulative cosine distance between embeddings in neighboring periods.
July 15, 2025 at 12:14 PM
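The change score described above (summing the cosine distances between a word's aligned vectors in neighboring periods) can be sketched in a few lines of numpy; the three toy vectors are illustrative, not real period embeddings:

```python
import numpy as np

def cosine_distance(u, v):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def cumulative_change(vectors_by_period):
    """Sum cosine distances between a word's vectors in neighboring periods."""
    return sum(cosine_distance(a, b)
               for a, b in zip(vectors_by_period, vectors_by_period[1:]))

# Toy trajectory of one word across three periods: a steady drift
# from one axis to another.
periods = [np.array([1.0, 0.0]),
           np.array([1.0, 1.0]),
           np.array([0.0, 1.0])]

score = cumulative_change(periods)
print(round(score, 4))  # 0.5858, i.e. 2 * (1 - 1/sqrt(2))
```

Ranking the vocabulary by this score surfaces the words whose meaning drifted most over the 25 years.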
While traditional diachronic studies use corpora spanning centuries, we find interesting results even when training diachronic embeddings on only 25 years of news data. We detect words from three turbulent topics (EU, technology, and COVID) whose semantics were strongly affected.
July 15, 2025 at 12:14 PM