Erik Arakelyan
@kirekara.bsky.social
Researcher @Nvidia | PhD from @CopeNLU | Formerly doing magic at @Amazon Alexa AI and @ARM. ML MSc graduate from @UCL. Research is the name of the game. ᓚᘏᗢ

http://osoblanco.github.io
Reposted by Erik Arakelyan
I defended my PhD at the University of Copenhagen ☺️ What a journey! I want to give massive thanks to my amazing supervisors, @iaugenstein.bsky.social and @neuralnoise.com who were there with me throughout the whole process.

Thesis on: osoblanco.github.io/thesis/
The arXiv version is coming soon!
April 3, 2025 at 12:54 PM
@dfdazac.bsky.social it was an honor to work with someone as amazing as you.

The line made me teary 🥹🥹♥️♥️
🥰🥰🥰🥰🥰🥰🥰
November 30, 2024 at 12:04 PM
Reposted by Erik Arakelyan
Hello bluesky!
I'm using this first post to share that my PhD thesis is now available online at research.vu.nl/en/publicati...
Thanks to all my collaborators who joined me in this journey!
November 29, 2024 at 4:42 PM
Given the current weird/awful state of how reviewing is handled at major ML venues, I think we explicitly need to rank reviewers, even if they stay anonymous. This would help (S)ACs at least internally filter out malicious and unqualified ones.

Will work on something like this closer to ~ICML; a rough sketch of the idea is below.
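A minimal sketch of what such an internal ranking could look like, assuming (S)ACs rate each review they handle; the rating scale, reviewer IDs, and flag threshold here are all hypothetical, not any venue's actual system:

from collections import defaultdict

def rank_reviewers(ratings, flag_below=2.0):
    """Aggregate per-review quality ratings (e.g., 1-5 from ACs) by
    anonymous reviewer ID, rank reviewers best-first, and flag likely
    unqualified ones whose mean rating falls below `flag_below`."""
    scores = defaultdict(list)
    for reviewer_id, score in ratings:
        scores[reviewer_id].append(score)
    means = {rid: sum(s) / len(s) for rid, s in scores.items()}
    ranking = sorted(means.items(), key=lambda kv: kv[1], reverse=True)
    flagged = {rid for rid, mean in means.items() if mean < flag_below}
    return ranking, flagged

# Hypothetical AC ratings for three anonymous reviewers.
ratings = [("R1", 4), ("R1", 5), ("R2", 1), ("R2", 2), ("R3", 3)]
ranking, flagged = rank_reviewers(ratings)
print(ranking)  # [('R1', 4.5), ('R3', 3.0), ('R2', 1.5)]
print(flagged)  # {'R2'}

Since the ranking stays internal to the (S)ACs, reviewer anonymity toward authors is preserved.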
November 28, 2024 at 5:59 PM
The main question in current LLM “reasoning” research is what to do next. Most work goes into synthetic data generation and training on it, perhaps with self-refinement, in the hope that the model gets better. I think we are missing controlled task formalization, step-by-step reasoning, and strict step verification.
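A toy example of what "strict step verification" could mean in practice: every generated step must match a controlled format and is re-executed by a deterministic checker before the chain is accepted. The step format and checker here are hypothetical, purely to illustrate the idea:

import re

def verify_step(step: str) -> bool:
    """Strictly verify one reasoning step of the (made-up) form
    'a + b = c' by re-executing the arithmetic; reject anything
    outside the controlled format."""
    m = re.fullmatch(r"\s*(-?\d+)\s*([+\-*])\s*(-?\d+)\s*=\s*(-?\d+)\s*", step)
    if not m:
        return False
    a, op, b, c = int(m[1]), m[2], int(m[3]), int(m[4])
    return {"+": a + b, "-": a - b, "*": a * b}[op] == c

def verify_chain(steps):
    """Accept a reasoning chain only if every step passes verification;
    otherwise report the index of the first failing step."""
    for i, step in enumerate(steps):
        if not verify_step(step):
            return False, i
    return True, None

ok, bad = verify_chain(["2 + 3 = 5", "5 * 4 = 20", "20 - 1 = 18"])
print(ok, bad)  # False 2 -> the third step is arithmetically wrong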
November 19, 2024 at 5:34 AM
Reposted by Erik Arakelyan
My amazing collaborators will be presenting three papers next week at EMNLP 2024! I wrote a blog post about our EMNLP papers and some of the other projects we're brewing 🚀🙂 neuralnoise.com/2024/nov-res...
November 9, 2024 at 11:02 PM
👋Psst! Want more faithful, verifiable and robust #LLM reasoning than with CoT, but find using external solvers meh? Our FLARE💫 uses logic programming with exhaustive simulated search to achieve this. 🧵
With @pminervini.bsky.social, Patrick Lewis, Pat Verga and @iaugenstein.bsky.social

arxiv.org/abs/2410.11900
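FLARE itself has the LLM write the logic program and simulate the solver's search (details in the paper). Purely as an illustration of the underlying idea, here is a toy propositional backward chainer that exhaustively searches a set of Horn rules; the facts and rules are made up:

def prove(goal, facts, rules, depth=10):
    """Exhaustively search for a proof of `goal` from propositional
    Horn rules (head, [body...]) via backward chaining."""
    if goal in facts:
        return [goal]  # proof leaf: goal is a known fact
    if depth == 0:
        return None  # give up on overly deep branches
    for head, body in rules:
        if head != goal:
            continue
        proof = [goal]
        for subgoal in body:
            sub = prove(subgoal, facts, rules, depth - 1)
            if sub is None:
                break  # this rule fails; try the next one
            proof += sub
        else:
            return proof  # every subgoal proved
    return None

facts = {"parent(ann,bob)", "parent(bob,cat)"}
rules = [("grandparent(ann,cat)", ["parent(ann,bob)", "parent(bob,cat)"])]
print(prove("grandparent(ann,cat)", facts, rules))
# ['grandparent(ann,cat)', 'parent(ann,bob)', 'parent(bob,cat)']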
November 8, 2024 at 2:13 PM
Reposted by Erik Arakelyan
At #EMNLP2024 we will present our paper on LLM values and opinions!

We introduce tropes: repeated and consistent phrases that LLMs generate to argue for political stances.

Read the paper to learn more! arxiv.org/abs/2406.19238
Work done at the University of Copenhagen + the Pioneer Center for AI.
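To illustrate what "repeated and consistent phrases" means operationally (a hypothetical sketch, not the paper's actual trope-extraction pipeline), one could count n-grams that recur across many independent generations for the same prompt:

from collections import Counter
from itertools import chain

def find_tropes(generations, n=4, min_count=3):
    """Surface n-grams that recur across independent LLM generations
    on the same prompt; frequent ones are trope candidates."""
    def ngrams(text):
        toks = text.lower().split()
        return zip(*(toks[i:] for i in range(n)))
    counts = Counter(chain.from_iterable(ngrams(g) for g in generations))
    return [(" ".join(g), c) for g, c in counts.most_common() if c >= min_count]

# Made-up outputs for one political prompt.
outputs = [
    "it is important to consider both sides of the issue",
    "we must consider both sides of the issue carefully",
    "always consider both sides of the issue",
]
print(find_tropes(outputs))
# e.g. [('consider both sides of', 3), ('both sides of the', 3), ('sides of the issue', 3)]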
November 7, 2024 at 2:57 PM
Reposted by Erik Arakelyan
Hey! 🙂 we analysed what happens during pre-training, and for causal LMs, intra-document causal masking helps quite a bit both in terms of pre-training dynamics and downstream task performance: arxiv.org/abs/2402.13991
Analysing The Impact of Sequence Composition on Language Model Pre-Training (arxiv.org)
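A minimal numpy sketch of the intra-document causal masking described above, assuming per-token document IDs for a packed sequence (the paper's actual training setup is in the linked preprint):

import numpy as np

def intra_document_causal_mask(doc_ids):
    """Boolean attention mask for a packed sequence: position i may
    attend to position j only if j <= i (causal) AND tokens i and j
    belong to the same document (intra-document masking)."""
    doc_ids = np.asarray(doc_ids)
    n = doc_ids.shape[0]
    causal = np.tril(np.ones((n, n), dtype=bool))
    same_doc = doc_ids[:, None] == doc_ids[None, :]
    return causal & same_doc

# Two documents packed into one sequence of 6 tokens.
mask = intra_document_causal_mask([0, 0, 0, 1, 1, 1])
print(mask.astype(int))
# Tokens 3-5 cannot attend back to tokens 0-2 of the previous document,
# unlike plain causal masking over the concatenated sequence.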
November 8, 2024 at 9:05 AM