Simen
@eide.ai
AI statistician (is that something?) building algorithms for Schibsted news media during the daytime; outdoors and always injured the rest of the time. Based in Norway. Not sure what I'll do here yet
I thought about this one too, then just got confused myself and lost the point ;)
June 6, 2025 at 9:27 AM
luckily, one of the advantages of language models is that they are very good at translating into whatever language you want ;)
June 5, 2025 at 5:33 PM
I was thinking in terms of methodology. I'm reading, but it takes some time 😅
December 11, 2024 at 7:27 AM
😅
December 10, 2024 at 5:36 PM
For someone new to this, how do these normalizing flows compare to the CNFs and flow matching in Lipman et al. (2024)?
December 10, 2024 at 5:28 PM
But yeah, sure 😅
December 4, 2024 at 9:56 PM
The total can end up the same, just with less tax on labor before it becomes inheritance
December 4, 2024 at 9:55 PM
Taxing rich people who are dead, who could be against that? ;)
December 4, 2024 at 9:00 AM
Taxes are a nuisance. But everyone agrees we have to fund the state at some level or other (with the exception of the anarchists)
December 4, 2024 at 8:54 AM
not that anyone else owns that verb
November 28, 2024 at 12:54 PM
The paper will be presented at the #NLDL conference in Tromsø in January.

And I have successfully managed to confuse my bsky algo by mixing some personal paragliding and professional #LLM content here
November 25, 2024 at 2:36 PM
The added benefit of being Bayesian here is that the adapters for tasks with less data will stay closer to the hierarchical mean parameter set Θ, and therefore learn from the other tasks!
November 25, 2024 at 2:36 PM
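A minimal sketch of the resulting per-task objective in PyTorch, assuming a Gaussian prior θ_d ~ N(Θ, τ²I); the function name, tau, and the parameter lists are hypothetical illustrations, not the paper's actual implementation:

```python
import torch

def hierarchical_adapter_loss(data_nll, theta_d, Theta, tau=1.0):
    # MAP objective for one task adapter under the assumed prior
    # theta_d ~ N(Theta, tau^2 I): data term plus shrinkage term.
    # theta_d and Theta are matching lists of parameter tensors.
    prior_term = sum(((p - m) ** 2).sum() for p, m in zip(theta_d, Theta))
    return data_nll + prior_term / (2 * tau ** 2)
```

With few examples the data term contributes little gradient, so the shrinkage term dominates and θ_d stays near Θ, which is the effect described in the post above.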
Effectively, each task adapter will be optimized both towards the training data and to be similar to the other adapters. The adapters will therefore share knowledge between them!
November 25, 2024 at 2:36 PM
It works by constructing a hierarchical LLM where each task adapter parameter set θ_d has a prior centered on a shared hierarchical mean parameter set Θ.
November 25, 2024 at 2:36 PM
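A sketch of the hierarchy being described, assuming a Gaussian prior with scale τ (the thread does not specify the exact form of the prior):

```latex
\theta_d \sim \mathcal{N}(\Theta, \tau^2 I), \qquad d = 1, \dots, D
```

Under this assumed prior, the MAP objective for task d becomes the usual data term plus a penalty \frac{1}{2\tau^2}\lVert \theta_d - \Theta \rVert^2 pulling each adapter toward the shared mean.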
It outperforms both the case where you train a shared adapter for all tasks and the case where you train one adapter per task independently on our dataset.
November 25, 2024 at 2:36 PM
You find an interesting platform and immediately pipe it into your slow, legacy work chat platform?
November 23, 2024 at 10:41 AM