Bahaeddin ERAVCI
@beravci.bsky.social
Learning with machines & data
Also interested in neuroscience & philosophy of mind
PhD @BilkentCS, Asst. Prof. of AI @TOBB ETU
ML | AI | HealthAI | Multimodal
Started my NLP lectures today exploring the fascinating levels of natural language.

This slide features an interesting example: Turkish written in Greek letters (orthography) on a historic tombstone from Istanbul. A poetic and meaningful lesson that transcends barriers...
May 6, 2025 at 8:59 AM
The universe, in a baffling sense, creates local pockets of complexity (decreasing entropy through biological/physical self-organized structures) while relentlessly advancing toward a state of maximum global entropy.

www.quantamagazine.org/why-everythi...
Why Everything in the Universe Turns More Complex | Quanta Magazine
A new suggestion that complexity increases over time, not just in living organisms but in the nonliving world, promises to rewrite notions of time and evolution.
www.quantamagazine.org
April 7, 2025 at 5:31 AM
Reposted by Bahaeddin ERAVCI
Don's main distinction for a CS mentality:
- the ability to jump very quickly between levels of abstraction, between a low level and a high level, almost unconsciously
- the ability to deal with non-uniform (he means mathematically discontinuous, i.e. discrete, IMO) structures
March 15, 2025 at 10:49 AM
Came across a book (actually a transcript of lectures at @mitofficial.bsky.social) by CS legend Donald Knuth, the author of The Art of Computer Programming. Not nearly as popular as TAOCP.

Love the line "Computer God talks about God" in the foreword; we'll see where it leads...
March 9, 2025 at 1:11 PM
The origins of aesthetics are really fascinating. Why do we deem this scene utterly spectacular and even tie it to the "long-tailed mountain lady"? What mechanisms shaped this "taste", and how?
March 2, 2025 at 5:44 AM
#Severance isn’t a typical TV show. It’s a sharp dive into the philosophy of mind, probing identity, memory, and mind-body duality with surprising depth. Highly recommend...
February 27, 2025 at 6:02 AM
There appears to be a striking correlation between ignorance on a topic and the confidence with which people make bold statements about it.

Can easily use this principle as a de-noising filter...
January 18, 2025 at 8:31 AM
The self (and its counterpart, the other) is a very handy abstraction for making the most of our limited processing power.

The illusion of free will is a beneficial yet erroneous causal explanation we created after observing the self interacting with the other(s) for some time.

m.youtube.com/watch?v=_Ig9...
The illusion of self and the illusion of free will, explained | Annaka Harris
YouTube video by Big Think
m.youtube.com
January 12, 2025 at 6:47 PM
People with absolutely no theoretical or practical knowledge/experience of deep learning (not a single call to nvidia-smi) seem to predict the future of AI with ease.

Their self-proclaimed prophetic confidence still amazes me.
January 1, 2025 at 1:43 PM
Some reflections and insights after NIPS 1993 by Leo Breiman, known for developing CART, bagging, and random forests.

I always find the less formal writings of the pioneers more insightful.
December 29, 2024 at 8:46 AM
The intersection of information theory and complexity theory has always been very interesting.

While entropy quantifies global uncertainty (potential information), observer-dependent entropy brings in the observer’s view (its world model) to define subjective uncertainty.

www.quantamagazine.org/what-is-entr...
What Is Entropy? A Measure of Just How Little We Really Know. | Quanta Magazine
Exactly 200 years ago, a French engineer introduced an idea that would quantify the universe’s inexorable slide into decay. But entropy, as it’s currently understood, is less a fact about the world th...
www.quantamagazine.org
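A toy illustration of that contrast, using only Python's standard library (the 4-state system and the "even outcome" side information are hypothetical numbers, purely for illustration): the same system carries less entropy for an observer whose world model rules out half the outcomes.

```python
import math

def entropy(p):
    """Shannon entropy in bits: H(p) = -sum p_i * log2(p_i)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Global view: a fair 4-sided die, maximal uncertainty for an ignorant observer.
h_global = entropy([0.25] * 4)   # 2.0 bits

# Observer view: a world model that knows "the outcome is even"
# conditions the same system down to 2 equally likely states.
h_observer = entropy([0.5, 0.5])  # 1.0 bit
```

Same physical system, different uncertainty: the entropy is a property of the observer's description, not just of the world.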
December 29, 2024 at 6:59 AM
AGI isn't around the corner, and scaling auto-regressive LLMs won't get us to AGI.

I argue that, while AR-LLMs are great improvements, we need some very important paradigm shifts, drawing lessons from the past.

open.substack.com/pub/beravci/...
Bridging Generative AI and Truth: Ancient Lessons for Modern Tech
When a seasoned lawyer last year filed a brief citing six precedent-setting cases, he trusted the AI chatbot that assisted him.
open.substack.com
December 22, 2024 at 1:37 PM
The infamous event of the "cultural generalization made by a keynote speaker" shows how bias and mis-generalization are hard problems even for humans (even for an MIT professor). So maybe we should be more compassionate with LLMs trained on our data.
December 14, 2024 at 7:37 PM
#NeurIPS and other major conferences should consider making presentations, at least the important keynotes/highlights, publicly available.

One could easily justify this given the public funding behind the research presented. Funding agencies could also support this for more open science.
December 12, 2024 at 6:33 AM
Feels like the beginning of the 1900s, with huge discoveries each year, but this time with huge strides in tech.

Exciting to be a witness to a tech revolution ranging from AI to quantum computing...

blog.google/technology/r...
Meet Willow, our state-of-the-art quantum chip
Our new quantum chip demonstrates error correction and performance that paves the way to a useful, large-scale quantum computer.
blog.google
December 9, 2024 at 8:32 PM
A GPU-poor man's home setup, ready for a long night...
December 7, 2024 at 8:05 PM
This saga reminds me of the access modifiers I teach in my Java OOP course.

Maybe we need something similar in the generative AI age:
- Public: Content accessible to both AI and humans
- Protected: Human-only consumable public content

Easier said than done with a lot of technicalities though...
I've removed the Bluesky data from the repo. While I wanted to support tool development for the platform, I recognize this approach violated principles of transparency and consent in data collection. I apologize for this mistake.
First dataset for the new @huggingface.bsky.social @bsky.app community organisation: one-million-bluesky-posts 🦋

📊 1M public posts from Bluesky's firehose API
🔍 Includes text, metadata, and language predictions
🔬 Perfect to experiment with using ML for Bluesky 🤗

huggingface.co/datasets/blu...
November 28, 2024 at 5:27 AM
Test-of-time awards are the **real impact** metrics, akin to revolutionary science in the Kuhnian sense.

Congrats to @ian-goodfellow.bsky.social and Ilya for GANs and Seq2Seq.

blog.neurips.cc/2024/11/27/a...
Announcing the NeurIPS 2024 Test of Time Paper Awards  – NeurIPS Blog
blog.neurips.cc
November 27, 2024 at 5:57 PM
When we're talking about learning (machine or biological), we should not forget the giant feedback loop we are trying to model and infer.

Folks who think AGI can be achieved from internet text with scale alone either:
- Are hyping for their personal gain
- Don't have a clue whatsoever
November 25, 2024 at 5:31 PM
Swiss church installs AI-powered Jesus:
Two-thirds of the users found it to be a “spiritual experience”

"I think there is a thirst to talk with Jesus."
www.theguardian.com/technology/2...
Deus in machina: Swiss church installs AI-powered Jesus
Peter’s chapel in Lucerne swaps out its priest to set up a computer and cables in confessional booth
www.theguardian.com
November 23, 2024 at 8:06 PM
A lot of contemporary discussions in AI/ML have a corresponding philosophical discourse. For example, the use of copyrighted material in LLMs is linked to the Ship of Theseus paradox.

We frankly don't have answers to these problems and may never have an objective answer.
November 23, 2024 at 5:42 PM
Intelligence is hard to define, but it is related to in-domain vs. out-of-domain generalization.

The trivial case is k-NN in-domain retrieval. AR-LLMs have better interpolation performance but usually fail catastrophically out of domain. Defining "in vs. out of domain" for any given task isn't as easy as it looks, though.
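The trivial k-NN case can be sketched in a few lines of NumPy (the target function y = x² and all query points are hypothetical choices, just to make the failure mode visible): a nearest-neighbor regressor interpolates well inside its training range but can only output averages of training targets, so outside the range it is clamped to the values it has seen.

```python
import numpy as np

def knn_predict(x_train, y_train, x_query, k=3):
    """Predict by averaging the targets of the k nearest training points."""
    dists = np.abs(x_train[:, None] - x_query[None, :])  # (n_train, n_query)
    nearest = np.argsort(dists, axis=0)[:k]              # k nearest per query
    return y_train[nearest].mean(axis=0)

# "In-domain" is the training range: y = x^2 sampled over [0, 1].
x_train = np.linspace(0, 1, 100)
y_train = x_train ** 2

# In-domain query at x = 0.5: interpolation, prediction lands near 0.25.
in_dom = knn_predict(x_train, y_train, np.array([0.5]))

# Out-of-domain query at x = 3.0: the true value is 9.0, but the prediction
# stays near 1.0, the largest target ever seen in training.
out_dom = knn_predict(x_train, y_train, np.array([3.0]))
```

The same clamping intuition is what makes "interpolation vs. extrapolation" a useful, if rough, lens on AR-LLM failures.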
November 22, 2024 at 6:36 PM
Starter packs were very instrumental in getting the conversation going.

But is there any way we can save a starter pack as a list, @support.bsky.team? If not, I would really appreciate it if we could.
November 21, 2024 at 4:39 PM
When the ancients, whether rationalists or mystics, urged 'Know thyself,' they were pointing to our faculty of reason and cognition.

Just started it, and it feels like it is going to be a classic. Grateful for the dopamine hits of intellectual pleasure...
(1/5) Very excited to announce the publication of Bayesian Models of Cognition: Reverse Engineering the Mind. More than a decade in the making, it's a big (600+ pages) beautiful book covering both the basics and recent work: mitpress.mit.edu/978026204941...
November 21, 2024 at 5:24 AM
Spinning up #Bluesky to increase the signal-to-noise ratio. I do hope their recommendation system doesn't mess up @bsky.app.

Frankly, **anything** is probably better than X right now.
November 20, 2024 at 8:58 AM