Bahaeddin ERAVCI
@beravci.bsky.social
Learning with machines&data
Also interested in neuroscience & philosophy of mind
PhD @BilkentCS, Asst. Prof. of AI @TOBB ETU
ML | AI | HealthAI | Multimodal
Started my NLP lectures today exploring the fascinating levels of natural language.

This slide features an interesting example: Turkish written in Greek letters (orthography) on a historic tombstone from Istanbul. A poetic and meaningful lesson that transcends barriers...
May 6, 2025 at 8:59 AM
Don's main distinctions for a CS mentality:
- the ability to jump very quickly between levels of abstraction, between a low level and a high level, almost unconsciously
- the ability to deal with non-uniform (he means mathematically discontinuous; discrete, IMO) structures
March 15, 2025 at 10:49 AM
Came across a book (actually a transcript of lectures at @mitofficial.bsky.social) by CS legend Donald Knuth, the author of The Art of Computer Programming. Not nearly as popular as TAOCP.

Love the line "Computer God talks about God" in the foreword, we'll see where it leads...
March 9, 2025 at 1:11 PM
#Severance isn’t a typical TV show. It’s a sharp dive into the philosophy of mind, probing identity, memory, and mind-body duality with surprising depth. Highly recommend...
February 27, 2025 at 6:02 AM
Some reflections and insights after NIPS 1993 by Leo Breiman, known for developing CART, bagging, and random forests.

I always find the less formal writings of the pioneers more insightful.
December 29, 2024 at 8:46 AM
The infamous incident of a "cultural generalization made by a keynote speaker" shows how bias and mis-generalization are hard problems even for humans (even for an MIT professor). So maybe we should be more compassionate with LLMs trained on our data.
December 14, 2024 at 7:37 PM
#NeurIPS and other major conferences should consider making presentations, at least the important keynotes/highlights, publicly available.

I could easily make an argument based on the public funding behind the research presented. Funding agencies could also support this for more open science.
December 12, 2024 at 6:33 AM
GPU poor man's home setup ready for a long night...
December 7, 2024 at 8:05 PM
When we're talking about learning (machine or biological), we should not forget the giant feedback loop we are trying to model and infer.

Folks who think AGI can be achieved from internet text with scale alone either:
- are hyping for their personal gain, or
- don't have a clue whatsoever
November 25, 2024 at 5:31 PM
Along with the great lectures on MIT OCW.

RIP Gilbert. Fun fact: his favorite matrix was a basic differentiator.
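A minimal sketch of what a "differentiator" matrix could look like: a forward-difference matrix that maps a vector of samples to their successive differences, a finite-difference approximation of the derivative (an illustration, not necessarily the exact matrix the post has in mind):

```python
import numpy as np

def difference_matrix(n):
    """(n-1) x n forward-difference matrix: (D @ x)[i] = x[i+1] - x[i]."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i] = -1.0      # subtract the current sample
        D[i, i + 1] = 1.0   # add the next sample
    return D

# Samples of t^2 at t = 0..4; the differences approximate the derivative 2t.
x = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
D = difference_matrix(len(x))
print(D @ x)  # [1. 3. 5. 7.]
```

Multiplying by D turns differentiation into plain linear algebra, which is exactly the kind of bridge between calculus and matrices the lectures emphasize.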
November 24, 2024 at 6:37 AM