Marc Lanctot
@sharky6000.bsky.social
Research Scientist at Google DeepMind, interested in multiagent reinforcement learning, game theory, games, and search/planning.

Lover of Linux 🐧, coffee ☕, and retro gaming. Big fan of open-source. #gohabsgo 🇨🇦

For more info: https://linktr.ee/sharky6000
Super happy to see Gemini 3 released! ✨️🥳

And thanks for posting this thread on Bluesky! 🙏💙
November 19, 2025 at 4:57 AM
+1, following now, thanks @tedunderwood.com 👍
November 19, 2025 at 4:48 AM
Reposted by Marc Lanctot
Learn more about how Gemini 3 can help you learn, build and plan anything → goo.gle/4oUEkVu
A new era of intelligence with Gemini 3
Today we’re releasing Gemini 3 – our most intelligent model that helps you bring any idea to life.
November 18, 2025 at 4:53 PM
💯 !
November 19, 2025 at 4:47 AM
Thanks!
November 19, 2025 at 4:04 AM
💯 I am thinking it's exactly this.
November 18, 2025 at 1:05 PM
Specifically, I thought Buffett said:

"If the ratio approaches 200% -- as it did in 1999 and a part of 2000 -- you are playing with fire."

recently, but he didn't. He said this in 2001. 😅

So has he changed his opinion in 24 years...? Or is he just looking to "make a quick buck"?

2/2
November 18, 2025 at 12:39 PM
Yes, I have heard similar stories... that's an example of conference reviewing being broken. No consequences for this AC/SAC, I am guessing.
November 15, 2025 at 6:29 PM
If we *do 🙄
November 15, 2025 at 5:55 PM
I mean calling them "guidelines" practically implies zero enforcement out of the gate.

So, sure, I can verbally abuse my reviewers and AC and still get my paper in.

I believe any codes of conduct we have are for in-person conference behavior. I meant one for reviewing.
November 15, 2025 at 5:47 PM
Someone else said the same but I don't believe so...? Or, if we don't, they are not strict or descriptive enough because there's no enforcement/consequences AFAICT.
November 15, 2025 at 5:44 PM
Won't be me if that's what you are saying.

I am taking an indefinite break from conference reviewing after this year's AAMAS.
November 15, 2025 at 5:42 PM
Hmm.. not sure I would agree that AI safety is imaginary or not as practical. These models are being very widely adopted, and if multiagent/games approaches could improve safety generally, I would say that'd be a practical use of games.
November 15, 2025 at 3:51 PM
It's the inherent difficulty involved. Cicero took a whole team and Meta-level resources. It'll def be cool to see that kind of result again, but it's always a big risk and so sometimes hard to justify even if you are willing to put in the hard technical work.
November 15, 2025 at 3:48 PM
This is a concern I have over the wider use of LLMs. Everything is starting to feel fake.. so much so that the smallest dose of authenticity is really noticeable and will become overvalued in the next few years.
November 15, 2025 at 3:43 PM
I still do, but I admit that I tried once. LLM-generated reference letters are either so obviously fake that I feel embarrassed using them, or providing context costs so much time that I don't save any.
November 15, 2025 at 3:43 PM
But there is still a lot to learn and understand, and these communities don't always talk to each other nor hang out at the same venues. 😅
November 15, 2025 at 1:23 AM