OPTIMA Research Fellow working on Modelling Decision and Optimization problems, Programming Languages, and AI at Monash University, Melbourne, Australia. Developer of the MiniZinc language. I also talk about music, birds, cycling, and cricket.
The second talk in the session, titled "Multi-task Representation Learning for Mixed Integer Linear Programming", won the best paper award. Congratulations to Junyang Cai, Taoan Huang, and Bistra Dilkina! doi.org/10.1007/978-...
November 12, 2025 at 4:55 AM
I think you’re right that ORCID captures a small subset. I’m not sure whether DOIs are the problem. Many other (pre-publication) systems, such as Zenodo and ResearchGate, allow you to create DOIs. I think the question really is just what is indexed.
October 27, 2025 at 10:54 AM
Glad to see that DTrace support in Rust is looking a lot better than the last time I looked. The usdt crate is much better documented and has many more features now. Also excited to see some examples of scripts using the nextest probes!
October 19, 2025 at 6:48 AM
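For context, here is a minimal sketch of defining and firing USDT probes with the usdt crate, following its documented provider-macro pattern; the provider and probe names (work_probes, task_start, task_done) are made up for illustration and have nothing to do with nextest's actual probes.

```rust
// Sketch only: illustrative provider and probe names, not a real instrumentation scheme.

// The attribute macro turns each function in the module into a USDT probe;
// the module name becomes the DTrace provider name.
#[usdt::provider]
mod work_probes {
    fn task_start(id: u64) {}
    fn task_done(id: u64, outcome: String) {}
}

fn main() {
    // Probes must be registered with the kernel before they can fire.
    usdt::register_probes().expect("failed to register USDT probes");

    // Probe macros take a closure, so arguments are only constructed
    // when the probe is actually enabled by a DTrace consumer.
    work_probes::task_start!(|| 7u64);
    // ... do the work ...
    work_probes::task_done!(|| (7u64, "ok".to_string()));
}
```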
The top line is that we're never going to get rid of hallucinations: it's just the way LLMs are built; they're not understanding, they're guessing based on stats. But maybe LLMs can be fine-tuned to sound less confident, so humans aren't so taken in by them and use them more appropriately?
September 11, 2025 at 9:11 AM