Petal Mokryn
petalmokryn.bsky.social
B.Sc. double-majored in physics & applied math. Aspiring interdisciplinary mathematician. Data-driven, information-theoretic, Bayesian approaches to modeling complex & nonequilibrium systems.

They/Them

Most likes are papers added to my literature search
- on an emotional level. Concrete predictions, put into brief but compelling story form.

Would that be any good in your opinion, or nah? 😅
September 21, 2025 at 12:03 PM
If the goal is to get the message across to people not already informed on the matter, maybe specific forecasts on likely ways society will collapse if the issues aren’t resolved?

A few different possibilities (gotta represent forecasting uncertainty ofc), each a story hammering the point home -
September 21, 2025 at 12:02 PM
There are still limitations, of course - especially in mathematical tractability. IFT is a statistical field theory, and things can get complicated fast if the spatial/spatiotemporal data you’re trying to infer has particularly complicated dynamics/statistics.
September 19, 2025 at 1:11 PM
I also think there’s a lot of room both in making new & exciting variations on the method, and in applying it across a variety of domains.

Oh it’s also very scalable. That’s a major bonus. Can’t forget about the scalability.
September 19, 2025 at 12:25 PM
* decisions on points in a functional Hilbert space rather than R^n.

As in the points segregated by hyperplane cuts being functions in the Hilbert space.
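A minimal sketch of what I mean (my own toy illustration - the kernel choice, prototype function, and threshold are all invented for the example): a single kernel "decision stump" whose split acts on whole functions, each represented by its values on a shared time grid.

```python
import numpy as np

def rbf_kernel(f, g, sigma=1.0):
    """RBF kernel built from an approximate L2 distance between two functions."""
    dist_sq = np.mean((f - g) ** 2)  # ≈ ∫(f-g)² dt on a unit-length grid
    return np.exp(-dist_sq / (2.0 * sigma ** 2))

def kernel_stump(f, reference, threshold=0.9):
    """Split rule: is f similar (in kernel terms) to the reference function?"""
    return rbf_kernel(f, reference) > threshold

grid = np.linspace(0.0, 1.0, 100)
reference = np.sin(2 * np.pi * grid)  # prototype "oscillating" trajectory

rng = np.random.default_rng(0)
oscillating = np.sin(2 * np.pi * grid) + 0.05 * rng.normal(size=grid.size)
flat = np.zeros(grid.size)

print(kernel_stump(oscillating, reference))  # True - close to the prototype
print(kernel_stump(flat, reference))         # False - far from it
```

In the medical-treatment example, the reference could be a prototype "responding well" trajectory of a patient’s health indicator, with deeper tree nodes comparing against other prototypes.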
August 17, 2025 at 11:08 PM
- is deciding on how to continue a medical treatment, with one of the factors being the trajectory of a patient’s health indicators data throughout ongoing treatment.
August 17, 2025 at 11:06 PM
I wonder if kernel decision trees (kernels in decision trees) may be useful when working with functional data. Essentially the decision tree works the same, just with decisions made in a functional Hilbert space rather than in R^n.

One example I’m thinking of - (1/2)
August 17, 2025 at 11:04 PM
I think there can be a lot of nuance to it! At the end it’s all about what we want to infer from the data.

If e.g. the behavior of interest is simple but how predictors cause it is super complicated, maybe a good choice is a feature-extracting ML algo whose output is fed as input to a simple model
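A toy sketch of that two-stage setup (my own illustration - the data, rank, and sizes are made up): a feature-extraction step (here PCA via SVD) whose output feeds a deliberately simple downstream model (least squares).

```python
import numpy as np

rng = np.random.default_rng(0)
factors = rng.normal(size=(200, 3))             # hidden drivers of the behavior
loadings = rng.normal(size=(3, 50))
X = factors @ loadings + 0.1 * rng.normal(size=(200, 50))  # messy predictors
y = factors[:, 0] - 2.0 * factors[:, 1] + 0.1 * rng.normal(size=200)

# Stage 1: feature extraction - project onto the top-3 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

# Stage 2: a simple model on the extracted features.
coef, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
pred = Z @ coef + y.mean()
print(np.corrcoef(pred, y)[0, 1])  # high: the simple model suffices on good features
```

The point being: the complicated part (disentangling 50 correlated predictors) is handled by the extractor, and the behavior itself is modeled with something trivially interpretable.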
August 14, 2025 at 6:26 PM
Personally, I feel ML methods are appropriate specifically when trying to capture dependencies in the data that are a priori expected to be too complicated & subtle to capture otherwise.

In general, I think model complexity should scale with the anticipated complexity of the patterns of interest
August 14, 2025 at 6:21 PM
Yeah, also his machinery of Bayesian updating in the face of constraints rather than in the face of data is pretty powerful in my opinion.

One does have to already be extremely familiar with the type of system in question to come up with good constraints though…
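A toy sketch of the idea (the classic Brandeis-dice example, not anything from this thread specifically): MaxEnt updating from a *constraint* rather than from data. Say all we know about a die is that its mean roll is 4.5; MaxEnt gives p_i ∝ exp(λ·i), with λ fixed by the constraint.

```python
import numpy as np

faces = np.arange(1, 7)
target_mean = 4.5

def maxent_dist(lam):
    """MaxEnt distribution over faces for Lagrange multiplier lam."""
    w = np.exp(lam * faces)
    return w / w.sum()

def mean_gap(lam):
    return maxent_dist(lam) @ faces - target_mean

# Solve the Lagrange-multiplier condition mean_gap(lam) = 0 by bisection
# (mean_gap is monotone increasing in lam).
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_gap(mid) < 0.0:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
p = maxent_dist(lam)

print(p)           # tilted toward the high faces
print(p @ faces)   # 4.5, matching the constraint
```

And this illustrates the caveat above: the machinery is easy, but *choosing* the constraint (here, the mean) is where the domain knowledge has to come in.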
August 11, 2025 at 8:37 AM
I personally feel it was very lucky for me that my intro to statistical inference was E. T. Jaynes’ MaxEnt & MaxCal methods, followed by Solomonoff induction & Minimum Message Length.

Gave me some great Bayesian information theoretic fundamentals to use as a lens to parse everything else with.
August 10, 2025 at 12:07 PM
Their paper doesn’t actually implement everything they discuss in the theory section - the guiding principle isn’t fully realized - but their results are still nice as far as I can tell, and their elucidation of the guiding principle is a very good thing to keep in mind.
August 9, 2025 at 5:08 AM
I have nothing to do with this paper or any of the authors, I just read its abstract & felt compelled to share it
August 6, 2025 at 8:32 AM
There’s an interesting ongoing project by Gabriele Carcassi et al: assumptionsofphysics.org

Trying to understand the fundamental physical principles from which the mathematical structure of quantum mechanics emerges (& how those principles differ from the classical ones).
July 30, 2025 at 5:30 PM
But yeah, I like having a good intro to the different parts of a field of study - one that refers me to specific sources on specific topics for further study.

I personally feel it helps me (a) get an initial gist of the field and (b) make an informed choice on what parts of it to study more in depth.
June 20, 2025 at 1:03 PM
For me personally as a student trying to enter research, I really like reading field-of-study lit reviews when I’m trying to study a field that’s new to me. They can really make it feel less overwhelming and more accessible for me.

Ofc, my perspective is that of a beginner, so 😅
June 20, 2025 at 12:59 PM
And I wonder, has any work been done on exploring the chaos-theoretic properties of gen-AI models?

Their spectrum of Lyapunov exponents, their attractors, etc?
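A toy sketch of the quantity I mean (the classic textbook example, nothing gen-AI-specific): the largest Lyapunov exponent of the logistic map x_{n+1} = r·x·(1-x), estimated as the orbit average of log|f'(x)|. For a generative model, the analogous question is how fast nearby states diverge when you iterate the model on its own outputs.

```python
import numpy as np

def lyapunov_logistic(r, x0=0.3, n_transient=1_000, n_steps=50_000):
    """Estimate the Lyapunov exponent of the logistic map at parameter r."""
    x = x0
    for _ in range(n_transient):  # discard the transient
        x = r * x * (1.0 - x)
    log_sum = 0.0
    for _ in range(n_steps):
        x = r * x * (1.0 - x)
        log_sum += np.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
    return log_sum / n_steps

print(lyapunov_logistic(4.0))  # positive, close to ln 2 (chaotic regime)
print(lyapunov_logistic(2.5))  # negative (orbit settles on a fixed point)
```

For a high-dimensional model you’d track a whole tangent basis with periodic re-orthonormalization to get the full spectrum, not just the largest exponent.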
April 30, 2025 at 10:47 PM
“…In this paper, we develop our method ClusterCluck TLK, and demonstrate its superiority on real and synthetic small world networks…”
March 24, 2025 at 8:33 AM
I am concerned about AI being used for the following, though:
1) Plagiarism
2) Faked data
3) Flooding journals with garbage papers wasting everyone’s time & labor
March 17, 2025 at 8:38 PM
Still, given my current understanding of LLMs and ML in general, I’m not holding my breath for them to take our research jobs; I expect them to remain just another tool in our toolbox - one that can be good or bad depending on usage, like any other.
March 17, 2025 at 8:36 PM
Tbh in that kinda case it’d just be a legitimate research tool, I think.

If it’s meritorious, not plagiarizing, and not faking the data, then I don’t think it’s an issue.
March 17, 2025 at 8:34 PM
I think the more insidious thing might be AI generated data rather than LLM generated papers, honestly.

The latter can be reliably subjected to the standard of “Merit, y/n? Plagiarism, y/n?”, but the former is much more difficult to detect, especially if the data is faked in sophisticated ways.
March 17, 2025 at 8:25 PM