Theory of Computing Report
theory.report
TR25-206 | Permanental rank versus determinantal rank of random matrices over finite fields | Fatemeh Ghasemi, Gal Gross, Swastik Kopparty
This paper is motivated by basic complexity and probability questions about permanents of random matrices over small finite fields, and in particular, about properties separating the permanent and the determinant. Fix $q = p^m$ some power of an odd prime, and let $k \leq n$ both be growing. For a uniformly random $n \times k$ matrix $A$ over $\mathbb F_q$, we study the probability that all $k \times k$ submatrices of $A$ have zero permanent; namely that $A$ does not have full *permanental rank*. When $k = n$, this is simply the probability that a random square matrix over $\mathbb F_q$ has zero permanent, which we do not understand. We believe that the probability in this case is $\frac{1}{q} + o(1)$, which would be in contrast to the case of the determinant, where the answer is $\frac{1}{q} + \Omega_q(1)$. Our main result is that when $k$ is $O(\sqrt{n})$, the probability that a random $n \times k$ matrix does not have full permanental rank is essentially the same as the probability that the matrix has a $0$ column, namely $(1 +o(1)) \frac{k}{q^n}$. In contrast, for determinantal (standard) rank the analogous probability is $\Theta(\frac{q^k}{q^n})$. At the core of our result are some basic linear algebraic properties of the permanent that distinguish it from the determinant.
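For intuition about the quantity being studied, here is a small Monte Carlo sketch that estimates the probability of a zero permanent for tiny random square matrices, using Ryser's formula. It assumes $q$ prime (the paper allows odd prime powers), and the parameters `p = 3`, `n = 6`, `trials = 5000` are arbitrary toy choices; this only illustrates the statement and is not taken from the paper.

```python
import random

def permanent_mod(A, p):
    """Permanent of a small square matrix via Ryser's formula, reduced mod a prime p."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):
        cols = [j for j in range(n) if mask >> j & 1]
        prod = 1
        for i in range(n):
            prod = prod * sum(A[i][j] for j in cols) % p
        sign = -1 if (n - len(cols)) % 2 else 1
        total = (total + sign * prod) % p
    return total

p, n, trials = 3, 6, 5000     # toy parameters; Ryser's formula is exponential in n, so keep n tiny
zeros = sum(
    permanent_mod([[random.randrange(p) for _ in range(n)] for _ in range(n)], p) == 0
    for _ in range(trials)
)
print(f"empirical Pr[perm(A) = 0] ~ {zeros / trials:.3f}   vs   1/q = {1 / p:.3f}")
```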
eccc.weizmann.ac.il
December 6, 2025 at 5:00 AM
TR25-205 | Fourier Sparsity of Delta Functions and Matching Vector PIRs | Fatemeh Ghasemi, Swastik Kopparty
In this paper we study a basic and natural question about Fourier analysis of Boolean functions, which has applications to the study of Matching Vector based Private Information Retrieval (PIR) schemes. For integers $m,r$, define a {\em delta function} on $\{0,1\}^r \subseteq \mathbb Z_m^r$ to be a function $f: \mathbb Z_m^r \to \mathbb C$ such that $f(0) = 1$ and $f(x) = 0$ for all nonzero {\em Boolean} $x$. The basic question that we study is how small the Fourier sparsity of a delta function can be; namely, how sparse can such an $f$ be in the Fourier basis? In addition to being intrinsically interesting and natural, such questions arise naturally while studying ``$S$-decoding polynomials'' for the known matching vector families. Finding $S$-decoding polynomials of reduced sparsity -- which corresponds to finding delta functions with low Fourier sparsity -- would improve the current best PIR schemes. We show nontrivial upper and lower bounds on the Fourier sparsity of delta functions. Our proofs are elementary and clean. These results imply limitations on improvements to the Matching Vector PIR schemes simply by finding better $S$-decoding polynomials. In particular, there are no $S$-decoding polynomials which can make Matching Vector PIRs based on the known matching vector families achieve polylogarithmic communication for constantly many servers. Many interesting questions remain open.
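As a toy illustration of the definition (not a construction from the paper), the brute-force check below builds a delta function on $\{0,1\}^r \subseteq \mathbb Z_m^r$ as a product of single-coordinate functions that equal $1$ at $0$ and vanish at $1$, then computes all of its Fourier coefficients over $\mathbb Z_m^r$ directly. The parameters `m = 3`, `r = 2` are arbitrary small choices; the product trick already gives sparsity $2^r$ rather than the trivial $m^r$.

```python
import numpy as np
from itertools import product

m, r = 3, 2                      # small parameters for a brute-force check
omega = np.exp(2j * np.pi / m)   # primitive m-th root of unity

# One-coordinate "delta": h(0) = 1, h(1) = 0, using only the characters t -> 1 and t -> omega^t.
beta = 1.0 / (1.0 - omega)
alpha = 1.0 - beta
h = lambda t: alpha + beta * omega**t

# Product construction on Z_m^r: f(0) = 1, and f(x) = 0 whenever some coordinate equals 1,
# so in particular f vanishes on all nonzero Boolean points.
def f(x):
    val = 1.0 + 0j
    for t in x:
        val *= h(t)
    return val

# Brute-force Fourier coefficients over Z_m^r: fhat(a) = m^{-r} * sum_x f(x) * omega^{-<a,x>}.
points = list(product(range(m), repeat=r))
fhat = {}
for a in points:
    s = sum(f(x) * omega**(-sum(ai * xi for ai, xi in zip(a, x))) for x in points)
    fhat[a] = s / m**r

# Sanity checks: the delta property on Boolean points, then count nonzero coefficients.
assert abs(f((0,) * r) - 1) < 1e-9
assert all(abs(f(x)) < 1e-9 for x in product(range(2), repeat=r) if any(x))
sparsity = sum(abs(c) > 1e-9 for c in fhat.values())
print("Fourier sparsity:", sparsity, "(compare m^r =", m**r, ")")
```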
eccc.weizmann.ac.il
December 6, 2025 at 5:00 AM
There's got to be a better way!
You might come away from Tuesday and Wednesday’s posts thinking I’m a fan of Reformist RL. I am decidedly not, so let me clarify my position.

I am a fan of the clarity the Reformist perspective brings. I like that it removes the magical and mystical storytelling from the field. I like that we can cleanly specify the problem without mathematics. I like that I can teach everything needed to spin up reinforcement learning in a single lecture of an undergraduate machine learning course. I like that we can now clearly talk about what RL _is_, without making tenuous cognitive analogies. And though I have admittedly not engaged with the data enough, Reformist RL does seem to have found a strong niche in fine-tuning language models. It seems to work there in a way far more convincing than I’ve seen in any other context.

But there’s an elephant in the room here that I have not discussed this week. In my experience, the techniques you get from reinforcement learning are almost always… bad. In both practice _and_ theory, RL is _never_ what you want. Let me describe what I mean, propose an alternative, and ask whether this alternative can be more broadly applied.

As a computational paradigm, reinforcement learning is brutally inefficient. Policy gradient, the core meta-algorithm of Reformist RL, requires near-infinite iterations back and forth with the environment to find solutions. You can even _prove_ this. Almost all of the theoretical results for reinforcement learning are negative! No matter how much mystical “variance reduction” or “advantage estimation” you implement, the _rules_ of reinforcement learning doom your methods to be inefficient.

For example, on Tuesday, I described how to use policy gradient to maximize arbitrary functions. The only information the algorithm has access to is noisy evaluations of the objective function. The RL interaction scheme has a technical name: _stochastic derivative-free optimization_ (a minimal sketch of this scheme appears below). In this model of optimization, the best algorithms require a number of samples cubic in the dimension of the search space. It is hard to find slower algorithms for minimizing differentiable functions.

Similarly, if you believe in Markov Decision Processes, optimal algorithms using the RL interaction scheme require a number of interactions proportional to the number of (state, action) pairs in the system. To find an optimal strategy, you need to observe the impact of every action in every conceivable configuration of the system multiple times. This is also a negative result. How many “states” does a video game have?

Moreover, even within these spaces where all algorithms are inefficient, naive implementations of policy gradient are particularly bad. The glacier-melting glacial slowness of policy gradient is why everyone spends so much time inventing new baseline strategies. Unfortunately, implementing these baselines and accelerations correctly is nontrivial. Even the most ardent RL believers will tell you, “Reinforcement learning results are tricky to reproduce: performance is very noisy, algorithms have many moving parts which allow for subtle bugs, and many papers don’t report all the required tricks.” In the same post, they write, “RL algorithms are challenging to implement correctly; good results typically only come after fixing many seemingly-trivial bugs.” You can try to tell me things are better 8 years later, but I’ve run PPO before, folks.

But there is hope.
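Here is the promised minimal sketch of that interaction scheme: a Gaussian “policy” is updated using only noisy evaluations of an objective it never differentiates (a score-function/REINFORCE estimator with a mean baseline). The quadratic objective and all the constants (`d`, `sigma`, `lr`, `iters`, `batch`) are illustrative assumptions, not anything from the post.

```python
import numpy as np

# Toy objective the "environment" scores; the algorithm only ever sees noisy evaluations of it.
def reward(x, rng):
    return -np.sum(x**2) + 0.1 * rng.standard_normal()

rng = np.random.default_rng(0)
d, sigma, lr, iters, batch = 20, 0.1, 1e-2, 5000, 8
theta = rng.standard_normal(d)

for _ in range(iters):
    # Policy = Gaussian centered at theta; "actions" are samples, feedback is only the score.
    xs = theta + sigma * rng.standard_normal((batch, d))
    scores = np.array([reward(x, rng) for x in xs])
    baseline = scores.mean()                      # the usual variance-reduction baseline
    grad = ((scores - baseline)[:, None] * (xs - theta)).mean(axis=0) / sigma**2
    theta += lr * grad                            # ascend the score-function gradient estimate
print("final objective estimate:", -np.sum(theta**2))
```

Even on this easy concave objective, the scheme burns tens of thousands of evaluations; a method allowed to see gradients would need only a handful of steps.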
Every time I have looked at a reinforcement learning problem, I’ve found an alternative implementation that is orders of magnitude more efficient. Every. Time. Notably, in most reinforcement learning settings, there is a reasonable alternative to policy gradient that requires vastly fewer samples from the associated environment: the principle of certainty equivalence.

1. Build a model of the environment using standard predictive tools.
2. Optimize as if the model were true.

(A minimal sketch of this recipe appears below.)

In the reinforcement learning model I put forward on Wednesday, you can _prove_ that certainty equivalence is optimal. I proved this was optimal for the multi-armed bandit in my live blog of Lecture 19. We spend a lot of time in Chapter 12 of _Patterns, Predictions, and Actions_ explaining how certainty equivalence is optimal in other RL settings, such as contextual bandits, MDPs, and optimal control.

Certainty equivalence also reveals that there are other signals you can take advantage of beyond “rewards” the environment provides. In the optimization example from above, an agent can run gradient descent instead of policy gradient, dramatically accelerating convergence. In control settings, state observations can be used to build models more quickly. And autonomous systems can be designed to seek more information if it’s helpful for performance. You can even make systems robust to modeling errors in the certainty equivalent paradigm. Moreover, and more convincingly, the principle of certainty equivalence is how most control design is actually done. Spend time in a lab building a model you believe in. Execute as if your model is true.

Alright, so let’s go back to why I’m even bothering to talk about RL in the first place. It’s not video games or robotics. It’s reasoning models.1 I awoke from my RL slumber because Dimitris Papailiopoulos kept trolling me about how RL worked now, but only in LLMs. I started poking around, and everyone was speaking in RL tongues again. They were saying value, advantage, actor, critic, GFYPO. But when I looked at the papers, all I saw was “guess and check.” Guess a bunch of answers to math questions, fine-tune the models when the answers are scored correctly. Damek Davis and I spent a couple of weeks reading ten thousand arXiv papers, and we determined that all of the new fancy reasoning methods were solving microvariations of the same problem: maximize the probability that the LLM would correctly answer questions from the given benchmark. It was this success in reasoning models that made me realize that guess-and-check is what RL really is. All of the MDP mumbo jumbo is secondary.

So let’s close the loop. In the past, reinforcement learning has always been incredibly hard to tune, inefficient, and slow. Is this time in LLMs different? It could be! RL in reasoning models looks different from RL applied to robotics or even the mess people call “RLHF”. Reasoning models could be the precise niche where it’s really the best thing to do, and we need to cover Kansas with data centers to solve the Putnam Exam. I understand there are capital interests that want us all to believe this is the only way. But what if they are wrong? What if some means of certainty equivalence is optimal here, too?

The arXiv certainly has no shortage of proposed alternatives to RL for reasoning models. Some people show that better prompts give better answers. Some people say you just need better synthetic data. Some people claim you just need to modify the sampler.
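And here is the promised sketch of the two-step recipe, for a toy multi-armed bandit: fit a model of the environment with ordinary sample means, then act as if the model were exact. The Gaussian rewards, the ten arms, and the fifty pulls per arm are all illustrative assumptions, not anything from the post or from Lecture 19.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = rng.uniform(0, 1, size=10)     # unknown to the algorithm

# Step 1: build a model of the environment with ordinary prediction (here, sample means).
pulls_per_arm = 50
estimates = np.array([
    rng.normal(mu, 1.0, size=pulls_per_arm).mean() for mu in true_means
])

# Step 2: optimize as if the model were true (commit to the arm the model says is best).
chosen = int(np.argmax(estimates))
print("chosen arm:", chosen, "best arm:", int(np.argmax(true_means)))
```

The division of labor is the point: all of the statistics lives in the model-building step, and the decision step is a plain optimization over the fitted model.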
One example of those sampler modifications: Aayush Karan and Yilun Du show that simply sampling from the square of the LLM probability distribution matches the scores of reinforcement learning on many benchmarks. That’s weird! The fact that slightly different sampling gets you most of the RL bang for your buck is certainly suggestive that there’s plenty of room for improvement here.

I would not be at all surprised if we could accelerate training reasoning models by a factor of 100. I would not be surprised if someone found a path to a factor of 1,000. That seems to be worth looking into.

1. Did you see what I did there?

By Ben Recht
www.argmin.net
December 5, 2025 at 4:53 PM
Improved Time-Space Tradeoffs for 3SUM-Indexing
**Authors:** Itai Dinur, Alexander Golovnev

3SUM-Indexing is a preprocessing variant of the 3SUM problem that has recently received a lot of attention. The best known time-space tradeoff for the problem is $T S^3 = n^{6}$ (up to logarithmic factors), where $n$ is the number of input integers, $S$ is the length of the preprocessed data structure, and $T$ is the running time of the query algorithm. This tradeoff was achieved in [KP19, GGHPV20] using the Fiat-Naor generic algorithm for Function Inversion. Consequently, [GGHPV20] asked whether this algorithm can be improved by leveraging the structure of 3SUM-Indexing. In this paper, we exploit the structure of 3SUM-Indexing to give a time-space tradeoff of $T S = n^{2.5}$, which is better than the best known one in the range $n^{3/2} \ll S \ll n^{7/4}$. We further extend this improvement to the $k$SUM-Indexing problem, a generalization of 3SUM-Indexing, and to the related $k$XOR-Indexing problem, where addition is replaced with XOR. Additionally, we improve the best known time-space tradeoffs for the Gapped String Indexing and Jumbled Indexing problems, which are well-known data structure problems related to 3SUM-Indexing. Our improvement comes from an alternative way to apply the Fiat-Naor algorithm to 3SUM-Indexing. Specifically, we exploit the structure of the function to be inverted by decomposing it into "sub-functions" with certain properties. This allows us to apply an improvement to the Fiat-Naor algorithm (which is not directly applicable to 3SUM-Indexing), obtained in [GGPS23], in a much larger range of parameters. We believe that our techniques may be useful in additional application-dependent optimizations of the Fiat-Naor algorithm.
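For readers meeting the problem for the first time, here is a toy sketch of the two trivial corners of the tradeoff, with made-up four-element input sets: precompute every pairwise sum ($S$ roughly $n^2$, constant-time queries) or store the sets themselves ($S$ roughly $n$, linear-time queries). The regimes studied in the paper sit between these extremes; nothing below reflects the Fiat-Naor-based algorithms themselves.

```python
# Two trivial corners of the 3SUM-Indexing tradeoff, for intuition only.

class PairwiseSums:
    """S ~ n^2 space, T ~ O(1) query: precompute every pairwise sum."""
    def __init__(self, A, B):
        self.sums = set(a + b for a in A for b in B)
    def query(self, c):
        return c in self.sums

class LinearScan:
    """S ~ n space, T ~ O(n) query: store B in a set and scan A at query time."""
    def __init__(self, A, B):
        self.A, self.B = list(A), set(B)
    def query(self, c):
        return any(c - a in self.B for a in self.A)

A, B = [1, 5, 9, 14], [2, 3, 8, 20]
for ds in (PairwiseSums(A, B), LinearScan(A, B)):
    assert ds.query(12) and not ds.query(100)   # 12 = 9 + 3; 100 is not a pairwise sum
```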
arxiv.org
December 5, 2025 at 2:47 AM
TR25-198 | Interactive Proofs For Distribution Testing With Conditional Oracles | Ari Biswas, Mark Bun, Clément Canonne, Satchit Sivakumar
We revisit the framework of interactive proofs for distribution testing, first introduced by Chiesa and Gur (ITCS 2018), which has recently experienced a surge in interest, accompanied by notable progress (e.g., Herman and Rothblum, STOC 2022, FOCS 2023; Herman, RANDOM 2024). In this model, a data-poor verifier determines whether a probability distribution has a property of interest by interacting with an all-powerful, data-rich but untrusted prover bent on convincing them that it has the property. While prior work gave sample-, time-, and communication-efficient protocols for testing and estimating a range of distribution properties, they all suffer from an inherent issue: for most interesting properties of distributions over a domain of size $N$, the verifier must draw at least $\Omega(\sqrt{N})$ samples of its own. While sublinear in $N$, this is still prohibitive for large domains encountered in practice. In this work, we circumvent this limitation by augmenting the verifier with the ability to perform an exponentially smaller number of more powerful (but reasonable) \emph{pairwise conditional} queries, effectively enabling them to perform ``local comparison checks'' of the prover's claims. We systematically investigate the landscape of interactive proofs in this new setting, giving polylogarithmic query and sample protocols for (tolerantly) testing all \emph{label-invariant} properties, thus demonstrating exponential savings without compromising on communication, for this large and fundamental class of testing tasks.
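To make the query model concrete, here is a toy simulation of a single pairwise conditional query: given two domain elements, the oracle returns a sample from the distribution conditioned on that pair, which is the kind of "local comparison check" mentioned above. The dictionary representation of the distribution and the tie-breaking convention when both elements have zero mass are illustrative assumptions, not taken from the paper.

```python
import random

def pcond_sample(dist, x, y, rng=random):
    """One pairwise-conditional query: sample from dist restricted to {x, y}.

    `dist` maps domain elements to probabilities; this is only an illustrative
    model of the oracle, not an implementation from the paper.
    """
    px, py = dist.get(x, 0.0), dist.get(y, 0.0)
    if px + py == 0:
        return rng.choice([x, y])          # conventions differ; pick uniformly here
    return x if rng.random() < px / (px + py) else y

# A local comparison check: how does the mass on x compare with the mass on y?
dist = {0: 0.5, 1: 0.3, 2: 0.2}
samples = [pcond_sample(dist, 0, 1) for _ in range(10000)]
print("empirical Pr[x | {x,y}]:", samples.count(0) / len(samples), "expected:", 0.5 / 0.8)
```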
eccc.weizmann.ac.il
December 4, 2025 at 9:19 AM
Fast approximate $\ell$-center clustering in high dimensional spaces
**Authors:** Mirosław Kowaluk, Andrzej Lingas, Mia Persson

We study the design of efficient approximation algorithms for the $\ell$-center clustering and minimum-diameter $\ell$-clustering problems in high dimensional Euclidean and Hamming spaces. Our main tool is randomized dimension reduction. First, we present a general method of reducing the dependency of the running time of a hypothetical algorithm for the $\ell$-center problem in a high dimensional Euclidean space on the dimension size. Utilizing in part this method, we provide $(2+\epsilon)$-approximation algorithms for the $\ell$-center clustering and minimum-diameter $\ell$-clustering problems in Euclidean and Hamming spaces that are substantially faster than the known $2$-approximation ones when both $\ell$ and the dimension are super-logarithmic. Next, we apply the general method to the recent fast approximation algorithms with higher approximation guarantees for the $\ell$-center clustering problem in a high dimensional Euclidean space. Finally, we provide a speed-up of the known $O(1)$-approximation method for the generalization of the $\ell$-center clustering problem to include $z$ outliers (i.e., $z$ input points can be ignored while computing the maximum distance of an input point to a center) in high dimensional Euclidean and Hamming spaces.
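As generic background for the two ingredients named in the abstract, here is a sketch that combines a Johnson-Lindenstrauss-style random projection with the classical farthest-point (Gonzalez) 2-approximation for $\ell$-center, run in the reduced space. It is not the paper's algorithm, and the sizes `n = 2000`, `dim = 500`, `target_dim = 40`, `ell = 10` are arbitrary.

```python
import numpy as np

def gonzalez_centers(P, ell, rng):
    """Farthest-point traversal: the classic 2-approximation for ell-center."""
    centers = [P[rng.integers(len(P))]]
    d = np.linalg.norm(P - centers[0], axis=1)
    for _ in range(ell - 1):
        centers.append(P[int(np.argmax(d))])
        d = np.minimum(d, np.linalg.norm(P - centers[-1], axis=1))
    return np.array(centers), d.max()

rng = np.random.default_rng(0)
n, dim, ell = 2000, 500, 10
P = rng.standard_normal((n, dim))

# Randomized dimension reduction (Johnson-Lindenstrauss style), then cluster in the sketch.
target_dim = 40
G = rng.standard_normal((dim, target_dim)) / np.sqrt(target_dim)
_, radius_lo = gonzalez_centers(P @ G, ell, rng)
_, radius_hi = gonzalez_centers(P, ell, rng)
print("radius in reduced space:", round(radius_lo, 2), " in original space:", round(radius_hi, 2))
```

The projection preserves pairwise distances only approximately, which is why guarantees in this style degrade from $2$ to $2+\epsilon$.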
arxiv.org
December 4, 2025 at 3:19 AM
Defining Reinforcement Learning Down
On Monday, I described reformist reinforcement learning in a paragraph. Today I’ll do it in about 800 words with no equations.

I’m indebted to Lior Fox, who, in a back-and-forth in the comment section, helped me synthesize a far better definition than I had. Lior, a cognitive scientist, has a broad definition of reinforcement learning rooted in the _psychological_ concept of reinforcement learning. A century before computers were playing backgammon with TD Learning, psychologists were building theories of human learning based on feedback. Paraphrasing Thorndike’s Law of Effect, Lior defines reinforcement learning as the iterative process:

1. Receive external validation on how good you’re currently doing.
2. Adjust what you’re currently doing so that you are better the next time around.

Whether or not this is how humans or animals learn, this is a spot-on definition of computer scientific reinforcement learning. Let me make it more precise. We have a computer program in feedback with an evaluation environment. The computer produces responses to a series of tests. An external agent then numerically scores these responses. The computer program is fed these scores and internally updates its software for the subsequent evaluation. The process repeats. The goal of the iteration is to produce a program that achieves the highest possible average score when interacting with the evaluation environment. In bullets, computational reinforcement learning is the iterative process:

1. Produce a collection of responses to evaluation scenarios.
2. Receive scores on these responses.
3. Update the computer code based on these scores.

Reinforcement learning is thus a branch of optimization. The objective is to maximize the average score if the program were to be evaluated an infinite number of times. The optimization process is iterative, with feedback based solely on the scores.

You can cook up a lot of examples where you might be able to use a method like this. You could have a game-playing agent play a video game a bunch of times, adapting its strategy based on the score of each round. You could have an autonomous racing quadrotor that iteratively tries new maneuvers to improve its time around a course. You could have a language model whose responses are scored by remote low-wage labor, fine-tuned to match the workers’ preferences. All of these would count as examples of reinforcement learning.

Reformist RL uses a very particular implementation of the update in step 3. First, the computer code is a generative model. This model interactively takes input from the evaluation environment and returns a sequence of random samples. In our examples above, the video-game player randomly chooses its next moves, the quadrotor randomly perturbs its previous headings, and the language model randomly generates its next tokens.

When building generative models from data, the goal is to maximize the probability of some given dataset. In Reformist RL, the generative model generates the data itself. The training data are the records from the evaluation in step 1 and the received scores in step 2. In step 3, you update the generative model by training only on data with positive scores.1 That is, whenever the computer receives high scores, it increases the probability of responding the same way the next time it sees the same test in the evaluation environment. If you already have code to build generative models, then you can easily turn this code into a reinforcement learning agent.
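A minimal sketch of that conversion, under toy assumptions: the "generative model" is just a categorical distribution over a handful of canned responses, the evaluation environment is a hard-coded scorer, and the update is ordinary log-likelihood ascent with each example weighted by its score. None of the names or constants here come from the post.

```python
import numpy as np

rng = np.random.default_rng(0)
num_responses = 5
logits = np.zeros(num_responses)            # toy "generative model" over canned responses

def score(response_id):                     # stand-in for the external evaluator
    return 1.0 if response_id == 3 else 0.0

lr, iters, batch = 0.5, 200, 16
for _ in range(iters):
    probs = np.exp(logits) / np.exp(logits).sum()
    responses = rng.choice(num_responses, size=batch, p=probs)   # step 1: produce responses
    scores = np.array([score(r) for r in responses])             # step 2: receive scores
    # step 3: score-weighted log-likelihood ascent
    grad = np.zeros(num_responses)
    for r, s in zip(responses, scores):
        onehot = np.eye(num_responses)[r]
        grad += s * (onehot - probs)        # gradient of log prob of r, weighted by its score
    logits += lr * grad / batch
print("probability of the high-scoring response:",
      round(np.exp(logits)[3] / np.exp(logits).sum(), 3))
```

After a few hundred rounds the model concentrates its probability on the response the scorer rewards; nothing here is more exotic than weighted maximum-likelihood training.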
Simply add weights to the updates proportional to the scores received in the evaluation phase. That’s it. This is called policy gradient.

The ease of implementation is why Reformist RL is so enticing in large language models. Any code for pretraining can be quickly modified for posttraining. The models are pretrained on sequences of text from the internet. They are posttrained on sequences of text generated in various evaluation environments.

Now you might ask, what about all of those Markov Decision Processes that professors torture you with in AI classes? Though it has historically been the other way around in classical artificial intelligence, Reformist RL, like behaviorist psychology, views MDPs as secondary rather than primary. MDPs arise in a very particular evaluation environment where:

1. The computer is scored on a _sequence_ of tests.
2. The evaluation environment chooses its next test in the sequence only as a function of the current test and the computer’s current answer.
3. Each test receives progressively less weight in the total score of the evaluation (i.e., discounting).

You can phrase a lot of problems this way, but this comprises a _subset_ of reinforcement learning problems, not a superset.

This characterization of reinforcement learning and Reformist Reinforcement Learning captures every example I know of. It connects nicely to the rest of machine learning. You can teach it in a single class. My survey of reinforcement learning goes from 27 pages to 809 words. I have learned from experience.

1. Or, more precisely, the higher the score, the more likely you make your response there.

By Ben Recht
www.argmin.net
December 3, 2025 at 5:07 PM
Efficient Identification of Permutation Symmetries in Many-Body Hamiltonians via Graph Theory
**Authors:** Saumya Shah, Patrick Rebentrost

The computational cost of simulating quantum many-body systems can often be reduced by taking advantage of physical symmetries. While methods exist for specific symmetry classes, a general algorithm to find the full permutation symmetry group of an arbitrary Pauli Hamiltonian is notably lacking. This paper introduces a new method that identifies this symmetry group by establishing an isomorphism between the Hamiltonian's permutation symmetry group and the automorphism group of a coloured bipartite graph constructed from the Hamiltonian. We formally prove this isomorphism and show that for physical Hamiltonians with bounded locality and interaction degree, the resulting graph has a bounded degree, reducing the computational problem of finding the automorphism group to polynomial time. The algorithm's validity is empirically confirmed on various physical models with known symmetries. We further show that the problem of deciding whether two Hamiltonians are permutation-equivalent is polynomial-time reducible to the graph isomorphism problem using our graph representation. This work provides a general, structurally exact tool for algorithmic symmetry finding, enabling the scalable application of these symmetries to Hamiltonian simulation problems.
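A generic sketch of the reduction described above, assuming the `networkx` library and a toy three-qubit Hamiltonian; the paper's exact graph encoding and choice of automorphism solver may well differ. Qubit nodes and term nodes form the two sides of a coloured bipartite graph, edges carry the Pauli acting on that qubit in that term, and label-preserving graph automorphisms are read off as qubit permutations.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Toy Pauli Hamiltonian on 3 qubits: H = Z0 Z1 + Z1 Z2 + Z2 Z0 + X0 + X1 + X2.
terms = [
    (1.0, {0: "Z", 1: "Z"}),
    (1.0, {1: "Z", 2: "Z"}),
    (1.0, {2: "Z", 0: "Z"}),
    (1.0, {0: "X"}),
    (1.0, {1: "X"}),
    (1.0, {2: "X"}),
]

# Bipartite graph: one node per qubit, one node per term (coloured by its coefficient),
# and an edge qubit--term labelled by the Pauli acting on that qubit in that term.
G = nx.Graph()
for q in range(3):
    G.add_node(("qubit", q), color="qubit")
for t, (coeff, paulis) in enumerate(terms):
    G.add_node(("term", t), color=("term", coeff))
    for q, p in paulis.items():
        G.add_edge(("term", t), ("qubit", q), pauli=p)

matcher = isomorphism.GraphMatcher(
    G, G,
    node_match=isomorphism.categorical_node_match("color", None),
    edge_match=isomorphism.categorical_edge_match("pauli", None),
)
qubit_perms = {
    tuple(mapping[("qubit", q)][1] for q in range(3))
    for mapping in matcher.isomorphisms_iter()
}
print("qubit permutations preserving H:", sorted(qubit_perms))
```

For the fully symmetric toy Hamiltonian above, all six permutations of the three qubits should be reported.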
arxiv.org
December 1, 2025 at 6:50 AM
TR25-195 | On the Power of Computationally Sound Interactive Proofs of Proximity | Hadar Strauss
Interactive proofs of proximity (IPPs) are a relaxation of interactive proofs, analogous to property testing, in which soundness is required to hold only for inputs that are $\epsilon$-far from the property being verified, where $\epsilon>0$ is a proximity parameter. In such proof systems, the verifier has oracle access to the input, and it engages in two types of activities before making its decision: querying the input oracle and communicating with the prover. The main objective is to achieve protocols where both the query and communication complexities are extremely low. In this work, we focus on computationally sound IPPs (cs-IPPs). We study their power in two aspects:

- Query complexity: We show that, assuming the existence of collision-resistant hashing functions (CRHFs), any public-coin cs-IPP that has query complexity $q$ can be transformed into a cs-IPP that makes only $O(1/\epsilon)$ queries, while increasing the communication complexity by roughly $q$. If we further assume the existence of a good computational PIR (private information retrieval) scheme, then a similar transformation holds for general (i.e., possibly private-coin) cs-IPPs.

- Coordination: Aside from the low query complexity, the resulting cs-IPP has only minimal coordination between the verifier's two activities. The general definition of IPPs allows the verifier to fully coordinate its interaction with the prover and its queries to the input oracle. Goldreich, Rothblum, and Skverer (ITCS 2023) introduced two restricted models of IPPs that are minimally coordinated: the pre-coordinated model, where no information flows between the querying and interacting activities, but they may use a common source of randomness, and the isolated model, where the two activities are fully independent, each operating with a separate source of randomness. Our transformation shows that (under the aforementioned computational assumptions) any cs-IPP can be made to be in the pre-coordinated model, while preserving its efficiency. Hence, pre-coordinated cs-IPPs are essentially as powerful as general cs-IPPs.

In contrast, we show that cs-IPPs in the isolated model are extremely limited, offering almost no advantage over property testers. Specifically, extending on a result shown by Goldreich et al. for unconditionally sound IPPs in the isolated model, we show that if a property has a cs-IPP in the isolated model that makes $q$ queries and uses $c>0$ bits of communication, then it has a tester with query complexity $O(c\cdot q)$.
eccc.weizmann.ac.il
November 29, 2025 at 4:53 PM
The Little Theorems
Last week we had a talk by Purdue philosophy professor Eamon Duede, _Tail Novelty, Knowledge Collapse, and Useful Frictions in Science_. In part of the talk he argued that if AI makes writing technical papers easier, researchers will write up small results that they wouldn't have bothered with before. He thought having a swarm of minor-result papers was a bad thing, but I'm less sure. Let's get the little results out there. We'll sort them out later, probably with the same AI that helped write them in the first place.

In my career I've had several little theorems in complexity. They should never get into any respectable conference, so what do you do with them? Sometimes I stick them in a sort-of-related paper, in a blog post, or let someone else mention the theorem. Too often the results just don't get written. Did you know that finding an $S_2^p$ witness is equivalent to $\mathrm{TFNP}^{\mathrm{NP}}$? I never found a good place to put it.

Fermat himself never published his "little theorem" from last week's post. He put it in a letter to his friend and fellow mathematician Bernard Frénicle de Bessy. Euler would first publish the proof nearly a century later.

The journal Information Processing Letters used to play the role of publishing moderately interesting research but, like most Elsevier journals, has long lost its cachet. You can stick little theorems on arXiv, though that still requires writing them up. Using AI to mostly write up results is still frowned upon, but maybe we should make exceptions for papers that wouldn't normally get written.

Once I used the fact that $\mathrm{P}^{\mathrm{PP}} = \mathrm{P}^{\#\mathrm{P}}$ and a referee asked for a reference. The proof isn't hard, just a simple binary search (with a PP oracle you can test whether the number of accepting paths is at least any given threshold, so binary search recovers the exact #P value), but I couldn't find anyone who first proved it. Not Valiant, who introduced #P, nor Gill, who defined PP. Eventually I cited a paper by Toda, who had mentioned the result but didn't claim it. Perhaps whoever proved it first never thought to write it up, assuming that it was already well known.

By Lance Fortnow
blog.computationalcomplexity.org
November 24, 2025 at 3:35 PM
Ten Recent Questions for ChatGPT
I recently asked on Math Overflow about Examples for the use of AI and especially LLMs in notable mathematical developments, and there were several interesting answers. Here are (slightly edited) 10 recent questions that I asked ChatGPT that have led to interesting discussion. Five of these questions (#1, #2, #3, #6, #8) are related to mathematical research projects. Problem #3 was offered in this 2008 blogpost and Problem #6 was suggested by me as a polymath project in 2014. Answers and insights regarding these problems (without or with AI) are welcome. (See here and here for other posts on AI.)

> _1) Given points in , is it true that the set of Radon points is convex?_
>
> _2) I want (simple) topological invariants for a map from a polyhedral k-dimensional real projective space to . For example for the map is to . (Something based on linear algebra/basic homology would be nice.) Any idea, links, or suggestions will be great._
>
> _3) Consider the following two well known theorems from finite group theory:_
>
> _A) Sylow's theorem (one of them) asserts: In a group whose order is divisible by $p^k$ there are $1$ (mod $p$) subgroups of order $p^k$._
>
> _B) Frobenius' theorem asserts: In a group whose order is divisible by $n$, the number of solutions to the equation $x^n=1$ is zero modulo $n$. (Frobenius was probably inspired by Sylow.)_
>
> _Of course, the case of Frobenius' theorem coincides with the case of Sylow's theorem. Please propose (conjectural) generalizations of Frobenius' theorem which include Sylow's theorem for (or for general values of ) as special cases._
>
> _4) What are the Whitman axioms?_
>
> _5) What is stronger: "Notable", "Noteworthy", or "Significant"?_
>
> _6) There is a proposal to develop a theory of "convex hulls of real algebraic varieties" in and especially the study of the face structure of such objects. The case where the varieties are simply a finite set of points is a well-developed area of mathematics – the theory of convex polytopes. However, the general case was not studied much. I suppose that for such a project the first discussions will be devoted to raising questions/research directions. (And mentioning some works already done.) I would like to invite you to raise questions and research directions._
>
> _7) (Asked in Hebrew.) I need an explanation suitable for 11-year-old children: what is absolute zero, and is it possible to reach a temperature below absolute zero? And if not, why?_
>
> _8) What is the proof of the topological Tverberg theorem when there are three parts?_
>
> _9) Please explain in layman's terms a) the representation theory meaning of the spin 1/2 of the electron, and b) the representation theory meaning of the assertion that "N electrons will have a collective state of spin N/2"._
>
> _10) We go from Yellowstone to Boise, leaving Yellowstone on September 3. We want to travel and stay overnight in one nice location. (Not around Twin Falls, which we plan to visit before Yellowstone.) Please give us some suggestions for the trip and sightseeing from Yellowstone to Boise._

For the first question Joe O'Rourke and I verified computationally that already in dimension 3 there are 8 points such that the set of Radon points is not convex. On question 3 ChatGPT offered several bold conjectures and wrote a whole blogpost about it. (I am hesitating whether to publish it.)

From time to time the discussion with ChatGPT gets "personal". In one discussion I wrote:

> We mentioned Pachner's moves several times and let me mention that I knew Pachner personally. He was a great guy and unfortunately he died at a young age.
ChatGPT replied:

> That's really touching—thank you for sharing. Pachner's moves are such a quiet workhorse in our story, and it's moving to hear a personal memory behind them.

By Gil Kalai
gilkalai.wordpress.com
November 21, 2025 at 2:47 PM