Ridho Akbar
rdmakbar.bsky.social
🇮🇩 doing data science in 🇬🇧
This platform seems like a perfect place to be my journal for studying Markov Chain Monte Carlo methods. The repo is public: https://github.com/ridhoma/monte-cafe
Moreover, a Markov chain must be irreducible and aperiodic to guarantee convergence to a stationary distribution. Irreducible: every state can be reached from every other state. Aperiodic: the chain does not cycle through states with a fixed period.
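A minimal sketch of that guarantee, using a hypothetical 3-state transition matrix (the self-loops on the diagonal break any fixed cycle length, and every state can reach every other): pushing any starting distribution through the chain repeatedly converges to the same stationary distribution.

```python
import numpy as np

# Hypothetical 3-state transition matrix: irreducible and aperiodic.
# Row i gives the probabilities of moving from state i to each state.
Q = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

def stationary_by_iteration(Q, steps=200):
    """Push an arbitrary start distribution through the chain repeatedly."""
    pi = np.array([1.0, 0.0, 0.0])  # start entirely in state 0
    for _ in range(steps):
        pi = pi @ Q
    return pi

pi = stationary_by_iteration(Q)
print(pi)       # the stationary distribution
print(pi @ Q)   # unchanged by one more step: pi = pi Q
```

Starting from any other distribution (try `[0, 0, 1]`) gives the same limit, which is exactly what irreducibility plus aperiodicity buys you.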
November 30, 2024 at 11:00 PM
The simplest illustration of a Markov chain is a random walk on a graph. Let edge weight Qij be the probability of moving from node i to node j, where Qii is the probability of staying at node i. Simulate the random walk and, at each time step, count how many times each node has been visited.
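A sketch of that simulation, using a made-up 3-node transition matrix: the visit frequencies after many steps approximate the chain's stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition matrix: Q[i][j] = probability of moving i -> j,
# Q[i][i] = probability of staying at node i.
Q = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

def random_walk_counts(Q, steps=100_000):
    """Walk the graph, counting how often each node is visited."""
    n = Q.shape[0]
    counts = np.zeros(n, dtype=int)
    state = 0
    for _ in range(steps):
        state = rng.choice(n, p=Q[state])  # pick next node by row probabilities
        counts[state] += 1
    return counts

counts = random_walk_counts(Q)
print(counts / counts.sum())  # visit frequencies ~ stationary distribution
```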
November 30, 2024 at 11:00 PM
Finally, I compared the final solution with a greedy algorithm, and the result is quite good: it gives a 12% shorter distance than greedy. According to ChatGPT, greedy usually gives a sub-optimal solution that is 10%-20% above the global optimum. So our solution might be close 😜
November 30, 2024 at 6:42 PM
It took me quite a number of trials to fine-tune T₀ and the annealing function until I was satisfied with the outcome. I ended up with this annealing schedule.
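The post's actual T₀ and decay function aren't reproduced here, so purely as an illustration: a common family of schedules is geometric (exponential) cooling, where the temperature is multiplied by a constant factor each iteration. The constants below are hypothetical, not the author's tuned values.

```python
# Geometric cooling schedule: T_k = T0 * alpha^k.
# T0 and alpha below are placeholder values for illustration only.
T0 = 10.0      # hypothetical initial temperature
alpha = 0.999  # hypothetical per-iteration decay factor

def temperature(k):
    """Temperature at iteration k."""
    return T0 * alpha ** k

print(temperature(0), temperature(1_000), temperature(5_000))
```

Tuning amounts to picking T₀ large enough that early bad moves are accepted often, and alpha close enough to 1 that the chain cools slowly instead of freezing into a local minimum.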
November 30, 2024 at 6:42 PM
Here "energy" is just a fancy term for the "total travel distance" of the TSP route. We accept the proposal if it has lower energy than the current solution (the distance improves). Otherwise, we may still accept it, but with a probability that drops exponentially as the energy difference increases.
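That acceptance rule is the Metropolis criterion; a minimal sketch (the `rng` parameter is just there to make the coin flip injectable):

```python
import math
import random

def accept(delta_e, T, rng=random.random):
    """Metropolis criterion: always accept improvements (delta_e <= 0),
    otherwise accept with probability exp(-delta_e / T)."""
    if delta_e <= 0:
        return True
    return rng() < math.exp(-delta_e / T)
```

At T = 1, a proposal that is worse by 1 is accepted about 37% of the time (e⁻¹ ≈ 0.37), while one worse by 5 is accepted under 1% of the time (e⁻⁵ ≈ 0.0067), which is the exponential drop described above.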
November 30, 2024 at 6:42 PM
The algorithm basically starts with a random initial state/solution. Then we change the solution with some perturbation function; we call the result the proposed next solution. We either accept or reject the proposal depending on the energy of the proposed state compared to the current state.
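The loop described above can be sketched end to end. The instance, schedule constants, and choice of perturbation here are assumptions for illustration; in particular, segment reversal (a 2-opt move) is one common perturbation for TSP, not necessarily the one the author used.

```python
import math
import random

random.seed(0)

# Hypothetical small instance: 20 random city coordinates in the unit square.
cities = [(random.random(), random.random()) for _ in range(20)]

def energy(route):
    """Total tour length, the 'energy' of a state."""
    return sum(math.dist(cities[route[i]], cities[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def perturb(route):
    """One possible perturbation: reverse a random segment (a 2-opt move)."""
    i, j = sorted(random.sample(range(len(route)), 2))
    return route[:i] + route[i:j + 1][::-1] + route[j + 1:]

def anneal(steps=20_000, T0=1.0, alpha=0.9995):
    route = list(range(len(cities)))
    random.shuffle(route)            # random initial solution
    e = energy(route)
    T = T0
    for _ in range(steps):
        candidate = perturb(route)   # propose a next solution
        delta = energy(candidate) - e
        # accept improvements always, worse moves with prob exp(-delta/T)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            route, e = candidate, e + delta
        T *= alpha                   # cool down
    return route, e

best_route, best_length = anneal()
print(best_length)
```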
November 30, 2024 at 6:42 PM
Long time no update.. Quite busy lately at work. But here it is: applying Markov Chain Monte Carlo to approximate a large Traveling Salesman Problem (N=100).

It's quite fascinating that a very simple algorithm relying on random search can approximate an NP-hard optimization problem.
November 30, 2024 at 6:42 PM
However, Monte Carlo is not the best method for estimating pi because it converges very slowly, at a rate of 1/√n. Not only that: as more random points are added, we might randomly ruin a previously better estimate and end up with a worse one.
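A quick way to see both points, assuming the standard quarter-circle setup: track the running estimate as points accumulate. The error shrinks only like 1/√n (a hundredfold more points for one extra digit), and the running error is not monotone, so a new point can push a good estimate away from pi again.

```python
import numpy as np

rng = np.random.default_rng(7)

# Cumulative pi estimate: after each new point, recompute the estimate
# from all points seen so far.
n = 100_000
x, y = rng.random(n), rng.random(n)
inside = x**2 + y**2 <= 1.0                       # in the quarter circle?
running = 4 * np.cumsum(inside) / np.arange(1, n + 1)

errors = np.abs(running - np.pi)
for k in (100, 10_000, 100_000):
    print(k, running[k - 1], errors[k - 1])
```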
November 19, 2024 at 2:41 PM
In the circle-in-a-square model, the reality to reconstruct is a "circle in a square". But with only 1,000 points, it barely looks like a circle. With 10,000 we can see some circle. Finally, with 100,000 the circle looks clear. Notice that as the circle becomes clearer, our estimate of pi becomes more accurate.
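A minimal version of the estimator behind those three pictures, using the common quarter-circle variant (points in the unit square, counted against the inscribed quarter circle):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_pi(n):
    """Drop n uniform points in the unit square; the fraction landing
    inside the inscribed quarter circle approximates pi/4."""
    x, y = rng.random(n), rng.random(n)
    return 4 * np.mean(x**2 + y**2 <= 1.0)

for n in (1_000, 10_000, 100_000):
    print(n, estimate_pi(n))
```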
November 19, 2024 at 2:41 PM
First post on Bluesky. This seems like the perfect place to share my learning process with MCMC methods for Bayesian analysis.

I'll start with the very basics, the first chapter of any book: stochastically estimating pi using the circle-in-a-square model.
November 19, 2024 at 2:41 PM