Tom Hodson
thomashodson.com.web.brid.gy
Tom Hodson
@thomashodson.com.web.brid.gy
Maker, Baker Programmer Reformed Physicist RSE@ECMWF

[bridged from https://thomashodson.com/ on the web: https://fed.brid.gy/web/thomashodson.com ]
This site now has an .onion address
I was thinking about the UK’s new Online Safety Act, which, at best, is very poorly thought out legislation and, at worst, is a plot to further de-anonymise UK internet users. This got me thinking about VPNs and Tor, which led me to wonder how hard it is to serve a site through a .onion address.

Behold! `thod62d5r447cbmzxysd7xxpotsbbddspmn7q74ekewry4v7hbixv7yd.onion`

Turns out it’s really easy: install Tor, then point Tor at a server on your VPS that’s serving the website in question. I had to fiddle a bit to prevent Caddy from redirecting HTTP to HTTPS, but apart from that it was simple.

## Optional

**Generate a vanity address**. I used this to set the first 4 characters of mine to `thod` followed by a number. It works by randomly generating addresses, so the more characters you want, the longer it takes, exponentially so.

**Set up HTTPS**. Most .onion sites don’t use HTTPS on the basis that Tor traffic is already encrypted. However, as of 2020 you can create TLS certs for .onion addresses. The main blocker is that Let’s Encrypt, the de facto TLS CA for everything now, doesn’t yet support generating certs for .onion addresses. You can still do it if you’re willing to shell out 30 euros a year for it; Neil has a write-up.

**Add an Onion-Location header to your cleartext site** so that Tor users know there’s an onion alternative available. The best way to do this is with a custom HTTP header, see the linked site for ways to do this; for Caddy it looks like:

```
header Onion-Location http://example.onion{path}
```

You can also do it with an HTML meta tag. Here I’ve used the Jekyll `page.url` variable, which I can actually also evaluate inline, look: `/2025/07/30/this-site-now-has-a-onion-address.html`, to make the link point to whichever page you’re currently on.

```html
<meta http-equiv="onion-location" content="http://example.onion{{ page.url }}" />
```

The net effect is that if you load my cleartext site in a Tor browser you’ll get a little button suggesting you go to the onion site.
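For reference, the whole setup boils down to two config fragments. A minimal sketch, with assumed values: your site at `/var/www/example` and port 8080 are placeholders, and Tor writes the generated onion hostname into `hostname` inside the `HiddenServiceDir`:

```
# /etc/tor/torrc — forward onion port 80 to a local server
HiddenServiceDir /var/lib/tor/my_site/
HiddenServicePort 80 127.0.0.1:8080
```

```
# Caddyfile — the explicit http:// prefix is what stops Caddy
# redirecting this site to HTTPS
http://127.0.0.1:8080 {
    root * /var/www/example
    file_server
}
```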
thomashodson.com
August 6, 2025 at 11:48 AM
A tricky index proof
I’ve been trying to prove a theorem from these notes for a few days and I finally figured it out, so I thought I’d share.

## Line Element

So the setup is this: imagine we draw a very short line vector $\vec{v}$ and let it flow along in a fluid with velocity field $u(\vec{x}, t)$.

A line element $\delta \vec{v}$ being dragged along in a fluid with velocity field $u(\vec{x}, t)$

Three things will happen: the vector will be translated along, it will change length and it will change direction. If we ignore the translation, we can ask what the equation would be for the change in length and direction of $\vec{v}$. I’ll drop the vector symbols on $v$, $u$ and $x$ from now on.

\\[D_t \; v = ?\\]

If we assume $v$ is very small we can think about expanding $u$ to first order along $v$

\\[u(x + v, t) = u(x, t) + v \cdot \nabla u\\]

where $v \cdot \nabla$ is the directional derivative $v_x \partial_x + v_y \partial_y + v_z \partial_z$, and when $v$ is infinitesimal it just directly tells us how $u$ will change if we move from point $x$ to point $x + v$. So from this we can see that one end of our vector $v$ is moving along at $u(x, t)$ while the other end will move at $u(x, t) + v \cdot \nabla u$, hence:

\\[D_t \; v = v \cdot \nabla u\\]

## Surface Element

Now the natural next thing you might ask is how a little surface element will be stretched and rotated as it moves along in the fluid. It’s natural to represent the infinitesimal surface element spanned by two small vectors $v^1$ and $v^2$ with the normal vector $S$ whose length is related to the area of the quadrilateral spanned by $v^1$ and $v^2$.

A surface element $\vec{S}$ that represents the quadrilateral swept out by $v^1$ and $v^2$

Probably I call it ‘natural’ because it’s easy to compute it with the cross product:

\\[S = v^1 \times v^2\\]

Now, how does this change over time? Slightly surprisingly to me, you can write down a differential equation purely in terms of $S$ and not $v^1$ or $v^2$.
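The first-order expansion is easy to sanity check numerically. A quick sketch; the velocity field `u` below is made up purely for illustration:

```python
import numpy as np

# A hypothetical velocity field u(x), just for testing the expansion
def u(x):
    return np.array([np.sin(x[1]), x[0] * x[2], np.cos(x[0])])

def grad_u(x, h=1e-6):
    # grad_u[l, j] = ∂_l u_j, by central differences
    g = np.zeros((3, 3))
    for l in range(3):
        e = np.zeros(3)
        e[l] = h
        g[l] = (u(x + e) - u(x - e)) / (2 * h)
    return g

x = np.array([0.3, 0.7, 1.1])
v = 1e-4 * np.array([1.0, -2.0, 0.5])  # a small line element

# u(x + v) ≈ u(x) + (v·∇)u, i.e. v_l ∂_l u_j in index form
lhs = u(x + v)
rhs = u(x) + v @ grad_u(x)
assert np.allclose(lhs, rhs, atol=1e-7)
```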
\\[D_t \; S = (\nabla \cdot u) S - \nabla u \cdot S\\]

The proof of this is left as an exercise to the reader in the lecture notes, and trying to figure it out has bugged me for the last few days. Spoilers ahead if you’d like to try yourself.

Ok, first let’s apply the product rule (after first squinting at the index definition of the cross product to make sure the product rule still works, never trust vectors).

\\[D_t \; S = (D_t \; v^1) \times v^2 + v^1 \times (D_t \; v^2)\\]

And I’ll reorder the second term because I want to align indices and pull some stuff out.

\\[D_t \; S = (D_t \; v^1) \times v^2 - (D_t \; v^2) \times v^1\\]

Using our equation for the derivative of $v$ from the last bit:

\\[D_t \; S = (v^1 \cdot \nabla u) \times v^2 - (v^2 \cdot \nabla u) \times v^1\\]

Now to convert this to index notation let’s go bit by bit:

\\[[v\cdot \nabla u]_i = v_k \partial_k u_i\\]

then using \\([a \times b]_i = \varepsilon_{ijk} a_j b_k\\) the two terms become:

\\[[(v^1 \cdot \nabla u) \times v^2 ]_i = v^1_l \partial_l \; u_j \; \varepsilon_{ijk} v^2_k\\]

\\[[(v^2 \cdot \nabla u) \times v^1 ]_i = v^2_l \partial_l \; u_j \; \varepsilon_{ijk} v^1_k\\]

Putting these together we can pull out the $\varepsilon$, $\partial$ and $u$ terms. We can move $\partial$ about liberally as long as we remember it is always acting on $u$ and nothing else.

\\[D_t S_i = \varepsilon_{ijk} \partial_l u_j \; (v^1_l v^2_k - v^1_k v^2_l)\\]

Now this is where I got stuck for ages. We know the answer should have a $v^1 \times v^2$ term in it, but looking at this, all the indices seem to be connected up wrong! I tried various things, managed to ‘prove’ it in the case that $v^1 = (1,0,0)$ and $v^2 = (0,1,0)$ etc, but was left scratching my head for the more general proof.
_Then I saw it_: there’s this identity about products of Levi-Civita symbols

\\[\varepsilon_{iab} \varepsilon_{inm} = \delta_{an}\delta_{bm} - \delta_{am}\delta_{bn}\\]

I’d only ever used this to turn two Levi-Civita symbols into something simpler. But that right hand side actually looks suspiciously like the term \\((v^1_l v^2_k - v^1_k v^2_l)\\). What if we use the identity in reverse?

I sometimes think of $\delta_{ij}$ as the “rename i->j or j->i” operator, because in the presence of Einstein summation that’s kinda what it does. Using that idea:

\\[(v^1_l v^2_k - v^1_k v^2_l) = v^1_\alpha v^2_\beta (\delta_{\alpha l}\delta_{\beta k} - \delta_{\alpha k}\delta_{\beta l})\\]

This seems good, we’ve managed to disconnect $v^1$ and $v^2$ a bit. Now applying the identity in reverse we get \\(v^1_\alpha v^2_\beta \varepsilon_{m\alpha\beta} \varepsilon_{mlk}\\). This is amazing because \\(v^1_\alpha v^2_\beta \varepsilon_{m\alpha\beta} = [v^1 \times v^2]_m = S_m\\), giving us

\\[D_t S_i = \varepsilon_{ijk} \partial_l u_j \; \varepsilon_{mlk} S_m\\]

Applying the identity again, in the normal direction this time:

\\[D_t S_i = \partial_l u_j \; S_m (\delta_{im}\delta_{jl} - \delta_{il}\delta_{jm})\\]

and finally performing the renaming

\\[D_t S_i = \partial_j u_j S_i - \partial_i u_j S_j\\]

gives us what we want!

\\[D_t S = (\nabla \cdot u) \; S - (\nabla u) \cdot S\\]
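If, like me, you don’t entirely trust your own index juggling, the whole chain is easy to check numerically with numpy’s `einsum`. A sanity-check sketch, not part of the proof:

```python
import numpy as np

# The Levi-Civita symbol as an explicit 3x3x3 array
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# The product identity: eps_iab eps_inm = d_an d_bm - d_am d_bn
d = np.eye(3)
lhs = np.einsum('iab,inm->abnm', eps, eps)
rhs = np.einsum('an,bm->abnm', d, d) - np.einsum('am,bn->abnm', d, d)
assert np.allclose(lhs, rhs)

# The final result: eps_ijk ∂_l u_j (v1_l v2_k - v1_k v2_l)
# should equal (∇·u) S_i - ∂_i u_j S_j, with du[l, j] = ∂_l u_j
rng = np.random.default_rng(0)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)
du = rng.standard_normal((3, 3))
S = np.cross(v1, v2)
stuck = (np.einsum('ijk,lj,l,k->i', eps, du, v1, v2)
         - np.einsum('ijk,lj,l,k->i', eps, du, v2, v1))
answer = np.trace(du) * S - du @ S
assert np.allclose(stuck, answer)
```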
thomashodson.com
January 10, 2025 at 9:56 AM
Manipulate image pixels in Python
The way I’ve implemented dark mode on this site is to mark some images with a class `invertable` that means they still look good inverted, and then in CSS I go ahead and invert them if you’re in dark mode. For other images I just dim them a bit. Try switching back and forth between dark mode and light mode: the one on the right works, the one on the left gets an ugly black background.

However, for some images, like this black and white PNG, it looks a bit weird when inverted because the background becomes hard black but my site’s background is a dark grey. So I wanted to make the white pixels transparent instead.

Anyway, the point of this post is that I knew in terms of pixel values what I wanted to do but wasn’t sure how to do it in an image editor. So here’s the code for you and my future reference:

```python
import sys

import numpy as np
from PIL import Image

if len(sys.argv) < 3:
    print("Usage: python convert_to_transparent.py <input_image_path> <output_image_path>")
    sys.exit(1)

input_path, output_path = sys.argv[1], sys.argv[2]

# Load as greyscale: white pixels become 255, black pixels 0
grey = np.array(Image.open(input_path).convert("L"))

# Invert so white maps to fully transparent and black to fully opaque
alpha_channel = 255 - grey

rgba_array = np.zeros((grey.shape[0], grey.shape[1], 4), dtype=np.uint8)
rgba_array[..., 0] = 0  # Red channel
rgba_array[..., 1] = 0  # Green channel
rgba_array[..., 2] = 0  # Blue channel
rgba_array[..., 3] = alpha_channel  # Alpha channel

# Create a new image for the output
rgba_img = Image.fromarray(rgba_array, mode="RGBA")

# Save the result
rgba_img.save(output_path, "PNG")
print(f"Image saved to {output_path}")
```
thomashodson.com
January 10, 2025 at 9:56 AM
Einstein summation is really nice.
Just a short thought. Lately I’ve been starting to read through these lecture notes on astrophysical fluid dynamics, and this morning I came across this nice blogpost about numpy’s `einsum` function. Both reminded me how lovely Einstein summation is as a mathematical notation. Here’s a quick intro to how it works and why it’s cool.

So Einstein notation usually comes up when you’re multiplying lots of high dimensional tensors together. Basically the idea is that you make summations implicit. For example, the definition of the product of two matrices $ A = BC $ can be written like this in terms of indices:

\\[ A_{ik} = \sum_j B_{ij} C_{jk} \\]

Here I’ve left off the sum limits because you usually know them from context or they can be defined later. For example, here we could add that this equation only makes sense if A, B and C are of shapes `(m, l), (m, n) and (n, l)` respectively, so we know the sum should be over `n` indices.

Einstein summation notation is basically the statement “whenever you see a repeated pair of indices in an expression, pretend there is a sum over that index (over the appropriate range)”. With this, the above equation becomes:

\\[ A_{ik} = B_{ij} C_{jk} \\]

You can find really good expositions of this online so I won’t go into more basic detail here, but here is a list of nice quality of life improvements you can add on top of this.

## Div, Grad, Curl

In vector calculus you deal a lot with partial derivatives $\tfrac{\partial}{\partial \alpha}$ where $\alpha$ can be x, y, z, or t. My first hack is that you should start writing your partial derivatives as $\partial_\alpha$ instead. Then div, grad and curl become quite succinct:

Div: \\[ \nabla \cdot \vec{u} \rightarrow \partial_i u_i \\]

Grad: \\[ \nabla \vec{u} \rightarrow \partial_i u_j \\]

Curl: \\[ \nabla \times \vec{u} \rightarrow \epsilon_{ijk} \partial_j u_k \\]

where these indices implicitly sum over x, y, and z. That $\epsilon$ is the Levi-Civita symbol.
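numpy’s `einsum` lets you write these contractions almost verbatim. A quick sketch of the matrix product and the div/curl forms above:

```python
import numpy as np

# Matrix product A_ik = B_ij C_jk: the repeated index j is summed over
B = np.arange(6.0).reshape(2, 3)
C = np.arange(12.0).reshape(3, 4)
A = np.einsum('ij,jk->ik', B, C)
assert np.allclose(A, B @ C)

# Given grad_u[i, j] = ∂_i u_j at a point, div u = ∂_i u_i is a trace
grad_u = np.random.default_rng(1).standard_normal((3, 3))
div_u = np.einsum('ii->', grad_u)
assert np.isclose(div_u, np.trace(grad_u))

# Curl: [∇ × u]_i = eps_ijk ∂_j u_k, with the Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
curl_u = np.einsum('ijk,jk->i', eps, grad_u)
assert np.allclose(curl_u, [grad_u[1, 2] - grad_u[2, 1],
                            grad_u[2, 0] - grad_u[0, 2],
                            grad_u[0, 1] - grad_u[1, 0]])
```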
## Spacetime

In relativity, space and time get put on a more equal footing. They’re not exactly the same, but it makes sense to start to let our indices run over both space and time, though there are occasions where we would like to run over just space too. The first hack for this is that we say indices from the Greek alphabet should be read as running over t, x, y, and z, while those from the Latin alphabet run over just the spatial indices as before.

This lets us take something like a continuity equation

\\[\partial_t \rho + \nabla \cdot \vec{u} = 0 \\]

which expresses the idea of conservation of ‘stuff’ if $\rho$ is a density field of stuff and $\vec{u}$ is its velocity field, and transform it to:

\\[ \partial_\mu v_\mu = 0 \\]

where $v = (\rho, \vec{u})$ is called a 4-vector and nicely packs up the very closely related concepts of $\rho$, i.e. “how much stuff is right here”, and $\vec{u}$, i.e. “where is the stuff here flowing to?”. You can do similar tricks for many physical quantities, such as charge and current, the magnetic field and the electric field, energy and momentum, etc.

## Flat and curved spacetime

This next bit starts to touch on things beyond my ken but I’ll gesture at it anyway. When you start to talk about curved space, and even spacetime, it becomes useful to define a special rank-two tensor called the metric. In flat spacetime the metric is called ‘the Minkowski metric’ and it’s a diagonal matrix $\eta_{\mu\nu}$ with (1, -1, -1, -1) along the diagonal. (It could also be (-1, 1, 1, 1) if you like.) For flat spacetime the metric really just helps us keep track of signs.
The simplest place it crops up is that the ‘spacetime interval’ between two events in space and time is:

\\[ \delta s^2 = \delta t^2 - \delta x^2 - \delta y^2 - \delta z^2 = \delta_\mu \eta_{\mu\nu} \delta_\nu \\]

or

\\[ \delta s^2 = \vec{\delta} \cdot \eta \cdot \vec{\delta} \\]

For curved spacetimes, the metric gets more complicated and can have arbitrary off-diagonal terms too, which describe the curvature of spacetime and other effects.

The final trick is that this insertion of the metric in the middle of tensor contractions comes up so much that we can define a new notation just for it: we say that when you contract a superscript index with a subscript index, you have to insert the metric in between:

\\[\delta^\mu \delta_\mu = \delta_\mu \eta_{\mu\nu} \delta_\nu \\]

Now, I’ve called these things hacks and tricks, but they also connect to much deeper mathematical concepts such as covariance and contravariance. This seems like it’s usually the case with nice notation. It makes me think of things like the relationship between the derivative operator $\tfrac{d}{dt}$ and the infinitesimal $dt$.

I used this to generate the thumbnail for this post.
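The interval computation maps straight onto `einsum` too. A small sketch with a made-up displacement between two events:

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A made-up displacement (dt, dx, dy, dz) between two events
delta = np.array([2.0, 1.0, 1.0, 1.0])

# s^2 = delta_mu eta_munu delta_nu = dt^2 - dx^2 - dy^2 - dz^2
s2 = np.einsum('m,mn,n->', delta, eta, delta)
assert np.isclose(s2, 2.0**2 - 1.0 - 1.0 - 1.0)
```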
thomashodson.com
January 7, 2025 at 9:56 AM
MicroPython
My first exposures to programming as a kid were through Processing and Arduino. After a while playing with Arduino, I started to understand from forum posts and other places the magical truth: Arduino is just C with some libraries and an easy-to-set-up dev environment! I started to read the datasheets of things like the ATmega328; I guess this was my version of the minicomputers that the generations before mine cut their teeth on. The ATmega328 is a relatively simple computer and you can just about read and understand the datasheet in a reasonable amount of time. It has all sorts of hardware that you can configure: timers that you can set up to count up/down, to trigger interrupts, toggle pins etc. I had loads of fun with this as a nerdy kid.

However, the compile and upload time is kinda long, C that hits registers directly is both ugly and hard to debug, and when you start to work with bigger systems like the ESP32 and RP2040 that have WiFi and multiple cores and stuff, this all starts to get a bit less fun, at least for me. But because the likes of the ESP32 and the RP2040 are so much more powerful, **they can run a Python interpreter and it’s surprisingly fast and really fun!** Everyone loves to hate on Python for being slow, and obviously don’t write your tight loops in it (or do, I can’t tell you what to do). But even on a resource constrained microprocessor you can have a fun time with Python!

So anyway, here is a compendium of things I’ve been doing with MicroPython. Some of this is so that I don’t forget how to do it later, so there’s a little more detail than might be warranted.

## Get yourself a dev board

The Raspberry Pi Pico (the official dev board for the RP2040 micro-controller) is really nice, if you went to EMFcamp you can use the badge, and ESP32-based boards work really well too! The easiest way to start is to flash a prebuilt firmware.
For RP2040 boards that means putting the board in boot mode (by holding the BOOT button and powering it up) and then dragging and dropping a .uf2 file onto the virtual file system that appears.

## Run some code!

mpremote is a really handy little tool for interacting with a MicroPython board. My first scripts to play with this looked like this:

```
mpremote cp main.py :main.py
mpremote run main.py
```

You can also go straight to a REPL:

```
mpremote repl
```

## Next Steps

In the next few posts I’ll talk a little about:

* Drawing graphics
* Using nice fonts
* Compiling your own custom MicroPython firmware builds and when that makes sense
* Compiling your firmware to WebAssembly so you can make a web-based simulator
* Debugging the RP2040 with the Raspberry Pi Debug Probe
* Using the DMA hardware on the RP2040 to offload work from the main CPU
* Async programming with MicroPython

Here’s a little WebAssembly simulator of the MicroPython project I’ve been working on. I’ll expand on this in later posts, but very quickly:

* It’s targeted at a 240x240 pixel circular display that stores RGB colors with 5, 6 and 5 bits for each channel, respectively.
* This is running under WebAssembly with some custom code to convert the RGB 565 data and display it in a `<canvas>` tag.
* I’m using a TTF font called Gunship converted to bitmap format and frozen into the firmware.
thomashodson.com
January 7, 2025 at 9:56 AM
Sensor.Community Workshop at EMFcamp
Welcome to the guide for the workshop Build your own Sensor.Community air quality monitoring station! See an issue on this page? Open a PR!

## Obligatory Spiel

Air pollution is a major public health issue, but you’d be surprised how few official monitoring stations there are in Europe. That’s an issue because pollution levels can vary a lot, even from one street to the next! To get the best picture possible we need more sensors, which is where citizen-led projects like Sensor.Community are having a lot of success!

Sensor.Community started life as "Luftdaten" in Stuttgart, Germany. It rebranded, but you will occasionally see references to "luftdaten" and "airrohr" in the docs and firmware. These are also useful alternate keywords to try when searching for information on the project.

In this workshop you’ll put together an air quality monitor made from an esp8266 and a few sensors, load up the Sensor.Community firmware and connect it to their network so that other people, scientists and policy makers can see where the problems are and hopefully change something. It will also contribute to this cool interactive map. We’ll discuss options for weather proofing, where to place the sensor and how to hook it into your own smart home setup if you have one.

## The Kits

The base kit (£15) contains:

* An esp8266 dev board pre-flashed with the firmware
* A BME280 pressure/temperature/humidity sensor
* A 2m micro USB cable
* A long F-F header cable (dupont) with 4 wires
* A USB power supply **is not** included, let’s try to prevent some e-waste by reusing an old one!
* There will be a pack of zip ties lying around somewhere that you can grab from

The base kit

The base+addon kit (£40) also contains:

* An SDS011 particulate matter sensor (PM2.5-10)
* A length of black plastic tube to separate the intake of the sensor a bit from the exhaust
* A short header cable with 4 conductors

The addon kit: an SDS011, a length of black tube and, not shown, a short length of F-F header cable with 4 conductors.

## In the workshop

1. Come buy a kit from me, either exact change or contactless.
2. Assemble it
3. Configure it

### Assembly

Attach the black plastic tube to the port on the SDS011. If your BME280 is unsoldered, solder the 4-pin header on now. If you can’t find a soldering iron, you can always skip this step for now and do it later; the kit will still work with just the SDS011 or even no sensors attached.

Each esp8266 has a unique chipID, similar to a MAC address. When I flashed the firmware I noted down the chipID on a piece of tape on the back of each board. You need this ID for a couple of steps in a minute so don’t lose it! If you do, you can use the firmware flasher to find it again. There are links to the firmware flasher binaries on the official guide.

Connect the headers up using the wiring diagram below: use the longer headers for the BME280 and the shorter ones for the SDS011.

Wiring Diagram

**WARNING** Be careful with the pin connections here! Accidentally swapping 5V/VIN and GND can destroy components. On the SDS011 one of the pins is labelled TXD, use this to orient yourself and double check all the wiring before plugging it in for the first time. If you smell a weird smell, unplug the power and triple check your wiring!

We want the input of the SDS011 tube to be close to the BME280, hence the different cables. Don’t worry about this too much now, but try to do this when you install it into a permanent position.

It should look roughly like this once assembled, though you'll have a black plastic tube too.
Note that I've written your chipID on that piece of plastic protecting the pins, don't lose it!

Done! If you were doing this at home, you would have also needed to install the firmware, but I did that step for you to save time in the workshop.

Now plug the sensor in. When it starts up, the firmware searches for any configured wifi networks it knows about, which initially is none. When it doesn’t find one, **it starts up a hotspot called “airRohr-{ChipID}” with password “airrohrcfg”.** Once you see the “airRohr-{Your ChipID}” network you’re done and can move on to the configuration. There is a chance that 30 wifi hotspots all starting up in the same location might cause some issues, so be patient if you don’t immediately see the new network.

### Configuration

Connect to this network on a device. It will likely open the config page in a captive portal for you, but if it doesn’t (depends on the device) go to 192.168.4.1. While you’re at EMF, let’s connect the sensor to the emfcamp wifi SSID: emf2gc024-open. There's currently one sensor running at EMF2024 (mine), let's get a few more up!

In the More settings tab you can change the interval at which measurements are taken. For radio spectrum politeness at EMF it would also be good to shorten the “Duration router mode”; this reduces how long the sensor broadcasts a hotspot for if it can’t find a network. In “sensors” you can configure which sensors are connected, which for this workshop will be one of SDS011 and BME280, or both.

_EDIT_: This section no longer works as of October 2024, the grafana link now requires authentication.

Next, you can check if Sensor.Community is receiving data from your board **even before it is registered**. Go to “https://api-rrd.madavi.de/grafana/d/GUaL5aZMz/pm-sensors?orgId=1&var-chipID=esp8266-{your ChipID}”. You should see some wifi signal strength data if your board is successfully sending data to Sensor.Community, even if it’s not registered. This may take a few minutes to happen.
If you don’t see any data, first wait a few minutes, then double check your chipID is right. I have misread at least one in the past. You can use the firmware flasher to do this. Another useful way to check if your board is working is to connect it to your laptop over USB and check the serial output; the baud rate is 9600. The easiest program to use is the serial monitor included in Arduino, but you can also use `screen` or `minicom`.

### Registering with Sensor.Community

Whether you intend to run the sensor out of your tent or village at EMF (which I encourage!) or wait until you get home to install it in a more permanent location, the next step is registration. You’ll need to provide some details about the location of the sensor, so wait until you’ve installed it somewhere, at least semi-permanently.

Go to devices.sensor.community and start by making an account. Once you receive the email you can confirm your account and go ahead with registering the sensor. Where it asks for “Sensor ID”, that’s your board’s chipID; in the end your device will be identified by a string like “esp8266-{chipID}”.

While you’re at EMF it might be nice to tick “Publish exact location” so that the sensor data is high resolution enough that we can map the site. However, when you install it at home you may consider turning this off again. For installation at EMF you can use the official map to get accurate (lat, lon) coordinates for the sensor by right clicking. Change the sensors to “SDS011” and “BME280”.

Once you’ve registered the sensor, its data will start appearing on the map! From your devices dashboard there’s a data link; for the sensor I set up this morning it looks like this.

### Recap

So at this point you (and/or I) have:

* Physically assembled the sensor
* Flashed the firmware onto the esp8266*
* Written down your **chipID** for later
* Logged onto the **airRohr-{Your ChipID}** hotspot and configured the sensor
* Registered your sensor with the Sensor.Community project.
* If you’re in a workshop I likely did this step for you.

If you have gotten stuck with any of these steps, head to the troubleshooting section for some suggestions.

## After the workshop

Find a proper location for the sensor. This could be your home, but you can also get creative and ask local schools or the like if they would like a sensor installed. Practically, you’ll need some weather proofing and a 5V power source. It’s recommended to place the sensor 1-3m above ground level in a well ventilated outdoor area. Basically you’re trying to measure the same air we’re all breathing.

Options for weather proofing:

* Use a U-bend piece of drain pipe as recommended by the project
* Browse some of the many 3D printed case designs online

Congratulations! You’re now a part of a global network contributing to fighting air pollution!

## Troubleshooting

If you’re in a workshop, come find me or one of the helpers!

You can get useful debug output from the sensor by connecting it to your laptop and opening a serial terminal with baud rate 9600. The Arduino IDE is an easy way to do that, but you can also use terminal commands like screen, minicom or cu. Sparkfun has a good guide on this.

### No Hotspot

If you can’t see the **airRohr-{Your ChipID}** hotspot it means one of three things:

1. Your esp8266 started a hotspot for 10 minutes after it got power, but it’s been on longer than that so it turned it off again.
2. Your esp8266 successfully connected to your wifi network.
3. Your esp8266 is broken or has no firmware.

Eliminate 1 as a possibility by power cycling the board. You can check 2 either by opening “http://airRohr-{Your ChipID}.local” in your browser while connected to your home wifi, or by looking at the serial output. If you think the problem might be 3, try reflashing the esp8266 firmware. If that doesn’t help, maybe your 5V power supply is a little weak or your micro USB cable has a high resistance; swap them both out to eliminate that as a possible issue.
Failing all of the above, you might have to replace the esp8266. Definitely check the serial output first if you can, it always helps to see what’s going on.

### Won’t connect to the wifi

I.e. the hotspot does not disappear after configuration. Try a different network if you can; the firmware doesn’t support anything fancy like username/password authentication or WPA3. Connecting to a phone hotspot is a good test to see if your other wifi might be the problem. I have had an issue with a board just stubbornly refusing to connect to a wifi network that it really should be compatible with. For reasons I don’t understand a firmware reflash fixed this, so you can always try that too, it can’t hurt.

### Data not flowing after registration

Your board connected to your wifi and you registered it, but now the data doesn’t seem to be showing up on the Sensor.Community site. First, wait at least 5 minutes. Next, double check your chipID is right; in the workshop I had misread at least 1 of the 30 kits we set up. If you realise your registered chipID is wrong, make a new device with the right chipID and email tech@sensor.community letting them know to release the _incorrect_ chipID that you registered, so that if anyone else has that ID they won’t get the dreaded “Sensor ID is already registered”. It’s not enough to just delete it in the interface.

### Sensor ID is already registered

See here.
thomashodson.com
January 7, 2025 at 9:56 AM
Interactive web maps from a static file
PMTiles is a new project that lets you serve vector map data from static files through the magic of HTTP range requests. The vector data is entirely served from a static file on this server. Most interactive web maps work by constantly requesting little map images from an external server at different zoom levels. This approach uses much less data and doesn’t require an external server to host all the map data.

Getting this to work was a little tricky. I mostly followed the steps from Simon Willison’s post, but I didn’t want to use npm. As I write this I realise that this site is generated with jekyll, which uses npm anyway, but somehow I would like the individual posts to Just Work™ without worrying about updating libraries and npm. So I grabbed `maplibre-gl.css`, `maplibre-gl.js` and `pmtiles.js`, plonked them into this site and started hacking around. I ended up mashing up the code from Simon Willison’s post and the official examples to get something that worked.

I figured out from this github issue how to grab a module version of `protomaps-themes-base` without npm. However, I don’t really like the styles it produces. Instead I played around a bit with the generated JSON styles to make something that looks a bit more like the Stamen Toner theme. Looking at the source code for `protomaps-themes-base`, I realise I could probably make custom themes much more easily by just swapping out the theme variables in the package.

Todo:

* Figure out how to use maputnik to generate styles for PMTiles.
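The range-request trick itself is simple to sketch: a tile lookup is just “read `length` bytes starting at `offset`”, which over HTTP becomes a `Range: bytes=...` header against the static file. A toy illustration of the mechanism in Python (not the actual PMTiles format):

```python
import os
import tempfile

def read_range(path, offset, length):
    # Local stand-in for an HTTP `Range: bytes=offset-(offset+length-1)`
    # request against a statically hosted file
    with open(path, "rb") as fh:
        fh.seek(offset)
        return fh.read(length)

# Pretend this file is a tile archive sitting on a static host
data = bytes(range(256))
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

# Fetch "tile" bytes 16..23 without ever reading the whole file
chunk = read_range(path, 16, 8)
assert chunk == bytes(range(16, 24))
os.unlink(path)
```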
thomashodson.com
January 7, 2025 at 9:56 AM
Maps Maps Maps: Part 2
A last minute leaving gift idea for a friend inspired me to finish my first actual laser cut map. I used leaflet.js to overlay the names of some places we had visited together in London onto those nice Stamen Design map tiles from before. You can see the digital version here.

This is what it looks like straight off the laser cutter. The contrast is super washed out because the smoke from the cutting process darkens all the surrounding wood.

I had a bunch of issues getting that to work, mostly based around the fact that these tiles are raster images intended for streaming to a zoomable and pannable viewer on a screen. The design tradeoffs of the maps don’t quite make as much sense when you start transferring them to a static image. I did some hacks to use the tiles intended for a higher zoom level, but you can only take that so far before the text starts getting unreadable.

To deal with the darkening from the smoke I sanded the whole thing back with 80 grit sandpaper on an orbital sander. I did break a few small features off here and there, but it's ok!

I think there is a better approach that involves getting raw OpenStreetMap data and rendering it directly using something like QGIS and some kind of map style files, but that seems like a whole new deep rabbit hole I’m not ready to fall into just yet.

The final reveal!
thomashodson.com
December 29, 2024 at 9:57 AM