Assets by kenney(dot)nl.
Collision detection is still "future me's problem." I'm way more interested in NPC dialogue anyway.
How Hash Maps Work (From Scratch)
🛠️ Buckets, hashes, resizing, the whole thing:
👉 lbreede.github.io/quartz/how-h...
#BuildSlowThings
#TIL
#programming #hashmaps #rust #python #dataStructures
Good hashing
Resizing at the right time
Rehashing keys to rebalance the buckets
All this work to make map["key"] feel instant.
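All three of those pieces fit together in one compact sketch. This is my own minimal Python version (the `HashMap` name, the 0.75 load-factor threshold, and the use of Python's built-in `hash()` are all my assumptions, not from the post):

```python
class HashMap:
    """Toy separate-chaining hash map: hash, mod, scan, resize."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.count = 0
        self.buckets = [[] for _ in range(capacity)]

    def __setitem__(self, key, value):
        bucket = self.buckets[hash(key) % self.capacity]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # key exists: update in place
                return
        bucket.append((key, value))       # new key: append to the chain
        self.count += 1
        if self.count / self.capacity > 0.75:  # resize "at the right time"
            self._resize(self.capacity * 2)

    def __getitem__(self, key):
        for k, v in self.buckets[hash(key) % self.capacity]:
            if k == key:
                return v
        raise KeyError(key)

    def _resize(self, new_capacity):
        # Rehash every key into a fresh, larger bucket array.
        old = self.buckets
        self.capacity = new_capacity
        self.buckets = [[] for _ in range(new_capacity)]
        for bucket in old:
            for k, v in bucket:
                self.buckets[hash(k) % new_capacity].append((k, v))
```

Roughly what `map["key"]` is doing under the hood, minus decades of real-world tuning.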
…and then we rehash every key.
Because:
hf("some key") % 3 == 1
hf("some key") % 20 == 10
Changing the capacity changes everything.
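A rehash-on-resize sketch in Python (the `resize` name is mine, and `hash()` stands in for the post's `hf`):

```python
def resize(buckets, new_capacity):
    # Every key's bucket index depends on the capacity
    # (hash % capacity), so every key must be rehashed
    # into the new bucket array.
    new_buckets = [[] for _ in range(new_capacity)]
    for bucket in buckets:
        for key, value in bucket:
            new_buckets[hash(key) % new_capacity].append((key, value))
    return new_buckets
```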
Buckets get longer → scan time goes up → things get slow.
So we resize.
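"When" is a policy choice the post doesn't pin down; a common convention (my assumption here) is to resize once the load factor passes about 0.75:

```python
def load_factor(count, capacity):
    # Average number of entries per bucket.
    return count / capacity

def should_resize(count, capacity, threshold=0.75):
    # Assumed policy: grow once the map is ~75% full,
    # before the chains get long enough to hurt.
    return load_factor(count, capacity) > threshold
```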
If the key exists, update the value.
If not, append it.
Lookup? Same thing:
Hash, mod, scan the bucket for a match.
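Both paths in Python (the `put`/`get` names are mine; `hash()` plays the part of `hf`):

```python
capacity = 3
buckets = [[] for _ in range(capacity)]

def put(key, value):
    bucket = buckets[hash(key) % capacity]
    for i, (k, _) in enumerate(bucket):
        if k == key:
            bucket[i] = (key, value)  # key exists: update the value
            return
    bucket.append((key, value))       # key not found: append

def get(key):
    # Lookup mirrors insertion: hash, mod, scan the bucket.
    for k, v in buckets[hash(key) % capacity]:
        if k == key:
            return v
    raise KeyError(key)

put("a", 1)
put("a", 99)  # update, not a second entry
```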
We don’t overwrite — we use separate chaining:
[
[],
[("a", 1), ("b", 2)],
[]
]
Each bucket is a list of key–value pairs.
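That structure is easy to build directly in Python (using the built-in `hash()` as a stand-in for `hf`; whether two keys actually share a bucket depends on their hashes):

```python
capacity = 3
buckets = [[] for _ in range(capacity)]  # each bucket: list of (key, value)

def insert(key, value):
    # A collision just makes the bucket's list one entry longer —
    # nothing gets overwritten.
    buckets[hash(key) % capacity].append((key, value))

insert("a", 1)
insert("b", 2)
```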
hf(key) -> int
Hash the key → turn it into an index with % capacity → store it in an array.
Simple, right?
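Those three steps, sketched in Python — here the built-in `hash()` stands in for the post's `hf`, and the stored value is made up for illustration:

```python
def hf(key):
    # Stand-in hash function: Python's built-in hash().
    return hash(key)

capacity = 3
buckets = [None] * capacity

# Hash the key, fold the hash into a bucket index, store the pair.
index = hf("some key") % capacity
buckets[index] = ("some key", 42)
```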