"We have a really great [insert technical solution]. Shame we went bankrupt before any customers got to use it".
"We have a really great [insert technical solution]. Shame we went bankrupt before any customers got to use it".
It's easy to forget that the _entire_ repeatable migration reruns whenever the file changes. This particular repeatable ran `create or replace trigger` on most of the tables in our system, causing fun locking issues.
When we asked, they told us their network was sufficiently reliable, so we wouldn't have issues with missing or duplicate data.
Still blows my mind when I think about it today.
So, if you do `BigDecimal("10075") / BigDecimal("100")`, you'll get `BigDecimal("101")`.
Fixed it by calling `.divide` directly with an explicit rounding mode, but I did not expect precision loss by default.
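This matches Kotlin's `/` operator on `BigDecimal`, which delegates to `divide(other, RoundingMode.HALF_EVEN)` and keeps the dividend's scale, so a scale-0 input gives a scale-0 result. A minimal sketch, assuming Kotlin is the language in play here:

```kotlin
import java.math.BigDecimal
import java.math.RoundingMode

fun main() {
    val cents = BigDecimal("10075")
    val hundred = BigDecimal("100")

    // Kotlin's '/' calls divide(other, RoundingMode.HALF_EVEN), which keeps the
    // dividend's scale. "10075" has scale 0, so 100.75 gets rounded to 101.
    println(cents / hundred)                                 // 101

    // Calling divide directly with an explicit scale and rounding mode keeps the cents.
    println(cents.divide(hundred, 2, RoundingMode.HALF_UP))  // 100.75
}
```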
In RAM mode, our test suite only takes 20s to run, down from 1m4s.
This is such an obvious optimization that I wonder why we hadn't thought of it before.
In `docker-compose.yml`:

```yaml
environment:
  PGDATA: /pgtmpfs
tmpfs:
  - /pgtmpfs
```
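For context, a minimal sketch of how those keys might sit inside a complete service definition, assuming the official `postgres` image; the service name, image tag, and password are illustrative, not from the original:

```yaml
services:
  db:
    image: postgres:16          # illustrative image tag
    environment:
      POSTGRES_PASSWORD: test   # illustrative; the official image requires a password
      PGDATA: /pgtmpfs          # point the data directory at the tmpfs mount below
    tmpfs:
      - /pgtmpfs                # the whole cluster lives in RAM and vanishes with the container
```

Because the data directory never touches disk, writes run at memory speed and everything disappears when the container stops, which is usually exactly what you want for a throwaway test database.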