braaannigan.bsky.social
@braaannigan.bsky.social
Polars has built-in date/datetime/duration functions. I use them a lot because they have a consistent API across Python versions, and the syntax for working with timezones is a lot easier to remember than Python datetimes!
September 26, 2025 at 10:03 AM
Polars has neat built-in approaches for casting common string datetime formats these days. So long, .str.strptime followed by some pattern I could never remember!
September 25, 2025 at 3:32 PM
Need to find performance bottlenecks? Then pyinstrument is an excellent tool. Recently it showed me that my pipeline runs weren't slow because of my data - it was because I was re-authenticating to AWS every time. You get this nice visual which makes it easy to spot the laggards
September 8, 2025 at 10:03 AM
I'm finding that O3 generates technically valid Polars code, but it leans very heavily on working with Series like numpy arrays and never comes close to proper lazy mode Polars syntax
July 10, 2025 at 2:03 PM
New blog post from NVIDIA and Polars showing how you can process datasets too large to fit in GPU memory (link below). For a single GPU it may be best to use the spill-to-system-memory approach, while for multi-GPU setups there is a new streaming engine approach
July 2, 2025 at 1:02 PM
I put together a user guide page on getting the best Polars code from LLMs. That was months ago, however!  How do you think it needs to be updated?

Generating Polars code with LLMs - Polars user guide
Generating Polars code with LLMs Large Language Models (LLMs) can sometimes return Pandas code or invalid Polars code in their output. This guide presents approaches that help LLMs generate valid Polars code more consistently. These approaches have been developed by the Polars community through test...
docs.pola.rs
June 20, 2025 at 10:31 AM
As projects mature you will want to invest in a tool to validate the schema and data in your dataframes. This blog post sets out a good summary on the different options for Polars users: https://posit-dev.github.io/pointblank/blog/validation-libs-2025/
June 18, 2025 at 10:03 AM
PyPI download stats work in mysterious ways. In the last few months Polars exhibited low continuous growth. Then, basically overnight, downloads almost doubled and became much more variable. Why?
June 9, 2025 at 11:31 AM
Let me count the ways that lazy mode in Polars ❤️ Parquet files

1. Polars can get the schema to start the query
2. Polars can use projection pushdown to subset columns
3. Polars can use predicate pushdown to limit the row groups it reads from the file when a filter is applied
May 19, 2025 at 9:11 AM
Interested in forecasting in python? A major new free online textbook by the leading forecasting academics and practitioners has been released: https://otexts.com/fpppy/

This adapts Rob Hyndman's excellent R forecast book to the python world
Forecasting: Principles and Practice, the Pythonic Way
otexts.com
May 7, 2025 at 9:02 AM
Using pytest with Polars? When there's an error the default traceback is often very long and you have to scroll through a lot to get to the relevant part. You can make it snappier by passing --tb=short to your pytest command to get to the point!
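If you want it as the default rather than typing the flag every time, you can put it in your pytest config (assuming a pytest.ini at the project root; setup.cfg or pyproject.toml work too):

```ini
# pytest.ini
[pytest]
addopts = --tb=short
```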
May 1, 2025 at 12:31 PM
You can add a new column to a Polars DataFrame at a specified index position with insert_column. Your data needs to be a Polars Series first
May 1, 2025 at 10:03 AM
One habit I've picked up with LLMs: if I'm working in a terminal but have too much data to read then I generate a function that takes my dataframe and produces an HTML page with plotly charts that I can then open in the browser. Basically an on-demand dashboard
April 30, 2025 at 1:03 PM
We can handle tricky JSON with Polars nested dtypes.

Here we have a list of dicts. But each row also contains a list of dicts. We deal with this by exploding the inner list of dicts to get each entry on its own row. Then we unnest the inner dicts so each field is its own column
April 30, 2025 at 9:12 AM
It should be called look-at-the-data science
April 24, 2025 at 12:30 PM
One thing to be careful with in Polars is using pl.when.then in cases where it isn't needed, as Polars pre-calculates all of the possible paths. It may be that a pl.when.then can be replaced by a join or replace_strict. This query is 5x faster as a join for example
April 22, 2025 at 9:02 AM
GPUs are a great fit for dataframes, but their use remains niche. However, the sheer volume of GPU manufacturing capacity means the cost/hassle of using them will drop. NVIDIA is pushing forward on the software side with Polars to make this a much more common experience
April 14, 2025 at 3:02 PM
The XGBoost Random Forest (XGBRFRegressor) is a criminally underrated forecasting model. You can see how overlooked it is by the fact that if I ask LLMs to use the XGBoost Random Forest they still start using the extremely slow sklearn Random Forest instead
April 6, 2025 at 9:01 AM
Polars has native support for nested data types - it's a long way from object columns with Python dictionaries in Pandas. Native support means Polars has an API built to work with nested data and a query engine that can do vectorized transformations on nested data
April 2, 2025 at 10:02 AM
One tool I use a lot these days is token-count. I use it to check how many tokens there are in one or more files before adding them to model context. It's a command line tool that can be pip installed. In this example we see that there are 300k tokens in just one Polars crate!
April 1, 2025 at 8:01 AM
Frantically trying to finish my Polars LLM evals experiments before my online event on Wednesday. I'll be evaluating which models work best for Polars and how you can prompt engineer your way to even better results. Deepseek-v3 is the hot (and cheap) new entrant!
March 30, 2025 at 9:01 AM
You can change display properties for Polars with pl.Config settings. In the snippet below I change to markdown format. This can be very handy - in JIRA, for example, with the markdown format a dataframe renders as a nice table rather than a mess of data
March 28, 2025 at 10:45 AM
You can set a default engine for Polars instead of specifying it in every .collect call. You do this with the POLARS_ENGINE_AFFINITY env var. The options are in-memory (the default), streaming or gpu. If your query isn't supported by the last two then it reverts to in-memory
March 26, 2025 at 10:02 AM
We can make a column based on if-elif-else in Polars with when.then.otherwise. The trick is that we can chain together as many when.thens as we need.

In this example we classify under 18 as a child, 18-64 as working age and over 64 as retired (as if any of us will retire at 65 😭)
March 25, 2025 at 9:18 AM