As part of generating GPS trail overlays for timelapse videos, I needed to import part of the OSM dataset into a PostGIS database. After a couple of days of struggling with the import, I figured it was worth understanding the problem space better so I could optimize the process for my needs.

The rough plan:

  1. Understand OSM PBF files
  2. Understand the import process
  3. Change the system to be more efficient (draw the rest of the owl)

This series of blog posts will document the final solution. If you aren't technical, this isn't going to be very interesting. If you don't care about maps, this isn't going to be very interesting. If you don't care about optimizing things, this isn't going to be very interesting. Can you see the pattern? The pattern is not very interesting.

This is a results-driven project. The correctness of the PostGIS database itself is not important; all that matters is correct maps at the end. Everything in between is fair game for manipulation.

That being said, everything in this space is written in C/C++/Java, none of which are languages I'm strongly familiar with, so the less I have to modify the existing tools, the better.

The problem

squirrel@emu:~/osm2pgsql/build$ ./osm2pgsql us-20200905.osm.pbf
osm2pgsql version 1.3.0 (1.3.0-changed)

Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=8000MB, maxblocks=128000*65536, allocation method=11
Mid: pgsql, cache=8000
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Using lua based tag processing pipeline with script openstreetmap-carto.lua
Using projection SRS 3857 (Spherical Mercator)
Setting up table: planet_osm_point
Setting up table: planet_osm_line
Setting up table: planet_osm_polygon
Setting up table: planet_osm_roads

Reading in file: us-20200905.osm.pbf
Using PBF parser.
Processing: Node(882792k 495.1k/s) Way(81909k 2.04k/s) Relation(271710 16.6/s)
node cache: stored: 638575926(72.34%), storage efficiency: 60.90% (dense blocks: 52425, sparse nodes: 309555201), hit rate: 73.02%
Osm2pgsql failed due to ERROR: std::bad_alloc
The command line has been removed to simplify the output. This and all future imports use an 8GB cache, slim mode, drop, hstore, the OSM transformation/style, and 12 threads unless specified otherwise.
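For reference, those settings correspond roughly to an invocation like the one below. This is a sketch, not my actual command line; the flag names are real osm2pgsql options, but the style file path is illustrative.

```shell
# Approximate invocation matching the settings above (style file path is a
# placeholder, not necessarily what was used):
./osm2pgsql \
    --cache 8000 \
    --slim --drop \
    --hstore \
    --number-processes 12 \
    --tag-transform-script openstreetmap-carto.lua \
    --style openstreetmap-carto.style \
    us-20200905.osm.pbf
```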

This took approximately 16 hours to fail.
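The node-cache line in the log actually tells us how doomed this run was. A back-of-the-envelope check, assuming the dense node cache stores each node's coordinates as two 32-bit integers (8 bytes per node; the per-node size is my assumption, not something printed in the log):

```python
# Sanity-check the node-cache numbers from the log above.
BYTES_PER_NODE = 8                    # assumed: lat/lon as two 32-bit ints
cache_bytes = 8000 * 1024 * 1024      # "cache=8000MB" from the log
efficiency = 0.6090                   # "storage efficiency: 60.90%"
total_nodes = 882_792_000             # "Node(882792k ...)"

# How many nodes the cache can hold at that efficiency:
cached_nodes = int(cache_bytes * efficiency / BYTES_PER_NODE)
print(f"{cached_nodes:,} nodes fit")              # ~638.6M, matching "stored: 638575926"
print(f"{cached_nodes / total_nodes:.1%} cached")  # ~72%, matching "72.34%"

# Cache size needed to hold every node at the same efficiency:
needed = total_nodes * BYTES_PER_NODE / efficiency
print(f"~{needed / 1024**3:.1f} GiB needed")      # roughly 11 GiB, well over 8GB
```

So at that storage efficiency, an 8GB cache was never going to hold all 882 million nodes; the remainder has to go somewhere, and "somewhere" is where the trouble starts.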


We have two goals:

  1. don't fail
  2. do it in less than 16 hours

Side note: the import did succeed when the system had swap enabled, but it took approximately 3 days*. The server is an i7-5820K with 32GB of RAM, with a bunch of other stuff running on it. Disk (including swap) is spinning rust. Lowering the cache size avoids the crash, but forces us to swap**, which is exponentially slower; I aborted that attempt after a week.

* This journey started in early September. I had a complete import of the US months ago; it just took 3 days. That wasn't good enough.

** I use the term swap throughout this blog to refer to disk thrashing in any form, because it's 4 letters ("swap") instead of 14 letters ("disk thrashing")***. Much of the swapping is actually paging in memory-mapped files, which is all a swap file is anyway****.

*** It's because of that kind of efficiency that they pay me the big bucks.

**** I know this is wrong, deal with it.