Processing stalled on flatnode install of nominatim-docker #648
Replies: 1 comment
The last line indicates osm2pgsql is running. That stage is single-threaded and only uses one CPU core, but "0.04k/s" is very slow indeed. OSM ways are properties (key-value pairs) plus a list of node IDs. osm2pgsql looks up the position (coordinates) of each node in the flatnode file, and does other processing of the actual properties, but that part is similar for nodes. I suspect the external hard drive is too slow: it needs to be able to handle a lot of random reads. Is it an SSD/NVMe drive? Is the cable fast (which USB protocol version, rated in gigabits per second)? A tool like iotop might shed some light on what the throughput is. macOS's Activity Monitor has a 'Disk' tab.
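To get a rough feel for the drive's random-read performance independent of osm2pgsql, you can time many small reads at random offsets into a large file, which approximates the flatnode lookup pattern. A minimal sketch in Python (the file path is a placeholder; OS caching will inflate the numbers on repeated runs, so treat the result as a ballpark only):

```python
import os
import random
import time


def random_read_rate(path: str, reads: int = 2000, chunk: int = 8) -> float:
    """Time `reads` small reads at random offsets in `path`; return reads/second.

    This loosely mimics osm2pgsql's flatnode access pattern: each node
    position is a tiny record fetched at an effectively random file offset.
    """
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(reads):
            f.seek(random.randrange(0, max(1, size - chunk)))
            f.read(chunk)
        elapsed = time.perf_counter() - start
    return reads / elapsed


# Usage (placeholder path; point it at the flatnode file on the external drive):
#   rate = random_read_rate("/Volumes/external/flatnodes.bin")
#   print(f"{rate:.0f} reads/s")
```

A healthy internal SSD should report orders of magnitude more reads per second than a spinning or USB-bottlenecked external disk.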
I am trying to set up a local-only, Docker-based Nominatim server for Europe. I am running Docker on a 64 GB Mac running Tahoe 26.3. I am using an external hard drive for the PBF file, PostgreSQL, and the flatnode file. I am installing and running Nominatim in Docker using this command:
Everything ran well up to a point.
I was closely monitoring memory on my local SSD and on the external hard drive. The `flatnodes.bin` file grew to 108.8 GB, the size of my local SSD did not change, and the `nominatim-data` folder looked like a normal database install. This assured me that the configuration for using the external HD was working, and that the flatnode option was working as opposed to putting all of the data into the database. The processing ran for about two hours and has now been stopped for over a day on this:
When I look at Activity Monitor on the Mac, there is very little pressure on the machine: only two cores are busy, and not very busy at that, and there is almost no memory pressure.
When I look at Docker, the container CPU usage is very low (<1% of 1000%, with 10 CPUs available), but the container memory usage is shown in red and is close to what appears to be the maximum (7.03 GB / 7.47 GB).
Am I right in concluding that the process is actually still working but at a crawl because the processing is probably all tied up in swapping inside the container?
Is the solution to increase the memory available to the container?
If so, what should I set it to, and how would I do that in the arguments to the `docker` command? If not, does anyone know what is going on?
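For reference, Docker's per-container limit can be raised with the `--memory` flag on `docker run`; on macOS the containers also cannot exceed the memory given to the Docker Desktop VM itself (Settings → Resources). A hedged sketch, with illustrative values only (the image tag, sizes, and volume path are assumptions, not the exact command from this thread):

```shell
# Illustrative values; adjust the image tag, paths, and sizes to your setup.
# --memory caps this container; on macOS, also raise the Docker Desktop VM
# memory under Settings -> Resources, since --memory cannot exceed the VM.
# --shm-size enlarges shared memory, which PostgreSQL uses heavily.
docker run -it \
  --memory=16g \
  --shm-size=1g \
  -v /Volumes/External/nominatim-data:/var/lib/postgresql/14/main \
  mediagis/nominatim:4.4
```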
Thank you.