What I ended up doing is using the /stations endpoint with pagination, although the pagination cursor seems buggy to me; maybe I'm missing something, but the loop would run forever and the cursor would never run out or change to null.

Ultimately I just hacked around it with a counter. There are 45,832 stations; if we can fetch 500 per request, we need ceil(45832 / 500) = 92 iterations to get them all. I then loaded the results into a pandas DataFrame, which lets me do the metadata analysis I wanted. It would be great if there were a CSV or XML file somewhere so one didn't have to make 92 individual requests to get all the data, but it's not a huge deal.

import requests
import pandas as pd
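As a sketch, the counter-based workaround could look like the following. Note that the base URL, the `limit`/`cursor` parameter names, and the `{"stations": [...], "cursor": ...}` response shape are assumptions for illustration, not the API's documented contract:

```python
import math

import pandas as pd
import requests

PAGE_SIZE = 500
TOTAL_STATIONS = 45832
# Hypothetical endpoint URL -- swap in the real one.
STATIONS_URL = "https://example.com/stations"


def fetch_page(cursor=None):
    """Make one request against the (assumed) /stations endpoint."""
    params = {"limit": PAGE_SIZE}
    if cursor is not None:
        params["cursor"] = cursor
    resp = requests.get(STATIONS_URL, params=params, timeout=30)
    resp.raise_for_status()
    # Assumed response shape: {"stations": [...], "cursor": <next cursor>}
    return resp.json()


def fetch_all_stations(get_page=fetch_page, total=TOTAL_STATIONS,
                       page_size=PAGE_SIZE):
    """Counter-based pagination: stop after ceil(total / page_size)
    requests instead of trusting the cursor to ever become null."""
    n_pages = math.ceil(total / page_size)  # 92 pages for 45,832 stations
    rows, cursor = [], None
    for _ in range(n_pages):
        page = get_page(cursor)
        rows.extend(page["stations"])
        cursor = page.get("cursor")  # pass the new cursor to the next call
    return pd.DataFrame(rows)
```

Passing the page-fetching function in as a parameter keeps the stopping logic separate from the HTTP call, so the loop can be exercised without network access.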
