For once, I wonder whether this post can help anybody else: my context is pretty specific. Anyway, just in case it does, here it is.
My Jet Train project makes use of GTFS. GTFS stands for General Transit Feed Specification. It models public transportation schedules and their associated geographic information.
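A static GTFS feed is a ZIP archive of CSV-formatted `.txt` files. As a quick illustration, here are a couple of made-up rows from a `stops.txt` file; the four columns are defined by the spec:

```csv
stop_id,stop_name,stop_lat,stop_lon
1001,Main St & 3rd Ave,37.7793,-122.4193
1002,Main St & 5th Ave,37.7801,-122.4170
```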
GTFS is based on two kinds of data: static data and dynamic data. Static data may change, but rarely, e.g., transit agencies and bus stations. It's available as static files that you need to download now and then. Until now, I had to download and overwrite them every time I ran the demo.
As a developer, I'm lazy and wanted to automate this task. I used GitHub Actions for that:
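Here's a minimal sketch of that first version: the same fetch-and-extract steps you'll see in the final workflow below, followed by a plain commit and push. The exact git incantation is an approximation:

```yaml
name: Refresh Dataset
on:
  schedule:
    - cron: '0 2 * * 1'                       # assumption: weekly, on Mondays, as explained below
jobs:
  build:
    name: Refresh Dataset
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Fetch dataset archive
        run: curl -o archive.zip https://api.511.org/transit/datafeeds\?api_key\=${{ secrets.FIVEONEONE_API_KEY }}\&operator_id\=RG
      - name: Extract files of interest from the archive
        run: unzip -o -j archive.zip agency.txt routes.txt stop_times.txt stops.txt trips.txt -d ./infrastructure/data/current
      - name: Remove archive
        run: rm archive.zip
      # Sketch: commit straight to the default branch, skipping the commit if nothing changed
      - name: Commit and push
        run: |
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git add ./infrastructure/data/current
          git diff --staged --quiet || git commit -m "Update to latest data files"
          git push
```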
It's not an issue to commit directly: it's not code but data. The code should already have all the built-in safeguards to prevent unexpected data from causing exceptions at runtime. I had a couple of surprises in the past and applied a lot of defensive programming techniques.
Yet, I was not happy with the above automation:
- Commits happen every week, regardless of whether I need to run the demo or not, which creates a lot of unnecessary commits. That's the reason I scheduled the action weekly and not more often.
- The action is scheduled on Mondays. If I run the demo on a Friday, I'll need to update the data files anyway.
Hence, I decided to switch to an alternative approach. Instead of committing directly, I updated the workflow to open a Pull Request. If I need to run the demo, I merge it (and pull locally); if not, it stays open. If an open PR already exists, the action overwrites it. Now, I can schedule the action more frequently:
```yaml
name: Refresh Dataset
on:
  schedule:
    - cron: '12 2 * * *'                      # 1
jobs:
  build:
    name: Refresh Dataset
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2             # 2
      - name: Fetch dataset archive
        run: curl -o archive.zip https://api.511.org/transit/datafeeds\?api_key\=${{ secrets.FIVEONEONE_API_KEY }}\&operator_id\=RG  # 3
      - name: Extract files of interest from the archive
        run: unzip -o -j archive.zip agency.txt routes.txt stop_times.txt stops.txt trips.txt -d ./infrastructure/data/current  # 4
      - name: Remove archive
        run: rm archive.zip                   # 5
      - name: Create PR
        uses: peter-evans/create-pull-request@v3  # 6
        with:
          commit-message: Update to latest data files
          branch: data/refresh
          delete-branch: true
          title: Refresh data files to latest version
          body: ""
```
1. Run the action daily
2. Check out the repository
3. Get the static data files archive
4. Extract only the required files from the archive
5. Remove the archive file for cleanup
6. Use the create-pull-request action. It creates a PR that automatically contains all new and updated files; that's why I only extract the files of interest and remove the archive beforehand.
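When I do need fresh data, merging and pulling is all it takes, e.g., with the GitHub CLI (just a sketch; any way of merging the PR works):

```bash
# Merge the PR opened from the data/refresh branch, then update the local clone
gh pr merge data/refresh --merge
git pull
```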
As I mentioned in the introduction, I'm not sure this post can help many people. If it does, please don't hesitate to comment and let me know about your use case.
The complete source code for this post can be found on GitHub: