Automatic Data Updates for Static Websites

99littlebugs · Nov 23 '21 · DEV Community

My Workflow

On Sunday, January 10th, 2021, teachers had just become eligible for the COVID-19 vaccine, but finding an appointment was tough. Each NY state location had its own sign-up link, and people were cycling through their list every couple of minutes hoping that availability would open. As a developer, I wondered whether I could help by creating a single pane of glass that would show availability for many locations at a glance.

I wanted something simple. I was excited about static sites and serverless, so I did some quick research on how I could string those concepts together to create an auto-updating website of vaccine appointment availability. I found tutorials that showed developers scraping websites and then committing the data back to a git repository, and I knew that's what I would try first. I told myself that if I could get the data scraping working that first night, the problem was worth more of my time.

I used cheerio for the website scraping and wrote the collected data out to a file. Then I used a GitHub Actions workflow on a 15-minute cron schedule to check out the repo, run the scraper script, and commit the data back. In 24 lines of YAML and 55 lines of JavaScript, I had an automatic data scraper (both pieces are sketched below). I was satisfied with my progress and well on my way.
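To make the scraping half concrete, here is a minimal sketch of that pattern. The URL, the .availability-status selector, and the data.json output path are hypothetical stand-ins for illustration, not the actual targets from the repo's scraper.js:

```js
// scraper.js — a minimal sketch; URL, selector, and output path are
// hypothetical, not the ones from the original project.
const fs = require('fs');
const https = require('https');
const cheerio = require('cheerio');

// Fetch a page's HTML over HTTPS.
function fetchPage(url) {
  return new Promise((resolve, reject) => {
    https
      .get(url, (res) => {
        let html = '';
        res.on('data', (chunk) => (html += chunk));
        res.on('end', () => resolve(html));
      })
      .on('error', reject);
  });
}

async function main() {
  const html = await fetchPage('https://example.com/vaccine-site'); // hypothetical URL
  const $ = cheerio.load(html);

  // Hypothetical selector: pull the availability text out of the page.
  const availability = $('.availability-status').text().trim();

  // Write the scraped data plus a timestamp to a file the static site can read.
  const data = { availability, checkedAt: new Date().toISOString() };
  fs.writeFileSync('data.json', JSON.stringify(data, null, 2));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```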
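And here is a sketch of the corresponding workflow, assuming the file layout above; it shows the checkout → scrape → commit pattern rather than the repo's exact scrape.yml. The guard before git commit keeps the run from failing when nothing on the page has changed:

```yaml
# .github/workflows/scrape.yml — a sketch of the pattern: every 15 minutes,
# check out the repo, run the scraper, and commit the data back.
name: scrape
on:
  schedule:
    - cron: '*/15 * * * *'   # every 15 minutes
  workflow_dispatch:          # allow manual runs too
jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci
      - run: node scraper.js
      - name: Commit updated data
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add data.json
          git diff --staged --quiet || git commit -m "Update scraped data"
          git push
```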

Submission Category:

DIY Deployments

Code

The full source code is available, linked at the last commit from January 10th.
Pay particular attention to /.github/workflows/scrape.yml (link) and scraper.js (link).

Additional Resources / Info

Inspiration for this pattern is listed in the repo's README file.
