The project includes a Laravel job (Prefetcher) and a YouTube API client (YouTubeClient) that efficiently fetches and caches popular videos. It also includes a WikipediaClient for fetching country descriptions and various routes for cache management and data seeding.
Some things are left as TODOs. There is also an MVP (first commit) with a significantly reduced feature set that caches based on YouTube's own pageTokens. Because YouTube's prevPageToken/nextPageToken values are not stable identifiers, that approach cached the same pages multiple times (for example, the first page is reachable both via token CAAQAA and via CDIQAQ). I've since implemented a wrapper that lets you specify any maxResults value and use numeric pageTokens.
I used Laravel v11.42.1, PHP v8.4.4, WSL2 + Docker Desktop on Windows. I also tested with a fresh Ubuntu-24.04.1 installation and docker-compose, though I had to add default user groups to the .env file (already present in .env.example):
WWWGROUP=1000
WWWUSER=1000

You will need docker-compose.
Then you can clone and deploy the project:
git clone https://github.com/Iaotle/youtube-feed-laravel-app.git project
cd project
# copy the .env.example file, preconfigured except for the YouTube API key.
cp .env.example .env
# IMPORTANT: add your own YouTube API key to the .env file (10'000 queries a day for free)
echo "YOUR_YOUTUBE_API_KEY" >> .env
docker-compose up

The one-time setup scripts that seed the database can be re-run on startup by deleting the .initialized file in the app's root directory.
- Runs at port 9000, or the port set by APP_PORT.
- Deploys one queue worker (see supervisord.conf).
Refer to https://laravel.com/docs/11.x/installation#docker-installation-using-sail to set up Laravel Sail. The vendor folder is generated by running composer install, and the NPM commands run in the container's entrypoint script in docker-compose.yml, so you don't need to run them manually. You will need PHP, NPM, and Composer already installed.
You can run the project using Laravel Sail:
composer install
./vendor/bin/sail up

Run force_setup_sail.sh to reset/refresh the container configuration if something is not working as expected. It will also run the artisan tests.
Running outside a Docker container requires you to also install a database and a cache backend (Redis, for example) and configure the app to use them in the .env file. You can then run the server yourself with php artisan serve, and compile the frontend with Vite via npm run prod (for which you will need to run npm install first).
- Refreshes the whole cache
- Fetches all pages for all countries and checks that the cache hits are faster
- Checks that dispatcher job was correctly sent on first request
- Run python3 e2e_test.py
- Don't rely on the YouTube API when testing
- Mock token service, mock http, etc.
- Run ./vendor/bin/sail artisan test
- Look in storage/logs/laravel.log for debug output.
- Set the DB/CACHE env variables to database and refresh the container if you want to browse the data with a DB explorer (e.g. phpMyAdmin).
- Go to the frontend (default http://127.0.0.1:9000) and use it.
/countries: Main endpoint; returns the Wikipedia extract and videos. Supports the country, pageToken, and maxResults query parameters.
/supported-countries: Fetches country names and ISO codes.
Keep in mind that the database cache driver does not support tagging, while Redis does.
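As a quick illustration of the /countries parameters, here is a minimal Python sketch that builds a request URL against a local instance (the host and port assume the default APP_PORT of 9000; the helper name is my own, not part of the project):

```python
from urllib.parse import urlencode
from urllib.request import urlopen  # only needed for the optional live call below
import json

def build_countries_url(base: str, country: str, page_token: int, max_results: int) -> str:
    """Build the /countries request URL with its query parameters."""
    query = urlencode({"country": country, "pageToken": page_token, "maxResults": max_results})
    return f"{base}/countries?{query}"

url = build_countries_url("http://127.0.0.1:9000", "US", 40, 20)
print(url)  # http://127.0.0.1:9000/countries?country=US&pageToken=40&maxResults=20

# Against a running instance you could then do:
# data = json.loads(urlopen(url).read())
```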
/clear-cache: Clears all caches.
/clear-country-cache/{country}: Clears the YouTube and Wikipedia caches for a specific country.
/clear-youtube-cache: Clears only the YouTube cache.
/clear-wikipedia-cache: Clears only the Wikipedia cache.
/clear-country-description: Clears stored country descriptions in the database. Once a description has been cached from Wikipedia, the app never calls the API again and uses the DB instead.
/seed-countries: Runs CountrySeeder. Seeds the default list of countries without talking to restcountries, so it should always work.
/seed-full-countries: Runs FullCountrySeeder. Not recommended, as requesting all countries makes a lot of network requests. The restcountries API also has some timeout problems, so this might not be reliable (I added a retry and timeout to the request and it has worked every time since, YMMV).
- Uses a next-page token to prefetch and cache YouTube videos.
- Gets dispatched to fetch a full list of videos for a country, recursively dispatches more jobs if there are more pages.
- Logs errors when fetching fails.
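The dispatch-and-recurse behaviour can be sketched in a few lines of Python (the actual implementation is a Laravel queue job; the names here and the in-memory job queue are illustrative only):

```python
from collections import deque

PAGE_SIZE = 50  # videos per cached page

def run_prefetcher(fetch_page, total_results: int) -> dict:
    """Simulate the Prefetcher job chain: each job caches one 50-video page
    and dispatches a follow-up job while more pages remain."""
    cache = {}
    jobs = deque([0])  # initial dispatch: page 0
    while jobs:
        page = jobs.popleft()
        try:
            cache[page] = fetch_page(page)
        except Exception as err:
            # The real job logs fetch failures instead of crashing the chain.
            print(f"prefetch failed for page {page}: {err}")
            continue
        # Recursively dispatch the next job if there are more pages.
        if (page + 1) * PAGE_SIZE < total_results:
            jobs.append(page + 1)
    return cache

# Fake fetcher: page n holds video ids [n*50, n*50+49]
videos = run_prefetcher(lambda p: list(range(p * 50, p * 50 + 50)), total_results=120)
print(sorted(videos))  # [0, 1, 2]  -> three pages cached for 120 results
```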
- Fetches and caches popular YouTube videos.
- Implements caching with Redis, avoiding duplicate API requests while the cache is fresh.
- Pages stored in cache as 50-video chunks with YouTube tokens
- Partial fetches trigger background prefetching of the whole page range
- Multiple cached pages combined to serve arbitrary ranges
- Old pages auto-expire after 60 minutes (configurable)
+-------------------------------------------------------+
| User Request: |
| GET http://${APP_URL}:${APP_PORT}/countries |
| country=US, maxResults=20, pageToken=40 |
| (Videos 40-59) |
+-------------------------------------------------------+
|
v
+-----------------------------------+
| Calculate Needed Pages: |
| Page 0 (0-49) and Page 1 (50-99) |
+-----------------------------------+
|
+-------------------------------+
| Fetch Pages from Cache/API |
| |
+-----------------------+ +-----------------------+
| [Cached Page 0] | | [Cached Page 1] |
| YouTube Token: CAAQAA | | YouTube Token: CDIQAA |
| Videos: 0-49 | | Videos: 50-99 |
| (Freshly cached) | | (Freshly cached) |
+-----------------------+ +-----------------------+
| |
+------------+ +----------------+
|
+-----------------------------------+
| Combine pages: 0-49 + 50-99 |
| Total: 100 videos |
+-----------------------------------+
|
+-----------------------------------+
| Slice Requested Range: 40-59 |
| (20 videos across both pages) |
+-----------------------------------+
|
+-------------------------------------------------------+
| Response with 20 videos + pagination tokens |
| nextToken=60, prevToken=20, offset=40 |
| |
| +-----------------------------------------------+ |
| | Prefetcher Job Dispatched! | |
| | (Pages 2...n cached proactively so pages | |
| | never go out of sync) | |
| +-----------------------------------------------+ |
+-------------------------------------------------------+
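The flow in the diagram can be sketched as follows; this is a simplified Python rendering of the page calculation and slicing (the real logic lives in the PHP wrapper, and these function names are my own):

```python
PAGE_SIZE = 50  # videos per cached page

def pages_needed(offset: int, max_results: int):
    """Which 50-video cache pages cover the requested range?"""
    first = offset // PAGE_SIZE
    last = (offset + max_results - 1) // PAGE_SIZE
    return list(range(first, last + 1))

def slice_range(pages: dict, offset: int, max_results: int):
    """Combine cached pages and slice out the requested window,
    returning the numeric pagination tokens the wrapper exposes."""
    combined = []
    for p in sorted(pages):
        combined.extend(pages[p])
    base = min(pages) * PAGE_SIZE       # absolute offset of the first cached video
    start = offset - base
    return {
        "videos": combined[start:start + max_results],
        "nextToken": offset + max_results,
        "prevToken": max(offset - max_results, 0),
        "offset": offset,
    }

print(pages_needed(40, 20))  # [0, 1]  -> pages 0 (0-49) and 1 (50-99)
cached = {0: list(range(0, 50)), 1: list(range(50, 100))}
resp = slice_range(cached, 40, 20)
print(resp["videos"][0], resp["videos"][-1], resp["nextToken"], resp["prevToken"])  # 40 59 60 20
```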
- Converts YouTube's next-page tokens to numeric offsets for better pagination.
- Keeps the list of YouTube page tokens in the cache; there are more than enough stored tokens to address every page.
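The token bookkeeping amounts to a page-index-to-token map; a minimal Python sketch of the idea (helper names are illustrative, and the token values are just the ones from the example above):

```python
PAGE_SIZE = 50  # videos per YouTube API page

def register_token(token_list: list, page_index: int, token: str) -> None:
    """Record the YouTube pageToken for a page as it is discovered,
    so the cached list always maps page index -> API token."""
    while len(token_list) <= page_index:
        token_list.append(None)
    token_list[page_index] = token

def token_for_offset(token_list: list, offset: int):
    """Translate a numeric offset back to the YouTube token for its page."""
    page = offset // PAGE_SIZE
    return token_list[page] if page < len(token_list) else None

tokens: list = []
register_token(tokens, 0, "CAAQAA")
register_token(tokens, 1, "CDIQAA")
print(token_for_offset(tokens, 40))   # CAAQAA  (offset 40 lives in page 0)
print(token_for_offset(tokens, 75))   # CDIQAA  (offset 75 lives in page 1)
```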
- Retrieves country descriptions from Wikipedia.
- Uses the database if a description is already stored.
- Caches excerpts in Redis for 24 hours to minimize API/DB calls.
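The lookup order (Redis cache, then database, then Wikipedia) can be sketched like this; the dicts stand in for Redis and the descriptions table, and the function name is illustrative rather than the actual client API:

```python
import time

CACHE_TTL = 24 * 3600  # excerpts cached for 24 hours

cache: dict = {}   # stands in for Redis
db: dict = {}      # stands in for the descriptions table

def get_description(country: str, fetch_from_wikipedia) -> str:
    """Cache -> DB -> Wikipedia lookup order used by the client sketch."""
    entry = cache.get(country)
    if entry and entry[1] > time.time():          # fresh cache hit
        return entry[0]
    if country in db:                             # stored once, never refetched
        text = db[country]
    else:
        text = fetch_from_wikipedia(country)      # only on a true miss
        db[country] = text
    cache[country] = (text, time.time() + CACHE_TTL)
    return text

calls = []
def fetch(country):
    calls.append(country)
    return f"{country} extract"

print(get_description("US", fetch))  # US extract   (Wikipedia called)
print(get_description("US", fetch))  # US extract   (served from cache)
print(len(calls))                    # 1
```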