Some of our applications are deployed to Amazon Elastic Beanstalk. They are based on PHP and Symfony and of course use Composer to download their dependencies. On a fresh instance this can take a while, about 2 minutes for our application. That is annoyingly long, especially when you're scaling up to more instances because of, for example, a traffic spike.
You could include the vendor directory when you do eb deploy, but then Beanstalk doesn't run composer install at all anymore, so you have to make sure the local vendor directory contains the right dependencies. There are other caveats with that approach, so it was not a real solution for us.
The Composer cache to the rescue. Sharing the Composer cache between instances (with a simple upload to and download from an S3 bucket) brought the deployment time for composer install down from about 2 minutes to 10 seconds.
For that to work, we have the following in a config file in the .ebextensions directory:
```yaml
commands:
  01updateComposer:
    command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update
  02extractComposerCache:
    command: ". /opt/elasticbeanstalk/support/envvars && rm -rf /root/cache && aws s3 cp s3://rokka-support-files/composer-cache.tgz /tmp/composer-cache.tgz && tar -C / -xf /tmp/composer-cache.tgz && rm -f /tmp/composer-cache.tgz"
    ignoreErrors: true

container_commands:
  upload_composer_cache:
    command: ". /opt/elasticbeanstalk/support/envvars && tar -C / -czf composer-cache.tgz /root/cache && aws s3 cp composer-cache.tgz s3://your-bucket/ && rm -f composer-cache.tgz"
    leader_only: true
    ignoreErrors: true

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: COMPOSER_HOME
    value: /root
```
It downloads composer-cache.tgz on every instance and extracts it to /root/cache before composer install runs. After a deployment has gone through, it creates a new tar file from that directory on the "deployment leader" only and uploads it to S3 again, ready for the next deployment or for new instances.
One caveat we haven't solved yet: that .tgz file will grow over time, since old dependencies stay in it. Some process should clear it from time to time, or you can simply delete the file on S3 when it gets too big. The ignoreErrors options above make sure that the deployment doesn't fail when the tgz file doesn't exist or is corrupted.
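Such a cleanup process could be as simple as a cron job that checks the object's size and deletes it past a threshold. A minimal sketch, not part of our actual setup: the bucket name, key, and the 500 MB threshold are placeholder assumptions you would adjust.

```shell
#!/bin/sh
# Hypothetical cleanup sketch: delete the shared Composer cache on S3 once
# it grows past a size threshold. Thanks to ignoreErrors, the next
# deployment survives the missing file, and the leader uploads a fresh,
# smaller cache afterwards.

BUCKET="your-bucket"                # placeholder bucket name
KEY="composer-cache.tgz"
MAX_BYTES=$((500 * 1024 * 1024))    # 500 MB, an arbitrary threshold

# Ask S3 for the object's size in bytes; report 0 if it doesn't exist.
cache_size() {
    aws s3api head-object --bucket "$BUCKET" --key "$KEY" \
        --query ContentLength --output text 2>/dev/null || echo 0
}

# Succeeds (exit 0) when the given size exceeds the threshold.
too_big() {
    [ "$1" -gt "$MAX_BYTES" ]
}

size=$(cache_size)
if too_big "$size"; then
    aws s3 rm "s3://$BUCKET/$KEY"
fi
```

Alternatively, an S3 lifecycle rule that expires the object after a fixed number of days achieves the same without any script.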