In this post, we will go through a way to deploy PHP applications to a shared web host (and not a cloud instance or server).
Why a shared host, you may ask?
Because you don’t have to worry about backups, email, databases, security, SSL certs (along with renewals) and a host of other things you wouldn’t want to set up and maintain yourself when you are starting out, and all of this for the price of a coffee (2 EUR here in Ireland).
Once your application gains some traction, you can go full-on with Kubernetes and Traefik or whatever catches your fancy 🙂
Approach
I know there are a few approaches to deploying PHP, from blindly FTP-uploading whatever is in your working directory, to pulling the latest code onto the server with git, to editing files directly on a production server.
Though there isn’t anything particularly wrong with any of these, you are likely to make mistakes, such as not committing your files properly or even uploading files with local credentials to a live server. You will also run into trouble with things like JS builds and Node version mismatches.
Having dealt with a few enterprise deployments, I have learnt that it is best to separate the codebase into the following (a quick sketch of how these map onto real paths follows the list):
- working codebase – this is where you edit the code, using your IDE
- built codebase – a tar.gz of every build
- live codebase – essentially one of those tar.gz builds extracted on a QA or production server
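To make this concrete, here is roughly how the three map onto actual locations in the setup described below (the local project path is just an illustrative example):

```
~/projects/mywebsite/                      # working codebase: what you edit in your IDE
lmc-builds/prod/4/lmc.tar.gz               # built codebase: one tar.gz per CI build, archived on an FTP server
/home/mywebsite/domains/mywebsite.com/     # live codebase: one of those builds extracted on the server
```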
I use Drone CI to build my files, but you could use Bitbucket Pipelines, GitHub Actions or CircleCI. This is what my minimal Drone file (.drone.yml) looks like, and it will be something similar across the other platforms too.
```yaml
kind: pipeline
name: default

steps:
  - name: Build PHP
    image: laradock/workspace:latest-7.4
    commands:
      - composer install --no-dev --optimize-autoloader

  - name: Build JS assets
    image: node:10.16.3
    commands:
      - npm install
      - npm run prod
      - touch /tmp/lmc.tar.gz
      - rm -f /tmp/lmc.tar.gz
      - rm .drone.yml
      - tar -zcvf /tmp/lmc.tar.gz .
      - mkdir -p buildarchive/${DRONE_BRANCH}/${DRONE_BUILD_NUMBER}
      - mv /tmp/lmc.tar.gz buildarchive/${DRONE_BRANCH}/${DRONE_BUILD_NUMBER}/lmc.tar.gz

  - name: Archive assets
    image: cschlosser/drone-ftps
    environment:
      PLUGIN_HOSTNAME: ftp.example.com:21
      PLUGIN_VERIFY: false
      PLUGIN_DEST_DIR: lmc-builds
      PLUGIN_SRC_DIR: /buildarchive
    secrets: [ ftp_username, ftp_password ]
    secure: true
    include:
      - ^*.tar.gz
```
As you can see, it basically installs the Composer dependencies, builds the JS assets, removes the Drone YAML file, packs everything into a tar.gz file and sends it across to an FTP(S) server. You could use SSH, S3 or Backblaze B2 as well, but they tend to be a bit more complex and I am trying to keep this article short.
The idea here is that you get a build file (tar.gz) with every commit, and once you know a commit is right, you can deploy it using off-the-shelf deployment tools. For PHP we have tools like https://deployer.org/, but I would highly recommend learning a general-purpose tool like Ansible: it lets you do much more than just PHP, and you can carry that knowledge across to different languages and environments.
I usually separate the playbooks from the host files, so my Ansible folder structure looks like the following:
```
deployment-playbooks/
deployment-playbooks/deploy-mywebsite.yml
env-configuration/
env-configuration/mywebsite/
env-configuration/mywebsite/hosts.ini
```
The hosts.ini file looks like the one below; it tells Ansible how to connect to the target server:
```ini
mywebsite ansible_connection=ssh ansible_user=nonrootuser ansible_host=mywebsite.com ansible_port=22
```
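Before running any playbooks, it is worth verifying that Ansible can actually reach the host defined above. Assuming your SSH key is already on the server, a quick ad-hoc ping run from the directory containing hosts.ini should print "pong":

```sh
ansible -i hosts.ini mywebsite -m ping
```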
The playbook (deploy-mywebsite.yml) looks like the one below:
```yaml
---
- hosts: mywebsite
  tasks:
    - name: Upload env file
      template:
        src: "files/mywebsite/.env.j2"
        dest: "/home/mywebsite/domains/mywebsite.com/.env"

    - name: "Download assets, version: {{ build_number }}"
      get_url:
        url: "http://remoteftpedurl.com/deployer/lmc-builds/{{ branch }}/{{ build_number }}/lmc.tar.gz"
        force: yes
        dest: "/tmp/assets_{{ build_number }}.tar.gz"
        mode: "0777"
        timeout: 60

    - name: Extract assets
      unarchive:
        remote_src: yes
        src: "/tmp/assets_{{ build_number }}.tar.gz"
        dest: "/home/mywebsite/domains/mywebsite.com/"

    - name: Copy public assets
      copy:
        remote_src: yes
        src: "/home/mywebsite/domains/mywebsite.com/public/"
        dest: "/home/mywebsite/domains/mywebsite.com/public_html"

    - name: Remove index.html
      file:
        path: "/home/mywebsite/domains/mywebsite.com/public_html/index.html"
        state: absent

    - name: Cache config
      shell: "php artisan config:cache"
      args:
        chdir: "/home/mywebsite/domains/mywebsite.com"

    - name: Run migrations
      shell: "php artisan migrate --force"
      args:
        chdir: "/home/mywebsite/domains/mywebsite.com"
```
Everything above is fairly self-explanatory; the only thing that is a bit unusual is copying the contents of ./public into the ./public_html directory, since shared hosting environments serve from that location by default and I felt this was easier than some .htaccess shenanigans.
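For completeness, the .env file uploaded by the first task is rendered from a Jinja2 template on your machine. A minimal, illustrative sketch of what files/mywebsite/.env.j2 could look like for a Laravel app is below; the {{ db_* }} variables are just example Ansible variables, not something defined elsewhere in this post:

```ini
# files/mywebsite/.env.j2 - illustrative Laravel-style template
APP_ENV=production
APP_DEBUG=false
APP_URL=https://mywebsite.com

DB_CONNECTION=mysql
DB_DATABASE={{ db_name }}
DB_USERNAME={{ db_user }}
DB_PASSWORD={{ db_password }}
```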
So now, whenever you have the right commit built, the only thing you need to do is run the command below from the `deployment-playbooks` directory:
```sh
ansible-playbook -i ../env-configuration/mywebsite/hosts.ini deploy-mywebsite.yml -e "build_number=4 branch=prod"
```
When you scale, you might want to move to your own VM or VPS, and Ansible can be used to install entire PHP environments there. Ansible is also very useful when you want to encrypt the passwords and keys of your production servers (see the Vault sketch below). Later, when you have a successful product and a few engineers with some free time to spare, you can change the above process to build Docker images instead of tar.gz files and adapt Ansible to deploy or orchestrate them.
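On the secrets side, Ansible Vault can encrypt individual values so they can live safely alongside your host files; a minimal sketch (the variable name is just an example) looks like this:

```sh
# Encrypt a single value and paste the output into your host or group vars
ansible-vault encrypt_string 'super-secret-password' --name 'db_password'

# Supply the vault password when running the playbook
ansible-playbook -i ../env-configuration/mywebsite/hosts.ini deploy-mywebsite.yml \
  -e "build_number=4 branch=prod" --ask-vault-pass
```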
I have been using the above setup for a few years now. It may not be perfect and there may be a few things I have missed, so any feedback is welcome 🙂