CI & CD – An Introduction
Continuous Integration is a development workflow in which multiple developers continuously merge their code changes into a common, shared code repository. This is usually combined with automated code testing. There are several CI/CD tools that help businesses deploy upgrades and maximize their productivity. In this article, I’ll help you understand the importance of GitLab CI deployment to a server, and will show you how you can create a CI/CD pipeline using Cloudways.
The idea of CI is to merge all the changes to the main branch as often as possible so that changes can be tested to work with all the other introduced changes.
Continuous Delivery is the practice of continuous shipping of changes to an environment where those changes can be reviewed in terms of the final product – whether it is delivered to production or staging environment. This enables users – testers or the final audience – to inspect the changes against the project. Changes are delivered regularly, in small batches, so that any issues can be detected early on.
Continuous Deployment is a streamlined development cycle of shipping code changes to production automatically, in small batches, after they pass automated tests.
While Continuous Delivery workflow usually comes with a staging environment, where the changes are tested before being deployed, Continuous Deployment setup means that changes are tested and shipped to the production environment automatically.
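The distinction can be sketched directly in GitLab's pipeline configuration. In the hypothetical .gitlab-ci.yml fragment below (job names and scripts are placeholders), adding when: manual to the production job gives a Continuous Delivery flow, where a human approves the release, while removing it gives Continuous Deployment, where every passing pipeline ships automatically:

```yaml
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - ./run-tests.sh    # placeholder for the project's test suite

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh       # placeholder deployment script
  when: manual          # Continuous Delivery: wait for a human to click "play";
                        # remove this line for Continuous Deployment
```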
This process presumes an extensive setup of automated tests (including unit, integration, and acceptance tests).
Why Automate Application Testing and Deployment?
Continuous Deployment (CD) represents the full automation of the development cycle, where developers can rapidly ship new features and ideas to production.
For a development environment, CD brings the following benefits:
- Every code change is by default thoroughly tested. Frequent deployment of changes means that even the bugs that are not captured by automated testing can be easily tracked to specific commits.
- When it is properly set up, Continuous Deployment with automated testing and atomic deployments makes the codebase and bug tracking more manageable.
- Frequent, atomic deployment of changes means that the feedback cycle can be faster, not only in terms of application logs and monitoring for bugs but also in terms of customer feedback. This especially fits in with the lean startup paradigm – because a business can quickly put new ideas before its users and observe how they are received.
At Cloudways, our mission is to make the developer’s life easier, and to make the complexities of the deployment process simple, so that you can focus on things that make a difference.
GitLab is an open-source, end-to-end DevOps tool that integrates version control with user-definable Continuous Integration and Continuous Deployment pipelines. It is available under the MIT license and can be installed on third-party servers. Gitlab.com is a hosted, SaaS version of GitLab that makes it possible to use GitLab without dealing with the complexities of maintaining a GitLab server on your own.
In this tutorial, I will introduce GitLab, and show how it can be used to automate a DevOps development cycle. I will take a look at the process of setting up a GitLab CI/CD deployment workflow with PHP and Laravel framework.
Stop Wasting Time on Servers
Cloudways handles server management for you so you can focus on creating great apps and keeping your clients happy.
Introduction to GitLab CI/CD
GitLab touts itself as the “complete DevOps platform”. The (hosted) platform offers a range of paid plans, but its free tier offers quite a lot – including unlimited collaborators and private repositories. I will use the free tier in this tutorial.
Being an end-to-end tool, GitLab gives us the ability to manage source code with Git and to connect a code repository with the production (or staging) infrastructure. It offers a streamlined workflow and the infrastructure for testing and deploying the application, all at the click of a button. It can also be set up to test and deploy code automatically, on every git push.
It features a public and private package registry and a container registry – where you can publish and later use software packages and Docker container images.
GitLab provides the ability to define customized pipelines for the DevOps cycle – from preparing to building, testing, configuring, and deploying the software. It also features monitoring tools to gather feedback information about commits – this includes performance metrics, tools for viewing and examining logs, incident management, error tracking, tracing – monitoring the health of different microservices, etc.
Main GitLab Terms
Before we start setting up a DevOps workflow with GitLab, I need to briefly lay out the GitLab terminology.
.gitlab-ci.yml – the main part of the GitLab workflow
.gitlab-ci.yml is a YAML file that is added to all repositories which use the GitLab Continuous Deployment workflow. It is the main configuration file, where I define the stages of the DevOps cycle, the specific jobs, the variables that will be used, the scripts that will be executed, etc.
Some of the keywords used in this file are:
- script – marking a section with the shell script to be executed
- artifacts – listing of resources to be created by a job, uploaded to GitLab
- cache – list of files to be cached between subsequent job runs
- environment – used to specify what environment the respective job will deploy to
- extends – designates entries to inherit from
- image – specifying a Docker image to be used
- include – specifying external YAML files to include
- services – Docker service images – e.g. MySQL service
- stage – used to define different stages of the CI/CD workflow
- trigger – for defining triggers for the creation of downstream pipelines
- variables – for defining job variables
- when – for specifying when the job is run – e.g. on success, on failure, always (regardless of the result of earlier stages), manual (indicating that it needs to be executed manually), etc.
There are more keywords that you can use to define jobs with GitLab, but I will not be using all of them in this tutorial.
Usually, the file will define stages with the keyword stages, and then proceed to define the different jobs that belong to the defined stages.
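To make this concrete, here is a small hypothetical fragment that uses several of the keywords listed above (the image name, paths, and scripts are placeholders):

```yaml
image: php:7.4              # Docker image the jobs run in

stages:
  - test
  - deploy

variables:
  APP_ENV: testing          # job variable available to all jobs

unit_tests:
  stage: test
  cache:
    paths:
      - vendor/             # cache Composer dependencies between runs
  script:
    - composer install
    - vendor/bin/phpunit

deploy_app:
  stage: deploy
  environment:
    name: production        # environment this job deploys to
  when: on_success          # run only if the test stage passed
  script:
    - ./deploy.sh           # placeholder deployment script
```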
Pipelines
Pipelines represent an entire CI/CD process, including multiple jobs run across different stages. Pipelines are defined through the .gitlab-ci.yml files, and they can be of varying complexity, including multi-project pipelines, parent-child pipelines, etc. In this tutorial, we will implement a very basic pipeline.
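As an illustration of the more complex variants, a parent pipeline can spawn other pipelines with the trigger keyword; the file name and project path below are hypothetical:

```yaml
# Parent-child: run a pipeline defined in another file of the same repository
build_child:
  trigger:
    include: ci/child-pipeline.yml

# Multi-project: start a pipeline in a different project
notify_downstream:
  trigger: my-group/deployment-project
```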
Jobs
Jobs are a fundamental part of every GitLab pipeline, and they can be defined to run in different stages, as we can see in the image above.
Jobs are run by runners and, at a minimum, have a script element defined:
jobA:
  script: "…"
  stage: test

jobB:
  script: "…"
  stage: deploy
Runners
GitLab documentation defines runners as “lightweight, highly-scalable agents that pick up a CI job through the coordinator API of GitLab CI/CD, run the job, and send the result back to the GitLab instance”.
On its own, GitLab Runner is an open-source application written in Go. Apart from GitLab’s infrastructure, it can also run on development machines or on server infrastructure, inside Docker containers, or deployed on Kubernetes clusters.
Runner agents are responsible for processing jobs, and they can process them on the same machine where the GitLab Runner is installed, or in containers and on remote infrastructure.
Environment Variables
Variables in GitLab are named values that affect how specific jobs are run. Environment variables are part of the environment in which a job is executed.
GitLab CI/CD predefines a set of variables that we can use in pipelines by their names. At the same time, I can define my own custom variables. These can be of Variable or File type, and they can be defined through the UI, through the API, or in the .gitlab-ci.yml file.
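For example, a job can mix predefined variables (such as CI_COMMIT_SHORT_SHA) with custom ones defined in the file; the names in the variables block below are my own illustrative choices:

```yaml
variables:
  DEPLOY_TARGET: production       # custom variable, visible to all jobs

print_context:
  script:
    - echo "Commit $CI_COMMIT_SHORT_SHA on branch $CI_COMMIT_REF_NAME"
    - echo "Deploying to $DEPLOY_TARGET"
```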
Without further ado, I will jump into the project – setting up Continuous Deployment to Cloudways server with GitLab CI/CD.
Complete Process of CI/CD with Gitlab for PHP app on Cloudways
In this section, I will set up the automated Continuous Deployment workflow for a custom PHP app to one of the Cloudways servers.
For this, I will use a Laravel app because of the framework’s popularity in dev circles.
Creating the Application on Cloudways Platform
The first thing I am going to do is create an application on Cloudways. I will start by launching a new managed server.
After I have selected all the particulars of the server instance and launched it – which may take a minute or two – I will proceed to install the web application.
Cloudways makes it easy to launch an entire server environment with a preinstalled and pre-optimized software stack that contains all the elements I need to run a web application. The Cloudways web hosting stack includes Apache, NGINX, MySQL, and PHP.
Once the application is launched, it will be available on the provisional Cloudways subdomain until I assign a top-level domain name to it.
I will also be able to add the public SSH key to it, to enable passwordless authentication.
Under Application Settings, I will want to enable SSH access to the application and disable Varnish for the duration of the initial setup. Varnish is a full-page caching system that speeds up the web app frontend, but with it enabled I would not be able to see changes in the application in real time, so it is something I will re-enable once I have deployed the final application version.
Once I have added the SSH key and checked these settings, I will be able to mirror the application code from the server to my local machine (if I chose the Laravel application) so I can work on it.
If I installed the plain PHP application/stack on the server, or if I wish to work with the newest Laravel version, I will skip this step and do the initial app deployment via rsync or scp from the local machine, or via Git from the GitLab repository.
If I choose a Laravel application, the application directory structure on the server will look something like this:
– public_html is the directory where the application code will live, with public_html/public/index.php as the entry script.
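For orientation, a simplified sketch of a typical application directory layout (the names here are illustrative and vary between servers) looks something like this:

```text
applications/<app_name>/
├── conf/            # server configuration
├── logs/            # web server logs
├── private_html/    # not served publicly
├── public_html/     # webroot, where the application code lives
│   └── public/
│       └── index.php   # Laravel entry script
└── tmp/
```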
Creating & pushing to a repository on GitLab.com
The next step will be to initiate a repository on GitLab. Presuming I have signed up on the website, I will simply create a project, which will give me an empty repository:
I can also choose to import an existing repository from Github or Bitbucket.
Next, I will push the application to the repo. I assume that I already have a Laravel application on the local machine – either by downloading it from the public_html of the application or by initiating a Laravel application locally with laravel new.
The application should already have .gitignore, so I will initiate the Git repository, add the GitLab remote address, commit and push the app (this is presuming that I have already added the public SSH key to the GitLab profile):
git init
git remote add origin git@gitlab.com:username/repo-name.git
git add .
git commit -m "Initial commit"
git push -u origin master
– naturally, I will replace the username and repo-name with the actual details of my GitLab repository. Having done this, the Laravel application will now be visible on GitLab.
Now, if I just want to be able to pull the code changes manually from GitLab, that is a fairly simple setup, which we explained here. But in this tutorial, I will set up an automated testing & deployment pipeline which will test my code and push it to the server automatically on every commit.
Creating Tests
The next thing I will do is create some tests. The default Laravel scaffolded application already comes with default tests in the tests folder of the application:
If I open tests/Feature/ExampleTest.php, I will see something like this:
<?php

namespace Tests\Feature;

use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class ExampleTest extends TestCase
{
    /**
     * A basic test example.
     *
     * @return void
     */
    public function testBasicTest()
    {
        $response = $this->get('/');
        $response->assertStatus(200);
    }
}
The testBasicTest function simply asserts that the homepage returns a 200 response. If I run phpunit now (or vendor/bin/phpunit, depending on the PHPUnit installation), I will see two tests passing (a Feature and a Unit test):
Now, I will want to expand these tests and set up some real testing. I will add two more test cases to the tests/Feature/ExampleTest.php:
public function testIndexView()
{
    $view = $this->view('index');
    $view->assertSee('Homepage');
}

public function testAboutView()
{
    $view = $this->view('about');
    $view->assertSee('About');
}
If I save this file and try running the tests again, I will see the following:
I can see that of four tests, two have failed. And that is what I want, for now.
I will now add a Dockerfile to the repository. The Dockerfile defines the container in which GitLab will run the tests. The base image for the Dockerfile will be the php:7.4 image:
# Set the base image for subsequent instructions
FROM php:7.4
# Updating packages
RUN apt-get update

# Installing dependencies
RUN apt-get install -qq apt-utils git curl libzip-dev libjpeg-dev libpng-dev libfreetype6-dev libbz2-dev

# Clearing out the local repository
RUN apt-get clean

# Installing extensions
RUN docker-php-ext-install pdo_mysql zip

# Installing Composer
RUN curl --silent --show-error https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# Installing Laravel Envoy
RUN composer global require "laravel/envoy=~1.0"
After the Dockerfile is saved in the app root folder, I will log in to the GitLab Docker registry and build & push the image:
docker login registry.gitlab.com
docker build -t registry.gitlab.com/username/repo-name .
docker push registry.gitlab.com/username/repo-name
I will also add the .gitlab-ci.yml file, in which I will add the address of the image:
image: registry.gitlab.com/username/repo-name:latest

stages:
  - test

test_basic:
  stage: test
  script:
    - cp .env.example .env
    - composer install
    - php artisan key:generate
    - vendor/bin/phpunit
To start, I have defined only one stage – test, and one job belonging to that stage: test_basic.
I will now commit the changes and push them to GitLab:
git add .
git commit -m "added CI & Dockerfile"
git push --set-upstream origin master
After the code is pushed, I will be able to observe GitLab at work. If I now go to the repository page on gitlab.com and navigate to CI / CD > Pipelines, I will see the job running:
and after a couple of minutes I will see the failed job:
I can look into the details:
By clicking through the pipeline/job buttons, I will eventually be presented with the command-line output from the container.
The two test cases I added to the feature tests presume certain text being present on the index and about pages. assertSee() asserts that the specified string is present in the response. It is invoked on an instance of Illuminate\Testing\TestView.
For the tests to pass, I will add the appropriate views in resources/views (I need to create index.blade.php and about.blade.php):
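The views themselves only need to contain the strings the tests look for; a minimal, hypothetical sketch of the two files could be:

```html
<!-- resources/views/index.blade.php -->
<!DOCTYPE html>
<html>
  <body>
    <h1>Homepage</h1>
  </body>
</html>

<!-- resources/views/about.blade.php -->
<!DOCTYPE html>
<html>
  <body>
    <h1>About</h1>
  </body>
</html>
```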
In routes/web.php I will replace existing routes with the following:
Route::get('/', function () {
    return view('index');
});

Route::get('/about', function () {
    return view('about');
});
– and now the tests should pass.
Setting up deployment
The first thing to do will be to create the SSH keys on the Cloudways server and to add them to GitLab. Due to the constraints on application-level users, I will log into the server as the master user and create an SSH key pair with ssh-keygen. This will produce the id_rsa and id_rsa.pub files in the master user’s home directory, under .ssh.
I will then copy id_rsa (private key) file and add it as a variable in the GitLab project, under Settings > CI / CD:
For this tutorial, I will name it ID_RSA. Now I can use the private key in the jobs.
I will also copy the public key – id_rsa.pub – and add it to GitLab (Project > Settings > Repository) as a Deploy Key.
Next, copy id_rsa.pub to the authorized_keys on the Cloudways server:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
This way, I will be able to pass the private key into the container environment, which means that the GitLab jobs can be authorized against the Cloudways server.
Next, I will clone the GitLab repo anywhere on the server. This adds gitlab.com to the server’s known_hosts file – the deployment might fail otherwise.
Now, when I deploy to the Cloudways server, I need to remember that Cloudways supports two levels of users – the master user and the application-level user. Neither of them has root privileges, but the master user has some permissions that I need for deployments. For example, the application-level user is not able to create the SSH key pair which I will use, because that user cannot access the files in the .ssh folder.
The second thing to pay attention to is the permissions of different directories. The public_html application directory cannot be deleted, so the strategy in this deployment setup will be to keep different releases in the private_html application directory and then to link to the newest one from the webroot.
This will make instant switching between code versions possible – there will be no downtime (copying or even manipulating Git commits/versions can still take a certain time to finish, so changing the webroot symbolic link target is perhaps the fastest way, both for introducing new changes and for reverting to previous versions).
This strategy means that I will need the webserver root folder to be a symbolic link that is (re)created on every deployment.
For this reason, I will empty the public_html directory, and inside it create a webroot symbolic link targeting the most recent release. Releases will be stored in the private_html directory, and if I need to revert changes, I can instantly link back to the previous code version. This way, the application should have no downtime even if I push code changes that introduce errors in the application. I can roll back all the changes to a precise point at a click of a button.
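The mechanics of this switch can be sketched with plain shell commands. The snippet below simulates the strategy locally with temporary, hypothetical directories; on the real server, the same ln -nfs call is what the deployment task will run:

```shell
set -e
app=$(mktemp -d)   # stands in for the application directory on the server
mkdir -p "$app/private_html/20240101000000" \
         "$app/private_html/20240102000000" \
         "$app/public_html"

# Point the webroot symlink at the newest release; -n replaces an existing
# symlink instead of descending into it, -f overwrites, -s makes it symbolic
ln -nfs "$app/private_html/20240102000000" "$app/public_html/webroot"
echo "current release: $(readlink "$app/public_html/webroot")"

# Rolling back is just re-pointing the same link at a previous release
ln -nfs "$app/private_html/20240101000000" "$app/public_html/webroot"
echo "after rollback:  $(readlink "$app/public_html/webroot")"
```

Because the link swap is effectively instantaneous, requests hitting the webroot never see a half-copied release.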
Laravel Envoy
Envoy is Laravel’s task runner.
In this configuration, when GitLab runs the pipeline and launches the container that I defined with the Dockerfile, it will run a deployment after the tests have successfully passed. This means that Envoy, running inside the CI container (a machine separate from the web server), will:
- log in to the Cloudways server,
- set up needed directories
- clone the repository,
- run composer to install all the dependencies and
- update symbolic links
In the meantime, I may also need to set it up to change certain directory/file ownership and permissions.
To configure Envoy to do all this, I will create Envoy.blade.php in the root of the application:
@servers(['web' => '<master_server_username>@<server_ip>'])

@setup
$root_app_dir = '/home/<application_directory_path>';
$releases_dir = $root_app_dir . '/' . 'private_html';
$release = date('YmdHis');
$current_release_dir = $releases_dir . '/' . $release;
$repository = 'git@gitlab.com:<gitlab username>/<repo name>.git';
$webroot = 'webroot';
@endsetup

@story('deploy')
clone_repository
run_composer
update_symlinks
@endstory

@task('clone_repository')
echo 'Cloning repository'
[ -d {{ $releases_dir }} ] || mkdir {{ $releases_dir }}
git clone --depth 1 {{ $repository }} {{ $current_release_dir }}
cd {{ $current_release_dir }}
git reset --hard {{ $commit }}
@endtask

@task('run_composer')
echo "Deploying ({{ $release }})"
cd {{ $current_release_dir }}
composer install --prefer-dist --no-scripts -q -o
@endtask

@task('update_symlinks')
echo "Linking storage directory"
rm -rf {{ $current_release_dir }}/storage
[ -d {{ $releases_dir }}/storage ] || mkdir {{ $releases_dir }}/storage
[ -d {{ $current_release_dir }}/bootstrap ] || mkdir {{ $current_release_dir }}/bootstrap
ln -nfs {{ $releases_dir }}/storage {{ $current_release_dir }}/storage
chown -R $USER:www-data {{ $current_release_dir }}/storage {{ $current_release_dir }}/bootstrap
echo 'Linking .env:'
ln -nfs {{ $releases_dir }}/.env {{ $current_release_dir }}/.env
echo 'Linking current release:'
ln -nfs {{ $current_release_dir }} {{ $root_app_dir }}/public_html/{{ $webroot }}
@endtask
Here, I will need to update the parts of the @servers and @setup sections that contain actual application data like usernames, repository names, exact paths, surrounded with < and >.
Note that @servers contains the Cloudways server details, @setup sets up the environment by defining some variables, @story is a collection of tasks, which is followed by definitions of the actual tasks.
The update_symlinks task contains a lot of the little steps which may make or break the deployment, and you may need to update it to change webroot/public permissions or the ownership of directories on your system. I will link to the storage directory, which will live under the private_html folder. The bootstrap folder will need to be created, with proper write access, for the application to work.
Before deployment, I will also put the initial .env file into private_html and link the app directory to it.
Because I cannot replace the public_html itself, I am creating a symlink “webroot” under public_html pointing to the current release.
For this to work, later, in the Cloudways web administration panel, under Application Settings > General, I will add the string webroot/public to the WEBROOT setting – webroot is the symlink to the current release, and public is the actual folder name where the Laravel entry script – index.php – lives:
After the Envoy configuration, I will also update the .gitlab-ci.yml file, and add the deployment job:
image: registry.gitlab.com/<username>/<repo-name>:latest

stages:
  - test
  - deploy

test_basic:
  stage: test
  script:
    - cp .env.example .env
    - composer install
    - php artisan key:generate
    - vendor/bin/phpunit

deploy_cloudways:
  stage: deploy
  script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$ID_RSA")
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - ~/.composer/vendor/bin/envoy run deploy
  environment:
    name: production
    url: <our_app_url>
  only:
    - master
I am also adding the ID_RSA variable to the environment.
Once all of this is done, the application directory structure, after some deployments, should look something like this:
The numbered directories under private_html represent different releases.
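Old releases accumulate under private_html over time. Since the directory names are timestamps, they sort chronologically, and a small hypothetical cleanup script (not part of the pipeline above) can prune everything but the newest few:

```shell
set -e
releases=$(mktemp -d)   # stands in for private_html on the server
for d in 20240101000000 20240102000000 20240103000000 20240104000000 20240105000000; do
  mkdir "$releases/$d"
done

# Sorted timestamp names: everything except the last three is an old release
ls -1 "$releases" | sort | head -n -3 | while read -r old; do
  rm -rf "${releases:?}/$old"
done

ls -1 "$releases" | sort   # only the three newest releases remain
```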
If all this has been done correctly, I can commit and push the code changes to GitLab.
I will be able to observe the pipeline running, now with two stages:
– and after the jobs are finished, if I click through to the pipeline, I should see that it ran successfully:
If I now visit the application’s public URL, I should see the live version:
In case there are any issues, I can go back to the pipeline page and click the Deployment Job button to see the terminal output of the deployment, and to polish any details that may need fixing.
Conclusion
In this tutorial, I took a close look at setting up a Continuous Deployment pipeline to a Cloudways server. I introduced all the main principles of CI/CD, terms used in working with GitLab CI/CD, and then went on to set up a small but functional test-deploy pipeline with Laravel.
This is by no means an exhaustive tutorial on the subject, but I am confident that it is a solid starting point for setting up more comprehensive workflow scenarios. If you are planning to use the GitLab CI Deploy feature then the above-explained example will help you get started with it.
At Cloudways, we strive every day to make developers’ lives simpler and hope we have helped do that with this tutorial. Let us know what you think!
Tonino Jankov
Tonino is an entrepreneur, OSS enthusiast and technical writer. He has over a decade of experience in software development and server management. When he isn't reading error logs, he enjoys reading comics or exploring new landscapes (usually on two wheels).