Hi!

When it comes to deployment you can proceed the old way, manually handling every aspect of your servers, or you can switch to the container approach. With the container approach your app runs in a variable number of containers, each one (a replica) running the same image built from your codebase. If your user base suddenly grows, it’s enough to run more replicas. Images are built and run by Docker, while Kubernetes is an orchestrator that simplifies the operations of running and maintaining containers.

Disclaimer: I am not an expert, just a self-taught student of Computer Engineering so if you spot a mistake please leave a comment. Thank you!

Docker and Kubernetes

Docker is virtualization software that runs containers. Unlike proper virtual machines, containers do not bundle a complete operating system: all containers share the host OS kernel while being isolated from each other. Each container ships with some software along with all the dependencies, tools and libraries it needs.
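To make the kernel-sharing point concrete, here is a tiny check you can run once Docker is installed (the alpine image is just a small, convenient example):

 $ uname -r                          # kernel release on the host
 $ docker run --rm alpine uname -r   # the same release, printed from inside a container

Both commands print the same kernel release, because the container reuses the host kernel instead of booting its own.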

Containers run images that are created in a declarative way using a Dockerfile. Containers are reliable and immutable: they behave exactly the same way regardless of the environment in which they run. You can test an image in your local environment and expect it to work the same in production. The only way to modify a container is by replacing it with a new one: even though it can be done, containers should never be modified at runtime. In case of an issue, rolling back only requires deploying the old working image.

Kubernetes is an orchestrator that automates and handles the processes of deploying, scaling and keeping containers up. It spawns and kills containers according to the configuration specified and the traffic level. If a container fails, Kubernetes replaces it with a new one.

A full dive into Docker and Kubernetes is outside the scope of this tutorial. I suggest taking a look at this book: Kubernetes Up & Running.
Google offers Kubernetes Engine, a managed environment for deploying containers to the cloud. In this tutorial we will set up Continuous Integration using GitLab to host our code and to run a pipeline that submits the code to Google Cloud Build. Google Cloud Build will then build a proper Docker image and upload it to the Google Container Registry, a sort of repository for Docker images. The last step of this first part is to deploy to Google Kubernetes Engine.

It may sound complicated: it’s not.

What you’ll need

  • A GitLab account, which allows you to host unlimited private git repositories. Moreover GitLab has CI/CD tools out of the box with generous limits. We will come back to this topic later on.
  • A Google Cloud account: new users have access to $300 of credits that can be spent within a year.
  • A Linux terminal, as most of the operations we will go through are easily done with a terminal and would be way more complicated, or even impossible, via the web interface.

Step 1: .NET Core setup

If you do not plan to deploy a .NET Core app, you are free to jump to part 2. Otherwise you need to install the .NET Core SDK. You can either follow the steps below or the ones from the official Microsoft site. First add the Microsoft key and repository:

 $ wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.asc.gpg 
 $ sudo mv microsoft.asc.gpg /etc/apt/trusted.gpg.d/
 $ wget -q https://packages.microsoft.com/config/debian/9/prod.list
 $ sudo mv prod.list /etc/apt/sources.list.d/microsoft-prod.list
 $ sudo chown root:root /etc/apt/trusted.gpg.d/microsoft.asc.gpg
 $ sudo chown root:root /etc/apt/sources.list.d/microsoft-prod.list
Then install the .NET Core SDK:
 $ sudo apt-get update
 $ sudo apt-get install dotnet-sdk-2.1
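You can quickly confirm the SDK installed correctly; with the dotnet-sdk-2.1 package it should print a 2.1.x version:

 $ dotnet --version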
Finally you can create and run the dotnet app:

 $ dotnet new web -n gitlab-google-cloud-demo
 $ cd gitlab-google-cloud-demo && dotnet run
If you visit localhost:5000 you will see a Hello World! message. We’re now ready to proceed to step 2.

Dotnet app running on localhost:5000
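You can also hit the app from a second terminal while dotnet run is still active (assuming curl is installed):

 $ curl http://localhost:5000
 Hello World!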

Step 2: Dockerfile

Installing Docker depends on your distro. You can find a detailed guide here.

The Dockerfile of our app is made up of 4 steps:

  1. fetch a build container that will compile and build the .NET Core app,
  2. copy and restore the .csproj file to download all the needed references,
  3. copy all the remaining files and build the app,
  4. build the runtime Docker image by copying the DLL produced by the build container and defining it as the entrypoint of the container.
    # Step 1: fetch a building container
    FROM microsoft/dotnet:sdk AS build-env
    WORKDIR /app
    
    # Step 2: copy and restore the project 
    COPY *.csproj ./
    RUN dotnet restore
    
    # Step 3: copy all the remaining files and build
    COPY . ./
    RUN dotnet publish -c Release -o out
    
    # Step 4: build runtime Docker image
    FROM microsoft/dotnet:aspnetcore-runtime
    WORKDIR /app
    COPY --from=build-env /app/out .
    ENTRYPOINT ["dotnet", "gitlab-google-cloud-demo.dll"]
We can now try to build and run the container locally:

 $ docker build -t gitlab-google-cloud-demo:1.0 .
 $ docker run -d -p 8080:80 gitlab-google-cloud-demo:1.0

The last command tells Docker to run a new container in the background (-d) with the image we just created and to map port 80 inside the container (the one on which the app is listening) to port 8080 on the host. We can check that the container is up and see its port mapping by typing

 $ docker ps
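docker ps should list the container with a port mapping along the lines of 0.0.0.0:8080->80/tcp. You can then reach the containerized app on the host port (again assuming curl is available):

 $ curl http://localhost:8080
 Hello World!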

Step 3: Creation of a GitLab Pipeline

The next step involves the configuration of a GitLab pipeline through a .gitlab-ci.yml configuration file. Its aim is to define what GitLab has to do when new code is pushed to the remote repository. GitLab pipelines can implement many recurring operations like testing and deploying. They run on Runners: you can either specify a Runner in your project’s settings or rely on the shared Runners provided for free by GitLab and hosted on Google Cloud Platform (read more here). Bear in mind that they’re limited to 2000 CI minutes per month for private projects.

The pipeline may have different stages (e.g. build, test, deploy, etc.). This example only defines build. A stage accepts several properties, but two are particularly important:

  • image declares the image the pipeline must run on; google/cloud-sdk is provided by Google and comes with the gcloud CLI bundled.
  • script is the series of steps that must be carried out:
    1. the environment variable $GCP_SERVICE_KEY is written to .gcp_credentials.json,
    2. gcloud authenticates using the credentials file and sets $GCP_PROJECT_ID as the current project,
    3. a new cloudbuild.yaml is generated. This file defines how Google Cloud Build should build the image of the app,
    4. cloudbuild.yaml is submitted to Cloud Build along with all the files in the current directory (don’t miss the . at the end of the command).
      image: alpine:latest
      
      stages:
        - build
      
      build:
        stage: build
        image: google/cloud-sdk
        script:
          - echo $GCP_SERVICE_KEY > .gcp_credentials.json
          - gcloud auth activate-service-account --key-file=.gcp_credentials.json
          - gcloud config set project $GCP_PROJECT_ID
          - |
              cat << EOF > cloudbuild.yaml
              steps:
              - name: 'gcr.io/cloud-builders/docker'
                args: ['build', '-t', 'gcr.io/$GCP_PROJECT_ID/gitlab-cloud-build-demo:latest', '.']
              images: ['gcr.io/$GCP_PROJECT_ID/gitlab-cloud-build-demo:latest']
              EOF
          - gcloud builds submit --config cloudbuild.yaml .
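If you want to debug the Cloud Build submission without going through GitLab every time, the same commands from the script section can be run from your own machine, assuming the gcloud SDK is installed locally and you have the service-account key from step 4 saved to a file (the file name and project id below are just placeholders):

 $ gcloud auth activate-service-account --key-file=gcp_credentials.json
 $ gcloud config set project <your-project-id>
 $ gcloud builds submit --config cloudbuild.yaml .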

We can now push our code to a repository hosted on GitLab.

 $ git init
 $ git add .
 $ git commit -m "Initial commit"
 $ git remote add origin "https://gitlab.com/my-fancy-nickname/my-repo"
 $ git push -u origin master
On the project dashboard a warning will appear: the pipeline failed because we didn’t specify some environment variables yet.

Step 4: Configuration of Google Cloud Build

Now create a new Google Cloud project. Go to Google Cloud Console and follow the instructions.

Google Cloud Console project creation dialog

Enable Cloud Build.
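If you prefer the terminal over the console, the corresponding APIs can also be enabled with gcloud (Container Registry is included here since the built image is pushed there):

 $ gcloud services enable cloudbuild.googleapis.com
 $ gcloud services enable containerregistry.googleapis.com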

Executing the pipeline on GitLab requires a service account with the proper permissions. Go to IAM & Admin > Service Accounts, create a new service account and grant it the following roles:

  • Cloud Build Service Account
  • Storage Admin
  • Viewer
Service account permissions

In the last step, create a new JSON credentials key. Download it and store it safely; we will copy its content into GitLab later.
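For terminal lovers, here is a rough gcloud equivalent of the console steps above; the service account name gitlab-ci and the key file name gitlab-ci-key.json are just placeholders, <project-id> must be replaced with your actual project id, and the role ids are worth double-checking against the names shown in the console:

 # create the service account
 $ gcloud iam service-accounts create gitlab-ci --display-name "GitLab CI"

 # grant the three roles listed above
 $ gcloud projects add-iam-policy-binding <project-id> \
     --member "serviceAccount:gitlab-ci@<project-id>.iam.gserviceaccount.com" \
     --role roles/cloudbuild.builds.builder
 $ gcloud projects add-iam-policy-binding <project-id> \
     --member "serviceAccount:gitlab-ci@<project-id>.iam.gserviceaccount.com" \
     --role roles/storage.admin
 $ gcloud projects add-iam-policy-binding <project-id> \
     --member "serviceAccount:gitlab-ci@<project-id>.iam.gserviceaccount.com" \
     --role roles/viewer

 # create and download a JSON key
 $ gcloud iam service-accounts keys create gitlab-ci-key.json \
     --iam-account gitlab-ci@<project-id>.iam.gserviceaccount.com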


Now let’s move to the GitLab project’s Settings > CI/CD > Variables and create two new variables:

  • GCP_PROJECT_ID must contain the id of the project on Google Cloud Console,
  • GCP_SERVICE_KEY must be set to the content of the credentials file.
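The value of GCP_SERVICE_KEY is the raw JSON, so you can simply print the downloaded key file and paste its whole content into the variable (the file name follows the earlier sketch):

 $ cat gitlab-ci-key.json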

Let’s now start the pipeline again. Go to CI/CD > Pipelines and click on Run pipeline. From the pipeline view you can see the different jobs running (only one in this case). Clicking on it shows the logs of the job.

Running pipeline
Pipeline logs

If everything goes well the job should end in a few minutes. The newly created image is now stored in Google Container Registry, ready for the next part of this tutorial (you can verify this with gcloud, as shown below). This first part just scratched the surface of what can be done by integrating GitLab and Google Cloud around containers.
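As a final check, gcloud can list what ended up in the registry; the image name matches the one used in cloudbuild.yaml:

 $ gcloud container images list --repository gcr.io/<project-id>
 $ gcloud container images list-tags gcr.io/<project-id>/gitlab-cloud-build-demo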

Stay tuned!
