Swiss Card Exchange

This is a student project developed by @BacLuc and @Off3line.
The goal of the course project is to create a cloud-native e-commerce application.
Course page of Advanced Software Engineering @UZH

Project Documentation

A markdown file with the documentation about the organization can be found here

Github Badges: CI, CD, Development setup

SonarCloud Badges: Bugs, Code Smells, Coverage, Vulnerabilities

Architecture

The application consists of three deployment units.

  1. app: A Next.js application, the frontend of the WebShop.
    Next.js supports server-side rendering (SSR), which improves SEO because the web crawlers of search engines can parse the content without executing JavaScript.

  2. supabase: A container deployed to LocalStack with its Terraform module.
    It runs self-hosted Supabase via a docker-compose.yml file; the container defined with the Dockerfile coordinates the start of the docker-compose.yml file.
    Supabase offers a BaaS that lets you define a backend API by just defining a database schema with SQL. Supabase then analyzes this schema and lets you interact with the backend via a client SDK and REST.
    The OpenAPI specification can also be seen here: https://app.swaggerhub.com/apis/LUCIUSBACHMANN/swiss-card-exchange/1

  3. mail-function: a Node.js application that confirms orders to the customers via email. This is a dedicated service because it provides retries without any additional effort, and the email notification of customers must be very reliable. The mail-function only needs to interact with Supabase, so it can easily be moved to a separate service.
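To illustrate the schema-driven API, a hypothetical products table could be queried through Supabase's REST endpoint roughly like the following sketch. The host, port, table name, and key are assumptions for illustration, not values from this project:

```shell
# Query a hypothetical "products" table via the Supabase REST API.
# ANON_KEY, the URL, and the table are placeholders, not project values.
ANON_KEY="<your-anon-jwt>"
curl "http://localhost:8000/rest/v1/products?select=id,name,price&price=lt.100" \
  -H "apikey: $ANON_KEY" \
  -H "Authorization: Bearer $ANON_KEY"
```

Because the schema defines the API, adding a table or column changes the available endpoints without writing any backend code.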

[Architecture diagram]

These deployment units are then deployed to LocalStack. How these units are deployed is described with Terraform.

Because it would be difficult to send emails to real mailboxes and the deployment target is LocalStack, the mail-function sends the emails to MailHog. The deployment of MailHog is defined in the top level docker-compose.yml under the service mail.
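For reference, a MailHog service entry in a docker-compose.yml typically looks like the following sketch. The service name mail comes from the text above; the ports are MailHog's conventional SMTP (1025) and web UI (8025) ports, and the rest is an assumption:

```yaml
# Sketch of a MailHog service entry for docker-compose.yml.
services:
  mail:
    image: mailhog/mailhog
    ports:
      - "1025:1025"  # SMTP endpoint the mail-function sends to
      - "8025:8025"  # web UI for inspecting the captured mails
```

The mail-function then only needs an SMTP host and port pointing at this service instead of a real mail provider.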

Docker

Our setup relies heavily on Docker. If you use a Linux distro, make sure Docker is installed according to this guideline from Docker Inc.: https://docs.docker.com/engine/install/ubuntu/.
Some features used in the Dockerfiles may not be available if your Docker daemon does not use BuildKit to build the containers.
You need BuildKit to build the Docker images. If your Docker CLI does not use BuildKit by default, enable it following the guide here: https://docs.docker.com/engine/reference/commandline/dockerd/#feature-options.
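One documented way to enable BuildKit is via the daemon configuration file (/etc/docker/daemon.json on Linux); alternatively, setting DOCKER_BUILDKIT=1 in the environment of the docker CLI has the same effect per invocation:

```json
{
  "features": {
    "buildkit": true
  }
}
```

After changing daemon.json, restart the Docker daemon for the setting to take effect.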

Local development

For local development, the local development environment with the supabase-cli is recommended. See app

Local deployment

To quickly build and test the docker images, a docker-compose.yml file is in the root directory. Start the deployment with:

docker compose --profile local-run up

Unfortunately, the Supabase containers are not stopped when the supabase-dc service is stopped. To stop them, run:

docker compose -f supabase/app/docker-compose.yml down

Deployment on LocalStack

Setup

To deploy to the LocalStack docker compose service, you need the following tools:

And you need the following entries in your hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows):

127.0.0.1 s3.localhost.localstack.cloud
127.0.0.1 supabase.s3.localhost.localstack.cloud
127.0.0.1 supabase.local
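The entries above can be added idempotently with a small script like this sketch. It defaults to a local sample file so it is safe to try; pass /etc/hosts (with sudo) as the first argument for real use:

```shell
#!/bin/sh
# Idempotently append the LocalStack host entries to a hosts file.
# Defaults to a local sample file; pass /etc/hosts (with sudo) for real use.
HOSTS_FILE="${1:-hosts.sample}"
touch "$HOSTS_FILE"
for host in \
    s3.localhost.localstack.cloud \
    supabase.s3.localhost.localstack.cloud \
    supabase.local; do
  # only append an entry if the exact line is not already present
  grep -qF "127.0.0.1 $host" "$HOSTS_FILE" \
    || printf '127.0.0.1 %s\n' "$host" >> "$HOSTS_FILE"
done
```

Running the script a second time leaves the file unchanged, so it can be re-run safely after updates.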

You also need a LocalStack Pro license to start the Amazon ECS services on LocalStack. You can obtain an educational license, which enables the Pro features, here: Free LocalStack Educational License. Then run

cp .env.example .env

and set the LOCALSTACK_API_KEY variable to your LocalStack API key.

Then download the providers used by terraform: tflocal init

Deploy

  1. Start the LocalStack container: docker compose up -d
  2. Refresh the terraform state: tflocal refresh
  3. Build all images needed: docker compose --profile local-run build
  4. Apply the changes: tflocal apply --auto-approve

Then, supabase should be reachable under http://localhost:3000, and sce-app under http://localhost:8080.

Cleanup

  1. Destroy the resources of terraform: tflocal destroy --auto-approve
  2. Shut down the containers: docker compose -f supabase/app/docker-compose.yml down
  3. Shut down LocalStack: docker compose down

Documentation

To keep the documentation simple and avoid the "too long; didn't read" problem, the services follow best coding practices for the technologies used.

The documentation and the deployment steps above were only tested on Ubuntu 22.04, macOS on Apple Silicon, and Windows 10 with WSL 2. If you use another setup, you may encounter problems with Docker networking or with line endings on Windows.

In the top-level directory there is the docker-compose.yml, which describes the contexts, arguments, and image URIs used to build the services. It also allows building all services with the command docker compose --profile local-run build. The docker-compose.yml also contains the volumes, environment variables, and network configuration to run the services. The ports directive tells the user under which port each service will be available.
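As an illustration of these directives, a service entry in such a docker-compose.yml might look like this sketch. The names, build argument, image URI, and ports here are assumptions for illustration, not copied from the project file:

```yaml
# Sketch of one service entry combining the directives described above.
services:
  app:
    build:
      context: ./app              # build context
      args:
        NODE_ENV: production      # build argument
    image: ghcr.io/example/app:latest  # image URI for the built service
    ports:
      - "8080:3000"               # host port the user reaches the service on
    environment:
      SUPABASE_URL: http://supabase.local
```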

The Dockerfiles app/Dockerfile, supabase/Dockerfile, and mail-function/Dockerfile contain all of their build steps: there is no need to install npm packages or execute a build step before building the image. The Dockerfiles also declare the environment variables that configure the image, their ports, their entrypoints, and the default command.
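A self-contained Dockerfile in this style typically uses a multi-stage build, roughly like the following sketch. Stage names, base images, and paths are illustrative assumptions, not taken from the project:

```dockerfile
# Sketch of a self-contained multi-stage build for a Node.js service.
FROM node:18 AS build
WORKDIR /src
COPY package*.json ./
RUN npm ci              # dependencies are installed inside the image
COPY . .
RUN npm run build       # the build step happens in the image, not on the host

FROM node:18-slim
WORKDIR /app
COPY --from=build /src ./
ENV PORT=3000           # declared environment variable to configure the image
EXPOSE 3000             # declared port
ENTRYPOINT ["npm"]
CMD ["start"]           # default command
```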

The JavaScript-based services have a package.json (app/package.json, mail-function) which contains all the scripts needed to develop and run the applications. In the app service we use TypeScript to document the internal interfaces between the different components.

Terraform

Terraform is a rather special DevOps tool, so the documentation for our Terraform modules is more detailed. We use Terraform to have executable documentation of how the services are deployed to LocalStack, with more possibilities to structure the code than bash scripts. The main.tf file in each module describes the desired state of the deployment unit in a declarative way. When a Terraform module is applied, Terraform checks which parts already exist and which commands it has to execute to get from the current state to the desired state.

This application consists of four Terraform modules.

The top-level module prepares the variables like the jwt_anon_key (the JWT for the anonymous role of Supabase) which are shared between the submodules. These shared variables are generated in locals.tf. As input it receives the versions of the images it should deploy via variables.tf. Then it calls the other submodules in main.tf. After the module is applied, Terraform prints the variables defined in outputs.tf to the console. Terraform allows the use of resources implemented by external modules; these modules and their configuration are defined in provider.tf. The other modules supabase/main.tf, app/main.tf, and mail-function/main.tf deploy their image to LocalStack.
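The wiring described above can be pictured with a sketch of the top-level main.tf. The module paths and jwt_anon_key follow the text; the remaining variable names are assumptions:

```hcl
# Sketch of the top-level main.tf calling the submodules.
module "supabase" {
  source        = "./supabase"
  jwt_anon_key  = local.jwt_anon_key     # shared variable from locals.tf
  image_version = var.supabase_version   # input from variables.tf
}

module "app" {
  source        = "./app"
  jwt_anon_key  = local.jwt_anon_key
  image_version = var.app_version
}

module "mail-function" {
  source        = "./mail-function"
  jwt_anon_key  = local.jwt_anon_key
  image_version = var.mail_function_version
}
```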

Continuous Integration and Deployment

CI and CD are implemented with GitHub Actions. The workflows are defined in .github/workflows.

ci.yml lints all JavaScript services, lints and validates the Terraform code, and checks that the generated types in database.types.ts are in sync with the schema and thus with the API. Then it executes the Cypress component and e2e tests; you find the e2e tests in app/cypress/e2e. At the end, a SonarCloud scan is performed to check coverage, code duplication, and security issues. Videos and screenshots of the test run can be seen as artifacts of the job run at the bottom of the summary.

cd.yml executes the deployment to LocalStack and then runs the e2e tests against the LocalStack deployment to check whether something broke in the production build. For that it needs to build the images with the build-images.yml workflow. The nightly builds are built from the latest commit on the main branch and also provide images for the arm64 CPU architecture.

development-setup.yml also executes production builds of the images, but here they are deployed with docker compose instead of LocalStack, to catch the case where the built images are OK but there is an error in the LocalStack deployment.

If you fork this repository, you need to provide the following secrets for the workflows to run:

  • SONAR_TOKEN: the token of the project on sonarcloud
  • GHCR_USER: The user with which you want to push the images to the ghcr.io container registry. secrets.GITHUB_TOKEN did not work, so we created a personal access token; this is the user for which the token was generated.
  • GHCR_TOKEN: The personal access token to push to the registry.
  • LOCALSTACK_API_KEY
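In a forked workflow, these secrets are consumed roughly like this sketch of a job fragment. The step names and the exact login action are assumptions, not copied from the project's workflow files:

```yaml
# Sketch of workflow steps consuming the secrets listed above.
env:
  SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
  LOCALSTACK_API_KEY: ${{ secrets.LOCALSTACK_API_KEY }}
steps:
  - name: Log in to ghcr.io
    uses: docker/login-action@v3
    with:
      registry: ghcr.io
      username: ${{ secrets.GHCR_USER }}
      password: ${{ secrets.GHCR_TOKEN }}
```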
