
Build a Modern REST API with Go – Part 4

This article is the fourth in a series covering all aspects of implementing a modern REST API microservice, step by step:
- Defining a SQL First Data Model with SQLC
- Implementing a REST API with Gin
- Configuring with Viper
- Building and Running in a Container
- Containerized Testing
All the code for this series is available here: https://github.com/bquenin/modern-go-rest-api-tutorial
To better understand what we need to build, we need to know which execution environments we are targeting:
- production environment: We want to run our containerized applications in an environment that is as close to the production environment as possible,
- development environment: We also want a fast development cycle where we can run, stop and restart our applications without containerizing them. In general, this is useful when developing a new feature, implementing tests, debugging a problem, profiling, etc.
When developing a REST API microservice, you are responsible for producing a container image for your application. Once the image is built, you will probably need to push it to the container registry of your choice so that the application can be deployed to your production environment.
We will not cover the entire deployment process for containerized applications in this tutorial, but we will cover the following topics:
- building a production container image,
- running this image in a stack that mimics a production environment using docker-compose.
Why Footprint Matters
Having a small footprint is important in a containerized world. Indeed, your image gets built, compressed, pushed to a registry, and pulled from that registry an incredible number of times. Not only will a larger image take longer to go through all these steps, but it will also generate a lot of cost, as cloud providers charge for storage and network usage!
That’s why small-footprint OSes like Alpine are so widely used. Alpine is a very small Linux distribution designed to run containerized applications, and it strives for the smallest possible footprint: its compressed image is a little over 2 MB.
Go is an ideal choice for building containerized applications: since Go binaries are statically linked, they are very easy to package into a single container. You can even build an image containing just the Go binary (FROM scratch) and it will still work!
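As an illustration, a scratch-based Dockerfile can be as simple as this sketch (the binary name api is a placeholder assumption):

```dockerfile
# A scratch-based image contains nothing but the binary itself.
# Assumes "api" was built with CGO_ENABLED=0, so it has no libc dependency.
FROM scratch
COPY api /api
ENTRYPOINT ["/api"]
```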
Here, however, we are using Alpine as the base image, because having a shell and other basic commands available is very useful when you need to attach to a running container and debug it.
Building the Container Image
There are many ways to create an OCI-compliant container image, but in this tutorial we will be using Docker. To keep our container image small, we’ll use a multi-stage build process:
- Building the Go binary: We use the official Go Docker image. To enable caching, we first download the Go modules. This creates a separate layer that can be reused across builds, greatly reducing build time. Then we copy the source code and build the Go binary.
- Building the production image: We use the official Alpine image and simply copy in the Go binary we just built, using the --from flag to reference the previous build stage (see the Dockerfile sketch just below).
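Here is a sketch of such a Dockerfile (the image versions and binary path are illustrative assumptions; check the repository for the exact file):

```dockerfile
# Build stage: official Go image, pinned to a specific version (tag is an assumption).
FROM golang:1.19-alpine AS builder
WORKDIR /app

# Download the Go modules first so this layer is cached across builds.
COPY go.mod go.sum ./
RUN go mod download

# Copy the sources and build a statically linked binary.
COPY . .
RUN CGO_ENABLED=0 go build -o /api .

# Production stage: minimal Alpine image, also pinned.
FROM alpine:3.16
COPY --from=builder /api /api
ENTRYPOINT ["/api"]
```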
The resulting image is minimal, as it only contains the Alpine OS and the Go binary. No build artifacts (source code, imported packages, etc.) are present in the final image.
Note that we are pinning the images to specific versions. Indeed, you want to be as precise as possible because Docker image tags are aliases and can point to different images over time. Pinning specific versions makes your builds reproducible, which means that if a problem arises with a given build, you can reproduce it with confidence.
Defining Our Service Stack
Now that we have our container, we want to run it in an environment that is as close to the production environment as possible. To achieve this, we are going to use docker-compose. Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
Our stack only needs two services: the Postgres database and our microservice. We can use Compose to describe this stack as follows:
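A sketch of what this Compose file could look like (image tags, environment variable names, and the default password are assumptions, not the repository’s exact file):

```yaml
# docker-compose.yml (sketch)
services:
  app:
    build: .                        # build the image from the local Dockerfile
    environment:
      POSTGRES_HOSTNAME: postgres   # the service DNS name inside the stack
      POSTGRES_PASSWORD: postgres   # convenience default only
    ports:
      - "8080:8080"                 # expose the API to the host
    depends_on:
      - postgres

  postgres:
    image: postgres:14-alpine       # pinned official image (tag is an assumption)
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      # Postgres convention: SQL files in this folder run on first initialization.
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
```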
app
This is our application microservice.
It contains a build directive which, when we initialize the stack, tells Compose to build the container image. If you have modified the source code, a new image will be built to reflect those changes.
As mentioned in the previous article, we configure our application using environment variables to specify the Postgres database hostname and password. Note that a service’s DNS name is the service name defined in the stack. In this case, Postgres can be reached at the postgres DNS name.
By default, containers do not expose their ports to the local host, so we need to specify a port mapping to be able to access the service on port 8080. You can learn more about Compose networking here.
postgres
This is our Postgres database service. We are using the official image from Docker Hub.
For this second service, we also use an environment variable to specify the password. In a production deployment, this environment variable would be provided by a configuration manager or by other means such as a Kubernetes secret.
To initialize our database schema, we rely on the Postgres container convention and mount our schema.sql file into the /docker-entrypoint-initdb.d folder. You can read more about this mechanism here.
Lastly, we use a named volume to store the Postgres data. In this tutorial, we make sure to erase the volume between restarts, but you could keep it between restarts to reuse existing data.
A note about security
In this tutorial, we provide the database password using environment variables in the Docker Compose file. The default database password is checked into the source code repository for convenience. However, keep in mind that you should never check sensitive information into a source code repository.
In our case, the database password environment variable can be set by the deployment pipeline using variable substitution in the Compose file. A more secure approach would be to rely on a secret management tool such as HashiCorp Vault, combined with Kubernetes secrets or Docker secrets.
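Variable substitution in the Compose file could look like this (a sketch; the variable name is an assumption):

```yaml
services:
  postgres:
    environment:
      # Compose substitutes this from the shell environment or an .env file,
      # so the actual password never lives in the repository.
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?database password must be set}
```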
Now let’s take a look at the development environment. The only fundamental difference is that we don’t want our microservice to run in the stack. We want to start our service through our favorite IDE or by any other means.
We still need the Postgres database to be up and reachable. However, the production postgres service does not expose any ports, so it is not possible to connect to it from outside the stack. We need to expose that port in development mode.
Fortunately, we can express those differences by simply creating another stack file:
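A sketch of such an override file (the file name and the exact syntax are assumptions):

```yaml
# docker-compose.dev.yml (sketch): overrides layered on top of the production stack
services:
  app:
    deploy:
      replicas: 0      # do not run the microservice inside the stack
  postgres:
    ports:
      - "5432:5432"    # expose Postgres to the host for local development
```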
By combining this file with the production stack file, we can start our stack in development mode: the number of replicas for the microservice is set to 0, so no instance runs in the stack, and the postgres service exposes its port to the host.
To connect to the Postgres instance in the stack, we just need to start our microservice and configure it using environment variables (Postgres is accessible on localhost:5432).
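For example, starting the service from a shell might look like this (the variable names are assumptions based on the earlier configuration article):

```sh
# Run the microservice on the host, pointing it at the Postgres container.
POSTGRES_HOSTNAME=localhost POSTGRES_PASSWORD=postgres go run .
```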
Now that we have our stacks, we need convenient commands to start and stop them. For this, we’re going to use a good old Makefile:
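A sketch of that Makefile (target names follow the commands below; the stack file names are assumptions):

```makefile
.PHONY: prod dev stop

# Start the full production-like stack, rebuilding the app image.
prod:
	docker-compose up --build

# Start only the database, with its port exposed to the host.
dev:
	docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

# Stop whichever stack is running and erase the volumes.
stop:
	docker-compose down --volumes
```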
Now you can simply use the following commands:
- make prod: to start the production environment, where you can connect to your microservice on localhost:8080,
- make dev: to start the development environment, where you can connect to the database on localhost:5432,
- make stop: to stop either stack gracefully.
Now that we’ve covered production and development environments, we’ll look at how to write and containerize our integration tests!