Today I want to give you a peek behind the curtain and walk you through how we set up our continuous deployment (CD) pipeline for PSPDFKit API. More specifically, I'll talk about which tools we used and how we structured our deployment workflow.

## Hello World

All our PSPDFKit API infrastructure is hosted on AWS, and the first step in making our CD pipeline work was to choose a tool to define our infrastructure. With AWS, the first choice might be CloudFormation, as it's an AWS-native way of provisioning the infrastructure. However, since we already had experience managing our CI infrastructure with Terraform, we decided that would be the way to go.

With the tool selected, all we needed to do was set up all the AWS resources required to run. The details of this aren't relevant to the CD setup itself. What matters is that by using Terraform, we can check if a pull request changes any part of our deployed infrastructure, and if so, we can apply the changes automatically.

Infrastructure is one half of the equation, and the other half is deploying our actual code. For PSPDFKit API, we make use of ECS, which means we have to build Docker images. Luckily for us, this is something we have a lot of experience with, since PSPDFKit Server and PSPDFKit Processor are shipped as Docker images. There isn't any magic here; we just have a Dockerfile for each of our services that installs the dependencies needed, builds the code, and sets up the entry point.

For our CD pipeline, all we need to do is wrap each build process in a job we can run on CI and then upload the resulting image to our registry. Then, when our ECS tasks start, they pull the latest image from the registry. One thing to note here is that PSPDFKit API is actually composed of multiple different services working together - something we can make use of later to improve the turnaround time of our CD pipeline.

We use Buildkite for our CI, so the final step is to put things together in a way that makes sense. One of the powerful things about Buildkite is that you can dynamically generate your pipeline on demand. For this, we have a Ruby script, and we use it to do the following:

First, we check if there were any changes to our infrastructure by running `terraform plan`. When running `plan`, Terraform will detect all changes between the deployed infrastructure and the infrastructure defined locally. In our case, this encompasses both actual infrastructure changes and changes to the Docker image used (since we tag each image with the Git SHA we built it from, causing it to be marked as a change). For this reason, we explicitly first try running `terraform plan` with the currently deployed image tag to determine if there are infrastructure changes or only code changes.

Next, we check if any of our services needs to be updated. We do this by generating a Git diff between the commit being deployed (in our case, the latest master merge) and the last successfully deployed commit. Then, we check if any of the changed files are part of any of our deployed services. Since we have a monorepo, this is needed to filter out unrelated changes and prevent useless deployments from running. If there are code changes, we build our services.

Finally, once we've determined what parts of the pipeline need to run, we can assemble it. We also make use of concurrency groups to ensure that all merges to master are deployed in order, and that only one job is touching our infrastructure at any one time. Here's how the final pipeline looks, including all the steps that can optionally be skipped:

Here, we take advantage of the fact that we split PSPDFKit API into multiple smaller services.
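To make the per-service Dockerfile setup concrete, here's a minimal sketch of what such a Dockerfile could look like. The base images, file paths, and the assumption of a Node.js service are all invented for illustration; the post only says each Dockerfile installs dependencies, builds the code, and sets up the entry point:

```dockerfile
# Hypothetical multi-stage Dockerfile for one service (stack is invented).
# Stage 1: install dependencies and build the code.
FROM node:18 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: slim runtime image with only the build output and its entry point.
FROM node:18-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
ENTRYPOINT ["node", "dist/server.js"]
```

The CD job then just builds this image, tags it with the Git SHA, and pushes it to the registry that the ECS tasks pull from.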
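The "image tagged with the Git SHA shows up as a Terraform change" behavior falls out of wiring the tag through a Terraform variable. A hypothetical fragment, with the variable name, resource names, and registry URL all invented:

```hcl
variable "image_tag" {
  # The Git SHA the image was built from; a new SHA makes `terraform plan`
  # report the task definition as changed.
  type = string
}

resource "aws_ecs_task_definition" "api" {
  family = "pspdfkit-api"
  container_definitions = jsonencode([{
    name  = "api"
    image = "registry.example.com/pspdfkit-api:${var.image_tag}"
  }])
}
```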
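The "infrastructure changes vs. code-only changes" check could be sketched in Ruby as follows. `terraform plan -detailed-exitcode` is a real Terraform flag (exit status 0 means no changes, 2 means changes are present, 1 means an error); the `image_tag` variable name is an assumption, not something the post specifies:

```ruby
# Interpret the exit status of `terraform plan -detailed-exitcode`:
# 0 = no changes, 2 = changes present, anything else = failure.
def plan_indicates_changes?(exit_status)
  case exit_status
  when 0 then false
  when 2 then true
  else raise "terraform plan failed (exit #{exit_status})"
  end
end

# Run the plan pinned to the image tag that is already deployed, so that a
# new Git SHA in the image tag doesn't count as an infrastructure change.
def infrastructure_changed?(deployed_image_tag)
  system("terraform", "plan", "-detailed-exitcode",
         "-var", "image_tag=#{deployed_image_tag}")
  plan_indicates_changes?($?.exitstatus)
end
```

If this reports no changes, the infrastructure steps can be skipped and only the code-deployment steps go into the generated pipeline.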
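The diff-and-assemble part of the Ruby script could be sketched like this: diff the last successfully deployed commit against the commit being deployed, map changed files onto service directories, and emit Buildkite steps guarded by a concurrency group. The service names, directory layout, and deploy command are invented; only the overall shape follows the post:

```ruby
require "yaml"

# Hypothetical mapping from service name to its directory in the monorepo.
SERVICES = {
  "documents"  => "services/documents/",
  "conversion" => "services/conversion/",
}.freeze

# Files changed between the last successfully deployed commit and the
# commit being deployed (the latest master merge).
def changed_files(last_deployed_sha, current_sha)
  `git diff --name-only #{last_deployed_sha} #{current_sha}`.split("\n")
end

# Keep only the services whose directories contain changed files, so
# unrelated monorepo changes don't trigger useless deployments.
def changed_services(files)
  SERVICES.select { |_name, path| files.any? { |f| f.start_with?(path) } }.keys
end

# Assemble the Buildkite pipeline: one deploy step per changed service,
# serialized through a concurrency group so merges to master deploy in
# order and only one job touches the infrastructure at a time.
def pipeline_for(files)
  steps = changed_services(files).map do |service|
    {
      "label"             => ":rocket: Deploy #{service}",
      "command"           => "scripts/deploy #{service}",
      "concurrency"       => 1,
      "concurrency_group" => "deploy",
    }
  end
  { "steps" => steps }.to_yaml
end
```

In a Buildkite setup, a small wrapper step would run a script like this and pipe its output to `buildkite-agent pipeline upload` to inject the generated steps into the running build.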