
NX e2e testing with AWS Amplify


Recently, I have been working on a project that uses AWS Amplify as its backend. The project itself is set up as an NX workspace. One of my tasks was to set up e2e testing with Cypress in our AWS Amplify build pipeline. It started as an easy task, but I quickly went down the rabbit hole. My builds and test runs succeeded, but at the deploy stage, the builds failed with a rather generic error.

2021-04-20T14:18:15 [INFO]: Starting Deployment
2021-04-20T14:18:15 [ERROR]: Failed to deploy

What was I trying to do?

I was baffled by this error, and I didn't find much information about it. The only issue with a similar error message had already been fixed, merged, and rolled out across AWS. So what did I do wrong? I had assumed that the configuration mentioned in the documentation had optional properties, so I simply skipped the configFilePath property, thinking that I didn't have mocha-style reports. I had set up Cypress with Gherkin and generated a cucumber-style report, which I wanted to preserve as an artefact.

# Additional build configuration which is not relevant here

test:
  artifacts:
    baseDirectory: cyreport
    files:
      - "**/*.png"
      - "**/*.mp4"
      - "**/*.html"
  phases:
    preTest:
      commands:
        - npm ci
    test:
      commands:
        - npm run affected:e2e
    postTest:
      commands:
        - npm run devtools:cucumber:report

After hours of searching on the internet, I went back to where it all started, and I re-read the whole documentation. That is when I noticed the following:

preTest - Install all the dependencies required to run Cypress tests. Amplify Console uses mochawesome to generate a report to view your test results, and wait-on to set up the localhost server during the build.

So I took a mochawesome report json from an old project, committed it, and pointed the configFilePath property at it. Everything worked fine, and the application deployed. The next step was to generate that report from the actual tests.

Setting up the mochawesome reporter in NX e2e tests

To generate mochawesome reports, we first need to install the dependency with npm install mochawesome --save-dev. After running this command, we can configure report generation for an application in its cypress.json. Let's add the following settings to the existing configuration:

{
  "videosFolder": "../../cyreport/client-e2e/videos",
  "screenshotsFolder": "../../cyreport/client-e2e/screenshots",
  "reporter": "../../node_modules/mochawesome/src/mochawesome.js",
  "reporterOptions": {
    "reportDir": "../../cyreport/mochawesome-report",
    "overwrite": false,
    "html": false,
    "json": true,
    "timestamp": "yyyy-mm-dd_HHMMss"
  }
}

NX runs the cypress command inside the folder where it is set up. For example, in my project, the client application had its client-e2e counterpart. We set up the config so that every static asset (videos, screenshots, and the mochawesome-report json files) is generated into the {projectDir}/cyreport folder. This is also why we reference the reporter via its relative path inside the node_modules folder.
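
To make the relative paths easier to follow, here is a simplified sketch of the workspace layout this configuration assumes. The apps/ folder and project names are illustrative (based on NX's default layout); the ../../ segments resolve from the e2e project folder up to the workspace root.

workspace-root/
  amplify.yml
  package.json
  apps/
    client/
    client-e2e/
      cypress.json            <- reporter settings shown above
  cyreport/                   <- generated during test runs, git-ignored
    client-e2e/
      videos/
      screenshots/
    mochawesome-report/       <- one mochawesome*.json per run
  node_modules/
  tools/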

This setup generates a timestamped mochawesome json file for every test run. To combine these into a single merged report, we need the mochawesome-merge library. I added a script to the project's package.json file:

{
  "scripts": {
    "devtools:mochawesome:report": "npx mochawesome-merge cyreport/mochawesome-report/mochawesome*.json > cyreport/mochawesome.json",
  }
}

The script merges everything inside the cyreport/mochawesome-report folder into the cyreport/mochawesome.json file. Then we can set this as our configFilePath property in our amplify.yml file.

# Additional build configuration which is not relevant here

test:
  artifacts:
    baseDirectory: cyreport
    configFilePath: cyreport/mochawesome.json
    files:
      - "**/*.png"
      - "**/*.mp4"
      - "**/*.html"
  phases:
    preTest:
      commands:
        - npm ci
    test:
      commands:
        - npm run affected:e2e
    postTest:
      commands:
        - npm run devtools:mochawesome:report
        - npm run devtools:cucumber:report

In the postTest hook, we added the mochawesome merge script, so the merged report would be present in the cyreport folder. With these changes, we forgot only one thing: if a pull request doesn't change any of the NX projects or libs, affected:e2e will not run any tests in CI. If no tests run, there are no reports, and we have nothing to merge in our devtool script.
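
For reference, the affected:e2e script itself is not shown in this article. In an NX workspace it is usually a thin wrapper around the nx affected command; a rough sketch of how it might be defined in package.json (the --base branch here is an assumption, not the project's actual configuration):

{
  "scripts": {
    "affected:e2e": "nx affected --target=e2e --base=origin/main --head=HEAD"
  }
}

When a pull request touches no project or lib, Nx computes an empty affected set for the e2e target and exits without running a single spec, which is exactly the scenario handled in the next section.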

Generating an empty report for unaffected pull requests

In order for the mochawesome-merge script to run, we need at least one mochawesome_<timestamp>.json file in the cyreport/mochawesome-report folder. Since those folders are listed in our .gitignore, committing one would not be feasible. But we can easily generate a file that follows the format. In the project's tools folder, let's create a simple ensure-mochawesome-report.js script that generates such a json file from a simple json template:

const empty_report = `{
  "stats": {
    "suites": 0,
    "tests": 0,
    "passes": 0,
    "pending": 0,
    "failures": 0,
    "start": "${new Date().toISOString()}",
    "end": "${new Date().toISOString()}",
    "duration": 0,
    "testsRegistered": 0,
    "passPercent": 0,
    "pendingPercent": 0,
    "other": 0,
    "hasOther": false,
    "skipped": 0,
    "hasSkipped": false
  },
  "results": []
}`;

A file based on this template is generated whenever no e2e tests run in the CI/CD pipeline. In that case, the cyreport/mochawesome-report folder will not be present when the script runs, so we need to make sure the path is created before we write our json file. Also, if the directory already exists and contains at least one file, we don't need to generate our dummy json. For these checks, we have two simple helper functions in our script:

const fs = require('fs');
const { join } = require('path');

function isDirectoryEmpty(path) {
  try {
    const files = fs.readdirSync(path);
    if (!files.length) {
      throw new Error('folder is empty');
    }
  } catch (e) {
    console.log(`${path} is empty`);
    return true;
  }
  console.log(`${path} is not empty`);
  return false;
}

function ensureDir(path) {
  try {
    fs.lstatSync(path);
  } catch (e) {
    if (e.code === 'ENOENT') {
      console.log(`creating ${path}...`);
      fs.mkdirSync(path);
    } else {
      console.log(`${path} already exists..`);
    }
  }
}

If the target directory is empty, we need to generate the file:

const cyreportPath = join(__dirname, '../', 'cyreport');
const mochawesomeReportPath = join(cyreportPath, 'mochawesome-report');

if (isDirectoryEmpty(mochawesomeReportPath)) {
  ensureDir(cyreportPath);
  ensureDir(mochawesomeReportPath);

  console.log('creating mochawesome.json empty json file');
  fs.writeFileSync(
    join(mochawesomeReportPath, `mochawesome_${new Date().getTime()}.json`),
    empty_report,
    { encoding: 'utf-8' }
  );
}

And let's update our package.json file to handle this scenario:

{
  "scripts": {
    "devtools:mochawesome:report": "node ./tools/ensure-mochawesome-report.js && npx mochawesome-merge cyreport/mochawesome-report/mochawesome*.json > cyreport/mochawesome.json",
  }
}

Finished?

With this setup, our build runs smoothly in AWS Amplify. However, there's one catch: our "test" phase is not displayed on our builds in the console. That is because AWS Amplify has two sources of truth, and the console display is driven by the build specification you can set up in the Build settings. This can be solved rather quickly by copying the contents of the amplify.yml file and adding it to the App build specification config.

[Screenshot: AWS Amplify build settings]
