Developer Insights
Join millions of readers! Our engineers publish human-written articles solving real-world problems every week, along with fresh technical content and interviews about modern web advancements with industry leaders and open-source authors.
How to set up a local cloud environment with LocalStack
Most applications are powered by AWS because of the breadth of purpose-built services it offers. However, testing an AWS application during development without maintaining a dedicated dev account is hard....
Jun 19, 2024
6 mins
Vercel vs. VPS - What's the drama, and which one should you choose?
A breakdown of the hosting options available to you: when you should choose a cloud provider such as Vercel, and when a VPS or a Linux server in your basement is enough....
May 8, 2024
8 mins
GitHub Actions for Serverless Framework Deployments
Background

Our team was building a Serverless Framework API for a client that wanted to use the Serverless Dashboard for deployment and monitoring. Based on some challenges from last year, we agreed with the client that using a monorepo tool like Nx would be beneficial moving forward, as we were potentially shipping multiple Serverless APIs and frontend applications. Unfortunately, we discovered several challenges integrating with the Serverless Dashboard, and eventually opted for custom CI/CD with GitHub Actions. We'll cover the challenges we faced and the solution we created to mitigate them.

Serverless Configuration Restrictions

By default, the Serverless Framework does all its configuration via a serverless.yml file. However, the framework officially supports alternative formats, including .json, .js, and .ts. Our team opted for the TypeScript format, as we wanted to set up some validation through type checks for our engineers who were newer to the framework. When we eventually went to configure our CI/CD via the Serverless Dashboard UI, the dashboard restricted the file format to YAML only. This was unfortunate, but since our configuration was relatively simple, we were able to quickly revert to YAML and bypass this hurdle.

Prohibitive Project Structures

With our configuration now working, we were able to select the project and launch our first attempt at deploying the app through the dashboard. Immediately, we ran into a build issue:

`

We found that having our package.json in a parent directory of our serverless app prevented the dashboard CI/CD from appropriately detecting and resolving dependencies prior to deployment. We had been deploying using an Nx command: npx nx run api:deploy --stage=dev, which was able to resolve our dependency tree, which looked like:

To resolve this, we thought we could customize the build commands utilized by the dashboard.
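For readers unfamiliar with the TypeScript configuration format mentioned above, here is a hedged sketch of what a minimal serverless.ts can look like. The service name, runtime, and handler path are placeholders, not the project's actual values.

```typescript
// Hypothetical minimal serverless.ts; all values here are illustrative.
const serverlessConfiguration = {
  service: 'api',
  frameworkVersion: '3',
  provider: {
    name: 'aws',
    runtime: 'nodejs18.x',
    stage: "${opt:stage, 'dev'}", // resolved by the Serverless CLI at deploy time
  },
  functions: {
    hello: {
      handler: 'src/handler.main',
      events: [{ http: { method: 'get', path: 'hello' } }],
    },
  },
};

// In a real serverless.ts the object is exported:
// module.exports = serverlessConfiguration;
```

Because the config is a typed object rather than YAML, the compiler can catch misspelled keys before a deploy ever runs.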
Unfortunately, the only way to customize these commands is via the package.json of our project. Nx allows for a package.json per app in its structure, but that defeated the purpose of opting into Nx and made leveraging the tool nearly pointless.

Moving to GitHub Actions with the Serverless Dashboard

We decided to move all of our CI/CD to GitHub Actions while still proxying the dashboard for deployment credentials and monitoring. In the dashboard docs, we found that you could set a SERVERLESS_ACCESS_KEY and still deploy through the dashboard. It took us a few attempts to understand exactly how to specify this key in our action code, but eventually we discovered that it had to be set explicitly in the .env file due to the usage of the Nx build system to deploy. Thus the following actions were born:

api-ci.yml

`

api-clean.yml

`

These actions ran smoothly and allowed us to leverage the dashboard appropriately. All in all, this seemed like a success.

Local Development Problems

The above is a great solution if your team is willing to pay for everyone to have a seat on the dashboard. Unfortunately, our client wanted to avoid the cost of additional seats because the pricing was too high. Why is this a problem? Our configuration looks similar to this (I've highlighted the important lines with a comment):

serverless.ts

`

The app and org variables make a valid dashboard login required. This meant our developers working on the API couldn't do local development, because the client was not paying for dashboard logins. They would get the following error:

Resulting Configuration

At this point, we opted to bypass the dashboard entirely via CI/CD.
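Since the key had to live in a .env file rather than a plain shell variable, the CI step effectively materializes it to disk before invoking the Nx deploy command. A hedged sketch of that step; the helper function is ours, not part of any library, and SERVERLESS_ACCESS_KEY is the variable name from the dashboard docs:

```typescript
// Illustrative helper: render environment variables into .env file content,
// mirroring what our CI step had to do before running the Nx deploy command.
function renderDotenv(vars: Record<string, string>): string {
  return (
    Object.entries(vars)
      .map(([key, value]) => `${key}=${value}`)
      .join('\n') + '\n'
  );
}

// In CI, something like this would run before `npx nx run api:deploy`:
// fs.writeFileSync('.env', renderDotenv({
//   SERVERLESS_ACCESS_KEY: process.env.SERVERLESS_ACCESS_KEY!,
// }));
```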
We had to make the following changes to our actions and configuration to get everything 100% working:

serverless.ts
- Remove app and org fields
- Remove accessing environment secrets via the param option

`

api-ci.yml
- Add all our secrets to GitHub and include them in the scripts
- Add serverless config

`

api-cleanup.yml
- Add serverless config
- Remove secrets

`

Conclusions

The Serverless Dashboard is a great product for monitoring and seamless deployment in simple applications, but it still has a ways to go to support different architectures and setups while being scalable for teams. I hope to see them make the following changes:

- Add support for different configuration file types
- Add better support for custom deployment commands
- Update the framework to not fail on login, so local development works regardless of dashboard credentials

The Nx + GitHub Actions setup was a bit unnatural as well, with its reliance on the .env file existing, so we hope the above action code will help someone in the future. That being said, we've been working with this on the team, and it's been a very seamless and positive change, as our developers can quickly reference their deploys and already know how to interact with Lambda directly for debugging issues....
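With the dashboard's param-based secret resolution removed, secrets come straight from the environment (populated by GitHub secrets in CI). A sketch of the kind of guard one might add in serverless.ts for this; the helper name is ours, not the article's actual code:

```typescript
// Hypothetical guard for reading secrets from process.env in serverless.ts
// after removing the dashboard's param option.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. inside the provider.environment block of the config:
// DATABASE_URL: requireEnv('DATABASE_URL'),
```

Failing fast on a missing variable makes a misconfigured CI run obvious instead of deploying a Lambda with undefined secrets.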
Sep 26, 2022
5 mins
Migrating a classic Express.js app to Serverless Framework
Problem

Classic Express.js applications are great for building backends. However, their deployment can be a bit tricky. There are several solutions on the market for making deployment "easier," like Heroku, AWS Elastic Beanstalk, Qovery, and Vercel. However, "easier" means special configurations or higher service costs. In our case, we were trying to deploy an Angular frontend served through CloudFront, and needed a separately deployed backend to manage an OAuth flow. We needed an easy-to-deploy solution that supported HTTPS and could be automated via CI.

Serverless Framework

The Serverless Framework is a framework for building and deploying applications onto AWS Lambda, and it allowed us to easily migrate and deploy our Express.js server at a low cost with long-term maintainability. It was so simple that it only took us an hour to migrate our existing API and get it deployed so we could start using it in our production environment.

Serverless Init Script

To start this process, we used the Serverless CLI to initialize a new Serverless Express.js project. This is an example of the settings we chose for our application:

`

Here's a quick explanation of our choices:

- What do you want to make? This prompt offers several possible scaffolding options. In our case, the Express API was the perfect solution since that's what we were migrating.
- What do you want to call this project? You should put whatever you want here. It'll name the directory and define the naming schema for the resources you deploy to AWS.
- What org do you want to add this service to? This question assumes you are using the serverless.com dashboard for managing your deployments. We're choosing to use GitHub Actions and AWS tooling directly, though, so we've opted out of this option.
- Do you want to deploy your project? This will attempt to deploy your application immediately after scaffolding. If you don't have your AWS credentials configured correctly, this will use your default profile.
We needed a custom profile configuration since we have several projects on different AWS accounts, so we opted out of the default deploy.

Serverless Init Output

The init script from above outputs the following:

- .gitignore
- handler.js
- package.json
- README.md
- serverless.yml

The key here is the serverless.yml and handler.js files that are output.

serverless.yml

`

handler.js

`

As you can see, this gives a standard Express server ready to work out of the box. However, we needed to make some quality-of-life changes to help us migrate with confidence and allow us to use our API locally for development.

Quality of Life Improvements

There are several things that Serverless Framework doesn't provide out of the box that we needed to help our development process. Fortunately, there are great plugins we were able to install and configure quickly.

Environment Variables

We need per-environment variables, as our OAuth providers are specific per host domain. Serverless Framework supports .env files out of the box, but it does require you to install the dotenv package and turn on the useDotenv flag in the serverless.yml.

Babel/TypeScript Support

As you can see in the above handler.js file, we're getting CommonJS instead of modern JavaScript or TypeScript. To get these, you need webpack or some other bundler. serverless-webpack exists if you want full control over your ecosystem, but there is also serverless-bundle, which gives you a set of reasonable defaults on webpack 4 out of the box. We opted for this option to get started quickly.

Offline Mode

With classic Express servers, you can use a simple Node script to get the server up and running for local testing. Serverless, however, wants to run in the AWS ecosystem, which makes local testing harder. Lucky for us, David Hérault has built and continues to maintain serverless-offline, allowing us to emulate our functions locally before we deploy.
Final Configuration

Given these changes, our serverless.yml file now looks as follows:

`

Some important things to note:

- The order of serverless-bundle and serverless-offline in the plugins list is critically important.
- The custom port for serverless-offline can be any unused port. Keep in mind what port your frontend server is using when setting this value for local development.
- We set the profile and stage in our provider configuration. This allowed us to specify the environment settings and AWS profile credentials to use for our deployment.

With all this set, we're now ready to deploy the basic API.

Deploying the new API

Serverless deployment is very simple. We can run the following command in the project directory:

`

This command will deploy the API to AWS and create the necessary resources, including the API Gateway and related Lambdas. The first deploy will take roughly 5 minutes, and each subsequent deploy will only take a minute or two! In its output, you'll receive a bunch of information about the deployment, including the deployed URL, which will look like:

`

You can now point your app at this API and start using it.

Next Steps

A few issues we still have to resolve, but which are easily fixed:

- New Lambdas are not deploying with their environment variables and have to be set via the AWS console. We're just missing some minor configuration in our serverless.yml.
- Our deploys don't run on merges to main. For this, though, we can just use the official Serverless GitHub Action. Alternatively, we could purchase a license to the Serverless Dashboard, but this option is a bit more expensive, and we're not using all of its features on this project. However, we've used it on other client projects, and it really helped us manage and monitor our deployments.

Conclusion

Given all the above steps, we were able to get our API up and running in a few minutes.
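To make the plugin-ordering and port notes concrete, here is a hedged sketch of the relevant slice of such a configuration, expressed in the TypeScript config format the framework also supports. The profile name and port are placeholders, not the article's real values.

```typescript
// Illustrative fragment only; values are placeholders.
const config = {
  useDotenv: true, // enables per-environment .env loading
  plugins: [
    'serverless-bundle',  // must come before serverless-offline
    'serverless-offline',
  ],
  provider: {
    name: 'aws',
    profile: 'my-aws-profile', // placeholder AWS credentials profile
    stage: "${opt:stage, 'dev'}",
  },
  custom: {
    'serverless-offline': { httpPort: 4000 }, // any unused port works
  },
};

// In a real serverless.ts the object is exported:
// module.exports = config;
```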
And because it is a 1-to-1 replacement for an existing Express server, we were able to port our existing implementations into this new Serverless implementation, deploy it to AWS, and start using it in just a couple of hours. This particular architecture is a great means of bootstrapping a new project, but it does come with some scaling issues for larger projects. As such, we do not recommend a pure Serverless-plus-Express setup for monolithic projects, and instead suggest utilizing some of the amazing capabilities of Serverless Framework with AWS to horizontally scale your application into smaller Lambda functions....
Jan 17, 2022
5 mins
Creating Custom GitHub Actions
Since its generally available release in November 2019, GitHub Actions has seen an incredible increase in adoption. GitHub Actions allows you to automate, customize, and execute your software development workflows. In this article, we will learn how to create our first custom GitHub Action using TypeScript. We will also cover some of the best practices suggested by GitHub for publishing and versioning our actions.

Types of Actions

There are two types of publishable actions: JavaScript and Docker actions. Docker containers provide a more consistent and reliable unit of work than JavaScript actions because they package the environment with the GitHub Actions code. They are ideal for actions that must run in a specific configuration. On the other hand, JavaScript actions are faster than Docker actions, since they run directly on a runner machine and do not have to build a Docker image every time. JavaScript actions can run on Windows, Mac, and Linux, while Docker actions can only run on Linux. But most importantly (for the purpose of this article), JavaScript actions are easier to write. There is a third kind of Action: composite run steps Actions. These help you reuse code inside your project workflows and hide complexity when you do not want to publish the Action to the marketplace. You can quickly learn how to create composite run steps Actions in this video, or by reading through the docs.

The Action

For this article, we will be creating a simple JavaScript Action. We will use the TypeScript Action template to simplify our setup and use TypeScript out of the box. The objective is to walk through the whole lifecycle of creating and publishing a custom GitHub Action. We will be creating a simple action that counts the lines of code (LOC) of a given type of file and throws if the sum of LOC exceeds a given threshold.

> Keep in mind that the source code is not production-ready and should only be used for learning.
The Action will receive three params:

- fileOrFolderToProcess (optional): The file or folder to process
- filesAndFoldersToIgnore (optional): A list of directories to ignore. It supports glob patterns.
- maxCount (required): The maximum number of LOC for the sum of files.

The Action recursively iterates over all files under the folder to calculate the total number of lines of code in our project. During the process, the Action skips the files and folders marked to be ignored, and at the end, if the max count is exceeded, we throw an error. Additionally, we set the total LOC in an Action output no matter the result of the Action.

Setting up the Environment

JavaScript GitHub Actions are not significantly different from any other JavaScript project. We will set up some minimal configuration, but you should feel free to add your favorite workflow. Let us start by creating the repository. As mentioned, we will use the TypeScript GitHub Actions template, which provides some basic configuration for us. We start by visiting https://github.com/actions/typescript-action. We should see something like this:

The first thing we need to do is add a star to the repo :). Once that is done, we click on the "Use this template" button. We are now on a regular "create new repository" page that we must fill in. We can then create our new repository by clicking the "Create repository from template" button. Excellent, now our repository is created. Let us take a look at what this template has provided for us. The first thing to notice is that GitHub recognizes that we are in a GitHub Actions codebase. Because of that, GitHub provides a contextual button to start releasing our Action. The file that enables this integration is the action.yml file. That is the action metadata container, including the name, description, inputs, and outputs. It is also where we reference the entry point .js for our Action.
The mentioned entry point is located in the dist folder, and the files contained there are the result of building our TypeScript files.

> Important! GitHub uses the dist folder to run the Actions. Unlike in other repositories, this build bundle MUST be included in the repository and should not be ignored.

Our source code lives in the src folder. main.ts is what gets compiled into our Action entry point, index.js. That is where most of our work will be focused.

Additional files and configurations

In addition to the main files, the TypeScript template also adds configuration files for Jest, TypeScript, Prettier, and ESLint. A README template and a CODEOWNERS file are included, along with a LICENSE. Lastly, it also provides us with a GitHub CI YAML file with everything we need to e2e test our Action.

Final steps

To conclude our setup walkthrough, let us clone the repository. I will be using mine, but you should replace the repository with yours.

`

Navigate to the cloned project folder, and install the dependencies.

`

Now we are ready to start implementing our Action.

The implementation

First, we must configure our action.yml file and define our API.

The metadata

The first three properties are mostly visual metadata for the Workspace and the Actions tab.

`

The name property is the name of your Action. GitHub displays the name in the Actions tab to help visually identify actions in each job. GitHub will also use the name, the description, and the author of the Action to inform users about the Action's goal in the Actions Marketplace. Ensure a short and precise description; doing so will help users of the Action quickly identify the problem that the Action solves. Next, we define our inputs. As with the Action itself, we should write a short and precise description to avoid confusion about the usage of each input variable.
`

We mark our inputs as required or optional, according to what we specified when describing our plans for the Action. Default values help provide pre-configured data to our Action. As with the inputs, we must define the outputs.

`

Actions that run later in a workflow can use the output data set in our Action's run. If you don't declare an output in your action metadata file, you can still set outputs and use them in a workflow. However, it would not be evident to a user searching for the Action in the Marketplace, since GitHub cannot detect outputs that are not defined in the metadata file. Finally, we define the application running the Action and the entry point for the Action itself.

`

Now, let's see everything together so we can appreciate the big picture of our Action metadata.

`

The Code

Now that we have defined all our metadata and made GitHub happy, we can start coding our Action. Our code entry point is located at src/main.ts. Let's open the file in our favorite IDE and start coding. Let's clean out all the unnecessary code that the template created for us. We will, however, keep the core tools import.

`

The core library gives us all the tools we need to interact with the inputs and outputs, force the step to fail, add debugging information, and much more. Discover all the tools provided by the GitHub Actions Toolkit. After cleaning up all of the example code, the initial step is extracting and transforming our inputs into a proper form.

`

With our inputs ready, we need to start thinking about counting our LOC while enforcing the input restrictions. Luckily, there are a couple of libraries that can do this for us. For this example, we will be using node-sloc, but feel free to use any other. Go ahead and install the dependency using npm or any package manager you prefer.

`

Import the library.

`

And the rest of the implementation is straightforward.

`

Great! We have our LOC information ready.
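The extract-and-transform step can be sketched as follows. In the real action the raw strings come from core.getInput(); they are plain parameters here so the logic stands alone, and the function name is ours.

```typescript
// Sketch of the input transformation step for the three params defined above.
interface RawInputs {
  fileOrFolderToProcess?: string;
  filesAndFoldersToIgnore?: string;
  maxCount: string; // action inputs always arrive as strings
}

function parseInputs(raw: RawInputs) {
  const maxCount = Number(raw.maxCount);
  if (Number.isNaN(maxCount)) {
    throw new Error('maxCount must be a number');
  }
  return {
    target: raw.fileOrFolderToProcess || '.',
    ignore: (raw.filesAndFoldersToIgnore || '')
      .split(',')
      .map((s) => s.trim())
      .filter(Boolean),
    maxCount,
  };
}
```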
Let's use it to set the output defined in the metadata before doing anything else.

`

Additionally, we will provide debuggable data. Notice that debug information is only available if the repository owner has activated debug logging capabilities.

`

Here is the link if you are interested in debugging the Action yourself. Finally, verify that the LOC count does not exceed the threshold.

`

If the threshold is exceeded, we use core.setFailed to make this action step fail and, therefore, the entire pipeline fail.

`

Excellent! We just finished our Action. Now we have to make it available to everyone. But first, let's configure our CI to perform an e2e test of our Action. Go to the file .github/workflows/*.yml. I called mine ci.yml, but you can use whatever name makes sense to you.

`

Here, we trigger the pipeline whenever a pull request is created with base branch main, or the main branch itself is pushed. Then, we run the base setup steps, like installing the packages and building the action, to verify that everything works as it should. Finally, we run e2e jobs that test the action as if we were running it in an external application. That's it! Now we can publish our Action with confidence.

Publish and versioning

Something you must not forget before any release is to build and package your Action.

`

These commands compile your TypeScript and JavaScript into a single-file bundle in the dist folder. With that ready, we can commit our changes to the main branch and push to origin. Go back to your browser and navigate to the Action repository. First, go to the Actions tab and verify that our pipeline is green and the Action is working as expected. After that check, go back to the "Code" tab, the home route of our repository. Remember the "Draft a release" button? Well, it is time to click it. We are now on the releases page. This is where our first release will be created.
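The output-then-check flow described above can be sketched like this. In the action, core.setOutput and core.setFailed from @actions/core play these roles; they are modeled as plain callbacks here so the control flow is visible and testable.

```typescript
// Sketch of the final reporting step: always publish the LOC total, then fail
// the step only when the threshold is exceeded.
function enforceThreshold(
  totalLoc: number,
  maxCount: number,
  setOutput: (name: string, value: string) => void,
  setFailed: (message: string) => void,
): void {
  setOutput('total', String(totalLoc)); // output is set pass or fail
  if (totalLoc > maxCount) {
    setFailed(`Total lines of code ${totalLoc} exceeds the maximum ${maxCount}`);
  }
}
```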
Click on the terms and conditions link, and agree with the terms to publish your actions. Check the "Publish this Action to the GitHub Marketplace" input, and fill in the rest of the information. You can mark this as a pre-release if you want to experiment with the Action before inviting users to use it. And that's it! Just click the "Publish release" button. Tada! Click on the Marketplace button to see how your Action looks!

After the first release is out, you will probably start adding features or fixing bugs. There are some best practices that you should follow while maintaining your versioning. Use this guide to keep your versions under control. The main idea is that the major tag (v1, for instance) should always reference the latest tag with the same major version. This means that if we release v1.9.3, we should update v1 to point to the same commit as v1.9.3. Our Action is ready. The obvious next step is to test it with a real application.

Using the Action

Now it is time to test our Action and see how it works in the wild. We are going to use our Plugin Architecture example application. If you haven't read that article yet, here is the link. The first thing we need to do is create a new git branch. After that, we create our ci.yml file under .github/workflows and add the following pipeline code.

`

Basically, we trigger this Action when a PR is created using main as the base branch, or when we push directly to main. Then, we add a single job that checks out the PR branch and uses our Action with a max count of 200. Finally, we print the value of our output variable. Save, commit, and push. Create your PR, go to the checks tab, and see the result of your effort. Great! We have our first failing custom GitHub Action. Now, 200 is a bit strict. Maybe 1000 lines of code is more appropriate. Adjust your step, commit, and push to see your pipeline go green and pass. How great is that!?
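The moving-major-tag convention above can be made concrete with a small helper (ours, not part of any GitHub tooling); the git commands in the trailing comment are the standard way to retag.

```typescript
// Toy helper: derive the moving major tag from a full vX.Y.Z release tag.
function majorTag(releaseTag: string): string {
  const match = /^v(\d+)\.(\d+)\.(\d+)$/.exec(releaseTag);
  if (!match) {
    throw new Error(`Not a vX.Y.Z release tag: ${releaseTag}`);
  }
  return `v${match[1]}`;
}

// After releasing v1.9.3, the moving tag is updated with plain git:
//   git tag -fa v1 v1.9.3 -m "Point v1 at v1.9.3"
//   git push origin v1 --force
```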
Conclusion

Writing custom GitHub Actions using JavaScript and TypeScript is really easy, but it can seem challenging when we are not familiar with the basics. We covered an end-to-end tutorial on creating, implementing, publishing, and testing your custom GitHub Action. This is really just the beginning. There are unlimited possibilities for what you can create using GitHub Actions. Use what you learned today to make the community a better place with the tools you can create for everyone....
Oct 27, 2021
13 mins
Build IT Better - DevOps - Monitoring Roundup
On This Dot's Build IT Better show, I talk to people who make popular tools that help developers build great software. In my most recent series, we looked at application monitoring tools. Marcus Olssen from Grafana and Ben Vinegar from Sentry showed us how the tools they work on can help developers keep their applications running smoothly.

Grafana

Grafana is an organization that builds a number of open-source observation and monitoring tools for collecting and visualizing application metrics. Their namesake product is a platform for aggregating and visualizing nearly any kind of data from a near-limitless number of sources via their rich plugin library. Grafana's commercial counterpart, Grafana Labs, maintains this plugin library as well as educational resources for the Grafana ecosystem, plus paid products and services for companies looking for help managing their own Grafana tooling.

Flexibility

Grafana is a platform for application monitoring and analytics that offers a huge amount of flexibility for collecting and analyzing application data. Instead of providing a hyper-focused application monitoring solution, Grafana provides unparalleled flexibility for collecting almost any kind of data. Grafana offers built-in integrations for all the most popular SQL and non-SQL databases, as well as Grafana's own popular application monitoring tools, Prometheus, Loki, and Tempo (and a handful of other popular sources). Community-developed plugins can be used to add support for most other platforms. This flexibility gives Grafana applications outside the traditional monitoring use cases. Some are even using Grafana to track their own home energy usage and health data. You really can analyze almost any kind of data in Grafana.
*A Grafana dashboard with custom metrics*

Datasource Compatibility

While flexibility allows Grafana to reach across industries to find users and use cases, it still excels at traditional application monitoring. Developers can use Prometheus to pull data out of their own applications. Most popular host operating systems and application development frameworks offer community-developed integrations with Prometheus that provide useful system and application data, like resource usage and response time, as well as the ability to publish your own custom application data. Loki is a tool for aggregating and querying system and application logs. You can also use Tempo for aggregating distributed application trace data from tools like Jaeger, OpenTelemetry, and Zipkin. If you use all four tools together, you can visually trace transactions all the way through your application, even as the user shifts between different components of your microservice architecture.

Visualization and Analysis

All of this flexible data collection technology would be useless without Grafana's equally flexible visualization platform. Once you've integrated all your data sources, you can use Grafana to explore and visualize the data you've collected. You can use dashboards to create an array of visualizations of your data. As a DevOps engineer, one of my favorite things about Grafana is their dashboard library. The dashboard library contains community-developed dashboards for a number of popular (and not so popular) application frameworks, backend tools, and systems. Instead of needing to build your own dashboards from scratch for monitoring Rails apps and PostgreSQL databases, you can simply add and modify community dashboards, saving you time and providing insights you may not have considered on your own. Finally, we have to mention the Explore tool.
It can be easy to overlook with everything that's possible with dashboards, but it allows users to easily view, query, and analyze data streams on the fly without needing to create permanent dashboards.

*Grafana Nginx Dashboard - available from the dashboard library*

This big-tent collection of features makes Grafana a great platform for observing any amount of any kind of data. The flexibility does come with the overhead of needing to know a lot about a number of different tools, like Prometheus and Loki, which carry a non-trivial amount of overhead on their own. As with any community-developed content, plugins and dashboards from the library don't always work as expected out of the box and will often need to be modified to line up with your DevOps procedures and environments.

Sentry

Sentry, like Grafana, is a tool for monitoring application health and performance. However, unlike Grafana, Sentry is laser-focused on providing curated experiences with deep first-party integrations for popular application development tools, and it provides additional tools for tracking user errors and code changes, which it uses as the framing narrative for all of the data the Sentry platform surfaces for developers. Integrations are available for most popular frontend JavaScript frameworks (React, Angular, Vue, etc.) and backend applications in Python, Ruby, Go, and more. Sentry gives developers a huge amount of visibility without the overhead of more complex DevOps-driven platforms like Grafana.

Developer Focused

Sentry's primary goal is to help you understand what's wrong with all of the parts of your application. It does this by giving you a view into what errors your users are experiencing in real time. Sentry collects data on all of the exceptions thrown by applications that have a Sentry integration. As you investigate individual issues, Sentry provides you with a curated collection of datapoints to cross-reference with the specific error.
Sentry provides some very traditional data, such as the user's browser agent, OS, geographical location, and the URL they were visiting, but it also connects that error back to the code. Not only can you see the stack trace and easily find the lines of code where the error manifested, but Sentry also uses its deep integration to provide what they call "Breadcrumbs." These are pieces of data about the activity that led up to the error. Depending on what type of application you're troubleshooting, these might be things like log output, events fired from UI elements, or your own custom breadcrumb events. They can give you a better idea of the actions the user took leading up to the error.

*Sentry's Issue (aka Error) View*

*A sample of Sentry's Breadcrumbs*

Integrations

In addition to helping you identify the root cause of your errors, Sentry also aggregates errors to make it easier to understand which errors have the highest impact on your application. You can easily identify errors that are happening frequently and on critical paths. If you've enabled integration with a source control platform like GitHub, Sentry will even make suggestions as to which code commits introduced the problem. All these features together help you tackle application health like a DevOps expert, without needing to be one.

Application Performance

Debugging and error surfacing aren't the only places where Sentry shines. I'm really excited to talk about Sentry's performance and application tracing platform. Using their deep framework and platform integrations, you're able to collect a lot of performance data from your applications and correlate it with user behaviors. Similar to the debugging experience, Sentry starts you from a broad view of your performance picture, shows you the slowest pages and endpoints of your application, and provides you with another curated experience for investigating and resolving performance problems.
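Conceptually, breadcrumbs behave like a bounded buffer of recent events that gets snapshotted onto each error report. A toy model of the idea (this is our illustration, not Sentry's actual SDK API):

```typescript
// Toy breadcrumb trail: keep only the most recent N events, and snapshot them
// when an error is reported.
interface Breadcrumb {
  category: string; // e.g. 'ui.click', 'console', 'fetch'
  message: string;
}

class BreadcrumbTrail {
  private crumbs: Breadcrumb[] = [];
  constructor(private readonly limit: number = 100) {}

  add(crumb: Breadcrumb): void {
    this.crumbs.push(crumb);
    if (this.crumbs.length > this.limit) {
      this.crumbs.shift(); // drop the oldest once over the limit
    }
  }

  // What would accompany an error report: the recent-activity snapshot.
  snapshot(): Breadcrumb[] {
    return [...this.crumbs];
  }
}
```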
The most interesting aspect of the performance investigation tools is transactions, or traces. When you choose a slow page to begin investigating, alongside the individual performance metrics for that page are transactions. These transactions allow you to see the performance of your pages broken into waterfall graphs, like you might already be used to from browser dev tools. However, Sentry adds some really cool tricks, since it's deeply integrated into all the parts of your application. If you analyze a transaction that starts from your JavaScript app and see a fetch request that's taking a long time, then, assuming the API is a part of your stack that's integrated with Sentry, you can click down into that fetch request within the Sentry UI, switch contexts to the API application, and see a waterfall graph of what the API did to handle that request. This lets you simply traverse your whole application to identify the exact source of performance problems. These transactions also benefit from the same Breadcrumb and code change data that's provided in the error analysis tools.

Conclusions

Sentry and Grafana are both strong tools to add to your DevOps toolbelt. While they both provide great features for observing application health and analyzing data, they fill two pretty different niches. Sentry provides curated developer experiences and deep integrations that help developers dive head-first into error and performance monitoring for their applications without needing to be experts. For experts and "data scientists," however, Grafana provides an incredibly powerful and flexible platform for analyzing not only application metrics and health, but really any data you can manage to get into a dashboard. Some organizations may even benefit from using both tools for different use cases....
Apr 20, 2021
7 mins
Let's innovate together!
We're ready to be your trusted technical partners in your digital innovation journey.
Whether it's modernization or custom software solutions, our team of experts can guide you through best practices and how to build scalable, performant software that lasts.