Developer Insights
Join millions of readers! Our engineers publish human-written articles solving real-world problems every week. Enjoy fresh technical content, plus interviews with industry leaders and open-source authors on modern web advancements.
Integrating Next.js with New Relic
Check out our detailed guide on integrating New Relic’s application monitoring with Next.js....
Jul 19, 2024
7 mins
Communication Between Client Components in Next.js
Describing different strategies for communication between client components in Next.js....
Jun 21, 2024
5 mins
Maximizing Server Rendering for Interactive Next.js Applications
Maximize server rendering while maintaining interactivity by strategically combining React Server Components (RSCs) and client components in a Next.js application....
May 15, 2024
5 mins
A Look at Playwright Parallelism
In this blog post, we are exploring Playwright’s parallelism capabilities to speed up test execution....
Mar 20, 2024
5 mins
Deploying Next.js Applications to Fly.io
Fly.io has gained significant popularity in the developer community recently, particularly after being endorsed by Kent C. Dodds for hosting his Epic Web project. It's a go-to choice for hobby projects, thanks to its starting plans that are effectively free, making it highly accessible for individual developers. While Next.js applications are often deployed to Vercel, Fly.io has emerged as a perfectly viable alternative, offering robust hosting solutions and global deployment capabilities. In this blog post, we'll give an overview of how to deploy a Next.js app to Fly.io, mentioning any gotchas you should be aware of along the way.

The Project

Our project is a simple Next.js project using the latest version of Next.js at the time of writing (14). It uses the app directory and integrates with Spotify to get a list of episodes for our podcast, Modern Web. The bulk of the logic is in the page.tsx file, shown below, which represents the front page that is server-rendered after retrieving a list of episodes from Spotify.

`

The getEpisodes function is a custom wrapper around the Spotify API. It uses the Spotify client ID and secret (provided through the SPOTIFY_CLIENT_ID and SPOTIFY_CLIENT_SECRET environment variables, respectively) to get an access token from Spotify and invoke a REST endpoint that returns the list of episodes for the given show ID. As can be seen from the above code, Home is an async, server-rendered component.
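Since the snippet itself isn't reproduced here, a rough sketch of what such a page.tsx can look like follows - the episode fields and the show ID are placeholders, not the post's exact code:

```tsx
// page.tsx - a minimal sketch of the server-rendered front page described
// above. getEpisodes and the episode shape (id, name) are assumptions.
import { getEpisodes } from "../lib/spotify";

export default async function Home() {
  // Runs on the server: getEpisodes exchanges SPOTIFY_CLIENT_ID and
  // SPOTIFY_CLIENT_SECRET for an access token, then fetches the episodes.
  const episodes = await getEpisodes("modern-web-show-id"); // placeholder ID

  return (
    <ul>
      {episodes.map((episode: { id: string; name: string }) => (
        <li key={episode.id}>{episode.name}</li>
      ))}
    </ul>
  );
}
```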
Scaffolding of Fly.io Configuration

To get started with Fly.io and deploy a new project using flyctl, you need to go through a few simple steps: installing the flyctl CLI, logging into Fly.io, and using the flyctl launch command.

Installing the CLI

Installing flyctl differs depending on the operating system you use:
- If you're on Windows, the easiest way to install flyctl is by using scoop, a command-line installer. First, install scoop if you haven't already, then run scoop install flyctl in your command prompt or PowerShell.
- For macOS users, you can use Homebrew, a popular package manager. Simply open your terminal and run brew install superfly/tap/flyctl.
- Linux users can install flyctl by running the following script in the terminal: curl -L https://fly.io/install.sh | sh. This will download and install the latest version.

Logging In

After installing flyctl, the next step is to log in to your Fly.io account. Open your terminal or command prompt and enter flyctl auth login. This command will open a web browser prompting you to log in to Fly.io. If you don't have an account, you can create one at this step. Once you're logged in through the browser, the CLI will be authenticated automatically.

Scaffolding the Fly.io Configuration

The next step is to use fly launch to add all the necessary files for deployment, such as a Dockerfile and a fly.toml file, which is the main Fly.io configuration file. This command initiates a few actions:
- It detects your application type and proposes a configuration.
- It sets up your application on Fly.io, including provisioning a new app name if you don't specify one.
- It allows you to select a region to deploy to. There are many regions to choose from, so you can be picky here.

Once the process completes, flyctl will be ready to deploy the application. In our case, the process went like this:

`

Deploying

Now, if this were a simpler Next.js app without any environment variables, running flyctl deploy would build the Docker image in a specialized "builder" app container on Fly.io and deploy that image to the app container running the app. However, in our case, executing flyctl deploy fails:

`

This is because our page is statically rendered: the Next.js build process attempts to run Home, our server-rendered component, to cache its output. Before we can deploy, we need to add our environment variables so that Fly.io is aware of them. This is a somewhat tricky subject, so let's explain why in the following section.

Handling of Secrets

Most complex web apps will need some secrets injected via environment variables. Environment variables are a good way to inject sensitive information, such as API secret keys, into your web app without storing it in the repository, the file system, or any other unprotected place.

Unlike other providers such as the previously mentioned Vercel, Fly.io distinguishes between build-time and run-time secrets, which are then injected as environment variables. Build-time secrets are those your app requires to build itself, while run-time secrets are needed while the app is running. In our case, because Next.js attempts to cache our static pages upfront, the Spotify client ID and client secret are needed both at build time and at run time (after the cache expires).

Build-Time Secrets

Our Next.js app is built while building the Docker image, so build-time secrets must be passed to the Docker context. The recommended, Docker-native way of doing this is through Docker's build-time secrets, which are added through a special --mount=type=secret flag on the RUN command that builds the site. This is a relatively new feature that allows you to securely pass secrets needed during the build process without including them in the final image or even in an intermediate layer. This means that, instead of having the following build command in our Dockerfile:

`

we will have:

`

You can either modify the Dockerfile manually and add this, or use a helpful little utility called dockerfile:

`
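Assuming the site is built with npm run build, the changed build step looks roughly like this - a sketch, not the exact Dockerfile generated by fly launch:

```dockerfile
# Sketch of the build step using Docker build-time secrets (BuildKit).
# Before, the build ran without access to any secrets:
#   RUN npm run build
# After: each secret is mounted only for the duration of this command and
# exposed as an environment variable; it never lands in an image layer.
RUN --mount=type=secret,id=SPOTIFY_CLIENT_ID \
    --mount=type=secret,id=SPOTIFY_CLIENT_SECRET \
    SPOTIFY_CLIENT_ID="$(cat /run/secrets/SPOTIFY_CLIENT_ID)" \
    SPOTIFY_CLIENT_SECRET="$(cat /run/secrets/SPOTIFY_CLIENT_SECRET)" \
    npm run build
```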
If we were using docker build to build our image, we would pass the secret values like so:

`

However, in our case we use fly deploy, which is essentially a wrapper around docker build. To pass the secrets, we would use the following command:

`

And now the app builds and deploys successfully in a few minutes. To summarize: if you have any secrets that are necessary at build time, they need to be provided to the fly deploy command. This means that if you have a CI/CD pipeline, you need to make sure these secrets are available to your CI/CD platform. In the case of GitHub Actions, they would need to be stored in your repository's secrets.

Run-Time Secrets

Run-time secrets are handled differently - you provide them to Fly.io via the fly secrets set command:

`

Now, you might be wondering why fly deploy cannot use these secrets if they are already stored in Fly.io. The architecture of Fly.io is set up in such a way that reading these secrets through the API, once they are set, is not possible. Secrets are stored in an encrypted vault. When you set a secret using fly secrets set, it sends the secret value through the Fly.io API, which writes it to the vault for your specific Fly.io app. The API servers can only encrypt; they cannot decrypt secret values. Therefore, the fly deploy process, which is, as mentioned, just a wrapper around docker build, cannot access the decrypted secret values.

Other Things to Consider

Beware of .env Files

In Next.js, you can use .env as well as .env.local for storing environment variable values for local development. However, keep in mind that only .env.local files are ignored by the Docker build process via the .dockerignore file generated by Fly.io. This means that if you happen to be using an .env file, it could be bundled into your Docker image - a potential security risk if it contains sensitive information. To prevent this from happening, be sure to add .env to your .dockerignore file as well.

Not Enough Memory?

For larger Next.js sites, you might run into situations where the memory of your instance is simply not enough to run the app, especially if you are on the hobby plan. If that happens, you have two options.

The first does not incur any additional costs: increasing the swap size. This is not ideal, as more disk I/O is involved, making the entire process slower, but it is good enough if you don't have any other options. To set the swap size to something like 512 MB, add the following line to the fly.toml file near the top:

`

The second option is increasing the memory size of your instance. This does incur additional cost, however. If you decide to use this option, the command to use is:

`

For example, to increase the memory to 1024 MB, you would use the command:

`

After making these changes, you can redeploy and see whether the process still crashes due to lack of memory.

Conclusion

In conclusion, deploying Next.js applications to Fly.io offers a flexible and robust solution for developers looking for alternatives to more commonly used platforms like Vercel. We hope this blog post has provided you with some useful insights on the things to consider when doing so. Be sure to also check out our Next starter templates on starter.dev if you'd like to integrate a few other frameworks into your Next.js project. The entire source code for this project is available on Stackblitz....
Feb 21, 2024
7 mins
OAuth2 for JavaScript Developers
Using GitHub as an example, the post guides JavaScript developers through the OAuth2 process, emphasizing its importance for secure and efficient third-party integrations in web applications....
Jan 19, 2024
5 mins
Angular 17: Continuing the Renaissance
Dive into the Angular Renaissance with Angular 17, emphasizing standalone components, enhanced control flow syntax, and a new lazy-loading paradigm. Discover server-side rendering improvements, hydration stability, and support for view transitions....
Nov 20, 2023
6 mins
The Renaissance of PWAs
What Are PWAs?

Progressive Web Apps, or PWAs, are not a new concept. In fact, they have been around for years and have been adopted by companies such as Starbucks, Uber, Tinder, and Spotify. Here at This Dot, we have written numerous blog posts on PWAs.

PWAs are essentially web applications that utilize modern web technologies to deliver a user experience akin to native apps. They can work offline, send push notifications, and even be added to a user's home screen, thus blurring the boundaries between web and native apps.

PWAs are built using standard web technologies like HTML, CSS, and JavaScript. However, they leverage advanced web APIs to deliver enhanced capabilities. Most web apps can be transformed into PWAs by incorporating certain features and adhering to standards. The keystones of PWAs are the service worker and the web app manifest. Service workers enable offline operation and background syncing by acting as network proxies, managing requests programmatically. The web app manifest, on the other hand, gives the PWA a native-like presence on the user's device, specifying its appearance when installed.

Looking Back

The concept of PWAs was introduced by Google engineers Alex Russell and Frances Berriman in 2015, even though Steve Jobs had already discussed the idea of web apps that resembled and behaved like native apps as early as 2007. However, despite the widespread adoption of PWAs over the years, Apple's approach to PWAs changed drastically after Steve Jobs' 2007 presentation, distinguishing it from other tech giants such as Google and Microsoft. As a leader in technological innovation, Apple was notably slower in adopting PWA technology, much of which can be attributed to its business model and the ecosystem it built around the App Store. For example, Safari, Apple's web browser, has historically been slow to adopt the latest web standards and APIs crucial for PWAs. Features such as push notifications, background sync, and access to certain hardware functionalities were unsupported or only partially supported for a long time. As a result, the PWA experience on iOS/iPadOS was not - and to some extent, still isn't - on par with that provided by Android.

Despite the varying degrees of support from different vendors, PWAs have seen a significant increase in adoption since 2015, both by businesses and users, due to their cross-platform nature, offline capabilities, and the enhanced user experience they offer. Major corporations like Twitter, Pinterest, and Alibaba have launched PWAs, leading to substantial increases in user engagement and session duration. For instance, according to a 2017 case study, Pinterest's PWA led to a 60% increase in core engagements and a 44% increase in user-generated ad revenue. Google and Microsoft have also championed this technology, integrating more PWA support into their platforms. Google highlighted the importance of PWAs for the mobile web, while Microsoft sought to populate its Windows Store with PWAs.

Apple's Shift Towards PWAs

Despite slower adoption and limited support, Apple isn't completely dismissing PWAs. Recent updates have indicated some promising improvements in PWA capabilities on both iOS/iPadOS and MacOS. For instance, in iOS/iPadOS 16, released last year, Apple added notifications for Home Screen web apps, utilizing the Web Push standard with support for badging. They also included an API for iOS/iPadOS browsers to facilitate the 'Add to Home Screen' feature.
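For context, a bare-bones Web Push handler in a service worker tends to look like the sketch below - a generic example, not code from Apple or from this post, and the payload shape is an assumption:

```ts
// service-worker.ts - a generic Web Push handler sketch.
// Requires the "webworker" lib; `self` is the service worker's global scope.
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("push", (event) => {
  // Assumed payload shape: { title: string; body?: string }
  const payload = event.data?.json() ?? { title: "New notification" };
  event.waitUntil(
    self.registration.showNotification(payload.title, { body: payload.body }),
  );
});
```

Badging itself is exposed through the separate Badging API: navigator.setAppBadge() and navigator.clearAppBadge().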
The forthcoming Safari 17 and MacOS Sonoma releases, announced at June's WWDC, promise even more significant changes, most notably:

Installing Web Apps on MacOS, iOS, and iPadOS

Any web app, not just PWAs, can now be added to the MacOS dock from the File menu. Once added, these web apps open in their own window and integrate with operating system features such as Stage Manager, Screen Time, Notifications, and Focus. They also have their own isolated storage, and any cookies present in the Safari browser for that web app at the time of installation are transferred to this isolated storage. This means many users will not need to re-authenticate to web apps after installing them locally.

PWAs, in particular, can control the appearance and behavior of the installed web app via the PWA manifest. For instance, if your web app already includes navigation controls, or if they're not necessary in the context of the app, you can manage whether the navigation buttons are displayed by setting the display configuration option in the manifest to standalone, as sketched below. The display option is also taken into consideration on iOS/iPadOS, where standalone web apps become *Home Screen web apps*. These apps offer a standalone, app-like experience on iOS, complete with separate cookies and storage from the browser, and improved notification handling.
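For reference, a minimal manifest using that option could look like the following; the name and icon path are placeholders:

```json
{
  "name": "My PWA",
  "short_name": "MyPWA",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```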
Improved Notifications

Apple initially added support for notifications in iOS/iPadOS 16, but Safari 17 and MacOS Sonoma take it a step further. If you've already implemented Web Push according to web standards, then push notifications should work for your web page as a web app on Mac without any additional effort. Moreover, the silent property is now taken into account, and there are several improvements to the Notifications API to enhance its reliability. These recent updates put the support for notifications on Mac on par with iOS/iPadOS, including support for badging and seamless integration with Focus mode.

Improved API

Over the past year, Apple has also introduced several other API-level improvements. The enhancements to the User Activation API help determine whether a function that depends on user activation, such as requesting permission to send notifications, may be called. Safari 16 updated the un-prefixed Fullscreen API and introduced preliminary support for the Screen Orientation API. Safari 17 in particular has improved support for the Storage API and added support for ReadableStream.

The Renaissance of PWAs?

With the new PWA-related features in the Apple ecosystem, it's hard not to wonder if we are witnessing a renaissance of PWAs. Initially praised for their ability to leverage web technology to create app-like experiences, PWAs went through a period of relative stagnation, especially on Apple's platforms. For a time, Apple was noticeably more conservative in its implementation of PWA features. However, its recent bolstering of PWA support in Safari signals a significant shift, aligning the browser with other major platforms such as Google Chrome and Microsoft Edge, both of which have long supported PWAs.

The implications of this shift are profound. Widespread and robust support for PWAs across all major platforms could effectively close the gap between web applications and native applications. PWAs, with their promise of a single, consistent experience across all devices, could become the preferred choice for businesses and developers. The cost-effectiveness of developing and maintaining one PWA versus separate applications for multiple platforms is an undeniable benefit. The fact that all these platforms are now heavily supporting PWAs suggests an industry-wide shift toward a more unified and simplified development paradigm, hinting that indeed, we could be on the verge of a PWA renaissance....
Aug 18, 2023
5 mins
Utilizing AWS Cognito for Authentication
Utilizing AWS Cognito for Authentication

AWS Cognito, one of the most popular services of Amazon Web Services, is at the heart of many web and mobile applications, providing numerous useful user identity and data security features. It is designed to simplify the process of user authentication and authorization, and many developers decide to use it instead of developing their own solution.

"Never roll your own authentication" is a common phrase you'll hear in the development community, and not without reason. Building an authentication system from scratch can be time-consuming and error-prone, with a high risk of introducing security vulnerabilities. Existing solutions like AWS Cognito have been built by expert teams, extensively tested, and are constantly updated to fix bugs and meet evolving security standards.

Here at This Dot, we've used AWS Cognito together with Amplify in many of our projects, including Let's Chat With, an application that we recently open-sourced. In this blog post, we'll show you how we accomplished that, and how we used various Cognito configuration options to tailor the experience to our app.

Setting Up Cognito

Setting up Cognito is relatively straightforward, but requires several steps. In Let's Chat With, we set it up as follows:
1. Sign in to the AWS Console, then open Cognito.
2. Click "Create user pool" to create a user pool. User pools are essentially user directories that provide sign-up and sign-in options, including multi-factor authentication and user-profile functionality.
3. In the first step, select "Email" as the sign-in option, and click "Next".
4. Choose "Cognito defaults" as the password policy and "No MFA" for multi-factor authentication. Leave everything else at the default, and click "Next".
5. In the "Configure sign-up experience" step, leave everything at the default settings.
6. In the "Configure message delivery" step, select "Send email with Cognito".
7. In the "Integrate your app" step, enter names for your user pool and app client. For example, the user pool might be named "YourAppUserPool_Dev", while the app client could be named "YourAppFrontend_Dev".
8. In the last step, review your settings and create the user pool.

After the user pool is created, make note of its user pool ID, as well as the client ID of the app client created under the user pool. These two values will be passed to the configuration of the Cognito API.

Using the Cognito API

Let's Chat With is built on top of Amplify, AWS's collection of services that make development of web and mobile apps easy. Cognito is one of the services that powers Amplify, and Amplify's SDK offers helper methods to interact with the Cognito API. In an Angular application like Let's Chat With, the initial configuration of Cognito is typically done in the main.ts file as shown below:

`

How the user pool ID and user pool web client ID are injected depends on your deployment option. In our case, we used Amplify and defined the environment variables for injection into the built app using Webpack.

Once Cognito is configured, you can utilize its authentication methods from the Auth class in the @aws-amplify/auth package. For example, to sign in after the user has submitted the form containing the username and password, you can use the Auth.signIn(email, password) method as shown below:

`

The logged-in user object is then translated to an instance of CoreUser, which is the internal representation of the logged-in user.
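As an illustration, those two pieces can look roughly like this with Amplify's SDK - a sketch rather than the exact Let's Chat With code, with assumed environment field names:

```ts
// main.ts (excerpt) - configure Amplify's Auth category with the two IDs
// noted earlier; the environment field names here are assumptions.
import { Amplify } from 'aws-amplify';
import { environment } from './environments/environment';

Amplify.configure({
  Auth: {
    userPoolId: environment.cognitoUserPoolId,
    userPoolWebClientId: environment.cognitoAppClientId,
  },
});
```

```ts
import { Auth } from '@aws-amplify/auth';

// Called once the login form is submitted; the function name is assumed.
async function signIn(email: string, password: string) {
  const cognitoUser = await Auth.signIn(email, password);
  // ...translate cognitoUser into the app's CoreUser representation here
  return cognitoUser;
}
```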
The AuthService class contains many other methods that act as a facade over the Amplify SDK methods. This service is used in authentication effects, since Let's Chat With is based on NgRx and implements many core functionalities through NgRx effects:

`

The login component triggers a SignInActions.userSignInAttempted action, which is processed by the above effect. Depending on the outcome of the signInAdmin call in the AuthService class, the action is translated to either AuthAPIActions.userLoginSuccess or AuthAPIActions.userSignInFailed. The remaining user flows are implemented similarly:
- Clicking signup triggers the Auth.signUp method for user registration.
- Signing out is done using Auth.signOut.

Reacting to Cognito Events

How can you implement additional logic when a signup occurs, such as saving the user to the database? While you can use an NgRx effect to call a backend service for that purpose, it requires additional effort and may introduce a security vulnerability, since the endpoint needs to be open to the public Internet. In Let's Chat With, we used Cognito triggers to perform this logic within Cognito, without the need for extra API endpoints.

Cognito triggers are a powerful feature that allows developers to run AWS Lambda functions in response to specific actions in the authentication and authorization flow. Triggers are configured in the "User pool properties" section of user pools in the AWS Console. We have a dedicated Lambda function that runs on post-authentication or post-confirmation events. The Lambda function first checks if the user already exists. If not, it inserts a new user object associated with the Cognito user into a DynamoDB table. The Cognito user ID is read from the event.request.userAttributes.sub property.

`

Customizing Cognito Emails

Another Cognito trigger that we found useful for Let's Chat With is the "Custom message" trigger. This trigger allows you to customize the content of verification emails or messages for your app. When a user attempts to register or perform an action that requires a verification message, the trigger is activated, and your Lambda function is invoked. Our Lambda function reads the verification code and the email from the event, and creates a custom-designed email message using the template() function. The template reads the HTML template embedded in the Lambda.

`

Conclusion

Cognito has proven to be reliable and easy to use while developing Let's Chat With. By handling the intricacies of user authentication, it allowed us to focus on developing other features of the application. The next time you create a new app and user authentication becomes a pressing concern, remember that you don't need to build it from scratch. Give Cognito (or a similar service) a try. Your future self, your users, and probably your sanity will thank you.

If you're interested in the source code for Let's Chat With, check out its GitHub repository. Contributions are always welcome!...
Jul 19, 2023
6 mins
Implementing a Task Scheduler in Node Using Redis
Node.js and Redis are often used together to build scalable and high-performing applications. Although Redis has always been primarily an in-memory data store that allows for fast and efficient data access, over time it has gained many useful features, and nowadays it can be used for things like rate limiting, session management, or queuing. With its excellent support for sorted sets, task scheduling can be added to that list as well.

Node.js doesn't have support for any kind of task scheduling other than the built-in setInterval() and setTimeout() functions, which are quite simple and don't have task queuing mechanisms. There are third-party packages like node-schedule and node-cron, of course. But what if you wanted to understand how this could work under the hood? This blog post will show you how to build your own scheduler from scratch.

Redis Sorted Sets

Redis has a structure called _sorted sets_, a powerful data structure that allows developers to store data that is both ordered and unique, which is useful in many different use cases such as ranking, scoring, and sorting data. Since their introduction in 2009, sorted sets have become one of the most widely used data structures in Redis.

To add some data to a sorted set, you use the ZADD command, which accepts three parameters: the name of the sorted set, the name of the member, and the score to associate with that member. When there are multiple members, each with its own score, Redis sorts them by score. This is incredibly useful for implementing leaderboard-like lists.

In our case, if we use a timestamp as a score, we can order sorted set members by date, effectively implementing a queue where members with the most recent timestamp are at the top of the list. If the member name is a task identifier, and the timestamp is the time at which we want the task to be executed, then implementing a scheduler means reading the sorted list and just grabbing whatever task we find at the top!

The Algorithm

Now that we understand the capabilities of Redis sorted sets, we can draft a rough algorithm to be implemented by our Node scheduler. A sketch of both halves follows the steps below.

Scheduling Tasks

Scheduling a task involves adding the task to the sorted set, and adding the task data to the global set using the task identifier as the key. The steps are as follows:
1. Generate an identifier for the submitted task using the INCR command. This command returns the next integer in a sequence each time it's called.
2. Use the SET command to store the task data in the global set. The SET command accepts a key and a string. The key must be unique, so it can be something like task:${taskId}, while the value can be a JSON representation of the task data.
3. Use the ZADD command to add the task identifier and the timestamp to the sorted set. The name of the sorted set can be something simple like sortedTasks, while the set member is the task identifier and the score is the timestamp.

Processing Tasks

The processing part is an endless loop that checks if there are any tasks to process, and otherwise waits for a predefined interval before trying again. The algorithm is as follows:
1. Check if we are still allowed to run. We need a way to stop the loop if we want to stop the scheduler. This can be a simple boolean flag in the code.
2. Use the ZRANGE command to get the first task in the list. ZRANGE accepts several useful arguments, such as the score range (the timestamp interval, in our case) and the offset/limit. If we provide it with the following arguments, we will get the next task we need to execute:
   - Minimum score: 0 (the beginning of time)
   - Maximum score: current timestamp
   - Offset: 0
   - Count: 1
3. If a task is found:
   3.1 Get the task data by executing the GET command on the task:${taskId} key.
   3.2 Deserialize the data and call the task handler.
   3.3 Remove the task data using the DEL command on the task:${taskId} key.
   3.4 Remove the task identifier from the sorted set by calling ZREM on the sortedTasks key.
   3.5 Go back to point 2 to get the next task.
4. If no task is found, wait for a predefined number of seconds before trying again.
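Before looking at the actual code, here is a compact sketch of both halves using ioredis. The key names task:${taskId} and sortedTasks come from the steps above; the taskCounter key name and the single-pass loop shape are assumptions, and the stop flag from step 1 is left out for brevity:

```ts
import Redis from "ioredis";

const redis = new Redis(); // assumes a locally running Redis instance

// Scheduling: steps 1-3 above.
async function schedule(taskData: unknown, executeAt: Date): Promise<number> {
  const taskId = await redis.incr("taskCounter"); // 1. next id in the sequence
  await redis.set(`task:${taskId}`, JSON.stringify(taskData)); // 2. task data
  await redis.zadd("sortedTasks", executeAt.getTime(), String(taskId)); // 3.
  return taskId;
}

// Processing: one polling pass over steps 2-4 above.
async function processDueTasks(handler: (data: unknown) => Promise<void>) {
  for (;;) {
    // 2. first member whose score (timestamp) is already in the past
    const [taskId] = await redis.zrangebyscore(
      "sortedTasks", 0, Date.now(), "LIMIT", 0, 1,
    );
    if (!taskId) break; // 4. nothing due; the caller waits and polls again
    const raw = await redis.get(`task:${taskId}`); // 3.1 fetch task data
    if (raw) await handler(JSON.parse(raw)); // 3.2 deserialize and handle
    await redis.del(`task:${taskId}`); // 3.3 remove task data
    await redis.zrem("sortedTasks", taskId); // 3.4 dequeue the identifier
  } // 3.5 loop back for the next due task
}
```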
The Code

Now to the code. We will have two objects to work with. The first one is RedisApi, which is simply a facade over the Redis client. For the Redis client, we chose to use ioredis, a popular Redis library for Node.

`

The RedisApi function returns an object that has all the Redis operations mentioned previously, with the addition of isConnected, which we use to check if the Redis connection is working.

The other object is the Scheduler object, which has three functions:
- start() to start the task processing
- stop() to stop the task processing
- schedule() to submit new tasks

The start() and schedule() functions contain the bulk of the algorithm we wrote above. The schedule() function adds a new task to Redis, while the start() function creates a findNextTask() function internally, which it schedules recursively while the scheduler is running. When creating a new Scheduler object, you need to provide the Redis connection details, a polling interval, and a task handler function. The task handler function will be provided with the task data.

`

That's it! Now, when you run the scheduler and submit a simple task, you should see output like below:

`

`

Conclusion

Redis sorted sets are an amazing tool, and Redis provides some really useful commands for querying and updating sorted sets with ease. Hopefully, this blog post has inspired you to consider using sorted sets in your applications. Feel free to use StackBlitz to view this project online and play with it some more....
May 3, 2023
5 mins
Next.js Authentication Using OAuth
Modern web apps have come a long way from their early days, and as a result, users have come to expect certain features. One such feature is being able to authenticate in the web app using external accounts owned by providers such as Facebook, Google, or GitHub. Not only is this way of authenticating more secure, but it also requires less effort from the user. With only a few clicks, they can sign in to your web app.

Such authentication is done using the OAuth protocol. It's a powerful and very commonly used protocol that allows users to authenticate with third-party applications using their existing login credentials. These days, it has become an essential part of modern web applications. In this blog post, we will explore how to implement OAuth authentication in a Next.js application.

Why OAuth?

Implementing authentication using OAuth is useful for a number of reasons. First of all, it allows users to sign in to your web app using their existing credentials from a trusted provider, such as Facebook or Google. This eliminates the need to go through a tedious registration process, and most importantly, it eliminates the need to come up with a password for the web app. This benefits both the web app owner and the user: neither needs to store the password anywhere, as the password is handled by the trusted OAuth provider. This means that even if the web app gets hacked for some reason, the attacker will not gain access to the user's password. For exactly that reason, you'll often hear experienced developers advise to "never roll your own authentication".

OAuth in Next.js

Next.js is the most popular React metaframework, and this gives you plenty of options and libraries for implementing authentication in your app. The most popular one, by far, is definitely Auth.js, formerly named NextAuth.js. With this library, you can get OAuth running in only a few simple steps, as we'll show in this blog post. We'll show you how to utilize the latest features in both Next.js and Auth.js to set up an OAuth integration using Facebook.

Implementing OAuth in Next.js 13 Using Auth.js

Creating a Facebook App

Before starting with the project, let's create an app on Facebook. This is a prerequisite to using Facebook as an OAuth provider. To do this, you'll need to go to Meta for Developers and create an account there by clicking on "Get Started". Once this is done, you can view the apps dashboard and click "Create App" to create your app. Since this app will be used solely for Facebook login, we can choose "Consumer" as the type of the app, and you can pick any name for it. In our case, we used "Next.js OAuth Demo".

After the app is created, it's immediately visible on the dashboard. Click the app, and then click Settings / Basic in the left menu to show both the app ID and the app secret - these will be used by our Next.js app.

Setting Up Next.js

For the purpose of this blog post, we'll create a new Next.js project from scratch so you can have a good reference project with minimal features and dependencies. In the shell, execute the npx create-next-app@latest --typescript command and follow the prompts:

`

As you can see, we've also used this opportunity to play with the experimental app directory, which is still in beta but is the way Next.js apps will be built in the future. In our project, we've also set up Tailwind just to design the login page more quickly.

Next, install the Auth.js library:

`

Now we need to create a catch-all API route under /api/auth that will be handled by the Auth.js library. We'll do this by creating the following file:

`
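If you haven't seen one before, such a catch-all route (pages/api/auth/[...nextauth].ts) might look roughly like the sketch below; the environment variable names are assumptions, and the exported authOptions object is what the server components reference later:

```ts
// pages/api/auth/[...nextauth].ts - a minimal sketch, not the post's exact code
import NextAuth, { type AuthOptions } from "next-auth";
import FacebookProvider from "next-auth/providers/facebook";

export const authOptions: AuthOptions = {
  providers: [
    FacebookProvider({
      // App ID and secret from the Facebook app's Settings / Basic page;
      // the environment variable names here are assumptions.
      clientId: process.env.FACEBOOK_CLIENT_ID!,
      clientSecret: process.env.FACEBOOK_CLIENT_SECRET!,
    }),
  ],
};

export default NextAuth(authOptions);
```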
Note that even though we will be utilizing Next.js 13's app directory, we need to place this route in the pages directory, as Auth.js doesn't yet support placing its API handler in the app directory. This is the only case where we will be using pages, though.

In your project root, create a .env.local file with the following contents:

`

All the above environment variables except NEXTAUTH_URL are considered secret, and you should avoid committing them to the repository.

Now, moving on to the React components, we'll need a few components that perform the following functionality:
- Display the sign-in button if the user is not authenticated
- Otherwise, display the user name and a sign-out button

The Home component that was auto-generated by Next.js is a server component, and we can use getServerSession() from Auth.js to get the user's session. Based on that, we'll show either the sign-in component or the logged-in user information. The authOptions object provided to getServerSession() is the one defined in the API route.

`

The SignIn component has the sign-in button. The sign-in button needs to open a URL on Facebook that initiates the authentication process. Once the authentication process is completed, it invokes a "callback" - a special URL on the app side that is handled by Auth.js.

`

The UserInformation, on the other hand, is displayed after the authentication process is completed. Unlike the other components, this needs to be a client component in order to utilize the *signOut* method from Auth.js, which only works client-side.

`

And that's it! Now run the project using npm run dev, and you should be able to authenticate to Facebook as shown below:

Conclusion

In conclusion, implementing OAuth-based authentication in Next.js is relatively straightforward thanks to Auth.js. This library not only comes with built-in Facebook support, but also supports 60+ other popular services, such as Google, Auth0, and more. We hope this blog post was useful. You can always refer to the CodeSandbox project if you want to view the full source code. For other Next.js demo projects, be sure to check out starter.dev, where we already have a Next.js starter kit that can give you the best practices in integrating Next.js with other libraries....
Mar 27, 2023
6 mins
Splitting Work: Multi-Threaded Programming in Deno
Deno is a new runtime for JavaScript/TypeScript built on top of the V8 JavaScript engine. It was created as an alternative to Node.js, with a focus on security and modern language features. Here at This Dot, we've been working with Deno for a while, and we've even created a starter kit that you can use to scaffold your next backend Deno project. The starter kit uses many standard Deno modules, such as the Oak web server and the DenoDB ORM.

One issue you may encounter when scaling an application from this starter kit is how to handle expensive or long-running asynchronous tasks and operations in Deno without blocking your server from handling more requests. Deno, just like Node.js, uses an event loop to process asynchronous tasks. This event loop is responsible for managing the flow of Deno applications and handling the execution of asynchronous tasks. The event loop runs in a single thread; therefore, if there is some CPU-intensive or long-running logic to execute, it needs to be offloaded from the main thread. This is where Deno workers come into play.

Deno workers are built upon the Web Worker API specification and provide a way to run JavaScript or TypeScript code in separate threads, allowing you to execute CPU-intensive or long-running tasks concurrently, without blocking the event loop. They communicate with the main process through a message-passing API. In this blog post, we will show you how to expand on our starter kit using Deno workers. In our starter kit API, which has CRUD operations for managing technologies, we'll modify the create endpoint to also read an image representing the technology and generate thumbnails for that image.

Generating Thumbnails

Image processing is CPU-intensive. If the image being processed is large, it may require a significant amount of CPU resources to complete in a timely manner. When including image processing as part of an API, it's definitely a good idea to offload that processing to a separate thread if you want to keep your API responsive. Although there are many image processing libraries out there for the Node ecosystem, the Deno ecosystem does not have as many for now. Fortunately, for our use case, a simple library like deno-image is good enough. With only a few lines of code, you can resize any image, as shown in the below example from deno-image's repository:

`

Let's now create our own thumbnail generator. Create a new file called generate_thumbnails.ts in the src/worker folder of the starter kit:

`

The function uses fetch to retrieve the image from a remote URL and store it in a local buffer. Afterwards, it goes through a predefined list of thumbnail sizes, calling resize() for each, and then saves each image to the public/images folder, which is a public folder of the web server. Each image's filename is generated from the original image's URL, appended with the thumbnail dimensions.

Calling the Web Worker

The web worker itself is a simple Deno module that defines an event handler for incoming messages from the main thread. Create worker.ts in the src/worker folder:

`

The event's data property expects an object representing a message from the main thread. In our case, we only need an image URL to process an image, so event.data.imageUrl will contain the image URL to process. We then call the generateThumbnails function on that URL, and close the worker when done.

Now, before calling the web worker to resize our image, let's modify the Technology type from the GraphQL schema in the starter kit to accept an image URL. This way, when we execute the mutation to create a new technology, we can execute the logic to read the image and resize it in the web worker.

`

After calling deno task generate-type-definition to generate new TypeScript files from the modified schema, we can use the imageUrl field in our mutation handler, which creates a new instance of the technology. At the top of the mutation_handler.ts module, let's define our worker:

`

This is only done once, so that Deno loads the worker on module initialization. Afterwards, we can send messages to our worker on every call of the mutation handler using postMessage:

`
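Putting the pieces together, the worker module and the main-thread side could be sketched as follows; the import path, the message shape, and the surrounding handler names are assumptions based on the description above:

```ts
/// <reference lib="deno.worker" />
// src/worker/worker.ts - a sketch of the worker module
import { generateThumbnails } from "./generate_thumbnails.ts";

self.onmessage = async (event: MessageEvent<{ imageUrl: string }>) => {
  await generateThumbnails(event.data.imageUrl);
  self.close(); // shut the worker down once its work is done
};
```

```ts
// mutation_handler.ts (excerpt) - the worker is created once, at module load
const worker = new Worker(
  new URL("../worker/worker.ts", import.meta.url).href,
  { type: "module" },
);

// Inside the create-technology mutation handler ("input" is a stand-in for
// the mutation's arguments):
worker.postMessage({ imageUrl: input.imageUrl });
```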
With this implementation, your API will remain responsive, because post-processing actions such as thumbnail generation are offloaded to a separate worker. The main thread and the worker thread communicate with a simple messaging system.

Conclusion

Overall, Deno is a powerful and efficient runtime for building server-side applications, small and large. Its combination of performance and ease of use makes it an appealing choice for developers looking to build scalable and reliable systems. With its support for the Web Worker API spec, Deno is also well-suited for performing large-scale data processing tasks, as we've shown in this blog post. If you want to learn more about Deno, check out deno.framework.dev for a curated list of libraries and resources. If you are looking to start a new Deno project, check out our Deno starter kit resources at starter.dev....
Mar 21, 2023
4 mins
Let's innovate together!
We're ready to be your trusted technical partners in your digital innovation journey.
Whether it's modernization or custom software solutions, our team of experts can guide you through best practices and help you build scalable, performant software that lasts.