Building interactive forms with TanStack Form

TanStack Form is a new headless form library that makes it easy to build complex, interactive forms. TanStack has understood the power of ‘headless’ UI libraries for quite some time, with incredible tools like TanStack Table and TanStack Query. Given the high quality of the other libraries in the TanStack suite, I was excited to give this new form library a try.

If you’ve used react-hook-form before, the two libraries have quite a bit in common, and the transition should be mostly straightforward. Let's start by comparing the two libraries a bit and then dig into the TanStack Form API with some examples and code.

Comparison to react-hook-form

At a high level, the two libraries are pretty similar:

  • Headless, hook-based API
  • Performance (both libraries minimize the number of re-renders triggered by state changes)
  • Lightweight, zero dependencies (TanStack Form 4.4 kB, react-hook-form 9.7 kB)
  • Comprehensive feature set
  • Type-safety / TypeScript support
  • Form / Input validation and schemas

There are a few things that set TanStack Form apart:

  • UI Agnostic - TanStack Form offers support for other UI libraries and frameworks through a plugin-style system. Currently, React and Solid are supported out of the box.
  • Simpler API and documentation - The API surface area is a bit smaller, and the documentation makes it easy to find the details on all the APIs.
  • First-class TypeScript support - TanStack Form provides really impressive type inference. It stays out of your way and lets you focus on writing code instead of wrangling types.

Building forms

Since the API surface of TanStack Form is small, we can walk through the important APIs quickly and see how they work together to help us build interactive client-side forms.

useForm(options)

useForm accepts an options object with defaultValues, defaultState, and event handler functions like onSubmit, onChange, etc. The types are clear and the source code is easy to read, so you can refer to the FormApi.ts file for more specifics. Currently, the examples on the website don’t provide a type for the generic TData that useForm accepts. It will infer the form types from the defaultValues that are provided, but in this example, I will provide the type explicitly.

import { useForm } from "@tanstack/react-form";

interface CreateAccountFormData {
	email: string;
	password: string;
	displayName: string;
}

function CreateAccountForm() {
	const form = useForm<CreateAccountFormData>({
		defaultValues: {
			email: '',
			password: '',
			displayName: '',
		},
		onSubmit: (values) => {
			// values are typed for us
			// values.[email, password, displayName]
		}
	});
}

The API for your form is returned from the useForm hook. We will use this API to build the fields and their functionality in our component.

useField(opts)

If you want to abstract one of your form fields into its own component, you can use the useField hook. The useField generic parameters take the type definition of your form and the name of the field.

function PasswordField() {
  const field = useField<CreateAccountFormData, "password">({
    name: "password",
    onChange: (value) =>
      value.length >= 8 ? undefined : "Password must be at least 8 characters long",
  });
  const error = field.state.meta.errors[0];
  return (
    <>
      <label htmlFor={field.name}>Password</label>
      <input
        type="password"
        name={field.name}
        value={field.state.value}
        onBlur={field.handleBlur}
        onChange={(e) => field.handleChange(e.target.value)}
      />
      {error && <span>{error}</span>}
    </>
  );
}

Validation is as simple as returning a string from your field’s onChange handler. For our password field, we return an error message if the value isn’t at least 8 characters long. We bind our field state and event handlers to the form input by passing them in as props.

Component API

The form instance includes a component API with a context Provider, a Field component for rendering form fields, and a Subscribe component for subscribing to state changes. I’ve added the remaining fields to our form using the Field component and included our PasswordField from above. Using the Field component is similar to using the useField hook - the difference is that our field instance gets passed in via a render prop.

return (
  <form.Provider>
    <form
      onSubmit={(e) => {
        e.preventDefault();
        e.stopPropagation();
        void form.handleSubmit();
      }}
    >
      <div>
        <form.Field name="email">
          {(field) => (
            <>
              <label htmlFor={field.name}>Email</label>
              <input
                name={field.name}
                value={field.state.value}
                onBlur={field.handleBlur}
                onChange={(e) => field.handleChange(e.target.value)}
                required
              />
            </>
          )}
        </form.Field>
      </div>
      <div>
        <PasswordField />
      </div>
      <div>
        <form.Field name="displayName">
          {(field) => (
            <>
              <label htmlFor={field.name}>Display Name</label>
              <input
                name={field.name}
                value={field.state.value}
                onBlur={field.handleBlur}
                onChange={(e) => field.handleChange(e.target.value)}
                required
              />
            </>
          )}
        </form.Field>
      </div>
      <form.Subscribe
        selector={(state) => [state.canSubmit, state.isSubmitting]}
      >
        {([canSubmit, isSubmitting]) => (
          <button type="submit" disabled={!canSubmit}>
            {isSubmitting ? "..." : "Submit"}
          </button>
        )}
      </form.Subscribe>
    </form>
  </form.Provider>
);

We wrapped our submit button with the Subscribe component, which has a pretty interesting API. Subscribe lets you subscribe to state changes granularly: whatever state values you return from the selector get passed into the render prop, which re-renders only when those values change. This will be really useful in places where render performance is critical.
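
For example, here is a minimal sketch (reusing the displayName value from the form above) that subscribes to a single field value, so only this small preview re-renders when the display name changes:

<form.Subscribe selector={(state) => state.values.displayName}>
  {(displayName) => (
    // Re-renders only when displayName changes, not on every form state update
    <p>Creating account for: {displayName || "..."}</p>
  )}
</form.Subscribe>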

Async event handlers

We can use the built-in async handlers and debounce options to implement a search input. These options make a common pattern like debounced search trivial.

function SearchField() {
  const field = useField<{ query: string }, "query">({
    name: "query",
    onChangeAsync: (value) => searchFn(value),
    onChangeAsyncDebounceMs: 500
  });
  return (
      <input
        type="text"
        name={field.name}
        value={field.state.value}
        onBlur={field.handleBlur}
        onChange={(e) => field.handleChange(e.target.value)}
      />
  );
}

Async event handlers can be used for other things, like async validation. We could update the email input in our create account form to check whether the account already exists.

<form.Field
  name="email"
  onBlurAsync={async (value) =>
    (await api.isEmailRegistered(value))
      ? "An account with this email already exists"
      : undefined
  }
>
  {(field) => (
    <>
      <label htmlFor={field.name}>Email</label>
      <input
        name={field.name}
        value={field.state.value}
        onBlur={field.handleBlur}
        onChange={(e) => field.handleChange(e.target.value)}
        required
      />
    </>
  )}
</form.Field>

Validation adapters

Since this post was first written, TanStack Form has added adapters for popular schema validation libraries like Yup and Zod. The API for using schemas is a little different than what you might be used to from react-hook-form. Instead of passing a schema definition for all of the fields to the root form API, you provide schemas for each field individually. We’ll look at the Zod example from the documentation.

import { useForm } from '@tanstack/react-form'
import { zodValidator } from '@tanstack/zod-form-adapter'

export default function App() {
  const form = useForm({
    defaultValues: {
      firstName: '',
      lastName: '',
    },
    onSubmit: async ({ value }) => {
      // Do something with form data
      console.log(value)
    },
    // Add a validator to support Zod usage in Form and Field
    validatorAdapter: zodValidator,
  })

  // ... render the form fields (see the Field examples below)
}

To use schema validation, you need to pass the validation adapter into the validatorAdapter option in the useForm call. Looking at the FieldInfo component from the example, we can see that the internal state and API for form validation are the same. The adapter takes care of wiring up the schema validation library to handle the underlying field validation.

function FieldInfo({ field }: { field: FieldApi<any, any, any, any> }) {
  return (
    <>
      {field.state.meta.touchedErrors ? (
        <em>{field.state.meta.touchedErrors}</em>
      ) : null}
      {field.state.meta.isValidating ? 'Validating...' : null}
    </>
  )
}

So we can easily check the state of our field to see if there are any errors or if it’s processing some async validation logic.
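
Here is a quick sketch of how FieldInfo might be wired up inside a Field’s render prop (hypothetical usage, consistent with the docs example above):

<form.Field name="firstName">
  {(field) => (
    <>
      <label htmlFor={field.name}>First Name</label>
      <input
        name={field.name}
        value={field.state.value}
        onBlur={field.handleBlur}
        onChange={(e) => field.handleChange(e.target.value)}
      />
      {/* Shows touched errors and a "Validating..." indicator for this field */}
      <FieldInfo field={field} />
    </>
  )}
</form.Field>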

We mentioned that without the schema adapter we can just return an error message string from a validation function to indicate an error. With the schema adapter, we instead just return the schema itself. This works for async validations as well.

<form.Field
  name="firstName"
  validators={{
    onChange: z
      .string()
      .min(3, 'First name must be at least 3 characters'),
    onChangeAsyncDebounceMs: 500,
    onChangeAsync: z.string().refine(
      async (value) => {
        await new Promise((resolve) => setTimeout(resolve, 1000))
        return !value.includes('error')
      },
      {
        message: "No 'error' allowed in first name",
      },
    ),
  }}
/>

Initially, I wasn’t sure that I would like defining a schema at the field level instead of providing a schema up front for the entire form. After sitting with it for a bit, I think it’s a better solution. The validations are decoupled from the value of the field, and it gives you a clean, simple API for deciding when they run. In the example above, validations against the schema run on every change to the field value, but we could easily switch this to onBlur or any of the other available event handlers, as sketched below.
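
For instance, a minimal sketch (assuming the same zodValidator setup as above) that runs the schema when the field loses focus instead of on every keystroke:

<form.Field
  name="lastName"
  validators={{
    // Runs the schema only when the input is blurred
    onBlur: z.string().min(3, 'Last name must be at least 3 characters'),
  }}
>
  {(field) => (
    <input
      name={field.name}
      value={field.state.value}
      onBlur={field.handleBlur}
      onChange={(e) => field.handleChange(e.target.value)}
    />
  )}
</form.Field>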

Conclusion

If you don’t mind taking a chance on a newer library from a well-established author in the ecosystem, TanStack Form is a solid choice.

  • The APIs are easy to understand, and it has some nice async event handling features that are extremely useful.
  • It includes adapters for using schema validation libraries like Zod.
  • It is UI library/framework agnostic (currently has plugins to support React, React Native, and Solid).
  • Quality TypeScript support.

In my opinion, it’s the most intuitive and flexible form library out there. TanStack and all of its libraries are amazingly well designed. 10/10

