
Choosing Remix: The JavaScript Fullstack Framework and Building Your First Application

This article was written over 18 months ago and may contain information that is out of date. Some content may be relevant but please refer to the relevant official documentation or available resources for the latest information.

Remix is a React framework for building server-rendered, full-stack applications. It focuses on using web standards to build modern user interfaces. Remix was created by the team behind React Router, and was recently open-sourced for the community.

Remix extends most of the core functionality of React and React Router with server-side rendering, server requests, and backend optimization.

But why start using Remix? Remix comes with a number of advantages:

  • Nested routes - nested routes can eliminate nearly every loading state by using React Router's <Outlet /> to render child routes from directories and subdirectories.

  • Setup is easy! Spinning up a Remix project takes just a few minutes and gets you productive immediately.

  • Remix uses server-side progressive enhancement, which means only the necessary JavaScript, JSON, and CSS content is sent to the browser.

  • Remix focuses on server-side rendering.

  • File-system-based routing automatically generates the router configuration based on your file directory.

  • Remix is built on React which makes it easy to use if you know React.

Key Features

Let’s highlight a few key features of Remix you should be aware of.

Nested Routes

Routes in Remix are file-system-based. Any module in the routes directory is treated as a route and rendered into its parent's <Outlet /> component.

If you define a parent route module inside the routes directory, and then place child routes inside a directory with the same name as that module, the child routes will be nested within the parent.

Error boundaries

Error handling in applications is critical. In many cases, a single error can affect the entire application.

With Remix, when an error occurs in a component or a nested route, the error is contained to that component: the component fails to render, or displays the error, without disrupting the rest of the application's functionality.

Loading State

Remix handles loading states in parallel on the server and sends the fully formed HTML document to the browser; this eliminates the need to use a loading skeleton or spinner when fetching data or submitting form data.

Loaders and Actions

Among the most exciting features of Remix are Loaders and Actions.

These are special functions:

Loaders are server-side functions that fetch dynamic data from a database or API, often using the native fetch API. A route exports a loader, and its component reads the result with the useLoaderData() hook.

Actions are server-side functions used to mutate data. They typically handle form submissions that create or update records in an API or database, or perform a delete.
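To make those shapes concrete, here is a minimal sketch of a route module's two server functions in plain JavaScript. The fakeDb object and route path are stand-ins invented for illustration; a real route module would also export a JSX component, and loader and action would be named exports.

```javascript
// Sketch of a Remix route module's two server functions (JSX component omitted).
// NOTE: fakeDb is an invented stand-in for a real database client.
const fakeDb = {
  posts: [{ slug: "hello-world", title: "Hello World" }],
  findPost(slug) {
    return this.posts.find((p) => p.slug === slug) ?? null;
  },
};

// A loader runs on the server for GET requests; the route component
// reads its return value with useLoaderData().
async function loader({ params }) {
  const post = fakeDb.findPost(params.slug);
  if (!post) throw new Response("Not Found", { status: 404 });
  return { post };
}

// An action runs on the server for form submissions; the submitted
// fields arrive on request.formData().
async function action({ request }) {
  const form = await request.formData();
  const title = form.get("title");
  const slug = title.trim().toLowerCase().split(/\s+/).join("-");
  fakeDb.posts.push({ slug, title });
  return { slug };
}
```

In a real route module you would write `export const loader = ...` and `export const action = ...`; Remix wires them to GET and non-GET requests for that route automatically.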

Building Your First Remix App

The next portion of this blog post will show you how to build your first Remix app!

We will be building a small blog app that uses Prisma with SQLite to store the posts.

To start a Remix project the prerequisites are:

  • A familiarity with JavaScript and React
  • Node.js installed
  • A code editor, e.g., VS Code

Open your system terminal and run:

npx create-remix@latest

You can accept the default prompts or use your own preferences.

[Screenshot: create-remix CLI prompts]

Remix will install all the dependencies it requires and scaffold the application with directories and files.

In the project directory, let's install the other dependency we will be using. Run:

npm install bootstrap

Your project directory should look something like the one in the image below.

[Screenshot: scaffolded project directory structure]

File structure walkthrough

The app directory contains the main building files for our Remix application.

The routes directory holds all the routes; each file's default export is used as that route's handler.

entry.client.jsx and entry.server.jsx are core Remix files. Remix uses entry.client.jsx as the entry point for the browser bundle, and uses entry.server.jsx to generate the HTTP response when rendering on the server.

root.jsx is the root component of the Remix application; its default export is the layout component that renders the rest of the app in an <Outlet />.

These are the files we really want to touch on in our project. To learn more about the file directory, check out Remix’s API conventions.

Project set up

Open the root.jsx file and replace the code with:

import { Links, LiveReload, Meta, Outlet } from "@remix-run/react";
import bootstrapCSS from "bootstrap/dist/css/bootstrap.min.css";

export const links = () => [{ rel: "stylesheet", href: bootstrapCSS }];

export const meta = () => ({
  charset: "utf-8",
  viewport: "width=device-width,initial-scale=1",
});

export default function App() {
  return (
    <Document>
      <Layout>
        <Outlet />
      </Layout>
    </Document>
  );
}

function Layout({ children }) {
  return (
    <>
      <nav className="navbar navbar-expand-lg navbar-light bg-light px-5 py-3">
        <a href="/" className="navbar-brand">
          Remix
        </a>
        <ul className="navbar-nav mr-auto">
          <li className="nav-item">
            <a className="nav-link" href="/posts">Posts </a>
          </li>
        </ul>
      </nav>

      <div className="container">{children}</div>
    </>
  );
}

function Document({ children }) {
  return (
    <html>
      <head>
        <Links />
        <Meta />
        <title>ThisDot Labs Code Blog</title>
      </head>
      <body>
        {children}
        {process.env.NODE_ENV === "development" ? <LiveReload /> : null}
      </body>
    </html>
  );
}

Since we are using Bootstrap for styling, we imported the minified stylesheet. Remix adds <link rel="stylesheet"> tags at the component level via the route module's links export.

The links function defines which elements to add to the page when the user visits the route. Visit Remix Styles to learn more about stylesheets in Remix.

Similar to the stylesheet, Remix can add meta tags to the route head via the route module's meta export. The meta function defines which meta elements are added to the route. This is a huge plus for your application’s SEO.

The file defines three components, with App as the default export. Here, we declared a Document component for the HTML document template, a Layout component that provides the page layout around rendered components, and added {process.env.NODE_ENV === "development" ? <LiveReload /> : null} for hot reloading while changing files during development.

One important note: don't confuse Links with Link. The latter, Link, is a router component for navigating between routes in Remix apps. We will use it to link between our routes below.

To test out the application:

npm run dev

You should have a similar app as shown below:

[Screenshot: the running app with the navbar]

Let’s configure the database we will use for the app. We will use Prisma with SQLite to store the posts.

Install the Prisma packages in your project:

npm install prisma @prisma/client

Now let’s initialize Prisma with SQLite in the project:

npx prisma init --datasource-provider sqlite

This will add a Prisma directory to the project.

Open prisma/schema.prisma and add the following code at the end of the file:

model Post {
  slug      String   @id
  title     String
  body      String

  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}

This is the schema for the blog database.

Simply put, the schema says each post needs a slug, a unique string that will serve as the post's route; a title string for the post's title; a body string to hold the content; and timestamps for when the post was created and last updated.

Now, let’s push the schema to Prisma:

npx prisma db push

Prisma will create a dev.db in the Prisma directory.

Now let's seed our database with a couple of posts. Create prisma/seed.js and add the following to the file.

const { PrismaClient } = require("@prisma/client");

const prisma = new PrismaClient();

async function seed() {
  await Promise.all(
    getPosts().map((post) => prisma.post.create({ data: post }))
  );
}

seed()
  .catch((e) => {
    console.error(e);
    process.exit(1);
  })
  .finally(() => prisma.$disconnect());

function getPosts() {
  return [
    {
      title: 'JavaScript Performance Tips',
      slug: "javaScript-performance-tips",
      body: `We will look at 10 simple tips and tricks to increase the speed of your code when writing JS`,
    },
    {
      title: 'Tailwind vs. Bootstrap',
      slug: `tailwind-vs-bootstrap`,
      body: `Both Tailwind and Bootstrap are very popular CSS frameworks. In this article, we will compare them`,
    },
    {
      title: 'Writing Great Unit Tests',
      slug: `writing-great-unit-tests`,
      body: `We will look at 10 simple tips and tricks on writing unit tests in JavaScript`,
    },
    {
      title: 'What Is New In PHP 8?',
      slug: `what-is-new-in-PHP-8`,
      body: `In this article we will look at some of the new features offered in version 8 of PHP`,
    },
  ]
}

Now edit the package.json file and add this just before the scripts property:

  "prisma": {
    "seed": "node prisma/seed"
  },
  "scripts": {
… 

Now, to seed the db with Prisma, run:

npx prisma db seed

Our database setup is now done.

There is just one more thing needed for the database connection: we need to create a TypeScript file, app/utils/db.server.ts.

We mark a file as server-only by appending .server to its name; Remix compiles and runs this file only on the server, keeping it out of the browser bundle.

Add the code below to the file:

import { PrismaClient } from "@prisma/client";

let db: PrismaClient

declare global {
  var __db: PrismaClient | undefined
}

if (process.env.NODE_ENV === 'production') {
  db = new PrismaClient()
  db.$connect()
} else {
  if (!global.__db) {
    global.__db = new PrismaClient()
    global.__db.$connect()
  }

  db = global.__db
}

export { db }

Now, go back to the code editor and create posts.jsx in the routes directory, and add the following code:

import { Outlet } from "@remix-run/react"

function Posts() {
  return (
    <>
      <Outlet />
    </>
  )
}

export default Posts

All we are doing here is rendering the nested post routes through the <Outlet />; nothing crazy.

Now create routes/posts/index.jsx:

import { Link, useLoaderData } from "@remix-run/react";
import {db} from '../../utils/db.server';

export const loader = async () => {
  const data = {
    posts: await db.post.findMany({
      take: 20,
      select: {slug: true, title: true, createdAt: true},
      orderBy: {createdAt: 'desc'}
    })
  }
  return data
}

function PostItems() {
  const {posts} = useLoaderData()
  return (
    <>
      <div>
        <h1>Posts</h1>
        <Link to='/posts/new' className="btn btn-primary">New Post</Link>
      </div>
      <div className="row mt-3">
      {posts.map(post => (
        <div className="card mb-3 p-3" key={post.slug}>
          <Link to={`./${post.slug}`}>
            <h3 className="card-header">{post.title}</h3>
            {new Date(post.createdAt).toLocaleString()}
          </Link>
        </div>
      ))}

      </div>
    </>
  )
}

export default PostItems

Here, we declare a loader that fetches up to 20 posts from the database, destructure posts from the useLoaderData hook, and finally loop through the returned posts.
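To see what those findMany options do, here is a tiny plain-JavaScript stand-in, invented for illustration (it is not Prisma's implementation), that mimics take, select, and orderBy on an in-memory array:

```javascript
// Invented stand-in for db.post.findMany({ take, select, orderBy }):
// sorts by the orderBy key, keeps the first `take` rows,
// and returns only the keys marked true in `select`.
function findMany(posts, { take, select, orderBy }) {
  const [key, direction] = Object.entries(orderBy)[0];
  const sorted = [...posts].sort((a, b) =>
    direction === "desc" ? b[key] - a[key] : a[key] - b[key]
  );
  return sorted.slice(0, take).map((row) =>
    Object.fromEntries(
      Object.keys(select)
        .filter((k) => select[k])
        .map((k) => [k, row[k]])
    )
  );
}
```

With orderBy: { createdAt: 'desc' } and take: 20, the loader therefore returns at most the 20 newest posts, each trimmed to just the selected fields.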

Dynamic Routes

Remix dynamic routes are defined by prefixing a filename with a $ sign; the rest of the filename becomes the route param. We will use the slug property as the dynamic segment for our blog posts.

To do this, create app/routes/posts/$postSlug.jsx

import {Link, useLoaderData} from "@remix-run/react";
import {db} from '../../utils/db.server';

export const loader = async ({params}) => {
  const post = await db.post.findUnique({
    where: {slug: params.postSlug}
  })

  if (!post) throw new Error('Post not found')

  const data = {post}

  return data
}

function Post() {
  const {post} = useLoaderData() 
  return (
    <div className="card w-100">
      <div className="card-header">
        <h1>{post.title}</h1>
      </div>
      <div className="card-body">{post.body}</div>
      <div className="card-footer">
        <Link to='/posts' className="btn btn-danger">Back</Link>
      </div>
    </div>
  )
}

export default Post

Here, the loader is fetching from the database using the unique slug we provided and rendering the article.
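To build intuition for how the $-prefixed filename becomes params.postSlug, here is an illustrative matcher invented for this article; it only mimics the convention and is not Remix's actual route-matching implementation:

```javascript
// Illustrative only: how a $-prefixed filename becomes a route param.
// Compares a routes-directory file path against a URL path segment by
// segment; $-prefixed segments capture the URL value as a param.
function matchRoute(filePattern, urlPath) {
  const patternParts = filePattern.replace(/\.jsx$/, "").split("/");
  const urlParts = urlPath.replace(/^\//, "").split("/");
  if (patternParts.length !== urlParts.length) return null;
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith("$")) {
      // Dynamic segment: capture the URL value under the param name.
      params[patternParts[i].slice(1)] = urlParts[i];
    } else if (patternParts[i] !== urlParts[i]) {
      // Static segment mismatch: this route does not match.
      return null;
    }
  }
  return params;
}
```

So a request to /posts/tailwind-vs-bootstrap matched against posts/$postSlug.jsx yields params.postSlug === "tailwind-vs-bootstrap", which is exactly what our loader reads.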

To add a place to post new blogs, create app/routes/posts/new.jsx and add the following code:

import { redirect } from "@remix-run/node";
import { Link } from "@remix-run/react";
import { db } from "../../utils/db.server";

export const action = async ({ request }) => {
  const form = await request.formData();

  const title = form.get("title");
  const body = form.get("body");
  const fields = { title, body, slug: title.split(" ").join("-") };

  const post = await db.post.create({ data: fields });

  return redirect(`/posts/${post.slug}`);
};

function NewPost() {
  return (
    <div className="card w-100">
      <form method="post">
        <div className="card-header">
          <h1>New Post</h1>
          <Link to="/posts" className="btn btn-danger">
            Back
          </Link>
        </div>

        <div className="card-body">
          <div className="form-control my-2">
            <label htmlFor="title">Title</label>
            <input type="text" name="title" id="title" />
          </div>
          <div className="form-control my-2">
            <label htmlFor="body">Post Body</label>
            <textarea name="body" id="body" />
          </div>
        </div>

        <div className="card-footer">
          <button type="submit" className="btn btn-success">
            Add Post
          </button>
        </div>
      </form>
    </div>
  );
}

export default NewPost;

We created an action function to handle the form POST request; Remix passes the request object to the action, and we read all the submitted form data from it with request.formData().
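One caveat: the slug above is built by naively joining words with hyphens, so a title with punctuation (like "What Is New In PHP 8?") produces a slug with stray characters. A slightly more robust helper might look like this (a hypothetical utility, not part of Remix):

```javascript
// Hypothetical slug helper: lowercases the title, drops characters that are
// not letters, digits, whitespace, or hyphens, then hyphenates the words.
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, "")
    .replace(/\s+/g, "-");
}
```

The action could then build its fields with slug: slugify(title) instead of the split/join expression.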

And there you have it! You’ve created a functional blog application.

Conclusion

In this article, we learned why Remix is a great choice for building your next application, and why it’s becoming a more popular framework to use amongst developers.

We also explored the core features of Remix and learned how to build a sample blog application using a database.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines or helping keep your strategic digital initiatives on course. Check out our case studies and our clients that trust us with their engineering.

You might also like

Exploring Angular Forms: A New Alternative with Signals cover image

Exploring Angular Forms: A New Alternative with Signals

Exploring Angular Forms: A New Alternative with Signals In the world of Angular, forms are essential for user interaction, whether you're crafting a simple login page or a more complex user profile interface. Angular traditionally offers two primary approaches: template-driven forms and reactive forms. In my previous series on Angular Reactive Forms, I explored how to harness reactive forms' power to manage complex logic, create dynamic forms, and build custom form controls. A new tool for managing reactivity - signals - has been introduced in version 16 of Angular and has been the focus of Angular maintainers ever since, becoming stable with version 17. Signals allow you to handle state changes declaratively, offering an exciting alternative that combines the simplicity of template-driven forms with the robust reactivity of reactive forms. This article will examine how signals can add reactivity to both simple and complex forms in Angular. Recap: Angular Forms Approaches Before diving into the topic of enhancing template-driven forms with signals, let’s quickly recap Angular's traditional forms approaches: 1. Template-Driven Forms: Defined directly in the HTML template using directives like ngModel, these forms are easy to set up and are ideal for simple forms. However, they may not provide the fine-grained control required for more complex scenarios. Here's a minimal example of a template-driven form: ` ` 2. Reactive Forms: Managed programmatically in the component class using Angular's FormGroup, FormControl, and FormArray classes; reactive forms offer granular control over form state and validation. This approach is well-suited for complex forms, as my previous articles on Angular Reactive Forms discussed. And here's a minimal example of a reactive form: ` ` Introducing Signals as a New Way to Handle Form Reactivity With the release of Angular 16, signals have emerged as a new way to manage reactivity. 
Signals provide a declarative approach to state management, making your code more predictable and easier to understand. When applied to forms, signals can enhance the simplicity of template-driven forms while offering the reactivity and control typically associated with reactive forms. Let’s explore how signals can be used in both simple and complex form scenarios. Example 1: A Simple Template-Driven Form with Signals Consider a basic login form. Typically, this would be implemented using template-driven forms like this: ` ` This approach works well for simple forms, but by introducing signals, we can keep the simplicity while adding reactive capabilities: ` ` In this example, the form fields are defined as signals, allowing for reactive updates whenever the form state changes. The formValue signal provides a computed value that reflects the current state of the form. This approach offers a more declarative way to manage form state and reactivity, combining the simplicity of template-driven forms with the power of signals. You may be tempted to define the form directly as an object inside a signal. While such an approach may seem more concise, typing into the individual fields does not dispatch reactivity updates, which is usually a deal breaker. Here’s an example StackBlitz with a component suffering from such an issue: Therefore, if you'd like to react to changes in the form fields, it's better to define each field as a separate signal. By defining each form field as a separate signal, you ensure that changes to individual fields trigger reactivity updates correctly. Example 2: A Complex Form with Signals You may see little benefit in using signals for simple forms like the login form above, but they truly shine when handling more complex forms. Let's explore a more intricate scenario - a user profile form that includes fields like firstName, lastName, email, phoneNumbers, and address. 
The phoneNumbers field is dynamic, allowing users to add or remove phone numbers as needed. Here's how this form might be defined using signals: ` > Notice that the phoneNumbers field is defined as a signal of an array of signals. This structure allows us to track changes to individual phone numbers and update the form state reactively. The addPhoneNumber and removePhoneNumber methods update the phoneNumbers signal array, triggering reactivity updates in the form. ` > In the template, we use the phoneNumbers signal array to dynamically render the phone number input fields. The addPhoneNumber and removePhoneNumber methods allow users to reactively add or remove phone numbers, updating the form state. Notice the usage of the track function, which is necessary to ensure that the ngFor directive tracks changes to the phoneNumbers array correctly. Here's a StackBlitz demo of the complex form example for you to play around with: Validating Forms with Signals Validation is critical to any form, ensuring that user input meets the required criteria before submission. With signals, validation can be handled in a reactive and declarative manner. In the complex form example above, we've implemented a computed signal called formValid, which checks whether all fields meet specific validation criteria. The validation logic can easily be customized to accommodate different rules, such as checking for valid email formats or ensuring that all required fields are filled out. Using signals for validation allows you to create more maintainable and testable code, as the validation rules are clearly defined and react automatically to changes in form fields. It can even be abstracted into a separate utility to make it reusable across different forms. In the complex form example, the formValid signal ensures that all required fields are filled and validates the email and phone numbers format. 
This approach to validation is a bit simple and needs to be better connected to the actual form fields. While it will work for many use cases, in some cases, you might want to wait until explicit "signal forms" support is added to Angular. Tim Deschryver started implementing some abstractions around signal forms, including validation and wrote an article about it. Let's see if something like this will be added to Angular in the future. Why Use Signals in Angular Forms? The adoption of signals in Angular provides a powerful new way to manage form state and reactivity. Signals offer a flexible, declarative approach that can simplify complex form handling by combining the strengths of template-driven forms and reactive forms. Here are some key benefits of using signals in Angular forms: 1. Declarative State Management: Signals allow you to define form fields and computed values declaratively, making your code more predictable and easier to understand. 2. Reactivity: Signals provide reactive updates to form fields, ensuring that changes to the form state trigger reactivity updates automatically. 3. Granular Control: Signals allow you to define form fields at a granular level, enabling fine-grained control over form state and validation. 4. Dynamic Forms: Signals can be used to create dynamic forms with fields that can be added or removed dynamically, providing a flexible way to handle complex form scenarios. 5. Simplicity: Signals can offer a simpler, more concise way to manage form states than traditional reactive forms, making building and maintaining complex forms easier. Conclusion In my previous articles, we explored the powerful features of Angular reactive forms, from dynamic form construction to custom form controls. With the introduction of signals, Angular developers have a new tool that merges the simplicity of template-driven forms with the reactivity of reactive forms. 
While many use cases warrant Reactive Forms, signals provide a fresh, powerful alternative for managing form state in Angular applications requiring a more straightforward, declarative approach. As Angular continues to evolve, experimenting with these new features will help you build more maintainable, performant applications. Happy coding!...

Introduction to Zod for Data Validation cover image

Introduction to Zod for Data Validation

As web developers, we're often working with data from external sources like APIs we don't control or user inputs submitted to our backends. We can't always rely on this data to take the form we expect, and we can encounter unexpected errors when it deviates from expectations. But with the Zod library, we can define what our data ought to look like and parse the incoming data against those defined schemas. This lets us work with that data confidently, or to quickly throw an error when it isn't correct. Why use Zod? TypeScript is great for letting us define the shape of our data in our code. It helps us write more correct code the first time around by warning us if we are doing something we shouldn't. But TypeScript can't do everything for us. For example, we can define a variable as a string or a number, but we can't say "a string that starts with user_id_ and is 20 characters long" or "an integer between 1 and 5". There are limits to how much TypeScript can narrow down our data for us. Also, TypeScript is a tool for us developers. When we compile our code, our types are not available to the vanilla JavaScript. JavaScript can't validate that the data we actually use in our code matches what we thought we'd get when we wrote our TypeScript types unless you're willing to manually write code to perform those checks. This is where we can reach for a tool like Zod. With Zod, we can write data schemas. These schemas, in the simplest scenarios, look very much like TypeScript types. But we can do more with Zod than we can with TypeScript alone. Zod schemas let us create additional rules for data parsing and validation. A 20-character string that starts with user_id_? It's z.string().startsWith('user_id_').length(20). An integer between 1 and 5 inclusive? It's z.number().int().gte(1).lte(5). Zod's primitives give us many extra functions to be more specific about *exactly* what data we expect. 
Unlike TypeScript, Zod schemas aren't removed on compilation to JavaScript—we still have access to them! If our app receives some data, we can verify that it matches the expected shape by passing it to your Zod schema's parse function. You'll either get back your data in exactly the shape you defined in your schema, or Zod will give you an error to tell you what was wrong. Zod schemas aren't a replacement for TypeScript; rather, they are an excellent complement. Once we've defined our Zod schema, it's simple to derive a TypeScript type from it and to use that type as we normally would. But when we really need to be sure our data conforms to the schema, we can always parse the data with our schema for that extra confidence. Defining Data Schemas Zod schemas are the variables that define our expectations for the shape of our data, validate those expectations, and transform the data if necessary to match our desired shape. It's easy to start with simple schemas, and to add complexity as required. Zod provides different functions that represent data structures and related validation options, which can be combined to create larger schemas. In many cases, you'll probably be building a schema for a data object with properties of some primitive type. For example, here's a schema that would validate a JavaScript object representing an order for a pizza: ` Zod provides a number of primitives for defining schemas that line up with JavaScript primitives: string, number, bigint, boolean, date, symbol, undefined, and null. It also includes primitives void, any, unknown, and never for additional typing information. In addition to basic primitives, Zod can define object, array, and other native data structure schemas, as well as schemas for data structures not natively part of JavaScript like tuple and enum. The documentation contains considerable detail on the available data structures and how to use them. 
Parsing and Validating Data with Schemas With Zod schemas, you're not only telling your program what data should look like; you're also creating the tools to easily verify that the incoming data matches the schema definitions. This is where Zod really shines, as it greatly simplifies the process of validating data like user inputs or third party API responses. Let's say you're writing a website form to register new users. At a minimum, you'll need to make sure the new user's email address is a valid email address. For a password, we'll ask for something at least 8 characters long and including one letter, one number, and one special character. (Yes, this is not really the best way to write strong passwords; but for the sake of showing off how Zod works, we're going with it.) We'll also ask the user to confirm their password by typing it twice. First, let's create a Zod schema to model these inputs: ` So far, this schema is pretty basic. It's only making sure that whatever the user types as an email is an email, and it's checking that the password is at least 8 characters long. But it is *not* checking if password and confirmPassword match, nor checking for the complexity requirements. Let's enhance our schema to fix that! ` By adding refine with a custom validation function, we have been able to verify that the passwords match. If they don't, parsing will give us an error to let us know that the data was invalid. We can also chain refine functions to add checks for our password complexity rules: ` Here we've chained multiple refine functions. You could alternatively use superRefine, which gives you even more fine grained control. Now that we've built out our schema and added refinements for extra validation, we can parse some user inputs. Let's see two test cases: one that's bound to fail, and one that will succeed. ` There are two main ways we can use our schema to validate our data: parse and safeParse. 
The main difference is that parse will throw an error if validation fails, while safeParse will return an object with a success property of either true or false, and either a data property with your parsed data or an error property with the details of a ZodError explaining why the parsing failed. In the case of our example data, userInput2 will parse just fine and return the data for you to use. But userInput1 will create a ZodError listing all of the ways it has failed validation. ` ` We can use these error messages to communicate to the user how they need to fix their form inputs if validation fails. Each error in the list describes the validation failure and gives us a human readable message to go with it. You'll notice that the validation errors for checking for a valid email and for checking password length have a lot of details, but we've got three items at the end of the error list that don't really tell us anything useful: just a custom error of Invalid input. The first is from our refine checking if the passwords match, and the second two are from our refine functions checking for password complexity (numbers and special characters). Let's modify our refine functions so that these errors are useful! We'll add our own error parameters to customize the message we get back and the path to the data that failed validation. ` Now, our error messages from failures in refine are informative! You can figure out which form fields aren't validating from the path, and then display the messages next to form fields to let the user know how to remedy the error. ` By giving our refine checks a custom path and message, we can make better use of the returned errors. In this case, we can highlight specific problem form fields for the user and give them the message about what is wrong. Integrating with TypeScript Integrating Zod with TypeScript is very easy. 
Using z.infer&lt;typeof YourSchema> will allow you to avoid writing extra TypeScript types that merely reflect the intent of your Zod schemas. You can create a type from any Zod schema like so: ` Using a TypeScript type derived from a Zod schema does *not* give you any extra level of data validation at the type level beyond what TypeScript is capable of. If you create a type from z.string.min(3).max(20), the TypeScript type will still just be string. And when compiled to JavaScript, even that will be gone! That's why you still need to use parse/safeParse on incoming data to validate it before proceeding as if it really does match your requirements. A common pattern with inferring types from Zod schemas is to use the same name for both. Because the schema is a variable, there's no name conflict if the type uses the same name. However, I find that this can lead to confusing situations when trying to import one or the other—my personal preference is to name the Zod schema with Schema at the end to make it clear which is which. Conclusion Zod is an excellent tool for easily and confidently asserting that the data you're working with is exactly the sort of data you were expecting. It gives us the ability to assert at runtime that we've got what we wanted, and allows us to then craft strategies to handle what happens if that data is wrong. Combined with the ability to infer TypeScript types from Zod schemas, it lets us write and run more reliable code with greater confidence....


How to Build Apps with Great Startup Performance Using Qwik

In this article, we will recap the JS Drops Qwik workshop with Misko Hevery. This workshop provided an overview of Qwik, its unique features, and a look at some example components. We will also address some of the questions raised at the end of the workshop. If you want to learn more about Qwik with Misko Hevery, then please check out this presentation on This Dot Media’s YouTube Channel. Also don’t forget to subscribe to get the latest on all things web development.

Table of Contents

- What is Qwik?
- How to create a Counter Component in Qwik
- Unique features of Qwik
  - Directory Based Routing
  - Slots in Qwik
  - Very little JavaScript in production
  - Resumability with Qwik
  - Lazy load components by default
- Questions asked during the workshop
  - Are all these functions generated at build time or are they generated at runtime? What's the server consideration here (if any) or are we able to put everything behind a CDN?
  - How do you access elements in Qwik?
  - Can you force a download of something out of view?
  - What is the choice to use $ on Qwik declarations?
  - Can you explain the interop story with web components and Qwik? Any parts of the Qwik magic that aren’t available to us if, say, our web components are too complex?
  - Is there an ideal use case for Qwik?
  - When to use useWatch$ instead of useClientEffect$?
- Conclusion

What is Qwik?

Qwik is a web framework that builds fast web applications with consistent performance at scale, regardless of size or complexity. To get started with Qwik, run the following command:

`

The Qwik CLI will prompt options to scaffold a starter project on your local machine. To start the demo application, run npm start and navigate to http://127.0.0.1:5173/ in the browser.

How to create a Counter Component in Qwik

Create a sub-directory in the routes directory named counter and add an index.tsx file with the component definition below.
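The workshop's exact component isn't reproduced here, but a minimal counter built on Qwik's documented component$ and useSignal APIs might look like this sketch:

```tsx
import { component$, useSignal } from '@builder.io/qwik';

// A minimal counter: clicking the button increments the signal,
// and Qwik lazily downloads the onClick$ handler on first interaction.
export default component$(() => {
  const count = useSignal(0);

  return (
    <button onClick$={() => count.value++}>
      Count: {count.value}
    </button>
  );
});
```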
`

Now navigate to http://127.0.0.1:5173/counter and you should see the counter component rendered on the page.

Unique features of Qwik

Directory Based Routing

Qwik is a directory-based routing framework. When we initiated Qwik, it created a routes sub-directory in the src directory and added index and layout files for route matching. The index.tsx is the base route component and the layout.tsx is the component for handling the base page layout. The sub-directories in the routes directory serve as the application’s structure for route matching, with their index.tsx files as the route components. Every index.tsx file does a look-up for the layout component. If it doesn’t exist in the same directory, then the look-up moves up to the parent directory.

`

Slots in Qwik

Qwik uses slots as a way of connecting content from the parent component to the child projection. The parent component uses the q:slot attribute to identify the source of the projection and the element to identify the destination of the projection. To learn more about slots, please check out the Qwik documentation.

Very little JavaScript in production

In production, Qwik starts the application with no JavaScript at startup, which makes the startup performance really fast. To see this in action, open the browser’s dev tools, click on the Network tab, and in the filter bar select JS. You will notice that the Vite files for hot module reloading are currently the only JavaScript files served, and these will not be shipped to production. To confirm, check the invert checkbox in the filter bar and type Vite in the filter input.

Resumability with Qwik

Qwik applications do not require hydration to resume an application on the client. To see this in action, click on the increment button and observe the browser’s dev tools Network tab. You will notice Qwik downloads only the required amount of JavaScript needed.
Qwik attaches events to the DOM and handles component state by serializing them into attributes, which tell the browser where to download the event handler and its state. To learn more about serialization with Qwik, read through the Qwik documentation. By default, the code associated with the click event will not download until the user triggers that event. On this interaction, Qwik will only download the minimum code needed for the event handler to work. To learn more about Qwik events, please read through the [documentation](https://qwik.builder.io/docs/components/events/#events).


Lessons from the DOGE Website Hack: How to Secure Your Next.js Website

The Department of Government Efficiency (DOGE) launched a new website, doge.gov. Within days, it was defaced with messages from hackers. The culprit? A misconfigured database left open, letting anyone edit content. Reports suggest the site was built on Cloudflare Pages, possibly with a Next.js frontend pulling data dynamically. While the tech stack hasn't been officially confirmed, early reporting around the website strongly suggests Next.js was used. Let’s dive into what went wrong, and how you can secure your own Next.js projects.

What Happened to DOGE.gov?

The hack was a classic case of security 101 gone wrong. The database, likely hosted in the cloud, was accessible without authentication. No passwords, no API keys, nothing. Hackers simply connected to it and started scribbling their graffiti. Hosted on Cloudflare Pages (not government servers), the site might have been rushed, skipping critical security checks. For a .gov domain, this is surprising, but it’s a reminder that even big names can miss best practices.

It’s easy to imagine how this happened: an unsecured Server Action used on the client side, a serverless function or API route fetching data from an unsecured database, no middleware enforcing access control, and a deployment that didn’t double-check cloud configs. Let’s break down how to avoid this in your own Next.js app.

Securing Your Next.js Website: 5 Key Steps

Next.js is a powerhouse for building fast, scalable websites, but its flexibility means you’re responsible for locking the doors. Here’s how to keep your site safe.

1. Double-check your Server Actions

If Next.js 13 or later was used, Server Actions might’ve been part of the mix: think form submissions or dynamic updates straight from the frontend. These are slick for handling server-side logic without a separate API, but they’re a security risk if not handled right.
An unsecured Server Action could’ve been how hackers slipped into the database. Why? Next.js generates a public endpoint for each Server Action. If these Server Actions lack proper authentication and authorization measures, they become vulnerable to unauthorized data access. To mitigate this:

* Restrict Access: Always validate the user’s session or token before executing sensitive operations.
* Limit Scope: Only allow Server Actions to perform specific, safe tasks; don’t let them run wild with full database access.
* Never call a Server Action from the client side without authentication and authorization checks.

2. Lock Down Your Database Access

A similar incident happened in 2020: a hacker used an automated script to scan for misconfigured MongoDB databases, wiping the contents of about 23,000 databases that had been left wide open and leaving behind a ransom note asking for money. So whether you’re using MongoDB, PostgreSQL, or Cloudflare’s D1, never leave your database publicly accessible. Here’s what to do:

* Set Authentication: Always require credentials (username/password or API keys) to connect. Store these in environment variables (e.g., .env.local for Next.js) and access them via process.env.
* Whitelist IPs: If your database is cloud-hosted, restrict access to your Next.js app’s server or Vercel deployment IP range.
* Use VPCs: For extra security, put your database in a Virtual Private Cloud (VPC) so it’s not even exposed to the public internet. If you are using Vercel, you can create private connections between Vercel Functions and your backend cloud, like databases or other private infrastructure, using Vercel Secure Compute.

Example, in a Next.js API route (/app/api/data.js):

`

> Tip: Don’t hardcode MONGO_URI; keep it in .env and add .env to .gitignore.

3. Secure Your API Routes

Next.js API routes are awesome for server-side logic, but they’re a potential entry point if left unchecked. The site might’ve had an API endpoint feeding its database updates without protection.
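Whether in a Server Action or an API route, the core access check looks the same. Here is a framework-free TypeScript sketch of the "restrict access" and "limit scope" pattern; getSession and deleteUserAccount are hypothetical stand-ins, not real Next.js or next-auth APIs:

```typescript
// Hypothetical session shape; in a real app this would come from
// next-auth, a verified JWT, or similar.
type Session = { userId: string; role: "admin" | "user" } | null;

function getSession(token: string | undefined): Session {
  // Stand-in for real session/token verification.
  if (token === "valid-admin-token") return { userId: "1", role: "admin" };
  return null;
}

function deleteUserAccount(userId: string): string {
  // Stand-in for a narrowly scoped database operation.
  return `deleted ${userId}`;
}

function secureAction(token: string | undefined, targetUserId: string): string {
  const session = getSession(token);
  // Restrict access: no valid session, no operation.
  if (!session) throw new Error("Unauthorized: no valid session");
  // Check authorization, not just authentication.
  if (session.role !== "admin") throw new Error("Forbidden: insufficient role");
  // Limit scope: this action deletes exactly one account;
  // it cannot run arbitrary queries.
  return deleteUserAccount(targetUserId);
}
```

The point is that the check happens on the server, before any database work, regardless of what the client sends.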
* Add Authentication: Use a library like next-auth or JSON Web Tokens (JWT) to secure routes.
* Rate Limit: Prevent abuse with something like rate-limiter-flexible.

Example:

`

4. Double-Check Your Cloud Config

A misconfigured cloud setup may have exposed the database. If you’re deploying on Vercel, Netlify, or Cloudflare:

* Environment Variables: Store secrets in your hosting platform’s dashboard, not in code.
* Serverless Functions: Ensure they’re not leaking sensitive data in responses. Log errors, not secrets.
* Access Controls: Verify your database firewall rules only allow connections from your app.

5. Sanitize and Validate Inputs

Hackers love injecting junk into forms or APIs. If your app lets users submit data (e.g., feedback forms), unvalidated inputs could’ve been a vector. In Next.js:

* Sanitize: Use libraries like sanitize-html for user inputs.
* Validate: Check data types and lengths before hitting your database.

Example:

`

Summary

The DOGE website hack serves as a reminder of the ever-present need for robust security measures in web development. By following the outlined steps (double-checking Server Actions, locking down database access, securing API routes, verifying cloud configurations, and sanitizing/validating inputs) you can enhance the security posture of your Next.js applications and protect them from potential threats. Remember, a proactive approach to security is always the best defense.
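As a closing illustration of point 5, the validation step before a database write can be as simple as a plain function. Field names and length limits here are illustrative; in production you would pair this with a real sanitizer such as sanitize-html:

```typescript
// Validate a hypothetical feedback form submission before it reaches
// the database. Returns a list of human-readable validation errors.
function validateFeedback(input: { name?: unknown; message?: unknown }): string[] {
  const errors: string[] = [];
  if (typeof input.name !== "string" || input.name.length < 1 || input.name.length > 100) {
    errors.push("name must be a string of 1-100 characters");
  }
  if (typeof input.message !== "string" || input.message.length < 1 || input.message.length > 2000) {
    errors.push("message must be a string of 1-2000 characters");
  }
  return errors;
}
```

Only proceed to the database write when the returned list is empty.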
