
Quick intro to Genetic algorithms (with a JavaScript example)


The purpose of this article is to give the reader a quick insight into what genetic algorithms are, and how they work.

In short: What are genetic algorithms?

Genetic algorithms are a metaheuristic inspired by Charles Darwin's theory of natural selection. They emulate the biological operations of mutation, crossover, and selection, which makes them well suited to optimization problems. Because of this, they find applications in many disciplines, such as chemistry, climatology, robotics, and gaming. They are part of the broader family of evolutionary computation.

The example problem

Before implementing the algorithm, let's take a look at the problem we are going to solve.

You might have heard about the infinite monkey theorem, which states that a monkey randomly pressing keys on a keyboard for an infinite amount of time will almost surely type any given meaningful text. Of course, the lifetime of a monkey will probably not be long enough for this to happen. Still, the chances are greater than zero. Very close to zero, but not zero. Anyway, let's stick to the metaphor.

Another way to picture this: imagine we have a device that produces random strings. At some point in the future, regardless of how long it takes, it will output our desired text.

The purpose of the genetic algorithm we are going to develop is to find an optimal way to achieve the result we are looking for rather than brute forcing it (which will inevitably take a ludicrous amount of time).

This problem is widely used as an example for GAs, so for the sake of consistency and simplicity, we will stick to it.

Parts of a GA

We already mentioned some of the operations involved in the process. Let's break down the different steps involved in the GA:

  1. Initializing the population - This is the first step. We will pick a set of members with different traits (or genes). The key here is to have a big enough variety to be able to evolve toward the target we are aiming for.
  2. Selection - The process of selecting those members of our population that are fit enough to reproduce. This happens according to predetermined criteria.
  3. Reproduction - After we select the fittest members from our current generation, we will perform a crossover -- mixing their traits/genes. Optionally, we will mutate the newly produced members by some random factor. That way, we will further support the variety in our population.
  4. Repeat 2 and 3 - Simply said -- we will perform selection, crossover, and mutation for every new generation until the population evolves to such an extent that it contains the desired candidate (see the sketch after this list).
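
Putting these steps together, the overall flow we are about to implement looks roughly like this (a minimal preview sketch; Population, _selectMembersForMating, and _reproduce mirror the names we will define in the sections below):

// A rough preview of the flow -- the classes and methods
// are implemented step by step in the following sections
function run(target, generations) {
  const population = new Population(20, target, 0.05); // 1. initialize
  for (let i = 0; i < generations; i += 1) {
    const pool = population._selectMembersForMating();  // 2. selection
    population._reproduce(pool);                        // 3. reproduction + mutation
  }
  return population;
}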

Implementation

Okay, before we start with the algorithm, let's first introduce two utility functions that we are going to use throughout the example. The first one generates a random integer in a provided interval, and the second one generates a letter from a to z:

// Src: MDN
function random(min, max) {
  min = Math.ceil(min);
  max = Math.floor(max);

  // The maximum is exclusive and the minimum is inclusive
  return Math.floor(Math.random() * (max - min)) + min;
}

function generateLetter() {
  const code = random(97, 123); // char codes for 'a' (97) through 'z' (122); 123 is exclusive
  return String.fromCharCode(code);
}
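
For example, random(0, 3) can return 0, 1, or 2 (never 3, since the maximum is exclusive), and generateLetter always yields a lowercase letter:

random(0, 3);     // => 0, 1, or 2
random(97, 123);  // => an integer from 97 ('a') to 122 ('z')
generateLetter(); // => e.g. 'q' -- any letter from 'a' to 'z'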

Now that we have our utils ready, we'll start with the first step of the genetic algorithm, which is:

Initializing the population

Let's introduce our population member, or the monkey we referenced previously. It will be represented by a Member object, which contains an array with the pressed keys and the target string. The constructor will generate a set of randomly pressed keys:

class Member {
  constructor(target) {
    this.target = target;
    this.keys = [];

    for (let i = 0; i < target.length; i += 1) {
      this.keys[i] = generateLetter();
    }
  }
}

For example, new Member('qwerty') can result in a member who pressed a, b, c, d, e, f or y, a, w, r, t, w.

Next, after we have our member created, let's move to the population object, which will keep the set of members we want to evolve:

class Population {
  constructor(size, target) {
    size = size || 1;
    this.members = [];

    for (let i = 0; i < size; i += 1) {
      this.members.push(new Member(target));
    }
  }
}

The Population accepts size and target (the target string). In the constructor, we generate size members, each with a randomly generated string of the same length as the target.

This should be enough to create a diverse initial population!
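
A quick sanity check (the output below is, of course, just one possible random result):

const population = new Population(3, 'hit');
console.log(population.members.map((m) => m.keys.join('')));
// => e.g. [ 'axk', 'hqz', 'tin' ]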

Selection

It's time for us to add some methods to the classes we created. We will start with the Member. Do you remember that we need predetermined criteria by which to determine the fitness of our members? This is exactly what we are going to provide by introducing the fitness method.

Let me elaborate:

class Member {
  // ...

  fitness() {
    let match = 0;

    for (let i = 0; i < this.keys.length; i += 1) {
      if (this.keys[i] === this.target[i]) {
        match += 1;
      }
    }

    return match / this.target.length;
  }
}

As you can see, what we are attempting to do is find how many of the keys randomly pressed by a member match the letters in our target string. The more, the better! Since we want a coefficient as an output, we divide the matches by the length of the target.
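
For instance, if the target is "hit" and a member typed "hat", two of the three keys match, so the fitness is 2 / 3 ≈ 0.67:

const member = new Member('hit');
member.keys = ['h', 'a', 't']; // override the random keys for the example
member.fitness(); // => 0.666... (2 matches out of 3)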

Going back to our Population class, we will implement the actual selection operation. Logically, we would like to have the fitness calculated for every member of the population.

Based on that value, we want to increase the chances of the fittest members being selected and, respectively, decrease the chances of selection for the least fit candidates for the new generation. This can happen by adding an additional array called the mating pool. In it, we will add every member multiple times, proportionally to its fitness coefficient. To grasp this concept, let's take a look at this example (using simplified integer weights):

Member1 (fitness = 4)
Member2 (fitness = 2)
Member3 (fitness = 1)

== Output ==

MatingPool = [
  Member1, Member1, Member1, Member1,
  Member2, Member2,
  Member3
]

At this point, you should be able to guess the idea behind this approach -- to increase the chances of the fittest members being randomly selected. Note that there are different ways of building your mating pool distribution (one alternative is sketched after the code below).

And the code:

class Population {
  // ...

  _selectMembersForMating() {
    const matingPool = [];

    this.members.forEach((m) => {
      // The fitter it is, the more often it will be present in the mating pool
      // i.e. increasing the chances of selection
      // If fitness == 0, add just one member
      const f = Math.floor(m.fitness() * 100) || 1;

      for (let i = 0; i < f; i += 1) {
        matingPool.push(m);
      }
    });

    return matingPool;
  }
}
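
As a side note, one common alternative that avoids physically duplicating members is roulette wheel selection -- picking a member with probability proportional to its fitness. A minimal sketch of that approach (not used in the rest of this article):

function rouletteSelect(members) {
  const total = members.reduce((sum, m) => sum + m.fitness(), 0);
  let threshold = Math.random() * total;

  for (const m of members) {
    threshold -= m.fitness();
    if (threshold <= 0) {
      return m;
    }
  }

  // Guard against floating point edge cases
  return members[members.length - 1];
}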

Now that we have readied our selection process, it is time to reproduce!

Reproduction

In this section of the implementation, we will focus on the crossover and mutation operations. This will require further extending our Member class. I will start by creating the crossover method. Its role is to combine/mix the traits, genes, or in our case, keys, of two parents randomly selected from the mating pool. The output child might contain only keys from parent A or only keys from parent B, but in most cases, it will contain keys from both parents. This happens by picking a random midpoint in the keys array: the child takes one parent's keys up to the midpoint and the other parent's keys after it:

class Member {
  // ...

  crossover(partner) {
    const { length } = this.target;
    const child = new Member(this.target);
    const midpoint = random(0, length);

    for (let i = 0; i < length; i += 1) {
      if (i > midpoint) {
        child.keys[i] = this.keys[i];
      } else {
        child.keys[i] = partner.keys[i];
      }
    }

    return child;
  }
}
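
To make this concrete: if parent A typed "bit", parent B typed "hat", and the random midpoint happens to be 0, the child takes the partner's key at index 0 and parent A's keys after it:

const parentA = new Member('hit');
parentA.keys = ['b', 'i', 't'];

const parentB = new Member('hit');
parentB.keys = ['h', 'a', 't'];

// With a midpoint of 0, index 0 comes from the partner (parentB),
// and indices 1 and 2 come from parentA
const child = parentA.crossover(parentB); // keys could be ['h', 'i', 't']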

Since we are working with the Member, let's add our last method to it. This is the mutate method. Its purpose is to give some uniqueness, and randomness, to the newly created child. It's worth mentioning that mutation is not something that every genetic algorithm incorporates. That's why we will use a rate that keeps the process controlled:

class Member {
  // ...

  mutate(mutationRate) {
    for (let i = 0; i < this.keys.length; i += 1) {
      // If below predefined mutation rate,
      // generate a new random letter on this position.
      if (Math.random() < mutationRate) {
        this.keys[i] = generateLetter();
      }
    }
  }
}
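
With a rate of 0.05, which we will use in the demo, each key independently has a 5% chance of being replaced -- for a 3-letter member, that is 0.15 expected mutations per child:

const child = new Member('hit');
child.keys = ['h', 'i', 't'];

child.mutate(0.05);
// Most of the time the keys stay intact; occasionally one of them
// is swapped for a new random letter, e.g. ['h', 'o', 't']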

// We will also have to update the Population constructor
class Population {
  constructor(size, target, mutationRate) {
    size = size || 1;
    this.members = [];
    this.mutationRate = mutationRate; // <== New property

    // ...
  }

  // ...
}

It's time to tie these two into our Population class. Previously, we added the _selectMembersForMating method. Let's make use of it:

class Population {
  // ...

  _reproduce(matingPool) {
    for (let i = 0; i < this.members.length; i += 1) {
      // Pick 2 random members/parents from the mating pool
      const parentA = matingPool[random(0, matingPool.length)];
      const parentB = matingPool[random(0, matingPool.length)];

      // Perform crossover
      const child = parentA.crossover(parentB);

      // Perform mutation
      child.mutate(this.mutationRate);

      this.members[i] = child;
    }
  }
}

Here, we pick 2 random parents from the mating pool. Then, we perform the crossover. After that comes the mutation. And finally, we replace the old member with the new child. That's it!

We are almost done. For the purposes of this demo, we are going to add control over the number of generations (or iterations) that we want to have. That way, we can closely monitor how our population changes over time. In a real-world scenario, we would keep creating new generations until we reach our final goal, or target. Anyway, let's add the final evolve method to the Population class:

class Population {
  // ...

  evolve(generations) {
    for (let i = 0; i < generations; i += 1) {
      const pool = this._selectMembersForMating();
      this._reproduce(pool);
    }
  }
}
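
If we wanted the real-world behavior mentioned above -- evolving until a perfect candidate appears -- a variant could look like the sketch below (evolveUntilSolved is a hypothetical addition, not part of the demo):

class Population {
  // ...

  // Hypothetical variant: evolve until some member types the target,
  // with a cap on generations so we never loop forever
  evolveUntilSolved(maxGenerations = 10000) {
    for (let i = 0; i < maxGenerations; i += 1) {
      if (this.members.some((m) => m.fitness() === 1)) {
        return i; // the number of generations it took
      }

      const pool = this._selectMembersForMating();
      this._reproduce(pool);
    }

    return maxGenerations;
  }
}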

Finally, we will add a helper function that makes running the process a bit easier:

function generate(populationSize, target, mutationRate, generations) {
  // Create a population and evolve for N generations
  const population = new Population(populationSize, target, mutationRate);
  population.evolve(generations);

  // Get the typed words from all members, and find if someone was able to type the target
  const membersKeys = population.members.map((m) => m.keys.join(''));
  const perfectCandidates = membersKeys.filter((w) => w === target);

  // Print the results
  console.log(membersKeys);
  console.log(`${perfectCandidates.length} member(s) typed "${target}"`);
}

generate(20, 'hit', 0.05, 5);

Running this, we will create a population of 20 members whose target is to type "hit". We'll execute it for 5 generations with a mutation rate of 5%.

Note: Keep in mind that, for our demo, the target string must contain only lowercase a-z letters.

Results

Having this particular population evolve for 5 generations will probably end up with 0 members who typed "hit". However, bumping the number of generations to 20 might land us a good candidate:

  5 Generations  |  20 Generations
-----------------|------------------
       ...       |        ...
                 |
      'gio'      |       'fit'
      'yix'      |       'hrt'
      'gio'      |    >> 'hit' <<
      'kbo'      |       'hiy'

Online demo

You can find the full source code on GitHub

If you want to dig more into genetic algorithms ...

Since this article only lightly touches on what genetic algorithms are, I highly recommend you check out The Nature of Code by Daniel Shiffman (on which this text is based). It is a great resource for those who want a deeper insight.
