
This Dot's "A Complete Guide to VueJS" is now available for free download!


This article was written over 18 months ago and may contain information that is out of date. Some content may still be relevant, but please refer to the official documentation or other available resources for the latest information.

Today, This Dot Labs is announcing the release of A Complete Guide to VueJS written by one of our in-house experts, Bilal Haidar.

The book is a comprehensive resource on the internal structures, complementary frameworks, components, and integrations needed to make Vue work for you!

Readers can look forward to exploring the tools needed to get their first Vue application up and running, from choosing a code editor to installing the Vue CLI.

The book then walks through the anatomy of an application, introducing force-multiplying technologies supported by Vue, including ESLint, Prettier, source code repository management systems such as Git, CSS frameworks, Axios, and more.

There is even a complete section introducing readers to the yet-unreleased Vue 3 update!

At about 80 pages, this quick guide packs in a ton of additional free resources that readers can explore to gain even more insight into this powerful framework.

Download your free copy of A Complete Guide to VueJS, and be sure to join This Dot Labs and JavaScript Marathon on Wednesday, 4/22 at 3:30PM EDT for a live Vue training, where we will dive into unit testing!

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines and keeping strategic digital initiatives on course. Check out our case studies and the clients that trust us with their engineering.

You might also like


Introduction to Zod for Data Validation

As web developers, we're often working with data from external sources like APIs we don't control or user inputs submitted to our backends. We can't always rely on this data to take the form we expect, and we can encounter unexpected errors when it deviates from expectations. But with the Zod library, we can define what our data ought to look like and parse the incoming data against those defined schemas. This lets us work with that data confidently, or quickly throw an error when it isn't correct.

Why use Zod?

TypeScript is great for letting us define the shape of our data in our code. It helps us write more correct code the first time around by warning us if we are doing something we shouldn't. But TypeScript can't do everything for us. For example, we can define a variable as a string or a number, but we can't say "a string that starts with user_id_ and is 20 characters long" or "an integer between 1 and 5". There are limits to how much TypeScript can narrow down our data for us.

Also, TypeScript is a tool for us developers. When we compile our code, our types are not available to the vanilla JavaScript. JavaScript can't validate that the data we actually use in our code matches what we thought we'd get when we wrote our TypeScript types, unless you're willing to manually write code to perform those checks.

This is where we can reach for a tool like Zod. With Zod, we can write data schemas. These schemas, in the simplest scenarios, look very much like TypeScript types. But we can do more with Zod than we can with TypeScript alone. Zod schemas let us create additional rules for data parsing and validation. A 20-character string that starts with user_id_? It's z.string().startsWith('user_id_').length(20). An integer between 1 and 5 inclusive? It's z.number().int().gte(1).lte(5). Zod's primitives give us many extra functions to be more specific about *exactly* what data we expect.

Unlike TypeScript, Zod schemas aren't removed on compilation to JavaScript—we still have access to them! If our app receives some data, we can verify that it matches the expected shape by passing it to your Zod schema's parse function. You'll either get back your data in exactly the shape you defined in your schema, or Zod will give you an error to tell you what was wrong.

Zod schemas aren't a replacement for TypeScript; rather, they are an excellent complement. Once we've defined our Zod schema, it's simple to derive a TypeScript type from it and to use that type as we normally would. But when we really need to be sure our data conforms to the schema, we can always parse the data with our schema for that extra confidence.

Defining Data Schemas

Zod schemas are the variables that define our expectations for the shape of our data, validate those expectations, and transform the data if necessary to match our desired shape. It's easy to start with simple schemas, and to add complexity as required. Zod provides different functions that represent data structures and related validation options, which can be combined to create larger schemas. In many cases, you'll probably be building a schema for a data object with properties of some primitive type. For example, consider a schema that would validate a JavaScript object representing an order for a pizza (a sketch appears at the end of this section).

Zod provides a number of primitives for defining schemas that line up with JavaScript primitives: string, number, bigint, boolean, date, symbol, undefined, and null. It also includes the primitives void, any, unknown, and never for additional typing information.
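The article's original code blocks aren't reproduced in this excerpt, but a minimal sketch of a pizza-order schema along these lines might look like the following. Every field name here is an illustrative assumption, not the article's exact example:

```ts
import { z } from "zod";

// A sketch of a pizza-order schema; the fields are assumptions for
// illustration only.
const PizzaOrderSchema = z.object({
  size: z.enum(["small", "medium", "large"]),
  toppings: z.array(z.string()).max(5),
  extraCheese: z.boolean(),
  // "An integer between 1 and 5 inclusive", as described in the text:
  quantity: z.number().int().gte(1).lte(5),
  // "A 20-character string that starts with user_id_":
  orderedBy: z.string().startsWith("user_id_").length(20),
});
```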
In addition to basic primitives, Zod can define object, array, and other native data structure schemas, as well as schemas for data structures not natively part of JavaScript, like tuple and enum. The documentation contains considerable detail on the available data structures and how to use them.

Parsing and Validating Data with Schemas

With Zod schemas, you're not only telling your program what data should look like; you're also creating the tools to easily verify that the incoming data matches the schema definitions. This is where Zod really shines, as it greatly simplifies the process of validating data like user inputs or third-party API responses.

Let's say you're writing a website form to register new users. At a minimum, you'll need to make sure the new user's email address is a valid email address. For a password, we'll ask for something at least 8 characters long and including one letter, one number, and one special character. (Yes, this is not really the best way to write strong passwords, but for the sake of showing off how Zod works, we're going with it.) We'll also ask the user to confirm their password by typing it twice.

First, we create a Zod schema to model these inputs. At this point, the schema is pretty basic: it's only making sure that whatever the user types as an email is an email, and it's checking that the password is at least 8 characters long. But it is *not* checking if password and confirmPassword match, nor checking for the complexity requirements. Let's enhance our schema to fix that!

By adding refine with a custom validation function, we can verify that the passwords match. If they don't, parsing will give us an error to let us know that the data was invalid. We can also chain multiple refine functions to add checks for our password complexity rules. You could alternatively use superRefine, which gives you even more fine-grained control.

Now that we've built out our schema and added refinements for extra validation, we can parse some user inputs. Let's consider two test cases: one that's bound to fail, and one that will succeed. There are two main ways we can use our schema to validate our data: parse and safeParse. The main difference is that parse will throw an error if validation fails, while safeParse will return an object with a success property of either true or false, and either a data property with your parsed data or an error property with the details of a ZodError explaining why the parsing failed. In the case of our example data, userInput2 will parse just fine and return the data for us to use. But userInput1 will create a ZodError listing all of the ways it has failed validation.

We can use these error messages to communicate to the user how they need to fix their form inputs if validation fails. Each error in the list describes the validation failure and gives us a human-readable message to go with it. You'll notice that the validation errors for checking for a valid email and for checking password length have a lot of details, but we've got three items at the end of the error list that don't really tell us anything useful: just a custom error of Invalid input. The first is from our refine checking if the passwords match, and the second two are from our refine functions checking for password complexity (numbers and special characters). Let's modify our refine functions so that these errors are useful!
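Here is a sketch of how such a registration schema might look, assuming hypothetical field names (email, password, confirmPassword) and leaving the refine checks with their default error messages:

```ts
import { z } from "zod";

// A sketch of the registration schema discussed above. The field names and
// regex checks are assumptions; without custom error parameters, the three
// refine failures below surface only as generic "Invalid input" issues.
const RegistrationSchema = z
  .object({
    email: z.string().email(),
    password: z.string().min(8),
    confirmPassword: z.string(),
  })
  .refine((data) => data.password === data.confirmPassword) // passwords match
  .refine((data) => /[0-9]/.test(data.password)) // at least one number
  .refine((data) => /[^a-zA-Z0-9]/.test(data.password)); // one special character

const result = RegistrationSchema.safeParse({
  email: "not-an-email",
  password: "pw",
  confirmPassword: "different",
});

if (!result.success) {
  // A ZodError: detailed issues for the email and length checks, but only
  // generic "Invalid input" entries for the three refine failures.
  console.log(result.error.issues);
}
```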
We'll add our own error parameters to customize the message we get back, and the path to the data that failed validation. Now, our error messages from failures in refine are informative! You can figure out which form fields aren't validating from the path, and then display the messages next to form fields to let the user know how to remedy the error. By giving our refine checks a custom path and message, we can make better use of the returned errors. In this case, we can highlight specific problem form fields for the user and give them the message about what is wrong.

Integrating with TypeScript

Integrating Zod with TypeScript is very easy. Using z.infer<typeof YourSchema> will allow you to avoid writing extra TypeScript types that merely reflect the intent of your Zod schemas; you can create a type from any Zod schema this way. Using a TypeScript type derived from a Zod schema does *not* give you any extra level of data validation at the type level beyond what TypeScript is capable of. If you create a type from z.string().min(3).max(20), the TypeScript type will still just be string. And when compiled to JavaScript, even that will be gone! That's why you still need to use parse/safeParse on incoming data to validate it before proceeding as if it really does match your requirements.

A common pattern with inferring types from Zod schemas is to use the same name for both. Because the schema is a variable, there's no name conflict if the type uses the same name. However, I find that this can lead to confusing situations when trying to import one or the other—my personal preference is to name the Zod schema with Schema at the end to make it clear which is which.
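Pulling those two ideas together, here is a sketch (using the same hypothetical fields as above) of custom refine error parameters combined with an inferred type:

```ts
import { z } from "zod";

// A sketch combining custom refine error parameters with z.infer.
// The schema, fields, and naming convention are illustrative assumptions.
const RegistrationSchema = z
  .object({
    email: z.string().email(),
    password: z.string().min(8),
    confirmPassword: z.string(),
  })
  .refine((data) => data.password === data.confirmPassword, {
    message: "Passwords do not match",
    path: ["confirmPassword"], // points the error at a specific form field
  });

// Derive the TypeScript type instead of writing it by hand.
type Registration = z.infer<typeof RegistrationSchema>;

function register(input: unknown): Registration {
  // parse throws a ZodError on failure; the message and path above make
  // the failure easy to display next to the right form field.
  return RegistrationSchema.parse(input);
}
```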
Conclusion

Zod is an excellent tool for easily and confidently asserting that the data you're working with is exactly the sort of data you were expecting. It gives us the ability to assert at runtime that we've got what we wanted, and allows us to then craft strategies to handle what happens if that data is wrong. Combined with the ability to infer TypeScript types from Zod schemas, it lets us write and run more reliable code with greater confidence.

D1 SQLite: Writing queries with the D1 Client API

Writing queries with the D1 Client API

In the previous post, we defined our database schema, got up and running with migrations, and loaded some seed data into our database. In this post, we will be working with our new database and seed data. If you want to participate, make sure to follow the steps in the first post. We've been taking a minimal approach so far by using only wrangler and SQL scripts for our workflow. The D1 Client API has a small surface area. Thanks to the power of SQL, we will have everything we need to construct all types of queries. Before we start writing our queries, let's touch on some important concepts.

Prepared statements and parameter binding

This is the first section of the docs, and it highlights two different ways to write our SQL statements using the client API: prepared and static statements. Best practice is to use prepared statements because they are more performant and prevent SQL injection attacks, so we will write our queries using prepared statements.

We need to use parameter binding to build our queries with prepared statements. This is pretty straightforward, and there are two variations. By default, we add ?'s to our statement to represent values to be filled in. The bind method binds the parameters to the question marks by their index: the first ? is tied to the first argument passed to bind, the second ? to the second argument, and so on. I would stick with this most of the time to avoid any confusion.

I like the second method less, as it feels like something I can imagine messing up very innocently: you can add a number directly after a question mark to indicate which parameter it should be bound to, so, for example, we could reverse the previous binding.

Reusing prepared statements

If we take the first example above and don't bind any values, we have a statement that can be reused.

Querying

For the purposes of this post, we will just build example queries by writing them out directly in our Worker fetch handler. If you are building an app, I would recommend building functions or some other abstraction around your queries.

select queries

Let's write our first query against our data set to get our feet wet: a query for all authors in our initial Worker code. We pass our SQL statement into prepare and use the all method to get all the rows. Notice that we are able to pass our types to a generic parameter in all, which allows us to get a fully typed response from our query. We can run our Worker with npm run dev and access it at http://localhost:8787 by default. We'll keep this simple workflow of writing queries and passing them as a JSON response for inspection in the browser. Opening the page, we get our author results. A sketch of this pattern follows.
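This is a sketch, not the article's original code: it assumes a D1 binding named DB, an authors(id, name) table from the earlier posts in the series, and the type definitions from @cloudflare/workers-types:

```ts
// A sketch of prepared statements, parameter binding, and typed results.
interface Author {
  id: number;
  name: string;
}

interface Env {
  DB: D1Database; // type provided by @cloudflare/workers-types
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Anonymous parameter binding: the first ? is bound to the first
    // argument of bind, the second ? to the second, and so on.
    const one = await env.DB
      .prepare("SELECT id, name FROM authors WHERE id = ?")
      .bind(1)
      .first<Author>();

    // An unbound prepared statement that can be reused; the generic
    // parameter on all() gives us a fully typed result set.
    const { results } = await env.DB
      .prepare("SELECT id, name FROM authors")
      .all<Author>();

    return Response.json({ one, results });
  },
};
```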
joins

Not using an ORM means we have full control over our own destiny. Like anything else, though, this has tradeoffs. Let's look at a query to fetch the list of posts, including author and tags information, and walk through each part of the query to highlight some pros and cons. In the select list:

* The query selects all columns from the posts table.
* It also selects the name column from the authors table and renames it to author_name.
* It aggregates the name column from the tags table into a JSON array. If there are no tags, it returns an empty JSON array. This aggregated result is renamed to tags.

In the from and join clauses:

* The query starts by selecting data from the posts table.
* It then joins the authors table to include author information for each post, matching posts to authors using the author_id column in posts and the id column in authors.
* Next, it left joins the posts_tags table to include tag associations for each post, ensuring that all posts are included even if they have no tags.
* Next, it left joins the tags table to include tag names, matching tags to posts using the tag_id column in posts_tags and the id column in tags.
* Finally, it groups the results by post id so that all rows with the same post id are combined into a single row.

SQL provides a lot of power to query our data in interesting ways. JOINs will typically be more performant than performing additional queries, but you could just as easily write a simpler version of this query that uses subqueries to fetch post tags and join all the data by hand with JavaScript. This is the nice thing about writing SQL: you're free to fetch and handle your data however you please. Our results bring us to our next topic.

Marshaling / coercing result data

There are a couple of things we notice about the format of the result data our query provides. First, rows are flat: we join the author directly onto the post and prefix its column names with author, whereas using an ORM we might get the data back as a child object. Second, our tags data is a JSON string and not a JavaScript array, which means that we will need to parse it ourselves. This isn't the end of the world, but it is some more work on our end to coerce the result data into the format that we actually want. This problem is handled in most ORMs and is their main selling point, in my opinion.

insert / update / delete

Next, let's write a function that will add a new post to our database. There are a few queries involved in our create post function:

* first, we create the new post
* next, we run through the tags and either create or return an existing tag
* finally, we add entries to our posts_tags join table to associate our new post with the tags assigned

We can test our new function by providing post content in query params on our index page and formatting them for our function. I gave it a run like this:

http://localhost:8787/?authorId=1&tags=Food%2CReview&title=A+review+of+my+favorite+Italian+restaurant&content=I+got+the+sausage+orchette+and+it+was+amazing.+I+wish+that+instead+of+baby+broccoli+they+used+rapini.+Otherwise+it+was+a+perfect+dish+and+the+vibes+were+great

And got a new post with the id 11.

UPDATE and DELETE operations are pretty similar to what we've seen so far. Most complexity in your queries will be similar to what we've seen in the posts query, where we want to JOIN or GROUP BY data in various ways. To update the post, we can write a query that uses COALESCE, which acts similarly to a ?? b in JavaScript: if the bound value that we provide is null, it will fall back to the default. We can delete our new post with a simple DELETE query.

Transactions / Batching

One thing to note with D1 is that I don't think the traditional style of SQLite transactions is supported. You can use the db.batch API to achieve similar functionality, though. According to the docs: "Batched statements are SQL transactions. If a statement in the sequence fails, then an error is returned for that specific statement, and it aborts or rolls back the entire sequence."
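As an illustration of db.batch, here is a sketch of the tag-association step from the create post flow above; the table and column names are assumptions based on this series' schema:

```ts
// A sketch of db.batch for associating a new post with its tags.
async function linkTagsToPost(db: D1Database, postId: number, tagIds: number[]) {
  const insert = db.prepare(
    "INSERT INTO posts_tags (post_id, tag_id) VALUES (?, ?)"
  );

  // batch executes the statements as a single SQL transaction: if any
  // statement fails, the entire sequence is rolled back.
  await db.batch(tagIds.map((tagId) => insert.bind(postId, tagId)));
}
```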
Summary

In this post, we've taken a hands-on approach to exploring the D1 Client API, starting with defining our database schema and loading seed data. We then dove into writing queries, covering the basics of prepared statements and parameter binding, before moving on to more complex topics like joins and transactions. We saw how to construct and execute queries to fetch data from our database, including how to handle relationships between tables and marshal result data into a usable format. We also touched on inserting, updating, and deleting data, and how to use transactions to ensure data consistency. By working through these examples, we've gained a solid understanding of how to use the D1 Client API to interact with our database and build robust, data-driven applications.


Announcing July JavaScript Marathon - Free, online training!

Join us July 22nd, 2020 for our next JavaScript Marathon! JavaScript Marathon is a full day of free, online courses on Angular, React, Vue, RxJS, and Web Performance. Come learn about some of the leading web development technologies and concepts! Stay for one training, or stick around for the whole day! No two sessions will be the same!

---

Featuring Shawn Wang @ 11:00am - 12:00pm EDT

In this session we will learn how to build a fullstack serverless React + GraphQL app from scratch with authentication, storage, and multiplayer realtime collaboration, all atop infinitely scalable AWS components, with AWS Amplify! It's never been this easy to go from idea to prototype, and each piece will be livecoded in front of your very eyes!

---

Featuring Michael Hladky @ 12:30pm - 1:30pm EDT

The async pipe is boring! Understand the guts of Angular's change detection and why zone.js is your biggest enemy. Learn the tricks of template bindings, component rendering, and where you pay the biggest price. As a cutting-edge demo, you will understand how to analyze blocking UIs over flame charts and how to avoid them. In the end, you will be able to get zone-less performance even in zone-full Angular applications!

---

Featuring Nathan Walker @ 2:00pm - 3:00pm EDT

During this introduction to NativeScript, you'll get a brief overview of what NativeScript is and how it works. You'll also learn how to create TypeScript-, Angular-, Vue-, and React-based apps, plus so much more!

---

Featuring Cecelia Martinez @ 3:30pm - 4:30pm EDT

Looking to add testing to your skill set, or just want to feel more confident pushing to production? In this beginner-level talk, we will walk through the process of installing, configuring, and writing a critical-path test using Cypress. Written in JavaScript and built on the popular Mocha and Chai libraries, the free and open-source Cypress Test Runner gets you up to speed with end-to-end testing fast. We will also cover general testing strategies for beginners, including how to decide what to test and how to ensure your test suite is effective.

---

Featuring Jesse Tomchak @ 5:00pm - 6:00pm EDT

Setting up user authorization and authentication can be a minefield of security practices, token verification, valid callback URLs, salt hashes, and more. Now take all those struggles and sprinkle them over serverless functions, when all we want to do is get past the login page to our actual application! We'll walk through setting up secure OAuth with AWS Lambda functions, covering common pitfalls, so that you can get back to the fun part of your project.

---

Tune in next month for another full day of JavaScript Marathon!

Need private trainings for your company? If you would like to learn more about how you can leverage This Dot's expertise to upskill your team and reinvigorate your developers with new knowledge about the web's leading development technologies, visit the trainings page.


Keeping Costs in Check When Hosting Next.js on Vercel

Vercel is usually the go-to platform for hosting Next.js apps, and not without reason. Not only are they one of the sponsors of Next.js, but their platform is very straightforward to use, not just for Next.js but for other frameworks, too. So it's no wonder people choose it more and more over other providers. Vercel, however, is a serverless platform, which means there are a few things you need to be aware of to keep your costs predictable and under control. This blog post covers the most important aspects of hosting a Next.js app on Vercel.

Vercel's Pricing Structure

Vercel's pricing structure is based on fixed and usage-based pricing, which is implemented through two big components of Vercel: the Developer Experience Platform (DX Platform) and the Managed Infrastructure.

The DX Platform offers a monthly-billed suite of tools and services focused on building, deploying, and optimizing web apps. Think of it as the developer-focused interface on Vercel, which assists you in managing your app and provides team collaboration tools, deployment infrastructure, security, and administrative services. Additionally, it includes Vercel support. Because the DX Platform is developer-focused, it's also charged per seat on a monthly basis. The more developers have access to the DX Platform, the more you're charged. In addition to charging per seat, there are also optional, fixed charges for non-included, extra features. Observability Plus is one such example feature.

The Managed Infrastructure, on the other hand, is what makes your app run and scale. It is a serverless platform priced per usage. Thanks to this infrastructure, you don't need to worry about provisioning, maintaining, or patching your servers. Everything is executed through serverless functions, which can scale up and down as needed. Although this sounds great, this is also one of the reasons many developers hesitate to adopt serverless; it may have unpredictable costs. One day, your site sees minimal traffic, and the next, it goes viral on Hacker News, leading to a sudden spike in costs. Vercel addresses this uncertainty by including a set amount of free serverless usage in each of its DX Platform plans. Once you exceed those limits, additional charges apply based on usage.

Pricing Plans

The DX Platform can be used in three different pricing plans on a team level. A team can represent a single person, a team within a company, or even a whole company. When creating a team on Vercel, this team can have one or more projects.

The first of the three pricing plans is the Hobby plan, which is ideal for personal projects and experimentation. The Hobby plan is free and comes with some of the features and resources of the DX Platform and Managed Infrastructure out of the box, making it suitable for hosting small websites. However, note that the Hobby plan is limited to non-commercial, personal use only.

The Pro plan is the most recommended for most teams and can be used for commercial purposes. At the time of this writing, it costs $20 per team member and comes with generous resources that support most teams.

The third tier is the Enterprise plan, which is the most advanced and expensive option. This plan becomes necessary when your application requires specific compliance or performance features, such as isolated build infrastructure, Single Sign-On (SSO) for corporate user management, or custom support with Service-Level Agreements. The Enterprise plan has a custom pricing model and is negotiated directly with Vercel.
What Contributes to Usage Costs and Limiting Them

Now that you've selected a plan for your team, you're ready to deploy Next.js. But how do you determine what contributes to your per-usage costs and ensure they stay within your plan limits? The Vercel pricing page has a detailed breakdown of the resource usage limits for each plan, which can help you understand what affects your costs. However, in this section, we've highlighted some of the most impactful factors on pricing that we've seen on many of our client projects.

Number of Function Invocations

Each time a serverless function runs, it counts as an invocation. Most of the processing on Vercel for your Next.js apps happens through serverless functions. Some might think that only API endpoints or server actions count as serverless function invocations, but this is not true. Every request that comes to the backend goes through a serverless function invocation, which includes:

- Invoking server actions (server functions)
- Invoking API routes (from the frontend, another system, or even within another serverless function)
- Rendering a React server component tree (as part of a request to display a page)
- Middleware execution

To give you an idea of the number of invocations included in a plan, the Pro plan includes 1 million invocations per month for free. After that, it costs $0.60 per million, which can total a significant amount for popular websites. To minimize serverless function invocations, focus on reducing any of the above points. For example:

- Batch up server actions: If you have multiple server actions, such as downloading a file and increasing its download count, you can combine them into one server action.
- Minimize calls to the backend: Closely related to the previous point, unoptimized client components can call the backend more than they need to, contributing to an increased function invocation count. If needed, consider using a library such as useSwr or TanStack Query to keep your backend calls under control.
- Use API routes correctly: Next.js recommends using API routes for external systems invoking your app. For instance, Contentful can invoke a blog post through a webhook without incurring additional invocations. However, avoid invoking API routes from server component tree renders, as this counts as at least two invocations.

Reducing React server component renders is also possible. Not all pages need to be dynamic - convert dynamic routes to static content when you don't expect them to change in real time. On the client, utilize Next.js navigation primitives to use the client-side router cache.

Middleware in Next.js runs before every request. This adds an extra function invocation on top of the original request processing (e.g., a server action invocation or page render). To minimize middleware invocations, limit them to requests that require it, such as protected routes. For static asset requests, you can skip middleware altogether using matchers. For example, a matcher configuration along the lines of the sketch below would prevent invoking the middleware for most static assets.
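The original snippet isn't reproduced in this excerpt, but a matcher in that spirit, following the common Next.js middleware convention, might look like this (a sketch, not the article's exact configuration):

```ts
// middleware.ts - a sketch of a matcher that skips API routes, static
// files, image optimization requests, and the favicon, so the middleware
// only runs where it is actually needed.
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  // Hypothetical placeholder: real logic (e.g., auth checks) goes here.
  return NextResponse.next();
}

export const config = {
  matcher: ["/((?!api|_next/static|_next/image|favicon.ico).*)"],
};
```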
Function Execution Time

The duration your serverless function takes to execute counts as the execution time, and it impacts your end bill unless it's within the limits of your plan. This means that any inefficient code that takes longer to execute directly adds to the total function invocation time. Many things can contribute to this, but one common pattern we've seen is not utilizing caching properly, or under-caching.

Next.js offers several caching techniques you can use, such as:

- Using a data cache to prevent unnecessary database calls or API calls
- Using memoization to prevent too many API or database calls in the same rendering pass

Another reason, especially now in the age of AI APIs, is having a function run too long due to AI processing. In this case, what we could do is utilize some sort of queuing for long-processing jobs, or enable Fluid Compute, a recent feature by Vercel that optimizes function invocations and reusability.

Bandwidth Usage

The volume of data transferred between users and Vercel, including JavaScript bundles, RSC payload, API responses, and assets, directly contributes to bandwidth usage. In the Pro plan, you receive 1 TB/month of included bandwidth, which may seem substantial but can quickly be consumed by large Next.js apps with:

- Large JavaScript bundles
- Many images
- Large API JSON payloads

Image optimization is crucial for reducing bandwidth usage, as images are typically large assets. By implementing image optimization, you can significantly reduce the amount of data transferred.

To further optimize your bandwidth usage, focus on using the Link component efficiently. This component performs automatic prefetching of content, which can be beneficial for frequently accessed pages. However, you may want to disable this feature for infrequently accessed pages. The Link component also plays a role in reducing bandwidth usage, as it aids in client-side navigation. When a page is cached client-side, no request is made when the user navigates to it, resulting in reduced bandwidth usage.

Additionally, API and RSC payload responses count towards bandwidth usage. To minimize this impact, always return only the minimum amount of data necessary to the end user.

Image Transformations

Every time Vercel transforms an image from an unoptimized image, this counts as an image transformation. After transformation, every time an optimized image is written to Vercel's CDN network and then read by the user's browser, this counts as an image cache write and an image cache read, respectively. The Pro plan includes 10k transformations per month, 600k CDN cache reads, and 200k CDN cache writes. Given the high volume of image requests in many apps, it's worth checking if the associated costs can be reduced.

Firstly, not every image needs to be transformed. Certain types of images, such as logos and icons, small UI elements (e.g., button graphics), vector graphics, and other pre-optimized images you may have optimized yourself already, don't require transformation. You can store these images in the public folder and use the unoptimized property with the Image component to mark them as non-transformable.

Another approach is to utilize an external image provider like Cloudinary or AWS CloudFront, which may have already optimized the images. In this case, you can use a custom image loader to take advantage of their optimizations and avoid Vercel's image transformations.

Finally, Next.js provides several configuration options to fine-tune image transformation (see the sketch after this list):

- images.minimumCacheTTL: Controls the cache duration, reducing the need for rewritten images.
- images.formats: Allows you to limit eligible image formats for transformation.
- images.remotePatterns: Defines external sources for image transformation, giving you more control over what's optimized.
- images.quality: Enables you to set the image quality for transformed images, potentially reducing bandwidth usage.
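Here is a sketch of how those options might be combined in a Next.js config. The values and hostname are illustrative assumptions, not recommendations, and note that newer Next.js versions expose the quality setting as an images.qualities array:

```ts
// next.config.ts - an illustrative sketch of the image options above.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  images: {
    minimumCacheTTL: 60 * 60 * 24, // cache transformed images for a day
    formats: ["image/avif", "image/webp"], // limit eligible output formats
    remotePatterns: [
      // hypothetical external image host allowed for transformation
      { protocol: "https", hostname: "images.example.com" },
    ],
    qualities: [75], // restrict allowed quality values for transformations
  },
};

export default nextConfig;
```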
Monitoring

The "Usage" tab on the team page in Vercel provides a clear view of your team's resource usage. It includes information such as function invocation counts, function durations, and fast origin transfer amounts. You can easily see how far you are from reaching your team's limit, and if you're approaching it, you'll see the amount. This page is a great way to monitor your usage regularly, but you don't need to check it constantly. Vercel offers various spending management features, and you can set alert thresholds to get notified when you're close to or exceed your limit. This helps you proactively manage your spending and avoid unexpected charges.

One good feature of Vercel is its ability to pause projects when your spending reaches a certain point, acting as an "emergency brake" in the case of a DDoS attack or a very unusual spike in traffic. This will stop the production deployment, and users will not be able to use your site, but at least you won't be charged for any extra usage. This option is enabled by default.

Conclusion

Hosting a Next.js app on Vercel offers a great developer experience, but it's also important to consider how this contributes to your end bill and to keep it under control. Hopefully, this blog post has cleared up some of the confusion around pricing and how to plan, optimize, and monitor your costs. We hope you enjoyed this blog post. Be sure to check out our other blog posts on Next.js for more in-depth coverage of different features of this framework.

Let's innovate together!

We're ready to be your trusted technical partners in your digital innovation journey.

Whether it's modernization or custom software solutions, our team of experts can guide you through best practices and how to build scalable, performant software that lasts.

Prefer email? hi@thisdot.co