
Developer Insights

Join millions of viewers! Every week, our engineers craft human-written articles that solve real-world problems. Enjoy fresh technical content and interviews with industry leaders and open-source authors covering modern web advancements.

Tags: Jae Anne Bach Hardie

Integrating In-house Data and Workflows with Stripe Using Private Stripe Apps

Stripe Apps is a recently announced platform that allows developers to embed content within Stripe's web UI, extending its functionality to allow interaction with non-Stripe services. Your immediate thought upon hearing of such a platform might be that it is useful for public services, such as customer support platforms, to develop Stripe integrations. This is a core use case, and high-profile public integrations like those for Intercom and DocuSign have featured prominently in demonstrations of the platform's capabilities. However, you shouldn't overlook the value of private apps, developed specifically for your organization and visible only to your employees. Private apps may prove to be even more valuable, because they can specifically address your business' problems, automating domain-specific workflows and integrating with in-house data and services.

What is a private Stripe App?

Stripe Apps published to the marketplace act like apps you might be familiar with from iOS or Android. They are developed for use by the public, each version goes through a strict review by Stripe, and they are published to the Stripe Marketplace where anyone can install them. Private Stripe Apps, in contrast, are published directly to the Stripe account that owns them. Since they are not visible to users outside of the organization they are developed for, they don't have to go through the app review process, which simplifies development and maintenance.

Accessing internal services

Since Stripe Apps are browser apps which execute on a user's own machine — as opposed to on Stripe's servers — private Stripe Apps can make use of resources that are only accessible from company-controlled devices. Intranet services and any internal authentication are accessible from your private Stripe App just as they are from any other browser-based internal tooling.

There are only two caveats to accessing HTTP services from Stripe Apps, and both are driven by the security model of apps. The first is that services must be served over HTTPS, which is standard for internet-facing services but might not be the case for private services on an intranet. The second is that services must allow all cross-origin requests, since requests from Stripe Apps are made with a null origin, and therefore CORS allowlisting cannot be used to secure services against cross-site request forgery. If this is a major concern, you can construct endpoints specific to your Stripe App that are secured through Stripe's request-signing mechanism and that proxy requests to the internal services only when the request is signed with the App's secret.

Enriching views with context-based data and workflows

Stripe Apps are displayed on the same screen as Stripe objects like customers or invoices, and can access information about those objects and interact with them. This enables smoother workflows for operators by showing important context all in the same view. For example:

- Displaying product shipping and returns information on the Invoice Details screen in order to make processing refunds more efficient.
- Allowing operators to see — and maybe edit — the features that a given subscription plan includes, directly in the Product Details screen.
- Displaying account activity from multiple sources, like access and change logs, in the Customer Details screen in order to make it easier to resolve support queries.
If anyone in your organization is currently working with multiple open browser tabs to manually collate information or execute workflows that cross service boundaries, a private Stripe App could help automate that process. This will free them up to do more valuable tasks and reduce the likelihood of errors by making contextual information more reliably available and eliminating manual steps....
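To make the request-signing idea above more concrete, here is a minimal sketch of the proxy pattern, assuming a Node/Express backend and a shared signing secret. It illustrates generic HMAC verification rather than Stripe's actual signing helpers; the header name, route, environment variable, and internal URL are all placeholders.

```ts
// Sketch only: a proxy endpoint that forwards requests to an intranet-only
// service when the caller presents a valid HMAC signature.
// Assumes Node 18+ (global fetch) and Express.
import crypto from "crypto";
import express from "express";

const app = express();
app.use(express.json());

const SIGNING_SECRET = process.env.APP_SIGNING_SECRET ?? "";

function isValidSignature(payload: string, signature: string): boolean {
  const expected = crypto
    .createHmac("sha256", SIGNING_SECRET)
    .update(payload)
    .digest("hex");
  // timingSafeEqual throws on length mismatch, so guard first.
  return (
    expected.length === signature.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(signature))
  );
}

app.post("/proxy/shipping-info", async (req, res) => {
  const signature = req.header("x-app-signature") ?? "";
  if (!isValidSignature(JSON.stringify(req.body), signature)) {
    return res.status(401).json({ error: "invalid signature" });
  }
  // Forward only verified requests to the internal service.
  const upstream = await fetch("https://shipping.internal.example.com/orders", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);
```

In a real setup, the Stripe App's UI extension would sign its requests with the signing utilities from Stripe's UI extension SDK, and the proxy would verify that signature instead of the hand-rolled header shown here.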


Using Astro on framework.dev

Have you heard of Astro? It's an exciting, if still experimental, static site generation framework that allows you to author components using your choice of framework and completely control if and when JavaScript is shipped to the client. If you would like to learn more, I wrote an introductory blog post about it. Considering how new and experimental Astro is, you might be wondering what it's like to actually try to build a website with it. Well, you're in luck, because we chose to build react.framework.dev in Astro, and I'm here to tell you how it went.

Some background

When I was first presented with the pitch for the framework.dev project, it had the following characteristics:

1. It was going to be primarily a reference site, allowing users to browse static data.
2. It should be low-cost, fast, and accessible.
3. Because it was a small internal project, we were pretty free to go out on a limb with our technology choices, especially if it let us learn something new.

At the time, I had recently heard exciting things about this new static site generator called "Astro" from a few developers on Twitter, and from cursory research it seemed perfect for what we had planned. Most of the site was going to be category pages for browsing, with search existing as a bonus feature for advanced users. Astro would allow us to create the whole site in React but hydrate it only on the search pages, giving us the best of the static and dynamic worlds. This plan didn't quite survive contact with the enemy (see the "ugly" section below), but it was good enough to get the green light.

We also picked vanilla-extract as a styling solution, because we wanted to be able to use the same styles in both React and Astro and needed said styles to be easily themeable for the different site variants. With its wide array of plugins for different bundlers, it seemed like an extremely safe solution. Being able to leverage TypeScript type-checking and IntelliSense to make sure theme variables were always referenced correctly was certainly helpful. But as you'll see, the journey was much less smooth than expected.

The Good

Astro did exactly what it had promised. Defining what static pages would be generated based on data, and what sections of those pages would be hydrated on the client, was extremely straightforward. The experience was very similar to using NextJS, with the same system of file system paths defining the routing structure of the site, augmented with a getStaticPaths function to define the generation of dynamic routes. However, Astro is easier to use in many ways due to its more focused feature set:

- Very streamlined and focused documentation.
- No potential confusion between static generation and server-rendering.
- All code in Astro frontmatter is run at build time only, so it's much easier to know where it's safe to write or import code that shouldn't be leaked to the client bundle.
- No need to use special components for things like links, which greatly simplifies writing and testing components. Our Storybook setup didn't have to know about Astro at all, and in fact runs exactly the same code, bundled with its own webpack-based builder.

Choosing what code should be executed client-side is so easy that describing it is somewhat anticlimactic: you add a single client directive to the component (see the sketch below). However, this is a feature that is simply not present in any other framework, and it gives a site very predictable performance characteristics: each page loads only as much JavaScript as has been marked as needing to be loaded.
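As a rough illustration (not the actual framework.dev code; the component names, data import, and file path are placeholders), an Astro page that keeps everything static except the search box might look like this:

```astro
---
// pages/index.astro — the frontmatter runs at build time only.
// SearchBox and CategoryList are hypothetical React components.
import SearchBox from "../components/SearchBox";
import CategoryList from "../components/CategoryList";
import { categories } from "../data/categories";
---
<main>
  <!-- Rendered to static HTML at build time; ships no JavaScript. -->
  <CategoryList categories={categories} />

  <!-- Hydrated in the browser as soon as the page loads. -->
  <SearchBox client:load />
</main>
```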
Each page can therefore be profiled and optimized in isolation, and increases in complexity in other pages will not affect it. For example, we have kept the site's homepage entirely static, so even when it shares code with dynamic areas like search, the volume of that code doesn't impact load times.

The Bad

Although we were pleasantly surprised by how many things worked flawlessly despite Astro being relatively new, we were still plagued by a number of small issues:

- A lot of things didn't work in a monorepo. We had to revert to installing all npm libraries in the root package rather than using the full power of workspaces, as we had trouble with hoisting and path resolution. Furthermore, we had to create local shims for any component we wanted to be a hydration root, as Astro didn't handle monorepo imports being roots.
- We were hit multiple times with a hard-to-trace issue that prevented hydration from working correctly, with a variety of errors mostly relating to node modules being left in the client bundle. The only way to fix this was to clear the Snowpack cache and build the site for production before trying to start the dev server. You can imagine how long it took to figure out this slightly bizarre workaround.
- Astro's integration with TypeScript and Prettier was pretty shaky, so the experience of editing Astro components was a bit of a throwback to the days before mature JavaScript tooling and editor integrations. I'm very thankful that we had always intended to write almost all of our components in React rather than Astro's native format.

We also hit a larger hurdle that contributed to the above problems remaining unresolved for the lifetime of the project: Astro moved from Snowpack to Vite in version 0.21, but despite vanilla-extract having a Vite plugin, we were unable to get CSS working with the new compiler. It's uncertain whether this is an issue with Astro's Vite compiler, or whether it's down to vanilla-extract's Vite plugin not having been updated for compatibility with Vite's new (and still experimental) SSR mode that Astro uses under the hood. Whatever the reason, what we had thought was a very flexible styling solution left us locked into version 0.20 with all its issues. The lesson to be learnt here is probably that when dealing with new and untested frameworks, it's wise to stick to the approaches recommended by their authors, because there are no guarantees of backwards compatibility for third-party extensions.

The Ugly

As alluded to in the introduction, our plans for framework.dev evolved in ways that made the benefits of Astro less clear. As the proposed design for the site evolved, the search experience was foregrounded and eventually became the core of every page other than the homepage. Instead of a static catalogue with an option to search, we found ourselves building a search experience where browsing was just a selection of pre-populated search terms. This means that, on almost all pages, almost all of the page is hydrated, because it is an update-results-as-you-type search box. In these conditions, where most pages are fully interactive and share almost all of their code, it's arguable that a client-side-rendering SPA like you'd get with NextJS or Gatsby would be of equal or even superior performance. Only slightly more code would have to be downloaded for the first view, and subsequent navigation would actually require less markup to be fetched from the server.
The flip side of choosing the right tool for the job is that the job can often change during development as product ideas are refined and feedback is incorporated. However, even though replacing Astro with NextJS would be fairly simple, since the bulk of the code is in plain React components, we decided against it. Even Astro's worst case is still quite acceptable, and maybe in the future we will add features to the site which will be able to truly take advantage of the islands-of-interactivity hydration model.

Closing thoughts

Astro's documentation does not lie. It is a very flexible and easy-to-use framework with unique options for improving performance, but it is also still firmly in an experimental stage of its development. You should not use it in large production projects yet. The ground is likely to shift under you, as it did with us in the Snowpack-to-Vite move. The team behind Astro is now targeting a 1.0 release, which will hopefully mean greater guarantees of backwards compatibility and a lack of major bugs. It will be interesting to see what features end up making it in, and whether developer tools like auto-formatting, linting, and type-checking are also going to be finished and supported going forward. Even without those quality-of-life improvements, Astro has been an exciting tool to test out, and it has made me think differently about the possibilities of static generation. With them, it might become the only SSG framework you will need....


Introducing Framework.dev

Have you ever started to learn a technology, felt overwhelmed by the amount of information out there, and become paralyzed, not knowing where to start? We at ThisDot can certainly relate, which is why we are creating framework.dev...


Building Web Applications using Astro - What makes it special?

I have recently been working on a site built from the ground up using Astro, and even in the early state it's in, I've been able to see the amazing possibilities it opens up in web development. So let me give you a tour of what makes it special....


Using XState Actors to Model Async Workflows Safely

In my previous post, I discussed the challenges of writing async workflows in React that correctly deal with all possible edge cases. Even for a simple case of a client with two dependencies and no error handling, we ended up with a sizeable block of code. That tangle of highly imperative code works, but it will prove hard to read and change in the future. What we need is a way to express the stateful nature of the various pieces of this workflow, and how they interact with each other, in a way in which we can easily see if we've missed something or make changes in the future. This is where state machines and the actor model can come in handy.

State machines? Actors?

These are programming patterns that you may or may not have heard of before. I will explain them in a brief and simplified way, but you should know that there is a great deal of theoretical and practical background in this area that we will be leveraging, even though we won't go over it explicitly.

1. A state machine is an entity consisting of state and a series of rules to be followed to determine the next state from a combination of its previous state and external events it receives. Even though you might rarely think about them, state machines are everywhere. For example, a Promise is a state machine going from the pending to the resolved state when it receives a value from the asynchronous computation it is wrapping.
2. The actor model is a computing architecture that models asynchronous workflows as the interplay of self-contained units called actors. These units communicate with each other by sending and receiving events, they encapsulate state, and they exist in a hierarchical relationship, where parent actors spawn child actors, thus linking their lifecycles.

It's common to combine both patterns so that a single entity is both an actor and a state machine, with child actors spawned and messages sent based on which state the entity is in. I'll be using XState, a JavaScript library which allows us to create actors and state machines in an easy, declarative style. This won't be a complete introductory tutorial to XState, though, so if you're unfamiliar with the tool and need context for the syntax I'll be using, head to their website to read through the docs.

Setting the stage

The first step is to break down our workflow into the distinct states it can be in. Not every step in a process is a state. Rather, states represent moments in the process where the workflow is waiting for something to happen, whether that is user input or the completion of some external process. In our case, we can break our workflow down coarsely into three states:

1. When the workflow is first created, we can immediately start creating the connection and fetching the auth token, but we have to wait until those are finished before creating the client. We'll call this state "preparing".
2. Then, we've started the process of creating the client, but we can't use it until the client creation returns it to us. We'll call this state "creatingClient".
3. Finally, everything is ready, and the client can be used. The machine is waiting only for the exit signal so it can release its resources and destroy itself. We'll call this state "clientReady".

This can be represented visually (all visualizations in the original post were produced with Stately) and in code along the lines of the sketch below. However, this is a bit overly simplistic.
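A rough reconstruction of that coarse machine might look like this in XState; it is not the article's actual code, and the event names (PREPARED, CLIENT_CREATED, EXIT) and the extra final state are assumptions:

```ts
import { createMachine } from "xstate";

// Sketch only: three coarse states, with assumed event names standing in for
// "the preparation steps finished" and "the client was created".
const clientMachine = createMachine({
  id: "client",
  initial: "preparing",
  states: {
    // Waiting for the connection and the auth token to be ready.
    preparing: { on: { PREPARED: "creatingClient" } },
    // Waiting for client creation to hand us back a usable client.
    creatingClient: { on: { CLIENT_CREATED: "clientReady" } },
    // Everything is ready; wait for the exit signal, then stop.
    clientReady: { on: { EXIT: "exited" } },
    exited: { type: "final" },
  },
});
```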
When we're in our "preparing" state, there are actually two separate and independent processes happening, and both of them must complete before we can start creating the client. Fortunately, this is easily represented with parallel child state nodes. Think of parallel state nodes like Promise.all: they advance independently, but the parent that invoked them gets notified when they all finish. In XState, "finishing" is defined as reaching a state node marked "final", which leaves us with the final shape of our state chart.

Casting call

So far, we only have a single actor: the root actor implicitly created by declaring our state machine. To unlock the real advantages of using actors, we need to model all of our disposable resources as actors. We could write them as full state machines using XState, but instead let's take advantage of a short and sweet way of defining actors that interact with non-XState code: functions with callbacks. Our connection actor creates a WebSocket when it starts and disposes of it when it stops. The client actor demonstrates the use of promises inside a callback actor: you can spawn promises as actors directly, but they provide no mechanism for responding to events, cleaning up after themselves, or sending any events other than "done" and "error", so they are a poor choice in most cases. It's better to invoke your promise-creating function inside a callback actor and use Promise methods like .then() to control async responses.

Actors are spawned with the spawn action creator from XState, but we also need to save the reference to the running actor somewhere, so spawn is usually combined with assign to create an actor and save it into the parent's context. It then becomes an easy task to trigger these actions when certain states are entered (see the sketch after this section).

Putting on the performance

XState provides hooks that simplify the process of using state machines in React, making the resulting hook equivalent to our async hook from the start of the post. Of course, combined with the machine definition, the action definitions and the actor code are hardly less code, or even simpler code. The advantages of breaking a workflow down like this include:

1. Each part can be tested independently. You can verify that the machine follows the logic set out without invoking the actors, and you can verify that the actors clean up after themselves without running the whole machine.
2. The parts can be shuffled around and added to without having to rewrite them. We could easily add an extra step between connecting and creating the client, or introduce error handling and error states.
3. We can read and visualize every state and transition of the workflow to make sure we've accounted for all of them. This is a particular improvement over long async/await chains, where every await implicitly creates a new state and two transitions — success and error — and the precise placement of catch blocks can drastically change the shape of the state chart.

You won't need to break out these patterns very often in an application. Maybe once or twice, or maybe never. After all, many applications never have to worry about complex workflows and disposable resources. However, having these ideas in your back pocket can get you out of some jams, particularly if you're already using state machines to model UI behaviour — something you should definitely consider doing if you're not already.
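To ground the callback-actor and spawn/assign ideas above, here is a minimal sketch; it is not the article's code, and the context fields, event names, and URL are placeholders I've assumed:

```ts
import { assign, createMachine, spawn } from "xstate";

// A callback actor that owns a WebSocket for its entire lifetime.
// It reports back to its parent and cleans up when the parent stops it.
const connectionActor =
  (url: string) =>
  (callback: (event: { type: string; socket?: WebSocket }) => void) => {
    const socket = new WebSocket(url);
    socket.onopen = () => callback({ type: "CONNECTED", socket });
    // The returned function runs when the actor is stopped, releasing the resource.
    return () => socket.close();
  };

// spawn is combined with assign so the running actor's reference lands in context.
const machine = createMachine({
  id: "client",
  initial: "preparing",
  context: { url: "wss://example.com", connectionRef: null as any },
  states: {
    preparing: {
      entry: assign({
        connectionRef: (ctx: any) => spawn(connectionActor(ctx.url), "connection"),
      }),
      on: { CONNECTED: "creatingClient" },
    },
    creatingClient: { on: { CLIENT_CREATED: "clientReady" } },
    clientReady: { on: { EXIT: "exited" } },
    exited: { type: "final" },
  },
});
```

In React, a machine like this would typically be consumed with the useMachine hook from @xstate/react, which starts and stops the root actor along with the component's lifecycle.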
A complete code example of everything discussed above, written in TypeScript and with mock actors and services that actually run in the visualizer, can be found here....


What's new in Next.js 12

Next.js 12 came with a number of new features. Some are quiet and largely automatic, but others unlock completely new ways of working....


Async Code in useEffect is Dangerous. How Do We Deal with It?

Async/await syntax is commonly used inside useEffect to trigger asynchronous workflows in React, but it comes with severe pitfalls related to resource management that you should be aware of...
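The excerpt doesn't show the full argument, but the core pitfall is that a component can unmount, or its dependencies can change, while the awaited work is still in flight. A minimal sketch of the commonly recommended guard, assuming a hypothetical fetchUser API, looks like this:

```tsx
import { useEffect, useState } from "react";

// fetchUser is a hypothetical async API used only for illustration.
declare function fetchUser(id: string): Promise<{ name: string }>;

function UserName({ id }: { id: string }) {
  const [name, setName] = useState<string | null>(null);

  useEffect(() => {
    let cancelled = false;

    fetchUser(id).then((user) => {
      // Ignore results that arrive after unmount or after `id` has changed,
      // otherwise we would set state from a stale request.
      if (!cancelled) setName(user.name);
    });

    return () => {
      cancelled = true;
    };
  }, [id]);

  return <span>{name ?? "Loading…"}</span>;
}
```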

Let's innovate together!

We're ready to be your trusted technical partners in your digital innovation journey.

Whether it's modernization or custom software solutions, our team of experts can guide you through best practices and how to build scalable, performant software that lasts.

Prefer email? hi@thisdot.co