Migrate a Large React Codebase to Nx
Imagine trying to fit a square peg in a round hole – that pretty much sums up the challenge we faced with our React codebases at Hasura. The open-source version of our console was like a puzzle with 460,000 pieces, while the Pro version added its own set of unique, advanced features. But maintaining these as separate entities was becoming a Herculean task.
Enter Nx: the beacon of hope for our monorepo aspirations. This migration saga isn't just about moving code around. It's the story of how we transformed our local development feedback loop from a patience-testing five-minute ordeal to a lightning-fast 10-second revelation.
Join us as we unfold the tale of this strategic shift, where we not only brought order to chaos but also revolutionized our development efficiency. Let’s embark on this migration adventure together, and I’ll walk you through every twist and turn that led to our ultimate victory in dev time.
While I was working at Hasura, there was a product called the Console: the UI for the Hasura engine. This Console had two main versions:
- The OSS (Open-Source Software) one, with most of the features present, about 460,000 lines of code
- The Pro one, used on Hasura Cloud with extra features like advanced security, metrics, and other Pro-only features, with about 30,000 lines of code
The Pro one imports nearly all of the open-source one and adds the extra Pro features around it.
The way it was done was like this:
- In OSS,
- In Pro, have a `postinstall` run the build of the OSS codebase, then install it with `--no-save` so that there is no conflict between dev envs
- In Pro, have the same `postinstall` run a full build to make sure there was no issue
- All of this in a mix of makefiles & package.json scripts
So to recap visually:
Due to the way the package was installed, we had to keep both `package.json` and lock files in sync by hand to make sure there wasn't a version difference, since the build output didn't contain any package versions in its `package.json`. Otherwise, the Pro Console would break, sometimes with full React crashes!
Plus, all of this took about 5 minutes each time! And I'm not talking about how the Pro Console was served, which was another can of worms. This meant that when you were doing work in the OSS codebase that could impact the Pro codebase, you had to make your changes, wait 5 minutes, and only then would your local dev be updated 🤯.
And to add a cherry on top, because this was a fragile setup, we didn't update the webpack files for a while, nor the build libraries, and thus didn't enjoy the performance improvements a lot of tools shipped during that time. With this setup, it was hard to change anything in this area.
So, we had one OSS codebase, tested, in TypeScript with Storybook; a Pro Console using pure JS like the old days; a `package.json` that could nuke the app if out of sync; extremely slow local dev for Pro; and an overall lack of coherence. A change was due! And we decided to move to Nx.
With the move and refactor, we had 2 big objectives:
- Move to a proper front-end monorepo instead of two isolated packages
- Minimize the maintenance of build tools while keeping them updated
There are plenty of monorepo tools nowadays! However, most of them only focus on the monorepo side, not so much on the integration of tools. That narrowed the choice down to Bazel, Gradle, Nx & Pants. And only one had enough built-in coverage of the tools we already used, and that was Nx.
Package-based monorepos act like a regular Yarn / Pnpm workspace: a collection of independent, standard packages with their own `package.json` and build step, linked at build time. Integrated monorepos, however, are a bit spicier: instead of having isolated packages, Nx uses TypeScript paths to emulate packages, speeding up the dev process by removing the build step. Furthermore, it's in this mode that Nx plugins come into play.
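As a sketch of what that emulation looks like (the library names here are illustrative, not our actual ones), the workspace's root `tsconfig.base.json` maps each import alias straight to library source files:

```json
{
  "compilerOptions": {
    "paths": {
      "@hasura/console-legacy": ["libs/console/legacy/src/index.ts"],
      "@hasura/console-pro": ["libs/console/pro/src/index.ts"]
    }
  }
}
```

Because imports resolve directly to source, the dev server recompiles only what changed, with no intermediate package build or `postinstall` dance.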
What is an Nx plugin? It does a couple of things:
- It can have executors: targets to run something, like running Jest tests, a Vite build, or a webpack dev server
- It can have generators, like the React library one, that can generate an integrated package inside the monorepo without having to modify all files by hand, or add Tailwind to an application, or many other things…
- It can have automatic migrations to update your tools and your code, like moving to the new Jest snapshot format while updating Jest
Using plugins meant we didn't have to maintain the tools (thanks to executors), nor the setup (thanks to generators); and moreover, we didn't have to write the migrations ourselves!
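Concretely, a project wired up with plugin executors might declare targets like this in its `project.json` (a minimal sketch with options omitted; the modern plugins live under the `@nx/*` scope, formerly `@nrwl/*`):

```json
{
  "targets": {
    "build": { "executor": "@nx/webpack:webpack" },
    "test": { "executor": "@nx/jest:jest" },
    "storybook": { "executor": "@nx/storybook:storybook" }
  }
}
```

Each target delegates the actual tool invocation, and most of its configuration, to the plugin, which is precisely what removes the maintenance burden.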
Now, like every decision we make as engineers, there are drawbacks…
Using an integrated monorepo means buying into a single version policy.
The single version policy can be summed up as: "There may only be one version of a dependency or package." This means that, for example, there can only be one version of React or of any other library. This was a valid trade-off for us, since we were already trying to enforce a single version policy across the two separate apps.
Another limitation is that we are buying the build tool setups instead of building them.
This means that when a new version of a tooling dependency is released, we need to wait until it's updated in Nx before we can use it. While this was an issue in the early days of Nx (around 2018), it's way less of a limitation now. First, the Nx team now works with the tool authors themselves, shipping updates at a much faster pace than before; and secondly, there are now a lot of escape hatches we can take to run custom configs if we need to.
And when there is an update, Nx takes care, thanks to code migrations, of updating our code to use the latest APIs, all without us doing anything! So it's a trade-off I'm willing to make if I don't have to maintain such tooling anymore.
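In practice, those migrations run with two commands from the Nx CLI:

```shell
npx nx migrate latest             # bumps Nx + plugin versions and collects pending code migrations
npx nx migrate --run-migrations   # applies the collected migrations to the codebase
```

The first command updates `package.json` and writes the list of applicable migrations; the second rewrites configs and code accordingly.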
Ok, so, now that we know what we want, how do we want to do it? We didn’t want to stop work on the Console for more than a day for this migration. So we had to adopt an incremental migration…
In the pursuit of updating our legacy system, we consciously aligned with Nx's conventional configurations. While Nx offers the leeway to customize, our strategy was to converge our legacy code with the established standards of Nx, minimizing custom webpack usage to leverage the full potential of Nx's built-in features.
Here is what our migration feedback loop looked like:
- Fresh Nx Workspace: Our starting point is a clean Nx workspace, symbolising a new beginning and preparedness for the code evolution ahead.
- Apply Known Modifications: Prior knowledge guides the application of essential modifications, preparing our code for integration into Nx’s ecosystem. I’ll touch a bit about those later.
- Import Current Code: Seamlessly, we introduce our existing code into this new environment, commencing the transformation journey.
- Run Build/Dev: Initiating the build or development server, we begin the iterative process of compiling and running the code, crucial for revealing the fit within Nx’s structure.
- Assessment and Adaptation: Through a cycle of testing and evaluation, we discern the functionality of our code in its new setting. Breakages lead to a deeper understanding and targeted fixes that ultimately shape our code to work harmoniously within the Nx workspace.
- If things broke: We then identified what caused the breakage. Was it a webpack config? A missing Node polyfill? Non-standard syntax?
- Then we made it break in the old code: This way, we ensured new code would follow Nx guidelines and strategies.
- Fix it in the old code: The old code remained the source of truth for all changes; then we imported it again.
- If things were successful: Celebrate !
Now, there were still some things we needed to do that didn’t follow Nx guidelines:
- TypeScript paths: We used `@/*` as a path alias in the old OSS codebase, and we wanted to tackle this after the migration was done, because it would have cost too much to do beforehand.
- Webpack Node.js fallbacks: Given we migrated from webpack 4 to 5 thanks to the Nx migration, we needed to provide extra Node polyfills, since we use some Node libraries in the front-end.
- Our webpack plugins and misc configs: We also needed some globally defined values, and some small tweaks to the overall end config, compared to the 1k+ lines from before.
- Disable or silence some ESLint rules: Nx's ESLint rules were stricter than what we had before, so we ended up disabling or silencing some of them.
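For the Node.js fallbacks mentioned above, the resulting config looked something like this (a sketch composed with Nx's webpack helpers; the polyfill packages listed are the usual suspects, not necessarily our exact list):

```javascript
// webpack.config.js
const { composePlugins, withNx } = require('@nx/webpack');

module.exports = composePlugins(withNx(), (config) => {
  // webpack 5 no longer ships Node core-module shims, so map them explicitly
  config.resolve.fallback = {
    ...config.resolve.fallback,
    crypto: require.resolve('crypto-browserify'),
    stream: require.resolve('stream-browserify'),
    path: require.resolve('path-browserify'),
  };
  return config;
});
```

Keeping this file to a handful of tweaks on top of `withNx()` is what let Nx own the rest of the build configuration.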
As for our code, we had to change the following:
- CSS modules using proper CSS module names: We were using CSS modules for all `.scss` files without specifying the `.module.scss` file name. A bulk rename helped there.
- CSS imports were relying on absolute paths: Given the structure we were going to in Nx land, we needed to use only relative paths inside CSS files. A pass over each file helped.
- Path imports were resolving even when they shouldn't have: We had imports of `utils` that referenced a root folder but should have resolved to a node_module instead. Making them relative fixed it.
- Update of various tools: We had to update Jest and TypeScript. There were some small changes to be made for those.
- Update the client entry files: Client files were mounting the app directly, but we needed to decouple that in Nx in order to not mount the app twice. So we made client components export the full App component instead of mounting it, so that the apps could load them depending on the need.
- Now, here was the tricky part: circular dependencies. We had a loooot of them (around 5k loops), and webpack 5 didn't handle them as gracefully as version 4 did. This required a lot of manual sifting through the codebase to identify what caused them and how to fix them. This took the most time. There isn't much of a secret here beyond looking at the loops, trying to identify when and where they meet, building some tooling, and hoping for the best.
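The tooling part boils down to cycle detection over the import graph, which is what dedicated tools like madge do on real source files. A minimal sketch, with a toy two-module graph standing in for our codebase:

```typescript
// Map each module to the modules it imports
type Graph = Record<string, string[]>;

// Depth-first search that reports the first import loop it finds
function findCycle(graph: Graph): string[] | null {
  const visiting = new Set<string>(); // nodes on the current DFS path
  const done = new Set<string>();     // nodes fully explored, known cycle-free
  const stack: string[] = [];

  const dfs = (node: string): string[] | null => {
    if (visiting.has(node)) {
      // The path looped back: return the slice of the stack that closes it
      return stack.slice(stack.indexOf(node)).concat(node);
    }
    if (done.has(node)) return null;
    visiting.add(node);
    stack.push(node);
    for (const dep of graph[node] ?? []) {
      const cycle = dfs(dep);
      if (cycle) return cycle;
    }
    stack.pop();
    visiting.delete(node);
    done.add(node);
    return null;
  };

  for (const node of Object.keys(graph)) {
    const cycle = dfs(node);
    if (cycle) return cycle;
  }
  return null;
}

// Two files importing each other, the shape of many of our 5k loops
const loop = findCycle({ "table.ts": ["row.ts"], "row.ts": ["table.ts"] });
```

Running this over every module pointed us at where loops met, so we could break them by extracting shared code into a third module.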
But after all this blood and sweat, we had it working! And this is what it looked like in the end, with 2 libraries and apps in our Nx workspace:
But there was still one big thing to tackle: migrating the active codebase to Nx so that engineers would work on the new codebase.
The necessity to retain five years of git history is a testament to the value we place on our development legacy. It was paramount that the integrity of our past work remained intact as we transitioned into a new phase. Git history saved was not just a goal—it was a requirement.
In preparation for the transition, we took a methodical approach to prevent any new changes from affecting the old structure. The introduction of a CI step to forbid commits to the old folders was a critical safeguard. A CI job was configured to fail automatically if it detected any merge activity in the legacy directories, effectively putting a freeze on the old sections of our codebase.
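The guard itself is simple: given the list of files changed in a PR (in CI, that list would come from `git diff --name-only`), fail the job if any file lives under a frozen legacy directory. A sketch with made-up folder names:

```typescript
// Hypothetical names for the frozen legacy directories
const FROZEN_DIRS = ["oss-console/", "pro-console/"];

// Return every changed file that falls under a frozen directory
function touchesFrozenDirs(changedFiles: string[]): string[] {
  return changedFiles.filter((file) =>
    FROZEN_DIRS.some((dir) => file.startsWith(dir))
  );
}

// Example diff: one file in the new workspace, one in a frozen folder
const offenders = touchesFrozenDirs([
  "frontend/libs/console/table.tsx",
  "oss-console/src/table.js",
]);
// In CI, a non-empty `offenders` list would make the job exit non-zero.
```

Wiring this into a CI step turned the freeze from a convention into an enforced rule.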
Further, leveraging the GitHub API, we identified all pull requests that included file changes in the old folders. A comment was systematically posted to these pull requests, alerting contributors of the impending changes and providing guidance on the new process.
With the groundwork laid, we moved on to the actual migration:
- Clearing old imports: The first major step was to commit a thorough clearance of the Nx workspace to the main branch, as seen in this . This made sure git would treat the next step as a move and not a copy.
- Migration of Codebase: Subsequently, we transitioned the existing code into the new workspace with precision. This significant shift is captured in the following . Thanks to `git mv`, history was transferred.
- Creating Temporary Copies: To facilitate a smoother transition, we temporarily replicated the new code into the old folders. This step was necessary to support ongoing work and is documented in this .
- Adjustments and Tweaks: Some minor modifications were essential to ensure the code operated flawlessly in its new environment. These adjustments were meticulously carried out, as recorded here .
All of this took less than half a day, and most of the time was waiting for CI to run.
With the structural changes in place, attention was turned to the contributors. Comments were added to all active pull requests, providing clear instructions on how to engage with the new folder system, supported by a .
As we stepped back and looked upon the fruits of our labor, it was evident that we had achieved more than a mere restructuring. We had redefined our workflow, set a new standard for our operations, and most importantly, we had done it without losing sight of where we came from. Our git history remained a monument to our evolution, unscathed and respected.
The big switch was more than a task completed—it was a triumph celebrated. 🎉 And all of this without a single bug in production 🎉🎉
The culmination of our meticulous transition and optimization efforts was not only successful but also significantly impactful in terms of performance and efficiency. We received praise from a lot of folks in the company, ranging from the CEO to front-end and back-end engineers, and even our solution engineers! Why? Let's see the actual impact of this change.
First and foremost, the impact on the end-user experience was substantial. By updating our browser list to exclude Internet Explorer and targeting ES2017, we managed to reduce the bundle size delivered to users by an astonishing 70%—a leap from 43MB down to a mere 13MB. This refinement, along with an upgrade of our tooling, meant that pages now loaded 10 seconds faster in production, a remarkable improvement that users could instantly feel.
On the development front, the changes ushered in equally impressive improvements. Where once a typical development cycle could take up to five minutes, it had been slashed to just 10 seconds. This was not only a quality-of-life enhancement for our developers but also a radical increase in productivity.
The updated tools themselves introduced more than just speed; they brought about robustness and future-proofing, ensuring our development environment remained cutting-edge. Furthermore, the implementation of the Nx cache transformed the build process for our back-end engineers into what can only be described as 'instantaneous'.
Our Continuous Integration (CI) processes experienced a significant boost. Post-transition, CI runs became 60% faster. This uptick in speed has had a profound ripple effect, translating to weeks of compute time saved each month. The Nx cache’s strategic role in this acceleration cannot be overstated, as it allowed for swift and efficient utilization of resources.
As we turn our gaze to the horizon, the journey with Nx is far from complete. The future holds a continuum of refinement and evolution aimed at not only enhancing our current capabilities but also paving the way for new opportunities.
Our immediate triumphs were followed by a seamless transition from the unconventional `@/` alias to more orthodox relative imports, a change that aligns us more closely with the Nx methodology. This was a significant first step in standardizing our project structure. It was a two-part job: a first pass that changed all files to use relative imports instead of `@/`, and a second pass to change the paths so that it would not add too much friction for in-flight PRs.
The current focus is on deconstructing the library monolith that, while it once served its purpose, now demands a more modular approach. By breaking it down into smaller, more manageable libraries, we aim to boost maintainability and encourage more granular scalability.
Furthermore, the aspiration to modernize the Pro codebase is not merely a desire but an ongoing process. With the powerful and up-to-date tools that Nx brings to the table, we anticipate a surge in both productivity and innovation within our development cycles.
As we stand at the threshold of this new frontier, it’s crucial to reflect on the methodological approach that guided us here. Let’s weigh the benefits against the challenges, examining the pros and cons of this specific path to Nx migration.
Pros:
Git History Retention: Ensured the preservation of extensive git history during the migration to a modern development environment.
Seamless Transition: Temporary measures and clear documentation provided a smooth transition for developers.
Enhanced Performance: Significantly reduced bundle sizes and faster loading times were immediate benefits of the migration.
Increased Efficiency: The adoption of Nx cache and other tools greatly improved CI/CD pipeline speed and reduced compute costs.
Developer Experience: Dramatically faster development time, enhancing productivity.
Cons:
Complex Coordination: The transition required careful management to avoid communication and execution errors.
Temporary Redundancies: Maintaining old structures for a while introduced some confusion, requiring a strategic phase-out.
Learning Curve: Developers needed time to adjust to the new system for full effectiveness.
Workflow Shift: The move required developers to change longstanding practices and adapt to new workflows.
In the end, the strategic shift to Nx, guided by a well-considered approach, was not embarked upon lightly. It required a delicate balance of foresight and precision, understanding the weight of the legacy we carry, and the innovative future we aim to build. The trade-offs encountered and the learning curves navigated are investments in a foundation built to support the burgeoning scale of our aspirations. This narrative of transformation, with its blend of pros and cons, serves not only as a case study but also as a guiding light for the industry at large, illuminating the intricate dance of maintaining continuity while breaking new ground. ☀️