LLM Driven Code Migration

· 521 words · 3 minute read

Moving to a More Powerful Framework πŸ”—

At my previous company, we used an older GraphQL framework with limited features. For the team owning the code, it was like riding a tiny bicycle: it could not go very fast or handle rough roads. We needed to move faster and add new features without getting stuck on minor details. As a member of that team, I saw an opportunity to lead this migration and show my teammates how they might take on similar projects in the future.

I also wanted this process to include everyone on our team. The idea was not just to solve a problem but to show that a big impact can come from stepping up and upgrading something that holds everyone back.


How We Switched the Code πŸ”—

Finding Files to Update πŸ”—

Our old calls were scattered across many files in our codebase. I wrote a script to walk through every file, looking for any place that used the old GraphQL framework call. I stored those files so I could process them one by one.
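The discovery step can be sketched roughly like this. The framework name, call name, and file extensions are placeholders, not the real ones from our codebase:

```python
import os

OLD_CALL = "oldFrameworkCall"  # placeholder for the legacy GraphQL call


def find_files_to_migrate(root: str) -> list[str]:
    """Walk the tree and collect files that still use the old call."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Placeholder extensions -- adjust to your codebase.
            if not name.endswith((".js", ".ts")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                if OLD_CALL in f.read():
                    matches.append(path)
    return matches
```

A plain substring match was enough to build the worklist; anything subtler than that was left to the LLM and to review.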

Using Llama to Transform the Calls πŸ”—

I told the LLM:

“Transform the old code from framework A to framework B. Here are some placeholder examples to guide you:
Old code:

oldFrameworkCall({ query: '...' })

New code:

newFrameworkCall({ query: '...' })

…and so on.”

I passed one file at a time. The LLM received the entire file but was asked to replace only the parts related to the GraphQL call. This way, I kept the rest of our code safe from changes I did not intend.
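The per-file loop can be sketched as below. `complete` stands in for whatever client calls the model (Llama in our case), and the few-shot examples are the same placeholders as in the prompt above:

```python
# Placeholder before/after pairs, mirroring the prompt examples above.
EXAMPLES = [
    ("oldFrameworkCall({ query: '...' })",
     "newFrameworkCall({ query: '...' })"),
]


def build_prompt(file_contents: str) -> str:
    """Assemble the instruction, the few-shot examples, and the file."""
    parts = [
        "Transform the old code from framework A to framework B. "
        "Replace only the GraphQL framework calls; leave everything "
        "else in the file untouched.",
    ]
    for old, new in EXAMPLES:
        parts.append(f"Old code:\n{old}\nNew code:\n{new}")
    parts.append("File to transform:\n" + file_contents)
    return "\n\n".join(parts)


def migrate_file(path: str, complete) -> None:
    """Send the whole file to the model and write back its rewrite."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    rewritten = complete(build_prompt(source))  # the actual LLM call
    with open(path, "w", encoding="utf-8") as f:
        f.write(rewritten)
```

Passing the entire file gives the model full context, while the instruction scopes the edit to the GraphQL calls alone.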

Creating Merge Requests πŸ”—

Once the LLM produced the new version of each file, our script packaged the changes into a separate branch, then opened a Merge Request. This let the team review the changes in small chunks, rather than one massive update. We could roll out fixes at a steady pace.
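Per file, the branch-and-MR step boils down to a handful of git/CLI commands. This sketch assumes GitLab's `glab` CLI and an illustrative branch-naming scheme, not the exact commands our script ran:

```python
import os


def mr_commands(path: str) -> list[list[str]]:
    """Return the commands to branch, commit, push, and open an MR."""
    branch = "migrate/" + os.path.basename(path).replace(".", "-")
    return [
        ["git", "checkout", "-b", branch],
        ["git", "add", path],
        ["git", "commit", "-m", f"Migrate GraphQL calls in {path}"],
        ["git", "push", "-u", "origin", branch],
        # glab prefills the MR title/description from the commit.
        ["glab", "mr", "create", "--fill", "--target-branch", "main"],
    ]
```

One branch per file keeps every Merge Request small enough to review in minutes.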

Testing and Iterating πŸ”—

We reviewed every update before merging it. In most cases, the LLM’s changes worked well.

Testing was critical: without a robust, well-tested environment to validate outputs, we would not have been able to trust LLM-generated code at this scale without a lot of manual intervention.

When we encountered an edge case, we added a relevant example to our prompt and re-ran it. Over time, iterating on this feedback refined the prompt to handle even the trickiest parts of the code.
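The feedback loop amounts to growing the prompt's few-shot list. A minimal sketch, with illustrative names and placeholder snippets:

```python
# Few-shot before/after pairs that get embedded into the prompt.
examples = [
    ("oldFrameworkCall({ query: '...' })",
     "newFrameworkCall({ query: '...' })"),
]


def add_edge_case(old_snippet: str, new_snippet: str) -> None:
    """Record an edge case found in review so the next run handles it."""
    if (old_snippet, new_snippet) not in examples:
        examples.append((old_snippet, new_snippet))


# Hypothetical edge case surfaced in review: a call that also passes
# variables alongside the query.
add_edge_case(
    "oldFrameworkCall({ query: '...', variables: vars })",
    "newFrameworkCall({ query: '...', variables: vars })",
)
```

Each review finding becomes a permanent example, so the prompt only ever gets better at the codebase's quirks.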


Why This Approach Worked πŸ”—

  • Changes were small and targeted, keeping the problem manageable.
  • Processing one file at a time allowed for rapid and parallel updates.
  • Small, incremental merge requests made code reviews easier.
  • Continuous iteration on edge cases improved accuracy over time.
  • Visible progress encouraged team collaboration and idea sharing.
  • The workflow demonstrated potential for wider application in code migrations.

A service that automates these kinds of migrations could save companies significant time and money. In the past, teams have sunk months of engineering effort into simply switching between versions or frameworks. A well-trained LLM can handle much of that work if you provide solid examples.

This might even be its own business opportunity someday, unifying legacy systems and elevating platform teams to drive enterprise-scale innovation.