

From Pipenv to UV: Migrating to a Monorepo to Tame a Complex Repository


Satoshi Kaneyasu

September 27, 2025

Transcript

  1. From Pipenv to UV: Migrating to a Monorepo to Tame a Complex Repository

    PyCon JP 2025 — 2025.09.26 — Satoshi Kaneyasu
  2. 2 Speaker Introduction

    Name: Satoshi Kaneyasu / Company: Serverworks Co., Ltd. / Department: Application Services Division / Location: Hiroshima (Fully Remote) / Role: DevOps, Project Manager, Scrum Master / SNS (X): @satoshi256kbyte
    • 2025 AWS Community Builders • 2025 Japan AWS Top Engineers (AI/ML Data Engineer) • 2025 Japan AWS All Certifications Engineers • Certified ScrumMaster • PMP
  3. 4 Table of Contents

    1. The Growing Complexity of the Repository
    2. Rethinking the Runtime Version Manager: Migrating from pyenv to asdf
    3. From pipenv to uv: Moving a Complex Repository into a Monorepo Structure
    4. What We Postponed
    5. The Impact of Migrating to a Monorepo with uv
    6. Conclusion
  4. 6 Project and Repository Overview

    ◼ Agile Development (Scrum)
    ◼ Web Application built on AWS
    ◼ AWS was chosen primarily for its scalability
    ◼ Backend in Python with long-running asynchronous processing
    ◼ The backend is mostly serverless, while long-running async tasks run in containers
    ◼ Using pyenv + pipenv
  5. 9 Complicated Deployment Scripts

    Move to another folder for each module, switch versions again, then deploy.
    (Omitted) (Omitted) (Omitted)
    Switch to the version used in that folder, then deploy.
  6. 10 What I Was Thinking at That Time

    ◼ The deployment scripts had become something that felt very unnatural.
    ◼ Because of this, CI/CD fell out of favor, and its operation became increasingly dependent on specific individuals.
    ◼ Concerns started to be raised about the difficulty of writing unit tests.
    ◼ Although the feedback was somewhat vague, I felt the root cause was that there were simply too many things to consider overall, creating a high cognitive load.
    ◼ I believed this would eventually become a real problem and act as a brake on the development team.
    The next page moves on to the following topic.
  7. 12 Migrating from pyenv to asdf

    ◼ First, we started migrating from pyenv to asdf.
    ◼ The initial use of asdf came from the team member responsible for newly added modules, and it was later rolled out across the team.
    ◼ The decision to adopt asdf more broadly was driven by the need to manage versions not only for Python, but also for tools such as the AWS CLI and Node.js for IaC, which aligned well with what asdf provides.
    ◼ As a result, the roles became clearer: asdf for managing languages and tools, and pipenv for managing libraries.
    .tool-versions:
      aws-sam-cli 1.136.0
      nodejs 23.10.0
      python 3.13.1
  8. 13 Policy for Revisiting the Runtime Version Manager

    ◼ Migration Work Proceeds in Parallel with Feature Development
    ◼ Image of Git Branching (branches: develop, feature, migration)
  9. 14 Migration from pyenv to asdf This is the only

    case where versions other than Python are also specified.
  10. 15 Migration from pyenv to asdf Took One Month

    ◼ While the introduction of asdf itself was quick, it took about a month for it to become fully adopted across the team.
    ◼ The main reason for the delay was that many members reported that asdf was not working.
    ◼ If Python virtual environments created with pyenv remained, the version in the virtual environment could take precedence over the settings in asdf and .tool-versions.
    ◼ The solution was to exit and then remove the virtual environment before retrying:
      deactivate
      rm -rf .venv
    ◼ In cases where the Python path still pointed to pyenv, running reshim or reviewing shell initialization scripts (e.g., .zshrc, .bashrc) helped resolve the issue:
      asdf reshim python
  11. 16 At This Point, No Review of Deployment Scripts

    (Omitted) (Omitted) (Omitted)
    Since "pyenv local xxx" was simply replaced with "asdf install", only that part was modified, and a broader review of the deployment scripts was postponed.
  12. From pipenv to uv: Moving a Complex Repository into a Monorepo Structure

    (Timeline: Early / Middle / Late)
  13. 18 Reasons for Choosing uv as the Migration Target from pipenv

    ◼ Dependency resolution and package installation are overwhelmingly fast
    ◼ Leads to faster CI/CD
    ◼ Rapidly gaining popularity and being adopted by major projects
    ◼ Provides a built-in workspace feature that supports top-level bulk locking and bulk installation
    ◼ Enables a monorepo structure using uv alone
  14. 19 Refactoring Policy

    ◼ Migration Work Proceeds in Parallel with Feature Development
    ◼ The goal is to improve readability and simplify deployment scripts by restructuring the directory layout and unifying Python versions.
    ◼ This will help avoid major overhauls even if additional modules are introduced in the future.
    ◼ No unnecessary revisions will be made.
    (Branch diagram: develop, feature, migration)
  15. 20 Refactoring Steps

    Steps (AI / Manual):
    1. Restructure the directories — AI
    2. Unify the Python version — Manual
    3. Reinstall the libraries — Manual
    4. Review the deployment scripts and perform trial & error — Manual, AI
    5. Verify operation — Manual
    6. Rewrite the pipenv configuration files into uv format — AI, Manual
    7. Add monorepo settings to the configuration file — Manual
    8. Review the deployment scripts and perform trial & error — AI, Manual
    9. Verify operation — Manual
    Generative AI: Amazon Q Developer CLI was used
  16. 21 Restructure the directories <AI>

    <Prompt> Create a modules folder and move each module into it. Move another_project_a under modules and rename it to module_a.
    <Prompt> Update the import statements and various script paths to match the new location.
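The prompts above can be sketched as a rough shell equivalent. This is illustrative only: the `restructure` helper is hypothetical, and only the slide's `another_project_a` → `modules/module_a` example is taken from the deck.

```shell
# Hypothetical shell equivalent of the restructuring prompt: create a
# modules/ folder and move each project under it with its new name.
restructure() {
  local root="$1"
  mkdir -p "${root}/modules"
  # The slide's example: another_project_a becomes modules/module_a.
  if [ -d "${root}/another_project_a" ]; then
    mv "${root}/another_project_a" "${root}/modules/module_a"
  fi
  # Import statements and script paths must then be updated to match
  # the new layout (the part the second prompt delegated to the AI).
}
```

In a real repository, `git mv` would be used instead of `mv` so the renames are staged in the same step.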
  17. 26 Review of Deployment Scripts <Manual>

    With the revised directory structure, build and deployment could now be handled by simply repeating the same set of commands for each module. Therefore, at this stage, the build and deployment process was converted into shell scripts, organized into parent and child shells.
  18. 28 Review of Deployment Scripts: Child Shell Concept Parent/child shells

    were introduced because we anticipated deploying individual modules during development as well. Using shell scripts was necessary due to the AWS environment, which required handling many arguments, conditional branches, and preparation steps.
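The parent/child split described above can be sketched as follows. This is a minimal sketch, not the talk's actual scripts: the module names, function names, and the stage argument are assumptions, and the child's real body would contain the AWS build/deploy commands, argument handling, and branching the slide mentions.

```shell
# Hypothetical child shell: stands in for the per-module script that, in
# the real setup, runs that module's build and AWS deployment commands.
deploy_module() {
  local module="$1" stage="$2"
  echo "deploying ${module} (stage=${stage})"
}

# Hypothetical parent shell: iterates over the workspace modules and
# delegates each build/deploy to the child.
deploy_all() {
  local stage="${1:-dev}"
  local module
  for module in module_a module_b module_c; do
    deploy_module "${module}" "${stage}"   # child shell call
  done
}

deploy_all "${1:-dev}"
```

Keeping the per-module logic in the child also lets a developer deploy a single module during development by invoking the child directly, which is the motivation the slide gives for the split.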
  19. 29 Review the deployment scripts and perform trial & error<AI>

    Execute the build and deployment shell scripts via generative AI, and whenever an error occurs, give instructions to fix it and gradually refine the scripts.
  20. 31 Points to Note When Using Generative AI for Migration to uv

    ◼ Instructed the generative AI to migrate from a pipenv Pipfile to a pyproject.toml for uv.
    ◼ At first glance, the migration seemed successful, but the AI attempted to forcefully translate the Pipfile [scripts] section, so we switched to manual migration.
    ◼ We treated the AI-generated output as a trial migration just to get a rough idea of the structure, and then rebuilt it manually.
  21. 33 Add monorepo settings to the configuration file List of

    Modules (Omitted) (Omitted) (Omitted)
  22. 34 Add monorepo settings to the configuration file Common libraries

    such as linters and formatters were moved from each module and consolidated here. (Omitted) (Omitted) (Omitted)
  23. 35 Add monorepo settings to the configuration file Module-specific libraries

    are specified here. (Omitted) (Omitted) (Omitted)
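The configuration shown (omitted) on these slides might look roughly like the following sketch of a uv workspace. The member names, the `shared` package, and the chosen dependencies are assumptions based on the slides, not the project's actual files.

```toml
# Root pyproject.toml (sketch): declares the workspace members so that
# `uv lock` / `uv sync` resolve and install everything from the top level.
[project]
name = "monorepo-root"
version = "0.1.0"
requires-python = ">=3.13"

[tool.uv.workspace]
members = ["modules/*", "shared"]

# Common tooling (linters/formatters) consolidated at the root.
[dependency-groups]
dev = ["ruff"]
```

```toml
# modules/module_a/pyproject.toml (sketch): module-specific libraries,
# with the in-workspace common package referenced as a workspace source.
[project]
name = "module_a"
version = "0.1.0"
requires-python = ">=3.13"
dependencies = ["boto3", "shared"]

[tool.uv.sources]
shared = { workspace = true }
```

With this layout, a single `uv lock` at the root produces one lockfile covering every module, which is the "top-level bulk locking" the deck credits uv's workspace feature with.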
  24. 36 On Merging the Working Branch

    ◼ The migration to uv and the transition to a monorepo structure were carried out in a dedicated working branch.
    ◼ During this period, feature development continued, and we regularly merged updates from the main branch.
    ◼ No major conflicts occurred, likely because the work mainly involved changes to the directory structure and package management, without affecting the business logic.
    (Branch diagram: develop, feature, migration)
  25. 38 What We Postponed

    Task Runner
    • Reason: uv does not provide a task runner. Makefile seemed like the preferred migration target, but there was no decisive choice, and further discussion was needed before moving forward.
    • Resolution: Since we were already using IaC and had a package.json, we temporarily managed tasks there. Because it felt unnatural, we eventually migrated to Makefile.
    Linters / Formatters
    • Reason: Previously used black + isort + flake8. Tried adopting ruff to align with uv, but the formatting results did not match the existing tools, so this was postponed.
    • Resolution: No action was taken during the uv migration period. Later, isort and flake8 were migrated to ruff, while black was kept because its formatting results still did not perfectly align.
    Duplicate Code Resolution
    • Found a method to resolve duplicate code and decided to introduce it as an example. (This is not presented as a best practice.)
  26. 39 Duplicate Code Resolution We created a common module, but

    due to the directory structure, other modules cannot reference it without some adjustments. Not accessible from other modules
  27. 40 Duplicate Code Resolution With this approach, the common module

    could be referenced in the editor, but deployment to AWS Lambda or containers was not possible.
  28. 41 Packaging Duplicate Code and Deploying to AWS Lambda

    ① Packaged the common module; generated dist/shared_module.whl:
      uv run python -m build
    ② Prepare a requirements.txt that adds a relative path to the .whl file, then use it to deploy the Lambda layer, followed by the Lambda itself:
      uv export --format requirements-txt --no-emit-workspace --no-dev --output-file requirements_layer/requirements.txt
      echo "../../../../../shared/dist/shared_module.whl --hash=sha256:<hash value>" >> requirements_layer/requirements.txt
  29. 42 Packaging Duplicate Code and Multi-Stage Container Builds Build the

    common module locally into a container. When building containers for other modules, they also reference the build image of the common module and copy its wheel file into their own image.
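The multi-stage arrangement described above might look roughly like this Dockerfile. It is a sketch under assumptions: the stage names, paths, base image, and the use of `COPY --from` between stages are illustrative, not the talk's actual build files.

```dockerfile
# Stage 1 (sketch): build the common module's wheel.
FROM python:3.13-slim AS shared-build
WORKDIR /src/shared
COPY shared/ .
RUN pip install build && python -m build --wheel

# Stage 2 (sketch): a module image copies the wheel produced by the
# build stage and installs it alongside its own code.
FROM python:3.13-slim
WORKDIR /app
COPY --from=shared-build /src/shared/dist/*.whl /tmp/
RUN pip install /tmp/*.whl
COPY modules/module_a/ .
```

The key point matching the slide is that each module's image never rebuilds the common code; it only copies the already-built wheel from the shared build image.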
  30. 43 [Deferred] Proposal to Use a Repository Service

    This was deferred because it would require setting up a repository service accessible to everyone, and managing authentication and URL retrieval for the service would add extra overhead.
  31. 45 The Impact of Migrating to a Monorepo with uv

    ◼ Reducing the Individual Dependency of CI/CD and Introducing SCA/SAST into CI/CD as a Result
    Pipeline: Git Push → Static Analysis (flake8, later migrated to ruff) → Automated Testing (pytest) → SCA (library security checks) → SAST (source code security checks) → Build → Deploy
  32. 46 The Impact of Migrating to a Monorepo with uv

    ◼ A System for Mass-Producing Modules Was Established
    ◼ New improvements have started to emerge, such as rewriting operational shell scripts (other than build and deployment) into Python.
  33. 47 Notes on SCA and SAST

    ◼ SCA uses Amazon Inspector to check libraries.
    ◼ SAST uses Semgrep (Community Edition) to check source code.
    ◼ Since Inspector does not support uv.lock as a scan target, in CI/CD we output a requirements.txt from uv and let Inspector check that instead:
      uv export --format requirements-txt --no-emit-workspace --no-dev --output-file requirements_layer/requirements.txt
  34. 49 Migration Steps from a Complex Repository to a Monorepo Structure with uv

    Steps (AI / Manual), with remarks:
    • Revisiting the Runtime Version Manager — AI: ー / Manual: Good. The effectiveness of using generative AI for this task has not been verified.
    • Restructure the directories — AI: Good / Manual: Partial. Generative AI is better suited for ensuring path consistency.
    • Unify the Python version — AI: ー / Manual: Good. There is no real benefit to using generative AI here.
    • Reinstall the libraries — AI: ー / Manual: Good.
    • Review the deployment scripts and perform trial & error — AI: Partial / Manual: Good. It is better to handle this carefully by hand; for the overall framework, it is recommended to build it manually.
    • Trial and Error with Deployment Scripts — AI: Good / Manual: Partial. When left to generative AI, scripts tend to become verbose, but since this requires multiple rounds of trial and error, it may be better to let generative AI handle it.
    • Rewrite the pipenv configuration files into uv format — AI: Partial / Manual: Good. Be aware of potential hallucinations.
    • Add monorepo settings to the configuration file — AI: Partial / Manual: Good. Be aware of potential hallucinations.
    • Review of Deployment Scripts — AI: Partial / Manual: Good. Same as with the first deployment script.
    • Trial and Error with Deployment Scripts — AI: Good / Manual: Partial.
  35. 50 Conclusion

    ◼ While uv alone can handle runtime management, using asdf + uv is also convenient.
    ◼ Because uv is fast, it also contributes to speeding up CI/CD.
    ◼ Generative AI can be helpful for migration to uv and for restructuring into a monorepo with uv, but be cautious of hallucinations.
    ◼ Once you introduce uv, you may want to add tools like ruff; however, full compatibility is difficult, so phased migration or coexistence with existing tools should be considered.
    ◼ In a monorepo structure, handling common modules can be challenging, but approaches such as using wheels or multi-stage builds are available.