Pipenv: The Newer Dependency Manager

Dylan Anthony - May 3 '19 - Dev Community

This is part of a series explaining the different options for managing Python dependencies. Make sure you've read the previous two posts; this one builds on that information.

TL;DR

Pipenv is a tool that saves you a few steps in development and gives you the ability to lock dependencies. However, it is very slow and quite opinionated, and the defaults will get you into trouble if you aren't careful.

Components

This list is very similar to that of the vanilla stack; Pipenv just bundles a few of the components together.

  • Pipenv: Used to install dependencies and manage a virtual environment
  • Pipfile: Used to declare both production and development dependencies
  • Pipfile.lock: A file used to synchronize exact versions of dependencies between environments
  • setup.py: File used to set metadata for and create your package
  • setuptools: Used for building your project
  • twine: Used to upload your project to a PyPI server
  • pip: Used for some distribution cases

Summary

The Pipenv stack is basically an improved vanilla stack with a few tradeoffs: the developer experience is (mostly) better, while the distribution process is a little worse. For smaller projects I think Pipenv makes sense; for bigger projects, you're better off finding another solution.

Development

TL;DR

The Pipenv stack saves you time by mostly abstracting away the commands related to the virtual environment. It also lets you separate your requirements (the range of dependency versions your project accepts) from the last tested versions (the exact set you know works with your project).

Defining Requirements

  • 👍 Dev Requirements: One of the major improvements over the vanilla stack is the ability to declare development requirements in the same file as the production requirements. Do this either by manually adding to the [dev-packages] section of the Pipfile, or by doing pipenv install --dev [package_name]
  • 😕 Reproducibility: While having a lock file you can use is a huge step up from the vanilla stack, Pipenv makes quite an effort to get you to constantly update your packages. You can use pipenv sync to install exactly what's in your lock file, but most other commands (install, update, etc.) will re-resolve and update your packages and lock file, making it hard to maintain reproducibility.
  • 😨 Adding new packages: Because Pipenv will try to update your entire project whenever it can, adding new packages becomes a real pain. Basically, you can't do it unless you're manually pinning full versions (e.g. == 2.3.2) or are ready to review and test everything that updated. What's worse, if you use the pipenv install command to add the package to both your environment and your Pipfiles, it will default to a wildcard (*), meaning any version of that package is accepted as an update (even breaking changes!). On top of all that, Pipenv is very slow. In short, Pipenv just makes it a chore to add new packages to your project (the Pipfile sketch after this list shows pinned versions next to the default wildcard).
  • 😊 Alternative sources: Pipenv does this well. Just add your new repository as a source in your Pipfile and reference any credentials with environment variables. Easy peasy.
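
To make those points concrete, here's a minimal Pipfile sketch. The package names, index URL, and environment variable names are all illustrative (not from the post); the comments call out the pinned-version, wildcard, and alternative-source behaviors described above.

    [[source]]
    name = "pypi"
    url = "https://pypi.org/simple"
    verify_ssl = true

    # Alternative source; credentials are referenced as environment variables
    # so they never land in version control. URL and variable names are made up.
    [[source]]
    name = "private"
    url = "https://${PRIVATE_INDEX_USER}:${PRIVATE_INDEX_PASSWORD}@pypi.example.com/simple"
    verify_ssl = true

    [packages]
    requests = "==2.21.0"   # manually pinned, safe from surprise updates
    click = "*"             # the wildcard that pipenv install writes by default

    [dev-packages]
    pytest = "*"            # dev-only dependency, added with pipenv install --dev

    [requires]
    python_version = "3.7"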

Virtual Environments

  • 😌 Setup: A breeze. pipenv sync, then go make some coffee. When you come back you'll have a shiny new virtual environment which is still installing your dependencies, but it only took one command.
  • 🙂 Usage: Pipenv greatly simplifies the virtual environment workflow over the vanilla stack (there's a command sketch after this list):
    • Installing requirements: If you want what was last tested, use pipenv sync. If you want the latest packages that conform to your constraints, use pipenv install. I know I'm beating a dead horse here but it's hard to overstate how slow Pipenv is to resolve and install packages.
    • Running: pipenv run python [your script]
    • Activating the environment: pipenv shell
    • Deactivating: exit (pipenv shell spawns a subshell, so you leave it rather than running deactivate)
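
Put together, a typical day-to-day session looks something like this; the script name and package are placeholders I've made up, not from the post.

    # Install exactly what's in Pipfile.lock, including dev packages
    pipenv sync --dev

    # Add a new dev dependency (this also re-locks, as noted above)
    pipenv install --dev pytest

    # Run something inside the virtual environment without activating it
    pipenv run python my_script.py

    # Or drop into an activated subshell, then leave it when you're done
    pipenv shell
    exit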

Distribution

TL;DR

Anything you've liked about Pipenv so far pretty much goes out the window when it comes to distribution. You have to use the same tools as you would with the vanilla stack, made more difficult by the extra step of getting from a Pipfile back to ye olde requirements format.

Build

  • 😿 Definitions: Everything still has to live in your setup.py, even your requirements. That means you either export a requirements.txt file from your Pipfile before each build (which will update all your dependencies to the latest versions, undoing any testing you just finished!), or you declare your dependencies in setup.py and install your own project as a dependency in your Pipfile, which pretty much takes the Pipfile out of the process and revokes most of the features you gained by choosing Pipenv. The export step is sketched below.
  • 🙄 Building: Exactly the same as it was in the vanilla stack
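
Here's roughly what that export-then-build dance looks like. The -r flag on pipenv lock was the way to export at the time of writing; newer Pipenv releases move this to a separate pipenv requirements command, so check your version before copying this.

    # Export the locked dependencies to the old requirements format.
    # Double-check the output before building, since locking can pull
    # in newer versions than the ones you just tested.
    pipenv lock -r > requirements.txt

    # Then build exactly as you would with the vanilla stack
    python setup.py sdist bdist_wheel

    # And upload with twine
    twine upload dist/*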

Deploy

  • 🥵 PyPI: Exactly the same as it was in the vanilla stack
  • 🙅‍ Source Distribution: Okay, last time, I promise: installing dependencies with Pipenv is so slow that you should just export to a requirements file and set up your server the old-fashioned way (sketched after this list). Seriously, putting Pipenv on a server is not worth it.
  • 😪 Prebuilt Dependencies: This is just like the vanilla stack but now with the added step of exporting your Pipfile to a requirements file that pip can use.
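
For reference, the "old-fashioned way" on the server is just a plain virtual environment plus pip, fed by the requirements.txt you exported on your development machine. The paths here are placeholders.

    # On the server: create a plain venv and install the exported requirements
    python3 -m venv /srv/myapp/venv
    /srv/myapp/venv/bin/pip install -r requirements.txt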

Conclusion

To be honest, I'm not sure that Pipenv is worth it. In my experience, it ends up costing more time than it saves, but your mileage may vary. It is technically a part of PyPA now, so hopefully they'll be improving it, but given what I've read in the issues on GitHub, that seems unlikely. The maintainers are very opinionated and don't much care how other people use the tool. When picking a stack, I'd steer clear of Pipenv.
