You wrote some code.
It uses a library, libfoo.
libfoo is not a thing in stasis,
a little island of perfection birthed into an ideal of perfect form and function.
libfoo has a version,
libfoo changes as features are added and bugs are fixed.
You do not “depend on libfoo”,
you depend on some specific version of libfoo,
with its behaviour at that time.
If you’re writing a service / application / tool or some other thing where you’re the end product and control your dependency chain,
then specify your dependencies exactly.
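As a sketch of what “exactly” means for a Python application (package names and version numbers here are purely illustrative), a pinned requirements.txt lists every dependency, transitive ones included, at one known version:

```text
# requirements.txt - every dependency, one exact version
libfoo==1.9.3
libbar==2.4.1
requests==2.9.1
```

A quick way to capture the versions you are actually running is `pip freeze > requirements.txt`.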
If you’re writing a library,
you have a harder time because you need to fit with other things,
but at the VERY least specify an upper bound -
you’re not prescient,
and you’ve probably got little idea what upstream will do in their next release.
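For a Python library, a minimal sketch of “ranges, but always with an upper bound” in a setup.py (library name and bounds are illustrative, not a recommendation for any real package):

```python
# setup.py - a library declares ranges, not exact pins,
# but every range has an upper bound
from setuptools import setup

setup(
    name="mylib",              # hypothetical library
    version="0.3.0",
    install_requires=[
        "libfoo>=1.4,<2.0",    # known good from 1.4; excluded at the next major
        "libbar>=2.3,<2.5",    # tighter bound for a less-trusted dependency
    ],
)
```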
Unpinned dependencies mean when Alex the new contributor turns up, they’ve got Extra Fun Barriers To Entry as they have to work out whether tests failed on their PR because they did something wrong, or because they happened to pick up a new version of a dependency.
Unpinned dependencies mean when Bobby the user is debugging a problem, they’ve got extra work to try to find out whether their problem is because they’re using different versions to other people, and they’ve a harder time seeing what’s actually being tested and used.
If you’re at the top of the dependency chain, then to not be explicit about what dependency versions you need / allow / support / test against presents a hostile UX to users and contributors.
But how do we stay up to date?
Fortunately there’s a bunch of services out there that you can point at your codebase and they’ll automatically file PRs to update dependencies (Dependabot and Renovate, for example).
That way you can ensure your code is always explicit about dependencies AND keeps tracking latest versions.
But what about if you’re not top of the chain?
Everything is terrible; good luck to you.
Seriously, you face a massive combinatorial explosion, with little in the way of helpful tooling.
As an example,
django12factor depends on four other libraries - to test on just two versions of each of those means 16 test runs (2⁴),
multiplied again by the number of different versions of Python I care about.
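The explosion is easy to see with a toy matrix. The dependency names and version numbers below are hypothetical, as is the choice of three Python versions; only the shape of the problem matches the django12factor example:

```python
from itertools import product

# Hypothetical version matrix: two versions of each of four
# dependencies, crossed with the Python versions under test.
dep_versions = {
    "libA": ["1.0", "1.1"],
    "libB": ["2.3", "2.4"],
    "libC": ["0.9", "1.0"],
    "libD": ["3.1", "3.2"],
}
pythons = ["2.7", "3.4", "3.5"]

# Every combination needs its own test run.
combos = list(product(pythons, *dep_versions.values()))
print(len(combos))  # 2**4 = 16 per Python, times 3 Pythons = 48 runs
```

Add one more dependency, or one more version of anything, and the count multiplies again.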
You’re unlikely to be able to explicitly pin your dependency versions because you’ve no idea what upstream or its other dependencies might need, plus you don’t necessarily want to be shipping a new release just because downstream did so too.
At least pin the dependency versions you use for testing, so that there’s One True Known Good version set, and you can be more confident about the stability of your stable branch.
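One way to express that One True Known Good set without constraining what users install is a pip constraints file used only by CI (names and versions illustrative):

```text
# constraints.txt - the single known-good set, used only for testing
libfoo==1.9.3
libbar==2.4.1
```

CI then installs with `pip install -c constraints.txt -e .`, so tests always run against the same versions while the library’s own declared ranges stay loose.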
Vendoring in your dependencies should reduce interoperability problems, but tooling support there seems limited at best.
Everything? Absolutely everything? Really?
Sweeping generalities are a useful rhetorical device. In practice nobody really wants to say something like “This requires this specific point release of Python 3.5 / Spring 4.2 / etc.”
For a dependency you trust to maintain compatibility, sure, pin the major and minor version numbers (or equivalent if they do something very funky and not SemVer).
As an absolute minimum, provide an upper bound on the dependency version.
You may be confident in the entire 1.x series,
but chances are that if/when they ship a
2.x then things will be substantially different.
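What the upper bound buys you can be sketched with a naive version comparison. This is a toy: real installers implement the full PEP 440 rules (or SemVer equivalents), and the version strings here are made up.

```python
def parse(version):
    # Naive dotted-numeric parse: "1.9.3" -> (1, 9, 3).
    # Real tools implement full PEP 440 parsing instead.
    return tuple(int(part) for part in version.split("."))

def in_range(version, lower, upper):
    # Half-open range [lower, upper): the ">=1.4,<2.0" convention.
    return parse(lower) <= parse(version) < parse(upper)

print(in_range("1.9.3", "1.4", "2.0"))  # a 1.x release satisfies the bound
print(in_range("2.0.1", "1.4", "2.0"))  # a 2.x release is excluded
```

The `<2.0` bound is what keeps that substantially-different 2.x from being silently installed under you.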
Isn’t this just the static vs dynamic linking argument all over again?
Kind of, yes
(I appreciate that it was nice to not need to update Every Single Thing Ever to deal with the recent
glibc security vulnerability).