Frankly, I’m surprised that I can’t find more blog posts on this topic.
I can’t imagine that I’m the only software engineer who has encountered dependency management of the form “it’s somewhere on the K: drive”. And yet, when I search online, I can’t find anyone griping about this. All I can find is this one blog post by Sonatype that goes on to advertise their Nexus repository manager.
In our case, we use environment variables for all our builds. Or so I thought. Last week, I encountered one project that was making an exception and doing its own thing: it was explicitly calling a tool on the K: drive and linking against libraries on that drive. Not only that, it didn’t use environment variables but hardcoded the path to the network drive in the build script and the VS project settings.
I ran into this when I wanted to add a build configuration for this project to TeamCity. The build kept failing because our TeamCity server does not have access to the network drive. And why should it? Our network drive contains loads of things, most of them unrelated to building the software. I wouldn’t want anyone, or anything, that doesn’t need it to have access. I’ve set up the TeamCity build configurations such that all dependencies are in D:\Dependencies on the build server, and I’ve set the environment variables accordingly. Whenever a dependency is added to a project, I add it to D:\Dependencies on the build servers and add environment variables to the projects/builds. I use the same approach on my local machine, by the way, but that is beside the point.
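To sketch what this looks like in practice (the variable names below are made up for illustration, not taken from our actual projects), a small pre-build check can resolve every dependency root from an environment variable and fail fast with a readable message when one is missing, instead of dying halfway through the build on a dead path like K:\Tools\SomeTool:

```python
import os
import sys

# Hypothetical dependency roots -- in reality these would match whatever
# the project links against, e.g. SOMETOOL_HOME=D:\Dependencies\SomeTool.
REQUIRED_VARS = ["SOMETOOL_HOME", "SOMELIB_HOME"]

def resolve_dependencies(env=None):
    """Return a dict mapping each required variable to its dependency path.

    Exits with an error that lists every missing variable, so a
    misconfigured build agent fails immediately and obviously.
    """
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED_VARS if name not in env]
    if missing:
        sys.exit("Build misconfigured; missing environment variables: "
                 + ", ".join(missing))
    return {name: env[name] for name in REQUIRED_VARS}
```

The same idea carries over to the VS project settings themselves, where a property like $(SOMETOOL_HOME) can stand in for a hardcoded K: path.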
So anyway, I couldn’t build this project, the first of many, and found out it was because it accessed the network drive, which the build servers can’t reach. Can we not use the network drive? Is that such a bad thing?
I’ve seen network drives used for build artifacts, and I don’t think that’s a problem in itself. In TeamCity, I can specify that a build creates artifacts, and I can add a build step that pushes those artifacts to a network drive, no problem. The crucial difference is that there’s a failsafe: if the network drive is unreachable, the build can still continue, and its artifacts can be downloaded manually from TeamCity.
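Here’s a rough sketch of that failsafe (paths and names are illustrative, not our actual setup): the push step swallows network errors and merely warns, so an unreachable share never fails a build whose artifacts TeamCity already holds:

```python
import shutil
from pathlib import Path

def push_artifacts(artifacts, share_dir):
    """Copy build artifacts to a network share without ever failing
    the build: if the share is unreachable, report it and carry on,
    since the artifacts can still be downloaded from TeamCity itself.
    Returns the list of artifacts that were actually copied."""
    copied = []
    try:
        target = Path(share_dir)
        target.mkdir(parents=True, exist_ok=True)
        for artifact in artifacts:
            shutil.copy2(artifact, target)
            copied.append(artifact)
    except OSError as err:
        print(f"Warning: could not push artifacts to {share_dir}: {err}")
    return copied
```

The point is the broad except around the copy loop: the network drive is a convenience here, not a hard dependency.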
When it comes to build dependencies, however, problems that may arise will block the build. For example, what if the network (drive) is down? If neither your continuous integration server nor your developers can continue working on their local machines, because compiling requires a connection to a network drive, development will simply come to a screeching halt when the network (drive) is down.
Even if the network drive can be reached, two builds may not be able to run at the same time. If several developers want to build the project locally, in addition to a continuous integration server running the builds, there may be conflicts and they may end up blocking each other.
Why risk that?
Hardcoding paths to build dependencies means that you’ve basically locked the dependency down. You can’t move it unless you want to manually open all your projects and fix their hardcoded paths. I’ve seen situations where a tool was located at PATH/TO/TOOL/1.03, but the tool inside that folder was version 2.5! The developer had simply deleted everything in the “1.03” folder and copy-pasted the new version of the tool into it.
I can’t see any reason to prefer hardcoded paths over use of an environment variable.
My point here is: limit the location of dependencies to your VCS and a local drive. Use environment variables to ensure that moving dependencies around (or upgrading them) is a simple procedure. And don’t deviate from this approach unless you can come up with a really, really good reason why hardcoding a path to a network drive would be necessary and preferred over using an environment variable or relative path in your VCS.
I’ve encountered a few developers who look at me in a funny way and say, “Why worry about such a small thing?” And maybe they have a point, because I do tend to make a fuss about the little things (*). Then again, all those small things eventually add up. “What does it matter if this one project uses hardcoded paths to a network drive?” may turn out to mean “We have no guidelines on how to set up the structure of our repository and dependencies.” If you end up with projects having hardcoded paths all over the place, not only can you strike the word ‘management’ from ‘dependency management’, you’ll also get into trouble as soon as someone decides to make changes to the network. (Think: let’s lock down this network drive to everyone except accounting, and create a new drive for development.) So I prefer to tackle this here and now. I’ve asked the developer who set up the project to create and use an environment variable, so that all dependencies used by the build configurations on our continuous integration server are managed the same way.
(*) Like: don’t give a project name the suffix ‘Lib’ when it’s a web application that can be built and run standalone. That’s not a library!