# Dagger 2, 2 Years Later

2019-08-27

…in software, feedback cycles tend to be on the order of months, if not years…It’s during the full lifetime of a project that a developer gains experience writing code, source controlling it, modifying it, testing it, and living with previous design and architecture decisions during maintenance phases. With everything I’ve just described, a developer is lucky to have a first try of less than six months…

–Erik Dietrich, “How Developers Stop Learning: Rise of the Expert Beginner”

A few years ago, we started using Dagger 2 in our applications. We saw some quick wins and were able to do some neat things like mock mode for testing and better support our white-labelling process. However, as time went on, several members of our team developed an aversion to working on the Dagger code, and I must admit that even I occasionally found it frustrating to work with.

I want to say a little about why folks were frustrated and how I think we might have avoided that frustration. I still think using Dagger is a good idea, but there are some things I might have done differently in how we adopted Dagger if we were starting today.

## Object-Graph First, Dagger Second

If you’re writing an Object-oriented program, then you have objects that depend on each other. These objects and dependencies can be thought of as an object graph, where the nodes are objects and the edges are dependency relationships.
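To make the graph idea concrete, here's a minimal sketch in plain Kotlin. The names (`Cache`, `ApiClient`, `UserRepository`) are invented for illustration; the point is that constructor parameters make the graph's edges explicit:

```kotlin
// Each constructor parameter is an edge in the object graph.
class Cache
class ApiClient(val cache: Cache)          // ApiClient -> Cache
class UserRepository(val api: ApiClient)   // UserRepository -> ApiClient

// Wiring the graph by hand, root last. This is the code Dagger
// would generate for you.
fun buildGraph(): UserRepository {
    val cache = Cache()
    val api = ApiClient(cache)
    return UserRepository(api)
}
```

When the edges are this visible, it's also visible when they stop making sense.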

When we first introduced Dagger into our code base, our object graph was a mess. Dependency relationships weren’t always clear (thanks, singletons) and when they were clear, they didn’t always seem sensible (e.g., Why does this depend on a Context?).

By aggressively adopting Dagger with an existing messy object graph, we effectively enshrined our messy dependency relationships; we made it more difficult to change those relationships, and because the underlying graph was hard to understand, the Dagger code built on top of it was also more complicated than it needed to be.

One concrete way this played out for us was how difficult it was to swap out dependencies for testing and white-labeling purposes. Because overriding modules isn’t supported/recommended in Dagger 2, the docs actually recommend some up-front design in how Dagger modules are structured. As you can imagine, sensibly setting up Components and Modules to support swapping dependencies can be tricky when the object graph is itself a mess.
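The swap we wanted Dagger modules to give us looks something like this in plain Kotlin (no Dagger annotations, so the sketch stays self-contained; `Analytics`, `Checkout`, and friends are hypothetical names, not from our codebase):

```kotlin
// Depending on an interface is what makes the swap possible at all.
interface Analytics {
    fun track(event: String)
}

class RealAnalytics : Analytics {
    override fun track(event: String) { /* imagine a network call */ }
}

// Test / white-label stand-in that just records events.
class RecordingAnalytics : Analytics {
    val events = mutableListOf<String>()
    override fun track(event: String) { events += event }
}

class Checkout(private val analytics: Analytics) {
    fun purchase() {
        analytics.track("purchase")
    }
}
```

A Dagger module would pick which `Analytics` to bind; but if the graph is a mess, deciding where that binding lives (which Component, which Module) is exactly the part that gets tricky.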

I think this mistake was partially driven by a poor understanding of what Dagger is for: it’s just a library that just helps you write less code to create your object graph. The object graph is the thing you really care about, and it’s the thing that should drive how you adopt Dagger in your app.

Letting the object graph drive your Dagger adoption could mean a few things. It could mean waiting to adopt Dagger until your graph is cleaned up. It could also mean refraining from adding objects to Dagger when you can’t do so in a way that moves you towards your desired object graph (instead of the one where that random object somehow depends on a Context).

## Maybe cool it with the DI

Consider the following code:

```kotlin
class View(private val context: Context) {

    private val children = mutableListOf<View>()
}
```


mutableListOf returns an ArrayList, which means that View depends on a concrete implementation of List, which means we’re violating the “dependency inversion principle” (one of the SOLID principles), which states:

Depend upon Abstractions. Do not depend upon concretions.1

Although we’re violating SOLID here, I suspect few of us would claim that we need to inject a List instead of having View create its own. Indeed, Uncle Bob himself may not even have a problem with this code since he says:

…if you have tried and true modules that are concrete, but not volatile, depending upon them is not so bad. Since they are not likely to change, they are not likely to inject volatility into your design.2

Unfortunately for us, although DI was often unnecessary when we were depending on stable parts of our code, I was in a sort of do DI by default mode after we adopted Dagger. After all, I thought, Dagger makes DI so easy, why not just default to using DI, especially since — to quote Uncle Bob again — “Non-volatility is not a replacement for the substitutability of an abstract interface."3
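Here's a sketch of the distinction I wish I'd applied, with invented names (`Order`, `PaymentGateway`): inject the volatile dependency, and just create the stable one.

```kotlin
// Volatile: implementations change, so inject it behind an interface.
interface PaymentGateway {
    fun charge(cents: Int): Boolean
}

class Order(private val gateway: PaymentGateway) {
    // Stable: MutableList isn't going to change out from under us,
    // so there's no payoff for injecting it.
    private val items = mutableListOf<Int>()

    fun add(priceCents: Int) { items += priceCents }
    fun checkout(): Boolean = gateway.charge(items.sum())
}
```

The `PaymentGateway` earns its Dagger wiring; the list never would have.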

As you can imagine, doing this with an object graph that was messy meant that restructuring our Dagger-encrusted object graph was even more difficult. Using Dagger with a messy underlying object graph turns DI into a liability rather than a benefit, especially if your team likes to use interface–implementation pairs (which, for the record, I think are often a bad idea; I'm with Fowler on this).

This isn’t just a rehash of the above “object-graph first” point: If we could start over, I’d probably cool it with the DI, even if I could add an object to Dagger’s graph in a sensible way. Needing additional Dagger code to support injecting an interface costs something, and in some cases, that trade-off makes about as much sense as using Dagger to inject a List into the above View.

## Flattening the Learning Curve

Dagger isn’t trivial to learn, and if it’s used heavily in a codebase, it can be pretty intimidating. This is true for a few reasons:

• It generates code, so it appears to be magic to people who aren’t familiar with how it works
• Many of the resources for learning Dagger assume some familiarity with dependency injection and previous DI libraries
• The naming of central elements of the Dagger API (namely, Component, Subcomponent, and Module) gives us little help in understanding their purpose

Unfortunately, the docs don’t do a great job of conveying the broader historical context and design considerations that went into the creation of Dagger, and those considerations go a long way toward addressing the issues above. I found conference talks about Dagger to be extremely helpful here.

Gregory Kick’s talk is linked in the user guide, but it’s easy to gloss over. Note to future self: it’s worth the hour-long detour for the team to watch. The Dagger 2 design document linked at the end of Kick’s slides also has some useful context for understanding the why behind Dagger 2’s design, and it has some useful comments on the distinction between component dependencies and Subcomponents.

Jake Wharton’s Dagger 2 talk is also very helpful in understanding how the code generation works and gives some insight into how to think about Components (they expose roots of an object graph).

## Notes

1. Bob Martin, “Design Principles and Design Patterns," 13. ↩︎

2. Ibid., 14. ↩︎

3. Ibid. ↩︎

