You might be better off without a DI framework
Introduction
ASP.NET Core has a built-in DI framework, which it uses internally to manage its own dependencies. Opting out isn’t straightforward, so most ASP.NET Core projects use it by default without seriously considering alternatives.
Like any tool, a DI framework is not a silver bullet. It has real downsides that are worth weighing:
- It can introduce a lot of boilerplate.
- It encourages excessive use of interfaces, which makes code navigation harder.
- It can make testing harder: objects built by the container can become difficult to construct without it.
- Dependency/configuration errors are often caught at runtime rather than compile time.
Constructor injection and boilerplate code
One of the big OSS projects that uses the default DI framework in .NET is Umbraco-CMS. Let’s look at the logout endpoint implementation.
[UmbracoMemberAuthorize]
public class UmbLoginStatusController : SurfaceController
{
    private readonly IMemberSignInManager _signInManager;

    public UmbLoginStatusController(
        IUmbracoContextAccessor umbracoContextAccessor,
        IUmbracoDatabaseFactory databaseFactory,
        ServiceContext services,
        AppCaches appCaches,
        IProfilingLogger profilingLogger,
        IPublishedUrlProvider publishedUrlProvider,
        IMemberSignInManager signInManager)
        : base(umbracoContextAccessor, databaseFactory, services,
            appCaches, profilingLogger, publishedUrlProvider)
        => _signInManager = signInManager;

    [HttpPost]
    [AllowAnonymous]
    [ValidateAntiForgeryToken]
    [ValidateUmbracoFormRouteString]
    public async Task<IActionResult> HandleLogout(
        [Bind(Prefix = "logoutModel")] PostRedirectModel model)
    {
        var isLoggedIn = HttpContext.User.Identity?.IsAuthenticated ?? false;
        if (isLoggedIn)
        {
            await _signInManager.SignOutAsync();
        }

        // ...
    }
}
This is a lot of code for something that “just” proxies a sign-out call. Most of the injected dependencies aren’t even used by this endpoint directly—they’re only there because the base class constructor requires them. They provide context/environment, but they also obscure the important parts and reduce readability.
Now compare that to an example that doesn’t use a DI framework:
public abstract class ApiController : ControllerBase
{
    protected AppState App;
    protected Db Db => App.Db;
    protected Settings Settings => App.Settings;

    // This is called by a custom IControllerFactory
    [NonAction]
    public void Initialize(AppState app)
    {
        App = app;
    }
}
[Route($"{Settings.ApiBaseUri}/quota")]
public class QuotaController : ApiController
{
    [HttpPost]
    [Route("set")]
    [Authorize(Roles = "admin")]
    public ActionResult SetQuota(SetQuotaRequest r)
    {
        Db.Transaction(ctx => QuotasRow.SetQuota(
            r.Username,
            r.Used,
            r.Limit,
            ctx));
        return Ok();
    }
}
The idea is to wrap the infrastructure context into a single object and pass it to the controller during initialization. That reduces boilerplate and improves readability.
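The factory side can be sketched like this (an illustration, not the actual implementation: `AppControllerFactory` is a hypothetical name, and the `AppState`/`ApiController` types come from the example above). ASP.NET Core lets you replace controller creation by registering your own `IControllerFactory`:

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Controllers;

// Hypothetical factory that news up controllers and hands them the shared AppState.
// Registered once at startup:
//   services.AddSingleton<IControllerFactory, AppControllerFactory>();
public class AppControllerFactory : IControllerFactory
{
    private readonly AppState _app;

    public AppControllerFactory(AppState app) => _app = app;

    public object CreateController(ControllerContext context)
    {
        var type = context.ActionDescriptor.ControllerTypeInfo.AsType();

        // Controllers have parameterless constructors, so no container is needed.
        var controller = Activator.CreateInstance(type)!;

        if (controller is ApiController apiController)
        {
            apiController.Initialize(_app);
        }

        return controller;
    }

    public void ReleaseController(ControllerContext context, object controller)
    {
        (controller as IDisposable)?.Dispose();
    }
}
```

Because every controller receives the same single object, adding a new piece of infrastructure means extending `AppState` in one place rather than threading a new constructor parameter through every controller.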
Excessive use of interfaces (interface bloat)
Interfaces help separate implementation from contract. One common reason to introduce them is to make external resources testable (e.g., HTTP APIs, databases, file systems).
Unfortunately, in DI-heavy codebases this is often taken too far: an interface gets created for every service registered in the container. Often the interface even lives in the same file as the implementation:
public interface ISeriesService
{
    Series GetSeries(int seriesId);
    List<Series> GetSeries(IEnumerable<int> seriesIds);
    Series AddSeries(Series newSeries);
    List<Series> AddSeries(List<Series> newSeries);
    // ...
}

public class SeriesService : ISeriesService
{
    // injected fields (_seriesRepository, _eventAggregator) and constructor elided

    public Series GetSeries(int seriesId)
    {
        return _seriesRepository.Get(seriesId);
    }

    public List<Series> GetSeries(IEnumerable<int> seriesIds)
    {
        return _seriesRepository.Get(seriesIds).ToList();
    }

    public Series AddSeries(Series newSeries)
    {
        _seriesRepository.Insert(newSeries);
        _eventAggregator.PublishEvent(new SeriesAddedEvent(GetSeries(newSeries.Id)));
        return newSeries;
    }

    // ...
}
This tends to happen because:
- We follow convention for “consistency” without re-evaluating whether it helps.
- We want to mock a dependency because it’s hard to construct due to too many transitive dependencies.
The result is unnecessary boilerplate, slower refactoring (every signature change happens in two places), and an extra layer of indirection that makes code harder to read.
I’d argue that when dependencies are managed manually, this happens less. You have to construct objects yourself, so you naturally think harder about what depends on what—and what should be simplified.
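For contrast, here is a sketch of the same idea with the mirror interface removed (all names are hypothetical). In-process collaborators are used as concrete classes; an interface survives only at the genuinely external boundary:

```csharp
using System.Collections.Generic;

// External HTTP API: worth abstracting, because tests should not call the network.
public interface IMetadataApi
{
    string FetchTitle(int seriesId);
}

// In-process collaborator: concrete class, no mirror interface.
public class SeriesRepository
{
    private readonly Dictionary<int, string> _titles = new();
    public void Insert(int id, string title) => _titles[id] = title;
    public string Get(int id) => _titles[id];
}

// Also concrete: no ISeriesService, so a signature change happens in one place.
public class SeriesService
{
    private readonly SeriesRepository _repository;
    private readonly IMetadataApi _metadata;

    public SeriesService(SeriesRepository repository, IMetadataApi metadata)
    {
        _repository = repository;
        _metadata = metadata;
    }

    public string GetTitle(int seriesId) => _repository.Get(seriesId);

    public void AddSeries(int seriesId) =>
        _repository.Insert(seriesId, _metadata.FetchTitle(seriesId));
}
```

Navigation also improves: "go to definition" lands on real code instead of an interface declaration.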
Creating dependencies manually becomes hard
With a DI framework, adding a new dependency is trivial. That convenience often means we stop being strict about dependency growth, and constructors quietly accumulate 10+ parameters.
When you later want to write unit tests, that becomes painful: you can no longer construct the class easily. What often happens is that real implementations get replaced with mocks, which can make tests fragile and low-value:
- You test wiring and expectations rather than behavior.
- You duplicate behavior in mock setups.
- When call signatures change, tests break even if the behavior you care about didn’t.
public Attempt<string?> IsAuthorized(
    IUser? currentUser,
    IUser? savingUser,
    IEnumerable<int>? startContentIds,
    IEnumerable<int>? startMediaIds,
    IEnumerable<string>? userGroupAliases)
{
    var currentIsAdmin = currentUser?.IsAdmin() ?? false;

    // a) A non-admin cannot save an admin
    if (savingUser != null)
    {
        if (savingUser.IsAdmin() && currentIsAdmin == false)
        {
            return Attempt.Fail("The current user is not an administrator so cannot save another administrator");
        }
    }

    // ...
}
[Test]
public void Non_Admin_Cannot_Save_Admin()
{
    var currentUser = CreateUser();
    var savingUser = CreateAdminUser();

    var contentService = new Mock<IContentService>();
    var mediaService = new Mock<IMediaService>();
    var entityService = new Mock<IEntityService>();

    var authHelper = new UserEditorAuthorizationHelper(
        contentService.Object,
        mediaService.Object,
        entityService.Object,
        AppCaches.Disabled);

    var result = authHelper.IsAuthorized(
        currentUser,
        savingUser,
        new int[0],
        new int[0],
        new string[0]);

    Assert.IsFalse(result.Success);
}
Testing quickly becomes a burden. Since the dependencies aren’t used in this particular test, we’re mostly providing placeholders. If we needed to mock specific calls, maintenance would get worse: whenever parameters are added, many mocks must be updated even when the new parameters don’t matter to the test.
If you end up with code like this, it’s often better to use the DI framework in tests as well—at least you avoid duplicating the construction logic by hand.
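With the built-in container that can look like this (a sketch; the concrete `ContentService`, `MediaService`, and `EntityService` registrations are assumed, and in practice you would reuse the application's own registration code rather than repeat it):

```csharp
using Microsoft.Extensions.DependencyInjection;

// Build the object graph the same way the application does,
// then resolve the class under test from the container.
var services = new ServiceCollection();
services.AddSingleton<IContentService, ContentService>();
services.AddSingleton<IMediaService, MediaService>();
services.AddSingleton<IEntityService, EntityService>();
services.AddSingleton(AppCaches.Disabled);
services.AddSingleton<UserEditorAuthorizationHelper>();

using var provider = services.BuildServiceProvider();
var authHelper = provider.GetRequiredService<UserEditorAuthorizationHelper>();
```

When a constructor gains a parameter, only the registrations change, not every test.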
When dependencies are manually managed, objects can be easier to construct and there’s less pressure to mock everything. Fake objects are usually enough for truly external resources (e.g., network calls). Fakes are often more flexible than mocks, allow better state management, and are easier to maintain.
public abstract class TestBase
{
    protected AppState App;
    protected Db Db => App.Db;
    protected Settings Settings => App.Settings;

    protected readonly FakeAiClient Ai;

    protected TestBase()
    {
        // We replace the real AI client with a fake one!
        Ai = new FakeAiClient();
        App = new AppState(
            Settings.Load(),
            Ai,
            new Db("test-db"));
    }

    protected void CreateDatabase()
    {
        // We use a real database for testing!
        Db.ResetTestDatabase();
    }
}
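The `FakeAiClient` above might look something like this (a sketch; the `IAiClient` interface shape is assumed for illustration). Unlike a mock, a fake carries real in-memory state, so tests can feed it inputs and inspect what it saw afterwards:

```csharp
using System.Collections.Generic;

// Assumed interface of the real AI client.
public interface IAiClient
{
    string Complete(string prompt);
}

// A fake: a simple working implementation, not a scripted set of expectations.
public class FakeAiClient : IAiClient
{
    private readonly Queue<string> _responses = new();

    // Observable state the test can assert on afterwards.
    public List<string> ReceivedPrompts { get; } = new();

    // The test decides what the "AI" will answer next.
    public void EnqueueResponse(string response) => _responses.Enqueue(response);

    public string Complete(string prompt)
    {
        ReceivedPrompts.Add(prompt);
        return _responses.Count > 0 ? _responses.Dequeue() : string.Empty;
    }
}
```

Because the fake's behavior lives in one class, a signature change on `IAiClient` is fixed once, instead of in every mock setup across the test suite.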
The compiler no longer catches dependency errors
If we create objects using the new keyword, the call site must supply every constructor parameter. Missing dependencies are caught at compile time.
With a DI container, constructor calls are effectively assembled at runtime. Missing registrations or misconfigured graphs are discovered only when the service is resolved. To be confident the app is wired correctly, you need integration or end-to-end tests; otherwise, you can deploy a build that compiles but fails at startup (or worse, fails on a specific endpoint).
As the codebase grows, registration code can become a “second program” that needs structure and review.
public static class UmbracoBuilderExtensions
{
    public static IUmbracoBuilder AddBackOffice(this IUmbracoBuilder builder) =>
        builder
            .AddConfiguration()
            .AddUmbracoCore()
            .AddWebComponents()
            .AddHelpers()
            .AddBackOfficeCore()
            .AddBackOfficeIdentity()
            .AddBackOfficeAuthentication()
            .AddTokenRevocation()
            .AddMembersIdentity()
            .AddUmbracoProfiler()
            .AddMvcAndRazor(configureMvc)
            .AddBackgroundJobs()
            .AddUmbracoHybridCache()
            .AddDistributedCache()
            .AddCoreNotifications();
}
Dependency lifetimes are not very transparent
A DI framework centralizes lifetime management (singleton/scoped/transient). Lifetime decisions are made far from where a service is used, which makes mistakes easy to miss. Misconfigurations—like a singleton depending on a scoped service (or incorrect disposal of IDisposable services)—compile fine but can fail at runtime or cause subtle bugs.
The rules are non-trivial. One way to reduce risk is to introduce naming conventions and enable automatic checks (in .NET, ValidateScopes helps).
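With the built-in ASP.NET Core container, both checks can be switched on at startup; this is a configuration fragment, not a complete program:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Fail fast on container mistakes instead of discovering them in production.
builder.Host.UseDefaultServiceProvider(options =>
{
    // Throws if a singleton resolves a scoped service.
    options.ValidateScopes = true;

    // Verifies all registrations can be constructed when the provider is built,
    // so missing dependencies surface at startup rather than per endpoint.
    options.ValidateOnBuild = true;
});
```

Note that `ValidateScopes` is enabled by default only in the Development environment, so production builds silently skip the check unless you opt in.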
For resource-sensitive code, it can be clearer if the DI container only provides factories, which you use to construct objects on demand. For example, rather than resolving DbContext directly from the container, you can use a DbContext factory.
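EF Core supports this pattern directly: register a factory with `AddDbContextFactory`, then create and dispose contexts explicitly at the point of use. A sketch, where `AppDbContext` and `QuotaReporter` are illustrative names:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

// Registration: the container hands out a factory, not the context itself.
// builder.Services.AddDbContextFactory<AppDbContext>(options =>
//     options.UseSqlite("Data Source=app.db"));

public class QuotaReporter
{
    private readonly IDbContextFactory<AppDbContext> _factory;

    public QuotaReporter(IDbContextFactory<AppDbContext> factory) => _factory = factory;

    public int CountQuotas()
    {
        // The context's lifetime is explicit and local: created here, disposed here.
        using var db = _factory.CreateDbContext();
        return db.Quotas.Count();
    }
}
```

The lifetime question disappears from the container configuration entirely; it is visible in the code that owns the resource.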
Conclusion
A DI framework heavily influences the structure of an application. When I searched for ASP.NET Core projects that use DI, I had no trouble finding them. Projects like Umbraco-CMS, Sonarr, Jellyfin, and nopCommerce all use it. Finding examples that don’t use a DI framework proved more difficult; the only project I found was RavenDB.
One reason is that Microsoft promotes DI and makes opting out difficult—especially if you don’t know the framework deeply. As a result, DI is often accepted as “part of ASP.NET Core” and not given a second look.
A DI framework has real benefits. The biggest is that it provides a structured, unified way to manage object creation and dependency graphs. It also makes ASP.NET Core applications look similar, at least at first glance.
But it’s worth remembering that DI is a general-purpose approach. Like many frameworks, it optimizes for the general case. That can discourage us from considering simpler designs that are more transparent and compile-time friendly.
My view is that manual dependency management often forces better decisions: fewer dependencies, clearer construction, and a codebase that’s easier to understand.