# Migrating and Creating an Appointments API: A Comprehensive Guide
Hey guys! Let's dive into the exciting world of API migration and creation, specifically focusing on an appointments API. This guide will walk you through the entire process, ensuring you follow Clean Architecture principles and SOLID design patterns. We'll cover everything from setting up your development environment to implementing robust endpoints. So, buckle up and let's get started!
## 📋 Setup Prerequisites
Before we jump into the nitty-gritty details, let's make sure your development environment is all set up. This section is crucial, so please follow each step carefully to avoid any hiccups down the road.
## 🚀 Required Setup Steps
**⚠️ CRITICAL**: This project uses **pnpm** as the package manager. Using npm or yarn will cause issues!
### 1. Install pnpm globally (if not already installed)
```shell
npm install -g pnpm
```
### 2. Install project dependencies
```shell
pnpm install
```
### 3. Verify setup by running tests
```shell
# For API components
pnpm nx test api

# For PWA components
pnpm nx test web

# For library components
pnpm nx test domain
pnpm nx test application-api
pnpm nx test application-shared
pnpm nx test application-web
pnpm nx test utils-core
```
**✅ You're ready to work on this issue once these commands run successfully!**
The first crucial step is ensuring you have the correct package manager installed. This project mandates the use of pnpm, so make sure you have it set up globally. Using npm or yarn might lead to unexpected issues, and we want to avoid those, right? After installing pnpm, install the project dependencies by running `pnpm install`. This command fetches all the necessary packages and sets them up in your project. Finally, to ensure everything is working smoothly, run the tests using the various `pnpm nx test` commands. These tests cover different parts of the application, including API, PWA, and library components. Passing these tests confirms that your setup is correct and you're ready to start the actual development work. These initial steps are foundational, guys, so let's nail them!
## Comprehensive Plan Description
The primary goal here is to migrate or create appointments API endpoints in the `apps/api` directory. This involves replicating the functionality found in the legacy `legacy/server/src/routes/appointments.routes.js` file. However, we're not just copying and pasting code. We need to adhere to the current architecture and naming conventions of the `apps/api` project. This means a thorough analysis of the existing API functionality is essential to ensure the new endpoints cover all necessary features. Remember, we're aiming for a seamless integration that aligns with the project's overall design. One key requirement is to avoid the use of Swagger in the new endpoints. This might seem like a small detail, but it's crucial for maintaining consistency within the project.
Let's talk about the comprehensive plan description in detail. Our main task is to create new API endpoints for appointments within the `apps/api` section of our project. Think of it as building a robust, modern version of the legacy appointment routes. This isn't just about rewriting old code; it's about reimagining the functionality while sticking to the current architectural standards. To achieve this, we'll need to dive deep into the existing API, understanding how it works, what features it offers, and how it's structured. This deep dive is critical because we want our new endpoints to seamlessly integrate with the current system. We don't want any clashes or inconsistencies, so we'll be paying close attention to naming conventions, data structures, and overall design. It's like fitting a new piece into a puzzle: it has to match perfectly. Another thing to keep in mind is that we're specifically avoiding the use of Swagger in these new endpoints. This might seem like a small detail, but it's about maintaining a consistent approach across the project. We're aiming for a clean, well-structured API that's easy to understand and maintain. So, guys, let's get our hands dirty and make this API shine!
## API Endpoint Implementation Checklist
The API Endpoint Implementation Checklist is our roadmap to success. It's a detailed guide that ensures we follow Clean Architecture, SOLID principles, and consistent patterns with existing implementations. This checklist is broken down into several layers, each with specific tasks and considerations. This meticulous approach is what separates a good API from a great API.
### DOMAIN LAYER (`libs/domain/`)
This layer is the heart of our application, focusing on business logic and rules. We'll start by creating an entity with business logic, such as `src/entities/FeatureName.ts`. These entities should have private readonly fields with public getters, ensuring immutability. Business methods like `toggle()`, `update()`, and `validate()` should also be included. Think of these entities as the core concepts of our application. Next, we'll create value objects with validation, for example, `src/value-objects/FeatureNameProperty.ts`. These objects are self-validating in the constructor and immutable, ensuring data integrity. We'll also define a repository interface (`src/repositories/IFeatureNameRepository.ts`) with methods like `getAll()`, `create()`, `update()`, `delete()`, and `getById()`. This interface ensures that our domain layer doesn't depend on specific implementations. Domain services (`src/services/FeatureNameDomainService.ts`) will be created if needed and instantiated manually in use cases. Finally, we'll define domain events and exceptions to handle various scenarios and errors. The domain layer is where the core business rules live, so let's make sure they're rock solid, guys!
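To ground this, here's a minimal sketch of such an entity in TypeScript. Everything here is illustrative: `Appointment`, `confirm()`, and the field names are hypothetical stand-ins for the `FeatureName` placeholders above, not code from the repo.

```typescript
// Hypothetical domain entity: private readonly fields, public getters,
// self-validation, and business methods that return new instances.
class InvalidAppointmentError extends Error {}

class Appointment {
  constructor(
    private readonly _id: string,
    private readonly _patientName: string,
    private readonly _startsAt: Date,
    private readonly _confirmed: boolean = false,
  ) {
    this.validate(); // an invalid entity can never be constructed
  }

  get id(): string { return this._id; }
  get patientName(): string { return this._patientName; }
  get startsAt(): Date { return this._startsAt; }
  get confirmed(): boolean { return this._confirmed; }

  // Business rules live on the entity itself.
  validate(): void {
    if (this._patientName.trim() === "") {
      throw new InvalidAppointmentError("patientName must not be empty");
    }
    if (Number.isNaN(this._startsAt.getTime())) {
      throw new InvalidAppointmentError("startsAt must be a valid date");
    }
  }

  // Mutations return a new instance, preserving immutability.
  confirm(): Appointment {
    return new Appointment(this._id, this._patientName, this._startsAt, true);
  }
}

const draft = new Appointment("a-1", "Ada Lovelace", new Date("2024-06-01T10:00:00Z"));
const confirmed = draft.confirm();
```

Notice that `confirm()` never mutates `draft`; it hands back a fresh, already-validated copy.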
Let's break down the Domain Layer a bit further, guys. This is where the magic of our application truly happens. Think of it as the brain, holding all the critical business logic and rules. The first thing we'll tackle is creating entities. These entities are like the key players in our application's story: they represent the core concepts we're working with. For example, if we're building an appointment scheduling system, an entity might be an `Appointment` itself. Each entity will have its own set of rules and behaviors, and we want to make sure they're well-defined and consistent. We'll achieve this by using private readonly fields with public getters, which helps us maintain the immutability of our entities. Immutability is super important because it means that once an entity is created, its core properties can't be changed directly. This helps prevent unexpected side effects and makes our code much easier to reason about. We'll also be adding business methods to our entities, things like `toggle()`, `update()`, and `validate()`. These methods encapsulate the specific actions and validations that can be performed on an entity.

Next up are value objects. Think of these as smaller, more specialized data structures that represent specific values within our domain. A good example might be an `AppointmentTime` object, which ensures that the time is always valid and properly formatted. Value objects are self-validating, meaning they have built-in checks to ensure that the data they hold is correct.

We'll also be defining a repository interface. This is like a contract that specifies how we can interact with our data storage. The interface will include methods for getting all appointments, creating new appointments, updating existing ones, deleting appointments, and getting appointments by ID. The key here is that the interface doesn't specify how these operations are performed; it just defines what operations are available. This separation of concerns is a cornerstone of Clean Architecture. Lastly, we'll be creating domain services, domain events, and domain exceptions. Domain services handle complex business logic that doesn't naturally fit into an entity or value object. Domain events allow us to react to changes within our domain, and domain exceptions provide a structured way to handle errors. So, guys, let's roll up our sleeves and build a domain layer that's not only robust but also a joy to work with!
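Here's what the value object and the repository contract might look like, guys. This is a hypothetical sketch: `AppointmentTime` mirrors the example in the text, and the tiny `Appointment` placeholder exists only to keep the snippet self-contained.

```typescript
// Placeholder so the repository interface compiles on its own; in the real
// project the full entity lives in libs/domain.
interface Appointment { id: string }

// Illustrative value object: self-validating in the constructor and immutable.
class AppointmentTime {
  private readonly _value: Date;

  constructor(value: string | Date) {
    const parsed = value instanceof Date ? value : new Date(value);
    // Validation happens at construction, so an invalid time never escapes.
    if (Number.isNaN(parsed.getTime())) {
      throw new Error(`Invalid appointment time: ${String(value)}`);
    }
    this._value = parsed;
  }

  get value(): Date { return this._value; }

  equals(other: AppointmentTime): boolean {
    return this._value.getTime() === other._value.getTime();
  }
}

// The domain-layer contract: it says *what* is possible, never *how*.
interface IAppointmentRepository {
  getAll(): Promise<Appointment[]>;
  getById(id: string): Promise<Appointment | null>;
  create(appointment: Appointment): Promise<void>;
  update(appointment: Appointment): Promise<void>;
  delete(id: string): Promise<void>;
}
```

Because the interface is storage-agnostic, the in-memory, SQLite, TypeORM, and Mongoose implementations described later can all satisfy it interchangeably.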
### APPLICATION LAYER (`libs/application-shared/`)
Moving up a layer, we have the application layer. This layer orchestrates the domain logic and provides use cases for each business operation. We'll create use cases such as `src/use-cases/commands/ActionFeatureNameUseCase.ts`, ensuring each use case has a single responsibility. These use cases will be decorated with `@injectable()` and use constructor injection via `TOKENS.*`. It's crucial that these use cases don't handle errors; they should let domain exceptions bubble up. We'll also create query handlers for data retrieval (`src/use-cases/queries/GetFeatureNameQueryHandler.ts`), following the CQRS pattern to separate commands and queries. DTOs (Data Transfer Objects) will be defined for commands and queries (`src/dto/FeatureNameCommands.ts` and `src/dto/FeatureNameQueries.ts`) to handle request and response data. Validation schemas will be created using Zod (`src/validation/FeatureNameValidationSchemas.ts`) for complex validation and data transformation. Validation services (`src/validation/FeatureNameValidationService.ts`) will be implemented, extending `ValidationService<unknown, CommandType>`. We'll also create mappers (`src/mappers/FeatureNameMapper.ts`) for data transformation between domain entities and DTOs. Finally, we'll define service interfaces (`src/interfaces/IFeatureNameService.ts`) to abstract service implementations. The application layer acts as a bridge between the presentation layer and the domain layer, so it's vital to keep it clean and focused.
Now, let's zoom in on the Application Layer, guys. This layer is like the conductor of our application's orchestra. It takes requests from the outside world (the presentation layer) and orchestrates the domain logic to fulfill those requests. The key concept here is the use case. Think of a use case as a specific action that a user can perform in our application. For example, creating a new appointment, canceling an appointment, or rescheduling an appointment would all be separate use cases. We want each use case to have a single responsibility, meaning it should focus on doing one thing and doing it well. This makes our code easier to understand, test, and maintain. Each use case will be implemented as a class, and we'll use the `@injectable()` decorator to make it available for dependency injection. Dependency injection is a fancy term for a simple idea: instead of creating dependencies within a class, we pass them in from the outside. This makes our classes more flexible and testable. We'll be using `TOKENS.*` for constructor injection, which is a way of specifying which dependencies a class needs. It's super important that our use cases don't handle errors directly. Instead, they'll let any exceptions thrown by the domain layer bubble up. This keeps our use cases focused on their core responsibility: orchestrating the domain logic.

We'll also be implementing query handlers for data retrieval. This is where the CQRS (Command Query Responsibility Segregation) pattern comes into play. CQRS is all about separating the operations that modify data (commands) from the operations that read data (queries). This separation can lead to significant performance improvements and make our application more scalable. To handle the data that flows in and out of our application, we'll be using DTOs (Data Transfer Objects). DTOs are simple objects that carry data between layers of our application. They help us decouple our layers and ensure that we're only passing the data that's needed.

We'll also be using Zod for validation. Zod is a fantastic library for creating validation schemas in TypeScript. It allows us to define the structure and types of our data and automatically validate that incoming data conforms to our schema. This is crucial for preventing errors and ensuring data integrity. We'll be implementing validation services to handle the validation logic. These services will extend the `ValidationService<unknown, CommandType>` class and provide a consistent way to validate data across our application. To transform data between our domain entities and DTOs, we'll be using mappers. Mappers are classes that know how to convert between different data representations. This helps us keep our domain entities clean and independent of the specific data formats used in our presentation and infrastructure layers. Finally, we'll be defining service interfaces to abstract service implementations. This is another key principle of Clean Architecture: we want to depend on abstractions, not concretions. By defining interfaces, we can easily swap out different implementations of a service without affecting the rest of our application. So, guys, the Application Layer is where all the action happens, so let's make sure it's well-organized and efficient!
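The use-case shape described above can be sketched like this. It's a framework-free sketch: the stub entity and repository interface stand in for the real ones in `libs/domain`, and the `@injectable()` decorator and `TOKENS.*` injection are indicated in comments rather than wired up.

```typescript
// Minimal stand-ins so the sketch is self-contained; in the real project the
// entity and repository interface come from libs/domain.
interface Appointment { id: string; patientName: string }
interface IAppointmentRepository {
  create(appointment: Appointment): Promise<void>;
  getById(id: string): Promise<Appointment | null>;
}

interface CreateAppointmentCommand { id: string; patientName: string }

// In the actual codebase this class would be decorated with @injectable()
// and receive its repository via constructor injection using TOKENS.*.
class CreateAppointmentUseCase {
  constructor(private readonly repository: IAppointmentRepository) {}

  // Single responsibility, single execute(); no try/catch here, because
  // domain exceptions are allowed to bubble up to the error middleware.
  async execute(command: CreateAppointmentCommand): Promise<Appointment> {
    const appointment: Appointment = {
      id: command.id,
      patientName: command.patientName,
    };
    await this.repository.create(appointment);
    return appointment; // a domain entity, not a DTO
  }
}
```

A test can pass in any object satisfying `IAppointmentRepository`, which is exactly the testability win dependency injection buys us.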
### APPLICATION API LAYER (`libs/application-api/`)
The application API layer is where we'll create API-specific use cases, such as `src/use-cases/commands/FeatureNameUseCase.ts`. These use cases handle API-specific business logic and integrate infrastructure services like JWT and hashing. This layer is crucial for adapting the application core to the specific needs of our API.
Let's break down the Application API Layer even further, guys! This layer is all about tailoring our application to the specific needs of our API. Think of it as the translator between our core application logic and the outside world. Here, we'll be creating API-specific use cases. These use cases are similar to the ones we created in the Application Layer, but they're specifically designed to handle the unique requirements of an API. For example, we might have a use case for registering a new user, which involves hashing the password and generating a JWT (JSON Web Token) for authentication. These are tasks that are specific to an API and wouldn't be handled in the core application logic. The Application API Layer also plays a crucial role in integrating infrastructure services. These are things like JWT handling, password hashing, and interacting with external APIs. By handling these details in this layer, we keep our core application logic clean and focused on the business rules. This layer acts as an adapter, transforming requests from the API into commands that our core application can understand, and then transforming the results back into API responses. It’s a crucial step in ensuring that our API is both functional and secure. So, guys, let's dive in and build this layer with precision and care!
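A quick sketch of how such a use case might compose infrastructure services behind interfaces. Everything here is hypothetical: `RegisterUserUseCase`, `IPasswordHasher`, and `ITokenService` are illustrative names, not the project's actual API, and the stubs stand in for real bcrypt/JWT implementations.

```typescript
// Illustrative infrastructure-service interfaces; real implementations would
// wrap a hashing library and a JWT library and be registered in the DI container.
interface IPasswordHasher { hash(plain: string): Promise<string>; }
interface ITokenService { sign(subject: string): string; }

class RegisterUserUseCase {
  constructor(
    private readonly hasher: IPasswordHasher,
    private readonly tokens: ITokenService,
  ) {}

  async execute(email: string, password: string): Promise<{ email: string; token: string }> {
    const passwordHash = await this.hasher.hash(password);
    // ...a repository would persist { email, passwordHash } here...
    void passwordHash; // unused in this sketch
    return { email, token: this.tokens.sign(email) };
  }
}
```

Because the use case only sees the interfaces, the hashing and token details stay in the infrastructure layer where they belong.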
### INFRASTRUCTURE LAYER (`apps/api/src/infrastructure/`)
The infrastructure layer is where we implement our repository with multiple persistence options. This includes TypeORM (`featurename/persistence/typeorm/TypeOrmFeatureNameRepository.ts`), Mongoose (`featurename/persistence/mongoose/MongooseFeatureNameRepository.ts`), SQLite (`featurename/persistence/sqlite/SqliteFeatureNameRepository.ts`), and in-memory (`featurename/persistence/in-memory/InMemoryFeatureNameRepository.ts`). We'll implement the domain repository interface and handle data mapping between the domain and persistence layers. Database entities/schemas will be created, such as `featurename/persistence/typeorm/FeatureNameEntity.ts` and `featurename/persistence/mongoose/FeatureNameSchema.ts`. Dependencies will be registered in the DI container (`apps/api/src/infrastructure/di/container.ts`), including repository implementations, use cases, validation services, and infrastructure services. DI tokens will be defined in `libs/application-shared/src/di/tokens.ts`, using unique symbols for all injectable services. The infrastructure layer is responsible for the concrete implementations of our abstractions, allowing us to switch between different technologies without affecting the core application.
Now, let's shine a spotlight on the Infrastructure Layer, guys. This layer is the engine room of our application – it's where we deal with the nitty-gritty details of how our application interacts with the outside world. Think of it as the foundation upon which our entire system is built. One of the most important responsibilities of this layer is implementing our repository. A repository is a design pattern that provides an abstraction over our data storage. It allows us to interact with our data without having to worry about the specific details of the database or data storage mechanism. In this layer, we'll be implementing our repository with multiple persistence options. This means we can choose to store our data in different ways, depending on our needs. We'll have options like TypeORM, which is an ORM (Object-Relational Mapper) for relational databases like PostgreSQL and MySQL; Mongoose, which is an ODM (Object-Document Mapper) for MongoDB; SQLite, which is a lightweight, file-based database; and an in-memory repository, which is useful for testing. This flexibility is a huge win because it means we can easily switch between different data storage technologies without having to rewrite our entire application. We'll be implementing the domain repository interface, which we defined in the Domain Layer. This ensures that our Infrastructure Layer is adhering to the contracts established by our Domain Layer. We'll also be handling data mapping between the domain and persistence layers. This involves converting data from our domain entities into the format required by our database, and vice versa. This mapping is crucial for keeping our domain entities clean and independent of the specific data formats used in our persistence layer. We'll be creating database entities and schemas to define the structure of our data in each of our persistence options. 
These entities and schemas will be specific to the technology we're using (e.g., TypeORM entities, Mongoose schemas). To manage the dependencies within our application, we'll be using a DI (Dependency Injection) container. This container is responsible for creating and managing the dependencies between our classes. We'll register our repository implementations, use cases, validation services, and infrastructure services in the DI container. This makes our application more modular and testable. We'll also be defining DI tokens in `libs/application-shared/src/di/tokens.ts`. These tokens are unique symbols that we use to identify our injectable services. This helps us avoid naming conflicts and makes our code more maintainable. The Infrastructure Layer is where the rubber meets the road: it's where we take our abstract ideas and turn them into concrete implementations. So, guys, let's get down to the details and build a solid foundation for our application!
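The simplest of the persistence options, the in-memory repository, can be sketched end to end. This is an illustrative version, guys: the entity and interface stubs stand in for the real domain types.

```typescript
// Stand-ins for the domain types so the sketch compiles on its own.
interface Appointment { id: string; patientName: string }

interface IAppointmentRepository {
  getAll(): Promise<Appointment[]>;
  getById(id: string): Promise<Appointment | null>;
  create(appointment: Appointment): Promise<void>;
  update(appointment: Appointment): Promise<void>;
  delete(id: string): Promise<void>;
}

// Hypothetical in-memory implementation of the domain contract; ideal for
// tests because it needs no connection management at all.
class InMemoryAppointmentRepository implements IAppointmentRepository {
  private readonly items = new Map<string, Appointment>();

  async getAll(): Promise<Appointment[]> {
    return [...this.items.values()];
  }
  async getById(id: string): Promise<Appointment | null> {
    return this.items.get(id) ?? null;
  }
  async create(appointment: Appointment): Promise<void> {
    this.items.set(appointment.id, appointment);
  }
  async update(appointment: Appointment): Promise<void> {
    if (!this.items.has(appointment.id)) {
      throw new Error(`Appointment ${appointment.id} not found`);
    }
    this.items.set(appointment.id, appointment);
  }
  async delete(id: string): Promise<void> {
    this.items.delete(id);
  }
}
```

The TypeORM, Mongoose, and SQLite variants implement the same interface; only the storage and mapping code inside each method changes.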
### PRESENTATION LAYER (`apps/api/src/presentation/`)
In the presentation layer, we'll create a controller following the `TodoController` pattern (`apps/api/src/presentation/controllers/TodoController.ts`). This controller will be decorated with `@Controller('/resource')`, using a proper route prefix. It will also be decorated with `@injectable()` and use constructor injection via `TOKENS.*`. Controllers should not have try/catch blocks; error handling should be managed via middleware. We'll implement RESTful endpoints using HTTP method decorators (`@Get`, `@Post`, `@Put`, `@Delete`, `@Patch`) and use `@HttpCode()` for non-standard status codes. Parameter validation will use the `TodoIdSchema.parse()` pattern, and body validation will use the validation service. Consistent `ApiResponseBuilder.success()` responses will be used. Example controller methods include `getAll()` to retrieve a list of items and `create()` to create a new item. The presentation layer is the entry point for external requests, so it's crucial to keep it lean and focused on routing requests to the appropriate use cases.
Let's zoom in on the Presentation Layer, guys! This layer is the face of our application: it's what the outside world sees and interacts with. Think of it as the front desk of a hotel, where guests (API clients) come to make requests and receive responses. Our primary tool in this layer is the controller. We'll be creating controllers that follow the `TodoController` pattern, which provides a consistent structure and approach for handling API requests. Each controller will be decorated with `@Controller('/resource')`, which specifies the base route for the controller. This helps us organize our API endpoints and makes them easier to understand. We'll also be using `@injectable()` and constructor injection via `TOKENS.*` to manage dependencies in our controllers. Just like in the Application Layer, this makes our controllers more flexible and testable. A key principle in the Presentation Layer is that controllers should not handle errors directly. Instead, we'll delegate error handling to middleware. This keeps our controllers focused on their core responsibility: routing requests and responses.

We'll be implementing RESTful endpoints using HTTP method decorators like `@Get`, `@Post`, `@Put`, `@Delete`, and `@Patch`. These decorators tell our framework which HTTP method a particular controller method should handle. For example, a method decorated with `@Get` will handle GET requests, while a method decorated with `@Post` will handle POST requests. We'll also be using `@HttpCode()` for non-standard status codes. This allows us to specify the HTTP status code that should be returned for a particular response. For example, we might use `@HttpCode(201)` to indicate that a POST request was successful and a new resource was created. To ensure that incoming data is valid, we'll be using parameter validation with the `TodoIdSchema.parse()` pattern and body validation with our validation service. This helps us prevent errors and ensures that our application is secure.

We'll be using consistent `ApiResponseBuilder.success()` responses to format our API responses. This helps us maintain a consistent look and feel across our API and makes it easier for clients to consume our responses. Our controllers will have methods like `getAll()`, which retrieves a list of items, and `create()`, which creates a new item. These methods will orchestrate the use cases in our Application Layer to fulfill the requests. The Presentation Layer is the gateway to our application, so it's crucial to keep it clean, focused, and secure. So, guys, let's make a great first impression!
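Here's the controller shape in miniature, guys. To keep the sketch framework-free, the decorators are shown as comments and a tiny local `ApiResponseBuilder` stands in for the real one; the method bodies are the point.

```typescript
// Illustrative types; the real ones live in the domain and application layers.
interface Appointment { id: string; patientName: string }

// Minimal stand-in for the project's ApiResponseBuilder.
const ApiResponseBuilder = {
  success<T>(data: T) {
    return { success: true as const, data };
  },
};

class GetAppointmentsQueryHandler {
  constructor(private readonly items: Appointment[]) {}
  async execute(): Promise<Appointment[]> { return this.items; }
}

// @Controller('/appointments')
// @injectable()
class AppointmentController {
  // In the real project the handler arrives via TOKENS.* constructor injection.
  constructor(private readonly getAppointments: GetAppointmentsQueryHandler) {}

  // @Get('/')
  // Note: no try/catch here; the error-handling middleware formats failures.
  async getAll() {
    const appointments = await this.getAppointments.execute();
    return ApiResponseBuilder.success(appointments);
  }
}
```

The method does exactly two things, delegate to a use case and wrap the result, which is what "lean controllers" means in practice.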
### ERROR HANDLING & MIDDLEWARE
Error handling is crucial. We'll leverage the existing error handling middleware (`apps/api/src/shared/middleware/RoutingControllersErrorHandler.ts`) to avoid manual error handling in controllers. Domain exceptions will be automatically caught and formatted, and `ValidationError` details will be preserved. Proper HTTP status codes will be returned. Feature-specific domain exceptions will be created, extending the base `DomainException` class and following the naming convention `FeatureName*Exception`. This ensures consistent and centralized error handling.
Let's dig a bit deeper into Error Handling & Middleware, guys. Error handling is a critical aspect of any robust application. It's not just about catching errors; it's about handling them gracefully and providing meaningful feedback to the user. In our application, we're taking a centralized approach to error handling, which means we're delegating most of the error handling logic to middleware. Middleware are functions that sit in the middle of the request-response cycle. They can intercept requests before they reach our controllers and responses before they're sent back to the client. This makes them a perfect place to handle errors. We'll be leveraging the existing error handling middleware (`apps/api/src/shared/middleware/RoutingControllersErrorHandler.ts`) to avoid manual error handling in our controllers. This keeps our controllers clean and focused on their core responsibilities. When a domain exception is thrown, our middleware will automatically catch it, format it into a consistent error response, and send it back to the client. This includes preserving the details of any validation errors, which can be incredibly helpful for debugging. We'll also be returning proper HTTP status codes to indicate the type of error that occurred. For example, a 400 status code might indicate a bad request, while a 500 status code might indicate a server error. To ensure that our error handling is consistent and well-organized, we'll be creating feature-specific domain exceptions. These exceptions will extend the base `DomainException` class and follow the naming convention `FeatureName*Exception`. This makes it easy to identify the specific type of error that occurred and where it originated. By taking a centralized approach to error handling and using domain exceptions, we can ensure that our application is robust, reliable, and easy to maintain. So, guys, let's make sure our error handling is top-notch!
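The exception hierarchy might look like this. A hypothetical sketch, guys: the `code` field and the exact base-class shape are assumptions, but the `FeatureName*Exception` naming follows the convention above.

```typescript
// Sketch of the base class; the project's real DomainException may differ.
class DomainException extends Error {
  constructor(message: string, public readonly code: string) {
    super(message);
    this.name = new.target.name;
    // Keeps instanceof working when TypeScript targets older JS versions.
    Object.setPrototypeOf(this, new.target.prototype);
  }
}

// Feature-specific exceptions following the FeatureName*Exception convention.
class AppointmentNotFoundException extends DomainException {
  constructor(id: string) {
    super(`Appointment with id ${id} was not found`, "APPOINTMENT_NOT_FOUND");
  }
}

class AppointmentAlreadyConfirmedException extends DomainException {
  constructor(id: string) {
    super(`Appointment ${id} is already confirmed`, "APPOINTMENT_ALREADY_CONFIRMED");
  }
}
```

Because every feature exception extends `DomainException`, the middleware can catch the base type once and format all of them uniformly.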
### API RESPONSE PATTERNS
We'll use `ApiResponseBuilder` for consistent responses (`apps/api/src/presentation/dto/ApiResponse.ts`). Success responses will use `ApiResponseBuilder.success(data)`, and operation responses will use `ApiResponseBuilder.successWithMessage(message)`. Consistent response structures across all endpoints are essential. Response type interfaces will be defined, such as `FeatureNameResponse` for single items, `FeatureNameListResponse` for array responses, `FeatureNameOperationResponse` for operation confirmations, and `FeatureNameStatsResponse` for statistics. This uniformity makes it easier for clients to consume our API.
Let's talk about API Response Patterns now, guys. Consistency is key when it comes to API responses. We want our API to be predictable and easy to use for developers. That's why we'll be using a consistent pattern for all of our API responses. We'll be using `ApiResponseBuilder` (`apps/api/src/presentation/dto/ApiResponse.ts`) to construct our responses. This builder provides a simple and consistent way to create both success and error responses. For success responses, we'll use `ApiResponseBuilder.success(data)`. This will return a response with a 200 OK status code and the data in the response body. For operation responses, which confirm that an operation was successful (e.g., creating a new resource), we'll use `ApiResponseBuilder.successWithMessage(message)`. This will return a response with a 200 OK status code and a message in the response body. We'll also be defining response type interfaces for different types of responses. For example, we'll have a `FeatureNameResponse` interface for single item responses, a `FeatureNameListResponse` interface for array responses, a `FeatureNameOperationResponse` interface for operation confirmations, and a `FeatureNameStatsResponse` interface for statistics. These interfaces help us ensure that our responses are well-typed and consistent. By using a consistent pattern for our API responses, we can make our API easier to use and maintain. So, guys, let's make our API responses shine!
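A minimal sketch of the builder and the response interfaces, guys. The envelope shape (`success` plus `data` or `message`) is an assumption; the real `ApiResponse.ts` may carry more fields.

```typescript
// Assumed envelope shapes; the actual project may include extra metadata.
interface ApiSuccessResponse<T> { success: true; data: T }
interface ApiOperationResponse { success: true; message: string }

class ApiResponseBuilder {
  static success<T>(data: T): ApiSuccessResponse<T> {
    return { success: true, data };
  }
  static successWithMessage(message: string): ApiOperationResponse {
    return { success: true, message };
  }
}

// Response type interfaces following the FeatureName* naming convention.
interface AppointmentResponse { id: string; patientName: string }
type AppointmentListResponse = AppointmentResponse[];
```

Every endpoint returning the same envelope means clients can write one response handler instead of one per route.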
### VALIDATION PATTERNS
Input validation in controllers will use validation service methods like `this.validationService.validateCreateCommand(body)`. Parameter validation will use patterns like `const validatedId = FeatureNameIdSchema.parse(id)`. Combined validation for update operations will use `{ ...body, id }`. The validation service structure will include individual validation services for each command type, a composite facade service combining all validations, and safe validation methods for non-throwing validation. Proper DI registration for all validation services is crucial. Robust validation ensures data integrity and prevents errors.
Let's dive into Validation Patterns now, guys. Validation is a critical step in ensuring the integrity of our application. It's all about making sure that the data we receive is in the format we expect and that it meets our business rules. In our controllers, we'll be using validation service methods like `this.validationService.validateCreateCommand(body)` to validate incoming data. This allows us to keep our controllers clean and focused on their core responsibilities. For parameter validation, we'll be using patterns like `const validatedId = FeatureNameIdSchema.parse(id)`. This ensures that parameters like IDs are in the correct format before we use them in our application. For update operations, we'll be using combined validation with the `{ ...body, id }` pattern. This allows us to validate both the body of the request and the ID of the resource being updated. Our validation service structure will include individual validation services for each command type. This helps us keep our validation logic modular and easy to maintain. We'll also have a composite facade service that combines all of our validations. This provides a single entry point for validation and makes it easier to use our validation services. We'll be using safe validation methods for non-throwing validation. This allows us to validate data without throwing exceptions, which can be useful in certain scenarios. Proper DI (Dependency Injection) registration for all validation services is crucial. This ensures that our validation services are available where they're needed. By implementing robust validation patterns, we can protect our application from errors and ensure data integrity. So, guys, let's make sure our validation is rock solid!
### CQRS IMPLEMENTATION
We'll separate command and query responsibilities, using command use cases for mutations (Create, Update, Delete) and query handlers for data retrieval (Get, List, Search, Stats). Different response types will be used for commands vs. queries, and no mixing of business logic between commands and queries will be allowed. The use case structure will include a single `execute()` method per use case, domain validation before persistence, repository calls for data operations, and a return of domain entities (not DTOs). CQRS enhances performance and maintainability by separating read and write operations.
Let's get into CQRS Implementation, guys. CQRS stands for Command Query Responsibility Segregation, and it's a powerful pattern for building scalable and maintainable applications. The core idea behind CQRS is to separate the operations that modify data (commands) from the operations that read data (queries). This separation can lead to significant performance improvements and make our application easier to reason about. We'll be using command use cases for mutations, which are operations that change the state of our application. This includes things like creating, updating, and deleting resources. For data retrieval, we'll be using query handlers. This includes things like getting a single resource, listing resources, searching for resources, and getting statistics. We'll be using different response types for commands and queries. This helps us ensure that we're returning the correct data for each type of operation. We'll also be ensuring that there's no business logic mixing between commands and queries. This keeps our code clean and focused. Our use case structure will include a single `execute()` method per use case. This makes our use cases easy to understand and test. We'll be performing domain validation before persistence, which means we'll be validating the data before we save it to the database. We'll be using repository calls for data operations, which allows us to abstract away the details of our data storage. And we'll be returning domain entities (not DTOs) from our use cases. This keeps our use cases focused on the business logic and prevents them from being tied to specific data formats. By implementing CQRS, we can enhance the performance and maintainability of our application. So, guys, let's get CQRS right!
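CQRS in miniature, guys: one class mutates, the other only reads, and each has a single `execute()`. The names are illustrative, and a shared `Map` stands in for a repository to keep the sketch tiny.

```typescript
interface Appointment { id: string; patientName: string }

// Command side: mutates state, returns the domain entity (not a DTO).
class CreateAppointmentUseCase {
  constructor(private readonly store: Map<string, Appointment>) {}
  async execute(appointment: Appointment): Promise<Appointment> {
    this.store.set(appointment.id, appointment);
    return appointment;
  }
}

// Query side: reads state and never mutates it.
class ListAppointmentsQueryHandler {
  constructor(private readonly store: Map<string, Appointment>) {}
  async execute(): Promise<Appointment[]> {
    return [...this.store.values()];
  }
}
```

Because the two sides never share logic, the read path can later be optimized (caching, projections) without touching the write path.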
### DATABASE INTEGRATION
We'll support multiple database implementations, including in-memory for testing, SQLite with native SQL, TypeORM for PostgreSQL/MySQL/SQLite, and Mongoose for MongoDB. A consistent interface adherence across all implementations is crucial. A repository implementation factory will be used for dynamic repository selection based on configuration, proper connection management, and database-specific optimizations. This flexibility allows us to choose the best database for our needs and switch between them if necessary.
Let's shine a light on Database Integration, guys. One of the key features of a well-architected application is the ability to support multiple databases. This gives us the flexibility to choose the best database for our needs and switch between them if necessary. We'll be supporting multiple database implementations, including in-memory for testing, SQLite with native SQL, TypeORM for PostgreSQL/MySQL/SQLite, and Mongoose for MongoDB. This means we can choose the database that best fits our performance, scalability, and cost requirements. Consistent interface adherence across all implementations is crucial. This means that all of our database implementations will adhere to the same interface, which makes it easy to switch between them. We'll be using a repository implementation factory for dynamic repository selection based on configuration. This allows us to choose the database implementation at runtime, based on our configuration. We'll also be implementing proper connection management to ensure that our database connections are handled efficiently. And we'll be implementing database-specific optimizations to ensure that our application performs well with each database. By supporting multiple database implementations, we can make our application more flexible and scalable. So, guys, let's get our database integration right!
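The factory idea above can be sketched like this. Only the in-memory variant is actually wired up here; the SQLite/TypeORM/Mongoose branches are indicated, and all names are illustrative rather than the project's real factory.

```typescript
// Stand-ins for the domain types.
interface Appointment { id: string }
interface IAppointmentRepository {
  create(appointment: Appointment): Promise<void>;
  getById(id: string): Promise<Appointment | null>;
}

class InMemoryAppointmentRepository implements IAppointmentRepository {
  private readonly items = new Map<string, Appointment>();
  async create(appointment: Appointment): Promise<void> {
    this.items.set(appointment.id, appointment);
  }
  async getById(id: string): Promise<Appointment | null> {
    return this.items.get(id) ?? null;
  }
}

type PersistenceKind = "in-memory" | "sqlite" | "typeorm" | "mongoose";

// Dynamic repository selection based on configuration.
function createAppointmentRepository(kind: PersistenceKind): IAppointmentRepository {
  switch (kind) {
    case "in-memory":
      return new InMemoryAppointmentRepository();
    default:
      // The real factory would construct and connect the other variants here.
      throw new Error(`Persistence kind not wired in this sketch: ${kind}`);
  }
}
```

The caller only ever sees `IAppointmentRepository`, so swapping the configured `kind` never ripples into application code.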
### DEPENDENCY INJECTION CONFIGURATION
All dependencies will be registered in the container (`apps/api/src/infrastructure/di/container.ts`). This includes infrastructure layer dependencies (repositories, database connections), application layer dependencies (use cases, query handlers, validation services), and domain layer dependencies (domain services as singletons). Existing registration patterns in `container.ts` will be followed. Centralized `TOKENS` will be used, with all tokens in a single `TOKENS` object and consistent naming (e.g., `CreateFeatureNameUseCase`, `FeatureNameRepository`). No separate token objects or manual Symbol creation will be used. Proper DI configuration makes our application modular and testable.
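To illustrate the token-and-registration idea, here's a deliberately tiny hand-rolled container, guys. This is only a teaching sketch: the real project registers against its actual DI framework in `container.ts`, and the token names here are illustrative.

```typescript
// All tokens live in one centralized TOKENS object, each a unique symbol.
const TOKENS = {
  AppointmentRepository: Symbol("AppointmentRepository"),
  CreateAppointmentUseCase: Symbol("CreateAppointmentUseCase"),
} as const;

// Minimal container: real DI frameworks add lifetimes, factories, and
// constructor injection on top of this same register/resolve core.
class Container {
  private readonly registry = new Map<symbol, unknown>();

  register<T>(token: symbol, instance: T): void {
    this.registry.set(token, instance);
  }

  resolve<T>(token: symbol): T {
    if (!this.registry.has(token)) {
      throw new Error(`No registration for ${String(token)}`);
    }
    return this.registry.get(token) as T;
  }
}
```

Because every service is looked up by a unique symbol rather than a string, two features can never accidentally collide on a registration name.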
Let's discuss Dependency Injection Configuration now, guys! Dependency Injection (DI) is a design pattern that helps us build modular, testable, and maintainable applications. The basic idea behind DI is that classes should not be responsible for creating their dependencies. Instead, dependencies should be provided from the outside, typically by the DI container.