This meticulously crafted boilerplate serves as a solid foundation for building production-ready Fastify applications. While designed specifically for Fastify, the underlying principles and best practices aim to be adaptable to different frameworks and languages. These principles include clean architecture, domain-driven design, CQRS, vertical slice architecture, and dependency injection.
- Framework: Fastify 5 with Awilix for dependency injection and Pino for logging
- Plugins: @fastify/helmet for security headers, @fastify/swagger for Swagger documentation, @fastify/under-pressure for automatic "Service Unavailable" handling under heavy load, @fastify/awilix for dependency injection, and TypeBox for JSON schema definition, TypeScript type generation, and validation
- DB: Postgres as client + DBMate for seeds and migrations
- Graphql: Mercurius
- Format and Style: Eslint 9 + Prettier
- Dependencies validation: depcruise
- Release flow: Husky + Commitlint + Semantic-release
- Tests: E2E tests with Cucumber, and unit and integration tests with node:test
npx degit marcoturi/fastify-boilerplate my-app
cd my-app
# To enable yarn 4 follow the instruction here: https://yarnpkg.com/getting-started/install
yarn # install dependencies
- `yarn start` - start a development server.
- `yarn build` - build for production. The generated files will be in the `dist` folder.
- `yarn test` - run unit and integration tests.
- `yarn test:coverage` - run unit and integration tests with coverage.
- `yarn test:unit` - run only unit tests.
- `yarn test:integration` - run only integration tests.
- `yarn test:e2e` - run E2E tests.
- `yarn type-check` - check for TypeScript errors.
- `yarn deps:validate` - check for dependency problems (e.g. route code used inside a repository).
- `yarn outdated` - update dependencies interactively.
- `yarn format` - format all files with Prettier.
- `yarn lint` - run ESLint.
- `yarn create:env` - create an .env file by copying .env.example.
- `yarn db:create-migration` - create a new db migration.
- `yarn db:migrate` - run db migrations.
- `yarn db:create-seed` - create a new db seed.
- `yarn db:seed` - run db seeds.
Diagram adapted from here
- Adaptable Complexity: The structure should be flexible (scalable through adding or removing layers) to handle varying application complexities.
- Future-Proofing: Technology and design choices should ensure the project's long-term health. This includes a clear separation of framework and application code, and the use of well-established, widely adopted packages/tools with minimal dependencies.
- Functional Programming Emphasis: Prioritize functional programming patterns and composition over object-oriented approaches (OOP) and inheritance for potentially improved maintainability.
- Microservices-Ready Architecture: Leveraging techniques like vertical slice architecture, path aliases, and CQRS (Command Query Responsibility Segregation) for communication. This promotes modularity and separation of concerns, facilitating potential future extraction and creation of microservices.
- Framework Agnostic: Using vanilla Node.js, Express, or Fastify? Your core business logic does not care about that either. Therefore, Fastify dependencies inside the modules folder are kept to a minimum.
- Client Interface Agnostic: The core business logic does not care if you are using a CLI, a REST API, or even gRPC. Command/Query handlers will serve every protocol needs.
- Database Agnostic: Your core business logic does not care if you are using Postgres, MongoDB, or CouchDB for that matter. Database code, problems and errors should be tackled only in repositories.
- The dependencies between software components should always point inward towards the core of the application. In other words, the innermost layers of the system should not depend on the outer layers. The flow of the code is Route → Handler → Domain (optional) → Repository.
This project is based on some of the following principles/styles:
- Domain-Driven Design (DDD)
- Hexagonal (Ports and Adapters) Architecture
- Clean Architecture
- Onion Architecture
- SOLID Principles
- Vertical slice architecture
- The Common Closure Principle (CCP)
- Each module's name should reflect an important concept from the Domain and have its own folder (see vertical slice architecture).
- It's easier to work on things that change together if those things are gathered relatively close to each other (see The Common Closure Principle (CCP)).
- Every module is independent (i.e. no direct imports between modules) and interactions between modules are kept minimal. You want to be able to easily extract a module into a separate microservice if needed.
How do I keep modules clean and decoupled between each one?
- One module shouldn't call another module directly (for example by importing an entity from module A into module B). Instead, create public interfaces for this (for example, a query that returns some data from feature A to anyone who calls it, so module B can query it); a sketch of this follows the list.
- In case of fire-and-forget logic (like sending an email after some operation), you can use events to communicate between modules.
- If two modules are too "chatty", maybe they should be merged into a single module.
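For illustration only, here is a minimal sketch of such a public, query-based interface. All names (GetUserQuery, UserDto, makeOrdersService) are hypothetical, not part of the boilerplate:

```ts
// Hypothetical sketch: module "users" exposes a query contract from its public
// entrypoint, and module "orders" depends only on that contract, never on
// users' internal entities or repositories.

// modules/users/index.ts (the only file other modules may import)
export type GetUserQuery = { userId: string };
export type UserDto = { id: string; email: string };
export type GetUserHandler = (query: GetUserQuery) => Promise<UserDto | undefined>;

// modules/orders/... (module B): receives the handler via dependency injection
export const makeOrdersService = (getUser: GetUserHandler) => ({
  async createOrder(userId: string) {
    const user = await getUser({ userId });
    if (!user) throw new Error(`User ${userId} not found`);
    // ...persist the order, notify by email via an event, etc.
    return { ownerEmail: user.email };
  },
});
```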
Each layer should handle its own distinct functionality. Ideally, these components should adhere to the Single Responsibility Principle, meaning they have a single, well-defined purpose:
Route:
The process starts with a request (HTTP, gRPC, GraphQL) sent to the route. You can find the routes in the `src/modules/feature/{commands|queries}` folder.
Routes handle the request/response cycle, request validation, and response formatting for whichever protocol is used. They should not contain any business logic.
Example file: find-users.route.ts
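As a hedged illustration (not the actual boilerplate code), a thin route built with Fastify and TypeBox might look like this; the schema shapes and the injected findUsers handler are assumptions:

```ts
import { Type, type Static } from '@sinclair/typebox';
import type { FastifyPluginAsync } from 'fastify';

const QuerySchema = Type.Object({ limit: Type.Optional(Type.Integer({ minimum: 1 })) });
const UserSchema = Type.Object({ id: Type.String(), email: Type.String() });
type FindUsersQuery = Static<typeof QuerySchema>;
type User = Static<typeof UserSchema>;

// The route only validates, delegates and formats; the handler is injected.
export const makeFindUsersRoute =
  (findUsers: (query: FindUsersQuery) => Promise<User[]>): FastifyPluginAsync =>
  async (fastify) => {
    fastify.get<{ Querystring: FindUsersQuery }>(
      '/users',
      { schema: { querystring: QuerySchema, response: { 200: Type.Array(UserSchema) } } },
      async (request) => findUsers(request.query),
    );
  };
```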
Command/Query Handler: The Command or Query Handler (or an application service) handles the received request. It executes the necessary business logic by leveraging Domain Services and interacts with the infrastructure layer through well-defined interfaces (ports). Commands are state-changing operations and Queries are data-retrieval operations; handlers do not contain domain-specific business logic. One handler per use case is generally a good practice (e.g., CreateUser, UpdateUser, etc.).
Note: instead of using a handler, you can also use an application service to handle commands/queries/events.
Using this pattern has several advantages:
- You can implement middlewares in between the route and the handler. This way you can achieve things like authorization, rate limiting, caching, profiling, etc. It's easy to apply these middlewares granularly to specific commands/queries by using a simple regex, e.g. matching `users/*` to hit every command in the user module, or `users/create` for a specific one. Example file: middlewares.ts
- Reduced coupling between modules. Instead of explicitly defining dependencies between modules, you can use commands/queries. The moment you want to extract a module into a separate microservice, you can just implement the gRPC request logic in the handler, and you are good to go.
Example file: find-users.handler.ts
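A minimal sketch of such a handler in the functional style used here, assuming a repository port as described below (all names are illustrative):

```ts
export type UserDto = { id: string; email: string };

// Port: the handler only knows this interface, never the database client.
export type FindUsersRepositoryPort = {
  findUsers: (params: { limit: number }) => Promise<UserDto[]>;
};

// One handler per use case: it orchestrates, but holds no domain or SQL logic.
export const makeFindUsersHandler =
  (repository: FindUsersRepositoryPort) =>
  async (query: { limit?: number }): Promise<UserDto[]> =>
    repository.findUsers({ limit: query.limit ?? 20 });
```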
Domain Service: Contains the core business logic, for example how to compose a new entity, calculate its properties, and how to change them. In this specific project there are no entity/aggregate classes (see Domain-Driven Design) and data is generally composed of objects/arrays/primitives, therefore domain services are responsible for the surrounding business logic. The domain should represent (in code) what the business is or does (in real life).
Example file: user.domain.ts
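For illustration only, a domain service in this functional style could be a set of pure functions over plain objects; the validation rule and names below are assumptions, not rules from the boilerplate:

```ts
export type NewUserInput = { email: string; displayName?: string };

export class InvalidEmailError extends Error {}

// Pure business logic: no framework, no database, no I/O.
export const composeUser = (input: NewUserInput) => {
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(input.email)) {
    throw new InvalidEmailError(`Invalid email: ${input.email}`);
  }
  return {
    email: input.email.toLowerCase(),
    displayName: input.displayName?.trim() || input.email.split('@')[0],
  };
};
```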
Repository: Adapts data to its internal format and retrieves or persists data from/to a database as needed. Repositories decouple the infrastructure or technology used to access databases from the other layers (for example, handlers/domains should not know whether the data is stored in a SQL or NoSQL database). They should not contain business logic.
Example file: user.repository.ts
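A hedged sketch of a repository implementing the port from the handler sketch above, using node-postgres purely for illustration (the boilerplate's actual client, table, and column names may differ):

```ts
import { Pool } from 'pg';

export type UserDto = { id: string; email: string };

// Only the repository knows SQL; callers see plain objects through the port.
export const makeUserRepository = (pool: Pool) => ({
  async findUsers({ limit }: { limit: number }): Promise<UserDto[]> {
    const { rows } = await pool.query<UserDto>(
      'SELECT id, email FROM users ORDER BY id LIMIT $1',
      [limit],
    );
    return rows;
  },

  async persistUser(user: { email: string }): Promise<UserDto> {
    const { rows } = await pool.query<UserDto>(
      'INSERT INTO users (email) VALUES ($1) RETURNING id, email',
      [user.email],
    );
    return rows[0];
  },
});
```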
General recommendation: The optimal project structure balances complexity with maintainability. Carefully consider the project's anticipated size and intricacy. Utilize as many layers and building blocks as necessary to ensure efficient development, while avoiding unnecessary complexity.
The vertical slice architecture is the recommended structure. Each feature encapsulates commands, queries, repositories, etc.
.
├── db/
│   ├── migrations
│   └── seeds
├── tests
└── src/
    ├── config
    ├── modules/
    │   └── feature/
    │       ├── commands/
    │       │   └── command-example/
    │       │       ├── command.handler.ts → Route command handler/service
    │       │       ├── command.route.ts → Fastify http route
    │       │       ├── command.graphql-schema.ts → Graphql schema
    │       │       ├── command.resolver.ts → Graphql resolver
    │       │       └── command.schema.ts → Schemas for request and response validation
    │       ├── database/
    │       │   ├── feature.repository.port.ts → Feature repository port
    │       │   └── feature.repository.ts → Feature repository
    │       ├── domain/
    │       │   ├── feature.domain.ts → Domain services
    │       │   ├── feature.errors.ts → Domain-specific errors
    │       │   └── feature.types.ts → Domain-specific types
    │       ├── dtos/
    │       │   ├── feature.graphql-schema.ts → Common Graphql schema
    │       │   └── feature.response.dto.ts → Common DTO definition used across feature commands/queries
    │       ├── queries/
    │       │   └── query-example/
    │       │       ├── query.handler.ts → Route query handler/service
    │       │       ├── query.graphql-schema.ts → Graphql schema
    │       │       ├── query.resolver.ts → Graphql resolver
    │       │       ├── query.route.ts → Fastify http route
    │       │       └── query.schema.ts → Schemas for request and response validation
    │       ├── index.ts → Module entrypoint, dependencies definitions, command/query base definition
    │       └── feature.mapper.ts → Mapper util to map entities between layers (controller, domain, repositories)
    ├── server/
    │   └── plugins → Fastify plugins
    └── shared/
        ├── utils → Generic functions that don't belong to any specific feature
        └── db → DB configuration and helpers
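As a rough sketch of what the index.ts entrypoint wiring could look like with @fastify/awilix (registration names, file paths, and the pgPool dependency are assumptions for illustration, not the boilerplate's actual code):

```ts
import { asFunction, Lifetime } from 'awilix';
import { diContainer } from '@fastify/awilix';
import { makeUserRepository } from './database/feature.repository';
import { makeFindUsersHandler } from './queries/query-example/query.handler';

// Registers the module's dependencies; routes later resolve handlers by name.
// pgPool is assumed to be registered elsewhere (e.g. in a shared db plugin).
export const registerFeatureModule = () => {
  diContainer.register({
    userRepository: asFunction(({ pgPool }) => makeUserRepository(pgPool), {
      lifetime: Lifetime.SINGLETON,
    }),
    findUsersHandler: asFunction(({ userRepository }) => makeFindUsersHandler(userRepository), {
      lifetime: Lifetime.SINGLETON,
    }),
  });
};
```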
This boilerplate draws inspiration from Domain-Driven Hexagon, but prioritizes functional programming paradigms over traditional Java-style backend practices. Also, while acknowledging the value of Domain-Driven Design (DDD), this project aims for a more approachable structure with a lower knowledge barrier for onboarding new team members. Despite these adjustments, the core principles of Domain-Driven Hexagon remain a valuable resource. I highly recommend exploring it for further knowledge acquisition.
react-redux boilerplate: A meticulously crafted, extensible, and robust architecture for constructing production-grade React applications.
While this project avoids including a specific application instrumentation example due to configuration and provider variations, I strongly recommend considering OpenTelemetry. It offers significant benefits:
- Vendor Independence: OpenTelemetry is an open standard, freeing you from vendor lock-in. This flexibility allows you to choose and swap backend tools as needed without rewriting instrumentation code.
- Simplified Instrumentation: OpenTelemetry provides language-specific SDKs and automatic instrumentation features that significantly reduce the complexity of adding tracing, metrics, and logs to your codebase.
- Unified Observability: By offering a consistent way to collect and export telemetry data, OpenTelemetry facilitates a more holistic view of your application's performance.
- Reduced Development Time: OpenTelemetry allows you to write instrumentation code once and export it to any compatible backend system, streamlining development efforts.
Load testing is a powerful tool to mitigate performance risks by verifying an API's ability to manage anticipated traffic. By simulating real-world user interactions with an API under development, businesses can pinpoint potential bottlenecks before they impact production environments. These bottlenecks might otherwise go unnoticed during development due to the absence of production-level loads. Example tools:
- k6
- Artillery (example file create-user.artillery.yaml)
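For instance, a minimal k6 script could look like the sketch below; the endpoint, virtual-user count, and thresholds are assumptions, not project defaults:

```ts
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 20,          // 20 concurrent virtual users
  duration: '30s',
  thresholds: { http_req_duration: ['p(95)<300'] }, // 95% of requests under 300 ms
};

export default function () {
  const res = http.get('http://127.0.0.1:3000/users');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```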
To generate client types for your API based on your schemas, you can use the following command:
# Be sure to have the server running
npx openapi-typescript http://127.0.0.1:3000/api-docs/json -o ./client.schema.d.ts
With a little effort you can add this process to the pipeline and have a package published with each version of the backend. The same concept applies to GraphQL schemas using graphql-code-generator.
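Once generated, the paths type emitted by openapi-typescript can be used to type client calls; the /users path below is an assumption for illustration:

```ts
import type { paths } from './client.schema';

// The response type is derived directly from the OpenAPI document.
type FindUsersResponse =
  paths['/users']['get']['responses']['200']['content']['application/json'];

export async function fetchUsers(baseUrl: string): Promise<FindUsersResponse> {
  const res = await fetch(`${baseUrl}/users`);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return (await res.json()) as FindUsersResponse;
}
```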
Contributions are always welcome! If you have any ideas, suggestions, or fixes, feel free to contribute. You can do that by going through the following steps:
- Clone this repo
- Create a branch:
git checkout -b your-feature
- Make some changes
- Test your changes
- Push your branch and open a Pull Request