Go With API

Tomas Kocman, Go Backend Engineer
Engineering · May 17, 2024 · 14 min read

The API layer, or what we often call the transport layer, is the final, crucial part of the "STRV Go Template." All the fancy things about databases, architectures and design that I wrote about in the previous parts are important, but every application needs a solid entry point: a place to determine which business logic branch to take, validate input parameters and return a proper response, whether a successful one or an error. With many ways to implement this layer, how do we choose? Why? And are there best practices?

I'll share our perspective, real examples and introduce our open-source Go packages to help you build your next breakthrough application. You’ll find out how to properly structure the transport layer and choose the right documentation strategy. If you enjoyed the generic helpers from the previous article, expect more here!

Transport and Other Layers

The relationship between the transport and service layers is pretty straightforward. The transport layer handler serves as the application’s entry point, where input parameters are parsed and validated. It’s also where the response from the service layer is serialized to the transport layer protocol and sent back to the API caller. The service layer function, in turn, is called by the transport layer handler and is responsible for returning the result of the API call by contacting all necessary dependencies. These dependencies may include other application services, third-party services like AWS or GCP, or databases. Speaking of which, the database is the third and final layer. Its role is simply to satisfy all operations required by the service to meet the client's requests, such as storing, fetching, listing, or deleting domain entities.

The diagram depicts the relationship between all the layers described above. The service layer is called by the handlers because the request must be fulfilled by a service method. The database layer is directly called by the service layer method. Dataloader for optimized data fetching is a wrapper around the database and is called by the service method so communication between transport and database layers is eliminated.

It's very important to note that when I mention calling the service layer, everything happens through interfaces. You already know how we use domain-driven design, and you've seen some example domains: session and user. There are also services (domain/application/infrastructure) that represent a composition of all possible operations on domains, and that's what our interfaces in the transport layer are composed of.

type UserService interface {
    Read(ctx context.Context, userID uuid.UUID) (*domuser.User, error)
}

type SessionService interface {
    Destroy(ctx context.Context, refreshTokenID uuid.UUID) error
}

In the example above, we can see the mapping between API handlers and service functions, no matter what protocol is used. Having one big universal interface for all services would go against the Go philosophy, and it isn't even practical.

HTTP Server

Now that you have a basic understanding of how the transport layer communicates with the service layer, let's peek under the hood. As you probably guessed, it's an HTTP server. Setting it up is a relatively easy task: in short, you instantiate the native Go HTTP server and provide it an HTTP handler.

We wanted to make this initialization process even friendlier, which is why we've built our custom net package that wraps the native HTTP server. Actually, this functionality is just one part of the package; overall, it provides a whole range of helpers for writing APIs. The Go team at STRV is constantly improving our open-source packages, and you may notice most of them (if not all) are not released as v1 at this time. We're still battle-testing our tools on real projects before we have enough confidence to release them with a major version. But hey, if you encounter any problems, make sure to create an issue on GitHub!

For now, let’s focus on the server functionality of the HTTP subpackage and define a basic server configuration.

serverConfig := httpx.ServerConfig{
    Addr:    addr,
    Handler: controller,
    Hooks: httpx.ServerHooks{
        BeforeShutdown: []httpx.ServerHookFunc{
            func(_ context.Context) {
                // Close databases and other dependencies here.
            },
        },
    },
    Limits: nil,
    Logger: util.NewServerLogger("httpx.Server"),
}

We always rename imports of our STRV open-source packages to prevent conflicts with the standard ones; in this case, we import the package under the httpx alias. The addr variable is a network address in the typical format, e.g., ":8080". The controller variable is our custom HTTP handler, whether it serves a REST or a GraphQL API. BeforeShutdown is useful for operations that need to run before the HTTP server stops, such as closing databases or wrapping up HTTP connections your server has opened. Once the HTTP server stops, it won't receive new requests; however, we want to complete the ones in progress and only then close the database.

For a simple HTTP server, you don't even need to configure limits, but you have the option. If you don't define the Limits object, the Go server's default values are applied. You can check the contents of the Limits structure yourself in the documentation.

The last field in the configuration is Logger, an interface you need to implement. At this time, we still use the zap package for logging, so implementing a logger that satisfies the interface by wrapping zap is super easy. Although slog, Go's native structured logging package, was recently released, we're sticking with zap for now; migrating to slog sits in our enhancement pipeline with low priority, since zap is one of the best third-party logging packages.

The next step is to initialize and run the server.

server := httpx.NewServer(&serverConfig)
if err = server.Run(ctx); err != nil {
    logger.Fatal("HTTP server unexpectedly ended", zap.Error(err))
}

Providing a server config and running the HTTP server is the last step. It's straightforward, so there's no need to show this process in deeper detail. In the following sections, I'll describe the implementation of the controller, where the real heavy lifting happens.

Which API To Use

The HTTP server works with any handler, no matter what transport layer technology you use. However, it's worth acknowledging that choosing the architectural style of the transport layer is not an easy task; many factors play important roles in the decision. The most prevalent architectural styles these days are REST, GraphQL and gRPC, and each comes with substantial benefits and some trade-offs. I expect you to know what these styles are about, so I won't deep dive into each of them. Instead, I want to describe how we decide which transport layer architecture to choose for a new project. Working on many projects over the years has given us plenty of opportunities to revise our opinions based on real experience, and we do change them quite often on our way to writing better services.

Let's start with the last one, gRPC. This is the framework we almost never use; the nature of our client projects doesn't match gRPC's use cases, as it's more suitable for internal server-to-server communication than client-server. The good news is that the template is extendable, so if things change in the future and gRPC is required more often, we can easily add gRPC support next to REST and GraphQL. Until then, the template goes without it, and any developer who needs gRPC support is encouraged to implement it within their project.

I mentioned REST and GraphQL. These two styles are prevalent in our projects, so it's no wonder our template supports them both, and developers can pick their favorite when starting a new project. Remember the Clash of APIs blog post? My comrade in arms, Mr. Viking, breaks down the differences between REST and GraphQL in more detail there.

In my experience, although you can often identify projects where GraphQL or REST is the better fit on paper, the decisive factor is usually the expertise of the people on the team. I've been on a project where we as backend engineers proposed a REST API, but the frontend engineers were more familiar with GraphQL and didn't want to work with REST. On the project I'm currently working on, it was the opposite: we proposed GraphQL, but the iOS engineers wanted a REST API because that's what they're used to.

In the end, the technology used on the transport layer matters less than you might think. We are capable of using both without limitations and proposing a reasonable design either way. Offer options and let the team decide what fits best. If you're convinced the project should run on REST or GraphQL and you have strong arguments, it's definitely worth sharing your point of view. And if someone's simply more comfortable with one over the other, that's completely fine. The goal is to architect a readable and maintainable API, no matter what technology is used.

Design Process

Let's shift gears from the technical stuff and break down the design process, a crucial step when kicking off a project. A quick note: I'm speaking for the Go backend team here, not the whole backend department. The process or style you work with is sometimes influenced by your tooling, so there may be differences between our team and, for example, teams working with Node.js. That's why you can see quite different development processes across the company; there is no single correct way to design and work with APIs.

Our development process could be described as design-first: we always start implementing new functionality by writing an OpenAPI or GraphQL specification. In general, you have two options, code-first and design-first. A code-first approach, as the name suggests, involves writing code first and documenting it later, with documentation most often generated from code annotations. It can work in certain development environments, but not in ours.
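To make the design-first flow concrete, here is a minimal OpenAPI 3 fragment of the kind we would agree on before writing any code; the path, parameter and response details are illustrative, not from a real project.

```yaml
openapi: 3.0.3
info:
  title: Example API
  version: 1.0.0
paths:
  /api/v1/users/{userId}:
    get:
      summary: Read a user
      parameters:
        - name: userId
          in: path
          required: true
          schema:
            type: string
            format: uuid
      responses:
        "200":
          description: The requested user
        "404":
          description: User not found
```

Frontend and test engineers can start working against this contract while the backend implementation is still in progress.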

Why design-first? It keeps us all on the same page. After agreeing upon an API definition of a certain feature, frontend and test engineers can work in parallel with developers. In many cases, the result is a faster time-to-market, more consistent documentation and a more reliable test process. Another aspect is communication and joint designing of features. We always achieve better results when we can discuss new features with someone from the other side, from a technological point of view. You can propose an overall better API when you understand the point of view of those implementing the frontend part of the application.

As you can probably tell, in most cases it doesn't matter whether you prefer code-first or design-first, as long as you regularly communicate with the rest of the team and design the most crucial features together.


REST API

Now back to technicalities. The REST API style is well-known to everyone reading this blog post, so I'm not going to dive into details. Instead, I'll describe what technologies we use, what helpers we've built in the open-source package I mentioned earlier and how handler methods can be dramatically simplified. But let's take it one step at a time. Regarding generators, we don't generate code from OpenAPI specifications at this time; we're happy writing models with all the needed methods and validations ourselves. Maybe we'll get there once we discover or develop a generator that satisfies all our needs and is compatible with the validator package; we keep this point on our to-do list. If you know of a tool, or are currently developing one, that matches these requirements, we would appreciate your input.


A controller is a construct that provides the ServeHTTP method for the HTTP server. We use Chi as a router, with general middlewares in the controller. I describe these middlewares as general because they wrap all exposed APIs and helper endpoints. They include CORS handling, request ID generation, logging, recovering from panics, limiting the size of the HTTP request body and many more; some of them you can even find in our http package. By helper endpoints, I mean, for example, a health endpoint or an endpoint serving the rendered OpenAPI specification. The core of the controller, though, is the business logic endpoints. In the template, we always start with v1 endpoints, but other versions can be added later in the same place.

Let's dive into the actual v1 handlers, which are connected to the controller. At this point, business logic handlers with specialized middlewares, like authentication and authorization, are defined. Each handler version defines its own interfaces to be satisfied by the service layer; interface examples appear in the Transport and Other Layers section.

We also define API models in this section. As I already hinted, we use the validator package for validations. I recommend creating a helper function that parses the HTTP request body and performs validation. When some of the fields are incorrect, return the path to the offending JSON key along with an appropriate status code to give clients more information.


Let’s start by parsing the request. I already provided some advice regarding validations, but there are also path and query parameters. We use this handy generic function for parsing path parameters from requests.

type ParamUnmarshaller interface {
    UnmarshalText(data []byte) error
}

func GetPathParam[TParam any, TPtrParam interface {
    *TParam
    ParamUnmarshaller
}](r *http.Request, paramName string) (pathParam TParam, err error) {
    p := TPtrParam(new(TParam))
    if err = p.UnmarshalText([]byte(chi.URLParam(r, paramName))); err != nil {
        return pathParam, err
    }
    return *p, nil
}

Usage might look similar to this.

objectID, err := GetPathParam[id.User](r, "userId")

In a similar way, you can implement a helper for parsing query parameters. However, there is an alternative to these functions that I recommend for more serious APIs: the param subpackage contains useful functions for parsing incoming requests. It parses not only the request body; you can also define the path and query parameters to be parsed, so everything is parsed at once before your handler method is invoked instead of piece by piece. The package contains extensive documentation with examples, so check it out.

Another practice I highly recommend adopting is the correct translation of errors returned from the service layer. You may remember from the first part of this series, where I described domain errors, that it's best practice to separate concerns and let other layers convert domain-layer properties into something useful in the given context. This is the right place to convert returned errors using errors.As, map Code to an HTTP status code and possibly use the other fields (Message and Data) in some meaningful way.

When it comes to returning the resulting JSON, I encourage you to try our helpers, which make it much easier and more readable; I'm referring to the http subpackage. To follow up on the previous section about errors: we start by creating a slice of ErrorResponseOption objects.

opts := []httpx.ErrorResponseOption{
    httpx.WithError(err),
    // ...other options built from the domain error's fields
}

As you can see, the package provides a full range of helper functions to create all possible options (httpx.WithError is used just for logging purposes; don't worry, internal error messages are not exposed). When you are satisfied with the result composed of the options, you can return it to the client.

err := httpx.WriteErrorResponse(w, statusCode, opts...)

WriteErrorResponse marshals JSON response and sends it back to the client. In case of a response without any error, you can use a similar function.

err := httpx.WriteResponse(w, data, statusCode, opts...)

In both cases, you don't have to specify options, since it's a variadic parameter; if you don't, the defaults are used. The last thing worth mentioning is a wrapper around HTTP handler functions. The thing is, every handler contains a lot of duplicate code: parsing the input request body, parsing path and query parameters, writing the response to the response writer, or writing an error response if the service function returned one. The handler wrappers solve this everyday boilerplate for you; we call this functionality a signature. The subpackage is well-documented and has examples, so feel free to try it on your own. All of our helpers are optional, and any engineer can decide not to use them for their use case.


GraphQL API

This section is going to be much shorter than the previous one because, in GraphQL, a lot is handled by third-party libraries, so you don't need many helpers. Regarding the philosophy, I would just be repeating myself: the important parts are the same as with REST. We work design-first, proposing new features as a GraphQL schema in cooperation with other team members. Nothing world-shattering. Let's move on to more interesting things.


The GraphQL controller is very similar to the REST one. GraphQL still runs over the HTTP protocol, so we need some of the general middlewares anyway; in fact, exactly the same ones as in the REST controller. The difference comes in authentication and authorization. While authentication is managed by a regular HTTP middleware, authorization is a GraphQL directive with very similar functionality to the middleware implemented in REST.
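For illustration, an authorization directive in the schema might look like the following fragment; the directive and type names are placeholders, not our actual schema.

```graphql
directive @isAuthenticated on FIELD_DEFINITION

type User {
  id: ID!
  email: String!
}

type Query {
  # Resolved only for callers that passed authentication.
  me: User! @isAuthenticated
}
```

The directive's implementation then checks the identity that the HTTP authentication middleware stored in the request context.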

In comparison with REST, there is no API versioning. As the official GraphQL documentation puts it, while there's nothing that prevents a GraphQL service from being versioned just like any other REST API, GraphQL takes a strong stance on avoiding versioning by providing the tools for the continuous evolution of a schema. Nothing changes regarding the interfaces the services must satisfy or the data models with the validator.

Code Generator 

We use gqlgen in the template. In contrast to the graphql package, it's based on a code generator that produces code from the GraphQL schema. All the boring parts are generated, allowing us to focus on building the app. To be more concrete, besides the internal GraphQL request handling, it generates function declarations, and it's up to you to complete the implementations; when regenerating, existing function bodies are not touched at all. Type safety is the absolute priority here, so you shouldn't see any empty interfaces or similarly ambiguous constructs. When you run the generator for the first time, it generates the project skeleton. We just added the controller functionality to pass the whole GraphQL API into a single HTTP handler with all the middlewares.

gqlgen supports extensive configuration like choosing if your slices should be []T or []*T (it’s []*T by default), or custom model binding, for example (useful for types like UUID, Password, or Email). In comparison with other GraphQL alternative packages, it is capable of features like generated enums, generated inputs, federation (this is especially important for big projects), hooks for error logging, custom errors, query complexity and much more. For more information, you can go through the official tutorial and try it on your own.
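A sketch of a gqlgen.yml exercising some of those options follows; treat the exact keys and paths as assumptions and consult the official gqlgen documentation for the authoritative configuration reference.

```yaml
schema:
  - graph/*.graphqls
exec:
  filename: graph/generated.go
model:
  filename: graph/model/models_gen.go
resolver:
  layout: follow-schema
  dir: graph
# []T instead of the default []*T for slice elements
omit_slice_element_pointers: true
models:
  # Custom model binding, e.g. a UUID scalar backed by a Go type
  UUID:
    model: github.com/google/uuid.UUID
```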

I recommend creating a custom error presenter to handle errors similarly to the REST API. All errors returned by resolvers, or from validation, pass through a hook before being returned to the user, and this hook allows you to customize them. The error presenter serves two purposes for us: we log in this one place instead of a general logging middleware, and we enrich the returned error with contextual information, as in REST. If you're thinking about the domain errors, you're correct; this is the right place to check whether the returned error is a domain error and use its attributes to enhance the resulting error and log it.

Data Loaders

GraphQL is prone to the n+1 problem, which occurs when a query requests multiple types of nested data and you end up with n requests instead of one. An example is requesting team members and their colleagues: it would result in one query per team member. Data loaders consolidate this retrieval into fewer, batched calls; fetching colleagues across all resolvers of a given GraphQL request becomes a single bulk database query, with results cached for subsequent lookups within the request.

As a solution to this problem, we use a third-party package for data loading. It's fully based on generics, so you can instantiate a data loader for any type you want to request from the database. The implementation lives in the repository package of a specific domain. To get the data loaders into the GraphQL layer, we use a simple middleware that inserts all data loaders (for fetching users, sessions, products, etc.) into the context, so resolvers can use them to fetch certain parts of the query. Since data loaders are not mandatory, I recommend a pragmatic approach here: start your project without them and add them to solve specific performance problems when needed.
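To show the mechanics, here is a toy stdlib-only batching loader, not the third-party package we actually use. Real data loaders also batch loads across goroutines on a short timer; this sketch only demonstrates batching and per-request caching.

```go
package main

import "fmt"

// Loader resolves many keys in one batched call and caches the results for
// the rest of the request.
type Loader[K comparable, V any] struct {
	batchFn func(keys []K) map[K]V
	cache   map[K]V
}

func NewLoader[K comparable, V any](batchFn func([]K) map[K]V) *Loader[K, V] {
	return &Loader[K, V]{batchFn: batchFn, cache: map[K]V{}}
}

// LoadMany returns values for all keys, hitting the batch function only for
// cache misses: one bulk query instead of one query per key.
func (l *Loader[K, V]) LoadMany(keys []K) []V {
	var misses []K
	for _, k := range keys {
		if _, ok := l.cache[k]; !ok {
			misses = append(misses, k)
		}
	}
	if len(misses) > 0 {
		for k, v := range l.batchFn(misses) {
			l.cache[k] = v
		}
	}
	out := make([]V, len(keys))
	for i, k := range keys {
		out[i] = l.cache[k]
	}
	return out
}

func main() {
	queries := 0
	colleagues := NewLoader(func(userIDs []int) map[int][]string {
		queries++ // one SELECT ... WHERE user_id IN (...) in a real repository
		res := map[int][]string{}
		for _, id := range userIDs {
			res[id] = []string{fmt.Sprintf("colleague-of-%d", id)}
		}
		return res
	})

	colleagues.LoadMany([]int{1, 2, 3})
	colleagues.LoadMany([]int{2, 3})          // served from cache
	fmt.Println("batched queries:", queries) // batched queries: 1
}
```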


Conclusion

Now you have a rough understanding of what the "STRV Go Template" looks like. I hope you found the knowledge I shared valuable, especially in part one, where I described what domain-driven design is in general and how we embraced it. You know what a domain is, the types of services built around them and why custom domain errors are so important for keeping separation of concerns.

From part two, you should feel your brain cells filling with information about databases and our approach to working with them. You know our stance on ORMs, our SQL driver of choice and how we handle migrations, querying and scanning. But the highlight is our emphasis on repositories and their benefits for your codebase. Now you know that you should strive for eventual consistency rather than recklessly avoiding it. I believe you'll also find our generic helpers useful for your next project.

In this final part, I want you to grasp our mindset when it comes to API technologies, how we choose the right architectural style for each project, and the open-source packages that streamline our development process. If you're uncertain about applying these practices in your project, reach out to me. Let your APIs flourish! 
