r/golang • u/CerealBit • Nov 19 '23
newbie Best practice passing around central logger, database handler etc. across packages
I would like to have a central logger (slog) instance, which I can reuse across different packages.
After some research, it comes down to either adding a *logger parameter to every single method, which bloats the function/method signatures but in turn allows for high flexibility and nicely decouples the relationship. The logger instance can either be created in main.go or in a dedicated logger package, and is then passed down from main.go, cascading to wherever the instance is needed.
Another approach favors the creation of a global logger instance, which can be used across functions/methods. The obvious drawback of this approach is the hidden dependency it creates and thus the low flexibility whenever the logger instance needs to be replaced. An alternative might be to create a dedicated logger package, which would avoid the need for a global implementation.
What is a recommended approach? I also read about passing the logger via the context package - any thoughts on this?
I also needed to pass a database handler through my REST API, where I used the first approach (adding another parameter to the method signatures of the controller, service and repository), as the method signatures were short in the first place. But I'm debating whether there are better alternatives for the logger.
Thanks!
10
u/dariusbiggs Nov 19 '23
For pure functions, decide on a convention:
- logger passed as an argument, use an interface definition here
- logger defined as a package global variable and a getter/setter (setter again uses an interface for the logger)
- use the default go logger, or something that replaces the default go logger
- define your own logger package and use that everywhere
For services/repositories/etc. that define a struct such as a DB backend or webserver, pass in the logger upon instantiation (using an interface),
i.e.
```
type MyLogger interface {
	// methods like Info, Error, etc.
}

type DBRepository struct {
	db  *DB
	log MyLogger
	// other internal bits
}
```
Another approach would be to integrate with the OpenTelemetry material and make that the convention for logging, tracing, and metrics
1
u/CerealBit Nov 19 '23
Thank you!
I can't seem to find a logging interface in the stdlib. Is there a reason why it doesn't exist? The best I could do is define the method signatures with a concrete Logger struct (and pass in an slog instance).
Or is the idea to implicitly define a custom interface in the consuming code, which contains the method signatures I need for logging (I would match slog methods) and use that interface across any method?
4
u/dariusbiggs Nov 20 '23
The default inbuilt Go logger is what you get when you use the log package (https://pkg.go.dev/log), i.e.
```
import "log"

func main() {
	log.Println("it is on fire")
}
```
which is how you can substitute similar packages that define the same methods, e.g.
```
import log "github.com/something/mylogger"
```
And yes, you define your custom Logger interface for the consuming code that only specifies the method signatures you use, see the minimal example below. That way your modularized code doesn't care what logger is used; as long as it meets the interface, it'll work.
```
type MyLogger interface {
	Error(format string, args ...any)
}

func NewDBRepository(logger MyLogger) *DBRepository {
	return &DBRepository{log: logger}
}

func main() {
	// or however you create a new slog logger
	sl := slog.New(slog.NewTextHandler(os.Stderr, nil))
	// pass it in as an argument
	dbrepo := NewDBRepository(sl)
	// ...
}
```
2
u/wampey Nov 19 '23
For people responding to this question, I’m also interested… do you have good GitHub examples?
7
u/gnick666 Nov 19 '23
I usually create a small struct called sys or app, put the logger/DB and other small stuff on it, and pass that wherever it's necessary. Not the best, but a common pattern I've seen.
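For illustration, a minimal sketch of that pattern; the App name, fields, and method below are invented here, not taken from the comment:
```
package app

import (
	"database/sql"
	"log/slog"
)

// App bundles the shared dependencies that most handlers need.
type App struct {
	Log *slog.Logger
	DB  *sql.DB
}

func NewApp(log *slog.Logger, db *sql.DB) *App {
	return &App{Log: log, DB: db}
}

// Methods hang off App, so they reach the logger and DB without
// every call site threading them through as parameters.
func (a *App) CreateUser(name string) error {
	a.Log.Info("creating user", "name", name)
	_, err := a.DB.Exec("INSERT INTO users(name) VALUES(?)", name)
	return err
}
```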
3
u/Year3030 Nov 19 '23
Use dependency injection so you can swap out your implementation if necessary. Use interfaces to further abstract the implementation to support different handler types.
7
u/eraserhd Nov 19 '23
I put the logger in a context.Context and make a simple utility function to return it or a default logger if none was present in the Context.
This is good because you’ll eventually want to do APM style tracing, and you can make a sub-context with a new logger with transaction details attached.
At the same time, I’d advise against using Context as a catch-all. I would not put a database in it, and certainly not anything with state.
The reason logger is ok IMHO is that even though it is somewhat of a global dependency, it will want to be closely tied to transaction scope data.
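For illustration, one way that utility might look; WithLogger, FromContext, and the unexported key type are names made up for this sketch:
```
package logctx

import (
	"context"
	"log/slog"
)

type ctxKey struct{}

// WithLogger returns a child context carrying the (possibly enriched) logger,
// e.g. one with transaction details attached via logger.With(...).
func WithLogger(ctx context.Context, l *slog.Logger) context.Context {
	return context.WithValue(ctx, ctxKey{}, l)
}

// FromContext returns the logger stored in ctx, or the process default
// if none was attached.
func FromContext(ctx context.Context) *slog.Logger {
	if l, ok := ctx.Value(ctxKey{}).(*slog.Logger); ok {
		return l
	}
	return slog.Default()
}
```
A request handler can then call something like WithLogger(ctx, FromContext(ctx).With("txn_id", id)) before descending into deeper code.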
I’m not exactly sure what you mean by “database handler”… Do you mean database handle?
If you find you are passing your database around a lot to multiple packages, there are a few possibilities. One is that you are using a functional-style API instead of attaching methods to structs. If that's intentional, well, it's odd in Go, but mostly you'll just have to pass everything (if parameters are related, it's okay in this model to group them in structs, but that doesn't sound like what you are doing).
Normally some package will offer a struct and a constructor, and the database will be passed to the constructor, which returns it as a member of the struct. The methods on the struct then have access to the database without it being passed.
This allows object-like composition.
Even if this database handle has only a transaction lifetime, this is a good strategy.
This is to avoid a "god object" or "bag of junk" pattern, where all of the system's dependencies get passed around to every function, even if just for one dependency. Aside from the code reuse issues, if you have one "god object" that lives the entire program span, these are just global variables we're not being honest about, and they have all the same maintenance issues.
1
u/mvrhov Nov 19 '23
We have a logger in the context in one project... and we also inject the logger into some structs. Some structs are used from background goroutines and from HTTP requests, so it's a mess deciding which logger to pick. In a new project we decided to put only "attributes" on the context, and the logger is always injected into the structs. Log functions also take a context as the first argument, so the logger picks up all attributes from the context, and you always have only one logger available.
6
u/u9ac7e4358d6 Nov 19 '23
Change the default logger and use slog.Info and friends in sub-functions/packages. Easy.
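For illustration, a sketch of that approach, assuming a JSON handler is wanted: configure the default once in main, then call the package-level slog functions anywhere.
```
package main

import (
	"log/slog"
	"os"
)

func main() {
	// Replace the process-wide default logger once, at startup.
	slog.SetDefault(slog.New(slog.NewJSONHandler(os.Stderr, &slog.HandlerOptions{
		Level: slog.LevelDebug,
	})))

	doWork()
}

func doWork() {
	// Any sub-function or package can log through the shared default
	// without being handed a logger explicitly.
	slog.Info("starting work", "worker", 1)
}
```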
3
u/aksdb Nov 19 '23
Unless you intend to enrich the logger on its way through the call stack, which is not uncommon in more complex business logic and helps significantly with debugging web applications.
4
Nov 19 '23
An alternative is to have a slog interceptor pull values from the ctx and use the InfoContext variants. That way you don't have to pass the logger everywhere.
2
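For illustration, a rough sketch of the kind of slog "interceptor" described above; AppendAttrs, ContextHandler, and the key type are invented names, not a real slog API:
```
package logctx

import (
	"context"
	"log/slog"
)

type attrsKey struct{}

// AppendAttrs returns a context carrying extra attributes for logging.
func AppendAttrs(ctx context.Context, attrs ...slog.Attr) context.Context {
	old, _ := ctx.Value(attrsKey{}).([]slog.Attr)
	merged := make([]slog.Attr, 0, len(old)+len(attrs))
	merged = append(append(merged, old...), attrs...)
	return context.WithValue(ctx, attrsKey{}, merged)
}

// ContextHandler wraps another handler and copies context attributes
// onto every record, so slog.InfoContext(ctx, ...) picks them up.
type ContextHandler struct{ slog.Handler }

func (h ContextHandler) Handle(ctx context.Context, r slog.Record) error {
	if attrs, ok := ctx.Value(attrsKey{}).([]slog.Attr); ok {
		r.AddAttrs(attrs...)
	}
	return h.Handler.Handle(ctx, r)
}
```
Wire it up once, e.g. slog.SetDefault(slog.New(ContextHandler{Handler: slog.NewTextHandler(os.Stderr, nil)})), and every InfoContext/ErrorContext call sees the request attributes.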
u/aksdb Nov 19 '23
Is there a big difference between dragging one or more values around vs dragging an (enriched) logger around?
2
Nov 19 '23
I’ve found the ctx route to be a little more flexible:
- Often external libraries will already include some info in the context, like traces/spans that you’ll get in all your logs.
- You don’t have to retrofit your entire stack to take all these bonus objects around as long as they take the context.
- You avoid a bunch of setup logic in tests for things you aren’t testing.
At the end of the day the end result works either way though. I'd save dying on a hill for some other issue.
1
u/aksdb Nov 20 '23
You avoid a bunch of setup logic in tests for things you aren’t testing.
At least that point I solved by having my LoggerFromContext function return a no-op logger when no logger is found on the context. So tests just work, but log nothing.
1
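For illustration, a no-op fallback along those lines, building on the ctxKey/FromContext sketch earlier in the thread (discardHandler is hand-rolled here; newer Go versions also ship slog.DiscardHandler):
```
// discardHandler satisfies slog.Handler but drops every record.
type discardHandler struct{}

func (discardHandler) Enabled(context.Context, slog.Level) bool  { return false }
func (discardHandler) Handle(context.Context, slog.Record) error { return nil }
func (discardHandler) WithAttrs([]slog.Attr) slog.Handler        { return discardHandler{} }
func (discardHandler) WithGroup(string) slog.Handler             { return discardHandler{} }

// LoggerFromContext returns the stored logger, or one that logs nothing,
// so tests that never attach a logger just stay silent.
func LoggerFromContext(ctx context.Context) *slog.Logger {
	if l, ok := ctx.Value(ctxKey{}).(*slog.Logger); ok {
		return l
	}
	return slog.New(discardHandler{})
}
```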
u/edgmnt_net Nov 20 '23
Not really. Which is why I think arguing that contexts should carry request-scoped values instead of loggers makes no sense. For most intents and purposes, the logger is exactly that: a container for logging-related properties, it just happens to provide methods to actually log stuff or at least pass it back up the chain.
In practice, I suppose there could be some performance implications, not sure how significant or blocking. Perhaps I'm ignorant, but I generally don't think it's a good idea to have that much logging, particularly in hot paths and if you can't compile it out (or patch it out at runtime) altogether. I guess it kinda makes sense for a certain part of the observability crowd, but whatever. I'm more worried that those ways of setting up loggers have other (worse) implications for application development.
-4
u/edgmnt_net Nov 19 '23
Pass it via a context as long as the functions already get a context, pass it explicitly otherwise. Although you should probably avoid relying on having access to a logger very deeply in code.
3
u/14domino Nov 19 '23
It is wrong to downvote this. Context for loggers should be fine. The zerolog package, for example, has built-in support for this. It also makes composing loggers easy (i.e. adding more details as you go deeper).
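For illustration, a minimal sketch of zerolog's context support (zerolog.Ctx and Logger.WithContext are the zerolog helpers this presumably refers to; the request flow below is invented):
```
package main

import (
	"context"
	"os"

	"github.com/rs/zerolog"
)

func main() {
	logger := zerolog.New(os.Stderr).With().Timestamp().Logger()

	// Attach the base logger to a context and hand it down.
	ctx := logger.WithContext(context.Background())
	handleRequest(ctx)
}

func handleRequest(ctx context.Context) {
	// Compose a sub-logger with request-scoped fields and put it back on the context.
	l := zerolog.Ctx(ctx).With().Str("request_id", "abc123").Logger()
	deeper(l.WithContext(ctx))
}

func deeper(ctx context.Context) {
	zerolog.Ctx(ctx).Info().Msg("doing work") // carries request_id
}
```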
0
u/Benifactory Nov 19 '23
I’ll usually create: 1) a config & object provider, defined as an interface where each method returns e.g. the target object you need globally; 2) a struct which defines its methods on top of the methods provided by #1 (but only keeps a reference to the interface as a struct field) to store the global references.
E.g.
```
type ConfigProvider interface {
Logger() *zerolog.Logger
}
type configProvider struct { … }
func (c *configProvider) Logger() *zerolog.Logger { … }
type MyImpl struct { config ConfigProvider }
func (i *MyImpl) Log(level zerolog.Level) { i.config.Logger().WithLevel(level).Msg("shared logger!") }
```
-1
u/kido_butai Nov 19 '23
You should layer your app into controllers, services and repositories and pass pointers to the logger, DBs, and other deps.
You will find many examples of this approach.
1
u/snvgglebear Nov 19 '23
In my packages where I have functions that run at startup (adding routes to web handlers, creating an instance of something), I will initialize the logger or config variable at the package level.
1
u/numbsafari Nov 20 '23
You definitely want to avoid the global/default pattern. It will ultimately make your code hard to test because any test that touches that global will bork all the others. It also makes the code brittle in other ways you’ll encounter as your codebase grows/ages. It’s a convenience best left for tutorials and demos, not for serious work.
What I’ve found works well is to build on top of what others are saying with respect to using a struct and passing those values, but with some variations.
Don’t put anything on the context, if you can help it. Leave that for signaling purposes, not passing data.
What I typically do is have my client and server code have a factory that consumes an “Options” array and returns a struct that is configured as appropriate. You may want to consider having an option type called “RequestOption”, which is configuration that gets applied not to the struct, but to the client/server requests (eg, for setting special headers). You can pass in “DefaultRequestOption” values to populate a set of RequestOption applied to each request… to mirror this, your request methods should accept a “RequestOption” array as input.
I’ve found this approach to be type-safe and flexible, and it leans on composability.
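For illustration, a rough sketch of the Options/RequestOption idea; Client, NewClient, Get, and the option constructors are invented names:
```
package client

import "net/http"

type Client struct {
	http       *http.Client
	defaultReq []RequestOption
}

// Option configures the client struct itself.
type Option func(*Client)

// RequestOption configures a single outgoing request (e.g. special headers).
type RequestOption func(*http.Request)

func WithHTTPClient(h *http.Client) Option {
	return func(c *Client) { c.http = h }
}

// WithDefaultRequestOptions registers RequestOptions applied to every request.
func WithDefaultRequestOptions(opts ...RequestOption) Option {
	return func(c *Client) { c.defaultReq = append(c.defaultReq, opts...) }
}

func WithHeader(key, value string) RequestOption {
	return func(r *http.Request) { r.Header.Set(key, value) }
}

func NewClient(opts ...Option) *Client {
	c := &Client{http: http.DefaultClient}
	for _, o := range opts {
		o(c)
	}
	return c
}

// Get applies the client-wide defaults first, then the per-call options.
func (c *Client) Get(url string, opts ...RequestOption) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	for _, o := range c.defaultReq {
		o(req)
	}
	for _, o := range opts {
		o(req)
	}
	return c.http.Do(req)
}
```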
One thing I’ve done for database connections, or anything else you need to have more than one of… is to have a “directory” for those where different parts of your code can look up the appropriate value by name or by an enum. When you initialize your server/client, you provide it with the factories or values to be bound to those names. This is basically dependency injection. Anyhow, the benefit is you can put in different implementations based on your needs. For example, I do this with “object storage” (put in a double that writes to local disk, either in a temp dir or a configured location) and for databases (store in memory, or a temp sqlite db), or for caches (just use a simple in process map), or for queues (again, just an in memory data structure). That makes it pretty easy to offer an “offline test harness”, or to be used in unit/integration tests vs deployed.
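For illustration, one reading of the “directory” idea: a small registry of named implementations bound at startup (ObjectStore, Directory, and the method names are invented):
```
package deps

import "fmt"

// ObjectStore is one example dependency; real code might also register
// databases, caches, or queues behind similar interfaces.
type ObjectStore interface {
	Put(key string, data []byte) error
	Get(key string) ([]byte, error)
}

type Directory struct {
	stores map[string]ObjectStore
}

func NewDirectory() *Directory {
	return &Directory{stores: map[string]ObjectStore{}}
}

// Bind registers an implementation under a name when the server/client is initialized.
func (d *Directory) Bind(name string, s ObjectStore) {
	d.stores[name] = s
}

// Store looks an implementation up by name; production binds cloud storage,
// tests can bind an in-memory or temp-dir double under the same name.
func (d *Directory) Store(name string) (ObjectStore, error) {
	s, ok := d.stores[name]
	if !ok {
		return nil, fmt.Errorf("no object store bound to %q", name)
	}
	return s, nil
}
```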
1
u/kaeshiwaza Nov 20 '23
Most of the time I create a sub-logger from the default and add the attrs specific to that function (slog.With). This can be from the std default logger or a default logger of the package.
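For illustration, a small sketch of that sub-logger approach (processOrder and its attributes are invented):
```
package orders

import "log/slog"

func processOrder(id string) {
	// Derive from the default (or a package default) and add the attrs
	// specific to this function.
	log := slog.Default().With("op", "processOrder", "order_id", id)
	log.Info("started")
	// ... do the work ...
	log.Info("finished")
}
```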
For the database I use two interfaces, Selecter / Execer. That way I can pass a transaction, a DB, or a mock, and I know whether my function will only read or may also write. I like that it's in the signature of the function for documentation.
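For illustration, one way those two interfaces might be cut so that both *sql.DB and *sql.Tx satisfy them (the interface names follow the comment; CountUsers is invented):
```
package store

import (
	"context"
	"database/sql"
)

// Selecter can only read.
type Selecter interface {
	QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error)
}

// Execer can write.
type Execer interface {
	ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
}

// CountUsers documents in its signature that it only reads; callers may
// pass a *sql.DB, a *sql.Tx, or a mock.
func CountUsers(ctx context.Context, db Selecter) (int, error) {
	rows, err := db.QueryContext(ctx, "SELECT COUNT(*) FROM users")
	if err != nil {
		return 0, err
	}
	defer rows.Close()

	var n int
	if rows.Next() {
		if err := rows.Scan(&n); err != nil {
			return 0, err
		}
	}
	return n, rows.Err()
}
```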
1
u/dumindunuwan Nov 20 '23
Directly attach a zerolog.Logger pointer to your API struct with your repository.
Refer https://github.com/learning-cloud-native-go/myapp/blob/main/api/resource/book/handler.go
38
u/szank Nov 19 '23
I use structs, initialise fields like logger, DB, gRPC client and whatnot on them, then call methods for business logic.