Using Packman to Generate gRPC Project Boilerplate

Matan Golan
December 2nd, 2020

gRPC is a great framework, especially for communication between two microservices. gRPC uses protobuf files to define service methods; for example, to define a location service I’ll write the following:

syntax = "proto3";
package services;

option go_package = "location";

message Coordinates {
  double latitude = 1;
  double longitude = 2;
}

message GeoLocation {
  string region = 1;
  string country = 2;
  string city = 3;
  Coordinates coordinates = 4;
}

message IPv4 {
  string address = 1;
}

service Location {
  // Takes an IP address and converts it into a geo-location
  rpc GeoIp (IPv4) returns (GeoLocation);

  // Takes geo-coordinates and converts them into a geo-location
  rpc ReverseGeo (Coordinates) returns (GeoLocation);
}

As you can see, protobuf files are pretty descriptive; by parsing them we can extract useful information. Let’s give it a try with our Go protobuf parser:

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/securenative/GoProtobufReader/proto_reader"
)

func main() {
	// Create an instance of the protobuf reader:
	reader := proto_reader.NewReader()

	// Read a protobuf file:
	bytes, err := ioutil.ReadFile("path-to-file.proto")
	if err != nil {
		log.Fatal(err)
	}

	// Parse the file; the result is the `struct` version of the protobuf file
	definition, err := reader.Read(string(bytes))
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%v", definition.Services)
	fmt.Printf("%v", definition.Messages)
}

It can’t get simpler than that… right?!

Now that we know how to parse the protobuf file, let’s write our script file. First, let’s define the reply model; this data will be available for query when writing the template files:

type ReplyModel struct {
	// The name of the gRPC service:
	Name        string
	// The methods defined in the gRPC service:
	Methods     map[string]*proto_reader.Method
	// The messages defined in the protobuf file:
	Messages    map[string]*proto_reader.Message
	// The package name (for go imports)
	PackageName string
}

The following script file parses the .proto file, compiles it using Docker, and finally builds the reply model:

func main() {
	// Parse the flags:
	flags := packman.ReadFlags()
	// path to .proto file:
	protoPath := flags["proto"]  
	packageName := flags[packman.PackageNameFlag]

	// Read the protobuf file contents:
	protoContent := readFile(protoPath)

	// Parse the protobuf file:
	reader := proto_reader.NewReader()
	protoDef, err := reader.Read(protoContent)
	if err != nil {
		panic(err)
	}

	// Copy the protobuf file to the pkg folder:
	err = ioutil.WriteFile(filepath.Join("..", "pkg", filepath.Base(protoPath)), []byte(protoContent), os.ModePerm)
	if err != nil {
		panic(err)
	}

	// Run the protobuf compiler (with docker):
	pwd, _ := os.Getwd()
	pkgFolder := filepath.Join(filepath.Dir(pwd), "pkg")
	fileName := filepath.Base(protoPath)
	run(fmt.Sprintf("docker run -v %s:/defs namely/protoc-all -f %s -l go -o .", pkgFolder, fileName))

	// For the sake of the example, we only need the first service defined:
	srv := first(protoDef.Services)

	// Build the reply model:
	reply := ReplyModel{
		Name:        srv.Name,
		Methods:     srv.Methods,
		Messages:    protoDef.Messages,
		PackageName: packageName,
	}

	// Write the reply back to packman's driver
	packman.WriteReply(reply)
}

Now that we have the script file we can start to template our project.

Defining the Project Structure

One of the most crucial parts of creating a template project is defining a good project structure, one that solves your problem and keeps things as simple as possible. For this example I’ll divide the project into three layers:

1. Data Layer 

The data layer is responsible for …surprise… getting the data from various sources: databases, REST calls, files, or any other kind of data source.

2. Business Layer

The business layer should use the components from the data layer to compose the business logic, allowing us to keep the business logic decoupled from the actual implementation of the data sources.

3. Application Layer

The application layer contains components that are specific to the application: things like controllers, web servers, CLI interfaces, etc.

Application -> Business -> Data

This packaging technique keeps our code decoupled; testing, refactoring, and maintaining it becomes easier since each layer is responsible for its own purpose, embracing the SOLID principles.

The dependency rule states that the outer layers can only depend on the inner layers.
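The dependency rule can be sketched in a few lines of Go. The names here (`Repository`, `Service`, `inMemoryRepository`) are illustrative, not the exact types the generated project uses:

```go
package main

import "fmt"

// Data layer: an interface the business layer depends on, so the
// concrete source (database, REST call, file) stays swappable.
type Repository interface {
	CountryByIP(ip string) (string, error)
}

// One possible data-layer implementation, backed by an in-memory map.
type inMemoryRepository struct {
	countries map[string]string
}

func (r *inMemoryRepository) CountryByIP(ip string) (string, error) {
	if c, ok := r.countries[ip]; ok {
		return c, nil
	}
	return "", fmt.Errorf("unknown ip: %s", ip)
}

// Business layer: composes data-layer components and knows nothing
// about gRPC, HTTP, or any other application concern.
type Service struct {
	repo Repository
}

func (s *Service) GeoIp(ip string) (string, error) {
	return s.repo.CountryByIP(ip)
}

// Application layer (sketched): a gRPC handler would call service.GeoIp
// and translate the result into a protobuf response.
func main() {
	repo := &inMemoryRepository{countries: map[string]string{"1.2.3.4": "IL"}}
	service := &Service{repo: repo}

	country, err := service.GeoIp("1.2.3.4")
	fmt.Println(country, err)
}
```

Because the business layer only sees the `Repository` interface, swapping the in-memory map for a real database touches the data layer alone.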

We ended up setting the following project structure:

.
├── README.md
├── main.go  // main entry point for our project
├── go.mod
├── cmd      // the application layer
│   └── server 
│       ├── config.go
│       ├── module.go
│       ├── server.go
│       ├── server_test.go
│       └── setup_test.go
├── internal
│   ├── business          // the business layer
│   │   ├── manifest.go
│   │   └── service.go
│   ├── data              // the data layer
│   │   ├── manifest.go
│   │   └── repository.go
│   ├── etc
│   └── models
└── pkg                   // pkg files are meant to be exposed
    ├── proto_file.pb.go
    └── proto_file.proto

Notice that we place the compiled and original protobuf files inside the pkg folder.

Writing the Template Files

Finally we get to the last part… we have a data model to query, we have the project structure ready, all we have to do now is to write the actual template files.

I’ll give examples of three different template files and explain the idea behind them. There are more files in the project, so if you’re serious about packman or need to see the whole picture, check out this repository.

gRPC Server (server.go)

To bootstrap a gRPC server we need a few pieces.

The protobuf compiler generates an interface based on the methods we defined in the protobuf service. For example, the location protobuf file shown at the beginning of the article will generate a server interface that looks like the following:

type LocationServer interface {
	GeoIp(context.Context, *IPv4) (*GeoLocation, error)
	ReverseGeo(context.Context, *Coordinates) (*GeoLocation, error)
}

When we have an implementation of the server interface (let’s call it srvImpl) we can bootstrap the gRPC server using:

server := grpc.NewServer(options...) 
RegisterLocationServer(server, srvImpl) // generated by protoc as well

Our project structure defines that the application layer should handle the requests (maybe extracting metadata from the gRPC request as well) and call the business layer to fulfil the request.

The full server.go template:

// A struct that encapsulates the gRPC server
type GrpcServer struct {
	Config Config
	server *grpc.Server
}

func NewGrpcServer(config Config, service business.Service) *GrpcServer {
	impl := newServerImpl(service)
	server := initGrpcServer(impl)
	return &GrpcServer{Config: config, server: server}
}

func (this *GrpcServer) Start() error {
	listener, err := net.Listen("tcp", fmt.Sprintf(":%d", this.Config.GrpcPort))
	if err != nil {
		return err
	}
	return this.server.Serve(listener)
}

func initGrpcServer(impl *serverImpl) *grpc.Server {
	// A place to add middlewares to the gRPC server:
	unaryInterceptors := grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(
		panichandler.UnaryPanicHandler,  	// gRPC won't handle panics gracefully by itself
		grpc_prometheus.UnaryServerInterceptor,
	))

	streamInterceptors := grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(
		grpc_prometheus.StreamServerInterceptor,
	))

	var options []grpc.ServerOption
	options = append(options, unaryInterceptors, streamInterceptors)

	// Finally create and register the gRPC server
	server := grpc.NewServer(options...)
	Register{{{ .Name }}}Server(server, impl)
	return server
}

// This struct implements the gRPC server interface
// as required by: https://grpc.io/docs/tutorials/basic/go/
type serverImpl struct {
	service business.Service
}

func newServerImpl(service business.Service) *serverImpl {
	return &serverImpl{service: service}
}

// Here we want to generate the methods implementing the gRPC server interface
{{{- range $k, $v := .Methods }}} // for each method in the data-model. $k is the method name, $v is the method struct
func (this* serverImpl) {{{ $k }}}(ctx context.Context, input *{{{ $v.Input.Name }}}) (*{{{ $v.Output.Name }}}, error) {
	return this.service.{{{ $k }}}(input)
}
{{{ end }}}

The important part of this template file is the range block near the end:

{{{- range $k, $v := .Methods }}}
func (this* serverImpl) {{{ $k }}}(ctx context.Context, input *{{{ $v.Input.Name }}}) (*{{{ $v.Output.Name }}}, error) {
	return this.service.{{{ $k }}}(input)
}
{{{ end }}}

Remember our data model? It had a field called Methods; all we have to do is use the range feature to iterate over the keys and values of the methods map to generate the required implementation of the gRPC server interface.

For example, these lines will emit the following output:

func (this* serverImpl) GeoIp(ctx context.Context, input *IPv4) (*GeoLocation, error) {
	return this.service.GeoIp(input)
}

func (this* serverImpl) ReverseGeo(ctx context.Context, input *Coordinates) (*GeoLocation, error) {
	return this.service.ReverseGeo(input)
}

Take a few seconds to understand how we got this generated code.
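If the triple-brace syntax looks unfamiliar: packman templates appear to be regular Go text/template files with custom delimiters (an assumption based on the syntax shown, not packman documentation). Here is a minimal, self-contained sketch of the same range trick over a methods map:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// render expands a packman-style template (triple-brace delimiters)
// over a map, mimicking the range blocks used in the article.
// All names here are illustrative.
func render() string {
	tmpl := template.Must(template.
		New("methods").
		Delims("{{{", "}}}"). // switch from {{ }} to {{{ }}}
		Parse("{{{ range $k, $v := . }}}func {{{ $k }}}() // returns {{{ $v }}}\n{{{ end }}}"))

	// $k is the method name, $v stands in for the output type:
	methods := map[string]string{"GeoIp": "GeoLocation", "ReverseGeo": "GeoLocation"}

	var out strings.Builder
	if err := tmpl.Execute(&out, methods); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	// text/template iterates map keys in sorted order, so the
	// generated methods come out in a deterministic order.
	fmt.Print(render())
}
```

Running this prints one stub line per method, which is exactly the shape of the generated `serverImpl` methods above, just with the signatures stripped down.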

Business Logic Service (service.go)

The generated business logic service is just a stub; the developer will need to fill in the actual business logic, so we return an error that says “implement me” from each method:

type ServiceImpl struct {
	repository data.Repository
}

func NewServiceImpl(repository data.Repository) *ServiceImpl {
	return &ServiceImpl{repository: repository}
}

{{{ range $k, $v := .Methods }}}
func (this *ServiceImpl) {{{ $k }}}(input *{{{ $v.Input.Name }}}) (*{{{ $v.Output.Name }}}, error) {
	return nil, errors.New("implement me")
}
{{{ end }}}

You can see that we’ve used the same trick here, just iterating over the keys and values of the methods map.

The rendered version of service.go:

type ServiceImpl struct {
	repository data.Repository
}

func NewServiceImpl(repository data.Repository) *ServiceImpl {
	return &ServiceImpl{repository: repository}
}

func (this *ServiceImpl) GeoIp(input *IPv4) (*GeoLocation, error) {
	return nil, errors.New("implement me")
}

func (this *ServiceImpl) ReverseGeo(input *Coordinates) (*GeoLocation, error) {
	return nil, errors.New("implement me")
}

Integration Tests (server_test.go)

Tests… most developers will tell you that they like tests and can jabber about their importance for hours, but the truth is that writing tests isn’t the most fun part. In our opinion a service isn’t complete without tests, simply because if you’ve written some lines of code, how do you know they work?

Let’s try to tackle this problem by generating several integration tests that should give you some level of confidence. First things first, let’s prepare the infrastructure for integration tests:

var client {{{ .Name }}}Client

func setup() {
	cfg := ParseConfig()

	conn, err := grpc.Dial(fmt.Sprintf("localhost:%d", cfg.GrpcPort), grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	client = New{{{ .Name }}}Client(conn)

	// You can place mocks here if you need to:
	module := NewModule(cfg)
	go func() {
		panic(module.GrpcServer.Start())
	}()
}

func teardown() {

}

func TestMain(m *testing.M) {
	setup()
	defer teardown()
	time.Sleep(1*time.Second)
	os.Exit(m.Run())
}

This file uses Go’s testing mechanism to define two functions: setup, which runs before all test cases, and teardown, which runs after all test cases.

The setup function parses the config, initializes the gRPC client (generated by protoc as well), creates the application module (where all the dependencies are initialized), and finally runs the actual gRPC server.

This way the entire web application is started, just as it would run in production, and we have a gRPC client that helps us make calls to the server.

Now we can go and write the actual integration tests template:


{{{ range $k, $v := .Methods }}} // for each method
func TestIntegration_{{{ $k }}}(t *testing.T) { // generate a function with the name of the method
	// prepare the input to that method
	input := &{{{ $v.Input.Name }}}{ 
		{{{- range $tk, $tv := $v.Input.Fields }}} // for each field of the input struct
		{{{ $tv.Name }}}: nil,
		{{{- end }}}
	}

	output, err := client.{{{ $k }}}(context.TODO(), input) // make the call to the gRPC server
	assert.Nil(t, err)
	assert.NotNil(t, output)
	{{{- range $tk, $tv := $v.Output.Fields }}}  // for each field of the output struct
	// generate assert statement with a stub
	assert.EqualValues(t, "PLACE VALUE FOR {{{ $tv.Name }}}", output.{{{ $tv.Name }}})
	{{{- end }}}
}
{{{ end }}}

In our example this file will be rendered to the following:


func TestIntegration_GeoIp(t *testing.T) {
	input := &IPv4{
		Address: nil,
	}

	output, err := client.GeoIp(context.TODO(), input)
	assert.Nil(t, err)
	assert.NotNil(t, output)
	assert.EqualValues(t, "PLACE VALUE FOR City", output.City)
	assert.EqualValues(t, "PLACE VALUE FOR Country", output.Country)
	assert.EqualValues(t, "PLACE VALUE FOR Region", output.Region)
}

func TestIntegration_ReverseGeo(t *testing.T) {
	input := &Coordinates{
		Latitude: nil,
		Longitude: nil,
	}

	output, err := client.ReverseGeo(context.TODO(), input)
	assert.Nil(t, err)
	assert.NotNil(t, output)
	assert.EqualValues(t, "PLACE VALUE FOR City", output.City)
	assert.EqualValues(t, "PLACE VALUE FOR Country", output.Country)
	assert.EqualValues(t, "PLACE VALUE FOR Region", output.Region)
}

How cool is that?

Sweet, so we were able to generate the project skeleton and some tests, good starting point isn’t it?

It’s Alive?

Well… basically yes, you can use packman (and your own protobuf file) to unpack this template by just typing:


packman unpack \
    https://github.com/securenative/packman-proto-example \
    packmanGrpcServer \
    -proto https://srv-file7.gofile.io/download/mgE2XQ/geo_location.proto \
    -port 10012

But this is just an example and it’s meant to be simple. For production-grade projects you should add your own monitoring tools, your deployment/Docker setup, maybe even Kubernetes readiness and liveness probes, or anything else your requirements call for; there is no actual limit on what you can do with packman.

Summing it all up

So… we took a pretty long trip in which we learned how the internals of packman work and how to use it to generate a production-grade project from a single protobuf file. Kudos!

We think packman can really help your business become more efficient and shorten development cycles by automating the things that no one wants to do.

It’s easy to learn, it’s easy to use. Why don’t you use it?