Maybe it's just me, or maybe it was tired coding, but I found it surprisingly difficult to create GCP Cloud Functions with Golang and wire them up with GCP Pub/Sub to create a chain of event processes.

My use case is a mobile app I am working on, which uses Firebase for authentication, data and file storage. The project needs to read data from Firestore and periodically send push notifications with useful tips and updates. Although Firebase encourages doing this with Node, I had already written and tested my functions in Golang and decided to give it a shot.

I hit MANY issues along the way and wanted to list them here for future reference, and hopefully to help anyone in the same mess I was in.

Note: Firebase projects are basically GCP projects under the hood, with a nice UI to make things a bit easier. The benefit of digging into the behind-the-scenes of a Firebase project is that you can use Golang for Cloud Functions, instead of the Node.js setup supplied by the Firebase front end.

There are a few "gotchas" when creating, deploying and linking up things on GCP.

You cannot deploy functions from main package

Unlike a normal Go program, you cannot deploy a Cloud Function whose package is main. The catch is that changing the package means you can no longer run the code locally as-is. I cover a solution to this later.

You must use the net/http request and response writer within the entry point function

I was attempting to create a cloud function which was not invoked by an HTTP endpoint (no --trigger-http). I assumed I didn't need the request and response writer parameters. This caused every deployment attempt to fail.

Golang cloud functions must look like the following:

func Run(w http.ResponseWriter, r *http.Request) {
    // code goes here
}

GCP uses this signature as a "validator" of sorts to verify the function code. From here, you can do whatever you like.
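For reference, a deploy command for a Pub/Sub-triggered function might look something like the following sketch. The topic name daily-tips and the runtime version are placeholders for illustration; Run is the exported entry point shown above.

```shell
# hypothetical deploy; "daily-tips" is a placeholder topic name
gcloud functions deploy Run \
  --runtime go116 \
  --trigger-topic daily-tips
```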

Pub message json structure

One part that tripped me up was trying to unmarshal the JSON which came from Pub/Sub. There are a few pointers here:

  • Use *pubsub.Message: Older articles and Stack Overflow posts mention using a *pubsub.PubSubMessage as the type when unmarshalling the response data. This is incorrect. The type required is now *pubsub.Message (note the removal of the extra PubSub in the name).
  • The entire message returned by Pub/Sub sits under a data key/value. From there, you can get the nested data, attributes, etc. This wasn't clear to me to begin with.
// function.go

package function

import (
    "encoding/json"
    "log"
    "net/http"

    "cloud.google.com/go/pubsub"
)

type PushRequest struct {
    Message pubsub.Message `json:"data"`
}

func Run(w http.ResponseWriter, r *http.Request) {
    var d PushRequest

    if err := json.NewDecoder(r.Body).Decode(&d); err != nil {
        log.Printf("json.NewDecoder: %v", err)
        http.Error(w, "Error parsing request", http.StatusBadRequest)
        return
    }

    // d.Message is a pubsub.Message:
    //   Data       []byte
    //   Attributes map[string]string
}

Local dependency importing

The normal Golang way of importing local dependencies in your project does not work with GCP Cloud Functions. For example, let's say you had the following file structure:

go.mod
go.sum
function.go
something/
-- something.go

Normally, you would be able to import this package within function.go using the project module specified in the go.mod file:

// function.go

package function

import (
    "example.com/function/something"
)

...

With Golang and Cloud Functions, you must change this so the import starts with the package name, followed by the local dependency (package/<subpackage>):

// function.go

package function

import (
    // "example.com/function/something"
    "function/something"
)
...

Without this, the compiler used at deploy time does not know where to look for subpackages within the project.
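For context, the go.mod in this example would look something like the following (module path example.com/function, matching the commented-out import above); even with this module path, the deployed build expects imports of the function/something form:

```
// go.mod
module example.com/function

go 1.16
```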

This leads on to my next gotcha...

Separate local testing code from cloud function code

Once you have made the dependency changes, you will no longer be able to run your code locally in a normal manner. I'm sure it is possible to use the replace directive within go.mod to substitute packages when Go does its package discovery, however this can become a headache.

Best advice would be to separate your deployment code from your working code.

go.sum
go.mod
function.go
something/
-- something.go
cmd/
-- main.go # Holds local version of functionality

Debugging code issues once deployed may be difficult

Major failures within your code, e.g. project authentication errors, will result in an error which looks like:

Infrastructure cannot communicate with function…

This proved hard to debug. Add plenty of fmt.Println("Something has happened") calls, or send output to a logger, in order to diagnose the issue.

Firebase may already be authenticated

The original code I wrote complicated matters, as I was trying to upload the Admin SDK key supplied by the Firebase project to authenticate the cloud functions.

As mentioned before, Firebase projects are basically GCP projects under the hood. If the service account has permission to use the SDK, all should be fine. However, this means locally you still need to authenticate, so functions may need to take in a parameter which passes the instance for Firebase, with settings dependent on the environment.

Otherwise, you can check whether an environment variable is available and, if not, authenticate via the JSON key:

// function.go

package function

import (
    "context"
    "log"
    "net/http"
    "os"

    "cloud.google.com/go/firestore"
    "google.golang.org/api/option"
)

var projectID = os.Getenv("GOOGLE_CLOUD_PROJECT")

func Run(w http.ResponseWriter, r *http.Request) {
    ctx := context.Background()

    var opts []option.ClientOption
    if projectID == "" {
        // running locally: set the project ID and use the key file
        projectID = "my-project-id" // placeholder for your project ID
        opts = append(opts, option.WithCredentialsFile("auth/auth_file.json"))
    }

    client, err := firestore.NewClient(ctx, projectID, opts...)
    if err != nil {
        log.Fatalf("error initializing app: %v\n", err)
    }
    defer client.Close()
}

Note: I have done this and made sure to add the auth folder to my .gitignore file.

I'll come back and update this post later if I find anything else.

Hope this helps.