Tutorial

How To Build a GraphQL API With Golang to Upload Files to DigitalOcean Spaces

Published on December 20, 2021

The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

Introduction

For many applications, one desirable feature is the user’s ability to upload a profile image. However, building this feature can be a challenge for developers new to GraphQL, which has no built-in support for file uploads.

In this tutorial, you will learn to upload images to a third-party storage service directly from your backend application. You will build a GraphQL API that uses the S3-compatible AWS Go SDK from a Go backend application to upload images to DigitalOcean Spaces, which is a highly scalable object storage service. The Go backend application will expose a GraphQL API and store user data in a PostgreSQL database provided by DigitalOcean’s Managed Databases service.

By the end of this tutorial, you will have built a GraphQL API using Golang that can receive a media file from a multipart HTTP request and upload the file to a bucket within DigitalOcean Spaces.

Prerequisites

To follow this tutorial, you will need:

  • A DigitalOcean account.

  • Go installed on your local machine.

  • A DigitalOcean Spaces access key and secret key, which the application will use to authenticate with your Space.

Step 1 — Bootstrapping a Golang GraphQL API

In this step, you will use the Gqlgen library to bootstrap the GraphQL API. Gqlgen is a Go library for building GraphQL APIs. Two important features that Gqlgen provides are a schema-first approach and code generation. With the schema-first approach, you first define the data model for the API using the GraphQL Schema Definition Language (SDL), and then you generate the boilerplate code for the API from that schema. With the code generation feature, you do not need to manually create the query and mutation resolvers for the API, as they are generated automatically.

To get started, execute the command below to install gqlgen:

  1. go install github.com/99designs/gqlgen@latest

Next, create a project directory named digitalocean to store the files for this project:

  1. mkdir digitalocean

Change into the digitalocean project directory:

  1. cd digitalocean

From your project directory, run the following command to create a go.mod file that manages the modules within the digitalocean project:

  1. go mod init digitalocean

Next, using nano or your favorite text editor, create a file named tools.go within the project directory:

  1. nano tools.go

Add the following lines to the tools.go file to register gqlgen as a build dependency for the project:

// +build tools

package tools

import _ "github.com/99designs/gqlgen"

Next, execute the tidy command to install the gqlgen dependency introduced within the tools.go file:

  1. go mod tidy

Finally, using the installed Gqlgen library, generate the boilerplate files needed for the GraphQL API:

  1. gqlgen init

Running the gqlgen init command above generates a gqlgen.yml configuration file, a server.go file for running the GraphQL server, and a graph directory containing a schema.graphqls file that holds the schema definitions for the GraphQL API.
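
At this point, the generated server.go file wires the generated schema to an HTTP server and serves the GraphQL playground. Depending on your gqlgen version, its contents will look roughly like the following sketch (you will modify this file later in the tutorial):

server.go
package main

import (
	"log"
	"net/http"
	"os"

	"digitalocean/graph"
	"digitalocean/graph/generated"

	"github.com/99designs/gqlgen/graphql/handler"
	"github.com/99designs/gqlgen/graphql/playground"
)

const defaultPort = "8080"

func main() {
	// Use the PORT environment variable if set, otherwise fall back to 8080.
	port := os.Getenv("PORT")
	if port == "" {
		port = defaultPort
	}

	// The executable schema is served on /query; the playground is served on the root path.
	srv := handler.NewDefaultServer(generated.NewExecutableSchema(generated.Config{Resolvers: &graph.Resolver{}}))

	http.Handle("/", playground.Handler("GraphQL playground", "/query"))
	http.Handle("/query", srv)

	log.Printf("connect to http://localhost:%s/ for GraphQL playground", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}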

In this step, you used the Gqlgen library to bootstrap the GraphQL API. Next, you’ll define the schema of the GraphQL application.

Step 2 — Defining the GraphQL Application Schema

In this step, you will define the schema of the GraphQL application by modifying the schema.graphqls file that was automatically generated when you ran the gqlgen init command. In this file, you will define the User, Query, and Mutation types.

Navigate to the graph directory and open the schema.graphqls file, which defines the schema of the GraphQL application. Replace the boilerplate schema with the following code block, which defines the User type with a Query to retrieve all user data and a Mutation to insert data:

schema.graphqls

scalar Upload

type User {
  id: ID!
  fullName: String!
  email: String!
  img_uri: String!
  DateCreated: String!
}

type Query {
  users: [User]!
}

input NewUser {
  fullName: String!
  email: String!
  img_uri: String
  DateCreated: String
}

input ProfileImage {
  userId: String
  file: Upload
}

type Mutation {
  createUser(input: NewUser!): User!
  uploadProfileImage(input: ProfileImage!): Boolean!
}

The code block defines a single Query type for retrieving all users and a Mutation type with two fields. A mutation is used to insert or mutate existing data in a GraphQL application, while a query is used to fetch data, similar to the GET HTTP verb in a REST API.

The schema in the code block above uses the GraphQL Schema Definition Language to define a Mutation type containing the createUser field, which accepts the NewUser input as a parameter and returns a single user. It also contains the uploadProfileImage field, which accepts the ProfileImage input and returns a boolean value indicating whether the upload operation succeeded.

Note: Gqlgen automatically provides the Upload scalar type, which describes the properties of an uploaded file. To use it, you only need to declare it at the top of the schema file, as done in the code block above.
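
On the Go side, gqlgen maps the Upload scalar to a struct similar to the sketch below; the exact field types can differ between gqlgen versions, so treat this as a reference rather than the library's source:

// Sketch of gqlgen's graphql.Upload type (field types may vary by version;
// newer releases use an io.ReadSeeker for the File field).
type Upload struct {
	File        io.Reader // contents of the uploaded file
	Filename    string    // original filename from the multipart request
	Size        int64     // file size in bytes
	ContentType string    // MIME type reported by the client
}

This is why the resolver you will implement later can read input.File.File, input.File.Filename, and input.File.Size.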

At this point, you have defined the structure of the data model for the application. The next step is to generate the schema’s query and the mutation resolver functions using Gqlgen’s code generation feature.

Step 3 — Generating the Application Resolvers

In this step, you will use Gqlgen’s code generation feature to automatically generate the GraphQL resolvers based on the schema that you created in the previous step. A resolver is a function that resolves or returns a value for a GraphQL field. This value could be an object or a scalar type such as a string, number, or even a boolean.

The Gqlgen package is based on a schema-first approach. A time-saving feature of Gqlgen is its ability to generate your application’s resolvers based on your defined schema in the schema.graphqls file. With this feature, you do not need to manually write the resolver boilerplate code, which means you can focus on implementing the defined resolvers.

To use the code generation feature, execute the command below in the project directory to generate the GraphQL API model files and resolvers:

  1. gqlgen generate

A few things will happen after executing the gqlgen command. Two validation errors relating to the schema.resolvers.go file will be printed out, some new files will be generated, and your project will have a new folder structure.

Execute the tree command to view the new files added to your project.

  1. tree *

The current directory structure will look similar to this:

Output
go.mod
go.sum
gqlgen.yml
graph
├── db.go
├── generated
│   └── generated.go
├── model
│   └── models_gen.go
├── resolver.go
├── schema.graphqls
└── schema.resolvers.go
server.go
tmp
├── build-errors.log
└── main
tools.go

2 directories, 8 files

Among the project files, one important file is schema.resolvers.go. It contains methods that implement the Mutation and Query types previously defined in the schema.graphqls file.

To fix the validation errors, delete the CreateTodo and Todos methods at the bottom of the schema.resolvers.go file. Gqlgen moved the methods to the bottom of the file because the type definitions were changed in the schema.graphqls file.

schema.resolvers.go

package graph

// This file will be automatically regenerated based on the schema, any resolver implementations
// will be copied through when generating and any unknown code will be moved to the end.

import (
	"context"
	"digitalocean/graph/generated"
	"digitalocean/graph/model"
	"fmt"
)

func (r *mutationResolver) CreateUser(ctx context.Context, input model.NewUser) (*model.User, error) {
	panic(fmt.Errorf("not implemented"))
}

func (r *mutationResolver) UploadProfileImage(ctx context.Context, input model.ProfileImage) (bool, error) {
	panic(fmt.Errorf("not implemented"))
}

func (r *queryResolver) Users(ctx context.Context) ([]*model.User, error) {
	panic(fmt.Errorf("not implemented"))
}

// Mutation returns generated.MutationResolver implementation.
func (r *Resolver) Mutation() generated.MutationResolver { return &mutationResolver{r} }

// Query returns generated.QueryResolver implementation.
func (r *Resolver) Query() generated.QueryResolver { return &queryResolver{r} }

type mutationResolver struct{ *Resolver }
type queryResolver struct{ *Resolver }

// !!! WARNING !!!
// The code below was going to be deleted when updating resolvers. It has been copied here so you have
// one last chance to move it out of harms way if you want. There are two reasons this happens:
//  - When renaming or deleting a resolver the old code will be put in here. You can safely delete
//    it when you're done.
//  - You have helper methods in this file. Move them out to keep these resolver files clean.

func (r *mutationResolver) CreateTodo(ctx context.Context, input model.NewTodo) (*model.Todo, error) {
	panic(fmt.Errorf("not implemented"))
}
func (r *queryResolver) Todos(ctx context.Context) ([]*model.Todo, error) {
	panic(fmt.Errorf("not implemented"))
}

As defined in the schema.graphqls file, Gqlgen’s code generator created two mutation resolver methods and one query resolver method. These resolvers serve the following purposes:

  • CreateUser: This mutation resolver inserts a new user record into the connected Postgres database.

  • UploadProfileImage: This mutation resolver uploads a media file received from a multipart HTTP request and uploads the file to a bucket within DigitalOcean Spaces. After the file upload, the URL of the uploaded file is inserted into the img_uri field of the previously created user.

  • Users: This query resolver queries the database for all existing users and returns them as the query result.

Going through the methods generated from the Mutation and Query types, you will observe that each one panics with a not implemented error when executed, indicating that they are still auto-generated boilerplate code. Later in this tutorial, you will return to the schema.resolvers.go file to implement these generated methods.

At this point, you have generated the resolvers for this application based on the contents of the schema.graphqls file. You will now use the Managed Databases service to create a database that will store the data passed to the mutation resolvers when creating a user.

Step 4 — Provisioning and Using a Managed Database Instance on DigitalOcean

In this step, you will use the DigitalOcean console to access the Managed Databases service and create a PostgreSQL database to store data from this application. After the database has been created, you will securely store the details in a .env file.

Although the application will not store images directly in a database, it still needs a database in which to insert each user’s record. The stored record will then contain links to the uploaded files.

A user’s record will consist of fullName, email, DateCreated, and img_uri fields, all of the String data type. The img_uri field contains the URL pointing to an image file uploaded by a user through this GraphQL API and stored within a bucket on DigitalOcean Spaces.
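
These fields correspond to the User type defined in the schema, which gqlgen turned into a Go struct in graph/model/models_gen.go. The generated struct will look roughly like the following sketch (generated code can vary slightly between gqlgen versions):

// models_gen.go (generated) -- sketch of the User model produced from the schema.
type User struct {
	ID          string `json:"id"`
	FullName    string `json:"fullName"`
	Email       string `json:"email"`
	ImgURI      string `json:"img_uri"`
	DateCreated string `json:"DateCreated"`
}

go-pg will later map this struct to a table in the managed PostgreSQL database.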

Using your DigitalOcean dashboard, navigate to the Databases section of the console to create a new database cluster, and select PostgreSQL from the list of databases offered. Leave all other settings at their default values and create this cluster using the button at the bottom.

Digitalocean database cluster

The database cluster creation process will take a few minutes before it is completed.

After creating the cluster, follow the Getting Started steps on the database cluster page to set up the cluster for use.

At the second step of the Getting Started guide, click the Continue, I’ll do this later text to proceed. By default, the database cluster is open to all connections.

Note: In a production-ready scenario, the Add Trusted Sources input field at the second step should only contain trusted IP addresses, such as the IP Address of the DigitalOcean Droplet running the application. During development, you can alternatively add the IP address of your development machine to the Add Trusted Sources input field.

Click the Allow these inbound sources button to save and proceed to the next step.

At the next step, the connection details of the cluster are displayed. You can also find the cluster credentials by clicking the Actions dropdown, then selecting the Connection details option.

Digitalocean database cluster credentials

In this screenshot, the gray box at right shows the connection credentials of the created demo cluster.

You will securely store these cluster credentials as environment variables. In the digitalocean project directory, create a .env file and add your cluster credentials in the following format, making sure to replace the highlighted placeholder content with your own credentials:

.env

 DB_PASSWORD=YOUR_DB_PASSWORD
 DB_PORT=PORT
 DB_NAME=YOUR_DATABASE_NAME
 DB_ADDR=HOST
 DB_USER=USERNAME

With the connection details securely stored in the .env file, the next step will be to retrieve these credentials and connect the database cluster to your project.

Before proceeding, you will need a few Go libraries to connect to and work with the Postgres database. go-pg is a Golang ORM (object-relational mapping) library that translates Go method calls into SQL queries for a Postgres database. godotenv is a Golang library for loading environment credentials from a .env file into your application. Lastly, go.uuid generates a UUID (universally unique identifier) for each user record that will be inserted into the database.

Execute this command to install these:

  1. go get github.com/go-pg/pg/v10 github.com/joho/godotenv github.com/satori/go.uuid

Next, navigate to the graph directory and create a db.go file. You will gradually put together the code within the file to connect with the Postgres database created in the Managed Databases cluster.

First, add the contents of the code block below to the db.go file. The createSchema function it defines creates a user table in the Postgres database immediately after a connection to the database has been established.

db.go
package graph

import (
	"github.com/go-pg/pg/v10"
	"github.com/go-pg/pg/v10/orm"
	"digitalocean/graph/model"
)

func createSchema(db *pg.DB) error {
	for _, models := range []interface{}{(*model.User)(nil)} {
		if err := db.Model(models).CreateTable(&orm.CreateTableOptions{
			IfNotExists: true,
		}); err != nil {
			panic(err)
		}
	}

	return nil
}

Using the IfNotExists option passed to the CreateTable method from go-pg, the createSchema function only creates a new table in the database if the table does not already exist. You can understand this process as a simplified form of seeding a newly created database: rather than creating the tables manually through the psql client or a GUI, the createSchema function takes care of the table creation.

Next, add the content of the code block below into the db.go file to establish a connection to the Postgres database and execute the createSchema function above when a connection has been established successfully:

db.go

import (
	// ...

	"fmt"
	"os"
)

func Connect() *pg.DB {
	DB_PASSWORD := os.Getenv("DB_PASSWORD")
	DB_PORT := os.Getenv("DB_PORT")
	DB_NAME := os.Getenv("DB_NAME")
	DB_ADDR := os.Getenv("DB_ADDR")
	DB_USER := os.Getenv("DB_USER")

	connStr := fmt.Sprintf(
		"postgresql://%v:%v@%v:%v/%v?sslmode=require",
		DB_USER, DB_PASSWORD, DB_ADDR, DB_PORT, DB_NAME )

	opt, err := pg.ParseURL(connStr); if err != nil {
		panic(err)
	}

	db := pg.Connect(opt)

	if schemaErr := createSchema(db); schemaErr != nil {
		panic(schemaErr)
	}

	if _, DBStatus := db.Exec("SELECT 1"); DBStatus != nil {
		panic("PostgreSQL is down")
	}

	return db 
}

When executed, the exported Connect function in the code block above establishes a connection to a Postgres database using go-pg. This is done through the following operations:

  • First, the database credentials you stored in the root .env file are retrieved. Then, a variable is created to store a string formatted with the retrieved credentials. This variable will be used as a connection URI when connecting with the database.

  • Next, the created connection string is parsed to confirm that the formatted credentials are valid. If they are valid, the parsed connection options are passed to the pg.Connect method to establish a connection.

  • Finally, the createSchema function is called to create the user table if it does not already exist, and a SELECT 1 query is executed to confirm that the database connection is healthy.

To use the exported Connect function, you will need to call it from the server.go file so it is executed when the application starts. The connection it returns can then be stored in the DB field within the Resolver struct.

To use the previously created Connect function from the graph package immediately after the application is started, and to load the credentials from the .env file into the application, open the server.go file in your preferred code editor and add the lines highlighted below:

Note: Make sure to replace the existing srv variable in the server.go file with the srv variable highlighted below.

server.go
package main

import (
	"log"
	"net/http"
	"os"

	"digitalocean/graph"
	"digitalocean/graph/generated"

	"github.com/99designs/gqlgen/graphql/handler"
	"github.com/99designs/gqlgen/graphql/playground"
	"github.com/joho/godotenv"
)

const defaultPort = "8080"

func main() {
	err := godotenv.Load(); if err != nil {
		log.Fatal("Error loading .env file")
	}

	// ...

	Database := graph.Connect()
	srv := handler.NewDefaultServer(
		generated.NewExecutableSchema(
			generated.Config{
				Resolvers: &graph.Resolver{
					DB: Database,
				},
			}),
	)

	// ...
}

In this code snippet, you loaded the credentials stored in the .env file through the godotenv Load() function. You then called the Connect function from the graph package and created the Resolver object with the database connection stored in its DB field. (The stored database connection will be accessed by the resolvers later in this tutorial.)

Currently, the boilerplate Resolver struct in the resolver.go file does not contain the DB field where you stored the database connection in the code above. You will need to create the DB field.

In the graph directory, open the resolver.go file and modify the Resolver struct to have a DB field with a go-pg pointer as its type, as shown below:

resolver.go
package graph

import "github.com/go-pg/pg/v10"

// This file will not be regenerated automatically.
//
// It serves as dependency injection for your app, add any dependencies you require here.

type Resolver struct {
	DB *pg.DB
}

Now a database connection will be established each time the server.go entry file is run, and the go-pg package can be used as an ORM to perform operations on the database from the resolver functions.

In this step, you created a PostgreSQL database using the Managed Database service on DigitalOcean. You also created a db.go file with a Connect function to establish a connection to the PostgreSQL database when the application is started. Next, you will implement the generated resolvers to store data in the PostgreSQL database.

Step 5 — Implementing the Generated Resolvers

In this step, you will implement the methods in the schema.resolvers.go file, which serves as the mutation and query resolvers. The implemented mutation resolvers will create a user and upload the user’s profile image, while the query resolver will retrieve all stored user details.

Implementing the Mutation Resolver Methods

In the schema.graphqls file, two mutation resolvers were generated: one inserts the user’s record, while the other handles profile image uploads. However, these mutations have not yet been implemented, as they are still boilerplate code.

Open the schema.resolvers.go file. Modify the imports and the CreateUser mutation with the highlighted lines to insert a new row containing the user details input into the database:

schema.resolvers.go
package graph

import (
	"context"
	"fmt"
	"time"

	"digitalocean/graph/generated"
	"digitalocean/graph/model"
	"github.com/satori/go.uuid"
)

func (r *mutationResolver) CreateUser(ctx context.Context, input model.NewUser) (*model.User, error) {
	user := model.User{
		ID:          fmt.Sprintf("%v", uuid.NewV4()),
		FullName:    input.FullName,
		Email:       input.Email,
		ImgURI:      "https://bit.ly/3mCSn2i",
		DateCreated: time.Now().Format("01-02-2006"),
	}

	_, err := r.DB.Model(&user).Insert(); if err != nil {
		return nil, fmt.Errorf("error inserting user: %v", err)
	}

	return &user, nil
}

In the CreateUser mutation, there are two things to note about the user rows inserted. First, each row that is inserted is given a UUID. Second, the ImgURI field in each row has a placeholder image URL as the default value. This will be the default value for all records and will be updated when a user uploads a new image.

Next, you will test the application that has been built at this point. From the project directory, run the server.go file with the following command:

  1. go run ./server.go

Now, navigate to http://localhost:8080 in your web browser to access the GraphQL playground built into your GraphQL API. Paste the GraphQL mutation in the code block below into the playground editor to insert a new user record.

graphql

mutation createUser {
  createUser(
    input: {
      email: "johndoe@gmail.com"
      fullName: "John Doe"
    }
  ) {
    id
  }
}

The output in the right pane will look similar to this:

A create user mutation on the GraphQL Playground

You executed the CreateUser mutation to create a test user with the name of John Doe, and the id of the newly inserted user record was returned as a result of the mutation.
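
The JSON response will be similar to the following, with a different generated id value:

Output
{
  "data": {
    "createUser": {
      "id": "YOUR_GENERATED_USER_ID"
    }
  }
}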

Note: Copy the id value returned from the executed GraphQL query. You will use the id when uploading a profile image for the test user created above.

At this point, you still have the second UploadProfileImage mutation resolver left to implement. Before you implement that function, however, you will implement the query resolver. This is because each upload is linked to a specific user, which is why you copied the ID of the test user before uploading an image.

Implementing the Query Resolver Method

As defined in the schema.graphqls file, one query resolver was generated to retrieve all created users. Similar to the previous mutation resolver methods, you also need to implement the query resolver method.

Open schema.resolvers.go and modify the generated Users query resolver with the highlighted lines. The new code within the Users method below queries the Postgres database for all user rows and returns the result.

schema.resolvers.go
package graph

func (r *queryResolver) Users(ctx context.Context) ([]*model.User, error) {
	var users []*model.User

	err := r.DB.Model(&users).Select()
	if err != nil {
		return nil, err
	}

	return users, nil
}

Within the Users resolver function above, fetching all records within the user table is made possible by using go-pg’s Select method on the User model without adding a WHERE or LIMIT clause to the query.

Note: For a bigger application where many records will be returned from the query, it is important to consider paginating the data returned for improved performance.
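
If you later extend the users query in the schema with hypothetical limit and offset arguments, a paginated version of the resolver could look roughly like this sketch:

// Sketch only: assumes the schema's users query was extended with
// hypothetical limit and offset arguments.
func (r *queryResolver) Users(ctx context.Context, limit int, offset int) ([]*model.User, error) {
	var users []*model.User

	// Limit caps the number of rows returned; Offset sets where the page starts.
	err := r.DB.Model(&users).Limit(limit).Offset(offset).Select()
	if err != nil {
		return nil, err
	}

	return users, nil
}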

To test this query resolver from your browser, navigate to http://localhost:8080 to access the GraphQL playground. Paste the GraphQL Query below into the playground editor to fetch all created user records.

graphql

query fetchUsers {
  users {
      fullName
      id
      img_uri
  }
}

The output in the right pane will look similar to this:

Query result GraphQL playground

In the returned results, you can see that a users object with an array value was returned. For now, only the previously created user appears in the users array, because it is the only record in the table. More users will be returned in the users array if you execute the createUser mutation with new details. You can also observe that the img_uri field in the returned data has the hardcoded fallback image URL.
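
In JSON form, the result will be similar to the following, with your own id value:

Output
{
  "data": {
    "users": [
      {
        "fullName": "John Doe",
        "id": "YOUR_GENERATED_USER_ID",
        "img_uri": "https://bit.ly/3mCSn2i"
      }
    ]
  }
}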

At this point, you have implemented both the CreateUser mutation and the Users query. Everything is in place for you to receive images through the second UploadProfileImage resolver and upload them to a bucket within DigitalOcean Spaces using the S3-compatible AWS Go SDK.

Step 6 — Uploading Images to DigitalOcean Spaces

In this step, you will implement the second UploadProfileImage mutation resolver to upload an image to your Space.

To begin, navigate to the Spaces section of your DigitalOcean console, where you will create a new bucket for storing the uploaded files from your backend application.

Click the Create New Space button. Leave the settings at their default values and specify a unique name for the new Space:

Digitalocean spaces

After a new Space has been created, navigate to the settings tab and copy the Space’s endpoint, name, and region. Add these to the .env file within the GraphQL project in this format:

.env
SPACE_ENDPOINT=BUCKET_ENDPOINT
DO_SPACE_REGION=DO_SPACE_REGION
DO_SPACE_NAME=DO_SPACE_NAME

As an example, the following screenshot shows the Settings tab, highlighting the name, region, and endpoint details of the demo Space (Victory-space):

Victory-space endpoint, name, and region

As part of the prerequisites, you created an access key and secret key for your Space. Paste your access key and secret key into the .env file within the GraphQL application in the following format:

.env
ACCESS_KEY=YOUR_SPACE_ACCESS_KEY
SECRET_KEY=YOUR_SPACE_SECRET_KEY

At this point, use the CTRL + C key combination to stop the GraphQL server, then execute the command below to restart the GraphQL application with the new credentials loaded into the application.

  1. go run ./server.go

Now that your Space credentials are loaded into the application, you will create the upload logic in the UploadProfileImage mutation resolver. The first step will be to add and configure the aws-sdk-go SDK to connect to your DigitalOcean Space.

One way to programmatically perform operations on your bucket within Spaces is through the use of compatible AWS SDKs. The AWS Go SDK provides a set of libraries that a Go application can use to perform operations on AWS resources, such as file transfers to S3 buckets.

The DigitalOcean Spaces documentation provides a list of operations you can perform on the Spaces API using an AWS SDK. You will use the aws-sdk-go SDK to connect to your DigitalOcean Space.

Execute the go get command to install the aws-sdk-go SDK into the application:

  1. go get github.com/aws/aws-sdk-go

Over the next few code blocks, you will gradually put together the upload logic in the UploadProfileImage mutation resolver.

First, open the schema.resolvers.go file. Add the highlighted lines to configure the AWS SDK with the stored credentials and establish a connection with your DigitalOcean Space:

Note: The code within the code block below is incomplete, as you are gradually putting the upload logic together. You will complete the code in the subsequent code blocks.

schema.resolvers.go
package graph

import (
	...

	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func (r *mutationResolver) UploadProfileImage(ctx context.Context, input model.ProfileImage) (bool, error) {

	SpaceRegion := os.Getenv("DO_SPACE_REGION")
	accessKey := os.Getenv("ACCESS_KEY")
	secretKey := os.Getenv("SECRET_KEY")

	s3Config := &aws.Config{
		Credentials: credentials.NewStaticCredentials(accessKey, secretKey, ""),
		Endpoint:    aws.String(os.Getenv("SPACE_ENDPOINT")),
		Region:      aws.String(SpaceRegion),
	}

	newSession := session.New(s3Config)
	s3Client := s3.New(newSession)

}

Now that the SDK is configured, the next step is to upload the file sent in the multipart HTTP request.

One way to handle the files sent is to read the content from the multipart request, temporarily save that content to a new local file, upload the temporary file using the aws-sdk-go library, and then delete the temporary file after the upload. With this approach, a client application (such as a web application) consuming this GraphQL API still uses the same GraphQL endpoint to perform file uploads, rather than relying on a third-party API to upload files.

To achieve this, add the highlighted lines to the existing code within the UploadProfileImage mutation resolver in the schema.resolvers.go file:

schema.resolvers.go

package graph

import (
	...

	"bytes"
	"io/ioutil"
)

func (r *mutationResolver) UploadProfileImage(ctx context.Context, input model.ProfileImage) (bool, error) {
	...

	SpaceName := os.Getenv("DO_SPACE_NAME")

	...

	// Prefix the filename with the user's ID so uploads from different users do not collide.
	userFileName := fmt.Sprintf("%v-%v", *input.UserID, input.File.Filename)
	stream, readErr := ioutil.ReadAll(input.File.File)
	if readErr != nil {
		fmt.Printf("error from file %v", readErr)
	}

	fileErr := ioutil.WriteFile(userFileName, stream, 0644); if fileErr != nil {
		fmt.Printf("file err %v", fileErr)
	}

	file, openErr := os.Open(userFileName); if openErr != nil {
		fmt.Printf("Error opening file: %v", openErr)
	}

	defer file.Close()

	buffer := make([]byte, input.File.Size)

	_, _ = file.Read(buffer)

	fileBytes := bytes.NewReader(buffer)

	object := s3.PutObjectInput{
		Bucket: aws.String(SpaceName),
		Key:    aws.String(userFileName),
		Body:   fileBytes,
		ACL:    aws.String("public-read"),
	}

	if _, uploadErr := s3Client.PutObject(&object); uploadErr != nil {
		return false, fmt.Errorf("error uploading file: %v", uploadErr)
	}

	_ = os.Remove(userFileName)

	return true, nil
}

Using the ReadAll method from the ioutil package in the code block above, you first read the content of the file added to the multipart request sent to the GraphQL API, and then a temporary file is created to dump this content into.

Next, using the PutObjectInput struct, you described the object to be uploaded by specifying the Bucket, Key, and ACL fields and setting the Body field to the content of the temporarily stored file.

Note: The Access Control List (ACL) field in the PutObjectInput struct has a public-read value to make all uploaded files available for viewing over the internet. You can remove this field if your application requires that uploaded data be kept private.

After creating the PutObjectInput struct, the PutObject method is used to make a PUT operation, sending the values of the PutObjectInput struct to the bucket. If there is an error, a false boolean value and an error message are returned, ending the execution of the resolver function and the mutation in general.
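
The temporary file is not strictly required. As a hedged alternative sketch (not the tutorial's implementation), the s3manager helper from aws-sdk-go can stream the multipart content straight from the Upload scalar to Spaces; the variable names below mirror the resolver above:

import "github.com/aws/aws-sdk-go/service/s3/s3manager"

// Inside UploadProfileImage, after creating newSession (illustrative sketch only).
uploader := s3manager.NewUploader(newSession)

_, uploadErr := uploader.Upload(&s3manager.UploadInput{
	Bucket: aws.String(SpaceName),    // bucket name read from the .env file
	Key:    aws.String(userFileName), // same object key used above
	Body:   input.File.File,          // io.Reader provided by gqlgen's Upload scalar
	ACL:    aws.String("public-read"),
})
if uploadErr != nil {
	return false, fmt.Errorf("error uploading file: %v", uploadErr)
}

The tutorial keeps the temporary-file approach above, so you do not need to change your resolver to follow along.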

To test the upload mutation resolver, you can use an image of Sammy the Shark, DigitalOcean’s mascot. Use the wget command to download an image of Sammy:

  1. wget https://html.sammy-codes.com/images/small-profile.jpeg

Next, execute the cURL command below to make an HTTP request to the GraphQL API to upload Sammy’s image, which has been added to the request form body.

Note: If you are on a Windows Operating System, it is recommended that you execute the cURL commands using the Git Bash shell due to the backslash escapes.

  1. curl localhost:8080/query  -F operations='{ "query": "mutation uploadProfileImage($image: Upload! $userId : String!) { uploadProfileImage(input: { file: $image  userId : $userId}) }", "variables": { "image": null, "userId" : "12345" } }' -F map='{ "0": ["variables.image"] }'  -F 0=@small-profile.jpeg

Note: We are using a random userId value in the request above because the process of updating a user’s record has not yet been implemented.

The output will look similar to this, indicating that the file upload was successful:

Output
{"data": { "uploadProfileImage": true }}

In the Spaces section of the DigitalOcean console, you will find the image uploaded from your terminal:

A bucket within Digitalocean showing a list of uploaded files

At this point, file uploads within the application are working; however, the uploaded files are not yet linked to the user who performed the upload. The goal of each file upload is to have the file stored in a bucket and then linked back to a user by updating the img_uri field in that user’s record.

Open the resolver.go file in the graph directory and add the code block below. It contains two helper methods: one retrieves a user from the database by a specified field, and the other updates a user’s record.

resolver.go

import (
...

  "digitalocean/graph/model"
  "fmt"
)

...

func (r *mutationResolver) GetUserByField(field, value string) (*model.User, error) {
	user := model.User{}

	err := r.DB.Model(&user).Where(fmt.Sprintf("%v = ?", field), value).First()

	return &user, err
}


func (r *mutationResolver) UpdateUser(user *model.User) (*model.User, error) {
	_, err := r.DB.Model(user).Where("id = ?", user.ID).Update()
	return user, err
}

The first GetUserByField function above accepts a field and value argument, both of a string type. Using go-pg’s ORM, it executes a query on the database, fetching data from the user table with a WHERE clause.

The second UpdateUser function in the code block uses go-pg to execute an UPDATE statement to update a record in the user table. Using the Where method, a WHERE clause with a condition is added to the UPDATE statement so that only the row with the ID passed into the function is updated.

Now you can use the two methods in the UploadProfileImage mutation. Add the content of the highlighted code block below to the UploadProfileImage mutation within the schema.resolvers.go file. This will retrieve a specific row from the user table and update the img_uri field in the user’s record after the file has been uploaded.

Note: Place the highlighted code at the line above the existing return statement within the UploadProfileImage mutation.

schema.resolvers.go

package graph

 
func (r *mutationResolver) UploadProfileImage(ctx context.Context, input model.ProfileImage) (bool, error) {
	_ = os.Remove(userFileName)

	user, userErr := r.GetUserByField("ID", *input.UserID)
	if userErr != nil {
		return false, fmt.Errorf("error getting user: %v", userErr)
	}

	fileUrl := fmt.Sprintf("https://%v.%v.digitaloceanspaces.com/%v", SpaceName, SpaceRegion, userFileName)

	user.ImgURI = fileUrl

	if _, err := r.UpdateUser(user); err != nil {
		return false, fmt.Errorf("err updating user: %v", err)
	}

	return true, nil
}

From the new code added to the schema.resolvers.go file, an ID string and the user’s ID are passed to the GetUserByField helper function to retrieve the record of the user executing the mutation.

A new variable is then created with the value of a string formatted to contain the link to the recently uploaded file, in the format https://BUCKET_NAME.SPACE_REGION.digitaloceanspaces.com/USER_ID-FILE_NAME. The ImgURI field in the retrieved user model is then reassigned to this formatted string, linking the user record to the uploaded file.

Paste the curl command below into your terminal, and replace the highlighted USER_ID placeholder in the command with the userId of the user created through the GraphQL playground in a previous step. Make sure the userId is wrapped in quotation marks so that the terminal can encode the value properly.

  1. curl localhost:8080/query -F operations='{ "query": "mutation uploadProfileImage($image: Upload! $userId : String!) { uploadProfileImage(input: { file: $image userId : $userId}) }", "variables": { "image": null, "userId" : "USER_ID" } }' -F map='{ "0": ["variables.image"] }' -F 0=@small-profile.jpeg

The output will look similar to this:

Output
{"data": { "uploadProfileImage": true }}

To further confirm that the user’s img_uri was updated, you can use the fetchUsers query from the GraphQL playground in the browser to retrieve the user’s details. If the update was successful, you will see that the default placeholder URL of https://bit.ly/3mCSn2i in the img_uri field has been updated to the value of the uploaded image.

The output in the right pane will look similar to this:

A query mutation to retrieve an updated user record using the GraphQL Playground

In the returned results, the img_uri in the first user object returned from the query has a value that corresponds to a file upload to a bucket within DigitalOcean Spaces. The link in the img_uri field is made up of the bucket endpoint, the user’s ID, and lastly, the filename.

To test the permissions of the uploaded file, which were set through the ACL option, you can open the img_uri link in your browser. Due to the default metadata on the uploaded object, the file will automatically download to your computer as an image file. You can open the file to view the image.

Downloaded view of the uploaded file

The image at the img_uri link will be the same image that was uploaded from the command line, indicating that the methods in the resolver.go file were executed correctly, and the entire file upload logic in the UploadProfileImage mutation works as expected.

In this step, you uploaded an image into a DigitalOcean Space by using the AWS SDK for Go from the UploadProfileImage mutation resolver.

Conclusion

In this tutorial, you performed a file upload to a bucket within DigitalOcean Spaces using the AWS SDK for Go from a mutation resolver in a GraphQL application.

As a next step, you could deploy the application built within this tutorial. The Go Dev Guide provides a beginner-friendly guide on how to deploy a Golang application to DigitalOcean’s App Platform, which is a fully managed solution for building, deploying, and managing your applications from various programming languages.
