Golang PGO builds using GitHub Actions


In February of this year, I announced that Dolt releases are now built as profile-guided optimized (pgo) binaries, leveraging this powerful feature of Golang 1.20 to increase Dolt's read performance by 5.3%.

Prior to my announcement, Zach, one of our resident Golang experts, experimented with Golang's pgo feature and wrote about the performance gains he observed in Dolt after building it with profiles captured during our Sysbench benchmarking runs. From there, we knew we had to get those performance gains into our released binaries, so we retooled Dolt's release process to build pgo releases.

Today, I'll cover Dolt's general release process, which uses GitHub Actions, and I'll break down each phase of that process. I'll also go over what we changed in our process to start releasing pgo builds. Hopefully this will let you glean some insights you can use for working pgo builds into your own Golang releases!

Let's dive right in.

Dolt releases with GitHub Actions

Dolt leverages GitHub Actions to perform a number of automated tasks, one of which is creating and publishing releases.

GitHub Actions uses files called workflows to define jobs, which do the work described in the workflow file. These jobs are deployed to runners, or host machines, that you can either host yourself or let GitHub host for you.

Self-hosted runners are provisioned and maintained by you, external to GitHub Actions. GitHub-hosted runners, which are free for public repositories, are all hosted and maintained by GitHub, but they have specific storage, memory, and CPU limits depending on your subscription tier. For Dolt, we use the free-tier GitHub-hosted runners.

At a high-level, the Dolt release process needs to accomplish a few objectives.

First, and most importantly, the process needs to create a tag and release for the new version of Dolt and upload precompiled binaries of Dolt to the release assets.

Second, the release process needs to run our Sysbench benchmarking tests against this new version of Dolt and email the results to our DoltHub team.

Third, and not super relevant to this blog, the process needs to kick off any other auxiliary tasks we need to perform during a release, like creating Dolt's release notes that depend on pull request descriptions from multiple repositories, publishing the release to various package managers so that it can be easily installed from them, pushing new Docker images to DockerHub, or upgrading the Dolt dependency in various repositories we own.

So, with these objectives in mind, we came up with a suite of GitHub Actions workflows that leverage the repository_dispatch event so that we can accomplish each of these objectives. Let's look at a diagram that shows what this design looks like in principle, then we'll dive into the specifics of the workflows.

Diagram of Dolt Release Process Flow

In the above diagram you'll see two contexts, the GitHub Actions context and the Kubernetes (K8s) context. Let's discuss the GitHub Actions context first.

For Dolt's original release process, we used three workflows: the "Release Dolt" workflow, the "Deploy K8s Sysbench benchmarking Job" workflow, and the "Email team" workflow.

The "Release Dolt" workflow kicks off the entire Dolt release process, and is run manually by our engineering team when they're ready to release a new version of Dolt. Here is a pared-down version of the workflow that references the steps shown in the diagram above.

name: Release Dolt

on:
  workflow_dispatch:
    inputs:
      version:
        description: 'SemVer format release tag, i.e. 0.24.5'
        required: true

jobs:
  format-version:
    runs-on: ubuntu-22.04
    outputs:
      version: ${{ steps.format_version.outputs.version }}
    steps:
      - name: Format Input
        id: format_version
        run: |
          version="${{ github.event.inputs.version }}"
          if [[ $version == v* ]];
          then
            version="${version:1}"
          fi
          echo "version=$version" >> $GITHUB_OUTPUT

  create-release:
    needs: format-version
    name: Create release
    runs-on: ubuntu-22.04
    outputs:
      release_id: ${{ steps.create_release.outputs.id }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Set up Go 1.x
        uses: actions/setup-go@v3
        with:
          go-version: ^1.21
      - name: Update dolt version command
        run: sed -i -e 's/	Version = ".*"/	Version = "'"$NEW_VERSION"'"/' "$FILE"
        env:
          FILE: ${{ format('{0}/go/cmd/dolt/dolt.go', github.workspace) }}
          NEW_VERSION: ${{ needs.format-version.outputs.version }}
      - name: Set minver TBD to version
        run: sed -i -e 's/minver:"TBD"/minver:"'"$NEW_VERSION"'"/' "$FILE"
        env:
          FILE: ${{ format('{0}/go/cmd/dolt/commands/sqlserver/yaml_config.go', github.workspace) }}
          NEW_VERSION: ${{ needs.format-version.outputs.version }}
      - name: update minver_validation.txt
        working-directory: ./go
        run: go run -mod=readonly ./utils/genminver_validation/ $FILE
        env:
          FILE: ${{ format('{0}/go/cmd/dolt/commands/sqlserver/testdata/minver_validation.txt', github.workspace) }}
      - uses: EndBug/add-and-commit@v9.1.1
        with:
          message: ${{ format('[ga-bump-release] Update Dolt version to {0} and release v{0}', needs.format-version.outputs.version) }}
          add: ${{ format('["{0}/go/cmd/dolt/dolt.go", "{0}/go/cmd/dolt/commands/sqlserver/yaml_config.go", "{0}/go/cmd/dolt/commands/sqlserver/testdata/minver_validation.txt"]', github.workspace) }}
          cwd: "."
          pull: "--ff"
      - name: Build Binaries
        id: build_binaries
        run: |
          latest=$(git rev-parse HEAD)
          echo "commitish=$latest" >> $GITHUB_OUTPUT
          GO_BUILD_VERSION=1.21 go/utils/publishrelease/buildbinaries.sh
      - name: Create Release
        id: create_release
        uses: dolthub/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: v${{ needs.format-version.outputs.version }}
          release_name: ${{ needs.format-version.outputs.version }}
          draft: false
          prerelease: false
          commitish: ${{ steps.build_binaries.outputs.commitish }}
      - name: Upload Linux AMD64 Distro
        id: upload-linux-amd64-distro
        uses: dolthub/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }}
          asset_path: go/out/dolt-linux-amd64.tar.gz
          asset_name: dolt-linux-amd64.tar.gz
          asset_content_type: application/zip
...
      - name: Upload Install Script
        id: upload-install-script
        uses: dolthub/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }}
          asset_path: go/out/install.sh
          asset_name: install.sh
          asset_content_type: text/plain

  trigger-performance-benchmark-email:
    needs: [format-version, create-release]
    runs-on: ubuntu-22.04
    steps:
      - name: Trigger Performance Benchmarks
        uses: peter-evans/repository-dispatch@v2.0.0
        with:
          token: ${{ secrets.REPO_ACCESS_TOKEN }}
          event-type: release-dolt
          client-payload: '{"version": "${{ needs.format-version.outputs.version }}", "actor": "${{ github.actor }}"}'

This workflow is triggered manually, using the workflow_dispatch event, and requires a new version number as input. From there, it does some quick formatting of the version input, then writes and commits this new version to Dolt's main branch so that the released binaries will output this new number from the dolt version command.

In the "Build Binaries" step, the create-release job runs the buildbinaries.sh script, which builds Dolt from source using a Golang Docker container that runs the go build command.

We use Docker containers to build Dolt so that the paths output in stack traces are generic Linux Go paths, not paths that reference a Go installation on the runner or on one of our personal computers (which has happened in early, early versions of Dolt 🤠).

Next, the "Create Release" step creates the tag and publishes the release on GitHub. It also provides an upload_url which is used in all subsequent steps of the create-release job to upload the compiled binaries to the new GitHub release.

The final portion of this workflow is another job that runs after all the previous jobs have completed. This job is called trigger-performance-benchmark-email. It uses a GitHub Action we found on the GitHub Marketplace to emit a repository_dispatch event, which kicks off a separate Dolt workflow, one we can see if we look back at our diagram.

Diagram of Dolt Release Process Flow Highlighting Release Dolt Workflow to Deploy K8s Sysbench Job

Our diagram shows the final step of the "Release Dolt" workflow pointing to another workflow called "Deploy K8s Sysbench benchmarking Job". This is the workflow started by the trigger-performance-benchmark-email job.

This workflow, and others like it, was designed to be dispatched asynchronously, in part so that it wouldn't be tightly coupled to the "Release Dolt" workflow alone.

In fact, various workflows trigger this workflow with a repository_dispatch event, since we need to run performance benchmarks at different times, not just during a release. Interestingly, this workflow itself kicks off another asynchronous process, which we can see in the diagram indicated by the arrow: it deploys a K8s Job that runs our Sysbench benchmarks.

As it happens, we've written quite a bit about using Sysbench to benchmark Dolt against MySQL and compare their performance, but I don't think we've covered the implementation details of how we actually do that. This blog is a good place to go over them, so I will momentarily. Before I do, though, let's look briefly at the "Deploy K8s Sysbench benchmarking Job" workflow.

name: Benchmark Latency

on:
  repository_dispatch:
    types: [ benchmark-latency ]

jobs:
  performance:
    runs-on: ubuntu-22.04
    name: Benchmark Performance
    strategy:
      matrix:
        dolt_fmt: [ "__DOLT__" ]
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - uses: azure/setup-kubectl@v3.0
        with:
          version: 'v1.23.6'
      - name: Install aws-iam-authenticator
        run: |
          curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.8/2020-09-18/bin/linux/amd64/aws-iam-authenticator && \
          chmod +x ./aws-iam-authenticator && \
          sudo cp ./aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
          aws-iam-authenticator version
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2.2.0
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Create and Auth kubeconfig
        run: |
          echo "$CONFIG" > kubeconfig
          KUBECONFIG=kubeconfig kubectl config set-credentials github-actions-dolt --exec-api-version=client.authentication.k8s.io/v1alpha1 --exec-command=aws-iam-authenticator --exec-arg=token --exec-arg=-i --exec-arg=eks-cluster-1
          KUBECONFIG=kubeconfig kubectl config set-context github-actions-dolt-context --cluster=eks-cluster-1 --user=github-actions-dolt --namespace=performance-benchmarking
          KUBECONFIG=kubeconfig kubectl config use-context github-actions-dolt-context
        env:
          CONFIG: ${{ secrets.CORP_KUBECONFIG }}
      - name: Create Sysbench Performance Benchmarking K8s Job
        run: ./.github/scripts/performance-benchmarking/run-benchmarks.sh
        env:
          FROM_SERVER: ${{ github.event.client_payload.from_server }}
          FROM_VERSION: ${{ github.event.client_payload.from_version }}
          TO_SERVER: ${{ github.event.client_payload.to_server }}
          TO_VERSION: ${{ github.event.client_payload.to_version }}
          MODE: ${{ github.event.client_payload.mode }}
          ISSUE_NUMBER: ${{ github.event.client_payload.issue_number }}
          ACTOR: ${{ github.event.client_payload.actor }}
          ACTOR_EMAIL: ${{ github.event.client_payload.actor_email }}
          REPO_ACCESS_TOKEN: ${{ secrets.REPO_ACCESS_TOKEN }}
          KUBECONFIG: "./kubeconfig"
          INIT_BIG_REPO: ${{ github.event.client_payload.init_big_repo }}
          NOMS_BIN_FORMAT: ${{ matrix.dolt_fmt }}
          TEMPLATE_SCRIPT: ${{ github.event.client_payload.template_script }}
      - name: Create TPCC Performance Benchmarking K8s Job
        run: ./.github/scripts/performance-benchmarking/run-benchmarks.sh
        env:
          FROM_SERVER: ${{ github.event.client_payload.from_server }}
          FROM_VERSION: ${{ github.event.client_payload.from_version }}
          TO_SERVER: ${{ github.event.client_payload.to_server }}
          TO_VERSION: ${{ github.event.client_payload.to_version }}
          MODE: ${{ github.event.client_payload.mode }}
          ISSUE_NUMBER: ${{ github.event.client_payload.issue_number }}
          ACTOR: ${{ github.event.client_payload.actor }}
          ACTOR_EMAIL: ${{ github.event.client_payload.actor_email }}
          REPO_ACCESS_TOKEN: ${{ secrets.REPO_ACCESS_TOKEN }}
          KUBECONFIG: "./kubeconfig"
          INIT_BIG_REPO: ${{ github.event.client_payload.init_big_repo }}
          NOMS_BIN_FORMAT: ${{ matrix.dolt_fmt }}
          WITH_TPCC: "true"
          TEMPLATE_SCRIPT: ${{ github.event.client_payload.template_script }}

Short, but kinda busy, this workflow authenticates a kubectl client against a K8s cluster where we run our Sysbench benchmarks and supplies the required environment variables to run a script called run-benchmarks.sh. This script uses the values from these variables to write a K8s Job configuration file and then apply it, which deploys the benchmarking Job in our K8s cluster.
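We won't reproduce run-benchmarks.sh here, but conceptually it just fills in a Job template from those environment variables and applies it with kubectl. Here's a rough sketch of that pattern in Go (the real script is bash, and the manifest below is a hypothetical stand-in, not our actual Job spec):

// Sketch only: render a pared-down Job manifest from environment
// variables, then apply it with kubectl. The real template lives in the
// script named by TEMPLATE_SCRIPT.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"text/template"
)

const jobTemplate = `apiVersion: batch/v1
kind: Job
metadata:
  name: sysbench-{{.ToVersion}}
  namespace: performance-benchmarking
spec:
  template:
    spec:
      containers:
        - name: benchmark
          image: performance-benchmarking:latest
          args: ["--from={{.FromVersion}}", "--to={{.ToVersion}}"]
      restartPolicy: Never
`

func main() {
	f, err := os.Create("job.yaml")
	if err != nil {
		panic(err)
	}
	t := template.Must(template.New("job").Parse(jobTemplate))
	if err := t.Execute(f, map[string]string{
		"FromVersion": os.Getenv("FROM_VERSION"),
		"ToVersion":   os.Getenv("TO_VERSION"),
	}); err != nil {
		panic(err)
	}
	f.Close()

	// KUBECONFIG is already set in the step's environment.
	out, err := exec.Command("kubectl", "apply", "-f", "job.yaml").CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}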

At this point you might be wondering why we run benchmarks for Dolt in a K8s cluster instead of just using GitHub Actions and its runners. Well, there are a couple of reasons for this.

One, GitHub-hosted runners have very specific resource limits, at least in the free tier, and for benchmarking our database we don't want to be constrained by them.

Additionally, there's no way for us to know, or control, what other processes or software are running on a GitHub-hosted runner during a benchmarking run, and that could skew the results in unpredictable ways.

And while it's certainly possible to use a self-hosted runner in GitHub Actions to circumvent these two problems, in which case we could benchmark Dolt using only GitHub Actions, we already have easily provisionable hosts available in our K8s cluster, so we opted to use those instead.

In fact, simply applying our K8s benchmarking Job will provision a new benchmarking host using the K8s cluster autoscaler, which is pretty cool.

Anyway, returning to our diagram for a brief moment, we see that after authenticating the kubectl client, the "Deploy K8s Sysbench benchmarking Job" workflow deploys the K8s Job; the process then moves to the K8s context, where the "K8s Sysbench benchmarking Job" runs.

Diagram of Dolt Release Process Flow Highlighting Deploy K8s Sysbench Job Workflow to K8s Sysbench Job

Now technically, this part of the original Dolt release process was more of a post-release step; running the benchmarking Job was not required to create a new Dolt release on GitHub, it just provided our team with a report on the release's latency. Still, it's worth seeing this part of the original release process so that our pgo updates to it will make more sense later.

In the K8s context of the diagram, we can see that the benchmarking Job performs a few steps. First, it builds a Dolt binary from a supplied commit SHA, in this case the SHA at the HEAD of Dolt's main branch.

Next, it runs the Sysbench tests against that compiled Dolt version, then uploads the results of the Sysbench run to an AWS S3 bucket. Finally, it triggers a different GitHub Actions workflow that lives in the Dolt repository called the "Email team" workflow.

To perform all of this benchmarking and uploading and triggering, we've written an internal tool that can be used to benchmark a version of Dolt against a version of MySQL.

This tool uses some library code we maintain in the Dolt repository, but I'll provide some relevant snippets from the internal tool and the library code so you get a sense of how we've implemented these to run our benchmarks.

Our internal benchmarking tool code is essentially the following go function:

func compare(ctx context.Context,
	fromServer,
	toServer runner.ServerType,
	fromVersion,
	toVersion,
	fromProfile,
	toProfile,
	dir,
	doltCommand,
	doltgresCommand,
	mysqlExec,
	mysqlProtocol,
	mysqlSocketPath,
	postgresExec,
	initDbExec,
	nomsBinFormat,
	resultsDir,
	resultsPrefix,
	resultsFilename,
	scriptDir,
	schema,
	outputFormat string,
	defaultRuns int,
	initBigRepo,
	useDoltHubLuaScriptsRepo,
	writeResultsToFile bool,
	queries []string) (string, error) {
	config := benchmark.NewComparisonBenchmarkingConfig(
		fromServer,
		toServer,
		fromVersion,
		toVersion,
		fromProfile,
		toProfile,
		dir,
		doltCommand,
		doltgresCommand,
		mysqlExec,
		mysqlProtocol,
		mysqlSocketPath,
		postgresExec,
		initDbExec,
		nomsBinFormat,
		scriptDir,
		defaultRuns,
		initBigRepo,
		useDoltHubLuaScriptsRepo)

	sr := benchmark.NewSysbenchComparer(config)

	err := sr.Run(ctx)
	if err != nil {
		return "", err
	}

	fromServerConfig, err := config.GetFromServerConfig(ctx)
	if err != nil {
		return "", err
	}

	toServerConfig, err := config.GetToServerConfig(ctx)
	if err != nil {
		return "", err
	}

	resultsDbName := fmt.Sprintf("sysbench-%s", benchmark.ComparisonDbFilename)

	db := benchmark.NewSqlite3ResultsDb(fromServerConfig, toServerConfig, dir, schema, resultsDir, resultsPrefix, resultsFilename, resultsDbName, outputFormat, queries, writeResultsToFile)
	uploadDir, err := db.QueryResults(ctx)
	if err != nil {
		return "", err
	}

	return uploadDir, nil
}

compare is used to compare the Sysbench results of one version of a database to another. You can see from the function's parameters that this tool is not only used for Dolt and MySQL, but also for benchmarking our latest product, DoltgreSQL, against its competitor, PostgreSQL.

The compare function refers to a fromServerConfig, which is the configuration for the "from" database server, and refers to a toServerConfig, which is the configuration of the "to" database server. Semantically, here, this tool will compare the "from" database to the "to" database side-by-side for easy analysis. During the Dolt release process, MySQL will be the "from" server and Dolt will be the "to" server.

You may also notice that we use sqlite3 in this tool, as referenced by benchmark.NewSqlite3ResultsDb, which is a legacy artifact from days before Dolt v1.0.0, but it still has some unique value here.

Under the hood, after the benchmarks run with sr.Run(), we load the results into a sqlite3 database and run some queries against it to get the comparative results for each database server. A benefit of using sqlite3 over Dolt here (Dolt would work just as well) is that sqlite3 can return query output in many formats with simple flags, like --html and --markdown, which saves us from having to code up query result transformation logic.
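For example, something like this (a hypothetical sketch; the real comparison queries and schema live in our internal tool) gets you a paste-ready markdown table without writing any formatting code:

// Hypothetical sketch: let the sqlite3 CLI do the result formatting.
// Swapping --markdown for --html changes the output format.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("sqlite3", "--markdown", "sysbench-comparison.db",
		"SELECT test_name, from_latency_p95, to_latency_p95 FROM results;").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}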

The uploadDir returned from db.QueryResults() contains the results of the comparison queries and a copy of the sqlite3 database to be uploaded to S3. These results will soon be downloaded by the "Email team" workflow, as we'll see shortly.

When it comes to actually running the Sysbench benchmarks, benchmark.NewSysbenchComparer(config) is simply a wrapper struct around the Run function from some benchmarking library code we maintain in the Dolt repository.

func Run(config *Config) error {
	err := config.Validate()
	if err != nil {
		return err
	}

	ctx := context.Background()

	err = sysbenchVersion(ctx)
	if err != nil {
		return err
	}

	cwd, err := os.Getwd()
	if err != nil {
		return err
	}

	for _, serverConfig := range config.Servers {
		var results Results
		var b Benchmarker
		switch serverConfig.Server {
		case Dolt:
			fmt.Println("Running dolt sysbench tests")
			b = NewDoltBenchmarker(cwd, config, serverConfig)
		case Doltgres:
			fmt.Println("Running doltgres sysbench tests")
			b = NewDoltgresBenchmarker(cwd, config, serverConfig)
		case MySql:
			fmt.Println("Running mysql sysbench tests")
			b = NewMysqlBenchmarker(cwd, config, serverConfig)
		case Postgres:
			fmt.Println("Running postgres sysbench tests")
			b = NewPostgresBenchmarker(cwd, config, serverConfig)
		default:
			panic(fmt.Sprintf("unexpected server type: %s", serverConfig.Server))
		}

		results, err = b.Benchmark(ctx)
		if err != nil {
			return err
		}

		fmt.Printf("Successfuly finished %s\n", serverConfig.Server)

		err = WriteResults(serverConfig, results)
		if err != nil {
			return err
		}

		fmt.Printf("Successfuly wrote results for %s\n", serverConfig.Server)
	}
	return nil
}

This function creates a Benchmarker based on the type of server it sees, then calls Benchmark(), which runs the Sysbench tests against that server. Here's what the Dolt Benchmarker's Benchmark() implementation looks like:

func (b *doltBenchmarkerImpl) Benchmark(ctx context.Context) (Results, error) {
	err := b.checkInstallation(ctx)
	if err != nil {
		return nil, err
	}

	err = b.updateGlobalConfig(ctx)
	if err != nil {
		return nil, err
	}

	testRepo, err := b.initDoltRepo(ctx)
	if err != nil {
		return nil, err
	}

	serverParams, err := b.serverConfig.GetServerArgs()
	if err != nil {
		return nil, err
	}

	server := NewServer(ctx, testRepo, b.serverConfig, syscall.SIGTERM, serverParams)
	err = server.Start(ctx)
	if err != nil {
		return nil, err
	}

	tests, err := GetTests(b.config, b.serverConfig, nil)
	if err != nil {
		return nil, err
	}

	results := make(Results, 0)
	for i := 0; i < b.config.Runs; i++ {
		for _, test := range tests {
			tester := NewSysbenchTester(b.config, b.serverConfig, test, stampFunc)
			r, err := tester.Test(ctx)
			if err != nil {
				server.Stop(ctx)
				return nil, err
			}
			results = append(results, r)
		}
	}

	err = server.Stop(ctx)
	if err != nil {
		return nil, err
	}

	return results, os.RemoveAll(testRepo)
}

During the Benchmark() call, this implementation will check the Dolt installation, update some global Dolt configuration, get the arguments used to start the Dolt SQL server, start the server, acquire the Sysbench tests it's going to run, then run those tests by calling tester.Test().

When it's done, it returns the results and cleans up what it wrote to disk.

And, as we've seen in the internal tool's compare function, these results are loaded into sqlite3 and uploaded to S3 so they can be emailed to the DoltHub team. But we're still missing one step: triggering the "Email team" workflow with a repository_dispatch event after the internal benchmarking tool finishes uploading the results.

So the final piece of our internal tool includes:

err := d.DispatchEmailReportEvent(ctx, *toVersion, *nomsBinFormat, *bucket, key)
if err != nil {
	log.Fatal(err)
}

The DispatchEmailReportEvent() method is defined on a Dispatcher interface we wrote. It simply makes an HTTP request to GitHub's REST API, which emits the repository_dispatch event and triggers the "Email team" workflow to run. So let's look at that next.
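For reference, here's a minimal sketch of what such a dispatch looks like. This isn't our exact Dispatcher implementation, but the endpoint, headers, and event-type/client-payload shape match what the "Email team" workflow below expects:

// Minimal sketch of emitting a repository_dispatch event via GitHub's
// REST API; not our exact Dispatcher implementation.
package dispatcher

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

func DispatchEmailReportEvent(ctx context.Context, token, version, nomsBinFormat, bucket, key string) error {
	payload := map[string]interface{}{
		// Must match the `types` the receiving workflow listens for.
		"event_type": "email-report",
		"client_payload": map[string]string{
			"version":         version,
			"noms_bin_format": nomsBinFormat,
			"bucket":          bucket,
			"key":             key,
		},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		return err
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		"https://api.github.com/repos/dolthub/dolt/dispatches", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Accept", "application/vnd.github+json")
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// GitHub responds 204 No Content on a successful dispatch.
	if resp.StatusCode != http.StatusNoContent {
		return fmt.Errorf("repository_dispatch failed: %s", resp.Status)
	}
	return nil
}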

Diagram of Dolt Release Process Flow Highlighting K8s Sysbench Job to Email Team Workflow

Like the "Deploy K8s Sysbench benchmarking Job" workflow the "Email team" workflow is used by multiple processes besides the just Dolt release process, so that's why we trigger it with repository_dispatch events. The workflow file is as follows:

name: Email Team Members

on:
  repository_dispatch:
    types: [ email-report ]

jobs:
  email-team:
    runs-on: ubuntu-22.04
    name: Email Team Members
    steps:
      - uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2.2.0
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Get Results
        id: get-results
        run: aws s3api get-object --bucket="$BUCKET" --key="$KEY" results.log
        env:
          KEY: ${{ github.event.client_payload.key }}
          BUCKET: ${{ github.event.client_payload.bucket }}
      - name: Get Addresses
        id: get-addresses
        run: |
          addresses="$TEAM"
          if [ ! -z "$RECIPIENT" ]; then
            addresses="[\"$RECIPIENT\"]"
          fi
          echo "addresses=$addresses" >> $GITHUB_OUTPUT
        env:
          RECIPIENT: ${{ github.event.client_payload.email_recipient }}
          TEAM: '["${{ secrets.PERF_REPORTS_EMAIL_ADDRESS }}"]'
      - name: Send Email
        uses: ./.github/actions/ses-email-action
        with:
          template: ${{ github.event.client_payload.template }}
          region: us-west-2
          version: ${{ github.event.client_payload.version }}
          format: ${{ github.event.client_payload.noms_bin_format }}
          toAddresses: ${{ steps.get-addresses.outputs.addresses }}
          dataFile: ${{ format('{0}/results.log', github.workspace) }}

As shown in the diagram, the summary of this workflow is that it downloads the Sysbench results for the Dolt release and then sends them in an email to our team; nothing crazy.

And that's the Dolt release process. Or that was the Dolt release process. Now I'll go over how we updated this process to start building pgo binaries of Dolt on release.

PGO releases with GitHub Actions

For those unfamiliar with pgo builds, they're produced by supplying the -pgo flag, with the path to a Golang profile, to the go build command. That part is actually very simple. But before that, you need to create the profile you want to use for your optimized build, and this required us to update some of our benchmark library code, and our internal tool code, so that they could both generate a profile and accept a profile as input. Let me explain in more detail.

In our benchmarking library code, we use another Dolt utility called dolt_builder to actually build Dolt binaries from source. To use this tool, you simply provide the commit SHA or tag you want to build Dolt from, and it will build it for you. So we use this tool in a number of places for easily building multiple versions of Dolt simultaneously.

So the first thing we did was update this tool to accept a Golang profile it can use to build Dolt:

// goBuild builds the dolt binary and returns the path to the binary
func goBuild(ctx context.Context, source, dest, profilePath string) (string, error) {
	goDir := filepath.Join(source, "go")
	doltFileName := "dolt"
	if runtime.GOOS == "windows" {
		doltFileName = "dolt.exe"
	}

	args := make([]string, 0)
	args = append(args, "build")

	if profilePath != "" {
		args = append(args, fmt.Sprintf("-pgo=%s", profilePath))
	}

	toBuild := filepath.Join(dest, doltFileName)
	args = append(args, "-o", toBuild, filepath.Join(goDir, "cmd", "dolt"))

	build := ExecCommand(ctx, "go", args...)
	build.Dir = goDir
	err := build.Run()
	if err != nil {
		return "", err
	}
	return toBuild, nil
}
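With that change, callers that have a profile just pass its path through, and everyone else passes an empty string for a plain build. A hypothetical call looks like this (the paths are made up for illustration):

// Hypothetical invocation: build the Dolt source checked out in
// /workspace/dolt into /workspace/out, using a previously captured profile.
binPath, err := goBuild(ctx, "/workspace/dolt", "/workspace/out", "/workspace/dolt-cpu-profile.pprof")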

The next thing we did was update the benchmark library code to run in a "profiling" mode. In the default mode, as described above, this code calls Benchmark() and returns the results. In the new "profiling" mode, the code calls Profile() on a Profiler interface:

...
		case Dolt:
			// handle a profiling run
			sc, ok := serverConfig.(ProfilingServerConfig)
			if ok {
				if string(sc.GetServerProfile()) != "" {
					fmt.Println("Profiling dolt while running sysbench tests")
					p := NewDoltProfiler(cwd, config, sc)
					return p.Profile(ctx)
				}
			}
...

Profile() works similarly to Benchmark(), but captures a Golang CPU profile while the Sysbench benchmarks run. This lets us easily generate profiles for Dolt that we can use in our new release process.
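We won't walk through our Profiler's internals here, but the Go mechanism underneath any CPU profile that go build -pgo can consume is runtime/pprof. Here's a minimal, illustrative sketch of that pattern (not Dolt's exact wiring):

// Illustrative only: wrap a workload in StartCPUProfile/StopCPUProfile
// to produce a pprof file that `go build -pgo` accepts.
package main

import (
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("dolt-cpu-profile.pprof")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		panic(err)
	}
	defer pprof.StopCPUProfile()

	runWorkload() // stand-in for serving the Sysbench tests
}

func runWorkload() {
	// Representative work goes here; the profiler samples it while it runs.
}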

We also updated this library code to accept a profile as input. That way we can supply it a profile, which it in turn supplies to dolt_builder to create a pgo binary; it then runs Sysbench against that binary and outputs those results.

To clarify, we updated this library code so that we can run it in one mode to generate a Golang profile, then run it in the default mode to get our normal benchmarking results; the default mode will also accept a Golang profile as input and use it to build Dolt with go build -pgo. Hopefully that makes sense to you, since it's a bit tricky to describe 🤠.

Moving on, we then needed to update our internal tool that uses all this library code to also have a "profiling" mode and accept Golang profiles as input. Our plan for the new release process was to run the internal tool once, in profiling mode, to create a Golang profile. Then, run the internal tool again in default mode, but supply the Golang profile back to it, which would produce the benchmarking results against a pgo built Dolt.

So, like the compare function, we added a profile function to the internal tool that generates a Golang CPU profile of a Dolt version.

func profile(ctx context.Context, dir, profileDir, resultsDir, resultsPrefix, version, profile, doltCommand, scriptsDir string, useDoltHubLuaScriptsRepo bool) (string, error) {
	config := benchmark.NewProfilingConfig(
		dir,
		profileDir,
		version,
		profile,
		doltCommand,
		scriptsDir,
		useDoltHubLuaScriptsRepo)
	toUpload := filepath.Join(resultsDir, resultsPrefix)
	sr := benchmark.NewSysbenchProfiler(config, toUpload, profileDir)
	return toUpload, sr.Run(ctx)
}

This function returns its toUpload directory like compare does, but this time it contains the profile to be uploaded to S3.

After these changes to the code, we were ready to update our GitHub Actions workflows to start creating pgo releases of Dolt. Here's a diagram showing the new Dolt release process with GitHub Actions.

Diagram of PGO Dolt Release Process Flow

As you can see from the new release workflow diagram, we've added some new GitHub Actions workflows, but they're similar to the original ones. Let's look more closely at them.

For the new Dolt release process, the first workflow we run, called "Release Dolt (Profile)", does not actually create a GitHub release or build any Dolt binaries.

Instead, its only function is to trigger a second workflow called "Deploy K8s Sysbench Profiling Job".

name: Release Dolt (Profile)

on:
  workflow_dispatch:
    inputs:
      version:
        description: 'SemVer format release tag, i.e. 0.24.5'
        required: true

jobs:
  format-version:
    runs-on: ubuntu-22.04
    outputs:
      version: ${{ steps.format_version.outputs.version }}
    steps:
      - name: Format Input
        id: format_version
        run: |
          version="${{ github.event.inputs.version }}"
          if [[ $version == v* ]];
          then
            version="${version:1}"
          fi
          echo "version=$version" >> $GITHUB_OUTPUT

  profile-benchmark-dolt:
    runs-on: ubuntu-22.04
    needs: format-version
    name: Trigger Benchmark Profile K8s Workflows
    steps:
      - uses: actions/checkout@v4
        with:
          ref: main
      - name: Get sha
        id: get_sha
        run: |
          sha=$(git rev-parse --short HEAD)
          echo "sha=$sha" >> $GITHUB_OUTPUT
      - uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.REPO_ACCESS_TOKEN }}
          event-type: profile-dolt
          client-payload: '{"from_version": "${{ steps.get_sha.outputs.sha }}", "future_version": "${{ needs.format-version.outputs.version }}", "mode": "release", "actor": "${{ github.actor }}", "actor_email": "dustin@dolthub.com", "template_script": "./.github/scripts/performance-benchmarking/get-dolt-profile-job-json.sh"}'

The "Deploy K8s Sysbench Profiling Job" works almost identically to the "Deploy K8s Sysbench Benchmarking Job", except it deploys a benchmarking Job running in "profiling" mode to the K8s cluster, so that we create a Golang profile using the HEAD of Dolt main.

name: Profile Dolt while Benchmarking

on:
  repository_dispatch:
    types: [ profile-dolt ]

jobs:
  performance:
    runs-on: ubuntu-22.04
    name: Profile Dolt while Benchmarking
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - uses: azure/setup-kubectl@v4
        with:
          version: 'v1.23.6'
      - name: Install aws-iam-authenticator
        run: |
          curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.8/2020-09-18/bin/linux/amd64/aws-iam-authenticator && \
          chmod +x ./aws-iam-authenticator && \
          sudo cp ./aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
          aws-iam-authenticator version
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Create and Auth kubeconfig
        run: |
          echo "$CONFIG" > kubeconfig
          KUBECONFIG=kubeconfig kubectl config set-credentials github-actions-dolt --exec-api-version=client.authentication.k8s.io/v1alpha1 --exec-command=aws-iam-authenticator --exec-arg=token --exec-arg=-i --exec-arg=eks-cluster-1
          KUBECONFIG=kubeconfig kubectl config set-context github-actions-dolt-context --cluster=eks-cluster-1 --user=github-actions-dolt --namespace=performance-benchmarking
          KUBECONFIG=kubeconfig kubectl config use-context github-actions-dolt-context
        env:
          CONFIG: ${{ secrets.CORP_KUBECONFIG }}
      - name: Create Profile Benchmarking K8s Job
        run: ./.github/scripts/performance-benchmarking/run-benchmarks.sh
        env:
          PROFILE: "true"
          FUTURE_VERSION: ${{ github.event.client_payload.future_version }}
          FROM_VERSION: ${{ github.event.client_payload.from_version }}
          MODE: ${{ github.event.client_payload.mode }}
          ACTOR: ${{ github.event.client_payload.actor }}
          ACTOR_EMAIL: ${{ github.event.client_payload.actor_email }}
          REPO_ACCESS_TOKEN: ${{ secrets.REPO_ACCESS_TOKEN }}
          KUBECONFIG: "./kubeconfig"
          INIT_BIG_REPO: ${{ github.event.client_payload.init_big_repo }}
          NOMS_BIN_FORMAT: "__DOLT__"
          TEMPLATE_SCRIPT: ${{ github.event.client_payload.template_script }}

Once the benchmarking K8s Job is running in "profiling" mode, we can see the steps it performs in our updated diagram. We also see that the output of this Job is a fresh Golang profile, uploaded to S3, that's ready to be used by the remaining steps of our process to create pgo builds.

At the end of the profiling K8s Job, after uploading the profile, it triggers the "Release Dolt" workflow. This workflow works basically the same as the original "Release Dolt" workflow, except that its first step is downloading the Golang profile that the profiling Job uploaded.

...
  create-pgo-release:
    needs: format-version
    runs-on: ubuntu-22.04
    name: Release PGO Dolt
    outputs:
      release_id: ${{ steps.create_release.outputs.id }}
    steps:
      - uses: actions/checkout@v4
        with:
          ref: main
      - name: Set up Go 1.x
        uses: actions/setup-go@v5
        with:
          go-version-file: go/go.mod
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Get Results
        id: get-results
        run: aws s3api get-object --bucket="$BUCKET" --key="$KEY" dolt-cpu-profile.pprof
        env:
          KEY: ${{ github.event.inputs.profile_key || github.event.client_payload.profile_key }}
          BUCKET: ${{ github.event.inputs.profile_bucket || github.event.client_payload.bucket }}
...

It then supplies the downloaded profile, here called dolt-cpu-profile.pprof, to the buildbinaries.sh script, which runs go build -pgo=./dolt-cpu-profile.pprof to compile the new Dolt binaries. Then, like the original version of the workflow, it creates a GitHub release and uploads these binaries as release assets.

Before completing, one of the final jobs in this workflow kicks off another benchmarking K8s Job, only this time supplying the job with the S3 key to the Golang profile it used to build the Dolt binaries.

...
  trigger-performance-benchmark-email:
    needs: [format-version, create-pgo-release]
    runs-on: ubuntu-22.04
    steps:
      - name: Trigger Performance Benchmarks
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.REPO_ACCESS_TOKEN }}
          event-type: release-dolt
          client-payload: '{"version": "${{ needs.format-version.outputs.version }}", "actor": "${{ github.actor }}", "profile_key": "${{ github.event.inputs.profile_key || github.event.client_payload.profile_key }}"}'

This deploys a benchmarking Job to our K8s cluster once again, but now the Job will download the Golang profile from S3 and use it to construct pgo binaries of Dolt to use for benchmarking and producing results.

And we can see from the diagram that, in the K8s context, the final step of this second benchmarking Job kicks off the "Email team" workflow, so that our team gets the benchmarking results for the now pgo'd Dolt.

And so we've done it! We are now releasing pgo builds of Dolt.

Conclusion

As you can see, there's a bit of complexity involved in updating a release process to produce pgo binaries; at least, that was the case for us. But the effort was definitely worth the performance gains we've seen.

I hope you found this helpful for your own endeavors, and we encourage you to try updating your releases as well. If you do, we'd love to hear about it. Come by and share your experience on our Discord.
