2 Days at GopherCon UK 2019

Teiva Harsanyi
Utility Warehouse Technology
Aug 23, 2019 · 15 min read


This post is a summary of my 2-day experience at GopherCon UK 2019. I did not attend the first day (workshops only).

Disclaimer: In this post, I am expressing my opinions about some of the talks. Under no circumstances am I making a value judgment on the speakers.

Finding Dependable Go Packages

The conference started with a keynote from @JQiu25, who works on the Go team at Google.

She described the 3-step process when it comes to working with an external open-source library:

  • Discovery: Searching for and finding a solution to a given problem (via Google, Twitter, or whatever).
  • Evaluation: Evaluating a solution.
  • Maintenance: Contributing back to the project through PRs, issues, etc.

Regarding the evaluation, she described the steps that we (should) all take:

  • Checking the license (I have to admit I do not always respect this one :).
  • Checking the project's popularity via GitHub stars, forks, etc., but more importantly the number of projects depending on it.
  • Taking a look at the overall project quality by browsing the documentation, the test coverage, and the code quality (with tooling or websites like goreportcard, for example).
  • Verifying the upkeep of the project by looking at the history of the commits, the number of issues, the API stability, etc.
  • Checking the transitive dependencies brought in by the library itself.

The talk was cool, but the big announcement was that the Go team is working on a solution to ease the process of choosing a library: they intend to create a portal aggregating all of this information.

You can find more information about this future portal here; it should be quite a useful tool. Looking forward to it.

Implementing a Search Index Using Machine Learning in Go

Then, I went to a talk about machine learning applied to search engines given by @dahernan.

We saw that the technique used by standard search engines like Elasticsearch, RediSearch, etc. is based on tf-idf (Term Frequency — Inverse Document Frequency). This technique is great for matching keywords but not for capturing the semantics of a sentence.

The example given by the speaker was searching for does go have generics on Stack Overflow. The first result is the For-each over an array in JavaScript? page, because it is the one that best matches the keywords in the query. Obviously, this is not what a user would expect.

To be honest, I am not an ML expert (at all). Despite having understood the goals, I did not catch the fundamental concepts brought up by the speaker during the talk. So, instead of summarizing something already unclear to me, I will rather give you some references that you may find interesting.

Quicksilver: How Cloudflare Controls its Network Across the Planet Using Go

The next talk was given by @_jimenezrick from Cloudflare about their internal project called Quicksilver.

In a nutshell, the issue they faced was how to efficiently share a configuration across thousands of servers. What efficiently meant in their context was mainly:

  • No inconsistencies.
  • ACID guarantees.
  • Fast replication (again across thousands of replicas).
  • Great read performance.

As a data store, they chose LMDB, a memory-mapped key/value store implemented in C. Under the hood, it is based on the B+ tree data structure, an alternative to the widely used B-tree.

According to the speaker, LMDB has good write performance and excellent read performance. It is also optimized for small reads (so perfect in the context of configuration data) and relies on various optimizations like the page cache, copy-on-write, etc.

Yet, LMDB is not distributed. So, it was up to the Cloudflare engineers to implement the replication.

First things first, the integration with LMDB was done with cgo. They claimed to be happy with the small footprint while calling C functions.

Coming back to our topic, he claimed to be able to replicate a config change across thousands of servers in less than 1 second (which is very impressive).

Each change was a log entry composed of a sequence ID, a payload, and a hash (to guarantee data integrity). Each entry was then replicated using a peer-to-peer approach (if I understood correctly).

Each server was then hosting a Quicksilver daemon that was their Go wrapper on top of LMDB.

He also highlighted the importance of observability to ease troubleshooting.

I enjoyed this talk. I would have liked to get more information about whether they had to handle potential conflicts, for example, but in only 30 minutes that was simply impossible. Great talk anyway.

Go as a Scripting Language in Linux

Another talk from a Cloudflare engineer, @secumod, on how to use Go as a scripting language (because why not?).

As you may have noticed, Go does not support shebangs (the first line of a script, like #!/bin/sh). Adding this support has been rejected a couple of times by the Go team.

Hence, this code does not even compile:
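For illustration (this is my reconstruction, not necessarily the speaker's exact snippet), a Go file starting with a shebang line is rejected outright, since # does not start a comment in Go:

```go
#!/usr/bin/env go run

package main

import "fmt"

func main() {
	fmt.Println("Hello, World!") // never reached: the first line is a syntax error
}
```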

There is an alternative solution, though:
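A known workaround (the one described in the Cloudflare post, and presumably the one shown in the talk) abuses the fact that // starts a comment in Go but resolves to a path in shell:

```go
//usr/bin/env go run "$0" "$@"; exit "$?"

package main

import "fmt"

// greeting is the message the script prints.
func greeting() string { return "Hello, World!" }

func main() {
	fmt.Println(greeting())
}
```

When the kernel fails to execute the file directly (no valid shebang), the shell falls back to interpreting it as a shell script: the first line runs go run on the file itself, and exit stops the shell before it reads any Go code. The Go compiler, meanwhile, sees that same line as a plain comment.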

If you execute this Go file with ./shebang.go, it will print Hello, World!. Yet, if the script exits with a specific code, it will not be caught by the OS (as the standard go run does not return it).

According to the speaker, another approach could be to replace go run with erning/gorun for example:

Here, adding the shebang line referencing erning/gorun makes shebang.go executable (if erning/gorun is installed, obviously). But again, the code does not compile, so using go build, for example, will not work.

The final approach introduced by the speaker was a kernel feature called binfmt_misc. In a nutshell, this is a way to register a custom interpreter for a given file extension.

With this mechanism, we can directly configure the OS to use the standard go run without having to add any custom shebang line.
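For illustration, a binfmt_misc registration might look like this (in the spirit of the Cloudflare approach; the interpreter path is an assumption and could be gorun or any small wrapper around go run):

```shell
# Register an interpreter for files matching the .go extension.
# E = match by extension, OC = honour the file's own credentials.
echo ':golang:E::go::/usr/local/bin/gorun:OC' | sudo tee /proc/sys/fs/binfmt_misc/register

# From now on, any executable .go file runs through that interpreter,
# with no shebang line needed in the file itself:
./hello.go
```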

This was a very cool talk, and if you want more information, check out the official Cloudflare post:

Advanced Testing Techniques

Yet another interesting talk, this time about testing techniques by @Caust1c.

During the talk, the speaker mentioned a concrete example I found interesting: How do we test a function that depends on the current time? For example, this duration function:

The obvious problem with this implementation is the fact that it depends on an external context (the current time that will obviously vary).

Personally, what I usually do in this context is add a now time.Time parameter as an input to the function to make it pure. So instead of calling time.Now().Add(d), I am using now.Add(d). Yet, the impurity is now propagated to the caller, as it must pass a value for this parameter.

The alternative proposed by the speaker was to have a global variable like this:

Here now is a func() time.Time.

Therefore, if we want to test it, we could do that:

First, we need to make sure the now global variable is modified only within the scope of the test (by saving its original value and restoring it with a defer).

Then, we assign it to whatever value we need to test the function's behavior.

This trick is still not perfect, though. For example, it prevents our tests from being executed in parallel. I found the technique very interesting nonetheless.

Another topic brought by the speaker was interfaces:

As a library author, don’t export interfaces without a strong case.

As a program author, create interfaces for external dependencies.

Indeed, we may expect libraries to expose interfaces so that we can mock them. According to the speaker, this should not be automatic.

He took the example of Sarama, which exposes more than 20 interfaces, leading to confusion for users. Meanwhile, as a library author, adding a method to an existing interface breaks backward compatibility, as each implementation (including the mocks) must also be changed.

Instead, his recommendation was to create interfaces for our external dependencies. For example, creating an interface to encompass the integration with a DB.

First, this adds a level of abstraction allowing us to not change our code depending on the implementation (even though finding the right level of abstraction can be quite a challenge sometimes in my opinion).

Then, if we wrap an external dependency within an interface, we can mock it. In this case, we don’t depend on an external interface and we are in control of what must be mocked.
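A sketch of the idea (the names are hypothetical, not from the talk): the program defines the interface it needs, the real implementation wraps the DB driver, and tests provide a trivial in-memory fake.

```go
package main

import (
	"errors"
	"fmt"
)

// UserStore is the interface our program owns: it describes only what
// we need from the database, not what the driver happens to offer.
type UserStore interface {
	GetUser(id string) (string, error)
}

// fakeStore is an in-memory implementation used in tests.
type fakeStore map[string]string

func (f fakeStore) GetUser(id string) (string, error) {
	name, ok := f[id]
	if !ok {
		return "", errors.New("user not found")
	}
	return name, nil
}

// greet depends only on the interface, so it can be tested without a DB.
func greet(s UserStore, id string) (string, error) {
	name, err := s.GetUser(id)
	if err != nil {
		return "", err
	}
	return "Hello, " + name, nil
}

func main() {
	s := fakeStore{"42": "Teiva"}
	msg, _ := greet(s, "42")
	fmt.Println(msg) // Hello, Teiva
}
```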

Moreover, he proposed a methodology to improve our tests:

  • Write a single implementation
  • Test it (doing some integration tests with a DB for example)
  • Abstract it using interfaces
  • Write other implementations of this interface (if this is needed)
  • Refactor the tests

If we need to test two interface implementations, we can write a common function taking the interface as input and implementing the testing logic. This function is then called from both tests.

Last but not least, the speaker focused on the httptest package and how useful it is, especially when we have to expose an HTTP endpoint.

I don’t know whether this was the best talk I attended, and I would have been glad if it could have lasted longer than 30 minutes. Yet, it was the one that made me think the most once it finished (which is obviously a great thing).

A follow-up by the speaker:

Going Infinite: Handling 1 Million Websocket Connections in Go

The next talk, by @eran_yanay, was on how to handle a million concurrent WebSocket connections.

He started with a naive implementation using the standard library, with http.Serve triggering one goroutine per request. He showed (using pprof for profiling) that this approach does not scale well, as it would require 20GB of memory to handle 1M connections.

The first memory optimization was on the number of created goroutines: he proposed to use async I/O with epoll instead.

The second optimization was to work on buffer allocations. In his demo, he switched to the gobwas/ws library, which allows reusing I/O buffers between connections (among other cool features like zero-copy and a low-level API for implementing our own packet-handling logic).

Also, he discussed the OS limits that we would face if we had to handle 1M connections:

  • The too many open files problem: as every resource in Linux is a file descriptor (hence a connection is also a file descriptor), this limit needs to be raised using ulimit.
  • A lack of capacity in the Linux conntrack table that also needs to be tuned.
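For illustration, both limits can be raised along these lines (the values are arbitrary examples, not the speaker's exact settings):

```shell
# Raise the per-process open file descriptor limit for this shell session.
ulimit -n 1048576

# Raise the connection tracking table capacity (requires root).
sysctl -w net.netfilter.nf_conntrack_max=1048576
```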

After all these optimizations, he reduced the memory footprint of handling 1M connections from 20GB to 600MB (a 97% reduction).

This talk was really cool, and you can find his GitHub repository here:

Productionise any Application in Kubernetes

Then, I attended a talk given by @OpsDavy about some real-world Kubernetes patterns:

  • The sidecar pattern: Having a helper container beside our application container (the pattern used by service meshes).
  • The adapter pattern: Normalizing application outputs (for example, taking the logs produced and integrating them into whatever solution).
  • The ambassador pattern: Acting as a proxy to the external world for accessing databases etc.

I would say I have not been particularly captivated by this talk. Despite being very well presented, that was simply not the kind of talk I was expecting at GopherCon. It was too general and not really related to the Go programming language itself.

Impossible Go!

The last talk of the first day was done by @gautamrege.

It was a (surprising) mix of Go snippets, where we had to guess the expected outputs for example, and entrepreneurship lessons.

It’s really difficult to write about this talk, as I will not list all the examples here. Most of them were cool but, in my opinion, not that important to know in real-world situations. Except maybe for the functional options pattern, which we can find in libraries like the gRPC client or RxGo.

Yet, it was dynamic and entertaining. I would say the perfect combination that you would expect for the last talk of a long day :)

Building a net.Conn Type From the Ground Up

I started my second day with a talk from @mdlayher on how to build a custom net.Conn.

The use case was to implement a specific socket type called vsock that is used for communication between a hypervisor and its virtual machines.

We started with an introduction to net.Conn and net.Listener, with some common examples of a TCP client and a TCP server. As these two are interfaces, we can decide to implement them for our use case.

We dug into the actual Go vsock implementation by seeing the usages of the unix package for standard calls like Connect, Bind, Listen, etc., but also how to create a non-blocking file descriptor using SetNonblock. Moreover, we saw an example of why epoll was (again) a nice alternative to standard blocking I/O and how to use it.

What I really liked in this talk was understanding how flexible it is to base a solution on net.Conn and net.Listener.

Indeed, once the implementation was finished, we saw how easy it was to make vsock available over HTTP or gRPC, for example. As these libraries accept net.Conn and net.Listener as inputs, it did not require much effort.

This reminds me of io.Reader and how a powerful abstraction can lead to great flexibility.

Optimizing Go Code Without a Blindfold

The next talk by @mvdan_ was maybe the one I was expecting the most (being in love with everything related to optimizations).

It started with the usual recommendations to check first whether an application is slow before trying to optimize it (premature optimization is the root of all evil, isn’t it?).

Then, the speaker introduced two tools:

  • benchcmp: Used to compare changes between different benchmark executions.
  • benchstat: Used to measure the variance between multiple benchmark executions.
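Both tools consume the plain-text output of go test -bench. A benchmark function (a generic example, not from the talk) looks like this:

```go
package main

import (
	"strings"
	"testing"
)

// BenchmarkJoin measures strings.Join; its output lines are what
// benchcmp and benchstat consume and compare across runs.
func BenchmarkJoin(b *testing.B) {
	parts := []string{"go", "pher", "con"}
	for i := 0; i < b.N; i++ {
		_ = strings.Join(parts, "")
	}
}
```

A typical workflow: run `go test -bench=. -count=10 > old.txt`, apply the change, repeat into `new.txt`, then run `benchstat old.txt new.txt` to see whether the difference is statistically meaningful.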

He mentioned that when we execute a benchmark, the variance should stay close to 1%, but not more.

One obvious action to reduce the variance when running a local benchmark is to close applications like Slack, our music player, etc. The second action he proposed (which works only on Linux, unfortunately) is to use a tool called perflock. It allows running a benchmark at a given percentage of the total CPU speed. This is a way to avoid the loss of performance that we may face when we run a series of benchmarks at 100% of the CPU.

Then, he discussed a couple of interesting points:

  • The -gcflags='-m -m' option to understand in detail the optimizations done by the compiler (inlining, escape analysis, etc.).
  • Bounds check optimizations (which is also described by William Kennedy here).
  • Destroying a large map should be done by deleting all the keys rather than simply setting the map reference to nil.
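The last point can be illustrated as follows (my example): this exact delete loop is recognized by the compiler and turned into a single internal map-clear operation, and the map's allocated buckets are kept for reuse, whereas setting the reference to nil leaves the whole map to the garbage collector.

```go
package main

import "fmt"

func main() {
	m := make(map[int]int, 1000)
	for i := 0; i < 1000; i++ {
		m[i] = i
	}

	// Clearing in place: the compiler optimizes this idiom into one
	// runtime map-clear call, and the buckets stay allocated for the
	// next fill instead of being reallocated from scratch.
	for k := range m {
		delete(m, k)
	}

	fmt.Println(len(m)) // 0
}
```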

Then, he did a live coding that went… really bad :(

I’m not going to judge it, as it’s already difficult to be a public speaker, so attempting live coding on top of that is brave (and it can also sometimes be interesting to see a live debugging process). Yet, to be honest, I did not understand the context of the live coding at all, what we wanted to solve, etc.

So, that was not a very nice ending in my opinion but overall, the talk was still really cool and I enjoyed it.

Lock-free Observations for Prometheus Histograms

The next talk was done by @beorn7, a famous Go speaker working at Prometheus.

The use case was how to handle histograms in the Prometheus Go client library. In a nutshell, each observation requires incrementing 3 different counters, while the observer has to get a consistent view of these counters.

He needed something with better throughput than traditional Go channels, so his first idea was to use sync.Mutex or sync.RWMutex on both sides (read and write). Yet, it did not allow him to achieve the performance he was expecting. Regarding sync.RWMutex, he said it should be used when we have many reads but few writes; his use case was simply the opposite.

He kept the sync.Mutex on the read side but tried to write a lock-free implementation on the write (observation) side.

The naive approach was to increment each of these counters independently. Yet, as it was not possible to increment them atomically as a group, this went wrong. One interesting point he mentioned: the data race detector does not work with the atomic package, so his recommendation was to be especially careful in this situation.

He ended up with a (brilliant) solution that you can find here. It is a mixed approach of atomic calls and bit vectors to handle multiple changes atomically. He also explained why he had to use runtime.Gosched within a loop to make sure it would not generate too much CPU load.

You can check out his slides here. It was such a brilliant talk, and I will look closely at the solution later on, as there are many things to learn from it.

From Chaos to Domain-Driven Design

Then, I attended a talk done by @joanjan14 on how to use DDD in the context of a Go project.

He started with an example of a flat Go project with all the files in the same package, and explained the problems with this approach, like tight coupling, difficult testing, lack of readability, etc.

He described how DDD could be a solution to structure an application (by keeping the domain and the application logic apart, for example). He also gave an introduction to the basic concepts of DDD and their impacts on the development side (how to handle entities vs. value objects, for example).

I would say that, DDD being very dear to my heart, I was expecting far more details. But again, this is just my opinion; overall, it was a nice introduction and a nice talk.

Fun With Pointers

Then, a talk by @danicat83 on Go pointers.

She started with a nice introduction on why some people are afraid of pointers. She also explained why pointers in Go are not that complex compared to other languages (cleaner syntax, nil default value, etc.).

Then, we saw the benefits of using the stack vs. the heap, and also a bunch of examples to understand what a pointer really is.

In summary, we should only use pointers in one of these scenarios:

  • Passing data by reference (because a structure must be shared for example).
  • Minimizing data copies, which may or may not have a positive performance impact (I would recommend this presentation in addition).
  • Being able to make a difference between zero value and missing value.
  • Using low-level interfaces.
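The third scenario can be shown with a small example (mine, not the speaker's): with a pointer field, nil distinguishes "not provided" from "explicitly zero".

```go
package main

import "fmt"

// Settings uses a pointer so that "no timeout provided" (nil) is
// distinguishable from "timeout explicitly set to 0".
type Settings struct {
	Timeout *int
}

func describe(s Settings) string {
	if s.Timeout == nil {
		return "timeout not set, using default"
	}
	return fmt.Sprintf("timeout explicitly set to %d", *s.Timeout)
}

func main() {
	zero := 0
	fmt.Println(describe(Settings{}))               // timeout not set, using default
	fmt.Println(describe(Settings{Timeout: &zero})) // timeout explicitly set to 0
}
```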

The talk was cool. I did not learn many things personally, but if I had attended this talk a couple of years ago, it would have prevented me from making some common mistakes.

Tackling Contention: The Monsters Inside the ‘sync.Locker’

Last but not least, the talk I expected the most (I know I already said that) by @empijei.

First, he defined what contention means:

The competition between different processes to acquire a resource.

He also presented pprof and the -trace testing option in detail, and how to use them to detect bottlenecks.

According to the speaker, reducing the contention can be tackled from two different angles:

  • Changing the primitives
  • Or changing the algorithm

As a nice example, he presented the sharding pattern applied to the standard Go concurrency primitives by using multiple channels.
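A sketch of sharding applied to a shared counter (my illustration, using mutex shards rather than the speaker's channel-based example): instead of one lock every goroutine fights over, the state is split into shards, each guarded independently.

```go
package main

import (
	"fmt"
	"sync"
)

const shardCount = 8

// shardedCounter splits its state into independently locked shards, so
// goroutines hashing to different shards never contend with each other.
type shardedCounter struct {
	shards [shardCount]struct {
		mu sync.Mutex
		n  uint64
	}
}

func (c *shardedCounter) Inc(key int) {
	s := &c.shards[key%shardCount]
	s.mu.Lock()
	s.n++
	s.mu.Unlock()
}

// Total locks each shard in turn to read a consistent per-shard value.
func (c *shardedCounter) Total() uint64 {
	var total uint64
	for i := range c.shards {
		c.shards[i].mu.Lock()
		total += c.shards[i].n
		c.shards[i].mu.Unlock()
	}
	return total
}

func main() {
	var c shardedCounter
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func(k int) {
			defer wg.Done()
			c.Inc(k)
		}(i)
	}
	wg.Wait()
	fmt.Println(c.Total()) // 1000
}
```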

Then, we saw a nice example of false sharing, where updating a []uint64 was actually faster than updating a []uint8, and how to prevent it by using padding. We saw that this can even happen while acquiring a read lock on a sync.RWMutex, as it requires a write on a counter.
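False sharing can be sketched like this (my illustration): packed uint8 counters all live in the same cache line, so every core's write invalidates the others' cached copies; padding each counter onto its own 64-byte cache line removes that contention.

```go
package main

import (
	"fmt"
	"sync"
)

// paddedCounter gives each counter its own 64-byte cache line, so two
// goroutines writing to adjacent counters no longer invalidate each
// other's cached lines (false sharing).
type paddedCounter struct {
	n uint64
	_ [56]byte // 8 bytes of counter + 56 bytes of padding = one cache line
}

func main() {
	counters := make([]paddedCounter, 4)

	var wg sync.WaitGroup
	for i := range counters {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			for j := 0; j < 1000000; j++ {
				counters[i].n++ // each goroutine touches only its own line
			}
		}(i)
	}
	wg.Wait()

	fmt.Println(counters[0].n, counters[3].n) // 1000000 1000000
}
```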

The talk concluded that for any performance problem, we have to:

  • Measure the current implementation (using pprof, trace or whatever).
  • Understand the problem.
  • Fix it by applying some patterns (sharding, padding etc.).
  • Measure it again.

I enjoyed this talk. It was dynamic and easy to follow, with great examples. A perfect ending.

The slides are here.

As a conclusion, I would say that I enjoyed my experience at GopherCon UK 2019. Generally, the talks were really interesting and I learned a lot of stuff.

The venue was maybe too small relative to the number of attendees, and it would have been nice to see more women speakers, but overall, this was a great experience.

Thanks to my company, Utility Warehouse, for sending me to GopherCon. We are full of Gophers, so if you are interested, please check out https://uw.engineering/.

Follow me on Twitter @teivah
