According to Jackie Stewart, a three-time Formula 1 world champion, understanding how a car works made him a better driver.

Martin Thompson (the designer of the LMAX Disruptor) applied the concept of mechanical sympathy to programming. In a nutshell, understanding the underlying hardware should make us better developers when it comes to designing algorithms, data structures, and so on.

In this post, we will dig into the processor and see how understanding some of its concepts can help…

In 2019, I published a post explaining some of the most common mistakes made by Go developers.

This post was quite popular (at least by my standards), and I started to think that writing a book about common mistakes might be a good idea. So, until late 2020, I kept collecting ideas from my experience, be it my own errors (yes, I’m a prolific provider of mistakes!) or ones I observed at my job or in open-source projects. …

A couple of months ago, I wrote a post about Go and CPU Caches:

Then, I wanted to extend the scope of this post, which led me to give a talk at GopherCon Turkey 2020 on the following topic: Mechanical Sympathy in Go.

I tried to cover the following topics:

  • CPU architecture introduction
  • Locality of reference principles: temporal vs spatial locality, predictability, striding
  • Data-oriented design: introduction, slice of structs vs struct of slices, etc.
  • Pitfalls caused by poor utilization of CPU caches: cache associativity, critical stride
  • Concurrency: false sharing

Here is the video of my talk:

The beginning of the talk was unfortunately ousted. Yet, you can find the slides here for the missing introduction:


  • Find the only element that appears once

In an unsorted array of integers, every element appears twice except for one, which appears only once. Return this element.

Example:
Input: a=[1, 4, 2, 4, 1, 3, 2]
Output: 3


With the rise of distributed architectures, consistent hashing became mainstream. But what is it exactly and how is it different from a standard hashing algorithm? What are the exact motivations behind it?

First, we will describe the main concepts. Then, we will dig into existing algorithms to understand the challenges associated with consistent hashing.

Main Concepts


Hashing is the process of mapping data of arbitrary size to fixed-size values. Each existing algorithm has its own specification:

  • MD5 produces 128-bit hash values.
  • SHA-1 produces 160-bit hash values.
  • etc.

Hashing has many applications in computer science. For example, one of these applications is called…


  • ReactiveX is an API for asynchronous programming based on the observer pattern.
  • RxGo is the official Go implementation of ReactiveX (a cousin of RxJS, RxJava, etc.).
  • There are more than 50 new operators and multiple new features (hot vs. cold observables, connectable observables, backpressure, etc.)
  • Yes, Go already has great concurrency primitives, and yes, there are still no generics. Nonetheless, RxGo may still be worth a look.


In ReactiveX, the two main concepts are the Observer and the Observable.
An Observable is a bounded or unbounded stream of data. An Observer subscribes to an Observable. Then, it reacts to whatever item the Observable…

I wanted to let you know that I just released Algo Deck. It is a free and open-source collection of more than 200 algorithm cards.

Each card is a synthesized way to describe a concept. For example:

This is not an analogy for the sync package quality :)

Let’s take a look at the Go package in charge of providing synchronization primitives: sync.


sync.Mutex is probably the most widely used primitive of the sync package. It allows a mutual exclusion on a shared resource (no simultaneous access):

It must be pointed out that a sync.Mutex must not be copied (just like all the other primitives of the sync package). If a struct contains a sync.Mutex field, it must be passed by pointer.


sync.RWMutex is a reader/writer mutex. It provides the same Lock() and Unlock() methods we have just seen (both structures implement the sync.Locker interface). …

Testing is fun

A small trick I wanted to share.

Sometimes, you need to make sure your Go tests are executed sequentially. I did not say unit tests, as one could argue that if unit tests cannot be executed in parallel, they are not really unit tests.

Anyway, the first option is to run your tests using go test -p 1.

The second option is the following:

In every test, we have to call defer seq()(). The lock is acquired as the first statement and released as the last one. This option guarantees that TestFoo and TestBar are executed sequentially, regardless of the test options used.

This post is a summary of my 2-day experience at GopherCon UK 2019. I did not attend the first day (workshops only).

Disclaimer: In this post, I express my opinions about some of the talks. Under no circumstances am I making a value judgment on the speakers.

Finding Dependable Go Packages

The conference started with a keynote from @JQiu25, who works at Google on the Go team.

She described a three-step process for working with an external open-source library:

  • Discovery: Searching and finding a solution for a given problem (via Google, Twitter or whatever).
  • Evaluation: Evaluating a solution.
  • Maintenance: Contributing back to…

Teiva Harsanyi

Software Engineer, Go, Rust, Java | 改善
