Sieve of Eratosthenes: Benchmarking Rust, Nim, Go against C
C Rust Nim Go Thu, 03 Sep 2020 20:35:18 +0000

How much slower than C are modern compiled languages? With the plethora of programming languages at the world's disposal, is performance the real decider? This experiment benchmarks a recursive algorithm in C, Rust, Nim and Go to try and get a hint of an answer to this popular question.

Prime number generator: Sieve of Eratosthenes

The algorithm chosen to conduct this experiment is the recursive version of a prime number generator called the sieve of Eratosthenes. The particular test used in the benchmark consists of generating the first 50,000 prime numbers and printing the last 10 found to the console. To make the comparison between the different languages as fair as possible, we start with a C implementation and transpile it as faithfully as possible to the other languages. This way we are really comparing how each compiler deals with very similar code instructions. Here is the implementation:

#include <stdio.h>

#define NUM_PRIMES 50000

static unsigned int size = 0;
static unsigned int n = 2;

void sieve(unsigned int * primes) {
    for(unsigned int i = 0; i < size; i++) {
        if(n % primes[i] == 0) {
            n++;
            sieve(primes);
            return;
        }
    }
    primes[size] = n;
    size++;
    n++;
}

int main() {
    unsigned int primes[NUM_PRIMES];
    while(size < NUM_PRIMES) {
        sieve(primes);
    }
    for(unsigned int k = NUM_PRIMES-10; k < NUM_PRIMES; k++) {
        printf("%u\n", primes[k]);
    }
    return 0;
}

It was very important to keep some key elements the same across languages. First, the variable types used: in C, there is a significant gain in performance going from int to unsigned int, for example. Second, the use of a fixed-size array to store all the prime numbers, to avoid the overhead of dynamic allocation. And finally, maximising reference passing between function calls for maximal speed. All the implementations in other languages follow this C template. First up, our usual language of choice: Rust.

Rust version

The Rust implementation had to differ on a couple of points to fit the language specifications. To allow for recursive calls, pure functions have to be used, hence there cannot be any static variable access; the sieve function takes extra parameters for this reason. Also, the base type for array indices is usize, which is not the unsigned int (u32 in Rust) we have used in the C code, so extra casts from u32 to usize are performed to keep the compiler happy. Here is the code:

fn main() {
    const NUM_PRIMES : usize = 50000;
    let mut i : u32 = 2;
    let mut size : u32 = 0;
    let mut primes : [u32; NUM_PRIMES] = [0; NUM_PRIMES];

    fn sieve(v: &mut [u32; NUM_PRIMES], n: &mut u32, s: &mut u32) {
        for i in 0..*s {
            if *n % v[i as usize] == 0 {
                *n += 1;
                sieve(v, n, s);
                return;
            }
        }
        v[*s as usize] = *n;
        *s += 1;
        *n += 1;
    }

    while size < NUM_PRIMES as u32 {
        sieve(&mut primes, &mut i, &mut size);
    }

    for k in NUM_PRIMES-10..NUM_PRIMES {
        println!("{}", primes[k]);
    }
}

Despite these few differences, we can easily recognise the logic of the C code. Will these extra casting operations be costly to performance? We will come back to this point soon. Next, the Nim implementation.

Nim version

Even though garbage collection seems to be its default use case, the Nim compiler is very flexible as the language supports reference passing and even pointers for the bravest of coders. Our implementation will only use reference passing for the prime number array which keeps the code relatively clean and very similar to the C implementation.

const NUM_PRIMES : uint32 = 50000

var size : uint32 = 0
var n : uint32 = 2

proc sieve(primes: ref seq[uint32]) =
    var i : uint32 = 0
    while i < size:
        if n mod primes[][i] == 0:
            n += 1
            sieve(primes)
            return
        i += 1
    primes[][size] = n
    size += 1
    n += 1

when isMainModule:
    var primes : ref seq[uint32]
    new(primes)
    primes[] = newSeq[uint32](NUM_PRIMES)
    while(size < NUM_PRIMES):
        sieve(primes)
    for k in countup(NUM_PRIMES-10, NUM_PRIMES-1):
        echo primes[][k]

While we are dabbling with modern garbage-collected but flexible languages, we should add Go as the last language in our comparison study.

Go version

The translation from C to Go is fairly straightforward given how closely related to C the Go syntax was chosen to be. For this reason, there is nothing to add as general commentary on this implementation other than the code.

package main

import "fmt"

const NUM_PRIMES uint32 = 50000

func sieve(v *[NUM_PRIMES]uint32, n *uint32, s *uint32) {
	var i uint32 = 0
	for i < *s {
		if *n%v[i] == 0 {
			*n++
			sieve(v, n, s)
			return
		}
		i++
	}
	v[*s] = *n
	*s++
	*n++
}

func main() {
	var size uint32 = 0
	var i uint32 = 2
	var primes [NUM_PRIMES]uint32
	for size < NUM_PRIMES {
		sieve(&primes, &i, &size)
	}
	for k := NUM_PRIMES - 10; k < NUM_PRIMES; k++ {
		fmt.Printf("%d\n", primes[k])
	}
}

Now that we have all our implementations ready, let's compile and compare the results.

Benchmarking results

All testing was performed on a Mac Pro desktop running OS X High Sierra with two 2.4 GHz 6-core Intel Xeon processors and 12 GB of RAM. Not the purest testing environment, but this exercise is meant as a rough guide rather than a testing standard. All programs were compiled as release builds without any other optimisation options:

  • Rust Cargo: cargo build --release
  • Nim: nim c -d:release sieve.nim
  • Go: go build -buildmode=exe sieve.go

The time terminal application was run on the generated executables. Here are the results for 10 consecutive runs of the executable in each language:


Based on this set of measurements, Nim is the clear winner, matching C's performance and stability. Rust and Go are lagging behind but staying pretty close. The sample size is clearly not big enough for strong statistical certainty on the ranking but, given how close all these results are to each other, we do not need to dig further to come to a reasonable conclusion...


These modern compiled languages can be optimised to run at speeds very comparable to the equivalent C implementation. So if the open-source communities behind these languages write libraries with performance in mind, execution speeds similar to the old established low-level languages can be expected. But if performance is not the criterion to look at when picking one language over another, then what should it be? Well, it is probably more relevant to look at how appealing the syntax is, the package manager, the community, interoperability, safety and so on. There is also a message in this for teams designing languages: nailing performance alone is not enough to drive adoption, you need to bring something new to the table.

If you like this post, don't forget to follow me on Twitter and get notified of the next publication.