Performance report #4193
Scaleway C2S

Test results for C2S at Scaleway


Server specs:


4 × Intel(R) Atom(TM) CPU C2550 @ 2.40GHz
8 GB RAM / 48 GB disk space
Ubuntu 16.04 Xenial
France

Benchmark results summary:


UnixBench - 2070.6
Disk Read - 1774 MB/s
Disk Write - 788 MB/s
Bandwidth - 1711.60 Mbit/s

More: https://serverscope.io/trials/e59M
Server Specs
CPU
4 × Intel(R) Atom(TM) CPU C2550 @ 2.40GHz
CPU Cores
4 × 2394 MHz
RAM
8 GB
Disk
48 GB
OS
Ubuntu 16.04 Xenial
Location
France
Benchmark summary
UnixBench
2070.6
Disk Read
1774 MB/s
Disk Write
788 MB/s
Bandwidth
1711.60 Mbit/s
Speedtest
365.58 Mbit/s
Graph analysis
UnixBench Score
UnixBench (all CPUs)
2070.6
UnixBench (one CPU)
864.4
Raw Output
gcc -o pgms/arithoh -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Darithoh src/arith.c 
gcc -o pgms/register -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum="register int" src/arith.c 
gcc -o pgms/short -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=short src/arith.c 
gcc -o pgms/int -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=int src/arith.c 
gcc -o pgms/long -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=long src/arith.c 
gcc -o pgms/float -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=float src/arith.c 
gcc -o pgms/double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=double src/arith.c 
gcc -o pgms/hanoi -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/hanoi.c 
gcc -o pgms/syscall -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/syscall.c 
gcc -o pgms/context1 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/context1.c 
gcc -o pgms/pipe -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/pipe.c 
src/pipe.c: In function ‘main’:
src/pipe.c:52:2: warning: ignoring return value of ‘pipe’, declared with attribute warn_unused_result [-Wunused-result]
  pipe(pvec);
  ^
gcc -o pgms/spawn -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/spawn.c 
gcc -o pgms/execl -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/execl.c 
In file included from src/execl.c:34:0:
src/big.c: In function ‘dummy’:
src/big.c:109:5: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
     freopen("masterlog.00", "a", stderr);
     ^
src/big.c:197:6: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
      freopen(logname, "w", stderr);
      ^
src/big.c:221:3: warning: ignoring return value of ‘dup’, declared with attribute warn_unused_result [-Wunused-result]
   dup(pvec[0]);
   ^
src/big.c:225:6: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
      freopen(logname, "w", stderr);
      ^
src/big.c:318:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
    write(fcopy, cp->line, p - cp->line + 1);
    write(fcopy, cp->line, p - cp->line + 1);
    ^
gcc -o pgms/dhry2 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/dhry2reg -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= -DREG=register ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/looper -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/looper.c 
gcc -o pgms/fstime -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/fstime.c 
gcc -o pgms/whetstone-double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DDP -DUNIX -DUNIXBENCH src/whets.c -lm
make all
make[1]: Entering directory "/root/serverscope-0Wgxc8/byte-unixbench/UnixBench"
make distr
make[2]: Entering directory "/root/serverscope-0Wgxc8/byte-unixbench/UnixBench"
Checking distribution of files
./pgms  exists
./src  exists
./testdir  exists
make[2]: Leaving directory "/root/serverscope-0Wgxc8/byte-unixbench/UnixBench"
make programs
make[2]: Entering directory "/root/serverscope-0Wgxc8/byte-unixbench/UnixBench"
make[2]: Nothing to be done for "programs".
make[2]: Leaving directory "/root/serverscope-0Wgxc8/byte-unixbench/UnixBench"
make[1]: Leaving directory "/root/serverscope-0Wgxc8/byte-unixbench/UnixBench"
sh: 1: 3dinfo: not found

   #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
   #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
   #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
   #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
   #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
    ####   #    #  #  #    #          #####   ######  #    #   ####   #    #

   Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

   Multi-CPU version                  Version 5 revisions by Ian Smith,
                                      Sunnyvale, CA, USA
   January 13, 2011                   johantheghost at yahoo period com


1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3

4 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

4 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

4 x Execl Throughput  1 2 3

4 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

4 x File Copy 256 bufsize 500 maxblocks  1 2 3

4 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

4 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

4 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

4 x Process Creation  1 2 3

4 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

4 x Shell Scripts (1 concurrent)  1 2 3

4 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: scw-f55fc6: GNU/Linux
   OS: GNU/Linux -- 4.4.38-std-1 -- #1 SMP Mon Dec 12 10:45:29 UTC 2016
   Machine: x86_64 (x86_64)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   CPU 0: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz (4788.0 bogomips)
          Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
   CPU 1: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz (4788.0 bogomips)
          Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
   CPU 2: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz (4788.0 bogomips)
          Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
   CPU 3: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz (4788.0 bogomips)
          Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
   13:04:39 up 13:16,  1 user,  load average: 7.94, 4.77, 2.03; runlevel 2017-09-26

------------------------------------------------------------------------
Benchmark Run: Wed Sep 27 2017 13:04:39 - 13:34:08
4 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       12333160.3 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     2030.8 MWIPS (10.0 s, 7 samples)
Execl Throughput                               2798.8 lps   (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        428350.9 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          117370.8 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks       1008130.8 KBps  (30.0 s, 2 samples)
Pipe Throughput                             1123089.8 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 153446.0 lps   (10.0 s, 7 samples)
Process Creation                               6384.1 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   4328.5 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                   1474.6 lpm   (60.0 s, 2 samples)
System Call Overhead                        1751522.0 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   12333160.3   1056.8
Double-Precision Whetstone                       55.0       2030.8    369.2
Execl Throughput                                 43.0       2798.8    650.9
File Copy 1024 bufsize 2000 maxblocks          3960.0     428350.9   1081.7
File Copy 256 bufsize 500 maxblocks            1655.0     117370.8    709.2
File Copy 4096 bufsize 8000 maxblocks          5800.0    1008130.8   1738.2
Pipe Throughput                               12440.0    1123089.8    902.8
Pipe-based Context Switching                   4000.0     153446.0    383.6
Process Creation                                126.0       6384.1    506.7
Shell Scripts (1 concurrent)                     42.4       4328.5   1020.9
Shell Scripts (8 concurrent)                      6.0       1474.6   2457.7
System Call Overhead                          15000.0    1751522.0   1167.7
                                                                   ========
System Benchmarks Index Score                                         864.4

------------------------------------------------------------------------
Benchmark Run: Wed Sep 27 2017 13:34:08 - 14:02:07
4 CPUs in system; running 4 parallel copies of tests

Dhrystone 2 using register variables       49142656.2 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     8124.8 MWIPS (10.0 s, 7 samples)
Execl Throughput                               8037.1 lps   (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        472077.9 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          130588.3 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks       1287922.1 KBps  (30.0 s, 2 samples)
Pipe Throughput                             4460979.9 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 618488.4 lps   (10.0 s, 7 samples)
Process Creation                              22995.8 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                  11779.3 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                   1832.2 lpm   (60.1 s, 2 samples)
System Call Overhead                        4469487.2 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   49142656.2   4211.0
Double-Precision Whetstone                       55.0       8124.8   1477.2
Execl Throughput                                 43.0       8037.1   1869.1
File Copy 1024 bufsize 2000 maxblocks          3960.0     472077.9   1192.1
File Copy 256 bufsize 500 maxblocks            1655.0     130588.3    789.1
File Copy 4096 bufsize 8000 maxblocks          5800.0    1287922.1   2220.6
Pipe Throughput                               12440.0    4460979.9   3586.0
Pipe-based Context Switching                   4000.0     618488.4   1546.2
Process Creation                                126.0      22995.8   1825.1
Shell Scripts (1 concurrent)                     42.4      11779.3   2778.1
Shell Scripts (8 concurrent)                      6.0       1832.2   3053.7
System Call Overhead                          15000.0    4469487.2   2979.7
                                                                   ========
System Benchmarks Index Score                                        2070.6
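
Each INDEX value in the tables above is the measured RESULT divided by its BASELINE and scaled by 10, and the System Benchmarks Index Score is the geometric mean of the twelve indexes. A minimal Python sketch of that calculation, using the single-copy results from the first run:

```python
from math import prod

# (baseline, result) pairs from the single-copy run above
runs = [
    (116700.0, 12333160.3),  # Dhrystone 2 using register variables
    (55.0, 2030.8),          # Double-Precision Whetstone
    (43.0, 2798.8),          # Execl Throughput
    (3960.0, 428350.9),      # File Copy 1024 bufsize 2000 maxblocks
    (1655.0, 117370.8),      # File Copy 256 bufsize 500 maxblocks
    (5800.0, 1008130.8),     # File Copy 4096 bufsize 8000 maxblocks
    (12440.0, 1123089.8),    # Pipe Throughput
    (4000.0, 153446.0),      # Pipe-based Context Switching
    (126.0, 6384.1),         # Process Creation
    (42.4, 4328.5),          # Shell Scripts (1 concurrent)
    (6.0, 1474.6),           # Shell Scripts (8 concurrent)
    (15000.0, 1751522.0),    # System Call Overhead
]

# index = result / baseline * 10; score = geometric mean of the indexes
indexes = [result / baseline * 10 for baseline, result in runs]
score = prod(indexes) ** (1 / len(indexes))
print(round(score, 1))  # matches the reported 864.4 single-copy score
```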

dd
dd if=/dev/zero of=benchmark bs=64K count=32K conv=fdatasync
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 13.1989 s, 163 MB/s


dd if=/dev/zero of=benchmark bs=1M count=2048 conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 10.0878 s, 213 MB/s
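
dd's reported rate is simply bytes copied divided by elapsed time, in decimal megabytes. Checking the first run above:

```python
bytes_copied = 2147483648   # 32768 blocks x 64 KiB, as reported by dd
elapsed = 13.1989           # seconds

mb_per_s = bytes_copied / elapsed / 1_000_000  # dd uses decimal MB
print(round(mb_per_s))  # 163, matching dd's "163 MB/s"
```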


FIO random read
Performance
1774.5 MB/s
IOPS
454267
./fio --time_based --name=benchmark --size=256M --runtime=60 --randrepeat=1 --iodepth=32 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)

benchmark: (groupid=0, jobs=8): err= 0: pid=19772: Wed Sep 27 13:01:33 2017
  read : io=106473MB, bw=1774.5MB/s, iops=454267, runt= 60002msec
    clat (usec): min=1, max=52162, avg=12.93, stdev=164.43
     lat (usec): min=1, max=52163, avg=13.54, stdev=173.48
    clat percentiles (usec):
     |  1.00th=[    2],  5.00th=[    2], 10.00th=[    2], 20.00th=[    2],
     | 30.00th=[    2], 40.00th=[    2], 50.00th=[    2], 60.00th=[    2],
     | 70.00th=[    2], 80.00th=[    3], 90.00th=[    3], 95.00th=[    3],
     | 99.00th=[  478], 99.50th=[  490], 99.90th=[  516], 99.95th=[  700],
     | 99.99th=[10048]
    bw (KB  /s): min= 5781, max=842824, per=12.42%, avg=225622.06, stdev=214615.49
    lat (usec) : 2=0.22%, 4=97.51%, 10=0.27%, 20=0.01%, 50=0.02%
    lat (usec) : 100=0.01%, 250=0.10%, 500=1.63%, 750=0.19%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.02%, 50=0.01%
    lat (msec) : 100=0.01%
  cpu          : usr=12.49%, sys=16.08%, ctx=558855, majf=0, minf=81
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=27256986/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=106473MB, aggrb=1774.5MB/s, minb=1774.5MB/s, maxb=1774.5MB/s, mint=60002msec, maxt=60002msec

Disk stats (read/write):
  nbd0: ios=524288/134, merge=0/122, ticks=218230/680, in_queue=218730, util=50.33%
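
With a fixed 4 KiB block size, fio's bandwidth and IOPS figures are two views of the same number: bw = iops × block size. Note that fio 2.x prints "MB/s" in binary (MiB) units, which is what makes the figures line up. A quick check against the run above:

```python
iops = 454267       # reported iops from the run above
block_size = 4096   # --blocksize=4k, in bytes

mib_per_s = iops * block_size / 2**20  # fio 2.x "MB/s" is MiB/s
print(round(mib_per_s, 1))  # 1774.5, matching bw=1774.5MB/s
```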
FIO random direct read
Performance
63.3 MB/s
IOPS
16205
./fio --time_based --name=benchmark --size=256M --runtime=60 --randrepeat=1 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes

benchmark: (groupid=0, jobs=8): err= 0: pid=19807: Wed Sep 27 13:02:33 2017
  read : io=3798.3MB, bw=64823KB/s, iops=16205, runt= 60001msec
    clat (usec): min=168, max=16574, avg=487.22, stdev=63.55
     lat (usec): min=168, max=16576, avg=487.73, stdev=63.55
    clat percentiles (usec):
     |  1.00th=[  270],  5.00th=[  454], 10.00th=[  466], 20.00th=[  474],
     | 30.00th=[  478], 40.00th=[  482], 50.00th=[  486], 60.00th=[  490],
     | 70.00th=[  490], 80.00th=[  498], 90.00th=[  506], 95.00th=[  524],
     | 99.00th=[  724], 99.50th=[  740], 99.90th=[  756], 99.95th=[  772],
     | 99.99th=[  964]
    bw (KB  /s): min= 4816, max= 9536, per=12.51%, avg=8106.41, stdev=344.38
    lat (usec) : 250=0.57%, 500=84.66%, 750=14.58%, 1000=0.18%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
  cpu          : usr=1.15%, sys=3.46%, ctx=1002467, majf=0, minf=82
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=972363/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=3798.3MB, aggrb=64823KB/s, minb=64823KB/s, maxb=64823KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
  nbd0: ios=970967/15, merge=0/10, ticks=448750/0, in_queue=448500, util=99.78%
FIO random write
Performance
788.68 MB/s
IOPS
201901
./fio --time_based --name=benchmark --size=256M --runtime=60 --filename=benchmark --randrepeat=1 --iodepth=32 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes

benchmark: (groupid=0, jobs=8): err= 0: pid=19855: Wed Sep 27 13:04:34 2017
  write: io=47321MB, bw=807606KB/s, iops=201901, runt= 60001msec
    clat (usec): min=3, max=1970.2K, avg=32.76, stdev=1820.62
     lat (usec): min=4, max=1970.2K, avg=33.31, stdev=1821.34
    clat percentiles (usec):
     |  1.00th=[    4],  5.00th=[    5], 10.00th=[    5], 20.00th=[    7],
     | 30.00th=[    8], 40.00th=[    8], 50.00th=[    8], 60.00th=[    8],
     | 70.00th=[    9], 80.00th=[   12], 90.00th=[   12], 95.00th=[   12],
     | 99.00th=[   15], 99.50th=[   19], 99.90th=[ 9792], 99.95th=[19840],
     | 99.99th=[39168]
    bw (KB  /s): min=    0, max=200792, per=13.78%, avg=111309.88, stdev=24819.72
    lat (usec) : 4=0.01%, 10=75.11%, 20=24.45%, 50=0.26%, 100=0.03%
    lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.02%, 20=0.05%, 50=0.04%
    lat (msec) : 100=0.01%, 250=0.01%, 1000=0.01%, 2000=0.01%
  cpu          : usr=8.88%, sys=22.21%, ctx=3057831, majf=0, minf=84
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=12114289/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=47321MB, aggrb=807605KB/s, minb=807605KB/s, maxb=807605KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
  nbd0: ios=0/10852, merge=0/131, ticks=0/108510, in_queue=108530, util=99.25%
FIO random direct write
Performance
7.92 MB/s
IOPS
2027
./fio --time_based --name=benchmark --size=256M --runtime=60 --filename=benchmark --randrepeat=1 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes
benchmark: Laying out IO file(s) (1 file(s) / 256MB)

benchmark: (groupid=0, jobs=8): err= 0: pid=19841: Wed Sep 27 13:03:33 2017
  write: io=486588KB, bw=8109.2KB/s, iops=2027, runt= 60005msec
    clat (usec): min=193, max=2339.7K, avg=3941.10, stdev=53102.44
     lat (usec): min=193, max=2339.7K, avg=3941.68, stdev=53102.44
    clat percentiles (usec):
     |  1.00th=[  237],  5.00th=[  258], 10.00th=[  462], 20.00th=[  474],
     | 30.00th=[  478], 40.00th=[  486], 50.00th=[  486], 60.00th=[  494],
     | 70.00th=[  498], 80.00th=[  506], 90.00th=[  524], 95.00th=[  548],
     | 99.00th=[25984], 99.50th=[185344], 99.90th=[831488], 99.95th=[1351680],
     | 99.99th=[1875968]
    bw (KB  /s): min=    1, max= 8968, per=15.80%, avg=1280.95, stdev=1831.41
    lat (usec) : 250=3.92%, 500=67.69%, 750=27.05%, 1000=0.04%
    lat (msec) : 2=0.03%, 4=0.09%, 10=0.06%, 20=0.09%, 50=0.22%
    lat (msec) : 100=0.18%, 250=0.20%, 500=0.19%, 750=0.12%, 1000=0.04%
    lat (msec) : 2000=0.07%, >=2000=0.01%
  cpu          : usr=0.14%, sys=0.65%, ctx=245699, majf=0, minf=82
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=121647/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=486588KB, aggrb=8109KB/s, minb=8109KB/s, maxb=8109KB/s, mint=60005msec, maxt=60005msec

Disk stats (read/write):
  nbd0: ios=0/121759, merge=0/2040, ticks=0/61100, in_queue=61190, util=94.80%
Download benchmark results
Download speed
1711.60 Mbit/s
Downloaded 104857600 bytes in 0.533 sec
Downloaded 104857600 bytes in 0.427 sec
Downloaded 104857600 bytes in 0.421 sec
Downloaded 104857600 bytes in 0.509 sec
Downloaded 104857600 bytes in 0.447 sec
Finished! Average download speed is 1711.60 Mbit/s
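
The reported average works out to total data over total time, with each 104857600-byte download counted as 800 Mbit (i.e. treating 1 Mbit as 2^20 bits). A sketch reproducing the figure, assuming those units:

```python
times = [0.533, 0.427, 0.421, 0.509, 0.447]  # seconds per download, from above
bytes_each = 104857600

total_mbit = len(times) * bytes_each * 8 / 2**20  # 5 x 800 = 4000 Mbit
speed = total_mbit / sum(times)
print(f"{speed:.2f} Mbit/s")  # 1711.60 Mbit/s, matching the reported average
```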
Speedtest results
Upload speed
365.58 Mbit/s
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from ONLINE SAS ...
Selecting 15 servers that are not too close:
  1. iperf.fr (Rouen) [111.21 km]: 18.729 ms
  2. Ikoula (Reims) [131.50 km]: 18.885 ms
  3. LaFibre.info (Douai) [176.53 km]: 19.644 ms
  4. Magic-VPN (Roubaix) [212.70 km]: 10.822 ms
  5. MyTheValentinus (Roubaix) [213.32 km]: 11.584 ms
  6. ePlay TV (Roubaix) [213.32 km]: 19.354 ms
  7. AdvancedFight (Roubaix) [213.32 km]: 13.587 ms
  8. Lufu (Roubaix) [213.32 km]: 17.68 ms
  9. Matthews Tech (Gravelines) [237.06 km]: 26.358 ms
  10. Magic-VPN (Gravelines) [237.20 km]: 46.237 ms
  11. CloudConnX (Eastbourne) [258.84 km]: 10.899 ms
  12. Universite Catholique de Louvain (Louvain-La-Neuve) [259.09 km]: 8.093 ms
  13. RIV54 (Saulnes) [264.06 km]: 25.259 ms
  14. Combell (Brussels) [264.26 km]: 7.055 ms
  15. Riffle Media BVBA (Brussels) [264.26 km]: 16.79 ms
Testing upload speeds
  1. iperf.fr (Rouen):                            ......................... 351.80 Mbit/s
  2. Ikoula (Reims):                              ......................... 268.32 Mbit/s
  3. LaFibre.info (Douai):                        ......................... 397.40 Mbit/s
  4. Magic-VPN (Roubaix):                         ......................... 284.92 Mbit/s
  5. MyTheValentinus (Roubaix):                   ......................... 526.53 Mbit/s
  6. ePlay TV (Roubaix):                          ......................... 115.12 Mbit/s
  7. AdvancedFight (Roubaix):                     ......................... 392.28 Mbit/s
  8. Lufu (Roubaix):                              ......................... 305.98 Mbit/s
  9. Matthews Tech (Gravelines):                  ......................... 382.54 Mbit/s
  10. Magic-VPN (Gravelines):                     ......................... 96.20 Mbit/s
  11. CloudConnX (Eastbourne):                    ......................... 400.82 Mbit/s
  12. Universite Catholique de Louvain (Louvain-La-Neuve):......................... 355.33 Mbit/s
  13. RIV54 (Saulnes):                            ......................... 160.41 Mbit/s
  14. Combell (Brussels):                         ......................... 115.45 Mbit/s
  15. Riffle Media BVBA (Brussels):               ......................... 355.43 Mbit/s
Average upload speed is 300.57 Mbit/s