Benchmark report #11316
BuyVM – KVM 512

Test results for KVM 512 at BuyVM


Server specs:


Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz
492 MB RAM / 10 GB disk space
Debian 9.9

Benchmark results summary:


UnixBench - 995.2
Disk Read - 32 MB/s
Disk Write - 159 MB/s
Bandwidth - 459.62 Mbit/s

More: https://serverscope.io/trials/Y0eE
Server Specs
CPU: Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz
CPU Cores: 1 × 3500 MHz
RAM: 492 MB
Disk: 10 GB
OS: Debian 9.9
Benchmark summary
UnixBench: 995.2
Disk Read: 32 MB/s
Disk Write: 159 MB/s
Bandwidth: 459.62 Mbit/s
Speedtest: 229.39 Mbit/s
Graph analysis: UnixBench (one CPU) score: 995.2
Raw Output
gcc -o pgms/arithoh -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Darithoh src/arith.c 
gcc -o pgms/register -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum="register int" src/arith.c 
gcc -o pgms/short -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=short src/arith.c 
gcc -o pgms/int -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=int src/arith.c 
gcc -o pgms/long -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=long src/arith.c 
gcc -o pgms/float -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=float src/arith.c 
gcc -o pgms/double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=double src/arith.c 
gcc -o pgms/hanoi -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/hanoi.c 
gcc -o pgms/syscall -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/syscall.c 
gcc -o pgms/context1 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/context1.c 
gcc -o pgms/pipe -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/pipe.c 
gcc -o pgms/spawn -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/spawn.c 
gcc -o pgms/execl -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/execl.c 
gcc -o pgms/dhry2 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/dhry2reg -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= -DREG=register ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/looper -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/looper.c 
gcc -o pgms/fstime -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/fstime.c 
gcc -o pgms/whetstone-double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DDP -DUNIX -DUNIXBENCH src/whets.c -lm
make all
make[1]: Entering directory "/root/serverscope-3BllGx/byte-unixbench/UnixBench"
make distr
make[2]: Entering directory "/root/serverscope-3BllGx/byte-unixbench/UnixBench"
Checking distribution of files
./pgms  exists
./src  exists
./testdir  exists
make[2]: Leaving directory "/root/serverscope-3BllGx/byte-unixbench/UnixBench"
make programs
make[2]: Entering directory "/root/serverscope-3BllGx/byte-unixbench/UnixBench"
make[2]: Nothing to be done for "programs".
make[2]: Leaving directory "/root/serverscope-3BllGx/byte-unixbench/UnixBench"
make[1]: Leaving directory "/root/serverscope-3BllGx/byte-unixbench/UnixBench"
sh: 1: 3dinfo: not found

   #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
   #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
   #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
   #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
   #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
    ####   #    #  #  #    #          #####   ######  #    #   ####   #    #

   Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

   Multi-CPU version                  Version 5 revisions by Ian Smith,
                                      Sunnyvale, CA, USA
   January 13, 2011                   johantheghost at yahoo period com

Wide character in print at ./Run line 1529.
Wide character in printf at ./Run line 1560.

1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3
Wide character in printf at ./Run line 1502.


========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: s6: GNU/Linux
   OS: GNU/Linux -- 4.9.0-9-amd64 -- #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13)
   Machine: x86_64 (unknown)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   CPU 0: Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz (7000.0 bogomips)
          x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
   16:06:44 up 19:19,  1 user,  load average: 8.20, 4.89, 2.12; runlevel 2019-06-16

------------------------------------------------------------------------
Benchmark Run: Mon Jun 17 2019 16:06:44 - 16:34:46
1 CPU in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       24284526.0 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     4425.4 MWIPS (9.6 s, 7 samples)
Execl Throughput                               3764.2 lps   (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        615345.3 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          162522.5 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks       1523757.7 KBps  (30.0 s, 2 samples)
Pipe Throughput                             1036154.7 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 133116.5 lps   (10.0 s, 7 samples)
Process Creation                               8020.5 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   5932.7 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    782.1 lpm   (60.0 s, 2 samples)
System Call Overhead                         748980.9 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   24284526.0   2080.9
Double-Precision Whetstone                       55.0       4425.4    804.6
Execl Throughput                                 43.0       3764.2    875.4
File Copy 1024 bufsize 2000 maxblocks          3960.0     615345.3   1553.9
File Copy 256 bufsize 500 maxblocks            1655.0     162522.5    982.0
File Copy 4096 bufsize 8000 maxblocks          5800.0    1523757.7   2627.2
Pipe Throughput                               12440.0    1036154.7    832.9
Pipe-based Context Switching                   4000.0     133116.5    332.8
Process Creation                                126.0       8020.5    636.5
Shell Scripts (1 concurrent)                     42.4       5932.7   1399.2
Shell Scripts (8 concurrent)                      6.0        782.1   1303.6
System Call Overhead                          15000.0     748980.9    499.3
                                                                   ========
System Benchmarks Index Score                                         995.2
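
The overall score appears to be the geometric mean of the twelve INDEX values above (each index itself being 10 × RESULT / BASELINE). A quick sanity check with awk (illustrative, not part of the trial output):

echo "2080.9 804.6 875.4 1553.9 982.0 2627.2 832.9 332.8 636.5 1399.2 1303.6 499.3" \
  | awk '{ for (i = 1; i <= NF; i++) s += log($i); print exp(s / NF) }'    # prints roughly 995.2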

dd
dd if=/dev/zero of=benchmark bs=64K count=32K conv=fdatasync
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 5.25798 s, 408 MB/s


dd if=/dev/zero of=benchmark bs=1M count=2048 conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 4.34684 s, 494 MB/s
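
The two dd passes above measure sequential write throughput, with conv=fdatasync forcing the data to disk before the time is reported. A matching sequential-read pass is not part of this trial, but one could be taken on the same file along these lines (illustrative commands, run as root):

sync && echo 3 > /proc/sys/vm/drop_caches    # drop the page cache so the read is served from disk, not RAM
dd if=benchmark of=/dev/null bs=1M iflag=direct    # sequential read of the 2 GiB file written above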


FIO random read
Performance: 32.23 MB/s
IOPS: 8251
./fio --time_based --name=benchmark --size=256M --runtime=60 --randrepeat=1 --iodepth=32 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)

benchmark: (groupid=0, jobs=8): err= 0: pid=10212: Mon Jun 17 16:03:39 2019
  read : io=2070.1MB, bw=33006KB/s, iops=8251, runt= 64252msec
    clat (usec): min=1, max=5861.9K, avg=933.63, stdev=17598.51
     lat (usec): min=2, max=5861.9K, avg=934.70, stdev=17598.59
    clat percentiles (usec):
     |  1.00th=[   55],  5.00th=[   96], 10.00th=[  141], 20.00th=[  215],
     | 30.00th=[  286], 40.00th=[  362], 50.00th=[  438], 60.00th=[  540],
     | 70.00th=[  692], 80.00th=[  980], 90.00th=[ 1736], 95.00th=[ 3024],
     | 99.00th=[ 8640], 99.50th=[11840], 99.90th=[18560], 99.95th=[22144],
     | 99.99th=[28288]
    bw (KB  /s): min=   94, max=12872, per=13.50%, avg=4455.39, stdev=1364.63
    lat (usec) : 2=0.01%, 4=0.15%, 10=0.04%, 20=0.01%, 50=0.41%
    lat (usec) : 100=4.76%, 250=19.67%, 500=31.63%, 750=15.91%, 1000=8.02%
    lat (msec) : 2=10.95%, 4=4.99%, 10=2.72%, 20=0.66%, 50=0.08%
    lat (msec) : 100=0.01%, >=2000=0.01%
  cpu          : usr=1.31%, sys=3.74%, ctx=534902, majf=0, minf=83
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=530169/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=2070.1MB, aggrb=33005KB/s, minb=33005KB/s, maxb=33005KB/s, mint=64252msec, maxt=64252msec

Disk stats (read/write):
  vda: ios=529124/9, merge=0/5, ticks=369164/32, in_queue=419900, util=93.17%
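
The headline figures for each FIO test appear to come straight from fio's aggregate line: iops is reported as-is, and the bandwidth in KiB/s is divided by 1024 to give MB/s (illustrative check, not trial output):

echo "scale=2; 33006 / 1024" | bc    # 32.23, matching the 32.23 MB/s above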
FIO random direct read
Performance: 31.22 MB/s
IOPS: 7992
./fio --time_based --name=benchmark --size=256M --runtime=60 --randrepeat=1 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes

benchmark: (groupid=0, jobs=8): err= 0: pid=10286: Mon Jun 17 16:04:39 2019
  read : io=1873.6MB, bw=31971KB/s, iops=7992, runt= 60008msec
    clat (usec): min=22, max=66922, avg=985.90, stdev=1836.37
     lat (usec): min=23, max=96142, avg=987.51, stdev=1842.75
    clat percentiles (usec):
     |  1.00th=[   62],  5.00th=[  111], 10.00th=[  167], 20.00th=[  249],
     | 30.00th=[  326], 40.00th=[  402], 50.00th=[  490], 60.00th=[  604],
     | 70.00th=[  780], 80.00th=[ 1080], 90.00th=[ 1912], 95.00th=[ 3344],
     | 99.00th=[ 9792], 99.50th=[12864], 99.90th=[19584], 99.95th=[23168],
     | 99.99th=[32384]
    bw (KB  /s): min= 2232, max= 7827, per=12.51%, avg=3999.20, stdev=884.71
    lat (usec) : 50=0.28%, 100=3.69%, 250=16.22%, 500=30.95%, 750=17.58%
    lat (usec) : 1000=9.05%
    lat (msec) : 2=12.80%, 4=5.40%, 10=3.09%, 20=0.86%, 50=0.09%
    lat (msec) : 100=0.01%
  cpu          : usr=1.53%, sys=4.12%, ctx=485564, majf=0, minf=65
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=479633/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=1873.6MB, aggrb=31971KB/s, minb=31971KB/s, maxb=31971KB/s, mint=60008msec, maxt=60008msec

Disk stats (read/write):
  vda: ios=479367/0, merge=0/0, ticks=368512/0, in_queue=368244, util=92.24%
FIO random write
Performance: 159.82 MB/s
IOPS: 40913
./fio --time_based --name=benchmark --size=256M --runtime=60 --filename=benchmark --randrepeat=1 --iodepth=32 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes

benchmark: (groupid=0, jobs=8): err= 0: pid=10425: Mon Jun 17 16:06:40 2019
  write: io=9590.8MB, bw=163655KB/s, iops=40913, runt= 60010msec
    clat (usec): min=1, max=261236, avg=172.13, stdev=3523.40
     lat (usec): min=1, max=261236, avg=175.71, stdev=3577.40
    clat percentiles (usec):
     |  1.00th=[    1],  5.00th=[    2], 10.00th=[    2], 20.00th=[    2],
     | 30.00th=[    2], 40.00th=[    3], 50.00th=[    3], 60.00th=[    3],
     | 70.00th=[    3], 80.00th=[    3], 90.00th=[    4], 95.00th=[    6],
     | 99.00th=[   56], 99.50th=[  716], 99.90th=[64768], 99.95th=[90624],
     | 99.99th=[138240]
    bw (KB  /s): min=    2, max=54576, per=12.62%, avg=20651.08, stdev=6948.65
    lat (usec) : 2=1.25%, 4=80.50%, 10=15.26%, 20=1.32%, 50=0.63%
    lat (usec) : 100=0.23%, 250=0.22%, 500=0.07%, 750=0.03%, 1000=0.01%
    lat (msec) : 2=0.03%, 4=0.02%, 10=0.08%, 20=0.11%, 50=0.12%
    lat (msec) : 100=0.09%, 250=0.03%, 500=0.01%
  cpu          : usr=1.77%, sys=4.23%, ctx=23011, majf=0, minf=76
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=2455234/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=9590.8MB, aggrb=163654KB/s, minb=163654KB/s, maxb=163654KB/s, mint=60010msec, maxt=60010msec

Disk stats (read/write):
  vda: ios=0/932158, merge=0/3373, ticks=0/4092528, in_queue=4095228, util=76.79%
FIO random direct write
Performance: 10.76 MB/s
IOPS: 2753
./fio --time_based --name=benchmark --size=256M --runtime=60 --filename=benchmark --randrepeat=1 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes
benchmark: Laying out IO file(s) (1 file(s) / 256MB)

benchmark: (groupid=0, jobs=8): err= 0: pid=10355: Mon Jun 17 16:05:40 2019
  write: io=661084KB, bw=11015KB/s, iops=2753, runt= 60014msec
    clat (usec): min=37, max=158134, avg=2895.46, stdev=12664.13
     lat (usec): min=37, max=158135, avg=2896.61, stdev=12665.24
    clat percentiles (usec):
     |  1.00th=[   59],  5.00th=[   68], 10.00th=[   75], 20.00th=[   86],
     | 30.00th=[   97], 40.00th=[  113], 50.00th=[  141], 60.00th=[  189],
     | 70.00th=[  278], 80.00th=[  462], 90.00th=[ 1048], 95.00th=[ 4768],
     | 99.00th=[71168], 99.50th=[83456], 99.90th=[107008], 99.95th=[119296],
     | 99.99th=[144384]
    bw (KB  /s): min=  277, max= 3416, per=12.56%, avg=1383.83, stdev=542.31
    lat (usec) : 50=0.09%, 100=31.51%, 250=35.85%, 500=13.80%, 750=5.31%
    lat (usec) : 1000=2.88%
    lat (msec) : 2=3.70%, 4=1.59%, 10=0.79%, 20=0.15%, 50=1.55%
    lat (msec) : 100=2.63%, 250=0.16%
  cpu          : usr=0.41%, sys=1.93%, ctx=330311, majf=0, minf=66
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=165271/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=661084KB, aggrb=11015KB/s, minb=11015KB/s, maxb=11015KB/s, mint=60014msec, maxt=60014msec

Disk stats (read/write):
  vda: ios=0/165049, merge=0/1961, ticks=0/52224, in_queue=52172, util=83.66%
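
For reference, the command line used for this last direct random-write run maps onto an ini-style fio job file roughly like the sketch below (an illustration written against fio 2.x, not output from this trial; the --verify options are omitted since verification is off by default, and iodepth has no effect with the default sync ioengine):

[global]
; 8 jobs issuing 4 KiB random writes to the same 256 MB file for 60 s, with O_DIRECT enabled
time_based
runtime=60
size=256M
filename=benchmark
randrepeat=1
iodepth=32
direct=1
invalidate=1
numjobs=8
blocksize=4k
group_reporting

[benchmark]
rw=randwrite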
Download benchmark results
Download speed: 459.62 Mbit/s
Downloaded 104857600 bytes in 1.775135 sec
Downloaded 104857600 bytes in 1.730563 sec
Downloaded 104857600 bytes in 1.635297 sec
Downloaded 104857600 bytes in 1.793934 sec
Downloaded 104857600 bytes in 1.767941 sec
Finished! Average download speed is 459.62 Mbit/s
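
The average appears to be total data over total elapsed time, with each 104857600-byte (100 MiB) run counted in binary megabits (illustrative check, not trial output):

echo "scale=3; (5 * 104857600 * 8 / 1048576) / (1.775135 + 1.730563 + 1.635297 + 1.793934 + 1.767941)" | bc    # ~459.62 Mbit/s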
Speedtest results
Upload speed: 229.39 Mbit/s
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from FranTech Solutions ...
Selecting 15 servers that are not too close:
  1. Docler Holding (Bettembourg) [30.02 km]: 23.082 ms
  2. Join Experience (Kayl) [34.04 km]: 23.875 ms
  3. RIV54 (Saulnes) [34.71 km]: 20.88 ms
  4. telenetwork AG (Trier) [39.07 km]: 24.489 ms
  5. KUES DATA GmbH (Losheim am See) [56.07 km]: 19.558 ms
  6. ORNE THD (Rombas) [59.53 km]: 327.766 ms
  7. Regie Talange (Talange) [61.56 km]: 20.963 ms
  8. Net-Build GmbH (Saarwellingen) [70.21 km]: 15.149 ms
  9. inexio (Saarlouis) [70.40 km]: 17.019 ms
  10. Vialis (Woippy) [70.81 km]: 20.783 ms
  11. REFO Falck (Falck) [73.63 km]: 19.698 ms
  12. Enes (Creutzwald) [77.86 km]: 24.779 ms
  13. intersaar GmbH (Saarbrucken) [86.67 km]: 2515.036 ms
  14. Fibragglo (Forbach) [88.24 km]: 26.516 ms
  15. Enes (Hombourg-Haut) [88.44 km]: 21.796 ms
Testing upload speeds
  1. Docler Holding (Bettembourg):                ......................... 223.64 Mbit/s
  2. Join Experience (Kayl):                      ......................... 166.83 Mbit/s
  3. RIV54 (Saulnes):                             ......................... 226.29 Mbit/s
  4. telenetwork AG (Trier):                      ......................... 169.78 Mbit/s
  5. KUES DATA GmbH (Losheim am See):             ......................... 260.02 Mbit/s
  6. ORNE THD (Rombas):                           ......................... 185.09 Mbit/s
  7. Regie Talange (Talange):                     ......................... 206.19 Mbit/s
  8. Net-Build GmbH (Saarwellingen):              ......................... 284.81 Mbit/s
  9. inexio (Saarlouis):                          ......................... 290.36 Mbit/s
  10. Vialis (Woippy):                            ......................... 99.87 Mbit/s
  11. REFO Falck (Falck):                         ......................... 225.32 Mbit/s
  12. Enes (Creutzwald):                          ......................... 206.05 Mbit/s
  13. intersaar GmbH (Saarbrucken):               ......................... 205.34 Mbit/s
  14. Fibragglo (Forbach):                        ......................... 255.19 Mbit/s
  15. Enes (Hombourg-Haut):                       ......................... 244.05 Mbit/s
Average upload speed is 216.59 Mbit/s
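
The closing figure is the arithmetic mean of the fifteen per-server results listed above (illustrative check, not trial output):

echo "scale=3; (223.64 + 166.83 + 226.29 + 169.78 + 260.02 + 185.09 + 206.19 + 284.81 + 290.36 + 99.87 + 225.32 + 206.05 + 205.34 + 255.19 + 244.05) / 15" | bc    # ~216.59 Mbit/s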