Performance report #4037
Vultr 25 GB SSD

Test results for 25 GB SSD at Vultr


Server specs:


Virtual CPU 714389bda930
992 MB RAM / 26 GB disk space
Ubuntu 16.04 Xenial
Frankfurt am Main, Germany

Benchmark results summary:


UnixBench - 1646.6
Disk Read - 162 MB/s
Disk Write - 727 MB/s
Bandwidth - 1267.43 Mbit/s

More: https://serverscope.io/trials/N0oE
Server Specs
CPU
Virtual CPU 714389bda930
CPU Cores
1 × 2400 MHz
RAM
992 MB
Disk
26 GB
OS
Ubuntu 16.04 Xenial
Location
Frankfurt am Main, Germany
Benchmark summary
UnixBench
1646.6
Disk Read
162 MB/s
Disk Write
727 MB/s
Bandwidth
1267.43 Mbit/s
Speedtest
439.42 Mbit/s
Graph analysis
UnixBench (one CPU) score: 1646.6
Raw Output
gcc -o pgms/arithoh -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Darithoh src/arith.c 
gcc -o pgms/register -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum="register int" src/arith.c 
gcc -o pgms/short -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=short src/arith.c 
gcc -o pgms/int -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=int src/arith.c 
gcc -o pgms/long -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=long src/arith.c 
gcc -o pgms/float -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=float src/arith.c 
gcc -o pgms/double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=double src/arith.c 
gcc -o pgms/hanoi -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/hanoi.c 
gcc -o pgms/syscall -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/syscall.c 
gcc -o pgms/context1 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/context1.c 
gcc -o pgms/pipe -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/pipe.c 
src/pipe.c: In function ‘main’:
src/pipe.c:52:2: warning: ignoring return value of ‘pipe’, declared with attribute warn_unused_result [-Wunused-result]
  pipe(pvec);
  ^
gcc -o pgms/spawn -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/spawn.c 
gcc -o pgms/execl -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/execl.c 
In file included from src/execl.c:34:0:
src/big.c: In function ‘dummy’:
src/big.c:109:5: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
     freopen("masterlog.00", "a", stderr);
     ^
src/big.c:197:6: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
      freopen(logname, "w", stderr);
      ^
src/big.c:221:3: warning: ignoring return value of ‘dup’, declared with attribute warn_unused_result [-Wunused-result]
   dup(pvec[0]);
   ^
src/big.c:225:6: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
      freopen(logname, "w", stderr);
      ^
src/big.c:318:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
    write(fcopy, cp->line, p - cp->line + 1);
    ^
gcc -o pgms/dhry2 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/dhry2reg -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= -DREG=register ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/looper -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/looper.c 
gcc -o pgms/fstime -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/fstime.c 
gcc -o pgms/whetstone-double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DDP -DUNIX -DUNIXBENCH src/whets.c -lm
make all
make[1]: Entering directory '/root/serverscope-z1jVwi/byte-unixbench/UnixBench'
make distr
make[2]: Entering directory '/root/serverscope-z1jVwi/byte-unixbench/UnixBench'
Checking distribution of files
./pgms  exists
./src  exists
./testdir  exists
make[2]: Leaving directory '/root/serverscope-z1jVwi/byte-unixbench/UnixBench'
make programs
make[2]: Entering directory '/root/serverscope-z1jVwi/byte-unixbench/UnixBench'
make[2]: Nothing to be done for 'programs'.
make[2]: Leaving directory '/root/serverscope-z1jVwi/byte-unixbench/UnixBench'
make[1]: Leaving directory '/root/serverscope-z1jVwi/byte-unixbench/UnixBench'
sh: 1: 3dinfo: not found

   #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
   #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
   #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
   #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
   #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
    ####   #    #  #  #    #          #####   ######  #    #   ####   #    #

   Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

   Multi-CPU version                  Version 5 revisions by Ian Smith,
                                      Sunnyvale, CA, USA
   January 13, 2011                   johantheghost at yahoo period com


1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: 1GB: GNU/Linux
   OS: GNU/Linux -- 4.4.0-87-generic -- #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017
   Machine: x86_64 (x86_64)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   CPU 0: Virtual CPU 714389bda930 (4800.0 bogomips)
          x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
   12:00:57 up 55 min,  1 user,  load average: 8.62, 5.01, 2.94; runlevel 2017-09-03

------------------------------------------------------------------------
Benchmark Run: Sun Sep 03 2017 12:00:57 - 12:29:08
1 CPU in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       30110798.6 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     3223.5 MWIPS (9.9 s, 7 samples)
Execl Throughput                               4279.0 lps   (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks       1186327.3 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          324317.5 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks       2392440.4 KBps  (30.0 s, 2 samples)
Pipe Throughput                             2328587.6 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 300207.8 lps   (10.0 s, 7 samples)
Process Creation                              11420.5 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   8149.7 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                   1070.5 lpm   (60.0 s, 2 samples)
System Call Overhead                        3744451.3 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   30110798.6   2580.2
Double-Precision Whetstone                       55.0       3223.5    586.1
Execl Throughput                                 43.0       4279.0    995.1
File Copy 1024 bufsize 2000 maxblocks          3960.0    1186327.3   2995.8
File Copy 256 bufsize 500 maxblocks            1655.0     324317.5   1959.6
File Copy 4096 bufsize 8000 maxblocks          5800.0    2392440.4   4124.9
Pipe Throughput                               12440.0    2328587.6   1871.9
Pipe-based Context Switching                   4000.0     300207.8    750.5
Process Creation                                126.0      11420.5    906.4
Shell Scripts (1 concurrent)                     42.4       8149.7   1922.1
Shell Scripts (8 concurrent)                      6.0       1070.5   1784.2
System Call Overhead                          15000.0    3744451.3   2496.3
                                                                   ========
System Benchmarks Index Score                                        1646.6
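The score on the line above is UnixBench's geometric mean of the twelve per-test index values (each index being result ÷ baseline × 10, as in the table). A quick awk sketch recomputes it from the rounded indices printed in the table:

```shell
# Recompute the System Benchmarks Index Score as the geometric mean
# of the 12 per-test index values from the table above.
awk 'BEGIN {
  n = split("2580.2 586.1 995.1 2995.8 1959.6 4124.9 1871.9 750.5 906.4 1922.1 1784.2 2496.3", idx, " ")
  s = 0
  for (i = 1; i <= n; i++) s += log(idx[i])
  printf "%.1f\n", exp(s / n)   # ~1646.6, matching the published score
}'
```

Because the per-test indices are themselves rounded to one decimal, the recomputed mean may differ from 1646.6 in the last digit.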

Hard drive
dd
dd if=/dev/zero of=benchmark bs=64K count=32K conv=fdatasync
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 4.51594 s, 476 MB/s


dd if=/dev/zero of=benchmark bs=1M count=2048 conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 4.51848 s, 475 MB/s
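dd's reported rate is simply bytes copied divided by elapsed time, in decimal megabytes (10^6 bytes). The first run above checks out:

```shell
# Verify dd's sequential-write figure: 2147483648 bytes in 4.51594 s
awk 'BEGIN { printf "%.0f MB/s\n", 2147483648 / 4.51594 / 1000000 }'
# prints 476 MB/s
```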


Hard drive
FIO random read
Performance
162.34 MB/s
IOPS
41559
./fio --time_based --name=benchmark --size=256M --runtime=60 --randrepeat=1 --iodepth=32 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)

benchmark: (groupid=0, jobs=8): err= 0: pid=3967: Sun Sep  3 11:57:53 2017
  read : io=9740.8MB, bw=166238KB/s, iops=41559, runt= 60001msec
    clat (usec): min=1, max=9991, avg=190.13, stdev=132.35
     lat (usec): min=1, max=9992, avg=190.38, stdev=132.37
    clat percentiles (usec):
     |  1.00th=[    2],  5.00th=[    6], 10.00th=[  145], 20.00th=[  163],
     | 30.00th=[  173], 40.00th=[  181], 50.00th=[  189], 60.00th=[  199],
     | 70.00th=[  209], 80.00th=[  225], 90.00th=[  251], 95.00th=[  278],
     | 99.00th=[  362], 99.50th=[  414], 99.90th=[ 2040], 99.95th=[ 2896],
     | 99.99th=[ 4704]
    bw (KB  /s): min=17280, max=27512, per=12.51%, avg=20790.84, stdev=1877.14
    lat (usec) : 2=0.90%, 4=1.50%, 10=4.66%, 20=0.74%, 50=0.10%
    lat (usec) : 100=0.01%, 250=81.82%, 500=9.97%, 750=0.07%, 1000=0.02%
    lat (msec) : 2=0.11%, 4=0.09%, 10=0.01%
  cpu          : usr=1.88%, sys=6.96%, ctx=2303417, majf=0, minf=80
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=2493619/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=9740.8MB, aggrb=166238KB/s, minb=166238KB/s, maxb=166238KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
  vda: ios=2291560/8, merge=0/5, ticks=388116/0, in_queue=387736, util=98.74%
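The headline 162.34 figure follows directly from the raw fio line: 41,559 IOPS at a 4 KiB block size is ~166,236 KiB/s (fio reported 166,238), which the summary presents in binary megabytes (MiB/s) despite the "MB/s" label. The arithmetic:

```shell
# 4 KiB blocks at 41559 IOPS -> KiB/s, then binary MiB/s as in the summary
awk 'BEGIN {
  kib_s = 41559 * 4            # 166236 KiB/s
  printf "%.2f MiB/s\n", kib_s / 1024
}'
# prints 162.34 MiB/s
```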
Hard drive
FIO random direct read
Performance
159.21 MB/s
IOPS
40756
./fio --time_based --name=benchmark --size=256M --runtime=60 --randrepeat=1 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes

benchmark: (groupid=0, jobs=8): err= 0: pid=3982: Sun Sep  3 11:58:53 2017
  read : io=9552.5MB, bw=163026KB/s, iops=40756, runt= 60001msec
    clat (usec): min=108, max=15727, avg=193.69, stdev=55.87
     lat (usec): min=109, max=15743, avg=193.94, stdev=55.89
    clat percentiles (usec):
     |  1.00th=[  137],  5.00th=[  147], 10.00th=[  155], 20.00th=[  165],
     | 30.00th=[  173], 40.00th=[  179], 50.00th=[  187], 60.00th=[  195],
     | 70.00th=[  205], 80.00th=[  217], 90.00th=[  239], 95.00th=[  262],
     | 99.00th=[  318], 99.50th=[  346], 99.90th=[  454], 99.95th=[  652],
     | 99.99th=[ 2288]
    bw (KB  /s): min=14704, max=21240, per=12.50%, avg=20381.23, stdev=591.35
    lat (usec) : 250=92.83%, 500=7.10%, 750=0.03%, 1000=0.01%
    lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%
  cpu          : usr=2.08%, sys=5.40%, ctx=2449251, majf=0, minf=80
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=2445426/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=9552.5MB, aggrb=163025KB/s, minb=163025KB/s, maxb=163025KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
  vda: ios=2442361/0, merge=0/0, ticks=399348/0, in_queue=399004, util=99.68%
Hard drive
FIO random write
Performance
727.12 MB/s
IOPS
186143
./fio --time_based --name=benchmark --size=256M --runtime=60 --filename=benchmark --randrepeat=1 --iodepth=32 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes

benchmark: (groupid=0, jobs=8): err= 0: pid=4022: Sun Sep  3 12:00:54 2017
  write: io=43632MB, bw=744574KB/s, iops=186143, runt= 60007msec
    clat (usec): min=0, max=138867, avg=38.43, stdev=1136.25
     lat (usec): min=0, max=158368, avg=39.16, stdev=1149.95
    clat percentiles (usec):
     |  1.00th=[    1],  5.00th=[    1], 10.00th=[    1], 20.00th=[    1],
     | 30.00th=[    2], 40.00th=[    2], 50.00th=[    2], 60.00th=[    2],
     | 70.00th=[    2], 80.00th=[    2], 90.00th=[    3], 95.00th=[    4],
     | 99.00th=[   14], 99.50th=[   16], 99.90th=[12224], 99.95th=[20352],
     | 99.99th=[57600]
    bw (KB  /s): min=62429, max=149517, per=12.53%, avg=93271.52, stdev=14338.32
    lat (usec) : 2=24.35%, 4=67.45%, 10=6.58%, 20=1.30%, 50=0.09%
    lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.02%, 10=0.06%, 20=0.07%, 50=0.04%
    lat (msec) : 100=0.01%, 250=0.01%
  cpu          : usr=1.81%, sys=5.86%, ctx=49574, majf=0, minf=84
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=11169909/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=43632MB, aggrb=744573KB/s, minb=744573KB/s, maxb=744573KB/s, mint=60007msec, maxt=60007msec

Disk stats (read/write):
  vda: ios=0/904383, merge=0/876, ticks=0/7248936, in_queue=7249296, util=99.60%
Hard drive
FIO random direct write
Performance
41.1 MB/s
IOPS
10522
./fio --time_based --name=benchmark --size=256M --runtime=60 --filename=benchmark --randrepeat=1 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes
benchmark: Laying out IO file(s) (1 file(s) / 256MB)

benchmark: (groupid=0, jobs=8): err= 0: pid=4002: Sun Sep  3 11:59:53 2017
  write: io=2466.4MB, bw=42091KB/s, iops=10522, runt= 60002msec
    clat (usec): min=65, max=56978, avg=757.12, stdev=5440.93
     lat (usec): min=65, max=56978, avg=757.49, stdev=5440.94
    clat percentiles (usec):
     |  1.00th=[   72],  5.00th=[   74], 10.00th=[   76], 20.00th=[   78],
     | 30.00th=[   81], 40.00th=[   84], 50.00th=[   87], 60.00th=[   91],
     | 70.00th=[   97], 80.00th=[  103], 90.00th=[  116], 95.00th=[  133],
     | 99.00th=[43264], 99.50th=[47360], 99.90th=[49920], 99.95th=[50944],
     | 99.99th=[53504]
    bw (KB  /s): min= 3289, max= 7256, per=12.51%, avg=5263.76, stdev=677.09
    lat (usec) : 100=74.79%, 250=23.62%, 500=0.09%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=1.37%
    lat (msec) : 100=0.11%
  cpu          : usr=0.67%, sys=1.87%, ctx=1272779, majf=0, minf=81
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=631383/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=2466.4MB, aggrb=42090KB/s, minb=42090KB/s, maxb=42090KB/s, mint=60002msec, maxt=60002msec

Disk stats (read/write):
  vda: ios=55/630674, merge=0/1542, ticks=16/47456, in_queue=47332, util=78.87%
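The gap between the two write runs is expected: without --direct=1, 4 KiB random writes land in the page cache (hence the microsecond-range completion latencies in the buffered run), while --direct=1 forces every write through to the virtual disk. Converting fio's aggregate KiB/s figures to the summary's binary-megabyte units:

```shell
# Buffered vs O_DIRECT random-write bandwidth, from fio's aggregate figures
awk 'BEGIN {
  printf "buffered: %.2f MiB/s\n", 744574 / 1024   # 727.12
  printf "direct:   %.2f MiB/s\n", 42091 / 1024    # 41.10
  printf "ratio:    %.1fx\n", 744574 / 42091       # 17.7x
}'
```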
Download
Download benchmark results
Download speed
1267.43 Mbit/s
Downloaded 104857600 bytes in 0.448 sec
Downloaded 104857600 bytes in 0.947 sec
Downloaded 104857600 bytes in 0.889 sec
Downloaded 104857600 bytes in 0.419 sec
Downloaded 104857600 bytes in 0.453 sec
Finished! Average download speed is 1267.43 Mbit/s
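The published average is consistent with total data over total time, with megabits computed on a binary base (104,857,600 bytes = 100 MiB = 800 "Mbit" per run) — an inference from the numbers, but the arithmetic matches exactly:

```shell
# Average download speed: five 100 MiB transfers over their summed durations
awk 'BEGIN {
  mbit = 5 * 104857600 * 8 / 1048576             # 4000 Mbit total (binary base)
  secs = 0.448 + 0.947 + 0.889 + 0.419 + 0.453   # 3.156 s total
  printf "%.2f Mbit/s\n", mbit / secs
}'
# prints 1267.43 Mbit/s
```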
Upload
Speedtest results
Upload speed
439.42 Mbit/s
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Choopa ...
Selecting 15 servers that are not too close:
  1. Neon IT Kevin Wendel (Riedstadt) [34.80 km]: 6.704 ms
  2. PfalzKom (Ludwigshafen) [73.09 km]: 3.929 ms
  3. TWL-KOM (Ludwigshafen) [73.09 km]: 3.484 ms
  4. KEVAG Telekom GmbH (Koblenz) [83.82 km]: 3.019 ms
  5. WIRCON GmbH (Waghaeusel) [97.06 km]: 10.959 ms
  6. iWelt AG (Eibelstadt) [101.96 km]: 459.86 ms
  7. bc-networks (Ludwigsburg) [140.04 km]: 6.579 ms
  8. hotspot.koeln (Cologne) [154.37 km]: 4.866 ms
  9. NetCologne (Cologne) [154.37 km]: 5.722 ms
  10. KUES DATA GmbH (Losheim am See) [156.23 km]: 6.013 ms
  11. intersaar GmbH (Saarbrucken) [158.07 km]: 5.096 ms
  12. suec//dacor GmbH (Coburg) [161.42 km]: 6.825 ms
  13. inexio (Saarlouis) [166.58 km]: 5.462 ms
  14. Newone (Ilmenau) [168.61 km]: 8.142 ms
  15. LWLcom GmbH (Dusseldorf) [183.75 km]: 9.171 ms
Testing upload speeds
  1. Neon IT Kevin Wendel (Riedstadt):            ......................... 82.44 Mbit/s
  2. PfalzKom (Ludwigshafen):                     ......................... 410.82 Mbit/s
  3. TWL-KOM (Ludwigshafen):                      ......................... 470.85 Mbit/s
  4. KEVAG Telekom GmbH (Koblenz):                ......................... 581.05 Mbit/s
  5. WIRCON GmbH (Waghaeusel):                    ......................... 368.29 Mbit/s
  6. iWelt AG (Eibelstadt):                       ......................... 582.93 Mbit/s
  7. bc-networks (Ludwigsburg):                   ......................... 483.64 Mbit/s
  8. hotspot.koeln (Cologne):                     ......................... 555.48 Mbit/s
  9. NetCologne (Cologne):                        ......................... 538.18 Mbit/s
  10. KUES DATA GmbH (Losheim am See):            ......................... 310.90 Mbit/s
  11. intersaar GmbH (Saarbrucken):               ......................... 305.23 Mbit/s
  12. suec//dacor GmbH (Coburg):                  ......................... 429.36 Mbit/s
  13. inexio (Saarlouis):                         ......................... 473.53 Mbit/s
  14. Newone (Ilmenau):                           ......................... 305.59 Mbit/s
  15. LWLcom GmbH (Dusseldorf):                   ......................... 335.98 Mbit/s
Average upload speed is 415.62 Mbit/s
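Note that the headline "Upload speed" of 439.42 Mbit/s differs from the 415.62 Mbit/s average printed here. The numbers are consistent with the summary dropping the slowest server (Neon IT at 82.44 Mbit/s) — an assumption, but the arithmetic matches exactly:

```shell
# Mean upload speed across all 15 servers, and with the slowest excluded
awk 'BEGIN {
  n = split("82.44 410.82 470.85 581.05 368.29 582.93 483.64 555.48 538.18 310.90 305.23 429.36 473.53 305.59 335.98", s, " ")
  sum = 0; min = s[1]
  for (i = 1; i <= n; i++) { sum += s[i]; if (s[i] < min) min = s[i] }
  printf "all %d servers:    %.2f Mbit/s\n", n, sum / n              # 415.62
  printf "slowest excluded: %.2f Mbit/s\n", (sum - min) / (n - 1)    # 439.42
}'
```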