Performance report #3024
Linode 1GB

Test results for Linode 1GB at Linode


Server specs:


Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
989 MB RAM / 20 GB disk space
Ubuntu 16.04 Xenial
Fremont, United States

Benchmark results summary:


UnixBench - 1517.0
Disk Read - 78 MB/s
Disk Write - 443 MB/s
Bandwidth - 851.06 Mbit/s

More: https://serverscope.io/trials/XEwv
Server Specs
CPU
Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
CPU Cores
1 × 2500 MHz
RAM
989 MB
Disk
20 GB
OS
Ubuntu 16.04 Xenial
Location
Fremont, United States
Benchmark summary
UnixBench
1517.0
Disk Read
78 MB/s
Disk Write
443 MB/s
Bandwidth
851.06 Mbit/s
Speedtest
383.99 Mbit/s
Graph analysis
UnixBench Score: 1517.0 (one CPU)
Raw Output
gcc -o pgms/arithoh -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Darithoh src/arith.c 
gcc -o pgms/register -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum="register int" src/arith.c 
gcc -o pgms/short -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=short src/arith.c 
gcc -o pgms/int -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=int src/arith.c 
gcc -o pgms/long -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=long src/arith.c 
gcc -o pgms/float -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=float src/arith.c 
gcc -o pgms/double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=double src/arith.c 
gcc -o pgms/hanoi -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/hanoi.c 
gcc -o pgms/syscall -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/syscall.c 
gcc -o pgms/context1 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/context1.c 
gcc -o pgms/pipe -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/pipe.c 
src/pipe.c: In function ‘main’:
src/pipe.c:52:2: warning: ignoring return value of ‘pipe’, declared with attribute warn_unused_result [-Wunused-result]
  pipe(pvec);
  ^
gcc -o pgms/spawn -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/spawn.c 
gcc -o pgms/execl -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/execl.c 
In file included from src/execl.c:34:0:
src/big.c: In function ‘dummy’:
src/big.c:109:5: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
     freopen("masterlog.00", "a", stderr);
     ^
src/big.c:197:6: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
      freopen(logname, "w", stderr);
      ^
src/big.c:221:3: warning: ignoring return value of ‘dup’, declared with attribute warn_unused_result [-Wunused-result]
   dup(pvec[0]);
   ^
src/big.c:225:6: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
      freopen(logname, "w", stderr);
      ^
src/big.c:318:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
    write(fcopy, cp->line, p - cp->line + 1);
    ^
gcc -o pgms/dhry2 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/dhry2reg -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= -DREG=register ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/looper -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/looper.c 
gcc -o pgms/fstime -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/fstime.c 
gcc -o pgms/whetstone-double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DDP -DUNIX -DUNIXBENCH src/whets.c -lm
make all
make[1]: Entering directory '/root/serverscope-Twzhwo/byte-unixbench/UnixBench'
make distr
make[2]: Entering directory '/root/serverscope-Twzhwo/byte-unixbench/UnixBench'
Checking distribution of files
./pgms  exists
./src  exists
./testdir  exists
make[2]: Leaving directory '/root/serverscope-Twzhwo/byte-unixbench/UnixBench'
make programs
make[2]: Entering directory '/root/serverscope-Twzhwo/byte-unixbench/UnixBench'
make[2]: Nothing to be done for 'programs'.
make[2]: Leaving directory '/root/serverscope-Twzhwo/byte-unixbench/UnixBench'
make[1]: Leaving directory '/root/serverscope-Twzhwo/byte-unixbench/UnixBench'
sh: 1: 3dinfo: not found

   #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
   #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
   #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
   #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
   #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
    ####   #    #  #  #    #          #####   ######  #    #   ####   #    #

   Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

   Multi-CPU version                  Version 5 revisions by Ian Smith,
                                      Sunnyvale, CA, USA
   January 13, 2011                   johantheghost at yahoo period com


1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: localhost: GNU/Linux
   OS: GNU/Linux -- 4.9.7-x86_64-linode80 -- #2 SMP Thu Feb 2 15:43:55 EST 2017
   Machine: x86_64 (x86_64)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   CPU 0: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (5001.3 bogomips)
          x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
   20:06:50 up 35 min,  1 user,  load average: 8.71, 4.89, 2.07; runlevel 2017-04-05

------------------------------------------------------------------------
Benchmark Run: Wed Apr 05 2017 20:06:50 - 20:35:01
1 CPU in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       28527479.5 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     3097.5 MWIPS (9.9 s, 7 samples)
Execl Throughput                               4582.4 lps   (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks       1039652.9 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          316419.6 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks       1523262.5 KBps  (30.0 s, 2 samples)
Pipe Throughput                             2387514.7 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 292874.6 lps   (10.0 s, 7 samples)
Process Creation                              11169.1 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   7474.5 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    938.0 lpm   (60.0 s, 2 samples)
System Call Overhead                        3355147.0 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   28527479.5   2444.5
Double-Precision Whetstone                       55.0       3097.5    563.2
Execl Throughput                                 43.0       4582.4   1065.7
File Copy 1024 bufsize 2000 maxblocks          3960.0    1039652.9   2625.4
File Copy 256 bufsize 500 maxblocks            1655.0     316419.6   1911.9
File Copy 4096 bufsize 8000 maxblocks          5800.0    1523262.5   2626.3
Pipe Throughput                               12440.0    2387514.7   1919.2
Pipe-based Context Switching                   4000.0     292874.6    732.2
Process Creation                                126.0      11169.1    886.4
Shell Scripts (1 concurrent)                     42.4       7474.5   1762.9
Shell Scripts (8 concurrent)                      6.0        938.0   1563.3
System Call Overhead                          15000.0    3355147.0   2236.8
                                                                   ========
System Benchmarks Index Score                                        1517.0
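The summary score in the last row is the geometric mean of the twelve per-test index values, where each index is 10 × result / baseline. A quick Python sketch reproducing the figures from the table above:

```python
from math import prod

# Each per-test index is 10 * result / baseline, e.g. for Dhrystone:
dhrystone_index = 28527479.5 / 116700.0 * 10
print(round(dhrystone_index, 1))  # 2444.5, as in the table

# The overall score is the geometric mean of the twelve index values.
indices = [2444.5, 563.2, 1065.7, 2625.4, 1911.9, 2626.3,
           1919.2, 732.2, 886.4, 1762.9, 1563.3, 2236.8]
score = prod(indices) ** (1 / len(indices))
print(round(score))  # ~1517, matching the reported score
```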

Hard drive
dd
dd if=/dev/zero of=benchmark bs=64K count=32K conv=fdatasync
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 4.26562 s, 503 MB/s


dd if=/dev/zero of=benchmark bs=1M count=2048 conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.50138 s, 613 MB/s
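dd's MB/s figure is decimal: bytes copied divided by elapsed seconds, divided by 10⁶. A sketch checking both runs above:

```python
# Throughput as dd reports it: bytes copied / elapsed seconds, in decimal MB/s.
def dd_mb_per_s(nbytes: int, seconds: float) -> float:
    return nbytes / seconds / 1e6

print(round(dd_mb_per_s(2147483648, 4.26562)))  # 503 (64K-block run)
print(round(dd_mb_per_s(2147483648, 3.50138)))  # 613 (1M-block run)
```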


FIO random read
Performance
78.55 MB/s
IOPS
20109
./fio --time_based --name=benchmark --size=256M --runtime=60 --randrepeat=1 --iodepth=32 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)
benchmark: Laying out IO file(s) (1 file(s) / 256MB)

benchmark: (groupid=0, jobs=8): err= 0: pid=27862: Wed Apr  5 20:03:46 2017
  read : io=4713.3MB, bw=80438KB/s, iops=20109, runt= 60001msec
    clat (usec): min=1, max=101158, avg=360.85, stdev=835.91
     lat (usec): min=1, max=101158, avg=364.69, stdev=838.15
    clat percentiles (usec):
     |  1.00th=[    3],  5.00th=[   80], 10.00th=[  102], 20.00th=[  131],
     | 30.00th=[  155], 40.00th=[  185], 50.00th=[  223], 60.00th=[  278],
     | 70.00th=[  366], 80.00th=[  506], 90.00th=[  756], 95.00th=[ 1032],
     | 99.00th=[ 1912], 99.50th=[ 2448], 99.90th=[ 4080], 99.95th=[ 5152],
     | 99.99th=[14912]
    bw (KB  /s): min= 5976, max=18008, per=12.51%, avg=10061.94, stdev=1269.18
    lat (usec) : 2=0.03%, 4=2.40%, 10=1.01%, 20=0.07%, 50=0.03%
    lat (usec) : 100=5.68%, 250=46.10%, 500=24.44%, 750=10.08%, 1000=4.84%
    lat (msec) : 2=4.45%, 4=0.77%, 10=0.09%, 20=0.01%, 50=0.01%
    lat (msec) : 100=0.01%, 250=0.01%
  cpu          : usr=1.20%, sys=8.55%, ctx=1335025, majf=0, minf=82
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1206592/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=4713.3MB, aggrb=80438KB/s, minb=80438KB/s, maxb=80438KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
  sda: ios=1158837/6, merge=0/3, ticks=164823/0, in_queue=164123, util=86.00%
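fio reports bandwidth in KiB/s; with `--blocksize=4k`, each I/O transfers 4 KiB, so IOPS is simply bandwidth / 4. The headline "78.55 MB/s" figure appears to be the KiB/s value divided by 1024 (strictly MiB/s). A sketch under that assumption:

```python
# fio reports bandwidth in KiB/s ("bw=80438KB/s" in the run above).
bw_kib = 80438

# With a 4 KiB block size, IOPS = bandwidth / 4 (truncated, as fio reports it).
iops = bw_kib // 4

# The headline figure divides by 1024, i.e. it is really MiB/s.
mib_s = bw_kib / 1024

print(iops)             # 20109, matching the reported IOPS
print(round(mib_s, 2))  # 78.55, matching the headline figure
```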
FIO random direct read
Performance
91.33 MB/s
IOPS
23381
./fio --time_based --name=benchmark --size=256M --runtime=60 --randrepeat=1 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes

benchmark: (groupid=0, jobs=8): err= 0: pid=27872: Wed Apr  5 20:04:47 2017
  read : io=5480.3MB, bw=93526KB/s, iops=23381, runt= 60002msec
    clat (usec): min=39, max=102060, avg=306.81, stdev=733.27
     lat (usec): min=39, max=102061, avg=310.09, stdev=735.59
    clat percentiles (usec):
     |  1.00th=[   70],  5.00th=[   89], 10.00th=[  103], 20.00th=[  124],
     | 30.00th=[  143], 40.00th=[  163], 50.00th=[  189], 60.00th=[  227],
     | 70.00th=[  286], 80.00th=[  394], 90.00th=[  628], 95.00th=[  876],
     | 99.00th=[ 1640], 99.50th=[ 2064], 99.90th=[ 3664], 99.95th=[ 4576],
     | 99.99th=[11200]
    bw (KB  /s): min= 5052, max=16296, per=12.50%, avg=11694.23, stdev=1642.67
    lat (usec) : 50=0.01%, 100=8.63%, 250=56.05%, 500=20.76%, 750=7.50%
    lat (usec) : 1000=3.33%
    lat (msec) : 2=3.18%, 4=0.47%, 10=0.07%, 20=0.01%, 50=0.01%
    lat (msec) : 100=0.01%, 250=0.01%
  cpu          : usr=1.40%, sys=8.65%, ctx=1603281, majf=0, minf=82
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1402944/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=5480.3MB, aggrb=93526KB/s, minb=93526KB/s, maxb=93526KB/s, mint=60002msec, maxt=60002msec

Disk stats (read/write):
  sda: ios=1401095/0, merge=0/0, ticks=180340/0, in_queue=179653, util=87.29%
FIO random write
Performance
443.87 MB/s
IOPS
113629
./fio --time_based --name=benchmark --size=256M --runtime=60 --filename=benchmark --randrepeat=1 --iodepth=32 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes

benchmark: (groupid=0, jobs=8): err= 0: pid=27895: Wed Apr  5 20:06:47 2017
  write: io=26636MB, bw=454518KB/s, iops=113629, runt= 60010msec
    clat (usec): min=1, max=96626, avg=61.05, stdev=1188.06
     lat (usec): min=1, max=96627, avg=62.55, stdev=1220.86
    clat percentiles (usec):
     |  1.00th=[    2],  5.00th=[    2], 10.00th=[    2], 20.00th=[    2],
     | 30.00th=[    2], 40.00th=[    2], 50.00th=[    3], 60.00th=[    3],
     | 70.00th=[    3], 80.00th=[    3], 90.00th=[    4], 95.00th=[    4],
     | 99.00th=[    8], 99.50th=[   21], 99.90th=[22912], 99.95th=[28800],
     | 99.99th=[38656]
    bw (KB  /s): min=    4, max=80603, per=12.52%, avg=56917.56, stdev=9905.65
    lat (usec) : 2=0.32%, 4=84.61%, 10=14.14%, 20=0.32%, 50=0.25%
    lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.05%, 20=0.11%, 50=0.13%
    lat (msec) : 100=0.01%
  cpu          : usr=1.38%, sys=5.10%, ctx=44693, majf=0, minf=90
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=6818909/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=26636MB, aggrb=454518KB/s, minb=454518KB/s, maxb=454518KB/s, mint=60010msec, maxt=60010msec

Disk stats (read/write):
  sda: ios=3/2143687, merge=0/215, ticks=0/2174186, in_queue=2167450, util=91.90%
FIO random direct write
Performance
30.94 MB/s
IOPS
7920
./fio --time_based --name=benchmark --size=256M --runtime=60 --filename=benchmark --randrepeat=1 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=32
...
fio-2.8
Starting 8 processes
benchmark: Laying out IO file(s) (1 file(s) / 256MB)

benchmark: (groupid=0, jobs=8): err= 0: pid=27882: Wed Apr  5 20:05:47 2017
  write: io=1856.4MB, bw=31681KB/s, iops=7920, runt= 60003msec
    clat (usec): min=59, max=106766, avg=1005.32, stdev=5387.37
     lat (usec): min=60, max=106767, avg=1005.86, stdev=5387.40
    clat percentiles (usec):
     |  1.00th=[   67],  5.00th=[   71], 10.00th=[   76], 20.00th=[   83],
     | 30.00th=[   91], 40.00th=[   97], 50.00th=[  104], 60.00th=[  112],
     | 70.00th=[  122], 80.00th=[  137], 90.00th=[  175], 95.00th=[  262],
     | 99.00th=[32640], 99.50th=[35584], 99.90th=[45312], 99.95th=[49920],
     | 99.99th=[63744]
    bw (KB  /s): min= 1563, max= 6936, per=12.51%, avg=3963.75, stdev=853.13
    lat (usec) : 100=43.18%, 250=51.58%, 500=1.82%, 750=0.28%, 1000=0.14%
    lat (msec) : 2=0.18%, 4=0.09%, 10=0.02%, 20=0.01%, 50=2.65%
    lat (msec) : 100=0.05%, 250=0.01%
  cpu          : usr=0.68%, sys=2.83%, ctx=963312, majf=0, minf=89
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=475232/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=1856.4MB, aggrb=31680KB/s, minb=31680KB/s, maxb=31680KB/s, mint=60003msec, maxt=60003msec

Disk stats (read/write):
  sda: ios=114/474917, merge=0/1829, ticks=4287/41813, in_queue=45940, util=64.10%
Download benchmark results
Download speed
851.06 Mbit/s
Downloaded 104857600 bytes in 1.364 sec
Downloaded 104857600 bytes in 0.680 sec
Downloaded 104857600 bytes in 0.858 sec
Downloaded 104857600 bytes in 0.564 sec
Downloaded 104857600 bytes in 1.234 sec
Finished! Average download speed is 851.06 Mbit/s
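Each run downloads 100 MiB, so per-run throughput in Mbit/s is bytes × 8 / seconds / 10⁶. A sketch for the runs logged above (how the tool aggregates these into its 851.06 Mbit/s average is not shown in this excerpt):

```python
# Per-run download throughput: bytes * 8 bits / elapsed seconds, in Mbit/s.
NBYTES = 104857600  # 100 MiB per run, per the log lines above

def mbit_per_s(seconds: float) -> float:
    return NBYTES * 8 / seconds / 1e6

for t in (1.364, 0.680, 0.858, 0.564, 1.234):
    print(round(mbit_per_s(t), 1))
```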
Speedtest results
Upload speed
383.99 Mbit/s
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Linode ...
Selecting 15 servers that are not too close:
  1. Sonic.net, Inc (San Jose, CA) [30.81 km]: 63.156 ms
  2. Speedtest.net (San Jose, CA) [30.81 km]: 1800000.0 ms
  3. Sneaker Server (San Jose, CA) [30.81 km]: 22.887 ms
  4. DNASOLES (San Jose, CA) [30.81 km]: 56.659 ms
  5. Speedtest.net (San Jose, CA) [30.81 km]: 1800000.0 ms
  6. Fastmetrics Inc. (San Francisco, CA) [44.76 km]: 44.839 ms
  7. Unwired (San Francisco, CA) [44.76 km]: 22.178 ms
  8. AT&T (San Francisco, CA) [44.76 km]: 282.761 ms
  9. Comcast (San Francisco, CA) [44.76 km]: 281.127 ms
  10. Monkey Brains (San Francisco, CA) [44.76 km]: 35.758 ms
  11. Race Communications (San Francisco, CA) [44.76 km]: 78.754 ms
  12. Lude.Me (San Francisco, CA) [44.76 km]: 35.573 ms
  13. Cruzio (Santa Cruz, CA) [66.28 km]: 22.626 ms
  14. Velociter Wireless (Stockton, CA) [74.76 km]: 61.103 ms
  15. Ayera Technologies, Inc. (Modesto, CA) [87.70 km]: 39.516 ms
Testing upload speeds
  1. Sonic.net, Inc (San Jose, CA):               ......................... 435.99 Mbit/s
  2. Speedtest.net (San Jose, CA):                ......................... 199.21 Mbit/s
  3. Sneaker Server (San Jose, CA):               ......................... 319.59 Mbit/s
  4. DNASOLES (San Jose, CA):                     ......................... 453.78 Mbit/s
  5. Speedtest.net (San Jose, CA):                ......................... 372.20 Mbit/s
  6. Fastmetrics Inc. (San Francisco, CA):        ......................... 544.57 Mbit/s
  7. Unwired (San Francisco, CA):                 ......................... 523.92 Mbit/s
  8. AT&T (San Francisco, CA):                    ......................... 517.21 Mbit/s
  9. Comcast (San Francisco, CA):                 ......................... 542.30 Mbit/s
  10. Monkey Brains (San Francisco, CA):          ......................... 123.36 Mbit/s
  11. Race Communications (San Francisco, CA):    ......................... 411.48 Mbit/s
  12. Lude.Me (San Francisco, CA):                ......................... 264.96 Mbit/s
  13. Cruzio (Santa Cruz, CA):                    ......................... 73.70 Mbit/s
  14. Velociter Wireless (Stockton, CA):          ......................... 192.45 Mbit/s
  15. Ayera Technologies, Inc. (Modesto, CA):     ......................... 214.18 Mbit/s
Average upload speed is 345.93 Mbit/s
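The closing 345.93 Mbit/s figure is the arithmetic mean of the fifteen per-server results, which is easy to verify:

```python
# Per-server upload results from the fifteen test runs above (Mbit/s).
speeds = [435.99, 199.21, 319.59, 453.78, 372.20, 544.57, 523.92, 517.21,
          542.30, 123.36, 411.48, 264.96, 73.70, 192.45, 214.18]

avg = sum(speeds) / len(speeds)
print(round(avg, 2))  # 345.93, matching the reported average
```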