Benchmark report #1277
Scaleway – VC1S

Test results for VC1S at Scaleway


Server specs:


2 × Intel(R) Atom(TM) CPU C2750 @ 2.40GHz
2 GB RAM / 48 GB disk space
Ubuntu 16.04 Xenial
Southend-on-Sea, United Kingdom

Benchmark results summary:


Disk Read - 101 MB/s
Disk Write - 325 MB/s
Bandwidth - 734.75 Mbit/s

More: https://serverscope.io/trials/PoA
Server Specs
CPU: 2 × Intel(R) Atom(TM) CPU C2750 @ 2.40GHz
CPU Cores: 1 × 2394 MHz
RAM: 2 GB
Disk: 48 GB
OS: Ubuntu 16.04 Xenial
Location: Southend-on-Sea, United Kingdom
Benchmark summary
UnixBench: N/A
Disk Read: 101 MB/s
Disk Write: 325 MB/s
Bandwidth: 734.75 Mbit/s
Speedtest: 360.35 Mbit/s
Graph analysis
UnixBench Score: N/A
UnixBench (one CPU): N/A
(No composite index was produced because the benchmark aborted during the "2 x Process Creation" test; see the raw output below.)
Raw Output
gcc -o pgms/arithoh -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Darithoh src/arith.c 
gcc -o pgms/register -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum="register int" src/arith.c 
gcc -o pgms/short -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=short src/arith.c 
gcc -o pgms/int -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=int src/arith.c 
gcc -o pgms/long -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=long src/arith.c 
gcc -o pgms/float -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=float src/arith.c 
gcc -o pgms/double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -Ddatum=double src/arith.c 
gcc -o pgms/hanoi -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/hanoi.c 
gcc -o pgms/syscall -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/syscall.c 
gcc -o pgms/context1 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/context1.c 
gcc -o pgms/pipe -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/pipe.c 
src/pipe.c: In function ‘main’:
src/pipe.c:52:2: warning: ignoring return value of ‘pipe’, declared with attribute warn_unused_result [-Wunused-result]
  pipe(pvec);
  ^
gcc -o pgms/spawn -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/spawn.c 
gcc -o pgms/execl -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/execl.c 
In file included from src/execl.c:34:0:
src/big.c: In function ‘dummy’:
src/big.c:109:5: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
     freopen("masterlog.00", "a", stderr);
     ^
src/big.c:197:6: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
      freopen(logname, "w", stderr);
      ^
src/big.c:221:3: warning: ignoring return value of ‘dup’, declared with attribute warn_unused_result [-Wunused-result]
   dup(pvec[0]);
   ^
src/big.c:225:6: warning: ignoring return value of ‘freopen’, declared with attribute warn_unused_result [-Wunused-result]
      freopen(logname, "w", stderr);
      ^
src/big.c:318:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
    write(fcopy, cp->line, p - cp->line + 1);
    ^
gcc -o pgms/dhry2 -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/dhry2reg -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DHZ= -DREG=register ./src/dhry_1.c ./src/dhry_2.c
gcc -o pgms/looper -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/looper.c 
gcc -o pgms/fstime -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME src/fstime.c 
gcc -o pgms/whetstone-double -Wall -pedantic -O3 -ffast-math -march=native -mtune=native -I ./src -DTIME -DDP -DUNIX -DUNIXBENCH src/whets.c -lm
make all
make[1]: Entering directory "/tmp/serverscope-ed6h6S/byte-unixbench/UnixBench"
make distr
make[2]: Entering directory "/tmp/serverscope-ed6h6S/byte-unixbench/UnixBench"
Checking distribution of files
./pgms  exists
./src  exists
./testdir  exists
make[2]: Leaving directory "/tmp/serverscope-ed6h6S/byte-unixbench/UnixBench"
make programs
make[2]: Entering directory "/tmp/serverscope-ed6h6S/byte-unixbench/UnixBench"
make[2]: Nothing to be done for "programs".
make[2]: Leaving directory "/tmp/serverscope-ed6h6S/byte-unixbench/UnixBench"
make[1]: Leaving directory "/tmp/serverscope-ed6h6S/byte-unixbench/UnixBench"
sh: 1: 3dinfo: not found

   #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
   #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
   #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
   #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
   #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
    ####   #    #  #  #    #          #####   ######  #    #   ####   #    #

   Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

   Multi-CPU version                  Version 5 revisions by Ian Smith,
                                      Sunnyvale, CA, USA
   January 13, 2011                   johantheghost at yahoo period com


1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3

2 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

2 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

2 x Execl Throughput  1 2 3

2 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

2 x File Copy 256 bufsize 500 maxblocks  1 2 3

2 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

2 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

2 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

2 x Process Creation  1
**********************************************
Run: "Process Creation": Fork failed at iteration 2632
Reason: Resource temporarily unavailable; aborting
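
The fork() failure above ("Resource temporarily unavailable", i.e. EAGAIN) usually means a process or thread limit was hit rather than a disk or memory fault: either the per-user RLIMIT_NPROC cap or a kernel-wide task ceiling, which is plausible on a 2 GB VM while the multi-copy phase is spawning processes aggressively. As a generic diagnostic sketch (not part of the serverscope run), the relevant limits could be inspected like this:

  # per-user cap on simultaneous processes/threads in the current shell
  ulimit -u
  # kernel-wide ceilings on tasks and process IDs
  cat /proc/sys/kernel/threads-max
  cat /proc/sys/kernel/pid_max
  # rough count of tasks currently alive (one line per thread, plus a header)
  ps -eLf | wc -l

This aborted run is also why the UnixBench scores above are reported as N/A.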
dd
dd if=/dev/zero of=benchmark bs=64K count=64K conv=fdatasync
65536+0 records in
65536+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 22.61 s, 190 MB/s


dd if=/dev/zero of=benchmark bs=1M count=4096 conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 22.2997 s, 193 MB/s
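
Both dd runs write the same 4 GiB of zeros and differ only in block size (64 KiB vs 1 MiB). conv=fdatasync makes dd issue an fdatasync() before it reports, so the figures measure data actually flushed to disk rather than page-cache speed; the throughput is simply bytes over elapsed time: 4294967296 B / 22.61 s ≈ 190 MB/s and 4294967296 B / 22.2997 s ≈ 193 MB/s.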


FIO random read
Performance: 101.21 MB/s
IOPS: 25910
./fio --time_based --name=benchmark --size=511M --runtime=60 --ioengine=libaio --randrepeat=1 --iodepth=32 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.8
Starting 8 processes
benchmark: Laying out IO file(s) (1 file(s) / 511MB)
benchmark: Laying out IO file(s) (1 file(s) / 511MB)
benchmark: Laying out IO file(s) (1 file(s) / 511MB)
benchmark: Laying out IO file(s) (1 file(s) / 511MB)
benchmark: Laying out IO file(s) (1 file(s) / 511MB)
benchmark: Laying out IO file(s) (1 file(s) / 511MB)
benchmark: Laying out IO file(s) (1 file(s) / 511MB)
benchmark: Laying out IO file(s) (1 file(s) / 511MB)

benchmark: (groupid=0, jobs=8): err= 0: pid=13482: Fri Aug 19 23:58:25 2016
  read : io=6072.1MB, bw=103642KB/s, iops=25910, runt= 60001msec
    slat (usec): min=4, max=61231, avg=294.90, stdev=320.52
    clat (usec): min=6, max=73019, avg=9571.58, stdev=3052.62
     lat (usec): min=184, max=73496, avg=9868.80, stdev=3124.56
    clat percentiles (usec):
     |  1.00th=[ 5280],  5.00th=[ 6304], 10.00th=[ 6944], 20.00th=[ 7648],
     | 30.00th=[ 8160], 40.00th=[ 8640], 50.00th=[ 9152], 60.00th=[ 9664],
     | 70.00th=[10176], 80.00th=[10816], 90.00th=[12224], 95.00th=[14016],
     | 99.00th=[22400], 99.50th=[27264], 99.90th=[35072], 99.95th=[39168],
     | 99.99th=[57600]
    bw (KB  /s): min= 6848, max=21160, per=12.51%, avg=12963.11, stdev=2216.39
    lat (usec) : 10=0.01%, 50=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
    lat (usec) : 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.03%, 10=67.44%, 20=31.17%, 50=1.34%
    lat (msec) : 100=0.01%
  cpu          : usr=3.73%, sys=10.14%, ctx=1531187, majf=0, minf=329
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=1554660/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=6072.1MB, aggrb=103642KB/s, minb=103642KB/s, maxb=103642KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
  vda: ios=1471749/11, merge=0/6, ticks=305850/30, in_queue=306060, util=99.74%
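
The headline read figure follows directly from the fio line above: 25910 IOPS × 4 KiB = 103,640 KiB/s ≈ 101.2 MiB/s, matching the 101.21 MB/s reported in the summary (so the report's "MB/s" values appear to be binary megabytes per second).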
FIO random direct read
Performance: 172.6 MB/s
IOPS: 44186
./fio --time_based --name=benchmark --size=511M --runtime=60 --ioengine=libaio --randrepeat=1 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.8
Starting 8 processes

benchmark: (groupid=0, jobs=8): err= 0: pid=13492: Fri Aug 19 23:59:26 2016
  read : io=10357MB, bw=176746KB/s, iops=44186, runt= 60006msec
    slat (usec): min=9, max=79682, avg=110.50, stdev=1016.68
    clat (usec): min=2, max=103754, avg=5637.85, stdev=7090.92
     lat (usec): min=74, max=110788, avg=5751.77, stdev=7161.51
    clat percentiles (usec):
     |  1.00th=[  394],  5.00th=[  596], 10.00th=[  748], 20.00th=[ 1032],
     | 30.00th=[ 1368], 40.00th=[ 1848], 50.00th=[ 2768], 60.00th=[ 4256],
     | 70.00th=[ 6176], 80.00th=[ 9024], 90.00th=[14016], 95.00th=[19840],
     | 99.00th=[34048], 99.50th=[40704], 99.90th=[52992], 99.95th=[59648],
     | 99.99th=[72192]
    bw (KB  /s): min= 5397, max=46570, per=12.52%, avg=22129.22, stdev=5217.63
    lat (usec) : 4=0.01%, 50=0.01%, 100=0.01%, 250=0.15%, 500=2.45%
    lat (usec) : 750=7.50%, 1000=8.97%
    lat (msec) : 2=23.01%, 4=16.55%, 10=24.18%, 20=12.28%, 50=4.75%
    lat (msec) : 100=0.15%, 250=0.01%
  cpu          : usr=6.22%, sys=15.29%, ctx=126022, majf=0, minf=349
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=2651452/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=10357MB, aggrb=176745KB/s, minb=176745KB/s, maxb=176745KB/s, mint=60006msec, maxt=60006msec

Disk stats (read/write):
  vda: ios=2650344/17, merge=0/16, ticks=3956930/70, in_queue=3963040, util=99.50%
FIO random write
Performance: 325.4 MB/s
IOPS: 83301
./fio --time_based --name=benchmark --size=511M --runtime=60 --filename=benchmark --ioengine=libaio --randrepeat=1 --iodepth=32 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.8
Starting 8 processes

benchmark: (groupid=0, jobs=8): err= 0: pid=13535: Sat Aug 20 00:01:28 2016
  write: io=19608MB, bw=333207KB/s, iops=83301, runt= 60258msec
    slat (usec): min=6, max=1376.9K, avg=82.52, stdev=2903.36
    clat (usec): min=8, max=1410.7K, avg=2978.42, stdev=17113.94
     lat (usec): min=385, max=1410.7K, avg=3063.17, stdev=17377.28
    clat percentiles (usec):
     |  1.00th=[  414],  5.00th=[  430], 10.00th=[  438], 20.00th=[  450],
     | 30.00th=[  458], 40.00th=[  470], 50.00th=[  482], 60.00th=[  510],
     | 70.00th=[  556], 80.00th=[  644], 90.00th=[ 1448], 95.00th=[20864],
     | 99.00th=[46336], 99.50th=[55552], 99.90th=[89600], 99.95th=[160768],
     | 99.99th=[774144]
    bw (KB  /s): min= 1502, max=97765, per=12.98%, avg=43241.66, stdev=12933.55
    lat (usec) : 10=0.01%, 20=0.01%, 500=57.04%, 750=27.55%, 1000=3.50%
    lat (msec) : 2=2.71%, 4=1.84%, 10=0.61%, 20=1.42%, 50=4.49%
    lat (msec) : 100=0.76%, 250=0.05%, 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=6.42%, sys=12.44%, ctx=46465, majf=0, minf=91
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=5019596/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=19608MB, aggrb=333206KB/s, minb=333206KB/s, maxb=333206KB/s, mint=60258msec, maxt=60258msec

Disk stats (read/write):
  vda: ios=0/795359, merge=0/134, ticks=0/4918990, in_queue=5058240, util=80.73%
FIO random direct write
Performance: 93.21 MB/s
IOPS: 23862
./fio --time_based --name=benchmark --size=511M --runtime=60 --filename=benchmark --ioengine=libaio --randrepeat=1 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting
benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.8
Starting 8 processes
benchmark: Laying out IO file(s) (1 file(s) / 511MB)

benchmark: (groupid=0, jobs=8): err= 0: pid=13509: Sat Aug 20 00:00:27 2016
  write: io=5675.5MB, bw=95450KB/s, iops=23862, runt= 60887msec
    slat (usec): min=12, max=1307.4K, avg=232.39, stdev=7032.82
    clat (usec): min=2, max=1328.1K, avg=10479.88, stdev=49105.84
     lat (usec): min=119, max=1331.9K, avg=10714.58, stdev=49613.37
    clat percentiles (usec):
     |  1.00th=[  692],  5.00th=[  868], 10.00th=[ 1064], 20.00th=[ 1752],
     | 30.00th=[ 3120], 40.00th=[ 4448], 50.00th=[ 5984], 60.00th=[ 8640],
     | 70.00th=[11072], 80.00th=[13760], 90.00th=[17536], 95.00th=[21632],
     | 99.00th=[34560], 99.50th=[45824], 99.90th=[1056768], 99.95th=[1204224],
     | 99.99th=[1286144]
    bw (KB  /s): min=    3, max=34184, per=14.31%, avg=13654.76, stdev=4603.57
    lat (usec) : 4=0.01%, 50=0.01%, 100=0.01%, 250=0.03%, 500=0.11%
    lat (usec) : 750=1.83%, 1000=6.45%
    lat (msec) : 2=13.91%, 4=14.21%, 10=29.04%, 20=27.85%, 50=6.12%
    lat (msec) : 100=0.18%, 250=0.06%, 500=0.01%, 750=0.02%, 1000=0.03%
    lat (msec) : 2000=0.16%
  cpu          : usr=2.77%, sys=9.15%, ctx=112143, majf=0, minf=92
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=1452913/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=5675.5MB, aggrb=95449KB/s, minb=95449KB/s, maxb=95449KB/s, mint=60887msec, maxt=60887msec

Disk stats (read/write):
  vda: ios=0/1452906, merge=0/2534, ticks=0/4582440, in_queue=4724140, util=98.30%
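
Worth noting when reading the summary: the "Disk Write" figure of 325 MB/s comes from the buffered random-write run (no --direct=1), while the O_DIRECT run above manages only 93.21 MB/s. Without --direct=1 the 4 KiB writes land in the page cache first, so the buffered number reflects cache and writeback behaviour as much as the underlying volume; the direct run, which bypasses the cache, is the more conservative estimate.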
Download benchmark results
Download speed: 734.75 Mbit/s
Downloaded 104857600 bytes in 1.181 sec
Downloaded 104857600 bytes in 1.173 sec
Downloaded 104857600 bytes in 0.997 sec
Downloaded 104857600 bytes in 0.954 sec
Downloaded 104857600 bytes in 1.139 sec
Finished! Average download speed is 734.75 Mbit/s
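
Each run downloads 104857600 bytes (100 MiB = 800 Mibit), so five runs move 4000 Mibit in 1.181 + 1.173 + 0.997 + 0.954 + 1.139 = 5.444 s; 4000 / 5.444 ≈ 734.75, i.e. the average is computed over total data and total time, with "Mbit" here meaning 2^20 bits.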
Speedtest results
Upload speed: 360.35 Mbit/s
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Scaleway ...
Selecting 15 servers that are not too close:
  1. iperf.fr (Rouen) [111.21 km]: 4.236 ms
  2. Heberg.fr (Amiens) [114.99 km]: 6.665 ms
  3. Ikoula (Reims) [131.50 km]: 9.318 ms
  4. Orange (Reims) [131.50 km]: 10.347 ms
  5. LaFibre.info (Douai) [176.53 km]: 4.158 ms
  6. Charlus (Valenciennes) [187.35 km]: 5.131 ms
  7. ATE (Villeneuve-d'Ascq) [204.64 km]: 10.299 ms
  8. Treudler FR (Roubaix) [213.32 km]: 10.112 ms
  9. Verelox (Roubaix) [213.32 km]: 12.889 ms
  10. ePlay TV (Roubaix) [213.32 km]: 17.086 ms
  11. Ultimate AIR Gamers (Roubaix) [213.32 km]: 261.473 ms
  12. myplex.org (Gravelines) [237.06 km]: 12.344 ms
  13. Alabar (Saint-Lo) [251.82 km]: 8.2 ms
  14. CloudConnX (Eastbourne) [258.84 km]: 29.337 ms
  15. Universite Catholique de Louvain (Louvain-La-Neuve) [259.09 km]: 24.091 ms
Testing upload speeds
  1. iperf.fr (Rouen):                            ......................... 480.26 Mbit/s
  2. Heberg.fr (Amiens):                          ......................... 396.93 Mbit/s
  3. Ikoula (Reims):                              ......................... 421.26 Mbit/s
  4. Orange (Reims):                              ......................... 287.93 Mbit/s
  5. LaFibre.info (Douai):                        ......................... 500.03 Mbit/s
  6. Charlus (Valenciennes):                      ......................... 411.61 Mbit/s
  7. ATE (Villeneuve-d'Ascq):                     ......................... 400.90 Mbit/s
  8. Treudler FR (Roubaix):                       ......................... 355.60 Mbit/s
  9. Verelox (Roubaix):                           ......................... 270.87 Mbit/s
  10. ePlay TV (Roubaix):                         ......................... 97.89 Mbit/s
  11. Ultimate AIR Gamers (Roubaix):              ......................... 315.23 Mbit/s
  12. myplex.org (Gravelines):                    ......................... 45.58 Mbit/s
  13. Alabar (Saint-Lo):                          ......................... 394.25 Mbit/s
  14. CloudConnX (Eastbourne):                    ......................... 226.59 Mbit/s
  15. Universite Catholique de Louvain (Louvain-La-Neuve):......................... 223.13 Mbit/s
Average upload speed is 321.87 Mbit/s
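
321.87 Mbit/s is the plain mean of all 15 per-server results (they sum to 4828.06 Mbit/s; 4828.06 / 15 = 321.87). The 360.35 Mbit/s shown in the summary matches the mean with the two slowest servers excluded (dropping myplex.org at 45.58 and ePlay TV at 97.89 Mbit/s leaves 4684.59 / 13 = 360.35), so the headline figure appears to discount outliers.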