Delphi Snappy64 - The fast compressor/decompressor used inside Google

emailx45

Local
Joined: 5 May 2008
Messages: 3,571
Reactions: 2,439
Credits: 574

A fast compressor/decompressor


8 Aug 2016, from the Google project page: Snappy is a compression/decompression library. It does not aim for maximum compression, or for compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more.

Snappy is widely used inside Google, in everything from BigTable and MapReduce to our internal RPC systems; it is also useful for higher-level framing and encapsulation of data, e.g. for transporting compressed data across HTTP in a streaming fashion.
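Typical Delphi usage of such a binding could look like the sketch below. This is an illustration only: the unit name `Snappy` and the `SnappyCompress`/`SnappyUncompress` functions over `TBytes` are hypothetical placeholders; check the actual interface and comments shipped with the package.

```pascal
// Sketch only: "Snappy", SnappyCompress and SnappyUncompress are assumed
// names for the wrapper API, not confirmed by the package itself.
uses
  System.SysUtils, System.Classes, Snappy;

procedure RoundTrip(Input: TMemoryStream);
var
  Plain, Compressed, Restored: TBytes;
begin
  SetLength(Plain, Input.Size);
  Input.Position := 0;
  if Length(Plain) > 0 then
    Input.ReadBuffer(Plain[0], Length(Plain));

  Compressed := SnappyCompress(Plain);      // hypothetical call
  Restored   := SnappyUncompress(Compressed); // hypothetical call

  // Sanity check: decompression must restore the original bytes.
  Assert(Length(Restored) = Length(Plain));

  // Report the space saved, as in the benchmark output below.
  Writeln(Format('ratio=%d%%',
    [100 - (Length(Compressed) * 100) div Length(Plain)]));
end;
```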


These C builds are compiled with LLVM 3.8.1 Clang for WIN32/WIN64, BCCIOSARM 7.20 for iOS, and BCCAARM 7.20 for Android.
A basic sample is provided, tested with DCC32/DCC64/DCCIOSARM/DCCAARM 31.0 and FPC 3.0.

HTTP JSON 50 KB TMemoryStream file test
Intel Core i7 2.6 GHz, Windows 10 Pro

Compression ratio 6x

Snappy 64bit WIN64
compress in 237.33ms, ratio=85%, 1.6 GB/s
uncompress in 92.43ms, 4.3 GB/s

Snappy 32bit WIN32
compress in 269.96ms, ratio=85%, 1.4 GB/s
uncompress in 135.88ms, 2.9 GB/s

Zlib fastest mode 64bit WIN64
compress in 1.77s, ratio=89%, 231.7 MB/s
uncompress in 961.10ms, 427.6 MB/s

Zlib fastest mode 32bit WIN32
compress in 2.12s, ratio=89%, 193.6 MB/s
uncompress in 1.43s, 286.1 MB/s

Using TParallel.For from System.Threading, WIN64
Snappy compress in 54.94ms, ratio=85%, 7.3 GB/s
Snappy uncompress in 46.05ms, 8.7 GB/s
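The parallel numbers above suggest splitting the input into independent chunks and compressing each on its own task via `TParallel.For` (a real `System.Threading` API). A minimal sketch, again assuming the hypothetical `SnappyCompress` wrapper over `TBytes`; note that independently compressed chunks need per-chunk framing (lengths) so they can be decompressed later.

```pascal
// Sketch: chunked parallel compression with TParallel.For.
// SnappyCompress is an assumed wrapper function, not the package's
// confirmed API; each Chunks[I] must be framed with its length.
uses
  System.SysUtils, System.Threading, Snappy;

procedure ParallelCompress(const Plain: TBytes; ChunkSize: Integer;
  out Chunks: TArray<TBytes>);
var
  Count: Integer;
begin
  Count := (Length(Plain) + ChunkSize - 1) div ChunkSize;
  SetLength(Chunks, Count);
  TParallel.For(0, Count - 1,
    procedure(I: Integer)
    var
      Len: Integer;
      Part: TBytes;
    begin
      // Last chunk may be shorter than ChunkSize.
      Len := Length(Plain) - I * ChunkSize;
      if Len > ChunkSize then
        Len := ChunkSize;
      Part := Copy(Plain, I * ChunkSize, Len);
      Chunks[I] := SnappyCompress(Part); // hypothetical call
    end);
end;
```

The speedup comes from the chunks being fully independent; the trade-off is a slightly worse ratio, since matches cannot cross chunk boundaries.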

Link to Delphi Snappy64 v.1.1.3 (stable)
[Link hidden: log in or register to view] - 83 KBytes

Polly (dev): WIN64 static object built with Clang 4.0 and Polly, a high-level polyhedral compiler for loop and data-locality optimizations (tiling, vectorization, and parallelization).
(In my single-threaded tests the speed gain on large data is negligible; consider this a test build only. I'm waiting for a final release.)
[Link hidden: log in or register to view] - 4 KBytes

Feel free to test it and/or enhance it.
Please check internal comments, thank you.

source: [Link hidden: log in or register to view] - Roberto Della Pasqua
 
Last edited by a moderator:

HatM

Tourist
Joined: 8 Dec 2014
Messages: 4
Reactions: 1
Credits: 8
I have done a few tests and Snappy was the fastest, with zlib and SynLZ included.
But it also has the lowest compression ratio. In my case the connection bandwidth is low, and in that model zlib with maximum compression gave the best overall performance, since network latency was the main bottleneck in my scenario.