The indices of the array are the same as the indices of the source strings. We also read the link field of the string at hand and copy the link value into a separate index data structure; again, the indices of the strings and links correspond. These strings typically cannot produce a reasonable solution.
For the larger sizes, the insight is the following. The input data should be indistinguishable from random data, given that blake2b is a good hash function. It is highly improbable that more than 13 strings collide in a particular segment, yet it happens quite often in practice when duplicate strings participate transitively in colliding strings. Filtering out collision groups larger than 13 rejects invalid solutions early, and it is a very cheap check. While eliminating collision groups of bad sizes, we compute the cumulative sum of the number of strings participating in the surviving collision groups.
In the end, each collision-group segment value has a contiguous range of indices in the output Collisions array. The ranges of different collision groups are, of course, non-overlapping. After processing the complete TmpSeg array, we have the indices of all colliding strings sitting next to each other in one array.
We know exactly where the indices for a particular collision group are and how big the group is. At this point, the groups are limited in size and complete. We generate all output strings for the next step together. While generating the new strings, we reduce one segment and write each string into a particular output bucket based on its new first segment.
We store the indices of the source strings in a pair-link field within the produced string. This link field is copied into a standalone index data structure in the next step, while computing the next histogram. When collisions for the 9th segment are found by the same approach described above, we do not produce any further strings; instead, we use the information stored in the pair-link objects to track down the source strings.
The translation is straightforward. The output is a list of indices. If the list contains any duplicate items, the solution candidate is rejected. Otherwise, the indices are reordered to conform to the Zcash algorithm binding and a solution is produced. It may seem that this algorithm does too much unnecessary work; there are reasons for this approach. In general, producing a fast solver requires many design decisions, and there is a strong tension between keeping the solution general and exploiting every available piece of information to be faster.
We tried to take the second path while allowing the code to stay general in cases where a modern compiler can reduce the general case to an efficient particular case at compile time. The solver has two main logical parts: a string generator producing blake2b hashes, and the collision-search part. The algorithm of the first part is exactly defined, so the only room for efficiency is reducing unnecessary work and using CPU power efficiently. It is a surprisingly large part of the whole computation.
It is worth making the hash computation as fast as possible. It is performed for each input block, one by one. The reference implementation of the blake2b hash function calls compress twice per hash result, even though the first block is provided only once at the beginning, for all output strings.
Unnecessary data copying is performed as well, which makes the computation even slower. This is independent of the ISA specialization used. The second input block is mostly empty: only the first 16 bytes are non-empty, and the rest is by definition set to 0.
Using this fact, we can simplify the implementation of the compress function, because we eliminate the loading and adding of most of the input data. Similarly, we do not need all of the result bytes, only 50 of 64, so we can eliminate further computation. All common implementations of the blake2b hash function are built to compute a single hash output for a single input.
Even when an implementation is specialized, it still computes one hash at a time, so we can choose a different vectorization strategy to exploit this fact. The basic idea is to use vector instructions to compute more than one hash function at the same time. AVX2 instructions operate on four 64-bit integer values at once, so we can compute 4 hashes in parallel. Another nice property is that the structure of the code stays the same as in the basic scalar implementation; the only difference is that each vector instruction processes more data.
The following explains the low-level optimization techniques applied, and how and why the code is shaped because of them. Since the strings must collide in a prescribed way and are treated as big endian, some extra work is needed for an efficient implementation. We want to read the first segment of a string really fast on little-endian machines, because it is a frequent operation in a hot loop.
So we preprocess the strings to reorder bits in the produced hashes. In the final form, only a single load is needed, with possibly one bitwise shift. The reordering is done only once, during initial string generation; specifically, the lower and higher 4 bits are swapped in every 3rd byte of every 5 bytes. The exact representation used by the original solver is not needed; each segment must only contain the same bits, even if reordered, as specified by the "algorithm binding".
There is an option for the solver to produce an "expanded" version of the strings so that each segment occupies exactly 3 bytes rather than only 20 bits. This eliminates the additional shift operation otherwise needed when accessing the first segment in every second algorithm step. It turns out that the additional memory causes the algorithm to run slower, which is not so surprising.
The reordering is done while copying the strings to their final position, to avoid an extra copy. It can be beneficial to switch to a simpler algorithm for the later steps. Some measurements show that when the strings. It is easy to find the best ones for one machine, or maybe one CPU, but that is usually not the desired result.
But it can be run on any platform capable of building the code (x considered mainly). Better support for older architectures can be added. Thanks to this integration, AVX1 machines can benefit from the fast blake2b as well. The fact that the intrinsics implementation for AVX2 is quite competitive, performance-wise, with manually written asm code is promising.
The amount of code for the intrinsics-based AVX2 batch implementation is roughly lines of code. Roughly the same code can be written for older CPUs to still benefit from the independent vectorization of 2 blake2b instances. The code needs to be properly cleaned up and documented.

On the Windows platform, the correct file extension is .bat. On Linux, it is .sh. Below is an example of a batch file that we use on one of our Windows-based mining rigs.
Everything is contained in a single line in the .bat file. Finally, and most importantly, one of our Zcash wallet addresses is specified so that the mined Zcash goes to the right place. Two minus signs indicate variables or settings that will be supplied to the miner when it starts. Whatever directly follows a minus sign and setting name is the value for the mining software to use.
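A hypothetical single-line example of such a file; the executable name and all flag names below are placeholders, since real flag names vary by mining software (check your miner's documentation), and the wallet address shown is a dummy:

```shell
# Hypothetical sketch only: "miner", "--server", "--port", and "--user" are
# placeholder names, not a real miner's flags. The whole command sits on one
# line in the .bat (Windows) or .sh (Linux) file.
miner --server eu1-zcash.flypool.org --port 3333 --user t1YourZcashAddress.rigName
```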
The settings not marked as optional are the bare minimum required to mine ZEC at flypool.
Features: Supports Windows and Linux, 64-bit only. Expected speed (stock card): Fiji 4GB: 7.
Version 0. It should now work with all Radeon cards. Any possibility of CUDA reaching these hashrates? I only have a GTX. Same, I'm too lazy to mess around. Amazing work; thanks again, optiminer. I can assure you that it contains no backdoors, trojans, or viruses. As said before, I have developed many miners already: one for HOdlcoin and also one for ZCash; you can see this on my GitHub account. I decided this time to go the dev-fee way, as donations have not worked well so far, and selling it privately is tedious work and, IMO, also less fair, as it would leave the field open to big miners only.
Windows version, please. Supports: Windows and Linux, 64-bit only. Radeon cards only.