Is it possible to have multiple algos, like DGB? Could Monero keep its large group of GPU miners and expand with RandomX to bring in a large number of CPU newbies? Also, if for some reason RandomX doesn't work out, we'd still have the current algo, which gets updated every 6 months. DGB borrowed from Huntercoin, which mines 2 algos, and from Myriad, which has 5 mining algos. Could Monero do the same?
> the guys over at Ryo are claiming their CN-GPU algo is ASIC resistant
Yes, and they may be right. A "GPU only" PoW algorithm (or one at least heavily leaning towards GPUs) has a different set of trade-offs than RandomX however, and judgement may depend on how you value them.
Ah yes, the "We are a small team of professionals and we like to move fast" people [https://medium.com/@crypto\_ryo/cryptonight-gpu-fpga-proof-pow-algorithm-based-on-floating-point-instructions-92524debf8e8](https://medium.com/@crypto_ryo/cryptonight-gpu-fpga-proof-pow-algorithm-based-on-floating-point-instructions-92524debf8e8) who take 4 releases to fix a memory leak bug [\[1\]](https://www.reddit.com/r/Monero/comments/ayj4qr/xmrstak_2100_full_cnr_support_and_graft/) [\[2\]](https://www.reddit.com/r/Monero/comments/azzoyg/xmrstak_2101_the_big_bug_fixing/) [\[3\]](https://www.reddit.com/r/Monero/comments/b1kyyf/xmrstak_2102_opencl_memory_leak_in_cnr/) [\[4\]](https://www.reddit.com/r/Monero/comments/b6pn7b/xmrstak_2103_simpler_setup_and_opencl_memory_leak/) [\[5\]](https://www.reddit.com/r/MoneroMining/comments/b8h3l7/xmrstak_2104_laborious_process_of_fixing_bugs_in/) ...
Any static algorithm can be efficiently hardwired into an ASIC and will beat all programmable hardware for both performance and efficiency. CN-GPU is no different, it has zero ASIC-resistance.
I've mentioned my solution multiple times... But the Bitcoin folk don't want ASIC resistance...
Hash the entire blockchain, not just the current block, and do it backwards to prevent caching the partial result.
The speed of the hash is irrelevant; bus speed will be the limiting factor. PCs already contain cutting-edge buses, so there's no practical way for custom hardware to get any huge advantage.
Edit: it also occurs to me that this is botnet resistant too. Internet connections are so slow compared to local bus speeds that no botnet could be fed the necessary data fast enough to validate even a single incoming block, let alone try multiple nonces for a potential mined block.
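To make the proposal concrete, here's a minimal sketch in Python of what "hash the whole chain, backwards, nonce first" could look like. SHA-256 is used purely as a stand-in hash, and the function and variable names are my own illustration, not anything specified above:

```python
import hashlib

def chain_pow_hash(nonce: bytes, chain_blocks: list) -> bytes:
    """Hash the nonce first, then every block of the chain from
    newest to oldest, so no prefix of the input is reusable
    across different nonces."""
    h = hashlib.sha256()
    h.update(nonce)                       # the changing part goes in first
    for block in reversed(chain_blocks):  # newest-to-oldest: "backwards"
        h.update(block)
    return h.digest()

# Changing only the nonce changes the hash state from the very first
# byte, so nothing computed for one nonce helps with the next one.
chain = [b"block-%d" % i for i in range(1000)]  # toy stand-in blocks
d1 = chain_pow_hash(b"nonce-1", chain)
d2 = chain_pow_hash(b"nonce-2", chain)
assert d1 != d2
```

The point of the ordering is visible in the loop: every byte of the chain has to flow through the hasher again for each nonce attempt, which is exactly what makes bus bandwidth, not hash throughput, the bottleneck.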
You can take a PC MOBO, put 256 GB of RAM, keep the entire blockchain in RAM, and put a fast and cheap Ryzen CPU to do the hashing. How's that box different from an ASIC? People who have regular computers with 32 GB of RAM cannot compete with these server-grade beasts, so mining hardware can hardly be commoditized like with GPUs.
Also: what's stopping mining farms from having ASICs for the pure PoW, and a few servers to do the hashing part while the ASICs do the number-crunching part?
PoW is nothing but number crunching. I don't understand what separation you're drawing between "hashing" and PoW.
For the first part... I'm not saying that people won't throw some money at servers, but that is still general-purpose computing. And that 256 GB will run out in the end. It also isn't multiple orders of magnitude faster, like ASICs are.
Anyway... Whatever, it was only my suggestion. It's not like I've written a research paper or anything.
In my proposal mining is done by hashing the entire chain, which implies you need the whole chain to hash.
Pools would be effectively pointless. Which is the idea: get back to fully decentralised mining.
Presently, yes it's not needed... But I'm not talking about presently.
OK, I see. I totally agree that ASICs, and particularly pooled mining, result in centralization in the PoW consensus layer, where pool admins have incredible amounts of power. This was demonstrated in the BCH hash war, and when an unknown miner tried to claim SegWit funds (a legitimate block). Apparently some devs worked with two pools to double-spend the block.
Mining pools tend to make miners NPCs, and ASICs make mining inaccessible, so together they're the worst combination. Have you heard of the BetterHash protocol, which allows miners to run full nodes and create the block themselves, but share the reward?
You're absolutely correct that the centralisation of power around pool admins is a point of failure; it's another good reason to dislike pools.
I've not heard of that, no. Thanks for the reference. As you describe it, it sounds useful. It would be nice, in fact, if something like it were built in. Then every contributed hash would earn exactly its share of the reward, so miners wouldn't have to worry about discontinuous income caused by bad luck.
I believe it's built using Rust. Stratum mining is also insecure, so if there are attacks on some pools, that might help BetterHash adoption. I think mining centralization is the elephant in the room; so far no coin has successfully addressed it.
Edit: I was wrong.
~~It's a cool idea, but it probably won't work. Hashing works by reading in chunks of data, doing some math, caching the result, and repeating until there are no more chunks; the cached result then becomes the actual result.~~
~~You don't necessarily need to have the entire file when generating a hash. Take a file A, split it into 2 overlapping pieces A1, A2. Compute the hash for A1, save the state of the hashing algorithm, resume on the overlapping part of A2. It's the same as hashing all of A.~~
~~I can't speak for every hashing algorithm, so maybe there are cases where I'm wrong. But this is why hashing algorithms are so fast & so powerful. We are able to hash files where the file is larger than the available system memory.~~
~~You can test this by hashing a large file over a network share & watching how much memory on your system is used.~~
~~Because of this, it's possible to precompute all but the last block & use that saved state as a computational starting point.~~
I know how hashing works, that's why I said do it backwards, nonce first.
You can't cache the internal hash state then because every state along the way is different for every new block and every new nonce.
(In reality it wouldn't have to be literally nonce first, since the nonce is part of the potential new block you're mining. That block is near enough to the start of the input that it doesn't really make much difference whether the nonce is literally the first bytes of a multi-gigabyte input or just somewhere in the first few thousand; there's still nothing worth caching.)
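For anyone who wants to see the caching argument concretely, here's a small hashlib demo (illustrative only; the chain is a dummy byte string). With the nonce last, the expensive prefix midstate can be computed once and copied per nonce; with the nonce first, every attempt must re-hash everything:

```python
import hashlib

chain = b"A" * 1_000_000   # stand-in for a large blockchain prefix

# Nonce LAST: the expensive prefix state is computed once, then
# cheaply copied for every candidate nonce -- the optimisation this
# proposal wants to deny to miners.
midstate = hashlib.sha256(chain)
cheap = [midstate.copy() for _ in range(3)]
for i, h in enumerate(cheap):
    h.update(b"nonce-%d" % i)           # only a few bytes per attempt

# Nonce FIRST: the internal state differs from byte one for every
# nonce, so each attempt must re-hash the entire chain.
expensive = [hashlib.sha256(b"nonce-%d" % i + chain) for i in range(3)]

# Both orderings produce valid, distinct digests; only the amount of
# reusable work differs between them.
assert cheap[0].digest() != cheap[1].digest()
assert expensive[0].digest() != expensive[1].digest()
```

The `midstate.copy()` trick is exactly why Bitcoin-style mining (where the nonce sits at the end of an 80-byte header) can cache partial SHA-256 state, and why putting the changing bytes first defeats it.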
>Hash the entire Blockchain not just the current block. And do it backwards to prevent caching the partial result.
Wouldn't the RAM/SSD have to read the entire chain every single time to do this? How long would it take?
Yep. That's the idea. It's that read/transfer that becomes the bottleneck. Who cares if your ASIC can do a trillion hashes a second if you can't get the data to it fast enough? The ASIC sits idle most of the time. In fact, because a PC hashes on the CPU, it would likely be faster than an ASIC, since getting the data to an ASIC requires an extra transfer (although I guess miners would throw RAM at the problem for a while; even that will be exhausted as the blockchain grows).
As long as the time to hash the whole thing is much less than the time between blocks, miners have time to hash the whole chain (nonce first, of course) multiple times with various nonces, and the difficulty adjustment algorithm takes care of making blocks come at the right rate.
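As a rough back-of-envelope check (all of these numbers are assumptions for illustration, not measurements), the per-pass time and the nonce budget per block would work out like this:

```python
# Back-of-envelope arithmetic for the bandwidth-bound PoW proposal.
chain_bytes  = 100e9   # assume a 100 GB chain held in RAM
bus_bytes_s  = 25e9    # assume ~25 GB/s effective memory bandwidth
block_time_s = 120     # Monero's target block interval in seconds

seconds_per_pass = chain_bytes / bus_bytes_s        # 4.0 s per full pass
nonces_per_block = block_time_s / seconds_per_pass  # ~30 attempts per block
```

Under these assumptions a miner only gets on the order of 30 nonce attempts per block interval no matter how fast its hashing silicon is, which is the whole point: difficulty would then regulate how lucky those few attempts need to be.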
Why? Verification is one pass every time a block is released. If one pass is fast enough that multiple can be done so that proof of work is practical then verification is practical too.
Remember that the transaction chain is not the same as the blockchain. Once your local blockchain has been verified you can trust its entire contents; you don't have to verify it all again for every incoming transaction.
It doesn't have to be literally the whole blockchain... Pick whatever amount is sensible for a practical result. 500 MB? 10 GB? It's perfectly tuneable. It could even be the literal difficulty parameter, if one were so inclined.
Edit: also remember that the transactions don't need to be hashed every time. They are represented in the block by the root of a Merkle tree (at least they are in Bitcoin; I'm not familiar enough with Monero internals to know if it's the same). So a 100 GB blockchain isn't really 100 GB of input.
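To illustrate why the transactions don't all need re-hashing, here's a toy Merkle-root fold in Python. It uses single SHA-256 as a stand-in (Bitcoin actually uses double SHA-256 and a specific byte order), so treat it as a sketch of the idea rather than a compatible implementation:

```python
import hashlib

def merkle_root(tx_hashes: list) -> bytes:
    """Fold a list of transaction hashes into a single 32-byte root,
    duplicating the last element whenever a level has odd length
    (as Bitcoin does)."""
    level = tx_hashes
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]       # pad odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# However many transactions a block holds, only this one 32-byte root
# goes into the block header that gets hashed during mining.
txs = [hashlib.sha256(b"tx-%d" % i).digest() for i in range(5)]
root = merkle_root(txs)
assert len(root) == 32
```

So the mining input per block stays small and fixed-size regardless of how much transaction data the block actually commits to.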
But those really are details for someone with more familiarity of internals than I. The principle is sound for ASIC resistance I think.
So instead of warehouses full of ASICs in China there will be warehouses full of servers mining. Either way, the single CPU in your desktop at home isn't going to make any money. Nobody cares that they can turn 1 penny into 2 pennies. It needs to be done at large scale, and those capital requirements are what push small-time miners out, not ASIC vs. CPU.
> So what would “in theory” be the best setup? Cheap motherboards and fast chips?
A dual-socket Epyc 2 system should be the best. If you can't afford that, any Zen 2 CPU should do pretty well. If you want to use a cheap motherboard, the Ryzen 7 3700X will probably be the best option.
It should be the most efficient, and therefore cheaper per unit of Monero mined. Two 64-core Epycs in a single machine will probably also be cheaper up front than the equivalent 16 machines with 8-core Ryzens or 8 machines with 16-core Ryzens.
That's a good point. And then 6 months later Monero give up on RandomX and switch to an ASIC friendly algo.
Off the top of your head, would you happen to know roughly how much it would cost to build a dual Epyc 2 system specifically for mining Monero? The cheapest dual-socket-capable Epyc is ~$3.5k.
It would be nice if they could pull this off, but someone will probably get an FPGA or ASIC cracking on the new algo eventually. Once you get to ASIC mining it always ends up centralized. Bitcoin is somewhat protected because even though mining has become centralized, attacking BTC would take everything down with it.
This isn't just a change of algo though. It's quite a bit more substantial a change.
I suppose the thinking behind switching to an algorithm that favors CPUs is that there are CPUs galore just lying around the place: offices, dorm rooms, basements, etc.
This could potentially allow hobbyist miners to be profitable again, instead of the entry barrier being high-end GPUs or ASIC miners.
It's very interesting stuff.
At the point where you basically have to design a full CPU inside an FPGA/ASIC, there is a question of whether making them makes sense at all.
I retired my hobby GPU farm years ago, but I will absolutely drop a new CPU in my workstation and add a few joules to network security while I sleep. It's exciting to think many people will be doing the same. The hobbyists are back!
I am looking forward to the RandomX "experiment". I wonder if it really will be good enough to hold off ASICs for 2-3 years, or if it will be cracked in less than a year. In my opinion it will be even more interesting this time, since it has been made from scratch and a slew of other projects have already communicated their interest in implementing it for their own chains. The pressure will therefore increase as long as it remains successful and other projects adopt it (which in turn creates a greater incentive to build an ASIC).
I think hyc (the dev of RandomX) said you would almost have to build a whole CPU for RandomX. I don't want to question Howard's knowledge and skills, as he is much smarter than I am (and has the track record to prove it). However, from a historical and economic perspective it is a very bold claim, since many projects before this have made similar promises, and the incentive here is huge (if successful, many other projects will use it too, adding to the incentive).
It's really impossible to predict. Any hardware experts who think they can beat its efficiency are naturally keeping their mouths shut. Only time will tell.
* also note I'm mainly just the coordinator/technical contact for the auditing effort, not the main dev on this algorithm.