Are we sure we have a complete picture of the ASIC problem? We should have one before trying to fix it, right!?
I've been in this space for almost 8 years, and to this day I haven't found anybody with a superior or universal understanding of the topic. We base our actions (anti-ASIC, ASIC resistance, ASIC competition, ...) on a wide range of assumptions rather than actual research. Shouldn't we try to simulate the systemic complexity and interdependencies (e.g. positive and negative feedback loops) before acting on superficial knowledge!?
I think the article is inaccurate. It was not an emergency hardfork; it was scheduled. There was speculation that there were ASICs, and only after the upcoming hardfork was announced did Bitmain start selling their used ASICs (offloading their soon-to-be-worthless miners onto unsuspecting people). I'm pretty sure Bitmain also ran the propaganda for XMR Original / XMR Classic to squeeze the last possible amount of profit out of their miners.
> I'm sure this carefully thought out code will ensure that Monero doesn't get hacked any more.
Good point. This is an insta-mine attack waiting to happen.
It almost makes me want to try my hand at mining Monero.
For anyone reading this, archive.is is probably the highest-trafficked archiving site on reddit, and it receives only a few donations per week (definitely not enough to run on donations alone). If you can, support the creator on Liberapay to make sure we have this service in the future.
Surely, even if RandomJS is successful, large mining operations will be incentivized to manufacture CPUs optimized to process RandomJS more efficiently, even if that just means something like increasing the CPU's L3 cache size. These would be a kind of ASIC, and they would once again give the users of this specialized, ASIC-like hardware a competitive advantage over commodity-hardware users.
As time goes on, I am seeing more and more arguments and evidence suggesting that ASIC resistance will always be a never-ending battle that cannot be won. One of the best write-ups on this issue was written by a Sia developer: here.
I'm still not convinced there's nothing left for us to learn in the fight for ASIC resistance.
But I'm also not convinced that we can rely on random code generation without more clarity on the boundary conditions. For example, it may be necessary to ensure effective uniformity in the difficulty of the generated algorithms, so that an ASIC couldn't just use abstract code analysis to pick out the algorithms that are trivial to execute and throw the rest away. u/hyc_symas indicated we could simulate this immediately, but I don't think anyone has yet taken ownership of doing so.
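To make that concrete, here's roughly what I imagine such a simulation could look like. This is only a minimal sketch: `generateProgram` is an invented stand-in for the real RandomJS generator, and the numbers mean nothing until it's swapped for the real thing.

```typescript
// Sketch: how uniform is the "difficulty" of generated programs?
// generateProgram is a toy stand-in for the real RandomJS generator.
function generateProgram(seed: number): () => number {
  const iterations = 1000 + (seed * 2654435761) % 5000; // "size" varies with seed
  return () => {
    let x = seed >>> 0;
    for (let i = 0; i < iterations; i++) x = (Math.imul(x, 1103515245) + 12345) >>> 0;
    return x;
  };
}

// Time a large sample of generated programs.
const times: number[] = [];
for (let seed = 1; seed <= 10_000; seed++) {
  const program = generateProgram(seed);
  const start = performance.now();
  program();
  times.push(performance.now() - start);
}

const mean = times.reduce((a, b) => a + b, 0) / times.length;
const sd = Math.sqrt(times.reduce((a, t) => a + (t - mean) ** 2, 0) / times.length);
// A high coefficient of variation would suggest an ASIC could profit by
// analyzing programs up front and discarding the expensive ones.
console.log(`mean ${mean.toFixed(4)} ms, CV ${(sd / mean).toFixed(3)}`);
```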
You are correct in a theoretical sense, but in practice the coin's market cap is not high enough to justify the billions of dollars required to significantly outperform AMD/Intel at general-purpose CPU design.
It's possible to design a mining algorithm for which existing mainstream x86_64 CPUs are already the nearly-optimal ASIC. There has been a long and hilarious line of failures in past attempts to do that, but that is not proof of impossibility, just proof that it's difficult, and the solution may not be one single algorithm, but a human and/or algorithm that generates or switches to other algorithms over time.
For someone trying to do this, the $10M+ mask costs and the ~3 month manufacturing latency of a custom ASIC are the main vulnerabilities you want to exploit. A miner R&D firm may be willing to pay $10M once every two years, but are they willing to pay it once a week for a chance at the $1M in coin issuance? Nope.
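As a back-of-the-envelope illustration using those figures (the hashrate share is a made-up assumption):

```typescript
// All numbers illustrative: $10M mask cost, $1M/week issuance, and an
// assumed share of the network hashrate the ASIC would capture.
const maskCostUsd = 10_000_000;
const weeklyIssuanceUsd = 1_000_000;
const hashrateShare = 0.5; // invented for the example

// Weeks of mining needed to recoup one tape-out, or null if the algorithm
// changes before the chip ever breaks even.
function breakEvenWeeks(algoLifetimeWeeks: number): number | null {
  const payback = maskCostUsd / (weeklyIssuanceUsd * hashrateShare);
  return payback <= algoLifetimeWeeks ? payback : null;
}

console.log(breakEvenWeeks(104)); // 2-year-stable algorithm: breaks even in 20 weeks
console.log(breakEvenWeeks(1));   // weekly changes: null, never breaks even
```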
Something to add: I think that just the fact that the Monero developers have changed the PoW algorithm should discourage the development of ASICs. ASIC manufacturers now have this huge risk that they could invest millions of dollars in ASIC development and with relatively little effort the Monero devs can turn their investment into unsightly paperweights.
> but now you only have small benefits,
I'm not sure that is true. I literally mean "I'm not sure". I'm open to seeing evidence that only small gains can be made.
From what little I do know about processors, I have seen just how big a difference small changes can make to certain applications. Sometimes the presence of a specific CPU opcode can give a huge advantage to an application. Sometimes a larger cache can make a huge difference, because many tens or even hundreds of CPU cycles can complete in the time it takes for data to be loaded from RAM (a ~100 ns trip to DRAM is on the order of 400 cycles on a 4 GHz core).
Something else I wonder is exactly how small a gain would have to be in order to stop a large mining organisation from building custom CPUs to capture it. Ultimately it will come down to a cost/benefit analysis, and if the benefits outweigh the costs they will be incentivized to do it. In the competitive game of mining, this could mean that all smaller mining operations without custom-fabbed CPUs would be at a disadvantage and might not be able to compete.
Of course home users would still be able to mine in a pool and would have the advantage of already owning the CPU. A scenario where vast numbers of home users are mining like this might be enough to counteract the mining centralization I described above. I don't know.
> Something else I wonder is exactly how small a gain would have to be in order to stop a large mining organisation from building custom CPUs to capture it. Ultimately it will come down to a cost/benefit analysis, and if the benefits outweigh the costs they will be incentivized to do it.
Just imagine if most of the mains-powered personal computers of the world were spending their otherwise idle CPU cycles supporting the Monero network.
Something I thought of which could facilitate the realisation of such a dream would be if full nodes had pool-mining features built directly into them, and light nodes/wallets had mining software built in as well. The light nodes/wallets would connect to those full nodes and pay for their services by contributing hashes. In this scenario you could have thousands of full-pool-nodes all around the world, each serving tens of thousands of users who are all hashing away (see the sketch below).
Anyone with a big hard drive, a fast internet connection and some open ports could easily become a full-pool-node.
Anyone with a CPU a bit faster than a Game Boy's could run a wallet and contribute hash power.
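A very rough sketch of the interface I'm imagining (all names invented; nothing like this exists in monerod today):

```typescript
// Hypothetical "full-pool-node" protocol: light wallets pay for node
// services with pool shares instead of money.
interface WorkTemplate {
  blockHashingBlob: string; // from the node's block template
  shareTarget: string;      // much easier than the network target
  jobId: string;
}

interface PoolNode {
  getWork(walletId: string): Promise<WorkTemplate>;
  submitShare(jobId: string, nonce: number, hash: string): Promise<boolean>;
}

// Wallet side: hash at whatever rate the device manages and submit shares.
async function contributeHashes(
  node: PoolNode,
  walletId: string,
  hash: (blob: string, nonce: number) => string,
): Promise<void> {
  const work = await node.getWork(walletId);
  for (let nonce = 0; ; nonce++) {
    const h = hash(work.blockHashingBlob, nonce);
    // Fixed-width lowercase hex strings compare correctly with <.
    if (h < work.shareTarget) await node.submitShare(work.jobId, nonce, h);
  }
}
```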
Wow, that's an intriguing idea...
One of the hardest things I've encountered, besides getting more people *into* crypto, is getting them to run nodes and/or mine Monero or any other CPU-friendly coin. They're either terrified of it or they don't have the patience for it. In the end, the economics of DLT *require* ubiquity, both in terms of mining and the running of full nodes. We have all these supercharged personal computing devices with more resources than we'll ever need to stream video, type word documents, and browse Facebook. I think the project that can get a node into every home is going to win in the end. I wish there were more of a concerted effort by CryptoNight enthusiasts for such a push, because they're all doing amazing work to keep Monero & Co. accessible to the masses, but no one cares. I'd hate to see us lose out to centralized, cartel-organized projects because we slept on adoption.
I can't speak to all the specifics since I don't know for sure either, but I can talk about the general approach.
Any potential custom hardware that sought to tweak the CPU to provide better JS performance would then not be orders of magnitude better than normal CPUs. Sure, it could be slightly better, but not so much better that normal CPUs no longer stand a chance.
Similarly, JS compilers have been worked on by many huge companies for years and are highly optimized. It's unlikely a small company will revolutionize this software.
So yes, there is an incentive to make better mining hardware that could outperform current offerings, but this is the best idea we have right now to make the incentive as small as possible for more than 6 months.
> So yes, there is an incentive to make better mining hardware that could outperform current offerings, but this is the best idea we have right now to make the incentive as small as possible for more than 6 months.
This comment seems very reasonable and realistic to me. If the goal is just to lower the benefit gained from developing custom chips and to increase the time it takes to create them, then I can see how RandomJS could be very fruitful.
Thanks for taking the time to have an honest discussion with me about this. I've been quite put off by some of the other responses I've received. I get the sense that some people see criticism and scepticism as a negative. I see it as the opposite: a cryptocurrency grows stronger if it is developed with an understanding of reality and of its weaknesses.
It just so happens that I care a lot about Monero, so it's not as if I'm some kind of anti-Monero Zcash propagandist or something like that. I want Monero to grow and survive.
Also, ASICs are expensive to produce and have a long development cycle, so the gains have to be outsized for the development effort to be worthwhile. I would expect mining ASIC manufacturers to give up their efforts if the battle is drawn out too long.
However, FPGAs may pose an ongoing threat to CPU and GPU crypto mining. I'm not sure exactly how flexible they are or how their performance compares to a GPU or ASIC, but small tweaks to a mining algorithm that would disable ASICs should be trivial for an FPGA to accommodate through reprogramming.
Yes, FPGAs are probably amenable to small algorithmic tweaks. Reprogramming them does carry a cost, though. A PoW algorithm that changed only once per block would give an FPGA the opportunity to adapt at every block and amortize the reprogramming cost across it. This is why RandomJS is designed to change on every nonce.
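A minimal sketch of the shape of this, with toy stand-ins for the generator and executor (the real RandomJS is far more involved):

```typescript
import { createHash } from "node:crypto";

// Toy stand-in for the RandomJS generator: seed -> source of a small program.
function generateJsSource(seed: Buffer): string {
  const n = seed.readUInt32BE(0) % 1000;
  return `let x = 1; for (let i = 0; i < ${n}; i++) x = (x * 33 + i) % 65521; x`;
}

// Toy executor: run the generated source and hash its result.
// (The real design sandboxes execution; eval is just for the sketch.)
function runAndHash(source: string): string {
  return createHash("sha256").update(String(eval(source))).digest("hex");
}

function powAttempt(blockTemplate: Buffer, nonce: number): string {
  // The program is derived from the nonce, so it changes on every attempt.
  // An FPGA would have to re-synthesize per hash, which kills amortization.
  const seed = createHash("sha256")
    .update(blockTemplate)
    .update(String(nonce))
    .digest();
  return runAndHash(generateJsSource(seed));
}

console.log(powAttempt(Buffer.from("block template bytes"), 12345));
```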
Er, no. If anything really is flexible in the domain of logic hardware, it's an FPGA. You can program it now to "be" something, and an hour later to implement something completely different: from speech analyzer to software-defined radio to IP router and back in one day :) See [Wikipedia](https://en.wikipedia.org/wiki/Field-programmable_gate_array)
I think there must be other factors why FPGAs have not yet taken the world of cryptocurrency mining by storm. If I had to speculate where the problems may be: few people able to code PoW algorithms in VHDL or similar; FPGAs being flexible, but somehow not flexible enough for mining (e.g. not enough gates / memory / whatever); FPGAs being not cost-effective (e.g. 10x more expensive than 10 graphics cards, but not 10 times faster than them).
> You can program it now to "be" something, and **an hour later** to implement something completely different
That's a stretch. And I'm sure writing the new code for a new mining algorithm would take far longer than an hour.
Agreed. VHDL is low-level code. Good coders aren't abundant, and good coders when it comes to low-level code are even scarcer. Then have fun debugging VHDL.
Hardware-wise, the few FPGA cards that I've seen are shit in terms of hardware quality. As much as everyone hates Bitmain, their miners are very well built and designed for the purpose of crypto mining. Have you seen the damn heat sinks and fans on those FPGA cards? You think those designs will even last a month of mining?
> (e.g. 10x more expensive than 10 graphics cards, but not 10 times faster than them)
If this is true, it's more flexible to just buy 10 GPUs.
Good luck craigslisting FPGA cards.
All these factors add up to how inflexible FPGA cards are. I'm sure there could be more.
**Field-programmable gate array**
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence "field-programmable". The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC) (circuit diagrams were previously used to specify the configuration, as they were for ASICs, but this is increasingly rare).
FPGAs contain an array of programmable logic blocks, and a hierarchy of reconfigurable interconnects that allow the blocks to be "wired together", like many logic gates that can be inter-wired in different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory.
Custom CPUs that can compete with current top CPUs are insanely pricey to manufacture, if not impossible. Not comparable to current ASICs. Also the speed benefits would be minimal.
RandomJS is like a real world task. If there are speed ups for it, everyone would benefit.
> Custom CPUs that can compete with current top CPUs are insanely pricey to manufacture, if not impossible. Not comparable to current ASICs. Also the speed benefits would be minimal.
What I have in mind is a manufacturer licensing an existing CPU design from a company like AMD, ARM or Intel (or even potentially using RISC-V) and making small modifications to make it perform RandomJS (or equivalent) more efficiently.
I expect that modern Bitcoin ASICs are already having their chips manufactured in some of the same fabrication plants that produce modern CPUs and GPUs.
I've noticed there are some smallish outfits creating RISC-V-based CPUs at the moment. For [example](https://www.sifive.com/products/risc-v-core-ip/e3/e31/).
> RandomJS is like a real world task. If there are speed ups for it, everyone would benefit.
My point is, I think it's likely there will always be ways to make custom hardware that performs this one specific application faster. If RandomJS happens to benefit from a large L3 cache, then a custom CPU with a larger L3 cache than any consumer CPU would give a miner an advantage.
A custom CPU might even be created that *removes* features that don't benefit RandomJS. The removal of features and subsequent refactoring of the design might allow the CPU to process RandomJS faster for the same amount of electricity. It might also bring down the cost of fabricating the custom CPU.
> RandomJS happens to benefit from a large L3 Cache ...
I've pointed out in various discussions that careful design and tuning of the program generator is very important. It has to mimic a "workload" that is common to a range of everyday applications for which general purpose CPUs/computers (and JS implementations) are reasonably well optimized already.
This may well be the most significant part of the effort.
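As a toy illustration of the tuning knob involved (the weights here are invented, not measured):

```typescript
// The generator can draw operations from a distribution matched to profiles
// of everyday JS workloads, so there's no exotic mix for an ASIC to target.
type OpKind = "arith" | "string" | "object" | "call" | "branch";

// Invented target mix; a real one would come from profiling real applications.
const workloadMix: Record<OpKind, number> = {
  arith: 0.30, string: 0.20, object: 0.20, call: 0.15, branch: 0.15,
};

function pickOp(rand: () => number): OpKind {
  let r = rand();
  for (const [op, weight] of Object.entries(workloadMix) as [OpKind, number][]) {
    if ((r -= weight) <= 0) return op;
  }
  return "branch"; // floating-point slack
}

// Sanity check: sampled frequencies should track workloadMix.
const counts = { arith: 0, string: 0, object: 0, call: 0, branch: 0 };
for (let i = 0; i < 100_000; i++) counts[pickOp(Math.random)]++;
console.log(counts);
```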
Even if everything you said came to pass, it would still narrow an ASIC miner's profit margin massively. And this is what it all comes down to: is it worth dropping the millions of dollars required, compared to the potential gains? If RandomJS lives up to its potential, I think you'll find that the risk of never recouping their investment skyrockets compared to the anticipated payoff. And not just in money, but also in time, not to mention that if they try this again, there is no guarantee the PoW won't be changed *yet again* just to screw over all their new plans.
Thinking that RandomJS (if it works) is the only thing these guys need to worry about fails to consider that hardforking the RandomJS PoW will also be on the table if these shenanigans continue. This is a two-pronged attack on ASIC miners: 1. RandomJS to bamboozle miners and stretch out production times even longer than for an ASIC that might have a turnaround before the next hardfork, and 2. the threat of changing the RandomJS PoW to defund all their efforts.
Both these things combined would make even a huge miner with deep pockets and money to burn look elsewhere.
Yeah, I noticed after I made my comment ;) It's good to be sceptical though; that's how people find and solve potential issues. Hyc has a fairly good grasp on hardware too, so I'm fairly sure he's also thinking about the what-ifs WRT the viability of RandomJS.
> a never-ending battle that cannot be won
Depending on your point of view, some battles are worth fighting, even if no end is in sight. And it's not as if Monero's battle against the ASICs required extraordinary amounts of manpower, at least not so far. How about viewing it as a "never-ending skirmish" to put it into perspective?
> One of the best write-ups on this issue was written by a Sia developer: here.
Yeah, but that's now well over one year old, which is, as they say in cryptocurrency land, "an eternity". And it was written *before* Monero (temporarily? tentatively?) won the first round of the battle against ASICs with the last hardfork.
Exactly... it’s like saying antibiotics are worthless because pathogens find ways to become more and more resistant.
Better yet, think of these changing algorithms as vaccines; and just how annoying your mom's friend is on Facebook about how vaccines cause autism...
These host/pathogen metaphors are fun (B-cell maturation in the adaptive immune response is another example to add to your list...), but I can't see where you would incorporate a *natural selection* component into evolving PoW algorithms - no feedback loop from encountered ASICs to new PoW variants.
> Depending on your point of view, some battles are worth fighting, even if no end is in sight. And it's not as if Monero's battle against the ASICs required extraordinary amounts of manpower, at least not so far. How about viewing it as a "never-ending skirmish" to put it into perspective?
> Yeah, but that's now well over one year old, which is, as they say in cryptocurrency land, "an eternity". And it was written before Monero (temporarily? tentatively?) won the first round of the battle against ASICs with the last hardfork
The age of the article does not imply it's wrong. It's a really good read. I recommend you read it through if you haven't already.
I'm not promoting or advocating Sia. I don't know if Sia is any good at what it does. I just found this article very compelling.
Yes, but I *really* don't think times have changed enough in the last 2 years for this article to be considered archaic or outdated :P
Seriously: give [it](https://blog.sia.tech/choosing-asics-for-sia-b318505b5b51) a read if you haven't already. Set your browser to reading mode, adjust the font to your liking and work through it. There is no better article that I'm aware of arguing in support of the ASIC position. At the very least you may come away from it better equipped to argue against people who support ASICs. You may, however, come away from the article reconsidering parts of your current position, or at least with a greater understanding of why people support the ASIC approach.
My current stance is:
* I prefer the simple-PoW-algorithm ASIC approach for the reality I live in today.
* I'd prefer an anti-ASIC PoW algorithm approach if I thought it would be effective in the long term at keeping commodity CPU mining competitive.
* I respect the founding principles / roadmap of each coin. If a coin is founded with an ASIC-resistant design/roadmap, then I think a divergence from that roadmap needs to be extremely well justified. So for Monero: anyone arguing in support of an ASIC-embracing algorithm has to make a very strong case for it, as they are arguing to change a fundamental characteristic of the Monero cryptocurrency. The opposite is true for Bitcoin & Sia.
yes, all the work, research and billions of dollars spent by top EE/CEs @ Intel/AMD will be overthrown by increasing an L3 cache :\^)
there's much more to CPUs than caches.
[A Superscalar Out-of-Order x86 Soft Processor for FPGA - HenryWong.pdf](http://www.eecg.toronto.edu/~jayar/pubs/theses/HWong/HenryWong.pdf)
Why are you being a dick?
It should be obvious from my comment what my point is: small changes could be made to the design of existing CPUs in order to make them perform better for just one specific purpose. My L3 cache suggestion was merely an example; I wasn't even claiming that it *would* make CPUs process RandomJS more quickly.
How about you respond to what I actually said instead of making a strawman argument which implies I said something that I didn't.
I'd love to give you a proper response, but unfortunately that would require me wasting time on this shitcoin scam and its scam forks. I've moved past that. Although it's a no-brainer that someone can run statistical analysis using Intel's VTune or some other profiler, get a good idea of all the instructions executed and the HW counters, then use that together with an analysis of the RandomJS source code and its random-code-generation parameters to strip down an Intel CPU (dropping things like AES/SGX/VM/hypervisor/AVX-512) in exchange for 2-3 extra ALUs, if RandomJS does a lot of arithmetic.
> You admit you made a strawman comment about increasing L3 cache
You might want to look up what a strawman argument is and compare what I said to the definition you find. If, after doing that, you don't take back your comment to me, then you're either an idiot or egotistical.
Given the high variability in compiler implementations, we could have no expectation of equivalent complexity or runtime using other languages. My first inclination would have been to use C but I wouldn't want to embed gcc into a miner or monerod.
OpenCL is actually losing ground these days, both iOS and Android have abandoned it.
If only there were people who understood that alternatives were already considered and rejected for valid reasons.
do you really think there isn't even higher variability in JS implementations?
iOS and Android abandoned OpenCL because mobile devices aren't suitable for the kinds of tasks OpenCL is designed for.
But really, a much better option would be LLVM IR: much more predictable performance characteristics than JS, and it can be easily and quickly translated to native code for CPUs and GPUs.
The point of all of this is to use an interpreted language, not a natively compiled language. Executing the interpreter is an integral part of the Work. Also, every computing device with a web browser is guaranteed to already have a JS engine onboard. Platform support for most other solutions would require extra porting and/or installation effort.
That may be true, but isn't a problem - the fact that the compiler or interpreter is an integral part of the PoW is what matters. It means if you actually want to accelerate things with an ASIC, you need to implement some form of language processor on the chip, or leave the CPU to do it and just crunch the result on the ASIC. Either way it's a bottleneck.
Converting JS to OpenCL for use on a GPU is a non-starter. Since a new program is generated for every nonce, that would mean you need to compile a new OpenCL kernel for every nonce. The overhead would far outweigh the actual PoW runtime. The only viable way forward for a GPU is to compile a JS engine as an OpenCL kernel and keep it running on the GPU.
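To put rough numbers on it (both figures below are invented, purely to show the shape of the trade-off):

```typescript
// Cost model for the two GPU strategies: recompiling an OpenCL kernel for
// every nonce vs. keeping one resident JS-engine kernel on the GPU.
const kernelCompileMs = 200; // assumed per-kernel OpenCL build time
const programRunMs = 5;      // assumed per-nonce PoW execution time

function totalMs(nonces: number, compilePerNonce: boolean): number {
  return compilePerNonce
    ? nonces * (kernelCompileMs + programRunMs) // pay compile cost every attempt
    : kernelCompileMs + nonces * programRunMs;  // pay it once, then just run
}

console.log(totalMs(10_000, true));  // 2,050,000 ms: compile time dominates
console.log(totalMs(10_000, false)); //    50,200 ms: ~40x faster in this model
```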