Fpga Ethereum ETH Miner Diy

I just don't get this kind of blatant fraud. At 25 MH/s you're already using 200+ GB/s of VRAM bandwidth; that's not a theory, it's a fact. So this 200 MH/s Z02 would need 1.6 TB/s of memory throughput out of a single 3 GB bank of GDDR VRAM.
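For context, those bandwidth figures fall straight out of Ethash's access pattern: per the spec, each hash makes 64 reads of a 128-byte DAG page. A quick back-of-the-envelope sketch (the constants are from the Ethash spec; the rest is arithmetic):

```python
# Back-of-the-envelope check of the VRAM bandwidth an Ethash miner needs.
# Ethash constants: each hash performs 64 accesses, each fetching a
# 128-byte DAG page.
ACCESSES_PER_HASH = 64
BYTES_PER_ACCESS = 128   # one mix-width DAG read

def bandwidth_gb_per_s(hashrate_mh: float) -> float:
    """VRAM read bandwidth (GB/s) needed to sustain `hashrate_mh` MH/s."""
    bytes_per_hash = ACCESSES_PER_HASH * BYTES_PER_ACCESS  # 8192 bytes
    return hashrate_mh * 1e6 * bytes_per_hash / 1e9

print(bandwidth_gb_per_s(25))   # 204.8 GB/s, roughly one GPU's limit
print(bandwidth_gb_per_s(200))  # 1638.4 GB/s for the claimed Z02
```

The claimed Z02 would need roughly eight times the bandwidth of a high-end GPU of the era, all delivered from one 3 GB bank of GDDR.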


That is not possible with today's memory technology, or tomorrow's either. That doesn't even consider 80 ASICs trying to share the address and data bus to the same VRAM. This thing is so far from reality that 'fairy tale' is a radical understatement. The saddest thing is, some folks will actually believe this crap and buy it.

Let's assume for a moment this is a valid product and you are willing to toss a chunk of money at some of the USB sticks or one of the boxes. What are you going to mine when Ethereum goes PoS? There are not many options.

Has anyone found five or six other mineable coins for this specific product (two or three might not be enough for me) with some kind of business plan that would create value for their coin and draw the mining community? To recap:
- Large up-front cost
- Not shipping today
- Limited mining options
- Coming out right before AMD Polaris (hashrate unknown)

How on earth could you have 80 ASIC chips sharing 3 GB of VRAM? That's not possible. The chips may be capable of 2.5 MH/s each without a memory bottleneck, but what difference does that make? The only other explanation would be 3 GB of VRAM per chip, which would be 240 GB of VRAM total. And, of course, they give no price. This was actually discussed in the criticisms of Dagger's design: VB didn't make it sequentially memory-hard, so shared memory was deemed a significant potential advantage for ASIC hardware. That said, I doubt these 'ASIC' chips are real.

One of the original criticisms: the way I see it, the solution is simple. (To oversimplify:) you don't store and access the DAG, you store and access the cache, and each ASIC chip computes the DAG on the fly, grabbing the needed components for a given nonce and discarding the buffer. Claymore's miner doesn't need DAG files now because the GPU can compute the DAG in under 5 seconds. Create an ASIC designed to do exactly that, and you might get that down to a handful of clock cycles. This is why I have somewhat regularly pointed out (well, at least a few times) that Ethash isn't really memory-hard, it is memory-capacity-hard. And requiring memory capacity certainly doesn't make an algorithm sufficiently memory-hard.

I registered using a junk email address to see if I could snoop something out.

No notifications or emails from them about registration or my order: they don't want to leave a trace. The website was registered with GoDaddy and is hosted on a server in the US. Payments are accepted to Bitcoin addresses generated at random, so unless you send them money yourself you will never know where the money is going.

(Most likely straight into a mixer.) I'll make a call: this hasn't got a thing to do with China. I'd suspect it's a US citizen trying to scam ETH noobs. The Bitcoin world/scene is full of it.

Another reason why Bitcoin will have a serious hard time making it to the general public: blockchain-based currencies trade network security for personal security. From a network perspective, a blockchain is significantly more secure than a bank; from a user perspective, it is the opposite. Comparing Bitcoin to classic fintech, risk moves from the network to the user, and security moves from the user to the network. It's really just another way of thinking that demands people be more careful with their money. This is hardly a bad thing, but it's definitely a major change from the 'reverse-anything' banking world we have now.

So, you're computing the DAG hashes (two sequential ones) you need for the loop iteration based on their position (index) in the current 'virtual' DAG?

I'm pretty sure Claymore's miner just sends the 16 MB cache over to the GPU, and it creates the data set locally on the GPU instead of the CPU. It's still brilliant: there is something like a 100x speedup in DAG creation, it eliminates the need to send the whole data set from CPU RAM over the bus to GPU VRAM, and there's no need to store it since the GPU can generate it fast enough.

Brilliant explanation work, now I comprehend why Claymore's miner works so well. If I ever get appointed to one of Vitalik's recently mentioned 'Ethereum courts', any virtual violations will be dismissed automatically.
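The store-the-cache scheme discussed above can be sketched roughly as follows. This is not the real Ethash derivation (Ethash mixes 256 pseudo-randomly chosen cache parents with Keccak-512 and FNV); sha3_512 and XOR folding stand in here just to show the shape of computing DAG items on demand from a small cache:

```python
import hashlib

# Illustrative sketch of "store the cache, compute DAG items on the fly".
# NOT the real Ethash math: the constants mirror the spec, but the hash
# and mixing functions are simplified stand-ins.
CACHE_ITEMS = 1024        # toy cache (the real Ethash cache is ~16 MB)
DATASET_PARENTS = 256     # real Ethash constant

# A small cache of 64-byte items, standing in for the Ethash light cache.
cache = [hashlib.sha3_512(i.to_bytes(4, "little")).digest()
         for i in range(CACHE_ITEMS)]

def dataset_item(index: int) -> bytes:
    """Recompute one 64-byte DAG item from the cache instead of storing it."""
    mix = bytearray(cache[index % CACHE_ITEMS])
    for parent in range(DATASET_PARENTS):
        # choose a pseudo-random cache parent and fold it into the mix
        p = (index * DATASET_PARENTS + parent + mix[0]) % CACHE_ITEMS
        for j, b in enumerate(cache[p]):
            mix[j] ^= b
    return hashlib.sha3_512(bytes(mix)).digest()

# A miner would call this per DAG access instead of reading stored VRAM.
item = dataset_item(42)
```

The point of the sketch is the memory footprint: only the cache is resident, and every DAG read becomes a burst of cache reads plus hashing.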

Obviously a lot more research would have to go into it, but I find it hard to believe that grabbing the DAG bits via an on-the-fly ASIC method would be any faster than random memory accesses on a DAG stored in memory. The DAG-creation core on the ASIC would have to compute the random bits on the fly before the next main loop finishes on the hashing core. I see the hashing core waiting a long time before it can get the next loop's random bits, so probably the same thing as the current memory-access bottleneck in the GPU implementation. What this would do, though, is save a lot of cost in memory.
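To put rough numbers on that waiting time: none of these cycle counts are measured, they are illustrative assumptions, but they show why on-the-fly derivation is not obviously a win even with the 256-parent figure from the Ethash spec.

```python
# Rough latency comparison of the two designs, per DAG item fetched.
# All cycle counts below are assumptions for illustration, not measurements.
DRAM_READ_CYCLES = 300   # assumed cost of one random GDDR5 read
CACHE_PARENTS = 256      # Ethash: parents folded into each dataset item
SRAM_READ_CYCLES = 2     # assumed cost of one on-chip cache read
KECCAK_CYCLES = 25       # assumed cost of one Keccak-512 pass in hardware

stored_dag = DRAM_READ_CYCLES                                # just read it
on_the_fly = CACHE_PARENTS * SRAM_READ_CYCLES + 2 * KECCAK_CYCLES

print(stored_dag, on_the_fly)  # 300 vs 562: recomputing loses here
```

Pipelining many derivation units in parallel could hide that latency, but then the silicon cost comes back in compute instead of memory.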

Essentially, all you would need is 16 MB of cache. Interesting idea either way.

If it was a scam, why didn't they just say they made a 1 GH/s miner using 1000 W for $500? The high price and all the detail make it seem plausible. SCAM in every letter. A USB-sticker-sized miner with 3 GB of Hynix GDDR VRAM? What a f*cking f%ck?

OK, let's call Captain Obvious: 3 GB of GDDR5 means at least SIX memory chips for now. It's impossible to fit them on that USB stick board, and you also need a memory controller and a computing chip on there. Each GDDR5 chip at high clocks consumes at least 1 watt, so we have 6 watts of consumption for the memory alone, plus an unknown amount for the memory controller and computing chip. So it's impossible to power this device from a USB port.
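The power argument checks out against the USB spec limits (500 mA at 5 V for USB 2.0, 900 mA for USB 3.0); the 1 W per-chip figure is the estimate above:

```python
# Power-budget sanity check for the claimed USB-stick miner.
GDDR5_CHIPS = 6          # minimum chip count for 3 GB, as argued above
WATTS_PER_CHIP = 1.0     # assumed per-chip draw at high clocks

memory_watts = GDDR5_CHIPS * WATTS_PER_CHIP   # 6.0 W for VRAM alone
usb2_watts = 5.0 * 0.5                        # 2.5 W: USB 2.0 port limit
usb3_watts = 5.0 * 0.9                        # 4.5 W: USB 3.0 port limit

print(memory_watts > usb3_watts)  # True: memory alone exceeds the budget
```

Even granting USB 3.0, the claimed memory alone overdraws the port before the controller and compute chip consume a single milliwatt.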

Other weird things: 1) Why 3 GB? Not 2, not 4, but 3? 2) Is there any video?

They appear to be using a QFP-64 package for their ASICs, and there are six of them on each USB dongle (original version and Lightminer version).