So, what has cryptomining got to do with VMware?
Late last year, when Bitcoin was around the US$600 mark, I embarked on cryptocurrency mining. I used my desktop together with software like cgminer and began scrypt number crunching on my video card. Over a couple of months of trials I mined Anoncoin, moved on to Novacoin, and dabbled briefly in Peercoin, which really didn’t work out. There was enough justification to go into this in a bigger way, i.e. five mining computers instead of one. I bought a few video cards – actually more than a few: 3x Radeon 7950 cards, 7x Radeon 7850 cards, and a Radeon 7870 card. I even pressed my older Radeon 5850 into service; it gave up the ghost when its fan failed one day, but I replaced the fan and heatsink with an after-market cooler and kept it working. I played around with a lot of other cryptocoins – that is, until the returns from mining would no longer cover the cost of our expensive electricity. In addition, the room was getting quite hot, and having to run the aircon during summer was just not acceptable. So, basically, everything was shut down in June this year, and now I have all this hardware sitting around essentially doing nothing.
“Retask.it” – the mining hardware, of course. My VMware ESXi 4.0 host server was getting old, having run for several years, so perhaps now was an opportunity to “Replace.it”. The current version of VMware ESXi is 5.5 Update 2. I put together some hardware to test this version – and had lots of issues installing it, because some previously working hardware was no longer supported. There is another story there that I might tell another day. Anyway, after creating a customized installation CD containing the Realtek 8168 network drivers and updated Adaptec array controller drivers, I was ready to install the production server.
The current configuration for my server contains the following parts:
ASRock 970 Extreme4 AMD AM3+ motherboard with an AMD Athlon X3 420e triple-core CPU and 8GB of RAM. The motherboard can handle up to 64GB of RAM, so there is room for future expansion. There is no onboard video, so I had to buy a single-slot Gigabyte GV-R545 video card, which houses a Radeon 5450, for $33. I don’t need a high-performance card, just a low-power one. Disk storage is an Adaptec 5805 SATA array controller (found for $300, normally $700+), initially with 3x 3TB WD Red NAS drives configured as RAID 1E. I chose the Red NAS drives because they are designed for 24-hour operation – a little more expensive, but hopefully worth it; only time will tell.
My server needs multiple network cards: one for onboard management, one for the internet connection, one for the general network, and one for the backbone network. The backbone is where I plan to have multiple host servers communicating – not implemented as yet. The motherboard only has two PCI slots, so I could only install two network cards. I have a couple of PCI-e network cards on order – one of those will be for the backbone network. I know from experience that having a few drives running 24 hours a day generates a fair bit of heat, which in turn requires a fair bit of cooling. To that end, I have reused the Antec 1100 case to house all of these items. It is a very good gaming case with lots of cooling – apparently better than the Corsair 500R that was also available.
One more thing is missing: the power supply. I have two FSP Aurum Pro 1000W power supplies left over – one of these was pressed into service, and it should easily handle another half a dozen drives for future expansion. Almost forgot – add a CD/DVD-ROM drive, since I need one to install from my customized CD. To save power, I can always disconnect it after installation – a good idea, as this server will be running 24×7, since one of the virtual machines is a firewall that protects my internal network from the world wide web.
Current capacity is 4.1TB, of which I have used just 80GB, so there is still another 4000GB to go. If I add five more 3TB drives in RAID 5, this will give me 12TB of additional capacity. In comparison, the old ESXi 4.0 server had 5x 1TB drives in a RAID 1 and a RAID 5 configuration, giving me a total of 3TB. I didn’t know at the time that if I had upgraded the firmware on the Adaptec 5405 SATA array controller, I could have achieved this capacity with only four drives in RAID 5 – the older firmware only allowed a maximum array size of 2TB. This was one of the benefits that came out of my testing of ESXi 5.5: working out what can be improved.
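As a quick sanity check on those figures: RAID 1E on an odd number of drives gives half the raw space (everything is mirrored), while RAID 5 spends one drive’s worth of space on parity. The arithmetic works out like this:

```shell
# RAID 1E: usable = n * size / 2 (all data mirrored across the stripe).
# 3x 3TB in RAID 1E -> 4.5 TB raw usable, which is about 4.09 TiB --
# matching the "4.1TB" the controller reports.
awk 'BEGIN { printf "%.2f TiB\n", (3 * 3 / 2) * 1e12 / 2^40 }'   # prints "4.09 TiB"

# RAID 5: usable = (n - 1) * size (one drive's worth of parity).
echo "$(( (5 - 1) * 3 )) TB"   # 5x 3TB in RAID 5 -> prints "12 TB"
```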
Anyway, there is still more work to do. I need to sort out all of the virtual machines, work out which ones to keep, and migrate those to the new server. Better get on with it, I guess.
[PS] There is a very good reason for including the updated Adaptec array controller drivers on my customized installation CD. While testing and installing Adaptec’s monitoring software, I found that the Adaptec drivers bundled with VMware ESXi 5.5 Update 2 did not allow array monitoring, so I had to install updated drivers from Adaptec. After doing this, each time I rebooted the server the datastore went missing. The datastore houses all of the virtual machines – if it is missing, no virtual machines can run. It turns out that upgrading the driver caused VMware to treat the datastore as a snapshot. ESXi will not run virtual machines from a volume it considers a snapshot (which is like a copy or an image), so the only permanent fix is to resignature the storage – but that meant I would need to relink every virtual machine (about 20 of them). What a headache. So it is best not to upgrade the drivers unless absolutely necessary, which means: use the right drivers from the start.
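If you do end up in this situation, the host’s command line can show and fix it. This is a sketch only, assuming shell access to the ESXi host; “datastore1” is a placeholder for whatever volume label the list command actually reports:

```shell
# List VMFS volumes that this host has detected as snapshots
# (i.e. copies of a datastore it already knows about).
esxcli storage vmfs snapshot list

# Permanent fix: write a new signature so the volume mounts with a new UUID.
# "datastore1" is a placeholder label -- use the one reported by the list above.
# After resignaturing, every VM on the volume must be re-registered (relinked).
esxcli storage vmfs snapshot resignature -l datastore1

# Alternative: force-mount the volume with its existing signature,
# which avoids relinking, but is only safe if the original volume
# will never be online at the same time.
esxcli storage vmfs snapshot mount -l datastore1
```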
The technical document is here.