It’s time for an update about my new Homelab. In the past I was running two HP MicroServer N40L machines and one Gen8. All these machines are still here but no longer in use and will be for sale soon. My new Homelab now consists of 2x HP ML10 v2, 1x Zyxel 16-port Gigabit switch and 1x Lenovo ix4-300d storage.
I have installed vSphere 6 on my servers. The Lenovo storage provides 2 iSCSI targets to fulfil the HA requirement of 2 shared datastores. Both datastores are part of an SDRS cluster. Besides this I have created a cluster with HA and DRS enabled. On the network side I have created 3 standard vSwitches, each with 2 Gbit network adapters: vSwitch0 is for virtual machine traffic, vSwitch1 is for iSCSI and vSwitch2 is for vMotion. Yes, I know that’s a bit the old-school way of networking, but hey, it’s only my Homelab, so a dvSwitch with Network I/O Control should not be required. A simple setup is sufficient for Homelab use. Maybe I’m going to change it in the future, but that will have to wait until the next vSphere release.
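For reference, a layout like the one above can be built from the ESXi shell with `esxcli` roughly as follows. This is only a sketch: the vmnic numbers are assumptions, so check your own with `esxcli network nic list` first.

```shell
# vSwitch0 usually already exists after installation; create the other two.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard add --vswitch-name=vSwitch2

# Attach two Gbit uplinks to each new vSwitch (vmnic numbering is an assumption).
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch2

# Port groups for the iSCSI and vMotion traffic.
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch2
```

The VMkernel interfaces for iSCSI and vMotion still have to be created on top of those port groups, either in the vSphere Client or with `esxcli network ip interface`.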
So what’s next?
Next will be to set up vRealize Operations Manager 6.3 to improve my troubleshooting knowledge of this product. Additionally I have an idea about how to simplify troubleshooting and improve the supportability of the product, but that is a different topic. Today I’m going to install the latest ESXi patches and check for upgrades of my vCenter appliance.
My planned new Homelab requires a lot of space due to the size of the components. But what if someone would like to have a Homelab but does not have the space in the office? One option would be to run a fully virtualized Homelab, but this is limited to the resources of the computer/laptop. A more costly option would be to use multiple Intel NUCs and a storage system.
After a short weekend with minimal sleep … I’m done: my Homelab is up and running. Furthermore I have created a template of a nested ESXi (an ESXi running as a virtual machine). These nested ESXi hosts are perfect for testing the PowerShell scripts which I use for the initial configuration of a new ESXi. The next step I have planned is to migrate my existing scripts to PowerShell 2.0 modules and additionally extend their usability. So hopefully it will be possible to enter the IP into a GUI, pick a configuration from a select box and then just hit configure. :-)
After the 3 days of vSphere 5 training, I would like to point out 3 interesting components.
First of all, the new vCenter appliance. It is based on SUSE Linux. This is a really cool thing for the private Homelab, as it is ready to use including a database, and no Windows and MS SQL licenses are necessary. Just deploy the OVF and boot the appliance. After the boot process has finished, point your browser to https://vCenterIP:5480. Log in with the user root/vmware, accept the EULA and start with the configuration of the database. The next step is to start the vCenter service from the Status tab, and that’s it. Now you should be able to access the vCenter using your vSphere Client.
The second one is the new VDR 2.0. It is the major upgrade of the existing VDR 1.2.1, which was more or less useless as it was really buggy. The new version includes some new functionality regarding reporting and scheduling of maintenance work. The new daily backup email report is really nice. But there are still some open points, and I don’t understand why. For example, why is it not possible to use a CIFS share bigger than 500GB (do not use more than 499GB), and why does the virtual appliance still have only 2GB of memory, together with the same restriction not to increase it? You will lose your support if you increase it. And there are still issues like a backup store staying locked until a damaged restore point is manually marked for deletion.
The last one is the VSA. The idea behind it is really nice: use the local disk storage of the ESXi hosts as shared storage. But the implementation is not really useful. You need between 9 and 11 IPs for this solution. Why??? Furthermore it offers only an NFS share. Why? Wouldn’t it be better to set up 2 appliances using Linux, Heartbeat, DRBD and iSCSI or NFS? That would require only 2 external IPs and 2 private IPs for sync/heartbeat, which would be more efficient. Additionally this would enable you to use multipathing, as each of the appliances has its own IP and iSCSI target. So the ESXi would recognize it as 1 LUN with 2 paths.
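To give an idea of how small the DRBD side of such a two-node appliance could be, here is a minimal resource definition. The hostnames, disk devices and the 10.0.0.x sync addresses are made up for illustration; the heartbeat/failover part and the iSCSI target config would come on top of this.

```
# /etc/drbd.d/r0.res -- hypothetical two-node resource for the shared backing store
resource r0 {
  protocol C;                  # synchronous replication between the appliances
  on appliance1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;       # local disk that gets replicated
    address   10.0.0.1:7788;   # private sync/heartbeat network
    meta-disk internal;
  }
  on appliance2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

The iSCSI target would then simply export /dev/drbd0 from whichever node is currently primary, each node under its own external IP.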
After the system finished the setup I used the internal console to download an Ubuntu 10.04 template. With this template I created a new virtual machine and installed the OpenVZ Web Panel on it. The next step was to install the hw-daemon of the OpenVZ Web Panel onto the OpenNode machine. There is a small config file called hw-daemon.ini; I just entered the IP and the auth key and started the daemon.
For an automated start I created a simple init script with just start and stop functionality, and used the chkconfig tool to add it to all necessary runlevels.
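The script was roughly along these lines. This is a sketch from memory: the hw-daemon command and the PID file location are assumptions, so adjust them to your installation.

```shell
#!/bin/sh
# chkconfig: 345 99 10
# description: start/stop wrapper for the OpenVZ Web Panel hw-daemon
#
# NOTE: both paths below are assumptions -- point DAEMON_CMD at wherever
# hw-daemon.rb actually lives on your OpenNode machine.
DAEMON_CMD=${DAEMON_CMD:-"ruby /opt/ovz-web-panel/utils/hw-daemon/hw-daemon.rb"}
PIDFILE=${PIDFILE:-/var/run/hw-daemon.pid}

start() {
    # Refuse to start a second instance if the recorded PID is still alive.
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        echo "hw-daemon is already running"
        return 0
    fi
    # Launch in the background and remember the PID for stop.
    $DAEMON_CMD >/dev/null 2>&1 &
    echo $! > "$PIDFILE"
    echo "hw-daemon started"
}

stop() {
    if [ -f "$PIDFILE" ]; then
        kill "$(cat "$PIDFILE")" 2>/dev/null
        rm -f "$PIDFILE"
        echo "hw-daemon stopped"
    else
        echo "hw-daemon is not running"
    fi
}

case "${1:-}" in
    start) start ;;
    stop)  stop ;;
    *)     echo "Usage: $0 {start|stop}" ;;
esac
```

chkconfig reads the `# chkconfig: 345 99 10` header (runlevels, start and stop priority) to create the runlevel links, so after copying the script to /etc/init.d a single `chkconfig --add hw-daemon` does the rest.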
Currently I am setting up a Windows XP appliance which I would like to use for a new way to manage a VMware ESXi platform. A colleague had the idea to create an interface to manage free ESXi servers from any kind of CMS.
OS: Windows XP
Software: Apache, PHP, PowerShell, VMware PowerCLI, JSON for PHP
more details will follow soon …