Homelab – the current state

It’s time for a new update about my Homelab. In the past I was running HP MicroServers, two N40L and one Gen8. All these machines are still here but no longer in use and will be for sale soon. My new Homelab now consists of 2x HP ML10 v2, 1x Zyxel 16 port Gbit switch and 1x Lenovo ix4-300d storage.

2x Server HP ML10 v2: HP ProLiant ML10 v2 G3240 server (non-hot-plug, 4 GB-U, B120i, 4 LFF, 350 W power supply)
2x SSD adapter per server: Kingston SNA-DC2/35 SATA drive carrier, mounting frame for 2.5" (6.4 cm) drives in a 3.5" (8.9 cm) bay, black
Memory: Samsung 8GB (1x 8GB) DDR3 1600MHz (PC3-12800E) 2Rx8 ECC unbuffered DIMM server/workstation memory
1x Zyxel 16 port Gbit switch: Zyxel GS1100-16-EU0101F Gigabit Switch (16-port, RJ-45)
1x Lenovo ix4-300d storage: Lenovo Iomega ix4-300d Network Storage (0TB diskless EMEA, Marvell Armada XP, 1.3GHz, 4x HDD, 512MB RAM)
2x HP quad port network adapter: HP 538696-B21 PCIe Quad Port Gigabit Server Adapter
2x Intel SSD 240GB: Intel SSDSC2BW240H601 internal solid state drive, 240GB, black
4x WD Red 2TB: WD 2TB Red internal NAS hard drive (3.5" / 8.9 cm, 5400rpm, SATA III), WD20EFRX

The two ESXi hosts are named xmesx01e and xmesx02e.

I have installed vSphere 6 on my servers. The Lenovo storage provides two iSCSI targets to fulfil the requirement of two shared datastores for HA. Both datastores are part of an SDRS cluster. Besides this I have created a cluster with HA and DRS enabled. On the network side I have created three standard vSwitches, each with two Gbit network adapters: vSwitch0 is for virtual machine traffic, vSwitch1 is for iSCSI and vSwitch2 is for vMotion. Yes, I know that’s a bit the old-school way of networking, but hey, it’s only my Homelab, so a dvSwitch with Network I/O Control should not be required. A simple setup is sufficient for Homelab use. Maybe I’m going to change it in the future, but that will have to wait until the next vSphere release.
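
For reference, a minimal PowerCLI sketch of this network layout. The vCenter name, the lab domain, the vmnic numbers and the VMkernel IPs below are placeholders for my lab, so adjust them to your environment; vSwitch0 with management and VM traffic already exists by default, so only the iSCSI and vMotion switches are created here.

Connect-VIServer -Server vcenter.lab.local
$vmhost = Get-VMHost -Name xmesx01e.lab.local

# vSwitch1: iSCSI, two Gbit uplinks plus a VMkernel port
$vsw1 = New-VirtualSwitch -VMHost $vmhost -Name vSwitch1 -Nic vmnic2,vmnic3
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsw1 -PortGroup iSCSI `
    -IP 192.168.10.11 -SubnetMask 255.255.255.0

# vSwitch2: vMotion, two Gbit uplinks plus a VMkernel port
$vsw2 = New-VirtualSwitch -VMHost $vmhost -Name vSwitch2 -Nic vmnic4,vmnic5
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsw2 -PortGroup vMotion `
    -IP 192.168.20.11 -SubnetMask 255.255.255.0 -VMotionEnabled $true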

So what’s next?

Next up is setting up vRealize Operations Manager 6.3 to improve my troubleshooting knowledge of this product. Additionally I have an idea about how to simplify troubleshooting and improve the supportability of the product, but that is a different topic. Today I’m going to install the latest ESXi patches and check for upgrades of my vCenter appliance.
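
Patching the hosts can be done from PowerCLI as well, via the esxcli interface. A rough sketch, assuming the patch bundle has already been copied to a datastore (the datastore and bundle names below are made up):

$vmhost = Get-VMHost -Name xmesx01e.lab.local
Set-VMHost -VMHost $vmhost -State Maintenance

# Run "esxcli software vib update" through PowerCLI
$esxcli = Get-EsxCli -VMHost $vmhost -V2
$arguments = $esxcli.software.vib.update.CreateArgs()
$arguments.depot = '/vmfs/volumes/iscsi-ds01/ESXi600-201608001.zip'
$esxcli.software.vib.update.Invoke($arguments)

Restart-VMHost -VMHost $vmhost -Confirm:$false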

More to come …

Alternative Setup of a VMware Homelab

My planned new Homelab requires a lot of space because of the size of its components. But what if someone would like to have a Homelab but does not have the space in the office? One option would be to run a fully virtualized Homelab, but that is limited by the resources of the computer/laptop. A more costly option would be to use multiple Intel NUCs and a storage system.

Shopping list:

2x Intel NUC i5: Intel NUC6i5SYH
2x Samsung 32GB memory kit: Samsung 32GB dual channel kit (2x 16GB) 260-pin DDR4 2133 SO-DIMM (2133MHz, PC4-17000, CL15), suitable for all Intel “Skylake” notebooks with DDR4
2x Samsung M.2 SSD 250GB: Samsung MZ-N5E250BW 850 EVO internal SSD, 250GB (SATA), green
1x QNAP NAS 4 Bay: QNAP TS-453A-8G-NAS 4-Bay Intel Celeron Braswell N
4x WD Red 3TB: WD 3TB Red internal NAS hard drive (3.5" / 8.9 cm, SATA), WD30EFRX
1x Netgear Managed Switch 8 port: Netgear ProSafe GS108T (8-Port Gigabit Smart Managed Switch 8 x 10/100/1000 – desktop)

You can use the standard ESXi images (5.5 or 6) provided on the VMware web site and install them from a USB stick.

As soon as all the hardware is available I’m going to document the complete setup on my web site.

Homelab and nested ESXi servers

After a short weekend with a minimum of sleep … I’m done, my Homelab is up and running. Furthermore I have created a template of a nested ESXi (an ESXi running as a virtual machine). These nested ESXi hosts are perfect for testing the PowerShell scripts which I use for the initial configuration of a new ESXi. The next step I have planned is to migrate my existing scripts to PowerShell 2.0 modules and additionally extend their usability. So hopefully it will be possible to enter the IP into a GUI, select one of several configurations from a select box and then just hit configure. :-)
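
To give an idea of the direction, here is a rough sketch of what one of those module functions could look like. The function name, the NTP server and the configuration step are just examples I made up for illustration, not the final module.

# InitialEsxConfig.psm1 – minimal sketch of an initial configuration function
function Initialize-LabEsxHost {
    param(
        [Parameter(Mandatory=$true)][string]$Ip,
        [Parameter(Mandatory=$true)][System.Management.Automation.PSCredential]$Credential
    )
    # Connect directly to the (nested) ESXi host, not to vCenter
    $conn   = Connect-VIServer -Server $Ip -Credential $Credential
    $vmhost = Get-VMHost -Server $conn

    # Example configuration step: NTP server and ntpd service
    Add-VMHostNtpServer -VMHost $vmhost -NtpServer '0.de.pool.ntp.org'
    Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq 'ntpd' } | Start-VMHostService

    Disconnect-VIServer -Server $conn -Confirm:$false
}
Export-ModuleMember -Function Initialize-LabEsxHost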

vCenter Appliance – VDR 2.0 – VSA

After three days of vSphere 5 training I would like to point out three interesting components.

First of all the new vCenter appliance. It is based on SUSE Linux. This is a really cool thing for the private Homelab as it is ready to use, including a database, and no Windows or MS SQL licenses are necessary. Just deploy the OVF and boot the appliance. After the boot process has finished, point your browser to https://vCenterIP:5480. Log in with the user root and the password vmware, accept the EULA and start with the configuration of the database. The next step is to start the vCenter service from the Status tab and that’s it. Now you should be able to access the vCenter using your vSphere client.
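
Deploying the appliance can also be scripted with PowerCLI. A small sketch, assuming you are connected directly to an ESXi host; the host name, appliance file name and datastore are placeholders:

Connect-VIServer -Server esx01.lab.local
$vmhost = Get-VMHost
Import-VApp -Source 'C:\iso\VMware-vCenter-Server-Appliance.ova' `
    -Name vcsa01 -VMHost $vmhost -Datastore (Get-Datastore -Name datastore1)
Start-VM -VM vcsa01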

The second one is the new VDR 2.0. It is a major upgrade of the existing VDR 1.2.1, which was more or less useless as it was really buggy. The new version now includes new functionality for reporting and for scheduling maintenance work. The new daily backup email report is really nice. But there are still some open points and I don’t understand why. For example, why is it not possible to use a CIFS share bigger than 500GB (do not use more than 499GB), and why does the virtual appliance still have only 2GB of memory, with the same restriction not to increase it? You will lose your support if you increase it. There are also still issues like a backup store staying locked until a damaged restore point is manually marked for deletion.

The last one is the VSA. The idea behind it is really nice: use the local disk storage of the ESXi hosts as shared storage. But the implementation is not really useful. You need between 9 and 11 IPs for this solution. Why??? Furthermore it offers only an NFS share. Why? Wouldn’t it be better to set up two appliances using Linux, Heartbeat, DRBD and iSCSI or NFS? That would require only two external IPs and two private IPs for sync/heartbeat, which would be much more efficient. Additionally this would enable you to use multipathing, as each of the appliances has its own IP and iSCSI target, so the ESXi would recognize it as one LUN with two paths.
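
On the ESXi side, consuming such a two-headed iSCSI target would just be the usual software iSCSI setup. A quick PowerCLI sketch with made-up host name and target IPs, only to illustrate the one-LUN-two-paths idea:

$vmhost = Get-VMHost -Name esx01.lab.local

# Enable the software iSCSI adapter
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true

# Add both appliance IPs as dynamic (send) targets, then rescan
$hba = Get-VMHostHba -VMHost $vmhost -Type iScsi | Where-Object { $_.Model -like '*Software*' }
New-IScsiHbaTarget -IScsiHba $hba -Address 192.168.10.21 -Type Send
New-IScsiHbaTarget -IScsiHba $hba -Address 192.168.10.22 -Type Send
Get-VMHostStorage -VMHost $vmhost -RescanAllHba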

Further updates will follow … :-)

OpenNode – a CentOS-based virtualization system

OpenNode is a CentOS 5 based bare-metal installer that includes KVM and OpenVZ. It comes with its own management tools, but I prefer to use the OpenVZ Web Panel (Link).

I have installed OpenNode on my HP ProLiant ML115 G5. This was as easy as a normal CentOS installation.

Link to OpenNode

After the system had finished the setup I used the internal console to download an Ubuntu 10.04 template. With this template I created a new virtual machine and installed the OpenVZ Web Panel on it. The next step was to install the hw-daemon of the OpenVZ Web Panel onto the OpenNode machine. There is a small config file called hw-daemon.ini; I just entered the IP and the auth key and started the daemon.
For an automated start I created a simple init script with just start and stop functionality and used the chkconfig tool to add it to all necessary runlevels.

to be continued …

Setting up a Windows XP appliance …

Currently I’m setting up a Windows XP appliance which I would like to use as a new way to manage a VMware ESXi platform. A colleague had the idea to create an interface to manage free ESXi servers from any kind of CMS.

Some details:

OS: Windows XP
Software: Apache, PHP, PowerShell, VMware PowerCLI, JSON for PHP
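
The rough idea is that the PHP layer on the appliance calls a PowerCLI script and hands the result back to the CMS. A hypothetical sketch of such a script; the script name, parameters, the snap-in load for the old PowerCLI on XP and the hand-built JSON output are just my assumptions, not the final interface:

# manage-vm.ps1 – called by PHP, e.g. .\manage-vm.ps1 -Server 10.0.0.10 -VMName web01 -Action Start
param(
    [string]$Server,
    [string]$User,
    [string]$Password,
    [string]$VMName,
    [ValidateSet('Start','Stop','Status')][string]$Action = 'Status'
)
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer -Server $Server -User $User -Password $Password | Out-Null

$vm = Get-VM -Name $VMName
switch ($Action) {
    'Start' { Start-VM -VM $vm -Confirm:$false | Out-Null }
    'Stop'  { Stop-VM  -VM $vm -Confirm:$false | Out-Null }
}

# Return a small JSON string for the PHP side to decode
$vm = Get-VM -Name $VMName
'{{"name":"{0}","powerstate":"{1}"}}' -f $vm.Name, $vm.PowerState

Disconnect-VIServer -Confirm:$false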

more details will follow soon …