Homelab – the current state

It’s time for an update on my Homelab. In the past I was running two HP Microserver N40L and one HP Microserver Gen8. All these machines are still here but no longer in use and will be for sale soon. My new Homelab now consists of 2x HP ML10 v2 servers, 1x Zyxel 16-port Gbit switch and 1x Lenovo ix4-300d storage.

2x Server HP ML10 v2: HP ProLiant ML10 v2 G3240 server (non-hot-plug, 4GB-U, B120i, 4 LFF, 350W power supply)
2x SSD adapters per server: Kingston SNA-DC2/35 SATA drive carrier, mounting frame for 2.5″ drives in a 3.5″ bay, black
Memory: Samsung 8GB (1x 8GB) DDR3 1600MHz (PC3-12800E) 2Rx8 ECC unbuffered DIMM server/workstation RAM
1x Zyxel 16-port Gbit switch: Zyxel GS1100-16-EU0101F Gigabit switch (16-port, RJ-45)
1x Lenovo ix4-300d storage: Lenovo Iomega ix4-300d Network Storage (0TB diskless EMEA, Marvell Armada XP, 1.3GHz, 4x HDD, 512MB RAM)
2x HP quad-port network adapter: HP 538696-B21 PCIe quad-port Gigabit server adapter
2x Intel SSD 240GB: Intel SSDSC2BW240H601 internal solid state drive, 240GB, black
4x WD Red 2TB: WD 2TB Red internal NAS hard drive (3.5″, 5400rpm, SATA III) WD20EFRX

My two ESXi hosts are named xmesx01e and xmesx02e.

I have installed vSphere 6 on my servers. The Lenovo storage provides two iSCSI targets to fulfil the requirement of two shared datastores for HA. Both datastores are part of an SDRS cluster. Besides this I have created a cluster with HA and DRS enabled. On the network side I have created three standard vSwitches, each with two Gbit network adapters: vSwitch0 is for virtual machine traffic, vSwitch1 is for iSCSI and vSwitch2 is for vMotion. Yes, I know that’s a bit old school on the networking side, but hey, it’s only my Homelab, so a dvSwitch with Network I/O Control should not be required. A simple setup is sufficient for Homelab use. Maybe I’m going to change it in the future, but that will have to wait until the next vSphere release.
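For reference, a standard vSwitch with two uplinks like the ones above can be created from the ESXi shell roughly as follows. This is just a sketch: the vSwitch name, vmnic numbers and port group name are examples and need to be adjusted to your own hardware.

```shell
# Create the iSCSI vSwitch (name is an example)
esxcli network vswitch standard add --vswitch-name=vSwitch1

# Attach two Gbit uplinks (vmnic numbers depend on your NICs)
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# Add a port group, e.g. for the iSCSI VMkernel interface
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI
```

The same can of course be done via the vSphere Client; the CLI version is just handy when setting up several identical hosts.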

So what’s next?

Next will be to set up vRealize Operations Manager 6.3 to improve my troubleshooting knowledge of this product. Additionally I have an idea about how to simplify the troubleshooting and improve the supportability of the product, but that is a different topic. Today I’m going to install the latest ESXi patches and check for upgrades of my vCenter appliance.

More to come …

NetApp NFS APD issues – reduction of MaxQueueDepth

If you face APDs (All Paths Down events) in your environment, you can follow the KBs below to possibly improve the situation.

http://kb.vmware.com/kb/2016122
https://kb.netapp.com/support/index?page=content&id=1014696

When using NFS datastores on some NetApp NFS filer models on an ESXi/ESX host, you experience these symptoms:
* The NFS datastores appear to be unavailable (grayed out) in vCenter Server, or when accessed through the vSphere Client
* The NFS shares reappear after a few minutes
* Virtual machines located on the NFS datastore are in a hung/paused state when the NFS datastore is unavailable
* This issue is most often seen after a host upgrade to ESXi 5.x or the addition of an ESXi 5.x host to the environment, but it can also occur in vSphere 6 environments.

/var/log/vmkernel.log

NFSLock: 515: Stop accessing fd 0xc21eba0 4
NFS: 283: Lost connection to the server 192.168.100.1 mount point /vol/datastore01, mounted as bf7ce3db-42c081a2-0000-000000000000 ("datastore01")
NFSLock: 477: Start accessing fd 0xc21eba0 again
NFS: 292: Restored connection to the server 192.168.100.1 mount point /vol/datastore01, mounted as bf7ce3db-42c081a2-0000-000000000000 ("datastore01")
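The workaround described in the VMware KB above is to reduce the NFS queue depth on the affected hosts. A sketch of the change is shown below; the value 64 is the one suggested in KB 2016122, but verify it against the current version of the KB before applying, and note that a host reboot is required for the change to take effect.

```shell
# Lower the NFS queue depth as suggested in VMware KB 2016122
# (requires a host reboot to take effect)
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64

# Verify the configured value
esxcli system settings advanced list -o /NFS/MaxQueueDepth
```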

Additionally, VMware released new patches for ESXi 5.5 and 6 which contain improvements to the NFS implementation and should make ESXi more resilient to APDs.

You can find a great overview on the following sites: ESXi 5.5 Patches and ESXi 6 Patches.

Besides running the latest version of ESXi, it is highly recommended to apply the NetApp NFS recommendations for vSphere.
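As a sketch, the NetApp guidance typically covers ESXi advanced settings such as the NFS heartbeat and TCP/IP heap values. The values below are the ones commonly cited for ESXi 5.5/6 (e.g. in NetApp TR-4068); treat them as an illustration and verify them against NetApp’s current recommendations (or let the Virtual Storage Console set them for you) before applying.

```shell
# NFS heartbeat settings commonly recommended by NetApp
# (illustrative values - verify against current NetApp guidance)
esxcli system settings advanced set -o /NFS/HeartbeatMaxFailures -i 10
esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12
esxcli system settings advanced set -o /NFS/HeartbeatTimeout -i 5

# TCP/IP heap sizing for a larger number of NFS mounts (reboot required)
esxcli system settings advanced set -o /Net/TcpipHeapSize -i 32
esxcli system settings advanced set -o /Net/TcpipHeapMax -i 512
```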

Get all guest IPs of all your virtual machines using PowerShell

This is a small PowerShell script (using VMware PowerCLI) to get all guest IPs from all virtual machines and write them to a text file:

# vCenter to connect to (requires VMware PowerCLI)
$VCENTER = "10.1.1.3"
Connect-VIServer -Server $VCENTER

# Get all VMs, sorted by name
$vms = Get-VM | Sort-Object Name

# Output file for the IP list
$file = "c:\ip_list.txt"

foreach ($vm in $vms) {
	# A VM can report multiple guest IPs (one per vNIC, IPv4 and IPv6)
	foreach ($ip in $vm.Guest.IpAddress) {
		Write-Host "$vm $ip"
		"$vm $ip" | Out-File -FilePath $file -Append
	}
}

Disconnect-VIServer -Server $VCENTER -Confirm:$false -Force:$true

Update 2: So what has changed … I joined VMware

Some of you might have noticed there was a countdown on my website. So what has happened? I decided it was time to move on to new challenges, so I quit my job at Vodafone after nearly 15 years and joined VMware on 1 April 2016 as a Senior Technical Support Engineer.

Currently I’m really busy with a lot of trainings and exams, and with meeting a lot of interesting people, mostly in Cork.

Trainings:

* Data Center Virtualization Fundamentals [V6] – done

* VMware vSphere: Install, Configure, Manage [V6] – done

* VMware vSphere: Optimize and Scale [V6]

* VMware Log Insight [V2.0] Fundamentals – done

* VMware vRealize Operations Manager: Install, Configure, Manage [V6.0]

* VMware Cloud Fundamentals – done

* vCloud Air Fundamentals – done

* VMware vCloud Director Fundamentals [V5.1/V5.5]

* VMware vCloud Director: Install, Configure, Manage [V5.5]

* VMware NSX: Install, Configure, Manage [V6.2]

Exams:

* vSphere 6 Foundations – done
* VCA6-DCV – done
* VCA6-HC – done
* VCA6-CMA – done
* VCA6-NV
* VCP6-DCV
* VCP6-CMA
* VCP6-NV

That’s the reason why I don’t post a lot of new stuff on my site at the moment. I’ll keep you posted. :-)

Alternative Setup of a VMware Homelab

My planned new Homelab requires a lot of space due to the size of the components. But what if someone would like to have a Homelab but does not have the space in the office? One option would be to run a fully virtualized Homelab, but this is limited by the resources of the computer/laptop. A more costly option would be to use multiple Intel NUCs and a storage system.

Shopping list:

2x Intel NUC i5: Intel NUC6i5SYH
2x Samsung 32GB memory: Samsung 32GB dual channel kit (2x 16GB, 260-pin DDR4 2133 SO-DIMM, 2133MHz, PC4-17000, CL15), suitable for all Intel “Skylake” notebooks with DDR4
2x Samsung M.2 SSD 250GB: Samsung MZ-N5E250BW 850 EVO internal SSD, 250GB (SATA), green
1x QNAP NAS 4-bay: QNAP TS-453A-8G-NAS 4-Bay Intel Celeron Braswell N
4x WD Red 3TB: WD 3TB Red internal NAS hard drive (3.5″, SATA) WD30EFRX
1x Netgear managed switch 8-port: Netgear ProSafe GS108T (8-port Gigabit smart managed switch, 8x 10/100/1000, desktop)

You can use the standard ESXi images (5.5 or 6) provided on the VMware web site and install ESXi from a USB stick.

As soon as I have all hardware available I’m going to document the complete setup on my web site.

Homelab upgrade and options to install an SSD into the HP Microserver Gen8

I started to plan the upgrade of my current homelab, which consists of 1x HP Microserver Gen8, 2x HP Microserver N40L and a Netgear 24-port switch. The only part I will keep is the HP Microserver Gen8: it already has 16GB of memory and an HP SmartArray P410 with a 512MB cache module incl. battery. This machine will run my infrastructure machines, like a tiny vCenter. As work horses I plan to use HP ML10 v2 servers because they are cost efficient and support up to 32GB of memory.

Parts list which I will use in my future homelab:

1x Server HP Microserver Gen8: HP ProLiant MicroServer (Gen8, G1610T, 1P, 4GB-U, B120i, SATA server)
2x Server HP ML10 v2: HP ProLiant ML10 v2 G3240 server (non-hot-plug, 4GB-U, B120i, 4 LFF, 350W power supply)
2x SSDs per server: Mushkin MKNSSDRE1TB Reactor 7mm SSD, 1TB
2x SSD adapters per server: Kingston SNA-DC2/35 SATA drive carrier, mounting frame for 2.5″ drives in a 3.5″ bay, black
2x Disks per server: HGST Deskstar NAS 3TB 6Gb/s SATA 7200rpm 24×7 RV S
Switch: TP-Link TL-SG3424 pure Gigabit L2 managed switch (24x 10/100/1000Mbps RJ45 ports incl. 4 combined SFP slots, fanless passive cooling)
Memory: Samsung 8GB (1x 8GB) DDR3 1600MHz (PC3-12800E) 2Rx8 ECC unbuffered DIMM server/workstation RAM

There are at least 2 options to install SSDs into the HP Microserver Gen8.

Option 1 is to connect a single SSD to the SATA port on the mainboard and place it where normally the DVD drive would be installed. That’s the cheapest method, but it might not be the best one.

Option 2 is to use an adapter to mount the 2.5″ SSD in a 3.5″ tray. I personally would use the Kingston SNA-DC2/35 adapter.

How to configure VMware monitoring in Check_MK

Check_MK is able to monitor ESXi or vCenter out of the box. The configuration requires two steps and, preferably, a read-only user.

1. Login to Check_MK


2. Create new Host


Hostname: ESXi/vCenter name or FQDN
IP Address: ESXi/vCenter management IP


Save & Go to Services

The error message is normal and we can ignore it.


3. Go to Host & Service Parameters


4. Click on Datasource Programs


5. Click Check state of VMWare ESX via vSphere


6. Create rule in folder Main directory

Explicit hosts: ESXi-Name
vSphere user name: ReadOnlyUser-Name
vSphere secret: password of the ReadOnlyUser
Select:
 Host Systems
 Virtual Machines
 Datastores
 Performance Counters
 License Usage
Select:
 Display ESX Host power state on
 Display VM power state on
 Placeholder VMs

7. Click on Save

8. Activate your changes

9. Click on Hosts

10. Click on the 3rd icon of the host – Edit services of host

11. Activate the missing services

12. Activate your changes –> see step 8

13. Check your discovered services

Go to Views –> All Services

14. Click on the refresh icon next to Check_MK and Check_MK Discovery

That’s all. If you have questions, just leave a comment.

vCenter Appliance Webclient HTML5 consoles

I use the vCenter appliance as a console proxy to access virtual machine consoles via HTML5 without a direct connection to the ESXi hosts. In vSphere 5.1 this was possible with a hidden configuration in the webclient.properties file. This configuration changed in vSphere 5.5 and is now officially available.

/var/lib/vmware/vsphere-client/webclient.properties


html.console.enabled = TRUE
html.console.port.ssl = 7331
html.console.port = 7343

/etc/init.d/vsphere-client restart

VMware tools repository

You can install the VMware Tools using the wizard in the vSphere client, but this is not the best solution. VMware offers a repository which you can use to install the VMware Tools with your package manager.

URL of the repository: http://packages.vmware.com/tools/esx/5.5/index.html

Example for Redhat 6 64bit running on ESXi 5.5:


cd /etc/yum.repos.d/

vi VMware-Tools.repo

[vmware-tools]
name=VMware Tools
baseurl=http://packages.vmware.com/tools/esx/5.5/rhel6/x86_64
enabled=1
gpgcheck=1
gpgkey=http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub
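With the repository in place, the tools can then be installed via yum. The package name below (vmware-tools-esx-nox, the variant without the X components) is taken from VMware’s OSP packaging docs; double-check the package list in the repository for your guest OS before installing.

```shell
# Refresh the repository metadata and install the VMware Tools OSP packages
# (vmware-tools-esx-nox is the no-X variant; verify the exact package
# name in the repository for your distribution)
yum makecache
yum install vmware-tools-esx-nox
```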

VMware vSphere 6 available

Today VMware released the new VMware vSphere 6. I was running the beta for a long time and I must say it was working very well.

VMware vSphere 6
VMware vSphere 6 Whats New

The release of vSphere 6 is already exciting, but there is one additional extremely cool new product from VMware called VIO (VMware Integrated OpenStack). It is a fully automated setup of a complete OpenStack environment based on VMware. I will try it as soon as possible because I have been playing around with OpenStack for a long time and it is definitely not an easy setup. Additionally, as stated on the VMware page, it is “Available for free for all customers with vSphere Enterprise Plus, vSphere with Operations Management Enterprise Plus or vCloud Suite”, so it’s included in some of the licenses.

VMware Integrated OpenStack

As soon as I can play around with it I will write an article about the complete setup process.