ESX IP STORAGE TROUBLESHOOTING BEST PRACTICE

VMware has released a new white paper about ESXi IP storage troubleshooting.

In this paper, we:
• Describe how you can analyze packet traces to identify functional and performance issues in an
ESX IP storage environment.
• Compare packet capture alternatives, and explain why we recommend an inline optical network
tap connected to a packet capture system.
• Present the challenges of 10G packet capture, and describe key features of commercial 10G
capture solutions.
• Describe the design of an inexpensive, self-assembled 10G packet capture solution optimized for
troubleshooting that you can build relatively easily. We also describe our experience with multiple
prototypes of this design, which we have used in our ESX IP storage testbeds for NFS and iSCSI
performance for many years.
• Present examples of analyzing packet traces to solve ESX performance issues for NFSv4.1, software
iSCSI over IPv6, and hardware iSCSI.

ESX-IP-storage-troubleshooting.pdf

Time to replace my Homelab – Part 1

My current homelab contains 2x HP ML10v2, 1x HP Microserver Gen8 and 2 switches (1x Mikrotik Cloud Switch 24-port, 1x Zyxel 16-port). This setup was sufficient for a long time, but due to the increased resource requirements of some VMware products and my activities with vROps, Log Insight and other solutions, I require more resources. Furthermore, I would like to reduce from 3 servers to 1 and run more of my lab virtually, based on the lab scripts from William Lam.

Current Setup:

Hostname | CPU | Memory | Storage | Add-On
xmesx01e | Intel® Pentium® G3240 (3.1GHz/2-core/3MB/54W) | 32 GB | 1x 240GB Intel SSD | 1x HP Quad Gbit NIC
xmesx02e | Intel® Pentium® G3240 (3.1GHz/2-core/3MB/54W) | 20 GB | 1x 240GB Intel SSD | 1x HP Quad Gbit NIC
Storage (FreeNAS on HP Microserver Gen8) | Intel® Pentium® G2020T (2.5GHz/2-core/3MB/35W) | 16 GB | 4x 2TB WD RED | 1x HP Smart Array P410 + 512MB cache module + battery

New Setup:

Category | Description
Type | HP ProLiant DL380 G7
Chipset | Intel® 5520 Tylersburg + Intel ICH10
CPU Type | Intel® Xeon® CPU L5630
CPU Speed | 2.13 GHz
Cores / Threads | 2 CPU(s), 4 cores per CPU, 2 threads per core, 16 threads total
L1, L2, L3 Cache | 32k+32k, 256k, 12288k (12 MB)
Memory | 32 GB DDR3 ECC (installed as 8 x 4 GB)
SAS Controller | HP Smart Array P410i Controller
PCIe Cards | 1x Gbit LAN Broadcom PCIe card, 2x Fibre Channel Finisar single-port PCIe cards
Disks | 8x 2.5" SAS HDDs

The plan is to extend the machine with 8x 146GB SAS disks and to upgrade the memory to at least 144GB. The FC cards will be removed as they will not be used in my Homelab.

If someone is interested in buying one or two HP ML10v2 servers, please send me a message.

to be continued …

vCenter notifications: send push messages to iOS devices using Prowl

I would like to receive push messages for triggered alerts from my vCenter. There is the possibility to execute a command when an alert has been triggered, so I decided to write a script which uses Prowl to send push notifications to my Apple mobile device. I have already used Prowl for other devices like my Homematic home automation system. It’s easy to use with a simple curl command, and you can find a lot of good examples via Google.

Sources and Services:

VMware Documentation

Push notification service

You have to create an account on www.prowlapp.com and generate a new API key. The API key then has to be added to the script (replace PLACE_YOUR_API_KEY_HERE with your API key).
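To verify that the key works before wiring everything into vCenter, you can send a test notification with a single curl call (a minimal sketch against the same Prowl endpoint the script below uses; all values other than the endpoint are placeholders):

# Send a test notification (replace PLACE_YOUR_API_KEY_HERE with your key)
curl https://prowl.weks.net/publicapi/add -F apikey=PLACE_YOUR_API_KEY_HERE -F priority=0 -F application="Test" -F event="Test event" -F description="Hello from the shell"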

Script:

This script is really basic and is only a proof of concept. I will extend and improve it over time.

root@vcenter [ ~ ]# mkdir bin
root@vcenter [ ~ ]# cd bin
root@vcenter [ ~/bin ]# vi alert.sh
#!/bin/bash
#set -x

# Alarm value set by vCenter when the alarm action runs ("red" = critical)
value="$VMWARE_ALARM_ALARMVALUE"
if [ "$value" == "red" ]; then
    # Emergency priority for critical alarms
    priority=2
else
    # Normal priority for everything else
    priority=0
fi

# Alarm details provided by vCenter as environment variables
app="$VMWARE_ALARM_NAME"
event="$VMWARE_ALARM_EVENTDESCRIPTION"
description="$VMWARE_ALARM_TARGET_NAME"
apikey=PLACE_YOUR_API_KEY_HERE

# Send the push notification via the Prowl API
curl https://prowl.weks.net/publicapi/add -F apikey="$apikey" -F priority="$priority" -F application="$app" -F event="$event" -F description="$description"
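You can test the script manually from the shell by faking the environment variables that vCenter sets for alarm actions (the values below are made up for illustration):

root@vcenter [ ~/bin ]# chmod +x alert.sh
root@vcenter [ ~/bin ]# VMWARE_ALARM_ALARMVALUE=red VMWARE_ALARM_NAME="Test Alarm" VMWARE_ALARM_EVENTDESCRIPTION="Manual test event" VMWARE_ALARM_TARGET_NAME="vcenter" ./alert.sh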

Edit the vCenter alarm definition and add "Run a command" as an action that executes the script:

To receive the push notifications you have to install the Prowl app on your iOS device and log in to Prowl from the app.

That’s basically all that is required for vCenter push notifications on your iPhone.

Homelab the current state

It’s time for an update on my new Homelab. In the past I was running HP Microservers, 2x N40L and 1x Gen8. All these machines are still here but no longer in use and will be for sale soon. My new Homelab now contains 2x HP ML10v2, 1x Zyxel 16-port Gbit switch and 1x Lenovo ix4-300d storage.

2x Server HP ML10 v2: HP ProLiant ML10 v2 G3240 server (non-hot-plug, 4 GB-U, B120i, 4 LFF, 350 W power supply)
2x SSD adapters per server: Kingston SNA-DC2/35 SATA drive carrier, mounting frame for 2.5" drives in 3.5" bays, black
Memory: Samsung 8GB (1x 8GB) DDR3 1600MHz (PC3-12800E) 2Rx8 ECC unbuffered DIMM server/workstation RAM
1x Zyxel 16-port Gbit switch: Zyxel GS1100-16-EU0101F Gigabit Switch (16-port, RJ-45)
1x Lenovo ix4-300d storage: Lenovo Iomega ix4-300d Network Storage (0TB diskless EMEA, Marvell Armada XP, 1.3GHz, 4x HDD, 512MB RAM)
2x HP quad-port network adapters: HP 538696-B21 PCIe Quad Port Gigabit Server Adapter
2x Intel SSD 240GB: Intel SSDSC2BW240H601 internal solid state drive, 240GB, black
4x WD RED 2TB: WD 2TB Red internal NAS hard drive (3.5", 5400rpm, SATA III) WD20EFRX

(Screenshots of the two hosts, xmesx01e and xmesx02e)

I have installed vSphere 6 on my servers. The Lenovo storage provides 2 iSCSI targets to fulfil the requirement of 2 shared datastores for HA. Both datastores are part of an SDRS cluster. Besides this, I have created a cluster with HA and DRS enabled. On the network side I have created 3 standard vSwitches, each with 2 Gbit network adapters: vSwitch0 for virtual machine traffic, vSwitch1 for iSCSI and vSwitch2 for vMotion. Yes, I know that’s a bit old school for networking, but hey, it’s only my Homelab, so a dvSwitch with Network I/O Control should not be required. A simple setup is sufficient for Homelab use. Maybe I will change it in the future, but that will have to wait until the next vSphere release.
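For reference, this is roughly how one of these standard vSwitches can be created from the ESXi shell; a minimal sketch, assuming vmnic2/vmnic3 are the uplinks (adjust the names to your host):

# Create vSwitch1 for iSCSI and attach two Gbit uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1

# Add a port group for the iSCSI VMkernel interface
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch1

The same pattern applies to vSwitch2 for vMotion.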

So what’s next?

Next will be to set up vRealize Operations Manager 6.3 to improve my troubleshooting knowledge of this product. Additionally, I have an idea about how to simplify troubleshooting and improve the supportability of the product, but this is a different topic. Today I’m going to install the latest ESXi patches and check for upgrades of my vCenter appliance.

More to come …

NetApp NFS APD issues – reduction of MaxQueueDepth

If you face APDs (All Paths Down conditions) in your environment, you can follow the KBs below to possibly improve the situation.

http://kb.vmware.com/kb/2016122
https://kb.netapp.com/support/index?page=content&id=1014696

When using NFS datastores on some NetApp NFS filer models on an ESXi/ESX host, you experience these symptoms:
* The NFS datastores appear to be unavailable (grayed out) in vCenter Server, or when accessed through the vSphere Client
* The NFS shares reappear after a few minutes
* Virtual machines located on the NFS datastore are in a hung/paused state when the NFS datastore is unavailable
* This issue is most often seen after a host upgrade to ESXi 5.x or the addition of an ESXi 5.x host to the environment, but it can also occur in vSphere 6 environments.

/var/log/vmkernel.log

NFSLock: 515: Stop accessing fd 0xc21eba0 4
NFS: 283: Lost connection to the server 192.168.100.1 mount point /vol/datastore01, mounted as bf7ce3db-42c081a2-0000-000000000000 ("datastore01")
NFSLock: 477: Start accessing fd 0xc21eba0 again
NFS: 292: Restored connection to the server 192.168.100.1 mount point /vol/datastore01, mounted as bf7ce3db-42c081a2-0000-000000000000 ("datastore01")

Additionally, VMware released a new patch for ESXi 5.5 / 6 which contains improvements to the NFS implementation that should make ESXi more resilient to APDs.

You can find a great overview on the following sites: ESXi 5.5 Patches and ESXi 6 Patches.

Besides running the latest version of ESXi, it is highly recommended to apply the NetApp NFS vSphere recommendations.
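For completeness, the MaxQueueDepth reduction from the title is an ESXi advanced setting; a minimal sketch, assuming the value of 64 suggested in the VMware KB (verify the recommended value for your filer in the KBs above, and note that a host reboot is required for the change to take effect):

# Show the current NFS queue depth
esxcli system settings advanced list -o /NFS/MaxQueueDepth

# Reduce the queue depth to 64 as described in KB 2016122
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64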

Get all guest IPs of all your Virtual Machines using PowerShell

This is a small PowerShell (PowerCLI) script to get all guest IPs from all virtual machines.

# vCenter to connect to
$VCENTER = "10.1.1.3"
Connect-VIServer -Server $VCENTER

# Get all VMs sorted by name
$vms = Get-VM | Sort-Object Name

# Output file for the IP list
$file = "c:\ip_list.txt"

foreach ($vm in $vms) {

	# A guest can have multiple IP addresses; write one line per address
	foreach ($ip in $vm.Guest.IpAddress) {
		Write-Host "$vm $ip"
		"$vm $ip" | Out-File -FilePath $file -Append
	}

}

Disconnect-VIServer -Server $VCENTER -Confirm:$false -Force:$true

Update 2: So what has changed … I joined VMware

Some of you might have noticed there was a countdown on my website. So what has happened? I decided that it was time to move on to new challenges, so I quit my job at Vodafone after nearly 15 years and joined VMware on the 1st of April 2016 as a Senior Technical Support Engineer.

Currently I’m really busy with a lot of trainings and exams, and with meeting a lot of interesting people, mostly in Cork.

Trainings:

* Data Center Virtualization Fundamentals [V6] – done

* VMware vSphere: Install, Configure, Manage [V6] – done

* VMware vSphere: Optimize and Scale [V6]

* VMware Log Insight [V2.0] Fundamentals – done

* VMware vRealize Operations Manager: Install, Configure, Manage [V6.0]

* VMware Cloud Fundamentals – done

* vCloud Air Fundamentals – done

* VMware vCloud Director Fundamentals [V5.1/V5.5]

* VMware vCloud Director: Install, Configure, Manage [V5.5]

* VMware NSX: Install, Configure, Manage [V6.2]

Exams:

* vSphere 6 Foundations – done
* VCA6-DCV – done
* VCA6-HC – done
* VCA6-CMA – done
* VCA6-NV
* VCP6-DCV
* VCP6-CMA
* VCP6-NV

That’s the reason why I haven’t posted a lot of new stuff on my site lately. I’ll keep you posted. :-)