Update: Trigger a User event from the VCSA command line

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Recently I faced the challenge of a script running on a VCSA that executes some checks and, in case something is detected, has to trigger a vCenter alarm.

My solution is to log a user event which will be captured by a vCenter alarm definition.

1. vCenter alarm definition

As the trigger, manually enter the event type: vim.event.GeneralUserEvent

2. The script that logs the user event

#!/usr/bin/python

import sys
sys.path.insert(0, '/usr/lib/vmware/site-packages/')
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl
import atexit

username = 'administrator@vsphere.local'
password = 'VMware1!'

v = sys.version_info
try:
        if v.minor < 7:
                # Older Python builds on the VCSA connect without an SSL context
                si = SmartConnect(host="localhost", user=username, pwd=password)
        else:
                # Newer Python builds need an explicit SSL context;
                # certificate verification is disabled for the local connection
                s = ssl.SSLContext(ssl.PROTOCOL_TLS)
                s.verify_mode = ssl.CERT_NONE
                si = SmartConnect(host="localhost", user=username, pwd=password, sslContext=s)
        # Disconnect cleanly when the script exits
        atexit.register(Disconnect, si)
        content = si.RetrieveContent()
        # Log the user event against the root folder so the alarm definition can catch it
        ds = content.rootFolder
        log = content.eventManager.LogUserEvent(entity=ds, msg="Postgres Corruption detected")
except Exception:
        print("Unexpected error:", sys.exc_info()[0])

Result:

The resulting alarm can be caught by vROps and used to trigger further events or tickets.

vROps log bundle automation Part 1

In my role as an onsite GSS Senior TSE at a large VMware customer in Germany I have to collect vROps log bundles multiple times a week. As this is a time-consuming manual process, I have created a script in my spare time which fully automates this task.
The script triggers the creation of a log bundle using the CASA API, checks the status of the creation and downloads the log bundle when it is ready. After a successful download it automatically uploads the bundle to the corresponding SR folder on sftpsite.vmware.com.

The script is available in my Gogs: vrops-support-bundle

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Steps how to use the script:

Set up a Linux virtual machine as host for the script. A machine with 1 vCPU, 2 GB memory and 100 GB storage is sufficient. The storage can be smaller; it really depends on the size of your vROps environment and the size of the log bundles.

Requirements

* Perl
* curl
* access to the vROps master node on port 443
* direct sftp access to sftpsite.vmware.com

Instructions

Usage: ./standard_vrops_api.pl MASTER-FQDN

You have to provide the admin user password and the VMware SR number.

* the script collects environment information
* triggers a full log bundle
* checks the status of the log bundle creation
* copies the log bundle from all nodes to the master
* downloads the log bundle to the local machine
* uploads the log bundle to sftpsite.vmware.com
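
For reference, the general CASA API pattern behind these steps looks like the minimal curl sketch below. The exact endpoint name and payload are assumptions for illustration and may differ between vROps versions, so verify them against the CASA API documentation of your release:

# Hypothetical: trigger creation of a full support bundle via the CASA API
# (endpoint and payload are assumptions, check your vROps version)
curl -k -u admin -X POST "https://MASTER-FQDN/casa/support/bundle" \
     -H "Content-Type: application/json" \
     -d '{"bundleType": "full", "includeAllNodes": true}'

# Hypothetical: poll the same endpoint until the bundle is ready
curl -k -u admin "https://MASTER-FQDN/casa/support/bundle"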

Screenshots and more details will follow in part 2.

Lenovo ix4-300d SSH, root home directory and cronjobs

A couple of weeks ago I got my hands on 3 Lenovo ix4-300d storage boxes. These boxes are no longer produced and sometimes you can get them really cheap. At first I planned to use them as iSCSI storage for my home lab, but the performance is not good enough. Therefore I decided to play around with one of the boxes to see what is possible.

I started searching Google for SSH access. The outcome was a great blog post describing how to enable SSH.

Steven Breuls SSH access ix4-300d

In short, just access the page http://[NAS IP]/manage/diagnostics.html, where you can enable SSH and set a password for root.

Attention:

Username : root
Password : soho + password (if password = VMware ==> sohoVMware )

As soon as you have successfully logged in you can change the password by simply running passwd like on any other Linux system. Afterwards the soho prefix at the beginning of the password is no longer required.

The next step is to create a home directory for the root user. Log in to the web interface, go to Shares and create a new share called root.

Attention: This way the folder is persistent. If you create it via SSH with mkdir, it will be lost after a reboot.

In your SSH session you can go to /nfs; there you will find all your shares.

cd /nfs

The next step is to change the home directory of the root user to /nfs/root.

vi /etc/passwd

You have to adjust the home directory field in the first line, as shown in the example below.
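
A minimal example of the change, assuming the box uses /bin/sh as root's shell; only the sixth field (the home directory) changes, the other fields may look different on your system:

root:x:0:0:root:/root:/bin/sh         (before)
root:x:0:0:root:/nfs/root:/bin/sh     (after)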

Save the file and reboot the NAS.

:wq

Restart using the Web interface.

Now log in using SSH and check that root now has /nfs/root as its home directory.

pwd

Limitation: SSH key authentication does not work due to the wrong permissions of the home directory.

In the new home directory you can now create folders and files on the command line and they will be persistent; only the main share folder under /nfs has to be created using the web interface.

If you would like to run scheduled jobs (cronjobs) you can edit the crontab file in /etc. It is also persistent.
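
For example, a nightly backup job could look like the hypothetical entry below; the rsync source and target paths are made up for illustration, and depending on the cron implementation the user field (root) may not be required:

# hypothetical nightly backup at 02:30, paths are examples only
30 2 * * * root rsync -a user@vserver.example.com:/var/www/ /nfs/root/backup/vserver/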

I’m using the Lenovo ix4-300d as a backup system for my vServers.

Get all guest IPs of all your virtual machines using PowerShell

This is a small PowerShell script to get all guest IPs from all virtual machines.

$VCENTER="10.1.1.3"
Connect-VIServer -Server $VCENTER

$vms = Get-VM | Sort

$file = "c:\ip_list.txt"

foreach ($vm in $vms){
	
	foreach($ip in $vm.Guest.IpAddress){
		Write-Host "$vm $ip"
		"$vm $ip" | Out-File -FilePath $file -Append

	}

}

Disconnect-VIServer -Server $VCENTER -Confirm:$false -Force:$true

Upload a file to WebDAV using PowerShell

This is a small PowerShell script to upload a file from your local disk to a WebDAV server. It is also proxy-capable.

#Complete path of the file to be uploaded
$file = "d:\test_file.txt"
 
#URL without the last "/"
$url  = "https://YOUR-SERVER/webdav"
$proxy = "http=PROXY-SERVER:PORT"  
 
#User and Pwd for Webdav Access
$user = "USERNAME"
$pass = "PASSWORD"
 
$url += "/" + $file.split('\')[(($file.split("\")).count - 1)]

# Set binary file type
Set-Variable -name adFileTypeBinary -value 1 -option Constant 
 
#Read the local file into a byte array using ADODB.Stream
$objADOStream = New-Object -ComObject ADODB.Stream
$objADOStream.Open()
$objADOStream.Type = $adFileTypeBinary
$objADOStream.LoadFromFile("$file")
$buffer = $objADOStream.Read()

#Create the HTTP request object and configure the explicit proxy (setting 2 = use the given proxy)
$objXMLHTTP = New-Object -ComObject MSXML2.ServerXMLHTTP
$objXMLHTTP.setProxy(2, $proxy, "")

#Upload the file via HTTP PUT (synchronous request with basic authentication)
$objXMLHTTP.Open("PUT", $url, $False, $user, $pass)
$objXMLHTTP.send($buffer)
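
To verify that the upload succeeded you can check the HTTP response after the send; a WebDAV server typically answers a successful PUT with status 201 (created) or 204 (overwritten):

#Print the HTTP status returned by the server
Write-Host "Upload status: $($objXMLHTTP.status) $($objXMLHTTP.statusText)"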

Monitor System Logs with Logwatch

I’m using the tool Logwatch to get a daily log report from all my servers by email.

Install Logwatch:

apt-get update
apt-get install logwatch

Config file:

/usr/share/logwatch/default.conf/logwatch.conf

To simplify access to the config file I use a symlink.

cd /etc/logwatch
ln -s /usr/share/logwatch/default.conf/logwatch.conf

Configuration (parameters which I have changed):

#Output = stdout
Output = mail
#To make Html the default formatting Format = html
Format = html
MailTo = YOUR-EMAIL-ADDRESS
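
Before scheduling the job you can trigger a test run manually to verify the configuration; command line options override the config file:

logwatch --range today --output stdout --format text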

Cronjob:

crontab -e
30 0 * * * /usr/sbin/logwatch

Based on this configuration you will receive a nice daily overview report.

Collect ESXi stats with PowerShell and send them to a Graphite server

I was not quite happy with the statistics from vCenter. It is not possible to get an overview across all my ESXi servers. :-( Someone pointed me to the tool Graphite as a cool solution to visualize this kind of statistics, so I decided to give it a try.

I created a virtual machine running CentOS with Graphite as the target for my collected statistics. I will post a how-to later.

Furthermore I had to create a PowerShell script which collects the stats of each ESXi host in my cluster, transforms them into a Graphite-compatible format and transfers them to the Graphite server.
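
Graphite's plaintext protocol, which the script speaks on TCP port 2003, expects one metric per line in the form metric.path value unix-timestamp. A single sample sent by the script therefore looks like this (host name made up):

dc.vmware.prod.esx01.cpu 42.5 1439980800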

#vCenter settings
$vCenter = "VCENTER-IP"
$user = "USERNAME"
$password = "PASSWORD"
$cluster = "YOUR-CLUSTER"
#Graphite server 
$remoteHost = "GRAPHITE-SERVER-IP"

#Socket to send results to Graphite server	 
$socket = new-object System.Net.Sockets.TcpClient($remoteHost, 2003)
$stream = $socket.GetStream()
$writer = new-object System.IO.StreamWriter $stream

Write-Host "Connected"
#Connect to vCenter
Connect-VIServer -Server $vCenter -User $user -Password $password 

#Get all ESXi hosts from Cluster
$esxhosts = Get-VMHost -Location $cluster | Sort-Object

#Collect stats foreach ESXi server and bring it in a Graphite compatible format
foreach ($esxName in $esxhosts){

	$allstats = Get-Stat -Entity $esxName -MaxSamples 1 -Realtime -Stat cpu.usage.average,disk.usage.average,net.usage.average | Sort-Object
	Write-Host $esxName
	foreach($stat in $allstats){
		#Get Timestamp of stats and convert to UNIX timestamp
		$time = $stat.Timestamp
		$date = [int][double]::Parse((Get-Date -Date $time -UFormat %s))
		#Filter only average stats (Stats for CPU's are available foreach CPU and as average of all CPU's)
		$instance = $stat.Instance
		if($instance -eq [DBNull]::Value){
			#create a base for the graphite tree
			$base = "dc.vmware.prod."
			#remove the .usage.average
			$type = (($stat.MetricId).Replace(".usage.average",""))
			#remove the domain from the esxi name
			$name = (($esxName.Name).Replace(" ","")).Replace(".your-domain.de","")
			$value = $stat.Value
			#build the graphite compatible string
			$result = "$base$name.$type $value $date"
			#Console output just for testing
			Write-Host "Sent Result: $result"
			#send result to graphite server
			$writer.WriteLine($result)
			$writer.Flush()
		}
	}
	Write-Host " "
}
## Close the streams
$writer.Close()
$stream.Close()
#disconnect from vcenter
Disconnect-VIServer -Server $vCenter -Confirm:$false -Force
Write-Host "Done"

Below is a screenshot of a Graphite graph displaying the average CPU usage of all ESXi servers.

That’s it :-)