Time to replace my Homelab – Part 1

My current homelab consists of 2x HP ML10v2, 1x HP Microserver Gen8, and 2 switches (1x MikroTik Cloud Switch 24-port, 1x Zyxel 16-port). This setup was sufficient for a long time, but due to the increased resource requirements of some VMware products and my activities with vROps, Log Insight and other solutions, I need more resources. Furthermore, I would like to consolidate from 3 servers to 1 and run more workloads nested, based on the lab scripts from William Lam.

Current Setup:

| Hostname | CPU | Memory | Storage | Add-on |
|----------|-----|--------|---------|--------|
| xmesx01e | Intel® Pentium® G3240 (3.1GHz/2-core/3MB/54W) | 32 GB | 1x 240GB Intel SSD | 1x HP Quad Gbit NIC |
| xmesx02e | Intel® Pentium® G3240 (3.1GHz/2-core/3MB/54W) | 20 GB | 1x 240GB Intel SSD | 1x HP Quad Gbit NIC |
| Storage (FreeNAS on HP Microserver Gen8) | Intel® Pentium® G2020T (2.5GHz/2-core/3MB/35W) | 16 GB | 4x 2TB WD RED | 1x HP Smart Array P410 + 512MB cache module + battery |

New Setup:

| Category | Description |
|----------|-------------|
| Type | HP ProLiant DL380 G7 |
| Chipset | Intel® 5520 Tylersburg + Intel ICH10 |
| CPU type | Intel® Xeon® CPU L5630 |
| CPU speed | 2.13 GHz |
| Cores / Threads | 2 CPUs, 4 cores per CPU, 2 threads per core, 16 total |
| L1, L2, L3 cache | 32k+32k, 256k, 12288k (12 MB) |
| Memory | 32 GB DDR3 ECC |
| Installed | 8x 4 GB |
| SAS controller | HP Smart Array P410i Controller |
| PCIe controllers | 1x Gbit LAN Broadcom PCIe card, 2x Fibre Channel Finisar single-port PCIe cards |
| Disks | 8x 2.5" SAS HDDs |

The plan is to extend the machine with 8x 146GB SAS disks and to upgrade the memory to at least 144GB. The FC cards will be removed as they will not be used in my homelab.

If someone is interested in buying one or two HP ML10v2 please send me a message.

to be continued …

vCenter notifications: send push messages to iOS devices using Prowl

I would like to receive push messages for triggered alerts from my vCenter. vCenter offers the possibility to execute a command when an alert is triggered, so I decided to write a script that uses Prowl to send push notifications to my Apple mobile device. I already use Prowl for other systems, like my Homematic home automation. It's easy to use with a simple curl command, and you can find a lot of good examples via Google.

Sources and Services:

VMware Documentation

Push notification service

You have to create an account at www.prowlapp.com and generate a new API key. The new API key has to be added to the script (replace PLACE_YOUR_API_KEY_HERE with your API key).

Script:

This script is really basic and only a proof of concept. I will extend and improve it over time.

root@vcenter [ ~ ]# mkdir bin
root@vcenter [ ~ ]# cd bin
root@vcenter [ ~/bin ]# vi alert.sh
#!/bin/bash
#set -x

value="$VMWARE_ALARM_ALARMVALUE"
if [ "$value" = "red" ]; then
priority=2
else
priority=0
fi
app="$VMWARE_ALARM_NAME"
event="$VMWARE_ALARM_EVENTDESCRIPTION"
description="$VMWARE_ALARM_TARGET_NAME"
apikey=PLACE_YOUR_API_KEY_HERE

curl https://prowl.weks.net/publicapi/add -F apikey="$apikey" -F priority="$priority" -F application="$app" -F event="$event" -F description="$description"
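Before attaching the script to a vCenter alarm action, its logic can be dry-run locally by exporting the environment variables that vCenter would normally set. The values below are invented test data:

```shell
#!/bin/bash
# Dry run of the alarm script logic. The VMWARE_ALARM_* variables are
# normally set by vCenter when the alarm action fires; the values here
# are made up for testing.
export VMWARE_ALARM_ALARMVALUE="red"
export VMWARE_ALARM_NAME="Host memory usage"
export VMWARE_ALARM_EVENTDESCRIPTION="Alarm changed from yellow to red"
export VMWARE_ALARM_TARGET_NAME="xmesx01e"

# Same priority mapping as in alert.sh
value="$VMWARE_ALARM_ALARMVALUE"
if [ "$value" = "red" ]; then
  priority=2
else
  priority=0
fi

# Print what would be sent instead of calling curl
echo "priority=$priority application=$VMWARE_ALARM_NAME event=$VMWARE_ALARM_EVENTDESCRIPTION"
```

If the output looks right, swap the echo back for the real curl call.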

Edit vCenter alert definition:

To receive the push notifications you have to install the Prowl app on your iOS device and log in to Prowl from the app.

That's basically all that is required for vCenter push notifications on your iPhone.

Lenovo ix4-300d SSH, root home directory and cronjobs

A couple of weeks ago I got my hands on 3 Lenovo ix4-300d storage boxes. These boxes are no longer produced, and sometimes you can get them really cheap. At first I planned to use them as iSCSI storage for my home lab, but the performance is not good enough. Therefore I decided to play around with one of the boxes to see what is possible.

I started searching Google for SSH access. The outcome was a great blog post describing how to enable SSH.

Steven Breuls SSH access ix4-300d

In short, just access http://[NAS IP]/manage/diagnostics.html and you can enable SSH and set a password for root.

Attention:

Username : root
Password : soho + password (if password = VMware ==> sohoVMware )

As soon as you have successfully logged in you can change the password by simply running passwd, just like on any other Linux system. Afterwards the soho prefix is no longer required.
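The prefix rule can be illustrated with a small shell snippet, assuming "VMware" as the example password set in the web UI:

```shell
# The ix4-300d prepends "soho" to the password configured in the web UI,
# until you change it with passwd. "VMware" is just an example password.
webui_password="VMware"
ssh_password="soho${webui_password}"
echo "$ssh_password"   # -> sohoVMware
```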

The next step is to create a home directory for the root user. Log in to the web interface and go to Shares. Now create a new share called root.

Attention: This approach ensures that the folder is persistent. If you create it via SSH with mkdir, it will be lost after a reboot.

In your SSH session you can go to /nfs, where you will find all your shares.

cd /nfs

The next step is to change the home directory of the root user to /nfs/root.

vi /etc/passwd

You have to adjust the first line as shown in the screenshot below.
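The adjusted first line of /etc/passwd should look roughly like this (the login shell may differ on your firmware version; only the home directory field changes):

```
root:x:0:0:root:/nfs/root:/bin/sh
```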

Save the file and reboot the NAS.

:wq

Restart using the Web interface.

Now log in using SSH and check that root now has /nfs/root as its home directory.

pwd

Limitation: SSH key authentication is not working due to wrong permissions on the home directory.

In the new home directory you can now create folders and files on the command line and they will be persistent. Only the main folder on the NAS has to be created using the web interface.

If you would like to run scheduled jobs (cronjobs) you can edit the crontab file in /etc. It is also persistent.
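As a hedged example, an entry in /etc/crontab for a nightly backup could look like this (the usual minute-hour-day-month-weekday layout is assumed; check the existing entries in the file for the exact format on your firmware, and the script path is just an illustration):

```
# run a backup every night at 02:00 (example path)
0 2 * * * /nfs/root/backup.sh
```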

I'm using the Lenovo ix4-300d as a backup system for my vServers.

Homelab the current state

It's time for an update on my new homelab. In the past I was running 2x HP Microserver N40L and 1x Gen8. All these machines are still here but no longer in use, and they will be for sale soon. My new homelab now contains 2x HP ML10v2, 1x Zyxel 16-port Gbit switch, and 1x Lenovo ix4-300d storage.

2x Server HP ML10 v2: HP ProLiant ML10 v2 G3240 server (non hot-plug, 4 GB-U, B120i, 4 LFF, 350W power supply)
2x SSD adapters per server: Kingston SNA-DC2/35 SATA drive carrier, mounting frame for 2.5" to 3.5", black
Memory: Samsung 8GB (1x 8GB) DDR3 1600MHz (PC3 12800E) 2Rx8 ECC unbuffered DIMM
1x Zyxel 16-port Gbit switch: Zyxel GS1100-16-EU0101F Gigabit switch (16-port, RJ-45)
1x Lenovo ix4-300d storage: Lenovo Iomega ix4-300d Network Storage (0TB diskless EMEA, Marvell Armada XP, 1.3GHz, 4x HDD, 512MB RAM)
2x HP quad network adapter: HP 538696-B21 PCIe quad-port Gigabit server adapter
2x Intel SSD 240GB: Intel SSDSC2BW240H601 internal solid state drive, 240GB, black
4x WD RED 2TB: WD 2TB Red internal NAS hard drive (3.5", 5400rpm, SATA III) WD20EFRX

xmesx01e

xmesx02e

I have installed vSphere 6 on my servers. The Lenovo storage provides 2 iSCSI targets to fulfil the requirement of 2 shared datastores for HA. Both datastores are part of an SDRS cluster. Besides this, I have created a cluster with HA and DRS enabled. On the network side I have created 3 standard vSwitches, each with 2 Gbit network adapters: vSwitch0 is for virtual machine traffic, vSwitch1 is for iSCSI, and vSwitch2 is for vMotion. Yes, I know that's a bit of an old-school way of networking, but hey, it's only my homelab, so a dvSwitch with Network I/O Control should not be required. A simple setup is sufficient for homelab use. Maybe I'm going to change it in the future, but that will have to wait until the next vSphere release.

So what's next?

Next will be to set up vRealize Operations Manager 6.3 to improve my troubleshooting knowledge of this product. Additionally I have an idea about how to simplify the troubleshooting and improve the supportability of the product, but that is a different topic. Today I'm going to install the latest ESXi patches and check for upgrades of my vCenter appliance.

More to come …

Alternative Setup of a VMware Homelab

My planned new homelab requires a lot of space due to the size of its components. But what if someone would like to have a homelab but does not have the space in the office? One option would be to run a fully virtualized homelab, but this is limited by the resources of the computer/laptop. A more costly option would be to use multiple Intel NUCs and a storage system.

Shopping list:

2x Intel NUC i5: Intel NUC6i5SYH
2x Samsung 32GB memory: SAMSUNG 32GB dual channel kit, 2x 16GB 260-pin DDR4 2133 SO-DIMM (2133MHz, PC4-17000, CL15), suitable for all Intel "Skylake" notebooks with DDR4
2x Samsung M.2 SSD 250GB: Samsung MZ-N5E250BW 850 EVO internal SSD, 250GB (SATA), green
1x QNAP NAS 4 bay: QNAP TS-453A-8G NAS, 4-bay, Intel Celeron Braswell N
4x WD Red 3TB: WD 3TB Red internal NAS hard drive (3.5", SATA) WD30EFRX
1x Netgear Managed Switch 8 port: Netgear ProSafe GS108T (8-Port Gigabit Smart Managed Switch 8 x 10/100/1000 – desktop)

You can use the standard ESXi (5.5 or 6) images provided on the VMware website and install them using a USB stick.

As soon as I have all the hardware available, I'm going to document the complete setup on my website.

Homelab upgrade and Options to install a SSD into the HP Microserver Gen8

I started to plan the upgrade of my current homelab, consisting of 1x HP Microserver Gen8, 2x HP Microserver N40L and a Netgear 24-port switch. The only part I will keep is the HP Microserver Gen8; it already has 16GB of memory and an HP Smart Array P410 with 512MB cache module incl. battery. This machine will run my infrastructure machines, like a tiny vCenter. As workhorses I plan to use HP ML10v2s, because they are cost-efficient and support up to 32GB of memory.

Parts list which I will use in my future homelab:

1x Server HP Microserver Gen8: HP ProLiant MicroServer (Gen8, G1610T, 1P, 4 GB-U, B120i, SATA server)
2x Server HP ML10 v2: HP ProLiant ML10 v2 G3240 server (non hot-plug, 4 GB-U, B120i, 4 LFF, 350W power supply)
2x SSDs per server: Mushkin MKNSSDRE1TB Reactor 7mm SSD, 1TB
2x SSD adapters per server: Kingston SNA-DC2/35 SATA drive carrier, mounting frame for 2.5" to 3.5", black
2x Disks per server: HGST Deskstar NAS 3TB 6Gb/s SATA 7200rpm 24×7 RV S
Switch: TP-Link TL-SG3424 pure-Gigabit L2 managed switch (24x 10/100/1000Mbps RJ45 ports incl. 4 combined SFP slots, fanless passive cooling)
Memory: Samsung 8GB (1x 8GB) DDR3 1600MHz (PC3 12800E) 2Rx8 ECC unbuffered DIMM

There are at least 2 options to install SSDs into the HP Microserver Gen8.

Option 1 is to connect a single SSD to the SATA port on the mainboard and place it where the DVD drive would normally be installed. That's the cheapest method, but it might not be the best one.

Option 2 uses an adapter to mount the 2.5″ SSD into a 3.5″ tray. I would personally use the Kingston SNA-DC2/35 adapter.

UPDATE: How to update ESXi from online repository

UPDATE: HP has changed the URL of their online repository. I have adjusted the post accordingly.

Here is a way to install patches using esxcli directly on the ESXi host from an online repository.

I found a great site listing the available patches, including the commands to install them.

https://esxi-patches.v-front.de/

Example host: HP Microserver Gen8

Get a list of all available updates

esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

Example output:

[root@micro-gen8:~] esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
Name                              Vendor        Acceptance Level
--------------------------------  ------------  ----------------
ESXi-5.1.0-20140102001-standard   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20120904001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20130504001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20140704001-standard   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20140704001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.1.0-20141004001-standard   VMware, Inc.  PartnerSupported
ESXi-5.1.0-20141202001-standard   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20111104001-standard   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20131001001s-standard  VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150204001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.1.0-20150304001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.1.0-20141202001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.1.0-20130701001s-standard  VMware, Inc.  PartnerSupported
ESXi-5.5.0-20140604001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20151104001-standard   VMware, Inc.  PartnerSupported
ESXi-5.1.0-20141201001s-standard  VMware, Inc.  PartnerSupported
ESXi-5.0.0-20121202001-standard   VMware, Inc.  PartnerSupported
ESXi-5.1.0-20140604001-standard   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20130801001s-standard  VMware, Inc.  PartnerSupported
ESXi-5.1.0-20151004001-standard   VMware, Inc.  PartnerSupported
ESXi-5.1.0-20140102001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20120701001s-standard  VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150101001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-5.5.0-20141004001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20140302001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20141204001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20130304001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.1.0-20150304001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20160104001-standard   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20120504001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20151004001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-2494585-standard       VMware, Inc.  PartnerSupported
ESXi-5.1.0-20140604001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20140501001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160204001-standard   VMware, Inc.  PartnerSupported
ESXi-5.1.0-20121201001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-5.0.0-469512-standard        VMware, Inc.  PartnerSupported
ESXi-5.0.0-20150204001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150504001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20140101001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-5.5.0-20150204001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20150902001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20120701001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-5.0.0-20141204001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20140901001s-standard  VMware, Inc.  PartnerSupported
...

Get a list of all available updates for ESXi 6 released in 2016

esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-6.0.0-2016
[root@micro-gen8:~] esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-6.0.0-2016
ESXi-6.0.0-20160104001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160101001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160101001s-standard  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160104001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160204001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160204001-standard   VMware, Inc.  PartnerSupported

To install an update package:

esxcli network firewall ruleset set -e true -r httpClient
esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.0.0-20160204001-standard
esxcli network firewall ruleset set -e false -r httpClient

The same is possible with, for example, HP packages for ESXi:

esxcli software vib update --depot=http://vibsdepot.hpe.com/hpq/latest/index.xml --force

Example output:

[root@micro-gen8:~] esxcli software vib update --depot=http://vibsdepot.hpe.com/hpq/latest/index.xml --force
Installation Result
   Message: Host is not changed.
   Reboot Required: false
   VIBs Installed:
   VIBs Removed:
   VIBs Skipped: Hewlett-Packard_bootbank_char-hpcru_6.0.6.14-1OEM.600.0.0.2159203, Hewlett-Packard_bootbank_char-hpilo_600.9.0.2.8-1OEM.600.0.0.2159203, Hewlett-Packard_bootbank_hp-ams_600.10.3.0-15.2494585, Hewlett-Packard_bootbank_hp-conrep_6.0.0.1-0.0.13.2159203, Hewlett-Packard_bootbank_hp-esxi-fc-enablement_600.2.4.6-2494585, Hewlett-Packard_bootbank_hpbootcfg_6.0.0.02-01.00.11.2159203, Hewlett-Packard_bootbank_hpnmi_600.2.3.14-2159203, Hewlett-Packard_bootbank_hponcfg_6.0.0.04-00.13.17.2159203, Hewlett-Packard_bootbank_hpssacli_2.30.6.0-6.0.0.2159203, Hewlett-Packard_bootbank_hptestevent_6.0.0.01-00.00.8.2159203
esxcli software vib update --depot=http://vibsdepot.hpe.com/hpq/latest/index-drv.xml --force 

Example output:

[root@micro-gen8:~] esxcli software vib update --depot=http://vibsdepot.hpe.com/hpq/latest/index-drv.xml --force
Installation Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed: Mellanox_bootbank_net-mlx4-core_1.9.9.4-1OEM.550.0.0.1331820, Mellanox_bootbank_net-mlx4-en_1.9.9.4-1OEM.550.0.0.1331820, QLogic_bootbank_ima-qla4xxx_500.2.01.31-1vmw.0.3.100400, QLogic_bootbank_scsi-qla4xxx_644.55.36.0-1OEM.550.0.0.1331820
   VIBs Removed: QLogic_bootbank_scsi-qla4xxx_644.6.04.0-1OEM.600.0.0.2159203, VMware_bootbank_ima-qla4xxx_2.02.18-1vmw.600.0.0.2494585, VMware_bootbank_net-mlx4-core_1.9.7.0-1vmw.600.0.0.2494585, VMware_bootbank_net-mlx4-en_1.9.7.0-1vmw.600.0.0.2494585
   VIBs Skipped: BRCM_bootbank_net-tg3_3.137l.v60.1-1OEM.600.0.0.2494585, EMU_bootbank_elxnet_10.5.121.7-1OEM.600.0.0.2159203, EMU_bootbank_ima-be2iscsi_10.5.101.0-1OEM.600.0.0.2159203, EMU_bootbank_lpfc_10.5.70.0-1OEM.600.0.0.2159203, EMU_bootbank_scsi-be2iscsi_10.5.101.0-1OEM.600.0.0.2159203, Emulex_bootbank_scsi-lpfc820_10.5.55.0-1OEM.500.0.0.472560, Hewlett-Packard_bootbank_scsi-hpdsa_5.5.0.46-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_scsi-hpsa_6.0.0.114-1OEM.600.0.0.2494585, Hewlett-Packard_bootbank_scsi-hpvsa_5.5.0.100-1OEM.550.0.0.1331820, Intel_bootbank_intelcim-provider_0.5-1.4, Intel_bootbank_net-i40e_1.2.48-1OEM.550.0.0.1331820, Intel_bootbank_net-igb_5.2.10-1OEM.550.0.0.1331820, Intel_bootbank_net-ixgbe_3.21.4.3-1OEM.550.0.0.1331820, LSI_bootbank_scsi-mpt2sas_15.10.06.00.1vmw-1OEM.550.0.0.1198610, MEL_bootbank_nmlx4-core_3.1.0.0-1OEM.600.0.0.2348722, MEL_bootbank_nmlx4-en_3.1.0.0-1OEM.600.0.0.2348722, MEL_bootbank_nmst_4.0.0.20-1OEM.600.0.0.2295424, Mellanox_bootbank_net-mst_4.0.0.20-1OEM.550.0.0.1331820, QLogic_bootbank_misc-cnic-register_1.712.50.v60.1-1OEM.600.0.0.2494585, QLogic_bootbank_net-bnx2_2.2.5j.v60.3-1OEM.600.0.0.2494585, QLogic_bootbank_net-bnx2x_2.712.50.v60.6-1OEM.600.0.0.2494585, QLogic_bootbank_net-cnic_2.712.50.v60.6-1OEM.600.0.0.2494585, QLogic_bootbank_net-nx-nic_6.0.643-1OEM.600.0.0.2494585, QLogic_bootbank_net-qlcnic_6.1.191-1OEM.600.0.0.2494585, QLogic_bootbank_qlnativefc_2.1.27.0-1OEM.600.0.0.2768847, QLogic_bootbank_scsi-bfa_3.2.5.0-1OEM.550.0.0.1331820, QLogic_bootbank_scsi-bnx2fc_1.712.50.v60.7-1OEM.600.0.0.2494585, QLogic_bootbank_scsi-bnx2i_2.712.50.v60.4-1OEM.600.0.0.2494585, QLogic_bootbank_scsi-qla2xxx_934.5.45.0-1OEM.500.0.0.472560

Homelab + ESXi Hostclient

My homelab currently contains 2x HP Microserver N40L and one HP Microserver Gen8. The N40Ls are running ESXi 5.5 and the Gen8 is running ESXi 6. That's my basic setup. Since I'm only using Apple computers, I searched a long time for an alternative to the vSphere fat client. For a couple of months now, VMware has had a Fling available called ESXi Embedded Host Client. The new host client is an HTML- and JavaScript-based vSphere client running directly on the hypervisor.

Hostclient installation:

Download the latest VIB from https://labs.vmware.com/flings/esxi-embedded-host-client

Upload the file to the ESXi datastore and install it using esxcli.

esxcli software vib install -v /vmfs/volumes/datastore/esxui-3530804.vib

Afterwards you can access the host client via the management IP of your ESXi host: https://MGMT-IP/ui

[Screenshot: host client login page]

Login using your ESXi credentials.

[Screenshot: host client host view]

Most of the standard configuration tasks can be handled with the new host client, which simplifies everything.

Now I hope that VMware will integrate this host client directly into ESXi and finally retire the old vSphere fat client.

A few words about my current homelab and my future extension plans:

* HP Microserver N40L
  * 1x AMD Fusion CPU
  * 8GB memory
  * HP Smart Array P410 512MB cache + battery
  * 2x 2TB Western Digital HDD
  * iLO cards

* HP Microserver Gen8
  * Intel Pentium CPU
  * 16GB memory
  * HP Smart Array P410 512MB cache + battery
  * 2x 2TB Western Digital + 1x 4TB Seagate SSHD
  * SSD cache
  * iLO 4 onboard

Amazon: HP Microserver Gen8

I'm currently planning the replacement of the N40Ls with the new HP ML10 v2. The ML10 v2 has a very attractive price in Germany (around 180€), including 1x Intel Pentium, 4GB memory and no HDD. The new box can handle up to 32GB of memory and 4 HDDs, and it can also be equipped with an Intel Xeon.

Amazon: HP ML10 v2

There will be another post dedicated to the new ESXi host client, including more screenshots and descriptions of how to configure an ESXi host using just this client.

New HP Microserver N54L

HP has announced a new version of the nice little Microserver. After the N36L and N40L, now comes the N54L. The basics are the same; the only real improvement is the new CPU, which now runs at 2.2GHz instead of 1.5GHz. The official memory limit is still 8GB. :-( I had expected HP to increase the limit to 16GB. Nevertheless, it is a cool small server that is perfect for home lab setups.

Quickspecs

long time no post … :-(

I'm very busy at work as we are currently in a major reorganisation. For me personally everything has changed: I'm now in a new department, with a new boss and new responsibilities. But now everything is settling down and I have time for new stuff, like my preparation for the VCP 5 exam. :-)

As preparation I have upgraded my homelab completely to vSphere 5. Both Microservers and the ML115 G5 are running ESXi 5 with the latest patches. The vCenter is running as a virtual machine on the ML115 G5, as this box has much more power than the Microservers. The next step is the setup of shared storage, which is still missing in my homelab, and VDR for backups.

Hopefully I'll be able to finalize these things asap.