VM Operations in ESXi 4.1 / ESXi 5.5 from the Command Line

List all the VMs running on the host:

~ # vim-cmd vmsvc/getallvms
Vmid     Name                        File                 Guest OS      Version   Annotation
612    vm_name1    [datastore] vm_name1/vm_name1.vmx     rhel6_64Guest   vmx-10
633    vm_name2    [datastore] vm_name2/vm_name2.vmx     rhel6_64Guest   vmx-10
646    vm_name3    [datastore] vm_name3/vm_name3.vmx     rhel6_64Guest   vmx-10
647    vm_name4    [datastore] vm_name4/vm_name4.vmx     rhel6_64Guest   vmx-10
664    vm_name5    [datastore] vm_name5/vm_name5.vmx     rhel6_64Guest   vmx-10


To power off a VM:

vim-cmd vmsvc/power.off 612   # here 612 is the Vmid from the list above


To power on a VM:

vim-cmd vmsvc/power.on 612   # here 612 is the Vmid from the list above
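Rather than copying the Vmid by hand from the getallvms listing, you can look it up by name. The sketch below only parses text, so it runs anywhere; the vim-cmd calls themselves (ESXi-only) are shown commented out, and vm_name1 is just the example name from the listing above.

```shell
# Sketch: look up a VM's Vmid by name in `vim-cmd vmsvc/getallvms` output.
vmid_by_name() {
    # $1 = VM name; reads getallvms output on stdin and prints the Vmid
    awk -v name="$1" '$2 == name { print $1 }'
}

# On a live ESXi host:
#   id=$(vim-cmd vmsvc/getallvms | vmid_by_name vm_name1)
#   vim-cmd vmsvc/power.off "$id"
```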

There are many more subcommands you can use, listed below:

Commands available under vmsvc/:
acquiremksticket            get.snapshotinfo
acquireticket               get.spaceNeededForConsolidation
connect                     get.summary
convert.toTemplate          get.tasklist
convert.toVm                getallvms
createdummyvm               gethostconstraints
destroy                     login
device.connection           logout
device.connusbdev           message
device.disconnusbdev        power.getstate
device.diskadd              power.hibernate
device.diskaddexisting      power.off
device.diskremove           power.on
device.getdevices           power.reboot
device.toolsSyncSet         power.reset
device.vmiadd               power.shutdown
device.vmiremove            power.suspend
devices.createnic           power.suspendResume
disconnect                  queryftcompat
get.capability              reload
get.config                  setscreenres
get.config.cpuidmask        snapshot.create
get.configoption            snapshot.dumpoption
get.datastores              snapshot.get
get.disabledmethods         snapshot.remove
get.environment             snapshot.removeall
get.filelayout              snapshot.revert
get.filelayoutex            snapshot.setoption
get.guest                   tools.cancelinstall
get.guestheartbeatStatus    tools.install
get.managedentitystatus     tools.upgrade
get.networks                unregister
get.runtime                 upgrade
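One handy combination from this list is power.getstate with power.off: you can guard the power-off so an already-off VM is not touched. A minimal sketch, where only the text check is runnable here (the vim-cmd calls need an ESXi host and are commented out):

```shell
# Sketch: check `vim-cmd vmsvc/power.getstate <Vmid>` output, which
# ends with "Powered on" or "Powered off", before powering off.
is_powered_on() {
    grep -q "Powered on"
}

# Live usage on an ESXi host (612 is the Vmid from getallvms):
#   vim-cmd vmsvc/power.getstate 612 | is_powered_on && vim-cmd vmsvc/power.off 612
```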

VAAI Primitive UNMAP and Space Reclaim

VAAI UNMAP, or space reclaim, was introduced in vSphere 5.0 to allow the ESXi host to inform the storage array (with VAAI Thin Provisioning support) that files (VMDKs) or VMs have been moved or deleted from a thin-provisioned VMFS datastore. This allows the array to reclaim the freed blocks.

This primitive is best run in a maintenance window, since running it on a datastore that is in use by VMs can adversely affect their I/O: operations can take longer to complete, resulting in lower throughput and higher latency.

UNMAP performance depends entirely on the storage array and how quickly it completes the process.

There is no way of knowing in advance how long an UNMAP operation will take: anywhere from a few minutes to a couple of hours, depending on the size of the datastore, the amount of space to be reclaimed, and how well the array handles the UNMAP operation.

To run the command, you should change directory to the root of the VMFS volume that you wish to reclaim space from. The command is run as:

 

vmkfstools -y <% of free space to unmap>

Storage arrays that support VAAI can be found in the VMware Hardware Compatibility List.

In vSphere 5.5.x, an esxcli command was introduced in place of vmkfstools -y to do the space reclaim on the storage array. To check whether a VMFS datastore supports reclaim, run the following command using the naa ID of the device backing the datastore:

~ # esxcli storage core device vaai status get -d naa.5006016050912500f7d9a21b8794f321
   naa.5006016050912500f7d9a21b8794f321
   VAAI Plugin Name: VMW_VAAIP_RX
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported   <<<< Thin Provisioning (you can do the space reclaim)
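The "Delete Status" check above can be scripted before attempting a reclaim. A minimal sketch: only the text check runs off-host; the esxcli/vmkfstools calls require an ESXi host and are commented out, and the datastore label "datastore1" is a placeholder.

```shell
# Sketch: verify the Delete Status line from
# `esxcli storage core device vaai status get` before reclaiming.
delete_supported() {
    grep -qi "Delete Status: supported"
}

# On ESXi 5.5:
#   esxcli storage core device vaai status get -d naa.xxxx | delete_supported \
#       && esxcli storage vmfs unmap -l datastore1
# On ESXi 5.0/5.1 (older form, reclaim 60% of free space):
#   cd /vmfs/volumes/datastore1 && vmkfstools -y 60
```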

Rollback ESXi 5.x Upgrade (Reverting to a Previous Version of ESXi)

To revert to a previous version of ESXi after upgrade:

1). From the Console screen press F2
2). Then press F12 to see the shutdown options for your ESXi.
3). Press F11 to reboot.
4). When the Hypervisor progress bar starts loading, press Shift+R.
5). Now you will see a warning message:

Current hypervisor will permanently be replaced
with build: X.X.X-XXXXXX. Are you sure? [Y/n]

6). Press Shift+Y to roll back the ESXi to previous version.
7). Press Enter to boot in to the selected ESXi version.

ESXi: Import Failing with Error "Failed to open 'xyz.vmdk': The system cannot find the file specified (25)"

Imports will fail if the "multiextent" module is not loaded on your ESXi host; it is not loaded by default.

Load the module by running the following command:

 

# vmkload_mod multiextent
Module multiextent loaded successfully
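To make this idempotent in a script, you can check whether the module is already loaded first. `vmkload_mod -l` lists loaded modules one per line, name first; the parsing helper is the only part runnable off an ESXi host, so the live calls are commented out.

```shell
# Sketch: load multiextent only when it is not already loaded.
module_loaded() {
    # $1 = module name; reads `vmkload_mod -l` output on stdin
    awk -v m="$1" '$1 == m { found = 1 } END { exit !found }'
}

# Live usage on ESXi:
#   vmkload_mod -l | module_loaded multiextent || vmkload_mod multiextent
```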

PSOD on vSphere ESXi 5.x: Collect Logs from the ESXi Server

If you hit a PSOD, collect the logs (step by step method).

1. Reboot the PSODed ESX/ESXi server.
2. Copy the dump off the diagnostic partition:
   esxcfg-dumppart --copy --devname /vmfs/devices/disks/<naa.xxxxxxxxxxx:y> --zdumpname <esxpsodname>
3. Extract the log from the zdump by running:
   esxcfg-dumppart --log <esxpsodname>
4. Run vm-support (to collect all the logs).
5. Send the files to VMware to get support.

ESXi Upgrade Commands (CLI): ESXi 5.0 to ESXi 5.x

~ # esxcli software sources profile list [cmd options]
Cmd options:
-d|--depot=[ <str> ... ]
Specifies full remote URLs of the depot index.xml or server file path pointing to an offline bundle .zip file. (required)
--proxy=<str>
Specifies a proxy server to use for HTTP, FTP, and HTTPS connections. The format is proxy-url:port.

~ # esxcli software profile
Usage: esxcli software profile {cmd} [cmd options]
Available Commands:

get       Display the installed image profile.
install   Installs or applies an image profile from a depot to this host. This command completely replaces the installed image with the image defined by the new image profile, and may result in the loss of installed VIBs. The common VIBs between host and image profile will be skipped. To preserve installed VIBs, use profile update instead. WARNING: If your installation requires a reboot, you need to disable HA first.
update    Updates the host with VIBs from an image profile in a depot. Installed VIBs may be upgraded (or downgraded if --allow-downgrades is specified), but they will not be removed. Any VIBs in the image profile which are not related to any installed VIBs will be added to the host. WARNING: If your installation requires a reboot, you need to disable HA first.
validate  Validates the current image profile on the host against an image profile in a depot.

This command takes approximately 10-15 minutes; meanwhile, check /var/log/esxupdate.log for progress messages.

After this, reboot your ESXi host.
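The update result printed by esxcli includes a "Reboot Required:" line, which a script can check instead of always rebooting. A sketch, where the depot path and profile name are hypothetical examples for your environment and the esxcli call itself (ESXi-only) is commented out:

```shell
# Sketch: decide whether to reboot based on the profile update result.
reboot_required() {
    grep -q "Reboot Required: true"
}

# Live usage on ESXi (offline bundle already uploaded to a datastore):
#   esxcli software profile update \
#       -d /vmfs/volumes/datastore1/ESXi550-offline-bundle.zip \
#       -p ESXi-5.5.0-1331820-standard | reboot_required && reboot
```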

Check the Path Status in ESXi

~ # esxcfg-mpath -l                                      (to list all the device paths)

~ # esxcfg-mpath -s                                      (to change the path state)

~ # esxcfg-mpath -b -d <naa name of your device/lun>     (to list the paths and state for a device)

Other ESXi path command options:

esxcfg-mpath <options> [--path=<path>]

-l|--list                     List all Paths on the system with their
                              detailed information.
-L|--list-compact             List all Paths with abbreviated information.
-m|--list-map                 List all Paths with adapter and device mappings.
-b|--list-paths               List all devices with their corresponding paths.
-G|--list-plugins             List all Multipathing Plugins loaded into the
                              system.
-s|--state <active|off>       Set the state for a specific LUN Path. Requires
                              path UID or path Runtime Name in --path.
-P|--path                     Used to specify a specific path for operations.
                              The path name may be either the long Path UID or
                              the shorter runtime name of the path. This can
                              be used to filter any of the list commands to
                              a specific path if applicable.
-d|--device                   Used to filter the list commands to display
                              only a specific device.
-h|--help                     Show this message.
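A common use of the compact listing is to find failed paths quickly. `esxcfg-mpath -L` prints one path per line including a "state:active" or "state:dead" field; the sketch below only parses that text, with the live command (ESXi-only) commented out.

```shell
# Sketch: pick out dead paths from `esxcfg-mpath -L` compact output.
dead_paths() {
    awk '/state:dead/ { print $1 }'
}

# Live usage on ESXi:
#   esxcfg-mpath -L | dead_paths
```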

OpenFiler
Openfiler is a Linux-based storage solution for SMBs, providing both file-based network-attached storage (NAS) and block-based storage area network (SAN) functionality. Openfiler can run with minimal resources.

By using Openfiler you can provide cost-effective solutions for your storage needs, instead of going for enterprise storage arrays. Openfiler supports both regular operating systems and virtualized environments. Below I will show how we can take advantage of Openfiler to provide SAN/NAS storage for our VMware ESX host.

There are two types of Openfiler installation:
1. Text-based installation
2. Graphical installation

Note: Make sure you have not partitioned all the disk space. Keep the OS partition small and leave the rest as free space, which we will later use as an iSCSI LUN. At the time of installation, or post-installation, you have to configure a static IP address and the necessary DNS settings.

Once your installation is done and the system reboots, you will get a screen like the one below showing the URL where you can access Openfiler from a browser.

image1

Step-1: Open the URL shown on the Openfiler startup screen in any browser to log in to its web interface. The first time, it will give a security warning like the picture shown below.

image2

Ignore the warning and proceed to the login page. You will get a login screen like the one below.

image3

Note: Don't log in with the root credentials you provided during the Openfiler installation.
Default username: openfiler
Default password: password
After a successful login you will get a screen containing all your Openfiler server information.

image4

Step-2: Go to the Volumes tab. If you are opening this for the first time, you will get a screen like the one below.
image5
You have to select "create new volume group". Then you will get a screen like below.

image6

Step-3: Click on the disk name (in my case /dev/sdb) under Edit Disk, which contains the free space we are going to use for iSCSI storage. You will now get a screen where you can create your physical volume.
image7

From the Partition Type dropdown menu, select "Physical volume" and click Create. It will create a physical volume.
Step-4: Now click on Volume Groups on the right-hand side. You will be presented with a window like the one below.
image8

Here, give a valid name for the volume group and select the physical volumes shown in the list to add them to the volume group. Then click "Add volume group".
Once volume group creation finishes, you can see the volume group listed under the Volumes tab.

image9

Step-5: Now that the volume group is created, we have to configure a LUN (Logical Unit Number). To do this, click on Add Volume.

image10

Here, give a volume name and move the slider to allocate space to the LUN. Choose "iSCSI" as the Filesystem type and click Create.
Step-6: The next action is to enable the iSCSI service. Open the Services tab; you will find several services listed, with options to enable or disable them. From here, enable the "iSCSI target server" service (it is disabled by default).

image11

Step-7: Once iSCSI is enabled, we need to share the volume with specific IPs/networks. Open the System tab; at the bottom you will find Network Access Configuration. Provide the details of the ESX/ESXi server here and click Update.

image12

Note: Sometimes, if you allow only a single host in Network Access Configuration, the iSCSI volume may not show up in the storage volumes in the vSphere client. To overcome this, allow the whole network, as shown in the second rule in Network Access Configuration.
Step-8: Now you need to configure the iSCSI target and allow the host. Go to the Volumes tab; on the right-hand side you will find iSCSI Targets. From there, go to Target Configuration as shown in the figure below.

image13

You will see a long name under Target IQN. This is going to be your iSCSI target name. Click Add to add the target.
Step-9: Then go to LUN Mapping and click Map to map the LUN to the iSCSI target.

image14

Step-10: Next, go to Network ACL on the same page. The list of networks/hostnames here matches what you entered in Network Access Configuration earlier, and all are set to Deny by default. Change them to Allow and click Update.

image15

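On the ESXi side, the final step is to point the software iSCSI initiator at the Openfiler target and rescan. A sketch for ESXi 5.x, where the Openfiler IP, port, and vmhba number are assumptions for your environment; the esxcli calls are commented out (they need an ESXi host), and a small format check on the target address is the only part runnable here.

```shell
# Sketch: sanity-check the send-target address before adding it.
valid_target() {
    # crude ip:port check for the send-target address
    echo "$1" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}:[0-9]+$'
}

# Live usage on ESXi 5.x (assumed values):
#   esxcli iscsi software set --enabled=true
#   valid_target 192.168.1.10:3260 && \
#       esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.10:3260
#   esxcli storage core adapter rescan -A vmhba33
```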

Difference between VMware VMotion & Storage VMotion (SVMotion)

The differences between VMware VMotion and VMware Storage VMotion (SVMotion), and their benefits.

With VMotion, VM guests are able to move from one ESX Server to another with no downtime for the users. What is required is a shared SAN storage system between the ESX Servers and a VMotion license.

Storage VMotion (or SVMotion) is similar to VMotion in the sense that it moves VM guests without any downtime. However, what SVMotion also offers is the capability to move the storage for that guest at the same time that it moves the guest. Thus, you could move a VM guest from one ESX server’s local storage to another ESX server’s local storage with no downtime for the end users of that VM guest.

 

vmotion

CASE: We had set up a local environment with one ESX server, so we used local storage for the VMs. Expansion and the request for DRS and HA brought us to three ESX servers and a W2K8 Storage Server as an iSCSI SAN. Now we needed to bring the virtual machines from local storage to shared storage. This is how I did it:

1) Download and install the Remote Command Line Interface (RCLI).
2) Run this command:

C:\Program Files\VMware\VMware VI Remote CLI\bin> svmotion.pl --interactive
Entering interactive mode. All other options and environment variables will be ignored.

Enter the VirtualCenter service url you wish to connect to: VCENTER
Enter your username: DOMAIN\USER
Enter your password: *****

Attempting to connect to https://vcenter/sdk. Connected to server.

Enter the name of the datacenter: DATACENTER01
Enter the datastore path of the virtual machine: [ESX01:storage1] VMachine01/VMachine01.vmx
Enter the name of the destination datastore: MySAN_LUN0

You can also move disks independently of the virtual machine. If you want the disks to stay with the virtual machine, skip this step.
Would you like to individually place the disks ? no

Performing Storage VMotion. 0%
|——————————————————-|100% ################################################################

Storage VMotion completed successfully.

Disconnecting.

C:\Program Files\VMware\VMware VI Remote CLI\bin>
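The same move can be scripted non-interactively: svmotion.pl also accepts a --vm option whose value is the source .vmx datastore path and the destination datastore joined by a colon. A sketch reusing the values from the session above; the helper that builds the argument string is the only part runnable off the RCLI, so the svmotion.pl invocation is commented out.

```shell
# Sketch: build the --vm argument for a non-interactive svmotion.pl run.
svmotion_vm_arg() {
    # $1 = datastore path of the .vmx, $2 = destination datastore name
    printf '%s:%s' "$1" "$2"
}

# Live usage from the RCLI prompt (same values as the session above):
#   svmotion.pl --url=https://vcenter/sdk --datacenter=DATACENTER01 \
#       --vm="$(svmotion_vm_arg '[ESX01:storage1] VMachine01/VMachine01.vmx' MySAN_LUN0)"
```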

 

 


========================================================================================

VMotion

Migrating a powered-on VM (virtual machine) from one ESX host to another ESX host is called VMotion.

VMotion is a non-disruptive process: while a VMotion is in progress, the VM and the applications running in it (the guest OS) are not affected.

There are three types of migration possible from vCenter (VC).

VMware VMotion requirements:
1. Both ESX hosts should have compatible CPUs.
2. Both ESX hosts should have access to shared storage.
3. Both ESX hosts should have a VMkernel port group configured for VMotion.
4. Both ESX hosts should use a private network of at least 1 Gbps for VMotion.
5. Both ESX hosts must be added to vCenter.

 

Scan for a New SCSI Device to Detect a New LUN Without Reboot (CentOS)

This scans the SCSI host, making new devices (LUNs) visible without a reboot.

echo "- - -" > /sys/class/scsi_host/host#/scan

Verify

fdisk -l
tail -f /var/log/messages

Replace host# with the actual value, such as host0. You can find the scsi_host values using the following command:

# ls /sys/class/scsi_host

Now type the following to send a rescan request:

echo "- - -" > /sys/class/scsi_host/host0/scan
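If you do not want to guess host numbers, you can simply rescan every SCSI host. A small sketch; the sysfs base directory is a parameter so the loop can be exercised against a scratch directory, and on a real box you pass /sys/class/scsi_host (as root).

```shell
# Sketch: write the rescan trigger to every host under the given base dir.
rescan_all_hosts() {
    base="$1"                       # normally /sys/class/scsi_host
    for h in "$base"/host*; do
        [ -d "$h" ] || continue     # glob may match nothing
        echo "- - -" > "$h/scan"
    done
}

# Live usage (as root):
#   rescan_all_hosts /sys/class/scsi_host
```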

Can devices be rescanned in Linux OS without reloading the Linux driver?

A new LUN was added to the storage, but the LUN cannot be seen by the driver or the OS. Rebooting or reloading the driver would be too disruptive.

Answer

There is a procedure which forces the driver to rescan the targets to allow a new device to be added. This triggers the driver to initiate a LUN discovery process.

To force a rescan from the command line, type the following command

# echo "scsi-qlascan" > /proc/scsi/<driver-name>/<adapter-instance>

Where:

- <driver-name> = qla2100, qla2200, qla2300 (2.4 kernel drivers) or qla2xxx (2.6 kernel drivers)
- <adapter-instance> = the instance number of the HBA

After executing this command, force the SCSI mid layer to do its own scan and build the device table entry for the new device by typing the following command

# echo "scsi add-single-device 0 1 2 3" > /proc/scsi/scsi

Where:

- "0 1 2 3" = your "Host Channel ID LUN"

The scanning must be done in the above mentioned order: first the driver (qla2300/qla2200 driver, etc.), and then the Linux SCSI mid layer (i.e. OS scan).

See more at: http://linoxide.com/how-tos/linux-scan-scsi/#sthash.CUZ4SYF6.dpuf