Power off an unresponsive VM using ESXTOP

Just recently we had some hardware issues in our primary datacenter, and during that time a few VMs became unresponsive and needed to be brought back online. The VMs had stopped responding to the normal vSphere commands to reboot, shut down, or even restart. I didn't want to power cycle the entire ESXi host, so instead I wanted to power off just the unresponsive VMs.

Here is a quick and easy way to do just that using ESXTOP.

How to kill an unresponsive VM using ESXTOP

  1. SSH into the host that the virtual machine is currently running on using PuTTY or your SSH client of choice.
  2. Type esxtop to start ESXTOP.
  3. Press c to enter the CPU view of ESXTOP.
    Note: It may be helpful to press Shift+V so that only VMs are shown.
  4. Press f to change which display fields are shown, press c to show the LWID field, then press ENTER to go back to the CPU view.
  5. Finally, press k to open the “kill” prompt, enter the LWID of the VM you would like to power off, and press ENTER.

At this point the VM should be powered off. Keep in mind this performs a hard power off, similar to yanking the power cord from a physical box, so you really only want to do this if the VM is not responding to any other commands that would gracefully power it off.
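If you'd rather avoid the interactive ESXTOP session, ESXi 5.x and later also expose the same world-based kill through esxcli. This is a sketch of that flow, not something covered in the steps above, and the world ID shown is made up for illustration:

```
# List running VM worlds and note the "World ID" column
esxcli vm process list

# Power off the VM with that World ID
# --type can be soft, hard, or force (try soft first)
esxcli vm process kill --type=soft --world-id=1234
```

As with the ESXTOP method, anything other than a soft kill is the equivalent of pulling the power cord, so save the harder types for a VM that won't respond to anything else.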

How to kill a VM using vSphere CLI

You can get the same result as above, powering off a VM, without using ESXTOP at all.

  1. SSH into the host where the unresponsive VM is located and type the following:
    vim-cmd vmsvc/getallvms
  2. Take note of the world ID of the VM, then issue a power off with the following command, making sure to replace (vmid) with that world ID:
    vim-cmd vmsvc/power.off (vmid)
  3. If you find the VM still won't power off, you can kill the process for the VM. Use lsof to find the PID that has the .vmx (config) file open, then use kill to terminate it.
    lsof /vmfs/volumes/datastore/vmname/vmname.vmx
    kill -9 (pid)
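The first two steps above can be strung together with a little awk. This is a minimal sketch, assuming the default getallvms column layout; the VM name, Vmid, and datastore below are made up for illustration:

```shell
# Sample of what `vim-cmd vmsvc/getallvms` prints (hypothetical VM "testvm01")
getallvms_output='Vmid   Name       File                                Guest OS       Version
12     testvm01   [datastore1] testvm01/testvm01.vmx   otherGuest64   vmx-09'

# Column 1 is the world ID (Vmid), column 2 is the VM name
vmid=$(printf '%s\n' "$getallvms_output" | awk -v name="testvm01" '$2 == name { print $1 }')
echo "$vmid"   # prints 12

# On a real host you would then run:
#   vim-cmd vmsvc/power.off "$vmid"
```

The same one-liner works against the live command output; just swap the variable for `vim-cmd vmsvc/getallvms` piped straight into awk.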

Thanks goes to /u/Acaila on Reddit for pointing out this other method!

Similar Posts

  • VMware ESXi 6.0 CBT bug fix released

    VMware

You may remember ESXi 4.x-5.x had a CBT bug, as mentioned here, that could potentially render your backups useless. Well, it seems ESXi 6.0 isn't without its own CBT bug, which can cause the following:

    • Backing up a VM with CBT enabled fails.
    • Powering on virtual machines fails.
    • Expanding the size of a virtual disk fails.
    • Taking VM quiesced snapshots fails.

Prior to the fix, the workaround was to disable CBT. Thankfully VMware has released a fix for the ESXi 6.0 CBT bug, and it's recommended that anyone who uses CBT apply this patch, regardless of whether it was a clean install of VMware ESXi 6.0 or an upgrade to ESXi 6.0.

    Read More “VMware ESXi 6.0 CBT bug fix released”

  • Thank you VMware Community!

    VMware vExpert 2014

So far, 2014 has been a very rewarding year for a number of reasons, two of which have happened within just a week or two of each other. First, Eric Siebert (@ericsiebert) announced on March 27th this year's results of the 2014 Top VMware & Virtualization Blog voting. My first year entered into the voting and I made it to 71st place! A huge thanks goes out not only to Eric but just as much to everyone who voted for me!

To top it off, yesterday VMware announced 2014's first quarter VMware vExpert list. While vExpert isn't a technical certification or even a general measure of VMware expertise, the VMware judges selected people who were engaged with their community and who had developed a substantial personal platform of influence in those communities. There were a lot of very smart, very accomplished people, even VCDXs, that weren't named as vExpert this year. VMware awarded this title to 754 people this year, and on that list of many impressive names you'll find yours truly, Michael Tabor!

    I’m both honored and humbled by both lists. It’s a great feeling to be recognized by not only my peers through the voting in the Top vBlog but also by VMware themselves through the vExpert title.

    So again THANK YOU very much to the entire VMware community, a spectacular community indeed, and congratulations to everyone else that made the Top vBlog and vExpert lists!

  • Easy ESXi 5.5 upgrade via command line

ESXi 5.5 just reached general availability (GA) on Sunday (9/22) and I'm itching to upgrade the home lab to run the latest version with all its goodies. I wanted to try upgrading my hosts without having to go through the same process that I followed setting up ESXi on the NUC in the first place, injecting custom NIC drivers, etc.

    Enter the command line…

    1. Move all VMs from the host and then put the host into Maintenance Mode.
    2. Go to the Configuration tab > Security Profile and Enable SSH under Services.
    3. Under Firewall, enable httpClient (outbound http).
    4. Open PuTTY (or other SSH client) and SSH into your host.
    5. Read More “Easy ESXi 5.5 upgrade via command line”

  • My VMware ESXi Home Lab Upgrade

    Although the focus in my career right now is certainly more cloud focused in Amazon Web Services and Azure, I still use my home lab a lot.

    For the last 5+ years my home lab had consisted of using 3x Intel NUC’s (i5 DC53427HYE), a Synology NAS for shared storage and an HP ProCurve switch. This setup served me well for most of those years. It has allowed me to get many of the certifications I have, progress in my career and have fun as well.

At the start of this year I decided it was time to give the home lab an overhaul. At first I looked at the newest generation of Intel NUCs but really wasn't looking forward to dropping over $1,300 on just partial compute (I'd still need to buy RAM for each of the 3 NUCs). I also wanted something that just worked, no more fooling around with network adapter drivers or doing this tweak or that tweak.

I also no longer needed something with a tiny footprint, and I questioned whether I really needed multiple physical ESXi hosts. My home lab isn't running anything mission critical, and if I really wanted to I could always build additional nested VMware ESXi hosts on one powerful machine.

    So in the end, the below is what I settled on. Replacing all of my compute, most of my networking and adding more storage!

    Read More “My VMware ESXi Home Lab Upgrade”

  • VMware vCenter Server 5.5 Update 1b released

    vCenter Server 5.5 Update 1b released

VMware released vCenter Server 5.5 Update 1b today. The release does not bring any new features but instead patches a few bugs and possible security issues, such as the Heartbleed fix and the most recent OpenSSL vulnerability described in CVE-2014-0224. vCenter 5.5 Update 1b now includes updated OpenSSL libraries: openssl-0.9.8za, openssl-1.0.0m, and openssl-1.0.1h.

    Read More “VMware vCenter Server 5.5 Update 1b released”

  • Upgrade ESXi host to ESXi 5.5 using VMware Update Manager 5.5

    A while back I wrote about how to upgrade to ESXi 5.5 via command line which works great when you only have a few hosts as each host has to download the ISO from the web each time. This time I’ll show you step by step how to upgrade your ESXi 5.1 host to ESXi 5.5 using VMware Update Manager 5.5 (aka VUM).

For this post I'm going to assume you have already upgraded your vCenter and VUM to version 5.5 and have the VUM plugin installed. So let's begin!

    Upgrade ESXi host to 5.5 using VMware Update Manager (VUM)

    1. Open the vSphere client and click on Update Manager

    Read More “Upgrade ESXi host to ESXi 5.5 using VMware Update Manager 5.5”


One Comment

  1. This is a nice post. I thought I would add that ghost PIDs & WIDs exist. If you run either of the commands listed in this post and the result is “PID or WID not found,” double-check the PID/WID; if it is correct, a reboot of the ESXi host is the only fix.

    We only encounter this in our Horizon View Cluster and it happens for customers on ESXi 5.x and ESXi 6.x running View 5.x or 6.x.