Davoud Teimouri - Virtualization & Datacenter

A technology blog mainly focusing on virtualization and datacenter

Reload Partition Table Without Reboot In Linux

The shell is more popular than the GUI on Linux systems, and most Linux administrators perform their tasks and system configuration via the shell, even though it is harder to use than a GUI. They do many things via the shell, for example adding new disks or partitions for applications and services.

Sometimes administrators and users are faced with the below messages after creating a new partition:

WARNING: Re-reading the partition table failed with error 16: Device or resource busy

The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)

Syncing disks.

Or after running the “mkfs.extX” command to create a file system on the partition:

Could not stat /dev/sdXX — No such file or directory

The device apparently does not exist; did you specify it correctly?

Actually, the kernel couldn’t reload the partition table in this situation and asks the administrator to reboot the machine to reload it.

What’s the Solution?

Partprobe

This utility is the first solution for reloading a disk’s partition table. It’s installed on most distributions by default.

Run the below command to reload the partition table manually:
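A minimal example, assuming the new partition was created on /dev/sdX (a placeholder for the real device):

```shell
# Ask the kernel to re-read the partition table of one disk
partprobe /dev/sdX

# With no argument, partprobe re-reads the partition tables of all disks
partprobe
```

If the command returns silently, the new partition should now appear under /dev/.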

If partprobe doesn’t work and the below error message is shown:

Error: Could not stat device /dev/sdX – No such file or directory

Try one of the other solutions below.

Hdparm

The hdparm utility is a general-purpose hard disk utility in Linux; try the below command to reload the partition table:
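A sketch of the usual invocation; the -z option asks the kernel to re-read the partition table of the given device (/dev/sdX is again a placeholder):

```shell
# Force a kernel re-read of the partition table on the device
hdparm -z /dev/sdX
```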

Kpartx / Partx

These two utilities can reload the partition table. kpartx is generally used for multipath devices, and partx for local devices:
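Typical invocations for each tool, with placeholder device names:

```shell
# Multipath device: add (or refresh) the partition mappings under /dev/mapper
kpartx -a /dev/mapper/mpathX

# Local device: -u tells the kernel to update its view of the partitions
partx -u /dev/sdX
```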

Kernel Interface

If the above utilities couldn’t resolve the issue, the last resort is to force the kernel to rescan the device:
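A common form of this last resort is writing to the device’s rescan node in sysfs; sdX below is a placeholder for the disk’s kernel name. blockdev from util-linux offers a similar partition table re-read request:

```shell
# Ask the SCSI layer to rescan the device
echo 1 > /sys/block/sdX/device/rescan

# Alternatively, request a partition table re-read through the block layer
blockdev --rereadpt /dev/sdX
```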

After all that, if the problem still exists, there is no solution except rebooting the OS!


Updated: 23/06/2017 — 3:03 pm

Dell Customized ESXi 6.5 Image – June 2017

Server hardware vendors customize the ESXi images provided by VMware, because the stock VMware image doesn’t include the vendors’ own drivers, only generic ones.

I’ve published some posts about the HPE customized ESXi image, and now you can download the latest Dell customized ESXi image from the link below:

Dell Customized ESXi 6.5 Image

You can also download the HPE customized ESXi image from the link below:

HPE Customized ESXi Image Download – May 2017 (Latest)


ESXi PCI Passthrough – Large VM Memory (MainHeap) BUG!

ESXi PCI Passthrough

This is a combined hardware and software feature on hypervisors that allows VMs to use PCI functions directly, and it is known as VMDirectPath I/O in vSphere environments.

VMDirectPath I/O has several requirements to work properly; please read this KB for more information, as we did!

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2142307

There are also some limitations when using VMDirectPath I/O; the below features will be unavailable:

  • Hot adding and removing of virtual devices
  • Suspend and resume
  • Record and replay
  • Fault tolerance
  • High availability
  • DRS (limited availability. The virtual machine can be part of a cluster, but cannot migrate across hosts)
  • Snapshots

I couldn’t find any other limitation, especially regarding memory size, so why couldn’t we use more than 790 GB to 850 GB of our servers’ memory capacity?!

Anyway, let’s review our test scenario!

Our Test Scenario

We have some Sun X4-8 servers with the below specifications:

  • CPU: 8 x E7-8895 v2
  • Local Disk: 8 x 600 GB SAS Disk
  • Memory: 48 x 16 GB – 768 GB in total
  • PCI Devices:
    • 2 x QLogic 2600 16 Gb – 2 Ports (HBA)
    • 2 x Intel 82599EB 10 Gb – 2 Ports (Network)
  • Embedded Devices: 2 x Intel I350 1Gb

ESXi 6.x U2 had been installed on all the servers, and two virtual machines with 120 CPU cores and 368 GB of memory were created on the servers.

Each virtual machine has one port of Intel 82599EB and two ports of the HBA cards as PCI Passthrough devices.

There was no problem until we added more memory modules to each server and increased the total capacity to 1 TB.

We had planned to increase each VM’s memory capacity to at least 512 GB, but after the memory expansion, we couldn’t power on both virtual machines with more than 395 GB of memory each, or a single one with more than 790 GB.

Something was wrong, and we faced the below error during virtual machine power-on:

Failed to register the device pciPassthru0 for x:x.x due to unavailable hardware or software support.

We checked many things but nothing changed! We reduced the virtual machines’ core counts and tried to power them on with different memory capacities, but nothing changed. We even upgraded to ESXi 6.x U3, but the result was the same.

We added advanced parameters to the virtual machines’ configuration files to allow the virtual machines to use 64-bit MMIO addresses, but nothing changed.
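For reference, the 64-bit MMIO parameters documented in the VMware KB linked above are added to the virtual machine’s .vmx file; the size value below is only an illustrative placeholder:

```
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"
```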

We found the below lines in vmkernel.log; they were logged during virtual machine power-on:

2017-05-31T21:12:41.904Z cpu51:38631)VSCSI: 4038: handle 8199(vscsi0:0):Creating Virtual Device for world 38632 (FSS handle 592241) numBlocks=1169920000 (bs=512)

2017-05-31T21:12:41.904Z cpu51:38631)VSCSI: 273: handle 8199(vscsi0:0):Input values: res=0 limit=-2 bw=-1 Shares=1000

2017-05-31T21:12:41.919Z cpu51:38631)WARNING: Heap: 3721: Heap mainHeap (21594976/31462240): Maximum allowed growth (9867264) too small for size (13111296)

2017-05-31T21:12:41.919Z cpu51:38631)WARNING: Heap: 4214: Heap_Align(mainHeap, 13107208/13107208 bytes, 8 align) failed.  caller: 0x41800433fbe2

2017-05-31T21:12:41.922Z cpu51:38631)VSCSI: 6726: handle 8199(vscsi0:0):Destroying Device for world 38632 (pendCom 0)

2017-05-31T21:12:41.946Z cpu16:33292)WARNING: SP: 1523: Smashing barrier dealloc-Barrier.

2017-05-31T21:12:42.077Z cpu185:35847)Config: 681: “SIOControlFlag2” = 0, Old Value: 1, (Status: 0x0)

2017-05-31T21:13:23.048Z cpu23:37390 opID=c69de623)World: 15516: VC opID 384E9BA6-0000016B-3934 maps to vmkernel opID c69de623

2017-05-31T21:13:23.048Z cpu23:37390 opID=c69de623)Config: 681: “SIOControlFlag2” = 1, Old Value: 0, (Status: 0x0)

It seems there is some limitation in ESXi vmkernel memory management, judging by the below warnings:

WARNING: Heap: 3721: Heap mainHeap (21594976/31462240): Maximum allowed growth (9867264) too small for size (13111296)

WARNING: Heap: 4214: Heap_Align(mainHeap, 13107208/13107208 bytes, 8 align) failed.  caller: 0x41800433fbe2

We searched but could find nothing about the warnings; we only found a way to check the mainHeap status.

The below commands helped us find the mainHeap status:

[root@esxi:~] vsish
/> cat /system/heap
/system/heapMgrVA /system/heaps/
/> cat /system/heaps/mainHeap-0x43004d006000/stats
Heap stats {
Name:mainHeap
dynamically growable:1
physical contiguity:MM_PhysContigType: 1 -> Any Physical Contiguity
lower memory PA limit:0
upper memory PA limit:-1
may use reserved memory:0
memory pool:76
# of ranges allocated:1
dlmalloc overhead:1024
current heap size:1319776
initial heap size:0
current bytes allocated:1161424
current bytes available:158352
current bytes releasable:158048
percent free of current size:11
percent releasable of current size:11
maximum heap size:31462240
maximum bytes available:30300816
percent free of max size:96
lowest percent free of max size ever encountered:96
# of failure messages:0
number of succeeded allocations:36329
number of failed allocations:0
average size of an allocation:380
number of requests we try to satisfy per heap growth:48
number of heap growth operations:2
number of heap shrink operations:0
}

“maximum heap size:31462240” is the same on different ESXi versions, so an upgrade or downgrade couldn’t help us.

Finally, we ran the same tests on an HPE DL580 G8 with 1 TB of memory, and the result was the same as on the Sun X4-8. So this is not related to hardware.

Actually, I guess there is a bug in the ESXi mainHeap, and ESXi can’t power on virtual machines with more than about 800 GB of total memory when we are using PCI Passthrough.

Example of memory configuration:

  1. Two virtual machines with 400 GB of memory or more each: power-on will fail.
  2. One virtual machine with 800 GB of memory or more: power-on will fail.
  3. Two virtual machines with 395 GB of memory or less each: the virtual machines will power on.
  4. One virtual machine with 790 GB of memory or less: the virtual machine will power on.

I’ve reported this to VMware, but please let me know if anyone has had the same experience and knows the solution.

Maybe we missed some configuration, but I don’t know, because we only have this problem with big servers.


Updated: 22/06/2017 — 11:57 pm

NAKIVO Backup & Replication v7.1 – Hyper-V Failover Clusters

NAKIVO Backup & Replication v7.1 has been released with support for Hyper-V Failover Clusters.

A Hyper-V Failover Cluster is a group of Hyper-V servers (nodes) that can use the Live Migration technology to move live VMs between nodes in the cluster without downtime.

Hyper-V Failover Clusters help improve availability and efficiency of the virtualized environment. If one cluster node fails, its VMs can be automatically started on a different cluster node. In addition, VMs in the cluster can be moved between nodes both manually and by Microsoft’s System Center Virtual Machine Manager (SCVMM) Dynamic Optimization feature, which can automatically move VMs to distribute the load evenly between nodes. NAKIVO Backup & Replication v7.1 can track the VM location in the cluster and can continue to protect the VM regardless of which cluster node the VM is located on.

As virtual environments change rapidly and new VMs are created on a daily basis, chasing and protecting those new VMs becomes a challenge. As a result, some VMs become unprotected and their data can be irreversibly lost. To solve this problem, NAKIVO Backup & Replication offers the container protection feature: Customers can add an entire Hyper-V Failover Cluster to VM backup and replication jobs. When an entire Failover Cluster is protected, NAKIVO Backup & Replication will monitor the cluster contents and will automatically add all new VMs in the Failover Cluster to the job. This way, all important VMs will always be protected.

NAKIVO Backup & Replication supports VMware, Hyper-V, and AWS EC2 environments and offers advanced features that increase VM backup performance, improve reliability, speed up recovery, and help save time and money. NAKIVO Backup & Replication provides:

  • Proper backups: Native, agentless, image-based, application-aware VM backup and replication
  • Fast backups: Forever-incremental backups, LAN-free data transfer, network acceleration
  • Small backups: Exclusion of SWAP files and partitions, global backup deduplication, variable backup compression
  • Guaranteed recovery: Instant backup verification with screenshots of test-recovered VMs, backup copy offsite/to the cloud
  • Faster recovery: Instant recovery of VMs, files, Exchange objects, Active Directory objects, instant disaster recovery with VM replicas.


VMware Tools Client – Interact with a VM without Network Connectivity

VMware Tools Client

VMware Tools Client is a beta tool to interact with VMs without network connectivity. As you may know, there are vSphere APIs that developers can use to write code and develop their own tools for vSphere environments.

VMware Tools Client, written by Pierre Lainé, is a useful tool for managing virtual machines via the vSphere Guest API and VMware Tools.

The tool is developed in Java, so the JRE or JDK should be installed on the management machine. As Java is a cross-platform runtime, VMware Tools Client will run on any machine: Windows, Linux, Unix, or Mac.

VMware Tools Client allows administrators to:

  • Upload and download files between the management client machine and the virtual machine.
  • Run scripts to reconfigure the operating system, troubleshoot, and perform other tasks.
  • Troubleshoot the network by pinging addresses from within the virtual machine.

VMware Tools Client is able to connect to vCenter and load the vCenter inventory, so administrators can select and manage any virtual machine.

PowerCLI also provides commands such as Invoke-VMScript to run scripts or batch files via VMware Tools on virtual machines, but VMware Tools Client offers more features.

It’s still a beta version, but it is available for public download at this link:

Download Link

Screenshots:

VMware Tools Client - Main Window

VMware Tools Client - VM Window


Nakivo VM Backup Appliance – QNAP

NAKIVO VM Backup Appliance based on QNAP NAS is available

NAKIVO VM backup appliances based on QNAP NAS combine backup hardware, backup software, backup storage, and data deduplication in a single device. This setup frees up virtual infrastructure resources previously used for backup and results in a smaller footprint and less maintenance. Compared to purpose-built backup appliances, the combination of NAKIVO Backup & Replication and QNAP NAS is up to 5X more affordable, while delivering the same or higher levels of performance and reliability.

Nakivo VM Backup Appliance - QNAP

NAKIVO VM backup appliances based on QNAP NAS also deliver higher VM backup speed when compared to VM-based backup solutions. This is because backup data is written directly to the NAS disks, bypassing network protocols such as NFS and CIFS. NAKIVO VM backup appliance based on QNAP NAS can boost VM backup performance by up to 2X.

NAKIVO VM backup appliances based on QNAP NAS separate virtual infrastructure and backup software, which improves reliability and ensures that recovery can be performed even if a portion of a virtual infrastructure is unavailable.

NAKIVO VM backup appliances based on QNAP NAS can be deployed onsite or offsite – even in locations with no virtual infrastructure – and can be used to store primary and secondary VM backups. NAKIVO VM backup appliances based on QNAP NAS provide all components that are required for operational and disaster recovery: hardware to run restores, backup data, and backup software.



HPE Customized ESXi Image Download – May 2017 (Latest)

HPE has released a new image for ESXi 6.5 this month (May 2017). The customized image contains HPE-certified device drivers and HPE management agents.

It’s recommended to install ESXi on HPE servers using these customized images, because they contain all management tools as well as all certified device drivers.

The HPE customized images are available via the below links; you can download vSphere 6.5 directly, but the other links are available on the VMware website only:

ESX

Don’t use stock VMware images for your server hardware, and don’t use other vendors’ images either, because you will face many problems, such as purple screens.


Teimouri.net © 2012 Frontier Theme