Proxmox Server Solutions GmbH, developer of the open-source virtualization management platform Proxmox VE, today released Proxmox VE 6.0, a new major version. The comprehensive solution, designed to deploy an open-source software-defined data center (SDDC), is based on Debian 10.0 Buster. It includes updates to the latest versions of the leading open-source technologies for virtual environments, such as a 5.0 Linux kernel (based on Ubuntu 19.04 “Disco Dingo”), QEMU 4.0.0, LXC 3.1.0, Ceph 14.2 (Nautilus), ZFS 0.8.1, and Corosync 3.0.2. Proxmox VE 6.0 delivers several new major features, enhancements, and bug fixes.
Category: Server Virtualization
VMware vSphere, Microsoft Hyper-V, Citrix XenServer, Oracle VM Server, Red Hat KVM.
We are using HPE ProLiant servers in our virtual environment to deliver different services to our customers. VMware ESXi is installed on all servers as the hypervisor. As you know, each vendor provides a customized image that includes VMware ESXi, drivers, and management tools. It seems there is an issue with the HPE Agentless Management Service (AMS) in the latest ESXi image.
QEMU (short for Quick Emulator) is a free and open-source emulator that performs hardware virtualization. QEMU is a hosted virtual machine monitor: it emulates the machine’s processor through dynamic binary translation and provides a set of different hardware and device models for the machine, enabling it to run a variety of guest operating systems.
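As a rough illustration of how those device and hardware models are selected, the sketch below builds a `qemu-system-x86_64` command line. The disk image name and resource sizes are invented for the example; only the flags themselves are standard QEMU options.

```python
# Minimal sketch: assembling a QEMU command line for a single guest.
# The image name and sizes below are illustrative, not from the article.

def build_qemu_command(disk_image, memory_mb=1024, cpus=2, kvm=True):
    """Return the argv list for launching a guest with QEMU."""
    cmd = ["qemu-system-x86_64",
           "-m", str(memory_mb),      # guest RAM in MiB
           "-smp", str(cpus),         # number of virtual CPUs
           "-drive", f"file={disk_image},format=qcow2"]
    if kvm:
        # With KVM, QEMU uses hardware-assisted virtualization instead of
        # pure dynamic binary translation.
        cmd.append("-enable-kvm")
    return cmd

if __name__ == "__main__":
    print(" ".join(build_qemu_command("guest.qcow2")))
```

Dropping `-enable-kvm` falls back to the emulation path described above, at a significant performance cost.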
All VMware admins know that a guest OS needs drivers for paravirtualized devices, and VMware provides those drivers as a package called VMware Tools. VMware Tools is available for supported operating systems, and there is also an open-source implementation called Open-VM-Tools, which is available for modern Linux distributions and is typically installed by the operating system installer.
If you plan to deploy the vCenter Server Appliance (vCSA) without a DNS server, the installation will fail even if you provide the standard information for deploying a virtual appliance. The installer asks for the server's FQDN before the deployment starts, and it then tries to resolve that FQDN.
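The same pre-check the installer performs can be reproduced in a few lines of Python, which is handy for verifying your DNS (or hosts-file) entry before you start the deployment. The FQDN in the comment is a made-up example:

```python
import socket

def can_resolve(fqdn: str) -> bool:
    """Return True if the name resolves, mimicking the installer's check."""
    try:
        socket.gethostbyname(fqdn)
        return True
    except socket.gaierror:
        return False

# Before starting a vCSA deployment, verify the FQDN you plan to enter,
# e.g. can_resolve("vcsa.example.local"). If it returns False, add the
# DNS record (or a hosts-file entry) first, or the deployment will fail.
```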
vRealize Operations Manager delivers intelligent operations management with application-to-storage visibility across physical, virtual, and cloud infrastructures. Using policy-based automation, operations teams automate key processes and improve IT efficiency.
If you are still using vSphere 5.5 in your environment, you can skip this post; otherwise, it will be useful for you. vSphere is currently the most popular server virtualization software, and any change or notification published by VMware has an impact on many organizations' IT infrastructure. So what's the news? Answer: say goodbye to vSphere 6.0.
Whether you want to provide Ceph Object Storage and/or Ceph Block Device services to Cloud Platforms, deploy a Ceph Filesystem or use Ceph for another purpose, all Ceph Storage Cluster deployments begin with setting up each Ceph Node, your network, and the Ceph Storage Cluster. A Ceph Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also required when running Ceph Filesystem clients.
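As a rough illustration of such a minimal cluster, a starting `ceph.conf` might look like the fragment below. The FSID, hostname, and addresses are invented for the example; only the option names are standard Ceph settings:

```ini
[global]
fsid = 3f2b9b4e-1a2b-4c6e-9d2a-7e5f00000001   ; cluster UUID (illustrative)
mon_initial_members = node1                   ; first Ceph Monitor host
mon_host = 192.0.2.10                         ; Monitor address (example range)
public_network = 192.0.2.0/24
osd_pool_default_size = 3                     ; replicas kept per object
```

The Monitors, Managers, and OSDs are then bootstrapped on the nodes this file describes; a Metadata Server is added only if CephFS clients will be served.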
Software-defined storage (SDS) enables users and organizations to uncouple or abstract storage resources from the underlying hardware platform for greater flexibility, efficiency and faster scalability by making storage resources programmable.
VMware has released a patch for ESXi 6.7 (ESXi670-201901001) to resolve some important issues; all of the resolved issues are bug fixes, and this patch doesn't include any security fixes. As you may know, ESXi patches are cumulative: a new patch includes all fixes released before it.
“Corruption in dlmalloc” issue occurs because multiple esxcfg-dumppart threads attempt to free memory which has been used for configuring the dump partition. Thread A checks if there are entries to be freed and proceeds to free them, while within the same time frame, Thread B is also attempting to free the same entries.
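The race described above can be sketched in a few lines of Python. The entry list and names are invented for the illustration, but the pattern is the same: two threads draining one shared free list, with a lock serializing the check-then-free step so no entry is freed twice.

```python
import threading

entries = list(range(1000))   # pending entries to be freed (illustrative)
freed = []                    # records every free that happens
lock = threading.Lock()       # the fix: serialize the free path

def free_entries():
    # Without the lock, thread A and thread B could both observe the same
    # entry and free it twice -- the "corruption in dlmalloc" pattern.
    while True:
        with lock:
            if not entries:
                return
            entry = entries.pop()   # check and take under one lock
        freed.append(entry)         # simulate the actual free

threads = [threading.Thread(target=free_entries) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After both threads finish, every entry appears in `freed` exactly once; removing the lock reintroduces the double-free window.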
Based on VMware KB2147888, this issue is resolved in ESXi 6.0 U3. But why does the issue still occur on ESXi 6.0 U3 or ESXi 6.5 U1 when they are installed on HPE ProLiant servers?
There is no need to introduce Nakivo to my readers at the beginning of each post. Nakivo is now one of the leaders in the backup and replication market. I have been writing reviews about Nakivo Backup & Replication since version 6.x, and every new version has brought amazing and useful features. Nakivo Backup & Replication v8.1 comes with two new features as well, along with a lot of improvements and fixes. Let's review the new features, improvements, and fixes.
I guess you know the procedure, but let's review it quickly. You can export virtual machines with several different tools, such as the vSphere Client, the vSphere Web Client, and others. Administrators do this every day and are familiar with OVA and OVF. Exporting small virtual machines via the vSphere Client, vSphere Web Client, or PowerCLI works fine. But if you want to export a virtual machine with 200–300 GB of virtual disks (thin or thick), there is a serious problem; first of all, make sure you have enough free space. Have you ever tried to export a big or monster VM as OVA or OVF?
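For large VMs, one common alternative to the GUI clients is VMware's `ovftool` CLI, which streams the export. A small sketch of driving it from Python is below; the vCenter name, VM inventory path, and target path are all invented for the example:

```python
# Sketch: assembling an ovftool command to export a VM as an OVA.
# Server, user, VM path, and target below are illustrative placeholders.

def build_ovftool_command(vcenter, vm_path, target,
                          user="administrator@vsphere.local"):
    """Return an argv list for VMware's ovftool export CLI."""
    source = f"vi://{user}@{vcenter}/{vm_path}"   # vi:// locator of the VM
    return ["ovftool", "--noSSLVerify", source, target]

cmd = build_ovftool_command("vcenter.example.local",
                            "Datacenter/vm/BigVM",
                            "/exports/BigVM.ova")
# The list can then be passed to subprocess.run(cmd) on a host with ovftool.
```

Because `ovftool` streams directly to the target, it avoids keeping the whole export in a browser session, which is exactly where big-VM exports tend to break.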
These days, everyone knows what cloud computing is, and cloud-based services are used to speed up the deployment of an organization's services. Operating-system-level virtualization, or containers, helps system architects and administrators achieve these goals. There are many container implementations today, and they are compatible with different hardware architectures and operating systems.
You may know that Unix has had OS-level virtualization for many years; this technology is much older than other virtualization approaches such as full virtualization or paravirtualization.
Full virtualization (VMware ESXi, Hyper-V) and paravirtualization (Xen, UML) can run different guest operating systems, but containers share the host kernel, so there is no way to run a different guest OS. Of course, some solutions are under development.
Raw Device Mapping (RDM) is one of the oldest VMware vSphere features; it was introduced to work around some limitations of virtualized environments, such as virtual disk size limits, and to deploy services on top of failover clustering.
You can use a raw device mapping (RDM) to store virtual machine data directly on a SAN LUN, instead of storing it in a virtual disk file. You can add an RDM disk to an existing virtual machine, or you can add the disk when you customize the virtual machine hardware during the virtual machine creation process.
I published a post about VMware Tools 10.3.0 and its issues last week. At the time, VMware recommended downgrading VMware Tools to the previous stable version, and VMware Tools 10.3.0 was removed from the download page.
VMDK (Virtual Machine Disk) has been designed to mimic the operation of a physical disk. Virtual disks are stored as one or more VMDK files on the host computer or a remote storage device, and they appear to the guest operating system as standard disk drives.
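On VMFS datastores, "one or more files" usually means a small text descriptor plus a large `-flat.vmdk` extent holding the actual data. A descriptor for a hypothetical 20 GB disk looks roughly like the fragment below; all values (sector count, geometry, file name) are illustrative:

```text
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description: 41943040 sectors of 512 bytes = 20 GB of data
RW 41943040 VMFS "example-flat.vmdk"

# The Disk Data Base
ddb.virtualHWVersion = "13"
ddb.geometry.cylinders = "2610"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
```

The guest never sees this split; the hypervisor presents the descriptor-plus-extent pair as a single standard disk drive.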
VMware supports three provisioning types:
Eager-zeroed Thick Provisioned
Lazy-zeroed Thick Provisioned
Thin Provisioned