Category: Virtualization


VMware vSphere 7: Which Features Are Removed?

VMware vSphere 7 introduces a lot of new features, but some features have been removed or deprecated in VMware vSphere 7. Some of them were important features for us, but VMware has defined policies to achieve new and updated solutions in server, desktop, and OS-level virtualization.


HPE FlexFabric 650FLB Adapter May Cause PSOD on ESXi 6.x

It seems there is an issue on HPE Blade Servers when a specific adapter is installed. If any of the following server models are in your virtual environment, you should pay attention to this post:

HPE ProLiant BL460c Gen10 Server Blade
HPE ProLiant BL460c Gen9 Server Blade
HPE ProLiant BL660c Gen9 Server

On HPE servers running VMware ESXi 6.0, VMware ESXi 6.5, or VMware ESXi 6.7 and configured with an HPE FlexFabric 20Gb 2-port 650FLB Adapter with driver version 12.0.1211.0 (or earlier), a “wake NOT set” message is logged in the VMkernel logs. Then, after 20 to 30 days of server run time, a Purple Screen of Death (PSOD) may display a brcmfcoe: lpfc_sli_issue_iocb_wait:10828 message. The following errors are displayed in the VMkernel logs after approximately 50 to 70 days of server run time:

2019-07-20T00:46:10.267Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T01:10:14.266Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T02:16:25.801Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T02:22:26.957Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T03:26:39.057Z cpu11:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T04:06:46.158Z cpu11:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330...
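As a quick triage step before any PSOD occurs, you can grep the VMkernel log for the symptom string. A minimal sketch, assuming a standard ESXi log location; the sample file and its contents below are illustrative only, and on a real host you would grep /var/log/vmkernel.log directly:

```shell
# Build a small illustrative sample of a VMkernel log (hypothetical data).
printf '%s\n' \
  '2019-07-20T00:46:10.267Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0' \
  '2019-07-20T00:50:01.100Z cpu33:69346)INFO: unrelated message' \
  > /tmp/vmkernel.sample.log

# Count occurrences of the symptom; any hit suggests the host may be affected.
grep -c 'IOCB wake NOT set' /tmp/vmkernel.sample.log
```

On an affected host the count grows steadily over weeks of uptime, which is why checking it proactively is worthwhile.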


Kubewise, Multi-Platform Desktop Client for Kubernetes

Kubewise is a simple multi-platform desktop client for Kubernetes. In the same way the kubectl command requires only a valid kubeconfig file to run commands against a Kubernetes cluster, Kubewise just requires you to configure one or more valid kubeconfig files to interact with the corresponding Kubernetes clusters. Main features:

Support for multiple kubeconfig files.
UI-driven interaction with the most frequently used Kubernetes entities.
One-click terminal with the proper KUBECONFIG env variable set.
Generation of custom kubeconfig files for a given namespace.
Highlighting of sustainability- and security-related data.

Requirements: Kubewise is a desktop application built with HTML, JavaScript, CSS, and Node.js, and it runs on Electron, a framework for building cross-platform apps using web technologies. So to run it, basically all you need is:

Any modern macOS, Windows, or Linux (Debian-based) OS.
kubectl v1.14.0+ installed to access Kubernetes v1.14.0+ clusters.

External Links: Kubewise
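The kubeconfig-only model Kubewise relies on is the same one kubectl uses: pointing the KUBECONFIG environment variable at one or more files selects which clusters you talk to. A minimal sketch; the file paths are hypothetical, and the kubectl lines require kubectl plus a reachable cluster:

```shell
# Point kubectl (and tools built on kubeconfig, like Kubewise) at one file.
export KUBECONFIG="$HOME/.kube/prod.kubeconfig"

# Several files can be joined with ':' and are merged into one logical config.
export KUBECONFIG="$HOME/.kube/prod.kubeconfig:$HOME/.kube/staging.kubeconfig"

# With kubectl installed, commands then run against the selected context:
# kubectl config get-contexts
# kubectl get nodes
```

This is also what Kubewise's one-click terminal does for you: it opens a shell with KUBECONFIG already set to the selected cluster's file.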


VMware’s Tool to Find the Ports and Protocols of VMware Products

VMware products need to communicate with each other, or with other components of services, via the network, so administrators must know which ports and protocols each component uses to communicate. VMware has provided a tool, called “VMware Ports and Protocols”, to find the ports and protocols of popular VMware products. Currently, you can find the ports and protocols used by the following products:

vSphere
vSAN
NSX for vSphere
vRealize Network Insight
vRealize Operations Manager
vRealize Automation

Note: Only recent versions of the mentioned products are covered, not out-of-support versions. External Links: VMware Ports and Protocols
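Once you have looked up a port in the tool, you can verify it is actually reachable from a client machine. A minimal sketch assuming `nc` (netcat) is available; the vCenter hostname is hypothetical, and 443 is the standard vSphere Client/API HTTPS port:

```shell
# -z: scan without sending data, -w 2: two-second timeout (hostname is an example).
nc -z -w 2 vcenter.example.com 443 && echo "port 443 reachable"
```

A quick check like this separates firewall problems from service-side problems before you start digging into the product itself.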


HPE G10 Servers: Best and Optimized Settings for VMware vSphere

Optimizing HPE server configuration is always one of the biggest challenges for virtual environment administrators. Administrators are always trying to deliver the best performance by changing configurations on the hypervisor, virtual machines, or other components of the virtual infrastructure. Server vendors publish best practices for achieving the best performance on their server platforms. Server hardware is one of the most important components in every environment; the server provides the main power and resource: computing resources. The processor, memory, and the other server components are made to work almost like a human brain. Some Important Things! The human brain consumes about 20 percent of the body’s energy, and consuming more energy means more heat in the body. Server components need energy and cooling, just like the human body, and energy and cooling cost money. So there is a challenge in any IT environment: keeping the balance between performance and the cost of services. What’s Important About Server Hardware? In fact, processors and the cooling system consume most of the power provided by the power modules. Don’t think about a single server; think about hundreds of servers and thousands of processors: thousands x 200W (at least)! The best performance brings more cost to any environment and also needs more maintenance. What Are the Best Practices? Best practices depend on the workload...
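On the ESXi side, the host's CPU power policy interacts with the BIOS power profile chosen on the server. A sketch for inspecting and changing it from the ESXi shell; the advanced option name /Power/CpuPolicy and the value string are assumptions based on common ESXi 6.x builds, so verify them against your version's documentation before use:

```shell
# Show the current CPU power management policy (option name assumed).
esxcli system settings advanced list --option=/Power/CpuPolicy

# Switch to the "High Performance" policy, often paired with a static
# high-performance BIOS power profile on HPE Gen10 servers.
esxcli system settings advanced set --option=/Power/CpuPolicy --string-value="High Performance"
```

Whether "High Performance" or "Balanced" is the right trade-off depends on exactly the performance-versus-power-cost balance discussed above.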


Using Network Partitioning (NPAR) in VMware ESXi

Data center design is changing year by year. Before virtualization, all servers were physical and data centers were full of devices and cables. Virtualization helped make data centers smaller and consolidate different workloads on the same hardware. Hardware technologies are also helping to achieve this goal. Network Partitioning (NPAR) is one of the hardware technologies that helps reduce cabling and switch devices and use all I/O capacity in data centers. What’s Network Partitioning (NPAR)? NPAR is an operating-system- and switch-agnostic technology that allows customers to reduce the number of I/O adapters required to support different application workloads. Traditional best practices require separate LAN or SAN connections for different aspects of application workloads. Since Converged Network Adapters already support widely used SAN protocols like Fibre Channel over Ethernet (FCoE) and iSCSI, administrators can already reduce the number of adapters needed for separate protocols, including separate Fibre Channel and Ethernet adapters. With NPAR, these adapters can partition their network bandwidth further into multiple virtual connections, making one dual-port adapter appear as eight adapters to the operating system for use by applications. This greatly simplifies the physical connectivity to the server, reduces implementation time, and lowers the acquisition cost of the...
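Because each NPAR partition is presented to the hypervisor as its own physical NIC, you can verify partitioning from the ESXi shell. A sketch using standard esxcli networking commands; the vmnic name is an example:

```shell
# Each NPAR partition of the adapter is enumerated as a separate vmnic,
# so a dual-port adapter partitioned 4 ways shows up as eight NICs here.
esxcli network nic list

# Inspect one partition in detail (driver, link speed, etc.).
esxcli network nic get -n vmnic4
```

From there the partitions are attached to vSwitches or distributed switches exactly like ordinary physical uplinks.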


What’s New in Proxmox VE 6

Proxmox Server Solutions GmbH, developer of the open-source virtualization management platform Proxmox VE, today released its major version Proxmox VE 6.0. The comprehensive solution, designed to deploy an open-source software-defined data center (SDDC), is based on Debian 10.0 Buster. It includes updates to the latest versions of the leading open-source technologies for virtual environments like a 5.0 Linux kernel (based on Ubuntu 19.04 “Disco Dingo”), QEMU 4.0.0, LXC 3.1.0, Ceph 14.2 (Nautilus), ZFS 0.8.1, and Corosync 3.0.2. Proxmox VE 6.0 delivers several new major features, enhancements, and bug fixes.


Oracle Secure Global Desktop 5.5 for Secure Remote Access to Data and Applications in the Cloud and On Premises

Oracle Secure Global Desktop (SGD) is a web-based solution that allows users to remotely access data and applications in a data center or the cloud. It provides administrators with a single pane of glass to manage secure access to resources with a completely air-gapped, highly secure connection between the client and the data and applications accessed, allowing complete control of which user can access which application and the server it runs on, all through a convenient web interface.


Oracle Linux Virtualization Manager

This new server virtualization management platform can be easily deployed to configure, monitor, and manage an Oracle Linux Kernel-based Virtual Machine (KVM) environment with enterprise-grade performance and support from Oracle. Red Hat did the same and developed Red Hat Enterprise Virtualization Manager based on the open-source oVirt project.


The ramdisk ‘tmp’ is full – VMware ESXi 6.x on HPE ProLiant

We are using HPE ProLiant servers in our virtual environment to deliver different services to our customers, with VMware ESXi installed on all servers as the hypervisor. As you know, each vendor has a customized image that includes VMware ESXi, drivers, and management tools. It seems there is an issue with HPE Agentless Management Service (AMS) in the latest ESXi image.
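To confirm the symptom, you can check ramdisk usage from the ESXi shell. A sketch of the checks and the workaround we mean; the AMS init script name varies between HPE image versions, so treat that line as an assumption to verify on your build:

```shell
# Show usage of the host's ramdisks, including 'tmp'.
vdf -h

# See which files are filling /tmp (in reports of this issue,
# AMS leaves files such as ams-bbUsg.txt behind).
ls -lh /tmp

# Restart the HPE Agentless Management Service as a workaround
# (script name/path assumed; differs across HPE custom images).
/etc/init.d/hp-ams.sh restart
```

If `vdf -h` shows the tmp ramdisk at or near 100%, host operations that need /tmp (including some upgrades) can start failing, which is how this issue usually gets noticed.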


VMware ESXi Warning: Failed to reserve volume f530 28 1

We had a disaster on one of our storage devices, which was hosting some virtual machines. After recovering the storage, the following log appeared on some ESXi hosts:

ESXi-Server vmkwarning: cpu45:34354)WARNING: Fil3: 2469: Failed to reserve volume f530 28 1 5cde7257 f5e2aa1e 67208f12 f0f55d7c 0 0 0 0 0 0 0

The issue happened when the hosts tried to mount the affected datastores, even after the storage issue was resolved. Some of the affected datastores would not mount automatically after the disaster, so we had to rescan all paths on the ESXi servers or mount the datastores manually. I hope this post helps you recover from similar issues faster. Further Reading: ESXi Fails with “Corruption in dlmalloc” on HPE Server VMware ESXi Queue Depth – Overview, Configuration and Calculation
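The manual recovery described above can be sketched with standard esxcli storage commands; the datastore label is an example:

```shell
# Rescan all storage adapters on the host for recovered paths/devices.
esxcli storage core adapter rescan --all

# List filesystems/datastores and their current mount state.
esxcli storage filesystem list

# Mount a datastore that did not come back automatically, by volume label.
esxcli storage filesystem mount -l "Datastore01"
```

Rescanning first matters: until the paths are rediscovered, the mount attempt can fail with the same reservation warning shown above.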


What’s WinDocks?

Windocks combines Docker Windows containers with SQL Server database cloning for a modern, open data delivery solution. Enterprises modernize application development, testing, reporting, and BI with existing licenses and infrastructure, at a fraction of the cost of alternatives.