Tagged: HPE


New HPE Customized ESXi Image Not Supported on ProLiant BL c-Class Servers

HPE has released the supported ESXi versions as customized images for ProLiant and other HPE server products, but there is bad news for HPE ProLiant BL460c owners.

Which Versions Have Been Released?
The following HPE Custom Images for VMware were released in January 2021:
• VMware-ESXi-7.0.1-17325551-HPE-701.0.0.10.6.3.9-Jan2021.iso
• VMware-ESXi-6.7.0-17167734-HPE-Gen9plus-670.U3.10.6.3.8-Jan2021.iso
• VMware-ESXi-6.5.0-Update3-17097218-HPE-Gen9plus-650.U3.10.6.3.8-Dec2020.iso

Which Hardware Products Are Not Supported?
The mentioned images do not support installation on the following server products:
• HPE ProLiant BL460c Gen10 Server Blade
• HPE ProLiant BL460c Gen9 Server Blade
• HPE ProLiant BL660c Gen9 Server Blade

Which Versions Are Supported on ProLiant BL c-Class Servers?
The following versions must be used on ProLiant BL c-Class servers:
• VMware-ESXi-7.0.1-16850804-HPE-701.0.0.10.6.0.40-Oct2020.iso
• VMware-ESXi-6.7.0-Update3-16713306-HPE-Gen9plus-670.U3.10.6.0.83-Oct2020.iso
• VMware-ESXi-6.5.0-Update3-16389870-HPE-Gen9plus-650.U3.10.6.0.86-Oct2020.iso

What Should We Do for the Future?
Don't worry: the next releases will be supported on ProLiant BL c-Class servers. Wait for new releases of the HPE Custom Images for VMware.

See Also
Network Connection Problem on HPE FlexFabric 650 (FLB/M) Adapter

References
Notice: HPE c-Class BladeSystem – The HPE Custom Images for VMware Released in January 2021 Are Not Supported on ProLiant BL c-Class Servers
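
Before planning an upgrade, it helps to know which custom image build a host is currently running. The following is a minimal PowerCLI sketch, assuming an existing vCenter connection; the vCenter and host names are placeholders, not from the post:

    # Minimal PowerCLI sketch (assumed vCenter/host names) to read the installed image profile,
    # whose name identifies the HPE custom image and ESXi build currently on the host.
    Connect-VIServer -Server "vcenter.example.local"        # hypothetical vCenter name
    $vmhost = Get-VMHost -Name "esxi01.example.local"       # hypothetical host name
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    ($esxcli.software.profile.get.Invoke()).Name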


Network Connection Problem on HPE FlexFabric 650 (FLB/M) Adapter

We have many HPE BladeSystem C7000 chassis hosting our virtual infrastructures and our customers' virtual infrastructures, and many of them are filled with HPE ProLiant BL460c Gen10 blades. Recently, we had a serious network connectivity problem on some of the servers whenever a FlexFabric 650 (FLB/M) adapter was installed in the blade server.

What's the Exact Problem?
Our servers were working fine, but after a while we lost vMotion network connectivity and the hosts couldn't migrate virtual machines, even between servers in the same C7000 chassis. The problem happened on one chassis while the other chassis had no problems. The network adapter was physically connected but didn't pass any network traffic, somewhat like an ARP problem! The network adapter status was also Degraded in the iLO web administration.

List of All Attempts to Recover From the Issue
Ask any IT guy about a network problem on a network card and, one hundred percent sure, the answer will be: upgrade the firmware, update the driver, upgrade...! So, I did it! 😀 Let's review the list (a quick driver/firmware check is sketched below):
• Upgrade firmware for FlexFabric 650M and FlexFabric 650FLB.
• Update driver (ESXi).
• Turn the Virtual Connect modules off and on.
• Unassign and reassign server profiles.
• Check all configurations and compare them with the other chassis.
• Re-seat one...
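
Before and after the firmware/driver work listed above, it is useful to record the driver and firmware versions of the adapter's uplinks on each host so chassis can be compared. A minimal PowerCLI sketch, assuming an existing connection; the host name is a placeholder:

    # Minimal PowerCLI sketch (assumed host name) that reports driver and firmware versions
    # of the physical NICs, so versions can be compared between the affected and healthy chassis.
    $vmhost = Get-VMHost -Name "esxi-blade01.example.local"   # hypothetical host name
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    foreach ($nic in (Get-VMHostNetworkAdapter -VMHost $vmhost -Physical)) {
        $info = $esxcli.network.nic.get.Invoke(@{nicname = $nic.Name})
        [pscustomobject]@{
            Host     = $vmhost.Name
            Nic      = $nic.Name
            Driver   = $info.DriverInfo.Driver
            Version  = $info.DriverInfo.Version
            Firmware = $info.DriverInfo.FirmwareVersion
        }
    }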


HPE Offline Bundle 3.5.0-12 for VMware ESXi

HPE Offline Bundle 3.5.0-12 for VMware ESXi includes important fixes for issues that can cause service downtime. If you have VMware ESXi 6.5 or VMware ESXi 6.7, you must update the HPE Offline Bundle to the new version (3.5.0-12).
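
Once the bundle ZIP has been uploaded to a datastore and the host is in maintenance mode, it can be applied through esxcli. A minimal PowerCLI sketch; the host name and bundle path are placeholders, not taken from the post:

    # Minimal sketch (assumed host name and depot path) to update VIBs from the HPE offline bundle.
    # Put the host in maintenance mode first and reboot afterwards if the update requires it.
    $vmhost = Get-VMHost -Name "esxi01.example.local"          # hypothetical host name
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.software.vib.update.Invoke(@{
        depot = "/vmfs/volumes/datastore1/hpe-esxi-offline-bundle-3.5.0-12.zip"   # hypothetical path
    })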


HPE Virtual Connect 4.80

Today, HPE released Virtual Connect 4.80; the release includes many security fixes. Beginning with VC 4.80, the HPE VC Flex-10 10Gb Ethernet Module is no longer supported, having reached its 5-year End of Support period in July 2018. To upgrade to Virtual Connect 4.80, the minimum required Virtual Connect Support Utility version is 1.15.0.

Supported Interconnect Modules
Virtual Connect 4.80 is supported on the following interconnect modules:
• HPE VC Flex-10/10D Module
• HPE VC FlexFabric 10Gb/24-Port Module
• HPE VC FlexFabric-20/40 F8 Module
• HPE VC FlexFabric-20/40 F8 TAA Module
• HPE VC 8Gb 20-Port FC Module
• HPE VC 8Gb 24-Port FC Module
• HPE VC 16Gb 24-Port FC Module
• HPE VC 16Gb 24-Port FC TAA Module

Major Enhancements
VC 4.80 contains the following enhancements:
• Support for lldpRemManAddrTable (LLDP-MIB) and lldpV2RemManAddrTable (LLDPv2-MIB)
• Added SNMP trap support for non-correctable ECC memory parity errors
• For VC-Enet and FlexFabric modules in non-FIPS mode and for new VC Domains, TLS v1.2 is the default TLS setting, and TLS v1.0 and TLS v1.1 are disabled.

Major Fixes
The VC 4.80 release resolves the following issues:
• With the HPE Virtual Connect FlexFabric 10Gb/24-port Module, FCoE high-throughput traffic might see packet loss.
• When...


HPE FlexFabric 650FLB Adapter May Cause PSOD on ESXi 6.x

It seems there is an issue on HPE blade servers when a specific adapter is installed in the server. If any of the below server models is in your virtual environment, you should pay attention to this post:
• HPE ProLiant BL460c Gen10 Server Blade
• HPE ProLiant BL460c Gen9 Server Blade
• HPE ProLiant BL660c Gen9 Server

On HPE servers running VMware ESXi 6.0, VMware ESXi 6.5 or VMware ESXi 6.7 and configured with an HPE FlexFabric 20Gb 2-port 650FLB Adapter with driver version 12.0.1211.0 (or prior), a “wake NOT set” message is logged in the VMkernel logs. Then, after 20 to 30 days of server run time, a Purple Screen of Death (PSOD) may display a brcmfcoe: lpfc_sli_issue_iocb_wait:10828 message. The following errors are displayed in the VMkernel logs after approximately 50 to 70 days of server run time:
2019-07-20T00:46:10.267Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T01:10:14.266Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T02:16:25.801Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T02:22:26.957Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T03:26:39.057Z cpu11:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T04:06:46.158Z cpu11:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330...
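
A quick way to see whether a host is exposed is to compare the installed brcmfcoe driver VIB against version 12.0.1211.0. A minimal PowerCLI sketch, assuming an existing connection; the cluster name is a placeholder:

    # Minimal PowerCLI sketch (assumed cluster name) that reports the brcmfcoe driver VIB version
    # on every host, so hosts at or below 12.0.1211.0 can be identified for a driver update.
    foreach ($vmhost in (Get-Cluster -Name "Blade-Cluster" | Get-VMHost)) {   # hypothetical cluster name
        $esxcli = Get-EsxCli -VMHost $vmhost -V2
        $esxcli.software.vib.list.Invoke() |
            Where-Object { $_.Name -match "brcmfcoe" } |
            Select-Object @{ N = "Host"; E = { $vmhost.Name } }, Name, Version
    }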


HPE G10 Servers: Best and Optimized Setting for VMware vSphere

Optimizing HPE server configuration is always one of the biggest challenges for virtual environment administrators. Administrators are always trying to deliver the best performance by changing configurations on the hypervisor, the virtual machines, or other components of the virtual infrastructure. There are best practices published by server vendors for achieving the best performance on their server platforms. Server hardware is one of the most important components in every environment; the server provides the main resource: computing. Processors, memory, and the other server components work together almost like a human brain.

Some Important Things!
The human brain consumes about 20 percent of the body's energy, and consuming more energy means more heat in the body. Server components need energy and cooling, just like the human body, and energy and cooling cost money. So there is a challenge to keep the balance between performance and the cost of services in any IT environment.

What's Important About Server Hardware?
In fact, the processors and the cooling system consume most of the power provided by the power modules. Don't think about a single server; think about hundreds of servers and thousands of processors, thousands x 200 W (at least)! Best performance costs more for any environment and also needs more maintenance.

What Are the Best Practices?
Best practices depend on the workload...
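
On the hypervisor side, one setting that is usually reviewed together with the BIOS power options is the ESXi host power policy. The sketch below is an illustration rather than the post's own recommendation; the host name is a placeholder, and "High Performance" should only be chosen where the BIOS power settings allow the OS to see the C/P-states it needs:

    # Minimal PowerCLI sketch (assumed host name) that switches the ESXi host power policy
    # to "High Performance" (key 1) through the HostPowerSystem managed object, then reads it back.
    $vmhost = Get-VMHost -Name "esxi01.example.local"          # hypothetical host name
    $powerSystem = Get-View $vmhost.ExtensionData.ConfigManager.PowerSystem
    $powerSystem.ConfigurePowerPolicy(1)                       # 1 = High Performance, 2 = Balanced
    (Get-VMHost -Name $vmhost.Name).ExtensionData.Hardware.CpuPowerManagementInfo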


Change Administrator’s Password for All HPE C7000 Device Bays

HPE C7000 BladeSystem uses the Onboard Administrator (OA) KVM module to manage devices and interconnects in the chassis. To manage a device directly, OA redirects to the iLO of that device. OA uses single sign-on to log on to the iLO administration web page. But each server has its own administrator user with a default password, so if the iLO addresses are reachable over the network, it's better to change the default password. Think about two or more C7000 chassis with half-height blade servers: changing the password on 64 servers one by one would be difficult! Onboard Administrator has a built-in tool to configure the device bays' iLO. HPONCFG can change almost all iLO configuration, and it is also available for different systems and operating systems. The configuration must be provided in XML format.

Example 1: Change Administrator's Password
The code below changes the administrator's password to whatever you want; just replace "Y0ur Passw0rd" with your password. I've tested it on iLO5 and a BL460c G10. If you have empty bays, you should run it again after installing a new blade server. To run it on a single bay, just replace "all" with the bay number. The code must be run via OA, so you should open an SSH session to the OA module of each C7000 chassis.

Example 2:...
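
For reference, an HPONCFG run from the OA command line usually looks like the sketch below: RIBCL XML that modifies the Administrator account, pasted into an SSH session to the OA. The exact RIBCL tags, the "end_marker" delimiter syntax and the fact that the LOGIN password is ignored when run through OA are assumptions based on HPE's HPONCFG/RIBCL documentation, not the post's own code, so verify them against your OA and iLO versions before use:

    hponcfg all << end_marker
    <RIBCL VERSION="2.0">
      <!-- Sketch: set the iLO Administrator password on every populated device bay -->
      <LOGIN USER_LOGIN="Administrator" PASSWORD="password">
        <USER_INFO MODE="write">
          <MOD_USER USER_LOGIN="Administrator">
            <PASSWORD value="Y0ur Passw0rd"/>
          </MOD_USER>
        </USER_INFO>
      </LOGIN>
    </RIBCL>
    end_marker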


Using Network Partitioning (NPAR) in VMware ESXi

Data center design is changing year by year; before virtualization, all servers were physical and data centers were full of devices and cables. Virtualization helped make data centers smaller and consolidate different workloads on the same hardware. Hardware technologies are also helping to achieve this goal. Network Partitioning (NPAR) is one of the hardware technologies that helps reduce cabling and switch devices and use all of the I/O capacity in data centers.

What's Network Partitioning (NPAR)?
NPAR is an operating system and switch agnostic technology that allows customers to reduce the number of I/O adapters required to support different application workloads. Traditional best practices require separate LAN or SAN connections for different aspects of application workloads. Because Converged Network Adapters already support widely used SAN protocols like Fibre Channel over Ethernet (FCoE) and iSCSI, administrators can already reduce the number of adapters needed for separate protocols, including separate Fibre Channel and Ethernet adapters. With NPAR, these adapters can now partition their network bandwidth further into multiple virtual connections, making one dual-port adapter appear as eight adapters to the operating system for use by the applications. This greatly simplifies the physical connectivity to the server, reduces implementation time, and lowers the acquisition cost of the...
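
From the ESXi side there is nothing NPAR-specific to configure: each partition of the adapter simply shows up as another vmnic. A minimal PowerCLI sketch, assuming an existing connection; the host name is a placeholder:

    # Minimal PowerCLI sketch (assumed host name): with NPAR enabled, a single dual-port adapter
    # is presented as several vmnics, which this listing of physical adapters makes visible.
    $vmhost = Get-VMHost -Name "esxi01.example.local"          # hypothetical host name
    Get-VMHostNetworkAdapter -VMHost $vmhost -Physical |
        Select-Object Name, Mac, BitRatePerSec |
        Sort-Object Name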


HPE Fast Fault Tolerance vs HPE Advanced ECC Support – Choosing Best Technology!

Memory failure is one of the issues that can cause a server crash and impact service availability and performance. Think about a service that includes multiple servers: a server could crash because of a single memory module failure or an uncorrectable memory error. To prevent memory errors from impacting services, HPE provides RAS (reliability, availability, and serviceability) technologies. In this post, we'll compare HPE Fast Fault Tolerance and HPE Advanced ECC Support. Before the comparison, let's find out why we need memory RAS at all.

Why Is Memory RAS Needed?
Server uptime is still one of the most critical aspects of data center maintenance. Unfortunately, servers can run into trouble from time to time due to software issues, power outages, or memory errors. The three major categories of memory errors we track and manage are correctable errors, uncorrectable errors, and recoverable errors. The determination of which errors are correctable and uncorrectable depends entirely on the capability of the memory controller.

Correctable Errors
Correctable errors are, by definition, errors that can be detected and corrected by the chipset. Correctable errors are generally single-bit errors. All HPE servers are capable of detecting and correcting single-bit errors and...


BIOS Configuration: Best Solution for HPE G10/G11 Servers by PowerShell

The last post was about configuring HPE Smart Array and creating logical drives on HPE G10 servers with the HPE Scripting Tools. This post is about configuring HPE G10 BIOS via PowerShell using the HPE BIOS cmdlets. As I mentioned in the last post, cmdlets are also available for Smart Array, iLO and OA, which help administrators configure and deploy servers faster than the regular ways and without any additional cost. In order to configure HPE servers by script, you need to download and install the HPE Scripting Tools: Scripting Tools for Windows

Sample BIOS Script
The script below is a sample for configuring an HPE DL580 G10. The script requires the same credential on multiple servers if you want to apply the configuration to multiple servers at the same time. Also, don't run the script against a server that is already in production; it has been created for initial deployment. Replace "iLO IP 1" and the other placeholders with your servers' iLO IP addresses.

Further Reading
Why Device Bay IP Doesn't Change in HPE BladeSystem?
[Script]: Enable/Disable vMotion on VMKernel Ports via PowerCLI
Configure NTP on iLO via HPE Scripting Tools
How to Create Logical Drive on HPE DL580 G10
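
Since the sample script itself lives in the full post, only a minimal connection sketch is shown here. The Connect/Disconnect cmdlets come from the HPEBIOSCmdlets module shipped with the Scripting Tools; the IP addresses, user name and password are placeholders, and the exact parameter names and the available Get/Set cmdlets vary between module versions, so enumerate them with Get-Command first:

    # Minimal sketch (assumed iLO IPs, user name and password; parameter names may differ
    # slightly between HPEBIOSCmdlets versions).
    Import-Module HPEBIOSCmdlets
    $connection = Connect-HPEBIOS -IP "iLO IP 1","iLO IP 2" -Username "Administrator" -Password "Y0urPassw0rd" -DisableCertificateAuthentication
    # Discover which BIOS Get/Set cmdlets your module version provides before building the full script
    Get-Command -Module HPEBIOSCmdlets -Verb Get,Set
    Disconnect-HPEBIOS -Connection $connection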


PowerShell: How to Create Logical Drive on HPE G10

I have published some posts about the HPE Scripting Tools for PowerShell for automating physical server preparation and deployment. The tools help administrators deploy services on HPE servers faster than the normal ways, and there is no additional cost; just buy a PowerShell book! The scripting tools provide PowerShell cmdlets for configuring BIOS, iLO, OA and Smart Array, and they are available for download as free tools on the HPE website: Scripting Tools for Windows

The Smart Array cmdlets are compatible with HPE Generation 10 servers, but not with all Smart Array adapters; read the user guide to find the list of compatible Smart Array adapters. The tools help configure Smart Array adapters on multiple servers much faster than the normal ways or other automation tools. Of course, the tools are not comparable with HPE OneView.

Sample PowerShell Script to Create Logical Drives with 2 Drives
The sample below will create two logical drives from 4 drives on multiple servers. The same administrator account is required on the servers, and you can replace "iLO IP" with your iLO IP addresses. The logical drives will be created as RAID 1. The script has been tested on an HPE DL580 G10. You need to install HPE...
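
The full creation script is in the post itself; the sketch below only shows the connection pattern. The cmdlet names come from the HPESmartArrayCmdlets module, but treat them as assumptions and confirm them with Get-Command on your installed version; the iLO IP, user name and password are placeholders:

    # Minimal sketch (assumed iLO IP, user name and password; cmdlet names to be confirmed
    # against the installed HPESmartArrayCmdlets version).
    Import-Module HPESmartArrayCmdlets
    $connection = Connect-HPESA -IP "iLO IP" -Username "Administrator" -Password "Y0urPassw0rd"
    Get-Command -Module HPESmartArrayCmdlets                  # discover logical/physical drive cmdlets
    Get-HPESALogicalDrive -Connection $connection             # review existing logical drives first
    Disconnect-HPESA -Connection $connection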


HPE Synergy Cabling Guide

HPE Synergy is a next-generation data center architectural option. Composable Infrastructure embraces and extends key concepts and traits from the architectures that came before it, including converged and hyperconverged systems. I don't know when, but it seems that HPE will replace HPE BladeSystem with HPE Synergy in the future. Those who have worked with HPE BladeSystem will be confused by HPE Synergy configuration; the configuration is very different from BladeSystem. Cabling is one of the biggest challenges with HPE Synergy: there are many management connections which must be connected to the correct ports. Because HPE Synergy is a composable infrastructure, there are options for a multi-frame configuration which allows companies to expand resources and services easily. I recommend designing the cabling according to service requirements before initializing the new hardware. There are some sample configurations for interconnect connections, stack connections, Image Streamer connections and management:
• Cabling interconnect modules in a single frame with redundancy
• Cabling master and satellite interconnect modules in multiple frames (non-redundant)
• Cabling master and satellite interconnect modules in multiple frames (with redundancy)
• Cabling a single-frame management ring
• Cabling a multiframe management ring
• Cabling a single-frame management ring with HPE Synergy Image Streamer
• Cabling a three-frame management...


What’s HPE RESTful Interface Tool?

Having problems finding a single scripting tool that provides management automation across the server components? Being challenged with too many tools, remote management vulnerabilities, and scripting limitations? Hewlett Packard Enterprise offers a single scripting tool called the RESTful Interface Tool, designed for HPE ProLiant Gen9 and Gen10 servers, that provides flexible and simpler server scripting automation at scale for rapid deployments and helps cut deployment time substantially.
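
To give a flavour of how the tool is used, the lines below sketch a typical iLOrest session run from a PowerShell prompt: log in, select the BIOS resource, read the settings, log out. The iLO address and credentials are placeholders, and the exact resource type names depend on the server generation:

    # Minimal sketch (assumed iLO address and credentials) of an iLOrest session.
    ilorest login 192.0.2.10 -u Administrator -p "Y0urPassw0rd"   # hypothetical iLO address/credentials
    ilorest select Bios.                                          # BIOS settings type on Gen10; differs on Gen9
    ilorest get                                                   # list current BIOS properties
    ilorest logout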


The ramdisk ‘tmp’ is full – VMware ESXi 6.x on HPE ProLiant

We are using HPE ProLiant servers in our virtual environment to deliver different services to our customers. VMware ESXi is installed on all servers as the hypervisor. You know that each vendor has a customized image that includes VMware ESXi, drivers and management tools. It seems there is an issue with HPE Agentless Management (AMS) in the latest ESXi image.
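
To see how full the affected ramdisks are on a host, the ramdisk usage can be read through esxcli. A minimal PowerCLI sketch, assuming an existing connection; the host name is a placeholder:

    # Minimal PowerCLI sketch (assumed host name) to list the ESXi ramdisks and their usage,
    # so a filling /tmp ramdisk can be spotted before it runs out of space.
    $vmhost = Get-VMHost -Name "esxi01.example.local"          # hypothetical host name
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.system.visorfs.ramdisk.list.Invoke() | Format-Table -AutoSize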


ESXi Fails with “Corruption in dlmalloc” on HPE Server

The “Corruption in dlmalloc” issue occurs because multiple esxcfg-dumppart threads attempt to free memory that has been used for configuring the dump partition: thread A checks whether there are entries to be freed and proceeds to free them, while within the same time frame thread B is also attempting to free the same entries.
Based on VMware KB2147888, this issue is resolved in ESXi 6 U3. But why is the issue still happening on ESXi 6 U3 or ESXi 6.5 U1 when they are installed on HPE ProLiant servers?
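
If you want to double-check which dump partition the esxcfg-dumppart threads are reconfiguring on a host, the configured and active coredump partition can be listed through esxcli. A minimal PowerCLI sketch, assuming an existing connection; the host name is a placeholder:

    # Minimal PowerCLI sketch (assumed host name) that shows the configured and active
    # coredump partition, i.e. the object the dump-partition configuration operates on.
    $vmhost = Get-VMHost -Name "esxi01.example.local"          # hypothetical host name
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.system.coredump.partition.get.Invoke()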