Category: Server Hardware

Server Hardware Review, News.
HPE, Dell, Hitachi, Oracle Sun, IBM …

FC Port Aggregation (Trunking) 0

Fibre Channel Trunking (Aggregation): Better Performance, Higher Bandwidth (>1 Tb/s)

We all know LACP on Ethernet networks, but Fibre Channel SANs take a different approach: multipath I/O presents logical and physical paths to the operating system as a single device, and path selection policies are used to drive I/O over those paths efficiently. However, if a server uses a 16Gb/s FC HBA, there is no bandwidth aggregation: each port can only generate traffic up to its own maximum speed, and path selection itself has a performance impact on the server when a large number of paths is presented. A few months ago I started thinking about FC port aggregation and searched for it many times, but until now I hadn't found anything.
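For context, the way ESXi spreads I/O across those paths is controlled per device by the path selection policy. A minimal sketch from the ESXi shell, assuming naa.xxxx stands in for a real device identifier from your environment:

  # show the paths and the current path selection policy for one device
  esxcli storage nmp device list -d naa.xxxx
  # switch the device to Round Robin so I/O rotates across all active paths
  esxcli storage nmp device set -d naa.xxxx --psp VMW_PSP_RR

Even with Round Robin, each individual port is still limited to its own line rate, which is exactly the gap that true port aggregation would close.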

HPE Blade System 0

HPE BladeSystem Retirement

I first met the HPE C7000 more than ten years ago, when I was a newbie engineer in virtualization and data centers. We had sixth-generation blade servers hosting the first virtual desktop infrastructure farm in Iran, and even in the Middle East. I haven't found any official statement about HPE BladeSystem retirement yet, but after the BL6xx servers were retired and the BL4xx servers were released with limitations compared to Synergy servers, the direction was clear.

HPE OneView Global Dashboard 0

HPE OneView Global Dashboard: Best Software for Global Monitoring

HPE is one of the biggest companies producing data center equipment and facilities. HPE has a unified management and monitoring suite for server hardware, network devices and storage devices called HPE OneView. You may have multiple data centers in different geographical locations and need to monitor all devices on the same dashboard. Because HPE OneView needs network access to every managed device, and it's recommended to deploy HPE OneView in the local site, there are limitations on managing and monitoring all devices from a single OneView instance. HPE has introduced HPE OneView Global Dashboard for monitoring and managing multiple HPE OneView appliances.
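As a rough illustration of why each appliance is its own management point, here is a minimal sketch of authenticating against a single OneView appliance over its REST API; the appliance address and credentials are placeholders, and the X-API-Version value should be checked against your appliance firmware:

  # request a session token from one OneView appliance
  curl -k -H "Content-Type: application/json" -H "X-API-Version: 800" \
       -d '{"userName":"Administrator","password":"password"}' \
       https://oneview.example.local/rest/login-sessions

Global Dashboard sits on top of several such appliances so you don't have to query each one separately.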

HPE Customized ESXi Image 2

New HPE Customized ESXi Image Not Supported on ProLiant BL c-Class Servers

HPE has released the supported ESXi versions as customized images for ProLiant and other HPE server products, and there is bad news about the HPE ProLiant BL460c.

Which Versions Have Been Released?
The following HPE Custom Images for VMware were released in January 2021:
VMware-ESXi-7.0.1-17325551-HPE-701.0.0.10.6.3.9-Jan2021.iso
VMware-ESXi-6.7.0-17167734-HPE-Gen9plus-670.U3.10.6.3.8-Jan2021.iso
VMware-ESXi-6.5.0-Update3-17097218-HPE-Gen9plus-650.U3.10.6.3.8-Dec2020.iso

Which Hardware Products Are Not Supported?
The mentioned images cannot be installed on the following server products:
HPE ProLiant BL460c Gen10 Server Blade
HPE ProLiant BL460c Gen9 Server Blade
HPE ProLiant BL660c Gen9 Server Blade

Which Versions Are Supported on ProLiant BL c-Class Servers?
The following versions must be used on ProLiant BL c-Class servers:
VMware-ESXi-7.0.1-16850804-HPE-701.0.0.10.6.0.40-Oct2020.iso
VMware-ESXi-6.7.0-Update3-16713306-HPE-Gen9plus-670.U3.10.6.0.83-Oct2020.iso
VMware-ESXi-6.5.0-Update3-16389870-HPE-Gen9plus-650.U3.10.6.0.86-Oct2020.iso

What Should We Do in the Future?
Don't worry, the next releases will support ProLiant BL c-Class servers again; wait for new releases of the HPE Custom Images for VMware.

See Also: Network Connection Problem on HPE FlexFabric 650 (FLB/M) Adapter
References: Notice: HPE c-Class BladeSystem – The HPE Custom Images for VMware Released in January 2021 Are Not Supported on ProLiant BL c-Class Servers
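Before planning an upgrade, it helps to check which image profile a host is currently running. A minimal sketch from the ESXi shell (the profile name and build number will of course differ per host):

  # show the image profile the host was installed or last updated with
  esxcli software profile get
  # show the ESXi version and build number to cross-check against the ISO name
  vmware -vl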

ilo5 650flb m 4

Network Connection Problem on HPE FlexFabric 650 (FLB/M) Adapter

We have many HPE BladeSystem C7000 chassis hosting our own and our customers' virtual infrastructures, and many of them are filled with HPE ProLiant BL460c Gen10 servers. Recently, we had a serious network connectivity problem on some of these servers whenever a FlexFabric 650 (FLB/M) adapter was installed in the blade.

What's the Exact Problem?
Our servers were working fine, but after a while we lost vMotion network connectivity; the hosts couldn't migrate virtual machines even between servers in the same C7000 chassis. The problem happened on one chassis while the other chassis had no problem. The network adapter was physically connected but didn't pass any traffic, an issue that looked something like an ARP problem! The network adapter status was also Degraded in the iLO web administration.

List of All Attempts to Recover from the Issue
Ask any IT guy about a network problem on a network card and you can be one hundred percent sure the answer will be: upgrade the firmware, update the driver, upgrade…! So, I did it! 😀 Let's review the list:
Upgrade firmware for FlexFabric 650M and FlexFabric 650FLB.
Update the driver (ESXi).
Turn the Virtual Connect modules off and on.
Unassign and reassign server profiles.
Check all configurations and compare them with other chassis.
Re-seat one...
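When chasing this kind of issue, the first things to check on the ESXi side are the link state and the driver/firmware versions of the affected uplink. A minimal sketch, assuming the affected uplink is vmnic0:

  # link status, driver and description for all uplinks
  esxcli network nic list
  # driver version and firmware version for a single uplink
  esxcli network nic get -n vmnic0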

C7000 0

HPE FlexFabric 650FLB Adapter May Cause PSOD on ESXi 6.x

It seems there is an issue on HPE blade servers when a specific adapter is installed. If any of the below server models is in your virtual environment, you should pay attention to this post:
HPE ProLiant BL460c Gen10 Server Blade
HPE ProLiant BL460c Gen9 Server Blade
HPE ProLiant BL660c Gen9 Server
On HPE servers running VMware ESXi 6.0, VMware ESXi 6.5 or VMware ESXi 6.7 and configured with an HPE FlexFabric 20Gb 2-port 650FLB Adapter with driver version 12.0.1211.0 (or prior), a "wake NOT set" message is logged in the VMkernel logs. Then, after 20 to 30 days of server run time, a Purple Screen of Death (PSOD) may display a brcmfcoe: lpfc_sli_issue_iocb_wait:10828 message. The following errors are displayed in the VMkernel logs after approximately 50 to 70 days of server run time:
2019-07-20T00:46:10.267Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T01:10:14.266Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T02:16:25.801Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T02:22:26.957Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T03:26:39.057Z cpu11:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T04:06:46.158Z cpu11:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330...
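To check whether a host is exposed to this issue, you can look at the installed driver package and the loaded module version, and search the logs for the warning quoted above. A minimal sketch (the brcmfcoe module name is taken from the log messages):

  # installed driver package version
  esxcli software vib list | grep -i brcmfcoe
  # version of the loaded module
  esxcli system module get -m brcmfcoe
  # look for the "wake NOT set" warnings mentioned above
  grep "wake NOT set" /var/log/vmkernel.log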

hyper threading SMT 2

Hyper-Threading vs SMT: Which One has Better Performance?

Join me in reviewing Intel Hyper-Threading and AMD SMT. Not so long ago, just before 2001 and before the first multi-core CPU was designed, processors had one core, and Intel, AMD and others simply tried to increase the transistor count and frequency of their processors. The result was that a single-core CPU processed one thread at a time, and that thread might not use all of the CPU time. Multi-core processors and multi-threading help process more than one task at a time; today, processors have more cores and also have features for multi-threading.

Intel Hyper-Threading
This technology is a form of simultaneous multithreading introduced by Intel, while the concept behind the technology was patented by Sun Microsystems. Architecturally, a processor with Hyper-Threading Technology consists of two logical processors per core, each of which has its own processor architectural state. Each logical processor can be individually halted, interrupted or directed to execute a specified thread, independently from the other logical processor sharing the same physical core. Unlike a traditional dual-processor configuration that uses two separate physical processors, the logical processors in a hyper-threaded core share the execution resources. These resources include the execution engine, caches, and system bus interface; the sharing of resources allows two logical...
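On an ESXi host you can quickly confirm whether Hyper-Threading (or SMT on AMD) is supported and active. A minimal sketch:

  # reports whether hyperthreading is supported, enabled and active on the host
  esxcli hardware cpu global get

When two logical processors are exposed per core, the reported CPU thread count is twice the core count.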

HPE Power 3

HPE G10 Servers: Best and Optimized Setting for VMware vSphere

Optimizing HPE server configuration is always one of the biggest challenges for virtual environment administrators. Administrators are always trying to provide the best performance by changing configurations on the hypervisor, the virtual machines or other components of the virtual infrastructure, and there are best practices published by server vendors to achieve the best performance on their platforms. Server hardware is one of the most important components in any environment; the server provides the main resource: computing. Processors, memory and the other server components work almost like a human brain.

Some Important Things!
The human brain consumes about 20 percent of the body's energy, and consuming more energy means more heat in the body. Server components need energy and cooling just like the human body, and energy and cooling cost money. So there is a challenge in keeping the balance between performance and the cost of services in any IT environment.

What's Important About Server Hardware?
In fact, processors and the cooling system consume most of the power provided by the power modules. Don't think about a single server; think about hundreds of servers and thousands of processors, thousands x 200W (at least)! Best performance comes at a higher cost for any environment and also needs more maintenance.

What Are the Best Practices?
Best practices depend on the workload...
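One of the vSphere-side knobs in this performance-versus-power trade-off is the host power management policy. A minimal sketch from the ESXi shell, assuming the Power.CpuPolicy advanced option and the "High Performance" value apply to your ESXi version (verify against the VMware documentation for your release):

  # show the current host power management policy
  esxcli system settings advanced list -o /Power/CpuPolicy
  # request static high performance instead of balanced power management
  esxcli system settings advanced set -o /Power/CpuPolicy -s "High Performance"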

HPE 0

Change Administrator’s Password for All HPE C7000 Device Bays

The HPE C7000 BladeSystem uses the Onboard Administrator (OA) module to manage devices and interconnects in the chassis. To manage a device directly, OA redirects you to the iLO of that device, using single sign-on to log on to the iLO administration web page. But each server also has its own administrator user with a default password, so if the iLO addresses are reachable over the network, it's better to change that default password. Think about two or more C7000 chassis full of half-height blade servers: changing the password on 64 servers by hand would be difficult! Onboard Administrator has a built-in tool to configure the device bay iLOs: HPONCFG can change almost all iLO configuration and is also available for different systems and operating systems. The configuration should be in XML format.

Example 1: Change the Administrator's Password
The code below changes the administrator's password; just replace "Y0ur Passw0rd" with your own password. I've tested it on iLO 5 and a BL460c Gen10. If you have empty bays, you should run it again after installing a new blade server, and to run it on a single bay, just replace "all" with the bay number. The code must be run via OA, so open an SSH session to the OA module of each C7000 chassis.

Example 2:...
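A minimal sketch of such an OA session; the RIBCL fragment is the commonly used form for changing a local iLO user's password, but treat the exact XML as an assumption to verify against the iLO scripting guide for your firmware:

  hponcfg all << end_marker
  <RIBCL VERSION="2.0">
    <LOGIN USER_LOGIN="Administrator" PASSWORD="unused">
      <USER_INFO MODE="write">
        <!-- change the password of the local Administrator account -->
        <MOD_USER USER_LOGIN="Administrator">
          <PASSWORD value="Y0ur Passw0rd"/>
        </MOD_USER>
      </USER_INFO>
    </LOGIN>
  </RIBCL>
  end_marker

Replace "all" with a bay number to target a single device bay, as described above.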

npar 1 1

Using Network Partitioning (NPAR) in VMware ESXi

Data center design is changing year by year. Before virtualization, all servers were physical and data centers were full of devices and cables. Virtualization helped make data centers smaller and consolidate different workloads on the same hardware, and hardware technologies are also helping to achieve this goal. Network Partitioning (NPAR) is one of the hardware technologies that helps reduce cabling and switch devices and use all of the I/O capacity in data centers.

What's Network Partitioning (NPAR)?
NPAR is an operating-system and switch agnostic technology that allows customers to reduce the number of I/O adapters required to support different application workloads. Traditional best practices require separate LAN or SAN connections for different aspects of application workloads. Because Converged Network Adapters already support widely used SAN protocols like Fibre Channel over Ethernet (FCoE) and iSCSI, administrators can already reduce the number of adapters needed for separate protocols, including separate Fibre Channel and Ethernet adapters. With NPAR, these adapters can now partition their network bandwidth further into multiple virtual connections, making one dual-port adapter appear as eight adapters to the operating system for use by the applications. This greatly simplifies the physical connectivity to the server, reduces implementation time, and lowers the acquisition cost of the...
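From the host's point of view each partition simply shows up as another network adapter. A minimal sketch of verifying that on ESXi; the number of vmnics and their reported speeds depend on how the partitions were configured on the adapter:

  # every NPAR partition is listed as its own vmnic with its own link speed
  esxcfg-nics -l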

HPE Fast Fault Tolerance 2

HPE Fast Fault Tolerance vs HPE Advanced ECC Support – Choosing Best Technology!

Memory failure is one of the reasons a server can crash, impacting service availability and performance. Think about a service spanning multiple servers: a server could crash because of a single memory module failure or an uncorrectable memory error. To prevent memory errors from impacting services, HPE provides RAS (reliability, availability, and serviceability) technologies. In this post, we'll compare HPE Fast Fault Tolerance and HPE Advanced ECC Support. Before the comparison, let's see why memory RAS is needed at all.

Why Is Memory RAS Needed?
Server uptime is still one of the most critical aspects of data center maintenance. Unfortunately, servers can run into trouble from time to time due to software issues, power outages, or memory errors. The three major categories of memory errors we track and manage are correctable errors, uncorrectable errors, and recoverable errors. Which errors are correctable and which are uncorrectable depends completely on the capability of the memory controller.

Correctable Errors
Correctable errors are, by definition, errors that can be detected and corrected by the chipset. Correctable errors are generally single-bit errors. All HPE servers are capable of detecting and correcting single-bit errors and...

hpe synergy cabling 2

HPE Synergy Cabling Guide

HPE Synergy is a next-generation data center architectural option. As a composable infrastructure, it embraces and extends key concepts and traits from the architectures that came before it, including converged and hyperconverged systems. I don't know when, but it seems HPE will replace HPE BladeSystem with HPE Synergy in the future. Those who have worked with HPE BladeSystem will be confused by HPE Synergy configuration, because it is very different from BladeSystem. Cabling is one of the biggest challenges with HPE Synergy: there are many management connections that must go to the correct ports. Because HPE Synergy is a composable infrastructure, there are options for multi-frame configurations, which allow companies to expand resources and services easily. I recommend designing the cabling according to the service requirements before initializing new hardware. There are some sample configurations for interconnect connections, stack connections, Image Streamer connections and management:
Cabling interconnect modules in a single frame with redundancy
Cabling master and satellite interconnect modules in multiple frames (non-redundant)
Cabling master and satellite interconnect modules in multiple frames (with redundancy)
Cabling a single-frame management ring
Cabling a multiframe management ring
Cabling a single-frame management ring with HPE Synergy Image Streamer
Cabling a three-frame management...

HPE 2

What’s HPE RESTful Interface Tool?

Having problems finding a single scripting tool that provides management automation across server components? Challenged by too many tools, remote management vulnerabilities and scripting limitations? Hewlett Packard Enterprise offers a single scripting tool called the RESTful Interface Tool, designed for HPE ProLiant Gen9 and Gen10 servers, which provides flexible and simpler server scripting automation at scale for rapid deployments and helps cut time substantially.
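A minimal sketch of what a scripted session with the tool can look like; the WorkloadProfile attribute is only an illustrative example, so check the attribute names actually exposed by your server generation:

  # log in to a remote iLO, inspect and change a BIOS setting, then apply it
  ilorest login 192.168.1.100 -u Administrator -p password
  ilorest select Bios.
  ilorest get WorkloadProfile
  ilorest set WorkloadProfile=Virtualization-MaxPerformance
  ilorest commit
  ilorest logout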