Category: Data Center


Dell EMC Unity OE 5.0.2.0.5.009

EMC has released a new upgrade file for the Unity family (5.0.2.0.5.009), but I couldn't find a Release Notes link until now. It seems the upgrade file contains some minor fixes as well as security fixes. (Now I know that some issues were major!) The file size is as big as previous releases, and as far as I know, the security issues listed below have been fixed in this release. (Thanks to "derWolle"; I don't know him, but he read the post and sent me the link to the release notes: https://support.emc.com/docu97010_Dell-EMC-Unity-Family-5.0.2.0.5.009-Release-Notes.pdf?language=en_US) It seems there is a new feature and some important fixes:

Initially configuring your Unity system: When initially configuring your system, you will need to reset the default password whether you are using the Unisphere UI, REST API, SMI-S, Service Commands, or CLI.

Support for new Power Supply Unit (PSU): Unity Operating Environment 5.0.2.0.5.009 contains updated firmware to support the new PSU part number 071-000-208-XX.

The below security issues have been addressed, and 21 technical issues opened in previous releases have been fixed:

bzip2: CVE-2016-3189, CVE-2019-12900
curl: CVE-2019-5482
glib2: CVE-2019-13012
libgcrypt: CVE-2019-13627
Mozilla-nss, libfreebl3, libsoftokn3: CVE-2019-9811, CVE-2019-11709, CVE-2019-11711, CVE-2019-11712, CVE-2019-11713, CVE-2019-11715, CVE-2019-11717, CVE-2019-11719, CVE-2019-11729, CVE-2019-11730
perl: CVE-2018-18311
polkit: CVE-2019-6133
python: CVE-2018-20852, CVE-2019-9636, CVE-2019-10160

Further Reading...
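Before planning the upgrade, it can help to confirm which OE version an array is currently running. A minimal sketch in Python against the Unity REST API mentioned above; the management IP is a hypothetical placeholder, and the array's self-signed certificate is accepted with verify=False. The basicSystemInfo resource reports the installed software version:

```python
# Query a Unity array's installed OE version via the REST API.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

UNITY_MGMT_IP = "192.0.2.10"  # hypothetical management address

def get_unity_oe_version(host: str) -> str:
    """Return the softwareVersion reported by basicSystemInfo."""
    url = f"https://{host}/api/types/basicSystemInfo/instances"
    resp = requests.get(
        url,
        headers={"X-EMC-REST-CLIENT": "true"},  # header expected by the Unity REST API
        params={"fields": "softwareVersion,model,name"},
        verify=False,   # self-signed certificate assumed
        timeout=10,
    )
    resp.raise_for_status()
    entry = resp.json()["entries"][0]["content"]
    return entry["softwareVersion"]

if __name__ == "__main__":
    print("Installed Unity OE:", get_unity_oe_version(UNITY_MGMT_IP))
```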


HPE FlexFabric 650FLB Adapter May Cause PSOD on ESXi 6.x

It seems there is an issue on HPE blade servers when a specific adapter is installed. If any of the below server models are in your virtual environment, you should pay attention to this post:

HPE ProLiant BL460c Gen10 Server Blade
HPE ProLiant BL460c Gen9 Server Blade
HPE ProLiant BL660c Gen9 Server

On HPE servers running VMware ESXi 6.0, VMware ESXi 6.5, or VMware ESXi 6.7 and configured with an HPE FlexFabric 20Gb 2-port 650FLB Adapter with driver version 12.0.1211.0 (or prior), a "wake NOT set" message is logged in the VMkernel logs. Then, after 20 to 30 days of server run time, a Purple Screen of Death (PSOD) may display a brcmfcoe: lpfc_sli_issue_iocb_wait:10828 message. The following errors are displayed in the VMkernel logs after approximately 50 to 70 days of server run time:

2019-07-20T00:46:10.267Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T01:10:14.266Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T02:16:25.801Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T02:22:26.957Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T03:26:39.057Z cpu11:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
2019-07-20T04:06:46.158Z cpu11:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330...
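To check whether a host is already logging this warning before the crash window, you can scan a copy of the VMkernel log for the message shown above. A minimal sketch in Python; the log path is an assumption (on ESXi the live log is typically /var/log/vmkernel.log):

```python
# Scan an ESXi VMkernel log for the "IOCB wake NOT set" warning
# reported for the brcmfcoe driver (HPE 650FLB adapters).
import re
import sys

# Hypothetical local copy of the log file.
LOG_PATH = sys.argv[1] if len(sys.argv) > 1 else "vmkernel.log"

PATTERN = re.compile(r"brcmfcoe: lpfc_sli_issue_iocb_wait:\d+: .*IOCB wake NOT set")

def count_wake_not_set(path: str) -> int:
    """Print matching lines and return how many were found."""
    hits = 0
    with open(path, errors="replace") as log:
        for line in log:
            if PATTERN.search(line):
                hits += 1
                print(line.rstrip())
    return hits

if __name__ == "__main__":
    total = count_wake_not_set(LOG_PATH)
    print(f"{total} 'IOCB wake NOT set' warning(s) found")
```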


Kubewise, Multi-Platform Desktop Client for Kubernetes

Kubewise is a simple multi-platform desktop client for Kubernetes. In the same way the kubectl command requires only a valid kubeconfig file to run commands against a Kubernetes cluster, Kubewise just requires you to configure one or more valid kubeconfig files to interact with the corresponding Kubernetes clusters.

Main features:

Support for multiple kubeconfig files.
UI-driven interaction with the most frequently used Kubernetes entities.
One-click terminal with the proper KUBECONFIG environment variable set.
Generation of custom kubeconfig files for a given namespace.
Highlighting of sustainability- and security-related data.

Requirements

Kubewise is a desktop application built with HTML, JavaScript, CSS, and Node.js, and it runs on Electron, a framework for building cross-platform apps using web technologies. So to run it, basically all you need is:

Any modern macOS, Windows, or Linux (Debian-based) OS.
kubectl v1.14.0+ installed to access Kubernetes v1.14.0+ clusters.

External Links

Kubewise
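The "one-click terminal" feature comes down to spawning a process with the KUBECONFIG environment variable pointing at the selected file, which you can reproduce outside Kubewise. A minimal sketch in Python with a hypothetical kubeconfig path (Kubewise itself is Electron/Node.js; this only illustrates the mechanism):

```python
# Launch kubectl against a specific cluster by overriding KUBECONFIG,
# the same mechanism a "one-click terminal" relies on.
import os
import subprocess

KUBECONFIG_PATH = os.path.expanduser("~/.kube/dev-cluster.yaml")  # hypothetical path

def run_kubectl(args: list[str], kubeconfig: str) -> str:
    """Run kubectl with KUBECONFIG set only for this child process."""
    env = os.environ.copy()
    env["KUBECONFIG"] = kubeconfig
    result = subprocess.run(
        ["kubectl", *args],
        env=env,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(run_kubectl(["get", "nodes"], KUBECONFIG_PATH))
```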


Linux Deduplication and Compression: The Ultimate Guide to Saving Space

The cost of storage is one of the biggest pieces of the IT budget in any company (except EMC, HPE, and others like them 😀 ), and storage space is eaten up by databases, other data files, and backup files as well. A lot of those data files are never used once generated! I have a serious issue with these types of data, especially log files, which are one of the dark data types. I can't stop generating that data, and the related teams are always crying about free space on their servers. They want only Linux servers (it seems they don't know Windows has NFS services as well), because they use shared space on application servers to store log files and other data via NFS. I have offered other solutions, such as using Windows as an NFS server and enabling deduplication on Windows Server; this feature is really good. I have used Windows deduplication on our backup proxy servers and the result was incredible. Anyway, they want a Linux server (not even another Unix-like OS!), so I started googling. Don't panic: most modern Linux file systems have no native deduplication or transparent compression, and these must be enabled with third-party software. Let's see what deduplication is and...
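To make the two ideas concrete before looking at file systems: deduplication stores identical blocks only once (identified here by their hash), while compression shrinks the bytes of each stored block. A toy sketch in Python under those assumptions; real implementations (e.g. VDO, ZFS, btrfs tooling) work at the block layer, not like this:

```python
# Toy illustration of block-level deduplication plus compression:
# identical 4 KiB blocks are stored once, keyed by SHA-256 digest.
import hashlib
import zlib

BLOCK_SIZE = 4096  # real systems dedup fixed- or variable-size blocks

def dedup_and_compress(data: bytes):
    store = {}    # digest -> compressed block (stored once)
    layout = []   # ordered digests to rebuild the original data
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = zlib.compress(block)
        layout.append(digest)
    return store, layout

def restore(store, layout) -> bytes:
    return b"".join(zlib.decompress(store[d]) for d in layout)

if __name__ == "__main__":
    # Log-like data: 64-byte lines repeat, so blocks dedup and compress well.
    line = b"2019-09-24 12:00:00 INFO service heartbeat ok".ljust(63) + b"\n"
    data = line * 2000
    store, layout = dedup_and_compress(data)
    stored = sum(len(b) for b in store.values())
    print(f"original {len(data)} B -> stored {stored} B "
          f"({len(store)} unique of {len(layout)} blocks)")
    assert restore(store, layout) == data
```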


Some Lower-End Unity Models Could Experience Single or Dual SP Reboots When Running Unity OE 5.0

This is a warning and workaround about EMC Unity OE 5.0 on some Unity systems with lower memory capacity. Let's see what happened.

The lower-end Unity systems, in particular the Unity 300 and 300F, contain less physical memory than the higher-end Unity models. The lower available system memory makes them more vulnerable to an SP reboot brought about by the "mergelogs" component of NGTRiiAGE, which is run by uDoctor on SRS-enabled arrays. It is also run periodically on non-SRS-enabled arrays. If your Unity system is SRS enabled, "mergelogs" will run at least once per day. Changes introduced in Unity OE 5.0.x make the "mergelogs" function more memory intensive, which causes the process to run closer to the limits of the available memory budget. If the available memory is exceeded, the SP may reboot.

A hotfix is available for this issue if you are running Unity OE 5.0.0.0.5.116. To acquire this hotfix, please contact Dell EMC Technical Support or your authorized service vendor and reference KB article 536786. This knowledge base article also stipulates a workaround available to customers who have service credentials. You can disable the daily triage service within uDoctor by connecting to your system via SSH...


CentOS Linux 8 and CentOS Stream

Today, CentOS Linux 8.0.1905 was released and is available to download now. This is the first version of CentOS Linux 8. Same as previous versions, the new release is based on the Red Hat Enterprise Linux source code, but it's totally free. CentOS 8 end of life is May 2029 (10 years, same as previous versions). Improvements and fixes are the same as RHEL 8; CentOS 8.0.1905 actually includes all the new features. But there is a big change! Read more in the CentOS-8 (1905) Release Notes.

What's CentOS Stream?

CentOS Stream is based on the CentOS Linux 8 software packages the project has been building over the summer, combined with the latest Red Hat Enterprise Linux (RHEL) 8 development kernel. CentOS Stream will be a rolling-release Linux distro that exists as a midstream between the upstream development in Fedora Linux and the downstream development of Red Hat Enterprise Linux (RHEL). It is a cleared path for contributing to future minor releases of RHEL while interacting with Red Hat and other open source developers. This pairs nicely with the existing contribution path in Fedora for future major releases of RHEL. Read more in Presenting CentOS Stream.

Download CentOS 8.0.1905

CentOS 8.0.1905 is available...


What’s FreeDOS?

FreeDOS is an open source DOS-compatible operating system that you can use to play classic DOS games, run legacy business software, or develop embedded systems. Any program that works on MS-DOS should also run on FreeDOS. It doesn't cost anything to download and use FreeDOS. You can also share FreeDOS for others to enjoy! And you can view and edit our source code, because all FreeDOS programs are distributed under the GNU General Public License or a similar open source software license.

FreeDOS Software

Since 1998, each program included in the FreeDOS distribution is made available as a "package." The distribution divides these FreeDOS software packages into groups, sometimes called "sets." The BASE package group contains only those programs that reproduce the functionality of classic DOS systems. The other package groups contain software that you may find useful, such as games, editors, and developer tools.

BASE: Programs that provide the functionality of classic DOS
Archivers: Tools to compress files and create archives
Boot tools: Utilities to help you boot your computer
Development: Development tools such as compilers and assemblers
Editors: Editors and simple word processors that let you edit text files
Emulators: Programs that emulate other systems
Games: Fun games that you can...


Hyper-Threading vs SMT: Which One Has Better Performance?

Join me to review Intel Hyper-Threading and AMD SMT. In the not-so-distant past, just before 2001 and before the first multi-core CPU was designed, processors had one core. Intel, AMD, and others tried to increase the transistor count and frequency of their processors. The result was that a single-core CPU processed a single thread at a time, and that thread might not use all of the CPU's time. Multi-core processors and multi-threading help to process more than one task or process at a time. Today, processors have more cores and also have features for multi-threading.

Intel Hyper-Threading

This technology is a form of simultaneous multithreading introduced by Intel, while the concept behind the technology was patented by Sun Microsystems. Architecturally, a processor with Hyper-Threading Technology consists of two logical processors per core, each of which has its own processor architectural state. Each logical processor can be individually halted, interrupted, or directed to execute a specified thread, independently from the other logical processor sharing the same physical core. Unlike a traditional dual-processor configuration that uses two separate physical processors, the logical processors in a hyper-threaded core share the execution resources. These resources include the execution engine, caches, and system bus interface; the sharing of resources allows two logical...
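One practical consequence of the two-logical-processors-per-core design: the OS sees twice as many logical processors as physical cores when Hyper-Threading or SMT is enabled. A minimal check in Python, assuming the third-party psutil package is installed:

```python
# Compare physical cores with logical processors; a 2:1 logical-to-physical
# ratio indicates SMT / Hyper-Threading is enabled.
import psutil  # third-party: pip install psutil

physical = psutil.cpu_count(logical=False)
logical = psutil.cpu_count(logical=True)

print(f"physical cores     : {physical}")
print(f"logical processors : {logical}")

if physical and logical and logical > physical:
    print(f"SMT/Hyper-Threading appears enabled "
          f"({logical // physical} threads per core)")
else:
    print("SMT/Hyper-Threading appears disabled or unsupported")
```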


VMware’s Tool for Finding VMware Products’ Ports and Protocols

VMware products need to communicate with each other or with other components of services via the network, so administrators must know which ports and protocols each component uses. VMware has provided a tool, called "VMware Ports and Protocols", for finding the ports and protocols of popular VMware products. Currently, you can find the ports and protocols used by the below products:

vSphere
vSAN
NSX for vSphere
vRealize Network Insight
vRealize Operations Manager
vRealize Automation

Note: The tool covers recent versions of the mentioned products, not out-of-support versions.

External Links

VMware Ports and Protocols


HPE G10 Servers: Best and Optimized Settings for VMware vSphere

Optimizing HPE server configuration is always one of the biggest challenges for virtual environment administrators. Administrators are always trying to provide the best performance by changing configurations on the hypervisor, on virtual machines, or on other components of the virtual infrastructure. There are best practices published by server vendors for achieving the best performance on their server platforms. Server hardware is one of the most important components in every environment; the server provides the main power and resources: computing resources. Processors, memory, and the other server components are made to work almost like a human brain.

Some Important Things!

The human brain consumes about 20 percent of the body's energy. Consuming more energy means more heat in the human body. Server components need energy and cooling, just like the human body, and energy and cooling cost money. So in any IT environment there is a challenge to keep the balance between performance and the cost of services.

What's Important About Server Hardware?

In fact, processors and the cooling system consume most of the power provided by the power modules. Don't think about a single server; think about hundreds of servers and thousands of processors: a thousand processors at 200 W each is at least 200 kW. Best performance costs more for any environment and also needs more maintenance.

What Are the Best Practices?

Best practices depend on the workload...


Change Administrator’s Password for All HPE C7000 Device Bays

The HPE C7000 Blade System uses the Onboard Administrator (OA) management module to manage devices and interconnects in the chassis. To manage devices directly, the OA redirects to the iLO of each device. The OA uses single sign-on to log on to the iLO administration web page. But each server has its own administrator user, and that user has a default password, so if the iLO addresses are reachable via the network, it's better to change the default password. Think about two or more C7000 chassis with half-height blade servers: changing the password on 64 servers would be difficult! Onboard Administrator has a built-in tool to configure the device bays' iLO. HPONCFG can change almost all iLO configuration, and it is also available for different systems and operating systems. The configuration should be in XML format.

Example 1: Change Administrator's Password

The below code will change the administrator's password to whatever you want; just replace "Y0ur Passw0rd" with your password. I've tested it on iLO 5 and BL460c Gen10. If you have empty bays, you should run it again after installing a new blade server. Also, to run it on a single bay, just replace "all" with the bay number. The code must be run via the OA, so you should open an SSH session to the OA module of each C7000 chassis.

Example 2:...
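As a sketch of how such a change can be scripted end to end: the OA CLI's hponcfg command can read an inline RIBCL XML script terminated by an end marker, and the RIBCL below modifies the built-in Administrator account's password. A hedged Python sketch, not necessarily the post's exact listing, assuming the third-party paramiko SSH library and hypothetical OA address and credentials:

```python
# Change the iLO Administrator password on all C7000 device bays by
# sending an HPONCFG RIBCL script through the Onboard Administrator's SSH CLI.
import paramiko  # third-party: pip install paramiko

OA_HOST = "192.0.2.20"       # hypothetical OA address
OA_USER = "Administrator"    # hypothetical OA account
OA_PASS = "oa-password"      # hypothetical OA password
NEW_ILO_PASSWORD = "Y0ur Passw0rd"

# RIBCL script: modify the built-in Administrator user's password.
RIBCL = f"""<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="Administrator" PASSWORD="unused">
    <USER_INFO MODE="write">
      <MOD_USER USER_LOGIN="Administrator">
        <PASSWORD value="{NEW_ILO_PASSWORD}"/>
      </MOD_USER>
    </USER_INFO>
  </LOGIN>
</RIBCL>"""

# "all" targets every device bay; replace with a bay number for a single blade.
COMMAND = f"hponcfg all << end_marker\n{RIBCL}\nend_marker"

def change_ilo_passwords() -> str:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(OA_HOST, username=OA_USER, password=OA_PASS)
    try:
        _, stdout, stderr = client.exec_command(COMMAND)
        return stdout.read().decode() + stderr.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    print(change_ilo_passwords())
```

Run once per chassis; as the excerpt notes, rerun it after installing a blade into a previously empty bay.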


Using Network Partitioning (NPAR) in VMware ESXi

Data center design is changing year by year. Before virtualization, all servers were physical and data centers were full of devices and cables. Virtualization helped to make data centers smaller and to consolidate different workloads on the same hardware. Hardware technologies are also helping to achieve this goal. Network Partitioning (NPAR) is one of the hardware technologies that helps to reduce cabling and switch devices and to use all I/O capacity in data centers.

What's Network Partitioning (NPAR)?

NPAR is an operating-system- and switch-agnostic technology that allows customers to reduce the number of I/O adapters required to support different application workloads. Traditional best practices require separate LAN or SAN connections for different aspects of application workloads. Since Converged Network Adapters already support widely used SAN protocols like Fibre Channel over Ethernet (FCoE) and iSCSI, administrators can already reduce the number of adapters needed for separate protocols, including separate Fibre Channel and Ethernet adapters. With NPAR, these adapters can now partition their network bandwidth further into multiple virtual connections, making one dual-port adapter appear as eight adapters (four partitions per port) to the operating system for use by the applications. This greatly simplifies the physical connectivity to the server, reduces implementation time, and lowers the acquisition cost of the...
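One configuration rule worth illustrating: each physical port's bandwidth is divided among its partitions, and the minimum-bandwidth weights assigned to one port's partitions typically must sum to 100 percent. A small hypothetical sketch in Python of that sanity check (partition counts and weights are examples, not vendor defaults):

```python
# Validate an NPAR layout: a dual-port adapter with four partitions per
# port appears as eight adapters; per-port minimum-bandwidth weights
# must sum to 100 percent.
PARTITIONS_PER_PORT = 4

# Hypothetical layout: {port: [min-bandwidth % per partition]}
npar_layout = {
    "port0": [50, 30, 10, 10],   # e.g. vMotion, VM traffic, mgmt, iSCSI
    "port1": [25, 25, 25, 25],
}

def validate(layout: dict[str, list[int]]) -> None:
    total_partitions = 0
    for port, weights in layout.items():
        if len(weights) != PARTITIONS_PER_PORT:
            raise ValueError(f"{port}: expected {PARTITIONS_PER_PORT} partitions")
        if sum(weights) != 100:
            raise ValueError(f"{port}: weights sum to {sum(weights)}, not 100")
        total_partitions += len(weights)
    print(f"OK: {total_partitions} partitions presented as "
          f"{total_partitions} adapters to the OS")

if __name__ == "__main__":
    validate(npar_layout)
```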


HPE Fast Fault Tolerance vs HPE Advanced ECC Support – Choosing the Best Technology!

Memory failure is one of the reasons that can cause a server crash and impact service availability and performance. Think about a service comprising multiple servers: a server could crash because of a single memory module failure or an uncorrectable memory error. To prevent memory errors from impacting services, HPE provides RAS (reliability, availability, and serviceability) technologies. In this post, we'll compare two of them: HPE Fast Fault Tolerance and HPE Advanced ECC Support. Before the comparison, let's find out why we need memory RAS.

Why is Memory RAS Needed?

Server uptime is still one of the most critical aspects of data center maintenance. Unfortunately, servers can run into trouble from time to time due to software issues, power outages, or memory errors. The three major categories of memory errors we track and manage are correctable errors, uncorrectable errors, and recoverable errors. The determination of which errors are correctable and uncorrectable is completely dependent on the capability of the memory controller.

Correctable Errors

Correctable errors are, by definition, errors that can be detected and corrected by the chipset. Correctable errors are generally single-bit errors. All HPE servers are capable of detecting and correcting single-bit errors and...
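To see what "detected and corrected" means for a single-bit error, the classic mechanism is a Hamming code: parity bits are placed so that the decoder can compute the exact position of a flipped bit. A minimal Hamming(7,4) sketch in Python as a conceptual illustration only; real Advanced ECC uses wider codes that span whole DRAM devices:

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits so any single
# flipped bit can be located and corrected. Conceptual illustration of
# single-bit-correcting ECC; server memory uses wider codes (e.g. SEC-DED).

def encode(d: list[int]) -> list[int]:
    """d = [d1, d2, d3, d4] -> 7-bit codeword, positions 1..7."""
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p4 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def correct(cw: list[int]) -> tuple[list[int], int]:
    """Return (corrected codeword, error position); position 0 = no error."""
    s1 = cw[0] ^ cw[2] ^ cw[4] ^ cw[6]
    s2 = cw[1] ^ cw[2] ^ cw[5] ^ cw[6]
    s4 = cw[3] ^ cw[4] ^ cw[5] ^ cw[6]
    pos = s1 + 2 * s2 + 4 * s4          # syndrome = 1-based error position
    if pos:
        cw = cw.copy()
        cw[pos - 1] ^= 1                # flip the faulty bit back
    return cw, pos

if __name__ == "__main__":
    word = encode([1, 0, 1, 1])
    damaged = word.copy()
    damaged[4] ^= 1                      # simulate a single-bit memory error
    fixed, pos = correct(damaged)
    print(f"error at position {pos}, corrected: {fixed == word}")
```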


HPE Synergy Cabling Guide

HPE Synergy is a next-generation data center architectural option. As a composable infrastructure, it embraces and extends key concepts and traits from the architectures that have come before it, including converged and hyperconverged systems. I don't know when, but it seems that HPE will replace the HPE Blade System with HPE Synergy in the future. Those who have worked with the HPE Blade System will be confused by HPE Synergy configuration; the configuration is very different from the Blade System. Cabling is one of the biggest challenges with HPE Synergy. There are many management connections which should be connected to the correct ports. Because HPE Synergy is a composable infrastructure, there are options for multi-frame configurations, which allow companies to expand resources and services easily. I recommend designing the cabling according to service requirements before initializing new hardware. There are some sample configurations for interconnect connections, stack connections, Image Streamer connections, and management:

Cabling interconnect modules in a single frame with redundancy
Cabling master and satellite interconnect modules in multiple frames (non-redundant)
Cabling master and satellite interconnect modules in multiple frames (with redundancy)
Cabling a single-frame management ring
Cabling a multiframe management ring
Cabling a single-frame management ring with HPE Synergy Image Streamer
Cabling a three-frame management...