Enabling the UID on a Disk Drive on HPE ProLiant Gen10 Servers Running ESXi 6.7

I recently ran into a situation where a disk threw a predictive failure alert on an HPE ProLiant server at a remote site running VMware ESXi 6.7. The iLO is a great way to figure out which drive bay contains the failed drive, but it does not provide a mechanism to “blink” the UID so that the on-site technician can easily identify and replace the correct drive.

This particular server belonged to a vSAN cluster. vSAN disk management within vSphere provides the ability to enable the LED on a drive, but that functionality did not work for us. In an earlier post, I found that I could use the Smart Storage Administrator CLI (SSACLI) utility included with the custom HPE ESXi ISO to determine drive bay location, so I wondered whether the same tool could be used to enable the UID (blinking light) for a specific drive.

I found this guide on GitHub, which has a lot of great examples of SSACLI usage, including how to enable the UID on a particular drive. First, I ran the below command to list all of the disks in the server and confirm the bay location of the failed drive.

/opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=0 pd all show detail

The output will look something like this, with an entry for each disk in the system:

physicaldrive 1I:1:4
Port: 1I
Box: 1
Bay: 4
Status: OK
Drive Type: Unassigned Drive
Interface Type: Solid State SAS
Size: 1.9 TB
Drive exposed to OS: True
Logical/Physical Block Size: 512/4096
Firmware Revision: HPD2
Serial Number: –
WWID: 58CE38EE2057D3C6
Model: HP MO001920JWFWU
Current Temperature (C): 29
Maximum Temperature (C): 30
Usage remaining: 100.00%
Power On Hours: 592
SSD Smart Trip Wearout: True

After identifying the failed drive, and its location of 1I:1:4, I can then tailor the command to enable the UID of that particular drive.

/opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=0 physicaldrive 1I:1:4 modify led=on duration=3600

The above command will enable the UID for an hour. If you want to turn the LED off manually before then, simply change the parameter to led=off.
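
For reference, turning the LED back off against the same drive looks like this (adjust the controller slot and drive ID for your environment):

/opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=0 physicaldrive 1I:1:4 modify led=off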

Continue Reading

vRealize Automation Root Password Policy Resets after Updating (8.x)

By default, a vRealize Automation (vRA) 8.x deployment sets the password expiration policy on the root account to one year. If the password is not manually updated within that year, the account expires, which prevents any future upgrades from succeeding. This issue, along with the method for resetting the root password if needed, is well documented in VMware KB 2150647. David Ring did a great write-up on modifying the password expiration policy that causes this condition. I would recommend bookmarking that procedure, because I ended up needing it more than once.

I deployed the vRA environment at 8.0 and have updated it incrementally up to 8.4. So far, I’ve noticed that with every upgrade the password policy gets reset back to the default expiration value of 365 days. I have had to add an additional validation check at the end of our upgrade process to ensure that the account expiration policies remain correct after an update is completed.
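
One simple way to perform that kind of check is with the standard Linux chage utility on each appliance node (a minimal sketch, assuming chage is available on the vRA appliance; pick the expiration value your own security policy requires):

# Show the current password aging policy for the root account
chage -l root
# Example only: set the maximum password age back to 365 days (adjust to your policy)
chage -M 365 root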

Continue Reading

Problems With mon_disk_space Script? (VMware KB 2058187)

I recently attempted to deploy the mon_disk_space script from VMware KB 2058187. The instructions from the KB are straightforward; users only need to modify the below two values to get started:

# Please provide email for alert messages
email='wmilliron@example.com'
# Please provide percentage threshold for PostgreSQL used disk space
thresh=10

The script should send an email to the address provided when the PostgreSQL volume is using more capacity than the percentage threshold provided. For my testing, I set the initial value to 10, knowing it would trigger the email to send.
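
Conceptually, the check the script performs boils down to something like this (a simplified sketch, not the KB source; the /storage/db mount point and the mail command are assumptions about the VCSA environment):

# Simplified sketch only, not the actual KB 2058187 script
# Assumes the vPostgres data sits on /storage/db and that 'mail' is available
used=$(df /storage/db | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$used" -gt "$thresh" ]; then
    echo "PostgreSQL volume is ${used}% full" | mail -s "VCSA disk space alert" "$email"
fi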

After copying the script to /etc/cron.hourly on the VCSA and running ‘chmod 700 /etc/cron.hourly/mon_disk_space’ to ensure the script is executable by cron, emails were still not showing up, even after waiting over an hour. The troubleshooting began…

First, make sure cron is attempting to execute the script by running:

grep cron /var/log/vmware/messages

You should find entries similar to this in the log:

run-parts[51761]: (/etc/cron.hourly) starting mon_disk_space
run-parts[51761][51796]: (/etc/cron.hourly) finished mon_disk_space

If you see those entries, then cron is able to execute the script, so the problem seems to be within the script itself. If you take a look at line 9 of the provided script, the variable ‘db_type’ is populated by running:

cat /etc/odbc.ini | grep DB_TYPE | awk -F= '{print $2}' | tr -d ' '

When I run that single command against my 6.7 VCSA, I get these duplicate values:

vcsa [ ~ ] cat /etc/odbc.ini | grep DB_TYPE | awk -F= '{print $2}' | tr -d ' '
PostgreSQL
PostgreSQL

Let’s take a look at the provided script again. Lines 10-12 are looking for a single “PostgreSQL” entry, but the VCSA is returning two values. This condition causes the script to exit, which explains why no emails are sent.
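
Paraphrased, the logic on those lines amounts to a simple string comparison (a sketch of the logic, not the exact KB source); because db_type now contains two lines instead of one, the comparison fails and the script exits before the disk-space check ever runs:

# Paraphrased sketch of the lines 10-12 check (not the exact KB source)
if [ "$db_type" != "PostgreSQL" ]; then
    exit 0
fi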

Simply adding a ‘uniq’ to line 9 will cause the script to produce a single, unique value. Line 9 of mon_disk_space ends up looking like this:

db_type=`cat /etc/odbc.ini | grep DB_TYPE | awk -F= '{print $2}' | uniq | tr -d ' '`

After making the change, I manually triggered the cron job by running ‘run-parts /etc/cron.hourly’. The alert properly triggered, and the email showed up in my inbox. Lastly, don’t forget to go back and modify the alerting threshold on line 6 of the script to something more sensible.

Continue Reading

HPE Primera – What the 100% Availability Guarantee Means for Storage Administrators

Disclaimer: As an attendee of Tech Field Day, my flights, accommodations, and other expenses were paid for by Tech Field Day. I was not required to write about any of the content presented, and I have not been compensated in any way for this post. Some presenting companies chose to provide gifts to attendees; any such gifts do not impact my reviews or opinions as expressed on this site. All opinions in this post are my own, and do not necessarily represent that of my employer or Gestalt IT.

HPE Primera

Tech Field Day recently held an exclusive event with HPE Storage at the Nth Generation Symposium, and I was fortunate enough to be one of the attending delegates. HPE had a lot of great sessions during the event. I would encourage you to check them out at the Tech Field Day YouTube page.

Overview

HPE’s Primera product is their new, tier-0 storage array designed for extreme performance and resiliency for mission-critical applications. Primera is a massive evolution of their existing 3Par line. Here’s a quick list of some of those changes that make Primera a next-generation product:

  • OS/Platform functions have been taken out of the traditional monolithic OS package model and split into microservices running in containers. This allows them to be upgraded separately and non-disruptively, while also allowing them to scale on demand.
  • The platform has moved to using a new, 6th-generation ASIC designed for parallel processing. Using these ASICs, the CPU cores have a direct path to every other CPU core on other nodes. This design, HPE claims, will maximize the utility of NVMe drives.
  • Primera was designed with intelligence in mind, and comes InfoSight-ready. The AI/ML components of InfoSight are intended to increase availability and decrease the workload of storage administrators.

Others have done a better job than I could at spelling out the technical aspects, so instead of rehashing them all, I’ll direct you to better resources:

  • Alex Veprinsky Presents Primera Storage at Tech Field Day (YouTube)
  • Chris Mellor – HPE scales out 3PAR to build massively parallel Primera line (Blocks and Files)
  • Chris Evans – HPE Primera – First impressions on the new storage platform (Architecting IT)

Primera’s 100% Availability Guarantee

Ever since HPE announced Primera at their HPE Discover event earlier this year, I’ve been very interested to learn more about it, in particular how they are able to back up their 100% availability guarantee.

I’ll be honest: as a systems engineer who spends at least some of my time managing various storage arrays, hearing a vendor guarantee 100% availability makes me skeptical. But Dianne Gonzalez, Principal Technical Marketing Engineer with HPE, explains that the guarantee is achievable in practice; it simply requires deployment and ongoing upkeep according to HPE’s best practices.

Customer Requirements for 100% Availability Guarantee

Primera is a hardware system that is designed to be highly performant and resilient, but that alone won’t keep a system going. HPE does have customer requirements in order to ensure compliance with their 100% availability guarantee, and it’s all explained in their 100% Availability Guarantee document. It’s readable even with no law degree, but I’ve pulled out what I believe storage admins will want to know in order to give their directors and VPs the availability they are expecting:

  • An HPE Proactive Care (or higher) support contract is required
  • HPE InfoSight must be enabled and sending data back to HPE (requires external connectivity)
  • All critical and recommended HPE patches must be applied within 10 days of receiving patch availability notification from InfoSight
  • All critical and recommended Primera OS/firmware releases must be applied within 30 days of receiving upgrade notification from InfoSight
  • Guarantee applies to unplanned outages only (obviously)

Most of those points should come as no surprise. Pulling all the power cables out of an array isn’t going to net you a fat credit from HPE. The biggest thing the customer has to worry about is staying on top of updates, but that’s what HPE Proactive Care is for. In my experience, HPE Proactive Care support contracts are well worth it. The Proactive Care team works hand in hand with the customer to schedule and perform updates. Assuming customers can get the patches approved by their change review boards in time, holding up their end of the 100% availability responsibilities should be no problem.

That might leave you wondering, what happens if there is a qualifying outage?

Customers will need to open a support case with HPE, who will determine, at their sole discretion, whether the outage qualifies for a credit. HPE states that the credit can be up to 20% of the original purchase price, but also that the awarded credit amount will be determined by them on a case-by-case basis. If credit is awarded, it can only be applied to the purchase of a new Primera or an upgrade of an existing one.

My Thoughts

I think HPE is headed in the right direction with Primera. It’s the next evolution of a successful storage platform that’s been around for nearly two decades. I’ve managed a handful of 3Par arrays over the past few years, and for the most part, they have been solid. For existing 3Par customers, buying Primera is almost a no-brainer. In fact, HPE states that 3Par arrays will be compatible with Primera arrays as replication partners. That seems like a pretty smart way to avoid alienating existing customers, while also giving them a path onto the new platform. HPE was clear at the Tech Field Day exclusive event that there are currently no plans to deprecate the 3Par line.

With such a comprehensive storage portfolio, I wouldn’t be surprised if HPE eventually stops selling 3Par as it shifts its customers to Primera. But for now, customers have the choice of which platform is best for them.

Overall, I’m impressed with Primera. HPE is building on the strengths of the 3Par platform while optimizing and redesigning where necessary for future improvements. HPE has hinted there will be more to come in terms of NVMe and SCM down the road, and I’ll be excited to see how the product leverages those technologies.

Continue Reading

VMware Cloud Automation Services: The Next Evolution in Multi-Cloud Automation

Disclaimer: As an attendee of Tech Field Day 19, my flights, accommodations, and other expenses were paid for by Tech Field Day. I was not required to write about any of the content presented, and I have not been compensated in any way for this post. Some presenting companies chose to provide gifts to attendees; any such gifts do not impact my reviews or opinions as expressed on this site. All opinions in this post are my own, and do not necessarily represent that of my employer or Gestalt IT.

The last day of Tech Field Day 19 was all about VMware. The recordings from their presentations that day can be found here, but in this post I’ll be focusing on the content presented by Ken Lee and Cody De Arkland on VMware’s Cloud Automation Services (CAS) suite.

Overview

The Cloud Automation Services Suite is a SaaS-based offering designed for multi-cloud management and automation and is currently composed of 3 products:

  • Cloud Assembly serves as the blueprinting engine, allowing users to deploy workloads, such as infrastructure or containers, to any connected public or private cloud environment.
  • Service Broker is the “storefront” of sorts. It functions as a catalog of services available to users. Tailored policies and request forms can be applied to those available services to non-disruptively maintain organizational controls such as naming, access, cost controls, etc.
  • Code Stream is the CI/CD platform of the product. It leverages the concept of “pipelines” to automate the delivery of applications or infrastructure. Users can integrate existing tools like GitLab and Jenkins while using Code Stream to orchestrate the flow.

Cody does an absolutely excellent job of explaining and demonstrating these products in his Tech Field Day presentations, so be sure to check those out for all the juicy details.

My Thoughts

Those familiar with VMware’s current vRealize Automation (vRA) product will recognize that CAS is clearly a logical progression of the technology vRA offers. Improving the on-boarding process and developing new integrations with third-party tools and platforms are just two of the ways they’ve used customer feedback to improve the product. What remains to be seen is exactly what parallels will exist between CAS and the next version of vRA, other than the obvious difference in deployment models. Cody hints that we should pay attention to announcements at VMworld 2019 for more information, and I intend to do just that.

What could not be ignored during the Tech Field Day presentations on CAS was just how flexible this product is. Perhaps the most concise description of that comes from Pietro Piutti:

https://twitter.com/stingray92/status/1144689695542009856

Being able to connect to both public and private clouds and deploy workloads in just a matter of minutes provides an easy on-ramp for customers. Achieving similar functionality in recent versions of vRA is possible, but the configuration required to do so is more complicated.

That flexibility doesn’t end with the Cloud Assembly product. The entire Cloud Automation Services suite was designed with an “API-first” mentality. That allows the product to be extremely extensible. VMware isn’t asking customers to give up their tools. Do you want to continue to leverage GitHub or GitLab for your code repos? CAS supports that. Are you using Ansible or Puppet for your configuration management? No problem. While watching the demonstrations live at Tech Field Day, I couldn’t help but notice that VMware’s focus for this platform is to make it consumable, regardless of technical approach.

“We’ve taken concepts that could be very complex, and we’ve given people an on-ramp to use them.”

– Cody De Arkland, Technical Marketing Architect at VMware, on using Blueprints in VMware Cloud Assembly

Working in this field, it’s common to see a new product or platform that is impressive in function but requires users to abandon their existing tools or processes. Those processes then have to be rebuilt on the new platform with new methods. That isn’t the play by VMware with Cloud Automation Services. They understand that for this product to be adopted, it must be usable, and they must allow users, administrators, and developers to bring their own tools and processes.

Keep in mind that VMware Cloud Automation Services is a SaaS offering, and that comes with the added benefit of not having to manage the infrastructure to perform these functions. But SaaS products aren’t for everyone. Although CAS is being touted as the next evolution of vRA, I don’t see vRA being deprecated in favor of CAS. I hope that feature parity is maintained between CAS and vRA moving forward so that customers can decide which product is right for them, without sacrifice. Cody is refreshingly transparent in his presentations and makes clear that all of a customer’s desired product integrations may not exist yet, but that they take feedback very seriously and are rapidly developing to accommodate customers’ needs. I’m looking forward to getting an update on the future of these products at VMworld 2019.

In a nutshell, VMware’s Cloud Automation Services platform allows organizations to embrace DevOps methodologies without attempting to funnel customers into using a particular set of tools. I’m excited to see what is added and refined in the product, as this platform only became generally available early in 2019. If you want to get your hands on the product to learn more, VMware offers a hands-on lab specific to Cloud Automation Services.

Continue Reading