Problems With mon_disk_space Script? (VMware KB 2058187)

I recently attempted to deploy the mon_disk_space script from VMware KB 2058187. The instructions in the KB are straightforward; users only need to modify the two values below to get started:

# Please provide email for alert messages
email='wmilliron@example.com'
# Please provide percentage threshold for PostgreSQL used disk space
thresh=10

The script should send an email to the provided address when the PostgreSQL volume is using more capacity than the specified percentage threshold. For my testing, I set the initial value to 10, knowing it would trigger the email to send.
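The decision the script makes can be sketched with a few lines of shell. This is a minimal simulation, not the KB code itself: ‘used’ stands in for the usage percentage the real script derives from df, and the variable names are illustrative.

```shell
# Minimal simulation of the alert decision -- not the KB script itself.
thresh=10      # the KB default I left in place to force an alert
used=42        # stand-in for the PostgreSQL volume's used-space percentage
if [ "$used" -gt "$thresh" ]; then
    send_alert=1   # the real script emails $email at this point
else
    send_alert=0
fi
echo "send_alert=$send_alert"
```

With the threshold left at 10, almost any real-world usage figure exceeds it, which is exactly why it makes a good smoke test.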

After copying the script to /etc/cron.hourly on the VCSA and running ‘chmod 700 /etc/cron.hourly/mon_disk_space’ to ensure the script is executable by cron, emails were still not showing up, even after waiting over an hour. The troubleshooting began…

First, make sure cron is attempting to execute the script by running:

grep cron /var/log/vmware/messages

You should find entries similar to this in the log:

run-parts[51761]: (/etc/cron.hourly) starting mon_disk_space
run-parts[51761][51796]: (/etc/cron.hourly) finished mon_disk_space

If you see those entries, then cron is able to execute the script, so the problem seems to be within the script itself. If you take a look at line 9 of the provided script, the variable ‘db_type’ is populated by running:

cat /etc/odbc.ini | grep DB_TYPE | awk -F= '{print $2}' | tr -d ' '

When I run that single command against my 6.7 VCSA, I get these duplicate values:

vcsa [ ~ ] cat /etc/odbc.ini | grep DB_TYPE | awk -F= '{print $2}' | tr -d ' '
PostgreSQL
PostgreSQL

Let’s take a look at the provided script again. Lines 10-12 are looking for a single “PostgreSQL” entry, but the VCSA is returning two values. This mismatch causes the script to exit, which explains why no emails are sent.
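The mismatch is easy to reproduce without a VCSA. The sketch below feeds a stand-in for /etc/odbc.ini through the KB pipeline and through the fixed one; the DSN section names are made up, but the duplicated DB_TYPE lines match what my 6.7 VCSA returns.

```shell
# Stand-in for /etc/odbc.ini: two DSN sections (names are illustrative),
# each carrying its own DB_TYPE line -- the source of the duplicate output.
odbc='[DSN_One]
DB_TYPE = PostgreSQL
[DSN_Two]
DB_TYPE = PostgreSQL'

# Pipeline as shipped in the KB script: two lines come back, so a comparison
# against the single string "PostgreSQL" fails and the script bails out.
broken=$(printf '%s\n' "$odbc" | grep DB_TYPE | awk -F= '{print $2}' | tr -d ' ')

# Same pipeline with uniq added: the adjacent duplicates collapse to one value.
fixed=$(printf '%s\n' "$odbc" | grep DB_TYPE | awk -F= '{print $2}' | uniq | tr -d ' ')

echo "broken: $broken"
echo "fixed:  $fixed"
```

Note that uniq only collapses adjacent duplicate lines, which is sufficient here because grep emits the two identical DB_TYPE lines back to back.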

Simply adding a ‘uniq’ to line 9 will cause the script to produce a single, unique value. Line 9 of mon_disk_space ends up looking like this:

db_type=`cat /etc/odbc.ini | grep DB_TYPE | awk -F= '{print $2}' | uniq | tr -d ' '`

After making the change, I manually triggered the cron job by running run-parts /etc/cron.hourly. The alert properly triggered, and the email showed up in my inbox. Lastly, don’t forget to go back and modify the alerting threshold on line 6 of the script to something more sensible.


HPE Primera – What the 100% Availability Guarantee Means for Storage Administrators

Disclaimer: As an attendee of Tech Field Day, my flights, accommodations, and other expenses were paid for by Tech Field Day. I was not required to write about any of the content presented, and I have not been compensated in any way for this post. Some presenting companies chose to provide gifts to attendees; any such gifts do not impact my reviews or opinions as expressed on this site. All opinions in this post are my own, and do not necessarily represent that of my employer or Gestalt IT.

HPE Primera

Tech Field Day recently held an exclusive event with HPE Storage at the Nth Generation Symposium, and I was fortunate enough to be one of the attending delegates. HPE had a lot of great sessions during the event. I would encourage you to check them out at the Tech Field Day YouTube page.

Overview

HPE’s Primera product is their new tier-0 storage array, designed for extreme performance and resiliency for mission-critical applications. Primera is a massive evolution of their existing 3Par line. Here’s a quick list of some of the changes that make Primera a next-generation product:

  • OS/Platform functions have been taken out of the traditional monolithic OS package model and split into microservices running in containers. This allows them to be upgraded separately and non-disruptively, while also allowing them to scale on demand.
  • The platform has moved to using a new, 6th-generation ASIC designed for parallel processing. Using these ASICs, the CPU cores have a direct path to every other CPU core on other nodes. This design, HPE claims, will maximize the utility of NVMe drives.
  • Primera was designed with intelligence in mind, and comes InfoSight-ready. The AI/ML components of InfoSight are intended to increase availability and decrease the workload of storage administrators.

Others have done a better job than I could at spelling out the technical aspects, so instead of rehashing them all, I’ll direct you to better resources:

  • Alex Veprinsky Presents Primera Storage at Tech Field Day (YouTube)
  • Chris Mellor – HPE scales out 3PAR to build massively parallel Primera line (Blocks and Files)
  • Chris Evans – HPE Primera – First impressions on the new storage platform (Architecting IT)

Primera’s 100% Availability Guarantee

Ever since HPE announced Primera at their HPE Discover event earlier this year, I’ve been very interested to learn more about it, and in particular how they are able to back up their 100% availability guarantee.

I’ll be honest: as a systems engineer who spends at least some of my time managing various storage arrays, hearing a vendor guarantee 100% availability makes me skeptical. But Dianne Gonzalez, Principal Technical Marketing Engineer with HPE, explains that the guarantee is achievable in practice; it simply requires deployment and ongoing upkeep according to HPE’s best practices.

Customer Requirements for 100% Availability Guarantee

Primera is a hardware system that is designed to be highly performant and resilient, but that alone won’t keep a system going. HPE does have customer requirements in order to ensure compliance with their 100% availability guarantee, and it’s all explained in their 100% Availability Guarantee document. It’s readable even with no law degree, but I’ve pulled out what I believe storage admins will want to know in order to give their directors and VPs the availability they are expecting:

  • An HPE Proactive Care (or higher) support contract is required
  • HPE InfoSight must be enabled and sending data back to HPE (requires external connectivity)
  • All critical and recommended HPE patches must be applied within 10 days of receiving patch availability notification from InfoSight
  • All critical and recommended Primera OS/firmware releases must be applied within 30 days of receiving upgrade notification from InfoSight
  • Guarantee applies to unplanned outages only (obviously)

Most of those points should come as no surprise. Pulling all the power cables out of an array isn’t going to net you a fat credit from HPE. The biggest thing the customer has to worry about is staying on top of updates, but that’s what HPE Proactive Care is for. In my experience, the HPE Proactive Care support contracts are well worth it. The Proactive Care team works hand in hand with the customer to schedule and perform updates. Assuming customers can get patches approved by their change review boards in time, holding up their end of the 100% availability responsibilities should be no problem.

That might leave you wondering, what happens if there is a qualifying outage?

Customers will need to open a support case with HPE, which will determine, at its sole discretion, whether the outage qualifies for a credit. HPE states that the credit can be up to 20% of the original purchase price, but also that the awarded amount will be determined on a case-by-case basis. If a credit is awarded, it can only be applied to the purchase of a new Primera or an upgrade of an existing one.

My Thoughts

I think HPE is headed in the right direction with Primera. It’s the next evolution of a successful storage platform that’s been around for nearly two decades. I’ve managed a handful of 3Par arrays over the past few years, and for the most part, they have been solid. For existing 3Par customers, buying Primera is almost a no-brainer. In fact, HPE states that 3Par arrays will be compatible with Primera arrays as replication partners. That seems like a pretty smart way to avoid alienating existing customers while also giving them a path onto the new platform. HPE was clear at the Tech Field Day exclusive event that there are currently no plans to deprecate the 3Par line.

Looking at their broader storage portfolio, I wouldn’t be surprised if HPE eventually stops selling 3Par as it shifts its customers to Primera. But for now, customers have the choice of which platform is best for them.

Overall, I’m impressed with Primera. HPE is building on the strengths of the 3Par platform while optimizing and redesigning where necessary for future improvements. HPE has hinted there will be more to come in terms of NVMe and SCM down the road, and I’ll be excited to see how the product leverages those technologies.


VMware Cloud Automation Services: The Next Evolution in Multi-Cloud Automation

Disclaimer: As an attendee of Tech Field Day 19, my flights, accommodations, and other expenses were paid for by Tech Field Day. I was not required to write about any of the content presented, and I have not been compensated in any way for this post. Some presenting companies chose to provide gifts to attendees; any such gifts do not impact my reviews or opinions as expressed on this site. All opinions in this post are my own, and do not necessarily represent that of my employer or Gestalt IT.

The last day of Tech Field Day 19 was all about VMware. The recordings from their presentations that day can be found here, but in this post I’ll be focusing on the content presented by Ken Lee and Cody De Arkland on VMware’s Cloud Automation Services (CAS) suite.

Overview

The Cloud Automation Services suite is a SaaS-based offering designed for multi-cloud management and automation, and is currently composed of three products:

  • Cloud Assembly serves as the blueprinting engine, allowing users to deploy workloads, such as infrastructure or containers, to any connected public or private cloud environment.
  • Service Broker is the “storefront” of sorts. It functions as a catalog of services available to users, and tailored policies and request forms can be applied to those services to non-disruptively maintain organizational controls such as naming, access, and cost.
  • Code Stream is the CI/CD platform of the product. It leverages the concept of “pipelines” to automate the delivery of applications or infrastructure, and users can integrate existing tools like GitLab and Jenkins while using Code Stream to orchestrate the flow.

Cody does an absolutely excellent job of explaining and demonstrating these products in his Tech Field Day presentations, so be sure to check those out for all the juicy details.

My Thoughts

Those familiar with VMware’s current vRealize Automation product (vRA) will recognize that CAS is clearly a logical progression of the technology vRA offers. Improving the on-boarding process and developing new integrations with third-party tools and platforms are just two of the ways they’ve used customer feedback to improve the product. What remains to be seen is exactly what parallels will exist between CAS and the next version of vRA, other than the obvious difference in deployment models. Cody hints that we should pay attention to announcements at VMworld 2019 for more information, and I intend to do just that.

What could not be ignored during the Tech Field Day presentations on CAS was just how flexible this product is. Perhaps the most concise description of that flexibility came from fellow delegate Pietro Piutti.

Being able to connect to both public and private clouds and deploy workloads in a matter of minutes provides an easy on-ramp for customers. Achieving similar functionality in recent versions of vRA is possible, but the configuration required to do so is more complicated.

That flexibility doesn’t end with the Cloud Assembly product. The entire Cloud Automation Services suite was designed with an “API-first” mentality. That allows the product to be extremely extensible. VMware isn’t asking customers to give up their tools. Do you want to continue to leverage GitHub or GitLab for your code repos? CAS supports that. Are you using Ansible or Puppet for your configuration management? No problem. While watching the demonstrations live at Tech Field Day, I couldn’t help but notice that VMware’s focus for this platform is to make it consumable, regardless of technical approach.

“We’ve taken concepts that could be very complex, and we’ve given people an on-ramp to use them.”

– Cody De Arkland, Technical Marketing Architect at VMware, on using Blueprints in VMware Cloud Assembly

Working in this field, it’s common to see a new product or platform that is impressive in function but requires users to abandon their existing tools or processes. Those processes then have to be rebuilt on the new platform with new methods. That isn’t the play by VMware with Cloud Automation Services. They understand that for this product to be adopted, it must be usable, and they must allow users, administrators, and developers to bring their own tools and processes.

Keep in mind that VMware Cloud Automation Services is a SaaS offering, and that comes with the added benefit of not having to manage the infrastructure that performs these functions. But SaaS products aren’t for everyone. Although CAS is being touted as the next evolution of vRA, I don’t see vRA being deprecated in favor of CAS. I hope that feature parity is maintained between CAS and vRA moving forward so that customers can decide which product is right for them, without sacrifice. Cody is refreshingly transparent in his presentations and makes clear that all of a customer’s desired product integrations may not exist yet, but that they take feedback very seriously and are rapidly developing to accommodate customers’ needs. I’m looking forward to getting an update on the future of these products at VMworld 2019.

In a nutshell, VMware’s Cloud Automation Services platform allows organizations to embrace DevOps methodologies without attempting to funnel customers into using a particular set of tools. I’m excited to see what is added and refined in the product, as this platform only became generally available early in 2019. If you want to get your hands on the product to learn more, VMware offers a hands-on lab specific to Cloud Automation Services.


My Take: Ixia’s Visibility Portfolio – As Seen at Tech Field Day 19

Disclaimer: As an attendee of Tech Field Day 19, my flights, accommodations, and other expenses were paid for by Tech Field Day. I was not required to write about any of the content presented, and I have not been compensated in any way for this post. Some presenting companies chose to provide gifts to attendees; any such gifts do not impact my reviews or opinions as expressed on this site. All opinions in this post are my own, and do not necessarily represent that of my employer or Gestalt IT.

The first presenting company of Tech Field Day 19 (TFD19) was Ixia. Their presentation focused on their network visibility offerings. I’m not going to elaborate on all of the details of their presentations; I would instead encourage you to check out the full recordings.

My Thoughts

Although I have a pretty solid understanding of networking principles and regularly spend time troubleshooting problems around the network stack, the products in Ixia’s visibility platform are beyond the scope of what I do daily. Despite that, it’s easy to see where their product line would fit into an enterprise environment. They strive to offer increased visibility into network traffic from the datacenter to the edge and all the way to the cloud. Companies need to increase security within their networks, but they can’t do that without knowing what is happening inside them.

“The number one driver for visibility is security.”

Recep Ozdag – VP and GM of Network Visibility, Ixia

They leverage network taps to pass traffic, packet brokers to organize, aggregate, and filter traffic, and they offer options to do the same in the cloud. You can then wrap rules and policies around the packet brokers to dictate what traffic is directed to your existing network security tool set, and how much of it. If that seems a little confusing, the diagram from their presentation may help.

You may be asking yourself, “Why would I want this?” Perimeter security is great, but it is only a small part of the picture when it comes to securing a network. Enterprise networks create an immense amount of traffic, a large portion of which never touches the perimeter. Trying to funnel all of that traffic through a traditional set of security tools will likely overrun what the tools are capable of processing, or will land you a crazy-high licensing bill for tools whose pricing is based on ingestion rates. That’s where the beauty of the packet broker comes in: it allows you to wrap policies around traffic flow to intelligently trim and route packets to your existing security tools in order to maximize their utility.

I’m not going to get any further into the weeds on how it all works (that’s what the presentations are for). If you’re wondering whether this type of product is right for you, consider the customer statistics they displayed: some impressive adoption numbers among large companies. My guess is that’s because their product is geared toward those large companies; I would be interested to see their customer numbers in the small/medium-sized business space.

Unfortunately, Ixia only had an hour to present at TFD19 and didn’t have time to demonstrate the platform. They did show us a few screenshots of the UI, and honestly, it felt a little dated. I can’t speak to the usability of the interface since I didn’t see it in action, but it reminds me of the iLO2 interfaces from the HPE days of old. Given their customer base, I would assume the interface functions as expected; I was just hoping for something a little more modern.

TL;DR

Even though the UI leaves a little to be desired, Ixia offers a comprehensive portfolio of products that will help collect, aggregate, and filter network packets in order to maximize the effectiveness of a customer’s existing security tools. I would love to see how Ixia can tailor a solution to help the small businesses of the world achieve similar visibility.


Presentation of “vSAN 2-Node Clusters: From Planning to Production”

Recently I had the opportunity to present on my journey deploying vSAN 2-node clusters at the Central Ohio VMUG UserCon, as well as a local Pittsburgh VMUG event. Overall, it was a great experience and I’m thankful to have had the opportunity! Check out the recording of the session below, and don’t forget to check out my post on Building a 2-node Direct Connect vSAN cluster.

Feel free to download the slide deck for this presentation.

Thanks to Ariel Sanchez for not only recording the session and uploading it to the vBrownBag channel, but also for pushing me to give this presentation in the first place!
