When I first start working with an API, I aim for low-hanging fruit. REST APIs, by nature, should be fairly generic to interact with; however, there are usually small nuances to take into consideration. For example, I recently found out that the VMware Cloud on AWS API uses a csp-auth-token header for authentication and authorization.
While authentication and authorization to the VCF API were straightforward (SDDC Manager username and password), I struggled the first time with POSTing a new VMware license due to the API requiring a specific format for productType.
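Here's a minimal Python sketch of how that license POST body might be constructed. The endpoint path, field names, and the accepted productType strings are my assumptions; verify them against the VCF API reference for your version.

```python
# A minimal sketch of building the POST body for adding a license key to
# SDDC Manager. The accepted productType strings below are assumptions --
# check the VCF API reference for your version before relying on them.
VALID_PRODUCT_TYPES = {"VCENTER", "VSAN", "ESXI", "NSXV", "NSXT", "SDDC_MANAGER"}

def build_license_payload(key, product_type, description=""):
    """Return a JSON-ready body for POST /v1/license-keys (path is an assumption)."""
    if product_type not in VALID_PRODUCT_TYPES:
        raise ValueError(f"productType must be one of {sorted(VALID_PRODUCT_TYPES)}")
    return {"key": key, "productType": product_type, "description": description}
```

A small validation step like this catches the productType format nuance before the API rejects the request.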
In a recent post, I wrote about interacting with VCF using the API to add a new license key as a simple way to begin familiarizing myself with the API. As a huge proponent of PowerShell, I began looking for a module to talk to the API but came up empty handed. I began working on a module with vSphere admins in mind because I know the important role PowerShell plays in day-to-day operations. During a conversation with Jase McCarty, he told me about the PowerVCF project, which does exactly that! The module was initially developed and is maintained by Brian O’Connell and includes 50 cmdlets covering ~70% of the API calls in VCF 3.9.0:
Get/New/Set/Remove Workload Domain
Get/New/Set/Remove vSphere Cluster
Get/New/Remove Network Pool
Get/New/Remove Network IP Pool
Get/New/Remove License Keys
Get/Set participation in CEIP
Get/Start Backup Configuration
Get/Request Log Bundle
Get/Set Microsoft Certificate Authority
Get/Request Certificate CSRs
Get/Set Depot Credentials
Get PSCs & vCenter servers managed by SDDC Manager
Get NSX-V Managers
Get NSX-V/T Clusters
Get vRealize Log Insight info
Get vRealize Lifecycle Manager & Environment info
Get vROPS info
PowerVCF is also mostly compatible with VxRail, with the exception of commissioning and decommissioning hosts, working with network pools, and creating and removing workload domains.
The biggest functionality gaps right now are creating and deleting PKS/Horizon workload domains and creating/joining/tearing down federations. I’m currently working on the PKS workload domain functions and plan to submit a PR soon!
If you’ve recently deployed VCF and are looking to orchestrate functionality, I highly recommend checking this module out! If you enjoy creating PowerShell cmdlets and are looking to contribute to a project, you’ll find quite a few opportunities to help us work towards feature parity!
Today I took my first VMware certification exam in 7 years and I’m happy to report that I successfully passed the Professional vSphere 6.7 Delta Exam 2019 (2V0-21.19D) to become a VMware Certified Professional again!
The VMware hypervisor hasn’t changed dramatically in the 7 years since I took the VCP 5 exam, and I never stopped working with vSphere, so preparation didn’t require a tremendous amount of time. The topics I spent the most time on were vSphere HA/DRS updates, vCenter HA and PSCs, content libraries, SSO domains, and security enhancements. It’s especially helpful that VMware created the Delta Exam, as it allows professionals to test and re-certify only on the information that is new or has changed since the previous exam, without requiring candidates to sit through the complete VCP exam.
Now that I’ve been at VMware for 6 months, I felt it was timely to get my VCP-DCV updated. It also helps that all VMware employees receive three free exam vouchers per year! In the bigger picture, I want to become more versed in the major components of VMware Cloud Foundation that I’m less familiar with: NSX and vRealize. In 2020 I will focus on completing the VCP-NV and VCP-CM certifications to gain deeper knowledge of these technologies and the value they bring to organizations.
About the Exam
The exam targets candidates who already hold a VCP-DCV 6.5 and have 6-12 months of experience installing, configuring, and managing vSphere. The exam consists of 40 single- and multiple-choice questions covering a wide variety of topics such as:
Architecture and Technologies
Products and Solutions
Planning and Designing
Installing, Configuring, and Setup
Performance-tuning, Optimization, and Upgrades
Troubleshooting and Repairing
Administrative and Operational Tasks
The full exam prep guide can be found on VMware’s Education site here.
Since it’s been such a long time since I held an active VMware certification, I want to help others who may be on a similar journey. In upcoming blog posts, I will comprehensively cover the biggest and most complex additions/changes to vSphere since vSphere 5.5. If you have topics you’d like to see covered, drop a comment!
A common question I receive from customers is why they don’t see a VMware Cloud Foundation license in the MyVMware portal. What appears instead are licenses for each of the individual products that make up the VCF edition you purchased, which is typically:
I’ve been at VMware for 12 weeks now and am continuing to work towards being a vSAN expert. One of the many challenges on the way to that goal is not only learning the current state of vSAN’s features and capabilities (the latest being 6.7U3) but also learning how vSAN operated in previous versions so I can articulate to my customers why feature X in a given release is relevant to them.
VMware has released updates to vSAN 75 times since the initial release in 2014, including 12 updates in 2019 alone. So where is the best place to start for a foundational understanding of modern vSAN functionality? VMware called version 6.6 their “Biggest Release Ever” back in 2017 and, admittedly, while at Pure Storage that’s the version where I started to recognize that vSAN had matured considerably. However, of the handful of customers that I support in my Global Accounts role at VMware, most are running at least vSphere 6.5U3, so vSAN 6.6.1 will be the basis for my learning.
One of the confusing things I’ve had to adjust to while diving into vSAN is that vSphere and vSAN versions don’t match. One would reasonably expect a product built into another to have matching versions, but they rarely do. Interestingly, they have matched in the past! One of the most helpful documents I’ve used at VMware while ramping up is KB 2150753, Build numbers and versions of VMware vSAN. I’ve referenced this KB article many times to correlate vSphere and vSAN versions. At the end of the day, matching version numbers is a nice-to-have “feature,” but not matching is the reality of two separate business units working on their own products, each with specific goals and milestones for their major and minor releases.
I’m going to highlight the major performance and usability enhancements to vSAN over the past four releases:
What Was New in vSAN 6.6.1
A typical minor dot-release for vSAN: a few new enhancements but nothing major. Although there were 12 updates to 6.6.1 after its initial release (Express Patches, Patches, and Updates), I couldn’t find any release notes. Fundamentally, these were the most important features in this release:
VUM Integration: VUM integration automates the process of ensuring that hardware installed in the cluster is on the VMware Compatibility Guide (or HCL). It also provides firmware updates for select hardware vendors such as Dell, Lenovo, Supermicro, and Fujitsu. A known issue in this release is that custom ISOs are not supported in vSAN build recommendations, and hosts built on custom ISOs will display as Non-Compliant.
Storage Device Serviceability (Blink Disk Lights): When a device fails, it’s extremely important to be able to find it in the server! This feature gives you the ability to select the disk in the UI and make its LED blink. Great feature, but in this release it’s limited to HPE DL/ML series servers with Gen 9 controllers.
What Was New in vSAN 6.7 GA
A big usability enhancement in this release was the HTML 5 Client becoming the standard interface for vSphere! Other notable performance enhancements included:
Adaptive Resync
This feature includes three main components: congestion control mechanisms, a dispatch/fairness scheduler, and a bandwidth regulator. In essence, under contention vSAN has the ability to throttle I/O caused by resync operations in favor of prioritizing VM I/O. Before this feature was added, VM I/O was in an every-man-for-himself battle that could cause performance problems. The adaptive nature of this feature means it’s always on, making it an invisible vSAN operation that doesn’t require any user-defined configuration. The Adaptive Resync Deep Dive on StorageHub goes into much greater detail.
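The behavior described above can be sketched as a toy bandwidth regulator. This is purely illustrative: the 20% resync floor and the allocation logic are simplified assumptions of mine, not vSAN’s actual scheduler.

```python
def allocate_bandwidth(total, vm_demand, resync_demand, resync_share=0.2):
    """Toy model of Adaptive Resync: when VM and resync I/O together exceed
    the available bandwidth, resync traffic is throttled to a guaranteed
    slice (an assumed 20% here) and VM I/O gets priority for the rest."""
    if vm_demand + resync_demand <= total:
        return vm_demand, resync_demand  # no contention: nobody is throttled
    resync = min(resync_demand, total * resync_share)
    vm = min(vm_demand, total - resync)
    return vm, resync

# Under contention (90 + 30 > 100), VM I/O wins and resync is throttled:
print(allocate_bandwidth(100, 90, 30))  # → (80.0, 20.0)
```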
New Health Checks in vSAN Health
vSAN Health is a cloud-connected, built-in framework providing proactive health checks for vSAN clusters. Participation in VMware’s Customer Experience Improvement Program (CEIP) is mandatory to realize this benefit. This capability was initially released in vSAN 6.6, and additional checks were added in 6.7, including:
Host maintenance mode verification
Host consistency settings for advanced settings
Improved vSAN and vMotion network connectivity checks
Improved vSAN Health Service installation check
Physical Disk Health checks combine multiple checks into a single health check
Improved HCL check
Firmware checks are now independent of driver checks
This release had three new features to improve performance and reliability when using stretched clusters. Namely:
Intelligent site continuity: If there’s a partition in the cluster (link goes down, etc), vSAN will first validate which site provides maximum data availability before establishing a quorum with the witness. For example, if Site A (preferred) lost a node or a device during the partition and objects are in a degraded state but Site B (secondary) is healthy, vSAN will consider Site B active until Site A is healthy again.
Witness traffic separation: A separate vmkernel NIC can be dedicated to vSAN witness traffic when using stretched clusters. Previously, the data network was required to communicate with the vSAN witness host, and that VLAN had to be stretched across the WAN as well. When deploying stretched clusters, separating witness traffic is recommended.
Efficient inter-site resync: A proxy host is established for components that need to be resynced across sites following a failure, instead of copying the objects across the WAN to meet the storage policy requirements.
More details on vSAN 6.7 GA updates can be found in the release notes.
What Was New in vSAN 6.7 Update 1
vSAN 6.7U1 seems to be the biggest update to vSAN since 6.6, and there are a lot of great performance and usability enhancements in this release!
To speed up and simplify the deployment of vSphere clusters, the following tasks are now handled for you:
Setup HA, DRS, and vSAN
Select vSAN deployment type
Network configuration including vSphere Distributed Switching
Disk Group configuration
Enable Deduplication & Compression / Encryption
Remember how in 6.6.1 there was VUM integration? Well, kinda… what was missing was the ability to use VUM to update vSAN clusters built on OEM-specific ISOs. That’s fixed in this release, but there’s still no ability to update vSAN through VUM with custom ISOs.
When entering a host into maintenance mode, whether to perform updates or simply to decommission it, vSAN will now perform a full simulation of the activity (assessing the capacity/availability impact of the host going into maintenance mode and the cluster’s ability to redistribute object components) and report back success or failure.
Additionally, the “object repair delay timer” setting (around since vSAN 5.5) is now in the GUI. This allows an administrator to modify the amount of time vSAN waits before rebuilding data when components are out of compliance with the storage policy due to a disk or node failure.
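Conceptually, the capacity portion of that pre-check resembles the sketch below. This is my simplification; the real simulation also accounts for availability, storage policies, and component placement.

```python
def maintenance_precheck(host_used_gb, host_free_gb, entering_host):
    """Toy capacity pre-check: report whether the remaining hosts have
    enough free space to absorb the data evacuated from `entering_host`."""
    to_evacuate = host_used_gb[entering_host]
    spare = sum(free for host, free in host_free_gb.items() if host != entering_host)
    return {"to_evacuate_gb": to_evacuate, "spare_gb": spare, "ok": spare >= to_evacuate}

used = {"esx01": 800, "esx02": 700, "esx03": 750}
free = {"esx01": 400, "esx02": 500, "esx03": 450}
print(maintenance_precheck(used, free, "esx01"))
# → {'to_evacuate_gb': 800, 'spare_gb': 950, 'ok': True}
```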
vSAN now has awareness of TRIM/UNMAP commands sent from the Guest OS and can reclaim previously allocated blocks as free space.
Mixed MTU Support for 2 Node and Stretched Clusters
Remember that Witness Traffic Separation (WTS) feature in 6.7 GA? It was nice that a different vmkernel port could be used to separate vSAN data traffic from witness traffic; however, it was still required that the MTU matched on all vmkernel interfaces. That changed in 6.7U1 and now it’s possible to have Jumbo Frames on the vSAN data vmkernel interfaces while using a standard MTU setting on the vmkernel interface for witness traffic!
Enhanced Health Checks & Support
Network performance health check ensures that sufficient performance can be achieved
Display and classify multiple, VCG-approved storage controller firmware versions such as not latest, latest, and not on HCL
Expanded diagnostics in vSAN Support Insight, which gives GSS tools to capture network diagnostic data and further reduces the need for collecting and transmitting logs
More details on vSAN 6.7 Update 1 features can be found in the release notes.
What’s New in vSAN 6.7 Update 3
Finally! We’ve made it to the current version of vSAN, and you may have noticed that we skipped over Update 2. That’s because vSphere 6.7 Update 2 didn’t include any new features or enhancements for vSAN. I guess VMware tries to keep versions aligned after all?
Update 3 is another huge leap forward for vSAN, with the biggest addition being the introduction of Cloud Native Storage. This isn’t specifically tied to just vSAN. Instead, it enables vSphere to provide persistent storage to Kubernetes and gives the vSphere administrator the ability to select the required storage (vSAN, VMFS, NFS) for the pod. There’s an excellent doc on Getting Started with VMware Cloud Native Storage here which walks you through setting up a k8s cluster, deploying applications, and managing container volumes.
VUM integration gets another update: instead of showing only the latest version of vSAN, you can create new baselines that allow you to stay at your current version and only see new patches and updates.
New Monitoring and Dashboards
Capacity Monitoring Dashboard has been redesigned to provide better visibility into overall as well as granular utilization. New insights per site, per fault domain, and host/disk level
Resync: improved accuracy when displaying time remaining to complete a resync
Data migration pre-check: new dashboard that provides detailed information when performing data migration activities for maintenance mode tasks. Provides insight into object compliance, cluster capacity, and even predicts the health of the cluster before placing a host into maintenance mode
In the past, when vSAN resynced components, it used a single thread to copy the data. This isn’t really a problem if the components are small, as they’re likely to transfer quickly; however, what if we have many max-size (255GB) components due to large VMDKs? For example, a 5TB VMDK will span over 20x 255GB components. In vSAN 6.7U3, vSAN will now leverage numerous parallel streams per component to make resyncs complete faster. Bandwidth for this process is managed by Adaptive Resync, which was introduced in 6.7 GA.
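The component math works out as follows (255 GB is the maximum vSAN component size; replicas defined by the storage policy would multiply the count further):

```python
import math

COMPONENT_MAX_GB = 255  # maximum size of a single vSAN component

def component_count(vmdk_gb):
    """Number of components a single copy of a VMDK is split into."""
    return math.ceil(vmdk_gb / COMPONENT_MAX_GB)

print(component_count(5 * 1024))  # a 5 TB VMDK → 21 components
```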
Introducing Automatic Rebalance
In previous versions of vSAN, administrators could manually initiate a proactive rebalance after being alerted by a vSAN health check that disk(s) were imbalanced. Now automatic rebalancing can be configured to enable vSAN to handle these operations without user intervention. Information on how to enable automatic rebalancing can be found here. Be sure to adjust the vSAN health check to prevent unnecessary alerts!
New Tool: vsantop
vSphere administrators have been using esxtop for years and now there’s a similar tool, vsantop, to measure CPU usage for storage-related tasks to help with troubleshooting and support cases. This can be especially useful to provide quantifiable measurements to assist administrators understanding the impact of using data services like dedupe & compression or data at rest encryption.
There are still significant enhancements since vSAN 6.6.1 that improved I/O handling, resync and rebalancing performance, and degraded-device handling that weren’t mentioned here. VMware has made significant investments in vSAN since its release in 2014, and it serves as a solid foundation for on-premises and hybrid cloud storage.
This exercise was very productive to help me understand the progress that vSAN has seen over the last 2 years and has better prepared me to discuss upgrade paths and new features with customers.
VMware Cloud Foundation 3.8 was released in July 2019, and the biggest news in this release is the addition of public RESTful APIs for common tasks that are performed for workload domains and other day 2 operations. Managing Cloud Foundation through SDDC Manager is incredibly intuitive, but customers have significant investments in existing IT and business systems such as vRA or ServiceNow.
In large-scale Cloud Foundation deployments like those I work with in Global Accounts, this will be a heavily used feature because customers now have the ability to use existing provisioning workflows in vRA or create new workflows that allow ops teams to orchestrate even higher levels of automation. Some common operational tasks available in version 1 of the API are:
Commission and decommission hosts
Create and delete workload domains
Manage network pools
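As an illustration, here’s a minimal Python sketch of calling one of those endpoints with the standard library. The /v1/domains path and Bearer-token header reflect my reading of the public API; treat this as a starting point rather than an official client.

```python
import json
import urllib.request

def domains_url(sddc_manager_fqdn):
    """Build the workload-domain inventory URL (path is an assumption)."""
    return f"https://{sddc_manager_fqdn}/v1/domains"

def list_workload_domains(sddc_manager_fqdn, token):
    """Fetch the workload domains known to SDDC Manager."""
    req = urllib.request.Request(
        domains_url(sddc_manager_fqdn),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The same pattern (token header plus a resource path) applies to hosts and network pools.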
Cloud Foundation 3.8 also adds the capability for SDDC Manager to patch and upgrade all vRealize Suite components and NSX-T. In previous versions, SDDC Manager could deploy the vRealize Suite, but initial configuration, patching, and upgrades were handled manually through each individual component. The Cloud Foundation engineering team has been rapidly delivering enhancements, and this version comes just 6 weeks after the last major release.
For further details such as release notes and planning and upgrade guides for Cloud Foundation 3.8, visit VMware Docs.
Let’s be honest — if you’re a VMUG member, you get quite a few emails from VMUG and probably delete them without looking, or quickly scan them and then delete them. I tend to do the latter, but the one I received this morning caught my attention, and that attention quickly turned to excitement. I want to do my part to promote what I’m expecting to be a very beneficial event.
Lately I’ve spent a lot of after-hours time working on my own professional development, specifically focusing on leadership, as I feel that my future roles in technology will require that skill. But it’s also an important skill in my role as a father raising 3 children.
The upcoming VMUG virtual event’s keynote speaker will be VMware CEO Pat Gelsinger, who will share his “Five L’s of Leadership.” The event will also include 5 members of the VMUG community who will share their experience, ranging from broad topics such as resume writing, networking, and public speaking to deeper topics that help you identify your brand and use it for your future success. I’m looking forward to hearing each of the following speakers:
A Public Speakers Guide to Public Speaking, Chris McCain, Director of Technical Certifications @ VMware
Soft Skills, Resume Building and Networking are Some of the Toughest Areas to Master, Paul Nadeau, Sr. SD-WAN Systems Engineer @ VMware
Tips and Habits to Advance Your IT Career, Ariel Sanchez, Sr. Technical Account Manager @ VMware
Growing From VI Admin to SRE, Michael Roy, Product Line Marketing Manager @ VMware
Achieving Happiness: Building Your Brand and Your Career, Amanda Blevins, Sr. Director & Chief Technologist @ VMware
Over my 15-year career in IT, all of these skills have been extremely important to plot a course, go on a journey, and execute on those goals. The two latest journeys I’m taking are public speaking and building my brand. I was fortunate to find the VMware community through social media nearly 10 years ago and found industry experts to follow and learn from, but I’m making a concerted effort now to raise my voice and share my ideas.
I hope you’ll join me along the way. To join the VMUG virtual event on September 19 from 9 AM – 3 PM, register here: https://vmugvirtualseptevent.vfairs.com. Let your voice be heard too! Share what you learned at the event on social media and your plan to sharpen your skills.
The SE organization at Pure has been hard at work promoting VMware VVols as it enables customers to take the next step in their virtualization journey: mobility. In an earlier post on the Pure Storage blog, Ray Mar wrote about the simplest VVols implementation in the industry. Getting up and running with VVols is effortless, but there are always those pesky minimum requirements to know about before you can begin implementing VVols.
NTP servers configured on ESXi, vCenter, and FlashArray
FlashArray management ports accessible on port 8084
Host and host groups are present on the FlashArray
If replicating, make sure all of these requirements are met on the remote side too!
As the sharp system admin you are, you can probably take a quick glance at the requirements and know you’re good to go. But it’s a great idea to double-check a setting such as NTP that is usually “set it and forget it.” On a small cluster it’s easy enough to click around on a few hosts and vCenter to make sure it’s set and turned on, but that’s no bueno on a much larger cluster. Sounds like a great task to automate! With that in mind, I created the VVols Readiness Checker to quickly validate these prerequisites with PowerShell using PowerCLI and the Pure Storage SDK.
The script can be run on your local machine or server and will download PowerCLI and the Pure Storage SDK if it’s not present. After entering your vCenter, FlashArray, and associated credentials you’ll quickly get a summary of your environment’s readiness to implement VVols.
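The NTP portion of a check like that boils down to comparing each host’s configured servers against an expected set, something like this sketch (the host data here is made up; the real script pulls it via PowerCLI and the Pure Storage SDK):

```python
def ntp_mismatches(expected_servers, host_ntp_config):
    """Return the hosts whose configured NTP servers differ from the
    expected set (order-insensitive comparison)."""
    expected = set(expected_servers)
    return {host: servers
            for host, servers in host_ntp_config.items()
            if set(servers) != expected}

config = {
    "esx01": ["pool.ntp.org"],
    "esx02": ["pool.ntp.org"],
    "esx03": [],  # NTP was never configured on this host
}
print(ntp_mismatches(["pool.ntp.org"], config))  # → {'esx03': []}
```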
Once you’re finished addressing any warnings, proceed with the Quick Start Guide to update the vSphere Web Client plugin, register the VASA provider, and create the VVols Datastore!
I highly recommend importing the FlashArray protection groups as VM storage policies as this gives you fine grain control and validation via compliance checks that ensures the VMs are always protected as required by the business.
In a previous post, I wrote about taking FlashArray snapshots with Veeam using a PowerShell script. At the time, a limitation prevented Veeam from seeing protection group snapshots. The Pure Storage Plugin for Veeam version 1.1.40 was released on August 24, 2018, and support for volume snapshots created as part of a Pure Storage Protection Group is now available. Check out the KB article to download the update. Installation is a simple wizard that takes a minute or so.
No settings need to be changed on FlashArray or Veeam to see Protection Group snapshots. When selecting a volume on FlashArray, you can see snapshots from a protection group (highlighted) and those taken separately by Veeam as part of another protection policy.
With multiple options for snapshot policies, what’s my recommendation for a best practice? Continue to leverage the volume or protection group snapshot policies on FlashArray. Veeam has visibility into volumes on the FlashArray but can’t manage Protection Groups. Having the ability to group volumes on FlashArray to snapshot and replicate together while maintaining one retention schedule is easier to administer.
What I would like to see in the next iteration of the plugin is the ability for Veeam to truly integrate with protection groups (consistency groups on other arrays). It looks like Veeam’s Universal Storage API for Integrated Systems will need additional functionality though. The API’s documentation doesn’t describe that functionality.
In April 2018, Veeam released the Universal Storage API which enabled storage vendors like Pure Storage to create integrations for Veeam with their storage system. At a high level, this functionality allows Veeam to leverage storage system snapshots when performing backups as well as take snapshots of volumes for instant restore of VMs or granular file restoration.
In the initial release of the Pure Storage FlashArray plugin, the ability for Veeam to see and utilize existing snapshots on the FlashArray is unavailable. Additionally, it’s not currently possible for Veeam to take snapshots of all the volumes associated with a Protection Group. Joint customers have expressed the desire for this functionality but development takes time.
In the meantime, I created a script that gives Veeam the ability to create snapshots of all the volumes in a FlashArray Protection Group. This script is designed to be run automatically using Windows Task Scheduler; however, you can run it from a PowerShell command prompt for a quick, one-time use.
The most significant use case I created this for was recovering file shares faster after they’ve been encrypted by a malware attack. It’s totally possible to immediately remediate the most extreme case, where the whole file share is encrypted, by overwriting the volume from a storage snapshot, but what if it’s just a user’s home directory or a small subset of the file share?
In the following example, I have snapshots on the FlashArray that were taken by Veeam:
From Veeam’s view:
When selecting a snapshot, you can see each VM protected by that snapshot:
This integration is extremely powerful as it provides instant VM, guest file, and application item recovery from FlashArray snapshots instead of backup.
In a sample test, I recovered a single Windows Server 2016 VM in just over a minute:
Veeam performs this operation similarly to a restore from backup, except that it creates a volume on the FlashArray from the snapshot, presents it to the applicable host, rescans the host’s HBA, mounts the volume, and adds the VM to vCenter.
Currently, the first version of this script only supports volume-based Protection Groups. If your Protection Group’s members are hosts or host groups, the script will not work. I anticipate fixing this in an upcoming release, as well as adding the ability to specify a volume instead of a Protection Group. Additionally, this script doesn’t limit the number of snapshots taken, so please monitor your usage. A future version will address this as well.
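Until the script enforces retention itself, the idea behind a simple guard might look like the sketch below (purely illustrative; the names and data structure are mine): keep the newest N snapshots and flag the rest for deletion.

```python
def snapshots_to_prune(snapshots, keep=10):
    """Given a list of (name, created_timestamp) tuples, return the names
    of snapshots beyond the newest `keep` entries."""
    newest_first = sorted(snapshots, key=lambda s: s[1], reverse=True)
    return [name for name, _ in newest_first[keep:]]

# 12 snapshots with increasing timestamps; the two oldest exceed the limit.
snaps = [(f"veeam-snap-{i}", i) for i in range(12)]
print(snapshots_to_prune(snaps, keep=10))  # → ['veeam-snap-1', 'veeam-snap-0']
```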
If you have questions about installing and configuring the Pure Storage FlashArray plugin for Veeam, check out Stephen Owens’ blog posts: