Nutanix

Nutanix Announces Support for HPE Servers and a New Consumption Model

Today Nutanix announced support for HPE ProLiant server hardware and a new consumption model called “Nutanix Go”.  Both announcements support Nutanix’s position that the “enterprise cloud” should be flexible, easy to consume, and deliver the power of the public cloud…what I like to call the “have it your way” model.


HPE ProLiant Support

The announcement of support for HPE server hardware probably doesn’t come as a surprise to many because it’s very similar in nature to the announcement of support for Cisco UCS hardware just a few months ago.  While Nutanix had OEM agreements in place with both Dell and Lenovo, customers wanted the flexibility to use Cisco UCS, their existing server hardware standard.  After a validation process, Nutanix offered a “meet in the channel” procurement model where a customer buys the Nutanix software from an authorized Nutanix reseller and then buys the validated server hardware from an authorized Cisco reseller.  The announcement for HPE follows this same model using select HPE ProLiant server hardware (currently the DL360-G9 and DL380-G9).

While it’s safe to say that there will probably be some gnashing of teeth regarding this announcement, just like there was with the Cisco UCS one (especially in light of HPE’s recent acquisition of SimpliVity), I see it as a win for everyone involved – the customer gets another choice of server hardware and the software that runs on it, channel partners have more “tools in their tool chests” to offer best-in-class solutions to their customers, and vendors get to move more boxes.

As mentioned earlier, Nutanix plans to support two HPE ProLiant server models initially – DL360-G9 and DL380-G9.  The DL360 is a 1U server with 8 small form factor drive slots and 24 DIMM slots.  The targeted workload for this server (VDI, middleware, web services) would be similar to the Nutanix branded NX3175…things that may be more CPU intensive than storage IO/capacity intensive.  The DL380 is a 2U server with 12 large form factor drive slots and 24 DIMM slots.  The targeted workload for this server would be similar to the Nutanix branded NX6155/8035…things that may generate larger amounts of IO or require more storage capacity.

Nutanix will offer both Acropolis Pro and Ultimate editions in conjunction with the HPE ProLiant server hardware.  Starter and Xpress editions will not be available at this time.  However, one interesting tidbit is that software entitlements are transferable across platforms, meaning that a customer could leverage Nutanix software on an existing HPE server hardware investment (assuming it met the validated criteria) and at a later date “slide” that software over to a different HPE server model, or perhaps a Cisco UCS server, at the time of a server hardware refresh, if they so chose.

Support is bundled with the software license as a subscription in 1, 3, or 5 year terms.  Just like the model with Nutanix running on Cisco UCS hardware, the server hardware vendor still fields hardware concerns, Nutanix supports the software side, and when in doubt, call Nutanix – if the issue is on the hardware side, it will be escalated through TSANet for handoff to HPE support.

As far as availability timelines are concerned, it should be possible to get quotes for this solution at announcement (today – May 3 2017), with the ability to place orders expected for Q3 2017, and general availability targeted for Q4 2017.

Nutanix Go

Nutanix labels Nutanix Go as “On-premises Enterprise Cloud infrastructure with pay-as-you-Go billing”.  In a nutshell, a customer now has the ability to “rent” a certain number of servers for a defined term, ranging from 6 months to 5 years depending on configuration and model, with pricing incentives for longer term agreements, and billing / payment occurring monthly.

While an outright purchase is probably still the most advantageous in terms of price, there are plenty of scenarios beyond price where the flexibility of quickly scaling up or down in a short time period, without keeping hardware with a 3 or 5 year lifecycle on the books, makes sense: having costs fall under OPEX instead of CAPEX, “de-risking” projects with uncertain futures, augmenting existing owned Nutanix clusters, etc.  Customers will have the ability to mix “rented” nodes with “owned” nodes within the same cluster, enabling a sort of “on premises cloud bursting” capability.

The pricing for Nutanix Go is structured in such a way that the TCO is supposed to be significantly less than running a similar workload in AWS, while addressing some of the use cases that traditionally push workloads toward the public cloud.

Nutanix Go includes hardware, software, entitlements, and support under one SKU.  It’s priced per block, per term length, and as mentioned previously, billing and payment occur monthly.  Currently, there is a minimum of 12 nodes required for an agreement, which in my opinion is a bit high.  I’d like to see something more along the lines of the required minimum for a Nutanix cluster…something like 3 or 4 nodes that might be more attractive to small and medium sized businesses.  On the flip side, since it is Nutanix keeping the hardware on their books and allowing the customer to rent it, I can see why they’d want a certain minimum to make it worth their while.  Perhaps this will change in the future.

As far as availability is concerned, Nutanix Go is initially only available to US customers, with rollout country by country for the rest of the world in the second half of 2017.


In summary, “more choices” is always a good thing, and these announcements are further proof that the “power” is in software.  I’m sure many customers, both potential and existing, will find these new consumption models to be a welcome addition.

Installing Nutanix NFS VAAI .vib on ESXi Lab Hosts

This post covers the installation of a Nutanix NFS VAAI .vib on some “non-Nutanix” lab hosts.

Why would one do this?  Several months ago I stood up a three node lab environment accessing “shared” storage using a Nutanix filesystem whitelist (which allows defined external clients to access the Nutanix filesystem via NFS).  While the Nutanix VAAI plugin for NFS would normally be installed on the host as part of a Nutanix deployment, it obviously was not there on my vanilla ESXi 6.0 Dell R720 servers accessing the whitelist…which made things like deploying VM’s from template, and other tasks normally offloaded to the storage, unnecessarily slow.
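For context, mounting a whitelisted Nutanix container as an NFS datastore on a vanilla ESXi host looks something like the sketch below – the cluster IP, container name, and datastore name are placeholders, not values from my lab:

    # Add the ESXi host's IP to the Nutanix filesystem whitelist in Prism first, then from the ESXi host:
    esxcli storage nfs add --host=%cluster_or_cvm_ip% --share=/%container_name% --volume-name=%datastore_name%
    # Confirm the datastore mounted:
    esxcli storage nfs list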

Since Nutanix just released “Acropolis Block Services” (ABS) as GA in AOS 4.7 (read more about it at the Nutanix blog), there’s probably less of a reason to use filesystem whitelists for this purpose now, but alas, maybe someone will find it useful.  (*edit* – it’s worth noting that ABS doesn’t currently support ESXi.  I haven’t tried to see if it actually works yet, but needless to say, don’t do it in a production environment and expect Nutanix to help you.  *edit 1/27/17* – as of AOS 5.0, released earlier this month, ESXi is supported with ABS.)  At the time of this blog post, Windows 2008 R2/2012 R2, Microsoft SQL and Exchange, Red Hat Enterprise Linux 6+, and Oracle RAC are supported.  NFS whitelists aren’t supported by Nutanix for the purpose of running VM’s, either.

  1. The first step is to SCP the Nutanix NFS VAAI .vib from one of your existing CVM’s.  Point your favorite SCP client to the CVM’s IP, enter the appropriate credentials, and browse to the following directory: /home/nutanix/data/installer/%version_of_software%/pkg.  Copy the “nfs-vaai-plugin.vib” file to your workstation so that it can be uploaded to storage connected to your ESXi hosts using the vSphere Client.
  2. Once the .vib is uploaded to storage accessible by all ESXi hosts, SSH to the first host to begin installation.  You may need to enable SSH access on the host as it’s disabled by default.  This can be done by starting the SSH service in %host% > Configuration > Security Profile > Services “Properties” in the vSphere Client.
  3. Once logged in to your ESXi host, we can verify that the NFS VAAI .vib is missing by issuing the “esxcli software vib list” command.  If the .vib were present, we’d see it at the top of the list.
  4. Now we need to get the exact path to the location where you placed the .vib on your storage.  This can be done by issuing the “esxcli storage filesystem list” command.  You will be presented with a list of all storage accessible to the host, along with the mount point, the volume name, and the UUID.  Highlight the “mount point” of the appropriate storage volume so that it can be pasted into the next command.  Alternatively, you could use the “volume name” in place of the UUID in the mount point path, but this was easier for me.
  5. Next, we will install the .vib file using the “esxcli software vib install -v “/vmfs/volumes/%UUID_or_volume_name%/%subdir_name%/nfs-vaai-plugin.vib”” command.  I created a subdirectory called “VIBs” and placed the nfs-vaai-plugin.vib file in it.  Be careful, as the path to the file is case sensitive.  If the install was successful, you should see a message indicating it completed successfully and that a reboot is required for it to take effect.  Assuming your host is in maintenance mode and has no running VM’s on it, go ahead and reboot now.
  6. Once the host has rebooted and is back online, start a new SSH session and issue the “esxcli software vib list” command again, and you should see the new .vib at the top of the list.  Voila!  You can now deploy VM’s from template in seconds.  The full command sequence is consolidated below.
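For convenience, here are the commands from the steps above gathered into one sketch.  The %...% values and the “VIBs” subdirectory are just the examples used in this post, so substitute your own:

    # 1. Copy the .vib from a CVM to your workstation (run from the workstation):
    scp nutanix@%cvm_ip%:/home/nutanix/data/installer/%version_of_software%/pkg/nfs-vaai-plugin.vib .
    # 2. Upload the .vib to a datastore the host can reach, SSH to the host, and confirm it isn't installed yet:
    esxcli software vib list | grep -i nfs-vaai
    # 3. Find the mount point of the datastore holding the .vib:
    esxcli storage filesystem list
    # 4. Install the plugin (the path is case sensitive), then reboot while the host is in maintenance mode:
    esxcli software vib install -v "/vmfs/volumes/%UUID_or_volume_name%/VIBs/nfs-vaai-plugin.vib"
    reboot
    # 5. After the reboot, verify the plugin shows up:
    esxcli software vib list | grep -i nfs-vaai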

Nutanix – Taking It to the .NEXT Level

I was happy to participate in the opening “Nutanix Champion” event to “ring in” the day one keynote.  When I got backstage during the rehearsal, it was evident a lot of people worked really hard to do something fun for the opening acts (Angelo Luciani, Julie O’Brien, and surely many more…so, kudos to you!).

[Event photos – picture credit to https://twitter.com/@ClaireBelly]

And now, my take on some of the announcements today….

Acropolis File Services:

This announcement fits into the recurring theme of “power through software” – leveraging commodity hardware to deliver additional value based on software upgrades and enhancements.  Acropolis File Services allows you to leverage your existing investment to expose the Nutanix file system for scale out file level storage.

Initially SMB 2.1 will be supported, with other protocols (NFS, newer versions of SMB, etc.) on the roadmap.  AHV and ESXi hypervisors will be supported at GA.  Other features include user driven restore leveraging file level and server level snapshots (think file level recovery and disaster recovery, respectively), with asynchronous replication on the roadmap for Q4.

There are some interesting use cases I can see for this, such as user profile storage and replication for desktop and application virtualization environments and low cost scale out file services using the included Acropolis Hypervisor + Nutanix storage nodes.

Acropolis Block Services:

Similar to Acropolis File Services, Acropolis Block Services exposes the Nutanix file system as an iSCSI target for bare metal servers and applications.  Though the file system is exposed to a bare metal workload, all the great Nutanix features are preserved for them (snapshot, clone, data efficiency services, etc).  Again, this is a demonstration of “power through software” and the evolution of the platform – these features and support were first available for VM’s, then files, and now bare metal.

The Nutanix file system is presented via the iSCSI protocol a little differently than iSCSI is normally implemented.  Instead of relying on the resiliency built into the protocol on the client side (ALUA, multipathing, etc.), multipathing is handled by the back end: paths are managed dynamically, and in the event of a node failure, failover is handled on the back end as well.  While “best practices” for iSCSI are usually well documented by the vendor or platform, not having to rely on as much client side configuration and optimization removes the human element and thus the risk of “PEBKAC” issues.  I’ve seen the “human element” manifest itself in more than one iSCSI implementation.

I see this feature being a big deal for shops that have both hyperconverged and traditional 3 tier deployed in the same datacenter.  Due to some extenuating circumstance (like a crappy software vendor that STILL doesn’t support virtualization in the year 2016) or an investment in physical servers the business wishes to extract value from, physical servers and/or traditional SAN storage must exist in parallel.  Being able to present traditional “block” storage to a bare metal server or app may remove that last roadblock on the journey.

“All Flash on All Platforms”

As the cost of flash continues to plummet, it continues to become more pervasive in the datacenter.  The capacity of SSD’s has surpassed that of traditional spindles (though at a higher cost right now), and I foresee a day when “flash first” is the commonplace policy.  As such, starting with the Broadwell-based Nutanix G5 platform, an all flash configuration will be available on all platforms.

Microsoft Cloud Platform System Loaded from Factory

This was announced last week, but in a nutshell Nutanix and Microsoft collaborated to offer Microsoft Cloud Platform System (CPS) Standard installed from the factory.  This offers a more turnkey private cloud that accelerates time to value and allows for “day one” operation.  All “patching operations” are integrated into the Nutanix “One Click Upgrade” platform, further streamlining the day to day care and feeding that historically has burned up so much administrator time.

In addition, Nutanix will support the entire stack from the hardware up to the software, just like they have been doing for vSphere, Hyper-V, and AHV already.  There’s a lot to be said for “one throat to choke” when it comes to technical support.  I’m sure we’ve all been an unwilling participant in the “vendor circular firing squad” at some point.  My experience with Nutanix support has always been excellent, both from the technical capability of the support engineers as well as the customer service they deliver.

Prism Enhancements

Some big enhancements are coming to Prism.  Building upon “Capacity Planning” in Acropolis Base Software 4.6, “What if?” modeling will be added.   Instead of just projections based on existing workloads, you’ll be able to model scenarios such as onboarding of a new client or introduction of a new application or service at a granular level.

One of the benefits of the “building block / right sized” hyperconverged model is being able to accurately size for your existing workload, allowing for sufficient overhead, without overbuying based on best effort projections of where your environment might be 3 years out.  Calculation based on existing utilization and growth was the first step, and modeling “what if _____” is the next evolution of accurately projecting the next “building block” required to meet compute, IOPS, and capacity needs for “just in time” forecasting.  Maybe a “buy node now” button is in order through Prism 😛

Network Visualization

Enhancements to Prism will allow for quick configuration and visualization of the network config in AHV – both the config in the hypervisor and the underlying physical network infrastructure.  This makes finding the root cause of an issue much quicker and lowers the overall time to resolution by making you more aware of the underlying network infrastructure.  Josh Odgers has a great blog post covering this with some nice screenshots of the Prism UI, so I won’t bother reinventing the wheel: http://www.joshodgers.com/2016/06/15/whats-next-2016-prism-integrated-network-configuration-for-ahv/

Community Edition:

Another notable milestone in the Nutanix ecosystem is 12,000+ downloads of Community Edition to date (and nearly 200 activations a week).  Hosted Community Edition trials will be available free in two hour blocks through the Nutanix portal as a “test drive”.  Another option for getting your hands on Nutanix CE is to install it on your own “lab gear” – Angelo Luciani has a great blog post on using an Intel NUC (are multiple NUC’s NUCii?): https://next.nutanix.com/t5/Nutanix-Connect-Blog/The-Prestige-Continues-Community-Edition/ba-p/10399

Or….maybe on a drone?


Other Interesting Notes:

During the general session, some statistics were presented regarding the adoption of Acropolis Base Software 4.6.  500 clusters were updated to 4.6 within 7 days of release, and overall adoption reached 43% within 100 days.  It was also noted that there was a significant performance increase available in 4.6, and as such 43% of customers received up to a 4x performance increase at no cost – I’ll say it again, power through software.


Another statistic I found interesting was that 15% of the customer base is now running AHV.  I suspect that percentage will increase significantly over the next 12 months with all the new features now native to AHV combined with the ease of “One Click” online hypervisor conversion.

A Picture to Sum It Up:

As someone who’s dealt with infrastructure that was everything but invisible, I think this says it all…

[Keynote photo]

“Power Through Software”:

[Keynote slide photo]

Other Blogs to Check Out:

I know I didn’t capture all the announcements here, but some other good blog posts I’ve seen today are worth a read…

Josh Odgers:

http://www.joshodgers.com/ (there’s a whole series here titled “What’s .NEXT 2016”)

Marius Sandbu:

https://msandbu.wordpress.com/2016/06/21/whats-coming-from-nutanix-announcements-from-next/

Eduardo Molina:

http://molikop.com/2016/06/nutanix-nextconf-2016-keynote-day-1/

Nutanix Acropolis Base Software 4.5 (NOS release)…be still my heart!

Today Nutanix announced the release of “Acropolis Base Software” 4.5…the software formerly known as NOS.  I happened to be in the Nutanix Portal this morning and didn’t even notice the release in the “Downloads” section due to the name change…thankfully @tbuckholz was nice enough to alert me to this wonderful news.

😍

I read through the Release Notes and was pretty excited with what I found – a bunch of new features that solve some challenges and enhance the environment I’m responsible for caring and feeding on a daily basis.  Some of these features I knew were coming, others were a surprise.  There’s a ton of good stuff in this release so I encourage you to check them out for yourself.

A short list of some of the things particularly interesting to me in no particular order…

  1. Cloud Connect for Azure – prior to this release, Nutanix Cloud Connect supported AWS…it’s good to have options.  I was actually having a conversation with a coworker yesterday about the possibility of sending certain data at our DR site up to cloud storage for longer / more resilient retention.

    The cloud connect feature for Azure enables you to back up and restore copies of virtual machines and files to and from an on-premise cluster and a Nutanix Controller VM located on the Microsoft Azure cloud. Once configured through the Prism web console, the remote site cluster is managed and monitored through the Data Protection dashboard like any other remote site you have created and configured. This feature is currently supported for ESXi hypervisor environments only. [FEAT-684]

  2. Erasure Coding – lots of good info out there on this feature, which was announced this summer, so I won’t go into too much detail.  Long story short, it can allow you to get more effective capacity out of your Nutanix cluster.  A lower $:GB ratio is always welcome. @andreleibovici has a good blog post describing this feature at his myvirtualcloud.net site.

    Complementary to deduplication and compression, erasure coding increases the effective or usable cluster storage capacity. [FEAT-1096]

  3. MPIO Access to iSCSI Disks – another thing I was fighting Microsoft support and a couple other misinformed people about just last week.  One word:  Exchange.  Hopefully this will finally put to rest any pushback by Microsoft or others about “NFS” being “unsupported”.  I spent a bunch of time last week researching the whole “NFS thing” and it was a very interesting discussion.  @josh_odgers spent a lot of time “fighting the FUD”, if you will, and detailing why Microsoft should support Exchange with “NFS” backed storage.  A few of my favorite links: THIS, THIS (my favorite), and THIS WHOLE SERIES.

    Acropolis base software 4.5 feature to help enforce access control to volume groups and expose volume group disks as dual namespace disks.

  4. File Level Restore (Tech Preview) – this was one of the “surprises” and also one of my favorites.  We are leveraging Nutanix Protection Domains for local and remote snapshots for VM level recovery and Veeam for longer term retention / file based recovery.  However, the storage appliance that houses our backup data can be rather slow for large restores so the ability to recover SOME or ALL of a VM using the Nutanix snapshots I already have in place is a big deal for me.

    The file level restore feature allows a virtual machine user to restore a file within a virtual machine from the Nutanix protected snapshot with minimal Nutanix administrator intervention. [FEAT-680]

  5. Support for Minor Release Upgrades for ESXi hosts – this is nice for those random times that you need to do a minor revision upgrade to ESXi because “when ____ hardware is combined with ______ version of software ______, ______ happens”.  We’ve all been there.  Nutanix still qualifies certain releases for one click upgrade, but there is now support for patch upgrades using the Controller VM “cluster” command (see the example after this list).

    Acropolis base software 4.5 enables you to patch upgrade ESXi hosts with minor release versions of ESXi host software through the Controller VM cluster command. Nutanix qualifies specific VMware updates and provides a related JSON metadata upgrade file for one-click upgrade, but now customers can patch hosts by using the offline bundle and md5sum checksum available from VMware, and using the Controller VM cluster command. [ENG-31506]
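Purely for illustration, a patch upgrade kicked off from a Controller VM would look roughly like the sketch below.  This is pieced together from the release note above, so treat the values as placeholders and verify the exact syntax against the Nutanix documentation for your AOS version before running anything:

    # Run from a Controller VM after downloading the ESXi offline bundle and its md5sum from VMware.
    # Placeholder values – confirm the exact syntax in the Nutanix docs for your AOS release.
    cluster --md5sum=%md5sum_from_vmware% --bundle=/home/nutanix/%esxi_offline_bundle%.zip host_upgrade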

It’s always nice to get “new stuff” with a super simple software upgrade.  Thanks for taking the time to read and I encourage you to check out some of the other features that might be of interest to your environment.

Veeam + Nutanix: “Active snapshots limit reached for datastore”

Last night I ran into an interesting “quirk” using Veeam v8 to back up my virtual machines that live on a Nutanix cluster.  We’d just moved the majority of our production workload over to the new Nutanix hardware this past weekend and last night marked the first round of backups using Veeam on it.

We ended up deploying a new Veeam backup server and proxy set on the Nutanix cluster in parallel to our existing environment.  When there were multiple jobs running concurrently overnight, many of them were in a “0% completion” state, and the individual VM’s that make up the jobs had a “Resource not ready: Active snapshots limit reached for datastore” message on them.


I turned to the all-knowing Google and happened across a Veeam forum post that sounded very similar to the issue I was experiencing.  I decided to open up a ticket with Veeam support since the forum post in question referenced Veeam v7, and the support engineer confirmed that there was indeed a self-imposed limit of 4 active snapshots per datastore – a “protection method” of sorts to avoid filling up a datastore.  On our previous platform, the VM’s were spread across 10+ volumes and this issue was never experienced.  However, our Nutanix cluster is configured with a single storage pool and a single container with all VM’s living on it, so we hit that limit quickly with concurrent backup jobs.

The default value of 4 active snapshots per datastore can be modified by creating a registry DWORD value called MaxSnapshotsPerDatastore in ‘HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\’ and setting the appropriate hex or decimal value.  I started off with ’20’ but will move up or down as necessary.  We have plenty of capacity at this time and I’m not worried at all about filling up the storage container.  However, caveat emptor here because it is still a possibility.
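For reference, creating that value from an elevated command prompt on the Veeam backup server would look something like this – the registry path is the one mentioned above, and 20 is just the decimal value I chose:

    :: Create (or overwrite) the MaxSnapshotsPerDatastore DWORD with a decimal value of 20.
    reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v MaxSnapshotsPerDatastore /t REG_DWORD /d 20 /f
    :: The Veeam backup services may need a restart for the new limit to take effect.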

This “issue” wasn’t anything specific to Nutanix at all, but is increasingly likely with any platform that uses a scale-out file system that can store hundreds or thousands of virtual machines on a single container.

A tale of two firmware upgrades…

On this fine Friday afternoon, I thought I’d have a little fun comparing and contrasting the firmware upgrade process on two different storage solutions. We recently bought some Nutanix 8035 nodes to replace the existing storage platform. While I wouldn’t necessarily call Nutanix “just” a storage platform, the topic of this discussion will be the storage side of the house. For the sake of anonymity, we’ll call our existing storage platform the “CME XNV 0035”.

One of the biggest factors in choosing the Nutanix platform for our new compute and storage solution was “ease of use”. And there’s a reason for that – the amount of administrative effort required to care and feed the “CME XNV 0035” was far too high, in my opinion. Even a “simple” firmware upgrade took days or weeks of pre-planning/scheduling and 8 hours to complete in our maintenance window. Now that I’ve been through the firmware upgrade on our Nutanix platform, I thought a compare and contrast was in order.

First, let me take you through the firmware upgrade process on the “CME XNV 0035”:

  1. Reach out to CME Support, open a ticket, and request a firmware update. They might have reached out to you proactively if there was a major bug/stability issue found. (Their product support structure is methodical and thorough, I will give them that.) – 30 minutes
  2. An upgrade engineer was scheduled to do the “pre upgrade health check” at a later date.
  3. The “pre upgrade health check” occurs, and logs and support data are gathered for later analysis. Eventually it occurred frequently enough that I’d just go ahead and gather and upload this data on my own and attach it to the ticket. – 1 hour
  4. A few hours to a few days later, we’d get the green light from the support analysis that we were “go for upgrade”. In the meantime, the actual upgrade was scheduled with an upgrade engineer for a later date during our maintenance window…typically a week or so after the “pre upgrade health check” happened.
  5. Day of the upgrade – hop on a Webex with the upgrade engineer and begin the upgrade process.  Logs were gathered again and reviewed.  This was a “unified” XNV 0035, though we weren’t using the file side…I’m…not sure why file was even bought at all, but I digress…which meant we still had to upgrade the data movers and THEN move on to the block side.  One storage processor was upgraded and rebooted (took about an hour), and then the other storage processor was upgraded and rebooted (took another hour).  Support logs were gathered again, reviewed by the upgrade engineer, and as long as there were no outstanding issues, the “green light” was given. – 6-8 hours

Whew……7.5 – 9 hours of my life down the drain…


Now, let’s review the firmware upgrade process on the Nutanix cluster:

  1. Log into Prism, click “Upgrade Software” – 10 seconds
  2. Click “Download” if it hasn’t happened automatically – 1 minute (longer if you’re still on dial up)
  3. Click “Upgrade”, then click the “Yes, I really really do want to upgrade” button (I paraphrase) – 5 seconds
  4. Play “2048”, drink a beer or coffee, etc. – 30 minutes
  5. Run a “Nutanix Cluster Check (NCC)” to confirm everything is healthy (command below)
  6. Done
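For step 5, NCC is run from any Controller VM; the standard invocation for the full set of health checks is:

    # Run the complete set of NCC health checks from a CVM:
    ncc health_checks run_all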

There you have it, 31 minutes and 15 seconds later, you’re running on the latest firmware.  Nutanix touts “One click upgrades”, but I counted four, technically.  I can live with that.

Yes, this post is rather tongue in cheek, but it is reflective of the actual upgrade process for each solution.  Aside from the initial “four clicks”, Nutanix handles everything else for you and the firmware upgrade occurs completely non-disruptively.
