Key features for VDI storage

One of the biggest trends in IT infrastructure today is dedicated “storage systems” for VDI. I put “storage systems” in scare quotes because many of the vendors making these systems would object to being called a storage system. Regardless, the primary use case driving the sales of many of these systems is as a storage location for VDI. The reason for this is that traditional arrays have proven woefully inadequate to handle the amount and type of IO VDI can generate.

The architecture backing these systems varies greatly, but when looking for a dedicated storage solution for your VDI environment, here are the top features I look for:

Speed. This one should be obvious, but any storage system dedicated to VDI needs to be fast. Anyone who’s ever designed storage for a VDI environment can tell you that VDI workloads can generate tremendous amounts of mostly write IO with very ‘bursty’ workload patterns. Traditional storage arrays with active-passive controllers, ALUA architecture and tiered HDD storage weren’t created with this workload in mind. Trying to design a VDI environment on this architecture can become cost- and performance-prohibitive in some cases. Indeed, many businesses are spending 40%-60% of their VDI budget on storage alone.
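
To see why spinning disk struggles with this, a quick back-of-envelope calculation helps. The sketch below is purely illustrative: the user count, per-desktop IOPS, write ratio and RAID write penalties are assumptions you would replace with figures from your own assessment, and it ignores boot/login storms, which push the peaks far higher.

```python
import math

# Illustrative assumptions -- substitute figures from your own assessment.
users = 1000                  # concurrent desktops
iops_per_desktop = 10         # steady state; bursts run far higher
write_ratio = 0.8             # VDI steady-state IO tends to be write-heavy
raid_write_penalty = {"RAID10": 2, "RAID5": 4, "RAID6": 6}
hdd_iops = 180                # rough ceiling for a single 15K spindle

front_end = users * iops_per_desktop
writes, reads = front_end * write_ratio, front_end * (1 - write_ratio)

for raid, penalty in raid_write_penalty.items():
    back_end = reads + writes * penalty            # the write penalty amplifies back-end IO
    spindles = math.ceil(back_end / hdd_iops)
    print(f"{raid}: ~{back_end:,.0f} back-end IOPS -> ~{spindles} x 15K HDDs")
```

Needing a few hundred spindles just to absorb steady-state writes is exactly the kind of math that pushes storage to 40%-60% of a VDI budget.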

Today, the speed problem is being solved with a variety of methods. RAM is being used as a read/write location (e.g. Atlantis ILIO) for microsecond access times. “All-flash” arrays are being purpose-built to hold 100% SSD drives (Invicta, XtremIO, Pure, etc.). Adding to this, a whole host of “converged” compute/storage appliances are popping up that utilize local disk/flash for increased speed and simplicity (Nutanix, Simplivity, VSAN, ScaleIO, etc.). To reiterate, each of these systems can do more than just VDI; VDI just happens to be a good use case for them in many cases. If you’re looking for a place to put your VDI environment, the ability to rapidly process lots of random write IO should be of paramount concern, and you should know that there are currently many ways to address it.

Data reduction. This one will be more controversial, particularly for non-persistent fanboys. Nevertheless, persistent VDI is a fact of life for many VDI environments. As such, large amounts of duplicate data will be written to storage, and data reduction mechanisms become very important as a result. De-duplication and compression are the most effective methods and are preferably done in-line. Again, various solutions from Atlantis to Invicta to XtremIO to Pure all offer these features, but with very different architectures. If you have no persistent desktops, this feature becomes less important. However, data reduction can still be quite valuable in many non-persistent VDI architectures as well; XenDesktop MCS, for example, could greatly benefit from storage with de-duplication. I also find that many of my customers who start out thinking they’ll have only non-persistent desktops quickly discover users who need persistence during the course of their migration. Don’t be surprised by the need for this feature at a later point; plan for it at the beginning and make sure your storage platform has the appropriate data reduction features.
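
To put rough numbers on it, here is the capacity math for a hypothetical persistent deployment. The desktop count, image size and reduction ratios below are made-up round figures for illustration, not measurements or vendor claims.

```python
# Illustrative assumptions -- not vendor-quoted reduction ratios.
desktops = 1000
gb_per_desktop = 40        # provisioned size of each persistent desktop
dedupe_ratio = 5.0         # clones share most OS/application blocks
compression_ratio = 1.5

logical_gb = desktops * gb_per_desktop
physical_gb = logical_gb / (dedupe_ratio * compression_ratio)

print(f"Logical capacity:  {logical_gb / 1024:.1f} TB")
print(f"Physical capacity: {physical_gb / 1024:.1f} TB with in-line dedupe and compression")
```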

Scale. I don’t know how many VDI projects I’ve heard of where storage was purchased to support X number of users, only for the VDI project to take off faster and at larger scale than expected. The project then stalls because the storage system can’t handle more than the X users it was designed for, and the business doesn’t have enough budget to purchase another storage system. For this reason, any storage dedicated to VDI should be able to scale both “up” and “out”: “up” to support more capacity and “out” to support more IO. The scaling of the system should be such that it is one unified system…not multiple systems with a unified control plane. The converged solutions (VSAN, Nutanix, et al.) are great at this. All-flash arrays typically have this as well (e.g. Invicta, XtremIO).

Ease of Management. This sounds basic and very obvious, but make sure you evaluate “ease of management” when purchasing any VDI-specific storage solution. The reason for this is simple: any VDI-specific storage system is bound to have a much different architecture than any arrays you currently have in your environment. The harder it is to manage, the steeper the learning curve will be for existing admins. My criterion for determining whether a VDI storage system is “easy” to manage is this – “can my VDI admins manage this?” (and that’s no slight to VDI admins!). The management of the system shouldn’t require a lot of legacy SAN knowledge or skillsets. This makes the environment more agile by not having to rely on multiple teams for basic functions, and it doesn’t burden SAN teams with a disparate island of storage they must learn and manage. Again, many of the converged solutions are great at this, as are some of the newer AFAs.

There are many other important factors in deciding what to look for in a storage solution for your VDI environment. Whatever the architecture, if it doesn’t include the above four features, I’d look elsewhere.

Note: Vijay Swami wrote an excellent article entitled “A buyer’s guide for the All Flash Array Market”. I found it interesting after I wrote this to read his thoughts and note how many of the things he looks for in an AFA are similar to my top features for VDI storage. Regardless, it’s good reading and if you haven’t already, check it out.


End of year Randomness

I’m not big on “end of year” posts or predictions, but lacking any other ideas, I thought I’d write down some random thoughts about technology going through my head as this year draws to an end.

All Flash Array Dominance
I’m not buying the hype surrounding all-flash arrays (AFAs). Certainly there are legitimate use cases, and they’ll be deployed more in the near future than they have been in the past, but the coming dominance of all-flash arrays has, I think, been greatly exaggerated. It’s clear that the main problem these arrays are trying to solve is the extreme performance demands of some applications, and I just think there are much better ways to solve this problem (e.g. local disk, convergence, local flash, RAM caching, etc.) in most scenarios than purchasing disparate islands of SAN. And many of the things that make an AFA so “cool” (e.g. in-line dedupe, compression, no RAID, etc.) would be even cooler if the technology could be incorporated into a hybrid array. The AFA craze feels very much like the VDI craze to me: lots of hype about how “cool” the technology is, but in reality a niche use case. Ironically, VDI is the main AFA use case.

The Emergence of Convergence
This year has seen a real spike in interest in and deployment of converged storage/compute software and hardware, and I’m extremely excited for this technology going into 2014. With VMware VSAN going GA in 2014, I expect that interest and deployment to rise to even greater heights. VSAN has some distinct strategic advantages over other converged models that should really make the competition for this space interesting. Name recognition alone is getting them a ton of interest. Being integrated with ESXi gives them an existing install base that already dominates the data center. In addition, its sheer simplicity and availability make it easy for anyone to try out. Pricing still hasn’t been announced, so that will be the big thing to watch for in 2014 with this offering, along with any new enhancements that come with general availability. In addition to VSAN, EMC’s ScaleIO is another more ‘software-based’ (rather than ‘appliance-based’) solution, already GA, that I’m looking forward to seeing more of in 2014. Along with VMware and EMC, Nutanix, Simplivity, Dell, HP, VCE, et al. all have varying “converged” solutions as well, so this isn’t going away any time soon. With this new wave of convergence products and interest, expect all kinds of new tech buzzwords to develop! I fully expect and predict “Software Defined Convergence” will become mainstream by the end of the year!

Random convergence links:
Duncan Epping VSAN article collection – http://www.yellow-bricks.com/virtual-san/
Scott Lowe – http://wikibon.org/wiki/v/VMware_VSAN_vs_the_Simplicity_of_Hyperconvergence
Cormac Hogan looks at ScaleIO – http://cormachogan.com/2013/12/05/a-closer-look-at-emc-scaleio/
Good look at VSAN and All-Flash Array performance – http://blogs.vmware.com/performance/2013/11/vdi-benchmarking-using-view-planner-on-vmware-virtual-san-part-3.html
Chris Wahl musing over VSAN architecture – http://wahlnetwork.com/2013/10/31/muse-vmwares-virtual-san-architecture/?utm_source=buffer&utm_medium=twitter&utm_campaign=Buffer&utm_content=buffer59ec6

The Fall of XenServer
As any reader of this blog knows, I used to be a huge proponent of XenServer. However, things have really gone downhill after 5.6 in terms of product reliability. So much so that I have a hard time recommending it at all anymore. ESXi was always at the top of my list, but XenServer remained a solid #2. Now it’s a distant third in my mind, behind Hyper-V. I’ll grant that there are many environments successfully and reliably running XenServer (I have built quite a few myself), but far too many suffer from bluescreen server crashes and general unreliability to be acceptable in many enterprises. The product has even had to be pulled from the site to prevent people from downloading it while bugs were fixed. I’ve never seen so many others express similar sentiments about this product as I have this past year.

Random CTP frustration with XenServer: [embedded content not shown]

Random stuff I’m reading
Colin Lynch has always had a great UCS blog and his two latest posts are great examples.  Best UCS blog out there, in my opinion:
UCS Manager 2.2 (El Capitan) Released
Under the Cisco UCS Kimono

I definitely agree with Andre here!  Too many customers don’t take advantage of CBRC and it’s so easy to enable:
Here is why your Horizon View deployment is not performing to it’s max!

Great collection of links and information on using HP’s Moonshot ConvergedSystem 100 with XenDesktop by Dane Young:
Citrix XenDesktop 7.1 HDX 3D Pro on HP’s Moonshot ConvergedSystem 100 for Hosted Desktop Infrastructure (HDI)

In the end, this post ends up being an “end of year” post with a few predictions.  Alas, at least I got the “random” part right…


Cisco UCS 101: MAC, WWN and UUID Pool Naming Conventions

Continuing with the Cisco UCS 101 series, I thought I’d post on MAC, WWPN, WWNN and even UUID pool naming conventions.  There are a number of ways this can be done, but as a general rule of thumb my pools will ensure a few things:

  1. Uniqueness of MACs/WWNs/etc. across blades, UCS Domains (aka “Pods”) and sites.
  2. The MACs/WWNs/etc. that are created from your pools should give you some level of description as to the location, fabric and OS associated with that particular address.
  3. Lastly, the naming convention should be as simple and un-cryptic as possible.  Naming conventions are useless if they aren’t easily discernible to those tasked with reading them.

With that out of the way, let’s look at a common naming scheme:

[Image: MAC pool naming convention]

This is fairly straightforward.  The first three bytes are a Cisco prefix that UCS Manager encourages you not to modify.  It can actually be modified, but I always keep it the same.  The next digit in this naming convention represents a site ID.  This can be any physical location where UCS may reside, so a production site might be “1” and a DR site might be “2”.  Then we come to “Pod”; in UCS nomenclature a “Domain” or “Pod” is simply a pair of fabric interconnects and any attached chassis.  For OS, I usually use “1” to denote VMware, “2” for Windows and “3” for a Linux host.  Fabric just denotes whether the MAC should be destined for Fabric A or B.  The last byte will just be an incremental number assigned by UCS.  Let’s look at an example:

[Image: example MAC pool]

In this example pool, the MAC address would belong to a server at site “1” that resides in UCS Pod “1” that is running VMware and should be communicating out of Fabric A.  A MAC address of 00:25:B5:23:1B:XX would denote a server at site “2” in the third UCS pod at that site running VMware and communicating out of Fabric B.  Another commonly used naming convention would look like this:

[Image: alternate MAC pool naming convention]

The only difference here is that the site/pod distinction has been done away with in favor of just a UCS Pod ID.  So while this example won’t allow you to easily distinguish a particular site, it will give you much larger Pod ID possibilities.  There’s no right answer as to which is best; it really just depends on the environment and personal preference.  For WWPN pools, I follow an almost identical naming scheme:

[Image: WWPN pool naming convention]

Again, the Cisco prefix can be modified but I just prefer to leave it as it is.  For WWNN, I follow a very similar convention except that I exclude Fabric ID:

[Image: WWNN pool naming convention]

As you can see, whether I’m looking at the MAC address, WWPN or WWNN I can easily discern from which site and pod the address originates, what OS the address belongs to and what fabric it is communicating out of.  UUID pools can be named similarly:

[Image: UUID suffix pool naming convention]
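
Once you have settled on a convention, a short script makes it easy to generate consistent pool prefixes and to decode an address at a glance during a network trace or zoning exercise. The helper below is a hypothetical sketch that simply encodes the site/pod/OS/fabric layout described above; adjust the field positions and OS codes to whatever you standardize on.

```python
OS_CODES = {"1": "VMware", "2": "Windows", "3": "Linux"}

def mac_prefix(site: int, pod: int, os_code: int, fabric: str) -> str:
    """Build the 5-byte MAC pool prefix: 00:25:B5:<site><pod>:<os><fabric>."""
    fabric_nibble = {"A": "A", "B": "B"}[fabric.upper()]
    return f"00:25:B5:{site}{pod}:{os_code}{fabric_nibble}"

def describe_mac(mac: str) -> str:
    """Decode a MAC built with mac_prefix() back into plain English."""
    octets = mac.upper().split(":")
    site, pod = octets[3][0], octets[3][1]
    os_name = OS_CODES.get(octets[4][0], "unknown OS")
    fabric = octets[4][1]
    return f"site {site}, pod {pod}, {os_name}, fabric {fabric}"

print(mac_prefix(site=1, pod=1, os_code=1, fabric="A"))   # 00:25:B5:11:1A
print(describe_mac("00:25:B5:23:1B:0F"))                  # site 2, pod 3, VMware, fabric B
```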

This doesn’t have to be, and shouldn’t be, complicated.  These simple, common naming schemes will not only ensure unique, informative and easily discernible addresses, but can also make common management tasks such as network traces or zoning that much easier.  Use the above examples as a guideline, but feel free to customize if there’s a scheme that fits your environment better.  For more on this topic, I recommend the following resources:

Cisco UCS WWN and MAC Pools

Cisco UCS Manager Configuration Common Practices and Quick-Start Guide


Double the Procedure, Double the Price?

In my last post I touched briefly on a claim I’m hearing a lot in IT circles these days.  This claim is often heard in discussions surrounding multi-hypervisor environments and, most recently, in VDI discussions.  The claim in question, at its core, says this – “If you have two procedures to perform the same task, you double your operational expense in performing that task”.  Given the prevalence of this argument, I wanted to focus on it in one post even though I’ve touched on it elsewhere.

As mentioned in my last post, Shawn Bass recently displayed this logic in a debate at VMworld.  The example given is a company with a mixture of physical and virtual desktops.  In this scenario they manage their physical desktops with Altiris/SCCM and use image-based management techniques for their non-persistent virtual desktops.  Since you are using two different procedures to accomplish the same task (update desktops), it is claimed that you then “double” your operational expense.

As I’ve said, in many scenarios this is clearly false.  The only way having two procedures “doubles” your operational cost is if both procedures require an equal amount of time/effort/training/etc. to implement and maintain.  And the odd thing about this example is that it actually proves the opposite of what it claims.  It’s very common for organizations to have physical desktops that they manage differently than their non-persistent virtual desktops.  Are these organizations just not privy to the nuances of operational expenditures?  I don’t think so; these organizations in many cases chose VDI at least in part for easier desktop management.  For many, it’s just easier and much faster to maintain a small group of “golden images” rather than hundreds or thousands of individual images.  So in this example, adding the second procedure of image-based management can actually reduce the overall operational expense.  Now a large portion of my desktops can be managed much more efficiently than they were before, which reduces the overall time and energy I spend managing my desktops and thus reduces my operational expense.

We see this same logic in a lot of multi-hypervisor discussions as well: “Two hypervisors, two ways of managing things, double the operational expense”.  When done wrong, a multi-hypervisor environment can fall into this trap.  However, before treating this logic as universally true, you have to evaluate your own IT staff and workload requirements.  Some workloads will be managed/backed up/recovered in a disaster/etc. differently than the rest of your infrastructure anyway, so putting these workloads on a separate hypervisor isn’t going to add to that expense.  The management of the second hypervisor itself doesn’t necessarily “double” your cost either, as in many cases your staff’s existing knowledge of how a hypervisor works in general translates well into managing an alternate hypervisor.  A lot more could be said here, but in the end, CAPEX savings should override any nominal added OPEX or you’re doing it wrong.

In general, standardization and common management platforms are things every IT department should strive for. Like “best practice” recommendations from vendors, however, we don’t apply them universally.  The main problem with this line of thinking is that it states a generalization as a universal truth and applies it to all situations while ignoring the subtle complexities of individual environments.  In IT, it’s just not that easy.


The Great Persistence Debate

There was a good discussion at VMworld this year between persistent and non-persistent VDI proponents.  The debate spawned from discussions on Twitter surrounding a blog post by Andre Leibovici entitled “Open letter to non-persistent VDI fanboys…”.  Representing the persistent side of the debate were Andre Leibovici and Shawn Bass.  Non-persistent fanboys were represented by Jason Langone and Jason Mattox.  Overall, this is a good discussion, with both sides pointing out some strengths and weaknesses of each position:

So which is the better VDI management model, persistent or non-persistent?  Personally, I think Andre nailed it near the end of the debate: it’s all about use case!  I know that’s the typical IT answer to most questions, but it really is the best answer in many of these “best tech” debates.  What matters to most customers is not which is the “best” but which is the “right fit”.  A Ferrari may be the best car in the world, but it’s clearly not the right fit for a family of four on a budget.  So while it may be fun and entertaining to discuss which is the best, in the real world the most relevant question is ‘which is the right fit given a particular use case?’.  If you have a call center with a small application portfolio, then this is an obvious use case for non-persistent desktops (though certainly not the only use case).  I agree with the persistence crowd in regards to larger environments that have extensive application portfolios.  The time it takes to virtualize and package all those applications, and the impossibly large amount of software required to go non-persistent for all desktops in such an environment (UEM, app publishing, app streaming, etc.), makes persistence a much more viable option.  This is why many VDI environments will usually have a mixture of persistent and non-persistent desktops.  These are extreme examples, but it’s clear that no one model is perfect for every situation.

Other random thoughts from this discussion:

Throughout the debate and in most discussions surrounding persistent desktops, the persistent desktop crowd often points to new technology advances that make persistent desktops a viable option.  Flash-based arrays, inline de-duplication, etc. are all cited as examples.  The only problem with this is that while this technology exists today, many customers still don’t have it and aren’t willing to make the additional investment in a new array or other technology on top of the VDI software investment.  So the technology exists and we can have very high-level, academic discussions on running persistent desktops with this technology but for many customers it’s still not a reality.
Here again, like most times this discussion crops up, the non-persistent crowd makes a point of trumpeting the ease of managing non-persistent desktops while glossing over how difficult it can be to actually deploy this desktop type when organizations are seeking a high percentage of VDI users.  Even if we ignore the technical challenges around application delivery, users still have to like the desktop…and most companies will have more users than they know that will require/demand persistent desktops.
About midway through the debate there is talk about how non-persistence is limiting the user and installing apps is what users want, but earlier in the debate the panel all agreed that just allowing users to install whatever app they want is a security and support nightmare.  I found this dichotomy interesting in that it illuminates this truth – whichever desktop model you choose the user is limited in some way.  Whatever marketing you may hear to the contrary, remember that.

And last but certainly not least…

In this debate Shawn delivers an argument I hear a lot in IT that I disagree with, and maybe this deserves a separate post.  He talks about the “duality” of operational expense when you are managing non-persistent desktops using image-based management in an environment where you still have physical endpoints being managed by Altiris/SCCM.  He says you actually “double” your operational expense managing these desktops in different ways.  The logic undergirding this argument is the assumption that ‘double the procedure equals double the operational cost’.  To me this is not necessarily true and, for many environments, definitely false.  The only way having two procedures “doubles” your operational cost is if both procedures require an equal amount of time/effort/training/etc. to implement and maintain.  And for many customers (who implement VDI at least partly for easier desktop management), it’s clear that image-based management is viewed as the easier and faster way to maintain desktops.  I see this same logic applied to multi-hypervisor environments as well, and I simply disagree that having multiple procedures will always mean you double or even increase your operational cost.

Any other thoughts, comments or disagreements are welcome in the comment section!


PCoIP Proxy for Horizon View

A couple of months ago F5 came out with a very intriguing announcement when they released full proxy support for PCoIP in the latest Access Policy Manager code version, 11.4.  Traditional Horizon View environments use “Security Servers” to proxy PCoIP connections from external users to desktops residing in the datacenter.  Horizon View Security Servers reside in the DMZ, and the software is installed on Windows hosts.  This new capability from F5 completely eliminates the need for Security Servers in a Horizon View architecture and greatly simplifies the solution in the process.

In addition to eliminating Security Servers and getting Windows hosts out of your DMZ, this feature simplifies Horizon View in other ways that aren’t being talked about as much.  One caveat to using Security Servers is that they must be paired with Connection Servers in a 1:1 relationship.  Any sessions brokered through these Connection Servers will then be proxied through the Security Servers they are paired with.  Because Security Servers are located in the DMZ, this setup works fine for your external users.  For internal users, a separate pair of Connection Servers is usually needed so users can connect directly to their virtual desktops after the brokering process without having to go through the DMZ.  To learn more about this behavior, see here and here.

Pictured below is a traditional Horizon View deployment with redundancy and load balancing for all the necessary components:

[Diagram: traditional Horizon View deployment with Security Servers]

What does this architecture look like when eliminating the Security Servers altogether in favor of using F5’s ability to proxy PCoIP?

[Diagram: Horizon View with the F5 PCoIP proxy]

As you can see, this is a much simpler architecture.  Note also that each Connection Server supports up to 2,000 connections.  I wouldn’t recommend pushing that limit, but the above servers could easily support around 1,500 total users (accounting for the failure of one Connection Server).  If you wanted full redundancy and automatic failover with Security Servers in the architecture, whether it was for 10 or 1,500 external users, you would still need at least two Security Servers and two Connection Servers.  A lot of times they are not there so much for increased capacity but simply for redundancy for external users, so eliminating them from the architecture can easily simplify your deployment.
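
The sizing arithmetic behind that statement is simple enough to capture in a few lines. The 2,000-session ceiling is the per-Connection-Server figure referenced above; the 75% headroom factor is just my own conservative assumption.

```python
import math

def usable_sessions(connection_servers: int, max_per_server: int = 2000,
                    headroom: float = 0.75, tolerated_failures: int = 1) -> int:
    """Sessions supportable after losing some servers, kept comfortably below the hard limit."""
    surviving = max(0, connection_servers - tolerated_failures)
    return math.floor(surviving * max_per_server * headroom)

# Two Connection Servers, tolerating the failure of one of them:
print(usable_sessions(2))   # 1500 -- the roughly 1,500-user figure above
```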

But could this be simplified even further?

[Diagram: Horizon View with the F5 PCoIP proxy and an internal VIP]

In this scenario the internal load balancers were removed in favor of the DMZ load balancers having an internal interface configured with an internal VIP for load balancing.  Many organizations will not like this solution because it will be considered a security risk for a device in the DMZ to have interfaces physically outside the DMZ.  ADC vendors and partners will claim their device is secure, but most customers still aren’t comfortable with this approach.  Another solution for small deployments with limited budget would be to just place the VIP pictured above in the DMZ.  Internal users will still connect directly to their virtual desktops on the internal network, and the DMZ VIP is only accessed during the initial load balancing process for the Connection Servers.  Regardless of whether you use an internal VIP or another set of load balancers, this solution greatly simplifies and secures a Horizon View architecture.

Overall, I’m really excited by this development and am interested in seeing whether other ADC vendors will offer this functionality for PCoIP in the near future.  To learn more, see the following links:

Inside Look – PCoIP Proxy for VMware Horizon View

Big-IP Access Policy Manager VMware Horizon View Integration Implementations

In 5 Minutes or Less – PCoIP Proxy for VMware Horizon View


Convergence and the Software-Defined Data Center

There’s been a lot of industry news lately regarding Software-Defined Storage, Software-Defined Data Centers and hyper-convergence.  After numerous conversations with various colleagues and friends about these concepts, I wanted to post my own thoughts on them and how I believe they are related.

First off, hyper-convergence has usually been used to denote the “next stage” in modern converged infrastructure.  With many of the popular reference architectures or pre-built systems representing some level of “convergence”, hyper-convergence has come to refer to those systems that combine multiple data center tiers into a single appliance.  However, as a term, I’ve come to view “hyper-convergence” as a misnomer.  When looking at the modern landscape of integrated infrastructure platforms, there is only “convergence” and “simulated convergence”.  Examples of converged infrastructure include Nutanix, Simplivity, et al., while simulated convergence examples can be found in vBlock, VSPEX and FlexPod.  And while there is differentiation within the simulated convergence platforms (e.g. pre-built vBlock vs. the VSPEX/FlexPod reference architectures), they are only “converged” insofar as their disparate components are cabled and racked together in a branded rack and sometimes managed with common software (e.g. Cloupia).  With simulated convergence, each “tier” of the data center is still represented by separate hardware components, and an attempt at unity is made through the use of “single-pane” management software.  Convergence differs from this in that data center tiers are consolidated into common hardware components, which naturally increases management software simplicity as well.

Another interesting difference is that while simulated convergence offers simplified management and automation, convergence gives you these same things plus performance, cost and reduced-complexity benefits as well.  Because convergence moves data center tiers onto a common platform, it naturally puts the network/compute/storage in closer proximity to each other, enabling greater performance and reduced complexity.  Cost savings are achieved not only through hardware consolidation; operational expenditures can be lessened in a converged model as well.

None of this is to say that simulated convergence is worthless.  On the contrary, simulated convergence via management software and reference-architecture/pre-built configurations can greatly increase the consume-ability and ease of management of these separate components.  Simulated convergence gives you increased efficiency on legacy platforms that organizations already have in place and already know how to manage.  It’s an improvement over traditional processes, but it is not actual convergence, which is the next logical progression.

Indeed, say what you will about specific converged offerings, but it’s hard to see why convergence as a model wouldn’t be the clear path to simplified software-defined data centers.  No matter how much management software and automation you put in front of it, simulated convergence will always require specialized knowledge of various divergent hardware components in order to properly maintain and run that model.  You would never deploy a vBlock and train your support staff only on Cloupia or vCenter with VSI plugins.  No, for advanced troubleshooting and configuration, an in-depth knowledge of all the network, hypervisor, compute, storage network and array components is necessary as well.  Management software can mask the complexity, but it’s still there.  It doesn’t move the control plane; it just creates another one.

Converged infrastructure that relies on commodity hardware and is software/virtualization-based shifts the focus from tier-based component management and support to a more holistic data center view.  Under the converged model, the deployment and ongoing maintenance of the underlying infrastructure are greatly simplified, allowing for faster application deployment, monitoring and troubleshooting.  In short, you spend much less time on your physical infrastructure and more time focusing on the business.  Of course, hardware is still necessary in such a system, but that’s not where the intelligence lies and, as we’ve seen, there’s much less of it!

Going forward, I’m convinced that the popularity of convergence will only increase.  What will be interesting to see is how the major compute/storage vendors handle this shift.  As convergence increases, will a storage and compute vendor team up to sell their own converged solution?  Will one of the startup convergence companies be acquired?  Whatever happens, this will be one of the more exciting areas of IT to be involved with for many years to come.  I can’t wait!


VMware Resource Pools Once More

Over the past few years there has been no shortage of excellent blog posts detailing how to properly configure resource pools in a vSphere environment. Despite the abundance, quality and availability of this information, resource pools still seem to be the most commonly misconfigured item on every VMware health check I’m involved with. Even though this is well-trodden territory, I wanted to lend my own way of explaining the issue, if for nothing else than to have a place to direct people for information on resource pools.

What follows below is a simple diagram I usually draw on a whiteboard to help explain to customers how resource pools work.

[Diagrams: ResourcePools1, ResourcePools2, ResourcePools3]

There’s not much to say that the pictures don’t already show.  Just remember to keep adjusting your pool share values as new VMs are added to the pool.  Also note that while I assigned 8000:4000:2000 to the VMs in the High:Normal:Low pools above, I could have just as easily assigned 8:4:2 to the same VMs and achieved the same results.  It’s the ratio between VMs that counts.  In either example, a VM in the “High” pool gets twice the resources under contention as a VM in the “Normal” pool, and four times as much as a VM in the “Low” pool.
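
To make the “keep adjusting your shares” advice concrete, the sketch below divides a cluster resource across pools by share value and then evenly across each pool’s VMs, which is exactly where the classic misconfiguration shows up. The pool shares and VM counts are hypothetical (the original diagrams aren’t reproduced here), and real entitlement also depends on reservations, limits and actual demand.

```python
def per_vm_entitlement(pools: dict, total_resource: float = 100.0) -> dict:
    """Split a resource across pools by share value, then evenly across each pool's VMs.

    pools maps pool name -> (pool_share_value, vm_count); returns the percentage of
    the cluster each individual VM in that pool is entitled to under full contention.
    """
    total_shares = sum(shares for shares, _ in pools.values())
    result = {}
    for name, (shares, vms) in pools.items():
        pool_slice = total_resource * shares / total_shares
        result[name] = round(pool_slice / vms, 1)
    return result

# Pool shares sized in proportion to their VM counts -- per-VM ratio stays 4:2:1.
print(per_vm_entitlement({"High": (8000, 4), "Normal": (4000, 4), "Low": (2000, 4)}))

# Add VMs to "High" without adjusting its share value and each High VM now gets
# LESS than a Normal VM -- the misconfiguration seen on so many health checks.
print(per_vm_entitlement({"High": (8000, 20), "Normal": (4000, 4), "Low": (2000, 4)}))
```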

Looking for more information on resource pools?

Feel free to send me any other good resource pool links in the comments section and I’ll add them to my list.


Cisco UCS 101: Installation and Basic Config

Below you’ll find step-by-step instructions on setting up a Cisco UCS environment for the first time.  I wanted to post this as a general guideline for those new to UCS who may be setting up their first lab or production environments.  It’s important to note that UCS is highly customizable and that configuration settings will differ between environments.  So, what you’ll see below is a fairly generic configuration of UCS with an ESXi service profile template.  Also important to note is that since the purpose of this post is to aid UCS newcomers in setting up UCS for the first time, I’ve done many of these steps manually.  Most of the configuration below can be scripted, and pools and policies can be created in the service profile template wizard, but to really learn where things are the first time through, I recommend doing it this way.
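
On the scripting point: most of what follows can also be driven programmatically through the UCS Manager XML API. As a taste, here is a minimal sketch using Cisco’s UCS Manager Python SDK (ucsmsdk) to create a VLAN, the same task performed through the GUI later in this post. The IP address, credentials and VLAN values are placeholders, and the class paths are worth verifying against the SDK documentation for your UCSM version.

```python
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.fabric.FabricVlan import FabricVlan

# Placeholder UCSM cluster IP and credentials -- substitute your own.
handle = UcsHandle("192.0.2.10", "admin", "password")
handle.login()

# Create VLAN 100 under the LAN cloud (equivalent to the "Create vLANs" step below).
vlan = FabricVlan(parent_mo_or_dn="fabric/lan", name="vlan100", id="100")
handle.add_mo(vlan)
handle.commit()

handle.logout()
```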

This is a pretty lengthy blog post, so if you’d like it in .pdf format, click here.

Cabling UCS

There’s really not much more to say on a general level that the pictures don’t already show. Based on how your environment is set up and the type of connectivity you require, the cabling could be much different than what is pictured above. The important things to note, however, are that you will only ever connect a particular I/O Module to its associated Fabric Interconnect (as shown above) and that, for Fibre Channel connections, “Fabric A” goes to “Switch A” and likewise for Fabric B. Each switch is then connected to each storage processor. Think of the Fabric Interconnects in this scenario as separate initiator ports on a single physical server (which is how we’ll configure them in our service profile) and the cabling will make much more sense.

Configuring the Fabric Interconnects

Connect to the console port of Fabric Interconnect (FI) “A”, which will be the primary member of the cluster. Power on FI-A and leave the secondary FI off for now. Verify that the console port parameters on the attached computer are as follows: “9600 baud”, “8 data bits”, “No parity”, “1 stop bit”. You will then be presented with the following prompts (with example input shown after each):

Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup. (setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Enter the password for “admin”: password

Confirm the password for “admin”: password
Is this Fabric interconnect part of a cluster(select ‘no’ for standalone)? (yes/no) [n]: yes
Enter the switch fabric (A/B) []: A
Enter the system name: NameOfSystem
(NOTE: “-A” will be appended to the end of the name)
Physical Switch Mgmt0 IPv4 address : X.X.X.X
Physical Switch Mgmt0 IPv4 netmask : X.X.X.X
IPv4 address of the default gateway : X.X.X.X
Cluster IPv4 address : X.X.X.X
(NOTE: This IP address will be used for Management)
Configure the DNS Server IPv4 address? (yes/no) [n]: y
DNS IPv4 address : X.X.X.X
Configure the default domain name? (yes/no) [n]: y
Default domain name: domain.com
Apply and save the configuration (select ‘no’ if you want to re-enter)? (yes/no): yes

Now connect to the console port of the secondary FI and power it on. Once again, you will be presented with the following menu items:

Enter the configuration method. (console/gui) ? console
Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Continue (y/n) ? y
Enter the admin password of the peer Fabric interconnect: password
Physical Switch Mgmt0 IPv4 address : X.X.X.X
Apply and save the configuration (select ‘no’ if you want to re-enter)? (yes/no): yes

Both Fabric Interconnects should now be configured with basic IP and cluster IP information. If, for whatever reason, you decide you’d like to erase the Fabric Interconnect configuration and start over from the initial configuration wizard, issue the following commands: “connect local-mgmt” and then “erase configuration”.

Configuring UCS

After the initial configuration and cabling of Fabric Interconnects A and B are complete, open a browser, connect to the cluster IP address and launch UCS Manager:

Configuring Equipment Policy

Go to the “Equipment” tab and then “Equipment->Policies”:

The chassis discovery policy “Action:” dropdown should be set to the number of links that are connected between an individual IOM and Fabric Interconnect pair. For instance, in the drawing displayed earlier, each IOM had four connections to its associated Fabric Interconnect, so a “4 link” policy should be created. This policy could be left at the default value of “1 link”, but my personal preference is to set it to the actual number of connections that should be present between an IOM and FI pair. This policy essentially just specifies how many connections need to be present for a chassis to be discovered.

For environments with redundant power sources/PDUs, “Grid” should be specified for a power policy. If one source fails (which causes a loss of power to one or two power supplies), the surviving power supplies on the other power circuit continue to provide power to the chassis. Both grids in a power redundant system should have the same number of power supplies. Slots 1 and 2 are assigned to grid 1 and slots 3 and 4 are assigned to grid 2.

Configuring Ports

Go to the “Equipment” tab and then “Fabric Interconnects->Fabric Interconnect A/B” and expand any Fixed or Expansion modules as necessary. Configure the appropriate unconfigured ports as “Server” (connections between IOM and Fabric Interconnect) and “Uplink” (connection to network) as necessary:

For storage ports, go to the “Equipment” tab and then “Fabric Interconnects->Fabric Interconnect A/B” and, in the right-hand pane, select “Configure Unified Ports”. Click “Yes” in the dialog box that follows to acknowledge that a reboot of the module will be necessary to make these changes. On the “Configure Fixed Module Ports” screen, drag the slider just past the ports you want to configure as storage ports and click “Finish”. Select “Yes” on the following screen to confirm that you want to make these changes:

Next, create port channels as necessary on each Fabric Interconnect for Uplink ports. Go to the “LAN” tab, then “LAN->LAN Cloud->FabricA/B->Port Channels->Right-Click and ‘Create Port Channel'”. Then give the port channel a name and select the appropriate ports and click “Finish”:

Select the Port Channel and ensure that it is enabled and is set for the appropriate speed:

Next, configure port channels for your SAN interfaces as necessary. Go to the “SAN” tab and then “SAN Cloud->Fabric A/B->FC Port Channels->Right Click and ‘Create Port Channel'”. Then give the port channel a name, select the appropriate ports and click “Finish”:


Select the SAN port channel and ensure that it is enabled and set for the appropriate speed:

Updating Firmware

What follows are instructions for manually updating firmware to the 2.1 release on a system that is being newly installed. Systems that are currently in production will follow a slightly different set of steps (e.g. “Set startup version only”). After the 2.1 release, firmware auto install can be used to automate some of these steps. Release notes should be read before upgrading to any firmware release as the order of these steps may change over time. With that disclaimer out of the way, the first step in updating the firmware is downloading the most recent firmware packages from cisco.com:

There are two files required for B-Series firmware upgrades. An “*.A.bin” file and a “*.B.bin” file. The “*.B.bin” file contains all of the firmware for the B-Series blades. The “*.A.bin” file contains all the firmware for the Fabric Interconnects, I/O Modules and UCS Manager.

After the files have been downloaded, launch UCS manager and go to the “Equipment” tab. From there navigate to “Firmware Management->Download Firmware”, and upload both .bin packages:

The newly downloaded packages should be visible under the “Equipment” tab “Firmware Management->Packages”.

The next step is to update the adapters, CIMC and IOMs. Do this under the “Equipment” tab “Firmware Management->Installed Firmware->Update Firmware”:

Next, activate the adapters, then UCS Manager and then the I/O Modules under the “Equipment” tab “Firmware Management->Installed Firmware->Activate Firmware”. Choose “Ignore Compatibility Check” anywhere applicable. Make sure to uncheck “Set startup version only”, since this is an initial setup and we aren’t concerned with rebooting running hosts:

Next, activate the subordinate Fabric Interconnect and then the primary Fabric Interconnect:

Creating a KVM IP Pool

Go to the “LAN” tab and then “Pools->root->IP Pools->IP Pool ext-mgmt”. Right-click and select “Create Block of IP addresses”. Next, specify your starting IP address and the total number of IPs you require, as well as the default gateway and primary and secondary DNS servers:

Creating a Sub-Organization

Creating a sub-organization is optional; sub-organizations exist for granularity and organizational purposes and are meant to contain servers/pools/policies with different functions. To create a sub-organization, right-click any “root” directory and select “Create Organization”. Specify the name of the organization and any necessary descriptions and select “OK”. The newly created sub-organization will now be visible in most tabs under “root->Sub-Organizations”:

Create a Server Pool

To create a server pool, go to the “Servers” tab and then “Pools->Sub-Organization->Server Pools”. Right-click “Server Pools” and select “Create Server Pool”. From there, give the pool a name and select the servers that should be part of the pool:

Creating a UUID Suffix Pool

Go to the “Servers” tab and then “Pools->Sub-Organizations->UUID Suffix Pool”. Right-click and select “Create UUID Suffix Pool”. Give the pool a name and then create a block of UUID suffixes. I usually try to create a two-letter/number code that aligns with my MAC/HBA templates and allows me to easily identify a server (e.g. “11” for production ESXi):

Creating MAC Pools

For each group of servers (i.e. “ESXi_Servers”, “Windows_Servers”, etc.), create two MAC pools: one that will go out the “A” fabric and another that will go out the “B” fabric. Go to the “LAN” tab, then “Pools->root->Sub-Organization”, right-click “MAC Pools” and select “Create MAC Pool”. From there, give each pool a name and a MAC address range that will allow you to easily identify the type of server it is (e.g. “11” for production ESXi) and the fabric it should be going out (e.g. “A” or “B”):

Whole blog posts have been written on MAC pool naming conventions; to keep things simple for this initial configuration, I’ve chosen a fairly basic convention where “11” denotes a production ESXi server and “A” or “B” denotes which FI traffic should be routed through. If you have multiple UCS pods and multiple sites, consider creating a slightly more complex naming convention that will allow you to easily identify exactly where traffic is coming from simply by reviewing the MAC address information. The same goes for WWNN and WWPN pools as well.

Creating WWNN Pools

To create a WWNN pool, go to the “SAN” tab, then “Pools->root->Sub-Organization”. Right-click on “WWNN Pools” and select “Create WWNN Pool”. From there, create a pool name and select a WWNN pool range. Each server (service profile) is assigned a single WWNN, while its two vHBAs draw their WWPNs from the WWPN pools, so the pool should contain at least as many WWNNs as there are servers:

Create WWPN Pools

Each group of servers should have two WWPN pools, one for the “A” fabric and one for “B”. Go to the “SAN” tab, then “Pools->root->Sub-Organization”. Right-click on “WWPN Pools” and select “Create WWPN Pool”. From there, give the pool a name and a WWPN range:

Creating a Network Control Policy

Go to the “LAN” tab, then “Policies->root->Sub-Organizations->Network Control Policies”. From there, right-click “Network Control Policies” and select “Create Network Control Policy”. Give the policy a name and enable CDP:

Create vLANs

Go to the “LAN” tab and then “LAN->LAN Cloud->VLANS”. Right-click on “VLANs” and select “Create VLANs”. From there, create a VLAN name and ID:

Create vSANs

Go to the “SAN” tab and then “SAN->SAN Cloud->VSANs”. Right-Click “VSANs” and select “Create VSAN”. From there, specify a VSAN name, select “Both Fabrics Configured Differently” and then specify the VSAN and FCoE ID for both fabrics:

After this has been done, go to each FC Port-Channel in “SAN” tab “SAN->SAN Cloud->Fabric A/B->FC Port Channels” and select the appropriate VSAN. Once the VSAN has been selected, “Save Changes”:

Creating vNIC Templates

Each group of servers should have two templates. One going out the “A” side of the fabric and one going out the “B” side. Go to the “LAN” tab, then “Policies->root->Sub-Organization->vNIC Templates”. Right-click on “vNIC Templates” and select “Create vNIC Template”. Give the template a name, specify the Fabric ID and select “Updating Template”. Also specify the appropriate VLANs, MAC Pool and Network Control Policy:

Creating vHBA Templates

Each group of servers should have two templates. One going out the “A” side of the fabric and one going out the “B” side. Go to the “SAN” tab, then “Policies->root->Sub-Organization->vHBA Templates”. Right-click on “vHBA Templates” and select “Create vHBA Template”. Give the template a name, specify the Fabric ID and select “Updating Template”. Also specify the appropriate WWPN Pool:

Creating a BIOS policy

For hypervisors, I always disable SpeedStep and Turbo Boost. Go to the “Servers” tab, then “Policies->root->Sub-Organizations->BIOS Policies”. From there, right-click on “BIOS Policies” and select “Create BIOS Policy”. Give the policy a name and, under “Processor”, disable “Turbo Boost” and “Enhanced Intel Speedstep”:

Creating a Host Firmware Policy

Go to the “Servers” tab, then “Policies->root->Sub-Organizations->Host Firmware Packages”. Right-click “Host Firmware Packages” and select “Create Host Firmware Package”. Give the policy a name and select the appropriate package:

Create Local Disk Configuration Policy

Go to the “Servers” tab, then “Policies->root->Sub-Organizations->Local Disk Config Policies”. Right-click “Local Disk Config Policies” and select “Create Local Disk Configuration Policy”. Give the policy a name and under “Mode:” select “No Local Storage” (assuming you are booting from SAN):

Create a Maintenance Policy

Go to the “Servers” tab, then “Policies->root->Sub-Organizations->Maintenance Policies”. Right-click “Maintenance Policies” and select “Create Maintenance Policy”. From there, give the policy a name and choose “User ack”. “User ack” just means that the user/admin has to acknowledge any maintenance tasks that require a reboot of the server:

Create a Boot Policy

Go to the “Servers” tab, then “Policies->root->Sub-Organizations->Boot Policy”. Right-click “Boot Policy” and select “Create Boot Policy”. Give the policy a name and add a CD-ROM as the first device in the boot order. Next, go to “vHBAs” and “Add SAN Boot”. Name the vHBAs the same as your vHBA templates. Each “SAN Boot” vHBA will have two “SAN Boot Targets” that will need to be added. The WWNs you enter should match the cabling configuration of your Fabric Interconnects. As an example, the following cabling configuration…:

Should have the following boot policy configuration:

Creating a Service Profile Template

Now that you have created all the appropriate policies, pools and interface templates, you are ready to build your service profile. Go to the “Servers” tab and then “Servers->Service Profile Templates->root->Sub-Organizations”. Right-click on the appropriate sub-organization and select “Create Service Profile Template”. Give the template a name, select “Updating Template” and specify the UUID pool created earlier. An updating template will allow you to modify the template at a later time and have those modifications propagate to any service profiles that were deployed using that template:

In the “Networking” section, select the “Expert” radio button and “Add” 6 NICs for ESXi hosts (2 for MGMT, 2 for VMs, 2 for vMotion). After clicking “Add” you will go to the “Create vNIC” dialog box. Immediately select the “Use vNIC Template” checkbox, select vNIC Template A/B and the “VMware” adapter policy. Alternate between the “A” and “B” templates on each vNIC:

In the “Storage” section, specify the local storage policy created earlier and select the “Expert” radio button. Next, “Add” two vHBAs. After you click “Add” and are in the “Create vHBA” dialog box, immediately select the “Use vHBA Template” checkbox and give the vHBA a name. Select the appropriate vHBA template (e.g. vHBA_A->ESXi_HBA_A, etc.) and adapter policy:

Skip the “Zoning” and “vNIC/vHBA Placement” sections by selecting “Next”. Then, in the “Server Boot Order” section, select the appropriate boot policy:

In the “Maintenance Policy” section, select the appropriate maintenance policy:

In the “Server Assignment” section, leave the “Pool Assignment” and power state options at their default. Select the “Firmware Management” dropdown and select the appropriate firmware management policy:

In “Operational Policies”, select the BIOS policy created earlier and then “Finish”:

Deploying a Service Profile

To deploy a service profile from a template, go to the “Servers” tab, then “Servers->Service Profile Templates->root->Sub-Organizations”. Right-click the appropriate service profile template and select “Create service profiles from template”. Select a naming prefix and the number of service profiles you’d like to create:

To associate a physical server with the newly created profile, right-click the service profile and select “Change service profile association”. In the “Associate Service Profile” dialog box, choose “Select existing server” from the “Server Assignment” drop down menu. Select the appropriate blade and click “OK”:

You can have UCS Manager automatically assign a service profile to a physical blade by associating the service profile template with a server pool. However, the way in which UCS automatically assigns a profile to a blade is usually not what most people want, and this approach allows you to assign profiles to specific slots for better organization.

Configuring Call Home

Go to the “Admin” tab and then “Communication Management->Call Home”. In the right-hand pane, turn the admin state to “On” and fill out all required fields:

In the “Profiles” tab, add callhome@cisco.com to the “Profile CiscoTAC-1”. Add the internal email address to the “Profile full_txt”:

Under “Call Home Policies”, add the following. More policies could be added but this is a good baseline that will alert you to any major equipment problems:

Under “System Inventory”, select “On” next to “Send Periodically” and change to a desirable interval. Select “Save Changes” and then click the “Send System Inventory Now” button and an email should be sent to callhome@cisco.com:

Configure NTP

In the “Admin” tab, select “Time Zone Management”. Click “Add NTP Server” in the right-hand pane to add an NTP server and select “Save Changes” at the bottom:

Backing up the Configuration

Go to the “Admin” tab and then “All”. In the right-hand pane, select “Backup Configuration”. From the “Backup Configuration” dialog box, choose “Create Backup Operation”. Change Admin states to “Enabled” and do a “Full State” and then an “All Configuration” backup. Make sure to check “Preserve Identities:” when doing an “All Configuration” backup and save both backups to the local computer and then to an easily accessible network location:

After backing up your configuration you can start your ESXi/Windows/Linux/etc. host configurations!  Now that all the basic prep work has been done, deploying multiple servers from this template should be a breeze.  Again, it’s important to note that what is shown above are some common settings typically seen in UCS environments, particularly when setting up ESXi service profile templates.  Certainly, there could be much more tweaking (BIOS, QoS settings, MAC pool naming conventions, etc.), but these settings should give you a general idea of what is needed for a basic UCS config.


Cisco UCS 101: Windows Boot from SAN

I’ve had a number of customers ask me about the steps needed to set up Windows boot from SAN in a Cisco UCS environment.  There are a number of resources out there already, but I wanted to go ahead and create my own resource that I can consistently point people to when the question comes up.  So, without further ado…

Assuming the service profile has already been built with a boot policy specifying CD-ROM and then SAN storage as boot targets, complete the following steps to install Microsoft Windows in a boot from SAN environment on Cisco UCS:

1.  First, download the Cisco UCS drivers from Cisco.com.  Use the driver .iso file that matches the level of firmware you are on:
[Screenshot: Cisco UCS driver downloads on cisco.com]

2.  Next, boot the server and launch the KVM console.  From the “Virtual Media” tab, add the Windows server boot media as well as the drivers .iso file downloaded in the previous step, and map the Windows boot media.  After the server is booted, zone only one path to your storage array (e.g. vHBA-A -> SPA-0).  Once the path has been zoned, you can also register the server on the array and add it to the appropriate storage groups.  Remember, it is very important that you only present one path to your storage array until multipathing can be configured in Windows after the installation.  A failure to do this will result in LUN corruption.
[Screenshot: mapping the Windows boot media]

3.  Once the installation reaches the point where you select the disk to install Windows on, the installation process will notify you that drivers were not found for the storage device.  Go back to the “Virtual Media” tab and map the drivers .iso file:
[Screenshot: mapping the drivers .iso]

4.  Next, go back to the KVM tab and select “Load Driver”:
[Screenshot: the “Load Driver” option]

5.  Navigate to the CD-ROM drive and drop all the way down to the exact folder appropriate for the OS you are installing:
[Screenshot: selecting the driver folder]

6.  After selecting the appropriate driver, the new drive should appear (you may have to select “Refresh” if it does not show up immediately).  Re-map the Windows media and continue with the installation:
[Screenshot: the newly available drive]

7.  After Windows is fully installed, configure the desired multipathing software and zone and register the rest of the paths to the array.

That’s about it!  This is really a very simple procedure; the most important things to note are to get the appropriate drivers and to zone only one path during installation.
