VMware has a KB article detailing a bug in ESXi 5.0 that has been known to cause a variety of networking issues in iSCSI environments. Until last week I had not encountered this particular bug, so I thought I’d detail my troubleshooting experience for those still on 5.0 who may run into it.
The customer I was working with had originally called for assistance because their storage array was reporting only 2 of 4 available paths “up” to each connected iSCSI host. All paths had been up/active until a recent power outage, and since then no amount of rebooting or disabling/re-enabling NICs had brought them all back up simultaneously. Their iSCSI configuration was fairly standard: two iSCSI port groups on a single vSwitch per server, with each port group connected to a separate iSCSI network. Each port group had a different NIC specified as an “Active Adapter”, with the other placed under the “Unused Adapters” heading.
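That teaming and port-binding configuration can be sketched with esxcli. This is a hedged sketch against the ESXi 5.x command set; the port group names, vmnic/vmk numbers and the vmhba are placeholders, not the customer’s actual values.

```shell
# Each iSCSI port group gets exactly one active uplink; the other NIC
# is left off the active/standby lists entirely ("Unused Adapters").
esxcli network vswitch standard portgroup policy failover set -p iSCSI-A -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p iSCSI-B -a vmnic3

# Bind each vmkernel port to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```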
One of the first things I wanted to rule out was a hardware issue related to the power outage. However, after very little troubleshooting I discovered that simply disabling and re-enabling NIC ports on the iSCSI switches would cause the “downed” paths to become active again on the storage array, while the path that was previously “up” would go down. As expected, a vmkping was never successful through a NIC that was not registering properly on the storage array. Everything appeared to be configured correctly on the array, the switches and the ESXi hosts, so at this point I had no clear culprit and needed to rule out potential causes. Luckily these systems had not yet been placed into production, so I was granted a lot of leeway in my troubleshooting process.
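A quick way to exercise every path is a vmkping sweep from each iSCSI vmkernel port. This is a sketch; the IPs and vmk names are assumptions, and the -I source-interface flag is not available on every 5.x build.

```shell
# Ping each array target interface from each iSCSI vmkernel port;
# a path that the array reports "down" should fail its vmkping here.
for vmk in vmk1 vmk2; do
  for target in 10.10.1.50 10.10.2.50; do
    vmkping -I $vmk -c 3 $target
  done
done
```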
- Test #1. For my first test I wanted to rule out the storage array. I was working with this customer remotely, so I had them unplug the array from the iSCSI switches and plug it into a spare Linksys switch they had lying around. I then had them plug their laptop into the same switch and assign it an IP address on each of the iSCSI networks. All ping tests to each interface were successful, so at this point I was fairly confident the array was not the cause of the issue.
- Test #2. For my second test I wanted to rule out the switches. I had the customer plug all array interfaces back into the original iSCSI switches, then unplug a few ESXi hosts from those switches. They assigned their laptop the same IP addresses as the unplugged ESXi host iSCSI port groups and ran additional ping tests from the same switch ports the ESXi hosts had been using. All ping tests on every interface were successful, so it appeared unlikely that the switches were the culprit.
At this point it appeared almost certain that the ESXi hosts were the cause of the problem. They were the only component having any communication issues, as all other components taken in isolation communicated just fine. It was also evident that NIC failover/failback wasn’t working correctly (given the behavior when we disabled/re-enabled ports), so I put the iSCSI port groups on separate vSwitches. BINGO! Within a few seconds of doing this I could vmkping on all ports and the storage array showed all ports active again. Given that this is not a required configuration for iSCSI networking on ESXi, I immediately started googling for known bugs. Within a few minutes I ran across an excellent blog post by Josh Townsend and the KB article I linked to above. The bug causes ESXi to actually send traffic down the “unused” NIC during a failover scenario.
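The workaround can be sketched as follows. The names are assumptions; you would also need to re-create the second vmkernel port and port group on the new switch and re-bind the vmk to the iSCSI adapter.

```shell
# Give the second iSCSI port group its own vSwitch with a single uplink,
# so there is no "unused" NIC in the team for ESXi to misdirect traffic to.
esxcli network vswitch standard add -v vSwitch2
esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic3
esxcli network vswitch standard portgroup add -v vSwitch2 -p iSCSI-B
```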
This is why separating the iSCSI port groups “fixed” the issue: there was no unused NIC in the port group for ESXi to mistakenly send traffic to. It also explained why disabling/re-enabling a downed port would cause it to become active again (and vice versa). ESXi was sending traffic down the unused port, and my disable/re-enable triggered a failover that caused ESXi to send traffic down the active adapter again.
In my case, upgrading to 5.0 Update 1 completely fixed the issue. I’ll update this post if I run across this problem on any other version of ESXi; in the meantime, note the workaround described above and outlined in both links.
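After patching, it’s worth confirming the host is actually past the affected 5.0 GA build. A quick sketch from the host shell:

```shell
# Report version and build number; compare against the fixed build in the KB
esxcli system version get
vmware -vl
```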
Both VMware View and Citrix XenDesktop require permissions within vCenter to provision and manage virtual desktops, and both VMware and Citrix have documentation on the exact permissions required for this service account. Creating a service account with the minimal set of permissions necessary, however, can be cumbersome, and as a result many businesses have elected to simply create an account with “Administrator” permissions within vCenter. While much easier to set up, this configuration will not win you any points with a security auditor.
To make this process a bit easier I’ve created a couple of quick scripts, one for XenDesktop and one for View, that create “roles” with the minimal permissions necessary for each VDI platform. For XenDesktop, the script creates a role called “Citrix XenDesktop” with the privileges specified here. For View, the script creates a role called “VMware View” with the privileges specified on pages 87-88 here. VMware mentions creating three roles in its documentation, but I just created one with all the permissions necessary for View Manager, Composer and local mode. Removing the “local mode” permissions is easy enough in the script if you don’t think you’ll use it, and since the vast majority of View deployments I’ve seen use Composer, I didn’t see it as necessary to separate that into a different role either. You’ll also note that I used the privilege “Id” instead of “Name”. The problem I ran into is that “Name” is not unique among privileges (e.g. there is a “Power On” under both “vApp” and “Virtual Machine”) while “Id” is unique. So, for consistency’s sake I used “Id” to reference every privilege. The only modification these scripts need is your vCenter IP/hostname after “Connect-VIServer”.
Of course, these scripts could be expanded to automate more tasks, such as creating a user account and giving access to specific folders or clusters, etc., but I will let all the PowerCLI gurus out there handle that. 🙂 Really, the only goal of these scripts is to automate the particular task that most people skip due to its tedious nature. Feel free to download, critique and expand as necessary.
Documentation for creating custom load evaluators in Citrix has existed for some time. Articles detailing the folly of using the “Default” load evaluator have been around for a while as well. Citrix even has an excellent whitepaper titled “Top 10 Items Found by Citrix Consulting on Assessments” that lists improper load management as the 2nd most common misconfiguration found by Citrix Consulting, and it gives an example baseline custom load evaluator. Despite all this, environments using the Default load evaluator are still prevalent and make up at least half the Citrix assessments I’m involved with. When words fail to make an impression, sometimes a visual can help:
The problem with the Default load evaluator is clear: it takes user distribution into account, but not actual server resource consumption. Citrix load indexes are calculated on a 0-10,000 scale (you can see the value for each server with the “qfarm /load” command), with 10,000 being a “full” server. As you can see above, Server03 is the least busy from a Citrix perspective (since it has the fewest users logged on), despite being the busiest from a server perspective. Further, the Default load evaluator sets the maximum number of users per server at 100, while the environment above will not support more than 25-30. So from both a load distribution and a capacity perspective, the Default load evaluator is clearly ill-suited for any production environment.
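To make the scale concrete, here is a toy illustration of how a resource-based rule maps consumption onto the 0-10,000 index. The linear mapping and the 90%/85% figures are assumptions for illustration, not Citrix’s actual algorithm.

```shell
# A resource rule reports "full" (10,000) at some threshold; below that,
# assume the index scales roughly linearly with utilization.
usage=85          # current CPU utilization (%) on the server
full_threshold=90 # utilization at which the rule reports a full load
load=$(( usage * 10000 / full_threshold ))
echo $load        # 9444 -- nearly full, even if few users are logged on
```

This is exactly the case the Default evaluator misses: a server at 85% CPU with few sessions still looks “least busy” to a user-count-only rule.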
A custom load evaluator that accounts for resource consumption takes less than 5 minutes to create and apply to the appropriate servers in your farm. As mentioned previously, the Citrix whitepaper linked above has a good baseline custom load evaluator that should get you started. So take the time to make this simple farm optimization; your users will thank you!
After reading a bevy of excellent articles on multi-hypervisor datacenters, I thought I’d put pen to paper with my own thoughts on the subject. This article by Joe Onisick will serve as a primer for this discussion, not only because it was recently written, but because it does an excellent job of fairly laying out the arguments on both sides of the issue. The article mentions three justifications organizations often use for deploying multiple hypervisors in their datacenter: 1) cost, 2) leverage and 3) lock-in avoidance. I am in complete agreement that #2 and #3 are poor reasons to deploy multiple hypervisors; however, my disagreement on #1 is what I’d like to discuss in this post.
The discussion of the validity of multi-hypervisor environments has been going on for several years now. Steve Kaplan wrote an excellent article on this subject back in 2010 that mentions the ongoing debate at that time, and discussions of the subject pre-date even that post. The recent acquisition of DynamicOps by VMware has made this a popular topic again, and a slew of articles have been written covering it. Most of these articles seem to agree on a few things. First, regardless of whether it’s best for them, multi-hypervisor environments are increasing across organizations and service providers. Second, cost is usually the deciding factor in deploying multiple hypervisors, but it is a poor reason because you’ll spend more money managing the environment and training your engineers than you saved on the cost of the alternative hypervisor. Third, deploying multiple hypervisors this way doesn’t allow you to move to a truly “private cloud” infrastructure: you now have two hypervisors and need two DR plans, two deployment methods and two management models. Let’s take each of these arguments against cost in turn and see how they hold up.
OpEx outweighs CapEx
As alluded to above, there’s really no denying that an organization can save money buying alternative hypervisors that are cheaper than VMware ESXi. But do those cost savings outweigh the potential increase in operational expenditures now that you’re managing two separate hypervisors? As the article by Onisick suggests, this will vary from organization to organization. I’d like to suggest, however, that the increase in OpEx cited by many other sources as a reason to abandon multi-hypervisor deployments is often greatly exaggerated. Frequently cited is the increase in training costs: you have two hypervisors, so now you have to send your people to two different training classes. I don’t necessarily see that as the case. If you’ve been trained on and have a good grasp of the ESXi hypervisor, learning and administering the nuances and feature set of another hypervisor is really not that difficult, and formal training may not be necessary. Understanding the core mechanisms of what a hypervisor is and how it works goes a long way toward allowing you to manage multiple hypervisors. And even if you did have to send your people to a one-time training class, is it really likely that the cost of the class will outweigh the ongoing hypervisor cost savings? If so, then you probably aren’t saving enough money to justify multiple hypervisors in the first place. Doing a quick search, I found week-long XenServer training available for $5,000. Evaluate your people, do the math and figure out the cost savings in your scenario. Just don’t rule out multi-hypervisor environments on the assumption that training costs will be astronomical, or even necessary for all of your employees.
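The math is worth sketching. Every figure below is an assumption except the $5,000 class cited above; plug in your own numbers.

```shell
# Back-of-napkin break-even: one-time training cost vs. ongoing license savings
training_cost=5000       # per engineer (the week-long XenServer class above)
engineers=2              # assumed headcount needing formal training
savings_per_host=1500    # hypothetical annual licensing delta per host
hosts=10                 # hosts moved to the cheaper hypervisor

annual_savings=$(( savings_per_host * hosts ))
one_time_training=$(( training_cost * engineers ))
# months until savings cover training (integer floor)
months_to_break_even=$(( (one_time_training * 12 + annual_savings - 1) / annual_savings ))
echo $months_to_break_even  # 8
```

With these (assumed) numbers the training cost is recovered in under a year; if your own math comes out to several years, the savings probably weren’t big enough to justify a second hypervisor in the first place.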
Similar to the OpEx discussion, another argument often presented against the cost-saving benefits of multi-hypervisor environments is that they are harder to administer, since you have to come up with separate management strategies for VMs residing on the different hypervisors. Managing things in two separate ways, it is argued, moves away from the type of private cloud infrastructure most organizations should strive for. The main problem with this argument is that it assumes you would manage all of your VMs the same way even if they were on the same hypervisor. This is clearly false. Two clear examples are XenApp and VDI. The way you manage these types of environments, deploy their VMs and plan their DR is often vastly different from the rest of your server infrastructure. And so, if there is a significant cost savings, these are the environments that are often good targets for alternate hypervisors. They are good candidates not only because they are managed differently regardless of hypervisor, but because they often don’t require many of the advanced features only ESXi provides.
I’m in complete agreement that having test/dev and production on separate hypervisors is a bad idea. Testing things on a different platform than they run in production is never good. But if you can save significant amounts of money by moving some of these systems that are managed in ways unique to your environment onto an alternate hypervisor, I’m all for it. This may not be the best solution for every organization (or even most), but like all things, should be evaluated carefully before ruling it out or adopting it.
Which is better, Citrix XenDesktop or VMware View? XenServer or ESXi? HDX or PCoIP? While the answers to these questions are debated on numerous blogs, at tech conferences and in marketing literature, what is explored far less often is how Citrix and VMware technologies can actually work together. What follows is a brief overview of some of the ways these technologies can be combined to form integrated virtual infrastructures.
1) Application and Desktop delivery with VMware View and XenApp
Many organizations deploying VMware View already have existing Citrix XenApp infrastructures in place. The View and XenApp infrastructures are usually managed by separate teams and not integrated to the degree they could be. Pictured above are some possible ways these two technologies can integrate. As you can see, there are many options for application delivery across both environments. The most obvious is publishing applications from XenApp to your View desktops. This can reduce resource consumption on individual desktops and provides the added benefit of accessing those same applications outside your View environment, with the ability to publish directly to remote endpoints as well. Existing Citrix infrastructures may also be utilizing Citrix application streaming. By simply installing the Citrix clients on your View desktops, applications can be streamed directly to View desktops, directly to endpoints, or to XenApp servers and then published to View desktops or endpoints. Another option is to integrate ThinApp into this environment; Tina de Benedictis had a good write-up on this a while back. The options are similar to Citrix streaming: you can stream to a XenApp server and publish the application from there, stream directly to your View desktops, or stream directly to endpoints. As shown in the picture above, both Citrix streaming and ThinApp can be used within the same environment. This might be an option if you’ve already packaged many of your applications with Citrix but either want to migrate to ThinApp over time or need to package and stream certain applications that Citrix streaming cannot (e.g. Internet Explorer). Whichever options you choose, it’s clear that both technologies can work together to form a very robust application and desktop delivery infrastructure.
2) Load Balancing VMware infrastructures with Citrix Netscaler
Some good articles have been written about this option as well. In fact, it is becoming popular enough that VMware even has a KB dedicated to ensuring the correct configuration of Citrix NetScalers in View environments. VMware View and VMware vCloud Director have redundant components that should be load balanced for best performance and high availability. If you have either of these products and are already using Citrix NetScaler to proxy HDX connections or load balance Citrix components or other portions of your infrastructure, why not use it for VMware as well? Pictured above is a high-level overview of load balancing some internal-facing View Connection Servers. Users connect to a VIP defined on the NetScalers (1), which directs them to the least busy View Connection Server (2), which then connects them to the appropriate desktop based on user entitlement (3). After the initial connection process, the user connects directly to their desktop over PCoIP.
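The flow above can be sketched in NetScaler CLI terms. This is a hedged sketch: the names and IPs are assumptions, and your View security settings may call for a different service type or ports.

```shell
# Two internal View Connection Servers behind a least-connection VIP
add server view-cs1 10.0.1.11
add server view-cs2 10.0.1.12
add service svc-view-cs1 view-cs1 SSL 443
add service svc-view-cs2 view-cs2 SSL 443
# Users hit this VIP (step 1); NetScaler picks the least busy server (step 2)
add lb vserver vip-view SSL 10.0.1.10 443 -lbMethod LEASTCONNECTION
bind lb vserver vip-view svc-view-cs1
bind lb vserver vip-view svc-view-cs2
```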
3) XenApp and XenDesktop on ESXi

This is actually an extremely popular combination, and the reasons are numerous and varied. You get 32-host clusters (only 16 with XenServer and 8 with VMware View on ESXi), Storage vMotion and Storage DRS (XenServer doesn’t have these features and you can’t use them with VMware View), memory overcommitment (only ESXi has legitimate overcommit technology), Storage I/O Control, Network I/O Control, multi-NIC vMotion, Auto Deploy, and many more features that you can only get from the ESXi hypervisor. Using XenApp and XenDesktop on top of ESXi gets you the most robust combination of hypervisor and application and desktop virtualization technologies possible.
4) XenApp as a connection broker for VMware View
This option intrigues me from an architectural point of view, but I have yet to see it utilized in a production environment. With this option you would publish the View Client from a XenApp server. Users would connect over external or WAN links using HDX/ICA, and the XenApp server would connect to the View desktop on the LAN over PCoIP. I can think of a couple of benefits off-hand. First, HDX generally performs better over high-latency connections, so there could be a user experience boost. Second, VMware View uses a “Security Server” to proxy external PCoIP connections; the Security Server software resides on a Windows Server OS, and a hardened security appliance like a NetScaler would be more secure. As for flaws, I’d be interested to see how things like printing and USB redirection would behave in such an environment, but it’s definitely something I’d like to explore more.
So, those are a few of the possibilities for integrating VMware and Citrix technologies, what are some other combinations you can think of? Any other benefits or flaws in the above mentioned methods?
Those familiar with VMware certification exams will have experience studying with the excellent exam blueprints that accompany each test. I took the VCP5-DT (VMware View 5) test several weeks ago and used its exam blueprint to study from. While filling out the blueprint for my own study purposes, I thought it might be a useful tool for others, so I went ahead and filled out most of the rest of the blueprint as well. I did, however, leave out certain portions for various reasons: a) the meaning of the particular section was unclear, b) portions of the blueprint were redundant or c) certain sections can only be learned through real-world experience (e.g. troubleshooting). Despite these omissions, there is quite a bit of content here (30 pages). I got most of it from the resources listed in the exam blueprint and copied and pasted tables as necessary. I did add my own commentary in several places where I felt the listed resources did not go far enough in their explanations.
Download the blueprint study guide here.
With the recent release of XenDesktop 5.6, Citrix has introduced the “Personal vDisk” feature into its XenDesktop product line. See below for links on how Personal vDisks work, but the basic idea behind this technology is that it allows you to create pools of non-persistent desktops while still allowing users to install applications on those desktops, with the applications persisting between reboots and base image updates. This is a significant improvement over “dedicated” virtual desktops, where any update to the base image would completely wipe out user customization. That limitation forced administrators to apply updates to each dedicated desktop individually, which would, over time, consume large amounts of storage space. Needless to say, the Personal vDisk model is a welcome step forward for Citrix.
Now, with this release there was some exciting news about this technology’s ability to resolve application conflicts between user- and admin-installed apps. For example, in this video, between the 6:00 and 7:40 marks, an interesting scenario is given where a user installs Firefox 9 but the admin installs Firefox 10 as part of an image update. The default behavior is that Firefox 9 will be “hidden” and Firefox 10 will be the application available to end users. Another scenario is given where both the user and admin have installed the exact same application; we are told that in this case the user-installed app is removed from the Personal vDisk to save space and only the admin-installed app is utilized. In the Personal vDisk FAQ, we’re also told that “Should an end-user change conflict with an administrator’s change, personal vDisk provides a simple and automatic way to reconcile the changes”. With all this in mind, I set out to test the feature myself and see how it actually works. As you might have guessed, things aren’t quite as “easy” as advertised.
What follows are the high-level steps I took to initially test this feature and try to get it to work:
Test #1
- Install Firefox 10 in the base/parent image
- Update Inventory and Shutdown, create new snapshot
- Update image
- Install Firefox 11 as user
At this point I was expecting an error or warning telling me that Firefox 11 conflicts with an admin-installed app and denying the installation. However, this did not happen and I was able to install Firefox 11 as a user. This led to my next test.
Test #2
- Install Firefox 11 in the base/parent image
- Update Inventory and Shutdown, create new snapshot
- Update image
- Install Firefox 10 as user
Again, I was expecting some kind of error or warning at this point but it never happened. As a user, I was able to install the older version of Firefox without any issues. This led to another test.
Test #3
- Install Firefox 11 in the base/parent image
- Update Inventory and Shutdown, create new snapshot
- Update image
- Install Firefox 11 as user and observe more space being taken up on the Personal vDisk
Again, no warnings or errors at this point despite directly creating a conflict between a user and admin installed app and wasting space on the Personal vDisk. I tried this same test with several different applications but had the same result each time. Frustrated, I turned to the Citrix Forums and found the answer to why this doesn’t work.
As explained in that forum thread, the reason my tests didn’t turn out the way I expected is that Personal vDisk application conflict resolution does not happen proactively, at the time a user installs an application, but only after a base image update when files or folders have been modified. To borrow the example given in the forum at a more granular level, say that “app.dll” is present in the base image. The user installs an application or in some other way changes “app.dll” on their virtual desktop. This change will persist indefinitely until “app.dll” is once again updated in the base image. At that point the inventory process will note that “app.dll” has been modified, and the user’s changes to “app.dll” will be overwritten the first time the virtual desktop boots after the image update.
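The reconcile rule reduces to a toy sketch: the user’s copy of a file survives until the inventory sees the base image’s copy change, at which point the base image wins. The version numbers below are stand-ins for the inventory’s change detection, not actual PvD internals.

```shell
# Which copy of app.dll survives the next boot after an image update?
base_ver=2          # version of app.dll now in the base image
inventoried_ver=1   # version recorded at the last inventory/snapshot
if [ "$base_ver" -gt "$inventoried_ver" ]; then
  winner=base       # base image changed: user's copy is overwritten on next boot
else
  winner=user       # base unchanged: user's modification persists indefinitely
fi
echo $winner        # base
```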
I decided to test this out at the individual file level to easily verify the results. Here is a file in C:\Test on my base image. Note the size:
As a user, I modify this file by deleting all of its content, and I create another file in this directory. Note the sizes:
Now, these user changes persist between reboots and even persist between image updates when this specific file is not updated. However, when I go back into my base image and update that file (add a word), here’s what it looks like to the user after an image update:
As you can see, the admin changes in the base image have overwritten the user changes. Going back to my earlier examples, this same behavior holds true for entire applications as well. For instance, in Test #3, if I go back into the base image and reinstall Firefox 11, those files are removed from the Personal vDisk the first time it boots up and I now use the application as installed by the administrator in the base image. In Test #2, if I go back in and reinstall Firefox 11 on the base image, I now see Firefox 11 as the end user and the Firefox 10 files are overwritten.
While the Personal vDisk feature of XenDesktop 5.6 is a definite step in the right direction, there is still some work to be done on application conflict resolution. Currently, the only way to be sure that admin-installed apps overwrite any conflicting user-installed apps is to regularly go into the base image and update or reinstall your applications. Further, since the default behavior is for admin-installed apps to “win” in the event of a conflict, administrators should take care when updating applications and images, as they could inadvertently overwrite user-installed apps they didn’t intend to overwrite, leading to a confusing experience for the user (“Hey! I didn’t install this version!?”).
Not having a solid application conflict mechanism in place isn’t a deal-breaker for me; after all, current “dedicated” desktops don’t have a solution for this either. However, it is important to know how this works and when overwrites occur so you can properly manage applications in your environment and aren’t unintentionally creating a bad experience for your users. A future post may delve into ways to modify the default behavior (admin apps overwriting user apps), but for now I put this out there for all who may be confused as to how this works, as I was.
Here are some useful Personal vDisk links:
For today’s post I’d like to introduce the first guest blogger to post on speakvirtual.com, Jamie Lin! Jamie has been working in the IT industry for a long time and has a ton of knowledge across a broad spectrum of technologies. Jamie and I co-wrote the below post and I anticipate him contributing more content in the future.
What is it?
With the advent of XenServer 5.6 SP2 and XenDesktop 5 SP1, Intellicache became a configurable and supported feature of the Citrix VDI stack. You can use Intellicache with the combination of XenServer and XenDesktop Machine Creation Services (MCS). The basic idea behind Intellicache is that it allows you to take some of the pressure off your shared storage by offloading IO onto host local storage. As discussed before on this site, IO has historically been one of the most overlooked and biggest technical challenges in any VDI rollout. With Intellicache, Citrix has sought to alleviate this issue. See below for more on how it works and some additional considerations you won’t find in the documentation.
How does it work?
The folks over at the Citrix blogs have already done an excellent job explaining how Intellicache works, so we’ll try not to repeat too much here. At a basic level, the offloading of IO is achieved by caching blocks of data that virtual desktops read from shared storage onto host local storage. So if Intellicache is enabled and a Windows 7 VM boots on a particular XenServer host, the roughly 200MB accessed by the operating system during the boot process is cached on the host’s local storage. Subsequent VMs that boot on that host will then read these blocks from local storage instead of the SAN. In addition, if you are using non-persistent images, your writes will occur exclusively on local storage as well. Persistent (aka “dedicated”) images will write to both local and shared storage. We think this image from the Citrix blog sums it up nicely:
You might also be wondering about storage space and what happens when you run out of room on local storage. With both read and write caches living on local storage, this is bound to happen from time to time. Luckily, Intellicache takes this into account and will seamlessly fail back to shared storage in the event local storage runs out of space. For more on how it works, see the link above or read more here.
How to enable Intellicache
This CTX article explains the process of enabling Intellicache quite nicely. Basically it’s a two-step process. The first step occurs during the installation of XenServer itself, where you select “Enable thin provisioning (Optimized storage for XenDesktop)”. This option changes the default local storage type from LVM to EXT3. The second step occurs after the installation of XenDesktop, when you create a connection to your host infrastructure. There is a checkbox that says “Use IntelliCache to reduce load on the shared storage device”. Selecting this checkbox changes the virtual disk parameter “allow-caching ( RW)” to “true” for any virtual desktop created from that particular catalog. You can verify this by issuing the command “xe vdi-param-list uuid=<VDI_UUID>”:
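Putting the verification steps together in the XenServer console (a sketch; substitute the real UUID reported by vdi-list):

```shell
# Find the virtual disk backing the desktop, then check its caching flag
xe vdi-list params=uuid,name-label
xe vdi-param-list uuid=<VDI_UUID> | grep allow-caching
```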
You can also use the performance graphs to see Intellicache in action. In the performance tab, verify that “Intellicache Hits”, “Intellicache Misses” and “Intellicache Size” are all selected. If they are, you will be able to monitor its usage as shown below:
While we’re uncertain as to whether Citrix will support this, it is also possible to enable or disable Intellicache on a per-VM basis. You do this with the following command: “xe vdi-param-set uuid=<VDI_UUID> allow-caching=true”. You can then use the param-list command to view the parameters of that virtual disk and confirm that “allow-caching” is set to true. As the VM starts to utilize Intellicache, you’ll see Intellicache hits and misses for it start to appear in the performance tab.
While this may appear a bit complicated, it is important to note that the only things necessary to implement Intellicache are selecting the thin provisioning option during XenServer installation and selecting the checkbox when creating the catalog in XenDesktop. The command line options merely give you more granular control over configuring Intellicache and let you see what it’s doing “under the hood”.
According to the XenServer installation guide, when you use Intellicache, “The load on the storage array is reduced and performance is enhanced”. Given that VDI IO is such a concern for most deployments, shouldn’t we just enable Intellicache all the time? Our answer is no. While Intellicache does take IO pressure off your shared storage array, you now have another IO concern: IO on local storage. Remember what we said earlier about Intellicache failing back to shared storage if you run out of disk space on local storage? Well, what happens if your local storage can’t handle the IO being generated by the virtual desktops on your host? Will it fall back to shared storage? The answer is no! There is no built-in safeguard to prevent your VMs from generating too much IO against local storage and thereby creating bad performance for any VM utilizing that host’s cache for reads and writes. This all but makes local SSDs an absolute necessity, particularly in blade environments where most vendors provide only two slots for local storage per blade. Given that most environments use N+1 redundancy for their host infrastructure, your local disks need to be able to handle the IO for the number of VMs that can reside on two hosts!
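A rough sizing sketch makes the point. Every figure below is an assumption, so substitute your own measured per-desktop IOPS.

```shell
# Rough local-storage IOPS sizing under N+1 host redundancy
vms_per_host=70   # steady-state desktops per host (assumed)
iops_per_vm=15    # average steady-state IOPS per desktop (assumed)
# Worst case after a host failure: a surviving host absorbs a failed
# host's desktops, so size its local disks for roughly double the load
peak_iops=$(( 2 * vms_per_host * iops_per_vm ))
echo $peak_iops   # 2100
```

A pair of 15K spindles won’t deliver anywhere near that under a VDI write-heavy mix, which is why local SSDs become nearly mandatory in blade form factors.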
There is another concern here as well that, as far as we can tell, is completely undocumented by Citrix. When you use Intellicache, non-persistent VMs will be unable to XenMotion! This makes complete sense when you think about it. How could a VM live migrate to another host when its write differentials reside on a separate host (the “Migrate to Server” option isn’t even present on these VMs)? What makes this so interesting is that it appears not to be mentioned by Citrix anywhere. It’s not in the installation guides, we couldn’t find it on eDocs, and their blog on Intellicache only mentions XenMotion in regard to dedicated desktops! This means you cannot perform any type of host maintenance that requires downtime while there are running non-persistent (aka “pooled”) desktops present on the host. Notice that we said “running”, not “in-use”, for a VM can still be running even though no one is using it. This caveat alone will be a deal-breaker for many considering the use of Intellicache and is something Citrix should have more openly documented.
With this post we wanted to give a broad overview of how Intellicache works and some general considerations before deploying XenDesktop with Intellicache. As we’ve seen, local host IO capacity planning becomes paramount with the use of Intellicache and VM mobility is reduced. As it stands now, Intellicache use-case scenarios will be fairly limited, and more features and configurable granularity need to be built into the system before broader adoption can occur. We’ll dig deeper into Intellicache in future posts; in the meantime, let us know what you think!
Citrix announced its acquisition of ShareFile back in October and has recently offered partners a free one-year trial with 20 “employee” accounts and 20GB of space. I’ve been kicking the tires on ShareFile for the past few weeks and wanted to share my thoughts.
What is it?
If you’re familiar with solutions like Dropbox and SugarSync then you already have a pretty good idea of what ShareFile is – an online file sync and collaboration tool. Unlike these other solutions, however, ShareFile is designed to be used by businesses. ShareFile provides SSL-encrypted storage, lets you add users and assign permissions to particular folders, and allows you to add additional administrators to help manage your data and users. You’ll get configurable email alerts on file uploads and downloads and can even control the amount of bandwidth allotted to particular users in a given month. ShareFile provides you with a customizable web portal (yourdomain.sharefile.com) that allows you to brand the website with your logos and corporate colors. This web portal can be used as an alternative to FTP and even gives you the ability to search the site for particular files. Other items of note:
-Hosting: ShareFile is hosted almost entirely out of Amazon AWS and its services are spread across all 5 major Amazon datacenters.
-Desktop Widget: Basically a fat client built on Adobe AIR that allows you to upload and download files to ShareFile without having to launch a web browser.
-Outlook Plugin: Allows you to link to existing ShareFile documents and upload and send new files to ShareFile. Administrators can even set policies that dictate that files over a certain size are automatically uploaded to ShareFile instead of attached using the corporate email system. I’ve found this to be the most used ShareFile feature for me.
-Desktop Sync: This gives you the ability to select folders on your PC to be automatically synced to ShareFile. There is an “Enterprise Sync” as well that’s designed for server use and allows for sync jobs to be created under multiple user accounts.
-ShareFile Mobile: A mobile website designed to be accessed from a tablet or smartphone. In addition, there’s a ShareFile app for iOS, Android, Blackberry and Windows Phone.
ShareFile has more features that you can read about on their website.
What does this mean for the enterprise?
Citrix is incorporating ShareFile into what it’s calling the “Follow-Me-Data Fabric”, which is comprised of ShareFile, Follow-Me-Data and GoToMeeting with Workspaces. Citrix has long had the goal of allowing you to access your applications anywhere, from any device and they’re now attempting to extend this philosophy to your data as well.
In all honesty, it was initially hard for me to see this adding much value to the Citrix portfolio. After all, don’t XenApp, XenDesktop, NetScaler, et al. already give me the ability to access my applications and data wherever I am? My virtual desktop is accessible from almost any device already and all the data I work on is either saved on that desktop or accessible on corporate network shares from that desktop. As I began to think about the future of IT, though, and the shift to public and hybrid clouds, the strategy here became much more obvious. While almost all the data I work on now is stored in one centralized location, the push to public and hybrid clouds will create a dispersion of corporate data across different cloud providers. Corporations may be utilizing CloudCompany-A, B and C for SaaS applications and CloudCompany-D for portions of their infrastructure. Even if you’ve only chosen one cloud provider, most companies aren’t ready to dump all of their data and applications into the cloud yet and may not ever be. This will obviously create a de-centralization of data that could get messy if not managed properly, and that’s where ShareFile comes in.
Working in conjunction with StoreFront and Follow-Me-Data, ShareFile would give you the ability to centralize all the data stored in any private and public cloud infrastructures you’ve invested in. You’d have StoreFront on the front-end tying your internal and SaaS applications into one unified interface and Follow-Me-Data and ShareFile on the back-end allowing you to access dispersed data in a centralized fashion. That, at least, is the vision. The key here will be integration – something Citrix has historically not done very well (e.g. VDI-in-a-Box, management consoles, etc.). To the user, ShareFile needs to go almost unnoticed and be seamlessly integrated into the Citrix product stack so that it does not feel like a separate technology. Doing this will make it natural for the user to store their public and private cloud data and access it from anywhere. If it’s seamlessly integrated into the products the user is already utilizing for their job, then I think it will go a long way toward securing corporate data. After all, why would I put my corporate data on Dropbox or SugarSync when it’s so much easier to get this same functionality with tools that are already integrated with the work I do? And that, too, will be a key factor in how successful this will be – corporations can’t lock this down to such a degree that it’s not easy for users to work with, or else it will drive them to more “open” solutions.
In the end, I think this was a smart move whose success will ultimately depend on the ever-increasing push toward the public cloud and Citrix’s ability to integrate this seamlessly with their existing products. It will also be interesting to see how Dropbox and other similar companies respond to this. Whether they want to define themselves as competitors or not, the bottom line is that there is currently a great deal of corporate data on Dropbox and SugarSync, and a well-integrated ShareFile means less data on these types of solutions. Whether they add more “business-friendly” features to their products or are content with “personal” data remains to be seen. And if they do add more features that allow companies more control of the data that is stored on them, how will Citrix respond? Citrix has generally been very receptive to running their services on multiple platforms (e.g. XenDesktop on ESXi/Hyper-V), so they might look to provide integration with these other online file shares from Citrix Receiver as well. And will this service always be hosted in the public cloud, or will there be an option in the future to host a ShareFile-like service for your company within your own datacenter?
There’s a lot that remains to be seen but overall, this appears to be a “win” for Citrix and a trend that other companies have already adopted as well. End-user computing was a huge component at VMworld and Synergy this past year and I anticipate and look forward to even more rapid development in this space!
There’ve been some interesting discussions about VDI recently, and many of these discussions share a common theme – that VDI is not all that it was made out to be and that there are better ways to deliver desktops to your users “anytime, anywhere”. This line of thinking has existed for some time but has recently come into vogue after years of pent-up and cynical disillusionment resulting from overhyped VDI promises and underwhelming results. Understanding where this hype came from is instructive in learning the mindset of most organizations starting VDI implementations and why many of these implementations haven’t lived up to the promises and have failed from a technical and user acceptance standpoint.
I’ve lost track of the number of people I’ve talked with over the years who want to pursue VDI with the following justification: “We had so much success with server virtualization that virtualizing our desktops just made sense”. In fact, hearing this statement just this past week is what prompted this post. At a cursory glance, this reasoning does have a common-sense appeal; however, the devil is in the details, and it is precisely this reasoning that has led so many people astray in regards to VDI. Why might this be? Because server virtualization is a freak of nature! Very few things in life become cheaper when more features are added (as other commentators have noted). In moving from physical to virtual with a server infrastructure, an organization not only saved money with server consolidation and power and cooling but also added some extremely valuable features that just weren’t available to them before. Things like server mobility, easier disaster recovery, rapid server deployment, etc. all added tremendous value to server virtualization on top of the financial benefits of moving to such a solution. With the success of server virtualization in hand, many organizations rushed headlong into VDI deployments thinking they would get the same benefits at the same low cost with the same level of ease because hey, desktops are easy, right? Sensing the excitement building around this “next stage of virtualization”, vendor marketing departments went into overdrive hyping VDI and touting its technical benefits and cost effectiveness just as they did with server virtualization. At that time, the unique workload characteristics of desktops, and how those would translate to a virtual, shared-image environment, had not been taken into account by those already experienced with server virtualization (myself included).
In reality, virtualizing your desktops is not “easy”, and the success of a server virtualization project by no means guarantees success in setting up a virtual desktop infrastructure. The technical differences between these two technology domains are significant and important to note. For instance, in most server environments, you’ll have large numbers of idle servers at any given time. Desktops, however, are busy all the time with user activity. Adding to that, the user “lives” in the virtual desktop, so any lag in performance, any delay in mouse clicks, will be immediately noticeable. Even though you’ve virtualized your desktops, the user still expects the same level of graphical performance as their local PC, whether they’re viewing work-related material or not. And how many of your servers average 20-30 IOPS on a continual basis? Many of these problems simply didn’t exist with server virtualization. Servers remain online almost all the time; with VDI, however, desktops are rebooted on a regular basis, which can lead to boot storms. With server virtualization you installed one application per server; with VDI, users want to install their own applications (UIA) and you have a whole assortment of “long tail” applications that you have to develop a strategy around. Adding to these VDI complexities are profile and “user” management, printing and more.
Server virtualization is an anomaly and the prevailing opinion of desktops as a lesser form of server has lulled the masses into thinking VDI would be a piece of cake. The bottom line is, VDI is not “easy” and this line of thinking has led to many failed VDI implementations. While the technical challenges listed above and previously on this website and others are very real, these would never have “surprised” anyone or even been a problem if planned for and designed carefully.
Paradoxically, while the conflation of desktop and server virtualization and the “desktops are easy” mentality has contributed to so many failed VDI implementations, I am convinced that these failures simply wouldn’t have happened if VDI had been implemented in a similar fashion as server virtualization. No one started a server virtualization project with a “virtualize everything” mentality. Server workloads were carefully analyzed to determine the best candidates for virtualization. After these “low hanging fruit” servers were identified, IT departments slowly worked their way up to more resource intensive servers with unique workload characteristics. In the end, some servers remained physical and this was fine because it was anticipated as part of the overall strategy. A similar strategy should be taken with VDI. Develop a comprehensive desktop and application virtualization strategy. Create application and user catalogs to determine where your users and applications fit into this strategy. Then, start with your “low hanging fruit”, your call centers or task workers and slowly work your way up to “knowledge workers” with more unique and demanding requirements. Ultimately, you may end up with some users who remain with physical desktops. Careful planning and a realistic level-headedness will result in a successful VDI implementation. Knowing what VDI “is” and “is not” is essential in determining your end goal for such a project and setting expectations about what VDI will do for your organization.
Addendum: While doing some research after writing the above post, I ran across a presentation by Ron Oglesby where he raises similar points. As always, it’s a great presentation. Here is the link, enjoy!