APR 18, 2014
Turbo Charging your IOPS with the new PVS Cache in RAM with Disk Overflow Feature! – Part One
Miguel Contreras
With PVS 7.1 and later you may have noticed a new caching option called "Cache in Device RAM with Hard Disk Overflow". We actually implemented this new feature to address some application compatibility issues with Microsoft ASLR and PVS. You can check out CTX139627 for more details.
One of the most amazing side effects of this new feature is that it can give a significant performance boost: it drastically increases the IOPS throughput available to PVS targets while reducing, and sometimes eliminating, the I/O that ever hits physical storage!
My colleague Dan Allen and I have recently been conducting some testing of this new feature in both lab and real customer environments, and we wanted to share some of our results along with some "new" recommended practices for PVS; we encourage everyone to start taking advantage of this new cache type. This is a two-part blog series: in this first part I recap the various write cache options for PVS and discuss some of the new results we are seeing with the new cache type. In Part 2 of the series, Dan Allen dives deeper into the performance results and provides some guidelines for properly sizing the RAM for both XenApp and VDI workloads.
PVS Write Cache Types
When using Provisioning Services, one of the most important factors in achieving optimal performance is the write cache type. I am sure that most of you are already familiar with the various types, but I will review them here as a refresher!
1. Cache on server: this write-cache type places the write-cache on the PVS server. By
default it is placed in the same location as the vDisk, but a different path can be
specified.
This write-cache type provides poor performance compared to the other write cache types, limits high availability configurations, and should almost never be used in a virtual environment. This option was typically only used when streaming to physical endpoints or diskless thin clients.
2. Cache on device's hard drive: this write-cache type creates a write-cache file (.vdiskcache) on the target device's hard drive. It requires an NTFS-formatted hard drive on the target device to be able to create this file on the disk. This cache type has been our leading Citrix best practice to date, and most of our deployments use this write-cache type as it provides the best balance between cost and performance. To achieve the highest throughput to the write-cache drive, Intermediate Buffering should almost always be used (use caution with target devices hosted on Hyper-V, where we have occasionally seen adverse effects). Intermediate Buffering allows writes to use the underlying buffers of the disk/disk driver before committing them to disk, allowing the PVS disk driver to continue working rather than waiting for the write to disk to finish, thereby increasing performance. By default this feature is disabled. For more information on Intermediate Buffering, including how to enable it, please refer to CTX126042.
3. Cache in device RAM: this write-cache type reserves a portion of the target device’s
memory for the write cache, meaning that whatever portion of RAM is used for
write-cache is not available to the operating system. The amount of memory
reserved for write-cache is specified in the vDisk properties. This option provides
better throughput, better response times, and higher IOPS for write-cache than the
previous types because it writes to memory rather than disk.
There are some challenges with this option, though. First of all, there is no overflow, so once the write cache is filled the device will become unusable (it might even blue screen). Therefore, there has to be plenty of RAM available for the target devices to be able to operate without running out of write-cache space, which can be expensive, or simply not possible because of memory constraints on the physical host. Second, if there is a need to store persistent settings or data such as event logs, a hard drive will still be required on each target. On the flip side, this hard disk will not need to be as large or sustain as many IOPS as with "Cache on device's hard drive", since the write cache will not be on it. We have typically seen customers successfully use this feature when virtualizing XenApp, since you do not run as many XenApp VMs on a physical host (compared to VDI), so oftentimes there is enough memory to make this feature viable for XenApp.
4. Cache on device RAM with overflow on hard disk: this is a new write-cache type
and is basically a combination of the previous two, but with a different underlying
architecture. It provides a write-cache buffer in memory and the overflow is written
to disk. However, the way that memory and disk are used is different from "Cache in device RAM" and "Cache on device's hard drive" respectively. This is how it works:
Just as before, the buffer size is specified in the vDisk properties. By default, the
buffer is set to 64 MB but can be set to any size.
Rather than reserving a portion of the device's memory, the cache is mapped to non-paged pool memory and used as needed, and the memory is given back to the system if the system needs it.
On the hard drive, instead of using the old “.vdiskcache” file, a VHDX (vdiskdif.vhdx)
file is used.
On startup, the VHDX file is created and is 4 MB due to the VHDX header.
Data is written to the buffer in memory first. Once the buffer is full, “stale” data is
flushed to disk.
Data is written to the VHDX in 2 MB blocks, instead of 4 KB blocks as before. This
will cause the write-cache file to grow faster in the beginning than the old
“.vdiskcache” cache file. However, over time, the total space consumed by this new
format will not be significantly larger as data will eventually back fill into the 2 MB
blocks that are reserved.
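To make that back-fill behavior concrete, here is a minimal sketch (plain Python, not any Citrix tooling) that models why a cache file allocated in 2 MB blocks grows faster initially than one allocated in 4 KB units, even though repeated writes eventually back-fill blocks that are already reserved. The write pattern, address-space size, and write count are illustrative assumptions, not measured values.

```python
# Illustrative model of cache-file growth: 4 KB allocation units
# (.vdiskcache-style) vs. 2 MB blocks plus a 4 MB header (vdiskdif.vhdx-style).
# All workload parameters below are assumptions for illustration only.
import random

KB, MB = 1024, 1024 * 1024

def allocated_bytes(write_offsets, block_size, header=0):
    """Bytes reserved after a series of writes: a block is reserved the first
    time any write lands in it; later writes to the same block back-fill it
    and consume no additional space."""
    touched_blocks = {offset // block_size for offset in write_offsets}
    return header + len(touched_blocks) * block_size

random.seed(1)
address_space = 2 * 1024 * MB  # assume 2 GB of writable vDisk addresses
writes = [random.randrange(0, address_space, 4 * KB) for _ in range(20_000)]

old_style = allocated_bytes(writes, 4 * KB)
new_style = allocated_bytes(writes, 2 * MB, header=4 * MB)
print(f"4 KB units:  {old_style / MB:7.1f} MB reserved")
print(f"2 MB blocks: {new_style / MB:7.1f} MB reserved (grows faster early on)")
```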
A few things to note about this write-cache type:
The write-cache VHDX file will grow larger than the “.vdiskcache” file format. This is
due to the VHDX format using 2 MB blocks vs. 4 KB blocks. Over time, the size of
the VHDX file will normalize and become closer in size to what the “.vdiskcache”
would be, as data will eventually back fill into the 2 MB blocks that are reserved.
The point at which the size normalizes varies by environment depending on the
workload.
Intermediate buffering is not supported with this write-cache type (this cache type
is actually designed to replace it).
System cache and vDisk RAM cache work in conjunction. What I mean by this is that if block data has been moved from the PVS RAM cache into the disk overflow file but is still available in the Windows system cache, it will be re-read from memory rather than disk.
This write-cache type is only available for Windows 7/2008 R2 and later.
This cache type addresses interoperability issues between Microsoft ASLR and the Provisioning Services write cache, where we have seen application and printer instability resulting in undesirable behavior. Therefore, this cache type will provide the best stability.
A PVS 7.1 hotfix is required for this write-cache type to work properly: 32-bit and 64-bit.
New PVS RAM Cache Results
Now, a review of the newest cache type wouldn't be complete if we didn't share some results from our testing. I will summarize some of the impressive new results we are seeing, and in Part 2 of the series, Dan Allen will dive much deeper into the results and provide sizing considerations.
Test Environment 1
Physical Hardware
Server CPU: 2 x 8-core Intel CPUs @ 2.20 GHz
Server RAM: 256 GB
Hypervisor: vSphere 5.5
Storage: EMC VNX 7500. Flash in tier 1 and 15K SAS RAID 1 in tier 2. (Most of our IOPS
stayed in tier 1)
XenApp Virtual Machine
vServer CPU: 4 vCPU
vServer RAM: 30 GB
vServer OS: Windows 2012
vServer Disk: 30 GB Disk (E: disk for PVS write cache on tier 1 storage)
We ran 5 tests using IOMETER against the XenApp VM so that we could compare the
various write cache types. The 5 tests are detailed below:
1. E: Drive Test: This IOMETER test used an 8 GB file configured to write directly to the write-cache disk (E:), bypassing PVS. This test allowed us to measure the true underlying IOPS provided by the SAN.
2. New PVS RAM Cache with disk Overflow: We configured the new RAM cache to use
up to 10 GB RAM and ran the IOMETER test with an 8 GB file so that all I/O would
remain in the RAM.
3. New PVS RAM Cache with disk Overflow: We configured the new RAM cache to use
up to 10 GB RAM and ran the IOMETER test with a 15 GB file so that at least 5 GB of
I/O would overflow to disk.
4. Old PVS Cache in Device RAM: We used the old PVS Cache in RAM feature and configured it for 10 GB of RAM. We ran the IOMETER test with an 8 GB file so that the RAM cache would not run out (which would make the VM crash!).
5. PVS Cache on Device Hard Disk: We configured PVS to cache on the device hard disk and ran the IOMETER test with an 8 GB file.
With the exception of the size of the IOMETER test file as detailed above, all of the
IOMETER tests were run with the following parameters:
4 workers configured
Queue depth set to 16 for each worker
4 KB block size
80% Writes / 20% Reads
90% Random IO / 10% Sequential IO
30 minute test duration
Test #   IOPS     Read IOPS   Write IOPS   MBps     Read MBps   Write MBps   Avg Response Time (ms)
1        18,412    3,679      14,733        71.92    14.37       57.55       0.89
2        71,299   14,258      57,041       278.51    55.69      222.81       0.86
3*       68,938   13,789      55,149       269.28    53.86      215.42       0.92
4        14,498    2,899      11,599        56.63    11.32       45.30       1.10
5         8,364    1,672       6,692        32.67     6.53       26.14       1.91
* In test scenario 3, when the write cache first started to overflow to disk, the IOPS count dropped to 31,405 and the average response time was slightly over 2 ms for a brief period. As the test progressed, the IOPS count gradually climbed back up and the response time decreased. This was due to the PVS driver performing an initial flush of large amounts of data to the disk to free enough room in RAM that most of the data could remain in RAM. Even during this initial overflow to disk, the total IOPS was still nearly twice what the underlying disk could physically provide!
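As a quick sanity check on the table above, IOPS and throughput are tied together by the 4 KB block size used in these tests: MBps here is IOPS x 4 KiB expressed in MiB/s. A few lines of Python (illustrative only) reproduce the MBps column from the IOPS column.

```python
# Sanity check: with a 4 KB block size, throughput (MiB/s) = IOPS * 4 / 1024.
measured_iops = {1: 18_412, 2: 71_299, 3: 68_938, 4: 14_498, 5: 8_364}

for test, iops in measured_iops.items():
    mib_per_s = iops * 4 / 1024  # 4 KiB per I/O
    print(f"Test {test}: {iops:>6,} IOPS -> {mib_per_s:6.2f} MiB/s")
# Test 1: 18,412 IOPS -> 71.92 MiB/s, matching the 71.92 MBps in the table.
```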
As you can see from the numbers above, we are getting some amazing results from our new RAM Cache feature. In our test, the tier-one storage was able to provide a raw IOPS capability of a little over 18K IOPS, which is pretty darn good! However, when using our new RAM Cache with overflow feature, we were able to get over 71K IOPS when staying in RAM, and we maintained nearly 69K IOPS even with a 15 GB workload and only a 10 GB RAM buffer. There are also a few other very interesting things we learned from this test:
The old PVS Cache in RAM feature could not push above 14K IOPS. This is most likely due to the old driver architecture used by that feature. The new Cache in RAM with disk overflow is actually more than 4 times faster than the old RAM cache!
The PVS "Cache on device hard disk" option, which uses the old .vdiskcache type, could only drive about 50% of the IOPS that the actual flash SAN storage could provide. Again, this is due to limitations in the older driver architecture.
It is quite obvious that the new Cache in Device RAM with Hard Disk Overflow option is
definitely the best option from a performance perspective, and we encourage everyone
to take advantage of it. However, it is critical that proper testing be done in a test
system/environment in order to understand the storage requirements for the write
cache with your particular configuration. Chances are that you will need more disk
space, but exactly how much will depend on your particular workload and how large
your RAM buffer is (the larger the RAM buffer, the less disk space you will need), so
make sure you test it thoroughly before making the switch.
Check out Part 2 of this blog series for more information on various test configurations
and sizing considerations.
Another thing you might want to consider is whether your write-cache storage should
be thick or thin provisioned. For information on this topic please refer to my colleague
Nick Rintalan’s recent post Clearing the Air (Part 2) – Thick or Thin Provisioned Write
Cache.
Finally, I would like to thank my colleagues Chris Straight, Dan Allen, and Nick Rintalan
for their input and participation during the gathering of this data.
Happy provisioning!
Migs
Tagged under: Application Virtualization, Virtualization, Desktop Management, XenApp, Desktop Virtualization (VDI), XenDesktop
Miguel Contreras
Miguel Contreras is an Architect in the Americas Consulting organization based out of Fort
Lauderdale, FL. He has been with Citrix Consulting for the better part of a decade and has
worked on implementing Citrix solutions for some of the largest enterprises in the geo.
66 Comments
Jim • 5 months ago
Hi,
I'm new to XenDesktop and looking for some basic information on sizing for PVS cache in RAM with disk overflow. I take it the cache to RAM uses the individual XenDesktop's allocated RAM, and the overflow also stays on the XD-allocated disk.
I have created a XD pool with machines and assigned the following:
RAM: 4 GB
OS partition: 50 GB (C: drive)
Overflow disk: 15 GB (D: drive)
Is this a best practice and a good use of resources? I assume 4 GB of allocated RAM is insufficient for OS performance and caching.
Miguel Contreras > Jim • 5 months ago
Hi Jim,
That should be more than enough from a PVS perspective. The RAM buffer for PVS is configured in the vDisk properties, and you should be setting that to 256 - 512 MB for your VDI machines. What that means is that PVS will use up to 256 - 512 MB of RAM for write cache and overflow to disk once it requires more than that. Note that all write cache writes will be written to RAM first and then moved to disk, so when the RAM buffer is full the older data is flushed out to disk to make room for the new.
Check out this blog to get familiar with write cache size: http://blogs.citrix.com/2015/0...
Migs
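For anyone turning that 256 - 512 MB per-VM guidance into a host-level number, a trivial back-of-the-envelope helper follows (plain Python; the per-host VM density is an assumed example, not a recommendation).

```python
# Back-of-envelope: worst-case extra host RAM consumed by per-VM PVS RAM
# cache buffers. The 256-512 MB range comes from the comment above; the VM
# density is an assumption for illustration.
vms_per_host = 100                      # assumed VDI density per host
buffer_low_mb, buffer_high_mb = 256, 512

low_gb = vms_per_host * buffer_low_mb / 1024
high_gb = vms_per_host * buffer_high_mb / 1024
print(f"{vms_per_host} VMs need roughly {low_gb:.0f}-{high_gb:.0f} GB of "
      f"host RAM for write-cache buffers in the worst case")
```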
Vinny • 6 months ago
Miguel, thanks for the reply. So I am running PVS 7.6 and the updated target device software. I assume I am looking on the target device for the below reg key to determine if cache overflow is enabled/working. I don't even have the Parameters key; what am I missing? I have HKLM\System\CurrentControlSet\services\bnistack\ with ENUM and PVSAGENT, but not:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BNIStack\Parameters
Name: WcRamConfiguration
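For what it's worth, a quick way to script the check Vinny describes is to query the key directly; below is a minimal sketch (standard-library Python, run on the target device). The key path and value name come from the comment above; treating a missing key as "feature not active" would be an assumption, not official guidance.

```python
# Minimal sketch: look for the BNIStack Parameters key and the
# WcRamConfiguration value on a PVS target device. This only reports what
# is (or is not) there; interpreting a missing key is left to the reader.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\services\BNIStack\Parameters"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, value_type = winreg.QueryValueEx(key, "WcRamConfiguration")
        print(f"WcRamConfiguration = {value} (registry type {value_type})")
except FileNotFoundError:
    print("Parameters key or WcRamConfiguration value not found.")
```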
Vinny • 6 months ago
Miguel,
A couple of questions:
1) Does having intermediate buffering enabled disable/affect cache overflow, or is it just ignored?
2) Is there anything special to know in XD 7.6 about the cache overflow feature? Hotfixes that need to be installed, bugs to watch out for?
Thanks,
Vinny
Miguel Contreras > Vinny • 6 months ago
Hey Vinny,
1) Intermediate buffering is not officially supported for use with write cache on RAM with overflow to disk. Based on some testing we've done, it is not ignored, but I personally haven't spent much time testing it since it is not supported.
2) Nothing special and no hotfixes to worry about for this feature to work, as long as you're using PVS 7.1 or newer (7.6 recommended).
Skyrocket • 7 months ago
http://support.citrix.com/prod...
Note: When creating a machine template for SCVMM 2012, ensure that it has a similar hard disk drive structure and that it can boot from a vDisk in Private Image mode. Examples:
• To use Boot Device Manager (BDM) to boot a VM with write cache, create a VM with 2 hard disk drives.
In the above article it says the template should have two drives attached, one for BDM and another for write cache, but at the time of provisioning these are discarded, as the article also says. We are using BDM and WC, so do we need to create two HDDs in the VM template: one 8 MB BDM HDD and one 6 GB WC HDD? As per your statement replied to one of the members here: "One thing to note, though, is that if you are using the XD Setup Wizard to provision your machines, this may not work. Long story short, the XDSW ignores the drive on the template and creates a new one." So is the statement mentioned in the Citrix article outdated or wrong? Because that statement makes it confusing.
Miguel Contreras > Skyrocket • 7 months ago
Thanks for pointing out the details in the documentation.
Yes, the XenDesktop Setup Wizard will ignore the drives on the template machine. This, however, is not the case with the Streamed VM Wizard. What I normally do is create a machine using the wizard that I dedicate to vDisk updates, so once I have a vDisk, before setting it to standard mode, I boot it from that update VM with the correct HDDs and configuration to make sure the image is updated appropriately. Moving forward, I continue to use that same VM for vDisk updates. In deployments where I use the XDSW to create VMs with a BDM partition, I create a BDM ISO and mount it to the master VM from which I need to capture my vDisk, and I don't add any additional drives to it. As I mentioned, I take care of all that configuration and staging after I have a vDisk, with a dedicated update machine.
Also, for the sake of consistency, I clone that machine and convert the clone to a template, and use that moving forward when creating machines with the XDSW. Yes, the drives are ignored by the wizard, but everything else carries over, so that minimizes any chance of inconsistencies.
Migs
Anand Shah • 9 months ago
"drastically increase your IOPS for PVS targets while reducing and sometimes eliminating the IOPS from ever hitting physical storage!!!"
By any chance do you mean drastically decrease your IOPS?
Miguel Contreras > Anand Shah • 9 months ago
Hi Anand,
My apologies, that statement is a bit confusing. What I meant to say is that
it increases the throughput while reducing IOPS.
Neil Burton • a year ago
Nice work Miguel. In my testing I've seen a XenApp 6.5 workload on Hyper-V 2012 R2, with write cache on a simple 15K RAID 1 array pulling 600 IOPS (random write biased), increase to no less than 192,000 IOPS when caching entirely in RAM with PVS 7.6. Very impressive. Driving this load consumes approx 60% of 4 vCPUs on Ivy Bridge cores. I've already used this to demonstrate the performance uplift available to a customer using PVS 6.1, and the results speak for themselves.
Lennart • a year ago
Hi Miguel,
Nice read!
I was wondering what happens if the flash memory on a server crashes (an SSD fails). If these are in a RAID setup, could the session still be restored, since the "cache" file would maybe be partly in RAM?
thnx!
John • a year ago
Hi Miguel,
How would I monitor how much memory is being used for RAM cache with the RAM cache with overflow to disk methodology? With RAM cache, I can monitor/see actual memory usage from the virtual disk status at the taskbar, but it only shows disk usage, not actual memory usage, when using RAM cache with overflow to disk.
Mark • a year ago
When I configured "Cache on device RAM with overflow on hard disk", the hard drive was the D: drive.
Is the drive letter always "D"? Or can I configure another drive?
Miguel Contreras > Mark • a year ago
That depends on how you're configuring your VMs.
First, I recommend you create a VM dedicated to vDisk updates, which needs to be created from the same template as the rest of your VMs and should be configured the same way.
Once you have that VM, boot up the vDisk in private mode, change the drive letter of the write cache drive to whatever letter you want, shut down the machine, and switch the vDisk to standard mode. When you boot your VMs, they should show the write cache drive with the letter you configured while in private mode.
One thing to note, though, is that if you are using the XD Setup Wizard to provision your machines, this may not work. Long story short, the XDSW ignores the drive on the template and creates a new one. I recommend using the Streamed VM Wizard to create the VMs and then manually adding them to the appropriate machine catalog.
john • a year ago
Just wondering whether I should fit an SSD drive or just a cheap SAS drive when setting up RAM cache with overflow disk? Does it matter what type of disk? I guess once the RAM overflows to disk, there will be a performance gain if the disk is SSD, correct?
Miguel Contreras > john • a year ago
Hi John,
There is no need to use expensive SSDs; the SAS drives will do just fine. When the data is moved from RAM to disk, it is done in sequential 2 MB block writes, rather than random 4K as before, so that extra horsepower is no longer needed.
Nick Rintalan published a post that has some good info you should read and also has links to other posts with lots of detail: http://blogs.citrix.com/2014/0...
Romans Makarovs • a year ago
Guys, if I missed this somewhere: still no HA for VMs with the "Cache in RAM with Disk Overflow" feature?
Miguel Contreras > Romans Makarovs • a year ago
Hi Romans. What exactly do you mean by no HA?
Romans Makarovs > Miguel Contreras • a year ago
Sorry Miguel, I wanted to ask: is it possible to live migrate XenApp VMs (RAM + Disk Overflow) between XS hosts in an XS resource pool?
Miguel Contreras > Romans Makarovs • a year ago
Gotcha. I haven't tested this on XS, to be honest. I have tested vMotion in vSphere 5.5, though. During the live migration the machines are still functional, but they get very sluggish until the migration is done.
I wouldn't recommend migrating these machines anyway, unless it is absolutely necessary.
Thomas • a year ago
How can I monitor the usage of the RAM cache in a "RAM with overflow to disk" environment?
Miguel Contreras > Thomas • a year ago
Hi Thomas. You can monitor the size of non-paged pool memory. The non-paged pool utilization for our servers is normally 170 - 200 MB, so anything beyond that would be PVS RAM cache.
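If you would rather script this than watch Task Manager, the Windows GetPerformanceInfo API exposes the kernel non-paged pool size; here is a minimal sketch in Python using ctypes. The 170 - 200 MB baseline to subtract is taken from the comment above and will differ per environment.

```python
# Minimal sketch: read kernel non-paged pool size via GetPerformanceInfo
# (psapi.dll). Subtracting a ~170-200 MB baseline to estimate PVS RAM cache
# usage is an assumption from the comment above; measure your own baseline.
import ctypes
from ctypes import wintypes

class PERFORMANCE_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD),
        ("CommitTotal", ctypes.c_size_t),
        ("CommitLimit", ctypes.c_size_t),
        ("CommitPeak", ctypes.c_size_t),
        ("PhysicalTotal", ctypes.c_size_t),
        ("PhysicalAvailable", ctypes.c_size_t),
        ("SystemCache", ctypes.c_size_t),
        ("KernelTotal", ctypes.c_size_t),
        ("KernelPaged", ctypes.c_size_t),
        ("KernelNonpaged", ctypes.c_size_t),    # in pages
        ("PageSize", ctypes.c_size_t),          # bytes per page
        ("HandleCount", wintypes.DWORD),
        ("ProcessCount", wintypes.DWORD),
        ("ThreadCount", wintypes.DWORD),
    ]

perf = PERFORMANCE_INFORMATION()
perf.cb = ctypes.sizeof(perf)
if not ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(perf), perf.cb):
    raise ctypes.WinError()

nonpaged_mb = perf.KernelNonpaged * perf.PageSize / (1024 * 1024)
print(f"Kernel non-paged pool: {nonpaged_mb:.0f} MB")
```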
Steven Atkinson • a year ago
Be careful with this setting if you are using the SCCM agent: the SMSCFG.ini file is different every time if you use cache in RAM (as it is discarded from memory), and this causes issues between the agent and the SCCM database. You should configure cache on disk so that the .ini file is retained following a reboot.
Miguel Contreras > Steven Atkinson • a year ago
Hi Steven. This would be the case with any non-persistent write cache
type, whether it is cache in RAM or cache on disk. Also, this is something that affects any agents that depend on machine-specific files or registry entries, such as antivirus, SCCM, etc. To overcome this we typically use shutdown and startup scripts to synchronize machine-specific entries/files and restore them upon boot.
Steven Atkinson > Miguel Contreras • a year ago
Hi Miguel, only just saw your response :-) The weird thing is that when we were using cache in RAM, the SCCM agent was registering under different names after each reboot. When we went back to cache on DISK it started to work again, i.e. the SCCM agent would always register with the SCCM console under the same name, just like a regular client. I too didn't expect this, as I thought that either of the caching methods discards the cache at reboot. However, after I read the following from this link (http://support.citrix.com/arti...), I am not too sure: "When the device is read, this cached data is checked to determine the presence of the cache file. If the cache file exists, the data is read from the write cache file. If the cache file is not present, the data is then read from the original vDisk file." Because the cache is on the local disk and not discarded as it is with RAM, the SCCM client checks the SMSCFG.ini file, and because PVS finds a copy of the file in cache it uses this, i.e. it uses the same INI file each time, and hence the SCCM client works. What do you think?
Michel Lajoie • a year ago
Thanks for sharing; this is great news for PVS. Let's hope integration with SCCM 2012 will be confirmed.
Stefan • 2 years ago
Great article Miguel! A few questions regarding this promising new feature:
Situation: Windows 7 32-bit, machine is assigned 4096 MB, RAM cache is set to 512 MB, local disk of 6 GB.
1. When taking a look in the Windows Resource Monitor, it mentions that 1024 MB is "Reserved for hardware". Has this something to do with this new feature, and if so, why is it reserving 1024 MB when the RAM cache is set to 512 MB?
2. How about the page file? Can we hand this over to PVS (and leave it on C:\) or should we still redirect it to the local disk by directing it to D:\?
3. When over 2 GB of RAM cache is assigned, the machine seems to freeze when doing copy actions. When we go under 1 GB everything is OK. Could this be related to the OS being 32-bit?
4. The mechanism is reserving non-paged pool memory. So is it true that you cannot see if and how much is being used?
Miguel Contreras > Stefan • 2 years ago
Hey Stefan,
1. You need a 64-bit OS if you want to use more than 3 GB of RAM. Otherwise, it will do exactly what you are seeing.
2. You should still redirect your pagefile to a different drive as part of your image config.
3. This is most likely a result of using a 32-bit OS.
4. It actually doesn't reserve the memory. It will use up to whatever amount you specified, and you can actually see how much non-paged pool memory is being used in Task Manager. If for any reason the system needs more RAM, it will reduce the buffer size and give that memory back to the system.
Sunshine Baines • 2 years ago
OK, thanks. I just had one do it again. I was monitoring the vdiskdif.vhdx file, and it is jumping back and forth between 7 MB and 35 MB constantly.
Sknox • 2 years ago
Thanks for the reply. We are unable to catch it by the time the server is unresponsive. We are not using AV at this time. We store event logs, spooler, EdgeSight system monitoring, and the page file. We have plenty of space on the write-cache disk (an average of 18 GB free). Intermediate buffering doesn't apply to "Cache to device RAM with overflow to hard disk"?
Miguel Contreras > Sknox • 2 years ago
I'm afraid you're going to have to actively monitor one of your servers so that you can determine which process is causing the CPU spikes, as well as your max write-cache utilization, before the server becomes unresponsive.
And yes, intermediate buffering does not apply to this cache type. This cache type was meant to replace "Cache on device's hard drive". The RAM buffer takes the place of Intermediate Buffering.
sknox > Miguel Contreras • 2 years ago
One more question: how can we properly measure how much memory to use? If I remove the two registry keys, will it automatically set back to default? We seemed to experience the issues once it overflowed to disk. Thanks for the help, and when should Part 2 be out? Great article.
Miguel Contreras > sknox • 8 months ago
Sorry for the delayed response. In case you still need
assistance: http://blogs.citrix.com/2015/0...
SKnox • 2 years ago
We tried to use Cache to device RAM with overflow to hard disk, but it was without success. The CPU spiked, the Citrix servers stopped accepting connections (IMA crashed), and the server was unresponsive after running for 2 days. We had to revert back to cache to device hard disk. We would really like to be able to use Cache to device RAM with overflow to hard disk, because it addresses ASLR. Below is our setup:
1. vSphere 5.1
2. Provisioning Server 7.1
3. Citrix server running XenApp 6.5
Windows 2008 R2
4 vProcs
8 GB of memory
1 GB of memory allocated to Cache to device RAM
25 GB write cache disk
Latest target hotfix
WcHDNoIntermediateBuffering set to 2
WcRamConfiguration set to 1
Please let me know if you need any more info, thanks.
Miguel Contreras > SKnox • 2 years ago
A few things:
- Have you checked which process is responsible for the high CPU utilization?
- Have you applied the necessary antivirus exclusions? If not, check out: http://blogs.citrix.com/2012/1... AND http://blogs.citrix.com/2013/0...
- Aside from write cache, what else is being stored on the write-cache disk? Have you checked the cache utilization to make sure you are not running out of disk space?
- There is no need to set the two registry values you listed, since intermediate buffering doesn't apply to this write-cache type, and, if you have the latest PVS target hotfix, the last one is taken care of.
Albert Tedjadiputra • 2 years ago
Hi Miguel,
Thanks for sharing the results. Is there any similar performance best practice for MCS-type deployments?
Miguel Contreras > Albert Tedjadiputra • 2 years ago
Hey Albert,
One of our colleagues is working on MCS testing. We should be seeing
something soon.
Jeff • 2 years ago
Is it possible for the Disk overflow to ever fill up and still crash the system or do
objects simply get overwritten on the disk as they flush from RAM?
Miguel Contreras > Jeff • 2 years ago
If you run out of write cache space, the machine will crash. However, unlike
the other cache types, with this new cache type if you delete files the
cache space is reclaimed and the write-cache decreases in size, so
running out of cache space should be rare. So in the case of a XenApp
server, for example, as users log off and their profiles are deleted, the
space that was used by those profiles is reclaimed.
Kevin • 2 years ago
What about the integration with SCCM 2012? This only worked with PVS when you
were caching on local device hard disk. With this option you will have a local hard
disk (the overflow) but the primary cache will be in memory. Will the SCCM 2012
integration still work with this configuration?
Miguel Contreras > Kevin • 2 years ago
I haven't tested it, to be honest. I'll follow up as soon as I find out.
Berry • 2 years ago
We have been huge fans of cache in RAM for years, using it in all of our VDI projects simply because it is the fastest option and keeps your useless IOPS inside your hypervisor. One big issue has always been the BSOD for a few users who use too much cache, but because it was never more than 0.01 percent of the users, we just kept it this way.
So this feature, which seems to work very well since the latest hotfix, is again our VDI best practice. Fast, and now also flexible for light and heavy write-cache users.
One strange thing is that we don't see the bytes written in our PVS status tray anymore, which is odd?
Miguel Contreras > Berry • 2 years ago
Hey Berry. The vDisk status tray does show the amount of write cache used. However, it only shows the amount that is used on disk, not the amount used in RAM. I don't know if this will change in the future.
As I mentioned in the post, this cache type is supposed to replace cache on hard disk with intermediate buffering, meaning that the RAM buffer was not really meant to be so large, or at least not large enough to make it necessary to show in the vDisk status tray. However, you can always look at how much non-paged pool memory is being used in Task Manager to figure out how much RAM cache is in use.
Ryan Kellerman • 2 years ago
When I implemented this, I figured out what I wanted to use as my target cache size by using the RAM-cache-only setting on a limited number of provisioned hosts and then watching the size of the cache used at the end of the day. After recording the numbers for a couple of weeks, I implemented with a number 10% higher than my peak.
The weird thing is, when you're in the "RAM with overflow on local hard drive" mode, the Virtual Disk Status utility does not tell you how much RAM you're using, only how much disk cache is utilized. So that made it necessary to test a couple of RAM-heavy XenApp hosts in "RAM cache" mode under normal operating conditions.
Also, this feature did not work properly for me until I upgraded the Citrix Provisioning Services Target Device software to 7.1.2.3, which translates to Provisioning Services 7.1 SP2 Build 3. Before that version there seems to be a bug where it just caches straight to disk instead, which the benchmark numbers demonstrated, along with the fact that the cache-used number was always higher than 4 MB when it should not have been.
Carl Fallis > Ryan Kellerman • 2 years ago
You are correct; you should use the latest 7.1 Target hotfix: http://support.citrix.com/arti... This fixes an issue with enabling the RAM portion of the write cache.
There will always be a 4 MB file created for the cache on disk, even if you do not overflow the RAM cache. This is the VHDX header that must be allocated for the cache file.
Another thing to know about the new RAM with overflow to hard drive cache mode: it uses 2 MB sectors like the disk. The old RAM cache was space efficient and used RAM as needed, so a 512-byte write would take 512 bytes. Using the new RAM cache, the initial 512-byte write will take 2 MB; if more data is written to that particular sector, then no more memory will be allocated for that 2 MB sector.
Christoph Wegener • 2 years ago
Thanks Migs for this excellent explanation.
I'm sure plenty of customers were anxiously waiting for the ASLR fix. I will surely poke around a target device with RAMMap and maybe WinDbg to figure out exactly how it works! :)
What is the pool tag that is used for the new RAM cache mode?
Miguel Contreras > Christoph Wegener • 2 years ago
Hi Christoph. To be honest, I'm not sure. I'll look into it later this week and get back to you.
Ron Kuper • 2 years ago
How about implementing the same mechanism for PvD and extending the performance boost to persistent VDI as well?
Ron Kupferschmied • 2 years ago
Big kudos with a little sting:
I was thrilled about RAM with Overflow, and now even more so after learning that it is considerably faster than the old RAM option. Awesome!
I was less excited to witness how the feature only started working a few months after its official release. Reading this sentence, "One of the most amazing SIDE EFFECTS of this new feature...", sort of explained it.
Being a partner (and also a fan!), I truly wished for innovation in provisioning from the VDI market leader, not coincidental features which only materialized as a side effect of a fix necessary to play nice with the MS hypervisor. But this is a recurring theme with PVS: so much potential but so little real advancement. Many "side effects" as features, such as:
- Using Subnet Affinity for pod design instead of adding a simple (network-independent) way to assign PVS servers to a device collection ("Collection Affinity")