Archive for the ‘Storage’ Category

VMware View 5.1 host caching using vSphere 5’s CBRC

I have seen various implementations of read caching, both in arrays and inside hosts, built just to cope with the boot storms of VDI workloads. With linked clones, caching really helps: all the virtual desktops being booted perform massive reads from a very small portion of the infrastructure, the replica(s). VMware came up with a nice software solution for this: why not sacrifice some memory inside the vSphere nodes and do the read caching there? That is what CBRC (vSphere 5) or Host Caching (View 5.1) is all about. And… it really works!
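To get a feel for why this helps so much, here is a toy Python model (my own sketch, not VMware's implementation, and all the numbers are made up for illustration): because the replica is read-only, every block only ever needs to be fetched from the array once, after which it can be served from host memory.

```python
import random

def boot_storm_backend_reads(n_desktops, reads_per_boot, replica_blocks,
                             hot_fraction, hot_hit_prob, cache_blocks, seed=42):
    """Simulate a boot storm against a host-side read cache.

    The replica is read-only, so a block can stay cached forever after
    its first read. Returns (total_reads, backend_reads), where
    backend_reads are the reads that missed the cache and had to hit
    the storage array.
    """
    rng = random.Random(seed)
    hot_blocks = max(1, int(replica_blocks * hot_fraction))  # boot-path region
    cache = set()
    total = backend = 0
    for _ in range(n_desktops):
        for _ in range(reads_per_boot):
            if rng.random() < hot_hit_prob:
                block = rng.randrange(hot_blocks)        # shared boot-path block
            else:
                block = rng.randrange(replica_blocks)    # any other replica block
            total += 1
            if block not in cache:
                backend += 1
                if len(cache) < cache_blocks:            # fill only, never evict
                    cache.add(block)
    return total, backend

# 100 desktops booting, each issuing 1000 reads; 90% of the reads land
# in the ~1% of the replica that holds the boot path.
total, backend = boot_storm_backend_reads(100, 1000, 500_000, 0.01, 0.9, 50_000)
print(backend / total)  # the array sees only a small fraction of the reads
```

Even with this crude model, the vast majority of boot-storm reads never leave the host: the shared boot-path blocks are cached after their first access, which is exactly the effect CBRC exploits.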

What happens during a boot storm

First of all, we need to figure out what happens during a boot storm. Ever wondered just how much data Read the rest of this entry »

EMC Live Webcast: VSI plugin

Tomorrow there will be an EMC Live Webcast around EMC’s VSI plugin, called “How to simplify Management with EMC VSI plugin for VMware vSphere”. The webcast will be delivered by my colleague and friend Simon Seagrave (@Kiwi_Si at http://www.techhead.co.uk, for those who know people by their Twitter names!). I will be assisting by answering questions in the chat window, together with Josh Hutt, one of the VSI plugin developers!

The VSI plugin is a cool little plugin that integrates EMC storage into vCenter. With separate menus and right-click integration, using EMC storage straight from vCenter is a breeze. Provision storage, monitor storage, deploy fast clones or change path failover modes… The VSI plugin has it all.

Come join me, Simon and Josh in this webinar! You can register for the webinar here.

It will run from 6/7/12, 9:00 AM to 10:00 AM MDT (America/Denver).

That is 15:00 through 16:00 GMT (Greenwich Mean Time).

UPDATE: Read more info on the Webcast at Simon’s blog here.

EMC VSPEX: In between cardboard boxes and Vblock

Even though this blog mainly focuses on technical geeky things, it cannot be denied that as infrastructures grow, the deep-down technical details get covered up more and more by the sheer size of things. As customers need to grow their environments further and faster, they need infrastructures that decide things for themselves and automate more and more. Yesterday you bought boxes and cables. Tomorrow you buy a converged infrastructure like VCE’s Vblock. But what about today? EMC is about to fill that gap…

The Cake Story

To explain the difference between a build-your-own and a Vblock, there is this great story where the parallel is drawn to birthday cakes: Read the rest of this entry »

EMC FAST-cache and “Follow the I/O”

I do not often write about one specific vendor’s implementation. This time, however, I focus on EMC’s FAST Cache technology, and we will play a little “follow the I/O” to see what it actually does, where it helps and where it might not.

Read the rest of this entry »

Backwards VDI math: Putting numbers to the 1000 user RA

EMC and VMware have published a joint Reference Architecture in which an EMC VNX5300 with a minimal configuration of disks squeezes out the required IOPS for a thousand VDI users. That is awesome stuff, but how do you go about using and remodeling this RA for your own needs? In this blog post I’ll try to put some numbers to it, both validating it and enabling you to resize it for your needs.
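The basic backwards math can be sketched in a few lines of Python. This is my own back-of-the-envelope model, not the RA's methodology, and every number below (disk IOPS, write mix, per-user IOPS) is an illustrative assumption you should replace with your own measurements:

```python
def users_supported(n_disks, iops_per_disk, write_fraction,
                    raid_write_penalty, iops_per_user):
    """How many steady-state VDI users a plain disk pool can carry.

    Every front-end write costs `raid_write_penalty` back-end I/Os
    (the classic 4 for RAID5 small writes); reads cost 1.
    """
    backend_iops = n_disks * iops_per_disk
    cost_per_frontend_io = (1 - write_fraction) + write_fraction * raid_write_penalty
    frontend_iops = backend_iops / cost_per_frontend_io
    return frontend_iops / iops_per_user

# Illustrative numbers only: 15 x 15k SAS spindles at 180 IOPS each,
# a write-heavy 80/20 VDI mix on RAID5, 10 IOPS per user.
print(round(users_supported(15, 180, 0.8, 4, 10)))  # → 79
```

Note how a raw-spindle calculation like this lands nowhere near a thousand users; the gap is exactly what array-side caching and tiering have to close, which is why the disk count in the RA cannot simply be scaled linearly without looking at the cache.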

A very cool use case: VMware View and 1000 vDesktops running off an EMC VNX5300

This is a very VERY cool one. You can find the Reference Architecture Read the rest of this entry »

“My VAAI is Better Than Yours”

VAAI has been around for quite some time, but I still get a lot of questions on the subject. Most people seem to think VAAI is solely about speeding up processes, whereas in reality you should not see significant speedups if your infrastructure has enough reserves. VAAI is meant to offload storage-related operations so they are executed where they belong: inside the storage array.


EDIT: My title was stolen borrowed from my dear colleague Bas Raayman in a post much like this one, but focusing on the file side: My VAAI is Better Than Yours – The File-side of Things. Nice addition, Bas!

My VAAI is better than yours

I recently had an interesting conversation Read the rest of this entry »

Sizing VDI: Steady-state workload or Monday Morning Login Storm?

For quite some time now we have been sizing VDI workloads by measuring what people do during the day on their virtual desktops. Or even worse, by using a synthetic workload generator. This approach WILL work for sizing the storage during the day, but what about the login storm in the morning? If that spikes the I/O load above the steady-state workload of the day, we should consider sizing for the login storm…
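The comparison boils down to a one-line maximum. Here is a minimal sketch (my own illustration, with made-up example numbers; the real login-storm IOPS and concurrency have to come from your own measurements):

```python
def sizing_target_iops(n_users, steady_iops, login_iops, login_concurrency):
    """IOPS to size for: the worse of steady state and the login storm.

    During the storm a fraction of the users is logging in (heavy I/O)
    while the rest already runs the steady-state workload.
    """
    steady = n_users * steady_iops
    storm = (n_users * login_concurrency * login_iops
             + n_users * (1 - login_concurrency) * steady_iops)
    return max(steady, storm)

# Illustrative only: 1000 users, 8 IOPS steady, 40 IOPS while logging
# in, 20% of users hitting the login screen at once on Monday morning.
print(sizing_target_iops(1000, 8, 40, 0.2))  # → 14400.0
```

With these example numbers the Monday-morning storm needs almost twice the steady-state IOPS, which is the whole point: sizing on the daytime measurement alone would undersize the array.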

Read the rest of this entry »

Why Virtual Desktop Memory Matters

I have seen several virtual desktop projects with “bad storage performance”. Sometimes because the storage impact simply was not considered, but in some cases because the project manager decided that since his Windows 7 laptop worked fine with 1GB of memory, the virtual desktops should have no issue running on 1GB as well. Right? Or wrong? I decided to put this to the test.

Test setup

To verify how a Windows 7 linked clone (VMware View 5) would perform on disk, I resurrected an old vscsiStats script I had lying around. Read the rest of this entry »

RAID5 DeepDive and Full-Stripe Nerdvana

Ask any user of a SAN whether cache matters. Cache DOES matter. Cache is King! But apart from being “just” something that handles your bursty workloads, there is another advantage some vendors offer when you have plenty of cache. It is all in the implementation, but the smarter vendors out there will save you significant overhead when you use RAID5 or RAID6, especially in a write-intensive environment.

Recall on RAID

Flashback to a post from way back: Throughput part 2: RAID types and segment sizes. There you can read all about RAID types and their pros and cons. For now we focus on RAID5 and RAID6: these RAID types are the most space-efficient ones, but they have a rather big impact on small random writes. Read the rest of this entry »
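The full-stripe advantage that plenty of write cache buys you can be shown with a few lines of back-of-the-envelope Python. This is a simplified model of the classic read-modify-write scheme, not any specific vendor's implementation, and the example RAID group width is an assumption:

```python
def raid5_backend_writes(frontend_writes, full_stripe_fraction, stripe_width):
    """Back-end write I/Os a RAID5 group performs for a given workload.

    A small random write costs 4 I/Os: read old data, read old parity,
    write new data, write new parity. When the write cache can coalesce
    `stripe_width` writes into one full stripe, parity is computed from
    the buffered data, so no reads are needed at all: just stripe_width
    data writes plus 1 parity write.
    """
    small = frontend_writes * (1 - full_stripe_fraction)
    full = frontend_writes * full_stripe_fraction
    return small * 4 + full * (stripe_width + 1) / stripe_width

# 1000 front-end writes to a hypothetical 4+1 RAID5 group:
print(raid5_backend_writes(1000, 0.0, 4))  # no coalescing  → 4000.0
print(raid5_backend_writes(1000, 1.0, 4))  # all full-stripe → 1250.0
```

In this model a fully coalesced workload generates less than a third of the back-end I/Os of the naive case, which is exactly the overhead the smarter cache implementations save you.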

Different Routes to the same Storage Challenge

Ever since shared storage came about, it has been designed so that you would not have to worry about failing disks; shared storage is built to cope with them. Shared storage is also able to deliver more performance: by leveraging multiple hard disks, storage arrays managed to deliver a lot of it. Right up until SSDs came around, hard disks were the main and only way of storing data. These hard disks have their own set of “issues”, and it is really funny to see how different vendors choose different roads to solve the same problems.

Read the rest of this entry »
