Archive for the ‘Storage’ Category

Cool videos on Technology, Virtualization and Storage

More and more cool videos are going around that explain things in virtualization, cloud computing and storage better and better. I thought I’d sum up my personal favorites in a blogpost.

Older stuff: The Internet

A very old one that is still fun to watch is “Warriors of the Net”. This video explains how the internet works: things like packets, firewalls and routers are explained here:



Warriors of the Net – The Internet explained.

Read the rest of this entry »

Under the Covers with Miss Alignment Part 2: Linked Clones

This post is the continuation of Under the Covers with Miss Alignment. I keep hearing this rumor more and more often: it appears that both snapshots and linked clones on vSphere 4.x and 5.0 are misaligned. Not having had the time to actually put this to the test, I thought it would at least be informative to give you some more down-and-dirty information on the subject.

Read the rest of this entry »

Speeding up your storage array by limiting maximum blocksize

Recently I got an email from a dear ex-colleague of mine, Simon Huizenga, with a question: “would this help speed up our homelab environment?”. Since his homelab setup is very similar to mine, he pointed me towards an interesting VMware KB article: “Tuning ESX/ESXi for better storage performance by modifying the maximum I/O block size” (KB:1003469). What this article basically describes is that some arrays may experience a performance impact when very large storage I/Os are performed, and how limiting the maximum I/O block size might improve performance in specific cases.
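The KB article boils down to lowering an advanced host setting (Disk.DiskMaxIOSize) so that large guest I/Os get split into smaller chunks before they hit the array. A minimal sketch of the splitting arithmetic, with illustrative sizes (the cap and request sizes here are examples, not recommendations):

```python
# Sketch: how capping the maximum I/O size splits a large request.
# The advanced setting from KB 1003469 (Disk.DiskMaxIOSize) is specified
# in KB; the values used below are purely illustrative.

def split_io(request_kb, max_io_kb):
    """Return the sizes (in KB) of the chunks a request is split into."""
    full, rest = divmod(request_kb, max_io_kb)
    return [max_io_kb] * full + ([rest] if rest else [])

# A 1 MB guest write with the cap lowered to 128 KB becomes 8 back-end I/Os:
print(split_io(1024, 128))  # [128, 128, 128, 128, 128, 128, 128, 128]
print(split_io(100, 32))    # [32, 32, 32, 4]
```

Whether the extra I/Os help or hurt depends entirely on the array, which is exactly why the KB frames this as a tuning option for specific cases.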

Read the rest of this entry »

Under the covers with Miss Alignment: Full-stripe writes

In a previous blogpost I covered the general issue of misalignment on a disk segment level. This is the most common and the most obvious form of misalignment: when several spindles in a RAID set perform random I/O, misalignment causes more spindles to seek for a single I/O than would be required when properly aligned.
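To make that concrete, here is a small sketch (assuming a 64 KB RAID segment size, purely illustrative) of how a misaligned I/O crosses one more segment boundary, and therefore drags one more spindle into a seek, than an aligned one:

```python
# Sketch: segments (and thus spindles, for small random I/O) touched by
# a single request, for an assumed 64 KB RAID segment size.

def segments_touched(offset_kb, size_kb, segment_kb=64):
    """Number of RAID segments a request at the given offset spans."""
    first = offset_kb // segment_kb
    last = (offset_kb + size_kb - 1) // segment_kb
    return last - first + 1

print(segments_touched(0, 64))   # aligned 64 KB I/O: 1 spindle seeks
print(segments_touched(32, 64))  # misaligned by 32 KB: 2 spindles seek
```

Double the seeking spindles per I/O effectively halves the random IOPS the RAID set can deliver, which is why segment-level alignment matters so much.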

Next in the series is another misalignment issue which is rare, but can have a much bigger impact on tuned storage: full-stripe misalignment.

Read the rest of this entry »

vDesktops – Where do you measure IOPS?

People are talking SO much about VMware View sizing these days. Everyone seems to have their own view on how many IOPS a vDesktop (virtual desktop) really uses. When you’re off by a few IOPS times a thousand desktops, things can get pretty ugly. Everyone hammers on optimizing the templates, making sure the vDesktops do not swap themselves to death, etc. But everyone seems to forget a very important aspect…

Where to look

People are measuring all the time: looking, checking, seeing it fail in the field, going back to the drawing board, sizing things up, trying again. This could happen in an environment where storage does not have a 1-on-1 relation with the disk drives (like when you use SSDs for caching). But even in straight RAID5/RAID10 configs I see it happen all the time. Read the rest of this entry »

Place to be: EMC world 2011

EMC world is surely going to rock the house once again this year. It’s party time from May 9th to 12th in Las Vegas!

EMC world 2011


When will the fun EVER stop? 2011 is going to be a rocking year for EMC, full of groundbreaking records and very cool products, ranging from security through backup to disaster recovery – all within EMC’s balanced portfolio.

There will be a lot of very interesting things to do at EMC world 2011. Take a look at the session catalog here (requires login): EMC world 2011 Session Catalog. Read the rest of this entry »

“If only we could still get 36GB disks for speed”

Yesterday I remembered a rather funny discussion I once had. Someone stated “if only we could still get 36GB 15K disks, we could speed things up by using a lot of spindles”.

Kind of a funny thing if you think about it. At the time I figured that 36GB disks would force you to use more drives in order to reach a proper capacity. And since a lot of people still tend to scale to capacity only, your problems increase with the size of the disks. Say your environment requires 6TB: you could use four 2TB drives in RAID5 – but don’t expect 100 VMs to run properly from that 😉
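A quick back-of-the-envelope sketch of why scaling to capacity only falls short. The per-spindle IOPS figure below is a rough rule of thumb for 7200rpm SATA, not a measurement:

```python
# Back-of-the-envelope: scaling to capacity vs. scaling to performance.
# Assumed rule of thumb: ~80 random IOPS per 7200rpm SATA spindle.
# RAID5 usable capacity = (drives - 1) * drive size.

def raid5_usable_tb(drives, size_tb):
    return (drives - 1) * size_tb

def raw_read_iops(drives, iops_per_disk):
    return drives * iops_per_disk

# Four 2TB SATA drives in RAID5 meet the 6TB capacity goal...
print(raid5_usable_tb(4, 2))  # 6 (TB usable)
# ...but deliver only ~320 raw random read IOPS, shared by 100 VMs:
print(raw_read_iops(4, 80))   # 320
```

At roughly 3 IOPS per VM before write penalties even enter the picture, the capacity-only design meets its TB target and still falls flat on performance.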

The funny thing (and the reason for this post) is that most people seem to miss out on the following…


The latest thing: vTesting!

Yes, I admit it: now that I’m an EMC vSpecialist I do not have much time left for all these deepdive measurements. So I’m forced to introduce a new type of testing. I’ll call it a vTest. Actually Einstein is the father of this type of testing, simply because he did not have spaceships that could do near-lightspeed. Me, I simply lack time. Hmm, that is kind of a deep statement in this light, right? 🙂

With no further delay I’ll just drop the statement for this vTest, and we’ll boldly go where no geek has gone before:


“A 7200rpm SATA disk CAN outperform a 15K FC disk”



So how many of you think the above is pure nonsense? Don’t be shy, let’s see those fingers!

Now for the actual vTest: In this test I play the devil’s advocate and use a 2TB 7200rpm SATA drive, and a 36GB 15K FC disk. Both disks get 36GB of data carved out. Now we run a vTest performing heavy random access on both 36GB chunks.

See where I’m going? If not, here is a hint: Throughput part 1: The Basics. In random access patterns, the biggest latency in physical disks comes from the average seek time of the head to the correct cylinder on disk. And the trick is in the “average” part.

The average seek time is the average time required for a head to seek to any given cylinder on the disk. But this seek time heavily depends on where the head was coming from. Normally the average seek time is measured when the head needs to travel half of the platter’s surface. But in our test that is far from reality for our 2TB SATA drive!

As the 36GB 15K FC drive has to move its head all over the platter, the 2TB SATA disk only moves (36GB/2000GB)*100 = 1.8% of its total stroke distance. In fact even that is a lie: the outside of the platter carries way more data than the inside, so assuming the 36GB is carved out at the edge (which is what most arrays do), this number is even lower, probably below 1%!

This means the average seek time of this disk is no longer around 8-9ms, but drops to around 1ms (no, not 1% of 9ms! This value will be very near the track-to-track seek time, which for SATA is usually around 1ms). Even the extra rotational latency of the SATA disk (because it spins at 7200rpm instead of 15000rpm) does not change this: its total average access time is still way lower than that of the poor 36GB 15K disk…
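The whole argument fits in a few lines of arithmetic. The 3.5ms full-stroke average seek for the 15K FC drive is an assumption on my part, as is the 1ms short-stroke seek from the text; the rotational latencies follow directly from the spindle speeds:

```python
# Rough vTest arithmetic: average access time = average seek + rotational
# latency, where rotational latency averages half a revolution.

def rotational_latency_ms(rpm):
    return (60_000 / rpm) / 2  # half a revolution, in milliseconds

# Assumed seek times: ~3.5 ms full-stroke average for the 15K FC disk,
# ~1 ms near track-to-track for the short-stroked 2TB SATA disk.
fc_15k_ms = 3.5 + rotational_latency_ms(15_000)
sata_short_ms = 1.0 + rotational_latency_ms(7_200)

print(round(fc_15k_ms, 1))        # 5.5 ms for the 36GB 15K FC drive
print(round(sata_short_ms, 1))    # 5.2 ms for the short-stroked SATA drive
print(sata_short_ms < fc_15k_ms)  # True: the SATA drive can indeed win
```

The margin is small with these assumed figures, but the point stands: short-stroking 1.8% of a huge SATA drive all but removes the seek penalty, and the slower spindle cannot give it back.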

Yes, you could argue about caching efficiency and the way the disks differ in sorting the order in which they fetch random blocks, but still:

If you now review the initial statement, would you still give the same answer?
(At least it should get you thinking!)

EMC’s Record Breaking Event: Almost showtime!

As EMC2 counts down into the final hours before their live record breaking event, posts are showing up around three products: Data Domain Archiver, the Data Domain 890 and GDA, and the EMC2 VNX and VNXe series. I looked around and found some info here and there on these new products.

Read the rest of this entry »

Veeam Backup part 2- Using jumbo frames to target storage

In my quest to get the most out of my home lab setup when it comes to backup speeds to my IX2-200 (see Veeam Backup part 1- Optimizing IX2-200 backup speeds) today I will configure jumbo frames on my environment, and I will show how each of the possible connection options to the IX2-200 can be configured for jumbo frames.

A small history on network frames, and especially the Jumbo Ones

There are many stories going round about jumbo frames. Some say they are not worthwhile, others say they make the difference between night and day. But what are jumbo frames in the first place? Read the rest of this entry »
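The raw arithmetic already hints at part of the answer. A sketch of per-frame payload efficiency for standard vs. jumbo MTUs, assuming plain IPv4/TCP headers without options (preamble and inter-frame gap are ignored to keep the numbers simple):

```python
# Sketch: payload efficiency of standard vs. jumbo Ethernet frames.
# Per frame: 14 B Ethernet header + 4 B FCS on the wire; 20 B IPv4 and
# 20 B TCP headers live inside the MTU.

def payload_efficiency(mtu):
    """Fraction of each frame on the wire that is actual TCP payload."""
    payload = mtu - 20 - 20  # strip IPv4 + TCP headers from the MTU
    wire = mtu + 14 + 4      # add Ethernet header + FCS around the MTU
    return payload / wire

print(round(payload_efficiency(1500) * 100, 1))  # 96.2 (% for standard frames)
print(round(payload_efficiency(9000) * 100, 1))  # 99.4 (% for jumbo frames)
```

A few percent of header overhead is only part of the story, of course; the bigger win in practice is usually the reduction in frames (and interrupts) per second, which is what the rest of the post digs into.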

Veeam Backup part 1- Optimizing IX2-200 backup speeds

Thanks to Veeam’s Happy Holidays gift, I now have a license for several Veeam products. The one I really wanted to try in my home lab was Veeam Backup and Replication.

In this blogpost, I will try various ways to connect the Veeam appliance to my Iomega IX2-200 NAS box. This setup is very tiny indeed, but it clearly shows the options you have and how they perform compared to each other. Read the rest of this entry »

Soon to come

    • Determining Linked Clone overhead
    • Designing the Future part1: Server-Storage fusion
    • Whiteboxing part 4: Networking your homelab
    • Deduplication: Great or greatly overrated?
    • Roads and routes
    • Stretching a VMware cluster and "sidedness"
    • Stretching VMware clusters - what noone tells you
    • VMware vSAN: What is it?
    • VMware snapshots explained
    • Whiteboxing part 3b: Using Nexenta for your homelab