Dirty Hands: A Week in the VCE Lab

 

I sure do like talking with customers and getting on stage at conferences, but my credibility in either situation is absolutely predicated on my ability to roll my sleeves up and get dirty doing this stuff in real life from time to time.  This week is one of those weeks where I get to do exactly that…turn off the phone and email and spend 5 straight days in a lab building stuff.  For this round of dirty work, I am at the beautiful VCE Customer Technology Center and my toys include a Vblock 300 (which has a bunch of Cisco UCS and a VNX 7600) and some newly minted VxRail nodes…here’s me carrying the new VxRail awesomesauce.

As most of you know, VCE is EMC’s converged infrastructure division that creates simple-to-deploy-and-manage platforms for all sorts of applications…this week we are going to be playing around with deploying Splunk on these converged and hyper-converged infrastructures.

Ain’t it pretty?!?


The goal this week is to spin up a Splunk environment that would be similar to something our customers would build…

  • Heavy Forwarder – collecting and sending data
  • Clustered Indexers – the hard workers that index and write the data to disk
  • Search Head – where you log in to search the indexed data
  • Deployment/Master/License – all the management components for the environment as we scale
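For anyone who wants to picture how those roles get wired together, here is a minimal sketch of the Splunk .conf stanzas involved. The hostnames and the pass4SymmKey value are placeholders, and factors like replication/search counts are illustrative, not what we necessarily used in the lab:

```ini
# outputs.conf on the Heavy Forwarder – send data to the clustered indexers
[tcpout]
defaultGroup = lab_indexers

[tcpout:lab_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# server.conf on the cluster master – coordinates the indexer cluster
[clustering]
mode = master
replication_factor = 2
search_factor = 2
pass4SymmKey = changeme

# server.conf on each clustered indexer – points back at the master
[clustering]
mode = slave
master_uri = https://master.example.com:8089
pass4SymmKey = changeme

# server.conf on the search head – searches across the cluster
[clustering]
mode = searchhead
master_uri = https://master.example.com:8089
pass4SymmKey = changeme
```

Each stanza lives on its own box; the master hands the indexers their replication marching orders, and the search head discovers the indexers through the master rather than being pointed at them directly.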


We are going to install this on the Vblock 300 with VNX storage and we are also going to install this on the VxRail with the latest 3.5 code…see more on this sweet upgrade from Chad Sakac’s recent blog.  Once installed, we plan to do a couple of fun activities:

  1. Get the latest VNX and VMware apps installed on the Splunk environment, along with the associated eventgens, to get comfortable with deploying them.
  2. Write a quick blog post on how to deploy and some best practices.
  3. Deploy a whole battery of assessments and collections against the environments using EMC’s Mitrend tool set to get a baseline of what the environments are seeing from an I/O perspective.
  4. Spin up a few Splunk benchmarks, some we can’t publicly disclose and certainly the old guard Bonnie++.
  5. While the benchmarks are running, we are going to do that same assessment process with Mitrend to better understand how those tools drive I/O at the hardware level and compare that to what the environment looked like just running some apps.

The long-term objectives of our work are a few fold:

  1. Get some stick time…as I said before, talking about this stuff is fun and all, but it is critical to stay relevant with experience.
  2. Build some best practices and guides to enable our customers, partners, and internal teams.
  3. Better understand Splunk’s I/O profile under some simulated normal operations.
  4. Better understand how the various Splunk benchmark tools drive infrastructure operations, from host all the way to disk I/O, with the long-term goal of actually changing how Splunk recommends testing hardware when deploying virtualized Splunk on shared infrastructure (because it is generally known that Bonnie++ is actually not a good indicator of performance for virtualized, shared infrastructure).

Should be a pretty fun week…assuming we can actually get all this done. Here’s the first round of work we are doing…racking up the VxRail with my boys @kyleprins and @aaronbuley:

 

Whatever we actually do accomplish, I do know I will learn something and have some fun getting these hands dirty.

Your bearded friend,

Cory