Posts tagged Dell
Back in September 2009, I wrote a post with a quick overview of what a private cloud (or infrastructure) looks like, along with some basic costs and information, including why it is a great product (I am biased).
Since then, Dell has retired the PE2900III model server and other items have changed, so this is an update to the basic configuration.
So, originally, the physical servers were configured as:
- Dell PE2900III (reasonably priced, very reliable, I have spares on the shelf)
- 4 ethernet ports (2 built in, 2 port card installed, more can be added)
- 2 73GB SAS drives mirrored together for booting VMware vSphere 4
- 32GB RAM (48GB is max for this hardware platform)
New servers look like:
- Dell PE T610 (reasonable price, very reliable, spares to go onto the shelf)
- 4 ethernet ports (all built in)
- 2 80GB SATA drives mirrored together for booting VMware vSphere 4
- 48GB RAM (192GB max available – very expensive)
The reason for the RAM change is that I am seeing a 2:1 (or higher) ratio of RAM to CPU usage in terms of percentage, and 48GB is a good fit for a system this size. Also, the newer Xeon 55xx series processors use RAM in sets of 3 (triple-channel) instead of 2 or 4 at a time, and 48GB is 12 4GB sticks of RAM. The newer 55xx series also has working hyper-threading (H/T), and I am seeing very nice performance on servers deployed using this processor family in our network.
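To illustrate the sets-of-3 constraint, here is a quick sketch, purely illustrative: the 2-sockets x 3-channels x 2-slots layout and the stick sizes are my assumptions for the example, not something off a Dell spec sheet. It lists the totals reachable when every channel is populated identically:

```python
# Illustrative sketch of balanced triple-channel memory configurations.
# Assumed layout: 2 CPU sockets x 3 channels each x 2 DIMM slots per channel.
STICK_SIZES_GB = [2, 4, 8]    # common DDR3 stick sizes of the era
CHANNELS = 6                  # 2 sockets x 3 channels
SLOTS_PER_CHANNEL = 2

def balanced_configs():
    """Totals reachable while keeping every channel identically populated."""
    configs = set()
    for size in STICK_SIZES_GB:
        for sticks_per_channel in range(1, SLOTS_PER_CHANNEL + 1):
            configs.add(size * sticks_per_channel * CHANNELS)
    return sorted(configs)

print(balanced_configs())  # 48GB shows up as 12 x 4GB (all 12 slots filled)
```

Under these assumptions, 48GB lands exactly on a balanced configuration (4GB sticks in all 12 slots), which is why it is a natural stopping point for this platform.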
Cost difference? The original post estimated the cost at $1,600.00 per month (see previous post), and I estimate the new configuration to be very close, inching up to approximately $1,700.00 per month; this number should be on the high side. (For accurate pricing, please contact the ipHouse sales people; they can run up a quote based on real numbers.)
In the last 6 months, I have helped multiple customers achieve their dream of a virtual machine environment built for them exclusively, but with abilities to control their virtual machine setup, configuration, turn up, tear down, etc. These dedicated infrastructure environments are in the ipHouse data center.
This isn’t ‘cloud computing’ as many people think of it (thanks to Amazon EC2 and the like), but it is pretty close to that vague definition, and with far more control available in terms of everything-vm-wise.
What do I mean? With this virtual private cloud, a customer can set up 3 Ubuntu systems, 2 Windows Server 2003 systems, 1 FreeBSD system, and 7 Windows Server 2008 systems. There really isn’t anything novel about this (again, reference Amazon EC2 and the like).
What is novel is that the customer can configure these VMs as they wish. Disk space allocation, partitioning, memory configurations, number of vCPUs. Basically, if you can do it on a physical server – you can do it virtually.
Another differentiating feature is that VMware vSphere 4 supports many operating systems while most public cloud providers offer a very limited number in comparison. This choice alone can be enough to warrant looking at this kind of solution.
No per-hour fees, no storage fees (above what the customer has purchased), high availability (if configured to do so), dynamic resource scheduling (if configured to do so), and predictable bandwidth fees. (See VMware vMotion and Storage vMotion, VMware HA, and VMware DRS on VMware’s website.)
I’ll build a configuration example offering shared storage between the VMware physical servers, and I’ll do some cost estimates for the per-month fees. These estimates will be high and are shown purely as an example. You would want to contact ipHouse Sales to get a real idea of the costs involved.
I chose Dell as my hardware platform, VMware for my host hypervisor, and I am starting with a 2 system ‘cluster’. (I use the word ‘cluster’ not because they are clustered in a way that most people think of things, but in that they will share SAN LUNs for their storage allowing migration from one server to another.)
Last week was like having a birthday – I received two systems from Dell for our virtualization initiative that we’ll be launching in the near future. Very exciting.
Measured power utilization of the new servers is quite reasonable: a 4.6A maximum current draw with 14 of the 16 cores running at 100%.
I did this by running SETI@Home for approximately 36 hours on three 4-vCPU systems and one 2-vCPU system.
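As a quick sanity check of those load-test numbers (the 120V line voltage below is a hypothetical assumption for the wattage estimate, not something measured in the rack):

```python
# Sanity-check the SETI@Home load test described above.
vm_vcpus = [4, 4, 4, 2]        # three 4-vCPU VMs and one 2-vCPU VM
busy_cores = sum(vm_vcpus)     # vCPUs pinned at 100%
assert busy_cores == 14        # matches "14 of the 16 cores"

# Hypothetical power estimate: 4.6A measured; 120V is an assumed feed voltage.
max_amps = 4.6
assumed_volts = 120
print(f"~{max_amps * assumed_volts:.0f}W peak at {assumed_volts}V")
```

On a 208V feed the same 4.6A would of course work out to a higher wattage, so treat the conversion as an example, not a measurement.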
The fun is getting going – on October 14th, 2008, I ordered the 8 servers listed in the configuration from my blog post of October 4th, 2008.
This will give me 8 host systems and one spare on the shelf (I’ll be using it for test deployments and such as well).
Ship date: October 16th, 2008.
Due date: Today! October 20th, 2008. (Dell tracking said Friday the 17th, which obviously wasn’t correct, but the shipment was in Minneapolis at 8:49am and out for delivery.)
I have moved one production system over already (one of the POP/IMAP servers) and performance has been excellent. Over the next few days I’ll get 3 more of the host systems online and migrate the other POP/IMAP physical servers over, then tear down the old systems and remove them from the rack(s).
There is one snag holding us back on the web server side of things – a PDF library that was used by our in-house web guy for automatic formatting of PDF documents. We’ll get this worked out soon and start that migration as well.
Once I get these initial 10 systems retired and out of the racks, we’ll rack up the other 4 host systems and prep them for the eventual task of migrating our caching and authoritative name servers (4) and our SMTP servers (8), then measure again how things are going (performance, power, etc.).
Part 4.1 coming soon, with pictures if I remember a camera…