Wednesday, October 20, 2010

Rocking good reviews for EqualLogic

http://www.infoworld.com/d/storage/infoworld-review-dell-iscsi-san-sizzles-ssd-dynamic-storage-tiering-625?page=0,0

And

http://www.infoworld.com/t/server-virtualization/lab-test-new-equallogic-firmware-takes-load-vmware-856?page=0,0

A question about how to configure LeftHand SAN

http://www.experts-exchange.com/Software/VMWare/Q_26556427.html

I inherited a LeftHand SAN installation and have been working to figure out how it works and how to configure it correctly. I hope that someone here can assist me with the configuration for our VMware environment. We are running VMware 3.5 and we need to sort out this storage solution before we upgrade to the latest version.

We need a single volume or two that can be seen by the two existing hosts we have in a VMware cluster. We will need to add another machine to the cluster eventually, but right now there is not enough disk space to do anything until we expand one existing volume that is connected to the two hosts. Also, there is a second volume that should span the two hosts but doesn't. I can't quite understand why, because it looks like it should be configured correctly.

I have attached screenshots of the LeftHand and VMware configurations in the hope that someone can quickly tell why this isn't set up right and point out the errors that prevent both hosts from seeing both volumes.

Additionally, I need to know whether I can simply expand one or both of the volumes (I do have space on the SAN), and also how to connect a future VMware host to the two existing volumes.
I would be grateful if you could point me towards the best sections of documentation to read, or any other good sources of instruction.

Thanks in advance for your help.


http://filedb.experts-exchange.com/incoming/2010/10_w43/360791/LefthandSAN-Configuration.pdf

Thursday, August 26, 2010

Two interesting LH posts

Some difficulties with an LH cluster, and the fix

https://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1282083918853+28353475&threadId=1443755

And a not-so-obviously documented fluke in the combination of an HP LeftHand (P4000) Multi-Site Cluster and VMware vSphere hosts across multiple sites.


http://www.virtuallifestyle.nl/2010/08/hp-lefthand-multi-site-san-vmware-vsphere/

Thursday, June 10, 2010

This is an interesting post from an LH user

http://itsdrivingmenuts.blogspot.com/2010/04/lefthand-san-hp.html

I don't know what his final decision was, but it shows his decision-making process.

Thursday, April 22, 2010

How is EqualLogic faring under Dell?

http://www.theregister.co.uk/2010/04/21/dell_paula_long/

With Paula Long leaving, some are worried about how EqualLogic will be handled.

Wednesday, April 14, 2010

Dell EqualLogic SAN experience and review

http://www.spoonapedia.com/2010/04/dell-equallogic-san-experience-review.html

Pros


* Easy to use

* Great performance

* Lots of useful features

* No hidden feature costs

* Excellent support

* Snapshot integration with VMware/Exchange/SQL is a nice touch

* Regular firmware updates

Cons

* No WAN optimisation for replication

* Replication can be a little fiddly to maintain

LeftHand: new Network RAID features

http://vstorage.wordpress.com/2010/04/05/the-new-networkraidfeatures-of-saniq-8-5/
Interesting new features, although they seem complex.

Prior to 8.5 you had the choice of Network RAID-10 with a 2-way, 3-way or 4-way replica, i.e. 2, 3 or 4 copies of your volumes distributed across the nodes for redundancy; the downside of this is decreased usable capacity.


8.5 introduces Network RAID-5 and RAID-6.

Network RAID-5 needs three data and one parity as a minimum configuration, i.e. 3+1, meaning four nodes as a starting point.

Network RAID-6 needs four data and two parity as a minimum configuration, i.e. 4+2, meaning six nodes are required initially.
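To put rough numbers on the capacity trade-off, here is a minimal back-of-envelope sketch based on the copy/parity ratios described above; the per-node capacity and cluster size are made-up figures, and the parity levels only reach this efficiency when snapshots are in use (see the caveat below).

# Rough usable-capacity estimate per Network RAID level, using the
# copy/parity ratios from the post above. NODE_TB and NODES are
# hypothetical; replace them with your own cluster's figures.

NODE_TB = 3.6   # assumed usable TB per node after local RAID
NODES = 8       # assumed number of nodes in the cluster

levels = {
    # level:         (min nodes, usable fraction of raw space)
    "NR-10 2-way":   (2, 1 / 2),   # 2 copies of every block
    "NR-10 3-way":   (3, 1 / 3),   # 3 copies
    "NR-10 4-way":   (4, 1 / 4),   # 4 copies
    "NR-5  (3+1)":   (4, 3 / 4),   # 3 data + 1 parity, best case with snapshots
    "NR-6  (4+2)":   (6, 4 / 6),   # 4 data + 2 parity, best case with snapshots
}

raw_tb = NODE_TB * NODES
for name, (min_nodes, fraction) in levels.items():
    status = "ok" if NODES >= min_nodes else f"needs >= {min_nodes} nodes"
    print(f"{name:12s} ~{raw_tb * fraction:5.1f} TB usable of {raw_tb:.1f} TB raw ({status})")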

Unlike Network RAID-10 which creates mirror replica(s) of a volume, the documentation states the new RAID levels stripe parity across all nodes in the cluster.

Network RAID-5 and Network RAID-6 volumes require snapshots in order to achieve space-utilization benefits.


This means that deleting the last snapshot of a Network RAID-5 volume causes its space requirement to be the same as a Network RAID-10 (2-way mirror) volume.

Similarly, deleting the last snapshot of a Network RAID-6 volume causes its space requirement to be the same as a Network RAID-10+1 (3-way mirror) volume.

It is possible, therefore, for the storage cluster not to have enough space to accommodate the snapshot deletion.

Deleting the last snapshot of a Network RAID-5 or Network RAID-6 volume is not recommended.
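Since that caveat is easy to trip over, here is a tiny sketch of the space check it implies: a Network RAID-5 volume without snapshots falls back to the 2x footprint of a 2-way mirror, and a Network RAID-6 volume to the 3x footprint of a 3-way mirror. The volume size, current footprint and free space below are hypothetical numbers.

# Minimal check, per the caveat above: deleting the last snapshot of a
# Network RAID-5 volume makes it consume space like a 2-way mirror (2x),
# and a Network RAID-6 volume like a 3-way mirror (3x).

FALLBACK_MULTIPLIER = {"NR-5": 2, "NR-6": 3}

def can_delete_last_snapshot(level, volume_tb, current_footprint_tb, cluster_free_tb):
    """Return True if the cluster has room for the post-deletion footprint."""
    needed_tb = volume_tb * FALLBACK_MULTIPLIER[level]
    extra_tb = needed_tb - current_footprint_tb
    return extra_tb <= cluster_free_tb

# Hypothetical example: a 2 TB NR-5 volume currently occupying ~2.7 TB,
# with 1 TB free in the cluster. Deleting the last snapshot would need
# 4 TB total, i.e. 1.3 TB more than is free, so it would not fit.
print(can_delete_last_snapshot("NR-5", 2.0, 2.7, 1.0))   # False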

Monday, March 8, 2010

Yet another happy EqualLogic thread

http://episteme.arstechnica.com/eve/forums/a/tpc/f/833003030931/m/679006453041

Highlights

I've been playing the last couple of days.

The HIT Kit and "Auto-Snapshot Manager" tools seem to make it ridiculously easy to get hosts connected to the EQL and to take both LUN-level and application-aware (SQL/Exchange) snapshots of databases.

For example, I'm not a SQL admin/expert, but I put SQL on a test VM, created a DB and a log LUN on the EQL, created a test DB, took a DB-aware snapshot of it, deleted the original, and from within the ASM tools restored it and watched it appear in real time.

Maybe I'm easily impressed but I really do find this thing impressive given the cost/all-in licensing nature.

***

I've had EQ units for 5 years and was always happy with the improved functionality that came out with subsequent firmware updates. Even after Dell bought them, the SANHQ program moved light years ahead of the beta I had tested for almost a year.

***

The EqualLogic gear is super easy to set up. I have a pair of them racked - just waiting for the switches to come in and I am golden.

Sunday, March 7, 2010

Important concept to understand with LH

http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1268007997457+28353475&threadId=1411427


Hello, we just purchased a LeftHand P4500 10.8 TB SAN. We created a Network RAID-1 volume (spanned across the two LeftHand nodes) and configured MPIO as documented; it correctly discovered 4 paths. However, we started failover tests by scheduling a power-off of one of the nodes (the node on which the Virtual Manager was running). Of course quorum is lost and access to the volume is lost, but we cannot manage to restore quorum by restarting the Virtual Manager on the other node. The CMC console asks us to delete the VM first, and to stop it beforehand, but it is already marked as offline. Could you please give me some advice or documentation on regaining quorum? Thanks.

Fran Garcia
Mar 2, 2010 09:10:24 GMT
Hello Rodrigo :-) You need to configure a Failover Manager to achieve a resilient cluster. In order to have cluster quorum you need at least (n/2)+1 active nodes, and of course that cannot be done with a 2-node cluster. There is a FOM VMware appliance included in the LeftHand installation CD.
Mark...
Mar 2, 2010 09:53:31 GMT
Hi, HP/LHN recommend a Failover Manager (FOM) at a third site, as mentioned above. With the FOM your cluster should stay up with no disruption. You can use the Virtual Manager (VM), but the thing with the VM is NOT to start it on a node - just create it. Then, in your config of two nodes, should one of the nodes fail, you start the VM on the remaining node. This keeps disruption to a minimum. If, as in your case, you have the VM started and it is on the node that crashes, you will not be able to start a new VM - as you have found out! Only one VM or FOM is allowed per management group.
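The quorum rule quoted above - at least (n/2)+1 managers must be running - is the key point here; a quick sketch of the arithmetic shows why a plain 2-node management group cannot ride out a node failure without a FOM or Virtual Manager as a third voter (the node counts are just illustrative).

# Quorum per the rule quoted above: a management group needs at least
# (n // 2) + 1 managers running to keep its volumes online.

def has_quorum(total_managers, running_managers):
    return running_managers >= (total_managers // 2) + 1

# Two-node group, no FOM: losing either node drops below quorum.
print(has_quorum(2, 1))   # False -> volumes go offline

# Same two nodes plus a Failover Manager as a third voter:
print(has_quorum(3, 2))   # True  -> the survivors keep quorum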

Thursday, March 4, 2010

Complexity of LH

This post (though take it with a grain of salt) points to possibly more complexity and lock-ups with LH.

Sorry, you will have to use http://www.avivadirectory.com/bethebot/ in order to view it.

http://www.experts-exchange.com/Hardware/Servers/Q_25200410.html

A couple of excerpts:

Their units work well for the most part, but we've had far more issues over the past years with the units locking up than you'd expect, almost always due to incompatibilities between their OS and the underlying RAID controller - I believe at this point we've had to upgrade the controller on 3 different occasions in 2 years. The feature set from LeftHand is quite nice for the price, but it's missing a couple of things, like being able to manually move the cluster VIP between boxes (it's an automatic thing, so it only does it when you down a box).

Performance on the units is fairly good, but make sure you calculate what IOPS you'll get from the units versus what you'll need to run your servers and VMs off of them. Both vendors can give estimates of what the units will give you, and what sort of numbers they'd expect in your environment.

OR

Not a ton (we're still using them), but more than I feel we should have. The units don't fail over as gracefully as they should when one has a problem, so we've had to do a fair amount of manual work to get things running again when there's an issue.

Example: at our HQ, we're using 2 mirrored units to host a number of mid- to high-importance servers - a couple of VMs, a datastore for a couple of important but low-use Linux servers, and a fileserver that hosts the folder redirection for laptops. When one unit hangs/locks up, it's supposed to gracefully fail over to the other unit, with no loss or connection drops. What actually happens is that the fileserver dismounts the drive, and both the VMs and the datastore file systems go into a read-only state. It's all fixable, but it causes significant scrambling to get things back running.

squigit, there are two ways to set up Network RAID on LeftHand: in one mode it stops if a node goes down, for a high level of data protection; in the other mode it keeps going. You can also migrate a LUN to another set of nodes by jiggling about with cluster membership. Maybe you just need to install the Failover Manager on a 3rd box to maintain quorum. Sounds like you need to go on the install/config course.
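On the IOPS-sizing advice in the first excerpt, a back-of-envelope estimate is easy to script; the per-disk IOPS, spindle count, local RAID write penalty and read/write mix below are all assumptions to replace with your own numbers, and the vendors' sizing tools will be more accurate.

# Back-of-envelope IOPS estimate: all inputs are assumptions, not vendor
# figures. Estimates the front-end IOPS a RAID set can sustain, given a
# read/write mix and the write penalty of the local RAID level.

def frontend_iops(spindles, iops_per_disk, write_penalty, write_fraction):
    backend = spindles * iops_per_disk
    return backend / ((1 - write_fraction) + write_fraction * write_penalty)

# Hypothetical 12-disk node of 7.2k SATA drives (~75 IOPS each),
# local RAID 5 (write penalty ~4), workload 30% writes:
print(round(frontend_iops(12, 75, 4, 0.30)))   # ~474 IOPS per node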
