Archive for August, 2009
I had some difficulty yesterday while converting three CentOS physical servers running Zimbra management & MTA roles into virtual machines. Two of these machines (the MTAs, without LVM) were converted successfully without issue, although at the beginning we were unable to retrieve machine details due to firewall restrictions between VLANs.
The most challenging part was converting the third machine, which runs a number of Zimbra core services plus several LVM volumes. On this critical server there are about 1,400 user mailboxes, around 250GB in size, located inside the /opt volume. As far as I knew, Converter 4 shouldn't have any problem converting a Linux machine with LVM volumes, even though it will not preserve the LVM layout. I was actually assigned this task at short notice, and although I lack knowledge about Zimbra, if I managed to do it, today's activity would mark another memorable achievement in my book.
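Since Converter will not preserve the LVM layout, it is worth recording the current layout before starting the P2V, so the target disk sizes can be sanity-checked afterwards. A minimal sketch of what I would capture on the source CentOS box (file name is my own choice, purely illustrative):

```shell
#!/bin/sh
# Snapshot the LVM layout and /opt usage before P2V conversion,
# since Converter 4 flattens LVM volumes into plain partitions.
{
    vgs                  # volume groups and free extents
    lvs                  # logical volumes and their sizes
    df -h /opt           # actual mailbox store usage (~250GB here)
    cat /etc/fstab       # mount points to recreate after conversion
} > /root/pre-p2v-layout.txt 2>&1
```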
I just found a few articles and forum posts which mention that multipathing for the iSCSI initiator is only supported on vSphere. If that claim is true, what about ESX 3.5? As usual, to verify this I simply ran a number of tests with the setup below:
- Multiple iSCSI targets on different subnets
- 2x vSwitches, each with one VMkernel port & one Service Console port, on different subnets
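The second vSwitch/subnet pair from the list above can be built from the service console. A sketch, assuming vmnic2/vmnic3 are free uplinks and 10.0.2.0/24 is the second storage subnet (all names and addresses here are illustrative, not from my actual setup):

```shell
# Second path for the ESX 3.5 software iSCSI initiator.
esxcfg-vswitch -a vSwitch2                        # new vSwitch for path 2
esxcfg-vswitch -L vmnic2 vSwitch2                 # uplinks for path 2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "iSCSI-2" vSwitch2              # VMkernel port group
esxcfg-vmknic -a -i 10.0.2.11 -n 255.255.255.0 "iSCSI-2"
esxcfg-vswitch -A "SC-2" vSwitch2                 # second Service Console
esxcfg-vswif -a vswif1 -p "SC-2" -i 10.0.2.12 -n 255.255.255.0
vmkping 10.0.2.50                                 # verify VMkernel reaches target 2
```

On ESX 3.5 the Service Console port matters because the software initiator still uses it for iSCSI session login, which is why each vSwitch in the test carries both a VMkernel and a Service Console port.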
Finally, the thing I was most scared of (a DR exercise) has been conducted successfully for my customer. Now I just need to do some documentation for them on how the overall process was conducted. The objectives of this pre-DR exercise were:
- Successfully fail over the production environment from the HQ (KL) site to the DR (Bangi) site, 30km away.
- Present one replicated LUN from the HP EVA (HP Continuous Access) to the ESX datacenter.
- The ESX datacenter (DR) should be able to see the replicated datastore with 3x VMs inside.
- The ESX datacenter should be able to register all VMs into the DR vCenter.
- The ESX datacenter should be able to power on all VMs.
- All VMs should be able to connect to the customer's production environment.
- All applications & the database (SQL) should run fine.
- Fail back the entire process from DR to HQ, and then bring up all virtual machines on this LUN.
There are a few other steps which could be listed above, but I'll put them in my documentation. This activity involved three parts (storage, VMware & application), and in all three we faced some minor problems. First, we had some difficulty declaring the primary and secondary LUN at HQ & DR (HP). Second, VMware ESX could see & browse the datastore and all the VMs were there, but out of the three, we managed to register only one VM (me). Third, the client couldn't communicate with the application (user). But we are still happy with the entire process, which took only 2 hours to finish.
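The "see the datastore, then register the VMs" steps above map onto a short command sequence on an ESX 3.5 host at the DR site. A sketch, assuming the replicated copy comes up as a snapshot datastore (the `snap-*` path and HBA name are illustrative assumptions, not taken from the actual exercise):

```shell
# Bring the replicated LUN online at DR and register its VMs.
esxcfg-advcfg -s 1 /LVM/EnableResignature     # allow ESX to resignature the copy
esxcfg-rescan vmhba1                          # rescan the storage adapter
esxcfg-advcfg -s 0 /LVM/EnableResignature     # switch it back off afterwards

# Register every VM found on the resignatured datastore.
for vmx in /vmfs/volumes/snap-*/*/*.vmx; do
    vmware-cmd -s register "$vmx"
done
```

In our exercise this registration step is exactly where two of the three VMs failed, so checking each `.vmx` path manually before looping would have been a sensible precaution.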
Virtual machine replication is very important for users who would like to have a copy of their production VMs at the DR site. Replication happens when the server copies virtual disk block changes, over time, from one location to another. This can be done over the LAN or WAN depending on a few factors, as listed below:
- Connection speed between source & destination
- Time Frame / replication job frequency
- Size of data changes
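The three factors above can be turned into a quick back-of-envelope check: does one replication cycle fit inside the job window? A minimal sketch (the numbers are illustrative assumptions, not measurements from my environment):

```shell
#!/bin/sh
# Rough replication-window check.
CHANGED_GB=20        # size of block changes per cycle
LINK_MBPS=100        # usable throughput between source & destination
WINDOW_HOURS=4       # replication job frequency / allowed window

# GB -> megabits (decimal), then divide by link speed in Mbps.
NEED_S=$(( CHANGED_GB * 8 * 1000 / LINK_MBPS ))
HAVE_S=$(( WINDOW_HOURS * 3600 ))

echo "need ${NEED_S}s, window is ${HAVE_S}s"
if [ "$NEED_S" -le "$HAVE_S" ]; then
    echo "cycle fits the window"
else
    echo "cycle does NOT fit: shrink the delta or widen the window"
fi
```

With these example numbers, 20GB of changes over a 100Mbps link needs about 1600 seconds, comfortably inside a 4-hour window; the same delta over a slow WAN link would not be.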
I've tried both vReplicator from Vizioncore & esXpress CDP from PHD Virtual, both of which can replicate virtual machines easily, although from a technical perspective I would prefer the CDP method for replication. Unfortunately, I agree with some opinions that esXpress is not as user friendly as vReplicator. The virtual backup appliances (VBAs), VM deduplication, backup images, the FTP server (if you use one as the destination), plus the Linux platform, can add further headaches given the number of components a sysadmin needs to take care of. So it is fair to say this is not the best option for Windows users who prefer not only fast and easy replication, but also easy, single-click restoration.
Today I would like to share the way I configure a redundant setup combining an MSA2012i iSCSI array, ESX 3.5 U4 and ProCurve 1800-24G switches. To be honest, this setup has been successfully implemented at one of my customers' sites, although we had some hiccups along the way.
My hardware setup can be listed as below:
- MSA2012i with 2 controllers & 4 targets
- 2x ESX hosts with 10 NICs each
- 2x ProCurve 1800-24G layer-2 switches
From the hardware given above, I have two options to choose from:
1. Use link aggregation / LACP
Link aggregation achieves high utilization across multiple links when carrying multiple conversations, but is less efficient with a small number of conversations (and gives no extra bandwidth with just one). While link aggregation is good, it's not as efficient as a single faster link. I know this is not quite my case, since my setup has multiple targets, but let's look at the other option I have.
2. Use redundant end-to-end paths
This provides end-to-end redundant connections from the MSA2012i, through the switches, to the ESX hosts. With acceptable performance, and considering the low number of VMs that will be running and how critical they are to my customer, this option became my favourite. To make things more interesting, and without compromising iSCSI security & performance, I will use a VLAN for IP storage (iSCSI).
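Putting the iSCSI traffic on its own VLAN is one line per host on the ESX side. A sketch, assuming VLAN ID 20 and a port group named "iSCSI" on vSwitch1 (both are illustrative; the 1800-24G ports must also carry that VLAN as tagged members):

```shell
# Tag the iSCSI VMkernel port group with a dedicated storage VLAN.
esxcfg-vswitch -v 20 -p "iSCSI" vSwitch1      # set VLAN ID 20 on the port group
esxcfg-vswitch -l                             # verify the VLAN column in the listing
vmkping 10.20.0.50                            # confirm the target is still reachable
```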
MSA2012i iSCSI Failover
This is my second time configuring an MSA2012i for a customer, each time in a different scenario. The first time, the MSA2012i came with only a single controller (A), and as we know, no redundant operation can be achieved with a single controller. However, all volumes were still visible on both iSCSI ports of controller A, even though the second vdisk would normally belong to controller B.
Today I had another chance to configure an MSA2012i, but this time it came with dual controllers (A & B). So how does failover actually work on the MSA2012i?
As per the HP site: "The high-availability configuration requires two gigabit Ethernet (GbE) switches. During active-active operation, both controllers' mapped volumes are visible to both data hosts."
By the way, some say that once controller B fails, all mapped volumes which belong to that controller will be taken over by controller A. To avoid any further confusion and misunderstanding, I will do some testing and show you how failover works when one of the controllers (B) fails.
For this purpose, I'll show the test results via the CLI instead of the SMU (GUI):
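Before and after pulling controller B, the same few MSA CLI commands make the ownership change easy to compare. A sketch of the checks I would run (the test itself, and any output, is specific to each array, so nothing here is a claimed result):

```shell
# On the MSA2012i CLI, before the failover test:
show controllers       # confirm both A and B are up and active-active
show vdisks            # note which controller currently owns each vdisk
show volumes           # note the volumes mapped from each vdisk
# ...pull controller B, wait for failover, then repeat the same
# commands and compare the owning controller for each vdisk.
```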