[General] GAP Inc (gapinc.com / gap.com) moves from Windows to Linux

Burhan Khalid burhan.khalid at gmail.com
Tue Aug 4 20:14:36 +03 2009


It depends on what you mean when you say Disaster Recovery. Normally, a DR
site is either a 'warm' or a 'cold' site. You usually do not put your backup
and your DR site in the same place, and it is never good practice to use your
disaster recovery site for load balancing (or anything else) in production,
because you then start relying on your DR site to sustain production loads.

I know that the entire front-end operations of NBK run from web servers (it's
an online app -- using 'staff.nbk.com' as the URL). So everything from opening
accounts to credit cards, debits, etc. is handled as a web app. Using the same
scenario as you, 1000 blades with 300 for the web, the _probable_ setup is:

100 servers as front end cache/proxy machines
200 servers that actually run the business app

That would make up your "front end", not including the database cluster that
is used to support it.
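
To make that split a bit more concrete, below is a toy sketch of what one of
the cache/proxy blades would be doing in front of the app tier. The host
names, port, and round-robin choice are all my own assumptions (a real front
end would use something like Squid, Varnish or HAProxy, not a hand-rolled
proxy):

    # Toy caching reverse proxy: the "100 front-end blades" role, in miniature.
    # Backend names are hypothetical stand-ins for the ~200 app servers.
    import itertools
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    APP_SERVERS = ["http://app-%03d.internal:8080" % i for i in range(1, 201)]
    backends = itertools.cycle(APP_SERVERS)   # naive round-robin load balancing
    cache = {}                                # path -> cached response body

    class CachingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            body = cache.get(self.path)
            if body is None:
                # Cache miss: fetch from the next app server in the pool.
                upstream = next(backends) + self.path
                with urllib.request.urlopen(upstream, timeout=5) as resp:
                    body = resp.read()
                cache[self.path] = body
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Each proxy blade would run something like this, fronted by a
        # load balancer or DNS round robin across the proxy tier.
        HTTPServer(("", 8000), CachingProxy).serve_forever()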

NBK also runs one of the premier data mining and analysis departments,
employing some very ... quirky SQL geeks.

The VMware *certified* people in Kuwait do an application profile before
recommending virtualization. I know one public company whose entire DR site
is running on virtualized copies of their production machines.

I guess it all depends on what your downtime is costing you; in the case of a
bank, I would imagine a lot.
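
To put very rough numbers on that -- everything below is made up purely to
illustrate the trade-off, not anything I know about NBK's actual figures --
here is a quick back-of-envelope calculation:

    # Hypothetical downtime cost vs. availability. The revenue figure is
    # invented; plug in your own numbers.
    revenue_per_hour = 50000          # assumed cost of the portal being down, per hour
    hours_per_year = 24 * 365

    for availability in (0.999, 0.9999, 0.99999):
        downtime_hours = hours_per_year * (1 - availability)
        lost = downtime_hours * revenue_per_hour
        print("{:.3%} uptime -> {:.1f} h down/year -> ~{:,.0f} lost".format(
            availability, downtime_hours, lost))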

On Tue, Aug 4, 2009 at 7:33 PM, Majed B. <majedb at gmail.com> wrote:

> I was actually looking at it from the total CPU horsepower available, but
> you do have a valid point that some are used for Disaster Recovery. Still,
> I don't think they have standby machines just sitting idle; it would make
> more sense to run them in parallel to do load balancing.
>
> Blades are convenient if you want to build a clustering environment,
> since the internal fabric of the blade chassis allows communication
> between the servers, separate from the external network communication.
> Also, because blade servers are relatively small, you can't squeeze a lot
> of RAM into them, so this pushes you towards load balancing and failover
> scenarios between blade servers and between multiple blade chassis.
>
> Assuming 1000 blade servers, let's say that 300 of them are for the web
> portal. Also, assume a minimum CPU clock of 2 GHz. The total CPU power is
> 600 GHz!!!
> NBK's online portal serves Kuwait only (I checked Egypt, Lebanon & UAE,
> and none have a portal). So one would think that there's a huge amount of
> CPU power being wasted somewhere!
>
> Virtualization is a good solution to consolidate hardware, but it
> shouldn't always be done. Some applications benefit from it while others
> take a big performance hit. I don't know if the current VMware suppliers
> and implementers actually do the benchmarking for customers to see
> whether it's worth virtualizing their servers.
>
> On Tue, Aug 4, 2009 at 4:02 PM, Burhan Khalid<burhan.khalid at gmail.com>
> wrote:
> > You have to keep in mind that banks have to deal with outside regulatory
> > requirements, so their blades are also split across DR sites, replication,
> > hot-swaps, etc.
> >
> > A lot of times, for security reasons, they don't let computers run more
> > than one service at one time; so most of the time the reason they are
> > going for blades is for power/cooling/density reasons; not actually
> > because they are utilizing the server completely.
> >
> > For example -- they normally don't run two 'core' services on the same
> > physical machine. So, you would rarely see the BDC and, let's say, print
> > services running on the same physical machine.
> >
> > This is also the reason why virtualization is such a hot topic: it
> > allows for isolation of machines on the same physical host, alleviating
> > (to some extent) the problem of one vulnerability bringing down both
> > services.
> >
> > 1000 blades is nothing:
> >
> > 14 blades per chassis = 72 chassis, each 9U high, so that means you can
> > fit all of that into 16 42U cabinets.
> >
> > Compare that with an average 2U server: 1000 of those will need 48
> > cabinets.
> >
> > 48 vs. 16, just on space -- not to mention cooling and cabling, etc.
> >
> > Regards,
> > --
> > Burhan
>
> --
>        Majed B.
>
> _______________________________________________
> General mailing list
> General at oskw.org
> http://oskw.org/mailman/listinfo/general_oskw.org
>
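
As a rough sanity check on the space comparison in my earlier mail quoted
above (the 14-blade, 9U chassis is an assumption, roughly an IBM
BladeCenter-class box; the figures count raw rack units, so a real layout at
4 chassis per 42U rack would come out closer to 18 racks, which doesn't
change the overall picture):

    # Back-of-envelope rack space: 1000 blades vs. 1000 2U servers.
    import math

    blades, blades_per_chassis = 1000, 14     # chassis size is an assumption
    chassis_u, rack_u = 9, 42

    chassis = math.ceil(blades / blades_per_chassis)          # 72 chassis
    blade_racks = math.ceil(chassis * chassis_u / rack_u)     # ~16 racks by U count
    server_racks = math.ceil(blades * 2 / rack_u)             # 48 racks of 2U servers

    print(chassis, "chassis ->", blade_racks, "racks of blades vs.",
          server_racks, "racks of 2U servers")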