Over the last couple of years, customers have been asking how to enable High Availability for the WebLogic Administration Server.
I designed and implemented WebLogic Administration Server fail-over for a financial client back in 2004 using a Veritas hardware cluster; in 2009 I did the same for a telecom client using an HP Serviceguard hardware cluster.
The cost and overhead of a hardware cluster make customers think twice about implementing Administration Server fail-over. However, after Oracle acquired BEA, the WebLogic Administration Server became an integral part of Oracle Fusion Middleware, which means Administration Server availability is mandatory for major implementations.
Here’s my rule of thumb:
1 If the customer uses WebLogic Server only to host a few Java EE applications, then Administration Server fail-over is not required (note: banks still want Admin Server fail-over).
2 When WebLogic is installed with other Oracle Fusion Middleware products such as Oracle Service Bus, Oracle Internet Directory, or WebCenter, Fusion Middleware Control runs on the Administration Server. When the Administration Server node goes down, Fusion Middleware Control goes down with it. When monitoring components and their associated logic are built into FMW Control, Administration Server availability is mandatory.
Some products enable their application monitoring logic through the Administration Server (e.g., Amdocs Order Management).
Note: failure of the Administration Server does not affect the runtime and life-cycle operations of the Managed Servers, but configuration changes cannot be made until the Administration Server comes back up.
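As the note says, Managed Servers keep running without the Admin Server; they only need its URL at boot time (and with Managed Server Independence enabled they can even boot while it is down). A minimal sketch, assuming hypothetical paths and names, that just assembles and prints the standard start command:

```shell
#!/bin/sh
# All names below are hypothetical placeholders for your environment.
DOMAIN_HOME="${DOMAIN_HOME:-/u01/domains/base_domain}"
ADMIN_URL="${ADMIN_URL:-http://adminhost:7001}"    # Admin Server's listen address
SERVER_NAME="${SERVER_NAME:-managed_1}"

# Managed Servers contact ADMIN_URL when they start, but keep serving
# requests even if the Admin Server later goes down.
START_CMD="$DOMAIN_HOME/bin/startManagedWebLogic.sh $SERVER_NAME $ADMIN_URL"
echo "$START_CMD"
```

The command itself is the standard `startManagedWebLogic.sh <server> <admin-url>` invocation; only the paths are assumptions.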
How to enable Administration Server fail over/HA
Myths
1. I can use a hardware load balancer in front of the Administration Server for fail-over
Answer: No. The Administration Server is a singleton; there is no Active-Active or Active-Passive replication. It can run on only one physical server or VM at a time and cannot be replicated, due to security and design constraints.
2. I can use a WebLogic Cluster for Administration Server fail-over
Answer: No. The Administration Server is a singleton and is not clusterable. Moreover, a WebLogic Cluster provides session fail-over for the deployed components with some load distribution, and it always requires an external component such as a proxy plug-in or a hardware load balancer to fail over.
Methods to implement Administration Server fail over/HA
1 Using Hardware Cluster (Automatic fail over)
The diagram below explains the hardware cluster functionality; the following fail-over scenario applies:
- The hardware cluster provides a floating IP that can move between the two physical servers
- Make the Administration Server listen on the floating IP
- When the first physical node fails, the floating IP moves to the second physical server and the Administration Server can be restarted using rc scripts or hardware cluster package scripts
- The Administration Server data (embedded LDAP, pointers) lives on NAS
- Managed Servers are started through the Administration Server's floating IP (possibly via DNS), so they see no difference in the Administration Server's physical location
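The steps above could be sketched as a cluster package script. This is only an illustration under assumptions (the NIC name, floating IP, and domain path are placeholders; Veritas and Serviceguard each have their own package script formats). It runs in dry-run mode by default and prints the commands instead of executing them:

```shell
#!/bin/sh
# Hypothetical values -- replace with your cluster's floating IP, NIC, and domain path.
FLOAT_IP="${FLOAT_IP:-192.0.2.10}"
NIC="${NIC:-eth0}"
DOMAIN_HOME="${DOMAIN_HOME:-/u01/domains/base_domain}"
DRY_RUN="${DRY_RUN:-1}"    # set to 0 only on a real cluster node

run() {
  # In dry-run mode, print the command instead of executing it.
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

failover_admin() {
  run ip addr add "$FLOAT_IP/24" dev "$NIC"    # plumb the floating IP on this node
  run "$DOMAIN_HOME/bin/startWebLogic.sh"      # restart the Admin Server here
}

failover_admin
```

In a real package, the cluster software moves the IP and calls the start/stop hooks itself; the sketch only shows the order of operations.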
2 Manual fail over
A. After creating the WebLogic domain on the first physical server (with the required Managed Server entries), copy the whole domain to the second physical server.
B. Make the Administration Server listen on a DNS name, and ensure the DNS name resolves to both physical hosts.
C. The Administration Server runs on the first physical server, and the Managed Servers are started through the Administration Server's DNS listen address.
D. When the first physical server fails, log in to the second physical server and start the Administration Server there.
The Managed Servers won't see any difference. Note that this is applicable only when the domain undergoes minimal change from application rollouts and configuration changes. A best practice is to copy the whole domain to the second physical server whenever the domain configuration changes on the first. Another best practice is to keep the domain (at least the Admin Server and config folders) on NAS; the shared data avoids the 'copy' step above.
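The 'copy' best practice can be scripted. This is a sketch under assumptions: the domain paths are hypothetical, and on a real pair of hosts you would typically push to the standby machine with rsync over ssh; a plain cp keeps the example self-contained:

```shell
#!/bin/sh
# Hypothetical paths; on real hosts, rsync to the standby machine instead of cp.
sync_domain_config() {
  src="$1"    # domain directory on the active host
  dest="$2"   # mirror of the domain on the standby host (or its NFS mount)
  mkdir -p "$dest"
  # The config/ folder is the minimum the Admin Server needs to restart
  # elsewhere; the embedded LDAP lives under servers/AdminServer/data
  # and can be copied the same way.
  cp -R "$src/config" "$dest/"
}
```

Run it after every configuration change on the active node, e.g. `sync_domain_config /u01/domains/base_domain /standby/domains/base_domain`. Keeping the domain on NAS, as noted above, makes this step unnecessary.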
From my experience, WebLogic Administration Server availability is critical for Oracle Fusion Middleware-based implementations.
Good luck. If you have any questions/suggestions, please contact me.
Happy WebLogic journey!
Do you by any chance have any pointers about implementing fail-over/failback using WebLogic Standard Edition? The customer has hardware high availability, and therefore I believe it's not necessary to install WebLogic Enterprise Edition. I just need the documentation to prove my statement.
Regards,
Gerardo Brenes Trejos, CEO, GBSYS, S.A., San José, Costa Rica
Hi Gerardo,
I suggest you call Oracle Support and find out whether your customer's license allows them to have a WebLogic Cluster in place.
A WebLogic Cluster and a hardware cluster do two different things for application availability.
A WebLogic Cluster is primarily for session replication and fail-over: when one of the hosting servers (Managed Servers) in a cluster fails, the user can still finish their transaction without any interruption, because the WebLogic Cluster propagates the user session across the cluster members.
A hardware cluster is primarily for application components that are not clusterable (e.g., the WebLogic Admin Server, singleton classes). When you implement a hardware cluster, you end up in Active-Passive mode: only one server in the hardware cluster serves all requests, and the second server is a hot standby. There is more to it, but a hardware cluster doesn't know anything about application session replication.
Hope this answer helps.
Thanks
Lawrence Manickam
very informative Lawrence!!
We have one server that is running only the Administration Server; we want to move to a managed node so that I can decommission the Admin Server. Is this possible? Any steps?
Venkat
Dear, I am having an issue. We are using Oracle 11g DB and WebLogic Server. Whenever an end user connects to the application, v$session shows the WebLogic server name and does not show the actual connecting terminal or operating-system name. Is there any way to get the user's terminal and OS name?
I am new to WebLogic, and I have a scenario discussed below:
2 Linux boxes
1. Installed WLS 10.3.6 binaries on both Linux boxes.
2. Installed Oracle WebCenter Portal binaries on both Linux boxes.
On Node 1: A. Creation of Domain
3. Configured the domain using the config.sh script
a. For the "Configure the Administration Server Screen", set the Listen Address to the default, i.e., All Local Addresses
b. Configured Managed Servers pointing to Node1 and Node2.
c. Configured Clusters for the Managed Servers
d. Configured Machines for Node1 and Node2, e.g., Machine_1 and Machine_2 for each of the Linux boxes.
e. Added Managed Servers under the appropriate Machines
Please note: did not assign the Admin Server to any of the machines.
4. Once the domain creation was completed, configured and started the Nodemanagers on each of the Node.
When found reachable in the console started the Managed Servers.
On Node 1: B. Packing of Domain.
a. Using the pack utility, packed the domain to create a template using -managed=true
./pack.sh -managed=true -domain=/d01/Middleware/user_projects/domains/portal_domain -template=/d01/template.jar -template_name=portal_domain
b. unpacked the template on Node2.
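For completeness, the matching unpack step on Node2 would look roughly like the sketch below (WL_HOME is an assumption; the domain and template paths mirror the pack.sh command above). It only assembles and prints the command string:

```shell
#!/bin/sh
# WL_HOME is a guess at the install layout; the other paths match the
# pack.sh invocation quoted above and are specific to that environment.
WL_HOME="${WL_HOME:-/d01/Middleware/wlserver_10.3}"
UNPACK_CMD="$WL_HOME/common/bin/unpack.sh -domain=/d01/Middleware/user_projects/domains/portal_domain -template=/d01/template.jar"
echo "$UNPACK_CMD"
```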
On Node 2: Starting the Admin Console
a. Ensured the Admin Console and all the managed servers were down.
b. Started the Admin Console on Node2, it was up.
c. Tried starting and stopping the Managed Servers of Node1 and Node2 and it worked without any glitches.
Please note we haven't used any VIP in the above setup.
The Admin Console picks up the local IP every time it is started, whether from Node1 or Node2.
There is no Shared Storage or Load Balancer used in the above setup
Questions
1. Is this a supported Admin Server HA implementation?
2. If we make changes to the configuration while the Admin Server is running on Node1, it will write the changes to the local config.xml. In such a scenario, is the config.xml on Node2 automatically synced, or do we need to copy the config.xml from Node1 to Node2 manually every time we plan to start the Admin Server from Node2 to pick up the new configs?
3. Will the EM Console for FMW products support such a configuration? Will the EM changes be synced automatically as well?
Awaiting your reply
Thank you
Hello Lawrence,
I tried to follow what you did for the Admin Server manual fail-over:
- I have 2 VMs, each with its own IP address
- I set up a DNS name with 2 entries pointing to the 2 VMs (nslookup on this name shows the 2 IPs)
- I configured this DNS name as the Listen Address for the Admin Server
Unfortunately, when I start the Admin Server, it randomly picks one of the 2 IPs. If the IP does not match the current VM, the Admin Server fails to start.
Did I miss something?
Can you explain more what you did for Admin Server manual failover ?
Thank you,
Adrien