
Thursday, June 29, 2017

Oracle eBusiness 12.2 clone: OHS config fails at UpdateOhsT2PMovePlan.java:1063

While running a clone we hit this issue. Across multiple runs, it failed every time at the OHS configuration step.
Under the txkSetOHSConfig directory, provision_<timestamp>.log had the following error.

Updating /YMS-UI-context-root and setting WeblogicCluster to : u060supcw801.kroger.com:6801
Inside run context
StackTrace: java.lang.NullPointerException
    at oracle.apps.fnd.txk.util.UpdateOhsT2PMovePlan.updateModWLSDirectives(UpdateOhsT2PMovePlan.java:812)
    at oracle.apps.fnd.txk.util.UpdateOhsT2PMovePlan.updateModWLSConfig(UpdateOhsT2PMovePlan.java:721)
    at oracle.apps.fnd.txk.util.UpdateOhsT2PMovePlan.updateModWLSConfig(UpdateOhsT2PMovePlan.java:698)
    at oracle.apps.fnd.txk.util.UpdateOhsT2PMovePlan.updateModWLSConfig(UpdateOhsT2PMovePlan.java:698)
    at oracle.apps.fnd.txk.util.UpdateOhsT2PMovePlan.processMoveplan(UpdateOhsT2PMovePlan.java:631)
    at oracle.apps.fnd.txk.util.UpdateOhsT2PMovePlan.main(UpdateOhsT2PMovePlan.java:1063)

Solution:
This was not a recommended approach, but we had to resolve it quickly. So I looked in the log for the point just before the error started.
The configuration was completing up to the /YMS-UI-context-root entry.

I then searched for the same string in comn/clone/FMW/OHS/moveplan.xml.
I took a backup of the file and removed the section immediately following /YMS-UI-context-root. In our case this was /console, along with its host, port, and other details, so I removed that section entirely.
P.S. Follow the existing indentation and identifiers so the section is removed exactly, element for element. If this is not done correctly, the next execution may complain about an invalid moveplan.xml.
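Since a malformed moveplan.xml will fail the next run, it is worth checking well-formedness after hand-editing. A minimal sketch; the XML fragment and element names below are purely illustrative, not the real move plan structure:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment only -- the real moveplan.xml is far larger,
# and these element names are hypothetical.
edited = """<movePlan>
  <configGroup>
    <id>/YMS-UI-context-root</id>
  </configGroup>
</movePlan>"""

def is_well_formed(xml_text):
    """Return True if the text parses as XML, False otherwise."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(edited))       # the intact fragment parses fine
print(is_well_formed(edited[:-4]))  # a truncated copy does not
```

Any equivalent check (for example `xmllint --noout moveplan.xml`) does the same job; the point is simply to validate before rerunning the clone.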

After removing that part from moveplan.xml I ran the clone again, and it completed with some warnings. A few services did not come up correctly, but we were able to resolve those minor issues and bring the application up.

Reason:
We later found that a customization on the console URL was causing the issue. Such customizations should be removed before cloning; otherwise this error is to be expected.

Monday, February 6, 2017

[ERROR] Unable to continue as an existing adop session was found.

Using adop on a 12.2.5 environment. The last session had some issues, so we aborted it. adop -status clearly showed the session as aborted.


Node Name       Node Type  Phase           Status          Started              Finished             Elapsed
--------------- ---------- --------------- --------------- -------------------- -------------------- ------------
xxxxxxxxx101    master     PREPARE         SESSION ABORTED 2017/02/06 15:52:37                       6:55:14
                           APPLY           SESSION ABORTED
                           FINALIZE        SESSION ABORTED
                           CUTOVER         SESSION ABORTED
                           CLEANUP         NOT STARTED

So the next step was to run cleanup, which I tried. It gave the following messages.

Validating system setup.
    Node registry is valid.

Checking for existing adop sessions.

    [ERROR]     Unable to continue as an existing adop session was found.

[STATEMENT] Please run adopscanlog utility, using the command

"adopscanlog -latest=yes"


Surprisingly, the cleanup would not proceed, and checking the logs with the above command (and the log directories themselves) turned up nothing.
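When adopscanlog turns up nothing, you can look for the newest session log directory by hand. A sketch, assuming the standard 12.2 layout of per-session directories under $NE_BASE/EBSapps/log/adop (the throwaway directory below stands in for that path):

```python
import os
import tempfile

def newest_subdir(base):
    """Return the most recently modified subdirectory of base, or None."""
    subdirs = [os.path.join(base, d) for d in os.listdir(base)
               if os.path.isdir(os.path.join(base, d))]
    return max(subdirs, key=os.path.getmtime) if subdirs else None

# Demo on a temporary layout standing in for $NE_BASE/EBSapps/log/adop,
# with two fake session-id directories and forced modification times.
base = tempfile.mkdtemp()
for session, mtime in (("12", 1000), ("13", 2000)):
    path = os.path.join(base, session)
    os.mkdir(path)
    os.utime(path, (mtime, mtime))

print(newest_subdir(base))  # ends with the newest session dir, "13"
```

On a real system you would point `newest_subdir` at the actual adop log base and then read the worker and phase logs inside it.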


I even checked from the database side. The session status was already "C", so nothing more could be done there.

SQL> select adop_session_id, abort_status, prepare_status, apply_status, status, edition_name
  2  from ad_adop_sessions where adop_session_id = 13;

ADOP_SESSION_ID A P A S EDITION_NAME
--------------- - - - - ------------------------------
             13 Y R N C


Digging further, I found the following processes still in execution. It looks like they were started to abandon the patch but never exited.

applmgr 18809034 20381948  58 22:57:32      -  0:00 /..../bin/adzdoptl.pl phase=prepare,apply patches=24390794,23708596 abandon=yes restart=no
applmgr 20381948        1 120 15:49:28      - 71:52 /..../bin/adzdoptl.pl phase=prepare,apply patches=24390794,23708596 abandon=yes restart=no


So I killed these processes and ran the cleanup phase again; this time it completed successfully.
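Finding the lingering processes amounts to scanning ps -ef output for adzdoptl.pl. A rough sketch that pulls the PIDs out of lines like the ones above, assuming the standard ps -ef column order where the PID is the second field:

```python
# Sample ps -ef output from this incident (paths elided in the original).
ps_output = """\
applmgr 18809034 20381948  58 22:57:32      -  0:00 /..../bin/adzdoptl.pl phase=prepare,apply patches=24390794,23708596 abandon=yes restart=no
applmgr 20381948        1 120 15:49:28      - 71:52 /..../bin/adzdoptl.pl phase=prepare,apply patches=24390794,23708596 abandon=yes restart=no"""

def lingering_adop_pids(ps_text):
    """Return PIDs of adzdoptl.pl processes (second ps -ef column)."""
    pids = []
    for line in ps_text.splitlines():
        if "adzdoptl.pl" in line:
            pids.append(int(line.split()[1]))
    return pids

print(lingering_adop_pids(ps_output))  # [18809034, 20381948]
# These PIDs could then be passed to os.kill() -- but double-check
# each process on a real system before killing it.
```

Note that the second PID (20381948) is also the parent of the first, so killing the parent first avoids respawns.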