Oracle service configuration and issues

SYMPTOMS

(1) The database is started automatically when starting a service defined on the database using SRVCTL.

(2) The PDB is started automatically when restarting the CDB if any non-default service is defined on the PDB. By default, a PDB will not start on its own when the CDB is restarted.

CHANGES

CAUSE

This is expected behaviour. SRVCTL manages resources outside the database, so starting a service also starts the database resource it depends on.

SOLUTION

This test case illustrates the behaviour in detail.

Test Case 1: The database is started when attempting to start a non-default service defined on the database. Here <dbname> is a non-CDB database.

[oracle@asmhost ~]$ srvctl add service -service <servicename> -db <dbname>                      >>Adding a new service on database <dbname>.

[oracle@asmhost ~]$ srvctl status service -service <servicename> -db <dbname>
Service <servicename> is not running.

[oracle@asmhost ~]$ srvctl status database -d <dbname>                                              >>Database <dbname> is not running at this point.
Database is not running.

[oracle@asmhost ~]$ srvctl start service -s <servicename> -db <dbname>                          >>Attempting to start the Service <servicename>

[oracle@asmhost ~]$ srvctl status database -d <dbname>                                              >>Database <dbname> is started on its own upon starting the service.
Database is running.

[oracle@asmhost ~]$ srvctl status service -s <servicename> -d <dbname>
Service <servicename> is running

Test Case 2: The CDB and the PDB are started when attempting to start a non-default service defined on the PDB. Here <dbnameCDB> is the CDB.

[oracle@asmhost ~]$ srvctl status database -d <dbnameCDB>                                            >>Database <dbnameCDB> is running.

Database is running.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ------------------------------
PDB$SEED                       READ ONLY
<dbnamePDB1>                   MOUNTED       >>Two PDBs exist on the container database <dbnameCDB>
<dbnamePDB2>                   MOUNTED

[oracle@asmhost ~]$ srvctl add service -service <servicename> -db <dbnameCDB> -pdb <dbnamePDB1>      >>Adding a service on pluggable database <dbnamePDB1>

[oracle@asmhost ~]$ srvctl stop database -d <dbnameCDB>

[oracle@asmhost ~]$ srvctl start service -service <servicename> -db <dbnameCDB>                 >>Attempting to start the service defined on <dbnameCDB> for pluggable database <dbnamePDB1>

[oracle@asmhost ~]$ srvctl status database -d <dbnameCDB>                                            >>CDB is started on its own upon starting the service.
Database is running.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ------------------------------
PDB$SEED                       READ ONLY
<dbnamePDB1>                   READ WRITE    >>PDB is started on its own upon starting the service.
<dbnamePDB2>                   MOUNTED       >>This PDB is not started as no service is defined on it.

Note: 

This behaviour applies only when starting services. The CDB and PDB will not stop automatically when a service is stopped using SRVCTL; the database remains open.

[oracle@asmhost ~]$ srvctl stop service -s <servicename> -d <dbnameCDB>

[oracle@asmhost ~]$ srvctl status database -d <dbnameCDB>                                            >>Stopping the service does not stop the database; it keeps running.
Database is running.
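
For completeness, whether the database resource itself restarts automatically with the node is governed by its management policy, which can be checked with SRVCTL (a sketch using the placeholder names from this note; output truncated to the relevant line):

[oracle@asmhost ~]$ srvctl config database -d <dbnameCDB>
...
Management policy: AUTOMATIC
...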

-------------------------------

GOAL

From version 12.1.0.2 it is possible to save or discard the open mode of one or more PDBs when the CDB restarts.

This article shows

 - what is the default open mode for PDBs

 - how to save or discard the open mode of one or more PDBs when the CDB restarts

 - how to monitor the saved state of the PDBs.

SOLUTION

What is the default open mode for PDBs

The default open mode for a PDB is MOUNTED (except for PDB$SEED, which is READ ONLY and cannot be opened READ WRITE by users).

SYS@cnt122> select CON_ID, NAME, OPEN_MODE, RESTRICTED, OPEN_TIME  from gv$containers;

    CON_ID NAME                 OPEN_MODE  RESTRICTED OPEN_TIME
---------- -------------------- ---------- ---------- -----------------------------------
         1 CDB$ROOT             READ WRITE NO         08-OCT-14 09.14.42.775 +01:00
         2 PDB$SEED             READ ONLY  NO         08-OCT-14 09.14.42.873 +01:00
         3 <PDB1_NAME>          MOUNTED
         4 <PDB2_NAME>          MOUNTED

How to save or discard the open mode of a PDB when the CDB restarts

You use the ALTER PLUGGABLE DATABASE SQL statement with the pdb_save_or_discard_state clause.

The idea is that you can save the current open state of a PDB so that, when the CDB restarts, the PDB returns to that open mode.

When you discard the previously saved open mode of a PDB, the PDB will be in MOUNTED mode after the CDB restarts.

E.g. to save the current open state for <PDB1_NAME>, execute the following:

SQL> ALTER PLUGGABLE DATABASE <PDB1_NAME> SAVE STATE;

Pluggable database altered.

E.g. to discard the saved state for <PDB1_NAME>, execute the following:

SQL> ALTER PLUGGABLE DATABASE <PDB1_NAME> DISCARD STATE;

Pluggable database altered.

For an Oracle RAC CDB, you can use the INSTANCES clause within the pdb_save_or_discard_state clause to specify the instances on which a PDB's open mode is preserved.
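
E.g. (a sketch; the instance names are placeholders) to preserve or discard a PDB's open mode only on selected instances:

SQL> ALTER PLUGGABLE DATABASE <PDB1_NAME> SAVE STATE INSTANCES = ('<INST1>', '<INST2>');

SQL> ALTER PLUGGABLE DATABASE <PDB1_NAME> DISCARD STATE INSTANCES = ALL;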

More information can be found in the documentation.

How to monitor the saved state of the PDBs

You can use the DBA_PDB_SAVED_STATES view to see the saved state for PDBs; see details in the documentation.

In the following example we save the OPEN state of <PDB1_NAME>, so that the next time the CDB restarts, <PDB1_NAME> will be opened instead of mounted.

SQL> STARTUP PLUGGABLE DATABASE <PDB1_NAME> open
Pluggable Database opened.
SQL> select CON_ID, NAME, OPEN_MODE, RESTRICTED, OPEN_TIME  from gv$containers;

    CON_ID NAME                 OPEN_MODE  RESTRICTED OPEN_TIME
---------- -------------------- ---------- ---------- ---------------------------------------------------------------------------
         1 CDB$ROOT             READ WRITE NO         08-OCT-14 09.14.42.775 AM +01:00
         2 PDB$SEED             READ ONLY  NO         08-OCT-14 09.14.42.873 AM +01:00
         3 <PDB1_NAME>          READ WRITE NO         08-OCT-14 09.28.26.830 AM +01:00
         4 <PDB2_NAME>          MOUNTED

SQL> select con_name, state from dba_pdb_saved_states;

no rows selected

SQL> ALTER PLUGGABLE DATABASE <PDB1_NAME> SAVE STATE;

Pluggable database altered.

SQL> select con_name, state from dba_pdb_saved_states;

CON_NAME             STATE
-------------------- --------------
<PDB1_NAME>          OPEN

Only the saved state is recorded. Once the DISCARD STATE command is executed for a PDB, the saved state entry for that PDB is removed from DBA_PDB_SAVED_STATES.

SQL> ALTER PLUGGABLE DATABASE <PDB1_NAME> DISCARD STATE;

Pluggable database altered.

SQL> select con_name, state from dba_pdb_saved_states;

no rows selected

As mentioned above, saving the open state of a PDB is available since 12.1.0.2.
On 12.1.0.1 you may create a database startup trigger to place PDB(s) into a particular open mode at database startup.

e.g. To open all PDBs at CDB startup, create the following trigger in CDB:

CREATE TRIGGER open_all_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END;
/
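
If only specific PDBs should be opened at startup, the trigger can name them instead of using ALL (a sketch; <PDB1_NAME> is a placeholder):

CREATE TRIGGER open_pdb1
  AFTER STARTUP ON DATABASE
BEGIN
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE <PDB1_NAME> OPEN';
END;
/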

-------------- Trigger issues

SYMPTOMS

A RAC database service is started on an incorrect node in the cluster (for a RAC PDB).

Sessions are spawned on that node for instance connections, and client connections are failing.

The service resource is configured with preferred = node01 and available = node02. However, the service was created and started on node03.
 

CHANGES

A startup trigger was created for the RAC database.
 

CAUSE


The startup trigger is configured incorrectly. The trigger was defined to start all services, so whichever instance starts first opens all of them. There is no way to control which instance starts first, so when the trigger opens all services, services can start on a non-preferred instance.

Example of an incorrectly defined startup trigger for a PDB:

create or replace trigger open_all_pdbs
after startup on database
begin
execute immediate ' alter pluggable database all open services=ALL';                <<====HERE
end open_all_PDBS;
/

SOLUTION

Remove the 'services=ALL' clause from the startup trigger, recreating the trigger as the owning user or as SYSDBA.

Example:

FROM:

create or replace trigger open_all_pdbs
after startup on database
begin
execute immediate ' alter pluggable database all open services=ALL'; 
end open_all_PDBS;
/

TO:

create or replace trigger open_all_pdbs
after startup on database
begin
execute immediate ' alter pluggable database all open';
end open_all_PDBS;
/

In this example, removing the SERVICES option will still open the PDB on all instances, but the service(s) will be maintained by CRS. This allows the service(s) to be registered correctly with only the defined preferred instance; a quick verification is sketched below.
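
To verify the placement after the trigger change, the standard SRVCTL checks can be used (a sketch with placeholder names):

srvctl config service -db <dbname> -service <servicename>
srvctl status service -db <dbname> -service <servicename>

The config output lists the preferred and available instances; the status output should report the service on the preferred instance only.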

-----------------------------

Saving the state for PDBs for RAC databases is not recommended. 

Oracle RAC will open PDBs on a node if services are defined on that PDB, so it is no longer necessary to save state in Oracle RAC environments.
With saved state, PDBs may open on nodes where that was not intended.
Per note "Services running simultaneously on preferred and available instances in a multitenant RAC database (Doc ID 2757584.1)":

> In RAC it is not recommended to save the state of PDBs. RAC will open the PDBs on a node if the services are defined on that PDB. 
> It is no longer necessary to save the state in RAC environments.
> Saving state leads to opening the service/PDB on the nodes where it is not intended and the performance may be affected adversely.
> An additional check is introduced in Oracheck to give warning about the saved state.

SYMPTOMS

If the pluggable database is bounced, the services for that database go down but do not automatically restart when the pluggable database starts up, even though the management policy is set to AUTOMATIC.

The service was created as:

srvctl add service -d orcl -s pdborcl_srv -pdb pdborcl -P BASIC -q TRUE -e SESSION -m BASIC -role PRIMARY,SNAPSHOT_STANDBY -r orcl1,orcl2

or using the dbms_service.create_service procedure (where srvctl cannot be used).
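
For reference, a minimal DBMS_SERVICE variant of the same service creation (a sketch; run it inside the PDB, reusing the names from the srvctl example above):

SQL> alter session set container = pdborcl;
SQL> exec dbms_service.create_service(service_name => 'pdborcl_srv', network_name => 'pdborcl_srv');
SQL> exec dbms_service.start_service('pdborcl_srv');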


The PDB was bounced using SQL*Plus commands:

alter pluggable database pdborcl close immediate;
alter pluggable database pdborcl open;


The services do not auto-start when the PDB is opened.

CHANGES

CAUSE

Clusterware does not model the PDB as a resource, so there is no direct dependency on the PDB. What is observed is by design: starting the service will 'pull up' the PDB through an agent action, but stopping the service will not close the PDB. You need to return to SQL to close the PDB, as SRVCTL has no command to close PDBs.
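
For example, to take the service and then the PDB down cleanly, stop the service with SRVCTL and close the PDB from SQL (a sketch using the names from this note):

[oracle@asmhost ~]$ srvctl stop service -db orcl -service pdborcl_srv

SQL> alter pluggable database pdborcl close immediate;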
 

SOLUTION

Specify the SERVICES clause when opening the PDB:
 

alter pluggable database pdborcl open services=('pdborcl_srv');
alter pluggable database pdborcl open services=ALL;

-- What to do if the PDB is already started: generate the commands to start its services manually.

select 'alter session set container='||pdb||';'||chr(10)||
       'exec dbms_service.start_service('''||name||''');'
  from cdb_services;
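
For the service used in this note, the generated commands would look like this (illustrative output):

alter session set container=PDBORCL;
exec dbms_service.start_service('pdborcl_srv');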

Enhancement request 20993808 is implemented in release 21.1 so that default services are considered when opening a PDB. Patch 20993808 introduces a new parameter, AUTO_START_PDB_SERVICES, which can be set to TRUE so that all user services in a PDB are started automatically.

Apply interim patch 20993808, if available for your platform and Oracle version. If no patch exists for your version, contact Oracle Support for a backport request.
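
Once the patch is in place (or on 21c and later), enable the parameter; this sketch assumes AUTO_START_PDB_SERVICES can be set with ALTER SYSTEM at the CDB level, so verify the scope against your version first:

SQL> -- Assumption: the parameter exists in this version (patch 20993808 / 21.1)
SQL> alter system set auto_start_pdb_services = true;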

------------------------------- Saving the state for PDBs for RAC databases is not recommended.

SYMPTOMS

NOTE: In the images, examples and document that follow, user details, cluster names, hostnames, directory paths, filenames, etc. represent a fictitious sample (and are used to provide an illustrative example only). Any similarity to actual persons, or entities, living or dead, is purely coincidental and not intended in any manner.

The service is configured with cardinality one and is expected to run on a single instance. However, it was started on two instances, and CRS is unaware that the service is running on more than one instance.
This leads to:

  • Server-side load balancing issues.
    • As the service(s) start on multiple instances, this can overload the nodes.

  • Performance degradation.
    • As users connect to multiple instances, this can lead to cluster waits, because multiple instances need access to the same set of data depending on the workload.

srvctl config service -service MY_SVC1_PRD -db RACDB

Service name: MY_SVC1_PRD 
Cardinality: 1

..

Edition:
Pluggable database name: PDB1
...

Preferred instances: RACDB1
Available instances: RACDB2,RACDB3 

Note: the above output is truncated to show only the relevant values.


CONFIGURATION

  • 3 node RAC Database.
  • Pluggable database PDB1 runs on all the instances.
  • Database configuration:

      DB UNIQUE NAME    RACDB
      DB NAME           RACDB
      SERVICE_NAMES     RACDB.<domain_name>
      INSTANCE_NAMES    RACDB1, RACDB2, RACDB3
      HOST NAMES        RACDB1, RACDB2, RACDB3
      PDB               PDB1

  • Service configuration:

      SERVICE NAME   PREFERRED INSTANCES   AVAILABLE INSTANCES   PDB
      MY_SVC1_PRD    RACDB1                RACDB2, RACDB3        PDB1
      MY_SVC2_PRD    RACDB2                RACDB1, RACDB3        PDB1
      MY_SVC3_PRD    RACDB2                RACDB1, RACDB3        PDB1

  •  Services are running on the preferred instances as per srvctl  and crsctl  

srvctl status service -db RACDB

Service MY_SVC1_PRD is running on instance(s) RACDB1
Service MY_SVC2_PRD is running on instance(s) RACDB2
Service MY_SVC3_PRD is running on instance(s) RACDB2 

crsctl stat res -t 

...

ora.RACDB.MY_SVC1_PRD.svc
           1     ONLINE ONLINE     RACDB1     STABLE

ora.RACDB.MY_SVC2_PRD.svc
           2     ONLINE ONLINE     RACDB2     STABLE

ora.RACDB.MY_SVC3_PRD.svc
           2     ONLINE ONLINE     RACDB2     STABLE 


  •  But users are able to connect to multiple instances. 

E.g. MY_SVC1_PRD_USER is expected to connect only to instance RACDB1, as the service is running on RACDB1. But MY_SVC1_PRD_USER is able to make new connections to both RACDB1 and RACDB2.

SQL> select INST_ID, USERNAME, SERVICE_NAME, count(*) from gv$session where username like 'MY_SVC1_PRD_USER%' group by INST_ID, USERNAME, SERVICE_NAME order by INST_ID, USERNAME, SERVICE_NAME;

INST_ID     USERNAME         SERVICE_NAME    COUNT(*)
---------- ----------------  --------------- ---------
1          MY_SVC1_PRD_USER  MY_SVC1_PRD     51
2          MY_SVC1_PRD_USER  MY_SVC1_PRD     53

  

CHANGES

 Instance termination and service failover.

CAUSE

  • The local listeners on both the nodes as well as SCAN listeners are aware of the service MY_SVC1_PRD.
  • Hence the user MY_SVC1_PRD_USER is able to connect to both instances, and the connections are load balanced across them.

INSTANCE 1 : [ Hostname - RACDB1.<domain_name> , Instance name : RACDB1 ]

RACDB1> lsnrctl status LISTENER_SCAN1

Service "MY_SVC1_PRD.<domain_name>" has 2 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service...
      Instance "RACDB2", status READY, has 1 handler(s) for this service...

RACDB1> lsnrctl status LISTENER

Service "MY_SVC1_PRD.<domain_name>" has 1 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service... 

INSTANCE 2 : [ Hostname - RACDB2.<domain_name> , Instance name : RACDB2 ]

RACDB2> lsnrctl status LISTENER_SCAN2

Service "MY_SVC1_PRD.<domain_name>" has 2 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service...
      Instance "RACDB2", status READY, has 1 handler(s) for this service...

RACDB2> lsnrctl status LISTENER

Service "MY_SVC1_PRD.<domain_name>" has 1 instance(s).
       Instance "RACDB2", status READY, has 1 handler(s) for this service...

  • The database is aware of the service, which is running on instances RACDB1 and RACDB2.

SYS:RACDB1>select inst_id, service_id, name, con_name, creation_date from gv$active_services where name='MY_SVC1_PRD';

INST_ID    SERVICE_ID NAME           CON_NAME     CREATION_DATE
---------- ---------- -------------- ------------ -------------
1           1         MY_SVC1_PRD    PDB1         05-MAR-21
2           1         MY_SVC1_PRD    PDB1         05-MAR-21 

  • The reason for this behavior is that the PDB state was saved in the database.

SYS:RACDB1>select * from dba_pdb_saved_states;

CON_ID     CON_NAME   INSTANCE_NAME  CON_UID    GUID                             STATE
---------- ---------- -------------- ---------- -------------------------------- -----
3          PDB1       RACDB1         925456677  A01B66341CC01148E0539648EC0A28D0 OPEN
3          PDB1       RACDB2         595314999  A839B67599FF7448E0539648EC0A2842 OPEN

PDB SAVE STATE

  • You can preserve the open mode of one or more PDBs when the CDB restarts by using the ALTER PLUGGABLE DATABASE SQL statement with a pdb_save_or_discard_state clause.
  • You can do this in the following way: 
    • Specify SAVE STATE to preserve the PDBs' mode when the CDB is restarted.
      • For example,
        • If a PDB is in open read/write mode before the CDB is restarted, then the PDB is in open read/write mode after the CDB is restarted.
        • If a PDB is in mounted mode before the CDB is restarted, then the PDB is in mounted mode after the CDB is restarted.
    • Specify DISCARD STATE to ignore the PDBs' open mode when the CDB is restarted.
      • When DISCARD STATE is specified for a PDB, the PDB is always mounted after the CDB is restarted.
         
  • Refer note Document 1933511.1 How to Preserve Open Mode of PDBs When the CDB Restarts for more details.
  • When saving the state of PDBs, the details of the PDBs and their state are saved in DBA_PDB_SAVED_STATES, and the running service details of the respective PDBs are recorded in pdb_svc_state$ (see the query sketch after this list). As the PDB/service state was saved on multiple instances, it overrides the CRS service configuration and starts the service on all the saved instances. (NOTE: even if "srvctl disable service" was performed on the service at the CRS level, the PDB saved state would override this as well.)
  • As the services were started automatically due to the PDB saved state and not through the cluster, CRS is not aware that the service is running on multiple instances and reports it as running on one instance.
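
To see what was recorded, the internal table can be queried directly as SYS in CDB$ROOT (a sketch; the column layout of pdb_svc_state$ varies by version, so select everything):

SYS:RACDB1> select * from pdb_svc_state$;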

SOLUTION

  • In RAC it is not recommended to save the state of PDBs. RAC will open the PDBs on a node if the services are defined on that PDB.
  • It is no longer necessary to save the state in RAC environments.
  • Saving state leads to opening the service/PDB on the nodes where it is not intended and the performance may be affected adversely.
  • An additional check is introduced in ORAchk to give a warning about the saved state.


Immediate solution:

  1. Discard the saved state of PDBs from all instances.
    • SQL> alter pluggable database all discard state instances=all; -- Discard the saved state on all instances
      SQL> select * from dba_pdb_saved_states; -- Check that the state is cleared

  2. Find the status of the service.
    • SQL> select inst_id, name,con_name from gv$active_services where name='<SERVICE_NAME>';

       
  3. Change the container to the specific PDB where the service is running (CON_NAME in the above output gives the PDB name).
    • SQL> alter session set container=<PDB_NAME>;

        
  4. Stop the service using dbms_service procedure on the specific instance where the service should not be running
    • SQL> exec dbms_service.stop_service('<SERVICE_NAME>','<INSTANCE_NAME>');

        
  5. Verify that the service is now running only on the expected node.
    • SQL> select inst_id, name,con_name from gv$active_services where name='<SERVICE_NAME>';

        
  • EXAMPLE
    • SYS:RACDB2> alter pluggable database all discard state instances=all;

      Pluggable database altered.
       

      SYS:RACDB2> select * from dba_pdb_saved_states;

      no rows selected.

SYS:RACDB2> select inst_id, service_id, name, con_name from gv$active_services where name='MY_SVC1_PRD';

      INST_ID    SERVICE_ID NAME          CON_NAME
      ---------- ---------- ------------- -----------
      1          3          MY_SVC1_PRD   PDB1
      2          3          MY_SVC1_PRD   PDB1

      SYS:RACDB2> alter session set container=PDB1;

      Session altered.
       

SYS:RACDB2> exec dbms_service.stop_service('MY_SVC1_PRD','RACDB2');

      PL/SQL procedure successfully completed.

SYS:RACDB2> select inst_id, service_id, name, con_name from gv$active_services where name='MY_SVC1_PRD';

      INST_ID    SERVICE_ID  NAME         CON_NAME
      ---------- ---------- ------------- -----------
      1          3          MY_SVC1_PRD   PDB1 

Note: The above steps will not disconnect the existing connections on instance 2, but they will prevent new connections to instance 2. Existing connections must be killed explicitly from the application or the database.

Long-term solution:

  • Avoid saving the state on PDBs on RAC databases as RAC will open the PDBs on a node if the services are defined on that PDB.
  • Introduce monitoring on DBA_PDB_SAVED_STATES to make sure that states are not saved accidentally; a minimal check is sketched below.
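
A minimal check that could be scheduled for such monitoring (it should return no rows in a RAC environment that follows this recommendation):

SQL> select con_name, instance_name, state from dba_pdb_saved_states;

no rows selected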
