ORACLE_SID ... you can have different versions of the Oracle database on one server; the ORACLE_SID identifies which instance you are addressing.
sqlplus / as sysdba
startup
The program starts by reading a file called spfile<ORACLE_SID>.ora, which contains the parameters for the instance identified by the ORACLE_SID.
It is a binary file.
The init.ora file is a text file that historically predates the binary spfile.
The database parameters give the size of the shared memory area, which is called the SGA (System Global Area).
A number of utility processes are started; on Windows these are threads.
When the memory is allocated and the processes are started, SQL*Plus displays the total SGA size,
as well as the sizes of its parts: the fixed size, the variable size, the database buffers and the redo buffers.
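The four figures that SQL*Plus prints add up to the total SGA size. A toy illustration in Python, with made-up byte counts (the real figures depend on your instance parameters):

```python
# Hypothetical SGA component sizes, in bytes (illustrative values only).
sga_parts = {
    "Fixed Size": 2_932_632,
    "Variable Size": 452_984_832,
    "Database Buffers": 318_767_104,
    "Redo Buffers": 13_840_384,
}

total_sga = sum(sga_parts.values())
print(f"Total System Global Area {total_sga} bytes")
for name, size in sga_parts.items():
    print(f"{name:>16} {size:>12} bytes")
```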
At this point, Oracle opens and reads a file that is so critical that it is always duplicated: the control file.
It is found at the location specified in the parameter file.
This is the file where Oracle finds the names of all the data files, temp files and log files that together make up the database.
The control file also allows Oracle to check global consistency: whether something needs to be restored, whether the database was properly shut down, whether an unfinished transaction has to be rolled back.
Without the control file, your database is dead.
After passing this stage, SQL*Plus displays that the database is mounted,
which mostly means that all files are identified.
The only step that remains is opening the various files for read and write operations.
When the files are opened, the database is open, and users who are not DBAs can connect.
That is, they can connect if they are allowed to connect to the machine hosting the database.
If you want to enable users or an application server to connect from another machine, which is the most common case, you have to launch with the lsnrctl start command another program called tnslsnr, the "listener", which waits for incoming connections. After displaying a lot of intimate details about its configuration, found in the listener.ora file, the program runs in the background and everything is in place.
Let's now consider a client program running on another machine. If it wants to access the database, it needs to provide three pieces of information:
1. the name of the host on which Oracle runs,
2. the port on which tnslsnr is listening, and
3. the service, i.e. the identifier of the database you want to connect to.
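Those three pieces are exactly what goes into a JDBC thin-driver URL. A minimal sketch in Python; the host, port and service name below are made up for illustration:

```python
def jdbc_thin_url(host: str, port: int, service: str) -> str:
    """Assemble the three pieces of connection information into a JDBC
    thin-driver URL (EZConnect-style //host:port/service syntax)."""
    return f"jdbc:oracle:thin:@//{host}:{port}/{service}"

print(jdbc_thin_url("dbhost.example.com", 1521, "ORCL"))
# jdbc:oracle:thin:@//dbhost.example.com:1521/ORCL
```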
There are several ways to provide this connection information.
You can directly provide everything; this is for instance what you do with JDBC and a pure Java driver.
If your client uses the Oracle client libraries, you can provide an alias, and the client will automatically fetch the associated information from a file called tnsnames.ora.
These two methods are by far the most common ones, but there are other possibilities, such as using an LDAP directory, or system utilities such as Network Information Services as a substitute for the tnsnames.ora file.
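What the client libraries do with an alias can be sketched as a simple lookup. The entry below is a simplified tnsnames.ora fragment (real files can carry many more options: failover, load balancing, multiple addresses, ...), and the extraction is deliberately crude:

```python
import re

# A simplified tnsnames.ora entry (hypothetical host and service name).
TNSNAMES = """
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL))
  )
"""

def resolve_alias(tnsnames_entry: str) -> dict:
    """Crude extraction of the three pieces of connection information."""
    host = re.search(r"HOST\s*=\s*([\w.-]+)", tnsnames_entry).group(1)
    port = int(re.search(r"PORT\s*=\s*(\d+)", tnsnames_entry).group(1))
    service = re.search(r"SERVICE_NAME\s*=\s*([\w.-]+)", tnsnames_entry).group(1)
    return {"host": host, "port": port, "service": service}

print(resolve_alias(TNSNAMES))
```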
Obviously, Oracle will not give uncontrolled access: you must sign on so that you are identified and the database knows exactly what you are authorized to do.
Usually you provide a username and a password; authentication through an LDAP directory is also sometimes possible.
In practice, your client issues a connection request that is handled by the listener. The listener either spawns a sub-process that runs the Oracle code and becomes the client's dedicated server, or it redirects the connection to an already existing shared server, which is a much less common configuration.
The server process is, so to speak, the client's proxy on the Oracle side. It can access the SGA, which is shared between all processes. It also allocates a private memory area called the PGA (Program Global Area), where it stores what is private to the process and to the session.
But let's take a closer look at the various parts of the SGA. There isn't much to say about the fixed system area.
The variable area contains the shared pool, which holds compiled versions of queries, as well as various buffer pools that contain, for instance, data dictionary information.
The data buffer contains data from the tables, but also index structures and various work areas.
Everything is split into pages, or blocks; the block is the basic Oracle storage unit, and a typical block size is 8 KB. Whenever you query or change the content of a table, you operate in memory on blocks that are in this cache.
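Since the block is the unit of storage, sizes are always rounded up to whole blocks. A quick sanity-check sketch, assuming the typical 8 KB block size:

```python
BLOCK_SIZE = 8 * 1024  # a typical Oracle block size: 8 KB

def blocks_needed(segment_bytes: int, block_size: int = BLOCK_SIZE) -> int:
    """How many blocks a segment of the given size occupies (ceiling division)."""
    return -(-segment_bytes // block_size)

print(blocks_needed(1_000_000))   # a ~1 MB table needs 123 blocks of 8 KB
print(blocks_needed(8 * 1024))    # exactly one block
```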
Finally, a fourth area is used to store information before it is written to the log files, to ensure transactional consistency even in the event of a crash.
The parts that really matter for understanding how Oracle works are the shared pool, the data buffer and the log buffer. This is where the dedicated server process operates, as well as the main utility processes started with the database.
Let's see what happens when the client issues an SQL query.
select empno, ename, job from emp where ename like 'S%'
The query arrives as text to the server process, which starts by computing a hash value that is used as a checksum.
Hash Value = 3046656596
Then the server checks whether this query has been executed recently, by inspecting the shared pool.
If the query is found, the analysis stops here; this is called soft parsing, and the server process can execute the query.
If a prior query with exactly the same text cannot be found, then we have hard parsing, which is CPU intensive and locks some shared resources.
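The soft-parse/hard-parse decision boils down to a lookup keyed by the hash of the statement text. A toy model in Python (CRC32 stands in for Oracle's internal hash function, and the "plan" is just a string):

```python
import zlib

shared_pool = {}  # hash of the SQL text -> "compiled" statement (toy model)

def parse(sql: str):
    """Return a (pretend) execution plan plus the kind of parse performed."""
    key = zlib.crc32(sql.encode())       # stand-in for Oracle's internal hash
    if key in shared_pool:               # found: soft parse, cheap
        return shared_pool[key], "soft"
    plan = f"PLAN for: {sql}"            # not found: hard parse, expensive
    shared_pool[key] = plan
    return plan, "hard"

sql = "select empno, ename, job from emp where ename like 'S%'"
print(parse(sql)[1])   # hard -> first execution, full analysis
print(parse(sql)[1])   # soft -> identical text found in the shared pool
```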
The SQL syntax must be analyzed, the objects referred to in the query must be identified, some synonyms may have to be translated, and the best execution plan must be determined among a sometimes very large number of possible execution plans.
This may cause recursive queries on the data dictionary and input/output operations. This is why you shouldn't concatenate to the text of a query identifiers that change with every call; you should pass them as parameters instead, as if the query were a function.
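The point about parameters is easy to see with the hash-lookup model: concatenated literals change the text, and therefore the key, on every call, while a bind variable keeps the text stable. A sketch (again using CRC32 as an illustrative stand-in for Oracle's hash):

```python
import zlib

def shared_pool_key(sql: str) -> int:
    """Checksum used as the lookup key (illustrative; Oracle's hash differs)."""
    return zlib.crc32(sql.encode())

# Concatenated literal: every call produces a different text, hence a
# different key, hence a hard parse each time.
literal_keys = {shared_pool_key(f"select ename from emp where empno = {n}")
                for n in (7369, 7499, 7521)}

# Bind variable: the text is always the same, so after the first hard parse
# every subsequent execution is a soft parse.
bind_keys = {shared_pool_key("select ename from emp where empno = :empno")
             for _ in range(3)}

print(len(literal_keys))  # 3 -> three separate hard parses
print(len(bind_keys))     # 1 -> one hard parse, then reuse
```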
Once the query is analyzed and compiled, it is loaded into the shared pool and executed.
It then references pieces of data that may or may not be in the data buffer.
If the data blocks are not in memory, the server process fetches them from the data files and loads them.
Blocks that have been modified are written asynchronously to the data files by one or several database writer processes (DBWR).
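A toy model of that fetch-on-miss, write-back-later behavior (the dictionaries are not real Oracle structures, just an illustration of the flow):

```python
# Toy buffer cache: blocks are read from "disk" on a miss; modified blocks
# are marked dirty and written back later by a DBWR-like flush.
disk = {1: "block 1 data", 2: "block 2 data"}   # pretend data file
cache = {}                                      # pretend buffer cache
dirty = set()

def get_block(block_no: int) -> str:
    if block_no not in cache:          # cache miss: physical read
        cache[block_no] = disk[block_no]
    return cache[block_no]

def modify_block(block_no: int, data: str):
    get_block(block_no)                # make sure the block is in memory
    cache[block_no] = data
    dirty.add(block_no)                # DBWR will write it back, later

def dbwr_flush():
    """What the database writer does, asynchronously and much later."""
    for block_no in sorted(dirty):
        disk[block_no] = cache[block_no]
    dirty.clear()

modify_block(1, "new data")
print(disk[1])    # still "block 1 data": the write has not happened yet
dbwr_flush()
print(disk[1])    # "new data": the dirty block reached the data file
```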
When you update data, the mechanisms for analyzing the statement and loading data blocks into memory are the same.
But before the data is modified, the initial values are copied to a work area called an "undo segment".
Another process that wants to read the data being modified is not blocked; it reads the data from the undo segment instead of reading it from the table block.
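That read-consistency mechanism can be modeled in a few lines: the writer copies the initial value to undo before touching the block, and a concurrent reader is served from undo instead of waiting. A sketch with made-up names and values:

```python
# Toy model of read consistency (not real Oracle structures).
table_block = {"empno": 7369, "sal": 800}
undo_segment = {}

def update(column: str, new_value):
    """Uncommitted change: save the initial value to undo, then modify."""
    undo_segment[column] = table_block[column]
    table_block[column] = new_value

def consistent_read(column: str):
    """A concurrent reader is not blocked; it gets the pre-change value."""
    if column in undo_segment:          # change not yet committed
        return undo_segment[column]
    return table_block[column]

update("sal", 900)                      # transaction in progress
print(consistent_read("sal"))  # 800 -> other sessions still see the old value
print(table_block["sal"])      # 900 -> the block itself holds the new value
```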
The initial and new values are also copied to the redo buffer.
If the redo buffer fills up, the utility process LGWR (the log writer) dumps its content to a log file.
When you commit the transaction, Oracle records that the initial values are no longer required for rolling back the change, records that the change is validated, and the LGWR process writes the transaction to the log file.
This time the operation is synchronous, and the server process waits for the go-ahead before the call returns to the client.
That means that if you commit every single update in a loop, you will waste a lot of time just waiting for acknowledgments.
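The cost of committing inside the loop is easy to see by counting the synchronous waits. A toy model, where each COMMIT is one wait for LGWR's go-ahead:

```python
def run(rows: int, commit_every: int) -> int:
    """Count the synchronous LGWR waits incurred while updating `rows` rows,
    committing every `commit_every` updates (toy model)."""
    sync_waits = 0
    for i in range(1, rows + 1):
        # ... update row i: redo goes to the redo buffer, asynchronously ...
        if i % commit_every == 0:
            sync_waits += 1          # COMMIT: wait for LGWR's acknowledgment
    return sync_waits

print(run(10_000, commit_every=1))       # 10000 waits: commit each update
print(run(10_000, commit_every=10_000))  # 1 wait: single commit at the end
```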