Kamran Agayev's Oracle Blog

Oracle Certified Master

Step by step video tutorial of installing Oracle Database 12c on OEL 6

Posted by Kamran Agayev A. on July 2nd, 2013

In this video tutorial you’ll learn how to install Oracle Database 12c on Oracle Enterprise Linux 6

Oracle12c_installation

 

You can download .mp4 format of the video from this link

You can watch it online from this link  (Don’t forget to watch it in HD format)

You can download the installation configuration file from this link

Posted in Administration, Oracle on Linux, Video Tutorials | 5 Comments »

Moving RAC redolog files to the new ASM diskgroup

Posted by Kamran Agayev A. on May 17th, 2013

Here are the steps you need to perform in order to move redo log files to a different ASM diskgroup in a RAC environment.

First of all, make sure the system administrators have provided you with shared disks. If you're using asmlib, create the disks using oracleasm as follows:

[oracle@node1 ~]$ sudo /etc/init.d/oracleasm createdisk RLOGS /dev/mapper/mpath6

Then run scan and query new disks from other nodes:

[oracle@node2 ~]$ sudo /etc/init.d/oracleasm scandisks
[oracle@node2 ~]$ sudo /etc/init.d/oracleasm listdisks

Once the disk appears in the list, check it under the /dev/oracleasm/disks directory:
[oracle@node1 ~]$ ls -ltr /dev/oracleasm/disks/

If the disk is visible, everything is fine. Now you can connect to the ASM instance and create a diskgroup. First, get the ASM instance name from the OS and connect to it:
ps -ef | grep ASM
export ORACLE_SID=+ASM1
sqlplus / as sysdba

SQL> create diskgroup RLOGS external redundancy
  2  DISK '/dev/oracleasm/disks/RLOGS';

Diskgroup created.

SQL>

If it's not mounted on the remaining nodes, connect to those nodes and mount it manually:
SQL>alter diskgroup RLOGS mount;

Also make sure the path of the new disk is covered by the asm_diskstring parameter on each ASM instance.
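For reference, the discovery string can be checked and extended from the ASM instance; a minimal sketch, assuming the asmlib default disk path (adjust it to your environment):

```sql
-- Show the current discovery string on the ASM instance
SHOW PARAMETER asm_diskstring

-- Extend it if the new disk's path is not already covered
-- (SCOPE=BOTH assumes the instance uses an spfile)
ALTER SYSTEM SET asm_diskstring = '/dev/oracleasm/disks/*' SCOPE=BOTH;
```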

After the disk group is created, create one new redo log group per thread, and drop one redo log group with INACTIVE status from the same thread.
You can check the status of any group based on thread using V$LOG view:

SELECT group#, thread#, status
  FROM v$log
 WHERE thread# = 2
 ORDER BY 2, 1;

    GROUP#    THREAD# STATUS
---------- ---------- ----------------
        20          2 CURRENT
        21          2 INACTIVE

Now add a new group to the same thread (thread#=2) and check the V$LOGFILE view:

SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 22 ('+RLOGS') SIZE 50M;

SELECT lf.group#, lf.member
  FROM v$logfile lf, v$log l
 WHERE lf.group# = l.group# AND l.thread# = 2;

    GROUP# MEMBER
---------- -------------------------------------------------------
        20 +OLD_LOG/PROD/onlinelog/group_20.259.815656575
        21 +OLD_LOG/PROD/onlinelog/group_21.260.815656475
        22 +RLOGS/PROD/onlinelog/group_22.260.815656475

Now you can safely drop the INACTIVE redo log group from the same thread:

SQL> ALTER DATABASE DROP LOGFILE GROUP 21;

Perform the above steps one by one for each thread.

After each change, switch the log file and make sure the new redo log member is being used. Also rerun both queries shown above after each add and drop, to confirm that the group was added or removed successfully.
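The per-thread cycle described above can be sketched as follows (group numbers are the ones used in this example; adjust them for each thread):

```sql
-- Add a new group in the new diskgroup for thread 2
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 22 ('+RLOGS') SIZE 50M;

-- Switch the logs and wait for archiving, so the old group can become INACTIVE
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- Confirm the status before dropping the old group
SELECT group#, thread#, status FROM v$log WHERE thread# = 2;

ALTER DATABASE DROP LOGFILE GROUP 21;
```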

Posted in Administration | No Comments »

TT0802: Database permanent space exhausted (TimesTen db)

Posted by Kamran Agayev A. on April 18th, 2013

During TimesTen stress test, we’ve encountered the following error while trying to perform bulk insert :

TT0802: Database permanent space exhausted -- file "blk.c", lineno 3477, procedure "sbBlkAlloc"

The cause is the PermSize attribute defined in the sys.odbc.ini file (reported by TimesTen as PERM_ALLOCATED_SIZE). The default value is 64 MB, which is too small for bulk inserts and stress testing. You can get the current value either from the sys.odbc.ini file or with the dssize command in ttIsql:

Command> dssize

PERM_ALLOCATED_SIZE: 65536
PERM_IN_USE_SIZE: 64973
PERM_IN_USE_HIGH_WATER: 65408
TEMP_ALLOCATED_SIZE: 49152
TEMP_IN_USE_SIZE: 11242
TEMP_IN_USE_HIGH_WATER: 12445

Command> exit

 

To increase this parameter, go to the $TIMESTEN_HOME/info folder and change the PermSize attribute inside the required DSN definition:

[prod_db]
DataStore=/home/oracle/OracleCache/datastore1
PermSize=10000
OracleNetServiceName=PROD_DB
DatabaseCharacterSet=WE8ISO8859P1

 

Here, we changed the value to roughly 10 GB (PermSize is specified in megabytes). Now restart the TimesTen daemon and check the value again:

[oracle@timesten info]$ ttDaemonAdmin -stop
TimesTen Daemon stopped.
[oracle@timesten info]$ ttDaemonAdmin -start
TimesTen Daemon startup OK.

 

Command> dssize

PERM_ALLOCATED_SIZE: 10240000
PERM_IN_USE_SIZE: 64593
PERM_IN_USE_HIGH_WATER: 64593
TEMP_ALLOCATED_SIZE: 49152
TEMP_IN_USE_SIZE: 8711
TEMP_IN_USE_HIGH_WATER: 8775

Command>

 

Now run the bulk insert again: the error is gone, and you will instead see the PERM_IN_USE_SIZE value grow.
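To watch the usage grow during the load, the same counters can also be queried from the SYS.MONITOR table in ttIsql instead of running dssize repeatedly (a sketch; column names as in recent TimesTen releases):

```sql
SELECT perm_allocated_size, perm_in_use_size, perm_in_use_high_water
  FROM sys.monitor;
```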

 

Posted in TimesTen | No Comments »

Step by Step Oracle 10g RAC installation ebook

Posted by Kamran Agayev A. on March 15th, 2013

From the following link you can download my Step by Step Oracle 10g RAC installation on VMware ebook

Downloaded 1434 times

Posted in Uncategorized | 8 Comments »

Debugging Data Pump session

Posted by Kamran Agayev A. on March 11th, 2013

While importing a dump file, sometimes it takes too long, seems to be "hanging", processing something unknown in the background or queuing for something. As a DBA, you should not simply wait until it finishes; you have to find out what is going on.

Yesterday, while importing the metadata of a big database, the session hung and took hours to finish.

This is the command I ran to import the dump file into the new database:

impdp system/oracle directory=mydir dumpfile=dump01.dmp exclude=tablespace,statistics parfile=param.ini METRICS=y

The session hung at the following step:

Processing object type DATABASE_EXPORT/SCHEMA/CLUSTER/CLUSTER

     Completed 1 CLUSTER objects in 0 seconds

Processing object type DATABASE_EXPORT/SCHEMA/CLUSTER/INDEX

     Completed 1 INDEX objects in 0 seconds

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE

Note: I suggest using the undocumented METRICS=Y parameter to get the number of objects, and the time it takes to process them, written to the log file.

After waiting a short while, I became curious about the activity in the background, so I decided to debug the Data Pump session and ran the following query to get the SID and SERIAL# of the running Data Pump sessions. These values will be used to trace the session in the next step:

col username format a10
set linesize 150
col job_name format a20
col program format a25
SELECT TO_CHAR (SYSDATE, 'YYYY-MM-DD HH24:MI:SS') "DATE",
     s.program,
     s.sid,
     s.status,
     s.username,
     d.job_name,
     p.spid,
     s.serial#,
     p.pid
FROM V$SESSION s, V$PROCESS p, DBA_DATAPUMP_SESSIONS d
WHERE p.addr = s.paddr AND s.saddr = d.saddr;

DATE                PROGRAM                          SID STATUS   USERNAME   JOB_NAME                 SPID    SERIAL#        PID
------------------- ------------------------- ---------- -------- ---------- -------------------- -------- ---------- ----------
2013-03-07 14:52:04 udi@TestDB02 (TNS V1-V3)         152 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01      12143         36         17
2013-03-07 14:52:04 oracle@TestDB02 (DM00)           148 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01      12147          3         19
2013-03-07 14:52:04 oracle@TestDB02 (DW01)           142 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01      12149         15         20

Then I queried V$SESSION_WAIT view to get the waiting event:

SELECT w.sid, w.event, w.seconds_in_wait
  FROM v$session s, dba_datapump_sessions d, v$session_wait w
 WHERE s.saddr = d.saddr AND s.sid = w.sid;

       SID EVENT                                            SECONDS_IN_WAIT
---------- ------------------------------------------------ ---------------
       142 control file sequential read                                   1
       148 wait for unread message on broadcast channel                  43
       152 wait for unread message on broadcast channel                 306

If you run the same query a number of times, you'll see how the wait events change.

To get more detailed information, I decided to trace the first session (SID = 142), passing the SID and SERIAL# values to the DBMS_SYSTEM.SET_EV procedure to start a 10046 trace:

SQL> EXECUTE SYS.DBMS_SYSTEM.SET_EV (142, 15, 10046, 8, '');

Then I switched to the udump folder and checked the file named "dbname_ora_SPID", in this case testdb_ora_12149:

-bash-4.2$ tail -f testdb_ora_12149.trc

WAIT #4: nam='Data file init write' ela= 3 count=4294967295 intr=32 timeout=2147483647 obj#=51780 tim=13410366772411
WAIT #4: nam='control file sequential read' ela= 21 file#=0 block#=1 blocks=1 obj#=51780 tim=13410366772623
WAIT #4: nam='control file sequential read' ela= 15 file#=1 block#=1 blocks=1 obj#=51780 tim=13410366772660
WAIT #4: nam='control file sequential read' ela= 15 file#=2 block#=1 blocks=1 obj#=51780 tim=13410366772695
WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=16 blocks=1 obj#=51780 tim=13410366772728
WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=18 blocks=1 obj#=51780 tim=13410366772760
WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=24 blocks=1 obj#=51780 tim=13410366772802
WAIT #4: nam='db file sequential read' ela= 12 file#=4 block#=1 blocks=1 obj#=51780 tim=13410366772834
WAIT #4: nam='db file single write' ela= 11313 file#=4 block#=1 blocks=1 obj#=51780 tim=13410366784170
WAIT #4: nam='control file parallel write' ela= 38910 files=3 block#=17 requests=3 obj#=51780 tim=13410366823104
WAIT #4: nam='control file parallel write' ela= 46532 files=3 block#=15 requests=3 obj#=51780 tim=13410366869661
WAIT #4: nam='control file parallel write' ela= 40258 files=3 block#=1 requests=3 obj#=51780 tim=13410366909955
WAIT #4: nam='control file sequential read' ela= 12 file#=0 block#=1 blocks=1 obj#=51780 tim=13410366909986
WAIT #4: nam='rdbms ipc reply' ela= 7016 from_process=5 timeout=21474836 p3=0 obj#=51780 tim=13410366917049
=====================
PARSING IN CURSOR #15 len=296 dep=3 uid=0 oct=6 lid=0 tim=13410366917452 hv=2379717279 ad='de799c58'
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,extsize=:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL, :13),groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=DECODE(:17,0,NULL,:17),scanhint=:18 where ts#=:1 and file#=:2 and block#=:3
END OF STMT
PARSE #15:c=0,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=3,og=3,tim=13410366917447
EXEC #15:c=0,e=337,p=0,cr=5,cu=1,mis=0,r=1,dep=3,og=3,tim=13410366918000
STAT #15 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  SEG$ (cr=5 pr=0 pw=0 time=277 us)'
STAT #15 id=2 cnt=1 pid=1 pos=1 obj=14 op='TABLE ACCESS CLUSTER SEG$ (cr=5 pr=0 pw=0 time=166 us)'
STAT #15 id=3 cnt=1 pid=2 pos=1 obj=9 op='INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=2 pr=0 pw=0 time=31 us)'
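As an aside, if you are unsure where the udump directory is on a given server, the instance can tell you; and the trace should be switched off once you are done. A sketch, using the same SID and SERIAL# as above:

```sql
-- Location of user trace files (pre-11g parameter)
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';

-- Disable the 10046 trace for the session when finished
EXECUTE SYS.DBMS_SYSTEM.SET_EV (142, 15, 10046, 0, '');
```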

Using tkprof, I formatted the trace file into readable output:

tkprof testdb_ora_12149.trc impdp_output.dat

And read the file:

insert into seg$ (file#,block#,type#,ts#,blocks,extents,minexts,maxexts,
extsize,extpct,user#,iniexts,lists,groups,cachehint,bitmapranges,hwmincr,
spare1, scanhint)
values
(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,0,:16,DECODE(:17,0,NULL,
:17),:18)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      905      0.04       0.01          0          0          0           0
Execute    905      0.26       0.32          0       2756       6420         905
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1810      0.30       0.34          0       2756       6420         905

Everything is fine: it's importing the metadata into the database by inserting rows into the seg$ (segments) table. I used the tail command to keep watching the trace file and re-ran the first query.

After a delay, I got the "statement suspended, wait error to be cleared" wait event, checked the alert.log file and found that the USERS tablespace was full:

SQL> /

       SID EVENT                                            SECONDS_IN_WAIT
---------- ------------------------------------------------ ---------------
       142 statement suspended, wait error to be cleared                 20
       148 wait for unread message on broadcast channel                 905
       152 wait for unread message on broadcast channel                 246

SQL>

Tailing the alert.log file showed:

statement in resumable session 'SYSTEM.SYS_IMPORT_FULL_01.1' was suspended due to

    ORA-01658: unable to create INITIAL extent for segment in tablespace USERS

After adding some space to the USERS tablespace, the message disappeared.

Then I suddenly began receiving the "Data file init write" wait event. That was the main reason the import took so long: the AUTOEXTEND increment of the datafile was set very low, so Oracle was making too many OS calls to add new extents.

The output from the trace file was as follows:

WAIT #4: nam='Data file init write' ela= 3 count=4294967295 intr=32 timeout=2147483647 obj#=51780 tim=13410366772411

To identify the object behind this event, I took the object id from the trace file (obj#=51780) and checked the obj$ table:

SELECT name FROM obj$ WHERE obj# = 51780;

Then, using the DBMS_METADATA.GET_DDL function, I got the tablespace of the object that couldn't be imported:

SELECT DBMS_METADATA.get_ddl ('TABLE', 'TAB_TEST', 'USR01') FROM DUAL;

As the 'Data file init write' event relates directly to AUTOEXTEND and occurs while extents are being added to a datafile, I checked the AUTOEXTEND settings of the datafiles. The increments were too small. I increased the AUTOEXTEND values, the event disappeared from the trace log, and the import job finished successfully.
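The check and the fix can be sketched like this; the datafile path below is hypothetical, so take the real paths from DBA_DATA_FILES (note that INCREMENT_BY is expressed in database blocks):

```sql
-- Inspect the current autoextend settings of the USERS tablespace
SELECT file_name, autoextensible, increment_by
  FROM dba_data_files
 WHERE tablespace_name = 'USERS';

-- Raise the increment so the file grows in fewer, larger steps
-- (the path is a placeholder)
ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf'
  AUTOEXTEND ON NEXT 256M MAXSIZE UNLIMITED;
```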

Reference: Export/Import DataPump Parameter TRACE – How to Diagnose Oracle Data Pump [ID 286496.1]

Posted in Uncategorized | 4 Comments »

Waiting at “Performing remote operations” during RAC installation

Posted by Kamran Agayev A. on January 30th, 2013

Today, while installing RAC, the installation hung at the "Performing remote operations" step. I tried to check the log files, but nothing was being written.

Then I tried to trace the running process, but didn't get enough information about the wait. Finally, I checked both servers and found the firewall running. After stopping the firewall (stopping it, disabling it, and starting the installation again), the setup was able to pass that step and copied the Grid home files to the remote server.

 

Posted in Administration, RAC issues | No Comments »

Some processes are missing from V$SESSION view

Posted by Kamran Agayev A. on January 22nd, 2013

Today, while checking the V$PROCESS and V$SESSION views, I realized that some processes have no corresponding row in V$SESSION:

SQL> select count(*) from v$process where addr not in (select paddr from v$session);

  COUNT(*)
----------
       120

 

How can a process be created without having any corresponding entry in V$SESSION view?

I started checking those processes at the OS level. I took the SPID of one of them and first tried to debug it using ORADEBUG from SQL*Plus:

sqlplus / as sysdba

ORADEBUG SETOSPID 3422
ORADEBUG TRACEFILE_NAME;

Having got the trace file name, I checked the file but didn't find anything helpful inside.

Then I decided to debug it at the OS level. As the OS was HP-UX, I used the truss utility to trace the Unix system calls:

/usr/local/bin/truss -o truss.out -vall -laefdD -p 3422

And checked the output again:

tail -f truss.out
( Attached to process 18997 (“oraclePRODDB (LOCAL=NO)”) [64-bit] )
18997/9934051: 18446744073.7095 read(16, 0x600000000021a616, 2064) [sleeping]

This output wasn't helpful (although Tanel uses the pfiles utility to list all files opened by a process, starting from the first parameter of the read call, which is 16 here), so I decided to use the next utility, lsof, which lists the files opened by the process:

/usr/local/bin/lsof -p 3422

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
oracle 20472 oracle cwd DIR 64,0x4 2048 3660 /home/oracle/product/10g/dbs
oracle 20472 oracle txt REG 64,0x4 284477360 101589 /home/oracle/product/10g/bin/oracle
oracle 20472 oracle mem REG 64,0x4 5337208 25311 /home/oracle/product/10g/lib/libnnz10.so
oracle 20472 oracle mem REG 64,0x7 21281 27339 /usr/lib/tztab
oracle 20472 oracle mem REG 64,0x5 312440 54074 /opt/star-ncf-prod/ep_patch/usr/lib/hpux64/libxti.so.1
oracle 20472 oracle mem REG 64,0x7 4371800 14392 /usr/lib/hpux64/libc.so.1
oracle 20472 oracle mem REG 64,0x7 1649152 91060 /usr/lib/hpux64/libnsl.so.1
oracle 20472 oracle mem REG 64,0x7 67664 91056 /usr/lib/hpux64/libnss_compat.so.1
oracle 20472 oracle mem REG 64,0x4 20041776 8569 /home/oracle/product/10g/lib/libjox10.so
oracle 20472 oracle mem REG 64,0x7 1046480 11645 /usr/lib/hpux64/libpthread.so.1
oracle 20472 oracle mem REG 64,0x7 55088 491 /usr/lib/hpux64/libuca.so.1
oracle 20472 oracle mem REG 64,0x7 706832 91078 /usr/lib/hpux64/libunwind.so.1
oracle 20472 oracle mem REG 64,0x7 6444800 91068 /usr/lib/hpux64/libm.so.1
oracle 20472 oracle mem REG 64,0x7 78168 90961 /usr/lib/hpux64/libdl.so.1
oracle 20472 oracle mem REG 64,0x7 89208 91053 /usr/lib/hpux64/libnss_dns.so.1
oracle 20472 oracle mem REG 64,0x7 44832 3881 /usr/lib/hpux64/librt.so.1
oracle 20472 oracle mem REG 64,0x4 199296 8277 /home/oracle/product/10g/lib/libdbcfg10.so
oracle 20472 oracle mem REG 64,0x4 134552 13143 /home/oracle/product/10g/lib/libocrutl10.so
oracle 20472 oracle mem REG 64,0x4 533792 13142 /home/oracle/product/10g/lib/libocrb10.so
oracle 20472 oracle mem REG 64,0x4 731352 25330 /home/oracle/product/10g/lib/libocr10.so
oracle 20472 oracle mem REG 64,0x4 1296848 25478 /home/oracle/product/10g/lib/libhasgen10.so
oracle 20472 oracle mem REG 64,0x4 315008 13264 /home/oracle/product/10g/lib/libskgxp10.so
oracle 20472 oracle mem REG 64,0x7 1163152 90960 /usr/lib/hpux64/dld.so
oracle 20472 oracle mem REG 64,0x7 184592 90962 /usr/lib/hpux64/uld.so
oracle 20472 oracle 0u CHR 3,0x2 0t0 74 /dev/null
oracle 20472 oracle 1u CHR 3,0x2 0t0 74 /dev/null
oracle 20472 oracle 2u CHR 3,0x2 0t0 74 /dev/null
oracle 20472 oracle 3u CHR 3,0x2 0t0 74 /dev/null
oracle 20472 oracle 4u CHR 3,0x2 0t0 74 /dev/null
oracle 20472 oracle 5u CHR 3,0x2 0t0 74 /dev/null
oracle 20472 oracle 6u CHR 3,0x2 0t0 74 /dev/null
oracle 20472 oracle 7u IPv4 0xe0000001cbfe0900 0t0 TCP *:* (IDLE)
oracle 20472 oracle 8u REG 64,0x4 849408 13244 /home/oracle/product/10g/rdbms/mesg/oraus.msb
oracle 20472 oracle 14u REG 64,0x4 268644352 79279 /home/oracle/product/10g/cdata/localhost/local.ocr
oracle 20472 oracle 27u IPv4 0xe0000001c34f0740 0t3603 TCP ***.***.***.***:1521->name.ofthehost.com:4934 (ESTABLISHED)
oracle 20472 oracle 28w FIFO 0xe00000017b3ba748 0t0 184090157
oracle 20472 oracle 29r FIFO 0xe0000001f99d3e48 0t0 184090158

 

At the end of the output I saw the IP address and host name the process was connected to. After checking that server, I found that it was opening connections to the database and creating processes at the OS level, but then failing to create sessions; the dead processes were never detected and killed. So they remained in the V$PROCESS view and on the OS, but were missing from the V$SESSION view.

I killed all of those orphaned processes at the OS level and rebooted the server that was opening the connections. It seemed to have memory or network issues, so I asked the sysadmins to check it.

 

You can also check the following My Oracle Support note to learn how to enable Dead Connection Detection (DCD):

A discussion of Dead Connection Detection, Resource Limits, V$SESSION, V$PROCESS and OS processes [ID 601605.1]
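For reference, DCD itself is enabled with a single server-side sqlnet.ora parameter; a minimal sketch (the 10-minute interval is just an example value):

```
# sqlnet.ora on the database server
SQLNET.EXPIRE_TIME = 10
```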

 

Posted in Administration | 2 Comments »

Testing Load balancing in Oracle RAC

Posted by Kamran Agayev A. on January 14th, 2013

If you’ve installed Oracle RAC (Real Application Clusters) and want to test how Load Balancing works, you can run the following shell script and check GV$SESSION view:

#!/bin/bash
. /home/oracle/.bash_profile
for ((i=1; i <= 50 ; i++))
do
nohup sqlplus -S system/oracle@racdb<<eof &
begin
dbms_lock.sleep(10);
end;
/

eof
done

This will open 50 sessions in the background. Check the GV$SESSION view before and after running this script:

SQL> select inst_id,count(*) from gv$session where username is not null group by inst_id;

   INST_ID   COUNT(*)
---------- ----------
         1         10
         2          9

Run the following command from the different session:

[oracle@node1 ~]$ ./check_load_balancing.sh

SQL> /

   INST_ID   COUNT(*)
---------- ----------
         1         33
         2         36

Wait 10 seconds (the value we passed to the DBMS_LOCK.SLEEP procedure) and run the query again:

SQL> /

   INST_ID   COUNT(*)
---------- ----------
         1         10
         2          9

SQL>

From the output you can see that the new sessions were distributed evenly between the two nodes.

Posted in RAC issues | 1 Comment »

Oracle Enterprise Manager 12c Cloud Control – new book

Posted by Kamran Agayev A. on December 24th, 2012

My friend and former ACE Director Porus Homi Havewala's new book on Oracle Enterprise Manager 12c Cloud Control is now available; it is the first published EM 12c Cloud Control book in the world.

If you are interested in learning about the capabilities of Enterprise Manager 12c, please have a look. Electronic copies are available too.

http://www.packtpub.com/oracle-enterprise-manager-12c-cloud-control/book

Oracle Enterprise Manager 12c Cloud Control: Managing Data Center Chaos

Posted in Administration | No Comments »

Using scp with nohup command without providing password

Posted by Kamran Agayev A. on December 1st, 2012

It seemed almost impossible to use scp from the command line with nohup without typing a password. I didn't want to exchange authentication keys between the servers using ssh-keygen, which seemed to be the only way to run the scp command in the background.

I tried:

nohup scp -P 223 /tmp/myfile.dat user:pass@server:/tmp & 

and failed; embedding the password that way is invalid syntax. Then I tried:

nohup scp -P 223 /tmp/myfile.dat user@server:/tmp & 

it asked me for the password:

oracle@server’s password:
oracle

[1]+  Stopped   nohup scp -P 223 /tmp/myfile.dat user@server:/tmp & 

and failed again…

 

After a little research, I found the following blog post, where the author shows how scp can be given the password on the command line and run in the background:

http://bharatikunal.wordpress.com/2010/10/20/scp-as-a-background-process/
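For completeness, one non-interactive alternative (not covered in the original post) is the third-party sshpass utility, if installing it is acceptable. Below is a sketch with a placeholder password, port, file and host; it only prints the command by default so it can be inspected safely. Keep in mind that storing a password in a script is itself a security trade-off:

```shell
#!/bin/sh
# Sketch: run scp in the background without an interactive password prompt.
# Assumes the third-party sshpass utility is installed; the password, port,
# file and host below are all placeholders.
SCP_PASS="secret"
CMD="sshpass -p $SCP_PASS scp -P 223 /tmp/myfile.dat user@server:/tmp"

if [ "${DRY_RUN:-1}" = "1" ]; then
  # Dry run (default): just show what would be executed
  echo "$CMD"
else
  # Real run: detach from the terminal, log output to a file
  nohup $CMD > scp.log 2>&1 &
fi
```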

Posted in Oracle on Linux | 3 Comments »