Use iostat to get the performance data
# iostat -x
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.48 2.76 1.90 0.92 56.44 29.40 30.53 0.01 4.48 2.46 0.69
Work out the queue length
(r/s + w/s) * await / 1000 = average queue length
((1.90 + 0.92) * 4.48) / 1000 = 0.01 (matches avgqu-sz)
Calculate the throughput
(rsec/s + wsec/s) * sector size (512 bytes) = throughput
(56.44 + 29.40) * 512 / 1024 = ~43 KiB/s
Calculate utilization
(r/s + w/s) * svctm / 1000 * 100 = utilization (%)
(1.90 + 0.92) * 2.46 / 1000 * 100 = 0.69% (matches %util)
Determine peak arrival rate
1 / svctm * 1000 = peak arrival rate (requests/s)
(1 / 2.46) * 1000 = ~406.5
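The four calculations above can be scripted. A minimal awk sketch, run here against the sample iostat line from above (the field positions assume the exact column layout shown; adjust them if your iostat version prints different columns):

```shell
# Compute queue length, throughput, utilization and peak arrival rate
# from an "iostat -x" device line (columns as in the sample above:
# device rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util)
echo "sda 0.48 2.76 1.90 0.92 56.44 29.40 30.53 0.01 4.48 2.46 0.69" |
awk '{
    r = $4; w = $5; rsec = $6; wsec = $7; await = $10; svctm = $11
    printf "queue length   : %.2f\n", (r + w) * await / 1000
    printf "throughput KiB : %.2f\n", (rsec + wsec) * 512 / 1024
    printf "utilization %%  : %.2f\n", (r + w) * svctm / 1000 * 100
    printf "peak IOPS      : %.1f\n", (1 / svctm) * 1000
}'
```

Piping `iostat -x` output through the same awk body lets you spot-check any device line.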
Fixing the BIND (named) Service Bug – Generating /etc/rndc.key
I must admit, I had not seen this bug for a very long time. I thought it must have been fixed or removed altogether. It was first reported with RHEL 6.1 and was marked as resolved, as commented here by the developers.
However, I came across this bug again while trying to configure one of my DNS servers running CentOS 6.3. The DNS (named) service always hung at the following
Problem:
#service named restart
Generating /etc/rndc.key:
Solution:
Just execute the following command:
#rndc-confgen -a -r /dev/urandom
and if you're running named in a chroot under /var/named/chroot, you must add "-t /var/named/chroot" to the command above. It should look like this:
#rndc-confgen -a -r /dev/urandom -t /var/named/chroot
More details on rndc-confgen can be found here
You should be able to start the DNS (named) service after executing these commands.
Terminal Access Controller Access-Control System (TACACS) is a remote authentication protocol that is used to communicate with an authentication server, commonly used in UNIX networks. TACACS allows a remote access server to communicate with an authentication server in order to determine if the user has access to the network. TACACS is defined in RFC 1492, and uses either TCP or UDP port 49 by default. A later version of TACACS introduced by Cisco in 1990 was called Extended TACACS (XTACACS). The XTACACS protocol was developed by and is proprietary to Cisco Systems.
TACACS allows a client to accept a username and password and send a query to a TACACS authentication server, sometimes called a TACACS daemon or simply TACACSD. This server was normally a program running on a host. The host would determine whether to accept or deny the request and send a response back. The TIP (routing node accepting dial-up line connections, which the user would normally want to log in into) would then allow access or not, based upon the response. In this way, the process of making the decision is “opened up” and the algorithms and data used to make the decision are under the complete control of whomever is running the TACACS daemon.
TACACS+ and RADIUS have generally replaced TACACS and XTACACS in more recently built or updated networks. TACACS+ is an entirely new protocol and is not compatible with TACACS or XTACACS. TACACS+ uses the Transmission Control Protocol (TCP), while RADIUS uses the User Datagram Protocol (UDP). Some administrators recommend using TACACS+ because TCP is seen as the more reliable protocol. Whereas RADIUS combines authentication and authorization in a user profile, TACACS+ separates the two operations.
TACACS Plus installation
This article describes how to install a TACACS+ application step by step. Specifically, we install tac_plus.
1. Download TACACS+
2. Install Tac-plus application
3. Configure TACACS.conf
4. Configure the network device (Cisco router)
1. Download TACACS+
Get the latest tacacs+ binary RPM file from http://www.gazi.edu.tr/tacacs.
2. Install Tac-plus application
Log in to your machine with the root account to avoid any interruption while installing TACACS+,
and type
rpm -ivh tac_plus.xxx.i386.rpm
This command installs tacacs+ on your system. To verify the installation, type the following:
rpm -q tac_plus
If you see the output below, you are good to go.
tac_plus-F4.0.3.alpha-7
3. Configure TACACS.conf
# Created by Devrim SERAL
# It's a very simple configuration file.
# Please read the user guide and tacacs+ FAQ for more information on
# building more complex tacacs+ configuration files.
key = CISCONET
# Use /etc/passwd file to do authentication
default authentication = file /etc/passwd.log
# Now tacacs+ also use default PAM authentication
#default authentication = pam pap
#If you like to use DB authentication
#default authentication = db "db_type://db_user:db_pass@db_hostname/db_name/db_table?name_field&pass_field"
# db_type: mysql or null
# db_user: Database connect username
# db_pass: Database connection password
# db_hostname : Database hostname
# db_name : Database name
# db_table : authentication table name
# name_field and pass_field: Username and password field name at the db_table
# Accounting records log file
accounting file = /var/log/tacacs/tacacs.log
# Would you like to store accounting records in a database?
# db_accounting = “db_type://db_user:db_pass@db_hostname/db_name/db_table”
# Same as above..
# Permit all authorization request
default authorization = permit
# Profile for enable access, username is $enab15$. Used to be $enable$
user = $enab15$ {
login = cleartext Pr1celess
}
# Profiles for user accounts
user = Superman {
login = cleartext SuperPOP40
}
In this case, the username is Superman and the password is SuperPOP40.
4. Configure the network device (Cisco router)
aaa new-model
aaa authentication login default tacacs+ line enable none
tacacs-server host 65.222.247.53
tacacs-server host 65.222.247.37
tacacs-server key CISCONET
Or another sample (if tacacs+ login fails, the local database will be used):
aaa new-model
username CiscoNET password xxx-CiscoNet
aaa authentication login default enable
aaa authentication login access1 local
aaa authentication login access2 tacacs+ local
tacacs-server host 65.222.247.53
tacacs-server host 65.222.247.37
tacacs-server key CISCONET
!
!
Line console 0
login authentication access2
!
!
Line vty 0 4
password yyy-CiscoNET
login
./install: /lib/ld-linux.so.2: bad ELF interpreter (WebSphere)
On a 64-bit system this error means the 32-bit runtime libraries the installer needs are missing. Install them:
[root@localhost]# yum install gtk2.i686
[root@localhost]# yum install libXtst.i686
If you received a missing libstdc++ message, install the compatibility libstdc++ library:
[root@localhost]# yum install compat-libstdc++
Error while launching IBM HTTP Server
I’ve been working with WebSphere Application Server in a Red Hat Virtual Machine, and the other day, after restarting the virtual machine, I was not able to launch IBM HTTP Server from the WebSphere Integrated Console.
I tried to start IBM HTTP Server from the command line using the following command:
[root@xxx ~]# /opt/IBM/HTTPServer/bin/httpd -f /opt/IBM/HTTPServer/conf/httpd.conf
And I got the following error:
./opt/IBM/HTTPServer/bin/httpd: error while loading shared libraries: libaprutil-1.so.0: cannot open shared object file: No such file or directory
After checking that the file libaprutil-1.so.0 existed in /opt/IBM/HTTPServer/lib, I understood that the problem must be that the HTTP Server did not know where to look for its libraries; in other words, LD_LIBRARY_PATH was empty or did not include that directory.
And indeed, the variable LD_LIBRARY_PATH was empty, so I used the command:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/IBM/HTTPServer/lib
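One small caveat: when LD_LIBRARY_PATH is empty, the command above leaves a leading colon in the value. A slightly more careful sketch that only adds the separator when the variable already has content (the path is the IHS install location used above):

```shell
# Append the IBM HTTP Server lib directory to LD_LIBRARY_PATH,
# adding the ":" separator only if the variable is already non-empty.
IHS_LIB=/opt/IBM/HTTPServer/lib
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$IHS_LIB"
echo "$LD_LIBRARY_PATH"
```

To make the change survive a reboot, the export line can go in whatever script starts the server, or in the profile of the user that launches it.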
LVM and VxVM command equivalents:

Step  LVM                                       VxVM
1     fdisk -l                                  vxdctl enable
2     pvcreate /dev/sdb*                        /etc/vx/bin/vxdisksetup -i disk_0
3     vgcreate oravg /dev/sdb*                  vxdg init oradg disk_0 disk_1
4     lvcreate -L 3G -n ora_lv oravg            vxassist -g oradg make oravol 4g disk_0 disk_1
5     mkfs.ext3 /dev/oravg/ora_lv               mkfs.vxfs /dev/vx/rdsk/oradg/oravol
6     mount -t ext3 /dev/oravg/ora_lv /oravol   mount -t vxfs /dev/vx/dsk/oradg/oravol /oravol
LVM          Veritas
fdisk        vxdctl enable
pvcreate     vxdisksetup -i lun
vgcreate     vxdg -g dg_name adddisk device=lun
lvcreate     vxassist -g dg_name make vol size
mkfs         mkfs -F vxfs
mount        mount
/etc/fstab   /etc/fstab
Linux   = native multipathing
Solaris = native multipathing
Veritas = Veritas Dynamic Multipathing (DMP)
Veritas cluster Server
For the end of the week, we're going to continue with the theme of sparse-but-hopefully-useful information: quick little "crib sheets" (preceded by paragraphs and paragraphs of stilted ramblings by the lunatic who pens this blog's content 😉 For this Friday, we're going to come back around and take a look at Veritas Cluster Server (VCS) troubleshooting. If you're interested in more specific examples of problems, solutions and suggestions with regards to VCS, check out all the VCS-related posts from the past year or so. Hopefully you'll be able to find something useful in our archives, as well. These simple suggestions should work equally well for Unix as well as Linux, if you choose to go the VCS route rather than some less costly one 🙂
And, here we go again; quick, pointed bullets of info. Bite-sized bits of troubleshooting advice that focus on solving the problem, rather than understanding it. That sounds awful, I know, but, sometimes, you have to get things done and, let’s face it, if it’s the job or your arse, who cares about the why? Leave that for philosophers and academics. Plus, since you fix problems so fast, you’ll have plenty of time to read up on the ramifications of your actions later 😉
The setup: Your site is down. It’s a small cluster configuration with only two nodes and redundant nic’s, attached network disk, etc. All you know is that the problem is with VCS (although it’s probably indirectly due to a hardware issue). Something has gone wrong with VCS and it’s, obviously, not responding correctly to whatever terrible accident of nature has occurred. You don’t have much more to go on than that. The person you receive your briefing from thinks the entire clustered server set up (hardware, software, cabling, power, etc) is a bookmark in IE 😉
Now, one by one, in a fashion that zigs on purpose, but has a tendency to zag, here are a few things to look at right off the bat when assessing a situation like this one. Perhaps next week, we’ll look into more advanced troubleshooting (and, of course, you can find lots of specific “weird VCS problem” solutions in our VCS archives)
1. Check if the cluster is working at all.
Log into one of the cluster nodes as root (or a user with equivalent privilege – who shouldn’t exist 😉 and run
host1 # hastatus -summary
or
host1 # hasum <-- both do the same thing, basically
Ex:
host1 # hastatus -summary
-- SYSTEM STATE
-- System State Frozen
A host1 RUNNING 0
A host2 RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B ClusterService host1 Y N OFFLINE
B ClusterService host2 Y N ONLINE
B SG_NIC host1 Y N ONLINE
B SG_NIC host2 Y N OFFLINE
B SG_ONE host1 Y N ONLINE
B SG_ONE host2 Y N OFFLINE
B SG_TWO host1 Y N OFFLINE
B SG_TWO host2 Y N OFFLINE
Clearly, your situation is bad: A normal VCS status should indicate that all nodes in the cluster are “RUNNING” (which these are). However, it should also show all service groups as being ONLINE on at least one of the nodes, which isn't the case above with SG_TWO (Service Group 2).
2. Check for cluster communication problems. Here we want to determine if a service group is failing because of any heartbeat failure (The VCS cluster, that is, not another administrator 😉
Check on GAB first, by running:
host1 # gabconfig -a
Ex:
host1 # gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 3a1501 membership 01
Port h gen 3a1505 membership 01
This output is okay. You would know you had a problem at this point if any of the following conditions were true:
If no port "a" memberships were present (0 and 1 above), this could indicate a problem with gab or llt (looked at next).
If no port "h" memberships were present (0 and 1 above), this could indicate a problem with had.
If starting llt causes it to stop immediately, check your heartbeat cabling and llt setup.
Try starting gab, if it's down, with:
host1 # /etc/init.d/gab start
If you're running the command on a node that isn't operational, gab won't be seeded, which means you'll need to force it, like so:
host1 # /sbin/gabconfig -x
3. Check on LLT, now, since there may be something wrong there (even though it wasn't indicated above)
LLT will most obviously present as a crucial part of the problem if your "hastatus -summary" gives you a message that it "can't connect to the server." This will prompt you to check all cluster communication mechanisms (some of which we've already covered).
First, bang out a quick:
host1 # lltconfig
on the command line to see if llt is running at all.
If llt isn't running, be sure to check your console, system messages file (syslog, possibly messages) and any logs in /var/log/VRTSvcs/... (usually the "engine log" is worth a quick look). As a rule, I usually do
host1 # ls -tr
when I'm in the VCS log directory to see which log got written to last, and work backward from there. This puts the most recently updated file last in the listing. My assumption is that any pertinent errors got written to one of the fresher log files 🙂 Look in these logs for any messages about bad llt configurations or files, such as /etc/llttab, /etc/llthosts and /etc/VRTSvcs/conf/sysname. Also, make sure those three files contain valid entries that "match" <-- This is very important. If you refer to the same facility by 3 different names, even though they all point back to the same IP, VCS can become addled and drop the ball.
Examples of invalid entries in LLT config files would include "node numbers" outside the range of 0 to 31 and "cluster numbers" outside the range of 0 to 255.
Now, if LLT "is" running, check its status, like so:
host # lltstat -wn <-- This will let you know if llt on the separate nodes within the cluster can communicate with one another.
Of course, verify physical connections, as well. Also, see our previous post on dlpiping for more low-level-connection VCS troubleshooting tips.
Ex:
host1 # lltstat -vvn
LLT node information:
Node State Link Status Address
0 prsbn012 OPEN
ce0 DOWN
ce1 DOWN
HB172.1 UP 00:03:BA:9D:57:91
HB172.2 UP 00:03:BA:0E:F1:DE
HB173.1 UP 00:03:BA:9D:57:92
HB173.2 UP 00:03:BA:0E:D0:BE
1 prsbn015 OPEN
ce3 UP 00:03:BA:0E:CE:09
ce5 UP 00:03:BA:0E:F4:6B
HB172.1 UP 00:03:BA:9D:5C:69
HB172.2 UP 00:03:BA:0E:CE:08
HB173.1 UP 00:03:BA:0E:F4:6A
HB173.2 UP 00:03:BA:9D:5C:6A
host1 # cat /etc/llttab <-- pardon the lack of low-pri links. We had to build this cluster on the cheap 😉
set-node /etc/VRTSvcs/conf/sysname
set-cluster 100
link ce0 /dev/ce:0 - ether 0x1051 -
link ce1 /dev/ce:1 - ether 0x1052 -
exclude 7-31
host1 # cat /etc/llthosts
0 host1
1 host2
host1 # cat /etc/VRTSvcs/conf/sysname
host1
If llt is down, or you think it might be the problem, either start it or restart it with:
host1 # /etc/init.d/llt.rc start
or
host1 # /etc/init.d/llt.rc stop
host1 # /etc/init.d/llt.rc start
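The three layers checked above (HAD status, GAB, LLT) can be rolled into one quick triage script. A sketch that simply skips any layer whose binary isn't on the PATH, so it degrades gracefully on a box without VCS installed:

```shell
#!/bin/sh
# Quick VCS triage: run each status command if its binary exists.
# Order matches the troubleshooting steps above: HAD, then GAB, then LLT.
for cmd in "hastatus -summary" "gabconfig -a" "lltconfig" "lltstat -nvv"; do
    bin=${cmd%% *}
    if command -v "$bin" >/dev/null 2>&1; then
        echo "== $cmd =="
        $cmd
    else
        echo "== $bin not found, skipping =="
    fi
done
```

Running it on both nodes and diffing the output is a fast way to spot which layer (and which node) is out of step.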
And, that's where we'll end it today. There's still a lot more to cover (we haven't even given the logs more than their minimum due), but that's for next week.
Until then, have a pleasant and relaxing weekend 🙂
Veritas Cluster Server (VCS) Command line
VCS can be divided into two important parts:
Cluster Communication:
Low Latency Transport (LLT) and Group Atomic Broadcast (GAB) are responsible for heartbeat and cluster communication.
LLT status
lltconfig -a list – List all the MAC addresses in cluster
lltstat -l – Lists information about each configured LLT link
lltstat [-nvv|-n] – Verify status of links in cluster
Starting and stopping LLT
lltconfig -c – Start the LLT service
lltconfig -U – stop the LLT running
GAB status
gabconfig -a – List membership, verify if GAB is operating
gabdiskhb -l – Check the disk heartbeat status
gabdiskx -l – lists all the exclusive GAB disks and their membership information
Starting and stopping GAB
gabconfig -c -n seed_number – Start the GAB
gabconfig -U – Stop the GAB
HAD:
Stands for High Availability daemon. HAD is responsible for all the cluster functionality.
The commands for Veritas start with “ha” meaning high availability. For example, ‘hastart’, ‘hastop’, ‘hares’ etc. Listed below are commands sorted by category which are used for most day to day operation/management of VCS.
Cluster Status
hastatus -summary – Outputs the status of cluster
hasys -display – Displays the cluster operation status
Start or Stop services
hastart [-force|-stale] – ‘force’ is used to load local configuration
hasys -force 'system' – start the cluster using config file from the mentioned “system”
hastop -local [-force|-evacuate] – ‘local’ option will stop the service only on the system you type the command
hastop -sys 'system' [-force|-evacuate] – ‘sys’ stops had on the system you specify
hastop -all [-force] – ‘all’ stops had on all systems in the cluster
Change VCS Configuration online
haconf -makerw – makes VCS configuration in read/write mode
haconf -dump -makero – Dumps the configuration changes
Agent Operations
haagent -start agent_name -sys system – Starts an agent
haagent -stop agent_name -sys system – Stops an agent
Cluster Operations
haclus -display – Displays cluster information and status
haclus -enable LinkMonitoring – Enables heartbeat link monitoring in the GUI
haclus -disable LinkMonitoring – Disables heartbeat link monitoring in the GUI
Add and Delete Users
hauser -add user_name – Adds a user with read/write access
hauser -add VCSGuest – Adds a user with read-only access
hauser -modify user_name – Modifies a users password
hauser -delete user_name – Deletes a user
hauser -display [user_name] – Displays all users if username is not specified
System Operations
hasys -list – List systems in the cluster
hasys -display – Get detailed information about each system
hasys -add system – Add a system to cluster
hasys -delete system – Delete a system from cluster
Resource Types
hatype -list – List resource types
hatype -display [type_name] – Get detailed information about a resource type
hatype -resources type_name – List all resources of a particular type
hatype -add resource_type – Add a resource type
hatype -modify .... – Set the value of static attributes
hatype -delete resource_type – Delete a resource type
Resource Operations
hares -list – List all resources
hares -dep [resource] – List a resource’s dependencies
hares -display [resource] – Get detailed information about a resource
hares -add resource resource_type service_group – Add a resource
hares -modify resource attribute_name value – Modify the attributes of the new resource
hares -delete resource – Delete a resource
hares -online resource -sys systemname – Bring a resource online on a system
hares -offline resource -sys systemname – Take a resource offline on a system
hares -probe resource -sys system – Cause a resource’s agent to immediately monitor the resource on a particular system
hares -clear resource [-sys system] – Clear a faulted resource
hares -local resource attribute_name value – Make a resource’s attribute value local
hares -global resource attribute_name value – Make a resource’s attribute value global
hares -link parent_res child_res – Specify a dependency between two resources
hares -unlink parent_res child_res – Remove the dependency relationship between two resources
Service Group Operations
hagrp -list – List all service groups
hagrp -resources [service_group] – List a service group’s resources
hagrp -dep [service_group] – List a service group’s dependencies
hagrp -display [service_group] – Get detailed information about a service group
hagrp -online groupname -sys systemname – Start a service group and bring its resources online
hagrp -offline groupname -sys systemname – Stop a service group and take its resources offline
hagrp -switch groupname -to systemname – Switch a service group from one system to another
hagrp -freeze service_group [-sys system] [-persistent] – Put a service group into maintenance mode. Freezing a service group disables online and offline operations
hagrp -unfreeze service_group [-sys system] [-persistent] – Take the service group out of maintenance mode
hagrp -enable service_group [-sys system] – Enable a service group
hagrp -disable service_group [-sys system] – Disable a service group
hagrp -enableresources service_group – Enable all the resources in a service group
hagrp -disableresources service_group – Disable all the resources in a service group
hagrp -link parent_group child_group relationship – Specify the dependency relationship between two service groups
hagrp -unlink parent_group child_group – Remove the dependency between two service groups
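As a worked example of the resource and configuration commands above, here is the typical sequence for adding a new resource to a running cluster. The names (websg, webip, eth0, the address) are made-up placeholders for illustration; the sketch only prints the commands (dry run) unless RUN=1 is set:

```shell
#!/bin/sh
# Typical VCS sequence: open the config, add an IP resource to an
# existing service group, set its attributes, then write the config back.
# Names (websg, webip, eth0, 192.168.1.10) are illustrative placeholders.
run() { if [ "$RUN" = 1 ]; then "$@"; else echo "+ $*"; fi; }

run haconf -makerw
run hares -add webip IP websg
run hares -modify webip Device eth0
run hares -modify webip Address 192.168.1.10
run hares -modify webip Enabled 1
run haconf -dump -makero
```

The haconf -makerw / -dump -makero bracketing is the important habit: changes made while the configuration is read/write are only persisted by the final dump.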
VCS Startup Process
Please verify that the cables are set up for the heartbeat network. You can tcpdump from one server NIC's MAC address to another to verify connectivity.
Step 1:
LLT (Low Latency Transport) starts first, using the "lltconfig -c" command. It reads the /etc/llttab and /etc/llthosts files and establishes the heartbeat network. The heartbeat network is a private network where VCS status information is exchanged by all systems within a VCS cluster. These networks require each system in the cluster to have a dedicated NIC, connected to a private hub. VCS requires a minimum of two dedicated communication channels between each system in a cluster. LLT is a low-overhead networking protocol that runs in the kernel. Because it runs in the kernel, it is capable of handling kernel-to-kernel communications.
Examples of files are as below:
#cat /etc/llthosts
0
1
.
.
n
In the example below, Linux systems will have interface names such as "eth0/1". If using a different device, replace "ce" with "qfe0/1", etc.
#cat /etc/llttab
set-node
set-cluster
link ce2 /dev/ce:2 - ether - -
link ce3 /dev/ce:3 - ether - -
link-lowpri ce4 /dev/ce:4 - ether - -
start
Verification of startup can be done using the "lltstat -n" command; "*" represents the first node (master) in the cluster.
#lltstat -n
Node State Links
* 0 OPEN 3
1 OPEN 3
2 OPEN 3
.
.
n OPEN 3
Step 2:
GAB (Group Atomic Broadcast) starts next. It executes /etc/gabtab and checks for other GABs to establish cluster membership. GAB runs over Low Latency Transport (LLT) and uses broadcasts to distribute cluster configuration information and ensure that each system has a synchronized view of the cluster, including the state of each system, service group, and resource.
# cat /etc/gabtab
/sbin/gabconfig -c -n5 # for a 5 node cluster
GAB can be started using "gabconfig -c" and verified using "gabconfig -a". Below is example output for a 5-node cluster. Port 'a' runs the GAB service and port 'h' runs the VCS daemon (HAD).
# gabconfig -a
GAB Port Memberships
=========================================
Port a gen 11ff05 membership 012345
Port h gen 11ff09 membership 012345
Step 3:
After both LLT and GAB are loaded, hashadow starts, which loads HAD (High Availability Daemon). HAD reads /etc/VRTSvcs/conf/config/main.cf, types.cf and all the included .cf files mentioned in the main.cf file.
HAD checks whether other HADs are available and registers with GAB. If there are no other HADs, it loads main.cf into HAD memory. The same process happens when HAD starts on the other nodes: the HAD on the first node loads main.cf and the other include files from the local system, and all other HADs load their configuration from the first HAD.
After starting up, HAD knows all the service groups and resources from main.cf. It calls the respective agents to check whether the resources are currently online or offline. Based on main.cf, HAD then onlines/offlines the service groups on the respective nodes.
Cluster is started up with “hastart” command. The status can be verified using “hastatus -sum”
VCS log file: /var/VRTSvcs/log/engine_A.log
Setup SAN disk for use in a Linux Veritas cluster
For this particular exercise we’re going to go through the entire process of provisioning disk for use in a VCS cluster.
We will use EMC Symmetrix disk zoned and masked to a RHEL 4u6 host as the foundation.
Get the disk(s) presented to the host observing that it’s visible down multiple paths.
# inq -showvol
Inquiry utility, Version V7.3-771 (Rev 0.0) (SIL Version V6.3.0.0 (Edit Level 771))
Copyright (C) by EMC Corporation, all rights reserved.
For help type inq -h.
-----------------------------------------------------------------------------
DEVICE :VEND :PROD :REV :SER NUM :Volume :CAP(kb)
-----------------------------------------------------------------------------
/dev/sda :EMC :SYMMETRIX :5771 :0123456789 : 00617: 2880
/dev/sdb :EMC :SYMMETRIX :5771 :0123456789 : 00204: 35654400
/dev/sdc :EMC :SYMMETRIX :5771 :0123456789 : 00206: 35654400
/dev/sdd :EMC :SYMMETRIX :5771 :0123456789 : 00208: 35654400
/dev/sde :EMC :SYMMETRIX :5771 :0123456789 : 0020A: 35654400
/dev/sdf :EMC :SYMMETRIX :5771 :0123456789 : 0020C: 35654400
/dev/sdg :EMC :SYMMETRIX :5771 :0123456789 : 0020E: 35654400
/dev/sdh :EMC :SYMMETRIX :5771 :0123456789 : 00210: 35654400
/dev/sdi :EMC :SYMMETRIX :5771 :0123456789 : 00212: 35654400
/dev/sdj :EMC :SYMMETRIX :5771 :0123456789 : 00214: 35654400
/dev/sdk :EMC :SYMMETRIX :5771 :0123456789 : 00263: 35654400
/dev/sdl :EMC :SYMMETRIX :5771 :0123456789 : 00265: 35654400
/dev/sdm :EMC :SYMMETRIX :5771 :0123456789 : 00267: 35654400
/dev/sdn :EMC :SYMMETRIX :5771 :0123456789 : 00269: 35654400
/dev/sdo :EMC :SYMMETRIX :5771 :0123456789 : 0026B: 35654400
/dev/sdp :EMC :SYMMETRIX :5771 :0123456789 : 00617: 2880
/dev/sdq :EMC :SYMMETRIX :5771 :0123456789 : 00204: 35654400
/dev/sdr :EMC :SYMMETRIX :5771 :0123456789 : 00206: 35654400
/dev/sds :EMC :SYMMETRIX :5771 :0123456789 : 00208: 35654400
/dev/sdt :EMC :SYMMETRIX :5771 :0123456789 : 0020A: 35654400
/dev/sdu :EMC :SYMMETRIX :5771 :0123456789 : 0020C: 35654400
/dev/sdv :EMC :SYMMETRIX :5771 :0123456789 : 0020E: 35654400
/dev/sdw :EMC :SYMMETRIX :5771 :0123456789 : 00210: 35654400
/dev/sdx :EMC :SYMMETRIX :5771 :0123456789 : 00212: 35654400
/dev/sdy :EMC :SYMMETRIX :5771 :0123456789 : 00214: 35654400
/dev/sdz :EMC :SYMMETRIX :5771 :0123456789 : 00263: 35654400
/dev/sdaa :EMC :SYMMETRIX :5771 :0123456789 : 00265: 35654400
/dev/sdab :EMC :SYMMETRIX :5771 :0123456789 : 00267: 35654400
/dev/sdac :EMC :SYMMETRIX :5771 :0123456789 : 00269: 35654400
/dev/sdad :EMC :SYMMETRIX :5771 :0123456789 : 0026B: 35654400
See what disks Veritas can see.
vxdisk -o alldgs list
Initialize the disk for the first time. This needs to be repeated for each individual disk.
/etc/vx/bin/vxdisksetup -i DEVICE format=cdsdisk
See if the initialization worked correctly.
# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
EMC0_0 auto:cdsdisk - (dg_grp) online
EMC0_1 auto:cdsdisk - (dg_grp) online
EMC0_2 auto:cdsdisk - (dg_grp) online
EMC0_3 auto:cdsdisk - (dg_grp) online
EMC0_4 auto:cdsdisk - (dg_grp) online
EMC0_5 auto:cdsdisk - (dg_grp) online
EMC0_6 auto:cdsdisk - (dg_grp) online
EMC0_7 auto:cdsdisk - (dg_grp) online
EMC0_8 auto:cdsdisk - (dg_grp) online
EMC0_9 auto:cdsdisk - (dg_grp) online
EMC0_10 auto:cdsdisk - (dg_grp) online
EMC0_11 auto:cdsdisk - (dg_grp) online
EMC0_12 auto:cdsdisk - (dg_grp) online
EMC0_13 auto:cdsdisk - (dg_grp) online
cciss/c0d0 auto:none – – online invalid
All devices (e.g. EMC0_n) now show as online.
Create the disk group.
vxdg init dg_name dg_internal_name01=DEVICE
The dg_name is the name of your disk group while dg_internal_name01 is the name of the first disk. In our case dg_internal_name01=EMC0_0.
Add any additional disk to the disk group.
vxdg -g dg_name adddisk dg_internal_name02=EMC0_n+1
Note that EMC0_n+1 is the next free disk that you are attempting to add. So dg_internal_name02=EMC0_1 (remember we started with EMC0_0).
To create the volume
vxassist -g dg_name make lv_name [size] dg_internal_nameN
Repeat as necessary.
Finally, create the file system
mkfs -t vxfs /dev/vx/rdsk/dg_name/lv_name
At this point the volumes are now available to be defined as a mount resource in VCS.
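Since the vxdisksetup step has to be repeated per disk, the whole flow above can be sketched as a loop. The device names (EMC0_0 through EMC0_13) match the listing above; dg_name, lv_name, and the 4g size are placeholders, and the script only echoes the commands (dry run) unless RUN=1 is set:

```shell
#!/bin/sh
# Initialize each Veritas device, build the disk group disk by disk,
# then carve a volume and file system. EMC0_0..EMC0_13 match the
# "vxdisk -o alldgs list" output above; names and size are illustrative.
run() { if [ "$RUN" = 1 ]; then "$@"; else echo "+ $*"; fi; }

i=0
while [ $i -le 13 ]; do
    run /etc/vx/bin/vxdisksetup -i "EMC0_$i" format=cdsdisk
    i=$((i + 1))
done

run vxdg init dg_name dg_internal_name01=EMC0_0
i=1
while [ $i -le 13 ]; do
    # dg_internal_name02=EMC0_1, dg_internal_name03=EMC0_2, ...
    run vxdg -g dg_name adddisk "$(printf 'dg_internal_name%02d' $((i + 1)))=EMC0_$i"
    i=$((i + 1))
done

run vxassist -g dg_name make lv_name 4g
run mkfs -t vxfs /dev/vx/rdsk/dg_name/lv_name
```

Reviewing the echoed commands before re-running with RUN=1 is a cheap safety net when initializing a batch of LUNs.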
Unified Extensible Firmware Interface (UEFI) is a specification for a software program that connects a computer’s firmware to its operating system (OS). UEFI is expected to eventually replace BIOS.
Like BIOS, UEFI is installed at the time of manufacturing and is the first program that runs when a computer is turned on. It checks to see what hardware components the computing device has, wakes the components up and hands them over to the operating system. The new specification addresses several limitations of BIOS, including restrictions on hard disk partition size and the amount of time BIOS takes to perform its tasks.
Because UEFI is programmable, original equipment manufacturer (OEM) developers can add applications and drivers, allowing UEFI to function as a lightweight operating system.
The Unified Extensible Firmware Interface is managed by a group of chipset, hardware, system, firmware, and operating system vendors called the UEFI Forum. The specification is most often pronounced by naming the letters U-E-F-I.
UEFI is nothing new, but Windows 8 is the first Windows release to put it front and center. If you want to install a clean copy of Windows 8 or Windows 8.1 on a UEFI-enabled computer, you will need a UEFI-bootable USB flash drive to start with. This tutorial shows you how to make such a flash drive with and without the help of a 3rd-party tool.
Option 1: the manual process
1. Connect the USB flash drive to your computer, of course.
2. Open Command Prompt with Admin rights. Press Win+X and choose Command Prompt (Admin) from the list.
3. Type diskpart to start the built-in diskpart utility. Then type list disk and make a note of the disk # for the USB drive.
4. Type in the following commands to properly partition and format the flash drive. Replace # with the actual number you got from step 3 above.
select disk #
clean
create partition primary
format fs=fat32 quick
active
assign
exit
5. Now close the Command Prompt window, open File Explorer, and browse to the location where the Windows 8 installation ISO image file is saved.
6. Mount the ISO file by right-clicking it and choosing Mount. If you don't see the Mount command in the context menu, go to Open With > Windows Explorer instead.
7. Select everything in the ISO file, and copy it onto the formatted USB flash drive you prepared earlier (Figure 1).
[Figure 1: copying the contents of the mounted ISO to the USB flash drive]
8. One more step if you are preparing a 64-bit installation: copy the file bootmgfw.efi from inside the install.wim file (in the sources folder) to the efi\boot folder on the USB flash drive, and rename it to bootx64.efi. Sounds tedious, doesn't it? To make it easier, you can simply download this file and copy it to your efi\boot folder.
That's it. Now you can boot from this USB flash drive and start a fresh, clean installation. If for some reason it doesn't work, move on to:
Option 2: a tool comes to rescue
Rufus, one of the 4 tools we mentioned for building bootable USB flash drives, is a small utility that creates bootable USB flash drives for Windows 7 or 8. What makes Rufus different is that it offers 3 different partition schemes to target the system type, including UEFI-based computers. You can make a drive that boots directly on a UEFI computer without turning Secure Boot off. It's free and portable.
To make a UEFI bootable USB drive,
1. Plug in your USB flash drive, of course.
2. Launch the program. Since it's portable, you can simply download and run it.
3. Check the option "Create a bootable disk using: ISO Image", and click the icon next to it to pick the ISO image file.
4. Select "GPT partition scheme for UEFI computer".
Before you click the Start button, check to make sure the settings look similar to Figure 2.
[Figure 2: Rufus settings for a UEFI bootable drive]
5. Click Start, and sit back.
unzip Oracle-iPlanet-Web-Server-7.0.15-linux-x64.zip
If the installer fails with "Attach to native process failed", install the compatibility libraries:
yum install compat-*
Then run the installer:
./setup
Oracle iPlanet Web Server components will be installed in the directory listed
below, referred to as the installation directory. To use the specified
directory, press Enter. To use a different directory, enter the full path of
the directory and press Enter.
Oracle iPlanet Web Server Installation Directory [/opt/oracle/webserver7]
{"<" goes back, "!" exits}:
Specified directory /opt/oracle/webserver7 does not exist
Create Directory? [Yes/No] [yes] {"<" goes back, "!" exits} yes
Select the Type of Installation
1. Express
2. Custom
3. Exit
What would you like to do [1] {"<" goes back, "!" exits}? 1
Choose a user name and password. You must remember this user name and password
to administer the Web Server after installation.
Administrator User Name [admin] {"<" goes back, "!" exits}
Administrator Password:
Retype Password:
Product : Oracle iPlanet Web Server
Location : /opt/oracle/webserver7
Disk Space : 234.78 MB
------------------------------------------------------
Administration Command Line Interface
Server Core
Start Administration Server [yes/no] [yes] {"<" goes back, "!" exits}: yes
Ready to Install
1. Install Now
2. Start Over
3. Exit Installation
What would you like to do [1] {"<" goes back, "!" exits}?
Installing Oracle iPlanet Web Server
|-1%--------------25%-----------------50%-----------------75%--------------100%|
Installation Successful.
Refer to the installation log file at:
/opt/oracle/webserver7/setup/install.log for more details.
Next Steps:
- You can access the Administration Console by accessing the following URL:
https://mxbeta03.rmohan.com:8989
[root@mxbeta03 webserver7]# ls
admin-server bin https-mxbeta03.tagitmobile.com include jdk Legal lib plugins README.txt setup
[root@mxbeta03 webserver7]# cd https-mxbeta03.tagitmobile.com/
[root@mxbeta03 https-mxbeta03.tagitmobile.com]# ls
auto-deploy bin collections config docs generated lock-db logs sessions web-app
[root@mxbeta03 https-mxbeta03.tagitmobile.com]# cd bin/
[root@mxbeta03 bin]# ls
reconfig restart rotate startserv stopserv
[root@mxbeta03 bin]# ./stopserv
server has been shutdown
[root@mxbeta03 bin]# ./startserv
Oracle iPlanet Web Server 7.0.15 B04/19/2012 21:52
info: CORE5076: Using [Java HotSpot(TM) 64-Bit Server VM, Version 1.6.0_24] from [Sun Microsystems Inc.]
info: HTTP3072: http-listener-1: http://mxbeta03.rmohan.com:80 ready to accept requests
info: CORE3274: successful server startup
[root@mxbeta03 bin]# ./stopserv
server has been shutdown
[root@mxbeta03 bin]# ls
reconfig restart rotate startserv stopserv
/opt/oracle/webserver7/https-rmohan.com/config
We can now access the URL.
Ansible is a configuration management tool, deployment tool, and ad-hoc task execution tool all in one.
It requires no daemons or any other software to start managing remote machines — it works using SSHd (using paramiko, to make it smoother), which is something nearly everyone is running already. Because it’s using SSH, it should easily pass a security audit and be usable in places that would be resistant to running a root-level daemon with a custom PKI infrastructure. Best of all, you should probably be able to completely understand most of Ansible in about 20-30 minutes. Hopefully less.
I also wanted to make Ansible maximally extensible. Ansible modules can be written in any language — not just Ruby or Python, but any language capable of returning JSON or key=value text pairs. Bash or Perl is fine! In this way, Ansible manages to sidestep most of the popular Python vs Ruby language wars entirely, and should be of interest to people who like both — or neither.
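As an illustration of that extensibility, a trivial bash "module" only has to print key=value pairs (or JSON) on stdout. This toy example is ours, not part of Ansible; real modules also read an arguments file passed in by Ansible.

```shell
# A toy Ansible-style module in bash: any executable that prints
# key=value pairs (or JSON) on stdout can act as a module.
# Purely illustrative; real modules also parse an arguments file.
report() {
  echo "changed=false msg=\"uptime checked\" rc=0"
}
report
```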
h3. Installing ansible on Centos 6
h5. ansible dependencies
* python-babel
* PyYAML
* python-crypto
* python-jinja2
* python-paramiko
* python-simplejson – on all nodes (if the Python version is lower than 2.5)
{code}
[root@rmohanserver ansible]# ansible --version
ansible 1.1
{code}
h5. Control Machine (master ) Requirements :
Currently Ansible can be run from any machine with Python 2.6 installed.
[root@rmohanserver download]# python -V
Python 2.6.6
h5. Managed Node Requirements
On the managed nodes, you only need Python 2.4 or later, but if you are running a version lower than Python 2.5 on the remotes, you will also need python-simplejson.
h5. Inventory of managed hosts:
The inventory of managed hosts will be at /etc/ansible/hosts
{code}
[root@rmohanserver ansible]# cat /etc/ansible/hosts
[local]
rmohanserver
[remote]
rmohan1
[all:children]
local
remote
{code}
h5. First run :
{code}
[root@rmohanserver ansible]# ansible remote -a "uptime" --ask-pass
SSH password:
rmohan1 | success | rc=0 >>
23:27:18 up 20 days, 11:05, 0 users, load average: 0.00, 0.00, 0.00
{code}
h5. SSH trust: since you want passwordless authentication from your master machine, it's advisable to establish a trust
{code}
ssh-copy-id rmohanserver
ssh-copy-id rmohan1
{code}
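Note that ssh-copy-id assumes a key pair already exists; if it does not, generate one first. A non-interactive sketch (a temporary directory stands in for ~/.ssh here):

```shell
# Sketch: generate an RSA key pair non-interactively before running
# ssh-copy-id (a temp directory stands in for ~/.ssh).
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$KEYDIR/id_rsa" -q
ls "$KEYDIR"
```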
h5. Sample run :
{code}
[root@rmohanserver ansible]# ansible local -a "uptime"
rmohanserver | success | rc=0 >>
23:32:56 up 20 days, 11:10, 1 user, load average: 0.04, 0.02, 0.00
[root@rmohanserver ansible]# ansible remote -a "uptime"
rmohan1 | success | rc=0 >>
23:33:01 up 20 days, 11:10, 0 users, load average: 0.04, 0.02, 0.00
[root@rmohanserver ansible]# ansible all -a "uptime"
rmohanserver | success | rc=0 >>
23:33:06 up 20 days, 11:11, 1 user, load average: 0.03, 0.02, 0.00
rmohan1 | success | rc=0 >>
23:33:07 up 20 days, 11:11, 0 users, load average: 0.03, 0.02, 0.00
This continues our try-out of the configuration management tool Ansible.
This time, let's look at Playbooks. A Playbook describes a server's configuration as a combination of tasks that use Ansible modules. For example, if you are building a web server with Apache, the job is not finished simply by installing Apache with yum; after that you have to set the contents of httpd.conf properly, open the port in iptables, and so on. By defining this work together as a Playbook, you can keep a server build as a set of "instructions that can be executed". It is the equivalent of a manifest in Puppet or a recipe in Chef. Ansible's Playbook format is YAML, so you can feel free to start writing one.
Simple Playbook
First, let's create a Playbook that simply installs MySQL from yum.
mysql.yml
---
- hosts: all
  user: root
  tasks:
    - name: install mysql
      yum: name=mysql state=installed
To run the Playbook you created, prepare a file that describes the target hosts (just as for the ansible command) and then run the ansible-playbook command.
$ ansible-playbook mysql.yml -i hosts -k
This installs the mysql package on the target hosts using the yum module. Near the top of the YAML file, hosts specifies the host name or group name of the targets, and user specifies the user name Ansible logs in with over SSH.
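A slightly fuller version of the playbook might also make sure the service is running. This is a sketch: the mysql-server package and mysqld service names are assumptions for CentOS 6, so adjust them for your distribution.

```yaml
---
- hosts: all
  user: root
  tasks:
    - name: install mysql server
      yum: name=mysql-server state=installed
    - name: ensure mysqld is running and enabled at boot
      service: name=mysqld state=started enabled=yes
```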
{code}
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.1.60 cluster1.rmohan.com cluster1
192.168.1.61 cluster2.rmohan.com cluster2
192.168.1.62 cluster3.rmohan.com cluster3
192.168.1.50 storage.rmohan.com storage
/etc/init.d/iptables stop
iptables-save > /etc/sysconfig/iptables
SELINUX=disabled
yum update
yum install ricci
yum groupinstall Clustering
yum install -y iscsi-initiator-utils
yum install -y httpd*
cluster1: yum install luci
chkconfig luci on
chkconfig --list luci
service luci start
chkconfig ricci on
chkconfig --list ricci
service ricci start
chkconfig iscsi on
service iscsi start
chkconfig --level 345 httpd on
service httpd start
/usr/sbin/luci_admin init
service luci restart
chkconfig luci on
https://192.168.1.60:8084
admin
admin123
iscsiadm -m discovery -t sendtargets -p storage.rmohan.com
iscsiadm --mode discovery --type sendtargets --portal storage.rmohan.com
chkconfig iscsi on
service iscsi start
[root@storage ~]# cat /etc/initiators.deny
# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
# This configuration file was autogenerated
# by Openfiler. Any manual changes will be overwritten
# Generated at: Sat Sep 22 23:19:38 SGT 2012
#iqn.2006-01.com.openfiler:tsn.34ee3fce49f8 ALL
#iqn.2006-01.com.openfiler:tsn.218fac7efc66 ALL
filesystem ALL
# End of Openfiler configuration
/etc/init.d/iscsi restart
cluster1
fdisk /dev/sdb
partprobe /dev/sdb
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /var/www/html/
[root@cluster1 ~]# cat /etc/fstab
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
/dev/sdb1 /var/www/html ext3 defaults 1 1
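Appending the mount entry and verifying it can be sketched like this (a scratch file stands in for /etc/fstab; on a real node you would edit /etc/fstab itself):

```shell
# Sketch: append the shared-storage mount entry to a scratch copy of
# fstab and verify it is present (edit /etc/fstab on a real node).
FSTAB=$(mktemp)
echo '/dev/sdb1 /var/www/html ext3 defaults 1 2' >> "$FSTAB"
grep '/var/www/html' "$FSTAB"
```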
cluster2
yum install -y ricci httpd*
yum groupinstall Clustering
service ricci start
chkconfig ricci on
chkconfig iscsi on
service iscsi start
iscsiadm -m discovery -t sendtargets -p storage.rmohan.com
service iscsi restart
mount /dev/sdb1 /var/www/html/
add in fstab
/dev/sdb1 /var/www/html/ ext3 defaults 1 2
cluster3
yum install -y ricci httpd*
yum groupinstall Clustering
service ricci start
chkconfig ricci on
iscsiadm -m discovery -t sendtargets -p storage.rmohan.com
service iscsi restart
mount /dev/sdb1 /var/www/html/
/dev/sdb1 /var/www/html/ ext3 defaults 1 2
Secure sockets layer communication
The industry-standard Secure Sockets Layer (SSL) protocol, which uses signed digital certificates from a certificate authority (CA) for authentication, is used to secure communication in a Tivoli Identity Manager Express deployment.
SSL provides encryption of the data that is exchanged between the applications. Encryption makes data that is transmitted over the network intelligible only to the intended recipient.
Signed digital certificates enable two applications connecting in a network to authenticate each other’s identity. An application acting as an SSL server presents its credentials in a signed digital certificate to verify to an SSL client that it is the entity it claims to be. An application acting as an SSL server can also be configured to require the application acting as an SSL client to present its credentials in a certificate, thereby completing a two-way exchange of certificates. Signed certificates are issued by a third-party certificate authority for a fee.
Some utilities, such as those provided by OpenSSL, can also issue signed certificates.
SSL uses public key encryption technology for authentication. In public key encryption, a public key and a private key are generated for an application. Data encrypted with the public key can only be decrypted using the corresponding private key. Similarly, the data encrypted with the private key can only be decrypted using the corresponding public key. The private key is password-protected in a key database file (keystore) so that only the owner can access the private key to decrypt messages that are encrypted using the corresponding public key.
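The public/private key round trip described above can be demonstrated with OpenSSL. This is a sketch only: the file names and the 2048-bit key size are arbitrary choices, and a real deployment would keep the private key password-protected in a keystore.

```shell
# Illustrative sketch of public-key encryption with OpenSSL.
TMP=$(mktemp -d)
openssl genrsa -out "$TMP/key.pem" 2048 2>/dev/null            # private key
openssl rsa -in "$TMP/key.pem" -pubout -out "$TMP/pub.pem" 2>/dev/null  # public key
printf 'secret' > "$TMP/msg.txt"
# Encrypt with the public key; only the matching private key can decrypt.
openssl pkeyutl -encrypt -pubin -inkey "$TMP/pub.pem" \
  -in "$TMP/msg.txt" -out "$TMP/msg.enc"
# Decrypt with the private key, recovering the original message.
openssl pkeyutl -decrypt -inkey "$TMP/key.pem" -in "$TMP/msg.enc"
```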
Digitally-signed CA certificates
A signed digital certificate is an industry-standard means of verifying the authenticity of an entity, such as a server, client, or application. To ensure maximum security, a certificate is issued by a third-party certificate authority (CA).
CA certificate chaining
To increase security, some organizations use CA certificate chaining.
Self-signed certificates
You can use self-signed certificates to test an SSL configuration before you create and install a signed certificate issued by a certificate authority.
Certificate file types
Certificates and keys are stored in several types of files.
SSL implementations that Tivoli Identity Manager Express uses
Tivoli Identity Manager Express uses several implementations of the SSL protocol.
SSL tools that Tivoli Identity Manager Express uses
Tivoli Identity Manager Express uses several SSL key management utilities.
SSL authentication
The SSL authentication process uses certificates that are issued by a certificate authority. The same process applies if the certificates are issued by a certificate generation utility or if self-signed certificates are used.
Digitally-signed CA certificates
A signed digital certificate is an industry-standard means of verifying the authenticity of an entity, such as a server, client, or application.
To ensure maximum security, a certificate is issued by a third-party certificate authority (CA).
A certificate-authority certificate (CA certificate) must be installed to verify the origin of a signed digital certificate.
When an application receives another application’s signed certificate, it uses a CA certificate to verify the originator of the certificate.
A certificate authority can be well-known and widely used by other organizations, or local to a specific region or company. Many applications, such as Web browsers, are configured with the CA certificates of well known certificate authorities to eliminate or reduce the task of distributing CA certificates throughout the security zones in a network.
A certificate contains the following information to verify the identity of an entity:
Organizational information: This section of the certificate contains information that uniquely identifies the owner of the certificate, such as the organizational name and address. You supply this information when you generate a certificate using a certificate management utility.
Public key: The receiver of the certificate uses the public key to decipher encrypted text sent by the certificate owner to verify its identity. A public key has a corresponding private key that encrypts the text.
Certificate authority's distinguished name: The issuer of the certificate identifies itself with this information.
Digital signature: The issuer of the certificate signs it with a digital signature to verify its authenticity. This signature is compared to the signature on the corresponding CA certificate to verify that the certificate originated from a trusted certificate authority.
Web browsers, servers, and other SSL-enabled applications generally accept as genuine any digital certificate that is signed by a trusted Certificate Authority and is otherwise valid.
For example, a digital certificate can be invalidated because it has expired or the CA certificate used to verify it has expired, or because the distinguished name in the digital certificate of the server does not match the distinguished name specified by the client.
SSL authentication
The SSL authentication process uses certificates that are issued by a certificate authority. The same process applies if the certificates are issued by a certificate generation utility or if self-signed certificates are used.

Figure 1 illustrates
the steps that authenticate the identity of an application:
Figure 1. Authenticating the identity of an application
To establish an SSL connection:
1.An application acting as an SSL client contacts an application acting as an SSL server.
2.The SSL server responds by sending the signed certificate stored in its keystore to the SSL client. A CA certificate contains identifying information about the CA that issued the certificate and the application (owner) that presents the certificate, a public key, and the digital signature of the CA.
3.The SSL client uses the corresponding CA certificate stored in its keystore to verify the digital signature on the certificate.
4.In addition to verifying the signature on the certificate, the SSL client requests the SSL server to prove its identity.
5.The SSL server uses its private key to encrypt a message.
6.The SSL server sends the encrypted message to the SSL client.
7.To decrypt the message, the SSL client uses the public key embedded in the signed certificate it received, and thereby verifies the identity of the owner of the certificate.
If the SSL server is set to use two-way SSL authentication (client authentication), it then asks the SSL client to verify and prove its identity, and the same process described above is used to verify the identity of the SSL client to the SSL server.
One-way SSL authentication
One-way SSL authentication enables the application operating as the SSL client to verify the identity of the application operating as the SSL server.

Two-way SSL authentication
In two-way SSL authentication, the SSL client application verifies the identity of the SSL server application, and then the SSL server application verifies the identity of the SSL-client application.


Configuration of Apache for SSL between the Client (e.g., browser) and the Web Server
Pre-setup
You need to have the following SSL certificates, which will be used to establish the connection:
• Private key certificate
• Public key certificate or server cert
• Certificate Authority (CA)
In the example that follows, these will be named:
• Private key = rmohanKey.crt
• Public Key = rmohanCert.crt
• CA = rmohanCA.crt
For 2-way SSL, another CA file is needed:
• FrontEndCAwithCAClientChainToAuthenticate.crt
This file is the same as the CA FrontEndCA.crt file but contains, concatenated to it, the CA chain it will need to authenticate the client certs in case of 2-way SSL.
Simple Setup for 1-way SSL
Three properties need to be configured in the Apache httpd.conf file:
• SSLCertificateFile
• SSLCertificateKeyFile
• SSLCACertificateFile
For example:
# server cert to use
SSLCertificateFile /etc/httpd/conf/ssl.crt/rmohanCert.crt
# server private key to use
SSLCertificateKeyFile /etc/httpd/conf/ssl.key/rmohanKey.crt
# CA to use
SSLCACertificateFile /etc/httpd/conf/rmohanCA.crt
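These directives normally live inside the SSL-enabled virtual host. A minimal sketch, reusing the example paths above (the port and the SSLEngine directive follow standard Apache conventions; adapt to your own httpd.conf layout):

```apache
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile /etc/httpd/conf/ssl.crt/rmohanCert.crt
    SSLCertificateKeyFile /etc/httpd/conf/ssl.key/rmohanKey.crt
    SSLCACertificateFile /etc/httpd/conf/rmohanCA.crt
</VirtualHost>
```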
Simple Setup for 2-way SSL
In addition to the above 3 properties, the following properties in the Apache httpd.conf file need to be configured:
• SSLVerifyClient
• SSLVerifyDepth
# server cert to use
SSLCertificateFile /etc/httpd/conf/ssl.crt/rmohanCert.crt
# server private key to use
SSLCertificateKeyFile /etc/httpd/conf/ssl.key/rmohanKey.crt
# CA to use
SSLCACertificateFile /etc/httpd/conf/rmohanCA.crt
# Force 2 way SSL
# none: no client Certificate is required at all
# optional: the client may present a valid Certificate
# require: the client has to present a valid Certificate
# optional_no_ca: the client may present a valid Certificate, but it need not be (successfully) verifiable.
SSLVerifyClient require
# This directive sets how deeply mod_ssl should verify before deciding that the clients
# don't have a valid certificate. The depth is the maximum number of intermediate
# certificate issuers.
SSLVerifyDepth 10
It is assumed that the client CAs to be used to authenticate have been extracted. These CAs need to be concatenated to the SSLCACertificateFile.
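The concatenation step might look like this (illustrative only: scratch files with marker contents stand in for the real certificate files named above):

```shell
# Sketch: append the client CA chain to the CA file referenced by
# SSLCACertificateFile (scratch files stand in for real certificates).
TMP=$(mktemp -d)
printf 'SERVER-CA\n' > "$TMP/rmohanCA.crt"
printf 'CLIENT-CA-CHAIN\n' > "$TMP/clientCAchain.crt"
cat "$TMP/clientCAchain.crt" >> "$TMP/rmohanCA.crt"
cat "$TMP/rmohanCA.crt"
```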
If the goal is to propagate the client certificate to WebLogic for authentication, consider adding the option:
SSLExportClientCertificates
# SSLExportClientCertificates: Export client certificates and the certificate chain
# behind them to CGIs. The certificates are base64-encoded in the
# environment variables SSL_CLIENT_CERT and SSL_CLIENT_CERT_CHAIN_n,
# where n runs from 1 upwards.
How to Debug
SSLLog: This directive sets the name of the dedicated SSL protocol engine logfile. Error-type messages are additionally duplicated to the general Apache error log file (directive ErrorLog). Put this somewhere where it cannot be used for symlink attacks on a real server (somewhere where only root can write). For example:
SSLLog /var/httpd/logs/ssl_engine_log
SSLLogLevel: This directive sets the verbosity degree of the dedicated SSL protocol engine logfile. The level is one of the following (in ascending order where higher levels include lower levels):
• none – no dedicated SSL logging is done, but messages of level "error" are still written to the general Apache error logfile
• error – log messages of error type only, i.e. messages which show fatal situations (processing is stopped). These messages are also duplicated to the general Apache error logfile
• warn – also log warning messages, i.e. messages which show non-fatal problems (processing is continued)
• info – also log informational messages, i.e. messages which show major processing steps
• trace – also log trace messages, i.e. messages which show minor processing steps
• debug – also log debugging messages, i.e. messages which show development and low-level I/O information
For example:
SSLLogLevel info