Deployment manager fails due to "No space left on device"

This error typically appears in one of the following two forms:

SystemErr R java.io.FileNotFoundException: /websphere/profiles/Dmgr01/wstemp/common/XmlLocale.class (No space left on device)

SystemErr R java.io.FileNotFoundException: /websphere/profiles/Dmgr01/wstemp/778788/prefs.xml (No such file or directory)

For either of these errors, perform the following preliminary checks:
Step 1: df -kh /websphere (or whichever filesystem wstemp is mounted on)
This checks the filesystem usage. If the filesystem is at 100%, clear the unwanted files and try restarting the server.
If the filesystem has free space and you still see the error, move on to the next step.
Step 2: Try creating a directory with any name inside the wstemp folder. Most probably you will get the following error:
[root@ wstemp]# mkdir test
mkdir: cannot create directory 'test': No space left on device
Step 3: df -i /websphere
This will most likely show 100% inode usage:
 
Filesystem             Inodes   IUsed  IFree IUse% Mounted on
/filesystem           2293760 2293757      3  100% /websphere
It is very rare to run out of inodes before the disk space is full. Tracking down the cause requires a little manual work. Use the following command to count the files and directories under each top-level directory of the filesystem:

for i in /*; do echo $i; find $i | wc -l; done

The directory with the highest number of files is probably the culprit. Move into that directory and run the command again. Keep going until you find the folder with the most files, then delete the ones that are not in use.

This only solves the issue temporarily. The permanent solution is to find out why the files are piling up, because the number of inodes is fixed at filesystem creation time and cannot be changed dynamically.
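The loop above can be wrapped into a small helper that sorts the counts for you, so the heaviest directory surfaces immediately. A minimal sketch (the count_entries name is mine, not part of the original procedure):

```shell
# Count the files and directories beneath each subdirectory of the given
# path and print them sorted, largest first. The top entry is the most
# likely inode consumer.
count_entries() {
    for d in "$1"/*; do
        [ -d "$d" ] || continue
        printf '%s %s\n' "$(find "$d" | wc -l)" "$d"
    done | sort -rn
}

# Example (using this article's mount point; substitute your own):
# count_entries /websphere
```

Re-running count_entries on the top result drills down one level at a time, the same way as the manual loop.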

ERROR: ESI: getResponse: failed to get response: rc = 4

[15/Feb/2016:23:30:06.56633] 0011dbd46 ads3asd00 – ERROR: ESI: getResponse: failed to get response: rc = 4
[15/Feb/2016:23:30:06.56636] 0011dbd46 asdfdsaf – ERROR: [hostname://url] ws_common: websphereHandleRequest: Failed to handle request rc=4
[15/Feb/2016:23:30:07.3555539] 0000db46 1adsf789 – ERROR: ws_common: websphereFindTransport: No secure transports available
[15/Feb/2016:23:30:07.344546] 0000db46 1sdf9f8700 – ERROR: ws_common: websphereWriteRequestReadResponse: Failed to find a transport

Analysis:

This error occurs when both HTTP and HTTPS, i.e. both the secure and non-secure transports, are defined in your plugin-cfg.xml file as follows. This is the default configuration:

<Transport Hostname="hostname" Port="9080" Protocol="http"/>
<Transport Hostname="hostname" Port="9443" Protocol="https"/>

In this setup, when the incoming traffic is secure the web server chooses the secure transport (HTTPS) to connect to the backend application server, and when the incoming traffic is not secure it chooses the non-secure transport (HTTP).

Previously, if the plugin received a secure (HTTPS) request but could not create a secure connection to the backend application server, it would fall back to the non-secure connection; if no HTTP transport was defined, the request failed.

In WAS 8.5.5, however, if the plugin is unable to establish a secure connection to the backend application server, it will not fall back to a non-secure connection. This is treated as a security threat, and the errors shown above are written to the http_plugin.log file.

Solution: 

To solve this issue, navigate to Web servers -> instance name -> Plug-in properties -> Custom properties, and create a property named Unsecure with the value true. Save the custom properties.

Generate and propagate the plugin and restart the webservers.

Or

You can remove the http transport from the plugin-cfg.xml file and propagate it and restart the webservers.
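If you take the second route, the server's transport definition in plugin-cfg.xml is reduced to the secure entry only. A sketch of what remains (the keyring and stashfile paths are placeholders; use the paths from your own propagated plugin configuration):

```xml
<!-- Secure transport only: no HTTP entry remains to fall back to. -->
<Transport Hostname="hostname" Port="9443" Protocol="https">
    <Property Name="keyring" Value="/opt/IBM/Plugins/config/webserver1/plugin-key.kdb"/>
    <Property Name="stashfile" Value="/opt/IBM/Plugins/config/webserver1/plugin-key.sth"/>
</Transport>
```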

FFDC Exception:com.ibm.db2.jcc.c.SqlException

[2/26/10 7:34:18:98 IST]     FFDC Exception:com.ibm.db2.jcc.c.SqlException SourceId:com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper.getPooledCon ProbeId:1298
com.ibm.db2.jcc.c.SqlException: [ibm][db2][jcc] No license was found. An appropriate license file db2jcc_license_*.jar must be provided in the CLASSPATH setting.

This exception occurs when the path to the db2jcc_license_cisuz.jar file is not defined on the classpath.

Resolution:

Navigate to Application servers > AppNodeName01 > Process definition > Java Virtual Machine

Under General properties, add the path of the license file to the Classpath field; multiple paths can be separated by ":".

Restart the servers once done, and the error should no longer appear.

Script to check the process id running on a particular port – Solaris

I found this script on the internet and do not know its author. If I find the original link in the future, I will add it here.

 


#!/bin/ksh

line='---------------------------------------------'
pids=$(/usr/bin/ps -ef | sed 1d | awk '{print $2}')

if [ $# -eq 0 ]; then
    read ans?"Enter port you would like to know pid for: "
else
    ans=$1
fi

for f in $pids
do
    /usr/proc/bin/pfiles $f 2>/dev/null | /usr/xpg4/bin/grep -q "port: $ans"
    if [ $? -eq 0 ]; then
        echo $line
        echo "Port: $ans is being used by PID:\c"
        /usr/bin/ps -ef -o pid -o args | egrep -v "grep|pfiles" | grep $f
    fi
done
exit 0

Save this script to a porttopid.sh file.
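The pfiles approach above is Solaris-specific. On Linux the same lookup is much simpler; a rough equivalent (the port2pid function name is mine; fuser comes from the psmisc package, and lsof -i :PORT or ss -ltnp would also work):

```shell
# Print the PID(s) of the process using a given TCP port (Linux).
port2pid() {
    if [ $# -ne 1 ]; then
        echo "Usage: port2pid <port>" >&2
        return 1
    fi
    # fuser prints the PIDs owning the socket; -n tcp selects the TCP namespace.
    fuser -n tcp "$1" 2>/dev/null || echo "No process found on port $1"
}

# Example:
# port2pid 9080
```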


Failed in r_gsk_secure_soc_init: GSK_ERROR_BAD_CERT

The architecture includes two WAS Base installations on different nodes (physical hosts) and an IHS installation on one of them (sharing a Base server's physical host).

After following these steps and creating a cluster as in my previous post, follow the steps below if you want to enable the HTTPS protocol at the IHS level to access the application.

Error: The following errors appear in the logs if you try to access the application on node2 using the HTTPS protocol:

[01/Dec/2014:05:00:10.44085] 0000510d c57fb700 – ERROR: lib_stream: openStream: Failed in r_gsk_secure_soc_init: GSK_ERROR_BAD_CERT(gsk rc = 414) PARTNER CERTIFICATE DN=CN=hostname,OU=hostNode02Cell,OU=hostNode01,O=IBM,C=US, Serial=04:6f:ec:5e:84:05:56
[01/Dec/2014:05:00:10.44092] 0000510d c57fb700 – ERROR: ws_common: websphereGetStream: Could not open stream
[01/Dec/2014:05:00:10.44097] 0000510d c57fb700 – ERROR: ws_common: websphereExecute: Failed to create the stream

This error occurs because the plugin key database (kdb) referenced in the IHS configuration file does not have the node2 certificate installed.

Step 1: Log in to the WAS console.

Step 2: Navigate to Servers -> Server Types -> Web servers -> webserver1 -> Plug-in properties

 

Step 3: Click Manage keys and certificates -> CMSKeyStore -> Signer certificates

Step 4: Click Retrieve from port and provide the hostname and port of the base server, e.g. localhost (node2), port 9043.

This will retrieve the certificate installed on node2 and add it to the plugin kdb file.

Step 5: After saving the certificate, navigate back to Plug-in properties and click Copy to web server key store directory.

This will create a new set of plugin-key.kdb and plugin-key.sth.

Step 6: Now check plugin-cfg.xml for the kdb and sth paths, copy the new files to those paths, and restart IHS.

Step 7: Test the application by accessing it over HTTPS.

 

 

How to change wsadmin default Language


To change the default language, modify the following parameter in wsadmin.properties:

#-------------------------------------------------------------------------
# The defaultLang property determines what scripting language to use.
# Supported values are jacl and jython.
# The default value is jacl.
#-------------------------------------------------------------------------
# com.ibm.ws.scripting.defaultLang=jacl
com.ibm.ws.scripting.defaultLang=jython

zfs cheat sheet Solaris 11


The ZFS file system fundamentally changes the way file systems are administered, with features and benefits not found in other file systems available today. ZFS is robust, scalable, and easy to administer. ZFS uses the concept of storage pools to manage physical storage and eliminates volume management altogether. Instead of forcing you to create virtualized volumes, ZFS aggregates devices into a storage pool. File systems are no longer constrained to individual devices and can share disk space with all file systems in the pool. You no longer need to predetermine the size of a file system: file systems grow automatically within the disk space allocated to the storage pool. When new storage is added, all file systems within the pool can immediately use the additional disk space without additional work.
                     zpool commands                         Description
zpool create testpool c0t0d0 Create simple pool named testpool with single disk
creating default mount point as poolname(/testpool)
OPTIONAL:
-n do a dry run on pool creation
-f force creation of the pool
zpool create testpool mirror c0t0d0 c0t0d1 Create testpool mirroring c0t0d0 with c0t0d1
creating default mount point as poolname(/testpool)
zpool create -m /mypool testpool c0t0d0 Create pool with different mount point than default
zpool create testpool raidz c2t1d0 c2t2d0 c2t3d0 Create RAID-Z testpool
zpool add testpool raidz c2t4d0 c2t5d0 c2t6d0 Add RAID-Z disks to testpool
zpool create testpool raidz1 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 Create RAIDZ-1 testpool
zpool create testpool raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 Create RAIDZ-2 testpool
zpool add testpool spare c2t6d0 Add spare device to the testpool
zpool create testpool mirror c2t1d0 c2t2d0 mirror c2t3d0 c2t4d0 Disk c2t1d0 mirrored with c2t2d0 &
c2t3d0 mirrored with c2t4d0
zpool remove testpool c2t6d0 Removes hot spares and cache disks
zpool detach testpool c2t4d0 Detach the mirror from the pool
zpool clear testpool c2t4d0 Clears specific disk fault
zpool replace testpool c3t4d0 Replace disk like disk
zpool replace testpool c3t4d0 c3t5d0 Replace one disk with another disk
zpool export testpool Export the pool from the system
zpool import testpool Imports specific pool
zpool import -f -D -d /testpool testpool Import destroyed testpool
zpool import testpool newtestpool Import a pool originally named testpool under
new name newtestpool
zpool import 88746667466648 Import pool using ID
zpool offline testpool c2t4d0 Offline the disk in the pool
Note: zpool offline -t testpool c2t4d0 will offline it temporarily
zpool upgrade -a upgrade all pools
zpool upgrade testpool Upgrade specific pool
zpool status -x Health status of all pools
zpool status testpool Status of pool in verbose mode
zpool get all testpool Lists all the properties of the storage pool
zpool set autoexpand=on testpool Set the parameter value on the storage pool
Note: zpool get all testpool gives you all the properties
on which it could be used to set value
zpool list Lists all pools
zpool list -o name,size,altroot show properties of the pool
zpool history Displays history of the pool
Note: Once the pool is removed, history is removed.
zpool iostat 2 2 Display ZFS I/O statistics
zpool destroy testpool Removes the storage pool
                       zfs commands                       Description
zfs list Lists the ZFS file systems
zfs list -t filesystem Lists only filesystems
zfs list -t snapshot Lists only snapshots
zfs list -t volume Lists only volumes
zfs create testpool/filesystem1 Creates ZFS filesystem on testpool storage
zfs create -o mountpoint=/filesystem1 testpool/filesystem1 Different mountpoint created after ZFS creation
zfs rename testpool/filesystem1 testpool/filesystem2 Renames the ZFS filesystem
zfs unmount testpool unmount the storagepool
zfs mount testpool mounts the storagepool
NFS exports in ZFS zfs share testpool – shares the file system for export
zfs set share.nfs=on testpool – make the share persistent
across reboot
svcs -a nfs/server – NFS server should be online
cat /etc/dfs/dfstab – Exported entry in the file
showmount -e – storage pool has been exported
zfs unshare testpool Remove NFS exports
zfs destroy -r testpool Destroy storage pool and all datasets under it
zfs set quota=1G testpool/filesystem1 set quota of 1GB on the filesystem1
zfs set reservation=1G testpool/filesystem1 set reservation of 1GB on the filesystem1
zfs set mountpoint=legacy testpool/filesystem1 Disable ZFS auto mounting and enable
through /etc/vfstab
zfs unmount testpool/filesystem1 unmounts ZFS filesystem1 in testpool
zfs mount testpool/filesystem1 mounts ZFS filestystem1 in testpool
zfs mount -a mounts all the ZFS filesystems
zfs snapshot testpool/filesystem1@friday Creates a snapshot of the filesystem1
zfs hold keep testpool/filesystem1@friday Places a hold (tag "keep") on the snapshot; attempts to destroy it with zfs destroy will fail
zfs rename testpool/filesystem1@friday FRIDAY Rename the snapshot
Note: snapshots must exist in the same pool
zfs diff testpool/filesystem1@friday testpool/filesystem1@friday1 Identify the difference between two snapshots
zfs holds testpool/filesystem1@friday Displays the list of snapshot holds
zfs rollback -r testpool/filesystem1@friday Roll back to the friday snapshot recursively
zfs destroy testpool/filesystem1@thursday Destroy the thursday snapshot
zfs clone testpool/filesystem1@friday testpool/clones/friday Create a clone from the snapshot
Note: A clone cannot be created in a pool different from where the original snapshot resides.
zfs destroy testpool/clones/friday Destroy the clone

RHEL / CentOS 7 Network Teaming

Below is an example on how to configure network teaming on RHEL/CentOS 7. It is assumed that you have at least two interface cards.

Show Current Network Interfaces
[root@rhce-server ~]$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 00:0c:29:69:bf:87 brd ff:ff:ff:ff:ff:ff
3: eno33554984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 00:0c:29:69:bf:91 brd ff:ff:ff:ff:ff:ff

The two devices I will be teaming are eno33554984 and eno16777736.

Create the Team Interface
[root@rhce-server ~]$ nmcli connection add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'

This will configure the interface for activebackup. Other runners include broadcast, roundrobin, loadbalance, and lacp.
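The runner is selected by the "name" key in the JSON config string passed to nmcli. For example, a load-balancing team could use a config like the following (the tx_hash fields are my assumption about a sensible hash setup; check teamd.conf(5) for the exact options):

```json
{"runner": {"name": "loadbalance", "tx_hash": ["eth", "ipv4", "ipv6"]}}
```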

Configure team0's IP Address
[root@rhce-server ~]# nmcli connection modify team0 ipv4.addresses 192.168.1.22/24
[root@rhce-server ~]# nmcli connection modify team0 ipv4.method manual

You can also configure an IPv6 address by setting the ipv6.addresses field.

Configure the Team Slaves
[root@rhce-server ~]# nmcli connection add type team-slave con-name team0-slave1 ifname eno33554984 master team0
Connection 'team0-slave1' (4167ea50-7d3a-4024-98e1-3058a4dcf0fa) successfully added.
[root@rhce-server ~]# nmcli connection add type team-slave con-name team0-slave2 ifname eno16777736 master team0
Connection 'team0-slave2' (d5ed65d1-16a7-4bc7-8c4d-78e17a1ed8b3) successfully added.

Check the Connection
[root@rhce-server ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  eno16777736
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
  eno33554984
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
runner:
  active port: eno16777736

[root@rhce-server ~]# ping -I team0 192.168.1.1
PING 192.168.1.1 (192.168.1.1) from 192.168.1.24 team0: 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.38 ms

Test Failover
[root@rhce-server ~]# nmcli device disconnect eno16777736
[root@rhce-server ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  eno33554984
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
runner:
  active port: eno33554984

Configuring Postfix as a Null Client


This howto assumes that the relay server's IP address is 192.168.1.22 and that it is running RHEL/CentOS 7. Only mail from the 192.168.1.0/24 network should be accepted and relayed.

Install Postfix
[root@rhce-server ~]# yum install postfix

Configure Systemd
[root@rhce-server ~]# systemctl enable postfix
[root@rhce-server ~]# ^enable^start

Configure the Firewall
[root@rhce-server ~]# firewall-cmd --add-service=smtp
success
[root@rhce-server ~]# firewall-cmd --add-service=smtp --permanent
success

Configure Postfix

Postfix's main configuration file is located at /etc/postfix/main.cf.

Configure Postfix to listen on the correct interface.
inet_interfaces = all

Configure the trusted network.
mynetworks = 192.168.1.0/24

Configure the list of domains that this Postfix service should consider itself the final destination for. In my case the server is named rhce-server.
mydestination = rhce-server, localhost.localdomain, localhost

Configure all mail not destined for this server to be relayed to another SMTP server.  The brackets tell Postfix to turn off MX lookups.
relayhost = [smtp-server.rmohan.com]
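Putting the four settings together, the relevant lines of main.cf would look like this (hostname, network, and relay host are this howto's examples; substitute your own):

```
inet_interfaces = all
mynetworks = 192.168.1.0/24
mydestination = rhce-server, localhost.localdomain, localhost
relayhost = [smtp-server.rmohan.com]
```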

Restart Postfix
[root@rhce-server postfix]# systemctl restart postfix

Send a Test Email
[root@rhce-server postfix]# mail -s "rhce-server test" josh@example.com
testing our null postfix configuration
.
EOT

With any luck we should be all set. You can verify the mail was successfully relayed in /var/log/maillog.

unbound RHCE

This howto shows the steps needed to configure unbound for DNS caching and forwarding for the 192.168.1.0/24 network. It assumes the server's IP address is 192.168.1.22 and that it is running RHEL/CentOS 7.

Installation
[root@rhce-server ~]# yum install unbound

Configure Systemd
[root@rhce-server ~]# systemctl enable unbound
ln -s '/usr/lib/systemd/system/unbound.service' '/etc/systemd/system/multi-user.target.wants/unbound.service'
[root@rhce-server ~]# ^enable^start
systemctl start unbound

Configure the Firewall
[root@rhce-server ~]# firewall-cmd --add-service=dns
success
[root@rhce-server ~]# firewall-cmd --add-service=dns --permanent
success

Configure Unbound

Unbound's configuration is stored in /etc/unbound/unbound.conf.

By default, unbound listens only on the loopback interface. Specify the interface you would like to use.
interface: 192.168.1.22

Allow queries from 192.168.1.0/24.
access-control: 192.168.1.0/24 allow

Disable DNSSEC.
domain-insecure: *

Forward uncached requests to OpenDNS.
forward-zone:
    name: "."
    forward-addr: 208.67.222.222
    forward-addr: 208.67.220.220
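Assembled into one sketch, the modified parts of unbound.conf would look roughly like this (indentation matters; the catch-all forward zone is written as "."):

```
server:
    interface: 192.168.1.22
    access-control: 192.168.1.0/24 allow
    domain-insecure: "*"

forward-zone:
    name: "."
    forward-addr: 208.67.222.222
    forward-addr: 208.67.220.220
```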

Check Your Configuration
[root@rhce-server ~]# unbound-checkconf
unbound-checkconf: no errors in /etc/unbound/unbound.conf

Restart the Unbound Service
[root@rhce-server ~]# systemctl restart unbound

Verify it is Working

Test from a different system on the network.
mooose:~ jglemza$ dig rmohan.com A @192.168.1.22

; <<>> DiG 9.8.3-P1 <<>> rmohan.com A @192.168.1.22
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60299
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;rmohan.com.          IN  A

;; ANSWER SECTION:
rmohan.com.       43200   IN  A   64.191.171.200

;; Query time: 234 msec
;; SERVER: 192.168.1.22#53(192.168.1.22)
;; WHEN: Sat Mar 21 13:16:54 2015
;; MSG SIZE  rcvd: 42

Verify the record is now in unbound’s cache.
[root@rhce-server ~]# unbound-control dump_cache|grep rmohan.com
ns2.rmohan.com.   43197   IN  A   23.253.56.58
rmohan.com.   43197   IN  A   64.191.171.200
ns1.rmohan.com.   43197   IN  A   64.191.171.194
rmohan.com.   43197   IN  NS  ns1.rmohan.com.
rmohan.com.   43197   IN  NS  ns2.rmohan.com.