CentOS 7 + SELinux + PHP + Apache – cannot write/access file

On an Amazon EC2 instance running CentOS 7, the Apache logs kept saying it couldn't write to a file due to a permission problem, even though the file permissions were properly set up. It turned out to be SELinux in action.

Problem 1: Can’t serve files on a custom directory

The first problem I encountered was when I tried to set up the application inside /data/www/html/sites/mysite. When viewed in the browser, it returned 403 Forbidden and the error log said:

(13)Permission denied: [client 121.54.44.93:23180] AH00529: /data/www/html/sites/mysite/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable and that '/data/www/html/sites/mysite/' is executable
The directory structure had proper ownership and permissions, e.g. the directory was owned by apache:apache, file permissions were 0644 and directory permissions were 0755. It didn't make sense at all. I noticed, though, that Apache had no problem serving the PHP file from the default document root, so I decided to serve the site from /var/www/html/mysite, the default document root, instead.

Problem 2: Can’t write to file

Moving to the default document root did the trick and I was able to run the application, but with errors. The error said it couldn't write to a file although, again, proper permissions were already set on the directory. Below is the error (it comes from a custom error log, but if writing to a log file doesn't work, imagine how your upload functionality would fare):

PHP Warning: fopen(/var/www/html/mysite/application/config/../../logs/web/20150708.ALL.log): failed to open stream: Permission denied in /var/www/html/mysite/application/core/App_Exceptions.php
Surprise! SELinux is here!

After realizing that it was SELinux that had been messing with me for the past 2 hours, I thought about ditching CentOS and going with Ubuntu instead. But then my instinct told me that if SELinux was blocking the read/write operations, it must be doing it for a good reason, and that reason was security. I realized that you need to tell SELinux which files/directories Apache can serve and which files/directories it can write into.

SELinux applies its own rules/policies to files/directories on top of the Unix file permission structure. When I ran the command below on the default document root, I saw more information on the file/directory permissions.

ls -Z /var/www/html/mysite
Below is the output (some information removed):

drwxr-xr-x. apache apache unconfined_u:object_r:httpd_sys_content_t:s0 application
-rw-r--r--. apache apache unconfined_u:object_r:httpd_sys_content_t:s0 index.php
And below is what I got for other normal directories:

drwxr-xr-x. apache apache unconfined_u:object_r:default_t:s0 www
We can therefore conclude that we need to set the proper SELinux context on a custom directory in order to serve files from it, and a different context to allow writing to files. With that, we can solve the original problem.
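Incidentally, a quick way to confirm that SELinux (and not plain file permissions) is what's blocking you is to check the enforcement mode and look for recent denials in the audit log. A minimal check, assuming auditd is running (it is by default on CentOS 7):

# Is SELinux actually enforcing?
getenforce

# Show recent AVC denials for the httpd process
sudo ausearch -m avc -c httpd --start recent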

Fixing the original problem

So we want to serve our files at /data/www/html/sites/mysite and enable writing to log files and file uploads as well? Let’s play nice with SELinux.

First, copy the files as usual to /data/www/html/sites/mysite, then set the proper ownership and permissions.

# Ownership
sudo chown apache:apache -R /data/www/html/sites/mysite
cd /data/www/html/sites/mysite

# File permissions, recursive
find . -type f -exec chmod 0644 {} \;

# Dir permissions, recursive
find . -type d -exec chmod 0755 {} \;

# SELinux: allow Apache to serve these files, recursive
sudo chcon -t httpd_sys_content_t /data/www/html/sites/mysite -R

# Allow write only to specific dirs
sudo chcon -t httpd_sys_rw_content_t /data/www/html/sites/mysite/logs -R
sudo chcon -t httpd_sys_rw_content_t /data/www/html/sites/mysite/uploads -R
httpd_sys_content_t allows Apache to serve the content, while httpd_sys_rw_content_t allows Apache to write to those paths.
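One caveat worth knowing: chcon only changes the labels on the inodes, so a full filesystem relabel can undo it. To make the contexts permanent, record them in the local policy with semanage (shipped in the policycoreutils-python package on CentOS 7) and apply them with restorecon. A sketch using the same paths as above:

# Persist the contexts so a future relabel doesn't undo them
sudo yum install -y policycoreutils-python
sudo semanage fcontext -a -t httpd_sys_content_t "/data/www/html/sites/mysite(/.*)?"
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/data/www/html/sites/mysite/logs(/.*)?"
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/data/www/html/sites/mysite/uploads(/.*)?"
sudo restorecon -Rv /data/www/html/sites/mysite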

Fix stale NFS mounts on Linux without rebooting

I have often noticed that some folks reboot systems to fix stale NFS mount problems, which can be disruptive.

Fortunately, that often isn't necessary. All you have to do is restart the nfs and autofs services. However, that sometimes fails because user processes have files open on the stale partition or are cd'ed into it.

Both conditions are easy to fix; the steps are described below.

Step 1. Kill processes with open files on the partition

Use lsof to find the processes that have files open on the partition and then kill those processes using kill or pkill.

% # Find the jobs that are accessing the stale partition and kill them.
% kill -9 $(lsof |\
egrep '/stale/fs|/export/backup' |\
awk '{print $2;}' |\
sort -fu )

% # Restart the NFS and AUTOFS services
% service nfs stop
% service autofs stop
% service nfs start
% service autofs start

% # Check it
% ls /stale/fs

Typically this is sufficient but if it fails, you need to go to step 2.
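As an aside, lsof itself can hang on a badly stale NFS mount. If that happens, fuser can find and kill the offending processes in one step; -k sends SIGKILL and -m treats the argument as a mounted filesystem. Use it with the same care as the kill -9 above:

% # Kill everything holding the stale mount open
% fuser -km /stale/fs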

Step 2. Kill processes that have cd'ed to the partition

Look at the current working directories of all of the users. If any of them are on the partition, those processes have to be killed.

% # List the users that are cd'ed to the stale partition and kill their jobs.
% # NOTE: change /stale/fs to the path to your stale partition.
% kill -9 $( for u in $( who | awk '{print $1;}' | sort -fu ) ; do \
pwdx $(pgrep -u $u) |\
grep '/stale/fs' |\
awk -F: '{print $1;}' ; \
done)

% # umount the stale partition
% umount -f /stale/fs

% # Restart the NFS and AUTOFS services
% service nfs stop
% service autofs stop
% service nfs start
% service autofs start

% # Check it
% ls /stale/fs

Step 3. Kill all of the users

If step 2 doesn't work, then something strange is going on, but killing all of the user processes will usually fix it. That is done as follows.

% # Kill all user processes.
% for u in $( who | awk '{print $1;}' | sort -fu ) ; do \
kill -9 $(pgrep -u $u) ; \
done

% # umount the stale partition
% umount -f /stale/fs

% # Restart the NFS and AUTOFS services
% service nfs stop
% service autofs stop
% service nfs start
% service autofs start

% # Check it
% ls /stale/fs

As you can see, it is basically the same as step 2 except that all user processes are killed.

If that doesn't work, you need to resort to the nuclear option: rebooting.

Step 4. Reboot

This is the option of last resort but it should always work.

If you know of any other tips for fixing stale NFS mounts, I would really like to hear about them.

KernelCare is now available for CentOS & RHEL 7 kernels

The latest CentOS / RHEL kernels can now be patched against the privilege escalation vulnerability CVE-2014-4943. Other supported kernels were patched against it last week.

CVEs: CVE-2014-4943

Systems with AUTO_UPDATE=True (DEFAULT) in /etc/sysconfig/kcare/kcare.conf will automatically update, and no action is needed for them.

You can manually update the server by running:
# /usr/bin/kcarectl --update

Details:
CVE-2014-4943 kernel: net: pppol2tp: level handling in pppol2tp_[s,g]etsockopt()
A flaw in the Linux kernel that allows an unprivileged user to escalate to kernel privileges when CONFIG_PPPOL2TP is enabled.
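If you want to confirm the patch level on a given box, kcarectl can report the effective (patched) kernel version. The options below should work on recent kcarectl builds, but verify against kcarectl --help on your installed version:

# /usr/bin/kcarectl --uname
# /usr/bin/kcarectl --info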

List/Change kernel in CentOS 7

The following command can be used to list the kernels in CentOS 7
============================
# egrep ^menuentry /etc/grub2.cfg | cut -f 2 -d \'
Linux Server, with Linux 3.10.0-123.el7.x86_64
Linux Server, with Linux 3.10.0-123.4.4.el7.x86_64
Linux Server, with Unbreakable Enterprise Kernel 3.8.13-35.3.2.el7uek.x86_64
Linux Server, with Unbreakable Enterprise Kernel 3.8.13-35.3.1.el7uek.x86_64
Linux Server, with Linux 0-rescue-d3e0313c0f6d48a0bb72495d2x32r1
==================================

You can use the grub2-set-default command to set the default boot kernel. To set the first kernel in grub2.cfg as the default, run the command below and reboot.
===============
grub2-set-default 0
===================
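To double-check the result before rebooting, grub2-editenv shows the saved default entry (grub2-set-default works because CentOS 7 ships with GRUB_DEFAULT=saved in /etc/default/grub):
===============
# grub2-editenv list
saved_entry=0
===================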

WHAT IS STORAGE SPACES DIRECT IN WINDOWS 2016?

Windows 2016 will continue to focus on Software Defined Storage. In Windows 2012, Storage Spaces was introduced as a tool that allows pooling disk resources together to create a large and redundant pool of disk space (similar to RAID but without certain limitations, such as all disks having to be the same size). Storage Spaces could also be used in a cluster environment, as long as the storage space was based on a JBOD with direct SAS connectivity to both nodes in the cluster.

In Windows 2016 we're getting Storage Spaces Direct. This technology will allow us to pool local DAS disks from multiple servers into one pool. That's right: local disks from multiple servers in one large shared pool. The pool can be used in a failover cluster for storing your Hyper-V VMs.

Just think: you can have 3 servers, each with 3 TB of local disk space, all pooled together to create a large pool of clustered disk space. Now that's COOL!
The pool is fault tolerant, and the loss of a single server will not bring down the pool itself.

The possibilities are endless. Smaller environments will definitely be able to create clusters without purchasing expensive storage appliances, and data can be stretched to a remote site for DR scenarios. Yes, this is also totally supported.

Monit and CentOS – Solving the Error “Could not execute systemctl”

My Problem – “Error: Could not execute systemctl”

I’m using Monit 5.16 on a CentOS 7 server. Monit is monitoring some crucial services like Apache and MySQL (okay, okay, it’s MariaDB). I have a very simple service check to start with:
check process apache pidfile /var/run/httpd/httpd.pid
start program = "systemctl start httpd.service"
stop program = "systemctl stop httpd.service"
restart program = "systemctl restart httpd.service"

However, when the service stops, I receive the following error in monit’s log file:
“Error: Could not execute systemctl”

My Solution:

Super simple. So simple it’s derpy. Use an absolute path to systemctl in the service check action. So it should look like this:
check process apache pidfile /var/run/httpd/httpd.pid
start program = "/usr/bin/systemctl start httpd.service"
stop program = "/usr/bin/systemctl stop httpd.service"
restart program = "/usr/bin/systemctl restart httpd.service"
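If you are not sure where systemctl lives on your distribution, ask the shell before hard-coding the path:

command -v systemctl

On CentOS 7 this prints /usr/bin/systemctl.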

Solving NFS Mounts at Boot Time

Let's face it. NFS is a magical thing. It allows you to centralize your storage and share volumes across systems, all while maintaining sane permissions and ownership. Unfortunately, it can also be a bit of a fickle beast. Let's say you just had your volume configured and you set up the mounts. You go and run this command:
mount -t nfs 10.10.10.1:/vol1/fs1 /data

Works like a champ, you now have your data partition mounted over NFS. So you add this line to your /etc/fstab and make it mount automagically.
10.10.10.1:/vol1/fs1 /data nfs defaults 0 0

A few weeks go by and a kernel update comes along. No big deal: you apply the updates and reboot during your next maintenance window. Then you start to see applications failing and notice the volume isn't actually mounted. This is an unfortunate result of the automounter subsystem.

It's like this. At boot time the root partition gets mounted, automounter reads the /etc/fstab file, and mounts any filesystem that doesn't have noauto as a mount option. We're still very early in the boot process, so the network isn't up yet, and naturally any network filesystems fail. The real problem here is that at no point does automounter go back and attempt to remount those filesystems. So your NFS mount points fail because there is no network, and done is done.

The developers were nice enough to provide a fix for this. There exists a mount option called _netdev. If we quote directly from the man page (sourced from RHEL 6.4):

_netdev
The filesystem resides on a device that requires network access (used to prevent the system from attempting to mount these filesystems until the network has been enabled on the system).

This is awesome, and exactly what we want. So you modify your entry in fstab to look like this:
10.10.10.1:/vol1/fs1 /data nfs defaults,_netdev 0 0

You've been bitten by NFS mounting in the past, so you throw this in your test environment and reboot immediately. After the system comes up you notice a problem: your NFS volumes are still unmounted. You see, there's a bit of a hitch. Automounter followed the same procedure it did before, except this time it didn't even attempt to mount /data. The _netdev option doesn't tell the system to mount the filesystem once the network comes up; it says don't attempt to mount it at all if the network isn't up. There is still a missing piece to the puzzle. If you look at your init scripts, there is a service called netfs. In the chkconfig header of that script you can see this description:

# description: Mounts and unmounts all Network File System (NFS),
# CIFS (Lan Manager/Windows), and NCP (NetWare) mount points.

This is exactly what you need. It is a service whose sole purpose is to read your /etc/fstab and mount network filesystems. All you have to do is enable it:
chkconfig netfs on

and watch the magic happen. Now your mount boot process should look something like this:
1. Automounter reads /etc/fstab
2. Ignores /data since it has _netdev option set
3. Mounts all other filesystems
4. Finishes mount jobs and allows system to continue booting
5. Network comes up
6. Service netfs started
7. netfs reads /etc/fstab and finds an nfs filesystem
8. netfs mounts /data
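One note for newer systems: netfs is a SysV-era service and is gone on systemd-based releases. On RHEL/CentOS 7 and later, the equivalent job is handled by remote-fs.target, which you can enable the same way:

systemctl enable remote-fs.target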

What’s funny is that while I was researching this problem I never stumbled across netfs as a service. I had even gone so far as to start planning out my own custom init script that would do exactly this, except specifically for my mount points instead of generalizing. It’s nice to see that I was on the right track, but even better that the tools already existed.

REDUCE A LOGICAL VOLUME ONLINE WITHOUT ANY DATA LOSS

It’s possible to reduce the size of a logical volume without any data loss occurring.

The first step is to check the existing size of the logical volume:

[root@slave ~]# lvdisplay /dev/myvg/mylv
--- Logical volume ---
LV Path /dev/myvg/mylv
LV Name mylv
VG Name myvg
LV UUID K31i4c-mJmI-mNhJ-CvkB-c38D-7wCd-I2erTM
LV Write Access read/write
LV Creation host, time slave, 2014-10-13 20:01:22 -0400
LV Status available
# open 1
LV Size 4.00 GiB
Current LE 1024
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2
The current size is 4 GB; we would like to change it to 2 GB.

As a precaution, run fsck on the logical volume to ensure that the file system is in a consistent state. (In practice resize2fs will insist on a forced check first, i.e. e2fsck -f, and note that an ext file system must be unmounted before it can be shrunk.)

[root@slave ~]# fsck /dev/myvg/mylv
We will now resize the file system to 2 GB.

[root@slave ~]# resize2fs /dev/myvg/mylv 2G
The final step is to reduce the logical volume using lvreduce.

[root@slave ~]# lvreduce /dev/myvg/mylv -L 2G
WARNING: Reducing active and open logical volume to 2.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mylv? [y/n]: y
Reducing logical volume mylv to 2.00 GiB
Logical volume mylv successfully resized
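As an aside, newer lvm2 releases can perform the file system shrink and the LV reduce in one step via lvreduce -r (--resizefs), which avoids any chance of the two sizes getting out of sync. A sketch for next time, assuming your lvm2 version supports the flag (again, an ext file system must be unmounted to shrink):

[root@slave ~]# umount /dev/myvg/mylv
[root@slave ~]# lvreduce -r -L 2G /dev/myvg/mylv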
Verify the new logical volume size using lvdisplay.

[root@slave ~]# lvdisplay /dev/myvg/mylv
--- Logical volume ---
LV Path /dev/myvg/mylv
LV Name mylv
VG Name myvg
LV UUID K31i4c-mJmI-mNhJ-CvkB-c38D-7wCd-I2erTM
LV Write Access read/write
LV Creation host, time slave, 2014-10-13 20:01:22 -0400
LV Status available
# open 1
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2

ADDING DISK SPACE TO AN EXISTING VOLUME GROUP

In the situation where you have exhausted all of the disk space in a volume group, you can add additional disks to the volume group in order to remediate the situation.

Locate the additional disk using the fdisk -l command.

[root@slave ~]# fdisk -l
Disk /dev/sdd: 3221 MB, 3221225472 bytes, 6291456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The next step is to create an LVM partition on the above disk (/dev/sdd).

[root@slave ~]# fdisk /dev/sdd
Type in n and press enter three times.
Type in +2G and press enter.
Type in t and press enter.
Type in 8e and press enter.
Type in w and press enter (this writes the partition table and exits fdisk).
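If you would rather script this than answer fdisk's prompts, parted can create the same LVM partition non-interactively. A sketch, assuming /dev/sdd carries no data you care about (mklabel wipes any existing partition table) and mirroring the +2G size used above:

[root@slave ~]# parted -s /dev/sdd mklabel msdos
[root@slave ~]# parted -s /dev/sdd mkpart primary 1MiB 2GiB
[root@slave ~]# parted -s /dev/sdd set 1 lvm on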
Run partprobe to make the kernel aware of the disk changes.

[root@slave ~]# partprobe
Check the partition path by running fdisk -l.

[root@slave ~]# fdisk -l
Disk /dev/sdd: 3221 MB, 3221225472 bytes, 6291456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xeef01ba1
Device Boot Start End Blocks Id System
/dev/sdd1 2048 4196351 2097152 8e Linux LVM
Create the physical volume using pvcreate.

[root@slave ~]# pvcreate /dev/sdd1
Physical volume "/dev/sdd1" successfully created
We’re now going to add 2GB of space from the new /dev/sdd1 partition to the lvtestvolume volume group.

[root@slave ~]# vgextend lvtestvolume /dev/sdd1
Volume group "lvtestvolume" successfully extended
Verify the size of the volume group by running vgdisplay lvtestvolume.

[root@slave ~]# vgdisplay lvtestvolume
--- Volume group ---
VG Name lvtestvolume
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 4.99 GiB
PE Size 4.00 MiB
Total PE 1278
Alloc PE / Size 512 / 2.00 GiB
Free PE / Size 766 / 2.99 GiB
VG UUID wxeQ0N-ZboT-lN2s-CCeQ-zkbb-B24Q-Khh6NB
The disk space has now been made available to the volume group; however, the logical volume needs to be extended in order to make use of the additional space.

[root@slave ~]# lvdisplay lvtestvolume
--- Logical volume ---
LV Path /dev/lvtestvolume/data
LV Name data
VG Name lvtestvolume
LV UUID fwCnof-OoOu-8PNR-wPC2-LqBL-TQK6-DCZbiR
LV Write Access read/write
LV Creation host, time slave, 2014-10-13 19:23:50 -0400
LV Status available
# open 1
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:4
We can now use lvextend to add an additional 2 GB of space to the data logical volume in lvtestvolume.

[root@slave ~]# lvextend -L +2G /dev/lvtestvolume/data
Extending logical volume data to 4.00 GiB
Logical volume data successfully resized
We can finally verify the disk space addition using lvdisplay /dev/lvtestvolume/data or df -hk | grep /data.

[root@slave ~]# lvdisplay /dev/lvtestvolume/data
--- Logical volume ---
LV Path /dev/lvtestvolume/data
LV Name data
VG Name lvtestvolume
LV UUID fwCnof-OoOu-8PNR-wPC2-LqBL-TQK6-DCZbiR
LV Write Access read/write
LV Creation host, time slave, 2014-10-13 19:23:50 -0400
LV Status available
# open 1
LV Size 4.00 GiB
Current LE 1024
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:4
[root@slave ~]# df -hk | grep /data
/dev/mapper/lvtestvolume-data 1998672 6144 1871288 1% /data
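One caveat: notice that df still reports roughly 2 GB even though lvdisplay shows 4 GiB. lvextend grows only the logical volume, not the file system inside it. To actually use the new space, grow the file system as well; for ext3/ext4 this works online with resize2fs (for XFS you would use xfs_growfs /data instead):

[root@slave ~]# resize2fs /dev/lvtestvolume/data

Alternatively, lvextend -r -L +2G /dev/lvtestvolume/data performs both steps at once.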

MOUNTING ISO FILES WITHIN RHEL

Download the ISO file using wget.

[root@memberserver ~]# cd /tmp; wget http://cdimage.debian.org/debian-cd/7.6.0/multi-arch/iso-cd/debian-7.6.0-amd64-i386-netinst.iso
Create a directory which you will use to mount the ISO file to.

[root@memberserver ~]# mkdir /isodir
Edit /etc/fstab and add the entry below:

[root@memberserver ~]# vi /etc/fstab
/tmp/debian-7.6.0-amd64-i386-netinst.iso /isodir iso9660 defaults,loop 0 0
Run partprobe to make the kernel aware of the disk changes and finally run mount -a to mount the ISO.

[root@memberserver ~]# partprobe
[root@memberserver ~]# mount -a
mount: /dev/loop0 is write-protected, mounting read-only
[root@memberserver ~]# df -hk | grep isodir
/dev/loop0 496640 496640 0 100% /isodir
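If you only need the ISO mounted once, you can skip the fstab entry entirely and loop-mount it directly:

[root@memberserver ~]# mount -o loop,ro /tmp/debian-7.6.0-amd64-i386-netinst.iso /isodir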