IBM MQ 7.5 Developer Edition installation configuration

Download the Developer Edition here:

Download address: https://www.ibm.com/developerworks/cn/downloads/ws/wmq/

Environment: CentOS 7.4 x64

1. Preparation before installation

[root@236 ~]# mkdir mq    # create a new installation directory
[root@236 ~]# tar -xzf mqadv_dev75_linux_x86-64.tar.gz -C mq    # unpack
[root@236 ~]# ls mq
copyright MQSeriesFTAgent-7.5.0-2.x86_64.rpm MQSeriesMan-7.5.0-2.x86_64.rpm MQSeriesMsg_ko-7.5.0-2.x86_64.rpm MQSeriesSDK-7.5.0-2.x86_64.rpm
crtmqpkg MQSeriesFTBase-7.5.0-2.x86_64.rpm MQSeriesMsg_cs-7.5.0-2.x86_64.rpm MQSeriesMsg_pl-7.5.0-2.x86_64.rpm MQSeriesServer-7.5.0-2.x86_64.rpm
lap MQSeriesFTLogger-7.5.0-2.x86_64.rpm MQSeriesMsg_de-7.5.0-2.x86_64.rpm MQSeriesMsg_pt-7.5.0-2.x86_64.rpm MQSeriesXRClients-7.5.0-2.x86_64.rpm
licenses MQSeriesFTService-7.5.0-2.x86_64.rpm MQSeriesMsg_es-7.5.0-2.x86_64.rpm MQSeriesMsg_ru-7.5.0-2.x86_64.rpm MQSeriesXRService-7.5.0-2.x86_64.rpm
mqlicense.sh MQSeriesFTTools-7.5.0-2.x86_64.rpm MQSeriesMsg_fr-7.5.0-2.x86_64.rpm MQSeriesMsg_Zh_CN-7.5.0-2.x86_64.rpm PreReqs
MQSeriesAMS-7.5.0-2.x86_64.rpm MQSeriesGSKit-7.5.0-2.x86_64.rpm MQSeriesMsg_hu-7.5.0-2.x86_64.rpm MQSeriesMsg_Zh_TW-7.5.0-2.x86_64.rpm READMEs
MQSeriesClient-7.5.0-2.x86_64.rpm MQSeriesJava-7.5.0-2.x86_64.rpm MQSeriesMsg_it-7.5.0-2.x86_64.rpm MQSeriesRuntime-7.5.0-2.x86_64.rpm repackage
MQSeriesExplorer-7.5.0-2.x86_64.rpm MQSeriesJRE-7.5.0-2.x86_64.rpm MQSeriesMsg_ja-7.5.0-2.x86_64.rpm MQSeriesSamples-7.5.0-2.x86_64.rpm

Run the license script and enter 1 to accept:

./mqlicense.sh

Install MQ Server

[root@236 mq]# rpm -ivh MQSeriesRuntime-7.5.0-2.x86_64.rpm    # install MQ Runtime
Preparing... ################################# [100%]
Creating group mqm
Creating user mqm
Updating / installing...
1:MQSeriesRuntime-7.5.0-2 ################################# [100%]

[root@236 mq]# rpm -ivh MQSeriesSamples-7.5.0-2.x86_64.rpm    # install MQ Samples
Preparing... ################################# [100%]
Updating / installing...
1:MQSeriesSamples-7.5.0-2 ################################# [100%]

[root@236 mq]# rpm -ivh MQSeriesServer-7.5.0-2.x86_64.rpm    # install MQ Server
Preparing... ################################# [100%]
Updating / installing...
1:MQSeriesServer-7.5.0-2 ################################# [100%]

After the installation has completed, run the '/opt/mqm/bin/mqconfig'
command, using the 'mqm' user ID.

For example, execute the following statement when running as the 'root' user:

su mqm -c "/opt/mqm/bin/mqconfig"

The 'mqconfig' command validates that the system configuration satisfies the
requirements for WebSphere MQ, and ensures that the settings for the 'mqm'
user ID are suitably configured. Other WebSphere MQ administrators in the
'mqm' group can run this command to ensure their user limits are also
properly configured to use WebSphere MQ.

If 'mqconfig' indicates that any of the requirements have not been met,
consult the installation section within the WebSphere MQ Information Center
for details about how to configure the system and user limits.

Then, as the output suggests, run the command to check whether the environment meets the requirements.

First check: it reports that bc is missing.

[root@236 mq]# su mqm -c "/opt/mqm/bin/mqconfig"
mqconfig: Analyzing CentOS Linux release 7.4.1708 (Core) settings for
WebSphere MQ V7.5
mqconfig: The bc program was not found on this system. Please install bc
and try running mqconfig again.

Install bc

[root@236 mq]# yum install -y bc

Second check

Several checks still fail; to resolve them, refer to the documentation: https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.ins.doc/q008550_.htm

Modify kernel parameters

Edit /etc/sysctl.conf and add the following configuration:

[root@236 mq]# vim /etc/sysctl.conf

kernel.sem = 500 256000 250 1024

net.ipv4.tcp_keepalive_time = 300

fs.file-max = 524288

Apply the configuration:

[root@236 mq]# sysctl -p
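
To confirm the new values are active, you can read the parameters back (a quick sanity check using the same keys added above; the values should match what was just set):

[root@236 mq]# sysctl kernel.sem net.ipv4.tcp_keepalive_time fs.file-max
kernel.sem = 500 256000 250 1024
net.ipv4.tcp_keepalive_time = 300
fs.file-max = 524288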

Third check

Two failures remain, both related to the mqm user's file limits.

Edit limits.conf

[root@236 mq]# vim /etc/security/limits.conf

Add two lines raising the mqm user's open-file (nofile) limits, which is what the remaining checks complain about:

mqm hard nofile 10240
mqm soft nofile 10240
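
Limits in limits.conf are applied at login, so they can be verified by opening a fresh session as mqm (this assumes the mqm user has a login shell):

[root@236 mq]# su - mqm -c "ulimit -Hn; ulimit -Sn"
10240
10240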

The fourth check passes.

Modify environment variables

Since MQ is installed under /opt/mqm by default, the MQ commands are not found after the installation completes. You need to configure the PATH environment variable so the shell can find them.

[root@236 mq]# vim /etc/profile    # add the following line

PATH=/opt/mqm/bin:/opt/mqm/samp/bin:$PATH
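
To pick up the change in the current shell and confirm the commands resolve (dspmq and dspmqver ship with the packages installed above):

[root@236 mq]# source /etc/profile
[root@236 mq]# which dspmq
/opt/mqm/bin/dspmq
[root@236 mq]# dspmqver    # prints the installed MQ version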

The installation is complete

2. Start the instance

Switch to the mqm user:

[root@236 mq]# su mqm
bash-4.2$

Create a default instance

bash-4.2$ crtmqm -q oe
WebSphere MQ queue manager created.
Directory '/var/mqm/qmgrs/oe' created.
The queue manager is associated with installation 'Installation1'.
Creating or replacing default objects for queue manager 'oe'.
Default objects statistics : 74 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.

View the instance; its status is 'Ended immediately':

bash-4.2$ dspmq
QMNAME(oe) STATUS(Ended immediately)

Start instance

bash-4.2$ strmqm oe
WebSphere MQ queue manager 'oe' starting.
The queue manager is associated with installation 'Installation1'.
5 log records accessed on queue manager 'oe' during the log replay phase.
Log replay for queue manager 'oe' complete.
Transaction manager state recovered for queue manager 'oe'.
WebSphere MQ queue manager 'oe' started using V7.5.0.2.

Create a queue named test

bash-4.2$ runmqsc oe
5724-H72 (C) Copyright IBM Corp. 1994, 2011. ALL RIGHTS RESERVED.
Starting MQSC for queue manager oe.

define qlocal(test)
1 : define qlocal(test)
AMQ8006: WebSphere MQ queue created.
end
2 : end
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.

Send a test message; MQOPEN fails with reason code 2085:

bash-4.2$ amqsput Test oe
Sample AMQSPUT0 start
target queue is Test
MQOPEN ended with reason code 2085
unable to open queue for output
Sample AMQSPUT0 end

It turned out that the queue name could not be lowercase: unquoted object names in MQSC are folded to uppercase, so the queue was actually created as TEST. It is recommended to define queue names in uppercase. Resend the test message, pressing Enter twice to end the input.

bash-4.2$ amqsput TEST oe
Sample AMQSPUT0 start
target queue is TEST
hello world!

Sample AMQSPUT0 end

Receive the message; it succeeds:

bash-4.2$ amqsget TEST oe
Sample AMQSGET0 start
message <hello world!>

Start a TCP listener on port 2424:

bash-4.2$ runmqlsr -t tcp -p 2424 -m oe &
[1] 5067
bash-4.2$ 5724-H72 (C) Copyright IBM Corp. 1994, 2011. ALL RIGHTS RESERVED.

Verify it is listening:

bash-4.2$ netstat -tpln | grep 2424
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6 0 0 :::2424 :::* LISTEN 5067/runmqlsr
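
With the listener up, remote clients can connect once a server-connection channel exists. A minimal sketch (the channel name here is just an example; on newer MQ releases CHLAUTH rules may additionally need to be configured before clients are let in):

bash-4.2$ runmqsc oe <<'EOF'
DEFINE CHANNEL(TEST.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP)
EOF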

Docker on CentOS

[root@ docker]# cat /etc/yum.repos.d/docker-main.repo
[docker-main-repo]
name=Docker main Repository
baseurl=https://get.daocloud.io/docker/yum-repo/main/CentOS/7
enabled=1
gpgcheck=1
gpgkey=https://get.daocloud.io/docker/yum/gpg

[root@ docker]# yum install -y docker-engine

[root@ docker]# docker --version
Docker version 17.05.0-ce, build 89658be

firewall-cmd --add-port=2377/tcp --permanent

firewall-cmd --add-port=7946/tcp --permanent

firewall-cmd --add-port=7946/udp --permanent

firewall-cmd --add-port=4789/udp --permanent

firewall-cmd --reload
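
These are the ports swarm mode needs: 2377/tcp for cluster management, 7946/tcp and 7946/udp for node communication, and 4789/udp for overlay network traffic. To confirm they are open:

firewall-cmd --list-ports
2377/tcp 7946/tcp 7946/udp 4789/udp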

[root@ docker]# cat /etc/docker/daemon.json
{
  "insecure-registries": ["docker-registry.rmohan.com:5000"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "1024m", "max-file": "2"}
}

[root@ ~]# systemctl restart docker.service

[root@master ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

docker swarm join \
    --token SWMTKN-1-4p4djbee1kqcss8x5prfzg6v01x0y7hfa7rqob6rffg6e2p2wq-278qafb9ptpqtfebr0kyngi0b \
    192.168.191.6:2377

[root@filters ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
0y4ga216x49nd9zt821afe45p * mohan_191_6 Ready Active Reachable
56mgklq3iyf0ajioytmtr0b44 mohan_191_10 Ready Active
b5mljua4e0tchrrk871ggbw7r mohan_191_7 Ready Active Reachable
gxm1gvh8lx2eja36a4cms9w98 mohan_182_212 Ready Active
hgweoxc5vmy9g30uyozmbumhj mohan_182_213 Ready Active
lluom23ugtfuiwphcye2frbmd mohan_182_215 Ready Active
om0ezdbqzdvavn8z8d844osbo mohan_182_214 Ready Active
t00tu1qs7wbll2fg2accsx7wf mohan_191_4 Ready Active Leader
tk89xphvsg16rz7b7l6lam4xv mohan_191_9 Ready Active
xbcx4hx2lq3ji8zog2ayzctat mohan_191_8 Ready Active Reachable
y3pnrw9jt69vkfiuxrs0co7ke mohan_191_5 Ready Active Reachable

Syntax: docker swarm join --token <manager-token> <manager-ip>:<port>

[root@dockernode ~]# docker swarm join --token SWMTKN-1-4p4djbee1kqcss8x5prfzg6v01x0y7hfa7rqob6rffg6e2p2wq-278qafb9ptpqtfebr0kyngi0b 192.168.191.6:2377
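
Once the nodes have joined, a quick way to exercise the swarm is to deploy a small test service from a manager node (nginx here is just an example image):

[root@master ~]# docker service create --name web --replicas 3 -p 80:80 nginx
[root@master ~]# docker service ls
[root@master ~]# docker service ps web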


MySQL: Calculate the free space in IBD files

If you use MySQL with InnoDB, chances are you’ve seen growing IBD data files. Those are the files that actually hold your data within MySQL. By default, they only grow — they don’t shrink. So how do you know if you still have free space left in your IBD files?

There’s a query you can use to determine that:

SELECT round((data_length+index_length)/1024/1024,2)
FROM information_schema.tables
WHERE
  table_schema='zabbix'
  AND table_name='history_text';

The above will check a database called zabbix for a table called history_text. The result is the size, in MB, that MySQL has "in use" in that file; a value of 5,000 means you have 5GB of data in there.
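
information_schema also exposes a data_free column, which estimates how much of the tablespace is reclaimable. A sketch of the same check extended with free space (same example schema and table as above):

SELECT
  round((data_length+index_length)/1024/1024,2) AS used_mb,
  round(data_free/1024/1024,2) AS free_mb
FROM information_schema.tables
WHERE
  table_schema='zabbix'
  AND table_name='history_text';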

In my example, it showed the data size to be 16GB, but the actual IBD file was over 50GB.

$ ls -alh history_text.ibd
-rw-r----- 1 mysql mysql 52G Sep 10 15:26 history_text.ibd

In this example I had 36GB of wasted space on disk (52GB according to the OS, 16GB in use by MySQL). If you run MySQL with innodb_file_per_table=ON, you can shrink the IBD files individually. One way is to run an OPTIMIZE query on that table.

Note: this can be a blocking operation, depending on your MySQL version. WRITE and READ I/O can be blocked to the table for the duration of the OPTIMIZE query.

MariaDB [zabbix]> OPTIMIZE TABLE history_text;
Stage: 1 of 1 'altering table'   93.7% of stage done
Stage: 1 of 1 'altering table'    100% of stage done

+---------------------+----------+----------+-------------------------------------------------------------------+
| Table               | Op       | Msg_type | Msg_text                                                          |
+---------------------+----------+----------+-------------------------------------------------------------------+
| zabbix.history_text | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| zabbix.history_text | optimize | status   | OK                                                                |
+---------------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (55 min 37.37 sec)

The result is quite a big file size savings:

$ ls -alh history_text.ibd
-rw-rw---- 1 mysql mysql 11G Sep 10 16:27 history_text.ibd

The file that was previously 52GB in size, is now just 11GB.

Apache 2.4 AH01762 & AH01760: failed to initialize shm (Shared Memory Segment)

Mattias Geniar, Tuesday, January 12, 2016

I recently ran into the following problem on an Apache 2.4 server, where after server reboot the service itself would no longer start.

This was the error whenever I tried to start it:

$ tail -f error.log
[auth_digest:error] [pid 11716] (2)No such file or directory:
   AH01762: Failed to create shared memory segment on file /run/httpd/authdigest_shm.11716
[auth_digest:error] [pid 11716] (2)No such file or directory:
   AH01760: failed to initialize shm - all nonce-count checking, one-time nonces,
   and MD5-sess algorithm disabled

Systemd reported the same problem:

$ systemctl status -l httpd.service
 - httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since ...

The cause is shown in the first error message: Failed to create shared memory segment on file /run/httpd/authdigest_shm.11716.

Tracing this, I noticed that the directory /run/httpd no longer existed. The simple fix in this case was to re-create the missing directory.

$ mkdir /run/httpd
$ chown root:httpd /run/httpd
$ chmod 0710 /run/httpd

The directory should be owned by root and writeable by the root user. The Apache group (in my case, httpd) needs execute rights to look into the directory.
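
Since /run is a tmpfs on systemd machines, a directory created by hand disappears again at the next reboot. A tmpfiles.d entry can recreate it at boot; a minimal sketch, assuming the same path, owner and group as above:

$ cat > /etc/tmpfiles.d/httpd.conf <<'EOF'
d /run/httpd 0710 root httpd -
EOF
$ systemd-tmpfiles --create /etc/tmpfiles.d/httpd.conf    # apply immediately, without a reboot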

Adding A Custom Script To systemd

systemd is a dependency-driven facility which allows non-dependent subsystems to be started, controlled, or stopped in parallel. Here we explain how to add a custom script to the systemd facility.

1. Write And Debug The Custom Script

Typically a systemd script is written as a shell script. Begin by writing your custom script using the normal conventions. We will call our script my-custom-script.sh; it is straightforward:

#!/bin/sh
echo "I am a custom script" > /var/tmp/script.out
echo "The script was run at : `date`" >> /var/tmp/script.out

The script must be executable.

# chmod 0755 /var/tmp/my-custom-script.sh

2. Describe The Custom Script To systemd

With the script written and tested manually, the script is ready to be described to the systemd system. To do this, a [name].service file is needed. The syntax uses the INI format commonly used for configuration files. Continuing our example, we need a my-custom-script.service file. The executable will run exactly once for each time the service is started. The service will not be started until the networking layer is up and stable.

Create a new service unit file at /etc/systemd/system/my-custom-script.service with below content. The name of the service unit is user defined and can be any name of your choice.

# This is my-custom-script.service, which describes the my-custom-script.sh file
[Unit]
Description=This is executed on shutdown or reboot
DefaultDependencies=no
# Wants= is only needed if the network is required before running the script
Wants=network-pre.target
# Before= defines the order in which units are stopped (REQUIRED)
Before=network-pre.target shutdown.target reboot.target halt.target

[Service]
# oneshot enables specifying multiple custom commands that are then executed sequentially (REQUIRED)
Type=oneshot
# required by the oneshot setting (REQUIRED)
RemainAfterExit=true
# environment variables that may be necessary to pass as arguments
Environment=ONE='one' TWO='2'
# because this is a shutdown script, nothing is done when the service is started
ExecStart=/bin/true
# change to the script's full path (REQUIRED)
ExecStop=/bin/bash /var/tmp/my-custom-script.sh ${ONE} ${TWO}
# configures the time to wait for stop
TimeoutStopSec=1min 35s

[Install]
# when this unit is enabled, the units listed in WantedBy gain a Want dependency on it (REQUIRED)
WantedBy=multi-user.target

3. Enable The Script For Future Reboots

Similar to chkconfig in earlier releases, the service must be enabled. Since a new service unit was added, first notify the systemd daemon to reconfigure itself, then enable the service:

# systemctl daemon-reload
# systemctl enable my-custom-script.service
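
To test without rebooting, start and then stop the service by hand and check the output file (the stop step is what executes the script, since it is wired to ExecStop):

# systemctl start my-custom-script.service
# systemctl stop my-custom-script.service
# cat /var/tmp/script.out
I am a custom script
The script was run at : <date>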

Time difference

res1=$(date +%s.%N)
sleep 1
res2=$(date +%s.%N)
echo "Start time: $res1"
echo "Stop time:  $res2"
echo "Elapsed:    $(echo "$res2 - $res1"|bc )"

printf "Elapsed:    %.3f\n"  $(echo "$res2 - $res1"|bc )
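
If bc is not available, the same measurement can be done in integer milliseconds (a sketch relying on GNU date's %N extension):

res1=$(date +%s%3N)    # epoch time in milliseconds
sleep 1
res2=$(date +%s%3N)
echo "Elapsed: $(( res2 - res1 )) ms"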

ELASTICSEARCH: LISTEN ALL NETWORK INTERFACES ON CENTOS 7

By default, Elasticsearch listens only on localhost.

# netstat -na|grep LISTEN |grep 9200
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN
tcp6       0      0 ::1:9200                :::*                    LISTEN       

If you want to access it over the network, you need to edit the network.host parameter in the /etc/elasticsearch/elasticsearch.yml file.

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#

Uncomment network.host and set your IP address, or use 0.0.0.0 to listen on all interfaces:

 network.host: 0.0.0.0

and restart elasticsearch

# systemctl restart elasticsearch 

 

# netstat -na|grep LISTEN |grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN

curl http://192.168.1.112:9200
{
  "name" : "Phantom Eagle",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "k9tOhsoyTrOnvR-QpUpHxA",
  "version" : {
    "number" : "2.4.1",
    "build_hash" : "c67dc32e24162035d18d6fe1e952c4cbcbe79d16",
    "build_timestamp" : "2016-09-27T18:57:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}

In this case, your Elasticsearch will be accessible from the network without any restriction, so you should enable IP-based filtering/firewalling or user authentication.
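
With firewalld, for example, access to port 9200 can be limited to a trusted subnet (the 192.168.1.0/24 network here is just an example):

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port protocol="tcp" port="9200" accept'
# firewall-cmd --reload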

Hotlink Protection

Enable Hotlink Protection on Apache

If your WordPress site is running on Apache, all you need to do is open the .htaccess file in your site’s root directory (or create it) and add the following:

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?yourdomain.com [NC]
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google.com [NC]
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?bing.com [NC]
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?yahoo.com [NC]
RewriteRule \.(jpg|jpeg|png|gif|svg)$ http://dropbox.com/hotlink-placeholder.jpg [NC,R,L]

The second line allows blank referrers. You will most likely want to enable this as some visitors use a personal firewall or antivirus program that deletes the page referrer information sent by the web browser. If you don’t allow blank referrers, you could inadvertently disable all of your images for those users.

The third line defines the allowed referrer: the site that is allowed to link to the images directly. This should be your own website (update yourdomain.com above with your domain). The fourth, fifth, and sixth lines add search engines to the allowed list, because you don't want to block crawlers such as Google bot or Bing bot. That could prevent your images from showing and indexing in Google image search.

And the seventh line defines the image you want the visitor to see in place of the hotlink-protected image. This is not required, but you could give them a friendly warning. If you want to allow multiple sites, you can duplicate this row and replace the referrer. If you want to generate some more complex rules, take a look at an htaccess hotlink protection generator.

If you are using the above rules along with a CDN, you might also need to whitelist your CDN subdomain.
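
You can verify the rules with curl by sending a forged referrer (curl's -e flag sets the Referer header; yourdomain.com is the placeholder from above):

$ curl -I -e "https://some-other-site.example/" https://yourdomain.com/image.jpg    # should redirect to the placeholder image
$ curl -I -e "https://yourdomain.com/" https://yourdomain.com/image.jpg             # should return 200 OK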

Enable Hotlink Protection on NGINX

If you are running on NGINX, all you need to do is open your config file and add the following:

location ~ \.(gif|png|jpeg|jpg|svg)$ {
    valid_referers none blocked ~\.google\. ~\.bing\. ~\.yahoo\. yourdomain.com *.yourdomain.com;
    if ($invalid_referer) {
        return 403;
    }
}

If you are using the above rules along with a CDN, you might also need to whitelist your CDN subdomain.

Implementing Content Security Policy in Apache

Header unset Content-Security-Policy
Header add Content-Security-Policy "default-src 'self'"
Header unset X-Content-Security-Policy
Header add X-Content-Security-Policy "default-src 'self'"
Header unset X-WebKit-CSP
Header add X-WebKit-CSP "default-src 'self'"

You may also be interested in adding those headers:

Header set X-Content-Type-Options "nosniff"
Header set X-XSS-Protection "1; mode=block"
Header set X-Frame-Options "DENY"
Header set Strict-Transport-Security "max-age=631138519; includeSubDomains"

Along with SQL injection attacks, cross-site scripting (XSS) attacks are some of the more common to be used when attacking a website. Cross-site scripting attacks are a kind of hack where the attacker manages to inject a piece of code, normally in the form of Javascript, into a website where it is executed by another user.

Originally this just covered fooling a website into loading a javascript file from another website, but these days it can also include other content files, or malicious fonts and images, that may be executed by either the end user's computer or the server itself. The purpose of this form of attack is often to infect other users with malware, gain access to their account information, or gain deeper access to the system when run by an administrator account.

Content Security Policy

To help prevent cross-site scripting attacks, the idea of the Content Security Policy was devised. While the first version of CSP was only published in 2012, it has a history running back to 2004 with attempts to resolve this issue. CSP version 2 is the current version of the standard and is supported by both Chrome and Firefox, while Safari and Edge only support version 1. It works by having the web server send a special header to the web browser identifying that the server implements a content security policy. It then dictates from where the browser should load things like stylesheets, script files, images and fonts. The web browser should then reference this information when loading the HTML code for the site and fail to load any files that aren't allowed by the policy.

While this won't render all XSS-style attacks impossible, it will (when implemented well) prevent all XSS attacks that involve tricking the browser into loading malicious files from external websites. Implementing CSP is as simple as placing a few lines of configuration in your web server configuration. When running Apache you can place this code in the virtualhost configuration for your website, or in a .htaccess file for the directory your website resides within. For anyone running a website on a dedicated server or VPS, the virtualhost configuration method is recommended, whilst the .htaccess file method should only be needed if your website is on shared web hosting.

How To Implement CSP

At this point I’m going to be assuming you know how to edit your virtualhost configuration or create a .htaccess file for this purpose. If not then we’ve previously provided guides explaining both that you could use for reference. So prepare your file and add the following directive:

Header set Content-Security-Policy "default-src 'self';"

This is about the simplest set-up that you can have, and it informs the browser that the only content it should be allowing for your site is content loaded from your own domain. The Content-Security-Policy header can be broken down into a number of directives, each starting with the directive type and then providing the sources for use with the directive. The default-src directive applies to all forms of content such as images, CSS, Javascript, AJAX requests, frame contents, fonts and media content. There are separate -src directives for each type of file, such as script-src and img-src. A full list of the directive types and their potential source declarations is available at the Content Security Policy site here: https://content-security-policy.com/

Multiple Directives

Each directive can have numerous sources applied to it, and you can use multiple directives in the policy separated by semicolons. This allows for both strict and comprehensive settings for the policy. So let's imagine a more complex example: a blog that may link to images from across the internet, uses Javascript from its own domain, jquery from Google's CDN and Google Analytics, and only uses its own CSS. This could be handled with a header similar to the one below:

Header set Content-Security-Policy "default-src 'none'; script-src 'self' www.google-analytics.com ajax.googleapis.com; img-src *; style-src 'self';"

By including default-src 'none' in the directives, the browser will block all external files that aren't explicitly allowed later in the Content-Security-Policy header. The img-src directive uses an asterisk (*) as its source definition to indicate that images may be loaded from any domain. Hopefully the rest of it is fairly straightforward.
Once you've created your Content-Security-Policy header you can save your file, and if you've included the directive within your virtualhost declaration rather than in a .htaccess file, don't forget to reload the Apache configuration for your changes to take effect.
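
Before enforcing a policy, it can be worth running it in report-only mode for a while: the browser logs violations without blocking anything. A minimal sketch (the /csp-report endpoint is a placeholder you would have to implement yourself):

Header set Content-Security-Policy-Report-Only "default-src 'self'; report-uri /csp-report"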

HTTP Strict Transport Security for Apache, NGINX and Lighttpd

HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature that lets a web site tell browsers that it should only be communicated with using HTTPS, instead of HTTP. This tutorial will show you how to set up HSTS in Apache2, NGINX and Lighttpd. It has been tested with all the mentioned webservers: NGINX 1.1.19, Lighttpd 1.4.28 and Apache 2.2.22 on Ubuntu 12.04, Debian 6 & 7 and CentOS 6. It should work on other distros too; these are just reference values.

What is HTTP Strict Transport Security?

Quoting the Mozilla Developer Network:

If a web site accepts a connection through HTTP and redirects to HTTPS, the user in this case may initially talk to the non-encrypted version of the site before being redirected, if, for example, the user types http://www.foo.com/ or even just foo.com.

This opens up the potential for a man-in-the-middle attack, where the redirect could be exploited to direct a user to a malicious site instead of the secure version of the original page.

The HTTP Strict Transport Security feature lets a web site inform the browser that it should never load the site using HTTP, and should automatically convert all attempts to access the site using HTTP to HTTPS requests instead.

An example scenario:

You log into a free WiFi access point at an airport and start surfing the web, visiting your online banking service to check your balance and pay a couple of bills. Unfortunately, the access point you're using is actually a hacker's laptop, and they're intercepting your original HTTP request and redirecting you to a clone of your bank's site instead of the real thing. Now your private data is exposed to the hacker.

Strict Transport Security resolves this problem; as long as you've accessed your bank's web site once using HTTPS, and the bank's web site uses Strict Transport Security, your browser will know to automatically use only HTTPS, which prevents hackers from performing this sort of man-in-the-middle attack.

Do note that HSTS does not work if you’ve never visited the website before. A website needs to tell you it is HTTPS only.

Important regarding preload

In the configuration below, the preload directive was originally used. As requested by Lucas Garron from Google, I removed it, since most people seem to screw it up.

Please note that THE PRELOAD DIRECTIVE HAS SEMI-PERMANENT CONSEQUENCES. If you are testing, screw up, or don't want to use HSTS anymore, you might end up on the preload list.

It is important to understand what you are doing, and that the preload directive means your site will end up hard-coded in browsers. If your HTTPS configuration is wrong or broken, or you don't want to use HTTPS anymore, you will experience problems. See this page as well.

If you still want to use preload, just append it to the header after the semi-colon.

Set up HSTS in Apache2

Edit your apache configuration file (/etc/apache2/sites-enabled/website.conf and /etc/apache2/httpd.conf for example) and add the following to your VirtualHost:

# Optionally load the headers module:
LoadModule headers_module modules/mod_headers.so

<VirtualHost 67.89.123.45:443>
    Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains;"
</VirtualHost>

Now your website will set the header every time someone visits, with an expiration date of two years (in seconds). It sets it at every visit. So tomorrow, it will say two years again.
You do have to set it on the HTTPS vhost only. It cannot be in the http vhost.

To redirect your visitors to the HTTPS version of your website, use the following configuration:

<VirtualHost *:80>
  [...]
  ServerName example.com
  Redirect permanent / https://example.com/
</VirtualHost>

If you only redirect, you don't even need a document root.

You can also use mod_rewrite; however, the method above is simpler and safer. Note that the mod_rewrite config below redirects the user to the exact page they were visiting over HTTPS, whereas the config above just redirects to /:

<VirtualHost *:80>
  [...]
  <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
  </IfModule>
</VirtualHost>

And don’t forget to restart Apache.
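
You can verify that the header is being sent with curl (example.com being the placeholder domain from above):

$ curl -sI https://example.com/ | grep -i strict-transport-security
Strict-Transport-Security: max-age=63072000; includeSubdomains;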

Lighttpd

The lighttpd variant is just as simple. Add it to your Lighttpd configuration file (/etc/lighttpd/lighttpd.conf for example):

server.modules += ( "mod_setenv" )
$HTTP["scheme"] == "https" {
    setenv.add-response-header  = ( "Strict-Transport-Security" => "max-age=63072000; includeSubdomains; ")
}

And restart Lighttpd. Here the time is also two years.

NGINX

NGINX is even shorter with its config. Add this in the server block for your HTTPS configuration:

add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; ";

Don’t forget to restart NGINX.

X-Frame-Options header

The last tip I'll give you is the X-Frame-Options header, which you can add to your HTTPS website to make sure it is not embedded in a frame or iframe. This avoids clickjacking, and might be helpful for HTTPS websites. Quoting the Mozilla Developer Network again:

The X-Frame-Options HTTP response header can be used to indicate whether or not a browser should be allowed to render a page in a `<frame>` or `<iframe>`. Sites can use this to avoid clickjacking attacks, by ensuring that their content is not embedded into other sites.

You can change DENY to SAMEORIGIN or ALLOW-FROM uri, see the Mozilla link above for more information on that. (Or the RFC.)

X-Frame-Options for Apache2

As above, add this to the apache config file:

Header always set X-Frame-Options DENY

Lighttpd

This goes in the lighttpd config. Make sure you don't duplicate the setenv config from above; if you already have it, just add this rule to it.

server.modules += ( "mod_setenv" )
$HTTP["scheme"] == "https" {
    setenv.add-response-header  = ( "X-Frame-Options" => "DENY")
}

NGINX

Yet again, in a server block:

add_header X-Frame-Options "DENY";

Detailed Docker container common operations

First, start the container

There are two ways to start a container: create a new container from an image and start it, or restart a container that is in the terminated state.
Because Docker containers are so lightweight, users often delete and recreate them at any time.

New and start

For example, the following command outputs a “Hello World” and then terminates the container.

$ docker run ubuntu:14.04 /bin/echo 'Hello world'
Hello world

This is almost indistinguishable from directly executing /bin/echo 'hello world' locally.

The following command launches a bash terminal that allows the user to interact.

$ docker run -t -i ubuntu:14.04 /bin/bash 
root@af8bae53bdd3:/#

The -t option causes Docker to assign a pseudo-tty and bind to the container’s standard input, and -i keeps the container’s standard input open.

When using docker run to create containers, the standard operations that Docker runs in the background include:

  • Check whether the specified image exists locally; if it does not, download it from the public repository.
  • Create and start a container from the image.
  • Allocate a filesystem and mount a readable-writable layer on top of the read-only image layer.
  • Bridge a virtual interface into the container from the bridge interface configured on the host.
  • Assign the container an IP address from the address pool.
  • Execute the user-specified application.
  • Terminate the container after execution.

Starting a terminated container 
You can use the docker container start command to start a container that has been terminated.

Second, run in the background (daemonized)

More often, you need to have Docker run in the background instead of directly outputting the results of the execution command under the current host. This can be done by adding the -d parameter.

$ docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
cb30b87566d0550ec5f1232d148c5ffed6546c347889e58a6405579f2af73f2a

A unique ID is returned when starting with the -d parameter. The output can be viewed with docker logs [container ID or NAMES]. If you do not use the -d parameter, the output (STDOUT) is printed on the host.

View container information with the docker container ls command.

$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb30b87566d0 ubuntu "/bin/sh -c 'while t…" 2 minutes ago Up 2 minutes goofy_mcclintock

To get the output of the container, you can use the docker container logs command.

$ docker container logs goofy_mcclintock 
hello world 
hello world 
hello world 
……

Note: Whether the container will run for a long time is related to the command specified by docker run, regardless of the -d parameter.

Third, terminate the container

You can use the docker container stop to terminate a running container. The format is: 
docker container stop [options] CONTAINER [CONTAINER…]

In addition, when the application specified in the Docker container terminates, the container terminates automatically as well.
For example, if you start a container that only runs a terminal, the container terminates as soon as the user exits the terminal via the exit command or Ctrl+d.

Containers in the terminated state can be seen with the docker container ls -a command, e.g.:

$ docker container stop goofy_mcclintock
goofy_mcclintock

$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb30b87566d0 ubuntu "/bin/sh -c 'while t…" 20 minutes ago Exited (137) 23 seconds ago goofy_mcclintock

A container in the terminated state can be started with the docker container start command; a running container can be terminated and restarted with the docker container restart command.

Fourth, enter the container

When the -d parameter is used, the container will enter the background after it starts. 
Use the docker attach command or the docker exec command to enter the container. It is recommended to use the docker exec command for reasons explained below.

The attach command 
docker attach is a command that comes with Docker. The following example shows how to use this command.

$ docker run -dit ubuntu 
e1ffd4f792fe0ce7f7e700147051e1f792e352f5b70929eb9376393ac20114b4

$ docker container ls 
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
e1ffd4f792fe ubuntu “/bin/bash” About a minute ago Up About a minute awesome_payne

$ docker attach e1ff 
root@e1ffd4f792fe:/#

Note: exiting from this stdin will cause the container to stop.

The exec command
-i and -t parameters
docker exec can be followed by multiple parameters; the main ones here are -i and -t.
With only the -i parameter, no pseudo-terminal is allocated, so there is no familiar Linux command prompt, but the results of commands are still returned.
When -i and -t are used together, you see the Linux command prompt we are familiar with.

$ docker run -dit ubuntu 
16168d4b66b115b5afac5836db3ff93304774e98489f628ac625fff2bcd640ba

$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
16168d4b66b1 ubuntu "/bin/bash" 58 seconds ago Up 57 seconds happy_bardeen

$ docker exec -it 16168 bash 
root@16168d4b66b1:/#

Exiting from this stdin will not cause the container to stop; that is why docker exec is recommended.
For more parameter descriptions, please use docker exec --help.

Fifth, delete the container

Delete a container that is in a terminated state in the format: 
docker container rm [options] CONTAINER [CONTAINER…]

$ docker container rm awesome_payne 
awesome_payne

If you want to delete a running container, you can add the -f parameter. Docker will send a SIGKILL signal to the container.

To clean up all containers in the terminated state: docker container ls -a lists all containers that have been created, including terminated ones. If there are many, deleting them one by one can be cumbersome; the following command clears all containers in the terminated state.

$ docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
545f8f6d19286efae28307d06ed1acc034d07f109e907c01892471a6f89e772d
cb30b87566d0550ec5f1232d148c5ffed6546c347889e58a6405579f2af73f2a
……

Export and import containers

Exporting a container 
If you want to export a local container, you can use the docker export command.

$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
16168d4b66b1 ubuntu "/bin/bash" 18 minutes ago Up 18 minutes happy_bardeen

$ docker export 16168d4b66b1 > ubuntu.tar

This will export the container snapshot to a local file.

Importing container snapshots
A container snapshot file can be imported as an image using docker import, for example:

$ cat ubuntu.tar | docker import - test/ubuntu:v1.0
sha256:91b174fec9ed55d7ebc3d2556499713705f40713458e8594efa114f261d7369a

$ docker image ls 
REPOSITORY TAG IMAGE ID CREATED SIZE 
test/ubuntu v1.0 91b174fec9ed 10 seconds ago 69.8MB 
ubuntu latest 735f80812f90 3 weeks ago 83.5MB

Alternatively, you can import from a URL or a directory, for example:
$ docker import http://example.com/exampleimage.tgz example/imagerepo

Note: users can either use docker load to import an image storage file into the local image library, or docker import to import a container snapshot into the local image library. The difference is that the container snapshot file discards all history and metadata (it keeps only the state of the container at the moment of the snapshot), whereas the image storage file keeps the complete record and is therefore larger. In addition, metadata such as tags can be reassigned when importing from a container snapshot file.
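
To make the contrast concrete, here is a sketch of the image-storage path mentioned in the note (docker save / docker load, using the ubuntu image from this example):

$ docker save -o ubuntu-image.tar ubuntu:latest    # full image, history and metadata included
$ docker load -i ubuntu-image.tar                  # restores the image, tags and all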