AWS – Amazon Web Services – Concepts

AWS: Amazon Web Services is a cloud service provider, also known as Infrastructure as a Service (IaaS).
– Storage, Computing Power, Databases, Networking, Analytics, Developer Tools, Virtualization, Security.

Major Terminology/Reason/Advantages:
####################################
– High Availability
– Fault Tolerance
– Scalability (grow automatically/dynamically)
– Elasticity (shrink automatically/dynamically)

– Instance (Server)

Services:
########

VPCs: Virtual Private Cloud
*****************************
It is your private section of AWS, where you can place AWS Resources, and allow/restrict access to them.

EC2 (compute power): Elastic Compute Cloud
*******************************************
It is a virtual instance/server/computer that you can use for whatever you like.
ex: a common use is web hosting

EC2- Part#2:
************
It is good for any type of “processing” activity.
ex: in Netflix, video stream encoding and transcoding happen on EC2 instances (the stream is loaded from S3)

Amazon RDS:
***********
It is the AWS-provisioned relational database service. Commonly used for things like storing customer account information and cataloging inventory.

AWS S3:
*******
It is a massive/long-term object storage service (storage buckets)

 

AWS – Essentials:

 

IAM: Identity & Access Management
**********************************
It is where you manage your AWS users and their access to the AWS account and services.

Common use:
Users
Group
IAM Access policies
Roles

The user created when you created the AWS account is called the “root” user.

By default, root user has FULL administrative rights and access to every part of AWS

By default, any newly created user has no access to any AWS service (except the ability to log in). Permissions must be granted explicitly.

Best Practice: Security Status should be green for all configurations.
***********************************************************************

Activate MFA: Multi-Factor Authentication – similar to an RSA token (available as a virtual or hardware fob)
***************************************************************************************

Create individual IAM users:
*****************************
– As per best practice, the root user should not be used for day-to-day work; even administrators should use individual IAM users.

User groups to assign permission:
*********************************
– Create custom groups (e.g., an admin group) and assign permissions at the group level
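
As a minimal sketch of how users and groups might be set up from the AWS CLI (the user name "alice" and group name "admins" are hypothetical examples, and the CLI is assumed to be configured with sufficient permissions):

# create an individual IAM user (example name)
aws iam create-user --user-name alice

# create a custom group and attach the AWS-managed AdministratorAccess policy to it
aws iam create-group --group-name admins
aws iam attach-group-policy --group-name admins --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# add the user to the group so permissions are granted at the group level
aws iam add-user-to-group --user-name alice --group-name admins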

VPC – Virtual Private Clouds
****************************

Global Infrastructure:
*********************
AWS Regions:
Availability Zones – physical data centers (multiple Availability Zones provide multiple backups, redundancy, high availability and fault tolerance)

VPC Basics: when you create an AWS account, a default VPC is created for you and includes the following standard components:
************************************************************************************************************************
(1) Internet Gateway (IGW) – a VPC can have only one IGW, and it cannot be detached while active AWS resources exist in the VPC.
– it is a horizontally scaled, redundant and highly available VPC component
– Allow communication between instances in your VPC and the internet

Rules/Details for Internet Gateways:
– Only 1 IGW can be attached to a VPC at a time
– An IGW cannot be detached from a VPC while there are active AWS resources in the VPC (such as EC2 instances, RDS databases, etc.)

(2) A Route Table (with predefined routes to the default subnets)
– It contains a set of rules, called routes, that are used to determine where network traffic is directed.
– The default VPC already has a ‘main’ route table.

Rules/Details for Route Tables:
– Unlike an IGW, you can have multiple route tables in a VPC
– You cannot delete a route table if it has dependencies (associated subnets)

(3) A network access control list (NACL) (with predefined rules for access)
– it is an optional layer of security for the VPC that acts as a firewall for controlling traffic in and out of one or more subnets.

– The default VPC already has a NACL in place, associated with the default subnets.

Rules/Details for NACL:
– Rules are evaluated lowest to highest based on rule number.
– The first rule found that applies to the traffic type is applied immediately, regardless of any higher-numbered rule that comes after it
– The default NACL allows all traffic to the default subnets
– Any newly created NACL denies all traffic by default
– A subnet can be only associated with ONE NACL at a time.

(4) Subnet to provision AWS resources in (such as EC2 instances)
– A subnet (sub-network) is a sub-section of the network.

Rules/Details for subnets:
– A subnet must be associated with a route table
– A public subnet has a route to the internet
– A private subnet does not have a route to the internet
– A subnet is located in a specific Availability Zone.

Simple Storage Service (S3)
***************************
– An online, bulk storage service that you can access from almost any device

– It has a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.
It gives any user access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run
its own global network of websites. The service aims to maximize the benefits of scale and to pass those benefits on to users.

– The free tier includes 5 GB of storage free

(1) S3 Storage Classes:
These classes are defined based on object/file availability and durability (protection against corruption/loss)

Standard: Default Storage Options
– General all purpose storage
– 99.999999999% object durability (“eleven nines”)
– 99.99% Object availability
– Most expensive

Reduced Redundancy Storage (RRS) – backups
– Designed for non-critical, reproducible objects
– 99.99% object durability
– 99.99% object availability
– less expensive than Standard

Infrequent Access (S3-IA) – not accessed on a day-to-day basis – maybe weekly or monthly
– Designed for objects that you do not access frequently but must be immediately available when accessed
– 99.999999999% object durability (“eleven nines”)
– 99.9% object availability
– less expensive than Standard/RRS

Glacier
– Designed for long-term archival storage
– May take several hours for objects stored in Glacier to be retrieved
– 99.999999999% object durability (“eleven nines”)
– cheapest S3 Storage (very low cost)

(2) Object Lifecycle:

– It is located on the bucket level

– However, it can be applied to:
– the entire bucket (applies to all the objects in the bucket)
– one specific folder within a bucket (applies to all the objects in that folder)
– one specific object within a bucket

– You can always delete a lifecycle policy or manually change the storage class back to whatever you like
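
As a rough sketch of how a lifecycle policy could be applied from the AWS CLI (the bucket name, rule ID and transition days below are made-up examples): first write a JSON rule file, then apply it with s3api.

Contents of lifecycle.json (transition objects to S3-IA after 30 days and to Glacier after 90 days):
{
  "Rules": [
    {
      "ID": "archive-old-objects",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}

aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket --lifecycle-configuration file://lifecycle.json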

(3) Permissions:

– They can be found at the bucket or object level

– At the bucket level you can control:
List: who can see the bucket name
Upload/Delete: who can upload objects to the bucket or delete objects in it
View Permission
Edit Permission

Bucket-level permissions are generally used for “internal” access control

– On the object level, you can control (for each object individually)
Open/download
View permissions
Edit Permissions

You can share specific objects via a link with anyone in the world.

(4) Object Versioning

– S3 Versioning is a feature that keeps track of and stores all old/new versions of an object so that you can access and use an older version if you like

– Versioning is either ON or OFF
– Once it is turned ON, you can only “suspend” versioning. It cannot be fully turned OFF.
– Suspending versioning only prevents versioning going forward. All previously versioned objects still maintain their older versions.
– Versioning can only be set on the bucket level and applies to ALL objects in the bucket
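
A small sketch of managing versioning from the AWS CLI (the bucket name is a placeholder):

# turn versioning ON for a bucket
aws s3api put-bucket-versioning --bucket my-example-bucket --versioning-configuration Status=Enabled

# versioning cannot be turned OFF again, only suspended
aws s3api put-bucket-versioning --bucket my-example-bucket --versioning-configuration Status=Suspended

# check the current versioning state
aws s3api get-bucket-versioning --bucket my-example-bucket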

Elastic Compute Cloud (EC2)
***************************

– Think of EC2 as your basic computer (which has an OS, CPU, hard drive, network card, firewall and RAM)

– EC2 provides scalable computing capacity in AWS Cloud
– It can be used to launch as many or as few virtual servers as you need, configure security and networking, and manage storage

(1) AMI’s – Amazon Machine Images
– A preconfigured package required to launch an EC2 instance,
which includes the OS, software packages and other required settings.

– you specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need.
you can also launch instances from as many different AMIs as you need.

(2) Instance Types:
– An instance type defines the virtual hardware (vCPUs/cores) of the instance
– Each instance offers different compute, memory and storage capabilities

(3) Elastic Block Store (EBS)
– Storage volume for an EC2 instance (like a hard drive)

– IOPS – input/output operations per second – More IOPS means better volume performance

(4) Security Groups
– Security groups are similar to NACLs in that they allow/deny traffic.
– Security groups are applied at the instance level (as opposed to the subnet level)

– A virtual firewall that controls the traffic for one or more instances
– when you launch instances, you associate one or more security groups with each instance

(5) IP Addressing:
– Private IP addressing for EC2 instances
– By default, all EC2 instances are created with a private IP address,
– which allows instances to communicate with each other as long as they are located in the same VPC

Public IP addressing for EC2 instances
– Instances can be launched with or without a public IP address (by default) depending on the VPC/subnet settings.
– A public IP address is REQUIRED for the instance to communicate with the internet.

RDS and DynamoDB
******************

RDS – Relational SQL databases (Amazon Aurora, SQL Server, ORACLE, PostgreSQL, MySQL)
DynamoDB – non-relational, NoSQL database (DynamoDB is the managed option; alternatives such as MongoDB, Cassandra or Oracle NoSQL can be installed/downloaded on your own instances)

Simple Notification Service (SNS): in other words, an alerting service
***********************************

An AWS service that allows you to automate the sending of email or text message notifications based on events that happen in your AWS account
Topic – e.g., “EC2 instance crashed”
Subscriber – the person/group who gets the notification
Publisher – CloudWatch / a human / an alarm
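
As a hedged sketch of that flow from the AWS CLI (the topic name, email address and ARN below are placeholders; create-topic prints the real TopicArn to use):

# create a topic, e.g. for EC2 crash alerts
aws sns create-topic --name ec2-crash-alerts

# subscribe a person/group (the subscriber) by email
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:ec2-crash-alerts --protocol email --notification-endpoint ops@example.com

# a publisher (CloudWatch, a human, an alarm) then publishes a message to the topic
aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:ec2-crash-alerts --subject "EC2 crashed" --message "Instance i-12345678 stopped responding"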

AWS CloudWatch: in other words, a monitoring service.
****************

It is a service that allows you to monitor various elements of your AWS account.

These alerts can be distributed using the SNS service.

Examples:
– set up alerts for monthly billing exceeding a certain amount
– set up alerts for EC2 instance CPU utilization
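
For instance, a CPU utilization alarm that notifies an SNS topic could look roughly like this (the instance ID and topic ARN are placeholders):

aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu-i-12345678 \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-12345678 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ec2-crash-alerts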

Elastic Load Balancer (ELB) (Classic) :
*************************************
An ELB evenly distributes traffic between EC2 instances that are associated with it.
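
A minimal sketch of creating a classic ELB from the AWS CLI (the name, instance IDs, subnet and security group are hypothetical examples reusing values from the CLI section below):

# create a classic load balancer listening on HTTP port 80
aws elb create-load-balancer --load-balancer-name my-web-elb --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" --subnets subnet-2b8a2c07 --security-groups sg-3fdcc241

# associate EC2 instances with it so traffic is distributed between them
aws elb register-instances-with-load-balancer --load-balancer-name my-web-elb --instances i-12345678 i-87654321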

Auto Scaling:
**************
– Auto Scaling is the process of adding (scaling up) or removing (scaling down) EC2 instances based on traffic demand for your application.

– Auto Scaling Groups handle the load for your application

– It is a service, not a physical part of the infrastructure
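
A rough CLI sketch of setting this up (the AMI ID, names, subnets and load balancer name are hypothetical examples):

# describe what each new instance should look like
aws autoscaling create-launch-configuration --launch-configuration-name my-web-lc --image-id ami-12345678 --instance-type t2.micro --key-name MyKeyPair --security-groups sg-3fdcc241

# create the Auto Scaling Group that adds/removes instances between the min and max size
aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-web-asg --launch-configuration-name my-web-lc --min-size 2 --max-size 6 --desired-capacity 2 --vpc-zone-identifier "subnet-2b8a2c07,subnet-9d4a7b6c" --load-balancer-names my-web-elb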

Lambda – Serverless Computing
******************************

 

AWS – Cloud Computing

 

The AWS Cloud Platform is divided into the following categories:

– Compute and Networking (ex: virtual server and vpc)
EC2 – RHEL, CentOS, Ubuntu, Debian, Fedora, Amazon Linux, Oracle Linux, Microsoft Windows Server
Route 53 – the DNS service we configure on AWS
VPC – Virtual Private Cloud
– Storage and CDN (ex: various storage services, plus content that lives on the network/CDN)
Amazon S3 (store your images, contents and even static websites)
Amazon Glacier (archival storage – more economical than S3)
Amazon CloudFront
– Databases
Amazon RDS:
– MySQL
– MS SQL Server
– Oracle
– Application Services (notification services, email services, etc.)
– Amazon SES (mass emailing, e.g., for e-advertisement)
– Amazon SNS (monitoring/notification emails).
– Deployment and Management (CI-CD)
– Amazon CloudWatch (monitoring service for resources such as servers, storage, and even billing, DNS, RDS databases)
– Amazon IAM (manage users and groups using Identity and Access Management)

aws cli part 1

1. Create a VPC

aws ec2 create-vpc --cidr-block 10.0.0.0/16

2. Create a VPC with dedicated tenancy

aws ec2 create-vpc --cidr-block 10.0.0.0/16 --instance-tenancy dedicated

3. Create a VPC with an IPv6 CIDR block

aws ec2 create-vpc --cidr-block 10.16.0.0/16 --amazon-provided-ipv6-cidr-block >> /root/awscreateVPC.json

4. Create a subnet within the VPC

aws ec2 create-subnet --vpc-id vpc-b774aace --cidr-block 10.16.1.0/24 >> /root/awscreateSubnet1.json

aws ec2 create-subnet --vpc-id "vpc-b774aace" --cidr-block "10.16.2.0/24" --availability-zone "us-east-1a" >> /root/awscreateSubnet2.json

6. Delete VPC

aws ec2 delete-vpc --vpc-id vpc-7c6ab405

7. Create route table (a default route table is created during vpc creation)

aws ec2 create-route-table --vpc-id vpc-b774aace >> /root/awscreateRouteTable.json

8. Associate subnet (say our subnet2 id = subnet-2b8a2c07) with the above route table (say route table id = rtb-0068f078)

aws ec2 associate-route-table --route-table-id rtb-0068f078 --subnet-id subnet-2b8a2c07 >> /root/awsassociateRouteTable.json

9. Dissociate subnet from route table

aws ec2 disassociate-route-table --association-id rtbassoc-802b6efb

10. Create Internet Gateway

aws ec2 create-internet-gateway >> /root/awscreateInternetGateway.json

11. Attach Internet Gateway to VPC (an Internet gateway already attached to a VPC cannot be attached to another VPC)

aws ec2 attach-internet-gateway --internet-gateway-id igw-b946d3df --vpc-id vpc-b774aace >> /root/awsattachInternetGateway.json

12. Detach Internet Gateway

aws ec2 detach-internet-gateway --internet-gateway-id igw-b946d3df --vpc-id vpc-b774aace

13. Create Route (to create a new route you need an Internet Gateway, Network Interface, or Virtual Private Gateway as the target)

aws ec2 create-route --route-table-id rtb-714cd209 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-b946d3df

14. Create NACL

aws ec2 create-network-acl --vpc-id vpc-b774aace >> /root/awscreateNetworkACL.json

15. Create NACL entry (to add an allow or deny rule)

aws ec2 create-network-acl-entry --network-acl-id acl-f769128e --ingress --rule-number 25 --protocol tcp --port-range From=22,To=22 --cidr-block 0.0.0.0/0 --rule-action allow

aws ec2 create-network-acl-entry --network-acl-id acl-f769128e --ingress --rule-number 35 --protocol tcp --port-range From=80,To=80 --cidr-block 0.0.0.0/0 --rule-action allow

aws ec2 create-network-acl-entry --network-acl-id acl-f769128e --ingress --rule-number 50 --protocol all --port-range From=0,To=65535 --cidr-block 10.16.2.251/32 --rule-action deny

aws ec2 create-network-acl-entry --network-acl-id acl-f769128e --egress --rule-number 50 --protocol all --port-range From=0,To=65535 --cidr-block 10.16.2.251/32 --rule-action deny

16. Modify NACL Entry

aws ec2 replace-network-acl-entry --network-acl-id acl-f769128e --ingress --rule-number 100 --protocol all --port-range From=0,To=65535 --cidr-block 10.16.2.0/24 --rule-action allow

17. create security group

aws ec2 create-security-group --group-name mySG1 --description "my security group" --vpc-id vpc-b774aace

18. Create SG inbound (To add a rule that allows inbound SSH traffic)

aws ec2 authorize-security-group-ingress --group-id sg-3fdcc241 --protocol tcp --port 22 --cidr 0.0.0.0/0

19. Create SG inbound (to add a rule that allows inbound HTTP traffic)

aws ec2 authorize-security-group-ingress --group-id sg-3fdcc241 --protocol tcp --port 80 --cidr 0.0.0.0/0

Note: for https use port 443
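
For example, an HTTPS rule for the same (hypothetical) security group would look like this:

aws ec2 authorize-security-group-ingress --group-id sg-3fdcc241 --protocol tcp --port 443 --cidr 0.0.0.0/0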

20. Create key pair

aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text >> /root/awsMyKeyPair.pem

aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text | out-file -encoding ascii -filepath MyKeyPair.pem  [Windows PowerShell]

21. Launch the specified number of instances using an AMI for which you have permissions.

aws ec2 run-instances
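
The bare command above needs arguments in practice; a minimal invocation might look like the following (the AMI ID is a placeholder, while the key pair, security group and subnet reuse the example IDs from the earlier steps):

aws ec2 run-instances --image-id ami-12345678 --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids sg-3fdcc241 --subnet-id subnet-2b8a2c07 >> /root/awsRunInstances.json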

22. Delete route table

aws ec2 delete-route-table --route-table-id rtb-4069f138

23. Associate a subnet with a route table (another example)

aws ec2 associate-route-table --route-table-id rtb-22574640 --subnet-id subnet-9d4a7b6c

24. Create a VPC endpoint

aws ec2 create-vpc-endpoint --vpc-id vpc-1a2b3c4d --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-11aa22bb

This example creates a VPC endpoint between VPC vpc-1a2b3c4d and Amazon S3 in the us-east-1 region, and associates route table rtb-11aa22bb with the endpoint.

25. Create a VPC peering connection between your VPCs

aws ec2 create-vpc-peering-connection --vpc-id vpc-1a2b3c4d --peer-vpc-id vpc-11122233

26. Create a VPC peering connection with a VPC in another account

aws ec2 create-vpc-peering-connection --vpc-id vpc-1a2b3c4d --peer-vpc-id vpc-11122233 --peer-owner-id 123456789012

27. Create a VPN connection with dynamic routing

aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-0e11f167 --vpn-gateway-id vgw-9a4cacf3

28. Create a static route for a VPN connection

aws ec2 create-vpn-connection-route --vpn-connection-id vpn-40f41529 --destination-cidr-block 11.12.0.0/16

29. Create a virtual private gateway

aws ec2 create-vpn-gateway --type ipsec.1

mod_jk or mod_proxy_ajp ?

A Tomcat servlet container can be put behind an Apache web server using the AJP protocol, which carries all request information from Apache to Tomcat. There are two implementations of the AJP module:

  • mod_jk which must be installed separately
  • mod_proxy_ajp which is a standard module since Apache 2.2

They both use the AJP protocol, so they both provide the same functionality.

The advantage of mod_jk is its JkEnvVar directive, which allows you to send any environment variable from Apache to Tomcat as a request attribute. If you need, for example, the SSL_CLIENT_S_DN variable with the SSL certificate DN provided by mod_ssl, or the AUTHENTICATE_CN variable provided by mod_ldap, then mod_jk can be directed to send it simply with:

<IfModule mod_jk.c>
   JkEnvVar SSL_CLIENT_S_DN
</IfModule>

while for mod_proxy_ajp, you have to use mod_rewrite to prepend the AJP_ prefix to the variables that you want to send:

<IfModule mod_proxy_ajp.c>
   RewriteRule .* - [E=AJP_SSL_CLIENT_S_DN:%{SSL:SSL_CLIENT_S_DN}]
</IfModule>

which is more complicated and forces you to enable mod_rewrite.

The advantage of mod_proxy_ajp is that it is a standard Apache module, so you do not need to compile and install it yourself.

An example configuration of mod_jk in the Apache httpd.conf file is as follows:

<IfModule mod_jk.c>
 # a list of Tomcat instances
 JkWorkerProperty worker.list=tomcatA,tomcatB
 # connection properties to instance A on localhost
 JkWorkerProperty worker.tomcatA.type=ajp13
 JkWorkerProperty worker.tomcatA.host=localhost
 JkWorkerProperty worker.tomcatA.port=8009
 # connection properties to instance B on some other machine
 JkWorkerProperty worker.tomcatB.type=ajp13
 JkWorkerProperty worker.tomcatB.host=zeus.example.com
 JkWorkerProperty worker.tomcatB.port=8009
 # some other configuration
 JkLogFile "|/usr/bin/cronolog /var/log/apache2/%Y/%m/%d/mod_jk.log"
 JkLogLevel error
 JkShmFile /var/log/apache2/jk.shm
 JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
 # forwarding URL prefixes to Tomcat instances
 JkMount /opencms tomcatA
 JkMount /otherapp tomcatB
</IfModule>

An example configuration of mod_proxy_ajp is here:

<IfModule mod_proxy_ajp.c>
 <Location "/opencms">
   Allow from all
   ProxyPass ajp://localhost:8009/opencms
 </Location>
 <Location "/otherapp">
   Allow from all
   ProxyPass ajp://zeus.example.com:8009/otherapp
 </Location>
</IfModule>

So mod_jk has a more flexible configuration, but it needs a separate installation and its configuration is more complex. If you have no special requirements, go for mod_proxy_ajp. If you need something special, like using authentication modules from Apache to secure applications in Tomcat, go for mod_jk.

New site configuration

If you are running OpenCms (6.0 or greater) in Tomcat using an Apache front end (with mod_jk or mod_proxy_ajp, NOT MOD_PROXY IN HTTP MODE), there are three basic steps to configuring a new site in your implementation:

Create the containing folder for the site in the OpenCms Explorer

In the OpenCms Explorer view, change to the ‘/’ site, go into the ‘sites’ folder, and create a new folder. The folder name is case-sensitive, so keep track of exactly what you entered. For the examples that follow, we’ll assume the creation of a /sites/MyNewSite folder.

Add site information to OpenCms’s configuration

In order to make your new site available within OpenCms (i.e. displayed in the site list of the workplace), we need to modify the opencms-system.xml configuration file, located in <opencmsroot>/WEB-INF/config/.

Find the section of opencms-system.xml that looks like:

 <sites>
    <workplace-server>http://www.mysite.com</workplace-server>
    <default-uri>/sites/default/</default-uri>
    <site server="www.mysite.com" uri="/sites/default/"/>
 </sites>

and add another site definition as follows:

    <site server="www.mynewsite.com" uri="/sites/MyNewSite/"/>

This tells OpenCms that when it receives a request for www.mynewsite.com, it should serve that request out of the MyNewSite container. I believe you have to restart tomcat or reload opencms for this config file to be reread.

Adjust OpenCms automatic link generation (static export, module-resources)

This configuration is only valid if OpenCms is installed as the ROOT application in Tomcat. Edit the file “WEB-INF/config/opencms-importexport.xml” in your OpenCms installation and change the content of the <vfs-prefix> tag to empty:

<rendersettings>
  <rfs-prefix>${CONTEXT_NAME}/export</rfs-prefix>
  <vfs-prefix></vfs-prefix>
</rendersettings>

Then all links will have an empty prefix, i.e. a link to the file /dir/file.html will be /dir/file.html instead of /opencms/dir/file.html.

Configuring the Apache WebServer

http.conf

Add the following lines to the httpd.conf file if needed (i.e. if not already done) to load the required modules. Other Apache distributions recommend configuring the modules to load in different locations: for Apache 2.2 on SuSE this is done in /etc/sysconfig/apache2, and on Debian you use the a2enmod command to link the files from /etc/apache2/mods-available to /etc/apache2/mods-enabled. In the end, the following lines need to be included somehow in the Apache configuration:

LoadModule jk_module modules/mod_jk.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule rewrite_module modules/mod_rewrite.so

After the modules are loaded they have to be configured.

mod_jk

If you use mod_jk, put there the following:

<IfModule mod_jk.c>
 JkWorkerProperty worker.list=ocms
 JkWorkerProperty worker.ocms.type=ajp13
 JkWorkerProperty worker.ocms.host=localhost
 JkWorkerProperty worker.ocms.port=8009
 JkLogFile "|/usr/bin/cronolog /var/log/apache2/%Y/%m/%d/mod_jk.log"
 JkLogLevel error
 JkShmFile /var/log/apache2/jk.shm
 JkOptions +RejectUnsafeURI
 JkMount /opencms/* ocms
 JkMount /export/* ocms
 JkMount /resources/* ocms
 JkMountCopy All
</IfModule>

The JkMount directives forward requests for the OpenCms servlet at /opencms and the directories at /export and /resources to Tomcat. The JkMountCopy All directive mounts these for all virtual servers. If you plan to use some virtual servers without OpenCms, do not put the directives here, but mount the prefixes in each virtual server instead.

mod_proxy_ajp

If you use mod_proxy_ajp, put there the following:

  <IfModule mod_proxy_ajp.c>
   <Location "/opencms">
    Allow from all
    ProxyPass ajp://localhost:8009/opencms
   </Location>
   <Location "/export">
    Allow from all
    ProxyPass ajp://localhost:8009/export
   </Location>
   <Location "/resources">
    Allow from all
    ProxyPass ajp://localhost:8009/resources
   </Location>
   <Location "/update">
    Allow from all
    ProxyPass ajp://localhost:8009/update
   </Location>
  </IfModule>

Defining the virtual hosts

This configuration is for an OpenCms installation which is installed as the ROOT application in Tomcat.

<VirtualHost *:80>
  ServerName www.mysite.com
  ServerAdmin admin@example.com
  DocumentRoot "C:/Tomcat5.5/webapps/ROOT"
  ErrorLog logs/error.log

  # Allow accessing the document root directory 
  <Directory "C:/Tomcat5.5/webapps/ROOT">
    Options FollowSymlinks
    AllowOverride All
    Order allow,deny
    Allow from all
  </Directory>
  
  # If the requested URI is located in the resources folder, do not forward the request
  SetEnvIfNoCase Request_URI ^/resources/.*$ no-jk
  
  # If the requested URI is static content do not forward the request
  SetEnvIfNoCase Request_URI ^/export/.*$ no-jk
  RewriteEngine On
  RewriteLog logs/rewrite.log
  RewriteLogLevel 1

  # Deny access to php files
  RewriteCond %{REQUEST_FILENAME} (.+)\.php(.*)
  RewriteRule (.*) / [F]

  # If the requested URI is NOT located in the resources folder.
  # Prepend an /opencms to everything that does not already start with it
  # and force the result to be handled by the next URI-handler ([PT]) (JkMount in this case)
  RewriteCond %{REQUEST_URI} !^/resources/.*$
  RewriteCond %{REQUEST_URI} !^/export/.*$
  RewriteCond %{REQUEST_URI} !^/webdav.*$
  RewriteRule !^/opencms/(.*)$ /opencms%{REQUEST_URI} [PT]

  # These are the settings for static export. If the requested resource is not already
  # statically exported create a new request to the opencms404 handler. This has to be
  # a new request, because the current one would not get through mod_jk because of the "no-jk" var.
  RewriteCond %{REQUEST_URI} ^/export/.*$
  RewriteCond "%{DOCUMENT_ROOT}%{REQUEST_FILENAME}" !-f
  RewriteCond "%{DOCUMENT_ROOT}%{REQUEST_FILENAME}/index_export.html" !-f
  RewriteRule .* /opencms/handle404?exporturi=%{REQUEST_URI}&%{QUERY_STRING} [P]
  
  JkMount /* ocms
</VirtualHost>

This redirect doesn’t work with opencms 7.5.1 for static export.

RewriteRule .* /opencms/handle404?exporturi=%{REQUEST_URI}&%{QUERY_STRING} [P]

so I change it to:

RewriteRule .* http://127.0.0.1:8080/opencms/handle404?exporturi=%{REQUEST_URI}&%{QUERY_STRING} [P]

After the configuration is finished the Apache WebServer needs to be restarted.

Alternative definition

The previous definition is quite complex; here is my simpler definition that works for me:

<VirtualHost 147.251.9.183:80 >
   ServerAdmin admin@example.com
   ServerName www.mysite.com
   DocumentRoot /var/www/mysite
   <Directory /var/www/mysite>
       Options Indexes MultiViews
       AllowOverride None
       Order allow,deny
       allow from all
   </Directory>
   RewriteEngine On
   RewriteRule ^/$ /opencms/ [passthrough]
   RewriteCond %{REQUEST_URI} !^/opencms/.*$
   RewriteCond %{REQUEST_URI} !^/export/.*$
   RewriteCond %{REQUEST_URI} !^/resources/.*$
   RewriteCond %{REQUEST_URI} !^/error/.*$
   RewriteCond %{REQUEST_URI} !^/icons/.*$
   RewriteCond %{REQUEST_URI} !^/update/.*$
   RewriteRule .* /opencms%{REQUEST_URI} [QSA,passthrough]
</VirtualHost>

The configuration rewrites all requests by adding /opencms in front of them, except requests that already have the prefix, or go for static files or go for Apache error files or Apache file icons.

Configuring Tomcat

Make sure the connector to be used by Apache mod_jk is configured in the server.xml file.

<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009"
enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />

MySQL backup

#!/usr/bin/env bash

USER=""
PASSWORD=""
OUTPUTDIR=""
DAYS_TO_KEEP=60

# list all databases, stripping the table formatting and the header line
databases=`mysql -u$USER -p$PASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`
cd $OUTPUTDIR
for db in $databases; do
    # skip the internal schemas and any database whose name starts with an underscore
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]] ; then
        echo "Dumping database: $db"
        # dump with the credentials defined above and compress the output
        mysqldump -u$USER -p$PASSWORD --databases --events $db | gzip > `date +%Y%m%d`.$db.sql.gz
    fi
done
# remove compressed dumps older than DAYS_TO_KEEP days
find "$OUTPUTDIR" -name "*.gz" -type f -ctime +$DAYS_TO_KEEP -exec rm '{}' ';'
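
To run this automatically, a crontab entry along these lines could be used (the script path and log file are assumptions):

# run the backup script every night at 02:00 and append its output to a log
0 2 * * * /usr/local/bin/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1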

How to install Fail2ban in RHEL 6 & 7

What is fail2ban?

Fail2ban works by scanning and monitoring log files for selected entries and then bans IPs that show malicious signs, such as too many password failures, probing for exploits, etc.


1. Install Fail2Ban

For RHEL 6

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

For RHEL 7

rpm -Uvh http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm

yum install fail2ban

2. Copy the Configuration File

The default fail2ban configuration file is located at /etc/fail2ban/jail.conf. Configuration work should not be done in that file, since it can be modified by package upgrades; instead, copy it so that we can make our changes safely.

We need to copy it to a file called jail.local for fail2ban to find it:


cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local


3. Configure defaults in jail.local

The first section of defaults covers the basic rules that fail2ban applies to all enabled services that are not overridden in a service’s own section. If you want to set up more nuanced protection for your server, you can customize the details in each section.

You can see the default section below.

[DEFAULT]

# “ignoreip” can be an IP address, a CIDR mask or a DNS host. Fail2ban will not
# ban a host which matches an address in this list. Several addresses can be
# defined using space separator.
ignoreip = 127.0.0.1

# “bantime” is the number of seconds that a host is banned.
bantime  = 3600

# A host is banned if it has generated “maxretry” during the last “findtime”
# seconds.
findtime  = 600

# “maxretry” is the number of failures before a host gets banned.
maxretry = 3

4. Add a jail file to protect SSH

Although you can add these parameters to the global jail.local file, it is good practice to create separate jail files for each of the services we want to protect with Fail2Ban.

So let's create a new jail for SSH with the vi editor.

vi /etc/fail2ban/jail.d/sshd.local

In the above file, add the following lines of code:

[sshd]
enabled = true
port = ssh
action = iptables-multiport
logpath = /var/log/secure
maxretry = 3
bantime = 3600

5. Restart Fail2Ban

service fail2ban restart

iptables -L

Check Fail2Ban Status

Use fail2ban-client command to query the overall status of the Fail2Ban jails.


fail2ban-client status

You can also query a specific jail status using the following command:

fail2ban-client status sshd

Manually Unban IP Banned by Fail2Ban

If for some reason you want to grant access to an IP that is banned, use the following expression to manually unban an IP address banned by fail2ban:

fail2ban-client set JAIL unbanip IP

e.g. unban IP 192.168.1.101, which was banned according to the [sshd] jail:

fail2ban-client set sshd unbanip 192.168.1.101

Nginx: forcing an HTTP to HTTPS jump turns a POST request into a GET

The company intends to replace HTTP with HTTPS in the Nginx environment, which requires forcing HTTP to redirect to HTTPS. Searching the Internet, the approaches basically come down to the following.
Configure: rewrite ^(.*)$ https://$host$1 permanent;

Or, in the server block, configure: return 301 https://$server_name$request_uri;

Or use an if block in the server, which is needed when configuring multiple domain names:

if ($host ~* "^rmohan.com$") {

rewrite ^/(.*)$ https://dev.rmohan.com/ permanent;

}

Or, in the server block, configure: error_page 497 https://$host$uri?$args;

With any of the above methods, normal website visits are fine and the redirect works.

After the configuration succeeded, we prepared to switch the app's API addresses to HTTPS, and this is where a problem appeared.

Investigation found that GET requests received data fine, but POST requests arrived with no data. After adding $request_body to the nginx log format, the log showed no parameters; checking the front-end logs, the POST had been turned into a GET. That was the key to the problem.

Searching online revealed this was caused by the 301 redirect; replacing it with 307 solved the problem.

301 Moved Permanently – the requested resource has been permanently moved to a new location, and any future references to this resource should use one of the URIs returned by this response.

307 Temporary Redirect – the requested resource temporarily responds from a different URI. Because such redirection is temporary, the client should continue to send future requests to the original address.

From the above we can see that a 301 is a permanent redirect, while a 307 is a temporary redirect; that is the difference between them.

To express the difference simply and directly:

return 307 https://$server_name$request_uri;

307: for a POST request, this indicates that the request has not yet been processed, and the client should re-issue the POST request to the URI given in Location.

Switching to the 307 status code keeps the original request method across the redirect.
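
A quick way to see the difference is to run the same request against the two server configurations and watch how the client treats the redirect; a rough sketch with curl (the URL and form data are placeholders):

# with "return 301 ...": curl (like most clients) follows the redirect by re-issuing a GET without the body
curl -iL -d 'user=test' http://testapp.example.com/api/login

# with "return 307 ...": the POST method and its body are preserved when following the redirect to the HTTPS URL
curl -iL -d 'user=test' http://testapp.example.com/api/login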

The following configuration lets ports 80 and 443 coexist:

They need to be configured in one server block, with ssl added to the 443 listen directive and ssl on; commented out, as follows:

server {
listen 80;
listen 443 ssl;
server_name testapp.***.com;
root /data/vhost/test-app;
index index.html index.htm index.shtml index.php;
#ssl on;
ssl_certificate /usr/local/nginx/https/***.crt;
ssl_certificate_key /usr/local/nginx/https/***.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
error_page 404 /404.html;
location ~ [^/]\.php(/|$) {
fastcgi_index index.php;
include fastcgi.conf;
fastcgi_pass 127.0.0.1:9000;
#include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
access_log /data/logs/nginx/access.log access;
error_log /data/logs/nginx/error.log crit;
}

The two-server-block version:

server {
listen 80;
server_name testapp.***.com;
rewrite ^(.*) https://$server_name$1 permanent;
}

server {
listen 443;
server_name testapp.***.com;
root /data/vhost/test-app;
index index.html index.htm index.shtml index.php;
ssl on;
ssl_certificate /usr/local/nginx/https/***.crt;
ssl_certificate_key /usr/local/nginx/https/***.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
error_page 404 /404.html;
location ~ [^/]\.php(/|$) {
fastcgi_index index.php;
include fastcgi.conf;
fastcgi_pass 127.0.0.1:9000;
#include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
access_log /data/logs/nginx/access.log access;
error_log /data/logs/nginx/error.log crit;
}

Some SSL optimizations are offered below. They can be applied as needed for your setup; you do not have to configure all of them, the generally needed directives are enough:

ssl on;
ssl_certificate /usr/local/https/www.localhost.com.crt;
ssl_certificate_key /usr/local/https/www.localhost.com.key;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;   # allow only TLS protocols
ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;   # cipher suite (CloudFlare's Internet-facing SSL cipher configuration)
ssl_prefer_server_ciphers on;   # let the server choose the best cipher during negotiation
ssl_session_cache builtin:1000 shared:SSL:10m;   # session cache on the server (may take up more server resources)
ssl_session_tickets on;   # enable the browser's Session Ticket cache
ssl_session_timeout 10m;   # SSL session expiration time
ssl_stapling on;   # OCSP Stapling: cache the certificate revocation status on the server to speed up the TLS handshake
ssl_stapling_verify on;   # verify the OCSP Stapling response
resolver 8.8.8.8 8.8.4.4 valid=300s;   # DNS resolvers used to query the OCSP server
resolver_timeout 5s;   # DNS query timeout

ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.

Create a user in MySQL on Linux

Log in to MySQL as root:

mysql -u root -p

Now create a user with the following command:

CREATE USER 'testdb'@'localhost' IDENTIFIED BY 'test123';

If you get the error shown in the title above,

then you have to reset the root password first, to satisfy the password policy level in MySQL. Simply use the command below to set the root password:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'Root@1234';
It will then reply with something like “Query OK, 0 rows affected (0.00 sec)”.

Now try the user-creation step again with a password that meets the policy.

If you don’t want the password policy and want to create a user with a simple password, follow the steps below.

Log in to MySQL as root:

mysql -u root -p

Then check the policy status with the command below:

SHOW VARIABLES LIKE 'validate_password%';

The output will look like the listing below.

You can see that validate_password_policy is MEDIUM.

Now you have to change it to LOW. Set the policy rule to LOW with the following command:

SET GLOBAL validate_password_policy=LOW;

Now check the password policy again as above; you will get output like the following:

mysql> SET GLOBAL validate_password_length = 4;
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW VARIABLES LIKE 'validate_password%';
+--------------------------------------+--------+
| Variable_name                        | Value  |
+--------------------------------------+--------+
| validate_password_dictionary_file    |        |
| validate_password_length             | 4      |
| validate_password_mixed_case_count   | 1      |
| validate_password_number_count       | 1      |
| validate_password_policy             | MEDIUM |
| validate_password_special_char_count | 1      |
+--------------------------------------+--------+
6 rows in set (0.00 sec)

mysql> SET GLOBAL validate_password_policy = LOW;
Query OK, 0 rows affected (0.01 sec)

 

The Performance Schema is not enabled by default.

To check, you can run the command:

SHOW VARIABLES LIKE 'performance_schema';

Suppose you now see OFF.

To enable it, start the server with the performance_schema variable enabled. For example, use these lines in your my.cnf file:

[mysqld]
performance_schema=ON

More details can be found in the official documentation:

https://dev.mysql.com/doc/refman/en/performance-schema-q

MySQL Slave Failed to Open the Relay Log

This problem is a little tricky; there are possible fixes that the MySQL website has stated. Sad to say, the ones I read in the forum and on the site didn’t fix my problem. What I encountered was that the relay-bin logs on my MySQL slave server had already been ‘rotated’, meaning deleted from the folder. This happens when the slave has been disconnected from the master for quite a long time and has not replicated anything. A simple way to fix this is to flush the logs, but make sure the slave is stopped before using this command…

FLUSH LOGS;

Bring in a fresh copy of the database from the master server and update the slave server database. THIS IS IMPORTANT! If you don’t update the slave database, you will not have the data from the time you were disconnected until you reset the relay logs. So UPDATE YOUR SLAVE WITH THE LATEST DATABASE FROM THE MASTER!

Now that the logs are flushed, all the relay-bin logs will be deleted when the slave is started again. Usually this fixes the problem, but if you start the slave and the failed relay log error is still there, you have to take more desperate measures… reset the slave. This is what I had to do to fully restore my MySQL slave server. Resetting the slave restores all the settings to default… password, username, relay-log, port, tables to replicate, etc… So it is better to have a copy of your settings before actually doing a slave reset. When you’re ready to reset the slave, run the command…

RESET SLAVE;

after which you should restore all your setting with a command something like…

CHANGE MASTER TO MASTER_HOST=.....

now start the slave with…

START SLAVE;

check your slave server with…

SHOW SLAVE STATUS\G

look for …

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

both should be Yes; if not, check your syslog for other errors. I’ll leave it here since this is what I encountered and I was able to fix it.

Edit 5/14/11:

There is a possible chance that after executing the CHANGE MASTER command you’ll receive the error below…

ERROR 1201 (HY000): Could not initialize master info structure; more error messages can be found in the MySQL error log

This can occur when the relay logs under /var/lib/mysql were not properly cleaned up and are still there. The next step is to delete them manually, log back in to MySQL, flush the logs, reset the slave, then execute the CHANGE MASTER command again. The file to delete would be relay-log.info. This should work now. Sometimes I don’t know why MySQL can’t reset the slave logs.
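
Putting the steps together, a rough sketch of the recovery looks like the following (the paths assume a default /var/lib/mysql data directory, and the master host, replication credentials and log coordinates are placeholders you must replace with your own values):

# on the slave, remove the stale relay log metadata and relay logs
rm /var/lib/mysql/relay-log.info /var/lib/mysql/*relay-bin*

-- then, inside the mysql client on the slave:
STOP SLAVE;
FLUSH LOGS;
RESET SLAVE;
CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;
SHOW SLAVE STATUS\G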


Linux Servers: Preventing and Allowing Pings

Linux allows ping responses by default, which means ping is on; but a ping can be the start of a network attack, so turning off ping can improve the server’s security. Whether the system allows ping is determined by two factors: 1. kernel parameters, 2. the firewall. Both must allow ping at the same time; if either one forbids it, ping cannot work. The specific configuration methods are as follows:

1. Kernel parameter settings

Allow/disable ping (ping is permitted by default)

To temporarily enable or disable ping, modify the contents of the /proc/sys/net/ipv4/icmp_echo_ignore_all file. The file contains only 1 character: 0 allows ping, 1 forbids it, and there is no need to restart the server.

Permanently allow/disable ping configuration method:

Modify the file /etc/sysctl.conf and add a line at the end of the file:

net.ipv4.icmp_echo_ignore_all = 1

If the net.ipv4.icmp_echo_ignore_all line already exists, just change the value after the = sign: 0 to allow ping, 1 to disable it.

Execute sysctl -p after modification to make the new configuration take effect (important).
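
For example (run as root; the temporary change is lost on reboot, the sysctl.conf change survives it):

# temporarily disable ping replies
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
# or equivalently
sysctl -w net.ipv4.icmp_echo_ignore_all=1

# re-enable ping replies
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all

# make the setting permanent, then reload
echo "net.ipv4.icmp_echo_ignore_all = 1" >> /etc/sysctl.conf
sysctl -p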

2. Firewall settings (this assumes the kernel configuration is at the default value, i.e. ping is not prohibited there)

The iptables firewall is used as an example here. For other firewalls, refer to their official documentation.

Allow ping:

iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT

Or you can temporarily stop the firewall:

service iptables stop

Prohibit ping:

iptables -A INPUT -p icmp --icmp-type 8 -s 0/0 -j DROP