Herbal bath powder

Nalangu Maavu is a handmade, 100% natural herbal body wash, face wash and face mask that is popular in South India.
It is made entirely from natural ingredients and has long been everyone's choice of a good cleanser.
It is gentle and will bring a glow to your skin.

INGREDIENTS:

Mint leaves
Poosanthu pattai
Thirimanjana pattai
Neem powder
Sandalwood powder
Avaram poo
Nattu kichili
Seemai kichili
Karboka arisi
Cinnamon
Lavanga pathiri ilai
Kalpasi
Pachilai
Vasambu
Marikozhlunthu
Poolan kizhlangu
Shebagha poo
Mali nanari
Korai kilangu
Athi mathuram
Usilam podi
Magilam poo
Rose petals
Kasturi manjal
Maru
Davanam
Nannari
Vetiver
Vilamesai root
Kasa kasa
Raw rice
Chembaruthi flower

Benefits:
Green gram exfoliates dead skin cells and brightens the skin texture. It also lightens skin tone and clears tan.
Rose petals help to reduce wrinkles and protect the skin from sun damage.
As we all know, Kasturi manjal helps to treat blemishes and pimples.
Korai kizhangu helps to remove unwanted hair from the body and face.
Mint leaves act as a cleansing agent and leave you feeling fresh and oil free.
Raw rice contains Vitamin E, which nourishes the skin and makes it look younger.

Preparation :

Dry all the ingredients in the sun for 2 or 3 days.
Then grind them at a nearby flour mill and store the powder in an airtight container. You can even grind it in a mixie if you have a good one.

Mix the required amount of powder with water and use it in place of soap.

I am sure you will love this amazing bath powder. You will be amazed by the positive changes in your skin once you start using it.

Ingredients :

Whole green gram – 1 kg

Bengal gram (Kadalai paruppu) – 200 gms

Spiked ginger – 100 gms

Vetiver – one fistful

Wild turmeric – 100 gms

Neem – one hand full of leaves (dried)

Green gram is an excellent cleanser. It is a natural beauty product. It is used in face packs, as it is a good scrubber. It brightens our skin.

Bengal gram is a tan removal agent. It is used in face packs with a few other ingredients, as it lightens our skin, clears pimples, helps to fade acne scars and to get rid of blackheads.

Spiked ginger (Poolaan kilangu or seemai kichilli kilangu in Tamil, Valiya kacholam in Malayalam) is used for enhancing skin complexion. It is the best skin conditioner. It has a nice aroma.

Vetiver, (Ramacham ver in Malayalam) has an amazing aroma. It has anti-inflammatory properties.
It is known for its soothing and cooling effects, as it is loaded with hydrating qualities. It is an anti-aging tonic. Clears acne and boosts skin health.
Wild turmeric, known as Kasthuri manjal is a great healer for skin infections and acne. It is a natural antiseptic and wonderful skin rejuvenator. This turmeric does not leave a yellow tint on the skin.

Neem, as we all know, has great benefits for the skin. It is a natural antiseptic. It helps clear scars, remove blackheads, retain moisture and tone the skin, and is a great cure for any kind of skin infection.

Benefits of using Nalangu Maavu Herbal Bath Powder:
Nalangu Maavu absorbs excess oil from the skin without drying it
It helps to restore the natural pH balance of skin
Turmeric in the bath powder works wonders for inflammation
Turmeric also promotes the body’s synthesis of antioxidants and slows down visible signs of aging
Regular usage of herbal bath powder can help people with acne or pimples, by reducing oil secretion
Using Nalangu Maavu Herbal Bath Powder on a daily basis can help reduce body odor and excessive sweating
It acts like a toner, minimizing pores on the skin
Nalangu Maavu Herbal Bath Powder is antifungal, anti-bacterial and anti-microbial

How To Configure IP Address In Ubuntu 18.04 LTS

Netplan was introduced by Ubuntu developers in Ubuntu 17.10. In this new approach, we no longer use the /etc/network/interfaces file to configure IP addresses; instead, we use a YAML file. The default Netplan configuration files are found under the /etc/netplan/ directory. In this brief tutorial, we are going to learn how to configure static and dynamic IP addresses in Ubuntu 18.04 LTS server and desktop editions.

Configure Static IP Address In Ubuntu 18.04 LTS Server

Let us find out the default network configuration file:

$ ls /etc/netplan/
50-cloud-init.yaml

As you can see, the default network configuration file is 50-cloud-init.yaml and it is obviously a YAML file.

Now, let's check the contents of this file:

$ cat /etc/netplan/50-cloud-init.yaml

Add the configuration for the available interfaces, for example eth0 and eth1:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 192.168.1.9/24
      gateway4: 192.168.1.1
      nameservers:
        addresses:
          - 192.168.1.1
          - 8.8.8.8
          - 8.8.4.4
        search: []
    eth1:
      dhcp4: false
      addresses:
        - 192.168.1.10/24   # example address for the second interface
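Once the file is saved, apply the new configuration. netplan try tests the change and rolls back automatically if you lose connectivity, while netplan apply makes it effective immediately:

$ sudo netplan try
$ sudo netplan apply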

How to disable Cloud-Init in a RHEL Cloud Image

So this one is pretty simple. However, I found a lot of misinformation along the way, so I figured I would jot down the proper (and simplest) process here.

Symptoms: a RHEL (or variant) VM takes a very long time to boot. On the VM console, you can see the following output while the boot process is stalled, waiting for a timeout. Note that the message below has nothing to do with cloud-init; it is simply the output I have most often seen on the console while waiting for such a VM to boot.

[106.325574] random: crng init done

Note that I have run into this issue in both OpenStack (when booting from external provider networks) and in KVM.

Upon initial boot of the VM, run the command below.

touch /etc/cloud/cloud-init.disabled
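On later boots you can verify that cloud-init is being skipped; recent cloud-init versions report this through the status subcommand (older builds may not ship it):

cloud-init status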

How to install Apache, PHP 7.3 and MySQL on CentOS 7.6


I will add the EPEL repo here to install the latest phpMyAdmin as follows:

rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
yum -y install epel-release

Installing MySQL / MariaDB
MariaDB is a fork of MySQL created by Monty Widenius, the original developer of MySQL. MariaDB is compatible with MySQL, and I've chosen to use it here instead of MySQL. Run this command to install MariaDB with yum:

yum -y install mariadb-server mariadb
Then we create the system startup links for MySQL (so that MySQL starts automatically whenever the system boots) and start the MySQL server:

systemctl start mariadb.service
systemctl enable mariadb.service
Set passwords for the MySQL root account:

mysql_secure_installation

[root@server1 ~]

# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we’ll need the current
password for the root user. If you’ve just installed MariaDB, and
you haven’t set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): <–ENTER
OK, successfully used password, moving on…

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n]
New password: <–yourmariadbpassword
Re-enter new password: <–yourmariadbpassword
Password updated successfully!
Reloading privilege tables..
… Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] <–ENTER
… Success!

Normally, root should only be allowed to connect from ‘localhost’. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] <–ENTER
… Success!

By default, MariaDB comes with a database named ‘test’ that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] <–ENTER

  • Dropping test database…
    … Success!
  • Removing privileges on test database…
    … Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] <–ENTER
… Success!

Cleaning up…

All done! If you’ve completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

[root@server1 ~]

#
3 Installing Apache
CentOS 7 ships with Apache 2.4. Apache is directly available as a CentOS 7 package, therefore we can install it like this:

yum -y install httpd

Now start Apache and configure your system to start it at boot time:

systemctl start httpd.service
systemctl enable httpd.service
To be able to access the webserver from outside, we have to open the HTTP (80) and HTTPS (443) ports in the firewall. The default firewall on CentOS is firewalld, which can be configured with the firewall-cmd command.

firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload
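As a quick sanity check, you can list the services that are now allowed in the public zone:

firewall-cmd --zone=public --list-services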
Now direct your browser to the IP address of your server, in my case http://192.168.1.100, and you should see the Apache placeholder page:

Installing PHP
The PHP version that ships with CentOS by default is quite old (PHP 5.4). Therefore, in this chapter I will show you some options for installing newer PHP versions, such as PHP 7.0 to 7.3, from the Remi repository.

Add the Remi CentOS repository.

rpm -Uvh http://rpms.remirepo.net/enterprise/remi-release-7.rpm
Install yum-utils as we need the yum-config-manager utility.

yum -y install yum-utils
and run yum update

yum update
Now you have to choose which PHP version you want to use on the server. If you like to use PHP 5.4, then proceed to chapter 4.1. To install PHP 7.0, follow the commands in chapter 4.2, for PHP 7.1 chapter 4.3, for PHP 7.2 chapter 4.4, and for PHP 7.3 follow chapter 4.5 instead. Follow just one of the 4.x chapters and not all of them, as you can only use one PHP version at a time with Apache mod_php.

4.1 Install PHP 5.4
To install PHP 5.4, run this command:

yum -y install php
4.2 Install PHP 7.0
We can install PHP 7.0 and the Apache PHP 7.0 module as follows:

yum-config-manager --enable remi-php70
yum -y install php php-opcache
4.3 Install PHP 7.1
If you want to use PHP 7.1 instead, use:

yum-config-manager --enable remi-php71
yum -y install php php-opcache
4.4 Install PHP 7.2
If you want to use PHP 7.2 instead, use:

yum-config-manager --enable remi-php72
yum -y install php php-opcache
4.5 Install PHP 7.3
If you want to use PHP 7.3 instead, use:

yum-config-manager --enable remi-php73
yum -y install php php-opcache
In this example and in the downloadable virtual machine, I’ll use PHP 7.3.

We must restart Apache to apply the changes:

systemctl restart httpd.service
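To double-check which PHP version is now active on the command line (the version used by Apache will also be visible on the info.php page created in the next chapter), you can run:

php -v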
5 Testing PHP / Getting Details About Your PHP Installation
The document root of the default website is /var/www/html. We will create a small PHP file (info.php) in that directory and call it in a browser to test the PHP installation. The file will display lots of useful details about our PHP installation, such as the installed PHP version.

nano /var/www/html/info.php
<?php
phpinfo();
Now we call that file in a browser (e.g. http://192.168.1.100/info.php)

Getting MySQL Support In PHP
To get MySQL support in PHP, we can install the php-mysqlnd package. It's a good idea to install some other PHP modules as well, as you might need them for your applications. You can search for the available PHP modules like this:

yum search php
Pick the ones you need and install them like this:

yum -y install php-mysqlnd php-pdo
In the next step I will install some common PHP modules that are required by CMS Systems like WordPress, Joomla, and Drupal:

yum -y install php-gd php-ldap php-odbc php-pear php-xml php-xmlrpc php-mbstring php-soap curl curl-devel
Now restart Apache web server:

systemctl restart httpd.service
Now reload http://192.168.1.100/info.php in your browser and scroll down to the modules section again. You should now find lots of new modules there, such as curl.

If you don’t need the PHP info output anymore, then delete that file for security reasons.

rm /var/www/html/info.php

7 phpMyAdmin installation

phpMyAdmin is a web interface through which you can manage your MySQL databases.
phpMyAdmin can now be installed as follows:

yum -y install phpMyAdmin

Now we configure phpMyAdmin. We change the Apache configuration so that phpMyAdmin allows connections not just from localhost (by commenting out the stanza and adding the ‘Require all granted’ line):

nano /etc/httpd/conf.d/phpMyAdmin.conf
[...]
Alias /phpMyAdmin /usr/share/phpMyAdmin
Alias /phpmyadmin /usr/share/phpMyAdmin

<Directory /usr/share/phpMyAdmin/>
   AddDefaultCharset UTF-8

   <IfModule mod_authz_core.c>
     # Apache 2.4
     <RequireAny>
       Require ip 127.0.0.1
       Require ip ::1
       Require all granted
     </RequireAny>
   </IfModule>
   <IfModule !mod_authz_core.c>
     # Apache 2.2
     Order Deny,Allow
     Deny from All
     Allow from 127.0.0.1
     Allow from ::1
   </IfModule>
</Directory>

<Directory /usr/share/phpMyAdmin/setup/>
   Options none
   AllowOverride Limit
   Require all granted
</Directory>
Restart Apache to apply the configuration changes:

systemctl restart httpd.service

Afterwards, you can access phpMyAdmin under http://192.168.1.100/phpmyadmin/

aws-cli

Description

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

The AWS CLI introduces a new set of simple file commands for efficient file transfers to and from Amazon S3.

Supported Services

For a list of the available services you can use with AWS Command Line Interface, see Available Services in the AWS CLI Command Reference.

AWS Command Line Interface on GitHub

You can view—and fork—the source code for the AWS CLI on GitHub in the https://github.com/aws/aws-cli project.

Versions#

Version    Release Date
1.10.38    2016-06-14
1.10.35    2016-06-03
1.10.33    2016-05-25
1.10.30    2016-05-18

AWS CLI Cheat sheet – List of All CLI commands#

Setup#

Install AWS CLI#

The AWS CLI is a common command-line tool for managing AWS resources. With this single tool we can manage all of our AWS resources.

sudo apt-get install -y python-dev python-pip
sudo pip install awscli
aws --version
aws configure

Bash one-liners#

cat <file> # output a file
tee # split output into a file
cut -f 2 # print the 2nd column, per line
sed -n '5{p;q}' # print the 5th line in a file
sed 1d # print all lines, except the first
tail -n +2 # print all lines, starting on the 2nd
head -n 5 # print the first 5 lines
tail -n 5 # print the last 5 lines

expand # convert tabs to 4 spaces
unexpand -a # convert 4 spaces to tabs
wc # word count
tr ' ' \\t # translate / convert characters to other characters

sort # sort data
uniq # show only unique entries
paste # combine rows of text, by line
join # combine rows of text, by initial column value

Cloudtrail – Logging and Auditing#

http://docs.aws.amazon.com/cli/latest/reference/cloudtrail/
5 Trails total, with support for resource level permissions

# list all trails
aws cloudtrail describe-trails

# list all S3 buckets
aws s3 ls

# create a new trail
aws cloudtrail create-subscription \
    --name awslog \
    --s3-new-bucket awslog2016

# list the names of all trails
aws cloudtrail describe-trails --output text | cut -f 8

# get the status of a trail
aws cloudtrail get-trail-status \
    --name awslog

# delete a trail
aws cloudtrail delete-trail \
    --name awslog

# delete the S3 bucket of a trail
aws s3 rb s3://awslog2016 --force

# add tags to a trail, up to 10 tags
aws cloudtrail add-tags \
    --resource-id awslog \
    --tags-list "Key=log-type,Value=all"

# list the tags of a trail
aws cloudtrail list-tags \
    --resource-id-list 

# remove a tag from a trail
aws cloudtrail remove-tags \
    --resource-id awslog \
    --tags-list "Key=log-type,Value=all"

IAM#

Users#

https://blogs.aws.amazon.com/security/post/Tx15CIT22V4J8RP/How-to-rotate-access-keys-for-IAM-users
http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html
Limits = 5000 users, 100 groups, 250 roles, 2 access keys / user

http://docs.aws.amazon.com/cli/latest/reference/iam/index.html
# list all users' info
aws iam list-users

# list all users' usernames
aws iam list-users --output text | cut -f 6

# list current user's info
aws iam get-user

# list current user's access keys
aws iam list-access-keys

# create a new user
aws iam create-user \
    --user-name aws-admin2

# create multiple new users, from a file
allUsers=$(cat ./user-names.txt)
for userName in $allUsers; do
    aws iam create-user \
        --user-name $userName
done

# list all users
aws iam list-users --no-paginate

# get a specific user's info
aws iam get-user \
    --user-name aws-admin2

# delete one user
aws iam delete-user \
    --user-name aws-admin2

# delete all users
# allUsers=$(aws iam list-users --output text | cut -f 6);
allUsers=$(cat ./user-names.txt)
for userName in $allUsers; do
    aws iam delete-user \
        --user-name $userName
done

Password policy#

http://docs.aws.amazon.com/cli/latest/reference/iam/
# list policy
# http://docs.aws.amazon.com/cli/latest/reference/iam/get-account-password-policy.html
aws iam get-account-password-policy

# set policy
# http://docs.aws.amazon.com/cli/latest/reference/iam/update-account-password-policy.html
aws iam update-account-password-policy \
    --minimum-password-length 12 \
    --require-symbols \
    --require-numbers \
    --require-uppercase-characters \
    --require-lowercase-characters \
    --allow-users-to-change-password

# delete policy
# http://docs.aws.amazon.com/cli/latest/reference/iam/delete-account-password-policy.html
aws iam delete-account-password-policy

Access Keys#

http://docs.aws.amazon.com/cli/latest/reference/iam/
# list all access keys
aws iam list-access-keys

# list access keys of a specific user
aws iam list-access-keys \
    --user-name aws-admin2

# create a new access key
aws iam create-access-key \
    --user-name aws-admin2 \
    --output text | tee aws-admin2.txt

# list last access time of an access key
aws iam get-access-key-last-used \
    --access-key-id AKIAINA6AJZY4EXAMPLE

# deactivate an access key
aws iam update-access-key \
    --access-key-id AKIAI44QH8DHBEXAMPLE \
    --status Inactive \
    --user-name aws-admin2

# delete an access key
aws iam delete-access-key \
    --access-key-id AKIAI44QH8DHBEXAMPLE \
    --user-name aws-admin2

Groups, Policies, Managed Policies#

http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
http://docs.aws.amazon.com/cli/latest/reference/iam/
# list all groups
aws iam list-groups

# create a group
aws iam create-group --group-name FullAdmins

# delete a group
aws iam delete-group \
    --group-name FullAdmins

# list all policies
aws iam list-policies

# get a specific policy
aws iam get-policy \
    --policy-arn <value>

# list all users, groups, and roles, for a given policy
aws iam list-entities-for-policy \
    --policy-arn <value>

# list policies, for a given group
aws iam list-attached-group-policies \
    --group-name FullAdmins

# add a policy to a group
aws iam attach-group-policy \
    --group-name FullAdmins \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# add a user to a group
aws iam add-user-to-group \
    --group-name FullAdmins \
    --user-name aws-admin2

# list users, for a given group
aws iam get-group \
    --group-name FullAdmins

# list groups, for a given user
aws iam list-groups-for-user \
    --user-name aws-admin2

# remove a user from a group
aws iam remove-user-from-group \
    --group-name FullAdmins \
    --user-name aws-admin2

# remove a policy from a group
aws iam detach-group-policy \
    --group-name FullAdmins \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# delete a group
aws iam delete-group \
    --group-name FullAdmins

EC2#

keypairs#

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
# list all keypairs
# http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-key-pairs.html
aws ec2 describe-key-pairs

# create a keypair
# http://docs.aws.amazon.com/cli/latest/reference/ec2/create-key-pair.html
aws ec2 create-key-pair \
    --key-name <value>

# create a new private / public keypair, using RSA 2048-bit
ssh-keygen -t rsa -b 2048

# import an existing keypair
# http://docs.aws.amazon.com/cli/latest/reference/ec2/import-key-pair.html
aws ec2 import-key-pair \
    --key-name keyname_test \
    --public-key-material file:///home/apollo/id_rsa.pub

# delete a keypair
# http://docs.aws.amazon.com/cli/latest/reference/ec2/delete-key-pair.html
aws ec2 delete-key-pair \
    --key-name <value>

Security Groups#

http://docs.aws.amazon.com/cli/latest/reference/ec2/index.html
# list all security groups
aws ec2 describe-security-groups

# create a security group
aws ec2 create-security-group \
    --vpc-id vpc-1a2b3c4d \
    --group-name web-access \
    --description "web access"

# list details about a security group
aws ec2 describe-security-groups \
    --group-id sg-0000000

# open port 80, for everyone
aws ec2 authorize-security-group-ingress \
    --group-id sg-0000000 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0

# get my public ip
my_ip=$(dig +short myip.opendns.com @resolver1.opendns.com);
echo $my_ip

# open port 22, just for my ip
aws ec2 authorize-security-group-ingress \
    --group-id sg-0000000 \
    --protocol tcp \
    --port 22 \
    --cidr $my_ip/32

# remove a firewall rule from a group
aws ec2 revoke-security-group-ingress \
    --group-id sg-0000000 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0

# delete a security group
aws ec2 delete-security-group \
    --group-id sg-00000000

Instances#

http://docs.aws.amazon.com/cli/latest/reference/ec2/index.html
# list all instances (running, and not running)
# http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html
aws ec2 describe-instances

# create a new instance
# http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html
aws ec2 run-instances \
    --image-id ami-f0e7d19a \
    --instance-type t2.micro \
    --security-group-ids sg-00000000 \
    --dry-run

# terminate an instance
# http://docs.aws.amazon.com/cli/latest/reference/ec2/terminate-instances.html
aws ec2 terminate-instances \
    --instance-ids <instance_id>

# list status of all instances
# http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instance-status.html
aws ec2 describe-instance-status

# list status of a specific instance
aws ec2 describe-instance-status \
    --instance-ids <instance_id>

Tags#

# list the tags of an instance
# http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html
aws ec2 describe-tags

# add a tag to an instance
# http://docs.aws.amazon.com/cli/latest/reference/ec2/create-tags.html
aws ec2 create-tags \
    --resources "ami-1a2b3c4d" \
    --tags Key=name,Value=debian

# delete a tag on an instance
# http://docs.aws.amazon.com/cli/latest/reference/ec2/delete-tags.html
aws ec2 delete-tags \
    --resources "ami-1a2b3c4d" \
    --tags Key=Name,Value=

Cloudwatch#

Log Groups#

http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatchLogs.html
http://docs.aws.amazon.com/cli/latest/reference/logs/index.html#cli-aws-logs

create a group

http://docs.aws.amazon.com/cli/latest/reference/logs/create-log-group.html
aws logs create-log-group \
    --log-group-name "DefaultGroup"

list all log groups

http://docs.aws.amazon.com/cli/latest/reference/logs/describe-log-groups.html
aws logs describe-log-groups

aws logs describe-log-groups \
    --log-group-name-prefix "Default"

delete a group

http://docs.aws.amazon.com/cli/latest/reference/logs/delete-log-group.html
aws logs delete-log-group \
    --log-group-name "DefaultGroup"

Log Streams#

# Log group names can be between 1 and 512 characters long. Allowed
# characters include a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen),
# '/' (forward slash), and '.' (period).

# create a log stream
# http://docs.aws.amazon.com/cli/latest/reference/logs/create-log-stream.html
aws logs create-log-stream \
    --log-group-name "DefaultGroup" \
    --log-stream-name "syslog"

# list details on a log stream
# http://docs.aws.amazon.com/cli/latest/reference/logs/describe-log-streams.html
aws logs describe-log-streams \
    --log-group-name "DefaultGroup"

aws logs describe-log-streams \
    --log-group-name "DefaultGroup" \
    --log-stream-name-prefix "syslog"

# delete a log stream
# http://docs.aws.amazon.com/cli/latest/reference/logs/delete-log-stream.html
aws logs delete-log-stream \
    --log-group-name "DefaultGroup" \
    --log-stream-name "Default Stream"

AWS completer for Ubuntu with Bash#

The following utility can be used for auto-completion of commands:

$ which aws_completer
/usr/bin/aws_completer

$ complete -C '/usr/bin/aws_completer' aws

For future shell sessions, consider adding this to your ~/.bashrc:

$ echo "complete -C '/usr/bin/aws_completer' aws" >> ~/.bashrc

To check, type:

$ aws ec

Press the [TAB] key; it should add the 2 automatically:

$ aws ec2

Creating a New Profile#

To set up a new credential profile with the name myprofile:

$ aws configure --profile myprofile
AWS Access Key ID [None]: ACCESSKEY
AWS Secret Access Key [None]: SECRETKEY
Default region name [None]: REGIONNAME
Default output format [None]: text | table | json

For the AWS access key id and secret, create an IAM user in the AWS console and generate keys for it.

Region will be the default region for commands in the format eu-west-1 or us-east-1 .

The default output format can either be text , table or json .

You can now use the profile name in other commands by using the --profile option, e.g.:

$ aws ec2 describe-instances --profile myprofile

AWS libraries for other languages (e.g. aws-sdk for Ruby or boto3 for Python) have options to use the profile you create with this method too. For example, creating a new session in boto3 can be done like this: boto3.Session(profile_name='myprofile'), and it will use the credentials you created for the profile.

The details of your aws-cli configuration can be found in ~/.aws/config and ~/.aws/credentials (on Linux and macOS). These details can be edited manually there.

Installation and setup#

There are a number of different ways to install the AWS CLI on your machine, depending on what operating system and environment you are using:

On Microsoft Windows – use the MSI installer. On Linux, OS X, or Unix – use pip (a package manager for Python software) or install manually with the bundled installer.

Install using pip:

You will need Python to be installed (version 2.6.5+ for Python 2, or 3.3+ for Python 3). Check with:

python --version

pip --help

Given that both of these are installed, use the following command to install the aws cli.

sudo pip install awscli

Install on Windows

The AWS CLI is supported on Microsoft Windows XP or later. For Windows users, the MSI installation package offers a familiar and convenient way to install the AWS CLI without installing any other prerequisites. Windows users should use the MSI installer unless they are already using pip for package management.

Run the downloaded MSI installer. Follow the instructions that appear.

To install the AWS CLI using the bundled installer

Prerequisites:

  • Linux, OS X, or Unix
  • Python 2 version 2.6.5+ or Python 3 version 3.3+
  1. Download the AWS CLI Bundled Installer using wget or curl.
  2. Unzip the package.
  3. Run the install executable.

On Linux and OS X, here are the three commands that correspond to each step:

$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
$ unzip awscli-bundle.zip
$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

Install using HomeBrew on OS X:

Another option for OS X

brew install awscli

Test the AWS CLI Installation

Confirm that the CLI is installed correctly by viewing the help file. Open a terminal, shell or command prompt, enter aws help and press Enter:

$ aws help

Configuring the AWS CLI

Once you have finished the installation, you need to configure it. You'll need the access key and secret key that you get when you create your account on AWS. You can also specify a default region name and a default output type (text|table|json).

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: ENTER

Updating the CLI tool

Amazon periodically releases new versions of the AWS CLI. If the tool was installed using the Python pip tool, the following command will check the remote repository for updates and apply them to your local system.

$ pip install awscli --upgrade

List S3 buckets#

aws s3 ls

Use a named profile

aws --profile myprofile s3 ls

List all objects in a bucket, including objects in folders, with sizes in human-readable format and a summary of the bucket's properties at the end:

aws s3 ls --recursive --summarize --human-readable s3://<bucket_name>/

Using aws cli commands#

The syntax for using the aws cli is as follows:

aws [options] <command> <subcommand> [parameters]

Some examples using the ‘ec2’ command and the ‘describe-instances’ subcommand:

aws ec2 describe-instances

aws ec2 describe-instances --instance-ids <your-id>

Example with a fake id:

aws ec2 describe-instances --instance-ids i-c71r246a
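The describe-instances output can also be narrowed with the --filters option and reshaped with the JMESPath-based --query option. A small example (the filter and query values here are illustrative):

# list running instances, showing only instance id and public IP
aws ec2 describe-instances \
    --filters "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].[InstanceId,PublicIpAddress]" \
    --output table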

Swappiness in Linux

Most Linux users who have installed a distribution before must have noticed the "swap space" during the partitioning phase (it is usually found as /dev/sda5). This is a dedicated space on your hard drive that is usually set to at least twice the capacity of your RAM, and together with the RAM it constitutes the total virtual memory of your system. From time to time, the Linux kernel utilizes this swap space by copying chunks from your RAM to the swap, allowing active processes that require more memory than is physically available to run.

Swappiness is the kernel parameter that defines how much (and how often) your Linux kernel will copy RAM contents to swap. This parameter’s default value is “60” and it can take anything from “0” to “100”. The higher the value of the swappiness parameter, the more aggressively your kernel will swap.

Why change it?
The default value is a one-fit-all solution that can’t possibly be equally efficient in all of the individual use cases, hardware specifications and user needs. Moreover, the swappiness of a system is a primary factor that determines the overall functionality and speed performance of an OS. That said, it is very important to understand how swappiness works and how the various configurations of this element could improve the operation of your system and thus your everyday usage experience.

As RAM is so much larger and cheaper than it used to be, many users nowadays have enough memory to almost never need the swap file. The obvious benefit is that no system resources are ever occupied by the swapping process and that cached files are not moved back and forth between the RAM and the swap for no reason.

How to change Swappiness?
The swappiness parameter value is stored in a simple configuration text file located in /proc/sys/vm and named "swappiness". If you navigate there through the file manager, you will be able to locate the file and open it to check your system's swappiness. You can also check or change it through the terminal (which is faster); to change it, type the following command:

sudo sysctl vm.swappiness=10
or whatever else between “0” and “100” instead of the value “10” that I used. To ensure that the swappiness value was correctly changed to the desired one, you simply type:

cat /proc/sys/vm/swappiness
in the terminal again, and the active value will be printed.

This change has an immediate effect on your system's operation, so no reboot is required. In fact, rebooting will revert the swappiness back to its default value (60). If you have thoroughly tested your desired swappiness value and found that it works reliably, you can make the change permanent through /etc/sysctl.conf, which is yet another text configuration file. Open it as root (administrator) and add a line of the form vm.swappiness=<your desired value> at the bottom, as shown in the example below.

Then, save the text file and you’re done!
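For example, to make a swappiness value of 10 permanent (10 being the illustrative value used above), you can append the line and reload the sysctl settings from the terminal:

echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p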

Install SSM Agent on Ubuntu EC2 instances

sudo apt update
sudo apt install snapd
sudo snap install amazon-ssm-agent --classic

sudo systemctl start snap.amazon-ssm-agent.amazon-ssm-agent.service
sudo systemctl stop snap.amazon-ssm-agent.amazon-ssm-agent.service
sudo systemctl status snap.amazon-ssm-agent.amazon-ssm-agent.service

Converting your virtual machine to AWS EC2 AMI

Using AWS is not limited to the AMIs provided by Amazon (or third parties/the community); it is also possible to instantiate an EC2 workload starting from your own image, converting it to an AMI.

Creating your custom AMI starting from VMware runs through these macro steps:

  • create VM template (ova)
  • create S3 bucket and upload the template
  • convert with awscli

OVA creation and upload in S3

This is the easiest part of this how-to, so I won't explain how to export the virtual machine OVA from your virtual infrastructure or Workstation/Fusion. In my opinion, the best way to manage VM templates is the OVA format; starting from OVF and VMDK files, you can simply convert them to OVA using ovftool (https://www.vmware.com/support/developer/ovf/) and executing the following command:

ovftool <vm_image>.ovf <vm_image>.ova

Create an S3 bucket and upload the OVA template via the web console, keeping in mind the name of the bucket and the name of the OVA.
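Alternatively, the bucket creation and upload can also be done with the awscli itself (using the bucket and file names from this example):

aws s3 mb s3://mohanawss3
aws s3 cp awsmohan.ova s3://mohanawss3/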

AMI conversion

Prepare the trust policy document trust-policy.json:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "vmie.amazonaws.com" },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals": {
               "sts:Externalid": "vmimport"
            }
         }
      }
   ]
}

Then, create the role:

aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json

…and prepare the role policy document named role-policy.json:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:GetBucketLocation",
            "s3:GetObject",
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::mohanawss3",
            "arn:aws:s3:::mohanawss3/*"
         ]
      },
      {
         "Effect": "Allow",
         "Action": [
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource": "*"
      }
   ]
}

After the role, create the role policy:

aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json

Finally we can proceed with the real conversion: with the OVA file uploaded to the S3 bucket, create the "container" description file.

containers.json will look like this:

[
   {
      "Description": "mycentos OVA",
      "Format": "ova",
      "UserBucket": {
         "S3Bucket": "mohanawss3",
         "S3Key": "awsmohan.ova"
      }
   }
]

Then execute the command:

aws ec2 import-image --description "Mohanaws" --license-type BYOL --disk-containers file://containers.json

The process is asynchronous; to see the state of this task, simply issue the following command using "import-ami-xxxxxx" as the task id:

aws ec2 describe-import-image-tasks --import-task-ids import-ami-xxxx

Following the official documentation ( http://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html ) the states are:

  • active — The import task is in progress.
  • deleting — The import task is being canceled.
  • deleted — The import task is canceled.
  • updating — Import status is updating.
  • validating — The imported image is being validated.
  • converting — The imported image is being converted into an AMI.
  • completed — The import task is completed and the AMI is ready to use.

When the conversion is completed, you can start a first EC2 instance from the new AMI to check that everything has gone well.
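As a rough sketch, launching a test instance from the new AMI could look like this (the AMI ID and keypair name below are placeholders for the values reported by the import task and your own account):

aws ec2 run-instances --image-id <new-ami-id> --instance-type t2.micro --key-name <your-keypair>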


Installing Kubernetes 1.8.1 on centos 7 with flannel

Prerequisites:-

You should have at least two VMs (1 master and 1 slave) before creating the cluster, in order to test the full functionality of k8s.

1] Master :-

Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD (suggested)

2] Slave :-

Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD (suggested)

3] Also, make sure of following things.

Network interconnectivity between VMs.
hostnames
Prefer to give Static IP.
DNS entries
Disable SELinux
$ vi /etc/selinux/config

Disable and stop firewall. ( If you are not familiar with firewall )
$ systemctl stop firewalld

$ systemctl disable firewalld

Following steps creates k8s cluster on the above VMs using kubeadm on centos 7.

Step 1] Installing kubelet and kubeadm on all your hosts

$ ARCH=x86_64

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-${ARCH}

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg

   https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

EOF

$ setenforce 0

$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni

$ systemctl enable docker && systemctl start docker

$ systemctl enable kubelet && systemctl start kubelet

You might have an issue where the kubelet service does not start. You can see the error in /var/log/messages. If you have an error as follows:
Oct 16 09:55:33 k8s-master kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Oct 16 09:55:33 k8s-master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE

Then you will have to initialize kubeadm first, as in the next step, and then start the kubelet service.

Step 2.1] Initializing your master

$ kubeadm init

Note:-

Execute the above command on the master node. This command will select one of the interfaces to be used as the API server address. If you want to provide another interface, pass "--apiserver-advertise-address=<ip-address>" as an argument. So the whole command will be like this:
$ kubeadm init --apiserver-advertise-address=<ip-address>

K8s provides the flexibility to use a pod network of your choice, like flannel, calico etc. I am using the flannel network. For the flannel network we need to pass the network CIDR explicitly. So now the whole command will be:
$ kubeadm init --apiserver-advertise-address=<ip-address> --pod-network-cidr=10.244.0.0/16

Example: $ kubeadm init --apiserver-advertise-address=172.31.14.55 --pod-network-cidr=10.244.0.0/16

Step 2.2] Start using cluster

$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf
-> Use the same network CIDR, as it is also configured in the flannel yaml file that we are going to apply in step 3.

-> At the end you will get a token along with a join command; make a note of it, as it will be used to join the slaves.

Step 3] Installing a pod network

Different networks are supported by k8s and depends on user choice. For this demo I am using flannel network. As of k8s-1.6, cluster is more secure by default. It uses RBAC ( Role Based Access Control ), so make sure that the network you are going to use has support for RBAC and k8s-1.6.

Create RBAC Pods :
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Check whether pods are creating or not :

$ kubectl get pods --all-namespaces

Create Flannel pods :
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check whether pods are creating or not :

$ kubectl get pods --all-namespaces -o wide

-> at this stage all your pods should be in running state.

-> option “-o wide” will give more details like IP and slave where it is deployed.

Step 4] Joining your nodes

SSH to slave and execute following command to join the existing cluster.

$ kubeadm join --token <token> <master-ip>:<master-port>

You might also have a ca-cert-hash; make sure you copy the entire join command from the init output to join the nodes.
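If you no longer have the init output at hand, the token information can be regenerated on the master (the --print-join-command flag is available on newer kubeadm releases):

$ kubeadm token list

$ kubeadm token create --print-join-command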

Go to master node and see whether new slave has joined or not as-

$ kubectl get nodes

-> If slave is not ready, wait for few seconds, new slave will join soon.

Step 5] Verify your cluster by running sample nginx application

$ vi sample_nginx.yaml

———————————————

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

——————————————————

$ kubectl create -f sample_nginx.yaml

Verify pods are getting created or not.

$ kubectl get pods

$ kubectl get deployments

Now , lets expose the deployment so that the service will be accessible to other pods in the cluster.

$ kubectl expose deployment nginx-deployment --name=nginx-service --port=80 --target-port=80 --type=NodePort

The above command will create a service with the name "nginx-service". The service will be accessible on the port given by the "--port" option, which forwards to the "--target-port" of the pod. By default the service is accessible within the cluster only; in order to access it using your host IP, the "NodePort" type is used.

--type=NodePort :- when this option is given, k8s tries to find a free port in the range 30000-32767 on all the VMs of the cluster and binds the underlying service to it. If no such port is found, it returns an error.

Check service is created or not

$ kubectl get svc

Try to curl:

$ curl <cluster-ip>:80

from all the VMs, including the master. The Nginx welcome page should be accessible.

$ curl <master-ip>:<nodePort>

$ curl <slave-ip>:<nodePort>

Execute this from all the VMs. The Nginx welcome page should be accessible.

Also, access the nginx home page using a browser.

CentOS 7.6 configures Nginx reverse proxy

First, the experiment introduction
We use three CentOS 7 virtual machines to build a simple Nginx reverse-proxy load-balancing cluster. The three virtual machines' addresses and roles are:
192.168.1.76 nginx load balancer
192.168.1.82 web01 server
192.168.1.78 web02 server
Second, install the nginx software (the following operations must be carried out on all three virtual machines)
Some CentOS 7.6 installations do not have the wget command installed, so install it yourself:
yum -y install wget

Install the nginx software (all three servers must have it installed):

$ wget http://dl.Fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ rpm -ivh epel-release-latest-7.noarch.rpm
$ yum install nginx (direct yum installation)

Installation is that simple and convenient. After the installation is complete, you can use systemctl to control nginx.
$ systemctl enable nginx (add to boot)
$ systemctl start nginx (start nginx)
$ systemctl status nginx (view status)
After nginx is installed on all three servers, test that each runs normally and serves web pages. If there is an error, it is probably caused by the firewall; see the last few steps about the firewall.
Modify the nginx configuration file on the proxy server to implement load balancing. As the name implies, multiple requests are distributed to different servers to achieve a balanced load and reduce the pressure on a single server.

$ vi /etc/nginx/nginx.conf (modify the global configuration file)

For more information on configuration, see:
* Official English Documentation: http://nginx.org/en/docs/
* Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto; (auto by default; you can set it yourself, generally no more than the number of CPU cores)
error_log /var/log/nginx/error.log; (error log path)
pid /run/nginx.pid; (pid file path)

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    accept_mutex on; (serialize accepting of new connections to prevent a thundering herd; on by default)
    multi_accept on; (whether a worker process accepts multiple new connections at once; off by default)
    worker_connections 1024; (maximum number of connections per worker process)
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    # tcp_nopush on; (left commented out here)
    tcp_nodelay on;
    keepalive_timeout 65; (connection timeout)
    types_hash_max_size 2048;
    gzip on; (enable compression)

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include for more information.
    include /etc/nginx/conf.d/*.conf;

Here we set up load balancing. nginx has several strategies built in: polling (round robin) is the default way the http load is split; weights distribute requests according to weight, so the server with the higher weight gets the larger share; ip-hash allocates by IP, keeping the same IP on the same server; response time distributes preferentially to the server that responds to nginx fastest. These strategies can be combined.

    upstream tomcat { (tomcat is a custom load balancing rule name)
        ip_hash; (ip_hash selects the ip-hash method)
        server 192.168.1.78:80 weight=3 fail_timeout=20s;
        server 192.168.1.82:80 weight=4 fail_timeout=20s;
        (multiple sets of rules can be defined)
    }

    server {
        listen 80 default_server; (listen on port 80 by default)
        listen localhost; (listening server)
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / { ("/" matches all requests; different load rules and services can be set for different domain names)
            proxy_pass http://tomcat; (reverse proxy; fill in your own load balancing rule name)
            proxy_redirect off; (the settings below can be copied directly; leaving them out may lead to problems such as failed authentication)
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 90; (the following are just some timeout settings)
            proxy_send_timeout 90;
            proxy_read_timeout 90;
        }

        # location ~ .(gif|jpg|png)$ { (for example, written as a regular expression)
        #     root /home/root/images;
        # }

        error_page 404 /404.html;
        location = /40x.html {
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
# Settings for a TLS enabled server.
#
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
server_name _;
root /usr/share/nginx/html;
#
ssl_certificate "/etc/pki/nginx/server.crt";
ssl_certificate_key "/etc/pki/nginx/private/server.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
#
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
#
location / {
}
#
error_page 404 /404.html;
location = /40x.html {
}
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
After the configuration is updated, reload it so that the changes take effect without restarting the service:
nginx -s reload
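Before or after reloading, the configuration syntax can be validated with nginx's built-in test switch:

nginx -t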
If you can't access it, it may be because the firewall is running and the port is not open:
Start: systemctl start firewalld
off: systemctl stop firewalld
view status: systemctl status firewalld
boot disable: systemctl disable firewalld
boot enable: systemctl enable firewalld
Open a port:
Add
firewall-cmd --zone=public --add-port=80/tcp --permanent (--permanent is permanent, no failure after restarting this parameter)
Reload
firewall-cmd --reload
view
firewall-cmd --zone=public --query-port=80/tcp
delete
firewall-cmd --zone=public --remove-port=80/tcp --permanent