Redis master-slave + KeepAlived achieve high availability

Redis is a non-relational database that we use frequently these days. It supports diverse data types, handles high concurrency well, and, because it runs in memory, reads and writes are fast. Given how much we rely on Redis, how do we make sure it can survive a node going down while in operation?

So today I am summarizing how to build a Redis master-slave high-availability setup. I referred to several online write-ups and found that many of them have pitfalls, so I am sharing a tested walkthrough here in the hope that it helps.

Redis Features
Redis is completely open source and free, complies with the BSD license, and is a high-performance key-value database.

Redis and other key-value cache products have the following three characteristics:

Redis supports persistence: the data held in memory can be kept on disk and loaded again for use when the server restarts.

Redis not only supports simple key-value data, but also provides storage for data structures such as strings, hashes (maps), lists, sets, and sorted sets.

Redis supports data backup, that is, backup of data in master-slave mode.

The Redis advantages

Extremely high performance – Redis can read around 100K+ times/s and write around 80K+ times/s.

Rich data types – Redis supports Strings, Lists, Hashes, Sets, and Sorted Sets, and the operations are binary-safe.

Atomic – each individual Redis operation is atomic, and Redis also supports executing several operations atomically when they are combined (for example in a MULTI/EXEC transaction).

Rich features – Redis also supports publish/subscribe, keyspace notifications, key expiration, and more.
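As a quick illustration of the combined-operation atomicity mentioned above, a redis-cli transaction looks roughly like this (the key name is just an example):

127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> INCR page:views
QUEUED
127.0.0.1:6379> EXPIRE page:views 3600
QUEUED
127.0.0.1:6379> EXEC
1) (integer) 1
2) (integer) 1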

Prepare environment

CentOS 7 --> 172.16.81.140 --> Master Redis --> Master Keepalived

CentOS 7 --> 172.16.81.141 --> Slave Redis --> Backup Keepalived

VIP --> 172.16.81.139

Redis (normally 3.0 or later)

KeepAlived (direct online installation)

1, compile and install redis

cd /opt
tar -zxvf redis-4.0.6.tar.gz
mv redis-4.0.6 redis
cd redis
make MALLOC=libc
make PREFIX=/usr/local/redis install
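The steps above assume the redis-4.0.6 source tarball is already sitting in /opt; if it is not, it can be fetched first from the official download site, for example:

cd /opt
wget http://download.redis.io/releases/redis-4.0.6.tar.gz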

2, configure the redis startup script

vim /etc/init.d/redis

#!/bin/sh
#chkconfig: 2345 80 90
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.

# Redis port number
REDISPORT=6379
# Path to the redis-server binary
EXE=/usr/local/redis/bin/redis-server
# Path to the redis-cli binary
CLIEXE=/usr/local/redis/bin/redis-cli
# Redis PID file path
PIDFILE=/var/run/redis_6379.pid
# Redis configuration file path
CONF="/etc/redis/redis.conf"
# Redis connection authentication password
REDISPASSWORD=123456

function start () {
    if [ -f $PIDFILE ]
    then
        echo "$PIDFILE exists, process is already running or crashed"
    else
        echo "Starting Redis server..."
        $EXE $CONF &
    fi
}

function stop () {
    if [ ! -f $PIDFILE ]
    then
        echo "$PIDFILE does not exist, process is not running"
    else
        PID=$(cat $PIDFILE)
        echo "Stopping ..."
        $CLIEXE -p $REDISPORT -a $REDISPASSWORD shutdown
        while [ -x /proc/${PID} ]
        do
            echo "Waiting for Redis to shutdown ..."
            sleep 1
        done
        echo "Redis stopped"
    fi
}

function restart () {
    stop
    sleep 3
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo -e "\e[31m Please use $0 [start|stop|restart] as first argument \e[0m"
        ;;
esac

Grant execution permissions:

chmod +x /etc/init.d/redis

Add boot start:

chkconfig --add redis

chkconfig redis on

Check: chkconfig --list | grep redis

For this test, the firewall and SELinux were turned off beforehand; in production it is recommended to keep the firewall enabled and open only the ports you need.
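For reference, on CentOS 7 the quick-test shortcut and the production alternative look roughly like this (the port is the default 6379):

# test environment only: stop the firewall and switch SELinux to permissive
systemctl stop firewalld
setenforce 0

# production alternative: keep firewalld running and just open the Redis port
firewall-cmd --permanent --add-port=6379/tcp
firewall-cmd --reload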

3, add redis command environment variables

vi /etc/profile
# Add the following line
export PATH="$PATH:/usr/local/redis/bin"
# Make the environment variable take effect
source /etc/profile

4. Start the redis service

service redis start

# Check that it started
ps -ef | grep redis

Note: perform the same installation steps on both servers so that Redis is present on each. Once the installation is complete, we move straight on to configuring the master-slave environment.

Redis master-slave configuration

Going back to the design described earlier: the idea is to use 140 as the master, 141 as the slave, and 139 as the VIP (virtual IP). The application accesses the Redis database through port 6379 on 139.

In normal operation, if master node 140 goes down, the VIP floats to 141; 141 then takes over from 140 as the master node while 140 becomes the slave node, and read and write operations continue uninterrupted.

When 140 comes back up, it synchronizes data with 141: the data originally on 140 is not lost, and whatever was written to 141 while 140 was away is replicated back. Once the data synchronization is complete, the VIP returns to node 140 because of its higher priority, and 140 becomes the master again. 141 loses the VIP and goes back to being the slave node, restoring the initial state and continuing to provide uninterrupted read and write service.

1, configure the redis configuration file

Master-140 configuration file

vim /etc/redis/redis.conf
bind 0.0.0.0
port 6379
daemonize yes
requirepass 123456
slave-serve-stale-data yes
slave-read-only no

Slave-141 configuration file

vim /etc/redis/redis.conf
bind 0.0.0.0
port 6379
daemonize yes
slaveof 172.16.81.140 6379
masterauth 123456
slave-serve-stale-data yes
slave-read-only no

2. Restart the redis service after the configuration is complete! Verify that the master and slave are normal.

Master node 140 terminal login test:

[root@localhost ~]# redis-cli -a 123456
127.0.0.1:6379> INFO
.
.
.
# Replication
role:master
connected_slaves:1
slave0:ip=172.16.81.141,port=6379,state=online,offset=105768,lag=1
master_replid:f83fcc3c98614d770f2205831fef1e877fa3f482
master_replid2:1f25604997a4ad3eb8344e8155990e78acd93312
master_repl_offset:105768
second_repl_offset:447
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:447
repl_backlog_histlen:105322

Login test from node 141 terminal:

[root@localhost ~]# redis-cli -a 123456
127.0.0.1:6379> info
.
.
.
# Replication
role:slave
master_host:172.16.81.140
master_port:6379
master_link_status:up
master_last_io_seconds_ago:5
master_sync_in_progress:0
slave_repl_offset:105992
slave_priority:100
slave_read_only:0
connected_slaves:0
master_replid:f83fcc3c98614d770f2205831fef1e877fa3f482
master_replid2:1f25604997a4ad3eb8344e8155990e78acd93312
master_repl_offset:105992
second_repl_offset:447
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:239
repl_backlog_histlen:105754
3, synchronization test

Write a key on master node 140, then read it back on slave node 141, as shown below.
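A quick check along these lines confirms replication is working (the key name is just an example):

# on the master, 172.16.81.140
redis-cli -a 123456 set sync_test "hello"
OK

# on the slave, 172.16.81.141
redis-cli -a 123456 get sync_test
"hello"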

The Redis master-slave configuration is now complete!

KeepAlived configuration to achieve dual hot standby

Use Keepalived to provide the VIP, and handle failover and recovery through the notify_master, notify_backup, notify_fault, and notify_stop hooks.

1, configure Keepalived configuration file

Master Keepalived Profile

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id redis01
}

vrrp_script chk_redis {
script "/etc/keepalived/script/redis_check.sh"
interval 2
}

vrrp_instance VI_1 {
state MASTER
interface eno16777984
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}

track_script {
chk_redis
}
virtual_ipaddress {
172.16.81.139
}

notify_master /etc/keepalived/script/redis_master.sh
notify_backup /etc/keepalived/script/redis_backup.sh
notify_fault /etc/keepalived/script/redis_fault.sh
notify_stop /etc/keepalived/script/redis_stop.sh
}

Backup Keepalived configuration file

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id redis02
}

vrrp_script chk_redis {
script "/etc/keepalived/script/redis_check.sh"
interval 2
}

vrrp_instance VI_1 {
state BACKUP
interface eno16777984
virtual_router_id 51
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}

track_script {
chk_redis
}
virtual_ipaddress {
172.16.81.139
}

notify_master /etc/keepalived/script/redis_master.sh
notify_backup /etc/keepalived/script/redis_backup.sh
notify_fault /etc/keepalived/script/redis_fault.sh
notify_stop /etc/keepalived/script/redis_stop.sh
}

2, configure the script

Master KeepAlived — 140

Create a script directory: mkdir -p /etc/keepalived/script/

cd /etc/keepalived/script/

[root@localhost script]# cat redis_check.sh
#!/bin/bash

ALIVE=`/usr/local/redis/bin/redis-cli -a 123456 PING`

if [ "$ALIVE" == "PONG" ]; then

echo $ALIVE

exit 0

else

echo $ALIVE

exit 1

fi

[root@localhost script]# cat redis_master.sh
#!/bin/bash
REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"
LOGFILE="/var/log/keepalived-redis-state.log"
sleep 15
echo "[master]" >> $LOGFILE
date >> $LOGFILE
echo "Being master...." >> $LOGFILE 2>&1
echo "Run SLAVEOF cmd ..." >> $LOGFILE
$REDISCLI SLAVEOF 172.16.81.141 6379 >> $LOGFILE 2>&1
if [ $? -ne 0 ]; then
echo "data rsync fail." >> $LOGFILE 2>&1
else
echo "data rsync OK." >> $LOGFILE 2>&1
fi

# wait 10 seconds for the data to finish synchronizing before breaking replication
sleep 10

echo "Run SLAVEOF NO ONE cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF NO ONE >> $LOGFILE 2>&1
if [ $? -ne 0 ]; then
echo "Run SLAVEOF NO ONE cmd fail." >> $LOGFILE 2>&1
else
echo "Run SLAVEOF NO ONE cmd OK." >> $LOGFILE 2>&1
fi

[root@localhost script]# cat redis_backup.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[backup]" >> $LOGFILE

date >> $LOGFILE

echo "Being slave...." >> $LOGFILE 2>&1

# wait 15 seconds for the data to synchronize to the other node before switching the master-slave role
sleep 15

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.141 6379 >> $LOGFILE 2>&1

[root@localhost script]# cat redis_fault.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[fault]" >> $LOGFILE

date >> $LOGFILE

[root@localhost script]# cat redis_stop.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[stop]" >> $LOGFILE

date >> $LOGFILE

Slave KeepAlived — 141

mkdir -p /etc/keepalived/script/

cd /etc/keepalived/script/

[root@localhost script]# cat redis_check.sh
#!/bin/bash

ALIVE=`/usr/local/redis/bin/redis-cli -a 123456 PING`

if [ "$ALIVE" == "PONG" ]; then

echo $ALIVE

exit 0

else

echo $ALIVE

exit 1

fi

[root@localhost script]# cat redis_master.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[master]" >> $LOGFILE

date >> $LOGFILE

echo "Being master...." >> $LOGFILE 2>&1

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.140 6379 >> $LOGFILE 2>&1

# wait 10 seconds for the data to finish synchronizing before breaking replication
sleep 10

echo "Run SLAVEOF NO ONE cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF NO ONE >> $LOGFILE 2>&1

[root@localhost script]# cat redis_backup.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[backup]" >> $LOGFILE

date >> $LOGFILE

echo "Being slave...." >> $LOGFILE 2>&1

# wait 15 seconds for the data to synchronize to the other node before switching the master-slave role
sleep 15

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.140 6379 >> $LOGFILE 2>&1

[root@localhost script]# cat redis_fault.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[fault]" >> $LOGFILE

date >> $LOGFILE

[root@localhost script]# cat redis_stop.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[stop]" >> $LOGFILE

date >> $LOGFILE

systemctl start keepalived

systemctl enable keepalived

ps -ef | grep keepalived
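With both keepalived instances running, a rough failover test goes like this (interface name and log path follow the configs above):

# on 140: confirm the VIP is currently held by the master
ip addr show eno16777984 | grep 172.16.81.139

# simulate a failure on 140; redis_check.sh starts failing and the VIP should move to 141
service redis stop

# on 141: the VIP should now be present, and the notify scripts log the role change
ip addr show eno16777984 | grep 172.16.81.139
tail /var/log/keepalived-redis-state.log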

Full understanding of the new features of MySQL 8.0

First, features added in MySQL 8.0

1, the new system dictionary table

An integrated transactional data dictionary stores information about database objects; all metadata is stored using the InnoDB engine

2, support for DDL atomic operations

DDL on InnoDB tables now has transactional integrity: it either succeeds completely or is rolled back. The DDL operation writes its rollback log to the data dictionary table mysql.innodb_ddl_log so the operation can be rolled back

3, security and user management

Added the caching_sha2_password authentication plugin, which is now the default authentication plugin, improving performance and security

Privileges now support roles (a short example follows below)

New password history feature restricts reuse of previous passwords
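A small sketch of the role feature (role, database, and account names here are made up for illustration):

mysql> CREATE ROLE 'app_read';
mysql> GRANT SELECT ON app_db.* TO 'app_read';
mysql> CREATE USER 'report'@'%' IDENTIFIED BY 'StrongPass1!';
mysql> GRANT 'app_read' TO 'report'@'%';
mysql> SET DEFAULT ROLE ALL TO 'report'@'%';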

4, support for resource management

Supports creating and managing resource groups, and allows threads running in the server to be assigned to specific groups so that they execute within the resources available to that group

5, innodb enhancements

Auto-increment counters are now persistent, fixing the long-standing MySQL bug #199: previously, when the server restarted it took the current maximum value in the table as the auto-increment counter, so if rows had been archived or deleted and the server then restarted, auto-increment values could be reused.

Added INFORMATION_SCHEMA.INNODB_CACHED_INDEXES to see the index pages of each index cache in the InnoDB buffer pool

InnoDB temporary tables will be created in the shared temporary table space ibtmp1

InnoDB supports NOWAIT and SKIP LOCKED for SELECT … FOR SHARE and SELECT … FOR UPDATE statements

The minimum value of innodb_undo_tablespaces is 2, and it is no longer allowed to set innodb_undo_tablespaces to 0. Min 2 ensures that rollback segments are always created in the undo tablespace, not in the system tablespace

ALTER TABLESPACE … RENAME TO syntax

Added innodb_dedicated_server to let InnoDB automatically configure innodb_buffer_pool_size according to the amount of memory detected on the server, innodb_log_file_size, innodb_flush_method

New INFORMATION_SCHEMA.INNODB_TABLESPACES_BRIEF view

A new dynamic configuration item, innodb_deadlock_detect, is used to disable deadlock checking, because in high-concurrency systems, when a large number of threads wait for the same lock, deadlock checking can significantly slow down the database.

Supports use of the innodb_directories option to move or restore tablespace files to a new location when the server is offline

6, MySQL 8.0 better support for document database and JSON

7, optimization

Invisible indexes: the server now supports invisible indexes (much like Oracle's); while tuning SQL you can mark an index invisible and the optimizer will not use it

Descending indexes: DESC can now be specified in an index definition. Previously an index could only be scanned in reverse order, which hurt performance; a real descending index serves such queries efficiently (examples below)
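For example (table and column names are invented):

mysql> CREATE INDEX idx_age ON t1 (age) INVISIBLE;   -- optimizer ignores it until made visible
mysql> ALTER TABLE t1 ALTER INDEX idx_age VISIBLE;
mysql> CREATE INDEX idx_created_desc ON t1 (created_at DESC);   -- true descending index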

8, window functions: RANK(), LAG(), NTILE() and other functions are now supported

9, regular expression enhancements: REGEXP_LIKE(), REGEXP_INSTR(), REGEXP_REPLACE(), REGEXP_SUBSTR() and other functions are provided
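For instance (table and column names are invented):

mysql> SELECT name, score, RANK() OVER (ORDER BY score DESC) AS rnk FROM results;
mysql> SELECT REGEXP_REPLACE('MySQL 8.0', '[0-9.]+', 'X');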

10. Add a backup lock to allow DML during online backup while preventing operations that may result in inconsistent snapshots. Backup locks supported by LOCK INSTANCE FOR BACKUP and UNLOCK INSTANCE syntax

11, character set: the default character set changed from latin1 to utf8mb4

12, configuration file enhancement

MySQL 8.0 supports persisting global parameters that are changed online. By adding the PERSIST keyword, the change is written out to a new configuration file, so the latest value still applies after the server restarts. When a parameter is changed with PERSIST, MySQL generates a mysqld-auto.cnf file containing JSON-format data. For example:

SET PERSIST expire_logs_days=10;  # modifies memory and the json file, so it survives a restart

SET GLOBAL expire_logs_days=10;   # modifies memory only, lost after restart

The system will generate a file containing the following in the data directory mysqld-auto.cnf:

{ “mysql_server”: {“expire_logs_days”: “10” } }

When my.cnf and mysqld-auto.cnf both exist, the latter takes precedence.

13. Histogram

MySQL 8.0 finally supports the long-awaited histograms. The optimizer uses the column_statistics data to determine the distribution of field values and produce a more accurate execution plan.

You can use ANALYZE TABLE table_name [UPDATE HISTOGRAM ON col_name WITH N BUCKETS | DROP HISTOGRAM ON col_name] to collect or delete histogram information

14, session-level SET_VAR hints can dynamically adjust certain parameters for a single statement, which helps improve statement performance.

SELECT /*+ SET_VAR(sort_buffer_size = 16M) */ id FROM test ORDER BY id;

INSERT /*+ SET_VAR(foreign_key_checks=OFF) */ INTO test(name) VALUES(1);

15, the adjustment of the default parameters

Adjusted the default value of back_log to match max_connections, increasing the capacity to handle bursts of new connections.

event_scheduler now defaults to ON; it was previously disabled by default.

Adjusted the default value of max_allowed_packet from 4M to 64M.

binlog and log_slave_updates are now on by default.

expire_logs_days now defaults to 30 days instead of the old 7 days; in production, check this parameter so binlogs do not take up too much space.

innodb_undo_log_truncate now defaults to ON

Adjusted the default value of innodb_undo_tablespaces to 2

Adjusted the innodb_max_dirty_pages_pct_lwm default to 10

Adjusted the default value of innodb_max_dirty_pages_pct to 90

innodb_autoinc_lock_mode now defaults to 2

16, InnoDB performance improvement

The buffer pool mutex was abandoned: the original single mutex was split into several, increasing concurrency

Splitting the two mutexes LOCK_thd_list and LOCK_thd_remove can increase threading efficiency by approximately 5%.

17, row buffer

The MySQL 8.0 optimizer can estimate the number of rows to be read, so it can provide the storage engine with an appropriately sized row buffer for the required data. Large contiguous data scans benefit from the larger record buffer.

18, improve the scanning performance

Improved InnoDB range-query handling raises the performance of full-table scans and range queries by 5-20%.

19, the cost model

InnoDB can now estimate how much of each table and index is in the buffer cache, which allows the optimizer to choose an access plan knowing whether the data is likely already in memory or must be read from disk.

20, refactoring SQL analyzer

Improve the SQL analyzer. The old analyzer has serious limitations due to its grammatical complexity and top-down analysis, making it difficult to maintain and extend.

Second, features deprecated in MySQL 8.0

  • Deprecated the validate_password plugin
  • Deprecated the ALTER TABLESPACE and DROP TABLESPACE ENGINE clauses
  • Deprecated JSON_MERGE() -> use JSON_MERGE_PRESERVE() instead
  • Deprecated the have_query_cache system variable

Third, features removed in MySQL 8.0

The query cache functionality was removed, along with its related system variables

mysql_install_db is replaced by mysqld --initialize or --initialize-insecure

The INNODB_LOCKS and INNODB_LOCK_WAITS tables under INFORMATION_SCHEMA have been deleted. Replaced with Performance Schema data_locks and data_lock_waits tables

Four tables under INFORMATION_SCHEMA removed: GLOBAL_VARIABLES, SESSION_VARIABLES, GLOBAL_STATUS, SESSION_STATUS

InnoDB no longer supports compressed temporary tables.

PROCEDURE ANALYSE() syntax is no longer supported

InnoDB INFORMATION_SCHEMA views renamed:
Old Name                  New Name
INNODB_SYS_COLUMNS        INNODB_COLUMNS
INNODB_SYS_DATAFILES      INNODB_DATAFILES
INNODB_SYS_FIELDS         INNODB_FIELDS
INNODB_SYS_FOREIGN        INNODB_FOREIGN
INNODB_SYS_FOREIGN_COLS   INNODB_FOREIGN_COLS
INNODB_SYS_INDEXES        INNODB_INDEXES
INNODB_SYS_TABLES         INNODB_TABLES
INNODB_SYS_TABLESPACES    INNODB_TABLESPACES
INNODB_SYS_TABLESTATS     INNODB_TABLESTATS
INNODB_SYS_VIRTUAL        INNODB_VIRTUAL

Removed server options:

--temp-pool
--ignore-builtin-innodb
--des-key-file
--log-warnings
--ignore-db-dir

Removed configuration options:

innodb_file_format
innodb_file_format_check
innodb_file_format_max
innodb_large_prefix

Removed system variables (-> shows the replacement where one exists):

information_schema_stats -> information_schema_stats_expiry
ignore_builtin_innodb
innodb_support_xa
show_compatibility_56
have_crypt
date_format
datetime_format
time_format
max_tmp_tables
global.sql_log_bin (session.sql_log_bin reserved)
log_warnings -> log_error_verbosity
multi_range_count
secure_auth
sync_frm
tx_isolation -> transaction_isolation
tx_read_only -> transaction_read_only
ignore_db_dirs
query_cache_limit
query_cache_min_res_unit
query_cache_size
query_cache_type
query_cache_wlock_invalidate
innodb_undo_logs -> innodb_rollback_segments

Removed status variables:

Com_alter_db_upgrade
Slave_heartbeat_period
Slave_last_heartbeat
Slave_received_heartbeats
Slave_retried_transactions, Slave_running
Qcache_free_blocks
Qcache_free_memory
Qcache_hits
Qcache_inserts
Qcache_lowmem_prunes
Qcache_not_cached
Qcache_queries_in_cache
Qcache_total_blocks
Innodb_available_undo_logs

Removed functions:

JSON_APPEND() -> JSON_ARRAY_APPEND()

ENCODE()

DECODE()

DES_ENCRYPT()

DES_DECRYPT()

Removed client options:

--ssl and --ssl-verify-server-cert are removed; use --ssl-mode=DISABLED|REQUIRED|VERIFY_IDENTITY instead

--secure-auth


MySQL 8 official version 8.0.11 has been released. Officially stated that MySQL 8 is 2 times faster than MySQL 5.7, and it also brings a lot of improvements and faster performance!

The following is my record of the installation process, done on 2018-04-23. The whole thing takes about an hour, with the make && make install step taking the longest.

I. Environment

CentOS 7.4 64-bit Minimal Installation

II. Preparation

1. Install dependencies

yum -y install wget cmake gcc gcc-c++ ncurses ncurses-devel libaio-devel openssl openssl-devel

2. Download the source package

wget https://cdn.mysql.com//Downloads/MySQL-8.0/mysql-boost-8.0.11.tar.gz (this version comes with boost)

3. Create the mysql user

groupadd mysql
useradd -r -g mysql -s /bin/false mysql

4. Create the installation directory and data directory

mkdir -p /usr/local/mysql
mkdir -p /data/mysql

III. Install MySQL 8.0.11

1. Extract the source package

tar -zxf mysql-boost-8.0.11.tar.gz -C /usr/local

2. Compile & install

cd /usr/local/mysql-8.0.11
cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DMYSQL_DATADIR=/usr/local/mysql/data -DSYSCONFDIR=/etc -DMYSQL_TCP_PORT=3306 -DWITH_BOOST=/usr/local/mysql-8.0.11/boost
make && make install

3. Configure the my.cnf file

cat /etc/my.cnf
[mysqld]
server-id=1
port=3306
basedir=/usr/local/mysql
datadir=/data/mysql

## Please add parameters according to the actual situation

4. Modify directory permissions

chown -R mysql:mysql /usr/local/mysql
chown -R mysql:mysql /data/mysql
chmod 755 /usr/local/mysql -R
chmod 755 /data/mysql -R

5. Initialization

bin/mysqld --initialize --user=mysql --datadir=/data/mysql/
bin/mysql_ssl_rsa_setup

6. Start mysql

bin/mysqld_safe --user=mysql &

7. Change the root account password

bin/mysql -uroot -p
mysql> alter user 'root'@'localhost' identified by "123456";

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

## Add a remote account

mysql> create user root@'%' identified by '123456';
Query OK, 0 rows affected (0.08 sec)

mysql> grant all privileges on *.* to root@'%';
Query OK, 0 rows affected (0.04 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

8. Create a soft link (optional)

ln -s /usr/local/mysql/bin/* /usr/local/bin/

mysql -h 127.0.0.1 -P 3306 -uroot -p123456 -e "select version();"
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------+
| version() |
+-----------+
| 8.0.11    |
+-----------+

9. Add to boot start (optional)

cp support-files/mysql.server /etc/init.d/mysql.server
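To actually have it start at boot, something along these lines should follow (using the script name copied above; chkconfig is assumed to be available on CentOS 7):

chmod +x /etc/init.d/mysql.server
chkconfig --add mysql.server
chkconfig mysql.server on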

Note: MySQL officially recommends using the binary installation. (The figure below was a screenshot of the official documentation.)

Nginx load balancing and configuration

1 Load balancing overview

The reason load balancing exists is that when a server receives a large volume of traffic in a short period it comes under great pressure, and once that pressure exceeds its capacity the server crashes. To avoid crashing the server and to give users a better experience, load balancing was created to share the server's load.

Load balancing is essentially implemented on the reverse-proxy principle. It is a technique for optimizing server resources and handling high concurrency sensibly: it balances the pressure across servers, reduces the time users wait for requests, and provides fault tolerance. Nginx is commonly used as an efficient HTTP load balancer to distribute traffic to multiple application servers and improve performance, scalability, and availability.

Principle: a large number of servers on the internal network are built into a server cluster. When a user visits the site, the request first reaches a public-facing intermediate server, which assigns it to an intranet server according to the configured algorithm. Each user request is thus shared out so that the pressure on every server in the cluster tends toward balance, spreading the load and avoiding the collapse of any single server.

The nginx reverse-proxy implementation supports load balancing for HTTP, HTTPS, FastCGI, uwsgi, SCGI, and memcached.
To configure load balancing for HTTPS instead of HTTP, simply use 'https' as the protocol.
When you want to set up load balancing for FastCGI, uwsgi, SCGI, or memcached, use the fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives respectively.

2 Common load balancing mechanisms

1 round-robin: requests are distributed to the servers by polling; each request is assigned to a different back-end server in turn. If a back-end server goes down, it is automatically removed so that service continues normally.

Configuration 1:
upstream server_back {   # nginx distributes service requests
server 192.168.2.49;
server 192.168.2.50;
}

Configuration 2:
http {
upstream servergroup { # service group accepts requests, nginx polling distribution service requests
server srv1.rmohan.com;
server srv2.rmohan.com;
server srv3.rmohan.com;
}
server {
listen 80;
location / {
proxy_pass http://servergroup;   # all requests are proxied to the servergroup service group
}
}
}
proxy_pass is followed by the address of the proxied group; individual servers can be specified by hostname, domain name, or ip:port
upstream defines the list of back-end servers used for load balancing

2 Weighted load balancing: if no weight is configured, each server carries an equal load. When server performance is uneven, weighted polling is used: the weight parameter determines the share of requests each server receives, and the heavier the weight, the larger the share of the load.
upstream server_back {
server 192.168.2.49 weight=3;
server 192.168.2.50 weight=7;
}

3 least-connected: The next request is allocated to the server with the least number of connections. When some requests take longer to respond, the least connections can more fairly control the load of application instances. Nginx forwards the request to the less loaded server.
upstream servergroup {
least_conn;
server srv1.rmohan.com;
server srv2.rmohan.com;
server srv3.rmohan.com;
}

4 ip-hash: based on the client IP address. With plain load balancing, each request may land on a different server in the cluster, so a user who has logged in on one server can be sent to another and lose their login session, which is obviously not acceptable. ip_hash solves this: once a client has been hashed to a server, subsequent requests from that client are routed to the same server.

Each request is assigned according to the result of the IP hash, so the request is fixed to a certain back-end server, and it can also solve the session problem
upstream servergroup {
ip_hash;
server srv1.rmohan.com;
server srv2.rmohan.com;
server srv3.rmohan.com;
}

Attach an example:
#user nobody;
worker_processes 4;
events {
# maximum number of connections per worker
worker_connections 1024;
}
http {
# the list of back-end servers
upstream myserver {
# the ip_hash directive pins the same user to the same server
ip_hash;
server 125.219.42.4 fail_timeout=60s;   # time the server is held unavailable after max_fails failures: 60s
server 172.31.2.183;
}

server {
# listening port
listen 80;
# root location
location / {
# which upstream server group to use
proxy_pass http://myserver;
}
}
}

max_fails – number of failed requests allowed before the server is marked unavailable (default 1)
fail_timeout=60s – how long the server is considered unavailable after max_fails failures
down – the server does not participate in load balancing
backup – all non-backup machines are tried first; requests only go to backup servers when the others are busy or down, so their load is the lightest. An upstream combining these parameters is sketched below.
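A minimal upstream sketch combining these server parameters (addresses are placeholders):

upstream backend {
server 192.168.2.49 weight=5 max_fails=3 fail_timeout=30s;
server 192.168.2.50;
server 192.168.2.51 backup;   # only used when the other servers are unavailable
server 192.168.2.52 down;     # temporarily taken out of rotation
}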

Solution Architect Associate

1. Messaging
2. Desktop and App Streaming
3. Security and Identity
4. Management Tools
5. Storage
6. Databases
7. Networking & Content Delivery
8. Compute
9. AWS Global Infrastructure

10. Migration
11. Analytics
12. Application services
13. Developer Tools

1. Messaging:
SNS - Simple Notification Service (Text, Email, Http)
SQS - Simple queue Service 

3. Security and Identity:
IAM - Identity Access Management
Inspector - Agent installed on VM provides security reports
Certificate Manager - Provides free certificates for domain names.
Directory Services - Provides Microsoft Active Directory
WAF - Web Application Firewall (application-layer security: cross-site scripting / SQL injection)
Artifact - Get compliance certificates (ISO, PCI, HIPAA)

4. Management Tools:
Cloud Watch - Performance of AWS environment (Disk, CPU, RAM utilization)
Cloud Formation - Turn your AWS infrastructure into code. (Document)
Cloud trail - Auditing your AWS environment
OpsWork - Automatic deployment using chef
Config - Monitor your environment, set alerts
Service Catalog - Catalogs of enterprise-authorised services
Trusted Advisor - Scans the environment and suggests performance optimization, security optimization, and fault tolerance improvements

5. Storage
S3 - Simple storage service Object based storage (DropBox)
Glacier - Not instant access; 4-5 hours to recover archived files.
EFS - Elastic File System - file storage service
Storage Gateway - Not in exam

6. Databases
RDS - Relational Database Service ( SQL, MySQL, MariaDB, PostgreSQL, Aurora, and Oracle)
DynamoDB - Non Relational Database (High performance, scalable)
RedShift - Data warehouse service (huge data queries)
ElastiCache - Caching data in the cloud (quicker than fetching from the database)

7. Networking & Content Delivery:
VPC - Virtual Private Cloud
Route 53 - Amazon's DNS Service
Cloud Front - Content Delivery Network
Direct Connect - Connect your data center to AWS with a dedicated physical line

8. Compute
EC2 - Elastic Compute Cloud
EC2 Container Services - Not in Exam
Elastic Beanstalk - developer's code on an infrastructure that is automatically provisioned to host that code
Lambda - allows you to run code without having to worry about provisioning any underlying resources.
Lightsail - New Service

9. AWS Global Infrastructure :
14 Regions and 38 Availability Zones
4 Regions and 11 more Availability Zones in 2017
66 Edge Locations
Regions - Physical geographical areas (an independent collection of AWS computing resources in a defined geography.)
Availability Zones - Logical data centers (distinct locations within an AWS region that are engineered to be isolated from failures.)
Edge Location - Content Delivery Network (CDN) endpoints for CloudFront (very large media objects)
----------------------------------------------------------------------------------------------------------------
10. Migration:
Snowball - Connect different discs and transfer data into cloud like S3
DMS - Database Migration Service (Migrate Oracle SQL MySQL to cloud)
SMS - Server Migration Service (Migrate VMWare to cloud)

11. Analytics:
Athena - SQL queries on S3
EMR - Elastic Map Reduce is specifically designed to assist you in processing large data sets
	Big Data Processing (Big Data, Log analysis, Analyse finantial markets)
Cloud Search - Fully Managed service
Elastic Search - Open source 
Kinesis - Process terabits of data and analyse it (financial transactions, social media sentiment analysis)
Data Pipeline - Move data from S3 to DynamoDB and vice versa
Quick Sight - Business analysis tool.

12. Application services:
Step Functions: Microservices used by your applications
SWF - Simple Workflow Service (co-ordinate physical and automated tasks)
API Gateway - Publish, monitor, and scale APIs
AppStream - streaming desktop applications.
Elastic Transcoder - Helps run video on different form factors and resolutions

13. Developer Tools:
CodeCommit - Cloud git
CodeBuild - Compiling the code 
CodeDeploy - Way to deploy code to EC2 instances 
CodePipeLine - Track different versions of code UAT

Amazon Web Services (AWS)

  • Extensive set of cloud services available via the Internet
  • On-demand, virtually endless, elastic resources
  • Pay-per-use with no up-front costs (with optional commitment)
  • Self-serviced and programmable

 

 

 

Elastic Compute Cloud (EC2)

  • One of the core services of AWS
  • Virtual machines (or instances) as a service
  • Dozens of instance types that vary in performance and cost
  • Instance is created from an Amazon Machine Image (AMI), which in turn can be created again from instances

 

 

 

Regions and Availability Zones (AZ)

Notes: We will only use Ireland (eu-west-1) region in this workshop. See also A Rare Peek Into The Massive Scale of AWS.

Networking in AWS

Exercise: Launch an EC2 instance

  1. Log-in to gofore-crew.signin.aws.amazon.com/console
  2. Switch to Ireland region and go to EC2 dashboard
  3. Launch a new EC2 instance according to instructor guidance
  • In “Configure Instance Details”, pass a User Data script under Advanced
  • In “Configure Security Group”, use a recognizable, unique name

#!/bin/sh
# When passed as User Data, this script will be run on boot
touch /new_empty_file_we_created.txt
echo "It works!" > /it_works.txt

Exercise: SSH into the instance

SSH into the instance (find the IP address in the EC2 console)

# Windows Putty users must convert key to .ppk (see notes)
ssh -i your_ssh_key.pem ubuntu@instance_ip_address

View instance metadata

curl http://169.254.169.254/latest/meta-data/

View your User Data and find the changes your script made

curl http://169.254.169.254/latest/user-data/
ls -la /

Notes: You will have to reduce keyfile permissions chmod og-xrw mykeyfile.pem. If you are on Windows and use Putty, you will have to convert the .pem key to .ppk key using puttygen (Conversions -> Import key -> *.pem file -> Save private key. Now you can use your *.ppk key with Putty: Connection -> SSH -> Auth -> Private key file)

Exercise: Security groups

Setup a web server that hosts the id of the instance

mkdir ~/webserver && cd ~/webserver
curl http://169.254.169.254/latest/meta-data/instance-id > index.html
python -m SimpleHTTPServer

Configure the security group of your instance to allow inbound requests to your web server from anywhere. Check that you can access the page with your browser.

Exercise: Security groups

Delete the previous rule. Ask a neighbor for the name of their security group, and allow requests to your server from your neighbor’s security group.

Have your neighbor access your web server from his/her instance.

# Private IP address of the web server (this should work)
curl 172.31.???.???:8000
# Public IP address of the web server (how about this one?)
curl 52.??.???.???:8000

Speaking of IP addresses, there is also Elastic IP Address. Later on, we will see use cases for this, as well as better alternatives.

Also, notice the monitoring metrics. These come from CloudWatch. Later on, we will create alarms based on the metrics.

Elastic Block Store (EBS)

  • Block storage service (virtual hard drives) with speed and encryption options
  • Disks (or volumes) are attached to EC2 instances
  • Snapshots can be taken from volumes (see the CLI sketch after this list)
  • Alternative to EBS is ephemeral instance store
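A rough AWS CLI sketch of the volume/snapshot workflow (IDs, zone, and device name are placeholders):

# create an 8 GiB gp2 volume in an availability zone
aws ec2 create-volume --availability-zone eu-west-1a --size 8 --volume-type gp2
# attach it to an instance as /dev/sdf
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf
# take a point-in-time snapshot of the volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "example snapshot"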

EC2 cost


Identity and Access Management

Identity and Access Management (IAM)

Notes: Always use roles inside instances (do not store credentials there), or something bad might happen.

Quiz: Users on many levels

Imagine running a content management system, discussion board, or blog web application in EC2. How many different types of user accounts might you have?


Virtual Private Cloud

Virtual Private Cloud (VPC)

  • Heavy-weight virtual IP networking for EC2 and RDS instances. Integral part of modern AWS, all instances are launched into VPCs (not true for EC2-classic)
  • An AWS root account can have many VPCs, each in a specific region
  • Each VPC is divided into subnets, each bound to an availability zone
  • Each instance connects to a subnet with an Elastic Network Interface

 

 

 

 

VPC with Public and Private Subnets

Access Control Lists

 

 

 

 

Auto Scaling

 

 

Provisioning capacity as needed

  • Changing the instance type is vertical scaling (scale up, scale down)
  • Adding or removing instances is horizontal scaling (scale out, scale in)
  • 1 instance 10 hours = 10 instances 1 hour

Auto Scaling instances

  • Launch Configuration describes the configuration of the instance. Having a good AMI and bootstrapping is crucial.
  • Auto Scaling Group contains instances whose lifecycles are automatically managed by CloudWatch alarms or schedule
  • Scaling Plan defines when scaling happens and what triggers it.

Scaling Plans

  • Maintain current number of instances
  • Manual scaling by user interaction or via API
  • Scheduled scaling
  • Dynamic Auto Scaling. A scaling policy describes how the group scales in or out. You should always have policies for both directions. Policy cooldowns control the rate in which scaling happens.

Auto Scaling Group Lifecycle

Auto Scaling Group Lifecycle

Elastic Load Balancer

  • Route traffic to an Auto Scaling Group (ASG)
  • Runs health checks to instances to decide whether to route traffic to them
  • Spread instances over multiple AZs for higher availability
  • ELB scales itself. Never use ELB IP address. Pre-warm before flash traffic.

 

Public networking

Route 53

  • Domain Name System (DNS)
  • Manage DNS records of hosted zones
  • Round Robin, Weighted Round Robin and Latency-based routing

CloudFront

  • Content Delivery Network (CDN)
  • Replicate static content from S3 to edge locations
  • Also supports dynamic and streaming content

EC2 instance

chmod 400 SeniorServer.pem

Server: ssh -i "SeniorServer.pem" ec2-user@ec2-54-149-37-172.us-west-2.compute.amazonaws.com
password for root:qwe123456

server2: ssh -i "senior_design_victor.pem" ec2-user@ec2-54-69-160-179.us-west-2.compute.amazonaws.com

controller: ssh -i "zheng.pem" ec2-user@ec2-52-34-59-51.us-west-2.compute.amazonaws.com


DB: mysql -h seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com -P 3306 -u root -p
username: root
password: qwe123456


use command "screen" to keep running server codes and controller codes on AWS
screen  #create a new screen session
screen -ls  #check running screens
screen -r screenID   #resume to a screen
screen -X -S screenID kill   #end a screen


MySQL:
create table transaction (username varchar(20), history varchar(20));
insert into property (username,password,money) values ("client1","123",100);


To create more replicated server/db:

Master (RDS) - all in mysql, no Bash - remember to remove [] from statements
	1. Create new slave
		CREATE USER '[SLAVE USERNAME]'@'%' IDENTIFIED BY '[SLAVE PASSWORD]'; 
	2. Give it access 
		GRANT REPLICATION SLAVE ON *.* TO '[SLAVE USERNAME]'@'%';  

Slave (Server) - 
	1. [Bash] Import the database from master
		mysqldump -h seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com -u root -p senior_design > dump.sql

	2. [Bash] Import the dump.sql into your database 
		mysql senior_design < dump.sql

	3. [Bash] Edit the /etc/my.cnf - will require root access, add the follow lines
	**Remember to keep server-id different (currently using 10, so next is 11, etc...)
		# Give the server a unique ID
		server-id               = #CHANGE THIS NUMBER#
		#
		# Ensure there's a relay log
		relay-log               = /var/lib/mysql/mysql-relay-bin.log
		#
		# Keep the binary log on
		log_bin                 = /var/lib/mysql/mysql-bin.log
		replicate_do_db            = senior_design

	4. [Bash] Restart mysqld
		service mysqld restart

Master-Slave Connection Creation
	1. On master (RDS) - type
		show master status;
		** We will need to keep note of File and Position
		+----------------------------+----------+--------------+------------------+-------------------+
		| File                       | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
		+----------------------------+----------+--------------+------------------+-------------------+
		| mysql-bin-changelog.010934 |      400 |              |                  |                   |
		+----------------------------+----------+--------------+------------------+-------------------+
	2. On the slave, enter mysql then enter
		CHANGE MASTER TO MASTER_HOST='seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com', MASTER_USER='[SLAVE NAME]', MASTER_PASSWORD='[SLAVE PWD]', MASTER_LOG_FILE='[MASTER FILE] ', MASTER_LOG_POS= [MASTER POSITION];
	3. On the slave, enter "START SLAVE;"
	4. Make sure the slave started - "SHOW SLAVE STATUS\G;"
	5. You can always triple check by adding a new row to senior_design in master then see if it updates in slave.

	TROUBLESHOOTING 
	- If for some reason you mess up the slave in step 2.
		[mysql] on the slave side
			reset slave;
		then repeat step 2 - 5
	- If in SHOW SLAVE STATUS\G shows error
		try 
			STOP SLAVE;
			SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
			START SLAVE;
		error should be gone, but this will only skip the error; the error may still re-appear

use senior_design;
select count(*) from property;


Slave user pass:
Slave 1 - username: slave1 pass: slave1pwd
Slave 2 - username: slave2 pass: [slave2]

CHANGE MASTER TO MASTER_HOST='seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com', MASTER_USER='slave1', MASTER_PASSWORD='slave1pwd', MASTER_LOG_FILE='mysql-bin-changelog.011030', MASTER_LOG_POS= 400;

CHANGE MASTER TO MASTER_HOST='seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com', MASTER_USER='slave2', MASTER_PASSWORD='[slave2]', MASTER_LOG_FILE='mysql-bin-changelog.011030', MASTER_LOG_POS= 400;

Redis install

Install
——-
http://download.redis.io/redis-stable.tar.gz

$ wget http://download.redis.io/redis-stable.tar.gz
$ tar xvzf redis-stable.tar.gz
$ cd redis-stable
$ make
$ make test # optional
————–
# yum install wget tcl
# wget http://download.redis.io/releases/redis-3.2.5.tar.gz
# tar xzf redis-3.2.5.tar.gz
# cd redis-3.2.5
# make
# make test
————–
$ sudo make install
OR
$ sudo cp src/redis-server /usr/local/bin/
$ sudo cp src/redis-cli /usr/local/bin/
$ sudo mkdir /etc/redis
$ sudo mkdir /var/redis
$ sudo cp utils/redis_init_script /etc/init.d/redis_6379
$ sudo cp redis.conf /etc/redis/6379.conf
$ sudo mkdir /var/redis/6379
$ sudo update-rc.d redis_6379 defaults # OR sudo chkconfig --add redis_6379
$ sudo /etc/init.d/redis_6379 start
————–

TERMS
—–
RESP (REdis Serialization Protocol)
RESP, the type of some data depends on the first byte:
Simple Strings "+"
Errors "-"
Integers ":"
Bulk Strings "$"
Arrays "*"
Redis append-only file feature (AOF)

COMMANDS
——–
$ redis-server # start server
/etc/init.d/redis_PORT start
$ redis-cli [-p PORT] shutdown # stop server
/etc/init.d/redis_PORT stop
$ redis-cli ping # check if working
$ redis-cli --stat [-i interval] # continuous stats mode
$ redis-cli --bigkeys # scan for big keys
$ redis-cli [-p port ] --scan [--pattern REGEX] # get a list of keys
$ redis-cli [-p port ] monitor # monitor commands
$ redis-cli [-p port ] --latency # monitor latency of instances
$ redis-cli [-p port ] --latency-history [-i interval]
$ redis-cli [-p port ] --latency-dist # spectrum of latencies
$ redis-cli --intrinsic-latency [-p port ] [test-time] # latency of system
$ redis-cli --intrinsic-latency 5
$ redis-cli --rdb <dest-filename> # remote RDB backup ($?=0 success)
$ redis-cli --rdb /tmp/dump.rdb
$ redis-cli --slave # slave mode (monitor master -> slave writes)
$ redis-cli --lru-test 10000000 # Least Recently Used (LRU) simulation
# used to help configure 'maxmemory' for LRU
$ redis-cli save # save dump file (dump.rdb) to $dir
$ redis-cli select <DB_NUMBER> # select DB
$ redis-cli dbsize # show size of DB
$ redis-cli connect <SERVER> <PORT> # connect to different servers/ports
$ redis-cli debug restart
$ redis-cli --version
$ redis-cli pubsub channels [PATTERN]
$ redis-cli pubsub numsub [channel1 … channelN]
$ redis-cli subscribe/psubscribe/publish
$ redis-cli slowlog get [N]|len
a. unique progressive identifier for every slow log entry.
b. timestamp at which the logged command was processed.
c. amount of time needed for its execution, in microseconds.
d. array composing the arguments of the command.

FILES
—–
config file: /etc/redis/6371.conf
dbfilename dump.rdb
dir /var/lib/redis/6371
pidfile /var/run/redis/6371.pid
DB saved to:
/var/lib/redis/6371/dump.rdb

OPTIONS
——-
--raw, --no-raw

Configuration
————-
redis.conf # well documented
default ports: 6379 / 16379 (cluster mode) / 26379 (Sentinel)

daemonize no
pidfile /var/run/redis_6379.pid
port 6379
loglevel info
logfile /var/log/redis_6379.log
dir /var/redis/6379

keyword argument1 argument2 … argumentN

slaveof 127.0.0.1 6380
requirepass "hello world"
maxmemory 2mb
maxmemory-policy allkeys-lru
masterauth <password>
daemonize no # when run under daemontools

Administration
————–
/etc/sysctl.conf

vm.overcommit_memory = 1 # sysctl vm.overcommit_memory=1
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# passing arguments via CLI
$ ./redis-server --port 6380 --slaveof 127.0.0.1 6379

redis> config get GLOB
redis> config set slaveof 127.0.0.1 6380
redis> config rewrite

— —
Actual config

daemonize yes
pidfile /var/run/redis/6371.pid
port 6371
tcp-backlog 511
bind 0.0.0.0
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
syslog-enabled yes
syslog-ident redis
syslog-facility USER
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/6371
slaveof 10.200.18.115 6371 # only on slave(s)
slave-serve-stale-data yes
slave-read-only yes
repl-disable-tcp-nodelay no
slave-priority 100
maxclients 10000
maxmemory-policy noeviction
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

Replication
———–
redis> config set masterauth <password>
persistence = enabled OR automatic-restarts = disabled

slaveof 192.168.1.1 6379
repl-diskless-sync
repl-diskless-sync-delay
slave-read-only
masterauth <password> # config set masterauth <password>
min-slaves-to-write <number of slaves>
min-slaves-max-lag <number of seconds>
slave-announce-ip 5.5.5.5
slave-announce-port 1234

Redis Sentinel (26379)
————–
– Monitoring
– Notification
– Automatic failover
– Configuration provider

redis-sentinel /path/to/sentinel.conf
OR
redis-server /path/to/sentinel.conf --sentinel

# typical minimal config

# sentinel monitor <master-group-name> <ip> <port> <quorum>
# sentinel down-after-milliseconds <master-name> <milliseconds> # default 30 secs
# sentinel failover-timeout <master-name> <milliseconds> # default 3 minutes
# sentinel parallel-syncs <master-name> <numslaves>

# example typical minimal config:

sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1

# additional configs

# bind 127.0.0.1 192.168.1.1
# protected-mode no
# sentinel announce-ip <ip>
# sentinel announce-port <port>
# dir <working-directory>
# syntax: sentinel <option_name> <master_name> <option_value>
# sentinel auth-pass <master-name> <password>
# sentinel down-after-milliseconds <master-name> <milliseconds> # default 30 secs
# sentinel parallel-syncs <master-name> <numslaves>
# sentinel failover-timeout <master-name> <milliseconds> # default 3 minutes
# sentinel notification-script <master-name> <script-path>
# passed: <event type> <event description>
# sentinel client-reconfig-script <master-name> <script-path>
# passed: <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
— — —
# actual config

/etc/redis/sentinel_26371.conf
# redis-sentinel 2.8.9 configuration file
# sentinel_26371.conf
daemonize no
dir "/var/lib/redis/sentinel_26371"
pidfile "/var/run/redis/sentinel_26371.pid"
port 26371
bind 0.0.0.0
sentinel monitor iowa_master_staging 10.200.18.115 6375 2
sentinel config-epoch iowa_master_staging 0
sentinel leader-epoch iowa_master_staging 0
sentinel known-slave iowa_master_staging 10.200.20.234 6375
logfile ""
syslog-enabled yes
syslog-ident "sentinel_26371"
syslog-facility user
————-
# sentinel messages/events

+monitor master <group-name> <ip> quorum <N>

# Testing
$ redis-cli -p PORT
127.0.0.1:PORT> SENTINEL master mymaster # info about master
127.0.0.1:PORT> SENTINEL slaves mymaster # info about slave(s)
127.0.0.1:PORT> SENTINEL sentinels mymaster # info about sentinel(s)
127.0.0.1:PORT> SENTINEL get-master-addr-by-name mymaster # get address of master
$ redis-cli -p 6379 DEBUG sleep 30 # simulate master hanging

ping
SENTINEL masters # get list of monitored masters and their state
SENTINEL master <master name>
SENTINEL slaves <master name>
SENTINEL sentinels <master name>
SENTINEL get-master-addr-by-name <master name>
SENTINEL reset <pattern> # reset all masters matching pattern
SENTINEL failover <master name> # force a failover
SENTINEL ckquorum <master name> # check if current config is able to failover
SENTINEL flushconfig # rewrite config file
SENTINEL monitor <name> <ip> <port> <quorum> # start monitoring a new master
SENTINEL remove <name> # stop monitoring master
SENTINEL SET <name> <option> <value>

# Commands
$

Redis Cluster
————-
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes

=====================================
redis1:$ WTFI redis-cli => /nb/redis/bin/redis-cli

redis-cli -h <hostname> -p <port> -r <repeat (-1=forever)> -i <interval (secs)> -n <DB_NUM> <COMMAND>
redis-cli -p 6371|26371 info [server|clients|memory|persistence|stats|replication|cpu|keyspace|sentinel]
redis-cli -p 6371 ping
=====================================
Upgrading or restarting a Redis instance without downtime
Check out: https://redis.io/topics/admin (bottom)
=====================================
for p in $(grep ^port /etc/redis/*|awk '{print $NF}'); do echo "---- port: $p ----"; /nb/redis/bin/redis-cli -p $p info | grep stat; done

========== tool (redis_monit.sh) [begin] ==========
#!/bin/bash
# get status of redis servers
REDIS_CLI_CMD=/nb/redis/bin/redis-cli
# get the list of ports configured
ports=$(ls /etc/redis/*.conf | tr -d '[a-z/.]')
for port in $ports; do
echo "---- port: $port ----"
if [ -e /etc/redis/$port.conf ]; then
$REDIS_CLI_CMD -p $port info | grep stat
else
echo "no config (/etc/redis/$port.conf)"
fi
done
========== tool (redis_monit.sh) [end] ==========

# update the monitor hosts – “live”
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel monitor redis2 10.204.21.219 6379 2
sentinel failover redis1
sentinel masters
sentinel slaves redis1

# manual failover
# CLUSTER FAILOVER [FORCE|TAKEOVER]

$ redis-cli -p 7002 debug segfault

s3 aws instance

creating a bucket:
——————-
S3 > Create bucket > unique name + region > create
bucket > select > upload > upload file or drag n drop

Backup Files to Amazon S3 using the AWS CLI
———————————————
Step 1: create login for aws console:
IAM > Users > Create > username: AWS_Admin > Permissions > Attach policy > AdministratorFullAccess
> Manage password > Auto generated, uncheck require password change > apply
> Download credentials > credentials.csv

Step 2: install and configure aws cli
download AWSCLI64.msi > install > windows run > cmd

Type aws configure and press enter. Enter the following when prompted:

AWS Access Key ID [None]: enter the Access Key Id from the credentials.csv file you downloaded in step 1 part d

Note: this should look something like AKIAPWINCOKAO3U4FWTN
AWS Secret Access Key [None]: enter the Secret Access Key from the credentials.csv file you downloaded in step 1 part d

Note: this should look something like 5dqQFBaGuPNf5z7NhFrgou4V5JJNaWPy1XFzBfX3

Default region name [None]: enter us-east-1
Default output format [None]: enter json

Step 3: Using the AWS CLI with Amazon S3
a. Creating a bucket is optional if you already have a bucket created that you want to use.
To create a new bucket named my-first-backup-bucket type aws s3 mb s3://my-first-backup-bucket

Note: bucket naming has some restrictions; one of those restrictions is that bucket names must be globally unique (e.g. two different AWS users can not have the same bucket name);
because of this, if you try the command above you will get a BucketAlreadyExists error.

b. To upload the file my-first-backup.bak located in the local directory to the S3 bucket my-first-backup-bucket,
you would use the following command:
aws s3 cp my-first-backup.bak s3://my-first-backup-bucket/

c. To download my-first-backup.bak from S3 to the local directory we would reverse the order of the commands as follows:
aws s3 cp s3://my-first-backup-bucket/my-first-backup.bak ./

d. To delete my-first-backup.bak from your my-first-backup-bucket bucket, use the following command:
aws s3 rm s3://my-first-backup-bucket/my-first-backup.bak

additional commands
recursively copying local files to s3
aws s3 cp myDir s3://mybucket/ --recursive --exclude "*.jpg"

recursively remove files (with caution!)
aws s3 rm s3://mybucket/ --recursive --exclude "*.jpg"

list files:
aws s3 ls s3://mybucket

remove bucket:
$ aws s3 rb s3://bucket-name
or
$ aws s3 rb s3://bucket-name --force

EC2 instance aws

Launch a Linux Virtual Machine
===============================
Step 1: Launch an Amazon EC2 Instance
EC2 console > launch instance

Step 2: Configure your Instance
a. Amazon Linux AMI
b. t2.micro > default options
c. review and launch 
d. create new key pair > "MyKeyPair" > Download key pair
Windows users: We recommend saving your key pair in your user directory in a sub-directory called .ssh (ex. C:\user\{yourusername}\.ssh\MyKeyPair.pem).
Mac/Linux users: We recommend saving your key pair in the .ssh sub-directory from your home directory (ex. ~/.ssh/MyKeyPair.pem).
Note: On Mac, the key pair is downloaded to your Downloads directory by default. 
To move the key pair into the .ssh sub-directory, enter the following command in a terminal window: mv ~/Downloads/MyKeyPair.pem ~/.ssh/MyKeyPair.pem

click Launch instance

e. EC2 > View instances
f. make note of public ip address of the new instance

Step 3: Connect to your Instance
Windows users:  Select Windows below to see instructions for installing Git Bash.
Mac/Linux user: Select Mac / Linux below to see instructions for opening a terminal window.

a. instructions to install git bash
b. open git bash to run ssh command
c. connect to your instance

Windows users: Enter ssh -i 'c:\Users\yourusername\.ssh\MyKeyPair.pem' ec2-user@{IP_Address} (ex. ssh -i 'c:\Users\adamglic\.ssh\MyKeyPair.pem' ec2-user@52.27.212.125)
Mac/Linux users: Enter ssh -i ~/.ssh/MyKeyPair.pem ec2-user@{IP_Address} (ex. ssh -i ~/.ssh/MyKeyPair.pem ec2-user@52.27.212.125)

Note: if you started a Linux instance that isn't Amazon Linux, a different user name may be used. 
common user names include ec2-user, root, ubuntu, and fedora. 
If you are unsure what the login user name is, check with your AMI provider.

You'll see a response similar to the following:

The authenticity of host 'ec2-198-51-100-1.compute-1.amazonaws.com (10.254.142.33)' can't be established. 
RSA key fingerprint is 1f:51:ae:28:df:63:e9:d8:cf:38:5d:87:2d:7b:b8:ca:9f:f5:b1:6f. 
Are you sure you want to continue connecting (yes/no)?

Type yes and press enter.

You'll see a response similar to the following:

Warning: Permanently added 'ec2-198-51-100-1.compute-1.amazonaws.com' (RSA) to the list of known hosts.

You should then see the welcome screen for your instance and you are now connected to your AWS Linux virtual machine in the cloud.


Step 4: Terminate Your Instance
a. EC2 console > instance > actions > Instance state > Terminate
b. confirm yes to terminate


Launch a Windows Virtual Machine
================================
Step 1: Enter the EC2 Dashboard
EC2 > console

Step 2: Create and Configure Your Virtual Machine
a. launch instance
b. Microsoft Windows Server 2012 R2 Base > select 
c. instance type > t2.micro > Review and launch
d. default options > launch

Step 3: Create a Key Pair and Launch Your Instance
a. popover > select "create a new key pair" > name: "MyFirstKey" > Download key pair > MyFirstKey.pem

Windows users: We recommend saving your key pair in your user directory in a sub-directory called .ssh (ex. C:\Users\{yourusername}\.ssh\MyFirstKey.pem).
Mac/Linux users: We recommend saving your key pair in the .ssh sub-directory from your home directory (ex. ~/.ssh/MyFirstKey.pem). 

b. Launch instance
c. on the next screen "View instances" > status

Step 4: Connect to Your Instance
connect using RDP

a. select instance > connect
b. login
The User name defaults to Administrator
To receive your password, click Get Password
c. choose MyFirstKey.pem > Decrypt password
d. save decrypted password in a safe location

Step 5: Connect Using the RDP Client
RDP client
a. Click download remote desktop file
b. enter username and password
you should be logged in!!

Step 6: Terminate Your Windows VM
a. EC2 Console > select instance > actions > Instance state > Terminate
b. confirm yes to terminate

AWS Notes 2

Limits:
======
VPCs per region: 5
Subnets per VPC: 200
IGW per region: 5
VGW per region: 5
CGW per region: 50
VPN connections per region: 50
Route tables per VPC: 200 (including main route table)
Entries per route table: 50
EIP per region for each account: 5
Security groups per VPC: 100
Rules per security group: 50 (per network interface max limit: 250)
Security groups per network interface: 5
Network ACLs per VPC: 200
Rules per ACL: 20
BGP advertised routes per VPN Connection: 100
Active VPC peering connections per VPC: 50
Outstanding VPC peering connection requests: 25

IAM:
—-
Interfaces:
1. AWS management console
2. CLI
3. IAM Query API
4. Existing Libraries

MyISAM (non-transactional DB)
—————————
steps to creating a read replica:
1. Stop all DDL and DML operations on non-transactional tables and wait for them to complete; SELECT statements can continue running
2. Flush and lock those tables
3. Create read replica using CreateDBInstanceReadReplica API
4. Check progress of replication using DescribeDBInstance API
5. Once replica is available unlock tables and resume normal database operations
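
Steps 3 and 4 above map to the RDS API; a minimal AWS CLI sketch, assuming a hypothetical source instance named mydb:

# create the read replica from the source instance (step 3)
aws rds create-db-instance-read-replica --db-instance-identifier mydb-replica --source-db-instance-identifier mydb

# poll the replica until its status is "available" (step 4)
aws rds describe-db-instances --db-instance-identifier mydb-replica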

CloudFront – Alternate Domain Names:
————————————-
/images/image.jpg > http://www.mydomain.com/images/image.jpg
instead of http://d111111abcdef8.cloudfront.net/images/image.jpg

1. add CNAME for www.mydomain.com to your distribution
2. update or create a CNAME record with your DNS service to route queries for www.mydomain.com to the distribution (see the example record below)
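
For example, using the sample names above, the DNS record would look roughly like this (exact syntax depends on your DNS provider):

www.mydomain.com.   CNAME   d111111abcdef8.cloudfront.net.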

Elasticity:
———-
3 ways of implementing:
1. Proactive cycle scaling – periodic scaling that occurs at fixed intervals (daily, weekly, monthly, quarterly); see the scheduled-action sketch after this list
2. Proactive event based scaling – scaling based on an event like a big surge of traffic requests due to a scheduled business event (new product launch, marketing campaigns, etc.)
3. Auto-scaling based on demand: by using a monitoring service, your system can send triggers to take appropriate actions, scale up or down based on metrics (utilization of servers, network I/O)
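
Proactive cycle scaling (option 1) is usually implemented as scheduled actions on the Auto Scaling group; a minimal sketch with hypothetical group names and capacities:

# scale up every weekday at 09:00 UTC
aws autoscaling put-scheduled-update-group-action --auto-scaling-group-name my-asg --scheduled-action-name scale-up-morning --recurrence "0 9 * * 1-5" --desired-capacity 8

# scale back down in the evening
aws autoscaling put-scheduled-update-group-action --auto-scaling-group-name my-asg --scheduled-action-name scale-down-evening --recurrence "0 21 * * 1-5" --desired-capacity 2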

Instance stores:
——————-
Data on instance store volumes persists only during the life of the instance
If an instance reboots, the data persists

Data is lost under the following scenarios:
– Failure of an underlying drive
– Stopping an Amazon EBS-backed instance
– Terminating an instance

Therefore, do not rely on instance store volumes for long-term data;
instead, keep a replication strategy across multiple instances, store data on S3, or use EBS volumes

CloudFormation resource IDs:
----------------------------
Logical ID and physical ID
The physical ID can be used to view the instance and its properties through the EC2 console, but it is only available once CloudFormation has created the resource.
The logical ID is used for mapping resources within a template, e.g. attaching an EBS volume to an instance -> you reference the logical IDs of both the EBS volume and the EC2 instance to specify the mapping
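
Once a stack exists you can see the logical-to-physical ID mapping from the CLI; a sketch assuming a hypothetical stack named my-stack:

# list each resource's logical ID alongside the physical ID CloudFormation created for it
aws cloudformation describe-stack-resources --stack-name my-stack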

Elastic load balancer:
———————-
Internet facing load balancer – DNS name, public IP and IGW (internet gateway)
Internal load balancer – DNS name and private IP
The DNS names of both load balancers are publicly resolvable
Flow: Internet facing load balancer > DNS resolve > Webservers > Internal load balancer > DNS resolve > private IPs > backend instances within the private subnet
application instances behind the load balancer do not need to be in the same subnet
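
For reference, an internal load balancer differs only in the scheme chosen at creation time; a minimal sketch using the elbv2 API with hypothetical subnet IDs (the classic ELB API takes an equivalent --scheme internal flag):

aws elbv2 create-load-balancer --name my-internal-lb --scheme internal --subnets subnet-aaaa1111 subnet-bbbb2222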

Auto-scaling AMI:
——————
AMI ID used in Auto scaling policy is configured in the “launch configuration”
There are differences between creating a launch configuration from scratch and creating a launch configuration from an existing EC2 instance. When you create a launch configuration from scratch, you specify the image ID, instance type, optional resources (such as storage devices), and optional settings (like monitoring). When you create a launch configuration from a running instance, by default Auto Scaling derives attributes for the launch configuration from the specified instance, plus the block device mapping for the AMI that the instance was launched from (ignoring any additional block devices that were added to the instance after launch).

When you create a launch configuration using a running instance, you can override the following attributes by specifying them as part of the same request: AMI, block devices, key pair, instance profile, instance type, kernel, monitoring, placement tenancy, ramdisk, security groups, Spot price, user data, whether a public IP address is associated with the instance, and whether the instance is EBS-optimized.
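
A minimal sketch of both ways of creating a launch configuration (all names and IDs below are hypothetical):

# from scratch: you supply the AMI ID, instance type, etc. explicitly
aws autoscaling create-launch-configuration --launch-configuration-name my-lc --image-id ami-0123456789abcdef0 --instance-type t2.micro --key-name MyKeyPair

# from a running instance: attributes are derived from the instance unless overridden (here the instance type is overridden)
aws autoscaling create-launch-configuration --launch-configuration-name my-lc-from-instance --instance-id i-0123456789abcdef0 --instance-type t2.small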

Amazon RDS – High Availability (Multi-AZ)
—————————————–

Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring.

Note
Amazon Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single region, regardless of whether the instances in the DB cluster span multiple Availability Zones. For more information on Amazon Aurora, see Aurora on Amazon RDS.
In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption. For more information on Availability Zones, see Regions and Availability Zones.

Note
The high-availability feature is not a scaling solution for read-only scenarios; you cannot use a standby replica to serve read traffic. To service read-only traffic, you should use a Read Replica. For more information, see Working with PostgreSQL, MySQL, and MariaDB Read Replicas.
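
Multi-AZ is a per-instance setting; for example, converting an existing instance (hypothetical identifier) to Multi-AZ from the CLI looks roughly like this:

aws rds modify-db-instance --db-instance-identifier mydb --multi-az --apply-immediately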

AWS Storage Gateway:
———————
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the Amazon Web Services (AWS) storage infrastructure. You can use the service to store data in the AWS Cloud for scalable and cost-effective storage that helps maintain data security. AWS Storage Gateway offers file-based, volume-based and tape-based storage solutions.
1. File Gateway: file interface to S3
2. Volume Gateway: iSCSI block devices on premises
- cached volumes > primary data stored in S3, frequently accessed data cached on premises for low-latency access
- stored volumes > primary data stored on premises, asynchronously backed up to S3 as point-in-time snapshots
3. Tape Gateway: virtual tape library for backing up data to S3 and archiving to Amazon Glacier

AWS Cloud Formation:
——————–
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.

You can deploy and update a template and its associated collection of resources (called a stack) by using the AWS Management Console, AWS Command Line Interface, or APIs. CloudFormation is available at no additional charge, and you pay only for the AWS resources needed to run your applications.

ELB limitations – pre-warming:
—————————–
For pre-warming, the following details are required from the customer:
1. Start and end dates of your test or expected flash traffic
2. Expected request rate per second
3. Total size of the request/response you’ll be testing

Note: To ensure there is no outage during this setup we recommend Multi-AZ setup

Auto-scaling health checks:
—————————
Health Checks for Auto Scaling Instances

Auto Scaling periodically performs health checks on the instances in your Auto Scaling group and identifies any instances that are unhealthy. After Auto Scaling marks an instance as unhealthy, it is scheduled for replacement. For more information, see Replacing Unhealthy Instances.

Instance Health Status

An Auto Scaling instance is either healthy or unhealthy. Auto Scaling determines the health status of an instance using one or more of the following:

Status checks provided by Amazon EC2. For more information, see Status Checks for Your Instances in the Amazon EC2 User Guide for Linux Instances.
Health checks provided by Elastic Load Balancing. For more information, see Health Checks for Your Target Groups in the Application Load Balancer Guide or Configure Health Checks for Your Classic Load Balancer in the Classic Load Balancer Guide.
Custom health checks. For more information, see Instance Health Status and Custom Health Checks.
By default, Auto Scaling health checks use the results of the status checks to determine the health status of an instance. Auto Scaling marks an instance as unhealthy if its instance status is any value other than running or its system status is impaired.

If you have attached a load balancer to your Auto Scaling group, you can optionally have Auto Scaling include the results of Elastic Load Balancing health checks when determining the health status of an instance. After you add these health checks, Auto Scaling also marks an instance as unhealthy if Elastic Load Balancing reports the instance state as OutOfService. For more information, see Adding Health Checks to Your Auto Scaling Group.

Health Check Grace Period

Frequently, an Auto Scaling instance that has just come into service needs to warm up before it can pass the Auto Scaling health check. Auto Scaling waits until the health check grace period ends before checking the health status of the instance. While the EC2 status checks and ELB health checks can complete before the health check grace period expires, Auto Scaling does not act on them until the health check grace period expires. To provide ample warm-up time for your instances, ensure that the health check grace period covers the expected startup time for your application. Note that if you add a lifecycle hook to perform actions as your instances launch, the health check grace period does not start until the lifecycle hook is completed and the instance enters the InService state.

Instance Health Status and Custom Health Checks

If you have custom health checks, you can send the information from your health checks to Auto Scaling so that Auto Scaling can use this information. For example, if you determine that an instance is not functioning as expected, you can set the health status of the instance to Unhealthy. The next time that Auto Scaling performs a health check on the instance, it will determine that the instance is unhealthy and then launch a replacement instance.
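
Reporting a custom health check result back to Auto Scaling looks roughly like this (hypothetical instance ID):

# mark an instance unhealthy so Auto Scaling replaces it on the next health check
aws autoscaling set-instance-health --instance-id i-0123456789abcdef0 --health-status Unhealthy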

Internet Gateways
—————–

An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.

An Internet gateway serves two purposes: to provide a target in your VPC route tables for Internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.

An Internet gateway supports IPv4 and IPv6 traffic.

Enabling Internet Access

To enable access to or from the Internet for instances in a VPC subnet, you must do the following:

Attach an Internet gateway to your VPC.
Ensure that your subnet’s route table points to the Internet gateway.
Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
Ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance.
To use an Internet gateway, your subnet’s route table must contain a route that directs Internet-bound traffic to the Internet gateway. You can scope the route to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6), or you can scope the route to a narrower range of IP addresses; for example, the public IPv4 addresses of your company’s public endpoints outside of AWS, or the Elastic IP addresses of other Amazon EC2 instances outside your VPC. If your subnet is associated with a route table that has a route to an Internet gateway, it’s known as a public subnet.
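
The attach-and-route part of the steps above maps to a few CLI calls; a sketch with hypothetical VPC, gateway, and route table IDs:

# create an Internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# add a default route so Internet-bound IPv4 traffic goes to the gateway
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0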

To enable communication over the Internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The Internet gateway logically provides the one-to-one NAT on behalf of your instance, so that when traffic leaves your VPC subnet and goes to the Internet, the reply address field is set to the public IPv4 address or Elastic IP address of your instance, and not its private IP address. Conversely, traffic that’s destined for the public IPv4 address or Elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC.

To enable communication over the Internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique, and therefore public by default.

As an example, a subnet in the VPC can be associated with a custom route table that points all Internet-bound IPv4 traffic to an Internet gateway; an instance in that subnet with an Elastic IP address can then communicate with the Internet.

EC2 stop running instance:
————————–
When you stop a running instance, the following happens:

The instance performs a normal shutdown and stops running; its status changes to stopping and then stopped.
Any Amazon EBS volumes remain attached to the instance, and their data persists.
Any data stored in the RAM of the host computer or the instance store volumes of the host computer is gone.
In most cases, the instance is migrated to a new underlying host computer when it’s started.
EC2-Classic: We release the public and private IPv4 addresses for the instance when you stop the instance, and assign new ones when you restart it.
EC2-VPC: The instance retains its private IPv4 addresses and any IPv6 addresses when stopped and restarted. We release the public IPv4 address and assign a new one when you restart it.
EC2-Classic: We disassociate any Elastic IP address that’s associated with the instance. You’re charged for Elastic IP addresses that aren’t associated with an instance. When you restart the instance, you must associate the Elastic IP address with the instance; we don’t do this automatically.
EC2-VPC: The instance retains its associated Elastic IP addresses. You’re charged for any Elastic IP addresses associated with a stopped instance.
When you stop and start a Windows instance, the EC2Config service performs tasks on the instance such as changing the drive letters for any attached Amazon EBS volumes. For more information about these defaults and how you can change them, see Configuring a Windows Instance Using the EC2Config Service in the Amazon EC2 User Guide for Windows Instances.
If you’ve registered the instance with a load balancer, it’s likely that the load balancer won’t be able to route traffic to your instance after you’ve stopped and restarted it. You must de-register the instance from the load balancer after stopping the instance, and then re-register after starting the instance. For more information, see Register or Deregister EC2 Instances for Your Classic Load Balancer in the Classic Load Balancer Guide.
If your instance is in an Auto Scaling group, the Auto Scaling service marks the stopped instance as unhealthy, and may terminate it and launch a replacement instance. For more information, see Health Checks for Auto Scaling Instances in the Auto Scaling User Guide.
When you stop a ClassicLink instance, it’s unlinked from the VPC to which it was linked. You must link the instance to the VPC again after restarting it. For more information about ClassicLink, see ClassicLink.
For more information, see Differences Between Reboot, Stop, and Terminate.

You can modify the following attributes of an instance only when it is stopped:

Instance type
User data
Kernel
RAM disk
If you try to modify these attributes while the instance is running, Amazon EC2 returns the IncorrectInstanceState error.

Stopping and Starting Your Instances

You can start and stop your Amazon EBS-backed instance using the console or the command line.

By default, when you initiate a shutdown from an Amazon EBS-backed instance (using the shutdown, halt, or poweroff command), the instance stops. You can change this behavior so that it terminates instead. For more information, see Changing the Instance Initiated Shutdown Behavior.

To stop and start an Amazon EBS-backed instance using the console

In the navigation pane, choose Instances, and select the instance.

[EC2-Classic] If the instance has an associated Elastic IP address, write down the Elastic IP address and the instance ID shown in the details pane.

Choose Actions, select Instance State, and then choose Stop. If Stop is disabled, either the instance is already stopped or its root device is an instance store volume.

Warning
When you stop an instance, the data on any instance store volumes is erased. Therefore, if you have any data on instance store volumes that you want to keep, be sure to back it up to persistent storage.
In the confirmation dialog box, choose Yes, Stop. It can take a few minutes for the instance to stop.

[EC2-Classic] When the instance state becomes stopped, the Elastic IP, Public DNS (IPv4), Private DNS, and Private IPs fields in the details pane are blank to indicate that the old values are no longer associated with the instance.

While your instance is stopped, you can modify certain instance attributes. For more information, see Modifying a Stopped Instance.

To restart the stopped instance, select the instance, choose Actions, select Instance State, and then choose Start.

In the confirmation dialog box, choose Yes, Start. It can take a few minutes for the instance to enter the running state.

[EC2-Classic] When the instance state becomes running, the Public DNS (IPv4), Private DNS, and Private IPs fields in the details pane contain the new values that we assigned to the instance.

[EC2-Classic] If your instance had an associated Elastic IP address, you must reassociate it as follows:

In the navigation pane, choose Elastic IPs.

Select the Elastic IP address that you wrote down before you stopped the instance.

Choose Actions, and then select Associate address.

Select the instance ID that you wrote down before you stopped the instance, and then choose Associate.

To stop and start an Amazon EBS-backed instance using the command line

You can use one of the following commands. For more information about these command line interfaces, see Accessing Amazon EC2.

stop-instances and start-instances (AWS CLI)
Stop-EC2Instance and Start-EC2Instance (AWS Tools for Windows PowerShell)
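Typical usage of the CLI pair above (hypothetical instance ID):
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0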
Modifying a Stopped Instance

You can change the instance type, user data, and EBS-optimization attributes of a stopped instance using the AWS Management Console or the command line interface. You can’t use the AWS Management Console to modify the DeleteOnTermination, kernel, or RAM disk attributes.

To modify an instance attribute

To change the instance type, see Resizing Your Instance.
To change the user data for your instance, see Configuring Instances with User Data.
To enable or disable EBS-optimization for your instance, see Modifying EBS-Optimization.
To change the DeleteOnTermination attribute of the root volume for your instance, see Updating the Block Device Mapping of a Running Instance.
To modify an instance attribute using the command line

You can use one of the following commands. For more information about these command line interfaces, see Accessing Amazon EC2.

modify-instance-attribute (AWS CLI)
Edit-EC2InstanceAttribute (AWS Tools for Windows PowerShell)
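For example, changing the instance type of a stopped instance with the CLI command above (hypothetical instance ID and target type):
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"m5.large\"}"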
Troubleshooting

If you have stopped your Amazon EBS-backed instance and it appears “stuck” in the stopping state, you can forcibly stop it. For more information, see Troubleshooting Stopping Your Instance.