Back Up MySQL Databases Using Amazon S3

Run yum install s3cmd -y. If s3cmd can't be installed because the package isn't found, add the s3tools repository by hand:

cd /etc/yum.repos.d
touch s3cmd.repo
nano s3cmd.repo

Then paste in the code below:

# 
# Save this file to /etc/yum.repos.d on your system
# and run "yum install s3cmd"
# 
[s3tools]
name=Tools for managing Amazon S3 - Simple Storage Service (RHEL_6)
type=rpm-md
baseurl=http://s3tools.org/repo/RHEL_6/
gpgcheck=1
gpgkey=http://s3tools.org/repo/RHEL_6/repodata/repomd.xml.key
enabled=1

Try the yum command again; this time it will install s3cmd. Once the installation is complete, run the following command to configure s3cmd with your Amazon Access Key ID and Secret Access Key:

s3cmd --configure

Once s3cmd is configured, a simple backup script looks like this (replace the bucket name, MySQL credentials, database name, and paths with your own):

#!/bin/bash
S3_BUCKET="YOUR_BUCKET_NAME_HERE"
DATE=$(date +%d%m%Y_%H%M)
BACKUP_LOC=/home/admin/user_backups/$DATE

mysql_backup(){
  mkdir -p "$BACKUP_LOC"
  # Dump the database into a timestamped .sql file
  mysqldump -uUSERNAME -pPASSWORD DBNAMEHERE > "$BACKUP_LOC/databasename_$DATE.sql"

  # Check whether a folder for this timestamp already exists in the bucket
  s3cmd ls s3://$S3_BUCKET/database/$DATE > /tmp/log.txt
  if ! grep -q "$DATE" /tmp/log.txt
  then
    # Not there yet: create an empty placeholder folder, then sync the dump
    mkdir /tmp/$DATE
    s3cmd put -r /tmp/$DATE s3://$S3_BUCKET/database/
    s3cmd sync -r "$BACKUP_LOC" s3://$S3_BUCKET/database/$DATE/
  else
    s3cmd sync -r "$BACKUP_LOC" s3://$S3_BUCKET/database/$DATE/
  fi
}
mysql_backup
exit 0

This is a simple way to back up your MySQL tables to Amazon S3 for a nightly backup – this is all done on your server 🙂
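To make the backup truly nightly, schedule the script with cron. A sketch of the crontab entry, assuming you saved the script as /root/scripts/mysql-s3-backup.sh (the path and the 1am run time are placeholders for your own setup):

# Run the MySQL-to-S3 backup script every night at 1am
0 1 * * * /root/scripts/mysql-s3-backup.sh >/dev/null 2>&1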

Sister Document – Restore MySQL from Amazon S3 – read that next
1 – Install s3cmd

This is for CentOS 5.6; see http://s3tools.org/repositories for other systems such as Ubuntu.

# Install s3cmd
cd /etc/yum.repos.d/
wget http://s3tools.org/repo/CentOS_5/s3tools.repo
yum install s3cmd
# Setup s3cmd
s3cmd --configure
# You'll need to enter your AWS access key and secret key here; the remaining settings are optional and can be left at their defaults

2 – Add your script

Upload a copy of s3mysqlbackup.sh (it will need some tweaks for your setup), make it executable and test it

# Add the executable bit
chmod +x s3mysqlbackup.sh
# Run the script to make sure it’s all tickety boo
./s3mysqlbackup.sh

3 – Run it every night with CRON

Assuming the backup script is stored in /var/www/s3mysqlbackup.sh, we need to add a cron task to run it automatically:

# Edit the crontab
env EDITOR=nano crontab -e
# Add the following lines:
# Run the database backup script at 3am
0 3 * * * bash /var/www/s3mysqlbackup.sh >/dev/null 2>&1

4 – Don’t expose the script!

If for some reason you put this script in a public folder (not sure why you would do this), you should add the following to your .htaccess or httpd.conf file to prevent public access to the files:

### Deny public access to shell files
<Files *.sh>
Order allow,deny
Deny from all
</Files>

s3mysqlbackup.sh
#!/bin/bash

# Based on https://gist.github.com/2206527

# Be pretty
echo -e " "
echo -e " . ____ . ______________________________"
echo -e " |/ \| | |"
echo -e "[| \e[1;31m? ?\e[00m |] | S3 MySQL Backup Script v.0.1 |"
echo -e " |___==___| / © oodavid 2012 |"
echo -e " |______________________________|"
echo -e " "

# Basic variables
mysqlpass="ROOTPASSWORD"
bucket="s3://bucketname"

# Timestamp (sortable AND readable)
stamp=`date +"%s - %A %d %B %Y @ %H%M"`

# List all the databases
databases=`mysql -u root -p$mysqlpass -e "SHOW DATABASES;" | tr -d "| " | grep -v "\(Database\|information_schema\|performance_schema\|mysql\|test\)"`

# Feedback
echo -e "Dumping to \e[1;32m$bucket/$stamp/\e[00m"

# Loop the databases
for db in $databases; do

  # Define our filenames
  filename="$stamp - $db.sql.gz"
  tmpfile="/tmp/$filename"
  object="$bucket/$stamp/$filename"

  # Feedback
  echo -e "\e[1;34m$db\e[00m"

  # Dump and zip
  echo -e " creating \e[0;35m$tmpfile\e[00m"
  mysqldump -u root -p$mysqlpass --force --opt --databases "$db" | gzip -c > "$tmpfile"

  # Upload
  echo -e " uploading..."
  s3cmd put "$tmpfile" "$object"

  # Delete
  rm -f "$tmpfile"

done

# Jobs a goodun
echo -e "\e[1;32mJobs a goodun\e[00m"

Here is another variation on the same idea: a script that dumps each database separately to a dated directory, then syncs that directory to S3.

#!/bin/bash
# Shell script to backup MySQL database
# CONFIG - Only edit the below lines to setup the script
# ===============================
MyUSER="root"           # USERNAME
MyPASS="password"       # PASSWORD
MyHOST="localhost"      # Hostname
S3Bucket="mysql-backup" # S3 Bucket
# DO NOT BACKUP these databases
IGNORE="test"
# DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING
# ===============================
# Linux bin paths, change this if it can not be autodetected via which command
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
CHOWN="$(which chown)"
CHMOD="$(which chmod)"
GZIP="$(which gzip)"
# Backup Dest directory, change this if you have some other location
DEST="/backup"
# Main directory where backup will be stored
MBD="$DEST/mysql-$(date +"%d-%m-%Y")"
# Get hostname
HOST="$(hostname)"
# Get date in dd-mm-yyyy format
NOW="$(date +"%d-%m-%Y")"
# File to store current backup file
FILE=""
# Store list of databases
DBS=""
[ ! -d $MBD ] && mkdir -p $MBD || :
# Only root can access it!
$CHOWN 0.0 -R $DEST
$CHMOD 0600 $DEST
# Get all database list first
if [ "$MyPASS" == "" ];
then
  DBS="$($MYSQL -u $MyUSER -h $MyHOST -Bse 'show databases')"
else
  DBS="$($MYSQL -u $MyUSER -h $MyHOST -p$MyPASS -Bse 'show databases')"
fi
for db in $DBS
do
  skipdb=-1
  if [ "$IGNORE" != "" ];
  then
    for i in $IGNORE
    do
      [ "$db" == "$i" ] && skipdb=1 || :
    done
  fi
  if [ "$skipdb" == "-1" ] ; then
    FILE="$MBD/$db.$HOST.$NOW.gz"
    # dump database to file and gzip
    if [ "$MyPASS" == "" ]; then
      $MYSQLDUMP -u $MyUSER -h $MyHOST $db | $GZIP -9 > $FILE
    else
      $MYSQLDUMP -u $MyUSER -h $MyHOST -p$MyPASS $db | $GZIP -9 > $FILE
    fi
  fi
done
# copy mysql backup directory to S3
s3cmd sync -rv --skip-existing $MBD s3://$S3Bucket/

You can use the above script in a cron job too, so your server is backed up regularly. The cron job below will run the MySQL database backup script every day at 2am:

# Run every day at 2am
0 2 * * * /path/to/sql_backup.sh

Soft Link and Hard Link in Linux

Linux allows users to keep the same file data under one or more different names, in different places, in two ways:

  1. Hard Link
  2. Symbolic Link
A hard link is a directory entry that associates a name with a file on a file system. The term is used to describe multiple links to a single file on a file system: say we create two names for the same data, open the file under one name, and make some changes; when the file is opened under the other name, we can see those changes there as well.
Hard links can only be created to files on the same volume. If a link to a file on a different volume is needed, it may be created with a symbolic link.
How to Create a Hard Link in Linux?
Use the following syntax to create a hard link between two files on the same file system.
ln {source} {link}
Where ,

  • Source is an existing file.
  • link is the file to create (the hard link).
To create a hard link for a file foo.txt, follow the commands below.
echo 'This is a test file for Hard Link' > foo.txt
ln foo.txt bar.txt
ls -li bar.txt foo.txt
Here is how the output looks; note that both names share the same inode number and the link count is 2:
[root@localhost prem]# ls -li bar.txt foo.txt
152667 -rw-r--r-- 2 root root 34 May 18 21:07 bar.txt
152667 -rw-r--r-- 2 root root 34 May 18 21:07 foo.txt
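To see the shared data in action, append to one name and read from the other (a quick hypothetical check with the files above):

echo 'appended line' >> bar.txt
cat foo.txt   # the appended line shows up here too: both names point at the same inode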

A symbolic link, also known as a symlink or soft link, is a special type of file which contains a reference to another file, much like a pointer that stores the address of what it points to. Because the symlink stores only a reference, any change made to the target file is immediately visible through the link. Symlinks are mostly used when the two files are on different partitions in Linux.
How to Create a Symlink or Soft Link in Linux?
Use the below syntax to create soft link:
ln -s {target filename} {symbolic file name}
The -s option creates a soft link.
For example :
ln -s /home/httpd/index.html /home/prem/fwss.html
ls -l
Here is the sample output.
lrwxrwxrwx 1 prem prem 16 2013-05-16 10:13 /home/prem/fwss.html -> /home/httpd/index.html
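One practical difference from hard links is easy to demonstrate: if the target is removed, the symlink stays behind but points at nothing. A hypothetical check with the files above:

rm /home/httpd/index.html
cat /home/prem/fwss.html   # fails with 'No such file or directory': the link is now dangling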

How to Sync Files to Amazon S3 on Linux

Amazon’s Simple Storage Service (S3) has a lot to like. It’s cheap, can be used for storing a little bit of data or as much as you want, and it can be used for distributing files publicly or just storing your private data. Let’s look at how you can take advantage of Amazon S3 on Linux.

Amazon S3 isn’t what you’d want to use for storing just a little bit of personal data. For that, you might want to use Dropbox, SpiderOak, ownCloud, or SparkleShare. Which one depends on how much data, your tolerance for non-free software, and which features you prefer. For my work files, I use Dropbox – in large part because of its LAN sync feature.

But S3 is really good if you need to make backups of a large amount of data, or smaller amounts but you need an offsite backup. It’s also good if you want to use S3 to host files for public distribution and don’t have a server or need to offload data sharing because of capacity issues. Maybe you just want to use it to host a blog, cheaply. S3 also has some nifty features for content distribution and data storage from multiple regions, which we’ll get into another time.

Getting the Tools

You can use S3 in a number of ways on Linux, depending on how you’d like to manage your backups. If you look around, you’ll find a bunch of tools that support S3, including:

  • S3 Tools and Duplicity are command line utilities that support S3. S3 Tools, as the name implies, focuses on Amazon S3. Duplicity has S3 support, but also supports several other methods of transferring files.
  • Deja Dup is a fairly simple GNOME app for backups, which has S3 support thanks to Duplicity.
  • Dragon Disk is a freeware (but not free software) utility that provides more fine-grained control of backups to S3. It also supports Google Cloud Storage and other cloud storage software.

For the purposes of this article, I’m going to focus on S3 Tools. If you’re a GNOME user, it should take very little effort to set up Deja Dup for S3. We’ll tackle Duplicity and Dragon Disk another time.

S3 Tools

You might find S3 Tools in your distribution’s repositories. If not, the S3 Tools folks have package repositories and have support for several versions of Red Hat, CentOS, Fedora, openSUSE, SUSE Linux Enterprise, Debian, and Ubuntu. You’ll also find instructions on adding the tools on the package repositories page.

Once you have S3 Tools installed, you need to configure it with your Amazon S3 credentials. If you haven’t signed up for them yet, hit the Sign Up button at the top of the S3 overview page. You’ll also want to look at the pricing, which starts at $0.125 per GB per month.

The pricing calculator can help you get an idea how much it would cost to store your data in S3. For example, if you’re storing 100GB in S3, it would run about $12.50 per month – before any costs for data transfer out of S3. Transfer in to S3 is free. Amazon also charges for get/put requests and so forth – so if you’re using S3 to serve up content, then the pricing is going to be higher.

Back to the tools. You need to configure s3cmd (the command line utility from the S3 Tools project) like so:

s3cmd --configure

It will walk you through adding your Amazon credentials and GPG information if you want to encrypt files while stored on S3. Amazon’s storage is supposed to be private, but you should always assume that data stored on remote servers is potentially visible to others. Since I’m storing information that has no real need for privacy (WordPress backups, MP3s, photos that I’d happily publish online anyway) I don’t worry overmuch about encrypting for storage on S3.

There’s another advantage of foregoing GPG encryption, which is that s3cmd can use an rsync-like algorithm for syncing files instead of just re-copying everything.

Now to copy files and use s3cmd sync. You’ll find that the s3cmd syntax mimics standard *nix commands. Want to see what is being stored in your S3 account? Use s3cmd ls to show all buckets. (Amazon calls ’em buckets instead of directories.)

Want to copy between buckets? Use s3cmd cp bucket1 bucket2. Note that buckets are specified by the syntax s3://bucketname.

To put files in a bucket, use s3cmd put filename s3://bucket. To get files, use s3cmd get s3://bucket/filename local. To upload directories, you need to use the --recursive option.

But if you want to sync files and save yourself some trouble down the road, there’s the sync command. It’s dead simple to use:

s3cmd sync directory s3://bucket/

The first time, it will copy up all files. The next time it will only copy up files that don't already exist on Amazon S3. However, if you want to get rid of files that you have removed locally, use the --delete-removed option. Note that you should test this with the --dry-run option first, because you can accidentally delete files that way.
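For instance, a cautious workflow might look like this (the directory and bucket names here are placeholders):

# Preview what would be uploaded and deleted; nothing is changed yet
s3cmd sync --dry-run --delete-removed ~/backups/ s3://my-bucket/backups/
# Once the preview looks right, run it for real
s3cmd sync --delete-removed ~/backups/ s3://my-bucket/backups/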

It’s pretty simple to use s3cmd, and you should look at its man page as well. It even has some support for the CloudFront CDN service if you need that. Happy syncing!

Download

S3cmd source code and packages for major Linux distributions can be downloaded from the Download page, which also lists the currently supported *nix distributions.
Steps to install s3cmd
  1. Log in as the superuser (root) and launch a terminal.
  2. Go to /etc/yum.repos.d (you can use the ftp or wget commands).
  3. Download the respective s3tools.repo file for your distribution. For example, wget http://s3tools.org/repo/CentOS_5/s3tools.repo if you're on CentOS 5.
  4. Run yum install s3cmd if you don't have the s3cmd rpm package installed yet.
  5. Do yum upgrade s3cmd if you already have the s3cmd rpm installed and want a newer version.
  6. You will be asked to accept a new GPG key – answer yes (twice).
  7. That's it. The next time you run yum upgrade you'll automatically get the very latest s3cmd for your system.

s3cmd

s3cmd is a free Linux command line tool for uploading and downloading data to and from your Amazon S3 account.

Download and install s3tools manually or do what I did and add their package repository to your package manager for a much easier install.

After installing s3cmd configure it by running the following command:
# s3cmd --configure
Enter your Access Key ID and Secret Access Key discussed earlier and use the default settings for the rest of the options unless you know otherwise.
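The prompts look something like the following (exact wording varies between s3cmd versions, and the key values shown are placeholders):

# s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Access Key: AKIAXXXXXXXXXXXXXXXX
Secret Key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX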

If you haven’t already created a bucket you can do that now with s3cmd:
# s3cmd mb s3://unique-bucket-name
List your current buckets to make sure you successfully created one:
# s3cmd ls
2010-10-30 02:15 s3://your-bucket-name

You can now upload, list, and download content:
# s3cmd put somefile.txt s3://your-bucket-name/somefile.txt
somefile.txt -> s3://your-bucket-name/somefile.txt [1 of 1]
17835 of 17835 100% in 0s 35.79 kB/s done
# s3cmd ls s3://your-bucket-name
2010-10-30 02:20 17835 s3://your-bucket-name/somefile.txt
# s3cmd get s3://your-bucket-name/somefile.txt somefile-2.txt
s3://your-bucket-name/somefile.txt -> somefile-2.txt [1 of 1]
17835 of 17835 100% in 0s 39.77 kB/s done

A much better and more advanced method of backing up your data is to use ‘sync’ instead of ‘put’ or ‘get’. Read more about how I use sync in the next section.

Automate backup with a shell script and cron job

Below is a sample of the shell script I wrote to backup one of my servers:
#!/bin/sh
# Synchronize /root with S3
s3cmd sync --recursive /root/ s3://my-bucket-name/root/
# Synchronize /home with S3
s3cmd sync --recursive /home/ s3://my-bucket-name/home/
# Synchronize crontabs with S3
s3cmd sync /var/spool/cron/ s3://my-bucket-name/cron/
# Synchronize /var/www/vhosts with S3
s3cmd sync --exclude 'mydomain.com/some-directory/*.jpg' --recursive /var/www/vhosts/ s3://my-bucket-name/vhosts/
# Dump all MySQL databases to a file and upload it to S3
mysqldump -u root --password=mysqlpassword --all-databases --result-file=/root/all-databases.sql
s3cmd put /root/all-databases.sql s3://my-bucket-name/mysql/
rm -f /root/all-databases.sql

I use 's3cmd sync --recursive /root/ s3://my-bucket-name/root/' and 's3cmd sync --recursive /home/ s3://my-bucket-name/home/' to synchronize all data in the local /root and /home directories, including their subdirectories, with S3. I use 'sync' instead of 'put' because I do not always know exactly what files are stored in these folders. I want everything backed up, including any new files created in the future.

With 's3cmd sync /var/spool/cron/ s3://my-bucket-name/cron/' I omit '--recursive' because I do not care about any subdirectories (there aren't any).

With "s3cmd sync --exclude 'mydomain.com/some-directory/*.jpg' --recursive /var/www/vhosts/ s3://my-bucket-name/vhosts/" I synchronize /var/www/vhosts but exclude all jpg files inside a particular directory because they are replaced very frequently by new versions and are unimportant to me once they are a few minutes old.

Using mysqldump I export all databases to a text file that can easily be used to recreate them if needed. I upload the newly created file using 's3cmd put /root/all-databases.sql s3://my-bucket-name/mysql/'.

To read more about sync and its options such as '--dry-run', '--skip-existing', and '--delete-removed', read http://s3tools.org/s3cmd-sync.

Create a cron job to execute your shell script as often as you like. Now you can be less worried about losing all your important data.
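For example, a crontab entry along these lines (the script path and schedule are placeholders) runs the backup every night:

# Sync the server to S3 every night at 3:30am
30 3 * * * /root/s3backup.sh >/var/log/s3backup.log 2>&1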

Creating Internal Root Servers Using Cache.dns

To configure your Windows NT DNS server to query internal root servers instead of the Internet root servers, follow these steps:

  1. Stop the Microsoft DNS server using one of the following methods:
    • Use the Services tool in Control Panel.
    • Or type the following at a command prompt and press ENTER: net stop DNS
  2. Use a text editor (Notepad.exe) to open the Cache.dns file, which is located in the %Systemroot%\System32\DNS folder.
  3. Delete the Internet root server entries and add your internal root servers (see the sample entries after these steps).
  4. Save the file as Cache.dns and close the editor.
  5. Restart the Microsoft DNS server service.
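For reference, internal root hints in Cache.dns use the same format as the Internet root hints they replace; a sketch with a placeholder server name and IP address:

; internal root hints
.                            3600000  NS  internalroot.corp.example.
internalroot.corp.example.   3600000  A   10.0.0.10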

When you run DNS Manager and drill down into the Cache object of your DNS server, you will see the internal root servers listed. DNS Manager may show duplicate entries for the same record even though the zone file only contains the record once. This will not affect functionality.

DISM Operating System Package Servicing Command-Line Options

Applies To: Windows 8, Windows 8.1, Windows Server 2012, Windows Server 2012 R2

[This topic is pre-release documentation and is subject to change in future releases. Blank topics are included as placeholders.]

Operating system package-servicing commands can be used offline to install, remove, or update Windows® packages provided as cabinet (.cab) or Windows Update Stand-alone Installer (.msu) files. Packages are used by Microsoft® to distribute software updates, service packs, and language packs. Packages can also contain Windows features. You can also use these servicing commands to enable or disable Windows features, either offline or on a running Windows installation. Features are optional components for the core operating system.

The base syntax for servicing a Windows image using DISM is:

DISM.exe {/Image:<path_to_image_directory> | /Online} [dism_global_options] {servicing_option} [<servicing_argument>]

The following operating system package-servicing options are available for an offline image:

DISM.exe /Image:<path_to_image_directory> [/Get-Packages | /Get-PackageInfo | /Add-Package | /Remove-Package ] [/Get-Features | /Get-FeatureInfo | /Enable-Feature | /Disable-Feature ] [/Cleanup-Image]

The following operating system package-servicing options are available for a running operating system:

DISM.exe /Online [/Get-Packages | /Get-PackageInfo | /Add-Package | /Remove-Package ] [/Get-Features | /Get-FeatureInfo | /Enable-Feature | /Disable-Feature ] [/Cleanup-Image]

This section describes how you can use each operating system package-servicing option. These options are not case sensitive. However, feature names are case sensitive if you are servicing a Windows image other than Windows® 8.

When used immediately after a package-servicing command-line option, information about the option and the arguments is displayed.

Additional topics might become available when an image is specified.

Examples:

Dism /Image:C:\test\offline /Add-Package /?

Dism /Online /Get-Packages /?

/Get-Packages

Displays basic information about all packages in the image. Use the /Format:Table or /Format:List argument to display the output as a table or a list.

Examples:

Dism /Image:C:\test\offline /Get-Packages

Dism /Image:C:\test\offline /Get-Packages /Format:Table

Dism /Online /Get-Packages

/Get-PackageInfo

Displays detailed information about a package provided as a .cab file. Only .cab files can be specified. You cannot use this command to obtain package information for .msu files. /PackagePath can point to either a .cab file or a folder.

You can use the /Get-Packages option to find the name of the package in the image, or you can specify the path to the .cab file. The path to the .cab file should point to the original source of the package, not to where the file is installed on the offline image.

Examples:

Dism /Image:C:\test\offline /Get-PackageInfo /PackagePath:C:\packages\package.cab

Dism /Image:C:\test\offline /Get-PackageInfo /PackageName:Microsoft.Windows.Calc.Demo~6595b6144ccf1df~x86~en~1.0.0.0

/Add-Package

Installs a specified .cab or .msu package in the image. Multiple packages can be added on one command line. The applicability of each package will be checked. If the package cannot be applied to the specified image, you will receive an error message. Use the /IgnoreCheck argument if you want the command to process without checking the applicability of each package.

Use the /PreventPending option to skip the installation of the package if the package or Windows image has pending online actions. This option can only be used when servicing Windows 8, Windows Server 2012, or Windows® Preinstallation Environment (Windows PE) 4.0 images.

/PackagePath can point to:

  • A single .cab or .msu file.
  • A folder that contains a single expanded .cab file.
  • A folder that contains a single .msu file.
  • A folder that contains multiple .cab or .msu files.
Note
If /PackagePath points to a folder that contains .cab or .msu files at its root, any subfolders will also be recursively checked for .cab and .msu files.

Examples:

Dism /Image:C:\test\offline /LogPath:AddPackage.log /Add-Package /PackagePath:C:\packages\package.msu

Dism /Image:C:\test\offline /Add-Package /PackagePath:C:\packages\package1.cab /PackagePath:C:\packages\package2.cab /IgnoreCheck

Dism /Image:C:\test\offline /Add-Package /PackagePath:C:\test\packages\package.cab /PreventPending

/Remove-Package

Removes a specified .cab file package from the image. Only .cab files can be specified. You cannot use this command to remove .msu files.

Note
Using this command to remove a package from an offline image will not reduce the image size.

You can use the /PackagePath option to point to the original source of the package, specify the path to the CAB file, or you can specify the package by name as it is listed in the image. Use the /Get-Packages option to find the name of the package in the image.

Examples:

Dism /Image:C:\test\offline /LogPath:C:\test\RemovePackage.log /Remove-Package /PackageName:Microsoft.Windows.Calc.Demo~6595b6144ccf1df~x86~en~1.0.0.0

Dism /Image:C:\test\offline /LogPath:C:\test\RemovePackage.log /Remove-Package /PackageName:Microsoft.Windows.Calc.Demo~6595b6144ccf1df~x86~en~1.0.0.0 /PackageName:Microsoft-Windows-MediaPlayer-Package~31bf3856ad364e35~x86~~6.1.6801.0

Dism /Image:C:\test\offline /LogPath:C:\test\RemovePackage.log /Remove-Package /PackagePath:C:\packages\package1.cab /PackagePath:C:\packages\package2.cab

/Get-Features

Displays basic information about all features (operating system components that include optional Windows foundation features) in a package. You can use the /Get-Features option to find the name of the package in the image, or you can specify the path to the original source of the package. If you do not specify a package name or path, all features in the image will be listed. /PackagePath can point to either a .cab file or a folder.

Feature names are case sensitive if you are servicing a Windows image other than Windows 8.

Use the /Format:Table or /Format:List argument to display the output as a table or a list.

Examples:

Dism /Image:C:\test\offline /Get-Features

Dism /Image:C:\test\offline /Get-Features /Format:List

Dism /Image:C:\test\offline /Get-Features /PackageName:Microsoft.Windows.Calc.Demo~6595b6144ccf1df~x86~en~1.0.0.0

Dism /Image:C:\test\offline /Get-Features /PackagePath:C:\packages\package1.cab

/Get-FeatureInfo

Displays detailed information about a feature. You must use /FeatureName. Feature names are case sensitive if you are servicing a Windows image other than Windows 8. You can use the /Get-Features option to find the name of the feature in the image.

/PackageName and /PackagePath are optional and can be used to find a specific feature in a package.

Examples:

Dism /Image:C:\test\offline /Get-FeatureInfo /FeatureName:Hearts

Dism /Image:C:\test\offline /Get-FeatureInfo /FeatureName:Hearts /PackagePath:C:\packages\package.cab

/Enable-Feature

Enables or updates the specified feature in the image. You must use the /FeatureName option. Feature names are case sensitive if you are servicing a Windows image other than Windows 8. Use the /Get-Features option to find the name of the feature in the image.

You can specify the /FeatureName option multiple times in one command line for features that share the same parent package.

You do not have to specify the package name using the /PackageName option if the package is a Windows Foundation Package. Otherwise, use /PackageName to specify the parent package of the feature.

You can restore and enable a feature that has previously been removed from the image. Use the /Source argument to specify the location of the files that are required to restore the feature. The source of the files can be the Windows folder in a mounted image, for example c:\test\mount\Windows. You can also use a Windows side-by-side folder as the source of the files, for example z:\sources\SxS.

If you specify multiple /Source arguments, the files are gathered from the first location where they are found and the rest of the locations are ignored. If you do not specify a /Source for a feature that has been removed, the default location in the registry is used or, for online images, Windows Update (WU) is used.

Use /LimitAccess to prevent DISM from contacting WU for online images.

Use /All to enable all parent features of the specified feature.

The /Source, /LimitAccess, and /All arguments can only be used when servicing Windows 8, Windows Server 2012, or Windows® Preinstallation Environment (Windows PE) 4.0 images.

Examples:

Dism /Online /Enable-Feature /FeatureName:Hearts /All

Dism /Online /Enable-Feature /FeatureName:Calc /Source:c:\test\mount\Windows /LimitAccess

Dism /Image:C:\test\offline /Enable-Feature /FeatureName:Calc /PackageName:Microsoft.Windows.Calc.Demo~6595b6144ccf1df~x86~en~1.0.0.0

/Disable-Feature

Disables the specified feature in the image. You must use the /FeatureName option. Feature names are case sensitive if you are servicing a Windows image other than Windows 8. Use the /Get-Features option to find the name of the feature in the image.

You can specify /FeatureName multiple times in one command line for features in the same parent package.

You do not have to specify the package name using the /PackageName option if the package is a Windows Foundation Package. Otherwise, use /PackageName to specify the parent package of the feature.

Use /Remove to remove a feature without removing the feature’s manifest from the image. This option can only be used when servicing Windows 8 or Windows Server 2012. The feature will be listed as “Removed” when you use /Get-FeatureInfo to display feature details and can be restored and enabled using /Enable-Feature with the /Source option.

Examples:

Dism /Online /Disable-Feature /FeatureName:Hearts

Dism /Image:C:\test\offline /Disable-Feature /FeatureName:Calc /PackageName:Microsoft.Windows.Calc.Demo~6595b6144ccf1df~x86~en~1.0.0.0

/Cleanup-Image

Performs cleanup or recovery operations on the image.

If you experience a boot failure, you can use the /RevertPendingActions option to try to recover the system. The operation reverts all pending actions from the previous servicing operations because these actions might be the cause of the boot failure. The /RevertPendingActions option is not supported on a running operating system or a Windows PE or Windows Recovery Environment (Windows RE) image.

Important
You should use the /RevertPendingActions option only in a system-recovery scenario on a Windows image that did not boot.

Use /SPSuperseded to remove any backup files created during the installation of a service pack. Use /HideSP to prevent the service pack from being listed in the Installed Updates Control Panel.

Important
The service pack cannot be uninstalled after the /SPSuperseded operation is completed.

Use /StartComponentCleanup to clean up the superseded components and reduce the size of the component store. Use /ResetBase to reset the base of superseded components, which can further reduce the component store size.

Warning
Installed Windows updates can’t be uninstalled after running /StartComponentCleanup with the /ResetBase option.

Use /AnalyzeComponentStore to create a report of the component store. For more information about the report and how to use the information provided in the report, see http://go.microsoft.com/fwlink/?LinkId=293367.

Use /CheckHealth to check whether the image has been flagged as corrupted by a failed process and whether the corruption can be repaired.

Use /ScanHealth to scan the image for component store corruption. This operation will take several minutes.

Use /RestoreHealth to scan the image for component store corruption, and then perform repair operations automatically. This operation will take several minutes.

Use /Source with /RestoreHealth to specify the location of known good versions of files that can be used for the repair, such as a path to the Windows directory of a mounted image.

If you specify multiple /Source arguments, the files are gathered from the first location where they are found and the rest of the locations are ignored. If you do not specify a /Source for a feature that has been removed, the default location in the registry is used or Windows Update (WU) is used for online images.

Use /LimitAccess to prevent DISM from contacting WU for repair of online images.

/AnalyzeComponentStore and /ResetBase can only be used when servicing Windows 8.1 Preview and Windows Server 2012 R2 Preview images.

/CheckHealth, /ScanHealth, /RestoreHealth, /Source, and /LimitAccess cannot be used when servicing a version of Windows that is earlier than Windows 8 or Windows Server 2012 images.

/HideSP and /SPSuperseded cannot be used when servicing a version of Windows that is earlier than Windows® 7 Service Pack 1 (SP1).

Examples:

Dism /Image:C:\test\offline /Cleanup-Image /RevertPendingActions

Dism /Image:C:\test\offline /Cleanup-Image /SPSuperseded /HideSP

Dism /Online /Cleanup-Image /ScanHealth

Dism /Online /Cleanup-Image /RestoreHealth /Source:c:\test\mount\windows /LimitAccess

How to verify that SRV DNS records have been created for a domain controller

The SRV record is a Domain Name System (DNS) resource record that is used to identify computers that host specific services. SRV resource records are used to locate domain controllers for Active Directory. To verify SRV locator resource records for a domain controller, use one of the following methods.

DNS Manager

After you install Active Directory on a server running the Microsoft DNS service, you can use the DNS Management Console to verify that the appropriate zones and resource records are created for each DNS zone.

Active Directory creates its SRV records in the following folders, where Domain_Name is the name of your domain:

Forward Lookup Zones/Domain_Name/_msdcs/dc/_sites/Default-First-Site-Name/_tcp
Forward Lookup Zones/Domain_Name/_msdcs/dc/_tcp

In these locations, an SRV record should appear for the following services:

_kerberos
_ldap

Netlogon.dns

If you are using non-Microsoft DNS servers to support Active Directory, you can verify SRV locator resource records by viewing Netlogon.dns. Netlogon.dns is located in the %systemroot%\System32\Config folder. You can use a text editor, such as Microsoft Notepad, to view this file.

The first record in the file is the domain controller’s Lightweight Directory Access Protocol (LDAP) SRV record. This record should appear similar to the following:

_ldap._tcp.Domain_Name
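A complete entry in Netlogon.dns follows standard DNS SRV record syntax; with the placeholder names filled in, it looks something like this:

_ldap._tcp.Domain_Name. 600 IN SRV 0 100 389 Server_Name.Domain_Name.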

Nslookup

Nslookup is a command-line tool that displays information you can use to diagnose Domain Name System (DNS) infrastructure.
To use Nslookup to verify the SRV records, follow these steps:

  1. On your DNS, click Start, and then click Run.
  2. In the Open box, type cmd.
  3. Type nslookup, and then press ENTER.
  4. Type set type=all, and then press ENTER.
  5. Type _ldap._tcp.dc._msdcs.Domain_Name, where Domain_Name is the name of your domain, and then press ENTER.

Nslookup returns one or more SRV service location records that appear in the following format, where Server_Name is the host name of a domain controller, and where Domain_Name is the domain the domain controller belongs to, and Server_IP_Address is the domain controller’s Internet Protocol (IP) address:

Server: localhost
Address:  127.0.0.1
_ldap._tcp.dc._msdcs.Domain_Name
SRV service location:
	priority	= 0
	weight		= 100
	port		= 389
	srv hostname	= Server_Name.Domain_Name
Server_Name.Domain_Name	internet address = Server_IP_Address

Add-NetLbfoTeamNic

Outputs

The output type is the type of the objects that the cmdlet emits.

MSFT_NetLbfoTeamNic

This cmdlet produces an MSFT_NetLbfoTeamNic object containing the newly created team interface.

Examples
Example 1: Add a team interface

This command adds a team interface with VLAN ID 42 to the specified team named Team1.

PS C:\> Add-NetLbfoTeamNIC -Team Team1 -VlanID 42
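To confirm the team interface was created, you can list the team's interfaces afterwards (an optional check using the same team name):

PS C:\> Get-NetLbfoTeamNic -Team Team1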

HOW TO SEND MAIL FROM SHELL SCRIPT

#!/bin/bash
# Use: To Send Mail From Shell Script in Linux
# Prepared By: Online cyber Study Group
TO_ADDRESS="testusr@yourdomain.com"
FROM_ADDRESS="root@yourdomain.com"
SUBJECT="HOW TO SEND MAIL FROM SHELL SCRIPT"
BODY="This is a free webmail server, also called an open source mail server, and we have written a bash script titled HOW TO SEND MAIL FROM SHELL SCRIPT"
echo "${BODY}" | mail -s "${SUBJECT}" "${TO_ADDRESS}" -- -r "${FROM_ADDRESS}"

:wq!

In the above bash script I have used the four variables below, along with two mail switches.
TO_ADDRESS = the mail ID of the user you want to send the mail to
FROM_ADDRESS = the sender's mail ID
SUBJECT = the subject of the mail
BODY = the body message
-s = sets the subject
-r = sets the sender (From) address

How to send mail in shell script with attachment

#!/bin/bash
# Use: send mail in shell script with attachment
# Prepared By: Online cyber Study Group
TO_ADDRESS="testusr@domain.com"
FROM_ADDRESS="sender@yourdomain.com"
SUBJECT="linux mail send attachment"
BODY_FILE="server.dat"
ATTACHMENT_FILE="serverlogfile.txt"
CC_LIST="testuser2@gmail.com,testuser3@redhat.com,testuser3@onlinecyberstudy.com"
# Body first, then the uuencoded attachment, piped into mail together
(cat "${BODY_FILE}"; uuencode "${ATTACHMENT_FILE}" "${ATTACHMENT_FILE}") | mail -s "${SUBJECT}" -c "${CC_LIST}" "${TO_ADDRESS}" -- -r "${FROM_ADDRESS}"

:wq!

In the CC_LIST variable you can put a group mail ID or the additional user IDs you want to copy in; note that the mail command expects a comma-separated list. The -s option of the mail command sets the mail subject and the -c option puts more users in CC (carbon copy). uuencode is used to send an attachment file with the mail command; its two arguments are the local file and the name the attachment should carry.

How to check mailbox size of all accounts in Zimbra mail server

I would like to check the mailbox size of every user on my Zimbra mail server. I can get these sizes with a shell script built around two commands:

1. zmprov gaa => queries all accounts on the Zimbra server.
2. zmmailbox -z -m your-account gms => gets the mailbox size of the your-account account.

For all users we can use a bash script.

# su zimbra
#vim zmchkmailsize.sh

#!/bin/sh
WHO=`whoami`
if [ $WHO = "zimbra" ]
then
  all_account=`zmprov -l gaa`;
  for account in ${all_account}
  do
    mb_size=`zmmailbox -z -m ${account} gms`;
    echo "Mailbox size of ${account} = ${mb_size}";
  done
else
  echo "Execute this script as user zimbra (\"su - zimbra\")"
fi

:wq
# chmod 755 zmchkmailsize.sh
# ./zmchkmailsize.sh

OUTPUT:

Mailbox size of admin@example.com = 15.74 KB

Mailbox size of spam.g1oxhlgg@example.com = 0 B

Mailbox size of ham.hp_pg_h5@example.com = 0 B

Mailbox size of virus-quarantine.cs29ig2sm@example.com = 0 B

Mailbox size of xxx.yyy@example.com = 1.70 KB

Mailbox size of xxx.yyy@example.com = 1.29 KB

Mailbox size of xxx.yyy@example.com = 0 B

Mailbox size of xxx.yyy@example.com = 0 B

Run WinRM Commands on the Target Computer

To use WinRM commands on the installer computer, follow these steps:

Click Start, click All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.

In the command prompt window, type the following commands, pressing ENTER after each one:

winrm set winrm/config/client/auth @{CredSSP="True"}
winrm set winrm/config/client @{TrustedHosts="*"}

Type exit, and then press Enter.
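If you later want to confirm that the settings took effect, you can query the client configuration (an optional check):

winrm get winrm/config/client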


Retrieve the IP configurations of Server2.

winrs -r:server2 ipconfig