How to Install MQ on Red Hat Linux

1.1 User creation
Log in as root to install MQ.
Create the following user with a group name of "mqm", which will be used to run MQ.

1.2 Create mount points to install MQ
Create an mqm folder under /opt and /var, where WebSphere MQ gets installed.
root@li-dev01/> cd /opt/

1.3 Kernel configuration parameters
Make the following kernel changes by editing /etc/sysctl.conf:
# mq prerequisites
If you wish to load these sysctl values immediately, enter the command sysctl -p.

1.4 Max open files
If the system is heavily loaded, you might need to increase the maximum possible number of open files. If your distribution supports the proc filesystem, you can query the current limit by issuing the following command:
cat /proc/sys/fs/file-max
If you are using a pluggable security module such as PAM (Pluggable Authentication Modules), ensure that it does not unduly restrict the number of open files for the 'mqm' user. For a standard WebSphere MQ queue manager, set the 'nofile' value to 10240 or more for the 'mqm' user. We suggest you add this command to a startup script in /etc/rc.d/…

1.5 Max processes
A running WebSphere MQ queue manager consists of a number of threaded programs, and each connected application increases the number of threads running in the queue manager processes. Ensure that the maximum number of processes the mqm user is allowed to run is not unduly restricted by one of the pluggable security modules such as PAM. Set 'nproc' for the mqm user to 4090 or more.

1.6 Root access privilege

1.7 64-bit considerations
The recommended way of using WebSphere MQ commands and your applications is as follows:

2. Installation of WebSphere MQ on Linux

2.1 Install RPMs
2.1.1 Log in as root. Extract the WebSphere MQ files to the current directory (/home/test/Desktop/MQ) and make sure all RPMs are in that directory.
2.1.2 Run the mqlicense.sh script.
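The kernel parameters (1.3) and per-user limits (1.4, 1.5) can be collected in one place. A sketch of the relevant entries; the values here are illustrative, so check the IBM documentation for the numbers recommended for your MQ version:

```
# /etc/sysctl.conf -- MQ prerequisites (illustrative values)
kernel.shmmni = 4096
kernel.msgmni = 1024
fs.file-max = 524288

# /etc/security/limits.conf -- limits for the mqm user
mqm  soft  nofile  10240
mqm  hard  nofile  10240
mqm  soft  nproc   4096
mqm  hard  nproc   4096
```

Run sysctl -p to load the sysctl values immediately; the limits.conf entries take effect at the mqm user's next login.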
If you want to view a text-only version of the license, which can be read by a screen reader, type:
# run this command from the directory where the WebSphere MQ files were extracted

2.1.3 Install the components:
rpm -ivh MQSeriesRuntime-6.0.0-0.i386.rpm MQSeriesSDK-6.0.0-0.i386.rpm MQSeriesServer-6.0.0-0.i386.rpm MQSeriesJava-6.0.0-0.i386.rpm
For 64-bit: you can reinstall the /bin/sh shell using RPM, or specify the RPM option --nodeps to disable dependency checking during installation of WebSphere MQ.
Install the rest of the components. When you are installing MQSeriesIES33-7.0.0-0.x86_64.rpm, if you get an error saying that libstdc++.so is needed, then go to the link http://rpmfind.net/linux/rpm2html/search.php?query=libstdc%2B%2B.so.5&submit=Search+… or use:
rpm -ivh --nodeps MQSeriesIES33-7.0.0-0.x86_64.rpm

2.1.4 Client install
Install the minimum components. For 64-bit: Then:
================
2.2.1 Create a sample queue
Log in as the mqm user:
mqm@li-dev01>crtmqm -q QM_test.queue.manager   # -q indicates the default queue manager
# if the queue manager is not created and gives the AMQ8081 error with code 863:
tar -xvf 6.0.2-WS-MQ-LinuxIA32-FP0005.tar
rpm -ivh all the below rpms
• MQSeriesRuntime
# if the queue manager is not created and gives the AMQ8108 error with code 893, then install IBMJava2-142-ia32-SDK-1.4.2-6.0.i386.rpm from the fix pack; when installing this rpm, if you get an error saying that libXp.so is needed, go to the link, download that, and install it first
mqm@li-dev01>strmqm
mqm@li-dev01>runmqsc
define qlocal (test_QL.queue)

2.2.2 Test the sample queue
# if /opt/mqm/samp/bin is not found, install the MQSeriesSamples rpm
Sample AMQSPUT0 end

3. Uninstalling WebSphere MQ on Linux
Remove the installed RPMs. Log in as root.
root@websphere mwinstall]# rpm -ivh IBMJava2-SDK-1.4.2-0.0.i386.rpm
I had used the command above to install, so I queried to find out the package name of the rpm, as you cannot use the rpm filename that you used to install it:
root@websphere mq]# rpm -qa | grep IBM
rpm -e IBMJava2-SDK-1.4.2-0.0
This worked; however, for the other rpms I had to remove the fix packs first.

Remove fix-pack rpms
Use this command to see which WebSphere MQSeries base rpms and fix packs are installed. I had used rpm -ivh IBMJava2-142-ia32-SDK-1.4.2-6.0.i386.rpm to apply the fix pack:
rpm -ivh MQSeriesRuntime-U809950-6.0.2-2.i386.rpm
rpm -ivh MQSeriesServer-U809950-6.0.2-2.i386.rpm
rpm -ivh MQSeriesJava-U809950-6.0.2-2.i386.rpm

Remove base rpms (I had to do it in this order):
rpm -e MQSeriesJava-6.0.0-0
rpm -e MQSeriesServer-6.0.0-0
rpm -e MQSeriesRuntime-6.0.0-0

Query to make sure they have all been removed:
rpm -qa | grep MQSeries

Remove the MQ directories (/opt/mqm):
# cd /opt
# rm -rf mqm
Shell Programs: View the System Date and Calendar

Calendar:
[root@cluster1 ~]# cal
[root@cluster1 ~]# cal 7 2015

date syntax to specify a format:
[root@cluster1 ~]# date
[root@cluster1 ~]# date +%Y%m%d
[root@cluster1 ~]# date +"%x %r %Z"
[root@cluster1 ~]# right_now=$(date +"%x %r %Z")

Task: display the date in mm-dd-yy format, and retrieve yesterday's date using the bash shell.
$ date '+%A, %Y-%m-%d'
[root@cluster1 ~]# echo "Today is `date +%A` `date +%e`th of `date +%B`"
A specific date minus one day, formatted as we wish:
==============================================================================================================================
Create Files & Directories
a) touch
mkdir dir1
mkdir dir2 dir3
mkdir -p dir4/dir4.1
mkdir -m 744 dir5
mkdir -p /home/mohan/test1 /home/mohan/test2 /home/mohan/test3
c) pwd
d) cat
cat > test
cat < test
this test file
cat test test1 > test2
cat test2
e) touch
g) mv
h) rm
i) rmdir
=======================================================================================================================
copy
cp [OPTION]... [-T] SOURCE DEST
-i, --interactive
cp -r dir2/ dir1/
Copy file1 from dir1 to dir2, but prompt before overwrite:
cp -i dir1/file1 dir2/

Link command
HARD LINK: a copy of the old file to a new file; if the source file is deleted, we can still use the linked file.
ln old new
SOFT LINK (ln with -s): a pointer to the old file; if the source file is deleted, we lose the link as well.
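The date task above (mm-dd-yy format and yesterday's date) can be sketched with GNU date's format strings and its -d option:

```shell
# Today in mm-dd-yy format
date +%m-%d-%y

# Yesterday's date (GNU date understands relative expressions with -d)
date -d "yesterday" +%Y-%m-%d

# A specific date minus one day, formatted as we wish
date -d "2015-05-10 -1 day" '+%A, %Y-%m-%d'    # Saturday, 2015-05-09
```

Note that -d with relative expressions is a GNU coreutils extension; on BSD or Solaris the equivalent is date -v-1d or a small Perl one-liner.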
ln -s old old_soft
rm old
cat old_soft
=========================================================================
chmod 775
ls
ls -d
ls -l
=====================================================================================================================================
#!/bin/sh
SYSTEM=`uname -s`   # system info
fn_distro(){
#!/bin/bash
fn_uptime(){

Count Lines, Words & Characters Using 'wc'
$ cat sample.txt
$ wc -w sample.txt
time perl -nle '$word += scalar(split(" ", $_)); END{print $word}' subst.c
real 0m0.021s
wc -l : prints the number of lines in a file.
cat > animals
Ctrl-D
base ball
$ cat test
sort test

If we want to sort sizes in human-readable form, we can use the -h option. Create the following test file for this example:
$ cat test
$ sort -h test
If we want to sort in the order of the months of the year, we can use the -M or --month-sort option. Create the following test file for this example:
$ cat test
$ sort -M test
If we want to check whether the data in a text file is sorted or not, we can use the -c (or --check, --check=diagnose-first) option. Create the following test file for this example:
$ cat test
$ sort -c test
If we want the sorted output in reverse order, we can use the -r or --reverse option. If the file contains duplicate lines, the -u option can be used to get unique lines in the sorted output. Create the following test file for this example:
$ cat test
$ sort -r test
$ sort -r -u test
If we want to sort on a column or word position within the lines of a text file, the -k option can be used. If the words in each line are separated by a delimiter other than a space, we can specify it with the -t option. We can write the sorted output to a specified file (using the -o option) instead of displaying it on standard output. Create the following test file for this example:
$ cat test
$ sort -k3 test
$ sort -n -t'|' -k2 test -o outfile
$ cat outfile
sort -r filename.txt
Example 6: Sometimes it is required to sort the file and display only unique values.
sort -u filename
Note: even though the values in the other fields differ, they will not be considered by the -u option.
Example 7: I want to sort a file according to my requirement and save it to a different file. Use the -o option to save the sorted output to a file:
sort -o temp.txt filename.txt
Example 8: Do you have file content with sizes like 10K, 20G, 45M, 32T etc.? You can sort in human-readable order by using the -h option. This option works on RHEL 5 and above:
sort -h filename.txt
Similar to the above example, we can use -M for sorting according to the month of the year:
sort -M filename.txt
Example 9: Check whether the file is already sorted or not by using the -c option. This option will show you the first occurrence of a disordered value:
sort -c filename.txt
You can now mix the above options to get your sorting work done.

There are two types of sorting-key definitions in Unix sort: new style (with the -k option) and old style (+pos1 -pos2).

New-style sort key definitions
New-style definitions use the -k option. When there are multiple key fields, later keys are compared only after all earlier keys compare equal. Except when the -u option is specified, lines that otherwise compare equal are ordered as if none of the options -d, -f, -i, -n or -k were present (but with -r still in effect, if it was specified) and with all bytes in the lines significant to the comparison. The notation:
field_start[,field_end]
defines a key field that begins at field_start and ends at field_end inclusive, unless field_start falls beyond the end of the line or after field_end, in which case the key field is empty. A missing field_end means the last character of the line. The field_start portion of the keydef option-argument has the form:
field_number[.first_character]
Fields and characters within fields are numbered starting with 1. field_number and first_character, interpreted as positive decimal integers, specify the first character to be used as part of a sort key. If .first_character is omitted, it refers to the first character of the field.
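The -u, -o and -r options from the examples above, in runnable form on a throwaway file (file names and values here are arbitrary):

```shell
# Sample data containing a duplicate line
printf '3\n1\n2\n2\n' > /tmp/nums.txt

sort -u /tmp/nums.txt                      # duplicates suppressed: 1 2 3
sort -r -u /tmp/nums.txt                   # reverse + unique: 3 2 1
sort -n -o /tmp/sorted.txt /tmp/nums.txt   # -o writes the result to a file
cat /tmp/sorted.txt                        # 1 2 2 3 (duplicates kept, no -u)
```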
The field_end portion of the keydef option-argument has the form:
field_number[.last_character]
The field_number is as described above for field_start. last_character, interpreted as a non-negative decimal integer, specifies the last character to be used as part of the sort key. If last_character evaluates to zero or .last_character is omitted, it refers to the last character of the field specified by field_number. If the -b option or the b type modifier is in effect, characters within a field are counted from the first non-blank character in the field. (This applies separately to first_character and last_character.)

Old-style key definitions
pos1 and pos2 each have the form m.n, optionally followed by one or more of the flags bdfiMnr. A starting position specified by +m.n is interpreted to mean the n+1st character in the m+1st field. A missing .n means .0, indicating the first character of the m+1st field. If the b flag is in effect, n is counted from the first non-blank in the m+1st field; +m.0b refers to the first non-blank character in the m+1st field. A last position specified by -m.n is interpreted to mean the nth character (including separators) after the last character of the mth field. A missing .n means .0, indicating the last character of the mth field. If the b flag is in effect, n is counted from the last leading blank in the m+1st field; -m.1b refers to the first non-blank in the m+1st field. The fully specified form +w.xT -y.zU, with type modifiers T and U, has an equivalent -k form; the combination is undefined when z==0, U contains b, and -t is present. Implementations support at least nine occurrences of the sort keys (the -k option and the obsolescent +pos1 and -pos2), which are significant in command-line order. If no sort key is specified, a default sort key of the entire line is used. If the ordering-rule options precede the sort-key options, they are applied globally to all sort keys.
For example,
sort -r +2 -3 infile
sorts on field 3 in reverse, while
sort -r +2d -3d infile
sorts field 3 by dictionary comparison but sorts the rest of the line using reverse comparison.

The most common mistake is to forget the -n option when sorting numeric fields. Also, specifying the delimiter (option -t) with an unquoted character after it can be a source of problems; it is better to use single quotes around the character that you plan to use as a delimiter, for example -t ':'.

Here is a standard example of the sort utility: sorting the /etc/passwd file (user database) by UID (the third colon-separated field in the passwd file structure):
sort -t ':' +2 /etc/passwd       # incorrect result, the field is numeric
sort -t ':' -k 3,3n /etc/passwd
See "Sorting key definitions" and "Examples" for more details. Be careful, and always test your sorting keys on a small sample before sorting the whole file; you will be surprised how often the result is not what you want, due to the obscurity of the definitions.

By default, sort sorts the file in ascending order using the entire line as the sorting key. Please note that a lot of web resources interpret this sort utility behavior incorrectly (most often they state that by default sorting is performed on the first field). The three most important options of Unix sort are -n (numeric key sorting), +n (sorting on the n-th field, counting from zero) and -r (sort in reverse). Comparisons are based on one or more sort keys extracted from each line of input. Again, remember that by default there is one sort key: the entire input line. Lines are ordered according to the collating sequence of the current locale; by changing the locale you can change the behavior of sort. In Solaris there are two variants of sort: the System V version and the BSD version.
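The passwd example above, made concrete with a small hand-made sample (the file name and entries are invented for illustration):

```shell
# Three passwd-style lines: name:password:UID
printf 'daemon:x:2\nroot:x:0\nbin:x:1\n' > /tmp/passwd.sample

# Wrong: lexical comparison of field 3 (a UID of 10 would sort before 2)
sort -t ':' -k 3 /tmp/passwd.sample

# Right: numeric sort restricted to field 3 only
sort -t ':' -k 3,3n /tmp/passwd.sample    # root, bin, daemon
```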
Both have identical options:
/usr/bin/sort [-cmu] [-o output] [-T directory] [-y [kmem]] [-z recsz] [-dfiMnr] [-b] [-t char] [-k keydef] [+pos1 [-pos2]] [file...]
/usr/xpg4/bin/sort [-cmu] [-o output] [-T directory] [-y [kmem]] [-z recsz] [-dfiMnr] [-b] [-t char] [-k keydef] [+pos1 [-pos2]] [file...]
The sort command can (and should) be used in pipes or have its output redirected as desired. Here are some practically important examples that illustrate the use of this utility (for more examples, please look into our sort examples collection page):
1. Sort the file and display the sorted file, one page at a time. (If you prefer the standard command "more", you can use it instead of "less". "less" is an enhanced version of "more"; for example, it allows you to move backwards and forwards in the file. To exit "less", use "q".)
2. Read and sort infile and display the result on standard output, which has been redirected to outfile:
sort infile > outfile    # sorts infile and writes the results to outfile
3. Write sorted output directly to outfile:
sort -o outfile infile   # same as above, but using the -o option
4. Read and sort infile "in place", writing back to the same file:
sort -o infile infile    # sort a file "in place"
5. Sort the file and pipe it to the uniq command, with a counter of identical keys printed (the -c option of uniq):
sort infile | uniq -c
6. Pipe the output of the who command to the input of the sort command:
who | sort
7. Classic "number of instances" analysis of log files:
cat messages | awk '{"select the keywords"}' | sort | uniq -c | sort -nr
In simple cases cut can be used instead of AWK. For example, the following counts distinct visitors from HTTP logs (assuming the visitor is the first field in the logs):
cat http.log | cut -d " " -f 1 | sort | uniq -c | sort -nr
8.
Sort the file, then prepend line numbers to each line (not many Unix administrators know that cat can be used to number lines):
sort -n file | cat -n
This can be useful if you want to count the number of lines in which the first entry is in a given range: simply subtract the line numbers corresponding to the beginning and end of the range.

As mentioned above, by default the sort command uses entire lines as the key. It compares the characters starting with the first, until a non-matching character or the end of the shortest line. Leading blanks (spaces and tabs) are considered valid characters to compare; thus a line beginning with a space precedes a line beginning with the letter A. If you do not want this effect, you need to delete leading spaces beforehand.

Multiple sort keys may be used on the same command line. If two lines have the same value in the same field, sort uses the next set of sort keys to resolve the tie. For example,
sort +4 -5 +1 -2 infile
means to sort based on field 5 (+4 -5). If two lines have the same value in field 5, those two lines are sorted based on field 2 (+1 -2).

Besides sorting, Unix sort is useful for merging files (option -m). It can also check whether a file is sorted or not (option -c), and it can suppress duplicates (option -u):
-c Check whether the given files are already sorted: if they are not all sorted, print an error message and exit with a status of 1.
-m Merge the given files by sorting them as a group. Each input file should already be individually sorted. It always works to sort instead of merge; merging is provided because it is faster, in the cases where it works.
-u Suppress all but one occurrence of matching keys.
In case Unix sort does not produce the required results, you might want to look into Perl's built-in sort function. If sort is too slow, more memory can be specified on invocation.
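The "number of instances" pattern from the pipe examples above, run against a small invented log:

```shell
# A toy log: one HTTP method per line
printf 'GET\nPOST\nGET\nGET\nPOST\n' > /tmp/methods.txt

# sort groups identical lines, uniq -c counts each group,
# and the final sort -nr puts the most frequent entry first
sort /tmp/methods.txt | uniq -c | sort -nr
```

The top line of the output is the most frequent entry (here GET, with a count of 3).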
The most important options
The following list describes the options and their arguments that may be used to control how sort functions.
- Forces sort to read from the standard input. Useful for reading from pipes and files simultaneously.
-c Verifies that the input is sorted according to the other options specified on the command line. If the input is sorted correctly, no output is produced. If the input is not sorted, sort informs you of the situation with a message resembling this:
sort: disorder: This line not in sorted order.
-m Merges the sorted input. sort assumes the input is already sorted. sort normally merges input as it sorts; this option informs sort that the input is already sorted, so sort runs much faster.
-o output Sends the output to the file output instead of the standard output. The output file may have the same name as one of the input files.
-u Suppresses all but one occurrence of matching keys. Normally the entire line is the key; if field or character keys are specified, the suppressing is done based on those keys.
-y kmem Use kmem kilobytes of main memory to initially start the sorting. If more memory is needed, sort automatically requests it from the operating system. The amount of memory allocated for the sort impacts its speed significantly. If no kmem is specified, sort starts with the default amount of memory (usually 32K). The maximum (usually 1 megabyte) amount of memory may be allocated if needed. If 0 is specified for kmem, the minimum (usually 16K) amount of memory is allocated.
-z recsz Specifies the record size used to store each line. Normally recsz is set to the longest line read during the sort phase. If the -c or -m options are specified, the sort phase is not performed and the record size defaults to a system size. If this default is not large enough, sort may abort during the merge phase. To alleviate this problem, you can specify a recsz that allows the merge phase to run without aborting.
Ordering options
-d Specifies dictionary sort. Only blanks, digits, and alphabetic characters are significant in the comparison.
-f Folds lowercase letters to uppercase, i.e. ignores the difference between uppercase and lowercase ASCII characters.
-i Ignores characters outside the ASCII range of 040 (octal) to 0176 (octal). Only alphabetic characters, blanks, digits, and punctuation are used for comparison (printable characters); control characters are ignored. This is only valid for nonnumeric sorts.
-M Compares fields as months. The first three nonblank characters are folded (see the -i option) to uppercase and compared; thus January is compared as JAN. JAN precedes FEB, and fields not containing months precede JAN. The -b option is placed in effect automatically.
-n Sorts the input numerically. The comparison is based on numerical value instead of alphabetic value. The number field used for comparison can contain optional blanks, an optional minus sign, an optional decimal point, and zero or more digits. The -b option is placed in effect automatically. Exponential numbers are not sorted correctly.
-r Reverses the order of the output.
Old-style sort key options: +pos1 -pos2, -b, -t c

ps -ef | sort
This command pipeline sorts the output of the "ps -ef" command. Because no arguments are supplied to the sort command, the output is sorted in alphabetic order by the first column of the ps -ef output (i.e., the output is sorted alphabetically by username).
ls -al | sort +4n
This command performs a numeric sort on the fifth column of the "ls -al" output. This results in a file listing where the files are listed in ascending order of size, from smallest to largest.
ls -al | sort +4n | more
The same command as the previous, except the output is piped into the more command. This is useful when the output will not all fit on one screen.
ls -al | sort +4nr
This command reverses the order of the numeric sort, so files are listed in descending order of size, with the largest file listed first and the smallest file listed last.

[Feb 15, 2007] UNIX Disk Usage: Simplifying Analysis with sort
One final concept and we're ready to move along. If you want to see only the five largest files or directories in a specific directory, all you need to do is pipe the command sequence to head:
# du -s * | sort -nr | head -5

The ! command (pronounced "bang") creates a temporary file to be used with a program that requires a filename in its command line. This is useful with shells that don't support process substitution. For example, to diff two files after sorting them, you might do:
diff `! sort file1` `! sort file2`

Cut Command
The Linux command cut is used for text processing. You can use this command to extract a portion of text from a file by selecting columns.
Cutting specific fields:
-c, --characters=LIST
The cut command, as the man page states, "removes sections from each line of a file." The cut command can also be used on a stream, and it can do more than just remove sections.
If a file is not specified, or "-" is used, the cut command takes input from standard in. The cut command can be used to extract sections from a file or stream based upon specific criteria. An example of this would be cutting specific fields from a csv (comma-separated values) file. For instance, cut can be used to extract the name and email address from a csv file with the following content:
id, date, username, first name, last name, email address, phone, fax
The syntax for cut would be:
cut -d"," -f4,5,6 users.csv
The result would be displayed on standard out:
first name, last name, email address
The -d option specifies the delimiter, which defaults to a TAB. In the example above the cut command will "cut" the line at each "," instead of at a TAB. The -f option indicates which fields to select, in this case fields 4, 5, and 6, which correspond to "first name," "last name," and "email address."
The cut command can operate on fields, characters or bytes and must include one and only one of these options. The field option operates on the cuts defined by the delimiter (-d), which is TAB by default. The -d option can only be used with the field option; attempting to use the -d option with the character (-c) or bytes (-b) options will result in an error. The -f value can be a comma-separated list or a range separated by a "-":
cut -d"," -f 1,2,3,4
cut is a very frequently used command for file parsing. It is very useful for splitting columns in files with or without a delimiter. In this article, we will see how to use the cut command on files having a delimiter. Let us consider a sample file, say file1, with a comma as the delimiter, as shown below:
cut has 2 main options to work on files with delimiters:
-f - to indicate which field(s) to cut.
Let us now try to work with this command through a few examples:
1. To get the list of names alone from the file, which is the first column:
2. To get the relationship alone, i.e., the 2nd field:
4.
To get the name, relationship and age, excluding the profession, i.e., the 1st to 3rd fields:
The same result can also be retrieved in other ways:
5. To retrieve all the fields except the name field, i.e., to retrieve fields 2 through 4:
Let us consider the same input file with a space as the delimiter:
6. To retrieve the first field from a space-delimited file:
7. To retrieve the first field from a tab-separated file. How do we specify the tab with the -d option?
The options to cut by are below:
N    N'th byte, character or field, counted from 1
The options pretty much explain themselves, but I have included some simple examples below:
echo "123456789" | cut -c -5
echo "123456789" | cut -c 5-
echo "123456789" | cut -c 3-7
echo "123456789" | cut -c 5
Sometimes output from a command is delimited, so a cut by characters will not work. Take the example below:
echo -e "1\t2\t3\t4\t5" | cut -c 5-7
To echo a tab you have to use the -e switch, which enables echo to process backslashed characters. If the desired output is 3\t4, this works great as long as the strings are always one character; but if a character is added anywhere before field 3, the output is completely changed, as follows:
echo -e "1a\t2b\t3c\t4d\t5e" | cut -c 5-7
This is resolved by cutting by fields. The syntax to cut by fields is the same as for characters or bytes. The two examples below display different output but are both displaying the same fields (fields 3 through the end of the line):
echo -e "1\t2\t3\t4\t5" | cut -f 3-
echo -e "1a\t2a\t3a\t4a\t5a" | cut -f 3-
The default delimiter is a tab; if the output is delimited another way, a custom delimiter can be specified with the -d option. It can be just about any printable character; just make sure the character is escaped (backslashed) if needed. In the example below I cut the string up using the pipe as the delimiter.
echo "1|2|3|4|5" | cut -f 3- -d \|
One great feature of cut is that the delimiter that was used for the input can be changed for the output of cut. In the example below I take a dash-delimited string and change the output delimiter to a comma:
echo -e "1a-2a-3a-4a-5a" | cut -f 3- -d - --output-delimiter=,

Formatting with cut: an example
Sometimes certain Linux applications, such as uptime, do not have options to format their output. cut can be used to pull out the information that is desired.
root@cluster1 :~$ uptime
Time with up-time displayed:
root@cluster1 :~$ uptime | cut -d , -f 1,2 | cut -c 2-
For the above example I pipe the output of uptime to cut and tell it to split the line with a comma (,) delimiter. I then choose fields 1 and 2. The output from that cut is piped into another cut that removes the spaces in front of the output.
root@cluster1 :~$ uptime | cut -d , -f 4- | cut -c 3-
This is about the same as the previous example, except the fields changed. Instead of fields 1 and 2 I told it to display fields 4 through the end. The output from that is piped to another cut, which removes the three spaces that were after the comma in "4 users, " by starting at the 3rd character.
That just about covers everything for the cut command. Now that you know about it, you can use cut to chop up all types of strings. It is one of the many great tools available for string manipulation in bash. If you can remember what cut does, it will make your shell scripting easier; you don't need to memorize the syntax, because all of the information on how to use cut is available here, in the man pages, and all over the web.
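The delimiter and output-delimiter behaviour described above, in runnable form (GNU cut; the strings are arbitrary):

```shell
# Fields 3 onward of a dash-delimited string, re-joined with commas
echo "1a-2a-3a-4a-5a" | cut -f 3- -d - --output-delimiter=,   # 3a,4a,5a

# First field of a colon-delimited passwd-style line
echo "mqm:x:500:500::/var/mqm:/bin/bash" | cut -d ':' -f 1    # mqm

# Characters 3 through 7
echo "123456789" | cut -c 3-7                                 # 34567
```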
========================================================================================================================================================================================================
The Linux command dd is a very powerful utility which can be used in a variety of ways. It is mainly used for copying and converting data; hence the name 'data duplicator'. This tool can be used for:
• Backing up and restoring an entire hard drive or a partition
• Copying regions of raw device files, such as backing up the MBR (master boot record)
• Converting data formats, such as ASCII to EBCDIC
• Converting lowercase to uppercase and vice versa
• Creating files of a fixed size
Only the superuser can execute this command. You should be very careful while using it, as improper usage can cause huge data loss; for this reason some people call the tool the 'data destroyer'.

Syntax of the dd command:
dd if=<input file> of=<output file> [options]
[root@cluster1 tmp]# cat doc
[root@cluster1 tmp]# dd if=doc of=out conv=ucase

1. Backing up and restoring an entire hard drive or a partition
a. Back up an entire hard drive to another drive:
dd if=/dev/sda of=/dev/sdb bs=4096 conv=noerror,sync
Here, 'if' stands for input file, 'of' stands for output file, and 'bs' stands for the block size (the number of bytes to read/write at a time). The conversion parameter 'noerror' allows the tool to continue copying the data even if it encounters errors, and the 'sync' option allows synchronized I/O. The above command copies all the data from the disk /dev/sda to /dev/sdb. dd doesn't know anything about the filesystem or partitions; it just copies everything from /dev/sda to /dev/sdb, cloning the disk with the same data on the same partitions.
b. Creating a disk image:
dd if=/dev/sda of=/tmp/sdadisk.img
Backing up a disk to an image is faster than copying the exact data, and a disk image makes restoration much easier.
c. Creating a compressed disk image:
dd if=/dev/sda | gzip > /tmp/sdadisk.img.gz
d.
Restoring a hard disk image:
dd if=/tmp/sdadisk.img of=/dev/sda
e. Restoring a compressed image:
gzip -dc /tmp/sdadisk.img.gz | dd of=/dev/sda
f. Cloning one partition to another:
dd if=/dev/sda1 of=/dev/sdb1 bs=4096 conv=noerror,sync
This will synchronize the partition /dev/sda1 to /dev/sdb1. You must verify that /dev/sdb1 is at least as large as /dev/sda1.

2. Backing up and restoring the MBR
The master boot record is the boot sector which houses the GRUB boot loader. If the MBR gets corrupted, we will not be able to boot into Linux. The MBR (512 bytes of data) is located at the first sector of the hard disk; it consists of a 446-byte bootstrap, a 64-byte partition table and a 2-byte signature.
a. Backing up the MBR:
dd if=/dev/sda of=/tmp/mbr.img bs=512 count=1
The option 'count' refers to the number of input blocks to be copied.
b. Backing up the boot data of the MBR, excluding the partition table:
dd if=/dev/sda of=/tmp/mbr.img bs=446 count=1
c. Restoring the MBR from the MBR image:
dd if=/tmp/mbr.img of=/dev/sda
d. Saving a copy of the master boot record for display:
dd if=/dev/hda of=mbr.bin bs=512 count=1

3. Converting data formats
a. Convert the data format of a file from ASCII to EBCDIC:
dd if=textfile.ascii of=textfile.ebcdic conv=ebcdic
b. Convert the data format of a file from EBCDIC to ASCII:
dd if=textfile.ebcdic of=textfile.ascii conv=ascii

4. Converting the case of a file
a. Converting a file to uppercase:
dd if=file1 of=file2 conv=ucase
b. Converting a file to lowercase:
dd if=file1 of=file2 conv=lcase

5. Creating or modifying data files
a. Create a fixed-size file, say 10MB:
dd if=/dev/zero of=file1 bs=10485760 count=1
The block size is calculated as 10MB = 10*1024*1024.
b. Overwrite the first 512 bytes of a file with null data:
dd if=/dev/zero of=file1 bs=512 count=1 conv=notrunc
The option 'notrunc' means do not truncate the file: only the first 512 bytes are replaced if the file exists. Otherwise, you will get a 512-byte file.
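Two of the dd uses above (a fixed-size file and case conversion) as a safe, runnable sketch that only touches scratch files under /tmp:

```shell
# Create a 1 MiB file of zeros and check its size
dd if=/dev/zero of=/tmp/zero.img bs=1024 count=1024 2>/dev/null
wc -c < /tmp/zero.img        # 1048576 bytes = 1024 * 1024

# conv=ucase copies a file while converting it to uppercase
printf 'hello world\n' > /tmp/lower.txt
dd if=/tmp/lower.txt of=/tmp/upper.txt conv=ucase 2>/dev/null
cat /tmp/upper.txt           # HELLO WORLD
```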
Creating an image file with the dd command
To create a 1MB (1024KB) file, enter the command for that size. You will get an empty file (also known as a "sparse file") of arbitrary size using this syntax; the same syntax creates 10MB, 100MB, or 1GB files. Because no data blocks are actually copied, dd reports:
0+0 records in
Burn an ISO File Using the Terminal
To burn the ISO onto a USB stick, enter the corresponding dd command in a terminal.
============================================================================================================
Get Help, View Fancy Text & Reduce File Size
man banner
How do I use figlet? Simply use it as follows:
# escapestr_sed()
Weak quoting
Inside a weak-quoted (double-quoted) string there is no special interpretation of spaces as word separators (either in initial command-line splitting or in word splitting!):
ls -l "*"
If the backslash is followed by a character that would have a special meaning even inside double quotes, the backslash is removed and the following character loses its special meaning.
Strong quoting
Strong quoting is very easy to explain: inside a single-quoted string nothing(!) is interpreted, except the single quote that closes the quoting.
echo 'Your PATH is: $PATH'
# WRONG
# RIGHT
# ALTERNATIVE:
It is also possible to mix and match quotes for readability. There is one more quoting mechanism Bash provides: strings that are scanned for ANSI-C-like escape sequences. The syntax is $'string'.
Shell scripts commonly use ANSI escape codes for color output.
Color Foreground Background
The numbers in the table work for an xterm terminal; results may vary in other terminals. Use the following template for writing colored text:
echo -e "\033[COLORm Sample text"
The "\033[" begins the escape sequence. You can also use "\e[" instead.
Note: with echo, the -e option enables the escape sequences. You can also use printf instead of echo.
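Since printf interprets these escapes directly, no -e flag is needed. A minimal sketch (the strings are arbitrary examples):

```shell
# printf understands \033 and \e escapes without any extra flag.
printf '\033[32mgreen text\033[0m\n'     # green, then reset to normal
printf '\e[1;31m%s\e[0m\n' "bold red"    # attribute;color pair via a format string
```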
printf "\e[COLORm sample text\n"
To print green text:
echo -e "\033[32m Hello World"
The problem with the above statement is that the color it sets stays in effect after the echo finishes. To return to the plain, normal mode, we have yet another sequence:
echo -e "\033[0m"
You won't see anything new on the screen, as this echo statement only resets the text attributes. Escape sequences also allow you to control the manner in which text is displayed. The following table summarizes the numbers representing text attributes:
ANSI CODE Meaning
Note: the blink attribute doesn't work in most terminal emulators, but it will work on the console.
Combining all these escape sequences, you can get fancier effects:
echo -e "\033[COLOR1;COLOR2m sample text\033[0m"
The semicolon-separated numbers "COLOR1" and "COLOR2" specify an attribute/color pair. There are some differences between colors when combining them with the bold attribute. The following table summarises these differences.
Bold off color Bold on color
The following shell script prints all the colors and codes on the screen:
#!/bin/bash
# This script echoes colors and codes
echo -e "\n\033[4;31mLight Colors\033[0m \t\t\033[1;4;31mDark Colors\033[0m"
echo -e "\e[0;30;47m Black \e[0m 0;30m \t\e[1;30;40m Dark Gray \e[0m 1;30m"
echo -e "\e[0;31;47m Red \e[0m 0;31m \t\e[1;31;40m Dark Red \e[0m 1;31m"
echo -e "\e[0;32;47m Green \e[0m 0;32m \t\e[1;32;40m Dark Green \e[0m 1;32m"
echo -e "\e[0;33;47m Brown \e[0m 0;33m \t\e[1;33;40m Yellow \e[0m 1;33m"
echo -e "\e[0;34;47m Blue \e[0m 0;34m \t\e[1;34;40m Dark Blue \e[0m 1;34m"
echo -e "\e[0;35;47m Magenta \e[0m 0;35m \t\e[1;35;40m DarkMagenta\e[0m 1;35m"
echo -e "\e[0;36;47m Cyan \e[0m 0;36m \t\e[1;36;40m Dark Cyan \e[0m 1;36m"
echo -e "\e[0;37;47m LightGray\e[0m 0;37m \t\e[1;37;40m White \e[0m 1;37m"
OUTPUT:
Some examples:
Black background and white text:
echo -e "\033[40;37m Hello World\033[0m"
The reverse video text attribute interchanges the foreground and background colors.
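The persistence problem described above can be seen by capturing the raw bytes: without the trailing reset, the color code stays active for everything printed afterwards. A small sketch:

```shell
# "\033[32m" turns on green; "\033[0m" resets attributes so the
# word "plain" that follows is printed in the normal style.
printf '\033[32mcolored\033[0m plain\n'
```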
echo -e "\033[40;37;7m Hello World\033[0m"
echo -e "\033[33;44m Yellow text on blue background\033[0m"
The "tput" command:
Besides echo, there is a command called tput with which we can set colors and control the cursor.
#!/bin/bash
Users who have been using Linux for a while often learn that creating a basic script is a good way to run multiple, often-repeated commands. Adding a little color to scripts can additionally provide nice feedback. This can be done in a fairly straightforward way by using the tput command. A common way of doing this is to define the colors that tput can produce at the beginning of the bash script:
#!/bin/bash
# Text color variables
If you just need tput colors for specific instances, this script can display the tput definitions and their corresponding possibilities:
#!/bin/bash
echo
for i in $(seq 1 7); do
echo ' Bold $(tput bold)'
The tput command is used to set terminal features. With tput you can set colors, text attributes, and move the cursor around the screen.
#!/bin/bash
# Clear the screen
# Move cursor to screen location X,Y (top left is 0,0)
# Set a foreground colour using ANSI escape
tput cup 5 17
tput cup 7 15
tput cup 8 15
tput cup 9 15
tput cup 10 15
# Set bold mode
tput clear
===================================if-then=====================================================================================
Check the exit status to see whether a command executed successfully or not:
[root@cluster1 shell]# mkdir test
What Are Conditions?
Conditions in the realm of computing work similarly. We can test whether a string matches another string, whether it doesn't match another string, or even whether it exists at all. To get something to happen after the conditions of the test are met, we use "if-then" statements. Their format is pretty simple:
if CONDITION
if test $1 -gt $2
The if…fi statement is the fundamental control statement that allows the shell to make decisions and execute statements conditionally.
Syntax:
Pay attention to the spaces between the brackets and the expression.
These spaces are mandatory; otherwise you will get a syntax error. If the expression is a shell command, it is considered true if the command returns 0 after execution. If it is a boolean expression, it is true if it evaluates to true.
Example:
a=10
if [ $a == $b ]
if [ $a != $b ]
a is not equal to b
Here are some other numerical operators you may want to try out:
-eq: equal to
Shell scripts use fairly standard syntax for if statements. The conditional statement is executed using either the test command or the [ command. In its most basic form an if statement is:
#!/bin/bash
if [ "$#" -gt 0 ]
if [ "$1" = "cool" ]
#!/bin/bash
if [ -f "$1" ]
String Comparison / Numeric Comparison / File Conditionals
File attribute comparisons:
-a file: true if file exists
-b file: true if file is a block special file
-h file / -L file: true if file is a symbolic link
-k file: true if file has its sticky bit set
-p file: true if file is a named pipe (FIFO)
-r file: true if file is readable
-s file: true if file exists and has a size greater than zero
-t fd: true if file descriptor fd is open and refers to a terminal
-u file: true if file has its set-user-id bit set
-w file: true if file is writable
-x file: true if file is executable
-O file: true if file is owned by the effective user id
-G file: true if file is owned by the effective group id
-S file: true if file is a socket
-N file: true if file has been modified since it was last read
Example:
[ -b /dev/sda ] && echo "block special file found" || echo "block special file not found"
Example:
# Make sure backup dir exists
# If source directory does not exist, die...
# Okay, dump backup using tar
# Find out if our backup job failed or not and notify on screen
Example:
#!/bin/bash
# Apache vroot for each domain
# Path to GeoIP DB
# Get the Internet domain such as cyberciti.biz
# Make sure we got the input, else die with an error on screen
# Alright, set some variables based upon $DOMAIN
# Die if configuration file exists...
# Make sure configuration directory exists
# Write a log file
>$OUT
echo "Weblizer config wrote to $OUT"
To add an else, we just use standard syntax. Note that string comparison uses = (not the numeric operator -eq):
#!/bin/bash
if [ "$1" = "abc" ]
then
echo "in if block"
else
echo "in else block"
fi
#!/bin/bash
if [ "$1" = "abc" ]
then
echo "in if block"
elif [ "$1" = "xyz" ]
then
echo "in else if block"
else
echo "in else block"
fi
You can use single flags as well.
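The if/elif/else pattern above can be sketched as a small runnable function (the function name and argument values are arbitrary examples; string comparison uses =, while -eq is only for integers):

```shell
#!/bin/bash
# Dispatch on a string argument with if/elif/else.
check() {
    if [ "$1" = "abc" ]; then
        echo "in if block"
    elif [ "$1" = "xyz" ]; then
        echo "in else if block"
    else
        echo "in else block"
    fi
}
check abc   # in if block
check xyz   # in else if block
check foo   # in else block
```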
The following code tests whether the first parameter is a file or not:
#!/bin/bash
if [ -f "$1" ]
then
echo "$1 is a file"
else
echo "$1 is not a file"
fi
#!/bin/bash
Enter source and target file names.
The if…else…fi statement is the next form of control statement, allowing the shell to execute statements in a more controlled way by making a decision between two choices.
Syntax:
if [ condition ]
then
else
fi
if test var -eq val
#!/bin/bash
if [ $i -eq 10 ]
if-else statements are useful where we have two branches and need to execute only one based on the result of the if condition.
if [ condition ]
#!/bin/bash
if [ $i -eq 10 ]
if, elif, and else statements are useful where we have more than two branches and need to execute only one based on the results of the if and elif conditions.
if [ condition ]
#!/bin/bash
if [ $i -eq 5 ]
This is similar to the above, adding multiple elif statements together. An elif (else if) ladder is useful where we have multiple branches and need to execute only one based on the results of the if and elif conditions.
if [ condition ]
elif [ condition ]
elif [ condition ]
elif [ condition ]
else
#!/bin/bash
if [ $i -eq 5 ]
Nested ifs are useful where one condition is checked based on the result of an outer condition.
if [ condition ]
#!/bin/bash
read -p "Enter value of i :" i
if [ $i -gt $j ]
Number Testing Script:
#!/bin/bash
read -p "Enter first Number:" n1
if((n1>n2)) ; then
========================================Exit status of a command / Parameters Set by the Shell==================================================================
The Bash shell sets several special parameters. For example $? (see the return values section) holds the return value of the executed command. All command-line parameters or arguments can be accessed via $1, $2, $3, …, $9.
echo $#
status=$?
Exit Status:
echo $?
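The behaviour of $? can be sketched with two commands whose outcomes are fixed: true always succeeds and false always fails.

```shell
true            # a command that always succeeds
echo $?         # → 0
false           # a command that always fails
echo $?         # → 1
```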
A successful command such as date returns 0; a failing command such as date1 (which does not exist) returns a non-zero status.
How Do I Store the Exit Status of a Command in a Shell Variable?
ls -l /tmp
#!/bin/bash
# get user name
# try to locate username in /etc/passwd
# store exit status of grep
if test $status -eq 0
chmod +x finduser.sh
Enter a user name : vivek
Enter a user name : tommy
if grep "^$username:" /etc/passwd >/dev/null
#!/bin/bash
#!/bin/bash
#!/bin/sh
# Prompt for a user name…
# Check for the file.
if [ "$AGE" -le 2 ]; then
The syntax is as follows:
exit N
chmod +x exitcmd.sh
This is a test.
echo $?
# See if $BAK exists or not, else die
# See if $TAPE device exists or not, else die
# Okay, back it up
if [ $? -ne 0 ]
# Terminate our shell script with a success message, i.e. backup done!
chmod +x datatapebackup.sh
=============================================================================================
OPERATOR DESCRIPTION
$ cat > strtest.sh
#!/bin/bash
Sourcesystem="ABC"
if [ "$Sourcesystem" = "XYZ" ]; then
if [ 'XYZ' == 'ABC' ]; then
# Double equals (==) works in Linux bash but not on HP-UX boxes; use if [ 'XYZ' = 'ABC' ], which works on both.
To determine if a variable is empty:
if [ "$var" == "" ]
then
To determine if the value of a variable is not empty:
if [ "$var" != "" ]
then
To compare the contents of a variable to a fixed string:
if [ "$var" == "value" ]
then
To determine if a variable's contents are not equal to a fixed string:
if [ "$var" != "value" ]
then
Empty string in Bash:
In Bash you quite often need to check whether a variable has been set or has a value other than an empty string. This can be done using the -n or -z string comparison operators. The -n operator checks whether the string is not null; effectively, it returns true for every case except where the string contains no characters, e.g.:
VAR="hello"
Similarly, the -z operator checks whether the string is null.
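The -n and -z checks can be sketched together with their exit statuses (the variable value is an arbitrary example):

```shell
VAR="hello"
[ -n "$VAR" ]; echo $?   # → 0: the string is non-empty
[ -z "$VAR" ]; echo $?   # → 1: the string is not null
VAR=""
[ -z "$VAR" ]; echo $?   # → 0: the string is empty
```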
e.g.:
VAR=""
The following program shows the comparison of two strings in a shell script:
#!/bin/sh
Case-insensitive comparison of strings in a shell script:
str1="MATCH"
var1=match
I want to compare two strings ignoring case in the bash shell. For example,
if [ "test" = "TEst" ]
…should be treated as true and enter the if branch.
Solution:
if [ `echo "TeSt" | tr -s '[:upper:]' '[:lower:]'` = `echo "Test" | tr -s '[:upper:]' '[:lower:]'` ]
Alternatively, lowercase both sides with Bash parameter expansion (Bash 4+):
input=TESt
if [ "${input,,}" = "test" ]; then
#!/bin/bash
[ "$str1" = "$str2" ]
[ "$str1" != "$str2" ]
echo $?
[ -n "$str1" ]
echo $?
[ -z "$str3" ]
echo $?
=================================================================================================
Logical AND (&&) is a boolean operator. It can execute commands or shell functions based on the exit status of another command.
command1 && command2
First_command && Second_command
Boolean AND: && or -a is used to do boolean AND operations in bash/shell scripts. There are two ways to validate a boolean AND condition:
if [ $condition1 ] && [ $condition2 ]
if [ $condition1 -a $condition2 ]
Type the following at a shell prompt:
rm /tmp/filename && echo "File deleted."
#!/bin/bash
You can test multiple expressions at once by using the || (or) operator or the && (and) operator. This can save you from writing extra code to nest if statements. The above code has a nested if statement that checks whether the age is greater than or equal to 100. This could also be rewritten using elif (else if). The structure of elif is the same as the structure of if; we will use it in the example below. In this example we check for certain age ranges: if you are less than 20 or greater than 50, you are out of the age range; if you are between 20 and 30, you are in your 20s; and so on.
# Prompt for a user name…
if [ "$AGE" -lt 20 ] || [ "$AGE" -ge 50 ]; then
Count the Number of Characters in the User's Input in Your Script
#!/bin/bash
===========================================================================================
Logical OR (||) is a boolean operator. It can execute commands or shell functions based on the exit status of another command.
Syntax:
First_command || Second_command
Boolean OR: || or -o is used to do boolean OR operations in bash/shell scripts. There are two ways to validate a boolean OR condition:
if [ $condition1 ] || [ $condition2 ]
if [ $condition1 -o $condition2 ]
Example: find a username, else display an error.
cat /etc/shadow 2>/dev/null && echo "File successfully opened." || echo "Failed to open file."
test $(id -u) -eq 0 && echo "You are root" || echo "You are NOT root"
test $(id -u) -eq 0 && echo "Root user can run this script." || echo "Use sudo or su to become a root user."
=============================================================================================================================================================
Logical NOT (!) is a boolean operator, which is used to test whether an expression is true or not. For example, if a file does not exist, display an error on screen.
Syntax:
! expression
[ ! expression ]
if test ! condition
True if expression is false.
test ! -f /etc/resolv.conf && echo "File /etc/resolv.conf not found."
test ! -f /etc/resolv.conf && echo "File /etc/resolv.conf not found." || echo "File /etc/resolv.conf found."
[ ! -d /backup ] && mkdir /backup
[ ! -f $HOME/.config ] && { echo "Error: $HOME/.config file not found."; exit 1; }
[ !
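The short-circuit behaviour of && and || described above can be sketched with directories whose existence is predictable (/tmp exists on any Linux system; the missing path is an arbitrary example):

```shell
# command1 && command2: command2 runs only if command1 succeeds.
# command1 || command2: command2 runs only if command1 fails.
test -d /tmp && echo "/tmp exists"
test -d /no/such/dir || echo "directory missing"
# Combined ternary-style form:
test "$(id -u)" -eq 0 && echo "You are root" || echo "You are NOT root"
```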
-d /usr/bin ] && exit
================================================================================================================
count
#!/bin/bash
END=`expr $END + 1`
while [ $END -ne $BEGIN ]; do # While END is not equal to BEGIN do …
========================================================================================================
The test command is used to check file types and compare values. You can also use [ as the test command. It is used for file attribute comparisons:
[ ! condition ]
[ condition ] && true-command
[ condition ] || false-command
[ condition ] && true-command || false-command
====================================================================================================
The case statement is a good alternative to a multilevel if-then-else-fi statement. It enables you to match several values against one variable, and it is easier to read and write.
Syntax:
case $variable-name in
#!/bin/bash
# if no command line arg given
# use case statement to make decision for rental
chmod +x rental.sh
Sorry, I can not get a *** Unknown vehicle *** rental for you!
Using Multiple Patterns:
#!/bin/bash
# test -e and -E command line args matching
chmod +x casecmdargs.sh
#!/bin/bash
chmod +x allinonebackup.sh
#!/bin/sh
echo "Please talk to me …"
$ ./talk.sh
That's all folks!
#!/bin/bash
MENU="
while true; do
case $INPUT in
done
#!/bin/sh
FRUIT="apple"
case "$FRUIT" in
#!/bin/sh
[root@cluster1 shell]# ./case.sh
=============================================================================================================================
Quiz: ShellCheck is aware of many common usage problems. Are you?
find . -name *.mp3
ShellCheck gives more helpful messages for many Bash syntax errors. Where Bash only says that something went wrong, ShellCheck points to the exact position and explains the problem.
What is hair loss?
Hair loss is commonly referred to as “alopecia.” Alopecia is a blanket term that refers to many different conditions related to hair loss. It can refer to conditions such as hereditary balding (androgenic alopecia) or even alopecia areata, which is when the body’s immune system begins attacking the hair follicles resulting in patchy bald spots.1
Factors that contribute to hair loss
While there are many factors that can contribute to hair loss, some of the most common ones include:
Not all hair loss is the same
Not everyone experiences hair loss in the same way. Some individuals may experience a gradual thinning of the hair with age, while others may experience more drastic hair loss, such as losing it in large clumps. Regardless, it’s important to check with your doctor to determine what is causing your alopecia. This can also help you to find out if your hair loss is indicative of a more serious health condition.
Essential oils for hair loss
If you suffer from hair loss, the good news is that there may be some natural ways you can help slow it down and possibly even promote hair growth. Besides maintaining good health by focusing on diet, exercise, and lowering stress, research suggests that some essential oils may be beneficial for hair loss as well.
You can make essential oils part of the active ingredients in your hair care routine. So, without further ado, here are the oils most commonly recommended for hair: Basil (for dry and dull hair), Carrot Root (high in antioxidants), Cedarwood (for bacteria and infections), Chamomile, Clary Sage (believed to help balance hormones), Clove (antioxidant), Cypress (helps increase circulation), Lavender (found in most haircare products), Peppermint (refreshing and clean), Rosemary, Sage, Tea Tree Oil, and Ylang Ylang (similar to Lavender).
1. Rosemary
Arguably one of the most well-known essential oils for hair loss, rosemary is considered beneficial for a myriad of issues, including skin and hair care. It has traditionally been used to treat lice, greasy hair, and dandruff, as well as to promote hair growth and stimulate the scalp.2 As a regenerating oil, it is useful for maintaining hair health in general. In fact, a 2012 study tested the efficacy of rosemary leaf extract in promoting hair growth, and the results were promising.
The study found that the rosemary extract did in fact promote hair regrowth in mice that had been treated with it.3 Still another study, in 2015, found that individuals who were treated with rosemary oil had a significant increase in hair count after 6 months.4 However, it is important to note that due to its high 1,8-cineole content, it is not recommended for use on or near the face of children or infants.5
2. Lavender
Lavender is widely considered one of the most versatile essential oils, being extremely beneficial for numerous issues. Because of its normalizing and synergistic properties, it is considered useful for most ailments.6 Therefore, it should come as no surprise that lavender is also considered one of the most beneficial essential oils for treating hair loss. As a very soothing and calming oil, lavender can help with an irritated scalp, but it is regenerating as well. It has long been used as a treatment for alopecia and baldness. In fact, a study conducted in 2016 found that lavender oil significantly increased the number of hair follicles and hair follicle depth in mice that had been treated with it.7 And since it is considered one of the safest essential oils, lavender can be used on just about anybody!
3. Peppermint
It may surprise you to find peppermint on this list. Traditionally, peppermint essential oil has been most commonly used for its ability to help with gastric and digestive issues. However, it is a very stimulating oil that can not only perk you up mentally but also promote hair growth. According to a 2014 study, mice that had been treated with peppermint oil experienced an increase in follicle count, an increase in follicle depth, and a thickened dermal layer after four weeks.8 However, please note that due to its high menthol content, peppermint oil should not be used on or near the face of young children or infants.
4. Geranium
Geranium is another essential oil that is used for hair loss.
Because it is another regenerating essential oil, it is considered beneficial in stimulating hair growth. Composed mostly of citronellol and geraniol, geranium has a high count of monoterpenes, which are considered immune stimulants and general tonics.9 It is also commonly used in skin care due to its anti-inflammatory properties and its ability to balance out skin types. This also makes it beneficial when dealing with hair loss due to allergic or irritated skin.
5. Cedarwood
Cedarwood essential oil can also be beneficial when dealing with hair loss. It is considered useful for multiple skin and scalp conditions, including itchy scalp, dandruff, oily hair, and hair loss. A study published in 1998 looked at patients who suffered from alopecia areata. The patients were treated with an essential oil blend of cedarwood, rosemary, lavender, and thyme, massaged into the scalp every day for 7 months. The results showed that 44% of the patients found improvement in their alopecia after using the essential oil blend.10
6. Clary sage
Another essential oil to consider when dealing with hair loss is clary sage. Not to be confused with common sage, clary sage is a stimulating oil and can be used both to reduce hair loss and to promote hair growth.11 Because it is commonly used to regulate hormones and stress, it can be especially beneficial for hair loss that is due to stress or hormonal issues. Please note that clary sage should not be used by those who are pregnant or breastfeeding, when drinking alcohol, or if estrogen levels need to be monitored.12
7. Roman chamomile
Another one of the best essential oils for hair loss is Roman chamomile. Gentle and calming, chamomile has been used for numerous issues for decades. While it is best known for its calming and gastric properties, chamomile is also used to support hair health.
Because of its ability to help with stress and anxiety, chamomile can be beneficial for hair loss due to anxiety or nervous disorders. It is also very calming to the skin and helpful when dealing with irritated, allergic, or itchy skin and scalp, and it is considered beneficial in reducing hair loss.13 Chamomile is often used in hair care and is even thought to help lighten hair color. Like lavender, chamomile is considered a safe oil and can be used on most individuals.
Essential oil blends for hair loss
It is important to remember that a little goes a long way with essential oils. Because they are highly concentrated substances, only a small amount is needed. If not diluted properly, essential oils can cause irritation and sensitization, so make sure to always dilute them accordingly.
LVM2: device filter and LVM metadata restore
Customize the LVM device filter to get rid of the annoying "/dev/cdrom: open failed" warning
##/dev/cdrom: open failed warning
$ pvcreate /dev/sdb1
/dev/cdrom: open failed: Read-only file system
$ vgcreate vg01 /dev/sdb1
/dev/cdrom: open failed: Read-only file system
##The error occurs because LVM scans all device files by default; you can exclude some device files with device filters
##The file /etc/lvm/cache/.cache contains the device file names scanned by LVM
$ cat /etc/lvm/cache/.cache
persistent_filter_cache {
valid_devices=[
"/dev/ram11",
"/dev/cdrom",
##Edit /etc/lvm/lvm.conf and change the default filter
filter = [ "a/.*/" ]
#to
filter = [ "r|/dev/cdrom|","r|/dev/ram*|" ]
##You need to delete the cache file or run vgscan to regenerate it
$ rm /etc/lvm/cache/.cache OR vgscan

LVM metadata backup and restore
1. Create the PV, VG, LV, and some data
$ pvcreate /dev/sdb1 /dev/sdb2
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created
$ vgcreate vg01 /dev/sdb1 /dev/sdb2
Volume group "vg01" successfully created
$ lvcreate -L100M -n lv01 vg01
Logical volume "lv01" created
$ mkfs.ext3 /dev/vg01/lv01
$ mount /dev/vg01/lv01 /mnt/
$ cp /etc/hosts /mnt/
$ ls /mnt/
hosts lost+found
2. Destroy the LV, VG, and PV
$ vgremove vg01
Do you really want to remove volume group "vg01" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume lv01? [y/n]: y
Logical volume "lv01" successfully removed
Volume group "vg01" successfully removed
#The VG is removed and the PV was also wiped out
$ pvcreate /dev/sdb1 /dev/sdb2
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created
3. Let's recover the LV and the data
##Find the backup file to restore from
$ vgcfgrestore -l vg01
..
file: /etc/lvm/archive/vg01_00002.vg
VG name: vg01
Description: Created *before* executing 'vgremove vg01'
Backup time: Tue May 10 15:41:31 2011
##The first attempt fails, because the PV UUIDs have changed
$ vgcfgrestore -f /etc/lvm/archive/vg01_00002.vg vg01
Couldn't find device with uuid 'pVf1J2-rAsd-eWkD-mCJc-S0pc-47zc-ImjXSB'.
Couldn't find device with uuid 'J14aVl-mbuj-k9MM-63Ad-TBAa-S0xF-VElV2W'.
Cannot restore Volume Group vg01 with 2 PVs marked as missing.
Restore failed.
##Find the old UUIDs
$ grep -B 2 /dev/sdb /etc/lvm/archive/vg01_00002.vg
pv0 {
id = "pVf1J2-rAsd-eWkD-mCJc-S0pc-47zc-ImjXSB"
device = "/dev/sdb1" # Hint only
--
pv1 {
id = "J14aVl-mbuj-k9MM-63Ad-TBAa-S0xF-VElV2W"
device = "/dev/sdb2" # Hint only
$
##Recreate the PVs with the old UUIDs
$ pvcreate -u pVf1J2-rAsd-eWkD-mCJc-S0pc-47zc-ImjXSB /dev/sdb1
Physical volume "/dev/sdb1" successfully created
$ pvcreate -u J14aVl-mbuj-k9MM-63Ad-TBAa-S0xF-VElV2W /dev/sdb2
Physical volume "/dev/sdb2" successfully created
##Run vgcfgrestore again
$ vgcfgrestore -f /etc/lvm/archive/vg01_00002.vg vg01
Restored volume group vg01
##The data was also recovered
$ mount /dev/vg01/lv01 /mnt/
mount: special device /dev/vg01/lv01 does not exist
$ lvchange -a y vg01/lv01
$ mount /dev/vg01/lv01 /mnt/
$ cat /mnt/hosts
127.0.0.1 localhost
..
Boot your rescue media. Activate all volume groups, then list the logical volumes. With this information, and the volumes activated, you should be able to mount the volumes.

With the announcement of the ShellShock Bash vulnerability last week, it has caught the news around security updates. This bug is being dubbed bigger than the Heartbleed Bug. Some interesting reading about ShellShock can be found here.
Fix ShellShock Bash Vulnerability on CentOS – Test
Before you begin it is better to test for all the known Bash vulnerabilities. You can run the following tests from your shell to verify them.
Exploit 1 (CVE-2014-6271)
Execute the following command from your shell.
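The test command itself appears to have been lost from this page; the widely published one-liner for CVE-2014-6271 exports a crafted function through the environment and checks whether the trailing command is executed:

```shell
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```

On a patched bash only "this is a test" is printed; a vulnerable bash prints "vulnerable" first.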
If you see "vulnerable!" you have a vulnerable version of bash which needs to be updated. If you do not see "vulnerable!" in the output you should be fine.
Exploit 2 (CVE-2014-7169)
Run the following command:
If this command prints the current date on the terminal, the version of bash is vulnerable to this exploit. However, if it prints just the word "date" then you are fine against this vulnerability. You might see a file created under the /tmp directory with the name echo; you can delete it safely.
Important: Please note that while testing this exploit you might come across some tests which include rm -f echo. This could possibly delete the binary for echo, which is /bin/echo. Avoid running such tests!!
Exploit 3 (CVE-2014-7186)
Run the following command to test this exploit.
If the output includes "CVE-2014-7186 vulnerable, some_stack", bash is vulnerable to this exploit.
Exploit 4 (CVE-2014-7187)
Run the following command to test this exploit.
If the output includes "CVE-2014-7187 vulnerable, print_me" you have a bash version vulnerable to this exploit.
Fix ShellShock Bash Vulnerability on CentOS
Multiple patches have been pushed during the last week to handle the different exploits above. Almost all the distributions now have a completely patched version of bash available for upgrade. On CentOS / RedHat systems you can use yum to update to the latest available version of Bash. On the affected systems, run the following command to update Bash:
yum update bash
Once the update is complete you can re-run the tests for the exploits above to ensure the installed bash version is indeed patched. More exploits are still being discovered, so it is possible that we will see more patches for Bash in the next few days.

Solved: mount: special device /dev/VolGroup00/LogVol00 does not exist
There are times when using LVM you might come across an error while mounting an LVM partition. The error we are discussing here is "mount: special device /dev/VolGroup00/LogVol00 does not exist".
[root@rmohan ~]# mount /dev/VolGroup00/LogVol00 /media/data
Output:
mount: special device /dev/VolGroup00/LogVol00 does not exist
In your case the VolumeGroup name may be different, and so may the LogicalVolume name.
In order to fix this problem I ran the standard display commands (pvdisplay, vgdisplay, lvdisplay) under LVM. However, everything looked good. I then ran lvscan, which provided some interesting results.
[root@rmohan ~]# lvscan
Output:
inactive '/dev/VolGroup00/LogVol00' [5.40 TB] inherit
So it looks like the Volume Group and the Logical Volume are in an INACTIVE state as far as the kernel is concerned. We will have to activate the Volume Group for the kernel to be able to mount it. The following is how we can activate a Volume Group:
[root@rmohan ~]# vgchange -ay VolGroup00
Where -a controls the availability or unavailability of the logical volumes: it makes the volumes available or unavailable to the kernel.
Output:
1 logical volume(s) in volume group "VolGroup00" now active
Verify the volumes again using lvscan:
[root@rmohan ~]# lvscan
Output:
ACTIVE '/dev/VolGroup00/LogVol00' [5.40 TB] inherit
The kernel should now be able to mount the Logical Volume.
[root@rmohan ~]# mount /dev/VolGroup00/LogVol00 /media/data
Verify the mounts using mount or df:
[root@rmohan ~]# mount

Start/Stop/Delete a Queue Manager
Starting a Queue Manager
Before we can use a Queue Manager, we need to start it, using the STRMQM command. The command to start a Queue Manager called QMA is:
$ strmqm QMA
You should see output similar to the following on your screen:
WebSphere MQ queue manager 'QMA' starting.
2108 log records accessed on queue manager 'QMA' during the log replay phase.
Log replay for queue manager 'QMA' complete.
Transaction manager state recovered for queue manager 'QMA'.
WebSphere MQ queue manager 'QMA' started.
Checking that the Queue Manager is running
To check that a Queue Manager is active, use the DSPMQ command:
$ dspmq
If the Queue Manager is active it should have a status of "Running", as follows:
QMNAME(QMA) STATUS(Running)
Stopping a Queue Manager
To stop (end) a Queue Manager, use the ENDMQM command. This command has four possible parameters:
If we want to suppress error messages, we just add the -z parameter to the command. An example of the command to stop Queue Manager QMA (with an immediate shutdown) is:
$ endmqm -i QMA
Deleting a Queue Manager
The command to delete/drop a Queue Manager is DLTMQM, but before we can issue that command we need to stop all the Listeners for the Queue Manager and then stop (end) the Queue Manager. The following command stops all the Listeners associated with the Queue Manager pointed to by the -m flag (QMA in this example). The -w flag means the command waits until all the Listeners are stopped before returning control:
$ endmqlsr -w -m QMA
The command to stop (end) the Queue Manager is:
$ endmqm QMA
And finally the command to delete the Queue Manager is:
$ dltmqm QMA