Redis Install
Using epel Repository
sudo yum --enablerepo=epel install redis
=====================================================================================================================================================================
Package Arch Version Repository Size
=====================================================================================================================================================================
Installing:
redis x86_64 2.4.10-1.el6 warm 213 k
Transaction Summary
=====================================================================================================================================================================
Install 1 Package
Version 2.4.10-1.el6 was installed (January 30, 2016).
Using remi Repository
sudo rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
sudo yum --enablerepo=remi install redis
=====================================================================================================================================================================
Package Arch Version Repository Size
=====================================================================================================================================================================
Installing:
redis x86_64 3.0.7-1.el6.remi remi 442 k
Installing for dependencies:
jemalloc x86_64 3.3.1-1.8.amzn1 amzn-main 111 k
Transaction Summary
=====================================================================================================================================================================
Install 1 Package (+1 Dependent package)
Version 3.0.7-1.el6.remi was installed (January 30, 2016).
Reference URL
Install Redis on AWS EC2
Setting Service
Start service, set automatic startup
sudo service redis start
sudo chkconfig --level 35 redis on
sudo chkconfig --list | grep redis
Execution result
$ sudo service redis start
Starting redis-server: [ OK ]
$ sudo chkconfig --level 35 redis on
$ sudo chkconfig --list | grep redis
redis 0:off 1:off 2:off 3:on 4:off 5:on 6:off
redis-sentinel 0:off 1:off 2:off 3:off 4:off 5:off 6:off
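To confirm that Redis is actually answering requests, you can also ping it with the bundled client (a quick sanity check, assuming redis-cli was installed along with the package):
$ redis-cli ping
PONG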
I think it is a common question for every Linux user, sooner or later in their career as a desktop or server administrator: "Why does Linux use all my RAM while not doing much?". To this one, today I'd add another question that I'm sure is common for many Linux system administrators: "Why does the free command show swap used when I have so much free RAM?". So, from my study today of SwapCached, I present some useful, or at least I hope so, information on memory management in a Linux system.
Linux has this basic rule: a page of free RAM is wasted RAM. RAM is used for a lot more than just user application data. It also stores data for the kernel itself and, most importantly, can mirror data stored on the disk for super-fast access; this is usually reported as "buffers/cache", "disk cache" or "cached" by top. Cached memory is essentially free, in that it can be replaced quickly if a running (or newly starting) program needs the memory.
Keeping the cache means that if something needs the same data again, there’s a good chance it will still be in the cache in memory.
So, as a first step, you can use the command free to get an initial idea of how your RAM is being used.
This is the output on my old laptop with Xubuntu:
xubuntu-home:~# free
total used free shared buffers cached
Mem: 1506 1373 133 0 40 359
-/+ buffers/cache: 972 534
Swap: 486 24 462
The -/+ buffers/cache line shows how much memory is used and free from the perspective of the applications. In this example 972 MB of RAM are used and 534 MB are available for applications.
Generally speaking, if little swap is being used, memory usage isn’t impacting performance at all.
But if you want to get some more information about your memory, the file you must check is /proc/meminfo. This is mine on Xubuntu 12.04 with a 3.2.0-25-generic kernel:
xubuntu-home:~# cat /proc/meminfo
MemTotal: 1543148 kB
MemFree: 152928 kB
Buffers: 41776 kB
Cached: 353612 kB
SwapCached: 8880 kB
Active: 629268 kB
Inactive: 665188 kB
Active(anon): 432424 kB
Inactive(anon): 474704 kB
Active(file): 196844 kB
Inactive(file): 190484 kB
Unevictable: 160 kB
Mlocked: 160 kB
HighTotal: 662920 kB
HighFree: 20476 kB
LowTotal: 880228 kB
LowFree: 132452 kB
SwapTotal: 498684 kB
SwapFree: 470020 kB
Dirty: 44 kB
Writeback: 0 kB
AnonPages: 891472 kB
Mapped: 122284 kB
Shmem: 8060 kB
Slab: 56416 kB
SReclaimable: 44068 kB
SUnreclaim: 12348 kB
KernelStack: 3208 kB
PageTables: 10380 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 1270256 kB
Committed_AS: 2903848 kB
VmallocTotal: 122880 kB
VmallocUsed: 8116 kB
VmallocChunk: 113344 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 4096 kB
DirectMap4k: 98296 kB
DirectMap4M: 811008 kB
MemTotal and MemFree are easily understandable for everyone, these are some of the other values:
Cached
The Linux Page Cache (“Cached:” from meminfo ) is the largest single consumer of RAM on most systems. Any time you do a read() from a file on disk, that data is read into memory, and goes into the page cache. After this read() completes, the kernel has the option to simply throw the page away since it is not being used. However, if you do a second read of the same area in a file, the data will be read directly out of memory and no trip to the disk will be taken. This is an incredible speedup and is the reason why Linux uses its page cache so extensively: it is betting that after you access a page on disk a single time, you will soon access it again.
dentry/inode caches
Each time you do an ‘ls’ (or any other operation: open(), stat(), etc…) on a filesystem, the kernel needs data which are on the disk. The kernel parses these data on the disk and puts it in some filesystem-independent structures so that it can be handled in the same way across all different filesystems. In the same fashion as the page cache in the above examples, the kernel has the option of throwing away these structures once the ‘ls’ is completed. However, it makes the same bets as before: if you read it once, you’re bound to read it again. The kernel stores this information in several “caches” called the dentry and inode caches. dentries are common across all filesystems, but each filesystem has its own cache for inodes.
This RAM is counted as part of "Slab:" in meminfo.
You can view the different caches and their sizes by executing this command:
head -2 /proc/slabinfo; cat /proc/slabinfo | egrep dentry\|inode
Buffer Cache
The buffer cache ("Buffers:" in meminfo) is a close relative to the dentry/inode caches. The dentries and inodes in memory represent structures on disk, but are laid out very differently. This might be because we have a kernel structure like a pointer in the in-memory copy, but not on disk. It might also happen that the on-disk format is a different endianness than the CPU's.
Memory mapping in top: VIRT, RES and SHR
When you are running top there are three fields related to memory usage. In order to assess your server's memory requirements you have to understand their meaning.
VIRT stands for the virtual size of a process, which is the sum of memory it is actually using, memory it has mapped into itself (for instance the video cards’s RAM for the X server), files on disk that have been mapped into it (most notably shared libraries), and memory shared with other processes. VIRT represents how much memory the program is able to access at the present moment.
RES stands for the resident size, which is an accurate representation of how much actual physical memory a process is consuming. (This also corresponds directly to the %MEM column.) This will virtually always be less than the VIRT size, since most programs depend on the C library.
SHR indicates how much of the VIRT size is actually sharable (memory or libraries). In the case of libraries, it does not necessarily mean that the entire library is resident. For example, if a program only uses a few functions in a library, the whole library is mapped and will be counted in VIRT and SHR, but only the parts of the library file containing the functions being used will actually be loaded in and be counted under RES.
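A quick way to see which processes hold the most resident memory is to sort by RSS; a minimal example (the ps flags are standard procps options, but column layout can vary slightly between versions):
ps aux --sort=-rss | head -5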
Swap
Now we have seen some information on our RAM, but what happens when there is no more free RAM? If I have no memory free, and I need a page for the page cache, inode cache, or dentry cache, where do I get it?
First of all, the kernel tries not to let you get close to 0 bytes of free RAM. This is because, to free up RAM, you usually need to allocate more: the kernel needs a kind of "working space" for its own housekeeping, and if it ever reached zero free RAM it could not do anything more.
Based on the amount of RAM and the different types (high/low memory), the kernel comes up with a heuristic for the amount of memory that it feels comfortable with as its working space. When it reaches this watermark, the kernel starts to reclaim memory from the different uses described above. The kernel can get memory back from any of the these.
However, there is another user of memory that we may have forgotten about by now: user application data.
When the kernel decides not to get memory from any of the other sources we've described so far, it starts to swap. During this process it takes user application data and writes it to a special place (or places) on the disk. Note that this happens not only when RAM is close to becoming full: the kernel can also decide to swap out data in RAM that has not been used for some time (see swappiness below).
For this reason, even a system with vast amounts of RAM (even when properly tuned) can swap. There are lots of pages of memory which are user application data, but are rarely used. All of these are targets for being swapped in favor of other uses for the RAM.
You can check whether swap is being used with the command free; the last line of the output shows information about the swap space. Taking the free output I've used in the example above:
xubuntu-home:~# free
total used free shared buffers cached
Mem: 1506 1373 133 0 40 359
-/+ buffers/cache: 972 534
Swap: 486 24 462
We can see that on this computer there are 24 MB of swap used and 462 MB available.
So the mere presence of used swap is not evidence of a system which has too little RAM for its workload. The best way to determine that is to use the command vmstat: if you see a lot of pages being swapped in (si) and out (so), it means the swap is actively used and the system is "thrashing", i.e. it needs new RAM as fast as it can swap out application data.
This is an output on my gentoo laptop, while it’s idle:
~ # vmstat 5 5
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 0 2802448 25856 731076 0 0 99 14 365 478 7 3 88 3
0 0 0 2820556 25868 713388 0 0 0 9 675 906 2 2 96 0
0 0 0 2820736 25868 713388 0 0 0 0 675 925 3 1 96 0
2 0 0 2820388 25868 713548 0 0 0 2 671 901 3 1 96 0
0 0 0 2820668 25868 713320 0 0 0 0 681 920 2 1 96 0
Note that in the output of the free command you have just two values about swap, free and used, but there is another important value for the swap space: the swap cache.
Swap Cache
The swap cache is very similar in concept to the page cache. A page of user application data written to disk is very similar to a page of file data on the disk. Any time a page is read in from swap (“si” in vmstat), it is placed in the swap cache. Just like the page cache, this is a bet on the kernel’s part. It is betting that we might need to swap this page out _again_. If that need arises, we can detect that there is already a copy on the disk and simply throw the page in memory away immediately. This saves us the cost of re-writing the page to the disk.
The swap cache is really only useful when we are reading data from swap and never writing to it. If we write to the page, the copy on the disk is no longer in sync with the copy in memory. If this happens, we have to write to the disk to swap the page out again, just like we did the first time. However, the cost of saving _any_ writes to disk is great, and even with only a small portion of the swap cache ever written to, the system will perform better.
So, to know the swap that is really in use, we should subtract the value of SwapCached from the swap used; you can find this information in /proc/meminfo.
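As a quick sketch (the field names come from the /proc/meminfo output above; the awk one-liner itself is just an illustrative helper), the "real" swap usage can be computed like this:
awk '/SwapTotal/ {t=$2} /SwapFree/ {f=$2} /SwapCached/ {c=$2} END {print "Swap really used:", t-f-c, "kB"}' /proc/meminfo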
Swappiness
When an application needs memory and all the RAM is fully occupied, the kernel has two ways to free some memory at its disposal: it can either reduce the disk cache in the RAM by eliminating the oldest data or it may swap some less used portions (pages) of programs out to the swap partition on disk. It is not easy to predict which method would be more efficient. The kernel makes a choice by roughly guessing the effectiveness of the two methods at a given instant, based on the recent history of activity.
Before the 2.6 kernels, the user had no possible means to influence the calculations and there could happen situations where the kernel often made the wrong choice, leading to thrashing and slow performance. The addition of swappiness in 2.6 changes this.
Swappiness takes a value between 0 and 100 to change the balance between swapping applications and freeing cache. At 100, the kernel will always prefer to find inactive pages and swap them out; in other cases, whether a swapout occurs depends on how much application memory is in use and how poorly the cache is doing at finding and releasing inactive items.
The default swappiness is 60. A value of 0 gives something close to the old behavior where applications that wanted memory could shrink the cache to a tiny fraction of RAM. For laptops which would prefer to let their disk spin down, a value of 20 or less is recommended.
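To check the current value and try a different one (the sysctl knob is the standard one; the value 20 below is just the laptop-oriented suggestion from above):
cat /proc/sys/vm/swappiness
sudo sysctl -w vm.swappiness=20
# to make it permanent, add "vm.swappiness = 20" to /etc/sysctl.conf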
Conclusions
In this article I've put together some information that I've found useful in my work as a system administrator; I hope it can be useful to you as well.
Reference
Most of this article is based on the work found on these pages:
By now, most of you have probably already heard of the biggest disaster in the history of IT: the Meltdown and Spectre security vulnerabilities, which affect all modern CPUs, from those in desktops and servers to the ones found in smartphones. Unfortunately, there's much confusion about the level of threat we're dealing with here, because some of the impacted vendors need reasons to explain the still-missing security patches. But even those who did release a patch avoid mentioning that it only partially addresses the threat. And there's no good explanation of these vulnerabilities at the right level (i.e. not aimed at developers), something that just about anyone working in IT could understand and use to draw their own conclusions. So, I decided to give it a shot and deliver just that.
First, some essential background. Both vulnerabilities leverage "speculative execution", a feature central to the modern CPU architecture. Without it, processors would idle most of the time, just waiting to receive I/O results from various peripheral devices, which are all at least 10x slower than processors. For example, RAM, the fastest thing out there in our minds, runs at frequencies comparable to the CPU, but all overclocking enthusiasts know that RAM I/O involves multiple stages, each taking multiple CPU cycles. And hard disks are at least a hundred times slower than RAM. So, instead of waiting for the real result of some IF clause to be calculated, the processor assumes the most probable result and continues execution according to that assumption. Then, many cycles later, when the actual result of said IF is known: if it was "guessed" right, we're already way ahead in the program code execution path and didn't just waste all those cycles waiting for the I/O operation to complete. If, however, the assumption turns out to be incorrect, the execution state of that "parallel universe" is simply discarded, and program execution is restarted back from said IF clause (as if speculative execution did not exist). But since those prediction algorithms are pretty smart and polished, more often than not the guesses are right, which adds a significant boost to execution performance for some software. Speculative execution is a feature processors have had for two decades now, which is also why any CPU that is still able to run these days is affected.
Now, while the two vulnerabilities are distinctly different, they share one thing in common: they exploit the cornerstone of computer security, specifically process isolation. Basically, the security of all operating systems and software is completely dependent on the native ability of CPUs to ensure complete process isolation, meaning processes cannot access each other's memory. How exactly is such isolation achieved? Instead of having direct physical RAM access, all processes operate in virtual address spaces, which are mapped to physical RAM in a way that does not overlap. These memory allocations are performed and controlled in hardware, in the so-called Memory Management Unit (MMU) of the CPU.
At this point, you already know enough to understand Meltdown. This vulnerability is basically a bug in the MMU logic, caused by skipping address checks during speculative execution (rumor has it there is a source code comment saying this was done "not to break optimizations"). So, how can this vulnerability be exploited? Pretty easily, in fact. First, the malicious code tricks the processor into the speculative execution path, and from there performs an unrestricted read of another process' memory. Simple as that. Now, you may rightfully wonder, wouldn't the results obtained from such speculative execution be discarded completely as soon as the CPU finds out it "took a wrong turn"? You're absolutely correct, they are in fact discarded... with one exception: they remain in the CPU cache, which is a completely dumb thing that just caches everything the CPU accesses. And while no process can read the content of the CPU cache directly, there's a technique to "read" it implicitly, by doing legitimate RAM reads within your own process and measuring the response times (anything stored in the CPU cache will obviously be served much faster). You may have already heard that browser vendors are currently busy releasing patches that make JavaScript timers more "coarse"; now you know why (but more on this later).
As far as the impact goes, Meltdown is limited to Intel and ARM processors only, with AMD CPUs unaffected. But for Intel, Meltdown is extremely nasty, because it is so easy to exploit: one of our enthusiasts compiled the exploit literally over a morning coffee and confirmed it works on every single computer he had access to (in his case, most are Linux-based). And the possibilities Meltdown opens are truly terrifying, for example how about obtaining an admin password as it is being typed in another process running on the same OS? Or accessing your precious bitcoin wallet? Of course, you'll say that the exploit must first be delivered to the attacked computer and executed there, which is fair, but here's the catch: JavaScript from some web site running in your browser will do just fine too, so the delivery part is the easiest for now. By the way, keep in mind that those 3rd party ads displayed on legitimate web sites often include JavaScript too, so it's really a good idea to install an ad blocker now, if you haven't already! And for those using Chrome, enabling the Site Isolation feature is also a good idea.
OK, so let's switch to Spectre next. This vulnerability is known to affect all modern CPUs, albeit to different extents. It is not based on a bug per se, but rather on a design peculiarity of the execution path prediction logic, which is implemented by the so-called Branch Prediction Unit (BPU). Essentially, what the BPU does is accumulate statistics to estimate the probability of IF clause results. For example, if a certain IF clause that compares some variable to zero returned FALSE 100 times in a row, you can predict with high probability that the clause will return FALSE when called for the 101st time, and speculatively move along the corresponding code execution branch even without having to load the actual variable. Makes perfect sense, right? However, the problem here is that while collecting these statistics, the BPU does NOT distinguish between different processes, for added "learning" effectiveness; which makes sense too, because computer programs share much in common (common algorithms, implementation best practices for common constructs, and so on). And this is exactly what the exploit is based on: this peculiarity allows the malicious code to basically "train" the BPU by running a construct identical to one in the attacked process hundreds of times, effectively enabling it to control speculative execution of the attacked process once it hits its own respective construct, making it dump the "good stuff" into the CPU cache. Pretty awesome find, right?
But here comes the major difference between Meltdown and Spectre, which significantly complicates the implementation of Spectre-based exploits. While Meltdown can "scan" the CPU cache directly (since the sought-after value was put there from within the scope of the process running the Meltdown exploit), in the case of Spectre it is the victim process itself that puts this value into the CPU cache. Thus, only the victim process itself is able to perform that timing-based CPU cache "scan". Luckily for hackers, we live in the API-first world, where every decent app has an API you can call to make it do the things you need, again measuring how long the execution of each API call took. Getting to the actual value requires deep analysis of the specific application though, so this approach is only really worth pursuing with open-source apps. But the "beauty" of Spectre is that, apparently, there are many ways to make the victim process leak its data to the CPU cache through speculative execution in a way that allows the attacking process to "pick it up". Google engineers found and documented a few, but unfortunately many more are expected to exist. Who will find them first?
Of course, all of that only sounds easy at a conceptual level; implementations against real-world apps are extremely complex, and when I say "extremely" I really mean it. For example, Google engineers created a Spectre exploit POC that, running inside a KVM guest, can read host kernel memory at a rate of over 1500 bytes/second. However, before the attack can be performed, the exploit requires an initialization that takes 30 minutes! So clearly, there's a lot of math involved there. But if Google engineers could do that, hackers will be able to as well, because looking at how advanced some of the ransomware we saw last year was, one might wonder if it was written by folks Google could not offer the salary or the position they wanted. It's also worth mentioning here that a JavaScript-based POC also exists already, making the browser a viable attack vector for Spectre.
Now, the most important part: what do we do about those vulnerabilities? Well, it would appear that Intel and Google disclosed the vulnerability to all major vendors in advance, so by now most have already released patches. By the way, we really owe a big "thank you" to all those dev and QC folks who were working hard on patches while we were celebrating; just imagine the amount of work and testing required here, when changes are made to the holy grail of the operating system. Anyway, after reading the above, I hope you agree that vulnerabilities do not get more critical than these two, so be sure to install those patches ASAP. And, aside from the most obvious stuff like your operating systems and hypervisors, be sure not to overlook any storage, network and other appliances, as they all run on some OS that also needs to be patched against these vulnerabilities. And don't forget your smartphones! By the way, here's one good community tracker for all security bulletins (Microsoft is not listed there, but they did push the corresponding emergency update to Windows Update back on January 3rd).
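On Linux hosts with a sufficiently recent kernel, you can quickly check the reported mitigation status; the sysfs interface below was added alongside the mitigation patches, so older unpatched kernels simply won't have these files:
grep . /sys/devices/system/cpu/vulnerabilities/*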
Having said that, there are a couple of important things you should keep in mind about those patches. First, they do come with a performance impact. Again, some folks will want you to think that the impact is negligible, but that's only true for applications with low I/O activity; many enterprise apps will definitely take a big hit, at least big enough to account for. For example, installing the patch resulted in an almost 20% performance drop in the PostgreSQL benchmark. And then there is the major cloud service that saw CPU usage double after installing the patch on one of its servers. This impact is caused by the patch adding significant overhead to so-called syscalls, which are what computer programs must use for any interaction with the outside world.
Last but not least, do know that while those patches fully address Meltdown, they only address a few currently known attack vectors that Spectre enables. Most security specialists agree that the Spectre vulnerability opens a whole slew of "opportunities" for hackers, and that a solid fix can only be delivered in CPU hardware. Which in turn probably means at least two years until the first such processor appears, and then a few more years until you replace the last impacted CPU. But until that happens, it sounds like we should all look forward to many fun years of jumping on yet another critical patch against some newly discovered Spectre-based attack.
Jenkins is a free and open source CI (Continuous Integration) tool written in Java. Jenkins is widely used for project development, deployment, and automation. Jenkins allows you to automate the non-human part of the whole software development process. It supports version control tools, including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and sbt based projects as well as arbitrary shell scripts and Windows batch commands. The creator of Jenkins is Kohsuke Kawaguchi, and it is released under the MIT License. Jenkins' security depends on two factors: access control and protection from external threats.
- Access control can be customized via two ways, user authentication and authorization.
- Protection from external threats such as CSRF attacks and malicious builds is supported as well.
Requirements
It does not require any special kind of hardware; you'll only need a CentOS 7 server and root access to it. You can switch from a non-root user to the root user using the sudo -i command.
Update System
It is highly recommended to install Jenkins on a freshly updated server. To upgrade the available packages and the system, run the command below.
yum -y update
Install Java
Before going through the installation of Jenkins, you'll need to set up a Java Virtual Machine (JDK) on your system. Simply run the following command to install Java.
yum -y install java-1.8.0-openjdk.x86_64
Once the installation is finished, you can check the version to confirm it. To do so, run the following command.
java -version
The above command will show the installation details of Java. You should see the following result on your terminal screen.
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
Next, you'll need to set up two environment variables, JAVA_HOME and JRE_HOME. To do so, run the following commands one by one.
cp /etc/profile /etc/profile_backup
echo 'export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk' | sudo tee -a /etc/profile
echo 'export JRE_HOME=/usr/lib/jvm/jre' | sudo tee -a /etc/profile
source /etc/profile
Finally, print them for review using the following commands.
echo $JAVA_HOME
You should see the following output.
/usr/lib/jvm/jre-1.8.0-openjdk
echo $JRE_HOME
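Given the export added to /etc/profile above, the expected output should be:
/usr/lib/jvm/jre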
Shell script
#!/bin/sh
yum install -y java
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
yum install -y jenkins
yum install -y git
service jenkins start/stop/restart
chkconfig jenkins on
java
java -version
java -jar jenkins.war --httpPort=8080
# get jenkin-cli
wget http://localhost:8080/jnlpJars/jenkins-cli.jar
# jenkin-cli install plugin
java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin checkstyle cloverphp crap4j dry htmlpublisher jdepend plot pmd violations warnings xunit --username=yang --password=lljkl
# safe restart
java -jar jenkins-cli.jar -s http://localhost:8080 safe-restart --username=yang --password=lljkl
Install Jenkins
We have installed all the dependencies required by Jenkins and now we are ready to install it. Run the following commands to install the latest stable release of Jenkins.
wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
The above two commands will add the Jenkins repository and import its key. If you have previously imported the key from Jenkins, the rpm --import will fail because you already have the key. You can ignore that and move on.
Now run following command to install Jenkins on your server.
yum -y install jenkins
Next, you'll need to start the Jenkins service and set it to run at boot time. To do so, use the following commands.
systemctl start jenkins.service
systemctl enable jenkins.service
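To verify that Jenkins came up, and to grab the initial admin password the setup wizard asks for (the secrets path below is the default for a Jenkins 2.x RPM install; adjust it if you changed JENKINS_HOME):
systemctl status jenkins.service
sudo cat /var/lib/jenkins/secrets/initialAdminPassword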
Connecting to a Git Repo
You will probably want to connect to a git repository next. This is also somewhat dependent on the operating system you use, so I provide the steps to do this on CentOS as well:
sudo yum install git
- Generate an SSH key on the server
ssh-keygen -t rsa
- When prompted, save the SSH key under the following path (I got this idea from reading comments on other guides)
/var/lib/jenkins/.ssh
- Assure that the .ssh directory is owned by the Jenkins user:
sudo chown -R jenkins:jenkins /var/lib/jenkins/.ssh
- Copy the public generated key to your git server (or add it in the GitHub/BitBucket web interface)
- Assure your git server is listed in the known_hosts file. In my case, since I am using BitBucket my /var/lib/jenkins/.ssh/known_hosts file contains something like the following
bitbucket.org,104.192.143.3 ssh-rsa [...]
- You can now create a new project and use Git as the SCM. You don't need to provide any git credentials; Jenkins pulls these automatically from the /var/lib/jenkins/.ssh directory. A quick connectivity check is shown below.
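Before wiring up a job, it can help to confirm that the jenkins user can actually reach your git server over SSH. A quick check (assuming BitBucket as in the example above) is:
sudo -u jenkins ssh -T git@bitbucket.org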
Connecting to GitHub
- In the Jenkins web interface, click on Credentials and then select the Jenkins Global credentials. Add a credential for GitHub which includes your GitHub username and password.
- In the Jenkins web interface, click on Manage Jenkins and then on Configure System. Then scroll down to GitHub and then under GitHub servers click the Advanced Button. Then click the button Manage additional GitHub actions.
- In the popup select Convert login and password to token and follow the prompts. This will result in a new credential having been created. Save and reload the page.
- Now go back to the GitHub servers section and click to add an additional server. As the credential, select the one that was just created.
- In the Jenkins web interface, click on New Item and then select GitHub organisation and connect it to your user account.
Any of your GitHub projects will be automatically added to Jenkins if they contain a Jenkinsfile.
Connect with BitBucket
- First, you will need to install the BitBucket plugin.
- After it is installed, create a normal git project.
- Go to the Configuration for this project and select the following option:
- Log into BitBucket and create a webhook in the settings for your project pointing to your Jenkins server as follows: http://yourserver.com/bitbucket-hook/ (note the slash at the end)
Testing a Java Project
Chances are high you would like to run tests against a Java project; below are some instructions to get that working:
If you cannot mount your XFS partition, get the classic "wrong fs type, bad superblock" error, and see a message in the kernel logs (dmesg) like this:
XFS: Filesystem sdb7 has duplicate UUID - can't mount
you can still mount the filesystem with the nouuid option as below:
mount -o nouuid /dev/sdb7 disk-7
But on every mount you have to provide the nouuid option. So, for a permanent solution, generate a new UUID for this partition with the xfs_admin utility:
xfs_admin -U generate /dev/sdb7
Clearing log and setting UUID
writing all SBs
new UUID = 01fbb5f2-1ee0-4cce-94fc-024efb3cd3a4
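You can then confirm the new UUID and mount the partition normally (blkid is one way to check it; the device and mount point are the same ones used in the example above):
blkid /dev/sdb7
mount /dev/sdb7 disk-7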
Patching a server is an important task for Linux system administrators, making the system more stable and better performing. Vendors frequently release security / high-risk patches, and the affected software needs to be upgraded to guard against potential security risks.
Yum (Yellowdog Updater, Modified) is an RPM package management tool used on CentOS and Red Hat systems. The yum history command allows the system administrator to roll back the system to a previous state, but due to some limitations the rollback does not succeed in every case: sometimes the yum command may do nothing, and sometimes it may remove packages you did not expect.
I suggest you still take a complete system backup before you upgrade; yum history cannot be used as a replacement for a system backup. A system backup lets you restore the system to its state at an arbitrary point in time.
In some cases, what should I do if the installed application does not work or has some errors after it has been patched (possibly due to library incompatibilities or package upgrades)?
Talk to the application development team and find out where the problem is with libraries and packages, then use the yum history command to roll back.
yum update
Loaded plugins: fastestmirror, security
Setting up Update Process
Loading mirror speeds from cached hostfile
epel/metalink | 12 kB 00:00
* epel: mirror.csclub.uwaterloo.ca
base | 3.7 kB 00:00
dockerrepo | 2.9 kB 00:00
draios | 2.9 kB 00:00
draios/primary_db | 13 kB 00:00
epel | 4.3 kB 00:00
epel/primary_db | 5.9 MB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 2.5 MB 00:00
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be updated
---> Package git.x86_64 0:1.7.1-9.el6_9 will be an update
---> Package httpd.x86_64 0:2.2.15-60.el6.centos.4 will be updated
---> Package httpd.x86_64 0:2.2.15-60.el6.centos.5 will be an update
---> Package httpd-tools.x86_64 0:2.2.15-60.el6.centos.4 will be updated
---> Package httpd-tools.x86_64 0:2.2.15-60.el6.centos.5 will be an update
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be updated
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================
Package Arch Version Repository Size
=================================================================================================
Updating:
git x86_64 1.7.1-9.el6_9 updates 4.6 M
httpd x86_64 2.2.15-60.el6.centos.5 updates 836 k
httpd-tools x86_64 2.2.15-60.el6.centos.5 updates 80 k
perl-Git noarch 1.7.1-9.el6_9 updates 29 k
Transaction Summary
=================================================================================================
Upgrade 4 Package(s)
Total download size: 5.5 M
Is this ok [y/N]: n
As you can see in the above output, an update for the git package is available, so that is the one we will use for this walkthrough. Run the following command to see the version information for the package (the currently installed version and the available update).
yum list git
Loaded plugins: fastestmirror, security
Setting up Update Process
Loading mirror speeds from cached hostfile
* epel: mirror.csclub.uwaterloo.ca
Installed Packages
git.x86_64 1.7.1-8.el6 @base
Available Packages
git.x86_64 1.7.1-9.el6_9 updates
Run the following command to update the git package from 1.7.1-8 to 1.7.1-9.
# yum update git
Loaded plugins: fastestmirror, presto
Setting up Update Process
Loading mirror speeds from cached hostfile
* base: repos.lax.quadranet.com
* epel: fedora.mirrors.pair.com
* extras: mirrors.seas.harvard.edu
* updates: mirror.sesp.northwestern.edu
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be updated
--> Processing Dependency: git = 1.7.1-8.el6 for package: perl-Git-1.7.1-8.el6.noarch
---> Package git.x86_64 0:1.7.1-9.el6_9 will be an update
--> Running transaction check
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be updated
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================
Package Arch Version Repository Size
=================================================================================================
Updating:
git x86_64 1.7.1-9.el6_9 updates 4.6 M
Updating for dependencies:
perl-Git noarch 1.7.1-9.el6_9 updates 29 k
Transaction Summary
=================================================================================================
Upgrade 2 Package(s)
Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 4.6 M
(1/2): git-1.7.1-9.el6_9.x86_64.rpm | 4.6 MB 00:00
(2/2): perl-Git-1.7.1-9.el6_9.noarch.rpm | 29 kB 00:00
-------------------------------------------------------------------------------------------------
Total 5.8 MB/s | 4.6 MB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : perl-Git-1.7.1-9.el6_9.noarch 1/4
Updating : git-1.7.1-9.el6_9.x86_64 2/4
Cleanup : perl-Git-1.7.1-8.el6.noarch 3/4
Cleanup : git-1.7.1-8.el6.x86_64 4/4
Verifying : git-1.7.1-9.el6_9.x86_64 1/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 2/4
Verifying : git-1.7.1-8.el6.x86_64 3/4
Verifying : perl-Git-1.7.1-8.el6.noarch 4/4
Updated:
git.x86_64 0:1.7.1-9.el6_9
Dependency Updated:
perl-Git.noarch 0:1.7.1-9.el6_9
Complete!
Verify the updated version of the git package.
# yum list git
Installed Packages
git.x86_64 1.7.1-9.el6_9 @updates
or
# rpm -q git
git-1.7.1-9.el6_9.x86_64
At this point we have successfully completed a package update and have a package to roll back. Just follow the steps below for the rollback.
First, get the yum transaction ID using the following command. The output below clearly shows all the required information: the transaction ID, who performed the transaction (the username), date and time, the action (install or update), and how many packages were altered in the transaction.
# yum history
or
# yum history list all
Loaded plugins: fastestmirror, presto
ID | Login user | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
13 | root | 2017-08-18 13:30 | Update | 2
12 | root | 2017-08-10 07:46 | Install | 1
11 | root | 2017-07-28 17:10 | E, I, U | 28 EE
10 | root | 2017-04-21 09:16 | E, I, U | 162 EE
9 | root | 2017-02-09 17:09 | E, I, U | 20 EE
8 | root | 2017-02-02 10:45 | Install | 1
7 | root | 2016-12-15 06:48 | Update | 1
6 | root | 2016-12-15 06:43 | Install | 1
5 | root | 2016-12-02 10:28 | E, I, U | 23 EE
4 | root | 2016-10-28 05:37 | E, I, U | 13 EE
3 | root | 2016-10-18 12:53 | Install | 1
2 | root | 2016-09-30 10:28 | E, I, U | 31 EE
1 | root | 2016-07-26 11:40 | E, I, U | 160 EE
The last transaction shows that two packages were altered, because updating git also updated its dependency perl-Git. Run the following command to view detailed information about the transaction.
# yum history info 13
Loaded plugins: fastestmirror, presto
Transaction ID : 13
Begin time : Fri Aug 18 13:30:52 2017
Begin rpmdb : 420:f5c5f9184f44cf317de64d3a35199e894ad71188
End time : 13:30:54 2017 (2 seconds)
End rpmdb : 420:d04a95c25d4526ef87598f0dcaec66d3f99b98d4
User : root
Return-Code : Success
Command Line : update git
Transaction performed with:
Installed rpm-4.8.0-55.el6.x86_64 @base
Installed yum-3.2.29-81.el6.centos.noarch @base
Installed yum-plugin-fastestmirror-1.1.30-40.el6.noarch @base
Installed yum-presto-0.6.2-1.el6.noarch @anaconda-CentOS-201207061011.x86_64/6.3
Packages Altered:
Updated git-1.7.1-8.el6.x86_64 @base
Update 1.7.1-9.el6_9.x86_64 @updates
Updated perl-Git-1.7.1-8.el6.noarch @base
Update 1.7.1-9.el6_9.noarch @updates
history info
Run the following command to roll the git package back to the previous version.
# yum history undo 13
Loaded plugins: fastestmirror, presto
Undoing transaction 13, from Fri Aug 18 13:30:52 2017
Updated git-1.7.1-8.el6.x86_64 @base
Update 1.7.1-9.el6_9.x86_64 @updates
Updated perl-Git-1.7.1-8.el6.noarch @base
Update 1.7.1-9.el6_9.noarch @updates
Loading mirror speeds from cached hostfile
* base: repos.lax.quadranet.com
* epel: fedora.mirrors.pair.com
* extras: repo1.dal.innoscale.net
* updates: mirror.vtti.vt.edu
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be a downgrade
---> Package git.x86_64 0:1.7.1-9.el6_9 will be erased
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be a downgrade
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================
Package Arch Version Repository Size
=================================================================================================
Downgrading:
git x86_64 1.7.1-8.el6 base 4.6 M
perl-Git noarch 1.7.1-8.el6 base 29 k
Transaction Summary
=================================================================================================
Downgrade 2 Package(s)
Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 4.6 M
(1/2): git-1.7.1-8.el6.x86_64.rpm | 4.6 MB 00:00
(2/2): perl-Git-1.7.1-8.el6.noarch.rpm | 29 kB 00:00
-------------------------------------------------------------------------------------------------
Total 3.4 MB/s | 4.6 MB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : perl-Git-1.7.1-8.el6.noarch 1/4
Installing : git-1.7.1-8.el6.x86_64 2/4
Cleanup : perl-Git-1.7.1-9.el6_9.noarch 3/4
Cleanup : git-1.7.1-9.el6_9.x86_64 4/4
Verifying : git-1.7.1-8.el6.x86_64 1/4
Verifying : perl-Git-1.7.1-8.el6.noarch 2/4
Verifying : git-1.7.1-9.el6_9.x86_64 3/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 4/4
Removed:
git.x86_64 0:1.7.1-9.el6_9 perl-Git.noarch 0:1.7.1-9.el6_9
Installed:
git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6
Complete!
After rollback, use the following command to re-check the downgraded package version.
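The same check used earlier works here; for example (expected output shown, matching the downgraded version):
# rpm -q git
git-1.7.1-8.el6.x86_64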
Rollback Updates using YUM downgrade command
Alternatively, we can roll back an update using the yum downgrade command.
# yum downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6
Loaded plugins: search-disabled-repos, security, ulninfo
Setting up Downgrade Process
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be a downgrade
---> Package git.x86_64 0:1.7.1-9.el6_9 will be erased
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be a downgrade
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================
Package Arch Version Repository Size
=================================================================================================
Downgrading:
git x86_64 1.7.1-8.el6 base 4.6 M
perl-Git noarch 1.7.1-8.el6 base 29 k
Transaction Summary
=================================================================================================
Downgrade 2 Package(s)
Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): git-1.7.1-8.el6.x86_64.rpm | 4.6 MB 00:00
(2/2): perl-Git-1.7.1-8.el6.noarch.rpm | 28 kB 00:00
-------------------------------------------------------------------------------------------------
Total 3.7 MB/s | 4.6 MB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : perl-Git-1.7.1-8.el6.noarch 1/4
Installing : git-1.7.1-8.el6.x86_64 2/4
Cleanup : perl-Git-1.7.1-9.el6_9.noarch 3/4
Cleanup : git-1.7.1-9.el6_9.x86_64 4/4
Verifying : git-1.7.1-8.el6.x86_64 1/4
Verifying : perl-Git-1.7.1-8.el6.noarch 2/4
Verifying : git-1.7.1-9.el6_9.x86_64 3/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 4/4
Removed:
git.x86_64 0:1.7.1-9.el6_9 perl-Git.noarch 0:1.7.1-9.el6_9
Installed:
git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6
Complete!
Note: you have to downgrade the dependent packages too, otherwise yum will remove the current version of the dependency packages instead of downgrading them, because the downgrade command cannot satisfy the dependency on its own.
For Fedora Users
Use the same commands as above, changing the package manager from yum to dnf.
# dnf list git
# dnf history
# dnf history info
# dnf history undo
# dnf list git
# dnf downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6
MySQL backup command mysqldump parameters and examples
1. Syntax and options
-h, --host=name
Host name of the MySQL server to connect to
-P[port_num], --port=port_num
The TCP/IP port number to use when connecting to the MySQL server
--master-data
This option adds the binary log position and file name to the output. If it is set to 1, it is printed as a CHANGE MASTER statement; if set to 2, a comment prefix is added. This option also automatically turns on --lock-all-tables, unless --single-transaction is also specified (in that case, the global read lock is held only briefly at the start of the dump; do not forget to read the --single-transaction section). In all cases, any action on the logs happens at the exact moment of the export. This option automatically turns off --lock-tables.
-x, --lock-all-tables
Lock all tables across all databases. This is achieved by holding a global read lock for the duration of the whole dump. It automatically turns off --single-transaction and --lock-tables.
--single-transaction
Exports a consistent snapshot of the data by wrapping the export in a single transaction. It only works with tables that use a storage engine supporting MVCC (currently only InnoDB); other engines cannot guarantee a consistent export. While a --single-transaction dump is in progress, to make sure the dump file is valid (correct table data and binary log position), no other connection should execute ALTER TABLE, DROP TABLE, RENAME TABLE or TRUNCATE TABLE, as these would invalidate the consistent snapshot. This option automatically turns off --lock-tables.
-l, --lock-tables
Take a read lock on all tables. (On by default; use --skip-lock-tables to disable. The options above turn off -l.)
-F, --flush-logs
Flush the server's log files before starting the export. Note that if you export many databases at once (using the --databases= or --all-databases option), the logs are flushed as each database is exported. The exception is when using --lock-all-tables or --master-data: in that case the logs are flushed only once, while all tables are locked. So if you want your export and the log flush to happen at exactly the same moment, you need to combine --flush-logs with --lock-all-tables or --master-data.
--delete-master-logs
Delete the logs on the master after the backup is complete. This option automatically turns on --master-data.
--opt
Shorthand for --add-drop-table, --add-locks, --create-options, --quick, --extended-insert, --lock-tables, --set-charset and --disable-keys. (It is on by default; --skip-opt turns it off so those options keep their individual defaults.) It should give you the fastest possible export for reading back into a MySQL server. --compact disables almost all of the options above.
-q, --quick
Do not buffer the whole result set; write rows directly to stdout. (On by default; use --skip-quick to disable.) This option is useful for dumping large tables.
--set-charset
Add SET NAMES default_character_set to the output. This option is enabled by default. To suppress the SET NAMES statement, use --skip-set-charset.
--add-drop-table
Add a DROP TABLE statement before each CREATE TABLE statement. On by default.
--add-locks
Surround each table dump with LOCK TABLES before and UNLOCK TABLES after (to make inserts into MySQL faster). On by default.
--create-options
Include all MySQL-specific table options in the CREATE TABLE statements. On by default; use --skip-create-options to disable.
-e, --extended-insert
Use the multi-row INSERT syntax. On by default (gives more compact and faster insert statements).
-d, --no-data
Do not write any table row data. This is useful if you only want to export the structure of the tables.
--add-drop-database
Add a DROP DATABASE statement before each CREATE DATABASE. Off by default, so you generally need to make sure the target database already exists when importing.
--default-character-set=
The default character set to use. If not specified, mysqldump uses utf8.
-B, --databases
Dump several databases. Normally, mysqldump treats the first name argument on the command line as a database name and the following names as table names. With this option, it treats all name arguments as database names. CREATE DATABASE IF NOT EXISTS db_name and USE db_name statements are included in the output before each new database.
--tables
Overrides the --databases option. All arguments after this option are treated as table names.
-u[name], --user=
MySQL user name to use when connecting to the server.
-p[password], --password[=password]
The password to use when connecting to the server. If you use the short form (-p), you cannot have a space between the option and the password. If you omit the password value after --password or -p on the command line, you will be prompted for one.
Example
Export a database:
$ mysqldump -h localhost -uroot -ppassword \
    --master-data=2 --single-transaction --add-drop-table --create-options --quick \
    --extended-insert --default-character-set=utf8 \
    --databases discuz > backup-file.sql
Export a table:
$ mysqldump -u pak -p --opt --flush-logs pak t_user > pak-t_user.sql
Compress backup files:
$ mysqldump -hhostname -uusername -ppassword --databases dbname | gzip > backup-file.sql.gz
The corresponding restore operation is:
gunzip < backup-file.sql.gz | mysql -uusername -ppassword dbname
Import the database:
mysql> use target_dbname
mysql> source /mysql/backup/path/backup-file.sql
or
$ mysql target_dbname < backup-file.sql
For imports there is also a mysqlimport command, which I have yet to study.
Dump directly from one database to another:
mysqldump -u username -p --opt dbname | mysql --host remote_host -C dbname2
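Putting the options above together, a simple nightly backup one-liner might look like this (the database name, credentials and target directory are placeholders; --single-transaction assumes InnoDB tables as discussed above):
mysqldump -uroot -ppassword --single-transaction --quick --databases mydb | gzip > /backup/mydb-$(date +%F).sql.gz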
CentOS 7: binding multiple network cards (NIC teaming) with nmcli
Run the ip link command to view the interfaces available in the system.
1. Create the team (bond) interface
nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"roundrobin"}}'
Various modes:
The runner METHOD is one of the following: broadcast, activebackup, roundrobin, loadbalance, or lacp. The classic bonding driver modes, for reference (an activebackup example is shown right after this list):
- mode 0 (balance-rr): round-robin policy (requires Eth-Trunk / port-channel configuration on the switch)
- mode 1 (active-backup): active-backup policy
- mode 2 (balance-xor): XOR policy
- mode 3 (broadcast): broadcast policy
- mode 4 (802.3ad): IEEE 802.3ad dynamic link aggregation
- mode 5 (balance-tlb): adaptive transmit load balancing
- mode 6 (balance-alb): adaptive load balancing (adapter adaptive load balancing)
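For example, if you want failover rather than round-robin, the same command from step 1 can be used with the activebackup runner (team0 and the interface names are the same placeholders as above):
nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"activebackup"}}'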
2. View the created connections
nmcli con show
3. Add the slave (port) interfaces to the team
nmcli con add type team-slave con-name team0-port1 ifname em1 master team0
nmcli con add type team-slave con-name team0-port2 ifname em4 master team0
4. Configure IP address and gateway
nmcli con mod team0 ipv4.addresses "171.16.41.x/24"
nmcli con mod team0 ipv4.gateway "171.15.41.x"
nmcli con mod team0 ipv4.method manual
nmcli con up team0
5. Restart the network service
systemctl restart network
6. Check the team (binding) status
teamdctl team0 state
7. Test the NIC binding failover
nmcli dev dis em1   # take the port down
nmcli dev con em1   # bring the port back up
CentOS and RHEL Performance Tuning Utilities and Daemons
Tuned and Ktune
Tuned is a daemon that can monitor and collect data on system load and activity; by default tuned won't dynamically change settings, but you can modify how the tuned daemon behaves and allow it to dynamically adjust settings on the fly based on activity. I prefer to leave the dynamic monitoring disabled and just use tuned-adm to set the profile once and be done with it. By using the latency-performance profile, tuned can significantly improve performance on CentOS 6 and CentOS 7 servers.
You can install tuned on a CentOS 6.x server by using the command below. For CentOS 7, tuned is installed and activated by default.
yum install tuned
To activate the latency-performance profile, you can use the "tuned-adm" command to set the profile. Personally I've found that latency-performance is one of the best profiles to use if you want high disk IO and low latency. Who doesn't want better performance?
tuned-adm profile latency-performance
To check which tuned profile is currently in use, you can use the "active" option, which lists the active tuned profile.
tuned-adm active
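If the profile was applied, the output should look roughly like this (exact wording may differ slightly between tuned versions):
Current active profile: latency-performance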
Ktune provides many different profiles that tuned can use to optimize performance.
tuned-adm has the ability to set the following profiles on CentOS 6 or CentOS 7:
- default – Default power saving profile; this is the most basic. It enables only the disk and cpu plugins. This is not the same as turning tuned-adm off.
- latency-performance – This turns off power saving features, cpuspeed mode is turned to performance. I/O elevator is changed to deadline. cpu_dma_latency requirement value 0 is registered.
- throughput-performance – Recommended if the system is not using “enterprise class” storage. Same as “latency-performance” except:
kernel.sched_min_granularity_ns is set to 10 ms
kernel.sched_wakeup_granularity_ns is set to 15 ms
vm.dirty_ratio is set to 40% and transparent huge pages are enabled.
- enterprise-storage – Recommended for enterprise class storage servers that have BBU raid controller cache protection and management of on-disk cache. Same as “throughput-performance” except:
file systems are re-mounted with barrier=0
- virtual-guest – Same as “enterprise-storage” but it sets the readahead value to 4x what it normally is. Non boot/root file systems are remounted with barrier=0.
- virtual-host – Based on “enterprise storage”. This reduces swappiness of virtual memory and enables more aggressive writeback of dirty pages. Recommended for KVM hosts.
perf
Basic command to get some info on a running process:
perf stat -p $PID
You can also view process info in real time by using perf’s top-like command:
perf top -p $PID
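perf can also record samples to a file for later analysis; a minimal sketch, reusing the $PID placeholder from above:
perf record -p $PID sleep 30    # sample the process for 30 seconds (writes perf.data)
perf report                     # browse the recorded samples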
SystemTap
Lots of stuff here, will be going over this later.
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/SystemTap_Beginners_Guide/index.html
CPU Overview
[Additional CPU and Numa Information]
SMP and NUMA
Older systems had only a few CPUs per system; this was known as SMP (Symmetric Multi-Processor). It means that each CPU in the system had more or less the same access to available memory, which was achieved by laying out physical connections between the CPUs and RAM. These connections were known as parallel buses.
Newer systems have many more CPUs (and multiple cores per CPU), so giving them all equal access to memory becomes expensive in terms of the space needed to draw all the physical connections. The resulting architecture, in which memory access cost depends on which CPU is asking, is known as NUMA (Non-Uniform Memory Access).
- AMD has used this for a long time with Hyper Transport Interconnects (HT).
- Intel started implementing NUMA with the Quick Path Interconnect (QPI).
- Tuning applications depends on whether or not a system is using SMP or NUMA. Most modern systems will be using NUMA, however this really only becomes a factor on multi socket systems. A server that has a single E3-1240 processor will not need the same tuning as a 4 socket AMD Opteron 6230.
Threads
A unit of execution is known as a thread. The OS schedules threads for the CPU, and it is mainly concerned with keeping all the CPUs as busy as possible all the time. The issue with this is that the OS could decide to start a thread on a CPU that does not have the process’s memory in its local bank, which means there is latency involved, which can reduce performance.
Interrupts (IRQs)
An interrupt (known as an IRQ) can impact an application’s performance. The OS handles these events, which are used by peripherals to signal the arrival of data or the completion of an operation. IRQs don’t affect an application’s functionality, however they can cause performance issues.
Parallel and Serial Buses
- Early CPUs were designed to have the same path to a single pool of memory on the system. This meant that each CPU could access the same pool of RAM at the same speed. This worked up to a point, however as more CPUs were added, more physical connections needed to be added to the board, which take up more space, and don’t allow for new features to be added to the board.
- However, once more CPUs were added, there was a space issue on the board, and the paths were taking up too much space. These paths are known as parallel buses.
- On newer systems, a serial bus is used, which is a single wire communication path with a very high clock rate. Serial buses are used to connect CPUs and pools of RAM. So instead of having 8 – 16 parallel buses for each CPU, there is now one bus that carries information to and from the CPU and RAM. This means that all the CPUs talk to each other through this path; bursts of information are sent so that multiple CPUs can issue requests, much like how the Internet works.
Questions to ask yourself while performance tuning
If we have a two socket motherboard, and two quad core CPUs, each socket would also have its own bank of RAM. Let’s say that each CPU socket has 4 GB of RAM in its local bank. The local bank includes the RAM, along with a built in memory controller.
- Socket One has 4 CPUs, known as CPU 0-3. It also has 4 GB of RAM in its local bank, with its own memory controller.
- Socket Two has 4 CPUs, known as CPU 4-7. It also has 4 GB of RAM in its local bank, with its own memory controller.
Each socket in this example has a corresponding NUMA node, which means that we have two NUMA nodes on this system.
The ideal situation for Socket One would be to execute processes on CPUs 0-3 and have those processes use the RAM that is located in Socket One's RAM bank.
However, if there is not enough RAM in Socket One’s bank, or the OS does not realize that NUMA is in use, then it is possible for a CPU in Socket One to use the RAM from Socket Two. This is not optimal because there are two times as many steps involved to access this RAM. Having to make additional hops causes latency, which is bad.
To optimize performance for applications, you must ask the following questions:
- 1) What is the topology of the system? (Is this NUMA, or not?)
- 2) Where is the application currently executing? (Using one socket, or all of them?)
- 3) If the application is using one CPU, what is the closest bank of RAM to it?
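A few stock commands can answer these questions; a quick sketch, where $PID is a placeholder for the process you are inspecting:
lscpu | grep -i numa       # 1) how many NUMA nodes the system has
numactl --hardware         # 1) node layout and per-node memory sizes
taskset -cp $PID           # 2) which CPUs the process is allowed to run on
numastat -p $PID           # 3) which NUMA node its memory is actually allocated on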
NUMA in action
The process for a CPU to get access to local memory is as follows:
1) A CPU from Socket One tells its local memory controller the address it wants to access.
2) The memory controller sets up access to this address for that CPU.
3) The CPU then starts doing work with this address.
The process for a CPU to get access to Remote memory is as follows:
1) A CPU from Socket One tells its local memory controller the address it wants to access.
2) Since the process is unable to use the local bank (not enough space, or it started elsewhere), the local memory controller passes the request to Socket Two's memory controller.
3) Socket Two's (remote) memory controller then sets up access for that CPU on Socket Two's RAM.
4) The CPU then starts doing work with this address.
You can see that there are extra steps involved, which can really add up over time and slow down performance.
CPU Affinity with Taskset
This utility can be used to retrieve and set CPU affinity of a running process. You can also use this utility to launch a process on specific CPUs. However, this command will not ensure that local memory is used by whatever CPU is set.
To control this, you would use numactl instead.
CPU affinity is represented as a bitmask. The lowest order bits represent the first logical CPU, and the highest order bits would represent the last logical CPU.
Examples would be 0x00001 = processor 0 and 0x00003 = processors 0 and 1. (A concrete example follows the commands below.)
Command used to set the CPU affinity of a running process would be:
taskset -p $CPU_bit_mask $PID_of_task_to_set
Command used to launch a process with a set affinity:
taskset $CPU_bit_mask -- $program_to_launch
You can also just specify logical CPU numbers with this option:
taskset -c 0,1,2,3 -- $program_to_launch
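For example, reusing the $PID placeholder, the bitmask 0x3 from above and the -c list form are equivalent ways to pin a process to processors 0 and 1:
taskset -p 0x3 $PID      # bitmask form: restrict the process to processors 0 and 1
taskset -cp 0,1 $PID     # equivalent CPU-list form
taskset -p $PID          # print the current affinity mask to confirm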
CPU Scheduling
The Linux scheduler has a few different policies that it uses to determine where, and for how long a thread will run.
The two main categories are:
1) Realtime Policies
2) Normal Policies
- SCHED_OTHER
- SCHED_BATCH
- SCHED_IDLE
Realtime scheduling policies:
Realtime threads are scheduled first, and normal threads are scheduled to run after realtime threads.
Realtime policies are used for time-critical tasks that must complete without interruptions.
SCHED_FIFO
This policy defines a fixed priority for each thread (1 – 99). The scheduler scans the list of these threads and runs the highest priority threads first. The threads then run until they are blocked, exit, or another thread comes along that has a higher priority to run.
Even a low priority thread using this policy will run before any other thread under a normal policy.
SCHED_RR
This uses a Round Robin style, and it load balances between threads with the same priority.
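The chrt utility (from util-linux) can be used to inspect or change a thread's scheduling policy and priority; a small sketch, again using the $PID placeholder:
chrt -p $PID             # show the current scheduling policy and priority
chrt -f -p 50 $PID       # switch the process to SCHED_FIFO with priority 50
chrt -r -p 10 $PID       # or to SCHED_RR with priority 10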
Memory
Transparent HugePage Overview
Most modern CPUs can take advantage of multiple page sizes, however the Linux kernel will usually stick with the smallest page size unless you tell it otherwise. The smallest page size is 4096 bytes, or 4KB. Any page size above 4KB is considered a huge page; larger pages can improve performance in some cases because they reduce the number of page faults needed to access data in memory.
While this might be an over simplified example, it should explain why huge pages can be awesome. If an application needs, say, 2MB of data and I am using 4KB page sizes, there would need to be 512 page faults to read all the data from memory (2048KB / 4KB = 512). Now, if I was using 2MB page sizes, there would only need to be 1 page fault since all the data can fit inside a single “huge page”.
In addition to reduced page faults, huge pages also boost performance by reducing the cost of virtual-to-physical address translation, since fewer pages need to be accessed to obtain all the data from memory. With fewer lookups and translations going on, the CPU caches stay warmer, improving performance.
The kernel will map its own address space with hugepages to help reduce TLB pressure; however, the only way a user space application could traditionally utilize huge pages was through hugetlbfs, which can be a huge pain in the ass to configure. Transparent Huge Pages came along and rescued the day by automating some of the use of huge pages in a safe way that won’t break an application.
The Transparent HugePage patch was added to the 2.6.38 kernel by Andrea Arcangeli. It introduced a new kernel thread called khugepaged, which scans for pages that could be merged into a single large page; once the pages have been merged into a single huge page, the smaller pages are removed and freed up. If the huge page needs to be swapped to disk, it is automatically split back up into smaller pages and written to disk, so THP basically is awesome.
How to Configure Transparent HugePages
You can see if your OS is configured to use transparent huge pages by cat-ing the file below. The file should exist, and most of the time the value returned will be “always”.
cat /sys/kernel/mm/transparent_hugepage/enabled
You can disable transparent huge pages by echoing “never” into the value, although I do not suggest disabling THP unless you really know what you are doing. (A sketch for making this persist across reboots follows the meminfo example below.)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
If you want to see whether any huge pages are active, you can check /proc/meminfo. In the output below, HugePages_Total is 0, so no static (hugetlbfs) huge pages are configured, while the non-zero AnonHugePages value shows memory currently backed by transparent huge pages.
cat /proc/meminfo | grep -i huge
AnonHugePages: 301056 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
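If you do decide to disable THP, note that the echo above does not survive a reboot. A minimal sketch for making it persistent, assuming the stock rc.local mechanism is enabled and executable on your system; alternatively, transparent_hugepage=never can be added to the kernel command line:
# add these lines to /etc/rc.d/rc.local (assumption: rc.local is enabled and executable)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag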
NUMA
Controlling NUMA with numactl
If you are running CentOS 6 or CentOS 7, you can use numactl to run processes with specific scheduling or memory placement policies. These policies then apply to the process and any child processes that it creates.
You can check the following locations to determine how the CPUs talk to each other, along with NUMA node info.
/sys/devices/system/cpu
/sys/devices/system/node
Long running applications that require high performance should be configured so that they only use memory that is located as close to the CPU as possible.
Applications that are multi-threaded should be locked down to a specific NUMA node, not a CPU core. This is because of the way the CPU caches instructions: it is better to lock down some cores for specific threads, otherwise it is possible that another application’s thread will run and clear out the previous cache, and the CPU then wastes time re-caching things.
To display the current NUMA policy settings for a current process:
numactl --show
To display available NUMA nodes on a CentOS 7 system
numactl --hardware
To only allocate memory to a process from specific NUMA nodes, use the following command
numactl --membind=$nodes $program_to_run
To only run a process on specific CPU nodes, use the following command
numactl --cpunodebind=$nodes $program_to_run
To run a process on specific CPUs, (not NUMA nodes), run this command
numactl --physcpubind=$CPU $program_to_run
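Putting the two bindings together keeps both the CPU and the memory of a process on the same node; a minimal sketch binding to node 0, reusing the $program_to_run placeholder:
numactl --cpunodebind=0 --membind=0 $program_to_run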
Viewing NUMA stats with numastat
The numastat command can be used to view memory statistics for CentOS, showing which processes are running on which NUMA node, broken down on a per NUMA node basis. If you want to know whether your server is achieving optimal CPU performance by avoiding NUMA misses, use numastat and look for low numa_miss and numa_foreign values. If you notice that there are a lot of NUMA misses (like 5% or more) you may want to try and pin the process to specific nodes to improve performance.
You can run numastat without any options and it will display quite a lot of information. If you run numastat on CentOS 7 the output might look slightly different than it does on CentOS 6.
numastat
The default tracking categories are as follows:
- numa_hit – The number of successful allocations to this node.
- numa_miss – The number of memory accesses / attempted allocations to another NUMA node that were instead allocated to this node. If this value is high, it means that some NUMA nodes are using remote memory instead of local memory. This will hurt performance badly, since there is additional latency for every remote memory access and many remote accesses add up to slow down the application. Lots of NUMA misses could be caused by a lack of available memory, so if your server is close to utilizing all of its memory you may want to look into adding more memory, or using taskset to balance out process placement on NUMA nodes.
- numa_foreign – Similar to numa_miss, but instead it shows the amount of allocations that were initially intended for this node, but were moved to another node. High values here are also bad. This value should reflect numa_miss
- interleave_hit – The number of attempted interleave allocations to this node that were a success.
- local_node – The number of times a process on this node successfully allocated memory on this node.
- other_node – The number of times a process on another node allocated memory on this node.
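numastat can also break these counters down per process or as a meminfo-style per-node view; a quick sketch, reusing the $PID placeholder:
numastat -p $PID    # per-node memory usage for a single process
numastat -m         # meminfo-style per-node breakdown for the whole system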
NUMA Affinity Management Daemon (numad)
Numad is a daemon that automatically monitors NUMA topology and resource usage within the OS, numad can run on CentOS 6 or CentOS 7 servers. Numad can help to dynamically improve NUMA resource allocation and management, which can help to improve performance by watching what CPU processes run on, and what NUMA nodes those processes access. Over time NUMAD will balance out processes so that they are able to access data from the local memory bank which helps to lower latency.
The NUMA daemon looks for significant, long running processes, and then attempts to pin each process down to certain NUMA nodes, so that the CPU and memory it uses reside in the same NUMA node; this assumes there is enough free memory for the process to use.
You will see significant performance improvements on systems that have long running processes that consume a lot of resources, but not enough to occupy multiple NUMA nodes. For instance, MySQL is a prime candidate for NUMA balancing; Varnish would be another good candidate.
Applications such as large, in memory databases may not see any improvement, this is because you cannot lock down resources to a single NUMA node if all the system RAM is being used by one process.
To start the numad process on CentOS 6 or CentOS 7
service numad start
To ensure NUMAD starts on reboot, use chkconfig to set numad to “on”
chkconfig numad on
File System Optimization
Barriers
Write barriers are a kernel mechanism used to ensure that file system metadata is correctly written and ordered on persistent storage. When enabled, they ensure that any data transmitted using fsync persists across power outages. This is enabled by default.
If a system does not have volatile write caches, this can be disabled to help improve performance.
Mount option:
nobarrier
Access Time
Whenever a file is read, the access time for that file must be updated in the inode metadata, which usually involves additional write I/O. If this is not needed, you can disable it to help with IO performance.
Mount option:
noatime
Increased Read-Ahead Support
Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk.
To view the current read-ahead value for a particular block device, run:
blockdev --getra device
To modify the read-ahead value for that block device, run the following command. N represents the number of 512-byte sectors.
blockdev --setra N device
This will not persist across reboots, so add it to a run level init.d script to apply it after reboots.
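A minimal sketch of such a snippet, assuming the device is /dev/sda and a read-ahead of 4096 sectors:
# add to a boot-time script such as /etc/rc.d/rc.local (assumption)
blockdev --setra 4096 /dev/sda
blockdev --getra /dev/sda    # verify; should print 4096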
Syscall Utilities
strace
strace can be used to watch system calls made by a program.
strace $program
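A couple of other common strace invocations, reusing the $PID placeholder, for when you only want a summary or a subset of calls:
strace -c -p $PID                    # attach and print a per-syscall count/time summary on exit (Ctrl-C to detach)
strace -e trace=open,read -p $PID    # only show open and read calls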
sysdig
Sysdig is newer than strace and there is a lot you can do with it. By default it shows all system calls on a server but you can filter out certain applications if you want.
sysdig proc.name=$program
For more information on how to install and use sysdig:
To install on CentOS 6
rpm --import https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public
curl -s -o /etc/yum.repos.d/draios.repo http://download.draios.com/stable/rpm/draios.repo
rpm -i http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-8.noarch.rpm
yum -y install kernel-devel-$(uname -r)
yum -y install sysdig
System Calls
http://sysdigcloud.com/fascinating-world-linux-system-calls/
A system call is how a program requests a service from the OS’s kernel. System calls can ask for access to a hard drive, open a file, create a new process, and so on.
System calls require a switch from user mode to kernel mode.
Clone
The clone syscall creates new processes and threads. It is one of the more complex system calls and can be expensive to run, so if you notice tons of these syscalls and performance is low, you may want to reduce how often this happens by increasing process lifetime or reducing the number of processes in general.
sysdig filter for clone
sysdig evt.type=clone
Execve
This syscall executes programs, typically you will see this call after the clone syscall. Everything that gets executed goes through this call.
sysdig filter for execve
sysdig evt.type=execve
Chdir
This syscall changes the process working directory. If anything changes directory you can see it by filtering this syscall.
sysdig filter for chdir
sysdig evt.type=chdir
open/creat
These syscalls open files and can also create them. If you trace them you can view file creation and see who is touching what.
sysdig filter for open and creat
sysdig evt.type=open
sysdig evt.type=creat
connect
This syscall initiates a connection on a socket. This syscall is the only one that can establish a network connection.
sysdig filter for connect. You can also specify a port or IP to view specific services or IPs.
sysdig evt.type=connect
sysdig evt.type=connect and fd.port=80
accept
This syscall accepts a connection on a socket. You will always see this syscall when connect is called.
sysdig filter for accept. You can also specify a port or IP to view specific services or IPs.
sysdig evt.type=accept
sysdig evt.type=accept and fd.port=80
read/write
These syscalls read or write data to or from a file.
sysdig filter for IO
sysdig evt.is_io=true
You can also use the echo_fds chisel to view IO for certain files, ports, or programs, for example:
sysdig -c echo_fds fd.name=/var/lib/mysql/
sysdig -c echo_fds proc.name=httpd and fd.port!=80
unlink/rename
These syscalls delete or rename files.
sysdig evt.type=unlink
sysdig evt.type=rename
Links
Performance Tuning in CentOS 7
Tuned
In RedHat (and thus CentOS) 7.0, a daemon called “tuned” was introduced as a unified system for applying tunings to Linux. tuned operates with simple, file-based tuning “profiles” and provides an admin command-line interface named “tuned-adm” for applying, listing and even recommending tuned profiles.
Some operational benefits of tuned:
- File-based configuration – Profile tunings are contained in simple, consolidated files
- Swappable profiles – Profiles are easily changed back/forth
- Standards compliance – Using tuned profiles ensures tunings are not overridden or ignored
Note: If you use configuration management systems like Puppet, Chef, Salt, Ansible, etc., I suggest you configure those systems to deploy tunings via tuned profiles instead of applying tunings directly, as tuned will likely start to fight this automation, overriding the changes.
The default available tuned profiles (as of RedHat 7.2.1511) are:
- balanced
- desktop
- latency-performance
- network-latency
- network-throughput
- powersave
- throughput-performance
- virtual-guest
- virtual-host
The profiles that are generally interesting for database usage are:
- latency-performance
“A server profile for typical latency performance tuning. This profile disables dynamic tuning mechanisms and transparent hugepages. It uses the performance governor for p-states through cpuspeed, and sets the I/O scheduler to deadline.”
- throughput-performance
“A server profile for typical throughput performance tuning. It disables tuned and ktune power saving mechanisms, enables sysctl settings that improve the throughput performance of your disk and network I/O, and switches to the deadline scheduler. CPU governor is set to performance.”
- network-latency – Includes “latency-performance,” disables transparent_hugepages, disables NUMA balancing and enables some latency-based network tunings.
- network-throughput – Includes “throughput-performance” and increases network stack buffer sizes.
I find “network-latency” is the closest match to our recommended tunings, but some additional changes are still required.
Tuning a server according to specific requirements is not an easy task. You need to know a lot of system parameters and how to change them in an intelligent manner.
Red Hat offers a tool called tuned-adm that makes these changes easy by using tuning profiles.
The tuned-adm command requires the tuned package (if not already installed):
# yum install -y tuned
Tuning Profiles
A tuning profile consists of a list of system changes corresponding to a specific requirement.
To get the list of the available tuning profiles, type:
# tuned-adm list
Available profiles:
- balanced
- desktop
- latency-performance
- network-latency
- network-throughput
- powersave
- sap
- throughput-performance
- virtual-guest
- virtual-host
Current active profile: virtual-guest
Note: All these tuning profiles are explained in detail in the tuned-adm man page.
To only get the active profile, type:
# tuned-adm active
Current active profile: virtual-guest
To get the recommended tuning profile in your current configuration, type:
# tuned-adm recommend
virtual-guest
To apply a different tuning profile (here throughput-performance), type:
# tuned-adm profile throughput-performance
CPU settings
tuned-adm profile throughput-performance
tuned-adm active
cpupower idle-set -d 4
cpupower idle-set -d 3
cpupower idle-set -d 2
cpupower frequency-set -g performance
sysctl
kernel.numa_balancing=0
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_sack = 0
net.core.netdev_budget=600
net.ipv4.tcp_timestamps=1
net.ipv4.tcp_low_latency=1
net.ipv4.tcp_rmem=16384 349520 16777216
net.ipv4.tcp_wmem=16384 349520 16777216
net.ipv4.tcp_mem = 2314209 3085613 4628418
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.somaxconn=2048
net.ipv4.tcp_adv_win_scale=1
net.ipv4.tcp_window_scaling = 1
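To make sysctl values like the ones above persist across reboots on CentOS 7, drop them into a file under /etc/sysctl.d/ and reload. A minimal sketch; the file name and the subset of settings shown are arbitrary:
cat > /etc/sysctl.d/90-tuning.conf <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.somaxconn = 2048
EOF
sysctl --system    # reload all sysctl configuration files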
Linux Kernel Tuning for Centos 7
tuned should already be installed on CentOS 7, and the default profile is balanced.
tuned-adm profiles can be found in this directory
ls /usr/lib/tuned/
balanced/ latency-performance/ powersave/ virtual-guest/
desktop/ network-latency/ recommend.conf virtual-host/
functions network-throughput/ throughput-performance/
To see what the active profile is:
tuned-adm active
To activate a given profile xxx:
tuned-adm profile xxx
latency-performance
- latency-performance
- Profile for low latency performance tuning.
- Disables power saving mechanisms.
- CPU governor is set to performance and locked to the low C states (by PM QoS).
- CPU energy performance bias to performance.
- This profile is the Parent profile to “network-latency”.
Activate tuned latency-performance for CentOS 7
tuned-adm profile latency-performance
For CentOS 7, the latency-performance profile includes the following tweaks
cat /usr/lib/tuned/latency-performance/tuned.conf
[cpu]
force_latency=1
governor=performance
energy_perf_bias=performance
min_perf_pct=100
[sysctl]
kernel.sched_min_granularity_ns=10000000
vm.dirty_ratio=10
vm.dirty_background_ratio=3
vm.swappiness=10
kernel.sched_migration_cost_ns=5000000
network-latency
- network-latency
- This is a Child profile of “latency-performance”.
- What this means is that if you were to activate the network-latency profile via tuned, it would automatically apply latency-performance, then make some additional tweaks to improve network latency.
- Disables transparent hugepages, and makes some net.core kernel tweaks.
cat /usr/lib/tuned/network-latency/tuned.conf
[main]
include=latency-performance
[vm]
transparent_hugepages=never
[sysctl]
net.core.busy_read=50
net.core.busy_poll=50
net.ipv4.tcp_fastopen=3
kernel.numa_balancing=0
throughput-performance
- throughput-performance
- This is the Parent profile to virtual-guest, virtual-host and network-throughput.
- This profile is optimized for large, streaming files or any high throughput workloads.
cat /usr/lib/tuned/throughput-performance/tuned.conf
[cpu]
governor=performance
energy_perf_bias=performance
min_perf_pct=100
[vm]
transparent_hugepages=always
[disk]
readahead=>4096
[sysctl]
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
vm.dirty_ratio = 40
vm.dirty_background_ratio = 10
vm.swappiness=10
virtual-guest
- virtual-guest
- Profile optimized for virtual guests based on throughput-performance profile.
- It additionally decreases virtual memory swappiness and increases dirty_ratio settings.
cat /usr/lib/tuned/virtual-guest/tuned.conf
[main]
include=throughput-performance
[sysctl]
vm.dirty_ratio = 30
vm.swappiness = 30
virtual-host
- virtual-host
- Profile optimized for virtual hosts based on throughput-performance profile.
- It additionally enables more aggressive write-back of dirty pages.
cat /usr/lib/tuned/virtual-host/tuned.conf
[main]
include=throughput-performance
[sysctl]
vm.dirty_background_ratio = 5
kernel.sched_migration_cost_ns = 5000000
I/O scheduler
echo 'deadline' > /sys/block/sda/queue/scheduler
menuentry 'CAKE 3.0, with Linux 3.10.0-229.1.2.el7.x86_64'
set root='hd0,msdos1'
linux16 /vmlinuz-3.10.0-229.1.2.el7.x86_64 root= …. elevator=deadline
initrd16 /initramfs-3.10.0-229.1.2.el7.x86_64.img
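Editing the generated menu entry by hand gets overwritten on kernel updates; the usual CentOS 7 route is to add the elevator to the kernel command line and regenerate the config. A sketch, assuming a BIOS install with the grub config at /boot/grub2/grub.cfg:
# either append elevator=deadline to GRUB_CMDLINE_LINUX in /etc/default/grub and run:
grub2-mkconfig -o /boot/grub2/grub.cfg
# or let grubby add it to every installed kernel entry:
grubby --update-kernel=ALL --args="elevator=deadline"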