How to Disable Weak SSL Protocols and Ciphers in IIS

 March 17th, 2011  Wayne Zimmerman

I recently undertook the process of moving websites to different servers here at work. This required that the university networking group scan the new web server with a tool called Nessus. Unfortunately the scan turned up several errors, all of them related to Secure Sockets Layer (SSL): out of the box, Microsoft Windows Server 2003 / Internet Information Server 6 supports both insecure protocols and insecure cipher suites. These problems would have to be solved before they would allow the new server through the firewalls. The report the university sent me contained Nessus-generated errors like this:

SSL Version 2 (v2) Protocol Detection

Synopsis :

The remote service encrypts traffic using a protocol with known
weaknesses.

Description :

The remote service accepts connections encrypted using SSL 2.0, which
reportedly suffers from several cryptographic flaws and has been
deprecated for several years. An attacker may be able to exploit
these issues to conduct man-in-the-middle attacks or decrypt
communications between the affected service and clients.

See also :

http://www.schneier.com/paper-ssl.pdf

http://support.microsoft.com/kb/187498

http://www.linux4beginners.info/node/disable-sslv2

Solution :

Consult the application's documentation to disable SSL 2.0 and use SSL
3.0 or TLS 1.0 instead.

Risk factor :

Medium / CVSS Base Score : 5.0
(CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N)

Nessus ID : 20007
----------------------------------------------------------
SSL Medium Strength Cipher Suites Supported

Synopsis :

The remote service supports the use of medium strength SSL ciphers.

Description :

The remote host supports the use of SSL ciphers that offer medium
strength encryption, which we currently regard as those with key
lengths at least 56 bits and less than 112 bits.

Note: This is considerably easier to exploit if the attacker is on the
same physical network.

Solution :

Reconfigure the affected application if possible to avoid use of
medium strength ciphers.

Risk factor :

Medium / CVSS Base Score : 4.3
(CVSS2#AV:N/AC:M/Au:N/C:P/I:N/A:N)

Plugin output :

Here are the medium strength SSL ciphers supported by the remote server :

Medium Strength Ciphers (>= 56-bit and < 112-bit key)
SSLv2
DES-CBC-MD5 Kx=RSA Au=RSA Enc=DES(56) Mac=MD5
SSLv3
DES-CBC-SHA Kx=RSA Au=RSA Enc=DES(56) Mac=SHA1
TLSv1
EXP1024-DES-CBC-SHA Kx=RSA(1024) Au=RSA Enc=DES(56) Mac=SHA1 export
EXP1024-RC4-SHA Kx=RSA(1024) Au=RSA Enc=RC4(56) Mac=SHA1 export
DES-CBC-SHA Kx=RSA Au=RSA Enc=DES(56) Mac=SHA1

The fields above are :

{OpenSSL ciphername}
Kx={key exchange}
Au={authentication}
Enc={symmetric encryption method}
Mac={message authentication code}
{export flag}

Nessus ID : 42873
--------------------------------------------------------------------
SSL Weak Cipher Suites Supported

Synopsis :

The remote service supports the use of weak SSL ciphers.

Description :

The remote host supports the use of SSL ciphers that offer either weak
encryption or no encryption at all.

Note: This is considerably easier to exploit if the attacker is on the
same physical network.

See also :

http://www.openssl.org/docs/apps/ciphers.html

Solution :

Reconfigure the affected application if possible to avoid use of weak
ciphers.

Risk factor :

Medium / CVSS Base Score : 4.3
(CVSS2#AV:N/AC:M/Au:N/C:P/I:N/A:N)

Plugin output :

Here is the list of weak SSL ciphers supported by the remote server :

Low Strength Ciphers (< 56-bit key)
SSLv2
EXP-RC2-CBC-MD5 Kx=RSA(512) Au=RSA Enc=RC2(40) Mac=MD5 export
EXP-RC4-MD5 Kx=RSA(512) Au=RSA Enc=RC4(40) Mac=MD5 export
SSLv3
EXP-RC2-CBC-MD5 Kx=RSA(512) Au=RSA Enc=RC2(40) Mac=MD5 export
EXP-RC4-MD5 Kx=RSA(512) Au=RSA Enc=RC4(40) Mac=MD5 export
TLSv1
EXP-RC2-CBC-MD5 Kx=RSA(512) Au=RSA Enc=RC2(40) Mac=MD5 export
EXP-RC4-MD5 Kx=RSA(512) Au=RSA Enc=RC4(40) Mac=MD5 export

The fields above are :

{OpenSSL ciphername}
Kx={key exchange}
Au={authentication}
Enc={symmetric encryption method}
Mac={message authentication code}
{export flag}

Other references : CWE:327, CWE:326, CWE:753, CWE:803, CWE:720

Nessus ID : 26928
-----------------------------------------------------------------

These three error messages pretty much mean that you need to turn off SSL 2.0 due to exploits that were found after the standard was created, and that you need to turn off any encryption suites of less than 128 bits. The third error message says we need to turn off anything of less than 56 bits, but this is covered by turning off everything below 128 bits. Basically you are modifying the settings that restrict the use of specific protocols and ciphers used by schannel.dll. More detailed information can be found in Microsoft's KB187498 or KB245030.

How do we do this?

Disabling SSL 2.0 on IIS 6

  1. Open up “regedit” from the command line
  2. Browse to the following key:
    HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server
  3. Create a new REG_DWORD called “Enabled” and set the value to 0
  4. You will need to restart the computer for this change to take effect. (you can wait on this if you also need to disable the ciphers)
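If you prefer to script the change (for example across several servers), the same registry value can be set from an elevated command prompt with the built-in reg.exe; a sketch:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f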

Disable insecure encryption ciphers of less than 128 bits

  1. Open up “regedit” from the command line
  2. Browse to the following key:
    HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56
  3. Create a new REG_DWORD called “Enabled” and set the value to 0
  4. Browse to the following key:
    HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128
  5. Create a new REG_DWORD called “Enabled” and set the value to 0
  6. Browse to the following key:
    HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128
  7. Create a new REG_DWORD called “Enabled” and set the value to 0
  8. Browse to the following key:
    HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128
  9. Create a new REG_DWORD called “Enabled” and set the value to 0
  10. You will need to restart the computer for this change to take effect.
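The equivalent scripted sketch of steps 2 through 9 (note that the forward slashes are part of the key names, so keep the paths quoted):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56" /v Enabled /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128" /v Enabled /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128" /v Enabled /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128" /v Enabled /t REG_DWORD /d 0 /f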

How to verify the changes?

Now that you have made these changes, how can you be sure they have taken effect without having to go to your boss or higher authority just to find out you did them wrong? I found a nice tool called SSLScan; you can download the Windows port at http://code.google.com/p/sslscan-win/, or you can download and compile it for your favorite operating system from the original SSLScan project site at http://sourceforge.net/projects/sslscan/. This tool provides great detail about what is allowed and not allowed, plus some analysis of the SSL certificate itself.
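For example, a scan of the rebuilt server might look like this (hypothetical hostname; sslscan also accepts a host:port pair):

sslscan www.example.com:443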

The screenshot below shows that we have disabled any ciphers that attempt to use the SSL 2.0 protocol and that we've disabled all ciphers of less than 128 bits.

[Screenshot: sslscan output showing SSL 2.0 and weak ciphers disabled]


Hardening Your Web Server’s SSL Ciphers

There are many wordy articles on configuring your web server's SSL ciphers. This is not one of them. Instead I will share a configuration which is both compatible enough for today's needs and scores a straight "A" on Qualys's SSL Server Test.

Disclaimer: I'm updating this post continually in order to represent what I consider the best practice in the moment – there are way too many dangerously outdated articles about TLS deployment out there already.

Therefore it may be a good idea to check back from time to time because the crypto landscape is changing pretty quickly at the moment. You can follow me on Twitter to get notified about noteworthy changes.

If you find any factual problems, please reach out to me immediately and I will fix it ASAP.

Rationale

If you configure a web server's SSL configuration, you primarily have to take care of three things:

  1. disable SSL 2.0, and – if you can afford it – SSL 3.0 (Internet Explorer 6 is the last remaining reason to keep it around; you can't have elliptic curve crypto with SSL 3.0, and downgrade attacks exist),
  2. disable TLS 1.0 compression (CRIME),
  3. disable weak ciphers (DES, RC4), prefer modern ciphers (AES), modes (GCM), and protocols (TLS 1.2).

You should also put effort into mitigating BREACH. That's out of scope here though as it's largely application-dependent.

Software and Versions

On the server side, you should update your OpenSSL to 1.0.0+ so you can support TLS 1.2, GCM, and ECDH as soon as possible. Fortunately that's already the case in the current Ubuntu LTS.

On the client side, the browser vendors are starting to catch up. As of now, Chrome 30, Internet Explorer 11 on Windows 8, Safari 7 on OS X 10.9, and Firefox 26 support TLS 1.2 (but no GCM; Chrome 32 is going to be the first one to support that). Firefox also has TLS 1.2 disabled by default, which changed recently in Aurora.

RC4

There used to be a bullet point suggesting to use RC4 to avoid BEAST and Lucky Thirteen. And ironically, that used to be the original reason for this article: when Lucky Thirteen came out, the word in the streets was: "use RC4 to mitigate" and everyone was like "how!?".

Unfortunately shortly thereafter, RC4 was broken in a way that makes deploying TLS with it nowadays a risk. While BEAST et al. require an active attack on the browser of the victim, passive attacks on RC4 ciphertext are getting stronger every day. In other words: it's possible that it will become feasible to decrypt intercepted RC4 traffic eventually. Microsoft even issued a security advisory that recommends disabling RC4.

The String

Until recently, Qualys preferred RC4 over CBC-mode ciphers and I gave you two cipher strings to choose from: one that gave you an "A" but used RC4, and one that gave you a "B" but was actually secure. Since they finally changed their mind – and as of Safari 7 there's no mainstream browser left that is susceptible to BEAST – I can jump directly to the secure one:

ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

You can test it against your OpenSSL installation using

openssl ciphers -v 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS'

to see what’s sup­ported.

You’ll get:

  • Best possible encryption in all browsers.
  • Perfect forward secrecy, if your web server, your OpenSSL, and their browser support it.
  • It doesn't offer RC4 even as a fallback. Although its inclusion at the end of the cipher string shouldn't matter, active downgrade attacks on SSL/TLS exist, and with RC4 as part of the cipher string you potentially expose all of your users to it. Even IE 6 does 3DES just fine.

The string also prefers AES-256 over AES-128 (except for GCM, which is preferred over everything else). It does so mostly for liability reasons because customers may insist on it for bogus reasons.

However, quoth a cryptographer:

AES-128 isn't really worse than AES-anything-else, at least not in ways you care about

So if AES-128 is fine for you, feel free to add ':!AES256' to the end of the cipher string to keep your cipher suite shorter, which will also expedite your TLS handshakes.
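To preview the effect, you can re-run the test from above with the exclusion appended:

openssl ciphers -v 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!AES256'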

Apache

SSLProtocol ALL -SSLv2
SSLHonorCipherOrder On
SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

This works on both Apache 2.2 and 2.4. If your OpenSSL doesn't support the preferred modern ciphers (like the still common 0.9.8), it will fall back gracefully, but your configuration is ready for the future.

Please note: you need Apache 2.4 for ECDH and ECDSA. You can circumvent that limitation by putting an SSL proxy like stud or even nginx in front of it and let Apache serve only plain HTTP.

TLS compression is a bit more complicated: as of Apache 2.2.23, it's not possible to switch it off inside of Apache. For Apache 2.2.24+ and 2.4.3+, you can switch it off using:

SSLCompression Off

Currently the default is On, but that changed from 2.4.4 on.

The good news for Ubuntu admins is that Ubuntu has backported that option into their 2.2 packages – and set it to off by default – so you should be fine. The solution on Red Hat based OSes (RHEL, Fedora, CentOS, Scientific Linux…) is setting an environment variable inside of your Apache startup script:

export OPENSSL_NO_DEFAULT_ZLIB=1

nginx

ssl_prefer_server_ciphers On;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;

SSL 2.0 is off and the best protocols on by default. However, it may be that you have some artifact from pre-TLS 1.2 times lurking somewhere in your config, so it's better to be explicit.

TLS compression depends on the version of nginx and the version of OpenSSL. If OpenSSL 1.0.0 or later is installed, anything after nginx 1.0.9 and 1.1.6 is fine. If an older OpenSSL is installed, you'll need at least nginx 1.2.2 or 1.3.2.

For more details, have a look at this serverfault answer.

TL;DR on TLS compression & nginx: if you're using Ubuntu Precise (i.e. the current LTS release) you're fine (OpenSSL 1.0.1/nginx 1.1.19).

Bonus Points

Qualys updated their requirements on 2014-01-21 and the cipher suites here are still "A" material. If you want an "A+" though, you'll need to add HSTS headers too, which is out of scope for this article but the linked Wikipedia article will get you started.
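As a starting point, a minimal sketch of such a header for both servers covered above (Apache needs mod_headers enabled; consider a short max-age first, because browsers cache the policy):

# Apache (requires mod_headers)
Header always set Strict-Transport-Security "max-age=31536000"

# nginx
add_header Strict-Transport-Security "max-age=31536000";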

Finally

Make sure to test your server afterwards!

If you want to learn more about deploying SSL/TLS, Qualys's SSL/TLS Deployment Best Practices are a decent primer.

For investigating the SSL/TLS behavior of your browser, How's My SSL? will give you all the details you need.

The (Near) Future

2013 has galvanized the whole industry. This is a good thing. In 2012 barely anyone lost a thought about configuring their TLS ciphers, how many bits their certificates had, or even forward secrecy. That made it way too easy for the bad folks. Nowadays, people are questioning their own practices, open source projects work on enhancing their TLS support, and the public started to listen to cryptographers again instead of discounting them as a crazy tinfoil crowd.

Good things are shaping on the horizon, and Google's Adam Langley – given the power of having control over both servers and the most popular browser – is pressing ahead. Their servers widely support TLS 1.2 with AES-GCM. Chrome has the best TLS support already. Additionally, its Canary releases have now grown support for ChaCha20, an extremely fast yet secure stream cipher by Dan Bernstein, and Poly1305, a great MAC of the same pedigree.

Now if people just stopped using old browsers, we could roll out SNI and mandatory TLS 1.2.

Cloud File Sharing Tool Collection

Box
Box began as a consumer-focused cloud storage platform, but the service has since added several enterprise features, such as Active Directory management and integration with other productivity applications. Box not only competes with cloud-based file-sharing services such as Dropbox but also with collaboration platforms such as Microsoft SharePoint.

Dropbox
Dropbox is a cloud service that lets users store data in the cloud and syncs that data across multiple devices. On the desktop, its folders integrate with Windows Explorer, and mobile apps are available for all major platforms. Typically considered a consumer-focused cloud-based file-sharing service, Dropbox also offers an enterprise option, Dropbox for Teams, which provides more storage.

GDrive
Google’s entry in the cloud-based file-sharing and storage market is Google Drive, which integrates with Google’s other services, including Google Docs, Gmail, Google Analytics and Google+. It lets users access files and apps through a browser.

iCloud
Apple’s cloud storage service is iCloud, which lets users store everything from contacts to photos to music and makes that data available across all the user’s Apple devices. The service is available on Macs with OS X 10.7 and up and iOS devices running version 5.0 and newer. In addition to offering data storage, iCloud also provides users with an email address, the Find My Phone feature and automatic device backups. Users can also save their iTunes, App Store and iBookstore purchases in a digital locker and download them to their other Apple devices.

Octopus
VMware Octopus, now in private beta, is an enterprise alternative to consumer-focused cloud-based file-sharing services. Like Dropbox, Octopus offers data synchronization and sharing services across devices, but it also gives IT the ability to define and enforce security policies.

SkyDrive
Microsoft’s Windows Live SkyDrive offers users document storage and sharing. It has many of the same consumer-focused features as other cloud-based file-sharing services, but it also lets IT admins control permissions and determine which users can see which files. In addition, SkyDrive integrates with Windows Live Hotmail.

Ubuntu One
Ubuntu One is a suite of online services from Canonical. The service enables users to store and sync files online and between computers, and to share files and folders with others using file synchronization. It also offers integration with Evolution for syncing contacts and with Tomboy for notes, thanks to access to the local CouchDB instance. Users can also edit their contacts and their Tomboy notes online via the Ubuntu One web interface.

Syncplicity
Syncplicity is a backup and synchronization service provided by Syncplicity Inc. The service allows users to store and sync files online and between computers. Currently it supports Microsoft Windows and Mac OS X. Syncplicity, Inc. was acquired by EMC Corporation on May 21, 2012.

Disable HTTP Methods in Tomcat

HOWTO: Disable HTTP Methods in Tomcat
Introduction

In the Apache web server, if you want to disable access to specific methods, you can take advantage of mod_rewrite and disable just about anything, often with only one or two lines of configuration file entries. In Apache Tomcat, security is enforced by way of security constraints that are built into the Java Servlet specification. These are not contained within the main server.xml file within Tomcat but within the web.xml configuration file.

The Java Servlet specification contains a fairly complete collection of security-related configuration parameters that allows you to, among other things, disable HTTP methods, enable SSL on specific URIs, and allow access to specific resources based upon user role. Security constraints are the way to protect web content within Java-based applications. One common item that crops up in security-related scans is the set of HTTP methods allowed on a web site or within a web application. For those of us running our web sites on Apache Tomcat rather than behind a front-end web server like Apache or IIS, a good understanding of how security constraints work is vital. This particular HOWTO will examine the steps necessary to disable access to specific HTTP methods.

A security constraint uses XML syntax, just like other configuration directives in web.xml. Example 1 is a basic web site, which serves up nothing but JSPs, images, scripts, and styles and does not contain any forms for a user to fill out. Network Security wants all HTTP methods disabled with the exception of HTTP HEAD and GET requests.

Example 1 – Basic Web Site – No Forms

<!-- Sample security constraint -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>restricted methods</web-resource-name>
    <url-pattern>/*</url-pattern>
    <http-method>PUT</http-method>
    <http-method>POST</http-method>
    <http-method>DELETE</http-method>
    <http-method>OPTIONS</http-method>
    <http-method>TRACE</http-method>
  </web-resource-collection>
  <auth-constraint />
</security-constraint>
All constraints start out with a <security-constraint> deployment descriptor element. The <web-resource-collection> comprises a set of URIs and the HTTP methods being constrained within that set of URIs. In the example above, a <url-pattern> of /* (meaning everything under the root of the web site) has been constrained to allow access via GET and HEAD only. Setting an empty authorization constraint, <auth-constraint />, sets an all-users policy, so this example literally means: "For any user, deny access to the PUT, POST, DELETE, OPTIONS, and TRACE methods." In a stock Tomcat installation, if I were to send an HTTP OPTIONS request, for example, to the web site, it would work. In my newly constrained configuration, OPTIONS requests now fail with an HTTP status code of 403 (Forbidden).
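You can check the behavior yourself; a quick hypothetical test with curl against a local Tomcat (adjust host and port to your installation) should now return 403 for the blocked methods:

curl -i -X OPTIONS http://localhost:8080/
curl -i -X TRACE http://localhost:8080/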

The second example below takes our basic web site a step further: a "Contact Us" form has been made available. The site user fills out a form located under /contact, and the data is passed using HTTP POST.

Example 2 – Basic Web Site with Contact Form

<!-- Sample security constraint -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>restricted methods</web-resource-name>
    <url-pattern>/*</url-pattern>
    <http-method>PUT</http-method>
    <http-method>POST</http-method>
    <http-method>DELETE</http-method>
    <http-method>OPTIONS</http-method>
    <http-method>TRACE</http-method>
  </web-resource-collection>
  <auth-constraint />
</security-constraint>

<security-constraint>
  <web-resource-collection>
    <web-resource-name>Contact Form</web-resource-name>
    <url-pattern>/contact/*</url-pattern>
    <http-method>PUT</http-method>
    <http-method>DELETE</http-method>
    <http-method>OPTIONS</http-method>
    <http-method>TRACE</http-method>
  </web-resource-collection>
  <auth-constraint />
</security-constraint>

Apache Log Counting using Awk and Sed

Since I (and you as a visitor) don't want your IP address to be spread around the internet, I've anonymized the log data. It's a fairly easy process that is done in two steps:

  1. IPs are translated into random values.
  2. Admin URLs are removed.

Step 1: Translating IPs

All the IPs are translated into random IPs, but every IP has its own random counterpart. This means that you can still identify users who are browsing through the site. The actual command I used for this is:

cat apache-anon-noadmin.log | awk 'function ri(n) {  return int(n*rand()); }  \
BEGIN { srand(); }  { if (! ($1 in randip)) {  \
randip[$1] = sprintf("%d.%d.%d.%d", ri(255), ri(255), ri(255), ri(255)); } \
$1 = randip[$1]; print $0  }'

If you read a bit further, we'll find out what this actually does, but you should be able to understand most of it already (at least the global format).

Step 2: Removing admin URLs

I don't like that everybody can view all the admin requests I've done on the site. Luckily this is a very simple process. We only have to remove the requests that start with "/wp-admin". This can be done with an inverse grep command:

cat apache-anon.log | grep -v '/wp-admin' > apache-anon-noadmin.log

 

Example 1: count http status codes

For now we want to deal with the status codes. They are found in field $9. The following code will print field 9 for every record from our log:

cat apache-anon-noadmin.log | awk ' { print $9 } '

That's nice, but let's aggregate this data. We want to know how many times each status code was returned. By using the "uniq" command, we can count (and display) the number of times we encounter each value, but before we can use uniq we have to sort the data, since uniq stops counting as soon as a different piece of data is encountered. (Try the following line with and without the "sort" to see what I mean.)

cat apache-anon-noadmin.log | awk ' { print $9 } ' | sort | uniq -c

And the output should be:

72951 200
  235 206
 1400 301
   38 302
 2911 304
 2133 404
 1474 500

As you see, the 200 (which stands for OK) was returned 72951 times, while a 404 (page not found) was returned 2133 times. Cool…

 

Example 2: top 10 of visiting ip’s

Let's try to create some top 10s. The first one is about the IPs that did the most pageviews (my fans, though most probably it would be me :p)

cat apache-anon-noadmin.log | awk '{ print $1 ; }' | \
sort | uniq -c | sort -n -r | head -n 10

We use awk to print the first field – the IP – then we sort and count them. THEN we sort again, but this time in reverse order and with a numeric sort, so 10 is sorted after 9 instead of after 1. (Again, remove the sort to find out what I mean.) After this, we filter out the first 10 lines with the head command, which only prints the first 10 lines.

As you can see, I use (a lot of) different Unix commands to achieve what I need. It MIGHT be possible to do this all with awk itself as well, but by using other commands we get the job done quickly and easily.

 

Example 3: traffic in kilobytes per status code

Let's introduce arrays. Field $10 holds the number of bytes we have sent out, and field $9 the status code. In the null pattern (the block without any pattern, which is executed on every line) we add the number of bytes to the array under the $9 index. It will NOT print any information yet. At the end of the program, we iterate over the "total" array and print each status code and the total sum of bytes / 1024, so we get kilobytes. Still pretty easy to understand.

cat apache-anon-noadmin.log  | awk ' { total[$9] += $10 } \
END {  for (x in total) { printf "Status code %3d : %9.2f Kb\n", x, total[x]/1024 } } '
Status code 200 : 329836.22 Kb
Status code 206 :   4649.29 Kb
Status code 301 :    535.72 Kb
Status code 302 :     20.26 Kb
Status code 304 :    572.77 Kb
Status code 404 :   5106.29 Kb
Status code 500 :   2336.42 Kb

Not a lot of redirections, but still: 5 megabytes wasted by serving pages that were not found 🙁

Let’s expand this example so we get a total sum:

cat apache-anon-noadmin.log  | awk ' { totalkb += $10; total[$9] += $10 } \
END {  for (x in total) { printf "Status code %3d : %9.2f Kb\n", x, total[x]/1024 } \
printf ("\nTotal send      : %9.2f Kb\n", totalkb/1024); } '
Status code 200 : 329836.22 Kb
Status code 206 :   4649.29 Kb
Status code 301 :    535.72 Kb
Status code 302 :     20.26 Kb
Status code 304 :    572.77 Kb
Status code 404 :   5106.29 Kb
Status code 500 :   2336.42 Kb

Total send      : 343056.96 Kb

 

Example 4: top 10 referrers

We use the " as the separator here. We need this because the referrer is inside those quotes. This is how we can deal with request URLs, referrers, and user agents without problems. This time we don't use a BEGIN block to change the FS variable; we change it through a command-line parameter. Now, most of the referrers are either from our own blog or a '-', when no referrer is given. We add additional grep commands to remove those referrers. Again, sorting, doing a unique count, reverse numeric sorting, and limiting with head gives us a nice result:

cat apache-anon-noadmin.log | awk -F\" ' { print $4 } ' | \
grep -v '-' | grep -v 'http://www.adayinthelife' | sort | \
uniq -c | sort -rn | head -n 10
 343 http://www.phpdeveloper.org/news/15544
 175 http://www.dzone.com/links/rss/top5_certifications_for_every_php_programmer.html
 71 http://www.dzone.com/links/index.html
 64 http://www.google.com/reader/view/
 54 http://www.phpdeveloper.org/
 50 http://phpdeveloper.org/
 49 http://www.dzone.com/links/r/top5_certifications_for_every_php_programmer.html
 45 http://www.phpdeveloper.org/news/15544?utm_source=twitterfeed&utm_medium=twitter
 22 http://abcphp.com/41578/
 21 http://twitter.com

At least I can quickly see which sites I need to send some christmas cards to.

 

Example 5: top 10 user-agents

How simple is this? The user agent is in column 6 instead of 4, and we don't need the greps, so this one needs no explanation:

cat apache-anon-noadmin.log | awk -F\" ' { print $6 } ' | \
sort | uniq -c | sort -rn | head -n 10
 5891 Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.215 Safari/534.10
 4145 Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12
 3440 Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.215 Safari/534.10
 2338 Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12
 2314 Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.6) Gecko/2009011912 Firefox/3.0.6
 2001 Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12
 1959 Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_5; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.215 Safari/534.10
 1241 Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_5; en-us) AppleWebKit/533.19.4 (KHTML, like Gecko) Version/5.0.3 Safari/533.19.4
 1122 Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.215 Safari/534.10
 1010 Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12

There are many different packages that allow you to generate reports on who’s visiting your site and what they’re doing. The most popular at this time appear to be “Analog”, “The Webalizer” and “AWStats” which are installed by default on many shared servers.

While such programs generate attractive reports, they only scratch the surface of what the log files can tell you. In this section we look at ways you can delve more deeply – focussing on the use of simple command line tools, particularly grep, awk and sed.

1. Combined log format

The following assumes an Apache HTTP Server combined log format where each entry in the log file contains the following information:

%h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-agent}i"
where:

%h = IP address of the client (remote host) which made the request
%l = RFC 1413 identity of the client
%u = userid of the person requesting the document
%t = Time that the server finished processing the request
%r = Request line from the client in double quotes
%>s = Status code that the server sends back to the client
%b = Size of the object returned to the client
The final two items: Referer and User-agent give details on where the request originated and what type of agent made the request.

Sample log entries:

66.249.64.13 - - [18/Sep/2004:11:07:48 +1000] "GET /robots.txt HTTP/1.0" 200 468 "-" "Googlebot/2.1"
66.249.64.13 - - [18/Sep/2004:11:07:48 +1000] "GET / HTTP/1.0" 200 6433 "-" "Googlebot/2.1"
Note: The robots.txt file gives instructions to robots as to which parts of your site they are allowed to index. A request for / is a request for the default index page, normally index.html.

2. Using awk

The principal use of awk is to break up each line of a file into ‘fields’ or ‘columns’ using a pre-defined separator. Because each line of the log file is based on the standard format we can do many things quite easily.

Using the default separator which is any white-space (spaces or tabs) we get the following:

awk '{print $1}' combined_log # ip address (%h)
awk '{print $2}' combined_log # RFC 1413 identity (%l)
awk '{print $3}' combined_log # userid (%u)
awk '{print $4,$5}' combined_log # date/time (%t)
awk '{print $9}' combined_log # status code (%>s)
awk '{print $10}' combined_log # size (%b)
You might notice that we've missed out some items. To get to them we need to set the delimiter to the " character, which changes the way the lines are 'exploded' and allows the following:

awk -F\" '{print $2}' combined_log # request line (%r)
awk -F\" '{print $4}' combined_log # referer
awk -F\" '{print $6}' combined_log # user agent
Now that you understand the basics of breaking up the log file and identifying different elements, we can move on to more practical examples.

3. Examples

You want to list all user agents ordered by the number of times they appear (descending order):

awk -F\" '{print $6}' combined_log | sort | uniq -c | sort -fr
All we're doing here is extracting the user agent field from the log file and 'piping' it through some other commands. The first sort is to enable uniq to properly identify and count unique user agents. The final sort orders the result by number and name (both descending).

The result will look similar to a user agents report generated by one of the above-mentioned packages. The difference is that you can generate this ANY time from ANY log file or files.

If you're not particularly interested in which operating system the visitor is using, or what browser extensions they have, then you can use something like the following:

awk -F\" '{print $6}' combined_log \
| sed 's/(\([^;]\+; [^;]\+\)[^)]*)/(\1)/' \
| sort | uniq -c | sort -fr
Note: The \ at the end of a line simply indicates that the command will continue on the next line.

This will strip out the third and subsequent values in the ‘bracketed’ component of the user agent string. For example:

Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR)
becomes:

Mozilla/4.0 (compatible; MSIE 6.0)
The next step is to start filtering the output so you can narrow down on a certain page or referer. Would you like to know which pages Google has been requesting from your site?

awk -F\" '($6 ~ /Googlebot/){print $2}' combined_log | awk '{print $2}'
Or who’s been looking at your guestbook?

awk -F\" '($2 ~ /guestbook\.html/){print $6}' combined_log
It’s just too easy isn’t it!

Using just the examples above you can already generate your own reports to back up any kind of automated reporting your ISP provides. You could even write your own log analysis program.

4. Using log files to identify problems with your site

The steps outlined below will let you identify problems with your site by identifying the different server responses and the requests that caused them:

awk '{print $9}' combined_log | sort | uniq -c | sort
The output shows how many of each type of request your site is getting. A ‘normal’ request results in a 200 code which means a page or file has been requested and delivered but there are many other possibilities.

The most common responses are:

200 – OK
206 – Partial Content
301 – Moved Permanently
302 – Found
304 – Not Modified
401 – Unauthorised (password required)
403 – Forbidden
404 – Not Found
Note: For more on Status Codes you can read the article HTTP Server Status Codes.

A 301 or 302 code means that the request has been re-directed. What you’d like to see, if you’re concerned about bandwidth usage, is a lot of 304 responses – meaning that the file didn’t have to be delivered because they already had a cached version.

A 404 code may indicate that you have a problem – a broken internal link or someone linking to a page that no longer exists. You might need to fix the link, contact the site with the broken link, or set up a PURL so that the link can work again.

The next step is to identify which pages/files are generating the different codes. The following command will summarise the 404 (“Not Found”) requests:

# list all 404 requests
awk '($9 ~ /404/)' combined_log

# summarise 404 requests
awk '($9 ~ /404/)' combined_log | awk '{print $9,$7}' | sort
Or, you can use an inverted regular expression to summarise the requests that didn’t return 200 (“OK”):

awk '($9 !~ /200/)' combined_log | awk '{print $9,$7}' | sort | uniq
Or, you can include (or exclude in this case) a range of responses, in this case requests that returned 200 (“OK”) or 304 (“Not Modified”):

awk '($9 !~ /200|304/)' combined_log | awk '{print $9,$7}' | sort | uniq
Suppose you've identified a link that's generating a lot of 404 errors. Let's see where the requests are coming from:

awk -F\" '($2 ~ "^GET /path/to/brokenlink\.html"){print $4,$6}' combined_log
Now you can see not just the referer, but the user-agent making the request. You should be able to identify whether there is a broken link within your site, on an external site, or if a search engine or similar agent has an invalid address.

If you can’t fix the link, you should look at using Apache mod_rewrite or a similar scheme to redirect (301) the requests to the most appropriate page on your site. By using a 301 instead of a normal (302) redirect you are indicating to search engines and other intelligent agents that they need to update their link as the content has ‘Moved Permanently’.

5. Who’s ‘hotlinking’ my images?

Something that really annoys some people is having their bandwidth consumed by other websites linking directly to their images.

Here’s how you can see who’s doing this to your site. Just change www.example.net to your domain, and combined_log to your combined log file.

awk -F\" '($2 ~ /\.(jpg|gif)/ && $4 !~ /^http:\/\/www\.example\.net/){print $4}' combined_log \
| sort | uniq -c | sort
Translation:

explode each row using " as the delimiter;
the request line (%r) must contain ".jpg" or ".gif";
the referer must not start with your website address (www.example.net in this example);
display the referer and summarise.
You can block hot-linking using mod_rewrite but that can also result in blocking various search engine result pages, caches and online translation software. To see if this is happening, we look for 403 (“Forbidden”) errors in the image requests:

# list image requests that returned 403 Forbidden
awk '($9 ~ /403/)' combined_log \
| awk -F\" '($2 ~ /\.(jpg|gif)/){print $4}' \
| sort | uniq -c | sort
Translation:

the status code (%>s) is 403 Forbidden;
the request line (%r) contains ".jpg" or ".gif";
display the referer and summarise.
You might notice that the above command is simply a combination of the previous one and one presented earlier. It is necessary to call awk more than once because the 'referer' field is only available after the separator is set to \", whereas the 'status code' is available directly.

6. Blank User Agents

A ‘blank’ user agent is typically an indication that the request is from an automated script or someone who really values their privacy. The following command will give you a list of ip addresses for those user agents so you can decide if any need to be blocked:

awk -F\" '($6 ~ /^-?$/)' combined_log | awk '{print $1}' | sort | uniq
A further pipe through logresolve will give you the hostnames of those addresses.
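For example (a sketch that pipes the bare IP list through Apache's bundled logresolve utility, assuming it is on your PATH):

awk -F\" '($6 ~ /^-?$/)' combined_log | awk '{print $1}' | sort | uniq | logresolve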

 

View Apache requests per day

Run the following command to see requests per day:
awk '{print $4}' rmohan.com | cut -d: -f1 | uniq -c
View Apache requests per hour

Run the following command to see requests per hour:
grep "23/Jan" rmohan.com | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":00"}' | sort -n | uniq -c
View Apache requests per minute

grep "23/Jan/2013:06" rmohan.com | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":"$3}' | sort -nk1 -nk2 | uniq -c | awk '{ if ($1 > 10) print $0}'
1 - Most Common 404s (Page Not Found)
cut -d'"' -f2,3 /var/log/apache/access.log | awk '$4==404{print $4" "$2}' | sort | uniq -c | sort -rg

2 – Count requests by HTTP code

cut -d'"' -f3 /var/log/apache/access.log | cut -d' ' -f2 | sort | uniq -c | sort -rg

3 – Largest Images
cut -d'"' -f2,3 /var/log/apache/access.log | grep -E '\.jpg|\.png|\.gif' | awk '{print $5" "$2}' | sort | uniq | sort -rg

4 – Filter Your IPs Requests
tail -f /var/log/apache/access.log | grep <your IP>

5 – Top Referring URLS
cut -d'"' -f4 /var/log/apache/access.log | grep -v '^-$' | grep -v '^http://www.rmohan.com' | sort | uniq -c | sort -rg

6 – Watch Crawlers Live
For this we need an extra file which we’ll call bots.txt. Here’s the contents:

Bot
Crawl
ai_archiver
libwww-perl
spider
Mediapartners-Google
slurp
wget
httrack

This just helps us to filter out common user agents used by crawlers.
Here’s the command:
tail -f /var/log/apache/access.log | grep -f bots.txt

7 – Top Crawlers
This command will show you all the spiders that crawled your site with a count of the number of requests.
cut -d'"' -f6 /var/log/apache/access.log | grep -f bots.txt | sort | uniq -c | sort -rg
How To Get A Top Ten
You can easily turn the commands above that aggregate (the ones using uniq) into a top ten by adding this to the end:
| head

That is pipe the output to the head command.
Simple as that.
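For example, the crawler count from section 7 becomes a top ten like this:

cut -d'"' -f6 /var/log/apache/access.log | grep -f bots.txt | sort | uniq -c | sort -rg | head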

Zipped Log Files
If you want to run the above commands on a logrotated file, you can adjust easily by starting with a zcat on the file then piping to the first command (the one with the filename).

So this:
cut -d'"' -f3 /var/log/apache/access.log | cut -d' ' -f2 | sort | uniq -c | sort -rg
Would become this:
zcat /var/log/apache/access.log.1.gz | cut -d'"' -f3 | cut -d' ' -f2 | sort | uniq -c | sort -rg

 

Analyse an Apache access log for the most common IP addresses
Terminal – Analyse an Apache access log for the most common IP addresses
tail -10000 access_log | awk '{print $1}' | sort | uniq -c | sort -n | tail
Terminal – Alternatives
zcat access_log.*.gz | awk '{print $7}' | sort | uniq -c | sort -n | tail -n 20
awk 'NR<=10000{a[$1]++}END{for (i in a) printf "%-6d %s\n",a[i], i | "sort -n"}' access.log

 

 

Print lines within a particular time range

awk '/01:05:/,/01:20:/' access.log
Sort access log by response size (increasing)

awk --re-interval '{ match($0, /(([^[:space:]]+|\[[^\]]+\]|"[^"]+")[[:space:]]+){7}/, m); print m[2], $0 }' access.log | sort -nk 1

View TCP connection status:
netstat -nat | awk '{print $6}' | sort | uniq -c | sort -rn
netstat -n | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'
netstat -n | awk '/^tcp/ {++state[$NF]} END {for (key in state) print key, "\t", state[key]}'
netstat -n | awk '/^tcp/ {++arr[$NF]} END {for (k in arr) print k, "\t", arr[k]}'
netstat -n | awk '/^tcp/ {print $NF}' | sort | uniq -c | sort -rn
netstat -ant | awk '{print $NF}' | grep -v '[a-z]' | sort | uniq -c
netstat -ant | awk '/ip:80/ {split($5, ip, ":"); ++S[ip[1]]} END {for (a in S) print S[a], a}' | sort -n
netstat -ant | awk '/:80/ {split($5, ip, ":"); ++S[ip[1]]} END {for (a in S) print S[a], a}' | sort -rn | head -n 10
awk 'BEGIN {printf("http_code\tcount_num\n")} {COUNT[$10]++} END {for (a in COUNT) printf a "\t\t" COUNT[a] "\n"}' access.log
Find the top 20 IPs by number of requests (commonly used to find the source of an attack):
netstat -anlp | grep 80 | grep tcp | awk '{print $5}' | awk -F: '{print $1}' | sort | uniq -c | sort -nr | head -n20
netstat -ant | awk '/:80/ {split($5, ip, ":"); ++A[ip[1]]} END {for (i in A) print A[i], i}' | sort -rn | head -n20
Use tcpdump to sniff port 80 and see which clients access it the most:
tcpdump -i eth0 -tnn dst port 80 -c 1000 | awk -F"." '{print $1"."$2"."$3"."$4}' | sort | uniq -c | sort -nr | head -20
Find connections stuck in TIME_WAIT:
netstat -n | grep TIME_WAIT | awk '{print $5}' | sort | uniq -c | sort -rn | head -n20
Find SYN connections:
netstat -an | grep SYN | awk '{print $5}' | awk -F: '{print $1}' | sort | uniq -c | sort -nr | more
Find the process behind a port:
netstat -ntlp | grep 80 | awk '{print $7}' | cut -d/ -f1
Web logs (Apache):
1. Top 10 IP addresses by number of visits:
cat access.log | awk '{print $1}' | sort | uniq -c | sort -nr | head -10
cat access.log | awk '{counts[$11] += 1} END {for (url in counts) print counts[url], url}'
2. Most visited files or pages: take the top 20, and count all distinct IPs:
cat access.log | awk '{print $11}' | sort | uniq -c | sort -nr | head -20
awk '{print $1}' access.log | sort -n -r | uniq -c | wc -l
3. List the largest exe file transfers (commonly used when analyzing a download site):
cat access.log | awk '($7 ~ /\.exe/) {print $10 " " $1 " " $4 " " $7}' | sort -nr | head -20
4. List exe files larger than 200000 bytes (about 200kb) and how often each occurs:
cat access.log | awk '($10 > 200000 && $7 ~ /\.exe/) {print $7}' | sort -n | uniq -c | sort -nr | head -100
5. If the last field of the log records the page transfer time, list the pages that are most time-consuming for the client:
cat access.log | awk '($7 ~ /\.php/) {print $NF " " $1 " " $4 " " $7}' | sort -nr | head -100
6. List the most time-consuming pages (more than 60 seconds) along with how often they occur:
cat access.log | awk '($NF > 60 && $7 ~ /\.php/) {print $7}' | sort -n | uniq -c | sort -nr | head -100
7. List files whose transmission took longer than 30 seconds:
cat access.log | awk '($NF > 30) {print $7}' | sort -n | uniq -c | sort -nr | head -20
8. Total website traffic (GB):
cat access.log | awk '{sum += $10} END {print sum/1024/1024/1024}'
9. Count 404 responses:
awk '($9 ~ /404/)' access.log | awk '{print $9, $7}' | sort
10. HTTP status statistics:
cat access.log | awk '{counts[$9] += 1} END {for (code in counts) print code, counts[code]}'
cat access.log | awk '{print $9}' | sort | uniq -c | sort -rn
11. Requests per second:
awk '{if ($9 ~ /200|30|404/) COUNT[$4]++} END {for (a in COUNT) print a, COUNT[a]}' access.log | sort -k 2 -nr | head -n10
12. Bandwidth statistics:
cat apache.log | awk '{if ($7 ~ /GET/) count++} END {print "client_request=" count}'
cat apache.log | awk '{BYTE += $11} END {print "client_kbyte_out=" BYTE/1024 "KB"}'
13. Count the number of objects served and the average object size:
cat access.log | awk '{byte += $10} END {print byte/NR/1024, NR}'
cat access.log | awk '{if ($9 ~ /200|30/) COUNT[$NF]++} END {for (a in COUNT) print a, COUNT[a], NR, COUNT[a]/NR*100 "%"}'
14. Extract a five-minute slice of the log:
if [ $DATE_MINUTE != $DATE_END_MINUTE ]; then
    # if the start and end timestamps differ, look up the line number of each
    START_LINE=`sed -n "/$DATE_MINUTE/=" $APACHE_LOG | head -n1`
    END_LINE=`sed -n "/$DATE_END_MINUTE/=" $APACHE_LOG | head -n1`
    # use the line numbers to copy the five minutes of log into a temporary file
    sed -n "${START_LINE},${END_LINE}p" $APACHE_LOG > $MINUTE_LOG
    # recover the timestamps found at the start and end lines
    GET_START_TIME=`sed -n "${START_LINE}p" $APACHE_LOG | awk -F'[' '{print $2}' | awk '{print $1}' | sed 's#/##g' | sed 's#:##'`
    GET_END_TIME=`sed -n "${END_LINE}p" $APACHE_LOG | awk -F'[' '{print $2}' | awk '{print $1}' | sed 's#/##g' | sed 's#:##'`
15. Spider analysis: see which spiders crawl the content:
/usr/sbin/tcpdump -i eth0 -l -s 0 -w - dst port 80 | strings | grep -i user-agent | grep -i -E 'bot|crawler|slurp|spider'

Site analysis 2 (Squid):
1. Traffic statistics per domain:
zcat squid_access.log.tar.gz | awk '{print $10, $7}' | awk 'BEGIN {FS="[ /]"} {trfc[$4] += $1} END {for (domain in trfc) {printf "%s\t%d\n", domain, trfc[domain]}}'
(A more efficient Perl version is available for download.)

Database:
1. Watch SQL statements on the wire:
/usr/sbin/tcpdump -i eth0 -s 0 -l -w - dst port 3306 | strings | egrep -i 'SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER|CALL'

System debugging:
1. Debug commands:
strace -p PID    # trace the specified process by PID
gdb -p PID

CHECKING FOR HIGH VISITS FROM A LIMITED NUMBER OF IPS

First locate the log file for your site. The generic log is generally at /var/log/httpd/access_log or /var/log/apache2/access_log (depending on your distro). For virtualhost-specific logs, check the conf files or (if you have one active site and others in the background) run ls -alt /var/log/httpd to see which file is most recently updated.

1. Check out total unique visitors:
cat access.log | awk '{print $1}' | sort | uniq -c | wc -l

2. Check out unique visitors today:
cat access.log | grep `date '+%e/%b/%G'` | awk '{print $1}' | sort | uniq -c | wc -l

3. Check out unique visitors this month:
cat access.log | grep `date '+%b/%G'` | awk '{print $1}' | sort | uniq -c | wc -l

4. Check out unique visitors on an arbitrary date:
cat access.log | grep 22/Mar/2013 | awk '{print $1}' | sort | uniq -c | wc -l

5. Check out unique visitors for the month of March:
cat access.log | grep Mar/2013 | awk '{print $1}' | sort | uniq -c | wc -l

6. Check out the number of visits/requests per visitor IP:
cat access.log | awk '{print "requests from " $1}' | sort | uniq -c | sort

7. Check out the number of visits/requests per IP on a given date:
cat access.log | grep 26/Mar/2013 | awk '{print "requests from " $1}' | sort | uniq -c | sort

8. Find out the targets of the last 5,000 hits:

tail -5000 access.log | awk '{print $1}' | sort | uniq -c | sort -n

9. Finally, if you have a ton of domains you may want to use this to aggregate them:

for k in `ls --color=none`; do echo "Top visitors by ip for: $k"; awk '{print $1}' ~/logs/$k/http/access.log | sort | uniq -c | sort -n | tail; done

10. This command is great if you want to see what is being called the most (it can often show you that a specific script is being abused if it's being called way more times than anything else on the site):

awk '{print $7}' access.log | cut -d? -f1 | sort | uniq -c | sort -nk1 | tail -n10

11. If you have multiple domains on a PS (PS only!), run this command to get all traffic for all domains on the PS:

for k in `ls -S /home/*/logs/*/http/access.log`; do wc -l $k | sort -r -n; done

12. Here is an alternative to the above command which does the same thing; this is for VPS only, using an admin user:
sudo find /home/*/logs -type f -name "access.log" -exec wc -l "{}" \; | sort -r -n

13. If you're on a shared server you can run this command, which will do the same as the one above but just for the domains in your logs directory. You have to run this command while you're in your user's logs directory:
for k in `ls -S */http/access.log`; do wc -l $k | sort -r -n; done

14. Grep apache access.log and list IPs by hits and date:
grep Mar/2013 /var/log/apache2/access.log | awk '{ print $1 }' | sort -n | uniq -c | sort -rn | head

15. Find out top referring URLs:

cut -d'"' -f4 /var/log/apache/access.log | grep -v '^-$' | grep -v '^http://www.your-site.com' | sort | uniq -c | sort -rg

16. Check out the top 'Page Not Found's (404):
cut -d'"' -f2,3 /var/log/apache/access.log | awk '$4==404{print $4" "$2}' | sort | uniq -c | sort -rg

17. Check out the top largest images:
cut -d'"' -f2,3 /var/log/apache/access.log | grep -E '\.jpg|\.png|\.gif' | awk '{print $5" "$2}' | sort | uniq | sort -rg
18. Check out server response codes:
cut -d'"' -f3 /var/log/apache/access.log | cut -d' ' -f2 | sort | uniq -c | sort -rg
19. Check out Apache requests per day:
awk '{print $4}' access.log | cut -d: -f1 | uniq -c
20. Check out Apache requests per hour:
grep "6/May" access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":00"}' | sort -n | uniq -c
21. Check out Apache requests per minute:
grep "6/May/2013:06" access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":"$3}' | sort -nk1 -nk2 | uniq -c | awk '{ if ($1 > 10) print $0}'
22. All of these commands can easily be run on a log-rotated file:
zcat /var/log/apache/access.log.1.gz | cut -d'"' -f3 | cut -d' ' -f2 | sort | uniq -c | sort -rg

Example:
Grep log analysis collection
1. Analyze the top 20 URLs accessed on 2012-05-04, sorted:
cat access.log | grep '04/May/2012' | awk '{print $11}' | sort | uniq -c | sort -nr | head -20
Query the IP addresses that accessed page URLs containing www.abc.com:
cat access_log | awk '($11 ~ /www.abc.com/) {print $1}' | sort | uniq -c | sort -nr
2. Get the top 10 IP addresses by access count (this can also be restricted by time):
cat linewow-access.log | awk '{print $1}' | sort | uniq -c | sort -nr | head -10
Query log entries within a given time window:
cat wangsu.log | egrep '06/Sep/2012:14:35|06/Sep/2012:15:05' | awk '{print $1}' | sort | uniq -c | sort -nr | head -10
Top 10 most-visited IPs on a given day:
cat /tmp/access.log | grep "20/Mar/2011" | awk '{print $3}' | sort | uniq -c | sort -nr | head
What the busiest IP that day is requesting:
cat access.log | grep "10.0.21.17" | awk '{print $8}' | sort | uniq -c | sort -nr | head -n 10
Find the most-visited minutes:
awk '{print $4}' access.log | grep "20/Mar/2011" | cut -c 14-18 | sort | uniq -c | sort -nr | head

SCONFIG CLI TOOL 2012

Besides sconfig.cmd, you have the full arsenal of command line tools available on Server Core. You can quickly change the most rudimentary settings with sconfig.cmd, but if you want to configure your server using scripts, you can still perform most of these actions through the usual suspects netdom.exe (to change the hostname and join an Active Directory domain), netsh.exe (to change IP addressing), and net.exe (to add or remove user accounts).
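For instance, a quick sketch of those tools in action (hypothetical names and addresses):

netdom renamecomputer %COMPUTERNAME% /NewName:CORE01
netdom join CORE01 /Domain:corp.example.com /UserD:Administrator /PasswordD:*
netsh interface ipv4 set address name="Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1
net user LocalAdmin * /add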

Of course, with the built-in command line tools, you can also configure settings beyond sconfig.cmd, like the Windows firewall (netsh), driver installation (pnputil and drvinst), RAID volumes (diskraid), and services (sc).
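For instance, here is a minimal sketch of such a scripted setup, run from PowerShell; the computer name CORE01, the interface name "Ethernet", the CONTOSO domain, the account names, and the IP addresses are all placeholder assumptions:

# Rename the server (a reboot is required before the new name takes effect)
netdom renamecomputer $env:COMPUTERNAME /newname:CORE01
# Join an Active Directory domain (the * prompts for the password)
netdom join CORE01 /domain:CONTOSO /userd:Administrator /passwordd:*
# Assign a static IPv4 address, netmask, and default gateway to the NIC
netsh interface ipv4 set address name="Ethernet" static 192.168.1.10 255.255.255.0 192.168.1.1
# Add a local user account (prompts for the password)
net user LocalAdmin * /add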

For networking troubleshooting, arp, nbtstat, netstat, ping, pathping, route, and tracert are available. For file handling, you can put cacls, icacls, attrib, cipher, compact, expand, takeown, and robocopy to good use. Need to claim disk space on other drives? Slap on some diskpart, format, fsutil, and label. Defrag often, unless you’re running on SSDs. Lost? No worries. Simply use hostname and whoami to get you sorted. And I’m still leaving half of the tools out.

Of course, between a Server with a GUI installation and a Server Core installation, some command line tools are missing. Most notably, ServerManagerCmd.exe is not available on Server Core, since the whole of Server Manager is not available.

 

Pretty scary, if you’re not used to CLI tools.

BTW, it’s not that Server Core cannot run GUI-based tools. In fact, a number of tools still run fine on Core, such as Task Manager, Notepad, Regedit, and a couple of Control Panel applets. You may also be able to run much 3rd-party software, such as Mozilla Firefox. In addition, the management tasks for Server Core can be performed remotely by using GUI-based MMC snap-ins, as long as you initially configure the machine with a proper IP address, add it to a domain (if needed), and open the correct firewall rules and ports.
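As a hedged example, the firewall side of that remote-management prerequisite can be handled with a single netsh command; the rule group name below is the built-in English one:

# Allow inbound remote administration traffic (MMC snap-ins, WMI, and the like)
netsh advfirewall firewall set rule group="Remote Administration" new enable=yes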

Luckily for us, most of this pain has been solved by manually created scripts, 3rd-party graphical user interface tools, and lately, in R2, the SCONFIG tool.

SCONFIG was initially developed for Microsoft Hyper-V Server 2008, a free virtualization platform that is based on Windows Server 2008 RTM Core and has the Hyper-V role pre-installed. Some customers went ahead and copied the script onto Core installations of other machines. Since then, SCONFIG has been made part of the R2 release of Windows Server 2008.

With SCONFIG you can manage many aspects of a Server Core machine; it dramatically eases server configuration for Windows Server 2008 R2 Core deployments. With SCONFIG, you can quickly set your system up and get it on the network so that you can manage the server remotely.

Note that SCONFIG is also localized in almost 20 languages.

To run SCONFIG, simply enter sconfig.cmd in the command prompt window and press Enter.

 

 

Navigation through SCONFIG’s options is done by typing a number or letter representing the desired configuration or information option. These tasks include:

1) Join a Domain/Workgroup
2) Change Computer Name
3) Add Local Administrator
4) Configure/disable Remote Management
5) Windows Update Settings
6) Download and Install Updates
7) Enable/disable Remote Desktop
8) View/change Network Settings
9) View/change Date and Time
10) Log Off User
11) Restart Server
12) Shut Down Server
13) Exit to Command Line

For example, to change the computer name, one would press 2, enter the new name, and reboot the server.

How to Install and Turn on GUI from Command Line

What happens if you install Windows Server 2012 without the GUI features and then realize that you want to turn the GUI on? For those who are used to GUI-based Windows Server administration, seeing a command line interface can be daunting. This guide will take you from the command line interface, using PowerShell, to installing and turning on the GUI.

Install the GUI on Windows Server 2012

The first step is to enter PowerShell. At the command line prompt, just type powershell and you will see something like the below:

 


 

Windows Server 2012 – Turn on GUI – PowerShell

The next step is to type Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra to install the GUI features.

 

Windows Server 2012 – Turn on GUI – Install GUI

You will see a text-based installer. This part of the Windows Server 2012 GUI installation is rather easy, and there is little to do but wait for it to finish.

 

Windows Server 2012 – Turn on GUI – Grab a Drink During Install

Once this is complete, you do need to reboot the server before the GUI will be turned on. Unless you have something else going on that you need to shut down first, you can reboot immediately with shutdown -r -t 0.
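As a side note, Install-WindowsFeature also accepts a -Restart switch, so the whole operation can be collapsed into a single line that reboots automatically when the installation finishes:

# Install the GUI features and restart in one step
Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart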

2012 Features


 

 

If recent surveys cited by tech blogs and mainstream media are to be believed, Windows Server 2012 is a win. Microsoft says that, according to one survey, 65% of customers were satisfied with the new platform, which was released on August 1, 2012.

The company also says that the new hypervisor is getting raves, with 77% of companies already using Hyper-V or open to using it. This could be a sign that even if you have Server 2008, it might be time to upgrade.

So how is Windows Server 2012 better? Microsoft explains that Windows Server has a host of new features and improvements over Windows Server 2008 R2, such as:

1. More Secure Multitenancy

The security of a multi-tenant environment is one of the biggest concerns of cloud users today. There’s the notion that if your data resides in the same computer system or database as another company’s data, then your data is more vulnerable to being exposed. This could happen if there is a breach or a system bug that would allow a user from the other company to access your data.

Microsoft’s Windows Server 2008 provided some solutions to that problem by allowing you to isolate two virtual machines through server virtualization. But the network layer of your data center may not be fully isolated.

In Windows Server 2012, server virtualization not only covers the machines but also the network layer of your data center. You can restrict access to a virtual machine while also isolating your network and storage.

Private Virtual Local Area Network or PVLAN

More than that, Windows Server 2012 also has PVLAN capability, which allows administrators to isolate virtual machines on the network while public network resources remain accessible.

This feature was not found on Windows Server 2008.

Dynamic Host Configuration Protocol Guard and Router Guard

The Dynamic Host Configuration Protocol (DHCP) allows devices on your IP network to obtain their configuration automatically. The problem with DHCP is that it does not allow for authentication, which exposes it to different types of attacks. Windows Server 2012 has the DHCP Guard feature, which disconnects unauthorized DHCP servers and automatically drops their traffic at the switch port. In short, you are protected from unauthorized DHCP servers connecting to your network and providing wrong information to clients, among other things.

Windows Server 2012 also has the Router Guard feature, which gives you more security as well as authorization checks for virtual machines. This feature drops redirects and advertisements from rogue routers. Windows Server 2008 does not support these features.
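If you manage Hyper-V with PowerShell, both guards can be switched on per virtual network adapter. A minimal sketch, assuming a VM named VM01 (a placeholder):

# Drop DHCP server offers and router advertisements originating from this VM
Set-VMNetworkAdapter -VMName "VM01" -DhcpGuard On -RouterGuard On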

Hyper-V Extensible Switch

Another feature found exclusively in Windows Server 2012 is the Hyper-V Extensible Switch, an open platform that allows you to install plug-ins from third-party developers. This lets you extend the capabilities of your network and virtual machines, and gain functionality without having to code it yourself.

Each of these extension configurations is unique to every Hyper-V Extensible Switch instance and is therefore more secure.

Supported extensions on Windows Server 2012

Extension monitoring allows you to easily see traffic statistics relating to different layers.

There are extensions that can learn the flow of traffic to your network by looking at the workload life cycle of your virtual machines. This helps you optimize the network so it performs better.

Further, there are extensions that can block harmful state changes. These extensions provide you with the tools you need to improve management, diagnostics, and performance.

Lastly, one Hyper-V Extensible Switch can host multiple extensions. This lets you cut costs while also giving you better security and easier management.
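A short sketch of how extensions are inspected and switched on from PowerShell; the switch name "ExternalSwitch" is a placeholder, and "Microsoft NDIS Capture" is one of the in-box extensions:

# List the extensions bound to a virtual switch, then enable one of them
Get-VMSwitchExtension -VMSwitchName "ExternalSwitch"
Enable-VMSwitchExtension -VMSwitchName "ExternalSwitch" -Name "Microsoft NDIS Capture"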

2. Infrastructure Flexibility

Robert Mullins at eWeek.com wrote that Microsoft has announced that Windows Server 2012 with Hyper-V will enable it to bring VMware users into the fold.

With Windows Server 2008, you can use VLANs to isolate your networks, but as you grow, this becomes increasingly difficult and complex, to the point that managing networks at scale is very hard. Windows Server 2012 takes care of this problem by forgoing VLANs in favor of Hyper-V Network Virtualization.

Hyper-V Network Virtualization enables you to move virtual machines when you need to. You do not have to worry about hierarchical IP address provisioning across your virtual machines.  This means that you would not need new hardware such as switches, appliances and servers.

Other features that are present on Windows Server 2012 and that are not to be found on 2008:

  • IP Address Rewrite
  • Generic Routing Encapsulation – allows you to lessen the burden on your switches
  • Live storage migration – allows you to migrate virtual hard disks without shutting down the machine.

Features that were partially supported on Windows Server 2008 that are improved in Server 2012:

  • Live migration: lets you move a running virtual machine from one host to another without any downtime. In Windows Server 2012, you can migrate several machines simultaneously and, unlike in Windows Server 2008, live migrations are no longer limited to a cluster.
  • Importing virtual machines: Windows Server 2012 gives you an easier way to import and copy virtual machines.
  • Snapshot merging: Windows Server 2012 provides a way for you to merge snapshots into a running virtual machine.
  • Automation support: there are over 150 Hyper-V cmdlets built into Windows Server 2012, so you do not need development skills to automate tasks; a few are sketched below.
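A hedged taste of those cmdlets; the VM and host names are placeholders:

# Count the cmdlets shipped in the Hyper-V module
Get-Command -Module Hyper-V | Measure-Object
# List the virtual machines on this host
Get-VM
# Live-migrate a VM to another host; no cluster required in Server 2012
Move-VM -Name "VM01" -DestinationHost "HOST02"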

3. Higher Capacity Servers

With Windows Server 2012 you can now configure:

  • 320 logical processors per host, up from 64
  • 4 terabytes of physical memory per host, up from 1 terabyte
  • 64 virtual processors per VM, up from 4
  • 1 terabyte of memory per VM, up from only 64 gigabytes

You can also support 64 nodes and 8,000 virtual machines, up from only 16 nodes and 1,000 VMs.

Unlike in Windows Server 2008, Windows Server 2012:

  • Allows for non-uniform memory access (NUMA), which improves performance on large VMs
  • Supports single-root I/O virtualization (SR-IOV), which reduces network latency
  • Provides Hyper-V smart paging
  • Allows you to revise your dynamic memory configuration even while the VM is running
  • Allows you to easily monitor your usage of resources such as network, CPU, storage, and memory (see the sketch after this list)
  • Allows you to format virtual hard disks
  • Uses offloaded data transfer
  • Joins together multiple types of traffic on your network
  • Has Multipath I/O functionality
  • Increases capacity and reliability by supporting 4KB disk sectors
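The resource monitoring mentioned above is exposed through Hyper-V's resource metering cmdlets; a minimal sketch, with the VM name as a placeholder:

# Start collecting usage data for a VM, then read the averages back
Enable-VMResourceMetering -VMName "VM01"
Measure-VM -VMName "VM01"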

Windows Server 2012 also ensures high levels of availability by:

  • Reducing the size and cost of backups, disk space and network bandwidth use
  • Making it easier to have a disaster recovery plan and solution; it can actually recover business functions in minutes.
  • Lessening the chances of network failure
  • Protecting you from downtime by keeping server-side applications continuously accessible
  • Providing faster migration by using higher bandwidth during live migrations
  • Securing your physical devices by encrypting cluster volumes

Aside from these improvements, Windows Server 2012 also boasts a new Task Manager and a new file system (ReFS), as well as IP address management (IPAM). If you have an Itanium-based system, however, you might want to stay with Windows Server 2008, as Server 2012 does not support Itanium systems. All in all, this makes for a pretty exhaustive comparison.

2012 commands

Cmd – Opens the command prompt.
Compmgmt.msc – Opens the computer management console.
Devmgmt.msc – Opens the device manager.
Dfrg.msc – Opens the Windows disk defragmenter.
Diskmgmt.msc – Opens the disk management tool.
Eventvwr.msc – Opens the event viewer.
Fsmgmt.msc – Opens shared folders.
Gpedit.msc – Opens the group policy editor.
Lusrmgr.msc – Opens local users and groups.
Mailto: – Opens the default email client.
Msconfig – Opens the system configuration utility.
Msinfo32 – Opens the system information utility.
Perfmon.msc – Opens the performance monitor.
Regedit – Opens the registry editor.
Rsop.msc – Opens the resultant set of policy.
Secpol.msc – Opens local security settings.
Services.msc – Opens the services utility.
Sysedit – Opens the system configuration editor.
System.ini – Windows loading information.
Win.ini – Windows loading information.
Winver – Shows the current version of Windows.

Control Panel Access Run Commands

The following run commands access various parts of the Control Panel directly.

Appwiz.cpl – Add/Remove Programs.
Timedate.cpl – Date/Time Properties.
Desk.cpl – Display Properties.
Fonts – Fonts folder.
Inetcpl.cpl – Internet Properties.
Main.cpl keyboard – Keyboard Properties.
Main.cpl – Mouse Properties.
Mmsys.cpl – Multimedia Properties.
Netcpl.cpl – Network Properties.
Password.cpl – Password Properties.
Printers – Printers folder.
Mmsys.cpl sounds – Sound Properties.
Sysdm.cpl – System Properties.

Windows Server 2012 Server Core

sconfig.cmd

The most important built-in configuration tool is sconfig.cmd. The easiest way to give you a quick heads-up for this tool is to call it the command line equivalent of Server Manager on Server with a GUI installations and Full Installations of Windows Server, except for the remote management and multi-server management capabilities.

It allows you to easily change the hostname, IPv4 addressing, domain membership, time and time zone, Windows Update, and all the other settings you would like to change on a Windows Server installation after you just installed it. To run it, simply type sconfig.cmd at the command prompt and hit Enter.

 

SCRegEdit.wsf

Another great configuration utility in Server Core installations is SCRegEdit.wsf in the C:\Windows\System32 folder. This tool gives you the options to enable or disable Remote Desktop and configure Windows Update settings.

 

Command line tools

Besides sconfig.cmd, you have the full arsenal of command line tools available on Server Core. You can quickly change the most rudimentary settings with sconfig.cmd, but if you want to configure your server using scripts, you can still perform most of these actions through the usual suspects netdom.exe (to change the hostname and join an Active Directory domain), netsh.exe (to change IP addressing), and net.exe (to add or remove user accounts).

Of course, with the built-in command line tools, you can also configure settings beyond sconfig.cmd, like the Windows firewall (netsh), driver installation (pnputil and drvinst), RAID volumes (diskraid), and services (sc).
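A small sketch of what that looks like in practice; the rule group, driver path, and service name below are placeholder assumptions:

# Open the firewall for a built-in rule group
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes
# Stage a driver package into the driver store
pnputil -a C:\Drivers\nic\netcard.inf
# Set a service to start automatically (note the space after start=)
sc.exe config w32time start= auto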

For networking troubleshooting, arp, nbtstat, netstat, ping, pathping, route, and tracert are available. For file handling, you can put cacls, icacls, attrib, cipher, compact, expand, takeown, and robocopy to good use. Need to claim disk space on other drives? Slap on some diskpart, format, fsutil, and label. Defrag often, unless you’re running on SSDs. Lost? No worries. Simply use hostname and whoami to get you sorted. And I’m still leaving half of the tools out.

Of course, between a Server with a GUI installation and a Server Core installation, some command line tools are missing. Most notably, ServerManagerCmd.exe is not available on Server Core, since the whole of Server Manager is not available.

PowerShell

Unless you’ve been living under a rock for the past few years, you have probably heard about the new command line shell in Windows, called PowerShell. Exchange Server was the first product to embrace this new technology for its management needs, and since then most of the Microsoft product teams have followed its example.

PowerShell is identical between all installations of Windows Server 2012: Server Core installations, Minimal Server installations, and Server with a GUI installations all offer the same built-in PowerShell modules and cmdlets. Also, per Server Role and Feature, independent of the installation type, the PowerShell modules and cmdlets are identical.
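You can verify this for yourself with a quick sketch:

# Show every module available on this installation,
# then the cmdlets a given module provides
Get-Module -ListAvailable
Get-Command -Module ServerManager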

 

PowerShell, therefore, is the recommended command line shell to base scripts on. These scripts can be run on all Windows Server installations and on Windows clients (depending on the availability of the PowerShell modules).

Note:
The PowerShell Integrated Scripting Environment (ISE) is the recommended tool to build scripts in, since it offers several tools to help you. However, the PowerShell ISE, itself, is a graphical tool and is not available on Server Core installations.

To start using PowerShell, simply type powershell at the command line.


 

Graphical tools

Up to this point, you might have the impression that Server Core is all about command line tools. But this is not the case. Server Core installations of Windows Server 2012 offer several graphical tools:

MSInfo32.exe

It’s not easy to get an overview of a Windows installation from the command line. Systeminfo.exe might get you up to speed from the command line, but if you really want to know about the system and get a comprehensive view of your hardware, system components, and software environment, start up MSInfo32.exe from the command line. Now you will have MSInfo32.exe, just like you would on any other system.

Within the SYSTEM32 folder of your Server Core installation, another great inventory tool is available: GatherNetworkInfo.vbs. As its name suggests, you can use it to dump the networking information of the installation into a file of your choice.
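A minimal invocation, run with the console script host:

# Run the inventory script without the cscript banner
cscript //nologo C:\Windows\System32\GatherNetworkInfo.vbs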

Notepad.exe

So how would you read such a file? Simple. Notepad.exe is available to you. You can use it as you would on a Server with a GUI, and this allows for a neat trick. If you liked working within Windows Explorer in previous versions of Windows, you can use the Open… command from the File menu (Ctrl+O) to get some of the same functionality. It even allows for right-clicking files to get their properties.

 

Regedit.exe and Regedt32.exe

Sometimes you really need to dig into the core of the operating system. Just as on a Server with a GUI, you can use the Registry Editor on a Server Core installation. This allows for easy manipulation of even the minutest setting in the operating system. Of course, you can also use secedit.exe to import Group Policy settings packages, allowing for easy manipulation of a series of registry settings.
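A sketch of such an import; the database, template, and log paths are placeholders:

# Apply the settings from a security template to the local machine
secedit /configure /db C:\Temp\secedit.sdb /cfg C:\Temp\template.inf /log C:\Temp\secedit.log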

Control Panel applets

Alas, even with the full arsenal of command line and graphical tools, some things cannot be achieved. This is why Microsoft has included Control Panel applets on Server Core installations. Although you cannot have the Control Panel view in Server Core installations, you can still run the following Control Panel applets from the command line:

TimeDate.cpl

To change the time and time zone on a Server Core installation, you can use timedate.cpl. This true Control Panel applet allows for all the usual settings, including configuring servers to synchronize time.

Intl.cpl

To change regional settings on a Server Core installation, you can use intl.cpl. It allows for changing time and date formats, currency settings, and the location of the server. There’s also the option to copy your settings to the logon screen so they apply to all system accounts and new users.

Iscsicpl.exe

In many virtual environments, where Server Core installations are used as Hyper-V hosts, shared storage is available through the iSCSI protocol. On Windows Server 2008-based Server Core installations, iscsicli.exe was the only tool available to connect to this storage. Since Windows Server 2008 R2, however, you can use iscsicpl.exe to connect to shared storage over iSCSI.

Proxy configuration from CMD line

Windows 2003

  1. Open a CMD prompt
  2. Type: proxycfg -p proxy.fqdn.com:8080, *.microsoft.com
    • Everything after the comma is for anything you want in the bypass proxy list.
  3. Hit ENTER
Windows 2008
  1. Open a CMD prompt, type:
    • NetSH
    • WinHTTP
    • Set Proxy proxy-server="PROXY.COM:8080" bypass-list="SERVER.COM"
    • Show Proxy
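To verify or undo the setting later, netsh winhttp offers matching subcommands; on Windows 2008 you can also copy the Internet Explorer proxy settings. A minimal sketch:

# Display, clear, or import the WinHTTP proxy configuration
netsh winhttp show proxy
netsh winhttp reset proxy
netsh winhttp import proxy source=ie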