BERKSHELF

Manage a Cookbook or an Application’s Cookbook dependencies

$ gem install berkshelf
Successfully installed berkshelf-2.0.0
1 gem installed

Specify your dependencies in a Berksfile in your cookbook’s root

site :opscode

cookbook 'mysql'
cookbook 'nginx', '~> 0.101.5'

Install the cookbooks you specified in the Berksfile and their dependencies

$ berks install

Add the Berksfile to your project

$ git add Berksfile
$ git commit -m "add Berksfile to project"

A Berksfile.lock will also be created. Add this to version control if you want to ensure that other developers (or your build server) will use the same versions of all cookbook dependencies.
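For example, to track the lock file as well:

$ git add Berksfile.lock
$ git commit -m "lock cookbook versions"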

MANAGING AN EXISTING COOKBOOK

If you already have a cookbook and it’s not managed by Berkshelf it’s easy to get up and running. Just locate your cookbook and initialize it!

$ berks init ~/code/my_face-cookbook

CREATING A NEW COOKBOOK

Want to start a new cookbook for a new application or supporting application?

$ berks cookbook new_application

GETTING HELP

If at any time you are stuck, or if you're just curious about what Berkshelf can do, just type the help command

$ berks help

Commands:
  berks apply ENVIRONMENT     # Apply the cookbook version locks from Berksfile.lock to a Chef environment
  berks configure             # Create a new Berkshelf configuration file
  berks contingent COOKBOOK   # List all cookbooks that depend on the given cookbook
  berks cookbook NAME         # Create a skeleton for a new cookbook
  berks help [COMMAND]        # Describe available commands or one specific command
  berks init [PATH]           # Initialize Berkshelf in the given directory
  berks install               # Install the cookbooks specified in the Berksfile
  berks list                  # List all cookbooks (and dependencies) specified in the Berksfile
  berks outdated [COOKBOOKS]  # Show outdated cookbooks (from the community site)
  berks package [COOKBOOK]    # Package a cookbook (and dependencies) as a tarball
  berks shelf SUBCOMMAND      # Interact with the cookbook store
  berks show [COOKBOOK]       # Display name, author, copyright, and dependency information about a cookbook
  berks update [COOKBOOKS]    # Update the cookbooks (and dependencies) specified in the Berksfile
  berks upload [COOKBOOKS]    # Upload the cookbook specified in the Berksfile to the Chef Server
  berks version               # Display version and copyright information

Options:
  -c, [--config=PATH]    # Path to Berkshelf configuration to use.
  -F, [--format=FORMAT]  # Output format to use.
                         # Default: human
  -q, [--quiet]          # Silence all informational output.
  -d, [--debug]          # Output debug information

You can get more detailed information about a command, or a subcommand, by asking it for help

$ berks shelf help

Commands:
  berks shelf help [COMMAND]  # Describe subcommands or one specific subcommand
  berks shelf list            # List all cookbooks and their versions
  berks shelf show            # Display information about a cookbook in the Berkshelf shelf
  berks shelf uninstall       # Remove a cookbook from the Berkshelf shelf

THE BERKSHELF

After running berks install you may ask yourself, “Where did my cookbooks go?”. They were added to The Berkshelf.

The Berkshelf is a location on your local disk which contains the cookbooks you have installed and their dependencies. By default, The Berkshelf is located at ~/.berkshelf but this can be altered by setting the environment variable BERKSHELF_PATH.
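For example, to relocate The Berkshelf before installing (the path shown is illustrative):

$ export BERKSHELF_PATH=/opt/berkshelf
$ berks install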

Berkshelf stores every version of a cookbook that you have ever installed. This is the same pattern found with RubyGems where, once you have resolved and installed a gem, you will have that gem and its dependencies until you delete it.

This central location is not the typical pattern of cookbook storage that you may be used to with Chef. The traditional pattern is to place all of your cookbooks in a directory called cookbooks or site-cookbooks within your Chef Repository. We do have all of our cookbooks in one central place; it's just not the Chef Repository, and they're stored within directories named using the convention {name}-{version}.

Given you have the cookbooks installed:

* nginx - 0.101.2
* mysql - 1.2.4

These cookbooks will be located at:

~/.berkshelf/cookbooks/nginx-0.101.2
~/.berkshelf/cookbooks/mysql-1.2.4

By default Chef interprets the name of a cookbook by the directory name. Some Chef internals weigh the name of the directory more heavily than if a cookbook developer were to explicitly set the name attribute in their metadata. Because the directory structure contains the cookbook’s version number, do not treat The Berkshelf as just another entry in your Chef::Config#cookbooks_path.

VENDORING COOKBOOKS

You can easily install your Cookbooks and their dependencies to a location other than The Berkshelf. A good case for this is when you want to “vendor” your cookbooks to be packaged and distributed.

$ berks install --path vendor/cookbooks

This will install your Cookbooks to the vendor/cookbooks directory relative to where you ran the command from. Inside the vendored cookbooks directory you will find a directory named after the cookbook it contains.
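Given the earlier Berksfile with mysql and nginx, the vendored layout would look something like:

$ ls vendor/cookbooks
mysql  nginx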

CONFIGURING BERKSHELF

Berkshelf will run with a default configuration unless you explicitly generate one. By default, Berkshelf uses the values found in your Knife configuration (if you have one).

You can override this default behavior by generating an explicit Berkshelf configuration file with the configure command

$ berks configure

Answer each question prompt with a value or press enter to accept the default value. As with Berkshelf’s default behavior, Berkshelf attempts to populate the default values from your Knife configuration (otherwise using something else sensible).

Config written to: '/Users/reset/.berkshelf/config.json'

You will only be prompted to fill in the most-traveled configuration options. Looking at the generated configuration will give you insight into other configurable values.

{
  "chef": {
    "chef_server_url": "https://api.opscode.com/organizations/vialstudios",
    "validation_client_name": "chef-validator",
    "validation_key_path": "/etc/chef/validation.pem",
    "client_key": "/Users/reset/.chef/reset.pem",
    "node_name": "reset"
  },
  "vagrant": {
    "vm": {
      "box": "Berkshelf-CentOS-6.3-x86_64-minimal",
      "box_url": "https://dl.dropbox.com/u/31081437/Berkshelf-CentOS-6.3-x86_64-minimal.box",
      "forward_port": {

      },
      "network": {
        "bridged": true,
        "hostonly": "33.33.33.10"
      },
      "provision": "chef_solo"
    }
  },
  "ssl": {
    "verify": true
  }
}

CONFIGURABLE OPTIONS

  • chef.chef_server_url – URL to a Chef Server API endpoint. (default: whatever is in your Knife file if you have one)
  • chef.node_name – your Chef API client name. (default: whatever is in your Knife file if you have one)
  • chef.client_key – filepath to your Chef API client key. (default: whatever is in your Knife file if you have one)
  • chef.validation_client_name – your Chef API’s validation client name. (default: whatever is in your Knife file if you have one)
  • chef.validation_key_path – filepath to your Chef API’s validation key. (default: whatever is in your Knife file if you have one)
  • vagrant.vm.box – name of the VirtualBox box to use when provisioning Vagrant virtual machines. (default: Berkshelf-CentOS-6.3-x86_64-minimal)
  • vagrant.vm.box_url – URL to the VirtualBox box (default: https://dl.dropbox.com/u/31081437/Berkshelf-CentOS-6.3-x86_64-minimal.box)
  • vagrant.vm.forward_port – a Hash of ports to forward where the key is the port to forward to on the guest and value is the host port which forwards to the guest on your host.
  • vagrant.vm.network.bridged – use a bridged connection to connect to your virtual machine?
  • vagrant.vm.network.hostonly – use a hostonly network for your virtual machine? (default: 33.33.33.10)
  • vagrant.vm.provision – use the chef_solo or chef_client provisioner? (default: chef_solo)
  • ssl.verify – should we verify all SSL http connections? (default: true)
  • cookbook.copyright – the copyright information to include when you generate new cookbooks. (default: whatever is in your Knife file if you have one)
  • cookbook.email – the email address to include when you generate new cookbooks. (default: whatever is in your Knife file if you have one)
  • cookbook.license – the license to use when you generate new cookbooks. (default: whatever is in your Knife file if you have one)

The configuration values are notated in ‘dotted path’ format. These translate to a nested JSON structure.
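For example, the ssl.verify option shown above corresponds to this nested structure:

{
  "ssl": {
    "verify": true
  }
}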

VAGRANT WITH BERKSHELF

Berkshelf was designed for iterating on cookbooks and applications quickly. Vagrant provides us with a way to spin up a virtual environment and configure it using a built-in Chef provisioner. If you have never used Vagrant before – stop now – read the Vagrant documentation and give it a try. Your cookbook development life is about to become 100% better.

If you have used Vagrant before, READ ON!

INSTALL VAGRANT

Visit the Vagrant downloads page and download the latest installer for your operating system.

INSTALL THE VAGRANT BERKSHELF PLUGIN

As of Berkshelf 1.3.0 there is now a separate gem which includes the Vagrant Berkshelf plugin. This plugin supports Vagrant 1.1.0 and greater.

To install the plugin run the Vagrant plugin install command

$ vagrant plugin install vagrant-berkshelf
Installing the 'vagrant-berkshelf' plugin. This can take a few minutes...
Installed the plugin 'vagrant-berkshelf (1.2.0)!'

USING THE VAGRANT BERKSHELF PLUGIN

Once the Vagrant Berkshelf plugin is installed it can be enabled in your Vagrantfile

Vagrant.configure("2") do |config|
  ...
  config.berkshelf.enabled = true
  ...
end

If your Vagrantfile was generated by Berkshelf it’s probably already enabled

The plugin will look in your current working directory for your Berksfile by default. Just ensure that your Berksfile exists, and when you run vagrant up, vagrant provision, or vagrant destroy, the Berkshelf integration will automatically kick in!

$ vagrant provision
[Berkshelf] Updating Vagrant's berkshelf: '/Users/reset/.berkshelf/vagrant/berkshelf-20130320-28478-sy1k0n'
[Berkshelf] Installing nginx (1.2.0)
...

You can use both the Vagrant provided Chef Solo and Chef Client provisioners with the Vagrant Berkshelf plugin.

Chef Solo provisioner

The Chef Solo provisioner’s cookbook_path attribute is hijacked when using the Vagrant Berkshelf plugin. Cookbooks resolved from your Berksfile will automatically be made available to your Vagrant virtual machine. There is no need to explicitly set a value for cookbook_path attribute.

Chef Client provisioner

Cookbooks will automatically be uploaded to the Chef Server you have configured in the Vagrantfile’s Chef Client provisioner block. Your Berkshelf configuration’s chef.node_name and chef.client_key credentials will be used to authenticate the upload.

Setting a Berksfile location

By default, the Vagrant Berkshelf plugin will assume that the Vagrantfile is located in the same directory as a Berksfile. If your Berksfile is located in another directory you can override this behavior

Vagrant.configure("2") do |config|
  ...
  config.berkshelf.berksfile_path = "/Users/reset/code/my_face/Berksfile"
end

The above example will use an absolute path to the Berksfile of a sweet application called MyFace.

THE BERKSFILE

Dependencies are managed via the file Berksfile. The Berksfile is like Bundler’s Gemfile. Entries in the Berksfile are known as sources. It contains a list of sources identifying what Cookbooks to retrieve and where to get them.

metadata
cookbook 'memcached'
cookbook 'nginx'
cookbook 'pvpnet', path: '/Users/reset/code/riot-cookbooks/pvpnet-cookbook'
cookbook 'mysql', git: 'git://github.com/opscode-cookbooks/mysql.git'
cookbook 'myapp', chef_api: :config

All sources and their dependencies will be retrieved, recursively. Two kinds of sources can be defined.

METADATA SOURCE

The metadata source is like saying gemspec in Bundler’s Gemfile. It says, “There is a metadata.rb file within the same relative path of my Berksfile”. This allows you to resolve a Cookbook’s dependencies that you are currently working on just like you would resolve the dependencies of a Gem that you are currently working on with Bundler.

Given a Berksfile at ~/code/nginx-cookbook containing:

metadata

A metadata.rb file is assumed to be located at ~/code/nginx-cookbook/metadata.rb, describing your nginx cookbook.

COOKBOOK SOURCE

A cookbook source is a way to describe a cookbook to install or a way to override the location of a dependency.

Cookbook sources are defined with the format:

cookbook {name}, {version_constraint}, {options}

The first parameter is the name and is the only required parameter

cookbook "nginx"

The second parameter is a version constraint and is optional. If no version constraint is specified the latest is assumed

cookbook "nginx", ">= 0.101.2"

Constraints can be specified as

  • Equal to (=)
  • Greater than (>)
  • Greater than or equal to (>=)
  • Less than (<)
  • Less than or equal to (<=)
  • Pessimistic (~>)

The final parameter is an options hash
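Putting all three parameters together, a fully specified source might look like this (the group shown is illustrative; groups are covered below):

cookbook "mysql", "~> 1.2.0", group: :production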

SOURCE OPTIONS

Options passed to a source can contain a location or a group(s).

Locations

By default a cookbook source is assumed to come from the Opscode Community site http://cookbooks.opscode.com/api/v1/cookbooks. This behavior can be customized with a different location type. You might want to use a different location type if the cookbook is stored in a git repository, at a local file path, or at a different community site.

Chef API Location

The Chef API location allows you to treat your Chef Server like an artifact server. Cookbooks or dependencies can be pulled directly out of a Chef Server. This is super useful if your organization has cookbooks that aren't available to the community but may be dependencies of other proprietary cookbooks in your organization.

A Chef API Location is expressed with the chef_api key followed by some options. You can tell Berkshelf to use the Chef credentials found in your Berkshelf config by passing the symbol :config to chef_api.

cookbook "artifact", chef_api: :config

The Berkshelf configuration is by default located at ~/.berkshelf/config.json. You can specify a different configuration file with the -c flag.

$ berks install -c /Users/reset/.berkshelf/production-config.json

You can also explicitly define the chef_server_url, node_name, and client_key to use:

cookbook "artifact", chef_api: "https://api.opscode.com/organizations/vialstudios", node_name: "reset", client_key: "/Users/reset/.chef/reset.pem"
Site Location

The Site location can be used to specify a community site API to retrieve cookbooks from

cookbook "artifact", site: "http://cookbooks.opscode.com/api/v1/cookbooks"

The symbol :opscode is an alias for “Opscode’s newest community API” and can be provided in place of a URL

cookbook "artifact", site: :opscode
Path Location

The Path location is useful for rapid iteration because it does not download, copy, or move the cookbook to The Berkshelf or change the contents of the target. Instead the cookbook found at the given filepath will be used alongside the cookbooks found in The Berkshelf.

cookbook "artifact", path: "/Users/reset/code/artifact-cookbook"

The value given to :path can only contain a single cookbook and must contain a metadata.rb file.

Git Location

The Git location will clone the given Git repository to The Berkshelf if the Git repository contains a valid cookbook.

cookbook "mysql", git: "https://github.com/opscode-cookbooks/mysql.git"

Given the previous example, the cookbook found at the HEAD revision of the opscode-cookbooks/mysql Github project will be cloned to The Berkshelf.

An optional branch key can be specified whose value is a branch or tag that contains the desired cookbook.

cookbook "mysql", git: "https://github.com/opscode-cookbooks/mysql.git", branch: "foodcritic"

Given the previous example, the cookbook found at branch foodcritic of the opscode-cookbooks/mysql Github project will be cloned to The Berkshelf.

An optional tag key is an alias for branch and can be used interchangeably.

cookbook "mysql", git: "https://github.com/opscode-cookbooks/mysql.git", tag: "3.0.2"

Given the previous example, the cookbook found at tag 3.0.2 of the opscode-cookbooks/mysql Github project will be cloned to The Berkshelf.

An optional ref key can be specified as the exact SHA-1 commit ID to use, pinning the exact revision of the desired cookbook.

cookbook "mysql", git: "https://github.com/opscode-cookbooks/mysql.git", ref: "eef7e65806e7ff3bdbe148e27c447ef4a8bc3881"

Given the previous example, the cookbook found at commit id eef7e65806e7ff3bdbe148e27c447ef4a8bc3881 of the opscode-cookbooks/mysql Github project will be cloned to The Berkshelf.

An optional rel key can be specified if your repository contains many cookbooks in a single repository under a sub-directory or at root.

cookbook "rightscale", git: "https://github.com/rightscale/rightscale_cookbooks.git", rel: "cookbooks/rightscale"

This will fetch the cookbook rightscale from the specified Git location, from under the cookbooks sub-directory.

GitHub Location

As of version 1.0.0, you may now use GitHub shorthand to specify a location.

cookbook "artifact", github: "RiotGames/artifact-cookbook", tag: "0.9.8"

Given this example, the artifact cookbook from the RiotGames organization in the artifact-cookbook repository with a tag of 0.9.8 will be cloned to The Berkshelf.

The git protocol will be used if no protocol is explicitly set. To access a private repository specify the ssh or https protocol.

cookbook "keeping_secrets", github: "RiotGames/keeping_secrets-cookbook", protocol: :ssh

You will receive a repository not found error if you are referencing a private repository and have not set the protocol to https or ssh.

DEFAULT LOCATIONS

Any source that does not explicitly define a location will be retrieved from the latest Opscode community API. Any source not explicitly defined in the Berksfile but found in the metadata.rb of the current cookbook or a dependency will also use this default location.

Additional site locations can be specified with the site keyword in the Berksfile

site "http://cookbooks.opscode.com/api/v1/cookbooks"

This same entry could also have been written

site :opscode

A Chef API default location can also be specified, from which Berkshelf will attempt to retrieve your cookbook and its dependencies

chef_api "https://api.opscode.com/organizations/vialstudios", node_name: "reset", client_key: "/Users/reset/.chef/reset.pem"

Provided my Berkshelf config contains these Chef credentials, this could have been simplified by using the :config symbol

chef_api :config

Specifying a Chef API default location is particularly useful if you have cookbooks that are private to your organization that are not shared on the Opscode community site.

It is highly recommended that you upload your cookbooks to your organization’s Chef Server and then set a chef_api default location at the top of every application cookbook’s Berksfile

Multiple default locations

A combination of default locations can be specified in case a location is unavailable or does not contain the desired cookbook or version

chef_api :config
site :opscode

cookbook "artifact", "= 0.10.0"

The order in which the default locations keywords appear in the Berksfile is the order in which sources will be tried. In the above example Berkshelf would first try a Chef API using my Berkshelf configuration to find the “artifact” cookbook. If the Chef API didn’t contain the “artifact” cookbook, or version 0.10.0 of the cookbook, it will try the Opscode community site.

GROUPS

Adding sources to a group is useful if you want to ignore a cookbook or a set of cookbooks at install or upload time.

Groups can be defined via blocks:

group :solo do
  cookbook 'riot_base'
end

Groups can also be defined inline as an option:

cookbook 'riot_base', group: 'solo'

To exclude groups when installing, updating, or uploading, just add the --except flag.

$ berks install --except solo

GENERATING A NEW COOKBOOK

Berkshelf includes a command to help you quickly generate a cookbook with a number of helpful supporting tools

$ berks cookbook my_face --foodcritic

This will generate a cookbook called “my_face” in your current directory with Vagrant, Git, and Foodcritic support. Check out this guide for more information and the help provided in the Berkshelf CLI for the cookbook command.

BUILD INTEGRATION

Instead of invoking Berkshelf directly on the command-line, you can also run Berkshelf from within a Thor process.

THOR

Just add the following line to your Thorfile:

require 'berkshelf/thor'

Now you have access to common Berkshelf tasks without shelling out

$ thor list

berkshelf
---------
thor berkshelf:init [PATH]  # Prepare a local path to have its Cook...
thor berkshelf:install      # Install the Cookbooks specified by a B...
thor berkshelf:update       # Update all Cookbooks and their depende...
thor berkshelf:upload       # Upload the Cookbooks specified by a Be...
thor berkshelf:version      # Display version and copyright informat...



Update your Gemfile

Update the Gemfile with the dependencies; these go in the development block because the CI server will not need them:

group :development do
  gem "knife-spork", "1.0.17"
  gem "berkshelf", "2.0.3"
end

Configure your Cookbook Dependencies

Create a new file Berksfile in the root of the project with the following content:

site :opscode

cookbook 'minitest-handler', '0.1.7'

This is like your Gemfile, but for cookbooks.

Install the Community Cookbooks

Now, we want to install the cookbook dependency:

bundle exec berks install

Before uploading our community cookbooks to the Chef server, we need to override one of the default configurations by creating a config/berks-config.json file with the following content:

{
  "ssl": {
    "verify": false
  }
}

When we upload the community cookbooks, Berkshelf will pull the settings from our knife.rb file and then the berks-config.json file. We could run this command to do so:

bundle exec berks upload -c config/berks-config.json

But, it makes sense to wrap this in a Rake task for usability:

desc "Uploads Berkshelf cookbooks to our chef server"
task :berks_upload do
  sh "bundle exec berks upload -c config/berks-config.json"
end
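With the task in place, uploading becomes a one-liner:

$ bundle exec rake berks_upload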

Update the Vagrant File Run List

Finally, we modify the run list in our Vagrantfile to support the minitest handler:

...
chef.run_list = [
    'motd',
    'minitest-handler'
]
...

Running a vagrant provision will now execute our MOTD cookbook, and then run a blank minitest suite against it.

Coming up…

Next time, we will conclude our series by setting up final verification tests with a post on “Minitest” and make it part of our workflow process.

 

Add a Red Hat Enterprise Linux 6 system to Microsoft Active Directory

UPDATE: This article also works perfectly on Windows Server 2012 as well as Windows Server 2008. The process is exactly the same.

I’ve had countless numbers of people ask me over the years how to add a Linux system to Active Directory.

Here is a really quick and simple way to do it, using Winbind for user lookups and Kerberos for authentication.

In this example, I will be using the details below

Windows Domain Name:         rmohan.com
Windows Domain NetBIOS Name: RMOHAN
Domain Controller:           dc1.rmohan.com
Client Server name:          server01.rmohan.com

Setup

1. Firstly, install the necessary components.

yum install -y samba-winbind samba-winbind-clients oddjob-mkhomedir pam_krb5 krb5-workstation

2. Make sure oddjobd is running at startup. This is only for Red Hat Enterprise Linux 6 and other Red Hat based operating systems.

Red Hat Enterprise Linux 5 will use pam_mkhomedir. pam_mkhomedir has SELinux issues at present, so oddjobd is the way to go.

chkconfig oddjobd on

3. Set authconfig to point to the relevant systems for authentication.
Note: If you do not wish your users to log into your server via a shell, set --winbindtemplateshell=/sbin/nologin

authconfig --update --kickstart --enablewinbind --smbsecurity=ads --smbworkgroup=RMOHAN --smbrealm=rmohan.com --winbindtemplatehomedir=/home/%U --winbindtemplateshell=/bin/bash --enablewinbindusedefaultdomain --enablelocauthorize --enablekrb5 --krb5realm=RMOHAN.COM --enablekrb5kdcdns --enablekrb5realmdns --enablepamaccess

4. Just like in Windows, Add your system to the domain. Here I have used the Domain Administrator account, but any account with enough rights to add a system to the domain will suffice.

[root@server ~]# net ads join -U Administrator
Enter Administrator's password:
Using short domain name -- RMOHAN
Joined 'server' to realm 'rmohan.com'

Note: As you are now dealing with Active Directory, everything becomes time sensitive. Make sure your system clock is synchronised to one of your Domain Controllers as the NTP server.

Otherwise you will end up with errors like this when you try to add the system to the domain.

[root@server ~]# net ads join -U Administrator
Enter Administrator's password:
Using short domain name -- RMOHAN
Joined 'SERVER' to realm 'rmohan.com'
[2012/07/06 17:24:04.397769,  0] libads/kerberos.c:333(ads_kinit_password)
kerberos_kinit_password SERVER$@RMOHAN.EXAMPLE.COM failed: Clock skew too great
[root@server ~]#

5. Configure Winbind Backend
The default Winbind backend is great for single systems being added to Active Directory; however, if you are in a very large Linux estate like I usually am, you will need to change the backend to ensure that all UIDs/GIDs match across all your systems.

To do this, add the below lines to your global Samba configuration. Replace "RMOHAN" with your own Domain name.

idmap config RMOHAN:backend = rid
idmap config RMOHAN:range = 10000000-19999999
kerberos method = dedicated keytab
dedicated keytab file=/etc/krb5.keytab

Just so we are on the same page, my global configuration now looks like this

workgroup = RMOHAN
realm = RMOHAN.EXAMPLE.COM
security = ads
idmap uid = 16777216-33554431
idmap gid = 16777216-33554431
idmap config RMOHAN:backend = rid
idmap config RMOHAN:range = 10000000-19999999
kerberos method = dedicated keytab
dedicated keytab file=/etc/krb5.keytab
template homedir = /home/%U
template shell = /bin/bash
winbind use default domain = true
winbind offline logon = false

6. Restart Winbind service
Once you have added your system to the domain, it is important to restart the Winbind service.

[root@server ~]# service winbind restart
Shutting down Winbind services:                            [FAILED]
Starting Winbind services:                                 [  OK  ]
[root@server ~]#

7. Create a Kerberos keytab to enable Single Sign On (SSO)

[root@server ~]# net ads keytab create -U Administrator
Enter Administrator's password:
[root@server ~]#

8. Test configuration. If you receive no output for a known username, then something is wrong.

[root@server ~]# getent passwd Administrator
administrator:*:16777216:16777216:Administrator:/home/administrator:/bin/bash
[root@server ~]#

or, if you enabled shell logins,

User@workstation:~$ ssh Administrator@server.rmohan.com
Administrator@server.rmohan.com's password:
Your password will expire in 11 days.

Creating home directory for administrator.
[administrator@server ~]$
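You can also query Winbind directly with wbinfo (shipped with samba-winbind-clients); wbinfo -t verifies the trust secret with the domain controller, and wbinfo -u lists the domain users:

[root@server ~]# wbinfo -t
checking the trust secret for domain RMOHAN via RPC calls succeeded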

9. This is optional. A home directory will not exist on the system when a new user logs in; run the below command if you wish to have the home directory automatically created on first login.

[root@server ~]# authconfig –enablemkhomedir –update
Starting Winbind services:                                 [  OK  ]
Starting oddjobd:                                          [  OK  ]
[root@server ~]#

 

authconfig --enablemkhomedir --update

service messagebus restart

/etc/init.d/oddjobd restart

service winbind restart

 

 

Oddjobd fails to start [FIXED]

I was configuring a new CentOS 6.5 machine to accept Active Directory logins, and up until recently you could use the trusty pam_mkhomedir.so to auto-create home directories on login. This has since been replaced by a new system called oddjobd; after enabling auto-created home directories with the standard authconfig tool, oddjobd failed to start.

[root@host ~]# service oddjobd start
Starting oddjobd:                                          [ FAILED ]

I did a bit of searching and couldn’t find anything in the logs on the machine or on the net with regards to this. So here is the post. Oddjobd requires access to the system message bus (dbus) and when trying to login to the machine with an AD account I got an error message.

org.freedesktop.DBus.Error.FileNotFound: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory

This pointed out that the message bus wasn't working or was broken. So the first thing I did was check the status of the messagebus service, and it wasn't running. I started the messagebus service and then oddjobd started fine.

[root@host ~]# service messagebus restart
Stopping system message bus:                             [ FAILED ]
Starting system message bus:                               [ OK ]
[root@host ~]# service oddjobd start
Starting oddjobd:                                                      [ OK ]
[root@host ~]#

I was then able to login with my AD user and it auto-created the home directory as required.

chef beginner

user 'test' do
  comment 'test user'
  uid '89'
  gid '89'
  home '/home/random'
  shell '/bin/bash'
  action :create
  password '$1$JJsvHslasdfjVEroftprNn4JHtDi'
end

user 'test' do
  comment 'test user'
  uid '89'
  gid '89'
  home '/home/test'
  shell '/bin/bash'
  action :create
  password '$1$/IoJI4pW$rVC197lCpPyDdkD7RxiRG/'
end

user 'test' do
  comment 'test user'
  uid '89'
  gid '89'
  home '/home/test'
  shell '/bin/bash'
  action :modify
  password '$1$/IoJI4pW$rVC197lCpPyDdkD7RxiRG/'
end

package 'pqr' do
  action :remove
end

package 'tree' do
  action :install
end

for p in [ 'elinks', 'wget', 'lynx', 'vim', 'ant' ] do
  package p do
    action [:install]
  end
end
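The same loop is usually written with a %w word array in idiomatic Chef; a minimal equivalent sketch:

%w(elinks wget lynx vim ant).each do |p|
  package p do
    action :install
  end
end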

package 'git'

group 'test1' do
  gid 3000
end

user 'test1' do
  uid '3000'
  shell '/sbin/nologin'
  home '/home/test'
  gid '3000'
  password '$1$/IoJI4pW$rVC197lCpPyDdkD7RxiRG/'
end

file '/etc/motd' do
  content '
===================================
This server is a property of visa
===================================
'
  mode '0644'
end
root@ws:/var/chef/cookbooks/useradd/recipes# /opt/chefdk/embedded/bin/ruby -c default.rb
Syntax OK
root@ws:/var/chef/cookbooks/useradd/recipes#

Run in local mode

chef-client -z | --local-mode

chef-client --local-mode --why-run

chef-client --local-mode

chef-client --local-mode -o recipe
chef-client -z -l info test.rb

I'd like to understand Chef conditional execution, and do some conditional execution based on whether or not a database exists in PostgreSQL.

So here's my example:

execute 'add_db' do
  cwd '/tmp'
  user 'dbuser'
  command 'createdb -T template_postgis mydb'
  not_if 'psql --list | grep mydb'
end

execute "touch /home/#{user}/monkeypants" do
  user 'monkey'
  group 'monkey'
  cwd '/home/monkey'
  not_if 'check_command', :cwd => '/home/monkey', :user => 'monkey', :group => 'monkey'
end

log "Welcome to Chef, #{node['starter_name']}!" do
  level :info
end

file '/etc/my_first_file' do
  content 'This is my first file creation using chef server'
end

file '/etc/my_second_file' do
  content 'My Second file'
  ignore_failure true
end

Full control

In case this happens to a resource you control, you have the wonderful ignore_failure attribute to modify this behavior. Adding it to, say, a service will enable Chef to continue a run even if that resource fails.

service 'apache' do
  action :enable
  ignore_failure true
end

How to create a cookbook

$ mkdir cookbooks

$ chef generate
Usage: chef generate GENERATOR [options]

Available generators:
  app             Generate an application repo
  cookbook        Generate a single cookbook
  recipe          Generate a new recipe
  attribute       Generate an attributes file
  template        Generate a file template
  file            Generate a cookbook file
  lwrp            Generate a lightweight resource/provider
  repo            Generate a Chef code repository
  policyfile      Generate a Policyfile for use with the install/push commands
  generator       Copy ChefDK's generator cookbook so you can customize it
  build-cookbook  Generate a build cookbook for use with Delivery

With the old knife workflow:

$ knife cookbook create nginx
** Creating cookbook nginx
** Creating README for cookbook: nginx
** Creating CHANGELOG for cookbook: nginx
** Creating metadata for cookbook: nginx

Or with ChefDK:

$ chef generate cookbook cookbooks/nginx

root@ws:~# chef generate cookbook cookbooks/nginx
Generating cookbook nginx
- Ensuring correct cookbook file content
- Committing cookbook files to git
- Ensuring delivery configuration
- Ensuring correct delivery build cookbook content
- Adding delivery configuration to feature branch
- Adding build cookbook to feature branch
- Merging delivery content feature branch to master

Your cookbook is ready. Type `cd cookbooks/nginx` to enter it.

There are several commands you can run to get started locally developing and testing your cookbook.
Type `delivery local --help` to see a full list.

Why not start by writing a test? Tests for the default recipe are stored at:

test/recipes/default_test.rb

If you’d prefer to dive right in, the default recipe can be found at:

recipes/default.rb

Version control

Berksfile README.md chefignore metadata.rb recipes spec test
root@ws:~/cookbooks/nginx# cat metadata.rb
name 'nginx'
maintainer 'The Authors'
maintainer_email 'you@example.com'
license 'all_rights'
description 'Installs/Configures nginx'
long_description 'Installs/Configures nginx'
version '0.1.0'

cookbooks
|_ nginx
   |_ recipes
      |_ default.rb
      |_ test.rb

execute 'apt-get update' do
  action :run
end

package 'nginx' do
  action :install
end

service 'nginx' do
  action [ :enable, :start ]
end

service 'nginx' do
  supports status: true, restart: true, reload: true
  action :enable
end

service 'apache' do
  supports :restart => true, :reload => true
  action :enable
end

service 'nginx' do
  supports :restart => true, :start => true, :stop => true, :reload => true
  action :nothing
end

template 'nginx' do
  path '/etc/init.d/nginx'
  source 'nginx.erb'
  owner 'root'
  group 'root'
  mode '0755'
  notifies :enable, 'service[nginx]'
  notifies :start, 'service[nginx]'
end

cookbook_file '/usr/share/nginx/html/index.html' do
  source 'index.html'
  mode '0644'
end
cd ~/chef-repo/cookbooks/nginx/files/default

nano index.html

<html>
<head>
<title>Hello there</title>
</head>
<body>
<h1>This is a Mohan test</h1>
<p>Please Mohan work!</p>
</body>
</html>
execute 'service nginx stop' do
  not_if 'service nginx status'
end

chef-client -z --runlist "recipe[nginx::install],recipe[nginx::service]"

chef-client -z --runlist "recipe[nginx::install]"
install.rb

execute 'apt-get update' do
end

package 'nginx' do
  action :install
end

service.rb

service 'nginx' do
  action :start
end

execute 'my own service start description' do
  command 'service nginx start'
  not_if 'service nginx status'
end

config.rb

cookbook_file '/etc/nginx/nginx.conf' do
  source 'nginx.conf'
  mode '0644'
  action :create
  notifies :restart, 'service[nginx]'
end

cookbook_file '/usr/share/nginx/html/index.html' do
  source 'index.html'
  mode '0644'
  action :create
end

DLQ handler rules MQ

The DLQ handler rules table
The DLQ handler rules table defines how the DLQ handler is to process messages that arrive on the DLQ. There are two types of entry in a rules table:

  • The first entry in the table, which is optional, contains control data.
  • All other entries in the table are rules for the DLQ handler to follow. Each rule consists of a pattern (a set of message characteristics) that a message is matched against, and an action to be taken when a message on the DLQ matches the specified pattern. There must be at least one rule in a rules table. Each entry in the rules table comprises one or more keywords.

vi rules_dlq.txt
REASON (MQRC_PUT_INHIBITED) ACTION(FWD) +
FWDQ(QL.B) FWDQM(QMW) HEADER(NO)

$ runmqdlq DLQ QMW < rules_dlq.txt &
[1] 7834
$ 01/14/13  04:06:10  AMQ8708: Dead-letter queue handler started to process INPUTQ(DLQ).

$ amqsput QRMT.A QMC
Sample AMQSPUT0 start
target queue is QRMT.A
I send a message from QMC to QL.B

Sample AMQSPUT0 end
$ amqsget QL.B QMW
Sample AMQSGET0 start
message <I>
message <I>
no more messages
Sample AMQSGET0 end
$

ALTER QL(QL.A) PUT(ENABLED)
21 : ALTER QL(QL.A) PUT(ENABLED)
AMQ8008: WebSphere MQ queue changed.
CLEAR QL(QL.B)
22 : CLEAR QL(QL.B)
AMQ8022: WebSphere MQ queue cleared.
CLEAR QL(QL.A)
23 : CLEAR QL(QL.A)
AMQ8022: WebSphere MQ queue cleared.

tc Server password encoding and decoding

 

Tc Server 3.2.0 introduced a new command for encoding passwords

 

./tcruntime-admin.sh encode mypassword passkey

 

Please take a look at the following link for more information.

 

http://tcserver.docs.pivotal.io/docs-tcserver/topics/manual.html#obfusc

 

The old style can still be used, but a new property is needed, and the jasypt jars can be left off the classpath as they are now found automatically based on catalina.home.

 

So your command would look like

 

/usr/mware/java/bin/java -cp /usr/mware/tcServer/tomcat-8.5.9.B.RELEASE/lib/tcServer3.jar:\
/usr/mware/tcServer/tomcat-8.5.9.B.RELEASE/bin/tomcat-juli.jar:\
/usr/mware/tcServer/tomcat-8.5.9.B.RELEASE/lib/tomcat-util.jar \
-Dcatalina.home=/usr/mware/tcServer/tomcat-8.5.9.B.RELEASE/ \
com.springsource.tcserver.security.PropertyDecoder -encode passkey mypassword

Windows has a built-in function to do time synchronisation (2012 R2)

Windows has a built-in function to do time synchronisation. And by default it gets the time from time.windows.com. There is no need to use a third-party application.

You can (but you should not need to) change the settings by right-clicking the clock in the taskbar > Adjust date/time > Internet Time.

Or from the command line:

w32tm /config /syncfromflags:manual /manualpeerlist:time.windows.com /update
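To check the result, or to force an immediate sync afterwards:

w32tm /query /status
w32tm /resync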

Use tar + pigz + ssh to achieve efficient transmission of large data

When we copy large data between hosts, such as more than 100GB of raw MySQL data, the usual practice is as follows:

Package the data into a tar.gz file at the source
Copy it to the target host using scp or rsync
Unpack the file at the target host

These three steps run one after the other, each blocking the next, which is inefficient.

Now we optimize the process into a data stream in which all stages execute at the same time (non-blocking mode); efficiency can generally be increased to more than 3 times the original. The pipeline looks like this:

disk read -> packaging (tar) -> compression (pigz) -> transmission (ssh) -> decompression (gzip) -> unpacking (tar) -> disk write

For example, to copy the local test directory to the data directory on the target host, the command is as follows:

tar -c test/ | pigz | ssh -c arcfour128 <target IP> "gzip -d | tar -xC /data"

Of course, the decompression side here still uses the less efficient gzip; if the decompression tool is replaced with lz4 (which needs to be compiled and installed separately), efficiency can be improved a lot.

If you do not need to extract on the fly, the command changes to:

tar -c test/ | pigz | ssh -c arcfour128 <target IP> "cat > /data/test.tar.gz"

Note: Because streaming compression is used, the -i option must be added when later extracting the archive, e.g. tar -ixf /data/test.tar.gz.

Description: pigz is an efficient compression tool that can use the spare capacity of every CPU core in parallel for compression, whereas traditional gzip can only use a single core. For example, on a server with two 8-core CPUs, compressing the same data with pigz versus gzip generally shows a performance gap of at least 7-8x (it usually does not reach the theoretical 16x because it is limited by disk read/write speed and memory resources).
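If pigz saturates the box, you can cap the number of compression threads with its -p option; for example, limiting it to 8 processors:

tar -c test/ | pigz -p 8 | ssh -c arcfour128 <target IP> "gzip -d | tar -xC /data"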

Installation and configuration of the Linux NFS server (CentOS and RHEL 6.8)

First, an NFS service overview

NFS is the Network File System. Its main function is to let different servers share files or directories over the network. The NFS client is usually an application server (web, load balancer, etc.), which mounts the NFS server's shared directory onto a local directory on the client.
NFS relies on the RPC (Remote Procedure Call) protocol during file transfer. NFS itself does not provide the transport mechanism; RPC does, and over it NFS can share pictures, videos, attachments, and so on across the network. Whenever NFS is used, the RPC service must be started, on both the NFS server and the client.
The relationship between NFS and RPC can be pictured as: NFS is the homeowner renting out a room, RPC is the intermediary passing information, and the client is the tenant.

Second, the system environment:
[root@rmohan.com ~]# cat /etc/redhat-release   # view the system version information
CentOS release 6.7 (Final)
[root@rmohan.com ~]# uname -r                  # view the kernel information
2.6.32-573.el6.x86_64
[root@rmohan.com ~]# uname -m                  # check whether the system is 32-bit or 64-bit
x86_64

Third, the server configuration
Before starting the NFS service, first start the RPC service (the portmap service on CentOS 5, the rpcbind service from CentOS 6.6 onwards), otherwise the NFS server cannot register with the RPC service. Also, if the RPC service restarts, the ports NFS originally registered are lost, so the NFS service must then be restarted to register new random port numbers with RPC. In general, after modifying the NFS configuration file you do not need to restart the service; just run /etc/init.d/nfs reload or exportfs -rv to make changes to /etc/exports take effect.

The effect of /etc/init.d/nfs reload is: requests that have already reached the server are completed under the old configuration, while requests that have not yet reached the server get the new one. It is like a bus at the station about to leave: passengers already on board depart normally, but there is no way to catch the bus once it is moving.

To deploy the NFS service, you need to install the following packages:
1) nfs-utils: the main program for the NFS service
2) rpcbind: NFS can be regarded as an RPC program, and before starting any RPC program, the port and function mapping work needs to be done

1) View the NFS package
[root@rmohan.com ~]# rpm -qa nfs-utils rpcbind

2) CentOS 6.7 does not install the packages by default; you can use the yum install nfs-utils rpcbind -y command to install the NFS software

[root@rmohan.com ~]# yum install nfs-utils rpcbind  -y
[root@rmohan.com ~]# rpm -qa nfs-utils rpcbind
nfs-utils-1.2.3-70.el6_8.2.x86_64
rpcbind-0.2.0-12.el6.x86_64

3) Start the services

Step 1: Start the RPC service
[root@rmohan.com ~]# /etc/init.d/rpcbind start   # start the rpc service
[root@rmohan.com ~]# /etc/init.d/rpcbind status  # view the rpc service status
rpcbind (pid  4269)  is running …

Step 2: Start the NFS service
[root@rmohan.com ~]# /etc/init.d/nfs start   # start the nfs service
[root@rmohan.com ~]# /etc/init.d/nfs status  # view the nfs service status
rpc.svcgssd is stopped
rpc.mountd (pid 3282) is running ...
nfsd (pid 3298 3297 3296 3295 3294 3293 3292 3291) is running ...
rpc.rquotad (pid 3277) is running ...

You must start the rpc service first and then the NFS service. If you start the NFS service while rpcbind is not running, startup will fail with output like the following:
[root@rmohan.com ~]# /etc/init.d/nfs start
Starting NFS services:                                     [  OK  ]
Shutting down NFS quotas: Cannot register service: RPC: Unable to receive; errno = Connection refused
rpc.rquotad: unable to register (RQUOTAPROG, RQUOTAVERS, udp).
                                                           [FAILED]
Starting NFS mountd:                                       [FAILED]
Starting NFS daemon:

[root@rmohan.com ~]# rpcinfo -p 192.168.1.31   # view the ports NFS registered with rpc; the main port number is 111
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    875  rquotad
    100011    2   udp    875  rquotad
    100011    1   tcp    875  rquotad
    100011    2   tcp    875  rquotad

Step 3: Check whether the services start at boot

[root@rmohan.com ~]# chkconfig nfs on
[root@rmohan.com ~]# chkconfig rpcbind on
[root@rmohan.com ~]# chkconfig --list nfs
nfs             0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rmohan.com ~]# chkconfig --list rpcbind
rpcbind         0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rmohan.com ~]# tail -2 /etc/rc.local
/etc/init.d/rpcbind start
/etc/init.d/nfs start

In practice, most operations teams standardize on putting service start commands in /etc/rc.local rather than managing them with chkconfig; every service that must start at boot goes into /etc/rc.local. The advantage is that anyone who takes over the server, for example during a busy business migration, can look at /etc/rc.local to see at a glance which services the server runs, which makes operations and maintenance much easier.

4) NFS server configuration file
The default NFS configuration file path is /etc/exports, and the file is empty by default.

The format of the /etc/exports configuration file is:
<NFS shared directory> <NFS client address>(parameter1, parameter2)

[root@rmohan.com ~]# cat /etc/exports
#share /data by rmohan.com for bingbing at 20160425
/data 172.16.1.0/24(rw,sync)

Where /data is the directory shared by the server and 172.16.1.0/24 is the client address range allowed to access the shared directory.
In (rw,sync), rw grants read and write access, and sync means data is written synchronously to the NFS server's disk. async can also be used; with async, large writes go to the cache first and are written to disk later.

  • NFS shared directory: the actual directory shared by the NFS server, given as an absolute path, such as /data. Pay attention to the local permissions on the shared directory: if clients must write to it, make sure the local directory is readable and writable by the NFS client users.
  • NFS client address: the address allowed to access the NFS server's shared directory; it can be a single IP address, a hostname or domain name, or an entire network segment.

Create the /data directory, with both owner and group set to nfsnobody (nfsnobody is the default user created when installing the nfs service):
[root@rmohan.com ~]# mkdir /data -p
[root@rmohan.com ~]# chown -R nfsnobody:nfsnobody /data
[root@rmohan.com ~]# ls -ld /data
drwxr-xr-x 6 nfsnobody nfsnobody 4096 Dec  8 20:17 /data
[root@rmohan.com ~]# /etc/init.d/nfs reload
[root@rmohan.com ~]# showmount -e 192.168.1.31   # local test; the server side works
Export list for 192.168.1.31:
/data 172.16.1.0/24

Fourth, the client configuration
1. The client, like the server, must have the nfs and rpcbind packages installed. (See the server-side configuration.)
2. The client needs to start the rpc service and add it to start at boot; it does not need to start the nfs service. (See the server-side configuration.)
3. Test:
Step 1: ping the server-side IP address
[root@rmohan.com ~]# ping 192.168.1.31
PING 192.168.1.31 (192.168.1.31) 56(84) bytes of data.
64 bytes from 192.168.1.31: icmp_seq=1 ttl=64 time=0.383 ms
64 bytes from 192.168.1.31: icmp_seq=2 ttl=64 time=0.434 ms
64 bytes from 192.168.1.31: icmp_seq=3 ttl=64 time=0.420 ms
64 bytes from 192.168.1.31: icmp_seq=4 ttl=64 time=0.437 ms
64 bytes from 192.168.1.31: icmp_seq=5 ttl=64 time=0.439 ms
^C
--- 192.168.1.31 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4997ms
rtt min/avg/max/mdev = 0.383/0.422/0.439/0.030 ms

Step 2: telnet to server port 111
[root@rmohan.com ~]# telnet 192.168.1.31 111
Trying 192.168.1.31 ...
Connected to 192.168.1.31.
Escape character is '^]'.

Step 3: showmount against the server
[root@rmohan.com ~]# showmount -e 192.168.1.31
Export list for 192.168.1.31:
/data 172.16.1.0/24

Step 4: mount the file share
[root@rmohan.com ~]# mount -t nfs 192.168.1.31:/data/ /mnt

Step 5: Check if the mount is successful
[root@rmohan.com ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda3            8.8G  1.5G  6.9G  18% /
tmpfs                491M     0  491M   0% /dev/shm
/dev/sda1            190M   36M  145M  20% /boot
192.168.1.31:/data/  8.8G  1.5G  7.0G  18% /mnt
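To make the mount persistent across reboots, an /etc/fstab entry along these lines will do (options shown are typical defaults):

192.168.1.31:/data  /mnt  nfs  defaults  0 0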

WebSphere MQ: Starting / stopping tracing - strmqtrc, endmqtrc

Use the WebSphere MQ strmqtrc and endmqtrc commands to start and stop tracing and to check the trace file output.

Start MQ trace

Start with the strmqtrc command (specify the target queue manager with option -m).

$ strmqtrc -m QML

Perform put and get operations on a queue of the queue manager using the sample programs (set your PATH so the samples can be found if they are not already).

$ amqsput QL.A QML
Sample AMQSPUT0 start
target queue is QL.A
I confirm strmqtrc command operation.

Sample AMQSPUT0 end
$ amqsget QL.A QML
Sample AMQSGET0 start
message <I confirm strmqtrc command operation.>
no more messages
Sample AMQSGET0 end

End of MQ trace

$ endmqtrc -m QML

Confirm MQ trace file

Trace related files are saved in binary format in /var/mqm/trace on UNIX systems. Use the dspmqtrc command to convert the files to text format.

$ cd /var/mqm/trace
$ ls
AMQ21469.0.TRC  AMQ3015.0.TRC  AMQ3036.0.TRC  AMQ3060.0.TRC  AMQ3072.0.TRC
AMQ21477.0.TRC  AMQ3016.0.TRC  AMQ3038.0.TRC  AMQ3070.0.TRC  AMQ3094.0.TRC
AMQ3010.0.TRC   AMQ3035.0.TRC  AMQ3039.0.TRC  AMQ3071.0.TRC  AMQ3100.0.TRC
$ dspmqtrc *.TRC
$ ls
AMQ21469.0.FMT  AMQ3015.0.FMT  AMQ3036.0.FMT  AMQ3060.0.FMT  AMQ3072.0.FMT
AMQ21469.0.TRC  AMQ3015.0.TRC  AMQ3036.0.TRC  AMQ3060.0.TRC  AMQ3072.0.TRC
AMQ21477.0.FMT  AMQ3016.0.FMT  AMQ3038.0.FMT  AMQ3070.0.FMT  AMQ3094.0.FMT
AMQ21477.0.TRC  AMQ3016.0.TRC  AMQ3038.0.TRC  AMQ3070.0.TRC  AMQ3094.0.TRC
AMQ3010.0.FMT   AMQ3035.0.FMT  AMQ3039.0.FMT  AMQ3071.0.FMT  AMQ3100.0.FMT
AMQ3010.0.TRC   AMQ3035.0.TRC  AMQ3039.0.TRC  AMQ3071.0.TRC  AMQ3100.0.TRC
$ grep amqsput *.FMT
AMQ21469.0.FMT:01:46:04.841170  21469.1  : PID: 21469 Process: amqsput (64-bit)
AMQ21469.0.FMT:01:46:29.220659  21469.1  : 0x0110: 06000000 616d7173 70757420 20202020 |....amqsput     |
AMQ21477.0.FMT:01:46:44.099147  21477.1  : 0x0110: 06000000 616d7173 70757420 20202020 |....amqsput     |
AMQ21477.0.FMT:01:46:44.099220  21477.1  : 0x0110: 06000000 616d7173 70757420 20202020 |....amqsput     |
AMQ21477.0.FMT:01:46:59.102071  21477.1  : 0x0110: 06000000 616d7173 70757420 20202020 |....amqsput     |
$

Dead-letter queues
A dead-letter (undelivered-message) queue is a queue that stores messages that cannot be routed to their correct destinations.
This occurs when, for example, the destination queue is full. The supplied dead-letter queue is called SYSTEM.DEAD.LETTER.QUEUE.
For distributed queuing, define a dead-letter queue on each queue manager involved.

A dead-letter queue is defined on each queue manager to hold messages that cannot be stored at their specified destination. For example, a message goes to the dead-letter queue if the destination queue is full.
If a message cannot be put on the dead-letter queue either, the channel stops and the message remains on the transmission queue.

Dead-letter queue definition

We have already defined the Dead-letter queue called DLQ at the previous article (distributed queuing).

DEF QL(DLQ) REPLACE
ALTER QMGR DEADQ(DLQ)

Check dead-letter queue operation

By setting PUT(DISABLED) on QL.A on queue manager QMW, we create a state in which messages cannot be put to QL.A; sending a message from QMC then causes the message to be stored in the dead-letter queue on QMW.

As a preparatory step, execute the following commands on QMW in the mqsc interface.

CLEAR QL(QL.A)
    15 : CLEAR QL(QL.A)
AMQ8022: WebSphere MQ queue cleared.
CLEAR QL(QL.B)
    16 : CLEAR QL(QL.B)
AMQ8022: WebSphere MQ queue cleared.
CLEAR QL(DLQ)
    17 : CLEAR QL(DLQ)
AMQ8022: WebSphere MQ queue cleared.
ALTER QL(QL.A) PUT(DISABLED)
    18 : ALTER QL(QL.A) PUT(DISABLED)
AMQ8008: WebSphere MQ queue changed.

Subsequently, put a message from QMC with amqsput to the remote queue definition (which refers to QL.A on QMW).

$ amqsput QRMT.A QMC
Sample AMQSPUT0 start
target queue is QRMT.A
I send a message from QMC.

Sample AMQSPUT0 end
$ amqsbcg QL.A QMW

AMQSBCG0 - starts here
**********************

 MQOPEN - 'QL.A'

 No more messages
 MQCLOSE
 MQDISC
$

However, since PUT is disabled on QL.A on QMW, the message does not reach QL.A and is stored in the dead-letter queue instead.

DIS QL(DLQ) CURDEPTH
    19 : DIS QL(DLQ) CURDEPTH
AMQ8409: Display Queue details.
   QUEUE(DLQ)                              TYPE(QLOCAL)
   CURDEPTH(1)

Use amqsbcg to check the Dead-letter header.

$ amqsbcg DLQ QMW

AMQSBCG0 - starts here
**********************

 MQOPEN - 'DLQ'

 MQGET of message number 1
**** Message descriptor ****

StrucId: 'MD' Version: 2
Report: 0 MsgType: 8
Expiry: -1 Feedback: 0
Encoding: 546 CodedCharSetId: 819
Format: 'MQDEAD'
Priority: 0 Persistence: 0
MsgId: X'414D5120514D432020202020202020209093F25002510020'
CorrelId: X'000000000000000000000000000000000000000000000000'
BackoutCount: 0
ReplyToQ: ''
ReplyToQMgr: 'QMC'
** Identity Context
UserIdentifier: 'mqm'
AccountingToken:
X'0334393600000000000000000000000000000000000000000000000000000006'
ApplIdentityData: ''
** Origin Context
PutApplType: '6'
PutApplName: 'amqsput'
PutDate: '20130114' PutTime: '11241665'
ApplOriginData: ''

GroupId: X'000000000000000000000000000000000000000000000000'
MsgSeqNumber: '1'
Offset: '0'
MsgFlags: '0'
OriginalLength: '-1'

**** Message ****

Length - 198 bytes

00000000:  444C4820 01000000 03080000 514C2E41   'DLH ........QL.A'
00000010:  20202020 20202020 20202020 20202020   '                '
00000020:  20202020 20202020 20202020 20202020   '                '
00000030:  20202020 20202020 20202020 514D5720   '            QMW '
00000040:  20202020 20202020 20202020 20202020   '                '
00000050:  20202020 20202020 20202020 20202020   '                '
00000060:  20202020 20202020 20202020 22020000   '            "...'
00000070:  33030000 4D515354 52202020 06000000   '3...MQSTR   ....'
00000080:  616D7172 6D707061 20202020 20202020   'amqrmppa        '
00000090:  20202020 20202020 20202020 32303133   '            2013'
000000A0:  30313134 31313234 32363737 49207365   '011411242677I se'
000000B0:  6E642061 206D6573 73616765 2066726F   'nd a message fro'
000000C0:  6D20514D 432E                         'm QMC.'

 No more messages
 MQCLOSE
 MQDISC
$
