Hadoop & Linux Administrator
Tuesday, 15 November 2016
Managing Hadoop Cluster
If you are working on Hadoop, you will realize that there are several shell commands available to manage your Hadoop cluster. Below is a summary of the most useful Hadoop administration commands.
1. NameNode Commands
Command | Description
hadoop namenode -format | Format the HDFS filesystem from the NameNode
hadoop namenode -upgrade | Upgrade the NameNode
start-dfs.sh | Start HDFS daemons
stop-dfs.sh | Stop HDFS daemons
start-mapred.sh | Start MapReduce daemons
stop-mapred.sh | Stop MapReduce daemons
hadoop namenode -recover -force | Recover NameNode metadata after a cluster failure (may lose data)
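For example, bringing up a brand-new cluster combines the commands above (a minimal sketch, assuming the Hadoop 1.x-style scripts are on the PATH; formatting erases any existing HDFS metadata, so only do this on a new cluster):
hadoop namenode -format        # initialize the HDFS filesystem metadata
start-dfs.sh                   # start the NameNode, SecondaryNameNode and DataNodes
start-mapred.sh                # start the JobTracker and TaskTrackers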
2. fsck Commands
Command | Description
hadoop fsck / | Filesystem check on HDFS
hadoop fsck / -files | Display files during the check
hadoop fsck / -files -blocks | Display files and blocks during the check
hadoop fsck / -files -blocks -locations | Display files, blocks and their locations during the check
hadoop fsck / -files -blocks -locations -racks | Display network topology for DataNode locations
hadoop fsck -delete | Delete corrupted files
hadoop fsck -move | Move corrupted files to the /lost+found directory
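A routine health check followed by cleanup of corrupt files might look like this (a minimal sketch; -move and -delete are destructive, so review the fsck report first; the path / is given explicitly):
hadoop fsck / -files -blocks -locations   # full report of files, blocks and replica locations
hadoop fsck / -move                       # move any corrupted files to /lost+found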
3. Job Commands
Command | Description
hadoop job -submit <job-file> | Submit the job
hadoop job -status <job-id> | Print the job status and completion percentage
hadoop job -list all | List all jobs
hadoop job -list-active-trackers | List all available TaskTrackers
hadoop job -set-priority <job-id> <priority> | Set the priority for a job. Valid priorities: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW
hadoop job -kill-task <task-id> | Kill a task
hadoop job -history | Display job history, including job details and failed and killed jobs
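For example, to find a running job and raise its priority (a minimal sketch; the job ID shown is a placeholder):
hadoop job -list all                                    # note the job ID, e.g. job_201601011200_0001
hadoop job -set-priority job_201601011200_0001 HIGH     # placeholder job ID
hadoop job -status job_201601011200_0001                # confirm status and completion percentage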
4. dfsadmin Commands
Command | Description
hadoop dfsadmin -report | Report filesystem information and statistics
hadoop dfsadmin -metasave file.txt | Save the NameNode's primary data structures to file.txt
hadoop dfsadmin -setQuota 10 /quotatest | Set the directory quota to only 10 files
hadoop dfsadmin -clrQuota /quotatest | Clear the directory quota
hadoop dfsadmin -refreshNodes | Re-read the hosts and exclude files to update which DataNodes are allowed to connect to the NameNode. Mostly used to commission or decommission nodes
hadoop fs -count -q /mydir | Check the quota space on the directory /mydir
hadoop dfsadmin -setSpaceQuota 100M /mydir | Set a 100M space quota on the HDFS directory /mydir
hadoop dfsadmin -clrSpaceQuota /mydir | Clear the space quota on an HDFS directory
hadoop dfsadmin -saveNamespace | Back up the metadata (fsimage & edits). Put the cluster in safe mode before running this command.
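For example, to cap a project directory with both a name quota and a space quota (a minimal sketch; /projects/etl is a placeholder path and the size-suffix form follows the table above):
hadoop dfsadmin -setQuota 10000 /projects/etl       # name quota: at most 10,000 files and directories
hadoop dfsadmin -setSpaceQuota 500g /projects/etl   # space quota, counted against raw (replicated) space
hadoop fs -count -q /projects/etl                   # verify the remaining quota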
5. Safe Mode (Maintenance Mode) Commands
The following dfsadmin commands help the cluster enter or leave safe mode, which is also called maintenance mode. In this mode the NameNode does not accept any changes to the namespace and does not replicate or delete blocks.
Command | Description
hadoop dfsadmin -safemode enter | Enter safe mode
hadoop dfsadmin -safemode leave | Leave safe mode
hadoop dfsadmin -safemode get | Get the safe mode status
hadoop dfsadmin -safemode wait | Wait until HDFS finishes data block replication
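Combining safe mode with the saveNamespace command above gives a simple metadata backup routine (a minimal sketch using only commands from the tables above):
hadoop dfsadmin -safemode enter     # stop namespace changes
hadoop dfsadmin -saveNamespace      # write a fresh fsimage and reset the edits log
hadoop dfsadmin -safemode leave     # resume normal operation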
6. Configuration Files
File | Description
hadoop-env.sh | Sets environment variables for Hadoop
core-site.xml | Parameters for the entire Hadoop cluster
hdfs-site.xml | Parameters for HDFS and its clients
mapred-site.xml | Parameters for MapReduce and its clients
masters | Host machines for the secondary NameNode
slaves | List of slave hosts
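A quick way to confirm which settings are in effect is to grep the configuration directory (a minimal sketch, assuming the configuration lives under $HADOOP_HOME/conf; your distribution may use /etc/hadoop/conf instead):
grep -A1 "fs.default.name" $HADOOP_HOME/conf/core-site.xml    # NameNode URI
grep -A1 "dfs.replication" $HADOOP_HOME/conf/hdfs-site.xml    # default replication factor
cat $HADOOP_HOME/conf/slaves                                  # hosts that run DataNode/TaskTracker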
7. mradmin Commands
Command | Description
hadoop mradmin -safemode get | Check JobTracker status
hadoop mradmin -refreshQueues | Reload the MapReduce queue configuration
hadoop mradmin -refreshNodes | Reload the list of active TaskTrackers
hadoop mradmin -refreshServiceAcl | Force the JobTracker to reload the service ACLs
hadoop mradmin -refreshUserToGroupsMappings | Force the JobTracker to reload the user-to-group mappings
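Together with dfsadmin -refreshNodes above, this is how nodes are usually decommissioned (a minimal sketch, assuming the exclude files referenced by the dfs.hosts.exclude and mapred.hosts.exclude properties already exist; the hostname and exclude-file path are placeholders):
echo "node05.example.com" >> /etc/hadoop/conf/excludes    # add the node to the exclude file
hadoop dfsadmin -refreshNodes                             # NameNode starts decommissioning the DataNode
hadoop mradmin -refreshNodes                              # JobTracker stops scheduling tasks on it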
8. Balancer Commands
Command | Description
start-balancer.sh | Balance the cluster
hadoop dfsadmin -setBalancerBandwidth <bandwidthinbytes> | Adjust the bandwidth used by the balancer
hadoop balancer -threshold 20 | Balance until each DataNode's utilization is within 20% of the cluster average
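For example, to run the balancer during a quiet window without saturating the network (a minimal sketch; 10485760 bytes/s is roughly 10 MB/s):
hadoop dfsadmin -setBalancerBandwidth 10485760    # cap per-DataNode balancer bandwidth
start-balancer.sh                                 # run with the default threshold
hadoop balancer -threshold 5                      # or run with a tighter 5% threshold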
9. Filesystem Commands
Command | Description
hadoop fs -mkdir mydir | Create a directory (mydir) in HDFS
hadoop fs -ls | List files and directories in HDFS
hadoop fs -cat myfile | View a file's contents
hadoop fs -du | Check disk space usage in HDFS
hadoop fs -expunge | Empty the trash on HDFS
hadoop fs -chgrp hadoop file1 | Change the group membership of a file
hadoop fs -chown huser file1 | Change file ownership
hadoop fs -rm file1 | Delete a file in HDFS
hadoop fs -touchz file2 | Create an empty file
hadoop fs -stat file1 | Check the status of a file
hadoop fs -test -e file1 | Check whether the file exists on HDFS
hadoop fs -test -z file1 | Check whether the file is empty (zero length) on HDFS
hadoop fs -test -d file1 | Check whether file1 is a directory on HDFS
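The -test options report their result through the shell exit code, which makes them handy in scripts (a minimal sketch; /input is a placeholder path):
hadoop fs -mkdir /input                         # create the target directory
hadoop fs -touchz /input/_READY                 # create an empty marker file
if hadoop fs -test -e /input/_READY; then       # exit code 0 means the file exists
  echo "marker present"
fi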
Thursday, 14 May 2015
VENOM Vulnerability
How to Patch and Protect Linux Server against the VENOM Vulnerability # CVE-2015-3456
A very serious security problem has been found in the virtual floppy drive QEMU's code used by many computer virtualization platforms including Xen, KVM, VirtualBox, and the native QEMU client. It is called VENOM vulnerability. How can I fix VENOM vulnerability and protect my Linux server against the attack? How do I verify that my server has been fixed against the VENOM vulnerability?
This is tagged as a high-severity security bug and it was announced on 13 May 2015. The VENOM vulnerability has existed since 2004, when the virtual Floppy Disk Controller was first added to the QEMU codebase. Since the VENOM vulnerability exists in the hypervisor's codebase, it is agnostic of the host operating system (Linux, Windows, Mac OS, etc.).
What is the VENOM security bug (CVE-2015-3456)?
An out-of-bounds memory access flaw was found in the way QEMU's virtual Floppy Disk Controller (FDC) handled FIFO buffer access while processing certain FDC commands. A privileged guest user could use this flaw to crash the guest or, potentially, execute arbitrary code on the host with the privileges of the hosting QEMU process.
A list of affected Linux distros
§ RHEL (Red Hat Enterprise Linux) version 5.x, 6.x and 7.x
§ CentOS Linux version 5.x, 6.x and 7.x
§ OpenStack 5 for RHEL 6
§ OpenStack 4 for RHEL 6
§ OpenStack 5 for RHEL 7
§ OpenStack 6 for RHEL 7
§ Red Hat Enterprise Virtualization 3
§ Debian Linux code named stretch, sid, jessie, squeeze, and wheezy [and all other distros based on Debian]
§ SUSE Linux Enterprise Server 10 Service Pack 3 (SLES 10 SP3)
§ SUSE Linux Enterprise Server 10 Service Pack 4 (SLES 10 SP4)
§ SUSE Linux Enterprise Server 11 Service Pack 1 (SLES 11 SP1)
§ SUSE Linux Enterprise Server 11 Service Pack 2 (SLES 11 SP2)
§ SUSE Linux Enterprise Server 11 Service Pack 3 (SLES 11 SP3)
§ SUSE Linux Enterprise Server 12
§ SUSE Linux Enterprise Expanded Support 5, 6 and 7
§ Ubuntu 12.04
§ Ubuntu 14.04
§ Ubuntu 14.10
§ Ubuntu 15.04
Fix the VENOM vulnerability on CentOS/RHEL/Fedora/Scientific Linux
sudo yum clean all
sudo yum update
Reboot all your virtual machines on those hypervisors.
Fix the VENOM vulnerability on Debian Linux
sudo apt-get clean
sudo apt-get update
sudo apt-get upgrade
Reboot all your virtual machines on those hypervisors.
Fix the VENOM vulnerability on Ubuntu Linux
sudo apt-get clean
sudo apt-get update
sudo apt-get upgrade
Reboot all your virtual machines on those hypervisors.
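To answer the verification question above: one way to confirm that a server has been patched is to inspect the installed QEMU/KVM package (a minimal sketch; the package name qemu-kvm is an assumption and varies by distribution):
rpm -q --changelog qemu-kvm | grep -i CVE-2015-3456    # RHEL/CentOS: the fix should appear in the changelog
dpkg -l | grep qemu                                    # Debian/Ubuntu: compare versions against the advisory for your release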
Wednesday, 4 February 2015
Installation and configuration of Docker
In a previous blog post I covered the Docker introduction.
I have installed Docker on Ubuntu, so below are the installation steps.
- Docker is supported on the following versions of Ubuntu:
- Ubuntu Trusty 14.04 (LTS) (64-bit)
- Ubuntu Precise 12.04 (LTS) (64-bit)
- Ubuntu Raring 13.04 and Saucy 13.10 (64 bit)
Please read Docker and UFW, if you plan to use UFW (Uncomplicated Firewall)
Ubuntu Trusty 14.04 (LTS) (64-bit)
Ubuntu Trusty comes with a 3.13.0 Linux kernel, and a docker.io package which installs Docker 1.0.1 and all its prerequisites from Ubuntu's repository.
Note: Ubuntu contains a much older KDE3/GNOME2 package called docker, so the Ubuntu-maintained package and executable are named docker.io.
Ubuntu-maintained Package Installation
To install the latest Ubuntu package (this is not the most recent Docker release):
$ sudo apt-get update
$ sudo apt-get install docker.io
Then, to enable tab-completion of Docker commands in BASH, either restart BASH or:
$ source /etc/bash_completion.d/docker.io
Note: Since the Ubuntu package is quite dated at this point, you may want to use the following section to install the most recent release of Docker. If you install the Docker version, you do not need to install docker.io from Ubuntu.
Docker-maintained Package Installation
If you'd like to try the latest version of Docker:
First, check that your APT system can deal with https URLs: the file /usr/lib/apt/methods/https should exist. If it doesn't, you need to install the package apt-transport-https.
[ -e /usr/lib/apt/methods/https ] || {
apt-get update
apt-get install apt-transport-https
}
Then, add the Docker repository key to your local keychain.
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
Add the Docker repository to your apt sources list, update and install the lxc-docker package.
You may receive a warning that the package isn't trusted. Answer yes to continue installation.
$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main\
> /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker
Note:
There is also a simple curl script available to help with this process.
$ curl -sSL https://get.docker.com/ubuntu/ | sudo sh
To verify that everything has worked as expected, and in the same way to access your container, run:
$ sudo docker run -i -t ubuntu /bin/bash
This should download the ubuntu image and then start bash in a container.
On RHEL 6 you need to install the EPEL repository and then run the command below:
yum install docker
In RHEL 7 it is an in-built package, so you do not need to install any additional package.
NOTE: If you want to launch your own Docker container from a custom image, you need to write a Dockerfile and build the image first.
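As a quick illustration, a minimal Dockerfile plus build-and-run cycle could look like this (a sketch only; the directory and image name mywebapp, the base image tag and the installed package are placeholders):
$ mkdir mywebapp && cd mywebapp
$ cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl
CMD ["bash"]
EOF
$ sudo docker build -t mywebapp .
$ sudo docker run -i -t mywebapp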
Tuesday, 3 February 2015
Docker Introduction
About Docker:
Develop, Ship and Run Any Application, Anywhere
Docker is a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.
Docker consists of:
- The Docker Engine - lightweight and powerful open source container virtualization technology combined with a work flow for building and containerizing your applications.
- Docker Hub - SaaS service for sharing and managing your application stacks.
Deployment:
- Docker containers run (almost) everywhere. You can deploy containers on desktops, physical servers, virtual machines, into data centers, and up to public and private clouds.
- Since Docker runs on so many platforms, it's easy to move your applications around. You can easily move an application from a testing environment into the cloud and back whenever you need.
- Docker containers don't need a hypervisor, so you can pack more of them onto your hosts. This means you get more value out of every server and can potentially reduce what you spend on equipment and licenses.
- As Docker speeds up your work flow, it gets easier to make lots of small changes instead of huge, big bang updates. Smaller changes mean reduced risk and more uptime.
Friday, 30 January 2015
Patching Linux Server
In this blog we are going to patch a Linux machine using up2date and yum. We will take a backup of important files, cover the necessary steps after patching, and describe a back-out plan in case the system crashes.
· Take the back-up of the following files/commands.
· Common for all revisions:
uname -a
ifconfig -a
fdisk -l
uptime
cat /etc/hosts
cat /etc/fstab
df -h
cat /etc/grub.conf
cat /etc/sysctl.conf
rpm -qa > /packagelist_beforePatch_May2011.txt
cat /packagelist_beforePatch_May2011.txt
cat /etc/selinux/config
cat /etc/resolv.conf
chkconfig --list
cat /etc/sysconfig/rhn/up2date
up2date -l
up2date --configure
more /etc/sysconfig/rhn/up2date
more /etc/yum.conf
yum check-update
· The commands below capture the details of the remaining system files as part of taking a backup of the system configuration:
rpm -qa > /packagelist_afterPatch_May2011.txt
cat /packagelist_afterPatch_May2011.txt
· First, you must update the up2date utility itself, due to known problems with systems not being able to boot up after patching.
#up2date up2date
· This will download and install the latest up2date utility.
· After verifying that up2date is at the latest revision and that the development and production environments are the same, you must first download the patches on all the servers that are being patched and install the patches on the development servers for testing.
#up2date --dry-run   or   #up2date -l   or   #up2date --nodownload
· This will show you the updated patches/packages that are available for download.
Fetching Obsoletes list for channel: rhel-i386-es-4...
Fetching rpm headers...
########################################
Name                 Version   Rel             Arch
----------------------------------------------------------------------------------------
4Suite               1.0       3.el4_8.1       i386
PyXML                0.8.3     6.el4_8.2       i386
acpid                1.0.3     2.el4_7.1       i386
apr                  0.9.4     24.9.el4_8.2    i386
apr-util             0.9.4     22.el4_8.2      i386
audit                1.0.16    4.el4_8.1       i386
audit-libs           1.0.16    4.el4_8.1       i386
bash                 3.0       21.el4_8.2      i386
bind-libs            9.2.4     30.el4_8.5      i386
bind-utils           9.2.4     30.el4_8.5      i386
compat-openldap      2.1.30    12.el4_8.2      i386
cpio                 2.5       16.el4_8.1      i386
cpp                  3.4.6     11.el4_8.1      i386
wget                 1.10.2    1.el4_8.1       i386
xmlsec1              1.2.6     3.1             i386
xmlsec1-openssl      1.2.6     3.1             i386
Testing package set / solving RPM inter-dependencies...
########################################
Name                 Version   Rel             Arch
----------------------------------------------------------------------------------------
4Suite               1.0       3.el4_8.1       i386
PyXML                0.8.3     6.el4_8.2       i386
acpid                1.0.3     2.el4_7.1       i386
bind-utils           9.2.4     30.el4_8.5      i386
compat-openldap      2.1.30    12.el4_8.2      i386
gd                   2.0.28    5.4E.el4_8.1    i386
glibc                2.3.4     2.43.el4_8.3    i686
The following packages were marked to be skipped by your configuration:
Name                 Version   Rel             Reason
------------------------------------------------------------------------------------------------
kernel               2.6.9     89.0.26.EL      Pkg name/pattern
kernel-smp           2.6.9     89.0.26.EL      Pkg name/pattern
kernel-utils         2.4       20.el4          Pkg name/pattern
#more /etc/sysconfig/rhn/up2date
# Automatically generated Red Hat Update Agent config file, do not edit.
# Format: 1.0
useNoSSLForPackages[comment]=Use the noSSLServerURL for package, package list, and header fetching
useNoSSLForPackages=0
storageDir[comment]=Where to store packages and other data when they are retrieved
storageDir=/var/spool/up2date
noSSLServerURL[comment]=Remote server URL without SSL
noSSLServerURL=http://xmlrpc.rhn.redhat.com/XMLRPC
networkRetries[comment]=Number of attempts to make at network connections before giving up
networkRetries=5
pkgsToInstallNotUpdate[comment]=A list of provides names or package names of packages to install not update
pkgsToInstallNotUpdate=kernel;kernel-modules;kernel-devel;
Select the required options (keepAfterInstall, pkgSkipList, etc.) to change the configuration of the up2date agent.
0. debug                No
1. rhnuuid              38e8d384-589b-11d7-9124-00096be0a8c5
2. isatty               Yes
3. showAvailablePacka   No
4. depslist             [ ]
5. networkSetup         Yes
6. retrieveOnly         No
7. enableRollbacks      No
8. pkgSkipList          ['kernel*']
9. storageDir           /var/spool/up2date
· This will download and save only the updates/packages in /var/spool/up2date, or whatever directory is defined in line 9 of the up2date config file.
· Run the following only if packages are downloaded into non-default directories.
Example:
#up2date -iuk /var/spool
· This will download patches/rpms into a custom directory. The default download directory is /var/spool/up2date. If the updates/packages have already been downloaded, use this option to install the downloaded updates/packages.
· After the patches are installed:
#rpm -qa > /packagelist_afterPatch_10182010.txt
· A new listing should be taken after patching for future reference.
· Once you run this command, you will get output like the one below:
vim-enhanced-6.3.046-0.40E.7
vim-minimal-6.3.046-0.40E.7
vixie-cron-4.1-50.el4
vsftpd-2.0.1-6.el4
vte-0.11.11-12.el4
wget-1.10.2-0.40E
which-2.16-4
wireless-tools-28-0.pre16.3.3.EL4
words-3.0-3.2
wvdial-1.54.0-3
Xaw3d-1.5-24
# shutdown [OPTION]... TIME [MESSAGE]     (the shutdown command format)
# Broadcast message from root@RH5 (/dev/pts/1) at 14:10 ...
The system is going down for reboot NOW!
· After the predefined testing period, the updates/patches will need to be moved to the production environment.
PRODUCTION SERVER
· Take the back-up of the following files/commands.
#uname -a
#ifconfig -a
#cat /etc/hosts
#cat /etc/fstab
#df -h
#cat /etc/sysconfig/rhn/up2date
#cat /etc/grub.conf
#cat /etc/sysctl.conf
#rpm -qa > /packagelist_10152010.txt
#cat /packagelist_10152010.txt
#cat /etc/selinux/config
· Select the required options (keepAfterInstall, pkgSkipList, etc.) to change the configuration of the up2date agent.
0. debug                No
1. rhnuuid              38e8d384-589b-11d7-9124-00096be0a8c5
2. isatty               Yes
3. showAvailablePacka   No
4. depslist             [ ]
5. networkSetup         Yes
6. retrieveOnly         No
7. enableRollbacks      No
8. pkgSkipList          ['kernel*']
9. storageDir           /var/spool/up2date
(Run only if packages are downloaded into non-default directories.)
Example:
#up2date -iuk /var/spool
· This will check for downloaded patches first before downloading from RHN. The default download directory is /var/spool/up2date. If the updates/packages have already been downloaded, use this option to install the downloaded updates/packages first before checking RHN for updates/packages.
#rpm -qa > newpatchlist.txt
· A new listing should be taken after patching for future reference.
vim-enhanced-6.3.046-0.40E.7
vim-minimal-6.3.046-0.40E.7
vixie-cron-4.1-50.el4
vsftpd-2.0.1-6.el4
vte-0.11.11-12.el4
wget-1.10.2-0.40E
which-2.16-4
wireless-tools-28-0.pre16.3.3.EL4
words-3.0-3.2
wvdial-1.54.0-3
Xaw3d-1.5-24
# shutdown [OPTION]... TIME [MESSAGE]     (the shutdown command format)
# Broadcast message from root@RH5 (/dev/pts/1) at 14:10 ...
The system is going down for reboot NOW!
· (RH5): How to download and install patches/updates for a development/production environment:
Take the back-up of the following files/commands.
#uname -a
#ifconfig -a
#fdisk -l
#cat /etc/hosts
#cat /etc/fstab
#df -h
#cat /etc/yum.conf
#cat /etc/grub.conf
#cat /etc/sysctl.conf
#rpm -qa > /packagelist_10152010.txt
#cat /packagelist_10152010.txt
#cat /etc/selinux/config
· First, you must install the yum downloadonly utility to give yum the ability to download patches/rpms without installing them.
· This will download and install the downloadonly utility.
· After verifying that the yum download utility is installed and that the development and production environments are the same, you must first download the patches on all the servers that are being patched and install the patches on the development server for testing.
· In addition, you will need to clear the yum cache.
· This will clean the yum cache; when you run the command again it will search all repositories for updated packages.
· Once you run the check-update command, it will give you a list of the updated patches/packages available for download:
kpartx.i386              0.4.7-34.el5_5.1       rhel-i386-server-5
krb5-libs.i386           1.6.1-36.el5_5.4       rhel-i386-server-5
krb5-workstation.i386    1.6.1-36.el5_5.4       rhel-i386-server-5
libsmbclient.i386        3.0.33-3.29.el5_5      rhel-i386-server-5
lvm2.i386                2.02.56-8.el5_5.4      rhel-i386-server-5
mkinitrd.i386            5.1.19.6-61.el5_5.1    rhel-i386-server-5
nash.i386                5.1.19.6-61.el5_5.1    rhel-i386-server-5
net-snmp-libs.i386       1:5.3.2.2-9.el5_5.1    rhel-i386-server-5
nscd.i386                2.5-49.el5_5.2         rhel-i386-server-5
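For reference, the yum-based flow described above looks roughly like this (a minimal sketch; the plugin package name yum-downloadonly is an assumption for RHEL 5 and should be verified for your release):
yum install yum-downloadonly    # gives yum the ability to download without installing
yum clean all                   # clear the yum cache
yum check-update                # list the available updates (output as shown above)
yum update --downloadonly       # download the updates into /var/cache/yum for later installation
yum update                      # install the updates once testing is complete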
· This will download and install the updates/packages. This may update several packages on the server, including the kernel.
· Yum will download the rpm files to the default download directory, /var/cache/yum.
[main]
cachedir=/var/cache/yum
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
distroverpkg=redhat-release
tolerant=1
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
# Note: yum-RHN-plugin doesn't honor this.
metadata_expire=1h
# Default.
# installonly_limit = 3
# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum.repos.d
· Example:
#yum localinstall /var/cache/yum/rhel-i386-server-5/packages/*
· Software testing is built into the yum command.
· A new listing should be taken after patching for future reference.
cat newpatchlist.txt
vim-enhanced-6.3.046-0.40E.7
vim-minimal-6.3.046-0.40E.7
vixie-cron-4.1-50.el4
vsftpd-2.0.1-6.el4
vte-0.11.11-12.el4
wget-1.10.2-0.40E
which-2.16-4
wireless-tools-28-0.pre16.3.3.EL4
words-3.0-3.2
wvdial-1.54.0-3
# Broadcast message from root@RH5 (/dev/pts/1) at 14:10 ...
The system is going down for reboot NOW!
· After the predefined testing period, the updates/patches will need to be moved to the production environment.
· Take the back-up of the following files/commands.
#uname -a
#ifconfig -a
#cat /etc/hosts
#cat /etc/fstab
#df -h
#cat /etc/yum.conf
#cat /etc/grub.conf
#cat /etc/sysctl.conf
#rpm -qa > /packagelist_10152010.txt
#cat /packagelist_10152010.txt
#cat /etc/selinux/config
· Example:
#yum localinstall /var/cache/yum/rhel-i386-server-5/packages/*
· Software testing is built into the yum command.
· A new listing should be taken after patching for future reference.
vim-enhanced-6.3.046-0.40E.7
vim-minimal-6.3.046-0.40E.7
vixie-cron-4.1-50.el4
vsftpd-2.0.1-6.el4
vte-0.11.11-12.el4
wget-1.10.2-0.40E
which-2.16-4
wireless-tools-28-0.pre16.3.3.EL4
words-3.0-3.2
wvdial-1.54.0-3
Xaw3d-1.5-24
# shutdown [OPTION]... TIME [MESSAGE]     (the shutdown command format)
# Broadcast message from root@RH5 (/dev/pts/1) at 14:10 ...
The system is going down for reboot NOW!
Back-out plan if the system does not come up after patching:
· Boot the server from the old kernel through GRUB.
· Edit the GRUB configuration file, /etc/grub.conf. (Delete the new kernel entry and make the old kernel the default.)
· If the patching corrupts the present kernel, which in turn corrupts GRUB, then perform the tasks below:
· The GRUB build will be corrupted because the OS is corrupted, so insert the OS CD into the machine and boot from the CD.
· Proceed to the OS from rescue mode and open grub.conf.
· Make the appropriate changes to the file so that the old kernel is booted as the default. (This makes the server boot from it.)
· Restart the server and boot it from the old kernel.
· If both the old kernel and the new kernel crashed while patching the machine, then the server will need to be rebuilt. Follow the steps below for the rebuild:
· Insert the CD into the CD-ROM drive.
· Boot the machine from the CD and proceed with the installation.
· After the installation, work on restoring the system configuration files. (A snapshot of the system files was taken before patching.)
· Work on restoring files from the most recent backup.
· Work with the Nimsoft tier on getting the machine back into monitoring.
· Restart the machine and make sure that it is back up in the same normal state as before. (Monitoring should work as before after this reboot.)
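The GRUB edit described above can also be done from the command line before rebooting (a minimal sketch, assuming legacy GRUB where /etc/grub.conf is a symlink to /boot/grub/grub.conf, entry 0 is the newly installed kernel and entry 1 is the previous one; entry numbers are 0-based):
grep ^title /boot/grub/grub.conf      # list the kernel entries in order
grep ^default /boot/grub/grub.conf    # show the current default entry
sed -i 's/^default=0/default=1/' /boot/grub/grub.conf    # make the previous kernel the default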