
Friday, November 23, 2007

IBM System p5 570


* Up to 16-core scalability with modular architecture and leadership POWER5+ technology

* IBM Advanced POWER™ Virtualization features increase system utilization and reduce the number of overall systems required

* Capacity on Demand features enable quick response to spikes in processing requirements

The IBM System p5 570 mid-range server is a powerful 19-inch rack-mount system that can be used for database and application serving, as well as server consolidation. IBM’s modular symmetric multiprocessor (SMP) architecture means you can start with a 2-core system and easily add additional building blocks when needed for more processing power (up to 16 cores), I/O and storage capacity. The p5-570 includes IBM mainframe-inspired reliability, availability and serviceability (RAS) features.

The System p5 570 server is designed to be a cost-effective, flexible server for the on demand environment. Innovative virtualization technologies and Capacity on Demand (CoD) options help increase the responsiveness of the server to variable computing demands. These features also help increase the system’s utilization of processors and other components, allowing businesses to meet their computing requirements with a smaller system. By combining IBM’s most advanced leading-edge technology for enterprise-class performance and flexible adaptation to changing market conditions, the p5-570 can deliver the key capabilities medium-sized companies need to survive in today’s highly competitive world.

Specifically, the System p5 570 server provides:

Common features

* 19-inch rack-mount packaging
* 2- to 16-core SMP design with unique building block architecture
* 64-bit 1.9 or 2.2 GHz POWER5+ processor cores
* Mainframe-inspired RAS features
* Dynamic LPAR support
* Advanced POWER Virtualization1 (option)
o IBM Micro-Partitioning™ (up to 160 micro-partitions)
o Shared processor pool
o Virtual I/O Server
o Partition Load Manager (IBM AIX 5L™ only)
* Up to 20 optional I/O drawers
* IBM HACMP™ software support for near continuous operation*
* Supported by AIX 5L (V5.2 or later) and Linux® distributions from Red Hat (RHEL AS 4 or later) and SUSE Linux (SLES 9 or later) operating systems
* System Cluster 1600 support with Cluster Systems Management software*



Hardware summary

* 4U 19-inch rack-mount packaging
* One to four building blocks
* Two, four, eight, 12 or 16 1.9 GHz or 2.2 GHz 64-bit POWER5+ processor cores
* L2 cache: 1.9MB to 15.2MB (2- to 16-core)
* L3 cache: 36MB to 288MB (2- to 16-core)
* 1.9 GHz systems: 2GB to 256GB of 533 MHz DDR2 memory; 2.2 GHz systems: 2GB to 256GB of 533 MHz or 32GB to 512GB of 400 MHz DDR2 memory
* Six hot-plug PCI-X adapter slots per building block
* Six hot-swappable disk bays per building block provide up to 7.2TB of internal disk storage
* Optional I/O drawers may add up to an additional 139 PCI-X slots (for a maximum of 163) and 240 disk bays (72TB additional)
* Dual channel Ultra320 SCSI controller per building block (internal; RAID optional)
* One integrated 2-port 10/100/1000 Ethernet per building block
* Optional 2 Gigabit Fibre Channel, 10 Gigabit Ethernet and 4x GX adapters
* One 2-port USB per building block
* Two HMC, two system ports
* Two hot-plug media bays per building block

IBM System p 570 with POWER6


* Advanced IBM POWER6™ processor cores for enhanced performance and reliability

* Building block architecture delivers flexible scalability and modular growth

* Advanced virtualization features facilitate highly efficient systems utilization

* Enhanced RAS features enable improved application availability

The IBM POWER6 processor-based System p™ 570 mid-range server delivers outstanding price/performance, mainframe-inspired reliability and availability features, flexible capacity upgrades and innovative virtualization technologies. This powerful 19-inch rack-mount system, which can handle up to 16 POWER6 cores, can be used for database and application serving, as well as server consolidation. The modular p570 is designed to continue the tradition of its predecessor, the IBM POWER5+™ processor-based System p5™ 570 server, for resource optimization, secure and dependable performance and the flexibility to change with business needs. Clients have the ability to upgrade their current p5-570 servers and know that their investment in IBM Power Architecture™ technology has again been rewarded.

The p570 is the first server designed with POWER6 processors, resulting in performance and price/performance advantages while ushering in a new era in the virtualization and availability of UNIX® and Linux® data centers. POWER6 processors can run 64-bit applications, while concurrently supporting 32-bit applications to enhance flexibility. They feature simultaneous multithreading,1 allowing two application “threads” to be run at the same time, which can significantly reduce the time to complete tasks.

The p570 system is more than an evolution of technology wrapped into a familiar package; it is the result of “thinking outside the box.” IBM’s modular symmetric multiprocessor (SMP) architecture means that the system is constructed using 4-core building blocks. This design allows clients to start with what they need and grow by adding additional building blocks, all without disruption to the base system.2 Optional Capacity on Demand features allow the activation of dormant processor power for times as short as one minute. Clients may start small and grow with systems designed for continuous application availability.

Specifically, the System p 570 server provides:

Common features

* 19-inch rack-mount packaging
* 2- to 16-core SMP design with building block architecture
* 64-bit 3.5, 4.2 or 4.7 GHz POWER6 processor cores
* Mainframe-inspired RAS features
* Dynamic LPAR support
* Advanced POWER Virtualization1 (option)
o IBM Micro-Partitioning™ (up to 160 micro-partitions)
o Shared processor pool
o Virtual I/O Server
o Partition Mobility2
* Up to 32 optional I/O drawers
* IBM HACMP™ software support for near continuous operation*
* Supported by AIX 5L (V5.2 or later) and Linux® distributions from Red Hat (RHEL 4 Update 5 or later) and SUSE Linux (SLES 10 SP1 or later) operating systems



Hardware summary

* 4U 19-inch rack-mount packaging
* One to four building blocks
* Two, four, eight, 12 or 16 3.5 GHz, 4.2 GHz or 4.7 GHz 64-bit POWER6 processor cores
* L2 cache: 8 MB to 64 MB (2- to 16-core)
* L3 cache: 32 MB to 256 MB (2- to 16-core)
* 2 GB to 192 GB of 667 MHz buffered DDR2 or 16 GB to 384 GB of 533 MHz buffered DDR2 or 32 GB to 768 GB of 400 MHz buffered DDR2 memory3
* Four hot-plug, blind-swap PCI Express 8x and two hot-plug, blind-swap PCI-X DDR adapter slots per building block
* Six hot-swappable SAS disk bays per building block provide up to 7.2 TB of internal disk storage
* Optional I/O drawers may add up to an additional 188 PCI-X slots and up to 240 disk bays (72 TB additional)4
* One SAS disk controller per building block (internal)
* One integrated dual-port Gigabit Ethernet per building block standard; One quad-port Gigabit Ethernet per building block available as optional upgrade; One dual-port 10 Gigabit Ethernet per building block available as optional upgrade
* Two GX I/O expansion adapter slots
* One dual-port USB per building block
* Two HMC ports (maximum of two), two SPCN ports per building block
* One optional hot-plug media bay per building block
* Redundant service processor for multiple building block systems2

AIX command


AIX Control Book Creation
List the licensed program products lslpp -L
List the defined devices lsdev -C -H
List the disk drives on the system lsdev -Cc disk
List the memory on the system lsdev -Cc memory (MCA)
List the memory on the system lsattr -El sys0 -a realmem (PCI)
lsattr -El mem0
List system resources lsattr -EHl sys0
List the VPD (Vital Product Data) lscfg -v
Document the tty setup lscfg or smit screen capture F8
Document the print queues qchk -A
Document disk Physical Volumes (PVs) lspv
Document Logical Volumes (LVs) lslv
Document Volume Groups (long list) lsvg -l vgname
Document Physical Volumes (long list) lspv -l pvname
Document File Systems lsfs fsname
/etc/filesystems
Document disk allocation df
Document mounted file systems mount
Document paging space (70 - 30 rule) lsps -a
Document paging space activation /etc/swapspaces
Document users on the system /etc/passwd
lsuser -a id home ALL
Document users attributes /etc/security/user
Document users limits /etc/security/limits
Document users environments /etc/security/environ
Document login settings (login herald) /etc/security/login.cfg
Document valid group attributes /etc/group
lsgroup ALL
Document system wide profile /etc/profile
Document system wide environment /etc/environment
Document cron jobs /var/spool/cron/crontabs/*
Document skulker changes if used /usr/sbin/skulker
Document system startup file /etc/inittab
Document the hostnames /etc/hosts
Document network printing /etc/hosts.lpd
Document remote login host authority /etc/hosts.equiv
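
The listings above can be captured into a single report for the control book. A minimal sketch, assuming ksh/sh; the output path /tmp/control_book.txt and the subset of commands are just examples:

( echo "== Licensed program products =="; lslpp -L
  echo "== Defined devices ==";           lsdev -C -H
  echo "== Physical volumes ==";          lspv
  echo "== Volume groups (long list) =="; lsvg -o | while read vg; do lsvg -l $vg; done
  echo "== Paging space ==";              lsps -a
  echo "== File systems ==";              df -k
) > /tmp/control_book.txt 2>&1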

Directories to monitor in AIX


/var/adm/sulog              Switch user (su) log file (ASCII). Use cat, pg or more to view it and rm to clean it out.
/etc/security/failedlogin   Failed logins from users. Use the who command to view the information. Use "cat /dev/null > /etc/security/failedlogin" to empty it.
/var/adm/wtmp               All login accounting activity. Use the who command to view it; use "cat /dev/null > /var/adm/wtmp" to empty it.
/etc/utmp                   Who is currently logged in to the system. Use the who command to view it; use "cat /dev/null > /etc/utmp" to empty it.
/var/spool/lpd/qdir/*       Leftover queue requests.
/var/spool/qdaemon/*        Temporary copies of spooled files.
/var/spool/*                Spooling directory.
smit.log                    smit log file of activity.
smit.script                 smit log of the commands that were run.
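
Before cleaning anything out, it helps to see which of these files are actually growing. A small sketch (the file list is just the subset above that tends to grow fastest):

ls -l /var/adm/sulog /var/adm/wtmp /etc/utmp /etc/security/failedlogin   # check sizes
du -sk /var/spool/* | sort -rn | head                                    # biggest spool consumers
cat /dev/null > /var/adm/wtmp                                            # empty a log once it has been reviewed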

Mirror Write Consistency


Mirror Write Consistency (MWC) ensures data consistency on logical volumes in case a
system crash occurs during mirrored writes. The active method achieves this by logging
when a write occurs. LVM makes an update to the MWC log that identifies what areas of
the disk are being updated before performing the write of the data. Records of the last 62
distinct logical transfer groups (LTG) written to disk are kept in memory and also written to
a separate checkpoint area on disk (MWC log). This results in a performance degradation
during random writes.
With AIX V5.1 and later, there are now two ways of handling MWC:
• Active, the existing method
• Passive, the new method
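
MWC is set per logical volume. A minimal sketch of checking and changing it, assuming the -w flag of chlv (y = active, p = passive, n = off) and using lv00 as a placeholder name:

lslv lv00 | grep -i "MIRROR WRITE"   # show the current MWC setting
chlv -w p lv00                       # switch the mirrored LV to passive MWC
chlv -w y lv00                       # switch back to active MWC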

Wednesday, November 21, 2007

File system types

The following types of file systems are supported on AIX 5L Version 5.3:

Journaled file system (JFS)
This type of file system is named journaled because the system uses journaling techniques to maintain the integrity of control structures. Each journaled file system must reside on a distinct jfs logical volume. Therefore, the file system size is always a multiple of the size of a logical partition.

Enhanced journaled file system (JFS2)
This is the enhanced version of the initial journaled file system. It uses extent-based allocation to allow higher performance, larger file systems, and a larger maximum file size. Each enhanced journaled file system must reside on a distinct jfs2 logical volume. When the operating system is installed using the default options, it creates JFS2 file systems.

Network file system (NFS)
The network file system is a distributed file system that allows users to access files and directories located on remote computers and use those files and directories as though they were local.

CD-ROM file system (CDRFS)
The CD-ROM file system is a file system type that allows you to access the contents of a CD-ROM through the normal file system interfaces.
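
As an illustration, creating and mounting an enhanced journaled (JFS2) file system might look like the following sketch; rootvg, the size (in 512-byte blocks) and the /testfs mount point are placeholders:

crfs -v jfs2 -g rootvg -a size=2097152 -m /testfs -A yes   # 2097152 x 512-byte blocks = 1 GB
mount /testfs
df -k /testfs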

Storage management concepts

The fundamental concepts used by LVM are physical volumes, volume groups,
physical partitions, logical volumes, logical partitions, file systems, and raw
devices. Some of their characteristics are presented as follows:
• Each individual disk drive is a named physical volume (PV) and has a name such as hdisk0 or hdisk1.
• One or more PVs can make up a volume group (VG). A physical volume can belong to at most one VG.
• You cannot assign a fraction of a PV to one VG. A physical volume is assigned entirely to a volume group.
• Physical volumes can be assigned to the same volume group even though they are of different types, such as SCSI or SSA.
• Storage space from physical volumes is divided into physical partitions (PPs). The size of the physical partitions is identical on all disks belonging to the same VG.
• Within each volume group, one or more logical volumes (LVs) can be defined. Data stored on logical volumes appears contiguous from the user's point of view, but can be spread across different physical volumes in the same volume group.
• Logical volumes consist of one or more logical partitions (LPs). Each logical partition has at least one corresponding physical partition. A logical partition and a physical partition always have the same size. You can have up to three copies of the data located on different physical partitions. Usually, physical partitions storing identical data are located on different physical disks for redundancy purposes.
• Data from a logical volume can be stored in an organized manner, in the form of files located in directories. This structured and hierarchical form of organization is named a file system.
• Data from a logical volume can also be seen as a sequential string of bytes. Such logical volumes are named raw logical volumes. It is the responsibility of the application that uses this data to access and interpret it correctly.
• The volume group descriptor area (VGDA) is an area on the disk that contains information pertinent to the volume group that the physical volume belongs to. It also includes information about the properties and status of all physical and logical volumes that are part of the volume group. The information in the VGDA is used and updated by LVM commands. There is at least one VGDA per physical volume, and the VGDAs of all disks that are part of the same volume group must be identical. The VGDA internal architecture and location on the disk depend on the type of the volume group (original, big, or scalable).
• The volume group status area (VGSA) describes the state of all physical partitions on all physical volumes within a volume group. The VGSA indicates whether a physical partition contains accurate or stale information, and it is used for monitoring and maintaining the synchronization of data copies. The VGSA is essentially a bitmap; its architecture and location on the disk depend on the type of the volume group.
• A logical volume control block (LVCB) contains important information about the logical volume, such as the number of logical partitions and the disk allocation policy. Its architecture and location on the disk depend on the type of the volume group it belongs to. For standard volume groups, the LVCB resides on the first block of user data within the LV. For big volume groups, there is additional LVCB information in the VGDA on the disk. For scalable volume groups, all relevant logical volume control information is kept in the VGDA as part of the LVCB information area and the LV entry area.
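
These concepts map directly onto a handful of LVM commands. A minimal sketch, where hdisk2, hdisk3, datavg and datalv are hypothetical names for two unused disks, a volume group and a mirrored logical volume:

mkvg -y datavg hdisk2 hdisk3     # create a VG; the PP size is fixed at this point
lsvg datavg                      # show PP size, total/free PPs and VG descriptors
mklv -y datalv -c 2 datavg 10    # LV of 10 LPs with 2 copies, each copy on a different PV
lslv datalv                      # show copies, allocation policy and MWC setting
lspv -l hdisk2                   # list the LVs placed on this physical volume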

Boot process of IBM AIX

This chapter describes the boot process and the different stages the system
uses to prepare the AIX 5L environment.
Topics discussed in this chapter are:
• The boot process
• System initialization
• The /etc/inittab file
• How to recover from a non-responsive boot process
• Run levels
• An introduction to the rc.* files
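
A few standard commands are handy when following the boot sequence on a running system (a short sketch, not a substitute for the chapter itself):

alog -o -t boot   # display the boot log
who -r            # show the current run level
lsitab -a         # list all /etc/inittab entries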

WSM Objectives

The objectives of the Web-based System Manager are:
• Simplify AIX administration through a single interface
• Enable AIX systems to be administered from almost any client platform with a browser that supports Java 1.3, or with client code downloaded from an AIX 5L V5.3 system
• Enable AIX systems to be administered remotely
• Provide a system administration environment with a look and feel similar to the Windows NT/2000/XP, Linux and AIX CDE environments
The Web-based System Manager provides a comprehensive system management environment and covers most of the tasks in the SMIT user interface. The Web-based System Manager can only be run from a graphics terminal, so SMIT still needs to be used in the ASCII environment.
To download the Web-based System Manager Client code from an AIX host, use the address
http://<hostname>/remote_client.html
Supported Microsoft Windows clients for AIX 5.3 are Windows 2000 Professional, Windows XP Professional, and Windows Server 2003.
Supported Linux clients are PCs running Red Hat Enterprise Linux 3, SLES 8, SLES 9, SuSE 8.0, SuSE 8.1, SuSE 8.2 and SuSE 9.0, using the KDE or GNOME desktops only.
The PC Web-based System Manager Client installation needs a minimum of 300 MB of free disk space, 512 MB of memory (1 GB preferred) and a 1 GHz CPU.

HACMP Course Details

· Explain what High Availability is.

· Outline the capabilities of HACMP for AIX.

· Design and plan a highly available cluster.

· Install and configure HACMP for AIX or HACMP/ES in the following modes of operation:

    • Cascading.
    • Mutual Takeover.
    • Cascading without Fallback.
    • Rotating.
    • Concurrent Access (optional).

· Perform basic system administration tasks for HACMP.

· Perform basic customisation for HACMP.

· Carry out problem determination and recovery.

Sunday, November 18, 2007

System management

Cluster Systems Management (CSM) for AIX and Linux
CSM is designed to minimize the cost and complexity of administering clustered and partitioned systems by enabling comprehensive management and monitoring of the entire environment from a single point of control. CSM provides:

  • Software distribution, installation and update (operating system and applications)
  • Comprehensive system monitoring with customizable automated responses
  • Distributed command execution
  • Hardware control
  • Diagnostic tools
  • Management by group
  • Both a graphical interface and a fully scriptable command line interface
In addition to providing all the key functions for administration and maintenance of distributed systems, CSM is designed to deliver the parallel execution required to manage clustered computing environments effectively. CSM supports homogeneous or mixed environments of IBM servers running AIX or Linux.
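
As an illustration of the distributed command execution and hardware control points above, a typical CSM session might look like the following sketch, assuming the CSM commands lsnode, dsh and rpower on the management server:

lsnode                  # list the nodes defined to the management server
dsh -a "oslevel -r"     # run a command on all nodes in parallel
rpower -a query         # query the hardware power state of all nodes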

Parallel System Support Programs (PSSP) for AIX
PSSP is the systems management predecessor to Cluster Systems Management (CSM) and does not support IBM System p servers or AIX 5L™ V5.3 or above. New cluster deployments should use CSM and existing PSSP clients with software maintenance will be transitioned to CSM at no charge.

IBM System Cluster 1350

Reduced time to deployment

IBM HPC clustering offers significant price/performance advantages for many high-performance workloads by harnessing the advantages of low cost servers plus innovative, easily available open source software.

Today, some businesses are building their own Linux and Microsoft clusters using commodity hardware, standard interconnects and networking technology, open source software, and in-house or third-party applications. Despite the apparent cost advantages offered by these systems, the expense and complexity of assembling, integrating, testing and managing these clusters from disparate, piece-part components often outweigh any benefits gained.

IBM has designed the IBM System Cluster 1350 to help address these challenges. Now clients can benefit from IBM’s extensive experience with HPC to help minimize this complexity and risk. Using advanced Intel® Xeon®, AMD Opteron™, and IBM PowerPC® processor-based server nodes, proven cluster management software and optional high-speed interconnects, the Cluster 1350 offers the best of IBM and third-party technology. As a result, clients can speed up installation of an HPC cluster, simplify its management, and reduce mean time to payback.

The Cluster 1350 is designed to be an ideal solution for a broad range of application environments, including industrial design and manufacturing, financial services, life sciences, government and education. These environments typically require excellent price/performance for handling high performance computing (HPC) and business performance computing (BPC) workloads. It is also an excellent choice for applications that require horizontal scaling capabilities, such as Web serving and collaboration.

Common features
  • Rack-optimized Intel Xeon dual-core and quad-core and AMD Opteron processor-based servers
  • Intel Xeon, AMD and PowerPC processor-based blades
  • Optional high capacity IBM System Storage™ DS3200, DS3400, DS4700, DS4800 and EXP3000 Storage Servers and IBM System Storage EXP 810 Storage Expansion
  • Industry-standard Gigabit Ethernet cluster interconnect
  • Optional high-performance Myrinet-2000 and Myricom 10g cluster interconnect
  • Optional Cisco, Voltaire, Force10 and PathScale InfiniBand cluster interconnects
  • Clearspeed Floating Point Accelerator
  • Terminal server and KVM switch
  • Space-saving flat panel monitor and keyboard
  • Runs with RHEL 4 or SLES 10 Linux operating systems or Windows Compute Cluster Server
  • Robust cluster systems management and scalable parallel file system software
  • Hardware installed and integrated in 25U or 42U Enterprise racks
  • Scales up to 1,024 cluster nodes (larger systems and additional configurations available—contact your IBM representative or IBM Business Partner)
  • Optional Linux cluster installation and support services from IBM Global Services or an authorized partner or distributor
  • Clients must obtain the version of the Linux operating system specified by IBM from IBM, the Linux Distributor or an authorized reseller
Hardware summary

  • x3650—dual core up to 3.0 GHz, quad core up to 2.66 GHz
  • x3550—dual core up to 3.0 GHz, quad core up to 2.66 GHz
  • x3455—dual core up to 2.8 GHz
  • x3655—dual core up to 2.6 GHz
  • x3755—dual core up to 2.8 GHz
  • HS21—dual core up to 3.0 GHz, quad core up to 2.66 GHz
  • HS21 XM—dual core up to 3.0 GHz, quad core up to 2.33 GHz
  • JS21—2.7/2.6 GHz*; 2.5/2.3 GHz*
  • LS21—dual core up to 2.6 GHz
  • LS41—dual core up to 2.6 GHz
  • QS20—multi-core 3.2 GHz
  • Up to 64 storage nodes

IBM System Cluster 1600


IBM System Cluster 1600 systems are made up of IBM POWER5™ and POWER5+™ symmetric multiprocessing (SMP) servers running AIX 5L™ or Linux®. Cluster 1600 is a highly scalable cluster solution for large-scale computational modeling and analysis, large databases and business intelligence applications and cost-effective datacenter, server and workload consolidation. Cluster 1600 systems can be deployed on Ethernet networks, InfiniBand networks, or with the IBM High Performance Switch and are typically managed with Cluster Systems Management (CSM) software, a comprehensive tool designed specifically to streamline initial deployment and ongoing management of cluster systems.

Common features

· Highly scalable AIX 5L or Linux cluster solutions for large-scale computational modeling, large databases and cost-effective data center, server and workload consolidation

· Cluster Systems Management (CSM) software for comprehensive, flexible deployment and ongoing management

· Cluster interconnect options: industry-standard 1/10 Gb Ethernet (AIX 5L or Linux); IBM High Performance Switch (AIX 5L and CSM); SP Switch2 (AIX 5L and PSSP); 4x/12x InfiniBand (AIX 5L or SLES 9); or Myrinet (Linux)

· Operating system options: AIX 5L Version 5.2 or 5.3, SUSE Linux Enterprise Server 8 or 9, Red Hat Enterprise Linux 4

· Complete software suite for creating, tuning and running parallel applications: Engineering & Scientific Subroutine Library (ESSL), Parallel ESSL, Parallel Environment, XL Fortran, VisualAge C++

· High-performance, high availability, highly scalable cluster file system General Parallel File System (GPFS)

· Job scheduling software to optimize resource utilization and throughput: LoadLeveler®

· High availability software for continuous access to data and applications: High Availability Cluster Multiprocessing (HACMP™)
Hardware summary

· Mix and match IBM POWER5 and POWER5+ servers:
· IBM System p5™ 595, 590, 575, 570, 560Q, 550Q, 550, 520Q, 520, 510Q, 510, 505Q and 505

· IBM eServer™ p5 595, 590, 575, 570, 550, 520, and 510



· Up to 128 servers or LPARs (AIX 5L or Linux operating system images) per cluster depending on hardware; higher scalability by special order







Very useful commands

svmon
svmon -P

Further, you can use the svmon command to monitor memory usage as follows:

(A) # svmon -P -v -t 10 | more   (top ten processes)
(B) # svmon -U -v -t 10 | more   (top ten users)
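
Two related invocations round out the picture (both are standard svmon flags; <pid> is a placeholder):

svmon -G          # one-shot global memory and paging-space snapshot
svmon -P <pid>    # memory usage of a single process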


smit install requires "inutoc ." first. It will autogenerate a .toc for you,
but if you later add more .bff files to the same directory, rerunning
inutoc . becomes important. It is, of course, a table of contents.


Show the headers of an XCOFF object file:
dump -ov /dir/xcoff-file


topas (-P is useful)   # similar to top


When creating really big filesystems, this is very helpful:
chlv -x 6552 lv08
Word on the net is that this is required for filesystems over 512M.

esmf04m-root> crfs -v jfs -g'ptmpvg' -a size='884998144' -m'/ptmp2' -A''`locale yesstr | awk -F: '{print $1}'`'' -p'rw' -t''`locale yesstr | awk -F: '{print $1}'`'' -a frag='4096' -a nbpi='131072' -a ag='64'
Based on the parameters chosen, the new /ptmp2 JFS file system
is limited to a maximum size of 2147483648 (512 byte blocks)
New File System size is 884998144
esmf04m-root>

If you give a bad combination of parameters, the command will list
possibilities. I got something like this from smit, then seasoned
to taste.


If you need files larger than 2 gigabytes in size, this is better.
It should allow files up to 64 gigabytes:
crfs -v jfs -a bf=true -g'ptmpvg' -a size='884998144' -m'/ptmp2' -A''`locale yesstr | awk -F: '{print $1}'`'' -p'rw' -t''`locale yesstr | awk -F: '{print $1}'`'' -a nbpi='131072' -a ag='64'


Show version of SSP (IBM SP switch) software:
lslpp -al ssp.basic


llctl -g reconfig - make loadleveler reread its config files


oslevel (sometimes lies)
oslevel -r (seems to do better)


lsdev -Cc adapter


pstat -a looks useful


vmo is for VM tuning


On 1000BaseT, you really want this:
chdev -P -l ent2 -a media_speed=Auto_Negotiation


Setting jumbo frames on en2 looks like:
ifconfig en2 down detach
chdev -l ent2 -a jumbo_frames=yes
chdev -l en2 -a mtu=9000
chdev -l en2 -a state=up
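
To confirm the change took effect (same en2/ent2 interface as above):

lsattr -El en2 -a mtu              # should now report 9000
lsattr -El ent2 -a jumbo_frames    # should report yes
entstat -d ent2 | grep -i jumbo    # the adapter's own view of jumbo frame support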


Search for the meaning of AIX errors:
http://publib16.boulder.ibm.com/pseries/en_US/infocenter/base/eisearch.htm


nfso -a shows AIX NFS tuning parameters; good to check on if you're
getting badcalls in nfsstat. Most people don't bother to tweak these,
though.


nfsstat -m shows great info about full set of NFS mount options


Turn on path mtu discovery
no -o tcp_pmtu_discover=1
no -o udp_pmtu_discover=1
TCP support is handled by the OS. UDP support requires cooperation
between OS and application.
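
To check the current settings afterwards:

no -a | grep pmtu_discover   # show the tcp/udp PMTU discovery values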


nfsstat -c shows rpc stats


To check for software problems:
lppchk -v
lppchk -c
lppchk -l


List subsystem (my word) status and control subsystems with the SRC commands:
lssrc -a      # list the status of all defined subsystems
mkssys        # add a subsystem definition
rmssys        # remove a subsystem definition
chssys        # change a subsystem definition
auditpr       # format audit trail records
refresh       # tell a subsystem to reread its configuration
startsrc      # start a subsystem
stopsrc       # stop a subsystem
traceson      # turn subsystem tracing on
tracesoff     # turn subsystem tracing off
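
A short usage sketch of the SRC commands above (the subsystem and group names are just examples):

lssrc -s inetd       # status of a single subsystem
lssrc -g tcpip       # status of a whole subsystem group
stopsrc -s snmpd     # stop a subsystem
startsrc -s snmpd    # start it again
refresh -s inetd     # ask a subsystem to reread its configuration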


This starts sendmail:
startsrc -s sendmail -a "-bd -q30m"


This makes inetd reread its config file. Not sure if it kills and
restarts or just HUP's or what:
refresh -s inetd


lsps is used to list the characteristics of paging space.


Turning off ip forwarding:
/usr/sbin/no -o ipforwarding=0


Detailed info about a specific error:
errpt -a -jE85C5C4C
BTW, Rajiv Bendale tells me that errors are stored in NVRAM on AIX,
so you don't have to put time into replicating an error as often.


Some or all of these will list more than one number. Trust the first,
not the second.

lslpp -l ppe.poe
...should list the version of poe installed on the system

Check on compiler versions:
lslpp -l vac.C
lslpp -l vacpp.cmp.core

Check on loadleveler version:
lslpp -l LoadL.full


To check the boot list: bootlist -o -m normal
To update the boot list: bootlist -m normal hdisk* hdisk* cd* rmt*


prtconf   # overall system configuration summary


Run ssadiag against the drive and the adapter and it will tell you if it
fails or not. Then, if it's hot-pluggable, it can be replaced online.


What is Hot Spare

What is an LVM hot spare?

A hot spare is a disk or group of disks used to replace a failing disk. LVM marks a physical
volume missing due to write failures. It then starts the migration of data to the hot spare
disk.
Minimum hot spare requirements
The following is a list of minimal hot sparing requirements enforced by the operating
system.
- Spares are allocated and used by volume group
- Logical volumes must be mirrored
- All logical partitions on hot spare disks must be unallocated
- Hot spare disks must have at least equal capacity to the smallest disk already
in the volume group. Good practice dictates having enough hot spares to
cover your largest mirrored disk.
Hot spare policy
The chpv and the chvg commands are enhanced with a new -h argument. This allows you
to designate disks as hot spares in a volume group and to specify a policy to be used in the
case of failing disks.
The following four values are valid for the hot spare policy argument (-h):

Argument        Description
y (lower case)  Automatically migrates partitions from one failing disk to one spare
                disk. From the pool of hot spare disks, the smallest one which is big
                enough to substitute for the failing disk will be used.
Y (upper case)  Automatically migrates partitions from a failing disk, but might use
                the complete pool of hot spare disks.
n               No automatic migration will take place. This is the default value for a
                volume group.
r               Removes all disks from the pool of hot spare disks for this volume
                group.

Synchronization policy
There is a new -s argument for the chvg command that is used to specify synchronization
characteristics.
The following two values are valid for the synchronization argument (-s): y automatically
attempts to synchronize stale partitions, and n (the default) does not.

Examples
The following command marks hdisk1 as a hot spare disk:
# chpv -hy hdisk1
The following command sets an automatic migration policy which uses the smallest hot
spare that is large enough to replace the failing disk, and automatically tries to synchronize
stale partitions:
# chvg -hy -sy testvg
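
To verify the result, the volume group listing shows the hot spare and auto-sync policies that are in effect (testvg as in the example above):

lsvg testvg | egrep -i "HOT SPARE|AUTO SYNC"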