tag:blogger.com,1999:blog-9397047622448103592024-02-21T07:18:47.431+05:30Welcome to AIX WorldNilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.comBlogger16125tag:blogger.com,1999:blog-939704762244810359.post-76419962627166429092007-11-23T19:53:00.001+05:302007-11-23T19:53:35.196+05:30IBM System p5 570<h3 class="post-title entry-title"> <a href="http://jayinux.blogspot.com/2007/11/ibm-system-p5-570.html">IBM System p5 570</a> </h3> <p>* Up to 16-core scalability with modular architecture and leadership POWER5+ technology<br /><br /> * IBM Advanced POWER™ Virtualization features increase system utilization and reduce the number of overall systems required<br /><br />* Capacity on Demand features enable quick response to spikes in processing requirements<br /><br />The IBM System p5 570 mid-range server is a powerful 19-inch rack-mount system that can be used for database and application serving, as well as server consolidation. IBM’s modular symmetric multiprocessor (SMP) architecture means you can start with a 2-core system and easily add building blocks when needed for more processing power (up to 16 cores), I/O and storage capacity. The p5-570 includes IBM mainframe-inspired reliability, availability and serviceability features.<br /><br />The System p5 570 server is designed to be a cost-effective, flexible server for the on demand environment. Innovative virtualization technologies and Capacity on Demand (CoD) options help increase the responsiveness of the server to variable computing demands. These features also help increase the system’s utilization of processors and system components, allowing businesses to meet their computing requirements with a smaller system. 
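The processor, memory and partition figures quoted here can be verified from a running AIX partition with standard commands. The sketch below is illustrative only: the `show` wrapper is not part of AIX, and exists solely so the example degrades gracefully on systems where these commands are absent.

```shell
# Query the installed resources an administrator would check before
# planning DLPAR or Capacity on Demand changes on a p5-570 partition.
show() {
    # Run a documented command if present; otherwise note its absence.
    # A non-AIX command of the same name may fail; tolerate that too.
    if command -v "$1" >/dev/null 2>&1; then
        "$@" 2>/dev/null || echo "[$1 did not run as expected here]"
    else
        echo "[$1 not available on this system]"
    fi
}

show lsdev -Cc processor         # configured processor devices
show lsattr -El sys0 -a realmem  # installed real memory, in KB
show lparstat -i                 # partition, entitlement and CoD details
```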
By combining IBM’s most advanced leading-edge technology for enterprise-class performance and flexible adaptation to changing market conditions, the p5-570 can deliver the key capabilities medium-sized companies need to survive in today’s highly competitive world.<br /><br />Specifically, the System p5 570 server provides:<br /><br />Common features Hardware summary<br /><br /> * 19-inch rack-mount packaging<br /> * 2- to 16-core SMP design with unique building block architecture<br /> * 64-bit 1.9 or 2.2 GHz POWER5+ processor cores<br /> * Mainframe-inspired RAS features<br /> * Dynamic LPAR support<br /> * Advanced POWER Virtualization1 (option)<br /> o IBM Micro-Partitioning™ (up to 160 micro- partitions)<br /> o Shared processor pool<br /> o Virtual I/O Server<br /> o Partition Load Manager (IBM AIX 5L™ only)<br /> * Up to 20 optional I/O drawers<br /> * IBM HACMP™ software support for near continuous operation*<br />* Supported by AIX 5L (V5.2 or later) and Linux® distributions from Red Hat (RHEL AS 4 or later) and SUSE Linux (SLES 9 or later) operating systems<br /> * System Cluster 1600 support with Cluster Systems Management software*<br /><br /><br /><br /> * 4U 19-inch rack-mount packaging<br /> * One to four building blocks<br /> * Two, four, eight, 12, 16 1.9 or 2.2 GHz 64-bit POWER5+ processor cores<br /> * L2 cache: 1.9MB to 15.2MB (2- to 16-core)<br /> * L3 cache: 36MB to 288MB (2- to 16-core)<br />* 1.9 GHz systems: 2GB to 256GB of 533 MHz DDR2 memory; 2.2 GHz systems: 2GB to 256GB of 533 MHz or 32GB to 512GB of 400 MHz DDR2 memory<br /> * Six hot-plug PCI-X adapter slots per building block<br /> * Six hot-swappable disk bays per building block provide up to 7.2TB of internal disk storage<br />* Optional I/O drawers may add up to an additional 139 PCI-X slots (for a maximum of 163) and 240 disk bays (72TB additional)<br /> * Dual channel Ultra320 SCSI controller per building block (internal; RAID optional)<br /> * One integrated 2-port 10/100/1000 
Ethernet per building block<br /> * Optional 2 Gigabit Fibre Channel, 10 Gigabit Ethernet and 4x GX adapters<br /> * One 2-port USB per building block<br /> * Two HMC, two system ports<br /> * Two hot-plug media bays per building block</p>Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-10446661173515148122007-11-23T19:52:00.003+05:302007-11-23T19:52:59.074+05:30IBM System p 570 with POWER 6<h3 class="post-title entry-title"> <a href="http://jayinux.blogspot.com/2007/11/ibm-system-p-570-with-power-6.html"><br /></a> </h3> <p>* Advanced IBM POWER6™ processor cores for enhanced performance and reliability<br /><br /> * Building block architecture delivers flexible scalability and modular growth<br /><br /> * Advanced virtualization features facilitate highly efficient systems utilization<br /><br /> * Enhanced RAS features enable improved application availability<br /><br />The IBM POWER6 processor-based System p™ 570 mid-range server delivers outstanding price/performance, mainframe-inspired reliability and availability features, flexible capacity upgrades and innovative virtualization technologies. This powerful 19-inch rack-mount system, which can handle up to 16 POWER6 cores, can be used for database and application serving, as well as server consolidation. The modular p570 is designed to continue the tradition of its predecessor, the IBM POWER5+™ processor-based System p5™ 570 server, for resource optimization, secure and dependable performance and the flexibility to change with business needs. Clients have the ability to upgrade their current p5-570 servers and know that their investment in IBM Power Architecture™ technology has again been rewarded.<br /><br />The p570 is the first server designed with POWER6 processors, resulting in performance and price/performance advantages while ushering in a new era in the virtualization and availability of UNIX® and Linux® data centers. 
POWER6 processors can run 64-bit applications, while concurrently supporting 32-bit applications to enhance flexibility. They feature simultaneous multithreading,1 allowing two application “threads” to be run at the same time, which can significantly reduce the time to complete tasks.<br /><br />The p570 system is more than an evolution of technology wrapped into a familiar package; it is the result of “thinking outside the box.” IBM’s modular symmetric multiprocessor (SMP) architecture means that the system is constructed using 4-core building blocks. This design allows clients to start with what they need and grow by adding additional building blocks, all without disruption to the base system.2 Optional Capacity on Demand features allow the activation of dormant processor power for times as short as one minute. Clients may start small and grow with systems designed for continuous application availability.<br /><br />Specifically, the System p 570 server provides:<br /><br />Common features Hardware summary<br /><br /> * 19-inch rack-mount packaging<br /> * 2- to 16-core SMP design with building block architecture<br /> * 64-bit 3.5, 4.2 or 4.7 GHz POWER6 processor cores<br /> * Mainframe-inspired RAS features<br /> * Dynamic LPAR support<br /> * Advanced POWER Virtualization1 (option)<br /> o IBM Micro-Partitioning™ (up to 160 micro-partitions)<br /> o Shared processor pool<br /> o Virtual I/O Server<br /> o Partition Mobility2<br /> * Up to 32 optional I/O drawers<br /> * IBM HACMP™ software support for near continuous operation*<br />* Supported by AIX 5L (V5.2 or later) and Linux® distributions from Red Hat (RHEL 4 Update 5 or later) and SUSE Linux (SLES 10 SP1 or later) operating systems<br /><br /><br /><br /> * 4U 19-inch rack-mount packaging<br /> * One to four building blocks<br /> * Two, four, eight, 12 or 16 3.5 GHz, 4.2 GHz or 4.7 GHz 64-bit POWER6 processor cores<br /> * L2 cache: 8 MB to 64 MB (2- to 16-core)<br /> * L3 cache: 32 MB to 256 MB (2- to 
16-core)<br />* 2 GB to 192 GB of 667 MHz buffered DDR2 or 16 GB to 384 GB of 533 MHz buffered DDR2 or 32 GB to 768 GB of 400 MHz buffered DDR2 memory3<br /> * Four hot-plug, blind-swap PCI Express 8x and two hot-plug, blind-swap PCI-X DDR adapter slots per building block<br /> * Six hot-swappable SAS disk bays per building block provide up to 7.2 TB of internal disk storage<br /> * Optional I/O drawers may add up to an additional 188 PCI-X slots and up to 240 disk bays (72 TB additional)4<br /> * One SAS disk controller per building block (internal)<br />* One integrated dual-port Gigabit Ethernet per building block standard; one quad-port Gigabit Ethernet per building block available as an optional upgrade; one dual-port 10 Gigabit Ethernet per building block available as an optional upgrade<br /> * Two GX I/O expansion adapter slots<br /> * One dual-port USB per building block<br /> * Two HMC ports (maximum of two), two SPCN ports per building block<br /> * One optional hot-plug media bay per building block<br /> * Redundant service processor for multiple building block systems2</p>Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-55157012400915623282007-11-23T19:52:00.001+05:302007-11-23T19:52:10.891+05:30AIX command<h3 class="post-title entry-title"> <a href="http://jayinux.blogspot.com/2007/11/aix-command.html">AIX command</a> </h3> <p>AIX Control Book Creation<br />List the licensed program products: lslpp -L<br />List the defined devices: lsdev -C -H<br />List the disk drives on the system: lsdev -Cc disk<br />List the memory on the system: lsdev -Cc memory (MCA)<br />List the memory on the system: lsattr -El sys0 -a realmem (PCI)<br />lsattr -El mem0<br />List system resources: lsattr -EHl sys0<br />List the VPD (Vital Product Data): lscfg -v<br />Document the tty setup: lscfg or smit screen capture F8<br />Document the print queues: qchk -A<br />Document disk Physical Volumes (PVs): lspv<br
/>Document Logical Volumes (LVs): lslv<br />Document Volume Groups (long list): lsvg -l vgname<br />Document Physical Volumes (long list): lspv -l pvname<br />Document File Systems: lsfs fsname<br />/etc/filesystems<br />Document disk allocation: df<br />Document mounted file systems: mount<br />Document paging space (70 - 30 rule): lsps -a<br />Document paging space activation: /etc/swapspaces<br />Document users on the system: /etc/passwd<br />lsuser -a id home ALL<br />Document user attributes: /etc/security/user<br />Document user limits: /etc/security/limits<br />Document user environments: /etc/security/environ<br />Document login settings (login herald): /etc/security/login.cfg<br />Document valid group attributes: /etc/group<br />lsgroup ALL<br />Document the system-wide profile: /etc/profile<br />Document the system-wide environment: /etc/environment<br />Document cron jobs: /var/spool/cron/crontabs/*<br />Document skulker changes if used: /usr/sbin/skulker<br />Document the system startup file: /etc/inittab<br />Document the hostnames: /etc/hosts<br />Document network printing: /etc/hosts.lpd<br />Document remote login host authority: /etc/hosts.equiv</p>Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-74420525719508336672007-11-23T19:51:00.001+05:302007-11-23T19:51:48.022+05:30Directories to monitor in AIX<h3 class="post-title entry-title"> <a href="http://jayinux.blogspot.com/2007/11/directories-to-monitor-in-aix.html">Directories to monitor in AIX</a> </h3> <p>/var/adm/sulog: Switch user log file (ASCII file). Use cat, pg or more to view it and rm to clean it out.<br />/etc/security/failedlogin: Failed logins from users. Use the who command to view the information. Use "cat /dev/null > /etc/security/failedlogin" to empty it.<br />/var/adm/wtmp: All login accounting activity. Use the who command to view it; use "cat /dev/null > /var/adm/wtmp" to empty it.<br />/etc/utmp: Who has logged in to the system. 
Use the who command to view it. Use "cat /dev/null > /etc/utmp" to empty it.<br />/var/spool/lpd/qdir/*: Leftover queue requests<br />/var/spool/qdaemon/*: Temporary copies of spooled files<br />/var/spool/*: Spooling directory<br />smit.log: smit log file of activity<br />smit.script: smit log of the commands executed</p>Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-38015790847987875342007-11-23T19:50:00.000+05:302007-11-23T19:51:26.115+05:30Mirror Write Consistency<h3 class="post-title entry-title"> <a href="http://jayinux.blogspot.com/2007/11/mirror-write-consistency.html">Mirror Write Consistency</a> </h3> <p>Mirror Write Consistency (MWC) ensures data consistency on logical volumes in case a system crash occurs during mirrored writes. The active method achieves this by logging when a write occurs: LVM makes an update to the MWC log that identifies what areas of the disk are being updated before performing the write of the data. Records of the last 62 distinct logical track groups (LTGs) written to disk are kept in memory and also written to a separate checkpoint area on disk (the MWC log). This results in a performance degradation during random writes.<br />With AIX V5.1 and later, there are two ways of handling MWC:<br />• Active, the existing method<br />• Passive, the new method</p>Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-22698644508897112902007-11-21T12:18:00.000+05:302007-11-21T12:19:07.673+05:30File system typesThe following types of file systems are supported on AIX 5L Version 5.3:<br />Journaled file system<br />This type of file system is named journaled because the system uses journaling techniques to maintain the integrity of control structures. Each journaled file system must reside on a distinct jfs logical volume. 
Therefore, the file system size will be a multiple of the size of a logical partition.<br />Enhanced journaled file system<br />This is the enhanced version of the initial journaled file system. It uses extent-based allocation to allow higher performance, larger file systems, and a larger file size. Each enhanced journaled file system must reside on a distinct jfs2 logical volume. When the operating system is installed using the default options, it creates JFS2 file systems.<br />Network file system<br />The network file system (NFS) is a distributed file system that allows users to access files and directories located on remote computers and use those files and directories as though they were local.<br />CD-ROM file system<br />The CD-ROM file system (CDRFS) is a file system type that allows you to access the contents of a CD-ROM through the normal file system interfaces.Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-6732347782566297582007-11-21T12:17:00.000+05:302007-11-21T12:18:10.507+05:30Storage management conceptsThe fundamental concepts used by LVM are physical volumes, volume groups, physical partitions, logical volumes, logical partitions, file systems, and raw devices. Some of their characteristics are presented as follows:<br />• Each individual disk drive is a named physical volume (PV) and has a name such as hdisk0 or hdisk1.<br />• One or more PVs can make up a volume group (VG). A physical volume can belong to a maximum of one VG.<br />• You cannot assign a fraction of a PV to one VG. 
A physical volume is assigned entirely to a volume group.<br />• Physical volumes can be assigned to the same volume group even though they are of different types, such as SCSI or SSA.<br />• Storage space from physical volumes is divided into physical partitions (PPs). The size of the physical partitions is identical on all disks belonging to the same VG.<br />• Within each volume group, one or more logical volumes (LVs) can be defined. Data stored on logical volumes appears to be contiguous from the user’s point of view, but can be spread across different physical volumes from the same volume group.<br />• Logical volumes consist of one or more logical partitions (LPs). Each logical partition has at least one corresponding physical partition. A logical partition and a physical partition always have the same size. You can have up to three copies of the data located on different physical partitions. Usually, physical partitions storing identical data are located on different physical disks for redundancy purposes.<br />• Data from a logical volume can be stored in an organized manner, having the form of files located in directories. This structured and hierarchical form of organization is named a file system.<br />• Data from a logical volume can also be seen as a sequential string of bytes. Such logical volumes are named raw logical volumes. It is the responsibility of the application that uses this data to access and interpret it correctly.<br />• The volume group descriptor area (VGDA) is an area on the disk that contains information pertinent to the volume group that the physical volume belongs to. It also includes information about the properties and status of all physical and logical volumes that are part of the volume group. The information from the VGDA is used and updated by LVM commands. There is at least one VGDA per physical volume. 
Information from the VGDAs of all disks that are part of the same volume group must be identical. The VGDA internal architecture and location on the disk depend on the type of the volume group (original, big, or scalable).<br />• The volume group status area (VGSA) is used to describe the state of all physical partitions from all physical volumes within a volume group. The VGSA indicates if a physical partition contains accurate or stale information. The VGSA is used for monitoring and maintaining the synchronization of data copies. The VGSA is essentially a bitmap, and its architecture and location on the disk depend on the type of the volume group.<br />• A logical volume control block (LVCB) contains important information about the logical volume, such as the number of logical partitions or the disk allocation policy. Its architecture and location on the disk depend on the type of the volume group it belongs to. For standard volume groups, the LVCB resides on the first block of user data within the LV. For big volume groups, there is additional LVCB information in the VGDA on the disk. 
For scalable volume groups, all relevant logical volume control information is kept in the VGDA as part of the LVCB information area and the LV entry area.Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-81939287986071843672007-11-21T12:16:00.000+05:302007-11-21T12:17:28.978+05:30Boot process of IBM AIXThis chapter describes the boot process and the different stages the system uses to prepare the AIX 5L environment.<br />Topics discussed in this chapter are:<br />• The boot process<br />• System initialization<br />• The /etc/inittab file<br />• How to recover from a non-responsive boot process<br />• Run levels<br />• An introduction to the rc.* filesNilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-8028554376999676222007-11-21T12:15:00.000+05:302007-11-21T12:16:39.067+05:30WSM ObjectivesThe objectives of the Web-based System Manager are to:<br />• Simplify AIX administration through a single interface<br />• Enable AIX systems to be administered from almost any client platform with a browser that supports Java 1.3, or with client code downloaded from an AIX 5L V5.3 system<br />• Enable AIX systems to be administered remotely<br />• Provide a system administration environment with a look and feel similar to the Windows NT/2000/XP, Linux and AIX CDE environments<br />The Web-based System Manager provides a comprehensive system management environment and covers most of the tasks in the SMIT user interface. 
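For the tasks it shares with SMIT, the equivalent ASCII-terminal route is a SMIT fastpath. A few common fastpaths are sketched below; the script only prints them, because smitty starts an interactive full-screen session, and the fastpath names are the ones commonly documented for AIX 5L.

```shell
# Common SMIT fastpaths; each jumps straight to the named task menu.
# Printed rather than launched, since smitty is interactive.
fastpaths="lvm fs tcpip mkuser"
for f in $fastpaths; do
    echo "smitty $f"
done
```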
The Web-based System Manager can only be run from a graphics terminal, so SMIT will need to be used in the ASCII environment.<br />To download the Web-based System Manager client code from an AIX host, use the address<br />http://<hostname>/remote_client.html<br />Supported Microsoft Windows clients for AIX 5.3 are Windows 2000 Professional, Windows XP Professional, and Windows Server 2003.<br />Supported Linux clients are PCs running Red Hat Enterprise Version 3, SLES 8, SLES 9, SuSE 8.0, SuSE 8.1, SuSE 8.2, or SuSE 9.0, using the KDE or GNOME desktops only.<br />The PC Web-based System Manager client installation needs a minimum of 300 MB of free disk space, 512 MB of memory (1 GB preferred) and a 1 GHz CPU.Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-202381250077605692007-11-21T12:14:00.000+05:302007-11-21T12:15:41.685+05:30HACMP Course Details<ul><li>Explain what High Availability is.</li><li>Outline the capabilities of HACMP for AIX.</li><li>Design and plan a highly available cluster.</li><li>Install and configure HACMP for AIX or HACMP/ES in the following modes of operation:<ul><li>Cascading</li><li>Mutual Takeover</li><li>Cascading without Fallback</li><li>Rotating</li><li>Concurrent Access (optional)</li></ul></li><li>Perform basic system administration tasks for HACMP.</li><li>Perform basic customisation for HACMP.</li><li>Carry out problem determination and recovery.</li></ul>Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-87929670817129755252007-11-18T00:30:00.001+05:302007-11-18T00:30:43.645+05:30System management<a 
href="http://www-03.ibm.com/systems/clusters/software/csm/"><b>Cluster Systems Management (CSM) for AIX and Linux</b></a><br />CSM is designed to minimize the cost and complexity of administering clustered and partitioned systems by enabling comprehensive management and monitoring of the entire environment from a single point of control. CSM provides:<br /><br /> <ul><li>Software distribution, installation and update (operating system and applications)</li><li>Comprehensive system monitoring with customizable automated responses</li><li>Distributed command execution</li><li>Hardware control</li><li>Diagnostic tools</li><li>Management by group</li><li>Both a graphical interface and a fully scriptable command line interface</li></ul> In addition to providing all the key functions for administration and maintenance of distributed systems, CSM is designed to deliver the parallel execution required to manage clustered computing environments effectively. CSM supports homogeneous or mixed environments of IBM servers running AIX or Linux.<br /><br /> <p> </p> <a name="pssp"></a> <b>Parallel System Support Programs (PSSP) for AIX</b><br />PSSP is the systems management predecessor to Cluster Systems Management (CSM) and does not support IBM System p servers or AIX 5L™ V5.3 or above. 
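The distributed command execution listed among CSM's capabilities is provided by its dsh command. The following is a minimal sketch, not CSM's definitive interface: the node names are hypothetical, and the wrapper function exists only so the example stays harmless on machines without dsh installed.

```shell
# Illustrative CSM-style distributed command execution.
run_on_nodes() {
    nodes=$1
    shift
    if command -v dsh >/dev/null 2>&1; then
        dsh -n "$nodes" "$@"      # fan the command out to the listed nodes
    else
        echo "dsh unavailable here; would run on $nodes: $*"
    fi
}

run_on_nodes node01,node02 oslevel -r    # report each node's AIX level
run_on_nodes node01,node02 df -g /tmp    # check free space across nodes
```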
New cluster deployments should use CSM and existing PSSP clients with software maintenance will be transitioned to CSM at no charge.Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-61046315797106679782007-11-18T00:28:00.000+05:302007-11-18T00:29:04.236+05:30IBM System Cluster 1350<b>Reduced time to deployment</b> <br /><br />IBM HPC clustering offers significant price/performance advantages for many high-performance workloads by harnessing the advantages of low cost servers plus innovative, easily available open source software.<br /><br />Today, some businesses are building their own Linux and Microsoft clusters using commodity hardware, standard interconnects and networking technology, open source software, and in-house or third-party applications. Despite the apparent cost advantages offered by these systems, the expense and complexity of assembling, integrating, testing and managing these clusters from disparate, piece-part components often outweigh any benefits gained.<br /><br />IBM has designed the IBM System Cluster 1350 to help address these challenges. Now clients can benefit from IBM’s extensive experience with HPC to help minimize this complexity and risk. Using advanced Intel® Xeon®, AMD Opteron™, and IBM PowerPC® processor-based server nodes, proven cluster management software and optional high-speed interconnects, the Cluster 1350 offers the best of IBM and third-party technology. As a result, clients can speed up installation of an HPC cluster, simplify its management, and reduce mean time to payback.<br /><br />The Cluster 1350 is designed to be an ideal solution for a broad range of application environments, including industrial design and manufacturing, financial services, life sciences, government and education. 
These environments typically require excellent price/performance for handling high performance computing (HPC) and business performance computing (BPC) workloads. It is also an excellent choice for applications that require horizontal scaling capabilities, such as Web serving and collaboration.<br /><br /> <table border="0" cellpadding="0" cellspacing="0" width="443"><col width="218"> <col width="7"> <col width="218"> <tbody><tr> <td colspan="1" align="left" valign="top" width="218"> <b>Common features</b> </td> <td width="7"><img src="http://www.ibm.com/i/c.gif" alt="" border="0" height="1" width="7" /></td> <td colspan="1" align="left" valign="top" width="218"> <b>Hardware summary</b> </td> </tr> <tr><td colspan="3" height="3"><img src="http://www.ibm.com/i/c.gif" alt="" border="0" height="3" width="443" /></td></tr> <tr> <td colspan="1" align="left" valign="top" width="218"> <ul class="smalldotintable"><li class="smalldotintable"> <span class="small">Rack-optimized Intel Xeon dual-core and quad-core and AMD Opteron processor-based servers</span> </li><li class="smalldotintable"> <span class="small">Intel Xeon, AMD and PowerPC processor-based blades</span> </li><li class="smalldotintable"> <span class="small">Optional high capacity IBM System Storage™ DS3200, DS3400, DS4700, DS4800 and EXP3000 Storage Servers and IBM System Storage EXP 810 Storage Expansion</span> </li><li class="smalldotintable"> <span class="small">Industry-standard Gigabit Ethernet cluster interconnect</span> </li><li class="smalldotintable"> <span class="small">Optional high-performance Myrinet-2000 and Myricom 10g cluster interconnect</span> </li><li class="smalldotintable"> <span class="small">Optional Cisco, Voltaire, Force10 and PathScale InfiniBand cluster interconnects</span> </li><li class="smalldotintable"> <span class="small">Clearspeed Floating Point Accelerator</span> </li><li class="smalldotintable"> <span class="small">Terminal server and KVM switch</span> </li><li 
class="smalldotintable"> <span class="small">Space-saving flat panel monitor and keyboard</span> </li><li class="smalldotintable"> <span class="small">Runs with RHEL 4 or SLES 10 Linux operating systems or Windows Compute Cluster Server</span> </li><li class="smalldotintable"> <span class="small">Robust cluster systems management and scalable parallel file system software</span> </li><li class="smalldotintable"> <span class="small">Hardware installed and integrated in 25U or 42U Enterprise racks</span> </li><li class="smalldotintable"> <span class="small">Scales up to 1,024 cluster nodes (larger systems and additional configurations available—contact your IBM representative or IBM Business Partner)</span> </li><li class="smalldotintable"> <span class="small">Optional Linux cluster installation and support services from IBM Global Services or an authorized partner or distributor</span> </li><li class="smalldotintable"> <span class="small">Clients must obtain the version of the Linux operating system specified by IBM from IBM, the Linux Distributor or an authorized reseller</span> </li></ul> </td> <td width="7"><img src="http://www.ibm.com/i/c.gif" alt="" border="0" height="1" width="7" /></td> <td colspan="1" align="left" valign="top" width="218"> <ul class="smalldotintable"><li class="smalldotintable"> <span class="small">x3650—dual core up to 3.0 GHz, quad core up to 2.66 GHz</span> </li><li class="smalldotintable"> <span class="small">x3550—dual core up to 3.0 GHz, quad core up to 2.66 GHz</span> </li><li class="smalldotintable"> <span class="small">x3455—dual core up to 2.8 GHz</span> </li><li class="smalldotintable"> <span class="small">x3655—dual core up to 2.6 GHz</span> </li><li class="smalldotintable"> <span class="small">x3755—dual core up to 2.8 GHz</span> </li><li class="smalldotintable"> <span class="small">HS21—dual core up to 3.0 GHz, quad core up to 2.66 GHz</span> </li><li class="smalldotintable"> <span class="small">HS21 XM—dual core up to 3.0 GHz, quad core 
up to 2.33 GHz</span> </li><li class="smalldotintable"> <span class="small">JS21—2.7/2.6 GHz*; 2.5/2.3 GHz*</span> </li><li class="smalldotintable"> <span class="small">LS21—dual core up to 2.6 GHz</span> </li><li class="smalldotintable"> <span class="small">LS41—dual core up to 2.6 GHz</span> </li><li class="smalldotintable"> <span class="small">QS20—multi-core 3.2 GHz</span> </li><li class="smalldotintable"> <span class="small">Up to 64 storage nodes</span> </li></ul></td></tr></tbody></table>Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-44113553453695270892007-11-18T00:25:00.000+05:302007-11-18T00:27:20.994+05:30<h1>IBM System Cluster 1600</h1> <br /> <!-- END TITLE SECTION --> <table border="0" cellpadding="0" cellspacing="0" width="600"> <tbody><tr valign="top"> <td id="content" width="443"> <!-- Begin content cell -->IBM System Cluster 1600 systems are composed of IBM POWER5™ and POWER5+™ symmetric multiprocessing (SMP) servers running AIX 5L™ or Linux®. Cluster 1600 is a highly scalable cluster solution for large-scale computational modeling and analysis, large databases and business intelligence applications, and cost-effective datacenter, server and workload consolidation. 
Cluster 1600 systems can be deployed on Ethernet networks, InfiniBand networks, or with the IBM High Performance Switch and are typically managed with Cluster Systems Management (CSM) software, a comprehensive tool designed specifically to streamline initial deployment and ongoing management of cluster systems.<br /><br /><table border="0" cellpadding="0" cellspacing="0" width="443"> <tbody><tr valign="top"> <td width="218"> <table border="0" cellpadding="0" cellspacing="0" width="218"> <tbody><tr> <td colspan="2"><b>Common features</b></td> </tr> <tr><td colspan="2" height="5"><br /></td></tr> <tr> <td valign="top" width="11">· </td> <td valign="top"><span class="small">Highly scalable AIX 5L or Linux cluster solutions for large-scale computational modeling, large databases and cost-effective data center, server and workload consolidation</span></td> </tr> <tr><td colspan="2" height="3"><br /></td></tr> <tr> <td valign="top" width="11">· </td> <td valign="top"><span class="small">Cluster Systems Management (CSM) software for comprehensive, flexible deployment and ongoing management</span></td> </tr> <tr><td colspan="2" height="3"><br /></td></tr> <tr> <td valign="top" width="11">· </td> <td valign="top"><span class="small">Cluster interconnect options: industry standard 1/10Gb Ethernet (AIX 5L or Linux); IBM High Performance Switch (AIX 5L and CSM); SP Switch2 (AIX 5L and PSSP); 4x/12x InfiniBand (AIX 5L or SLES 9); or Myrinet (Linux)</span></td> </tr> <tr><td colspan="2" height="3"><br /></td></tr> <tr> <td valign="top" width="11">· </td> <td valign="top"><span class="small">Operating system options: AIX 5L Version 5.2 or 5.3, SUSE Linux Enterprise Server 8 or 9, Red Hat Enterprise Linux 4</span></td> </tr> <tr><td colspan="2" height="3"><br /></td></tr> <tr> <td valign="top" width="11">· </td> <td valign="top"><span class="small">Complete software suite for creating, tuning and running parallel applications: Engineering & Scientific Subroutine Library (ESSL), 
Parallel ESSL, Parallel Environment, XL Fortran, VisualAge C++</span></td> </tr> <tr><td colspan="2" height="3"><br /></td></tr> <tr> <td valign="top" width="11">· </td> <td valign="top"><span class="small">High-performance, highly available, highly scalable cluster file system: General Parallel File System (GPFS)</span></td> </tr> <tr><td colspan="2" height="3"><br /></td></tr> <tr> <td valign="top" width="11">· </td> <td valign="top"><span class="small">Job scheduling software to optimize resource utilization and throughput: LoadLeveler®</span></td> </tr> <tr><td colspan="2" height="3"><br /></td></tr> <tr> <td valign="top" width="11">· </td> <td valign="top"><span class="small">High availability software for continuous access to data and applications: High Availability Cluster Multiprocessing (HACMP™)</span></td> </tr> </tbody></table> </td> <td width="7"> </td> <td width="218"> <table border="0" cellpadding="0" cellspacing="0" width="218"> <tbody><tr> <td colspan="2"><b>Hardware summary</b></td> </tr> <tr><td colspan="2" height="5"><br /></td></tr> <tr> <td valign="top" width="11">· </td> <td valign="top"> <span class="small">Mix and match IBM POWER5 and POWER5+ servers:</span><br /> <table border="0" cellpadding="0" cellspacing="0"> <tbody><tr> <td width="10"> </td> <td valign="top" width="11">· </td> <td valign="top"><span class="small">IBM System p5™ 595, 590, 575, 570, 560Q, 550Q, 550, 520Q, 520, 510Q, 510, 505Q and 505</span></td> </tr> <tr> <td colspan="3" height="3"><br /></td> </tr> <tr> <td width="10"> </td> <td valign="top" width="11">· </td> <td valign="top"><span class="small">IBM eServer™ p5 595, 590, 575, 570, 550, 520, and 510</span></td> </tr> <tr> <td colspan="3" height="3"><br /></td> </tr> <tr> <td colspan="3" height="3"><br /></td> </tr> </tbody></table> </td> </tr> <tr><td colspan="2" height="3"><br /></td></tr> <tr> <td valign="top" width="11">· </td> <td valign="top"> <span class="small">Up to 128 servers or LPARs (AIX 5L or Linux 
operating system images) per cluster depending on hardware; higher scalability by special order</span></td> </tr> </tbody></table><br /> </td></tr></tbody></table> <br /><br /> <br /><!-- End content cell --> </td> <td width="7"> </td> <td id="right-nav" width="150"> <!-- Begin right column cell --> </td></tr></tbody></table>Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-38851644560897449642007-11-18T00:22:00.001+05:302007-11-18T00:22:20.954+05:30Very Useful Commands<pre>svmon<br />svmon -P <pid><br /><br />Further:<br />you can use the svmon command to monitor memory usage as follows:<br /><br />(A) #svmon -P -v -t 10 | more (will give top ten processes)<br />(B) #svmon -U -v -t 10 | more (will give top ten users)<br /><hr /><br />smit install requires "inutoc ." first. It'll autogenerate a .toc for you,<br />I believe, but if you later add more .bff's to the same directory, then<br />the inutoc . becomes important. 
It is, of course, a table of contents.<br /><hr /><br />dump -ov /dir/xcoff-file<br /><hr /><br />topas -P is useful # similar to top<br /><hr /><br />When creating really big filesystems, this is very helpful:<br />chlv -x 6552 lv08<br />Word on the net is that this is required for filesystems over 512M.<br /><br />esmf04m-root> crfs -v jfs -g'ptmpvg' -a size='884998144' -m'/ptmp2'<br />-A''`locale yesstr | awk -F: '{print $1}'`'' -p'rw' -t''`locale yesstr |<br />awk -F: '{print $1}'`'' -a frag='4096' -a nbpi='131072' -a ag='64'<br />Based on the parameters chosen, the new /ptmp2 JFS file system<br />is limited to a maximum size of 2147483648 (512 byte blocks)<br />New File System size is 884998144<br />esmf04m-root><br /><br />If you give a bad combination of parameters, the command will list<br />possibilities. I got something like this from smit, then seasoned<br />to taste.<br /><hr /><br />If you need files larger than 2 gigabytes in size, this is better.<br />It should allow files up to 64 gigabytes:<br />crfs -v jfs -a bf=true -g'ptmpvg' -a size='884998144' -m'/ptmp2'<br />-A''`locale yesstr | awk -F: '{print $1}'`'' -p'rw' -t''`locale yesstr |<br />awk -F: '{print $1}'`'' -a nbpi='131072' -a ag='64'<br /><hr /><br />Show version of SSP (IBM SP switch) software:<br />lslpp -al ssp.basic<br /><hr /><br />llctl -g reconfig - make LoadLeveler reread its config files<br /><hr /><br />oslevel (sometimes lies)<br />oslevel -r (seems to do better)<br /><hr /><br />lsdev -Cc adapter<br /><hr /><br />pstat -a looks useful<br /><hr /><br />vmo is for VM tuning<br /><hr /><br />On 1000BaseT, you really want this:<br />chdev -P -l ent2 -a media_speed=Auto_Negotiation<br /><hr /><br />Setting jumbo frames on en2 looks like:<br />ifconfig en2 down detach<br />chdev -l ent2 -a jumbo_frames=yes<br />chdev -l en2 -a mtu=9000<br />chdev -l en2 -a state=up<br /><hr /><br />Search for the meaning of AIX errors:<br /><a target="_blank" 
href="http://publib16.boulder.ibm.com/pseries/en_US/infocenter/base/eisearch.htm">http://publib16.boulder.ibm.com/pseries/en_US/infocenter/base/eisearch.htm</a> <hr /><br /><br />nfso -a shows AIX NFS tuning parameters; good to check on if you're<br />getting badcalls in nfsstat. Most people don't bother to tweak these,<br />though.<br /><hr /><br />nfsstat -m shows great info about the full set of NFS mount options<br /><hr /><br />Turn on path MTU discovery:<br />no -o tcp_pmtu_discover=1<br />no -o udp_pmtu_discover=1<br />TCP support is handled by the OS. UDP support requires cooperation<br />between OS and application.<br /><hr /><br />nfsstat -c shows rpc stats<br /><hr /><br />To check for software problems:<br />lppchk -v<br />lppchk -c<br />lppchk -l<br /><hr /><br />List subsystem (my word) status:<br />lssrc -a<br />mkssys<br />rmssys<br />chssys<br />auditpr<br />refresh<br />startsrc<br />stopsrc<br />traceson<br />tracesoff<br /><hr /><br />This starts sendmail:<br />startsrc -s sendmail -a "-bd -q30m"<br /><hr /><br />This makes inetd reread its config file. Not sure if it kills and<br />restarts or just HUP's or what:<br />refresh -s inetd<br /><hr /><br />lsps is used to list the characteristics of paging space.<br /><hr /><br />Turning off ip forwarding:<br />/usr/sbin/no -o ipforwarding=0<br /><hr /><br />Detailed info about a specific error:<br />errpt -a -jE85C5C4C<br />BTW, Rajiv Bendale tells me that errors are stored in NVRAM on AIX,<br />so you don't have to put time into replicating an error as often.<br /><hr /><br />Some or all of these will list more than one number. 
Trust the first,<br />not the second.<br /><br />lslpp -l ppe.poe<br />...should list the version of poe installed on the system<br /><br />Check on compiler versions:<br />lslpp -l vac.C<br />lslpp -l vacpp.cmp.core<br /><br />Check on the LoadLeveler version:<br />lslpp -l LoadL.full<br /><hr /><br />To check the bootlist, do: bootlist -o -m normal<br />To update the bootlist, do: bootlist -m normal hdisk* hdisk* cd* rmt*<br /><hr /><br />prtconf<br /><hr /><br />Run ssadiag against the drive and the adapter and it will tell you if it<br />fails or not. Then, if it's hot-pluggable, it can be replaced online.</pre>Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-54409014100910243762007-11-18T00:20:00.000+05:302007-11-18T00:21:02.939+05:30AIX Control Book CreationList the licensed program products lslpp -L<br />List the defined devices lsdev -C -H<br />List the disk drives on the system lsdev -Cc disk<br />List the memory on the system lsdev -Cc memory (MCA)<br />List the memory on the system lsattr -El sys0 -a realmem (PCI)<br />lsattr -El mem0<br />List system resources lsattr -EHl sys0<br />List the VPD (Vital Product Data) lscfg -v<br />Document the tty setup lscfg or smit screen capture F8<br />Document the print queues qchk -A<br />Document disk Physical Volumes (PVs) lspv<br />Document Logical Volumes (LVs) lslv<br />Document Volume Groups (long list) lsvg -l vgname<br />Document Physical Volumes (long list) lspv -l pvname<br />Document File Systems lsfs fsname<br />/etc/filesystems<br />Document disk allocation df<br />Document mounted file systems mount<br />Document paging space (70 - 30 rule) lsps -a<br />Document paging space activation /etc/swapspaces<br />Document users on the system /etc/passwd<br />lsuser -a id home ALL<br />Document users attributes /etc/security/user<br />Document users limits /etc/security/limits<br />Document users environments 
/etc/security/environ<br />Document login settings (login herald) /etc/security/login.cfg<br />Document valid group attributes /etc/group<br />lsgroup ALL<br />Document system wide profile /etc/profile<br />Document system wide environment /etc/environment<br />Document cron jobs /var/spool/cron/crontabs/*<br />Document skulker changes if used /usr/sbin/skulker<br />Document system startup file /etc/inittab<br />Document the hostnames /etc/hosts<br />Document network printing /etc/hosts.lpd<br />Document remote login host authority /etc/hosts.equivNilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0tag:blogger.com,1999:blog-939704762244810359.post-85684342546404925082007-11-18T00:18:00.000+05:302007-11-18T00:20:40.440+05:30What is a Hot Spare<h3 class="post-title entry-title"><a href="http://jayinux.blogspot.com/2007/11/1-m-1-i-1-m-1-i-1.html">What is an LVM hot spare?</a> </h3> <p>A hot spare is a disk or group of disks used to replace a failing disk. LVM marks a physical<br />volume missing due to write failures. It then starts the migration of data to the hot spare<br />disk.<br />Minimum hot spare requirements<br />The following is a list of minimal hot sparing requirements enforced by the operating<br />system.<br />- Spares are allocated and used by volume group<br />- Logical volumes must be mirrored<br />- All logical partitions on hot spare disks must be unallocated<br />- Hot spare disks must have at least equal capacity to the smallest disk already<br />in the volume group. Good practice dictates having enough hot spares to<br />cover your largest mirrored disk.<br />Hot spare policy<br />The chpv and the chvg commands are enhanced with a new -h argument. 
This allows you<br />to designate disks as hot spares in a volume group and to specify a policy to be used in the<br />case of failing disks.<br />The following four values are valid for the hot spare policy argument (-h):<br />y (lower case): Automatically migrates partitions from one failing disk to one spare<br />disk. From the pool of hot spare disks, the smallest one which is big<br />enough to substitute for the failing disk will be used.<br />Y (upper case): Automatically migrates partitions from a failing disk, but might use<br />the complete pool of hot spare disks.<br />n: No automatic migration will take place. This is the default value for a<br />volume group.<br />r: Removes all disks from the pool of hot spare disks for this volume group.<br />Synchronization policy<br />There is a new -s argument for the chvg command that is used to specify synchronization<br />characteristics.<br />The following two values are valid for the synchronization argument (-s):<br />y: Automatically attempts to synchronize stale partitions.<br />n: Does not automatically attempt to synchronize stale partitions. This is the<br />default value for a volume group.<br />Examples<br />The following command marks hdisk1 as a hot spare disk:<br /># chpv -hy hdisk1<br />The following command sets an automatic migration policy which uses the smallest hot<br />spare that is large enough to replace the failing disk, and automatically tries to synchronize<br />stale partitions:<br /># chvg -hy -sy testvg</p>Nilesh Patilhttp://www.blogger.com/profile/00563367415922748637noreply@blogger.com0
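The chpv/chvg examples above only run on an actual AIX host, but the hot-spare setup can be wrapped in a small script that validates the -h policy value before calling the real commands. This is a portable sketch: the function names and the echo-instead-of-execute wrappers are my own illustration, not part of AIX; only chpv and chvg themselves are the real commands.

```shell
#!/bin/sh
# Sketch of a hot-spare setup wrapper. On a real AIX host you would
# run the chpv/chvg lines directly instead of echoing them.

valid_hotspare_policy() {
    # The four values accepted by chvg -h, per the post above.
    case "$1" in
        y|Y|n|r) return 0 ;;
        *)       return 1 ;;
    esac
}

mark_hot_spare() {
    # On AIX: chpv -hy "$1"
    echo "chpv -hy $1"
}

set_hotspare_policy() {
    vg="$1" policy="$2" sync="$3"
    valid_hotspare_policy "$policy" || {
        echo "invalid -h value: $policy" >&2
        return 1
    }
    # On AIX: chvg -h"$policy" -s"$sync" "$vg"
    echo "chvg -h$policy -s$sync $vg"
}

mark_hot_spare hdisk1
set_hotspare_policy testvg y y
```

Running the sketch prints the two commands from the examples above; feeding it a policy value outside y/Y/n/r fails before anything would reach chvg.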