Institute of Physical Chemistry "Ilie Murgulescu", Romanian Academy

Modeling and Simulation Group in Materials Science: www.hpc-icf.ro


Hardware

 

The hpc-icf cluster is an IBM-based blade cluster.

Racks (2 units):

  • IBM NetBay 42U frames (1U = 1.75"). They are the physical "boxes" that hold most of a cluster's other components:

    • Nodes of various types

    • Switch components

    • Other network components

    • Parallel file system disk resources (usually in separate racks)

  • Racks vary in size and appearance between the different Linux clusters at ASSG.

  • Power and console management - Racks include hardware and software that allow system administrators to perform most tasks remotely.

IBM BladeCenter H Chassis (5 units):

  • IBM BladeCenter H is a powerful platform built with the enterprise customer in mind, providing industry-leading performance, innovative architecture and a solid foundation for virtualization:

    • Delivers high performance to run the most demanding applications and simulations at blazing-fast speeds

    • Provides easy integration to promote innovation and help manage growth, complexity and risk

    • Protects your investment by being compatible with the entire IBM BladeCenter® family

  • Characteristics:
    • Height: 9U

    • Blade bays: 14

    • Power modules: 2900 W AC

    • I/O modules: 4 high-speed, 4 bridge; 10 Gigabit Ethernet and InfiniBand

Nodes (64+2 units):

  • The basic building block of a Linux cluster is the node. A node is essentially an independent PC. However, some important features of cluster nodes distinguish them from typical desktop machines:

    • Low form factor - Cluster nodes are very thin in order to save space; rack-mounted nodes typically have a form factor of approximately 1U (1U = 1.75"), and blade nodes are packed even more densely in their chassis.

    • Rack mounted - Nodes are mounted compactly in a drawer fashion to ease maintenance and reduce footprint.

    • Remote Management - There is no keyboard, mouse, monitor or other device typically used to interact with a desktop machine. All interaction with a cluster node takes place remotely over a network.

  • Nodes are typically configured into three types, according to their function:

    • Compute nodes - 64 nodes (blades) that run user jobs: IBM HS21 XM

      - The IBM BladeCenter® HS21 XM delivers optimal performance for enterprise environments with expanded memory and processor performance. This high-density blade server is supported in all IBM BladeCenter chassis and features low-voltage processors for better energy management.

      - Characteristics: 16 GB of memory per blade, IBM 146 GB SAS disk at 10,000 rpm, two high-performance dual-core or quad-core Intel® Xeon® processors, 4 integrated dual Gigabit Ethernet controllers (aggregate capacity across all 64 blades is worked out in a sketch after this hardware list)

    • Login node - The name of the frontend server is fep.hpc-icf.ro (IP: 193.231.132.66). This is where you log in for access to the compute nodes (a minimal connection sketch is given after this hardware list).

    • Storage node - The server fep.hpc-icf.ro (IP: 193.231.132.65) is dedicated to file serving. It connects the compute nodes to the essential file systems mounted on the disk storage devices, is used by the system administrator to manage the entire cluster, and is not accessible to users.

  • Processors:
    • Each node has two quad-core Intel Xeon processors with the following characteristics: frequency 2.0/2.5 GHz, 12 MB L2 cache, 1333 MHz FSB (a per-node verification sketch follows this hardware list).


    • The processor provides binary compatibility with applications running on previous members of Intel's IA-32 architecture, offers Hyper-Threading technology and enhanced branch prediction, and enables system support for up to 64 GB of physical memory.

  • Memory: 16 GB DDR FB-DIMM RAM, 667 MHz, ECC, CL5

  • Hard disk: IBM 146 GB SAS, 10,000 rpm

  • Storage equipment: 12 hot-swap SAS hard disks of 300 GB each, 15,000 rpm; support for RAID 0/1/5/10, Dynamic Volume Expansion, Dynamic RAID Level Migration, Dynamic Segment Size Migration (usable capacity under RAID is estimated in the sketch after this list)

  • UPS (4 units): 6U, rack-mountable, with support for 4 rack-mountable battery packs, 10,000 VA (8,000 W), 10 minutes of backup at a 5,000 W load

  • Switch (2 units): Linksys Gigabit Smart Switch, 48 ports at 10/100/1000 Mbps, 2 combo SFP ports

  • Console (1 unit): 17" TFT monitor, keyboard, mouse
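
Logging in to the frontend is an ordinary SSH session. The snippet below is a minimal sketch of doing this programmatically from Python; it assumes the third-party paramiko package is installed and that you already have a cluster account with key-based authentication configured (the username shown is a placeholder, not a real account). Most users will simply run ssh from a terminal instead.

    # Minimal sketch: open an SSH session to the frontend and run a command.
    # Assumes paramiko is installed and key-based authentication is set up;
    # "your_account" is a placeholder username, not a real account.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("fep.hpc-icf.ro", username="your_account")
    _, stdout, _ = client.exec_command("hostname && uptime")
    print(stdout.read().decode())
    client.close()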
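
The per-node figures quoted above (two quad-core Xeon processors, 16 GB of RAM) can be verified directly on a node. The sketch below assumes a Linux node exposing the standard /proc/cpuinfo and /proc/meminfo interfaces; it only reads those files and prints what it finds.

    # check_node.py - report logical CPU count and total memory of this node.
    # Assumes a Linux system with /proc/cpuinfo and /proc/meminfo.

    def cpu_count_from_proc():
        """Count 'processor' entries in /proc/cpuinfo (logical CPUs)."""
        with open("/proc/cpuinfo") as f:
            return sum(1 for line in f if line.startswith("processor"))

    def total_memory_gib():
        """Read MemTotal (in kB) from /proc/meminfo and convert to GiB."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / (1024 * 1024)
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    if __name__ == "__main__":
        # On a compute blade this should report 8 logical CPUs
        # (2 sockets x 4 cores) and roughly 16 GiB of memory.
        print(f"logical CPUs : {cpu_count_from_proc()}")
        print(f"total memory : {total_memory_gib():.1f} GiB")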
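
Taken together, the figures above give a rough picture of the aggregate capacity of the cluster. The back-of-the-envelope sketch below assumes the quad-core processor variant in all 64 compute blades and a single RAID 5 array spanning the 12 storage disks; both are illustrative assumptions, not a description of the actual configuration.

    # Illustrative aggregate-capacity estimate for the hpc-icf cluster.
    COMPUTE_BLADES = 64        # IBM HS21 XM compute nodes
    SOCKETS_PER_BLADE = 2      # two Xeon processors per blade
    CORES_PER_SOCKET = 4       # assuming the quad-core variant
    MEM_PER_BLADE_GB = 16      # 16 GB RAM per blade

    STORAGE_DISKS = 12         # 300 GB SAS disks in the storage equipment
    DISK_SIZE_GB = 300

    total_cores = COMPUTE_BLADES * SOCKETS_PER_BLADE * CORES_PER_SOCKET   # 512
    total_mem_gb = COMPUTE_BLADES * MEM_PER_BLADE_GB                      # 1024 GB

    # RAID 5 reserves the equivalent of one disk for parity (single-array assumption).
    raid5_usable_gb = (STORAGE_DISKS - 1) * DISK_SIZE_GB                  # 3300 GB raw

    print(f"compute cores : {total_cores}")
    print(f"total memory  : {total_mem_gb} GB")
    print(f"RAID 5 usable : {raid5_usable_gb} GB")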


Cooling System:
  • Liebert type Hiross HPM M66OA / 2xHCE 49:

    • total cooling capacity: 70.5 kW

    • cooling efficiency: 0.93 (65.7/70.5)

    • coefficient of performance: 3.35 (see the worked check after this list)

    • air flow rate: 13470 m³/h

    • electrical supply: 400 V (±10%), 3-phase, 50 Hz

    • fans: 2 units

    • working temperature range: −20 °C to +40 °C
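
The cooling figures can be cross-checked with a short calculation. This assumes the coefficient of performance is defined as total cooling capacity divided by the electrical power drawn by the unit, and that 65.7 kW is the net (sensible) cooling capacity:

    \text{cooling efficiency} = \frac{65.7\ \text{kW}}{70.5\ \text{kW}} \approx 0.93,
    \qquad
    \text{COP} = \frac{\dot{Q}_{\text{cooling}}}{P_{\text{el}}}
    \quad\Rightarrow\quad
    P_{\text{el}} \approx \frac{70.5\ \text{kW}}{3.35} \approx 21\ \text{kW}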


 

HPC Main Partners

Faculty of Automatic Control and Computers 

University Politehnica of Bucharest, Romania

National Institute of Materials Physics

Magurele, Romania


Institute of Catalysis

 Bulgarian Academy of Sciences

Sofia, Bulgaria

 



  Last modified: September 12 2009 12:14:48.
