Institute of Physical Chemistry "Ilie Murgulescu", Romanian Academy

High Performance Computing Group in Materials Science



Account

The ASSG infrastructure is not a Computing Center. However, any person (researcher, professor, or student) who works in a field of materials science and finds the hpc-icf cluster useful for their computations and research is welcome to request an account on the cluster. To do so, please write an e-mail to Dr. Viorel Chihaia.

You are kindly asked to submit a one-page summary of your project. In the e-mail, please state your name, your department, and your preferred username. You are also welcome to describe how you intend to use the cluster and to suggest software to be installed for your work. When the project is finished, you must write a short project report, which will be listed on our site.


Payment

  • Students are granted a limited amount of computing time free of charge. Student applications must be endorsed by a supervisor.
  • Colleagues from ICF are expected to contribute to the funding of the cluster from their own research contracts.
  • Any researcher or professor who is a partner of a member of hpc-icf in a scientific programme has free access to the cluster resources.
  • In other cases, please contact Dr. Viorel Chihaia.

How to connect 

Soon after you apply for computing time, an account will be opened for you with the username you indicated.


Login

Logging into UNIX-like systems is done through the secure shell (SSH); the SSH daemon is installed by default on all the nodes. You can log into any of the cluster's frontends from your local UNIX machine using the ssh command:

$ ssh username@fep.hpc-icf.ro


Depending on your local configuration, it may be necessary to use the -Y flag to enable trusted forwarding of graphical programs. Logging into one of the frontends from a local Windows machine is done with a Windows SSH client such as PuTTY.
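For example (this is standard OpenSSH usage, not specific to this cluster), to log in with trusted X11 forwarding enabled:

$ ssh -Y username@fep.hpc-icf.ro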


File Management

Every user of the cluster has a home directory on a shared filesystem within the cluster. This is

$HOME=/export/home/username.

Transferring files to Solaris or Linux from your local UNIX-like machine is done through the secure copy command scp, e.g.:

$ scp localfile username@fep.hpc-icf.ro:

$ scp -r localdirectory username@fep.hpc-icf.ro:

By default, scp copies files into your home directory. If you want to save a file under a different path, write the path after the ":".

Example:

$ scp localfile username@fep.hpc-icf.ro:your/relative/path/to/home

$ scp localfile username@fep.hpc-icf.ro:/your/absolute/path

Transferring files from Solaris or Linux to your local UNIX-like machine is done with scp as well, e.g.:

$ scp username@fep.hpc-icf.ro:path/to/file  /path/to/destination/on/local/machine

WinSCP is an scp client for Windows that provides a graphical file manager for copying files to and from the cluster.

NX Client is a solution for secure remote access, desktop virtualization, and hosted desktop deployment. It is available for both Linux and Windows.

If you want to download an archive from a link into your current directory, it is easier to use wget. Use Copy Link Location in your browser and paste the link as the parameter for wget.

Example:

$ wget http://link/for/download


Batch System

A batch system controls the distribution of tasks (batch jobs) to the available machines or resources. It ensures that the machines are not overbooked, so that programs execute optimally. If no suitable machine has available resources, the batch job is queued and will be executed as soon as resources become available. Compute jobs that are expected to run for a long time or to use a lot of resources should use the batch system in order to reduce the load on the frontend machines. This cluster uses Sun Grid Engine.

You may submit your jobs for execution on one of the available queues. Each queue has an associated environment. To display a summary of the queues:

$ qstat -g c [-q queue]

To submit a job for execution on the cluster, you have two options: either specify the command directly, or provide a script that will be executed. This behavior is controlled by the "-b y|n" parameter: "y" means the command may be a binary or a script, and "n" means it will be treated as a script. Some examples of submitting jobs (both binaries and scripts):

$ qsub -q [queue] -b y [executable]

e.g.:

     $ qsub -q queue_1 -b y /path/my_exec

$ qsub -pe [pe_name] [no_procs] -q [queue] -b n [script]

e.g.:

     $ qsub -pe pe_1 4 -q queue_1 -b n my_script.sh

To watch the evolution of a submitted job, use qstat. Running it without any arguments shows information only about the jobs submitted by you. To see the progress of all jobs, use the -f flag. You may also specify which queue's jobs you are interested in by using the -q [queue] parameter, e.g.:

$ qstat [-f] [-q queue]

Typing "watch qstat" will automatically run qstat every 2 seconds; to exit, type Ctrl-C. To delete a previously submitted job, invoke the qdel command, e.g.:

$ qdel [-f] [-u user_list] [job_range_list]

where:

-f - forces the action for running jobs

-u - specifies the users whose jobs will be removed. To delete the jobs of all users, use -u "*".
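For example, to delete one of your jobs by its ID (the ID here is hypothetical), or all of your own jobs:

$ qdel 1234

$ qdel -u username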

An example of submitting a job with SGE looks like this:

$ cat script.sh

#!/bin/bash

# run an executable located in the current working directory
`pwd`/my_exec

$ chmod +x script.sh

$ qsub -q queue_1 script.sh (you may omit -b and it will behave like -b n)
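qsub options can also be embedded in the script itself as #$ directives, so the job is submitted with a plain qsub call. A minimal sketch, reusing the hypothetical queue, parallel environment, and executable names from the examples above:

$ cat job.sh

#!/bin/bash
#$ -N my_job           # job name
#$ -q queue_1          # target queue
#$ -pe pe_1 4          # parallel environment and number of slots
#$ -cwd                # run the job in the directory it was submitted from
#$ -o my_job.out       # file for standard output
#$ -e my_job.err       # file for standard error
./my_exec

$ qsub job.sh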

To display the submitted jobs of all users (-u "*") or of a specified user, use:

$ qstat [-q queue] [-u user]

To display extended information about some jobs, use:

$ qstat -t [-u user]

To print detailed information about one job, use:

$ qstat -j job_id





