First, log into excl.ornl.gov via ssh.  Then log into the desired target system via ssh.  The person sponsoring your ExCL access should be able to tell you the name of the resource(s) you should attempt to use.
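
For example (a minimal sketch; the user name is a placeholder, and ankara.ftpn.ornl.gov is just one of the systems listed below):

<code bash>
# Step 1: log into the ExCL gateway.
ssh your_username@excl.ornl.gov

# Step 2: from the gateway shell, hop to the target system your
# sponsor named.
ssh ankara.ftpn.ornl.gov
</code>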

==== Software on ExCL Systems ====

In general, each ExCL system provides a similar software environment.  The x86_64-based systems run a Red Hat family OS distribution, usually CentOS or Fedora.

The easiest way to manage your environment on ExCL systems is to use the 'modules' command.  In your shell login script, either 'dot' or 'source' the appropriate initialization script from /opt/shared/sw/$PLAT/modules/default/init, where $PLAT is a GNU-style architecture-vendor-os triple (e.g., as produced by a GNU config.guess script).  Currently, most software is built for x86_64-unknown-linux-gnu, since most ExCL systems have that configuration.  Then, commands like 'module avail' show which software packages are available, and 'module load <pkg>' and 'module unload <pkg>' adjust your environment variables (e.g., PATH, LD_LIBRARY_PATH) to use, or stop using, the specified package.
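
A minimal bash setup might look like the following sketch.  The per-shell script name 'bash' under the init directory and the package name 'gcc' are illustrative assumptions; check the init directory contents and the 'module avail' listing for what is actually installed.

<code bash>
# In ~/.bashrc (or your shell's login script):
# initialize the modules system for the common platform triple.
PLAT=x86_64-unknown-linux-gnu
. /opt/shared/sw/$PLAT/modules/default/init/bash   # 'dot' the bash init script

# Then, interactively or in job scripts:
module avail        # list the packages available on this platform
module load gcc     # put the (hypothetical) gcc package on PATH, LD_LIBRARY_PATH, ...
module unload gcc   # ... and remove it again
</code>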
  
==== File Systems ====
  * Currently, there are no file system quotas enforced on user accounts.  Please do not abuse this freedom - if you do, we will be forced to implement storage quotas for some or all users.
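
A quick way to keep an eye on your own usage (a minimal sketch; the paths are illustrative, so point the commands at wherever your data actually lives):

<code bash>
du -sh ~    # total size of your home directory
df -h ~     # free space remaining on the file system holding it
</code>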
  
  
===== Systems =====
  
  * A 31-node **Linux Networx cluster** (Yoda1.ornl.gov) consisting of 32-bit Intel Xeon 2.6GHz processors, networked with Gigabit Ethernet and 4x SDR InfiniBand, which serves as a testbed for system software research, including operating systems and parallel file systems.
  * One dual-socket 2.6 GHz AMD Istanbul system with 16GB memory (ankara.ftpn.ornl.gov)
  * Two dual-socket 2.6 GHz AMD Shanghai systems (shanghai|peking.ftpn.ornl.gov)
  
  * Several variants of **GPU** accelerators that include
    * A system with two NVIDIA Tesla C2050 GPUs (newark.ftpn.ornl.gov)
    * AMD Evergreen Series (atlanta.ftpn.ornl.gov)
  * Three **Digilent Virtex-II Pro** FPGA Development System boards, with a variety of I/O ports, including USB and Ethernet.
  * A **Nallatech XtremeDSP** Development Kit with the Xilinx Virtex-II Pro FPGA and dual-channel high-performance ADCs and DACs.
  * An **AGEIA PhysX P1** PCI 128MB GDDR3 physics accelerator board.
  * Two **ClearSpeed Avalon** PCI boards, each capable of 100 GFLOPS.
  * A CUDA development machine with an NVIDIA 8600GT (athens.ftpn.ornl.gov); a quick toolchain check is sketched after this list.
  * Two **Cell Broadband Engine** (CBE) blade systems with dual 2.4GHz Cell processors, each with a 64-bit Power Architecture PPE core and eight SPE SIMD cores (cell0[01].ornl.gov).
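
A quick sanity check after logging into one of the NVIDIA nodes (a sketch; the 'cuda' module name is an assumption - check 'module avail' for the real one):

<code bash>
module load cuda    # hypothetical module name for the CUDA toolkit
nvcc --version      # confirm the CUDA compiler is on PATH
nvidia-smi          # show driver version and attached GPUs
</code>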
  
 