====== Experimental Computing Lab (ExCL) ======

The Experimental Computing Lab (ExCL) was established in 2004 with the goal of providing application users and computer scientists with access to leading-edge computing systems. ExCL is managed by the [[http://ft.ornl.gov|Future Technologies Group]]. ExCL researchers investigate architectures such as multicore processors, Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs), Cell Broadband Engines (CBEs), and Multi-Threaded Array Processors (MTAPs). Most hardware is located in a large access-controlled server room at ORNL in the JICS/NICS building, near the Future Technologies Group.
  
===== Contacts =====
  
  * Director: Jeffrey Vetter ([[vetter@ornl.gov]])
  * General Help: [[excl-help@email.ornl.gov]]
  
===== Using ExCL Systems =====
==== Accessing ExCL Systems ====

First, log into excl.ornl.gov via ssh. Then log into the desired target system, also via ssh. The person sponsoring your ExCL access should be able to tell you the name of the resource(s) you should use.
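
As a minimal sketch, the two-hop login looks like the following (assuming standard OpenSSH; newark.ftpn.ornl.gov is used purely as an example taken from the systems list below, so substitute whichever resource your sponsor names):

<code bash>
# Step 1: log into the ExCL gateway (replace "username" with your ExCL user name)
ssh username@excl.ornl.gov

# Step 2: from the gateway, log into the target system
ssh newark.ftpn.ornl.gov
</code>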

==== Software on ExCL Systems ====

In general, each ExCL system provides a similar software environment. The x86_64-based systems run a Red Hat family OS distribution, usually CentOS or Fedora.

The easiest way to manage your environment on ExCL systems is to use the 'modules' command. In your shell login script, either 'dot' or 'source' the appropriate initialization script from /opt/shared/sw/$PLAT/modules/default/init, where $PLAT is a GNU-style architecture-vendor-os triple (e.g., as produced by a GNU config.guess script). Currently, most software is built for x86_64-unknown-linux-gnu since most ExCL systems have this configuration. Then, commands like 'module avail' can be used to see which software packages are available, and 'module load <pkg>' and 'module unload <pkg>' can be used to adjust your environment variables (e.g., PATH, LD_LIBRARY_PATH) to use the specified package.
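
For example, a minimal sketch of a bash login script using modules follows; the 'init/bash' file name and the 'cuda' package name are assumptions for illustration, so check the init directory and the 'module avail' output for what actually exists on your system:

<code bash>
# In ~/.bashrc -- set up the modules command for the common ExCL platform
PLAT=x86_64-unknown-linux-gnu
source /opt/shared/sw/$PLAT/modules/default/init/bash   # csh-family shells would use the csh init script instead

# Afterwards, interactively or in scripts:
module avail          # list the software packages available on this system
module load cuda      # hypothetical package name; adjusts PATH, LD_LIBRARY_PATH, etc.
module unload cuda    # revert those environment changes
</code>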

==== File Systems ====

  * Home directories are shared across most ExCL systems.
  * Commonly-used software is installed in /opt/shared/sw/$PLAT, where $PLAT is a GNU-style architecture-vendor-os triple (e.g., as produced by a GNU config.guess script). Currently, most software is built for x86_64-unknown-linux-gnu since most ExCL systems have this configuration.
  * Some compiler/tool software packages are shared and mounted under /opt. Look there to see what is available on the system that you are using.
  * There is a shared project file system mounted at /proj on most ExCL systems. Your ExCL sponsor should be able to tell you whether you are to use this project space.
  * Shared ExCL file systems are hosted by a storage system that uses RAID, but this storage is **not backed up**. If this concerns you, you are responsible for transferring your data to a more stable archiving system (see the sketch after this list).
  * Currently, there are no file system quotas enforced on user accounts. Please do not abuse this freedom; if you do, we will be forced to implement storage quotas for some or all users.
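
As referenced above, one minimal sketch of archiving results off ExCL, assuming rsync over ssh is available; the project directory and destination host are placeholders, not real ExCL names:

<code bash>
# Copy a results directory from ExCL project space to an external archive host
# "myproject" and "archive.example.org" are placeholders; use your own project
# directory and whatever archival system you have access to
rsync -av /proj/myproject/results/ username@archive.example.org:excl-results/
</code>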

===== Systems =====
==== Compute Servers ====
  
  * A 31-node **Linux Networx cluster** (Yoda1.ornl.gov) consisting of 32-bit Intel Xeon 2.6GHz processors, networked with Gigabit Ethernet and 4x SDR IB, which serves as a testbed for system software research including operating systems and parallel filesystems.
  * One dual-socket 2.6 GHz AMD Istanbul system with 16GB memory (ankara.ftpn.ornl.gov)
  * Two dual-socket 2.6 GHz AMD Shanghai systems (shanghai|peking.ftpn.ornl.gov)
  
  
==== I/O Servers for Parallel Filesystem Development ====
  
  * Dual-socket 2.3 GHz quad-core Intel Harpertown systems (iot0[1-5].ftpn.ornl.gov), with InfiniBand DDR+QDR and Chelsio 10GigE network
  
==== Emerging Architectures ====
  
  
  * Several variants of **GPU** accelerators, including:
    * A system with two NVIDIA Tesla C2050 GPUs (newark.ftpn.ornl.gov)
    * AMD Evergreen Series (atlanta.ftpn.ornl.gov)
  * Three **Digilent Virtex-II Pro** FPGA Development System boards, with a variety of I/O ports, including USB and Ethernet.
  * A **Nallatech XtremeDSP** Development Kit with the Xilinx Virtex-II Pro FPGA and dual-channel high-performance ADCs and DACs.
     ​     ​
==== Infrastructure ====
  
  * A 4.5 TB **Panasas ActiveStore storage system** (one shelf, two Director Blades and nine Storage Blades) serving home directories and project areas for ExCL systems
  
  
===== Retired Architectures =====
  
  * An **SRC-6C MAPstation** Reconfigurable Computing Platform pairing dual 2.8GHz Xeon processors with the Xilinx Virtex-II FPGA connected via DIMM slots.
  * An **AGEIA PhysX P1** PCI 128MB GDDR3 physics accelerator board.
  * Two **ClearSpeed Avalon** PCI boards, each capable of 100 GFLOPS.
  * A CUDA development machine with an NVIDIA 8600GT (athens.ftpn.ornl.gov)
  * Two **Cell Broadband Engine** (CBE) blade systems with dual 2.4GHz Cell processors, each with a 64-bit Power Architecture PPE core and eight SPE SIMD cores (cell0[01].ornl.gov)
  
 