Facilities

Clusters

These HPC clusters are available to CIS faculty and their groups.

Sauron Cluster (GPU-based cluster)

A team of UD faculty received an NSF Major Research Instrumentation (MRI) award in 2009 for the purchase of an advanced GPU cluster. The award supported the acquisition in 2011 of a hybrid computing cluster with GPU-accelerated compute nodes. Sauron is a Graphics Processing Unit (GPU)-centered computational cluster from Dell consisting of 96 NVIDIA 2070 GPUs attached to 45 computational nodes. The cluster's 96 Fermi S2070 GPUs provide a total of 43,008 GPU cores, attached to 552 Intel Xeon and 96 AMD Opteron cores, with a combined total of over 3 TB of memory, all connected through 10 Gigabit Ethernet to 48 terabytes of high-speed SAS storage. The ratio of GPUs to CPUs can be changed selectively via web-based software, allowing anywhere from zero to sixteen GPUs to be allocated per node. The GPU-enabled capacity of the cluster supports the implementation and testing of HPC research involving multi-threaded GPU programming in scientific computing. The cluster also supports a large number of theoretical and experimental researchers at UD studying problems in the chemical sciences.
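
As a brief illustration of the kind of multi-threaded GPU programming the cluster supports, the sketch below is a hypothetical example (not part of Sauron's software stack) that uses the CUDA runtime API from C to enumerate the GPUs visible on a node. On Sauron, where the number of GPUs allocated to a node can range from zero to sixteen, such a check is a natural first step in a GPU job.

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Illustrative sketch: list the GPUs currently visible on this node.
     * Assumed build: nvcc gpuinfo.c -o gpuinfo (or gcc ... -lcudart). */
    int main(void)
    {
        int i, count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);

        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                    cudaGetErrorString(err));
            return 1;
        }
        printf("GPUs visible on this node: %d\n", count);

        for (i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* Fermi-generation 2070 parts expose 448 cores per GPU,
             * hence 96 x 448 = 43,008 GPU cores cluster-wide. */
            printf("GPU %d: %s, %d multiprocessors, %.1f GB memory\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }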

Chimera Cluster (High-end, low-power cluster)

The Chimera Cluster is NSF-funded infrastructure purchased under the leadership of Siegel at CIS. The cluster is a highly parallel computing system for scientific computation research and teaching at the University. The supercomputer contains over 3,000 processor cores and can be expanded as demand grows. It is used by University of Delaware researchers who share a common interest in scalable parallel computation to explore a variety of scientific applications. The high number of processor cores makes it possible to run algorithms that can scale to the largest process counts. Each of the 66 nodes comprises 48 cores, fits into one rack unit, and has 64 GB of RAM. Eight of the nodes have 128 GB of RAM for problems that do not scale well and require more memory. The nodes use the “HE” (high energy efficiency) version of the AMD Magny-Cours processor, providing high computing capability at low energy cost in keeping with green-computing standards. The Chimera supercomputer is used to optimize algorithms in a broad range of scientific applications, help identify limitations to scalability, and explore new ideas for scaling to hundreds of thousands of processors and for taking advantage of hybrid architectures. The system also motivates collaboration between faculty interested in application problems and faculty with expertise in computer science.
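
To make the idea of a scaling study concrete, the sketch below is a hypothetical example (not code from the Chimera project) that uses MPI from C to time a fixed slice of per-rank work and report the slowest rank. Re-running it at increasing process counts, e.g. 48 cores on one node and then across many nodes, is one simple way to see where an algorithm stops scaling.

    #include <stdio.h>
    #include <mpi.h>

    /* Illustrative scaling sketch: each rank performs a fixed amount of
     * work, and rank 0 reports the time taken by the slowest rank, which
     * determines the parallel runtime. */
    int main(int argc, char **argv)
    {
        int rank, size;
        long i, n = 100000000L;   /* per-rank work; an assumed problem size */
        double local = 0.0, gsum, t, tmax;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        t = MPI_Wtime();
        for (i = 0; i < n; i++)   /* stand-in for the real computation */
            local += 1.0 / (double)(n + i + 1);
        t = MPI_Wtime() - t;

        /* Slowest rank's time and the combined partial sums, at rank 0. */
        MPI_Reduce(&t, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        MPI_Reduce(&local, &gsum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("%d ranks: slowest rank took %.3f s (sum %.6f)\n",
                   size, tmax, gsum);

        MPI_Finalize();
        return 0;
    }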

Mills Cluster (Community high-end cluster)

Based on priorities expressed in the faculty Research Computing Task Force report, IT responded by rapidly developing and implementing its first high performance computing (HPC) Community Cluster plan. The Mills HPC cluster features 200 compute nodes with 5,136 processor cores, over 14 terabytes of RAM, and about 200 terabytes of disk space. Mills is being funded collaboratively by University researchers and IT. Fifty faculty stakeholders from 19 different UD departments or centers purchased the compute nodes, and IT funded the storage, switch, maintenance, and physical and staff infrastructure. Each stakeholder will have his or her own set of compute nodes, but collaborative queue management will also allow open access to unused cycles. Mills is the first in a series of high performance computing clusters IT plans to build collaboratively with UD faculty to improve research computing. IT plans to solicit interest in additional clusters every 12 months over the next five years, responding to increasing research needs at UD and to emerging HPC technologies.

For more information about UD’s support of high performance research computing, visit IT’s Research Computing website.

Network facilities

A team of faculty and IT personnel led by B. David Saunders at CIS received a significant NSF award in 2010 to renovate the campus network infrastructure and the wide-area network connection at the University of Delaware. This included adding a second external connection point to the campus network, improving external network capacity, providing higher-bandwidth connectivity, providing the capability to use dynamic network circuits, and upgrading campus network devices to enable multiple 10 Gigabit/sec connections from research centers and groups to the campus backbone.

The renovation has enabled research in a number of fields pursued at the University of Delaware, including astronomy and space physics, catalysis and energy research, critical zone studies in oceanography and marine biology, bioinformatics, genomics, protein modeling, and computer science research on networks.

In addition to providing infrastructure for research, the renovation has enhanced access to dynamic network and computing resources for graduate research training and education. The network renovation allows UD users to exploit the current trends of virtualization so that computing work is no longer tied to a particular lab or group of systems. The University is also using the enhanced network connectivity to make its computational and network resources available, for education and research, to other four- and two-year colleges in Delaware.

Research Facilities

Departmental:

The Department has extensive computing facilities that are devoted to the research needs of the faculty and graduate students. These facilities are administered cooperatively by the Department of Electrical & Computer Engineering and the Department of Computer and Information Sciences. Some specialized machines are used jointly by the two departments.

The ECE/CIS Joint Laboratory radiates from the central facilities center in Evans Hall. The computer datacenter is housed in an 1,100-square-foot area with raised floors and a temperature/humidity-controlled environment. Uninterruptible power systems and environmental/security monitoring help protect and maintain the systems housed there. The center connects an extensive network of approximately 2,000 ECE/CIS computers and other more specialized research facilities.

The center in Evans Hall houses all of the server, file storage, and networking facilities central to ECE/CIS research needs. It contains a centralized storage and backup (SAN) disk subsystem and a 10-Gigabit Ethernet network with internal and external connectivity that provides high performance, security, and monitoring. Wireless networks are deployed in all of the ECE/CIS buildings to support roaming and conference settings.

The central facility provides access to a number of other specialized facilities in the ECE/CIS lab, running Solaris x86, Linux, Windows, and MacOS. These include:

  • Research SiCortex mini-cluster systems supporting distributed-software research in a single computer chassis. Leveraging the MIPS core processors used in embedded systems, the SiCortex systems appear to software as a distributed cluster while using the electrical power of a typical workstation.
  • A Sun/Oracle ZFS storage server with 96 TB of disk space serving client machines and user file space.
  • A Sun/Oracle ZFS storage server with 128 TB of disk space serving online client and server data backup.
  • 10 Penguin/Dell/Sun multi-CPU servers, with up to 64 CPU cores and 128 GB of RAM per system, serving computational needs.
  • 6 Penguin/Dell/Sun multi-CPU servers with virtual machine (VM) support, providing over 100 VMs for a wide range of system needs.

For more information about the CIS/ECE support initiative, including activities, help desk, and facilities, visit the EECIS website.

University:

The ECE/CIS Department also makes use of University-wide facilities in its research and instructional programs. Much lower-division undergraduate instructional computing is done on University-wide facilities, though the Department also maintains a separate Academic network for specialized instruction.

The University of Delaware is a member of the MAGPI-Internet2 project and has a 5-to-10 Gbps connection from campus to this nationwide network research vehicle, which runs at speeds up to 100 Gbps.

In early 2012, the campus upgraded its redundant Internet links to Dense Wavelength Division Multiplexed (DWDM) technology. The links can support multiple 10 Gbps circuits and connect, in a co-location arrangement with the State of Delaware, to a high-speed ISP in Philadelphia, PA. At the same time, the Department upgraded its router connection with campus to support multiple 10 Gigabit Ethernet links.