Solutions

Bioinformatics & Life Sciences

Our supercomputers make ideal genomics core servers. Next-generation sequencing, fragment assembly and genome mapping demand large shared memory and supercomputing performance -- exactly what our Duets and Trios deliver.

Our Duets and Trios are called 'Departmental Supercomputers' because they are very affordable. What had previously cost over a million dollars -- large shared-memory High Performance Computers -- can now be obtained for an order of magnitude less. Your research department can now afford our Departmental Supercomputers with up to 3.0 TB of shared memory and 1.6 TeraFLOPS of supercomputing power dedicated to your researchers' needs.

Our 192-core Trio Departmental Supercomputer (1.6 TeraFLOPS peak theoretical) offers 1.5 TB or 3.0 TB of shared memory; our 128-core Duet Departmental Supercomputer (1 TeraFLOPS peak theoretical) offers 512 GB or 1 TB shared-memory configurations -- ideal for bioinformatics and life science applications and data sets.

Our Duet and Trio Departmental Supercomputers are true Symmetric Multi-Processing (SMP) supercomputers with large shared memory. They have the RAM your researchers need to run de novo sequencing, and their sequencing analyses can be executed in hours rather than weeks, months, or years.

To programmers, our Duet and Trio Departmental Supercomputers look just like a single huge-memory Linux box. Programmers can use standard threading packages to get access to all 128 or 192 cores and up to 1.0 TB or 3.0 TB of memory. With our Duets and Trios, programmers need not worry about Message Passing Interface (MPI) programming, which is what supercomputing clusters and other limited-memory systems demand. There's also no need to build complex file-access program components; programmers can just read a big dataset into memory and access it as an array.
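
As a rough illustration of that shared-memory programming model (a minimal sketch under our own assumptions, not code from our benchmark suite), the C/OpenMP example below loads an entire file into one in-memory array and scans it with threads. The input file name, the per-byte test, and the build command are hypothetical placeholders.

/*
 * Minimal sketch (illustrative only): load a whole dataset into RAM and
 * scan it with OpenMP threads on a single shared-memory system.
 * Assumed build: gcc -O2 -fopenmp scan_demo.c -o scan_demo
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "sequences.bin"; /* hypothetical input file */

    /* Read the entire dataset into one shared in-memory array. */
    FILE *fp = fopen(path, "rb");
    if (!fp) { perror("fopen"); return 1; }
    fseek(fp, 0, SEEK_END);
    long nbytes = ftell(fp);
    rewind(fp);

    unsigned char *data = malloc(nbytes);
    if (!data || fread(data, 1, nbytes, fp) != (size_t)nbytes) {
        fprintf(stderr, "could not load %s\n", path);
        return 1;
    }
    fclose(fp);

    /* Every thread sees the same array -- no MPI, no data partitioning. */
    long matches = 0;
    #pragma omp parallel for reduction(+:matches)
    for (long i = 0; i < nbytes; i++) {
        if (data[i] == 'G')            /* stand-in for a real per-record test */
            matches++;
    }

    printf("threads=%d bytes=%ld matches=%ld\n",
           omp_get_max_threads(), nbytes, matches);
    free(data);
    return 0;
}

Because every thread sees the same array, there is no data-partitioning step and no message passing to coordinate -- the loop simply spreads across however many cores are available.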

Because our Departmental Supercomputers are easier to program, your researchers need not be the computer science experts that supercomputing clusters demand. Instead, your researchers can focus more of their valuable time on their specialty – bioinformatics and the life sciences.

Bioinformatics and Life Sciences Benchmarks

The National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST) was executed on a Trio™ Departmental Supercomputer against the full nt and nr databases. Click here to download a PDF of NCBI BLAST Benchmarks with the Trio™ Departmental Computer →

HMMER3, the protein sequence analysis program commonly used for profile hidden Markov model database searches, was executed on a Trio™ Departmental Supercomputer. Click here to download a PDF of HMMER3 Benchmarks with the Trio™ Departmental Computer →

The molecular dynamics program NAMD was executed on a Trio™ Departmental Supercomputer. Click here to download a PDF of NAMD Benchmarks with the Trio™ Departmental Computer →

Engineering Simulations

Our supercomputers make ideal modeling and simulation servers.

Engineering simulation problems have grown in many dimensions. Increasing problem size and model complexity, the use of multiple solvers, and the added complexity of coupled analyses are all making engineering simulation datasets larger and more complex.

In Computer-Aided Engineering (CAE), the trend has often been toward adoption of Linux clusters and Distributed Memory Parallel (DMP) computing architecture to support larger and larger models. However, many solvers and solver coupling mechanisms just do not scale well in a distributed environment. What's needed is large shared-memory supercomputing.

Our Departmental Supercomputers are Symmetric Multi-Processing (SMP) high performance computers with large shared memory. They can maintain engineering simulation data in a single coherent data structure -- a requirement for some physics solvers and coupling mechanisms. Our Departmental Supercomputers can hold a large dataset in memory as a single structure, simplifying data loading. Typical examples are implicit FEA solvers such as NASTRAN, ANSYS, MARC, and ABAQUS. Another area that prefers SMP architectures is DEM, or Discrete Element Modeling, the use of particles to simulate materials.
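
As a hedged sketch of why DEM maps naturally onto shared memory (our illustrative example, not a production contact model), the C/OpenMP code below advances a set of particles one time step over a single coherent set of arrays. The particle count, contact stiffness, time step, and build command are hypothetical; real DEM codes use neighbor lists rather than this brute-force pair loop.

/*
 * Minimal sketch (illustrative only): a DEM-style particle step over one
 * shared, coherent data structure.
 * Assumed build: gcc -O2 -fopenmp dem_demo.c -o dem_demo -lm
 */
#include <math.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N       10000     /* particles (hypothetical) */
#define RADIUS  0.01
#define KSPRING 1.0e4     /* linear contact stiffness (hypothetical) */
#define DT      1.0e-4

static double x[N], y[N], vx[N], vy[N];   /* one coherent shared structure */

int main(void)
{
    /* Scatter particles in a unit box. */
    for (int i = 0; i < N; i++) {
        x[i] = drand48();
        y[i] = drand48();
        vx[i] = vy[i] = 0.0;
    }

    /* One time step: each thread reads the same shared position arrays and
     * writes only its own particles' velocities -- no halo exchange needed. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++) {
        double fx = 0.0, fy = 0.0;
        for (int j = 0; j < N; j++) {
            if (j == i) continue;
            double dx = x[i] - x[j], dy = y[i] - y[j];
            double r = sqrt(dx * dx + dy * dy);
            double overlap = 2.0 * RADIUS - r;
            if (overlap > 0.0 && r > 0.0) {     /* linear contact force */
                fx += KSPRING * overlap * dx / r;
                fy += KSPRING * overlap * dy / r;
            }
        }
        vx[i] += fx * DT;
        vy[i] += fy * DT;
    }
    /* Positions updated in a second pass so the reads above stay consistent. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++) {
        x[i] += vx[i] * DT;
        y[i] += vy[i] * DT;
    }

    printf("advanced %d particles one step on %d threads\n",
           N, omp_get_max_threads());
    return 0;
}

Keeping every particle in one shared set of arrays is exactly the pattern that distributed-memory clusters must break apart and re-stitch with messages; on an SMP machine it simply stays in place.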

Our Duets and Trios are called 'Departmental Supercomputers' because they are very affordable. What had previously cost over a million dollars -- large shared-memory High Performance Computers -- can now be obtained for an order of magnitude less. Your engineering department can afford our Departmental Supercomputers with up to 3.0 TB of shared memory and 1.6 TeraFLOPS of supercomputing power.

Our 192-core Trio Departmental Supercomputer (1.6 TeraFLOPS peak theoretical) offers 1.5 TB or 3.0 TB of shared memory; our 128-core Duet Departmental Supercomputer (1 TeraFLOPS peak theoretical) offers 512 GB or 1 TB shared-memory configurations -- ideal for engineering models and simulations.

To programmers, our Duet and Trio Departmental Supercomputers look just like a single huge-memory Linux box. Programmers can use standard threading packages to get access to all 128 or 192 cores and up to 1.0 TB or 3.0 TB of memory. With our Duets and Trios, programmers need not worry about Message Passing Interface (MPI) programming, which is what supercomputing clusters and other limited-memory systems demand. There's also no need to build complex file-access program components; programmers can just read a big dataset into memory and access it as an array.

Because our Departmental Supercomputers are easier to program, your engineers need not be the computer science experts that supercomputing clusters demand. Instead, your engineers can focus more of their valuable time on their specialty – mechanical engineering, chemical engineering, materials science, physics, manufacturing engineering and other disciplines.

Modeling/Simulation Documents

CEI installed, tested, and verified that EnSight runs on our Departmental Supercomputers. Click here to download a PDF of a joint CEI EnSight - Symmetric Computing brochure →

HPC Centers

Symmetric Computing's affordable supercomputers are ideal for universities and research centers.

Our Departmental Supercomputers have both the processing power and affordability needed by individual researchers, departments, or teams of researchers who prefer a dedicated supercomputing resource.

Our Departmental Supercomputers can also be deployed as a shared supercomputing resource. Many universities and research centers have High Performance Computing (HPC) centers with supercomputing resources for all researchers to share. Our Departmental Supercomputers make a great addition to any HPC center – even those whose supercomputing clusters have thousands of cores.

Most engineers, scientists, biologists, chemists, physicians, geologists and researchers like to focus on their particular specialty. They would rather not struggle with the programming complexities of distributed-memory clusters – particularly when their individual needs for supercomputing resources are typically intermittent. What most individual researchers and research teams want and need is an easy-to-program, large shared-memory supercomputing resource that they can draw upon for the few times they need it – that is, a supercomputing resource exactly like Symmetric Computing's Departmental Supercomputers.

When our Trio Departmental Supercomputers are deployed in HPC centers, Oracle Grid Engine, previously known as Sun Grid Engine (SGE), is often employed. SGE is an open-source batch-queuing system. It accepts, schedules, dispatches and manages the remote and distributed execution of large numbers of standalone, parallel or interactive user jobs. It manages and schedules the allocation of processors, memory, disk space and software licenses. Symmetric Computing's Departmental Supercomputers can be configured as:

  • an SGE workstation/execution host and SGE master host, providing an integrated resource management and job scheduling solution; or
  • an SGE workstation/execution host that is monitored and managed by an SGE master host on the local subnet.

Our Departmental Supercomputers appear to programmers as a single, large shared-memory Linux box with a single software image. If your HPC center desires, they can serve as a single node in a supercomputing cluster every bit as much as any other standalone server. However, we believe our Departmental Supercomputers are best used as an easy-to-program, large shared-memory supercomputer – what bioinformaticians need for genomic sequencing and engineers demand for coupled simulations.

Energy

Geologists and engineers rely on sophisticated computer simulations built from seismic data to reduce uncertainty in energy exploration. Their subsurface simulations are also critical for the efficient management of oil and gas reservoirs. By adding actual production histories to these 3D representations of the subsurface's fluid and rock properties, they can optimize wellhead production.

Optimal wellhead production depends on the quick execution of complex subsurface models with the most up-to-date information. Supercomputing clusters can execute these tasks, but many geologists and engineers find that the cluster is not immediately available. They often resort to whatever is the most powerful workstation (32 cores or fewer) in their department for the times when the cluster is unavailable. Even though a Symmetric Multi-Processing (SMP) workstation does not waste CPU cycles on message passing like a cluster, it can still take weeks to run a reservoir model. And many workstations – no matter how powerful – do not have sufficient shared memory for the big data simulations that some reservoir models require.

Symmetric Computing's Trio Departmental Supercomputer is ideal for optimizing wellhead production with up-to-date reservoir models – particularly for those many times that the supercomputing cluster is unavailable. With 192 cores (1.6 TFLOPS peak theoretical), our SMP Trio is powerful enough to run your reservoir models in days, not weeks. Our Trio also has the large shared memory (3.0 TB of RAM) for big data simulations, and it is priced within range of your department's budget.

Earth Science

Weather and climate dramatically affect human life. Global and local ecosystems play critical roles in shaping economies and infrastructures, and touch upon nearly every aspect of daily life, including food supplies, energy generation (e.g., wind, solar, hydro) and recreational activities.

Being able to accurately predict weather and climate is highly valued by both scientists and government leaders. High Performance Computing (HPC) has increasingly provided ever more accurate weather and climate predictions with earth system models. From performance to system and data management, climate, ocean and weather modeling present unique HPC challenges. The computational requirements for simulations of appropriate spatial and temporal scales can be immense and can require many cores to execute in practical time frames.

Some of the most intricate and complex earth system models provide extremely valuable predictive insight. However, these complicated models can only be executed on the most expensive supercomputing clusters, which are available only to deep-pocketed federal governments and Global 100 firms.

There are many earth system models – particularly for local ecosystems – that do not need all the power of the most expensive supercomputing cluster. Yet many scientists still try to run their models on the cluster, only to find themselves waiting long periods before the fully booked cluster can execute them. Many scientists can't afford to wait. Instead, they resort to whatever is the most powerful workstation (32 cores or fewer) in their department. Even though a Symmetric Multi-Processing (SMP) workstation does not waste CPU cycles on message passing like a cluster, it can still take weeks to run even a relatively straightforward earth system model. And many workstations – no matter how powerful – do not have sufficient shared memory for the big data simulations that many weather and climate models require.

Symmetric Computing's Trio Departmental Supercomputer is ideal for these earth system models – particularly for those many times that the supercomputing cluster is unavailable. With 192 cores (1.6 TFLOPS peak theoretical), our SMP Trio is powerful enough to run your earth system models in days, not weeks. Our Trio also has the large shared memory (3.0 TB of RAM) for big data simulations, and it is priced within range of your department's budget.

Financial Analysis

Our supercomputers make ideal financial analytical systems.

In financial analytics, speed and accuracy are paramount. We offer multiple-core, large shared-memory Symmetric Multi-Processing (SMP) high performance computers. Unlike distributed-memory supercomputing clusters, our SMP supercomputers do not split data sets among limited-memory cluster nodes and waste valuable time passing coordination messages. And unlike others, we employ state-of-the-art, fast and accurate memory: 1600 MHz ECC DDR3 Dual In-line Memory Modules (DIMMs).

Your analytics deserve the best, so our large shared-memory SMP supercomputers devote AMD's next-generation Opteron 6300 series processors – the world's first 16-core processors – to your analytics. With twelve of these next-generation AMD processors, our Trio Departmental Supercomputers provide 192 cores of supercomputing power. And for those times when your analytics do not need all those cores, our Trios utilize AMD's new TurboCORE technology, which automatically clocks our Trio's processors at higher speeds when fewer than half (i.e., 96 cores) are engaged. We also engage AMD's new Flex FP technology, which gives your analytics 256-bit-wide floating point capability – a next-generation level of floating-point performance.
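
As a hedged illustration of 256-bit-wide floating-point work (a minimal sketch of our own, not vendor sample code), the C example below performs a double-precision AXPY four values at a time using AVX compiler intrinsics. The array length, values, and build command are hypothetical.

/*
 * Minimal sketch (illustrative only): y = a*x + y computed 256 bits
 * (four doubles) at a time with AVX intrinsics.
 * Assumed build: gcc -O2 -mavx axpy_demo.c -o axpy_demo
 */
#include <immintrin.h>
#include <stdio.h>

#define N 1024   /* multiple of 4 so the vector loop needs no tail handling */

int main(void)
{
    static double x[N], y[N];
    const double a = 2.0;

    for (int i = 0; i < N; i++) { x[i] = i; y[i] = 1.0; }

    __m256d va = _mm256_set1_pd(a);           /* broadcast a into all 4 lanes */
    for (int i = 0; i < N; i += 4) {
        __m256d vx = _mm256_loadu_pd(&x[i]);  /* load 4 doubles (256 bits)    */
        __m256d vy = _mm256_loadu_pd(&y[i]);
        vy = _mm256_add_pd(_mm256_mul_pd(va, vx), vy);   /* a*x + y           */
        _mm256_storeu_pd(&y[i], vy);
    }

    printf("y[10] = %f (expect %f)\n", y[10], a * 10.0 + 1.0);
    return 0;
}

In practice, optimizing compilers will often generate this kind of 256-bit vector code automatically from a plain loop, so analysts rarely need to write intrinsics by hand.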

Our Trio is called a 'Departmental Supercomputer' because it is very affordable. What had previously cost over a million dollars -- large shared-memory High Performance Computers -- can now be obtained for an order of magnitude less. With the new AMD next-generation Opteron 6300 series processors, our Trio provides 1.6 TeraFLOPS (double precision peak theoretical) of supercomputing power and 3.0 TB of shared memory.

To programmers, our Trio Departmental Supercomputer looks just like a single huge-memory Linux box. Programmers can use standard threading packages to get access to all 192 cores and 3.0 TB of memory. With our Trios, programmers need not worry about Message Passing Interface (MPI) programming, which is what supercomputing clusters and other limited-memory systems demand. There's also no need to build complex file-access program components; programmers can just read a big dataset into memory and access it as an array.
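
As a hedged sketch of that in-memory style applied to financial data (our illustrative example; the synthetic price series, its size, and the build command are hypothetical stand-ins for a real market data load), the C/OpenMP code below computes log-return statistics over one large shared array using thread reductions.

/*
 * Minimal sketch (illustrative only): threaded analytics over one large
 * in-memory price series using OpenMP reductions -- no MPI, no partitioned
 * files.
 * Assumed build: gcc -O2 -fopenmp returns_demo.c -o returns_demo -lm
 */
#include <math.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long n = 10000000L;                  /* 10M ticks (hypothetical)   */
    double *price = malloc(n * sizeof *price); /* whole series in shared RAM */
    if (!price) { fprintf(stderr, "out of memory\n"); return 1; }

    /* Synthetic random-walk prices stand in for a loaded market data file. */
    price[0] = 100.0;
    srand48(42);
    for (long i = 1; i < n; i++)
        price[i] = price[i - 1] * (1.0 + 0.0001 * (drand48() - 0.5));

    /* Every thread reads the same shared array; reductions combine results. */
    double sum = 0.0, sumsq = 0.0;
    #pragma omp parallel for reduction(+:sum, sumsq)
    for (long i = 1; i < n; i++) {
        double r = log(price[i] / price[i - 1]);   /* log return */
        sum   += r;
        sumsq += r * r;
    }

    double mean = sum / (double)(n - 1);
    double var  = sumsq / (double)(n - 1) - mean * mean;
    printf("threads=%d mean=%.3e stdev=%.3e\n",
           omp_get_max_threads(), mean, sqrt(var));
    free(price);
    return 0;
}

The entire series lives in one shared allocation, so adding cores only changes how the loop is divided, not how the data is stored or moved.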

Because our SMP supercomputers are easier to program, it is easier to keep your financial analytics state-of-the-art. We do not require that your financial analysts be the computer science experts that supercomputing clusters demand. Instead, we let your analysts focus their valuable time on their specialty and, more importantly, on making your firm more profitable.