

Penguin Computing’s Public HPC Cloud Powered by Moab Cluster Suite

POD is an on-demand HPC offering that supports ‘bare metal’ execution of compute jobs

Penguin Computing, a provider of HPC cluster solutions, and Adaptive Computing, a provider of unified intelligent automation technology, announced on Tuesday that Penguin Computing has chosen Adaptive Computing’s Moab Cluster Suite to manage the workload on its HPC cloud offering, Penguin on Demand (POD).

POD is an on-demand HPC offering that supports ‘bare metal’ execution of compute jobs, effectively providing public access to a supercomputer for users who require compute capacity that is unavailable in-house. Unlike conventional, general-purpose cloud solutions such as Amazon’s EC2, compute jobs on POD run in an HPC-optimized environment free of the overhead introduced by virtualization layers. For every compute job, POD users can specify which resources are required for optimal execution, a flexibility not available on infrastructures based on the static allocation of virtual compute instances. The resource scheduler’s dynamic allocation of suitable compute nodes to submitted jobs is crucial to ensuring minimal job turnaround times and effective resource utilization.

“The workload on POD has increased drastically over the last six months. To accommodate the growing demand we have been continuously adding compute capacity. Eventually we reached the scalability and functionality limits of our scheduler. After evaluating a variety of resource scheduling solutions we decided to deploy Moab Cluster Suite® in our POD production environment. With the improved scalability of Moab Cluster Suite we can now accommodate the growing workload and implement more complex scheduling policies while offering multiple job submission interfaces for resource managers such as TORQUE or SGE. After the transition we feel comfortable that in the years to come we will be able to scale out further to accommodate POD’s future growth,” said Tom Coull, Vice President and General Manager of Software and Services at Penguin Computing.
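The per-job resource selection described above is typically expressed through batch-scheduler directives in a submission script handed to a resource manager such as TORQUE. As a minimal sketch (the job name, node counts, walltime, and solver binary are illustrative assumptions, not POD specifics), a TORQUE/PBS-style job script might look like:

```shell
#!/bin/sh
# Write out a hypothetical TORQUE/PBS job script illustrating
# per-job resource requests; all values below are assumptions.
cat > pod_job.sh <<'EOF'
#!/bin/sh
#PBS -N md_simulation        # job name (hypothetical)
#PBS -l nodes=4:ppn=8        # request 4 nodes, 8 cores each
#PBS -l walltime=02:00:00    # maximum run time for the job
cd "$PBS_O_WORKDIR"          # start in the submission directory
mpirun -np 32 ./solver input.dat
EOF
# On a real cluster this would be submitted with: qsub pod_job.sh
grep -c '^#PBS' pod_job.sh   # prints 3 (the three resource directives)
```

A scheduler such as Moab reads these directives from the resource manager and decides when and on which nodes the job runs, which is the dynamic allocation the article describes.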

Moab Cluster Suite is a policy-based intelligence engine that integrates scheduling, managing, monitoring, and reporting of cluster workloads. It guarantees that service levels are met while maximizing job throughput. Moab integrates with existing middleware for consolidated administrative control and holistic cluster reporting. Its graphical management interfaces and flexible policy capabilities result in decreased costs and increased ROI.

“For the last five years, Penguin Computing and Adaptive Computing have worked together to deliver powerful management solutions for traditional HPC clusters,” said Michael Jackson, president and COO of Adaptive Computing. “With this announcement, we are now extending that management to HPC cloud environments, which highlights Moab’s unmatched ability to manage heterogeneous environments at massive scale in the world’s largest computing installations. We are excited to partner with Penguin Computing to make its POD cloud a standard for delivering business value in unprecedented time to the HPC market.”

More Stories By Salvatore Genovese

Salvatore Genovese is a Cloud Computing consultant and an i-technology blogger based in Rome, Italy. He occasionally blogs about SOA, start-ups, mergers and acquisitions, open source and bleeding-edge technologies, companies, and personalities. Sal can be reached at hamilton(at)