High performance cluster
A high-performance cluster is a set of computers designed to provide high computing capacity.
The approach is also known as cluster computing or network computing. The goal of a high-performance cluster is to share a computer's most valuable resource: processing power. High-performance clusters use their nodes to run concurrent computations.
This type of cluster allows applications to work in parallel, thus improving their performance.
The main reasons for using a high-performance cluster are:
- The size of the problem to be solved.
- The price of the machine needed to solve it.
With a cluster, it is possible to achieve computing capacity greater than that of a single machine costing more than all of the cluster's computers combined.
Examples of very cheap clusters are those built at some universities from personal computers discarded as "outdated", which manage to compete in computing capacity with very expensive supercomputers.
To achieve this computational capacity, the problem must be parallelizable, since clusters speed up processing by dividing the problem into smaller sub-problems and computing them on the nodes. If a problem does not have this characteristic, a cluster offers no advantage for its calculation.
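The divide-and-combine idea above can be sketched on a single machine with Python's standard library, using one worker process per simulated "node". This is an illustrative sketch, not cluster software: the function names (`partial_sum`, `cluster_sum`) and the choice of summing an integer range are assumptions made for the example.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Solve one sub-problem: the sum of the integers in [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

def cluster_sum(n, nodes=4):
    """Split the range [0, n) into one chunk per 'node', compute the
    chunks concurrently, and combine the partial results."""
    step = n // nodes
    chunks = [(i * step, (i + 1) * step if i < nodes - 1 else n)
              for i in range(nodes)]
    with Pool(nodes) as pool:           # each worker plays the role of a node
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # The parallel decomposition gives the same answer as the serial sum.
    print(cluster_sum(1_000_000))
```

The key property the sketch relies on is exactly the one the text describes: the problem decomposes into independent sub-problems whose results can be cheaply recombined.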
To parallelize a problem, special libraries are used, such as PVM (Parallel Virtual Machine) or MPI (Message Passing Interface). PVM is used especially in clusters with heterogeneous nodes (differing processor architectures, operating systems, and so on), possibly belonging to different network domains, while MPI is typically used on homogeneous clusters.
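Both PVM and MPI are built around explicit message passing: a coordinator scatters work to the nodes and gathers their partial results. The following is a minimal sketch of that pattern using only Python's standard library, not MPI itself; the names (`worker`, `scatter_gather`), the rank-0 coordinator convention, and the sum-of-squares workload are assumptions made for illustration.

```python
from multiprocessing import Process, Pipe

def worker(rank, conn):
    """Each 'node' receives its slice of the data, processes it, and
    sends the partial result back to the coordinator (rank 0)."""
    data = conn.recv()                            # analogous to a receive call
    conn.send((rank, sum(x * x for x in data)))   # analogous to a send call

def scatter_gather(values, nodes=3):
    """Coordinator scatters strided slices to the workers and gathers
    their partial results, mimicking the scatter/gather pattern."""
    pipes, procs = [], []
    for rank in range(nodes):
        parent, child = Pipe()
        p = Process(target=worker, args=(rank, child))
        p.start()
        parent.send(values[rank::nodes])          # scatter one slice per node
        pipes.append(parent)
        procs.append(p)
    total = sum(conn.recv()[1] for conn in pipes) # gather the partial results
    for p in procs:
        p.join()
    return total
```

In a real MPI program the pipes would be replaced by the library's communication primitives and the processes would run on physically separate nodes, but the structure of the computation is the same.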
One software package for high-performance clusters is OSCAR (Open Source Cluster Application Resources), distributed under the GPL license. This software runs on the Linux operating system. On Windows, Windows Compute Cluster Server 2003 serves a similar role.
There are different classifications or configurations for these high-availability environments, but the most common are the following:
Active/Active: In an active/active configuration, all servers in the cluster can run the same resources simultaneously. That is, the servers hold the same resources and can access them independently of the other servers in the cluster. If a node fails and becomes unavailable, its resources remain accessible through the other servers in the cluster. The following figure shows both servers active, providing the same service to different users. Clients access the service or resources transparently and are unaware that several servers form a cluster.
Active/Passive: A high availability cluster, in an active/passive configuration, consists of a server that owns the cluster resources and other servers that are capable of accessing those resources, but do not activate them until the owner of the resources is no longer available. The advantages of the active/passive configuration are that there is no service degradation and that services are only restarted when the active server stops responding. However, a disadvantage of this configuration is that passive servers do not provide any type of resources while they are on standby, making the solution less efficient than active/active type clustering.
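The active/passive failover behavior described above can be sketched as a small state machine: one node owns the service, and a standby activates only once the owner is marked as failed. This is a toy model for illustration; the class and method names (`FailoverCluster`, `handle`, `fail`) and the node labels are assumptions, and a real cluster would detect failure via heartbeats rather than an explicit call.

```python
class FailoverCluster:
    """Toy active/passive cluster: the first healthy node in the list
    owns the service; standbys take over only after the active node fails."""

    def __init__(self, nodes):
        self.nodes = list(nodes)   # nodes[0] starts as the active server
        self.failed = set()

    def active(self):
        """Return the current resource owner: the first node not marked down."""
        for node in self.nodes:
            if node not in self.failed:
                return node
        raise RuntimeError("no nodes available")

    def handle(self, request):
        """Serve a request from whichever node currently owns the resources."""
        return f"{self.active()} served {request}"

    def fail(self, node):
        """Mark a node as down (in practice, a lost heartbeat would do this)."""
        self.failed.add(node)

cluster = FailoverCluster(["primary", "standby"])
print(cluster.handle("req1"))   # the primary serves while it is healthy
cluster.fail("primary")
print(cluster.handle("req2"))   # the standby activates after failover
```

Note how the model also exposes the disadvantage mentioned above: the standby contributes nothing until the primary fails.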
Deployed on-premises, edge or in the cloud, HPC solutions are used for different purposes in multiple industries. Examples include:
Research Labs: HPC is used to help scientists find renewable energy sources, understand the evolution of our universe, predict and track storms, and create new materials.
Media & Entertainment: HPC is used to edit feature films, produce mind-blowing special effects, and broadcast live events around the world.
Oil and Gas: HPC is used to identify more accurately where new wells should be drilled and to help boost production from existing wells.