Cell (microprocessor)

The Cell B.E. processor on a PlayStation 3 motherboard.

Cell is a microprocessor architecture developed jointly by Sony Computer Entertainment, Toshiba, and IBM, in an alliance known as "STI". The architecture design and its first implementation were carried out at the STI Design Center in Austin, Texas, over a total period of four years beginning in March 2001, using a budget of $400 million according to IBM.

Cell is short for Cell Broadband Engine Architecture, also abbreviated CBEA or Cell BE. Cell combines a general-purpose PowerPC core of modest performance with streamlined coprocessing elements that greatly accelerate multimedia and vector processing applications, as well as other forms of dedicated computation.

The Cell's first major commercial application was Sony's PlayStation 3 video game console. The processor is also found in dual-Cell servers, dual-configuration Cell blades (self-contained server modules), PCI Express accelerator cards, and HDTV adapters.

Unique features such as the Rambus XDR memory subsystem and the Element Interconnect Bus (EIB) appear to position the Cell well for future applications in the field of supercomputing, where it can exploit the processor's floating-point performance. IBM has announced plans to incorporate Cell processors as expansion cards in its IBM System z9 mainframes, so that they can be used as servers for massively multiplayer online role-playing games (MMORPGs).

In November 2006, David A. Bader of the Georgia Institute of Technology was chosen by Sony, Toshiba, and IBM from more than a dozen universities to lead the first STI Center of Competence for the Cell Processor. This alliance aims to create a community of programmers and broaden industry support for the Cell processor. A video tutorial on programming the Cell is available to the public.

History

Peter Hofstee, one of the chief architects of the Cell.

In 2000, Sony Computer Entertainment, Toshiba Corporation, and IBM formed an alliance (STI) to design and manufacture the processor. The STI Design Center opened in March 2001. The Cell was in the design phase for four years, using improved versions of the design tools used with the POWER4 processor. Around 400 engineers from the three companies worked in Austin, with close support from eleven of IBM's design centers.

During this period, IBM filed several patents covering the Cell architecture, its manufacturing process, and its software environment. The first version of the Broadband Engine patent showed a chip package containing four "Processing Elements", the patent's term for what is now known as the Power Processing Element. Each Processing Element contained eight arithmetic processors, corresponding to the SPEs of the current Broadband Engine chip. This package was speculated to run at 4 GHz; with 32 arithmetic processing units each providing 32 GFLOPS, the Broadband Engine would have delivered a teraflop of raw computing power.

In March 2007, IBM announced that the 65 nm version of the Cell BE was in production at its East Fishkill, New York facility.

In February 2008, IBM announced its intention to begin manufacturing Cell processors with a 45 nm process.

Marketing

On May 17, 2005, Sony Computer Entertainment confirmed some of the specifications of the Cell processor that would ship in the forthcoming PlayStation 3 game console. This Cell configuration includes one Power Processing Element (PPE) core, along with eight Synergistic Processing Elements (SPEs) on the die. On the PlayStation 3, one of the eight SPEs is disabled to improve manufacturing yield, and another is reserved for the operating system, leaving six SPEs free to run game code. The clock speed at launch is 3.2 GHz. The first design was manufactured on a 90 nm SOI process, with initial production handled by IBM's East Fishkill fab.

The relationship between cores and threads is a common source of confusion. The PPE is capable of running two threads of execution and appears to software as two logical processors, while each active SPE appears as one. In the PlayStation 3 configuration, as described by Sony, the Cell processor therefore provides nine threads of execution.

On June 28, 2005, IBM and Mercury Computer Systems announced an agreement to produce Cell-based systems for embedded applications such as medical imaging, industrial inspection, aerospace processing, defense applications, seismic detection and also for telecommunications. Mercury has since marketed blades, conventional rack servers and PCI-Express accelerators with Cell processors.

In the fall of 2006, IBM released the QS20 blade module, using dual Cell BE processors to provide very high performance in certain types of applications, reaching a peak of 410 GFLOPS per module. The QS22 modules make up the IBM Roadrunner supercomputer, which came online in 2008. Mercury and IBM use the full Cell processor, with all 8 SPEs active.

Analysis

The Cell Broadband Engine – or more commonly Cell – is a microprocessor designed to fill the gap between conventional desktop processors (such as the Athlon, Pentium, and PowerPC families) and high-performance specialized graphics processors (GPUs) from NVIDIA and ATI Technologies. Its full name indicates its intended use, mainly as a component in present and future digital distribution systems. As such, it can be used in high-definition displays and recording equipment, as well as computer entertainment systems for the HDTV era. Additionally, the processor may be appropriate for digital imaging systems (medical, scientific, etc.), as well as physical simulations (e.g., scientific or structural engineering modeling).

In a simple analysis, the Cell processor can be broken down into four parts:

  • the external input/output structures;
  • the main processor, called the Power Processing Element (PPE): a two-way simultaneous multithreaded core implementing version 2.03 of the Power ISA (Instruction Set Architecture);
  • eight fully functional coprocessors, called Synergistic Processing Elements, or SPEs;
  • and a specialized high-bandwidth circular data bus connecting the PPE, the I/O elements, and the SPEs, called the "interconnect bus" or Element Interconnect Bus (EIB).

To achieve the high performance required for intensive math tasks, such as decoding or encoding MPEG streams, generating or transforming 3D data, or performing Fourier analysis of data, the Cell processor brings together SPE and PPE through the EIB to provide them with access to both main memory and external storage devices.

The PPE, which is capable of running a conventional operating system, has control over the SPEs and can start, stop, and schedule processes running on them. For this purpose, it has additional instructions for SPE control. Despite being Turing-complete, the SPEs are not fully autonomous and require commands from the PPE before they can perform any useful work. However, most of the “horsepower” comes from the synergistic processing elements.

The PPE and bus architecture include several modes of operation that provide different levels of memory protection, allowing certain areas of memory to be protected from access by specific processes running on the SPEs or the PPE.

Both the PPE and SPE architectures are RISC-based, with fixed-width 32-bit instructions. The PPE contains a 64-bit general-purpose register file (GPR), a 64-bit floating-point register file (FPR), and a 128-bit AltiVec register file. The SPE contains only 128-bit registers. These can be used for various scalar data types ranging from 8 to 128 bits in size or, for SIMD computations, in a variety of integer and floating-point formats.
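As an illustrative sketch (not Cell-specific code), the way one 128-bit quadword can be viewed at different lane widths can be mimicked with Python's `struct` module: the same 16 bytes reinterpreted as floats, words, halfwords, or bytes.

```python
import struct

# A 128-bit SPE-style register is just 16 bytes; how software views it
# depends on the lane width chosen. Pack four 32-bit floats, then
# reinterpret the same 16 bytes at other lane sizes.
reg = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)
assert len(reg) == 16  # one quadword = 128 bits

as_floats = struct.unpack("<4f", reg)    # 4 x 32-bit single precision
as_words  = struct.unpack("<4I", reg)    # 4 x 32-bit integers
as_halfs  = struct.unpack("<8H", reg)    # 8 x 16-bit integers
as_bytes  = struct.unpack("<16B", reg)   # 16 x 8-bit integers

print(as_floats)  # (1.0, 2.0, 3.0, 4.0)
```

The register contents never change; only the software's interpretation of the lanes does, which is the essence of using one wide register file for many data types.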

Memory addresses for both the PPE and the SPEs are expressed as 64-bit values, giving a theoretical address range of 2^64 bytes (16,777,216 terabytes). In practice, not all of these bits are implemented in hardware; even so, the address space is extremely large. The local storage addresses of the SPEs are expressed as 32-bit words. In Cell documentation, the term word always refers to 32 bits, doubleword to 64 bits, and quadword to 128 bits.

PowerXCell 8i

In 2008, IBM announced a revised variant of the Cell called the PowerXCell 8i, available in the IBM QS22 blade servers. The PowerXCell is built on a 65 nm process and adds support for up to 32 GB of DDR2 memory, as well as dramatically improved double-precision floating-point performance on the SPEs, raising peak throughput from about 12.8 GFLOPS to 102.4 GFLOPS total across the eight SPEs. The IBM Roadrunner supercomputer, currently the second fastest in the world, consists of 12,240 PowerXCell 8i processors along with 6,562 AMD Opteron processors. Besides the QS22 and Roadrunner, the PowerXCell processor is also available as a PCI Express accelerator card and is used as the core processor in the QPACE project.

Influences and contrasts

In some respects the Cell system resembles early Seymour Cray designs, but in reverse. The famous CDC 6600 used a single, very fast processor to handle the mathematics, while ten slower peripheral processors ran smaller programs to keep main memory fed with data.

On the Cell this problem is reversed: reading in data is no longer the bottleneck, thanks to the compressed encodings used throughout the industry. Today the problem is decoding that data into progressively less compressed formats as quickly as possible.

Modern graphics cards have elements much like the SPEs, known as shader units, with associated high-speed memory. Programs known as shaders are loaded into these units to process the input data stream provided by earlier stages (possibly the CPU), according to the required operations.

The main differences are that Cell's SPEs are much more general purpose than shader units, and the ability to chain multiple SPEs under program control offers much greater flexibility, allowing the Cell to handle graphics, sound or any other workload.

Architecture

Diagram of the Cell processor.

Although the Cell chip can come in a variety of configurations, the most basic is a multicore chip composed of one "Power Processor Element" (PPE), sometimes also called a "Processing Element" (PE), and several "Synergistic Processing Elements" (SPEs). The PPE and the SPEs are interconnected via an internal high-speed bus called the "Element Interconnect Bus" (EIB).

Due to the nature of its target applications, the Cell is optimized for single-precision floating-point computation. The SPEs are capable of double-precision computation, but at a noticeable performance penalty. One way around this in software is iterative refinement, in which values are computed in double precision only when necessary. Jack Dongarra and his team have publicly demonstrated a 3.2 GHz Cell with 8 SPEs delivering performance close to 100 GFLOPS on a standard 4096x4096 Linpack run with double-precision data.
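The mixed-precision idea can be sketched in a few lines: solve cheaply in single precision, then compute the residual in double precision and correct. The sketch below is illustrative only (a tiny 2x2 system solved by Cramer's rule, with single precision emulated by rounding through `struct`); it is not Cell code.

```python
import struct

def f32(x):
    # Round a Python float (double) to single precision, emulating a
    # fast single-precision unit such as the SPE's.
    return struct.unpack("<f", struct.pack("<f", x))[0]

def solve2x2_single(A, b):
    # Cramer's rule with every intermediate rounded to single precision.
    det = f32(f32(A[0][0] * A[1][1]) - f32(A[0][1] * A[1][0]))
    x0 = f32(f32(f32(b[0] * A[1][1]) - f32(b[1] * A[0][1])) / det)
    x1 = f32(f32(f32(A[0][0] * b[1]) - f32(A[1][0] * b[0])) / det)
    return [x0, x1]

def refine(A, b, iters=3):
    x = solve2x2_single(A, b)            # cheap single-precision solve
    for _ in range(iters):
        # Residual in double precision, correction solved in single.
        r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        d = solve2x2_single(A, r)
        x = [x[i] + d[i] for i in range(2)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = refine(A, b)
# The exact solution of this system is x = [1/11, 7/11]; after a few
# refinement steps the result is accurate to double precision.
```

The point is that only the cheap inner solve runs at single precision; the residual computation, done in double precision, steadily removes the rounding error.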

Power Processor Element

The PPE is a two-way multithreaded core based on the Power Architecture that acts as the controller for the eight SPEs, which handle most of the computational load. The PPE can run conventional operating systems thanks to its similarity to other 64-bit PowerPC processors, while the SPEs are designed for vectorized floating-point code execution.

The PPE contains a 32 KB level 1 instruction cache and a 32 KB level 1 data cache, as well as a 512 KB level 2 cache. Additionally, IBM has incorporated an AltiVec unit, which is fully pipelined for single-precision floating-point data.

Each PPU can complete two double-precision operations per clock cycle, which translates to a performance of 6.4 GFLOPS at 3.2 GHz.

Synergistic Processing Elements (SPE)

Diagram of the PPE.
Diagram of the SPE.

Each SPE is made up of a "Synergistic Processing Unit" (SPU) and a "Memory Flow Controller" (MFC), which provides DMA, MMU, and bus-interface functions. An SPE is a RISC processor with a 128-bit SIMD organization for single- and double-precision instructions. In the current generation of Cell, each SPE contains 256 KB of embedded SRAM for instructions and data, called "local storage" (not to be confused with "local memory", which in Sony's documentation refers to VRAM); it is visible to the PPE and can be addressed directly by software. Each SPE can address up to 4 GB of local storage memory.

Local storage does not operate like a conventional CPU cache: it is neither transparent to software nor does it contain hardware structures that predict which data to load. Each SPE contains a 128-entry register file of 128-bit registers and measures 14.5 mm² on a 90 nm process. An SPE can operate on sixteen 8-bit integers, eight 16-bit integers, four 32-bit integers, or four single-precision floating-point numbers in a single clock cycle, as well as perform a memory operation. Note that the SPU cannot address system memory directly: 64-bit virtual addresses formed on the SPU must be passed to the Memory Flow Controller (MFC) to set up a DMA operation within the system memory space.
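The lane-parallel style of operation described above can be mimicked in plain Python, purely as an illustration: one "instruction" below performs sixteen independent 8-bit additions on a 128-bit quadword, with wraparound per lane and no carry between lanes.

```python
def simd_add_u8(a, b):
    # Emulate one SPE-style SIMD operation: sixteen independent 8-bit
    # unsigned adds performed "at once" on a 16-byte quadword, with
    # per-lane wraparound (no carry propagates between lanes).
    assert len(a) == len(b) == 16
    return bytes((x + y) & 0xFF for x, y in zip(a, b))

a = bytes(range(16))        # lanes 0..15
b = bytes([250] * 16)       # add 250 to every lane
print(list(simd_add_u8(a, b)))  # lanes 6..15 wrap around modulo 256
```

Real SPE code would do this with a single vector instruction on a 128-bit register; the Python loop only models the per-lane semantics.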

In a typical usage scenario, the system loads small programs (similar to threads) onto the SPEs, chaining them together so that each one handles one step in a complex operation. For example, a set-top box could load programs for reading a DVD, decoding audio and video, and driving the display, and the data would pass from SPE to SPE until it finally reached the television. Another possibility is to partition the input data and have several SPEs run the same task in parallel. At 3.2 GHz, each SPE provides a theoretical performance of 25.6 GFLOPS on single-precision data.
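The 25.6 GFLOPS figure follows from simple arithmetic, sketched below under the usual accounting assumptions (four single-precision lanes per cycle, a fused multiply-add counted as two floating-point operations):

```python
# Back-of-the-envelope check of the quoted peak figure (illustrative).
clock_ghz = 3.2
flops_per_cycle = 4 * 2            # 4 SP lanes, FMA counts as 2 flops
spe_gflops = clock_ghz * flops_per_cycle
print(spe_gflops)                  # 25.6 GFLOPS per SPE
print(8 * spe_gflops)              # ~204.8 GFLOPS across eight SPEs
```

As with all peak numbers, this assumes an uninterrupted stream of fused multiply-adds with no stalls.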

Compared with a modern personal computer, the relatively high floating-point performance of the Cell processor seems to dwarf the capabilities of the SIMD units in desktop processors such as the Pentium 4 and Athlon 64. However, comparing only the floating-point capability of a system is a one-dimensional and strongly application-specific measure. Unlike a Cell processor, desktop CPUs are better suited to the general-purpose software usually run on personal computers. In addition to executing multiple instructions per clock cycle, Intel and AMD processors provide branch prediction. The Cell is designed to compensate for this with compiler assistance, in which prepare-to-branch instructions are created. For double-precision data, as often used in personal computers, the Cell's performance drops considerably, but still reaches 12.8 GFLOPS.

Recent tests by IBM show that SPEs can achieve 98% of their theoretical maximum using parallel matrix multiplication.

Toshiba has developed a coprocessor powered by four SPEs and no PPE, called the SpursEngine, designed to accelerate 3D and video effects in consumer electronics.

Element Interconnect Bus (EIB)

The EIB is a communication bus internal to the Cell processor that connects the various on-chip system elements: the PPE processor, the memory controller (MIC), the eight SPE coprocessors, and two off-chip I/O interfaces, for a total of 12 participants. The EIB also includes an arbitration unit that functions as a set of traffic lights. In some IBM documents the EIB participants are referred to as "units".

Currently, the EIB is implemented as a circular ring composed of four 16-byte-wide unidirectional channels that counter-rotate in pairs. When traffic patterns permit, each channel can carry up to three transactions concurrently. Since the EIB runs at half the system clock rate, the effective channel rate is 16 bytes every two clock cycles. With three active transactions on each of the four rings, that is, at maximum concurrency, the EIB's peak instantaneous bandwidth is 96 bytes per clock cycle (12 concurrent transactions × 16 bytes / 2 clock cycles). While this figure is often quoted by IBM, it is unrealistic to simply scale it by processor speed. The arbitration unit imposes additional restrictions, which are discussed below in the bandwidth allocation section.
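The peak-bandwidth arithmetic above can be restated as a short, illustrative calculation:

```python
# Sketch of the EIB instantaneous peak-bandwidth arithmetic.
rings = 4
transactions_per_ring = 3
bytes_per_transfer = 16
cycles_per_transfer = 2            # the EIB runs at half the system clock

peak_bytes_per_cycle = (rings * transactions_per_ring
                        * bytes_per_transfer / cycles_per_transfer)
assert peak_bytes_per_cycle == 96

clock_ghz = 3.2
print(peak_bytes_per_cycle * clock_ghz)  # ~307.2 GB/s instantaneous peak
```

This is exactly the figure behind IBM's "greater than 300 GB/s" claim; the arbitration limit discussed below is what makes it unreachable in practice.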

David Krolak, IBM Chief Engineer and EIB Design Director, explains the concurrency model:

A ring can start a new operation every three cycles. Each transfer takes eight beats. That was one of the simplifications we made; it is optimized for transferring large amounts of data. If you do small operations, it doesn't work quite as well. Think of eight-car trains running around this track: as long as the trains don't crash into one another, they can coexist on the track.

Each of the EIB participants has a 16-byte read port and a 16-byte write port. The limit for each individual participant is to read and write at a rate of 16 bytes per clock cycle (for simplicity, 8 bytes per clock cycle is often indicated). Note that each SPU contains a dedicated DMA queue capable of scheduling large sequences of transactions to various endpoints without interfering with the computations that the SPU is performing. These DMA queues can be managed both locally and remotely, providing additional flexibility in the control model.

Data travels around an EIB channel one step at a time around the ring. Since there are twelve participants, the total number of steps around the channel back to the point of origin is twelve. Six steps is the longest distance between any pair of participants. An EIB channel is not allowed to carry data that would require more than six steps; such data must take the shorter route around the circle in the other direction. The number of steps involved in sending a packet has very little impact on transfer latency: the clock speed driving the steps is very fast relative to other considerations. However, longer communication distances are detrimental to the overall performance of the EIB, as they reduce the available concurrency.

Despite IBM's original desire to implement the EIB as a more powerful crossbar, the circular configuration adopted to save resources rarely represents a limiting factor on the performance of the Cell chip as a whole. In the worst case, the programmer must take extra care to plan communication patterns that let the EIB operate at high concurrency.

David Krolak explains:

Well, at the beginning of the development process, several people pushed for a crossbar switch. Given the way the bus is designed, you could pull out the EIB and drop in a crossbar switch if you were willing to devote more chip area to wiring. We had to find a balance between connectivity and area, and there simply was not enough room for a crossbar switch. So we came up with this ring structure, which we think is quite interesting. It fits within the area constraints and still delivers impressive bandwidth.

Bandwidth allocation

When quoting performance figures, we will assume a Cell processor running at 3.2 GHz, the most frequently quoted clock rate. At this frequency each channel transfers at a rate of 25.6 GB/s. Viewing the EIB in isolation from the elements it interconnects, sustaining twelve simultaneous transactions at this rate would yield a theoretical bandwidth of 307.2 GB/s. From this perspective, many IBM publications describe the available EIB bandwidth as "greater than 300 GB/s". This number reflects the instantaneous peak EIB bandwidth scaled by processor frequency.

However, other technical restrictions are imposed by the arbitration mechanism for packets accepted on the bus. As the IBM Systems Performance group explains:

Each unit on the EIB can simultaneously send and receive 16 bytes of data every bus cycle. The maximum data bandwidth of the entire EIB is limited by the maximum rate at which addresses are snooped across all units in the system, which is one per bus cycle. Since each snooped address request can potentially transfer up to 128 bytes, the theoretical peak data bandwidth on the EIB at 3.2 GHz is 128 bytes × 1.6 GHz = 204.8 GB/s.
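The snoop-limited figure from the quote can be restated as a one-line calculation (illustrative):

```python
# One address snoop per bus cycle, up to 128 bytes per snooped request;
# the bus runs at half the 3.2 GHz core clock.
bus_clock_ghz = 3.2 / 2            # 1.6 GHz
bytes_per_request = 128
peak_gb_s = bytes_per_request * bus_clock_ghz
print(peak_gb_s)                   # ~204.8 GB/s arbitration-limited peak
```

Note that this limit comes from address bandwidth, not data bandwidth: the rings could move more bytes, but only one request can be granted per bus cycle.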

This quote apparently represents the full extent of IBM's public disclosure of this mechanism and its impact. The EIB arbitration unit, the snooping mechanism, and interrupt generation on segment or page translation faults are not well described in IBM's public documentation.

In practice, the effective bandwidth of the EIB can also be limited by the ring participants involved. While each of the nine processing cores can sustain simultaneous reads and writes at 25.6 GB/s, the memory interface controller (MIC) is tied to a pair of XDR memory channels allowing a maximum combined read-and-write traffic of 25.6 GB/s, and the two I/O controllers, as documented, support a combined peak input rate of 25.6 GB/s and a combined peak output rate of 35 GB/s.

To add to the confusion, some older publications quote EIB bandwidth assuming a 4 GHz system clock. This frame of reference yields an instantaneous peak bandwidth figure of 384 GB/s and an arbitration-limited bandwidth of 256 GB/s. All things considered, the theoretical 204.8 GB/s figure, the one most often quoted, is the best one to keep in mind. The IBM Systems Performance group has demonstrated SPU-centric data flows reaching 197 GB/s on a Cell processor running at 3.2 GHz, so this figure is a fair reflection of practice as well.

Optical interconnection

Sony is currently working on the development of optical interconnect technology for use as an internal adapter or between external devices for various types of Cell-based consumer electronics and entertainment systems.

Memory and I/O Controller

The Cell processor contains a new-generation dual-channel Rambus XIO macro that interfaces with Rambus XDR memory. The memory interface controller (MIC) is separate from the XIO macro and was designed by IBM. The XIO-XDR link runs at 3.2 Gbit/s per pin. Two 32-bit channels can provide a theoretical maximum of 25.6 GB/s.
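The 25.6 GB/s memory figure also follows from per-pin arithmetic, sketched here for illustration:

```python
# XDR memory-bandwidth arithmetic: two 32-bit channels at 3.2 Gbit/s/pin.
pin_rate_gbit = 3.2                # 3.2 Gbit/s per pin
channels = 2
bits_per_channel = 32
total_gbit = pin_rate_gbit * channels * bits_per_channel
print(total_gbit / 8)              # ~25.6 GB/s theoretical maximum
```

Conveniently, this matches the 25.6 GB/s read/write rate of each EIB participant, so a single core can in principle saturate the memory interface.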

The system interface used in Cell, also a Rambus design, is known as FlexIO. The FlexIO interface is organized into 12 lanes, each lane being an 8-bit-wide point-to-point path. Five of these point-to-point paths are inbound lanes to the Cell, while the remaining seven are outbound. This provides a theoretical peak bandwidth of 62.4 GB/s (36.4 GB/s outbound, 26 GB/s inbound).

The FlexIO interface can have an independent clock rate (typically 3.2 GHz). Four input channels and four output channels are responsible for implementing memory coherence.

Possible uses

Blade Server

On August 29, 2007, IBM introduced the BladeCenter QS21. Delivering about 1.05 GFLOPS per watt, with a peak performance of approximately 460 GFLOPS, it is one of the most power-efficient computing platforms to date. A single BladeCenter chassis can achieve 6.4 TFLOPS, and around 25.8 TFLOPS in a standard 42U rack. Companies such as Blizzard use these kinds of servers to host their massive online games.

Video game consoles

Sony's PlayStation 3 video game console contains the first production application of the Cell processor, clocked at 3.2 GHz and with seven of the eight SPEs operational, allowing Sony to increase the manufacturing yield of the processor. Only six of the seven SPEs are accessible to developers, as one is reserved for the operating system.

Home theater

It has been said that Toshiba is considering the possibility of producing high-definition televisions (HDTVs) using the Cell. They have already introduced a system that decodes 48 standard definition MPEG-2 streams simultaneously on a 1920x1080 screen. This would allow the user to choose a channel from dozens of thumbnail videos displayed simultaneously on the screen.

Supercomputing

IBM's planned new supercomputer, IBM Roadrunner, will be a hybrid of general-purpose CISC processors and Cell processors. This combination is expected to produce the first computer to operate at petaflop speeds. It will use an updated version of the Cell processor, built on 65 nm technology, with improved SPUs that can handle double-precision computations in the 128-bit registers, reaching 100 GFLOPS in double precision.

Cluster Computing

PlayStation 3 console clusters are an attractive alternative to high-end Cell blade systems. The Innovative Computing Laboratory, a group led by Jack Dongarra in the computer science department of the University of Tennessee, investigated this application in depth. Terrasoft Solutions (Fixstars since 2008), implementing Dongarra's research, sells 8- and 32-node PlayStation 3 clusters with Yellow Dog Linux pre-installed. As reported by Wired magazine on October 17, 2007, an interesting cluster application was implemented by astrophysicist Gaurav Khanna, who replaced time on supercomputers with eight PlayStation 3s. A biochemical and biophysical computing group at Pompeu Fabra University in Barcelona built a BOINC system based on the CellMD software (the first designed specifically for the Cell), called PS3GRID, for distributed computing.

With the help of the computing power of around half a million PlayStation 3 consoles, the Folding@home distributed computing project has been recognized by the Guinness Book of World Records as the most powerful distributed computing network in the world. The first record was set on September 16, 2007, when the project exceeded one petaFLOPS, a level never before achieved by any distributed computing network. The collective effort of the PS3s alone reached the petaFLOPS mark on September 23, 2007. For comparison, the world's most powerful supercomputer, IBM's Roadrunner, is rated at around 1.105 petaFLOPS. This means that the computing power of Folding@home is approximately the same as Roadrunner's (although the interconnect between CPUs in Roadrunner is much faster, on the order of millions of times, than the average network speed of Folding@home).

Mainframes

On April 25, 2007, IBM announced that it would begin integrating its Cell Broadband Engine architecture microprocessors into its line of mainframes.

Software engineering

The Cell architecture implements innovations such as its memory coherence structure, for which IBM has received several patents. The architecture emphasizes performance per watt, prioritizes bandwidth over latency, and favors peak computational throughput over simplicity of program code. For these reasons, Cell is widely regarded as a difficult environment for software development. IBM provides a complete Linux-based development platform to help programmers deal with these challenges. Whether Cell reaches its performance potential depends chiefly on how well software is adapted to it. Despite these difficulties, studies indicate that Cell excels at several kinds of scientific computation.

Given the flexible nature of the Cell, there are many possibilities for using its resources, which are not limited to the computing paradigms described below.

Work queues

The PPE maintains a job queue, schedules jobs on the SPEs, and monitors their progress. Each SPE runs a mini-kernel whose job is to fetch a job, execute it, and synchronize with the PPE.
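The work-queue pattern can be sketched with ordinary threads standing in for the SPE mini-kernels; this is an illustrative model of the control flow, not Cell code, and all names are invented for the example.

```python
import queue
import threading

# A controller (the "PPE") posts jobs to a shared queue; worker threads
# (standing in for SPE mini-kernels) repeatedly fetch a job, run it,
# and report the result back.
jobs = queue.Queue()
results = queue.Queue()

def spe_mini_kernel():
    while True:
        job = jobs.get()
        if job is None:            # sentinel: no more work
            break
        results.put(job * job)     # "execute" the job (square it)

workers = [threading.Thread(target=spe_mini_kernel) for _ in range(4)]
for w in workers:
    w.start()

for n in range(8):                 # the "PPE" schedules eight jobs
    jobs.put(n)
for _ in workers:                  # one sentinel per worker
    jobs.put(None)
for w in workers:
    w.join()

print(sorted(results.queue))       # [0, 1, 4, 9, 16, 25, 36, 49]
```

On real hardware the queue would live in shared memory and the "fetch" would be a DMA transfer into local storage, but the scheduling structure is the same.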

Autonomous multitasking in SPEs

The kernel and scheduling are distributed across the SPEs. Tasks are synchronized using mutexes or semaphores, just as in conventional operating systems. Tasks ready for execution wait in a queue for an SPE to execute them. The SPEs use shared memory for all tasks in this configuration.

Stream processing

Each SPE runs a distinct program. Data arrives from an input stream and is dispatched to the SPEs. When an SPE finishes processing, its output data is sent to an output stream. This provides a flexible and powerful architecture for stream processing, allowing each SPE to be scheduled explicitly and separately. Other processors can also perform streaming tasks, but are limited by the kernel loaded.
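The stream-processing pattern can be modelled with chained generators, each stage standing in for one SPE running a fixed program; the stage names and operations below are purely illustrative.

```python
# Each stage models one SPE: it consumes an input stream and produces
# an output stream for the next stage in the chain.
def decode(stream):
    for item in stream:
        yield item * 2             # stand-in for "decode" work

def scale(stream):
    for item in stream:
        yield item + 1             # stand-in for a second SPE's program

source = range(5)                  # the input data stream
pipeline = scale(decode(source))   # chain the stages, SPE-to-SPE
print(list(pipeline))              # [1, 3, 5, 7, 9]
```

On the Cell, the hand-off between stages would be a DMA transfer between the local stores of neighbouring SPEs rather than a Python generator, but the dataflow topology is the same.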

Distributed computing

There is an active BOINC distributed computing application at https://web.archive.org/web/20171217224351/http://www.ps3grid.net/PS3GRID/. It is entirely dedicated to various types of biological calculations that can only be successfully completed by microprocessors running in parallel.

Open source software development

A strategy based on open-source software was adopted to accelerate the development of a Cell BE "ecosystem" and provide an environment in which to develop Cell applications. In 2005, IBM developers shipped patches adding Cell support for inclusion in the Linux kernel. Arnd Bergmann, one of the developers of these patches, also described the Linux-based Cell architecture at LinuxTag 2005.

Both the PPE and the SPEs can be programmed in C/C++ using a common API provided by libraries.

Terra Soft Solutions supplies Yellow Dog Linux for IBM and Mercury Cell-based systems, as well as for the PlayStation 3. Terra Soft has strategically partnered with Mercury to provide a Linux Board Support Package for Cell, as well as support and development of software applications on various other Cell platforms, including the IBM BladeCenter JS21, the Cell QS20, and Mercury solutions. Terra Soft also maintains the Y-HPC (High Performance Computing) cluster construction and management suite and the Y-Bio gene-sequencing tools. Y-Bio is built on the Linux RPM package management standard and offers tools that help bioinformatics researchers carry out their work more efficiently. IBM has developed a pseudo-filesystem for Linux called "spufs" that simplifies access to and use of SPE resources. IBM currently maintains the Linux kernel and GDB ports, while Sony maintains the GNU toolchain (GCC, binutils, etc.).

In November 2005, IBM released the Cell Broadband Engine (CBE) Software Development Kit Version 1.0 on its website, consisting of a simulator and assorted tools. Development versions of the latest kernel and tools for Fedora Core 5 are available on the Barcelona Supercomputing Center website.

In August 2007, Mercury Computer Systems released a High Performance Computing Software Development Kit for the PlayStation 3.

With the release of kernel version 2.6.16 on March 20, 2006, the Linux kernel provided official support for the Cell processor.
