One angry old man against the world
Computing has just taken a huge step into the future with the release of AMD’s new CPU: the Zen, now branded as Ryzen. This is the first truly innovative design in twenty years; not a redesign, not an upgrade, but a from-the-ground-up, completely fresh look at how a CPU should work, and work efficiently. I personally think this will change how CPUs are designed in the future, and it certainly has most of the computing community very excited. It also has me wondering about many of the new aspects of the design: neural net prediction, self-learning, and environmental detection are directly aimed at AI. Are we seeing the next big push towards machine intelligence? (See also ‘Radeon Instinct’; last article.)
8 Cores (The new Threadripper series has 16 cores & 32 threads!)
40% Instructions per clock improvement (actually turned out to be 53%)
Base clock 3.4 GHz or higher (now actually 4 GHz+)
AM4 Motherboard ecosystem
Zen adapts to its environment and gets better over time!
8-core, 16-thread, beats 40% IPC goal, 4GHz or higher frequency!
SenseMI: neural net prediction and smart prefetch let the chip learn and
anticipate data before it is needed, accounting for roughly a quarter of the performance.
Pure Power and Precision Boost detect the utilisation required in milliseconds.
Extended Frequency Range detects the environment, e.g. the cooling hardware, and enables
higher clock speeds as the system runs cooler.
Ryzen is still to be fine-tuned, and performance is expected to get even better.
Improving with new 2000 series chips, some including APUs (Graphics)! (May 2018)
According to Mark Papermaster, AMD’s chief technology officer, AMD set out to ensure that Ryzen had what he called the best “intelligent performance,” an adaptive technology that continually assesses the processor to deliver the best performance at a given power level. AMD calls this “SenseMI.”
SenseMI consists of five different technologies: Pure Power, Precision Boost, Extended Frequency Range (XFR), Neural Net Prediction, and Smart Prefetch. The technologies all work together, using what AMD calls its Infinity Fabric—an on-chip network of connections—to constantly loop back and reassess how they’re doing.
Pure Power and Precision Boost, for example, are like two sides of the same coin. Pure Power monitors the chip’s temperature using hundreds of temperature sensors embedded in the chip and fabric, constantly seeking to bump down the power by milliwatts at a time while maintaining the same level of performance. On the other hand, Precision Boost is a fine-grained frequency control that can nudge performance up by 25MHz increments (versus 100MHz for Intel) to boost performance without consuming more power.
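A quick bit of arithmetic shows just how much finer that control is. The 3.4–4.0 GHz boost window below is purely illustrative, not any specific chip’s spec:

```python
# Precision Boost adjusts clocks in 25 MHz steps versus Intel's 100 MHz
# steps. Count how many distinct clock settings fit in a hypothetical
# boost window (3.4-4.0 GHz here is an illustration, not a chip spec).

base_mhz, max_mhz = 3400, 4000
amd_steps = (max_mhz - base_mhz) // 25      # 25 MHz granularity
intel_steps = (max_mhz - base_mhz) // 100   # 100 MHz granularity
print(f"AMD: {amd_steps} steps, Intel: {intel_steps} steps")
# prints "AMD: 24 steps, Intel: 6 steps"
```

Four times as many stopping points in the same range means the chip can sit much closer to the best frequency its power budget allows.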
And if a user has a good cooler installed—using air, water, or liquid nitrogen—the chip can sense it, via Extended Frequency Range (XFR), a fancy name for auto detection that allows the Ryzen chip to run at a higher frequency than normally permitted.
If designing a chip were like training a football player, then the first three SenseMI technologies would be like hitting the gym: improving speed, power, and endurance. Think of the latter two, Neural Net Prediction and Smart Prefetch, as the mental aspects of the game: anticipation and awareness.
Papermaster described AMD’s Neural Net Prediction capabilities as “scary smart” branch prediction, intended to remove pipeline stalls. A microprocessor’s instructions typically work on conditions: if this, then that. But executing those instructions, then waiting for the next one, can take several clock cycles where the chip is essentially doing nothing. To compensate, modern processors “cheat” by trying to guess the way the conditional jump will go. If it’s right, then the processor can save time and improve the overall performance. If it’s wrong, then everything stalls while a new instruction is fetched. AMD’s technology uses a “massive amount of data” to retrain AMD’s branch predictor on the fly, minimizing those pipeline stalls, Papermaster said.
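To get a feel for what “guessing the way the conditional jump will go” means, here is a toy predictor in Python. It is nothing like AMD’s proprietary neural-net design; it is the classic textbook two-bit saturating counter, just to show how a predictor guesses and learns from its mistakes:

```python
# A minimal sketch of a classic 2-bit saturating-counter branch predictor.
# This is NOT AMD's neural predictor; it only illustrates the basic idea
# of guessing a branch outcome and retraining on mispredictions.

def predict_and_train(history, state=2):
    """Run a 2-bit counter (0-3) over a sequence of branch outcomes.

    A counter value >= 2 predicts 'taken'; each real outcome nudges the
    counter toward what actually happened. Returns prediction accuracy.
    """
    correct = 0
    for taken in history:
        prediction = state >= 2
        if prediction == taken:
            correct += 1
        # saturating update: move toward the observed outcome
        if taken:
            state = min(3, state + 1)
        else:
            state = max(0, state - 1)
    return correct / len(history)

# A loop branch: taken 9 times, then falls through once, repeated 100x.
loop_branch = ([True] * 9 + [False]) * 100
print(f"accuracy: {predict_and_train(loop_branch):.0%}")
# prints "accuracy: 90%"
```

Even this crude scheme only mispredicts once per loop exit; the point of a smarter (neural) predictor is to claw back the remaining misses on harder, less regular branches.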
Likewise, Smart Prefetch makes the same kind of bet, but on data rather than instructions: it tries to guess what data Ryzen will need next, then fetches it before the chip asks for it. “That’s what we live for,” Papermaster said. “This inspires every designer.”
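The simplest version of that bet is a stride prefetcher: if the last few memory accesses were evenly spaced, guess the next one continues the pattern. AMD’s real implementation is far more elaborate, but this sketch shows the basic wager:

```python
# A minimal sketch of a stride prefetcher, the simplest relative of the
# idea behind Smart Prefetch (AMD's actual design is far more advanced).
# If recent addresses were 100, 104, 108, guess the next will be 112 and
# fetch it ahead of time.

def stride_prefetch_hits(addresses):
    """Count accesses whose address a one-step stride guess gets right."""
    hits = 0
    for i in range(2, len(addresses)):
        stride = addresses[i - 1] - addresses[i - 2]
        predicted = addresses[i - 1] + stride
        if predicted == addresses[i]:
            hits += 1  # data would already be in cache when requested
    return hits

# A sequential scan (constant 8-byte stride) is predicted perfectly:
scan = list(range(0, 800, 8))
print(stride_prefetch_hits(scan), "of", len(scan) - 2, "accesses prefetched")
# prints "98 of 98 accesses prefetched"
```

Sequential scans are the easy case; the hard (and valuable) part is spotting patterns in pointer-chasing and irregular access streams.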
New processors also mean new chipsets, and Ryzen is no exception. The top-end AM4 chipset, known as X370, adds support for DDR4 memory at up to 2,400MHz, PCIe Gen 3, USB 3.1 at 10Gbps, and of course support for NVMe and SATA Express drives.
The new Radeon Instinct GPU-based accelerators unveiled yesterday by chipmaker AMD are aimed at dramatically improving the capabilities of machine intelligence in server computing, the company said. In addition to the new hardware, AMD also announced new software frameworks and a new open source library for GPU-based machine learning implementations.
AMD said its new Instinct accelerators are aimed at enabling customers to better use and understand the vastly expanding volumes of data being generated by a wide range of applications and devices. The new technologies are designed to provide “a blueprint for an open software ecosystem for machine intelligence, helping to speed inference insights and algorithm training,” the company said in a statement. Instinct products are expected to hit the market in the first half of next year.
Aimed at ‘Proliferation of Machine Intelligence’
Intended to “address a wide range of machine intelligence applications,” the Radeon Instinct lineup will include three different accelerators: two for inference applications and one designed for deep learning training. The MI6 inference-focused accelerator, based on the Polaris GPU architecture, will offer a peak FP16 performance of 5.7 teraflops and is passively cooled.
The MI8 inference accelerator is based on the Fiji architecture and promises a peak FP16 performance of 8.2 teraflops, according to the company. The third accelerator, the Radeon Instinct MI25, is optimized for deep learning training and is built on AMD’s Vega GPU architecture. It will also be passively cooled.
The new Radeon Instinct offerings provide “the GPU and x86 silicon expertise to address the broad needs of the datacenter and help advance the proliferation of machine intelligence,” said Lisa Su, AMD’s president and CEO, in the statement.
In a technology summit AMD held last week, the company showed off the new accelerator technology to a number of customers and partners, including SuperMicro, Xilinx and the University of Toronto’s CHIME radio telescope project.
To support the new family of Radeon Instinct accelerators, AMD also plans to roll out an open source library called MIOpen. Set to become available sometime in the first quarter of 2017, MIOpen will provide “GPU-tuned implementations for standard routines such as convolution, pooling, activation functions, normalization and tensor format,” the company said.
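To make those routine names concrete, here is a plain-Python sketch of two of them, max pooling and the ReLU activation. MIOpen provides GPU-tuned versions of these; this toy shows only what the routines compute:

```python
# Plain-Python sketches of two standard neural-network routines of the
# kind MIOpen accelerates: the ReLU activation and 2x2 max pooling.
# (Illustrative only; MIOpen's real implementations run on the GPU.)

def relu(matrix):
    """ReLU activation: clamp every negative value to zero."""
    return [[max(0, v) for v in row] for row in matrix]

def max_pool_2x2(matrix):
    """2x2 max pooling: keep the largest value of each 2x2 block."""
    return [
        [max(matrix[r][c], matrix[r][c + 1],
             matrix[r + 1][c], matrix[r + 1][c + 1])
         for c in range(0, len(matrix[0]), 2)]
        for r in range(0, len(matrix), 2)
    ]

feature_map = [
    [1, -2,  3,  0],
    [-1, 5, -3,  2],
    [0,  1, -1, -4],
    [2, -6,  0,  7],
]
print(max_pool_2x2(relu(feature_map)))
# prints [[5, 3], [2, 7]]
```

A deep-learning framework strings thousands of such routines together, which is exactly why hand-tuned GPU kernels for them matter so much.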
AMD will also launch the ROCm software platform, which the company said is optimized for accelerating deep learning frameworks such as Caffe, Torch 7 and TensorFlow. Among the sectors AMD is targeting with Instinct, MIOpen and ROCm are self-driving cars, smart homes, autopilot drones, personal robots, financial services and security.
(Now in mid-2018, new and very exciting chips are flowing out of AMD’s fabrication plants with satisfying regularity: the 2000 series, Zen+, Ryzen 2 and 3 coming soon, Threadripper+, new 7nm fabs in development, and many new upgraded motherboards for the same chips. The future is very bright for the computing enthusiast! If the greedy price-fixing memory manufacturers can be brought into check, and the bitcoin mining craze would come to an end and restore graphics card prices to normal, developments in the computing field could really get into top gear! Spikey)