Bfloat16: What it is and how it affects storage

Analytics and prediction are key to today’s IT as organizations embark on digital transformation, with use cases ranging from speech recognition and pattern analysis in science to fraud detection and AIOps in IT and storage.

As artificial intelligence and predictive methods – machine learning, deep learning, neural processing and so on – become more common, ways to streamline these operations have evolved.

Chief among these is the emergence of new ways of representing numbers, and bfloat16 – originally developed by Google – is the most prominent.

In this article, we look at what bfloat16 is, what effect it has on memory and back-end storage, and which hardware manufacturers support it.

What is bfloat16?

Bfloat16 – short for Brain Floating Point 16 – is a way to represent floating-point numbers in computing operations. A floating-point number is a way computers can handle very large numbers (think millions, billions, trillions) or very small ones (think lots of zeros after the decimal point) using the same scheme.

In floating-point schemes, there is a set number of binary bits. One bit indicates whether the number is positive or negative, some bits – the mantissa, or significand – represent the digits of the number itself, and the floating-point element – the exponent – is a few bits that say where the point should sit.
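
To make that layout concrete, here is a minimal Python sketch that pulls those three fields out of an IEEE 754 single-precision (32-bit) float – 1 sign bit, 8 exponent bits and 23 mantissa bits:

```python
import struct

def decode_float32(x: float) -> tuple[int, int, int]:
    """Split a 32-bit IEEE 754 float into its sign, exponent and mantissa."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    sign = bits >> 31                # 1 bit: positive (0) or negative (1)
    exponent = (bits >> 23) & 0xFF   # 8 bits: where the point "floats"
    mantissa = bits & 0x7FFFFF       # 23 bits: the digits of the number
    return sign, exponent, mantissa

sign, exponent, mantissa = decode_float32(-6.25)
# Reconstruct: (-1)**sign * (1 + mantissa / 2**23) * 2**(exponent - 127)
print(sign, exponent, mantissa)  # 1 129 4718592, i.e. -1.5625 * 2**2
```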

Bfloat16, as the name suggests, uses a 16-bit format to do all this. In doing so, it halves the size of the most commonly used existing format, IEEE 754 single precision, which is 32-bit.

But bfloat16 keeps an exponent the same size as IEEE 754 single precision’s – eight bits – which allows it to represent the same range of numbers, only with less precision, because just seven bits remain for the mantissa instead of 23.
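
Because the two formats share the same 8-bit exponent, converting a float32 to bfloat16 amounts to dropping its low 16 mantissa bits. A minimal sketch, using simple truncation (real hardware typically rounds to nearest even):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16: keep sign (1 bit) + exponent
    (8 bits) + the top 7 of the 23 mantissa bits, dropping the rest."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

x = 3.14159265
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(x)))
# 3.140625 – the same range as float32, but only about two to three
# significant decimal digits of precision
```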

What is bfloat16 used for?

Bfloat16 was developed by Google – the “B” stands for the company’s Brain project – specifically for its tensor processing unit (TPU), which is used for machine learning. The point here is that machine learning operations do not require the level of numerical precision that other computations might, but they do demand speed of operation, and that is the goal of bfloat16.
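
As a rough illustration of that trade-off, the toy weighted sum below – the basic operation inside a neural network – gives almost the same answer whether or not the values are first rounded to bfloat16 (the weights and inputs here are arbitrary made-up values):

```python
import struct

def to_bf16(x: float) -> float:
    """Round-trip a value through bfloat16 by truncating the mantissa."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", (bits >> 16) << 16))[0]

# A toy "neuron": a weighted sum of inputs, in full and reduced precision.
weights = [0.123456, -0.654321, 0.333333]
inputs = [1.5, 2.25, -0.75]

full = sum(w * i for w, i in zip(weights, inputs))
reduced = sum(to_bf16(w) * to_bf16(i) for w, i in zip(weights, inputs))

print(full, reduced)  # the two agree closely – a precision loss that
                      # machine learning workloads generally tolerate
```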

How will bfloat16 affect storage?

The main advantage of bfloat16 is that it reduces memory and storage requirements during processing and speeds up individual computations in machine learning operations.

Bfloat16 takes half the memory of equivalent operations that use IEEE 754 32-bit numbers, meaning more can be held in memory and less time is spent swapping data in and out. That means larger models and datasets can be used. Bfloat16 data also takes less time to load into memory from bulk storage.
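
Back-of-an-envelope arithmetic shows the saving, assuming a hypothetical model with one billion parameters held entirely in memory:

```python
# Hypothetical 1-billion-parameter model held entirely in memory.
params = 1_000_000_000

fp32_bytes = params * 4  # IEEE 754 single precision: 4 bytes per value
bf16_bytes = params * 2  # bfloat16: 2 bytes per value

print(f"float32:  {fp32_bytes / 2**30:.2f} GiB")  # ~3.73 GiB
print(f"bfloat16: {bf16_bytes / 2**30:.2f} GiB")  # ~1.86 GiB
```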

Hardware support for bfloat16 is built into the processor or processing unit itself, so where a chip offers it, it comes as standard.

Back-end storage volumes may be positively affected – in other words, if you do a lot of machine learning operations with bfloat16, you will need less storage capacity. But it is likely that, for some time, IEEE 754 will prevail as the format in which data is stored, with bfloat16 values converted from that existing standard.
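
A minimal sketch of that load-time conversion, assuming raw float32 values in a hypothetical file dataset.f32 and the ml_dtypes package, which adds a bfloat16 dtype to NumPy:

```python
import numpy as np
import ml_dtypes  # pip install ml_dtypes

# Read float32 data stored in the prevailing IEEE 754 format...
data_fp32 = np.fromfile("dataset.f32", dtype=np.float32)

# ...and convert it to bfloat16 once, at load time.
data_bf16 = data_fp32.astype(ml_dtypes.bfloat16)

print(data_fp32.nbytes, data_bf16.nbytes)  # bfloat16 copy is half the size
```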

Does any hardware support exist for bfloat16?

Bfloat16 was first deployed in Google’s TPU hardware, which can be used through the provider’s cloud services (Cloud TPU) or purchased as a product for customers’ on-premise use.

At the time of writing, it is supported by Intel’s third-generation Xeon Scalable CPUs, IBM’s Power10 and ARM’s Neoverse processors.

Bfloat16 is also supported on several NPUs – neural processing units, of which the TPU is one – including ARM’s Trillium, Centaur, Flex Logix, Intel’s Habana and Nervana, and Wave Computing.


