Intel’s Cooper Lake microarchitecture is aimed at replacing the company’s current Cascade Lake offerings, and it may bring some new tricks for machine learning and artificial intelligence thanks to Facebook. According to reports, Facebook has been working closely with the semiconductor manufacturer, collaborating on support for a number format known as Bfloat16. If utilized correctly, the new partnership could allow upcoming machines to accelerate their A.I. workloads.
Expected to be released sometime in mid-2019, Cooper Lake is an upgrade over Cascade Lake, bringing Core i7, Core i9, and a range of Xeon processors built on Intel’s 14nm process. Based on Intel’s Whitley platform, Cooper Lake CPUs are said to offer eight-channel memory for higher bandwidth and an I/O upgrade from PCIe 3.0 to PCIe 4.0 for faster device interfacing. The new offerings are aimed at PC enthusiasts and those running server applications.
With Facebook having had its fingers in the machine learning cookie jar for quite some time, it is not as surprising as one might initially think that the company is working alongside Intel. The new Bfloat16 format lets machines express numbers with only 16 bits rather than the standard 32-bit single-precision format, keeping the same range as a 32-bit float while giving up some precision. In real-world scenarios, this means machines can store and move those numbers in half the memory and push them through A.I. calculations faster.
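To make that trade-off concrete, here is a minimal sketch in Python of how a Bfloat16 value relates to a standard 32-bit float. The helper names are purely illustrative, and real hardware uses proper rounding rather than the simple truncation shown here; the point is only that Bfloat16 keeps the sign and full 8-bit exponent of a 32-bit float and drops most of the mantissa.

import struct

def float32_to_bfloat16_bits(x: float) -> int:
    # Pack as an IEEE-754 float32, then keep the top 16 bits:
    # 1 sign bit, 8 exponent bits, and the top 7 mantissa bits.
    f32_bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return f32_bits >> 16

def bfloat16_bits_to_float32(bits: int) -> float:
    # Zero-fill the discarded mantissa bits to recover a float32 value.
    return struct.unpack("<f", struct.pack("<I", bits << 16))[0]

value = 3.14159265
bf16 = float32_to_bfloat16_bits(value)
print(hex(bf16))                       # 0x4049 -- fits in 16 bits
print(bfloat16_bits_to_float32(bf16))  # 3.140625 -- close, but less precise

The round-trip result illustrates the bargain: the number comes back slightly less precise, but it occupied half the space, which is an acceptable trade for many neural-network workloads.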
The average consumer may not be aware of how much A.I. processing their Windows or Mac machine currently employs, but functions such as speech recognition and image identification require the latest machine learning processes. Every time you power up your favorite photo app and allow it to find similar faces, it is utilizing your CPU for an A.I. workload.
But why might Facebook get involved with Intel’s design process? It is not hard to see why a company such as