The big picture: Facebook is joining a growing list of companies rushing to develop their own chips. Apple, Microsoft, Google, Amazon, Tesla, and Baidu are all looking to cut their reliance on silicon giants like Intel, AMD, Nvidia, and Qualcomm. The main reason is that custom silicon designed for specific workloads needs less power to run and can be more scalable than general-purpose hardware. The latter has become a commodity that any competitor can use and doesn’t allow for tight integration with the software.
Facebook is one of several companies jumping on the custom Arm-based silicon train in a bid to become more self-reliant. According to a report from The Information, the social giant has been developing a family of specialized chipsets for accelerating machine learning tasks. One of these will be used to train the AI that handles content recommendations.
This effort dates back to 2018 when it transpired that Facebook was looking to hire engineers with experience in designing FPGAs and ASICs. One year later, the company revealed plans to create an AI pipeline for its data centers with the help of partners like Intel, Qualcomm, Marvell, Esperanto, and Habana — which is now owned by Intel.
However, the new report suggests the social giant has changed its mind and is developing the new chips entirely in-house. A company spokesperson clarified to us that “Facebook is always exploring ways to drive greater levels of compute performance and power efficiency with our silicon partners and through our own internal efforts.” This suggests the company plans to make the transition in small steps over the coming years, as the new chips aren’t meant to completely replace third-party solutions just yet.
The company is also developing a chip for video transcoding to improve the infrastructure that delivers videos and livestreams in its apps. This is similar to what Google has been doing with its “Argos” Video Coding Units (VCUs) to accelerate the transcoding of videos uploaded to YouTube.