Google has entered a strategic partnership with Hugging Face, an AI model repository, making it easier for developers to create, train, and deploy AI models in Google Cloud.
This collaboration gives external developers on Hugging Face’s platform access to Google’s cloud AI infrastructure, including GPU supercomputers built on the highly sought-after Nvidia H100, in what the companies describe as a “cost-effective” manner.
Hugging Face hosts open-source foundation models such as Meta’s Llama 2 and Stability AI’s Stable Diffusion, alongside numerous datasets for model training.
With over 350,000 models available, developers can leverage existing models or contribute their own, much as developers share code on GitHub.
The partnership also aims to foster the creation of AI software optimized for specific tasks, in line with a growing trend in the AI industry.
Hugging Face, valued at $4.5 billion, raised $235 million in the past year in a funding round that included Google, Amazon, and Nvidia, among others.
Google stated that Hugging Face users will gain access to the AI app-building platform Vertex AI and to Google Kubernetes Engine for model training and fine-tuning starting in the first half of 2024.
This move signifies Google’s commitment to supporting the development of the open-source AI ecosystem.
While some of Google’s models are available on Hugging Face, notable models such as the Gemini family of large language models and the text-to-image model Imagen remain exclusive to Google and closed-source.
This collaboration marks a significant step in democratizing AI development, allowing a broader community of developers to harness the power of Google Cloud’s infrastructure through the accessible platform provided by Hugging Face.