OpenAI proposes open-source Triton language as an alternative to Nvidia’s CUDA


SEO: The Python-like language promises to be easier to write than native CUDA and specialized GPU code, while delivering performance comparable to what expert GPU coders can produce and better than standard library code such as Torch.

Graphics processing units from Nvidia are too hard to program, even with Nvidia’s own programming framework, CUDA, according to artificial intelligence research firm OpenAI.

On Wednesday, the San Francisco-based AI startup, backed by Microsoft and venture capital firm Khosla Ventures, introduced version 1.0 of Triton, a new programming language crafted specifically to ease that burden. The language is detailed in a blog post that links to the GitHub source code.

OpenAI claims Triton can deliver substantial ease-of-use benefits over coding in CUDA for some of the neural network tasks at the heart of machine learning, such as matrix multiplications.
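To give a sense of the programming model the article describes, below is a minimal sketch of a Triton kernel in Python, modeled on the kind of vector-addition example found in the project's documentation rather than on anything quoted in this article; names such as add_kernel and BLOCK_SIZE are illustrative.

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    # Launch a 1-D grid with one program per block of 1024 elements.
    grid = (triton.cdiv(n_elements, 1024),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out

# Usage (tensors must live on the GPU):
#   x = torch.rand(98432, device="cuda")
#   y = torch.rand(98432, device="cuda")
#   torch.allclose(add(x, y), x + y)  # expected: True

The point the sketch illustrates is the division of labor OpenAI emphasizes: the programmer writes block-level operations in Python, while Triton's compiler takes over much of the low-level memory and scheduling work that CUDA programmers normally manage by hand.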