
Bend It Like Python, Scale It Like CUDA

Bend executes code in parallel on CPUs and GPUs by default, with a Python-like syntax, making it a great choice for developers getting started with GPU development.


There has been a lot of buzz around the newest programming language, Bend. Discussion forums have been pitting it against CUDA, the go-to choice for experienced GPU developers. However, given CUDA’s restrictions and the few worthy alternatives to it, Bend could be worth the excitement.

Bend is a high-level, massively parallel programming language designed to simplify parallel computing. Unlike traditional low-level languages such as CUDA and Metal, Bend offers a Python-like syntax that makes parallel programming accessible to developers without deep expertise in concurrent programming.

“Bend automatically parallelises code, ensuring that any code that can run in parallel will do so without requiring explicit parallel annotations. As such, while Bend empowers developers with powerful parallel constructs, it maintains a clean and expressive syntax,” Vinay Konanur, VP – emerging technologies, UNext Learning, told AIM.
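To get a sense of what this looks like in practice, here is a minimal sketch in Bend’s Python-like syntax, adapted from the example in the project’s official README (exact syntax may differ slightly between Bend versions). The two recursive calls are independent of each other, so HVM2 can evaluate them in parallel without any annotations from the programmer:

```
# Recursive sum of the numbers 0 .. 2^depth - 1.
# The two recursive calls don't depend on each other,
# so Bend/HVM2 evaluates them in parallel automatically.
def sum(depth, x):
  switch depth:
    case 0:
      return x
    case _:
      fst = sum(depth-1, x * 2 + 0)  # sums the first half
      snd = sum(depth-1, x * 2 + 1)  # sums the second half
      return fst + snd

def main:
  return sum(30, 0)
```

Per the project’s README, the same file can then be run with `bend run` (single-threaded Rust interpreter), `bend run-c` (parallel CPU) or `bend run-cu` (CUDA GPU), with no changes to the source.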

Why Not CUDA, Then? 

One might wonder how it measures up against low-level languages like CUDA. While CUDA is a mature, low-level language that provides precise control over hardware, Bend aims to abstract away the complexities of parallel programming.

Bend is powered by HVM2 (Higher-Order Virtual Machine 2), the successor to HVM, which lets you run high-level programs on massively parallel hardware, like GPUs, with near-ideal speedup.

A user mentioned that Bend is nowhere close to the performance of manually optimised CUDA. “It isn’t about peak performance,” he added.

Bend is built on a Rust foundation, which means you can expect strong performance behind its simple Python-like syntax. Konanur also revealed that Bend’s interoperability with Rust libraries and tools provides access to a rich ecosystem. 

“Developers can leverage the existing Rust code and gradually transition to Bend,” said Konanur. 

Moreover, he believes that a programming language’s performance on a specific GPU can depend on several factors, including the particular GPU, the nature of the task, and how well the task can be parallelised. 

“So, even if Bend were to support AMD GPUs in the future, the performance could vary depending on these factors,” Konanur added. 

Scalability and Parallelisation 

Bend’s official documentation suggests that as long as the code isn’t “helplessly sequential”, Bend will use thousands of threads to run it in parallel. User demos have borne this out. 

A recent demo showed a 57x speedup going from 1 CPU thread to 16,000 GPU threads on an NVIDIA RTX 4090. This is a perfect example of how Bend runs on massively parallel hardware like GPUs and provides near-linear speedup based on the number of cores available. 

Focusing on parallelisation, Bend is not limited to any specific domain, like array operations. It can scale any concurrent algorithm that can be expressed using recursive data types and folds, from shaders to actor models.
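As a rough illustration of that style, here is a sketch based on the tree examples in Bend’s guide (the type and function names are only for this example, and exact syntax may differ between versions): a fold over a user-defined recursive type.

```
# A user-defined recursive type; '~' marks the recursive fields.
type MyTree:
  Node { ~left, ~right }
  Leaf { value }

# 'fold' consumes the tree recursively: in the Node case,
# tree.left and tree.right already hold the folded results of
# the subtrees, and the two sides can be computed in parallel.
def tree_sum(tree):
  fold tree:
    case MyTree/Node:
      return tree.left + tree.right
    case MyTree/Leaf:
      return tree.value

def main:
  t = MyTree/Node { left: MyTree/Leaf { value: 1 }, right: MyTree/Leaf { value: 2 } }
  return tree_sum(t)
```

Because the two sides of each Node are independent, the runtime is free to fold the whole tree in parallel, which is the pattern behind the shader- and actor-model-style workloads mentioned above.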

Max Bernstein, a software developer, argues that Bend has different scaling laws compared to traditional languages. While Bend may be slower than other languages in single-threaded performance, it can scale linearly with the number of cores for parallelisable workloads.

How About Other Programming Languages? 

A Reddit user, when asked how different Bend is from CuPy or Numba, answered, “It massively reduces the amount of work you need to do in order to make your general purpose program parallelisable, whereas CuPy and Numba (as far as I know) only parallelise programmes that deal with multidimensional arrays.”

Further, users have also observed that Bend is not focused on delivering peak performance like manually optimised CUDA code, but rather on simplifying execution by letting Python/Haskell-like code run on GPUs, which wasn’t readily possible earlier. 

Compared with Mojo, a programming language that also runs on GPUs and offers Python-like syntax, Bend focuses more on parallelism across all computations, while Mojo is geared more towards traditional AI/ML workloads involving linear algebra.

Unlike Mojo, however, Bend is completely open source, which means users can take and modify the code as they see fit. They can also contribute to the project, and the open model ensures more transparency. 



Sagar Sharma

A software engineer who loves to experiment with new-gen AI. He also happens to love testing hardware, which sometimes crashes. While reviving his crashed systems, you can find him reading literature, reading manga, or watering plants.