ComfyUI is an open-source, node-based interface for building and running image generation workflows with Stable Diffusion and similar models. Unlike more traditional prompt-based tools, ComfyUI lets you visually lay out each step—loading models, preprocessing inputs, applying conditioning, sampling, and postprocessing—by connecting nodes in a graph. This modular design makes it very popular among developers and artists who want fine-grained control over how images are created. ComfyUI has seen rapid growth over the past year, with an active community contributing custom nodes and optimized workflows for tasks like inpainting, upscaling, and video generation.
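To make the node-graph idea concrete, here is a minimal sketch of a text-to-image workflow expressed in ComfyUI's API (JSON) format, built as a Python dict. Each key is a node ID, `class_type` names the node, and inputs reference other nodes as `[node_id, output_index]` pairs. The checkpoint filename, prompts, and sampler settings are illustrative placeholders, and the exact input schema can vary between ComfyUI versions.

```python
import json

# A minimal text-to-image graph in ComfyUI's API format. Node IDs are arbitrary
# strings; wires are [source_node_id, output_index] pairs. The checkpoint name
# and parameter values below are placeholders, not part of any real setup.
workflow = {
    "4": {"class_type": "CheckpointLoaderSimple",          # outputs: MODEL, CLIP, VAE
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "5": {"class_type": "EmptyLatentImage",                # blank latent to denoise
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "6": {"class_type": "CLIPTextEncode",                  # positive conditioning
          "inputs": {"text": "a lighthouse at dusk", "clip": ["4", 1]}},
    "7": {"class_type": "CLIPTextEncode",                  # negative conditioning
          "inputs": {"text": "blurry, low quality", "clip": ["4", 1]}},
    "3": {"class_type": "KSampler",                        # the diffusion sampling step
          "inputs": {"seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0, "model": ["4", 0],
                     "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0]}},
    "8": {"class_type": "VAEDecode",                       # latent -> pixels
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    "9": {"class_type": "SaveImage",                       # write the result to disk
          "inputs": {"images": ["8", 0], "filename_prefix": "lighthouse"}},
}

# To execute it, POST the graph to a running local ComfyUI server
# (default port 8188), e.g.:
#   requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
print(json.dumps(workflow, indent=2))
```

This is the same graph you would draw in the UI, just serialized: the sampler node pulls its model, conditioning, and latent from upstream nodes, which is why swapping any one stage (a different checkpoint, an extra upscaling node) leaves the rest of the workflow untouched.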
On June 26, 2025, a ComfyUI Hackathon was held at the GitHub office in San Francisco. Sponsored by NVIDIA, the mini-hackathon challenged participants to build either new custom nodes or complete workflows demonstrating creative uses of ComfyUI, especially ones leveraging RTX hardware acceleration. Teams competed in two categories: Custom Node Development, which focused on extending ComfyUI's capabilities through new code modules, and Workflow & Content Creation, which emphasized building polished workflows that produce high-quality images or videos.

I took part with a few colleagues from Magic Hour, the company where I'm currently interning. Our project inserted a character seamlessly into an existing video, combining per-frame video processing with Stable Diffusion to produce an integrated, animated result, and we placed second in the Workflow & Content Creation category. My boss, Runbo, won first place for a project that transformed videos into a wide range of artistic styles, showing how ComfyUI can generate diverse, stylized outputs. Winners received NVIDIA RTX 5090 GPUs and cash prizes. It was an incredible experience to see firsthand how powerful and flexible ComfyUI has become, and to help push its creative possibilities forward.