Recent advances in image generative foundation models have prioritized quality improvements, often at the cost of increased computational complexity and inference latency. To address this critical trade-off, we introduce HiDream-I1, a new open-source image generative foundation model with 17B parameters that achieves state-of-the-art image generation quality within seconds. HiDream-I1 is built on a new sparse Diffusion Transformer (DiT) structure. Specifically, it starts with a dual-stream decoupled sparse DiT design with a dynamic Mixture-of-Experts (MoE) architecture, in which two separate encoders first process image and text tokens independently. A single-stream sparse DiT structure with dynamic MoE architecture is then adopted to trigger multi-modal interaction for image generation in a cost-efficient manner. To support flexible accessibility with varied model capabilities, we provide HiDream-I1 in three variants: HiDream-I1-Full, HiDream-I1-Dev, and HiDream-I1-Fast. Furthermore, we go beyond typical text-to-image generation and remould HiDream-I1 with additional image conditions to perform precise, instruction-based editing on given images, yielding a new instruction-based image editing model, HiDream-E1. Ultimately, by integrating text-to-image generation and instruction-based image editing, HiDream-I1 evolves into a comprehensive image agent (HiDream-A1) capable of fully interactive image creation and refinement. To accelerate multi-modal AIGC research, we have open-sourced all the code and model weights of HiDream-I1-Full, HiDream-I1-Dev, HiDream-I1-Fast, and HiDream-E1 through our project websites: this https URL and this https URL. All features can be directly experienced via this https URL.
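To make the two-stage layout concrete, below is a minimal PyTorch sketch of a dual-stream stage (separate image and text branches) followed by a single-stream stage whose feed-forward layers are dynamic MoE modules with top-1 token routing. All module names, dimensions, head counts, and the routing rule are illustrative assumptions for exposition, not HiDream-I1's actual implementation.

# Minimal sketch of the dual-stream -> single-stream sparse DiT layout
# described in the abstract. Sizes, head counts, and top-1 routing are
# illustrative assumptions, not HiDream-I1's real implementation.
import torch
import torch.nn as nn

class MoEFeedForward(nn.Module):
    """Dynamic MoE FFN: a router sends each token to its top-1 expert."""
    def __init__(self, dim: int, num_experts: int = 4, hidden: int = 512):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); pick one expert per token.
        weights = self.router(x).softmax(dim=-1)   # (B, T, E)
        top_w, top_idx = weights.max(dim=-1)       # (B, T)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

class DualStreamBlock(nn.Module):
    """Decoupled branches: image and text tokens are processed independently."""
    def __init__(self, dim: int):
        super().__init__()
        self.img_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.txt_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.img_ffn = MoEFeedForward(dim)
        self.txt_ffn = MoEFeedForward(dim)

    def forward(self, img, txt):
        img = img + self.img_attn(img, img, img)[0]
        img = img + self.img_ffn(img)
        txt = txt + self.txt_attn(txt, txt, txt)[0]
        txt = txt + self.txt_ffn(txt)
        return img, txt

class SingleStreamBlock(nn.Module):
    """Concatenated image+text tokens interact through shared attention."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.ffn = MoEFeedForward(dim)

    def forward(self, tokens):
        tokens = tokens + self.attn(tokens, tokens, tokens)[0]
        return tokens + self.ffn(tokens)

# Toy forward pass: decoupled dual-stream stage, then joint single-stream stage.
dim = 256
img = torch.randn(2, 64, dim)   # image latent tokens
txt = torch.randn(2, 16, dim)   # text tokens
dual, single = DualStreamBlock(dim), SingleStreamBlock(dim)
img, txt = dual(img, txt)
fused = single(torch.cat([img, txt], dim=1))
print(fused.shape)              # torch.Size([2, 80, 256])

Note that top-1 routing keeps per-token compute constant regardless of the number of experts, which is the kind of sparsity that lets an MoE stage stay cost-efficient at inference.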
@article{cai2025_2505.22705,
  title={HiDream-I1: A High-Efficient Image Generative Foundation Model with Sparse Diffusion Transformer},
  author={Qi Cai and Jingwen Chen and Yang Chen and Yehao Li and Fuchen Long and Yingwei Pan and Zhaofan Qiu and Yiheng Zhang and Fengbin Gao and Peihan Xu and Yimeng Wang and Kai Yu and Wenxuan Chen and Ziwei Feng and Zijian Gong and Jianzhuang Pan and Yi Peng and Rui Tian and Siyu Wang and Bo Zhao and Ting Yao and Tao Mei},
  journal={arXiv preprint arXiv:2505.22705},
  year={2025}
}