About the Release
We are thrilled to announce the beta release of SDXL for StreamDiffusion, bringing the most advanced Stable Diffusion model into a highly controllable real-time video workflow. Better yet: it’s all open source and available for the real-time AI video community to explore and extend.

Builders and Creative Technologists can now generate HD real-time video at over 15 FPS directly on the Daydream platform, with the potential to reach up to 25 FPS in optimized configurations. dotsimulate and others are already creating SDXL-based tools and applications using the Daydream API or self-hosting our open-source StreamDiffusion fork.
Getting Started
- StreamDiffusion can create a wide variety of video styles. Craft a custom configuration and share it with the world. Explore →
- If you’re ready to build, check out our Quickstart guides: Start Building → or Connect to StreamDiffusionTD →
- Learn about real-time AI and world models: Explore the Knowledge Hub and Research Resources →
Quality you can actually control
Our open source stack allows us to rapidly combine many parallel research tracks and deliver groundbreaking quality and controllability.
Here are a few of the key components, already battle-tested by Daydream builders:
SDXL
- 3.5× larger model with expanded attention blocks for better image quality
- Native 1024×1024 resolution for HD output with improved colors, contrast, and detail
- Less flickering and fewer artifacts, with better frame-to-frame consistency
Image-Based Style Control (through IPAdapters)
IPAdapters (Image-Prompt Adapters) let you use any reference image to guide your video's style - similar to LoRAs but with real-time control. Two modes:
- IPAdapter Standard: Apply artistic styles across your video stream
- IPAdapter FaceID: Maintain character consistency throughout sequences
Technical features:
- 7+ Temporal Weight Types: Linear, Ease-in/out, Style transfer, Composition, etc.
- Runtime Parameter Tuning: Adjust settings during generation
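To make the two modes and runtime tuning concrete, here is a minimal illustrative sketch. The field names (`mode`, `reference_image`, `weight`, `weight_type`) and the helper function are assumptions for demonstration only, not the actual StreamDiffusion configuration schema:

```python
# Illustrative sketch only: field names below are assumptions for
# demonstration, not the actual StreamDiffusion API or config schema.
ipadapter_config = {
    "mode": "standard",           # or "faceid" for character consistency
    "reference_image": "style_ref.png",
    "weight": 0.8,                # overall style strength
    "weight_type": "ease-in-out", # one of the 7+ temporal weight types
}

def set_weight_at_runtime(config: dict, new_weight: float) -> dict:
    """Adjust the adapter strength mid-stream (runtime parameter tuning)."""
    config["weight"] = max(0.0, min(1.0, new_weight))  # clamp to [0, 1]
    return config

# Dial the style influence down while the stream keeps running.
set_weight_at_runtime(ipadapter_config, 0.5)
```

The key idea is that the style strength is an ordinary parameter you can change between frames, rather than something baked in at pipeline build time.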
Fine-tuned Spatial Consistency (through Multi-ControlNet Support)
We’ve accelerated the HED, Depth, Pose, Tile, and Canny ControlNets for granular control over the workflow’s spatial structure. You can run several at once and adjust the strength of each independently.
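A sketch of what stacking multiple ControlNets with independent strengths might look like. The structure and field names here are hypothetical, chosen to illustrate the idea rather than mirror the actual StreamDiffusion configuration:

```python
# Hypothetical multi-ControlNet configuration: each entry guides a
# different aspect of spatial structure, with its own strength.
controlnets = [
    {"type": "depth", "strength": 0.6},  # coarse 3D layout
    {"type": "canny", "strength": 0.4},  # edge detail
    {"type": "pose",  "strength": 0.8},  # body keypoints
]

def effective_strengths(nets: list[dict]) -> dict:
    """Map each ControlNet type to its current strength for inspection."""
    return {n["type"]: n["strength"] for n in nets}
```

Because each ControlNet has its own strength, you can, for example, lean heavily on pose while keeping edge guidance subtle.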
TensorRT Acceleration
- Sustained real-time generation through NVIDIA's inference optimization
- Consistent 15-25 FPS with complex models
🎁 Bonus: SD1.5 + IPAdapters
SD1.5 is a community favorite for a reason - and we’ve coupled it with accelerated IPAdapters for a high-framerate style transfer experience.
Note: IPAdapters aren't available for SDTurbo models.