Papers
Tag Filter
Auto-Regressive Diffusion Model
FilmWeaver: Weaving Consistent Multi-Shot Videos with Cache-Guided Autoregressive Diffusion
Published: 12/12/2025
Consistent Video Generation · Auto-Regressive Diffusion Model · Multi-Shot Video Synthesis · Video Generation Dataset Construction · Character and Scene Consistency
FilmWeaver is a framework addressing consistency challenges in multi-shot video generation. It employs an autoregressive diffusion approach for arbitrary-length video creation, decoupling inter-shot and intra-shot coherence through a dual-level cache mechanism to maintain character and scene consistency.
InfVSR: Breaking Length Limits of Generic Video Super-Resolution
Published: 10/1/2025
Video Super-Resolution · Auto-Regressive Diffusion Model · Long-Sequence Video Processing · Video Diffusion Models · Temporal Consistency Evaluation
InfVSR reformulates video super-resolution as an autoregressive one-step diffusion model, enabling efficient, scalable processing of long videos with temporal consistency via rolling caches and patch-wise supervision.
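The rolling-cache idea the abstract mentions can be sketched as a fixed-size buffer of recently generated frames that conditions each one-step call, so memory stays bounded no matter how long the video is. This is an illustrative sketch, not the paper's implementation; `step_fn`, the cache size, and all names are assumptions.

```python
from collections import deque

class RollingCache:
    """Fixed-size cache of recent output latents (illustrative sketch)."""
    def __init__(self, max_frames: int):
        self.buffer = deque(maxlen=max_frames)  # old frames evicted automatically

    def push(self, latent):
        self.buffer.append(latent)

    def context(self):
        return list(self.buffer)

def super_resolve_stream(low_res_frames, step_fn, cache_size=8):
    """Process an arbitrarily long frame stream one frame at a time,
    conditioning each single denoising step on the rolling cache."""
    cache = RollingCache(cache_size)
    for frame in low_res_frames:
        hi = step_fn(frame, cache.context())  # one-step diffusion call (assumed API)
        cache.push(hi)
        yield hi
```

Because the cache evicts old frames, peak memory is constant in sequence length, which is what lets this pattern break the usual clip-length limits.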
Large Language Diffusion Models
Published: 2/14/2025
Large Language Diffusion Models · Auto-Regressive Diffusion Model · Large Language Model Fine-Tuning · Transformer Architecture · Probabilistic Inference Generation
LLaDA, a diffusion-based large language model, uses masking and reverse generation with Transformers to predict tokens, optimizing likelihood bounds. It matches autoregressive baselines on diverse tasks and excels at in-context learning, demonstrating diffusion models' promise.
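The masking-and-reverse-generation loop can be sketched as: start from a fully masked sequence, then over a few steps let the model predict all masked positions and commit the most confident ones. This is a minimal sketch of masked-diffusion decoding in general, not LLaDA's actual sampler; `predict_fn` and the unmasking schedule are assumptions.

```python
import math

MASK = "<M>"

def masked_diffusion_decode(predict_fn, length, steps=4):
    """Reverse process sketch: iteratively unmask a growing fraction
    of positions using the model's most confident predictions.
    predict_fn(seq) -> list of (token, confidence), one per position."""
    seq = [MASK] * length
    for step in range(steps):
        preds = predict_fn(seq)
        masked = [i for i, t in enumerate(seq) if t == MASK]
        # unmask enough positions so everything is filled by the last step
        remaining_steps = steps - step
        k = math.ceil(len(masked) / remaining_steps)
        masked.sort(key=lambda i: preds[i][1], reverse=True)
        for i in masked[:k]:
            seq[i] = preds[i][0]
    return seq
```

Unlike left-to-right decoding, every step conditions on tokens anywhere in the sequence, which is where the bidirectional Transformer pays off.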
dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching
Published: 5/17/2025
Diffusion Model Fine-Tuning · Efficient Inference of Diffusion Models · LLM Reasoning Capacity Enhancement · Training-Free Acceleration Methods · Auto-Regressive Diffusion Model
dLLM-Cache is a training-free adaptive caching method that accelerates diffusion large language models by reusing intermediate computations, achieving up to 9.1× speedup on LLaDA 8B and Dream 7B without degrading output quality.
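The core of "reusing intermediate computations" can be illustrated as: keep a layer's last input and output, and on the next denoising step skip recomputation when the input has barely changed. A minimal sketch under that assumption; the similarity metric, threshold, and names are illustrative, not the paper's actual criterion.

```python
class AdaptiveCache:
    """Reuse a layer's cached output when its input changed little
    between consecutive denoising steps (illustrative sketch)."""
    def __init__(self, threshold=0.05):
        self.threshold = threshold  # relative-change cutoff (assumed value)
        self.last_input = None
        self.last_output = None

    def maybe_compute(self, x, layer_fn):
        if self.last_input is not None:
            # relative L2 change between current and cached input
            num = sum((a - b) ** 2 for a, b in zip(x, self.last_input)) ** 0.5
            den = sum(b * b for b in self.last_input) ** 0.5 + 1e-8
            if num / den < self.threshold:
                return self.last_output  # cache hit: skip the layer entirely
        self.last_input = list(x)
        self.last_output = layer_fn(x)
        return self.last_output
```

Because the decision is made per step at inference time, no retraining is needed, which is what makes such schemes training-free.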
OneFlowSeq: Achieving One-Step Generation for Diffusion Language Models via Lightweight Distillation
Published: 10/8/2025
Diffusion Model Fine-Tuning · Auto-Regressive Diffusion Model · Large Language Model Fine-Tuning · Sequence Policy Optimization · Training-Free Acceleration Methods
OneFlowSeq distills a multi-step diffusion teacher into a single-step generator using MeanFlow supervision and Jacobian-vector product signals, greatly accelerating inference and improving performance with 1600× fewer trainable parameters.