Papers
Learning Decomposed Contextual Token Representations from Pretrained and Collaborative Signals for Generative Recommendation
Published:8/23/2025
Generative Recommendation Systems · Contextual Token Representation Learning · Large Language Model Optimization · Sequence-to-Sequence Modeling · User Interaction Modeling
The DECOR framework addresses limitations of generative recommenders by enhancing token adaptability while preserving pretrained semantics. It employs contextualized token composition and decomposed embedding fusion, and demonstrates superior performance on real-world datasets.
User-LLM: Efficient LLM Contextualization with User Embeddings
Published:2/21/2024
User Embedding-Based LLM Contextualization · Self-Supervised User Behavior Encoder · Movie Recommendation Dataset · Response Generation from User Timeline · Cross-Attention LLM Integration
The User-LLM framework employs self-supervised user embeddings for direct contextualization of LLMs. Through cross-attention, it dynamically adapts responses to user behavior, achieving up to a 78.1X speedup and a 16.33% performance boost on datasets such as MovieLens.
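The fusion step described here, cross-attention between pre-computed user embeddings and the LLM's token states, can be sketched in a few lines. The module below is a minimal, hypothetical single-layer version; the dimensions, projection, and residual wiring are illustrative assumptions, not the User-LLM architecture itself.

```python
import torch
import torch.nn as nn

class UserCrossAttention(nn.Module):
    """Fuse a user-behavior embedding into LLM token states via cross-attention.

    Hypothetical single-layer sketch: queries come from the LLM hidden states,
    keys/values from the (projected) user embeddings.
    """

    def __init__(self, llm_dim: int = 768, user_dim: int = 256, n_heads: int = 8):
        super().__init__()
        self.user_proj = nn.Linear(user_dim, llm_dim)   # map user space -> LLM space
        self.xattn = nn.MultiheadAttention(llm_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(llm_dim)

    def forward(self, token_states, user_embeds):
        # token_states: (batch, seq_len, llm_dim) from the LLM
        # user_embeds:  (batch, n_user_tokens, user_dim) from a self-supervised encoder
        u = self.user_proj(user_embeds)
        attended, _ = self.xattn(query=token_states, key=u, value=u)
        return self.norm(token_states + attended)        # residual fusion

# Toy usage with random tensors standing in for real encoder outputs.
fusion = UserCrossAttention()
tokens = torch.randn(2, 16, 768)
user = torch.randn(2, 4, 256)
print(fusion(tokens, user).shape)  # torch.Size([2, 16, 768])
```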
REGEN: Learning Compact Video Embedding with (Re-)Generative Decoder
Published:3/12/2025
Diffusion Model Video Embedding · Compact Video Encoding · Generative Video Reconstruction · Accelerated Training for Video Generation Models · Temporal Compression Video Embedding
The study introduces a novel video embedding method that focuses on synthesizing visually plausible reconstructions, achieving higher compression ratios. Using an encoder-generator framework with a diffusion transformer, it shows superior encoding-decoding performance.
PersonaBench: Evaluating AI Models on Understanding Personal Information through Accessing (Synthetic) Private User Data
Published:2/28/2025
Synthetic User Data Generation · Personalized AI Model Evaluation · Understanding Private User Information · Retrieval-Augmented Generation (RAG) Method · Enhancing User Personalization Capabilities
This study introduces a synthetic data generation pipeline for creating realistic user profiles and private documents, leading to the PersonaBench benchmark. It highlights the poor performance of current retrieval-augmented models at extracting personal information.
Robust model predictive control for heat exchanger network
Published:8/28/2014
Robust Model Predictive Control · Heat Exchanger Network Control · Nonlinear Control Strategies · Simulation Experiments in MATLAB/Simulink · Control Performance Optimization
This paper introduces a Robust Model Predictive Control (RMPC) strategy for optimizing heat exchanger network operation. Simulation experiments on three countercurrent exchangers connected in series in MATLAB/Simulink demonstrate RMPC's effectiveness in reducing cooling-medium consumption.
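As a rough illustration of the receding-horizon idea behind any MPC scheme (not the paper's RMPC formulation or its heat-exchanger model), the sketch below optimizes a control sequence for a toy first-order plant and applies only the first move at each step; the plant constants, cost weights, and bounds are made-up assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy first-order plant x[k+1] = a*x[k] + b*u[k]; stands in for an outlet temperature.
a, b = 0.9, 0.1
horizon, x_ref = 10, 50.0

def cost(u_seq, x_now):
    """Quadratic tracking cost plus a small penalty on control effort."""
    x, J = x_now, 0.0
    for u in u_seq:
        x = a * x + b * u
        J += (x - x_ref) ** 2 + 0.01 * u ** 2
    return J

def mpc_step(x_now):
    """Solve the finite-horizon problem and apply only the first control move."""
    res = minimize(cost, np.zeros(horizon), args=(x_now,),
                   bounds=[(0.0, 100.0)] * horizon)   # actuator limits
    return res.x[0]

x = 20.0
for k in range(30):                   # receding-horizon (closed-loop) simulation
    u = mpc_step(x)
    x = a * x + b * u
print(round(x, 2))                    # state driven toward x_ref
```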
Planning with Diffusion for Flexible Behavior Synthesis
Published:5/20/2022
Diffusion Model Planning · Model-Based Reinforcement Learning · Trajectory Optimization · Long-Horizon Decision Making · Behavior Synthesis
This paper presents a novel model-based reinforcement learning approach that combines diffusion probabilistic modeling with trajectory optimization, enhancing consistency between modeling and decision-making. It demonstrates effective long-horizon decision-making and flexible behavior synthesis.
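The core loop, treating a whole trajectory as one sample and refining it by reverse diffusion, can be sketched as below. The denoiser is a placeholder function standing in for the paper's trained model, and the DDPM-style update and the clamping of the first state are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a trained trajectory denoiser eps_theta(traj, t);
# in the paper this would be a learned model over (state, action) sequences.
def eps_theta(traj, t):
    return 0.1 * traj  # placeholder: nudges the trajectory toward zero

def ddpm_sample(horizon=32, dim=6, steps=50, seed=0):
    """DDPM-style reverse diffusion over an entire trajectory of shape (horizon, dim)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    traj = rng.standard_normal((horizon, dim))        # start from pure noise
    for t in reversed(range(steps)):
        eps = eps_theta(traj, t)
        mean = (traj - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(traj.shape) if t > 0 else 0.0
        traj = mean + np.sqrt(betas[t]) * noise
        # Planning as conditioning: clamp the first state (here the first 3 dims,
        # an illustrative split) to the current observation.
        traj[0, :3] = np.array([0.0, 0.0, 0.0])
    return traj

plan = ddpm_sample()
print(plan.shape)  # (32, 6): a denoised state-action plan
```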
Learnable Item Tokenization for Generative Recommendation
Published:5/12/2024
LLM-based Generative Recommendation Systems · Learnable Item Tokenization · Contrastive Learning-based Recommendation Algorithms · Residual Quantized Variational Autoencoder · Ranking-Guided Generation Loss
The paper introduces LETTER, a learnable tokenizer that addresses the challenge of transforming recommendation data into an LLM's language space. It integrates hierarchical semantics, collaborative signals, and code-assignment diversity, and is validated experimentally on three datasets.
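The residual-quantization step that maps an item embedding to a short code sequence (its semantic ID) looks roughly like the sketch below; the codebooks here are random and untrained, whereas LETTER learns them jointly with collaborative and diversity regularizers.

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Assign one code per level, quantizing the remaining residual each time.

    codebooks: list of (codebook_size, dim) arrays; returns the code indices
    (the item's discrete 'semantic ID') and the reconstruction.
    """
    residual, codes, recon = x.copy(), [], np.zeros_like(x)
    for cb in codebooks:
        idx = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))  # nearest code
        codes.append(idx)
        recon += cb[idx]
        residual -= cb[idx]
    return codes, recon

# Toy example: 3 levels x 256 codes with untrained random codebooks (an RQ-VAE
# would learn these jointly with an encoder-decoder).
rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((256, 32)) * 0.5 for _ in range(3)]
item_embedding = rng.standard_normal(32)
codes, recon = residual_quantize(item_embedding, codebooks)
print(codes)  # e.g. [17, 203, 88] -> tokens like "<a_17><b_203><c_88>"
```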
HiD-VAE: Interpretable Generative Recommendation via Hierarchical and Disentangled Semantic IDs
Published:8/7/2025
Generative Recommendation Systems · Hierarchical Semantic IDs · Interpretable Generative Recommendation · Disentangled Representation Learning · Uniqueness Loss Mechanism
HiD-VAE is a framework that enhances generative recommendation by learning hierarchically disentangled item representations, addressing the flatness and entanglement issues of traditional methods and thereby improving recommendation accuracy and diversity.
TokenRec: Learning to Tokenize ID for LLM-based Generative Recommendation
Published:6/15/2024
LLM-based Recommendation Systems · Generative Recommendation Systems · User-Item ID Tokenization · Masked Vector-Quantized Tokenizer · Capturing High-Order Collaborative Knowledge for LLMs
TokenRec is introduced as a novel framework for enhancing LLM-based recommendation systems by effectively tokenizing user and item IDs. Featuring a Masked Vector-Quantized Tokenizer and generative retrieval, it captures high-order collaborative knowledge and improves recommendation performance.
Omnidirectional 3D Scene Reconstruction from Single Image
Single Image 3D Scene Reconstruction · Diffusion Model for 3D Reconstruction · Geometric Consistency Optimization · 3D Gaussian Splatting Representation · Omnidirectional Scene Reconstruction
The paper proposes Omni3D, a novel method for omnidirectional 3D scene reconstruction from a single image. By iteratively optimizing generated views and poses, it minimizes 3D reprojection errors and improves geometric consistency. Experiments show that Omni3D significantly outperforms existing methods.
MoE-Loco: Mixture of Experts for Multitask Locomotion
Published:3/11/2025
Multitask Locomotion Learning · Mixture of Experts Framework · Quadrupedal and Bipedal Locomotion · Gradient Conflict Mitigation · Robot Task Migration and Skill Composition
The paper presents MoE-Loco, a Mixture-of-Experts framework for legged robots that enables a single policy to traverse diverse terrains while mitigating gradient conflicts in multitask reinforcement learning, improving both training efficiency and performance.
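A minimal soft-gated mixture-of-experts layer, of the kind such a policy could build on, is sketched below; the sizes, gate design, and single-layer structure are illustrative assumptions rather than the MoE-Loco policy itself.

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    """Minimal mixture-of-experts layer: a gate produces per-expert weights and
    the output is the weighted sum of expert MLPs. All sizes are illustrative."""

    def __init__(self, in_dim=48, hidden=128, out_dim=12, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, obs):
        weights = torch.softmax(self.gate(obs), dim=-1)            # (batch, n_experts)
        outs = torch.stack([e(obs) for e in self.experts], dim=1)  # (batch, n_experts, out_dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)

policy_head = SoftMoE()
actions = policy_head(torch.randn(8, 48))   # one action vector per robot observation
print(actions.shape)                        # torch.Size([8, 12])
```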
Inductive Generative Recommendation via Retrieval-based Speculation
Published:10/4/2024
Generative Recommendation Systems · Training-Free Acceleration Methods · Online Recommendation System Optimization · Sequential Recommender Systems · Image Generation
The paper introduces a retrieval-based inductive generative recommendation framework that addresses the limitations of generative models in recommending unseen items: a drafter model generates candidates and a generative model verifies them.
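The draft-then-verify pattern can be sketched as below, with a cheap retrieval drafter and a placeholder "generative" scorer standing in for the real models; the embeddings, threshold, and scoring are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
item_embs = rng.standard_normal((1000, 64))          # catalog, may include unseen items

def draft_candidates(user_vec, k=20):
    """Drafter: cheap retrieval by inner product over all items (even unseen ones)."""
    scores = item_embs @ user_vec
    return np.argsort(-scores)[:k]

def verify(user_vec, candidates, threshold=0.0):
    """Verifier: re-score drafted candidates with a (placeholder) generative model's
    score and keep those above a threshold, ranked by that score."""
    gen_scores = (item_embs[candidates] @ user_vec) + 0.1 * rng.standard_normal(len(candidates))
    mask = gen_scores > threshold
    kept = candidates[mask]
    return kept[np.argsort(-gen_scores[mask])]

user_vec = rng.standard_normal(64)
recs = verify(user_vec, draft_candidates(user_vec))
print(recs[:10])
```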
LLM-Aligned Geographic Item Tokenization for Local-Life Recommendation
Published:11/18/2025
LLM-based Recommendation Systems · Geographic Item Tokenization · Local-Life Recommendation · Reinforcement Learning Geographic Alignment · Hierarchical Geographic Item Tokenization
The LGSID framework enhances local-life recommendation by integrating RL-based geographic alignment and hierarchical item tokenization to capture spatial relationships, outperforming existing models in empirical studies.
Pre-training Generative Recommender with Multi-Identifier Item Tokenization
Published:4/6/2025
Generative Recommendation Systems · Multi-Identifier Item Tokenization · Curriculum Recommender Pre-Training · RQ-VAE as Tokenizer · Low-Frequency Item Semantic Modeling
The MTGRec framework enhances generative recommender pre-training through multi-identifier item tokenization, using RQ-VAE for multi-identifier association and a curriculum learning scheme to improve semantic modeling for low-frequency items and token diversity.
Optimized Product Quantization for Approximate Nearest Neighbor Search
Published:6/1/2013
Optimized Product Quantization · Approximate Nearest Neighbor Search · High-Dimensional Vector Encoding · Quantization Distortion Minimization · Parametric and Non-Parametric Methods
This study presents an optimized product quantization method to enhance the accuracy of approximate nearest neighbor (ANN) search. Two approaches that minimize quantization distortion are proposed: a non-parametric method and a parametric method that assures an optimal solution.
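Plain product quantization, the baseline that OPQ improves by additionally learning a rotation of the space, can be sketched as follows; the toy data, codebook sizes, and the use of scikit-learn's KMeans are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_pq(data, n_subspaces=4, n_codes=256):
    """Plain product quantization: split each vector into sub-vectors and run
    k-means independently in every subspace. (OPQ additionally optimizes a
    rotation of the space to minimize total quantization distortion; omitted here.)"""
    sub_dim = data.shape[1] // n_subspaces
    codebooks = []
    for s in range(n_subspaces):
        sub = data[:, s * sub_dim:(s + 1) * sub_dim]
        km = KMeans(n_clusters=n_codes, n_init=4, random_state=0).fit(sub)
        codebooks.append(km.cluster_centers_)
    return codebooks

def encode(x, codebooks):
    """Encode one vector as one code index per subspace (here 4 bytes total)."""
    sub_dim = len(x) // len(codebooks)
    return [int(np.argmin(((cb - x[s * sub_dim:(s + 1) * sub_dim]) ** 2).sum(axis=1)))
            for s, cb in enumerate(codebooks)]

rng = np.random.default_rng(0)
data = rng.standard_normal((5000, 32)).astype(np.float32)
codebooks = train_pq(data)
print(encode(data[0], codebooks))   # e.g. [41, 7, 199, 86]
```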
Understanding Negative Sampling in Knowledge Graph Embedding
Published:1/31/2021
Knowledge Graph Embedding · Negative Sampling Methods · Negative Sample Generation · Knowledge Representation in Recommendation Systems · Link Prediction and Node Classification
This paper discusses negative sampling in knowledge graph embedding, highlighting its importance in training. It categorizes negative sampling methods into static, dynamic, and custom clustering approaches, offering new insights for enhancing knowledge representation in recommendation systems.
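The simplest static strategy in this family, uniform corruption of a triple's head or tail, looks like the sketch below; the toy graph and the rejection of accidental positives are illustrative, and dynamic or clustering-based samplers would replace the uniform choice with embedding-aware ones.

```python
import random

# A toy knowledge graph of (head, relation, tail) triples.
triples = [("alice", "likes", "item1"), ("bob", "likes", "item2"), ("alice", "friend_of", "bob")]
entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
positives = set(triples)

def uniform_negative(triple, n=2):
    """Static uniform sampling: corrupt the head or the tail with a random entity,
    rejecting corruptions that happen to be true triples."""
    h, r, t = triple
    negatives = []
    while len(negatives) < n:
        e = random.choice(entities)
        cand = (e, r, t) if random.random() < 0.5 else (h, r, e)
        if cand not in positives and cand != triple:
            negatives.append(cand)
    return negatives

random.seed(0)
print(uniform_negative(("alice", "likes", "item1")))
```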
Qwen2.5 Technical Report
Published:12/20/2024
Qwen 2.5 Large Language Model · Multistage Reinforcement Learning · Supervised Fine-Tuning Methods · Human Preference Enhancement · Large-Scale Pre-Training Datasets
The Qwen2.5 technical report introduces a new large language model series, expanding the pre-training dataset to 18 trillion tokens. It employs over 1 million supervised fine-tuning samples and multistage reinforcement learning to better align with human preferences, demonstrating superior performance.
Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length
Published:12/4/2025
Real-Time Audio-Driven Avatar Generation · Efficient Inference of Diffusion Models · Low-Latency Streaming Generation · Temporal Consistency Enhancement Mechanism · Large-Scale Parameter Diffusion Model
Live Avatar is an innovative algorithm-system framework for high-fidelity, infinite-length, audio-driven avatar generation. It employs a 14-billion-parameter diffusion model and introduces a timestep-forcing pipeline for low-latency streaming, enhancing temporal consistency.
Complex-valued Neural Operator for Solving 2D Wave Equation Based on Graph Neural Network
Published:1/1/2025
Complex-Valued Neural Operator · Application of Graph Neural Networks · 2D Wave Equation Solving · Green's Function Method · EM Simulation Acceleration
This work introduces a complex-valued neural operator (CVNeuralOp) based on graph neural networks to solve the 2D wave equation. Inspired by the Green's function method, it adapts to varied domain shapes and grid densities while outperforming the method of moments in computational efficiency.
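For reference, the generic Green's-function representation that this kind of kernel-learning operator imitates (written here for a linear operator L with source f on a domain Ω, not the paper's exact formulation) is:

```latex
% Solution of L u = f on \Omega expressed as an integral against the kernel G:
u(\mathbf{r}) = \int_{\Omega} G(\mathbf{r}, \mathbf{r}')\, f(\mathbf{r}')\, d\mathbf{r}'
```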
NSNO: Neumann Series Neural Operator for Solving Helmholtz Equations in Inhomogeneous Medium
Published:1/25/2024
Neumann Series Neural Operator · Helmholtz Equation Solving · Embedded U-Net Architecture · Deep Learning for PDEs · Inverse Scattering Problem Model
The Neumann Series Neural Operator (NSNO) is introduced for learning the solution operator of the Helmholtz equation, achieving 60% lower relative L2 error and 50% reduced computational cost, especially beneficial in high-wavenumber scenarios and inverse scattering problems.
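For reference, the Neumann series identity that such an architecture unrolls (stated here in its generic form, assuming an operator K with norm below one, rather than the paper's specific scattering decomposition) is:

```latex
% If \|K\| < 1, the inverse admits the convergent expansion
(I - K)^{-1} f \;=\; \sum_{n=0}^{\infty} K^{n} f \;=\; f + K f + K^{2} f + \cdots ,
% so truncating after N terms yields an iterative scheme whose repeated
% operator applications a network can learn to approximate.
```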
……