OneSearch: A Preliminary Exploration of the Unified End-to-End Generative Framework for E-commerce Search
TL;DR Summary
OneSearch unifies e-commerce search into an end-to-end generative model, leveraging hierarchical quantization encoding, multi-view user behavior modeling, and preference-aware rewards to enhance relevance and efficiency, achieving significant business improvements and cost reductions in deployment.
Abstract
Traditional e-commerce search systems employ multi-stage cascading architectures (MCA) that progressively filter items through recall, pre-ranking, and ranking stages. While effective at balancing computational efficiency with business conversion, these systems suffer from fragmented computation and optimization objective collisions across stages, which ultimately limit their performance ceiling. To address these issues, we propose OneSearch, the first industrially deployed end-to-end generative framework for e-commerce search. This framework introduces three key innovations: (1) a Keyword-enhanced Hierarchical Quantization Encoding (KHQE) module, to preserve both hierarchical semantics and distinctive item attributes while maintaining strong query-item relevance constraints; (2) a multi-view user behavior sequence injection strategy that constructs behavior-driven user IDs and incorporates both explicit short-term and implicit long-term sequences to model user preferences comprehensively; and (3) a Preference-Aware Reward System (PARS) featuring multi-stage supervised fine-tuning and adaptive reward-weighted ranking to capture fine-grained user preferences. Extensive offline evaluations on large-scale industry datasets demonstrate OneSearch's superior performance for high-quality recall and ranking. Rigorous online A/B tests confirm its ability to enhance relevance at the same exposure position, achieving statistically significant improvements: +1.67% item CTR, +2.40% buyers, and +3.22% order volume. Furthermore, OneSearch reduces operational expenditure by 75.40% and improves Model FLOPs Utilization from 3.26% to 27.32%. The system has been successfully deployed across multiple search scenarios in Kuaishou, serving millions of users and generating tens of millions of PVs daily.
In-depth Reading
English Analysis
1. Bibliographic Information
- Title: OneSearch: A Preliminary Exploration of the Unified End-to-End Generative Framework for E-commerce Search
- Authors: Ben Chen, Xian Guo, Siyuan Wang, Zihan Liang, Yue Lv, Yufei Ma, Xinlong Xiao, Bowen Xue, Xuxin Zhang, Ying Yang, Huangyu Dai, Xing Xu, Tong Zhao, Mingcan Peng, Xiaoyang Zheng, Chao Wang, Qihang Zhao, Zhixin Zhai, Yang Zhao, Bochao Liu, Jingshan Lv, Xiao Liang, Yuqing Ding, Jing Chen, Chenyi Lei, Wenwu Ou, Han Li, Kun Gai.
- Affiliations: All authors are affiliated with Kuaishou Technology, a major Chinese technology company known for its short-video platform and e-commerce services. This indicates the research is grounded in large-scale industrial application.
- Journal/Conference: The paper is submitted to an ACM conference in 2025. The specific venue is not named (the citation template only reads "In . ACM, New York, NY, USA"), but ACM conferences (like SIGIR, KDD, WSDM) are premier venues in the fields of information retrieval, data mining, and web search, signifying high-quality research.
- Publication Year: 2025 (Projected)
- Abstract: The paper introduces OneSearch, a novel end-to-end generative framework designed to replace traditional multi-stage cascading architectures (MCA) in e-commerce search. MCAs suffer from fragmented computation and conflicting optimization goals across different stages (recall, pre-ranking, ranking). OneSearch unifies these stages into a single generative model. Its key innovations are: (1) a Keyword-enhanced Hierarchical Quantization Encoding (KHQE) to create better item representations, (2) a multi-view user behavior sequence injection strategy to comprehensively model user preferences, and (3) a Preference-Aware Reward System (PARS) to fine-tune the model for relevance and user preference. Offline and online A/B tests on Kuaishou's platform show significant improvements in user engagement (CTR, buyers, orders) and massive gains in efficiency (75.40% cost reduction, 8x improvement in hardware utilization).
- Original Source Link:
https://arxiv.org/abs/2509.03236. This is a preprint link from arXiv, indicating the paper is available for public review before or alongside formal peer review and publication.
2. Executive Summary
- Background & Motivation (Why):
- Core Problem: Traditional e-commerce search systems use a Multi-stage Cascading Architecture (MCA). This pipeline involves a recall stage (finding thousands of potentially relevant items from billions), a pre-ranking stage (narrowing them down to hundreds), and a ranking stage (sorting the final list for the user). This design, while efficient, has major drawbacks:
- Fragmented Computation: Resources are wasted on data transfer and communication between stages rather than on actual model computation.
- Objective Collision: Each stage is optimized for a different goal (e.g., recall focuses on breadth, ranking on precision), leading to conflicts.
- Error Propagation: If a perfect item is mistakenly filtered out in an early stage, it can never be shown to the user, limiting the system's overall performance.
- Why Now: Recent advances in Generative Retrieval (GR), which frames retrieval as a sequence generation task (like how a language model generates text), offer a new paradigm. Instead of filtering, a single model can directly generate a list of the most relevant item identifiers. However, applying GR to e-commerce search is uniquely challenging due to noisy item descriptions, strict relevance needs, and complex user intents.
- Fresh Angle: OneSearch is presented as the first industrially deployed framework to successfully unify the entire e-commerce search pipeline into a single end-to-end generative model, tackling the specific challenges of this domain head-on.
- Main Contributions / Findings (What):
- Keyword-enhanced Hierarchical Quantization Encoding (KHQE): A novel method to convert items into discrete Semantic IDs (SIDs). It enhances core item attributes (like brand, color) and uses a hybrid quantization scheme (RQ-Kmeans + OPQ) to capture both hierarchical and unique item features, improving relevance.
- Multi-view User Behavior Sequence (Mu-Seq) Injection: A comprehensive strategy to personalize results. It creates unique user IDs from behavior history and injects both recent (short-term) and historical (long-term) user interactions into the model to better infer user intent.
- Preference-Aware Reward System (PARS): A sophisticated training and ranking system. It uses multi-stage supervised fine-tuning to align the model with search tasks and an adaptive reward model to teach the model fine-grained user preferences, balancing relevance and conversion.
- Significant Real-World Impact: Online A/B tests confirmed OneSearch's superiority, achieving +1.67% item Click-Through Rate (CTR), +2.40% buyers, and +3.22% order volume. It also dramatically cut operational expenditure by 75.40% and improved hardware efficiency (Model FLOPs Utilization) from 3.26% to 27.32%.
3. Prerequisite Knowledge & Related Work
- Foundational Concepts:
- Multi-stage Cascading Architecture (MCA): The standard design for large-scale retrieval systems. It's like a series of filters.
- Recall: The first stage. Its job is to quickly find a large set of candidate items (e.g., 10,000) from the entire database (e.g., 1 billion) that might be relevant to the user's query. It prioritizes speed and coverage over accuracy.
- Pre-ranking: The middle stage. It takes the output of the recall stage and uses a slightly more complex model to shrink the list to a more manageable size (e.g., 200 items).
- Ranking: The final stage. It uses a powerful and computationally expensive model to score and sort the remaining items to produce the final list shown to the user. It prioritizes precision and business goals (like clicks and purchases).
- Generative Retrieval (GR): A new paradigm that treats retrieval as a generation problem. Instead of matching a query to existing items, a model (often a Transformer) "generates" the identifiers of the most relevant items, token by token.
- Semantic ID (SID): In GR, every item in the database is assigned a unique sequence of discrete tokens, like a "name" or "barcode." This SID is not random; it's learned, so items with similar meanings have similar SIDs. The model's task is to generate these SIDs.
- Vector Quantization: A technique to compress a high-dimensional vector (like an item's embedding) into a compact, discrete representation. OneSearch uses:
- RQ-Kmeans (Residual Quantization with K-means): A hierarchical method that quantizes the vector in stages. It first finds a coarse representation and then quantizes the remaining "residual" error, creating a fine-grained, layered code.
- OPQ (Optimized Product Quantization): A method that splits a vector into parts, rotates them to be more independent, and quantizes each part separately. OneSearch uses it to capture unique item features missed by RQ-Kmeans.
- Encoder-Decoder Architecture: The backbone of models like BART and T5. The encoder reads and understands the input sequence (e.g., query and user history), creating a rich numerical representation. The decoder then uses this representation to generate the output sequence (the item SIDs).
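To make the Semantic ID idea above concrete, here is a toy illustration (not from the paper) of how hierarchically quantized items end up sharing code prefixes, and how a generated SID is mapped back to concrete items.

```python
# Toy illustration: items with similar meanings share SID prefixes, and a generated
# SID is looked up to recover items. The catalog and codes below are fabricated.
catalog = {
    (12, 7, 3): ["red summer dress A", "red summer dress B"],  # identical code -> near-duplicate items
    (12, 7, 9): ["floral summer dress"],                       # shares the first two levels (same category/style)
    (45, 2, 0): ["wireless earbuds"],                          # unrelated category -> different prefix
}

def items_for_sid(sid, index):
    """Return the catalog items whose Semantic ID matches a generated code exactly."""
    return index.get(tuple(sid), [])

# A generative retrieval model would emit the code token by token, e.g. 12 -> 7 -> 9.
generated_sid = [12, 7, 9]
print(items_for_sid(generated_sid, catalog))  # ['floral summer dress']
```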
- Previous Works:
- Intra-stage Optimizations: Earlier research focused on improving individual stages of the MCA, such as EBR for recall, DCN and DSSM for pre-ranking, and DIN and DeepFM for ranking. These methods did not address the fundamental flaws of the MCA structure itself.
- Early Generative Retrieval:
- Tiger: A pioneering GR model for sequential recommendation. It introduced the concept of generating Semantic IDs (SIDs).
- LC-REC: Adapted large language models for recommendation by integrating collaborative signals (i.e., user-item interaction patterns).
- OneRec: The first industrial GR model to unify recall and ranking for video recommendation, showing the real-world potential of end-to-end generation.
- OneSug: An end-to-end generative model for query suggestion in e-commerce, a precursor to OneSearch from the same research ecosystem.
- GR for Search:
- GenR-PO and GRAM: Early attempts to use GR for search, but they were mostly used to enhance existing MCA stages (like recall or pre-ranking) rather than replacing the entire pipeline.
- Differentiation: The paper argues that e-commerce search is harder for GR than recommendation or query suggestion. As shown in Figure 3, recommendation (OneRec) deals with a closed vocabulary of item IDs, while query suggestion (OneSug) deals with an open vocabulary of text. E-commerce search is a hybrid: the input (query) is open-vocabulary text, while the output (item) is a closed-vocabulary SID. This requires a model that can bridge the gap between natural language understanding and structured ID generation, which is a core focus of OneSearch.
4. Methodology (Core Technology & Implementation)
The OneSearch framework is a comprehensive system designed to replace the entire MCA pipeline. Its architecture is shown in Figure 4 and can be broken down into four main components.
Figure 4 (architecture diagram): The image shows the overall OneSearch framework, tracing the module flow from aligned representation, keyword enhancement, and RQ-OPQ encoding, through multi-view behavior sequence injection and the unified encoder-decoder structure, to the preference-aware reward system, including the key modules and the training schedule.
4.1 Keyword-enhanced Hierarchical Quantization Encoding (KHQE)
This module's goal is to create high-quality Semantic IDs (SIDs) for every item. A good SID must capture an item's meaning while being distinct enough to avoid confusion.
- Step 1: Aligned Collaborative and Semantic Representation:
- The model starts with a pre-trained text encoder (BGE). To make it suitable for e-commerce, it's fine-tuned on query-item interaction data from Kuaishou's search logs.
- This alignment is achieved by training the encoder with a multi-task loss function, ensuring that the resulting embeddings understand both content similarity and user behavior. The total loss takes the form:
$$\mathcal{L}_{total} = \mathcal{L}_{q2q} + \mathcal{L}_{i2i} + \lambda_1 \mathcal{L}_{q2i} + \lambda_2 \mathcal{L}_{margin} + \lambda_3 \mathcal{L}_{rel}$$
- Symbol Explanation:
- $\mathcal{L}_{q2q}$, $\mathcal{L}_{i2i}$: Contrastive losses that pull embeddings of similar queries/items (based on user clicks) closer together.
- $\mathcal{L}_{q2i}$: A contrastive loss that aligns query and item embeddings based on user interactions.
- $\mathcal{L}_{margin}$: A margin loss that learns the relative preference of different user behaviors (e.g., a purchase is stronger than a click).
- $\mathcal{L}_{rel}$: A relevance correction loss that uses an LLM to score difficult pairs, helping the model learn fine-grained relevance.
- $\lambda_1$, $\lambda_2$, $\lambda_3$: Hyperparameters to balance the different loss components.
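A minimal PyTorch-style sketch of how several of these objectives could be combined into one multi-task loss. The InfoNCE form, the margin value, and the loss weights are illustrative assumptions rather than the paper's exact formulation, and the LLM-based relevance-correction term is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.05):
    """In-batch contrastive loss: each anchor should match its own positive against all others."""
    logits = F.normalize(anchors, dim=-1) @ F.normalize(positives, dim=-1).T / temperature
    labels = torch.arange(len(anchors))
    return F.cross_entropy(logits, labels)

def behavior_margin_loss(q, strong_item, weak_item, margin=0.1):
    """Push the stronger behavior (e.g., purchase) closer to the query than the weaker one (e.g., click)."""
    s_strong = F.cosine_similarity(q, strong_item, dim=-1)
    s_weak = F.cosine_similarity(q, weak_item, dim=-1)
    return F.relu(margin - (s_strong - s_weak)).mean()

# Toy batch of query / item embeddings (normally produced by the fine-tuned text encoder).
B, D = 8, 64
q, q_pos, item, item_pos, i_bought, i_clicked = (torch.randn(B, D) for _ in range(6))

loss = (
    info_nce(q, q_pos)                                     # query-to-query contrast (e.g., rewrites leading to the same click)
    + info_nce(item, item_pos)                             # item-to-item contrast (e.g., items clicked under the same query)
    + 1.0 * info_nce(q, item)                              # query-item alignment from user interactions
    + 0.5 * behavior_margin_loss(q, i_bought, i_clicked)   # purchase > click preference
)
print(loss)
```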
- Step 2: Core Keyword Enhancement:
- E-commerce item titles are often noisy (e.g., "High-quality Summer Dress 2025 New Style Korean Fashion Free Shipping"). To focus on what matters, the system first identifies 18 core attributes (e.g., Brand, Style, Material) using Named Entity Recognition (NER). These attributes are listed in Table 1.
- Manual Transcription of Table 1: 18 structured attributes extracted with Named Entity Recognition on the Kuaishou e-commerce search platform: Entity, Modifier, Brand, Material, Style, Function, Location, Audience, Color, Marketing, Season, Pattern, Scene, Specifications, Price, Model, Anchor, Series.
- The embeddings of these keywords are then averaged with the query's and item's original embeddings. This forces the final representation to be more influenced by these core attributes. The final query ($\tilde{\mathbf{e}}_q$) and item ($\tilde{\mathbf{e}}_v$) embeddings are:
$$\tilde{\mathbf{e}}_q = \frac{1}{2}\Big(\mathbf{e}_q + \frac{1}{m}\sum_{i=1}^{m}\mathbf{e}_{k_i^q}\Big), \qquad \tilde{\mathbf{e}}_v = \frac{1}{2}\Big(\mathbf{e}_v + \frac{1}{n}\sum_{j=1}^{n}\mathbf{e}_{k_j^v}\Big)$$
- Symbol Explanation:
- $\mathbf{e}_q$, $\mathbf{e}_v$: Original embeddings for the query and item.
- $\mathbf{e}_{k_i^q}$, $\mathbf{e}_{k_j^v}$: Embeddings of the core keywords found in the query and item.
- $m$, $n$: Number of keywords found in the query and item, respectively.
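A small sketch of this keyword-averaging step under the mean formulation written above; the encoder call and attribute extraction are stubbed out with toy vectors, and the 50/50 blend follows the equation rather than any implementation detail from the paper.

```python
import numpy as np

def keyword_enhanced(original_emb, keyword_embs):
    """Blend the original text embedding with the mean of its core-keyword embeddings."""
    if len(keyword_embs) == 0:
        return original_emb                      # nothing extracted (e.g., a very short query)
    keyword_mean = np.mean(keyword_embs, axis=0)
    return 0.5 * (original_emb + keyword_mean)

rng = np.random.default_rng(0)
title_emb = rng.normal(size=64)                  # embedding of the full (noisy) item title
keyword_embs = rng.normal(size=(3, 64))          # embeddings of extracted attributes, e.g. Brand / Style / Material
item_emb = keyword_enhanced(title_emb, keyword_embs)
print(item_emb.shape)  # (64,)
```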
- Step 3: RQ-OPQ Hierarchical Quantization Tokenization:
- This step converts the final item embedding into a sequence of discrete tokens (the SID).
- RQ-Kmeans for Hierarchical Semantics: The embedding is first passed through a 3-layer RQ-Kmeans tokenizer. This creates a 3-token code (e.g., [token1, token2, token3]). The first layer captures broad categories, while subsequent layers capture finer details. The paper experiments with different codebook sizes (see Table 2) and finds that a larger first layer (4096) and balancing the distribution of codes in the last layer improve performance.
- OPQ for Unique Features: RQ-Kmeans is good at finding shared features but discards the final residual embedding, which contains an item's unique characteristics. To capture this, the residual is quantized using OPQ, generating two additional tokens (e.g., [token4, token5]).
- The final SID is a 5-token sequence: 3 tokens from RQ-Kmeans and 2 from OPQ (a minimal sketch of this two-stage tokenization follows Table 3). As shown in Table 3, this RQ-OPQ hybrid approach significantly outperforms RQ-VAE and standard RQ-Kmeans in offline recall and ranking metrics.
- Manual Transcription of Table 2: The codebook utilization rate (CUR) and independent coding rate (ICR) for various RQ-Kmeans configurations. The trailing + means balanced operation for all levels.

| Configurations | CUR L1 | CUR L1×L2 | CUR Total | ICR |
|---|---|---|---|---|
| 1024-1024-1024 | 100% | 54.27% | 1.72% | 36.67% |
| +keywords | 100% | 65.40% | 2.03% | 40.25% |
| 2048-1024-512 | 100% | 46.88% | 1.98% | 37.80% |
| +keywords | 100% | 57.16% | 2.51% | 40.76% |
| 4096-1024-256 | 99.90% | 39.21% | 2.27% | 36.98% |
| +keywords | 100% | 48.95% | 2.94% | 40.52% |
| +l3 balanced | 100% | 48.95% | 10.31% | 60.01% |
| 4096-1024-512 | 99.90% | 39.21% | 1.30% | 40.54% |
| +keywords | 100% | 48.95% | 1.64% | 43.32% |
| +l3 balanced | 100% | 48.95% | 7.03% | 68.08% |
| 4096-1024-512+ | 99.93% | 41.45% | 0.51% | 33.47% |
- Manual Transcription of Table 3: Performance comparisons of three tokenization schemas. Metrics are evaluated on the real click pairs.

| Method | CUR Total | ICR | Recall@10 | MRR@10 |
|---|---|---|---|---|
| OnlineMCA | - | - | 0.3440 | 0.1323 |
| RQ-VAE | 1.17% | 38.83% | 0.2171 | 0.0689 |
| RQ-Kmeans | 7.03% | 68.08% | 0.2844 | 0.1038 |
| RQ-OPQ | - | 91.91% | 0.3369 | 0.1194 |
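To make the two-stage tokenization concrete, below is a minimal sketch that produces a 3-level RQ-Kmeans code and two OPQ-style residual tokens. The scikit-learn KMeans calls, the PCA-based rotation, and the toy codebook sizes are illustrative assumptions; the production tokenizer is not described at this level of detail.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
item_embs = rng.normal(size=(10_000, 64)).astype(np.float32)  # stand-in for KHQE item embeddings

def rq_kmeans(embs, codebook_sizes=(4096, 1024, 512)):
    """3-level residual quantization: each level clusters the residual left by the previous one."""
    residual, codes, codebooks = embs.copy(), [], []
    for k in codebook_sizes:
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(residual)
        codes.append(km.labels_)                                # one token per item at this level
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]   # quantization error is passed to the next level
    return np.stack(codes, axis=1), residual, codebooks

def opq_residual(residual, n_subvectors=2, k=256):
    """OPQ-style step: rotate the final residual, split it, and quantize each part separately."""
    # Learn a rotation (here: the PCA directions) so the sub-vectors are less correlated.
    _, _, vt = np.linalg.svd(residual, full_matrices=False)
    rotated = residual @ vt.T
    codes = []
    for sub in np.split(rotated, n_subvectors, axis=1):
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(sub)
        codes.append(km.labels_)
    return np.stack(codes, axis=1)

rq_codes, final_residual, _ = rq_kmeans(item_embs, codebook_sizes=(64, 64, 64))  # small codebooks for the toy data
opq_codes = opq_residual(final_residual)
sids = np.concatenate([rq_codes, opq_codes], axis=1)   # 5-token Semantic ID per item
print(sids[0])                                         # e.g. [12  3 41 107 200]
```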
4.2 Multi-view Behavior Sequence (Mu-Seq) Injection
This component focuses on feeding user history into the model in multiple ways to capture different facets of their preferences.
- Behavior Sequence Constructed User IDs: Instead of random IDs, OneSearch creates a user-specific ID by taking a weighted average of the SIDs from the user's recent clicks (SID_short) and historical orders (SID_long). The weights decay over time, giving more importance to recent activity. This creates a personalized, behavior-driven user token.
- Explicit Short Behavior Sequence: The SIDs of the user's most recent clicked items and searched queries are directly included in the model's input prompt. This explicitly tells the model what the user was just interested in, which is a powerful signal for predicting their next action.
- Implicit Long Behavior Sequence: A user's full history (clicks, orders) can be thousands of items long, which is too much to fit in a prompt. To handle this, the long-term sequences are compressed. Each item in the sequence is represented by its RQ cluster centroid vectors (3 embeddings per item). These vectors are then aggregated for each behavior type (click, order) and fed into a Q-Former (a type of attention module) to produce a compact, fixed-size representation of the user's long-term profile.
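A minimal sketch of the time-decayed pooling idea behind the behavior-constructed user representation. The exponential decay schedule, the toy SID embeddings, and the 50/50 mix of short- and long-term signals are assumptions for illustration only.

```python
import numpy as np

def decayed_pool(sid_embs, decay=0.9):
    """Average SID embeddings of past behaviors (most recent first) with exponentially decaying weights."""
    sid_embs = np.asarray(sid_embs, dtype=np.float32)      # shape: (num_behaviors, dim)
    weights = decay ** np.arange(len(sid_embs))            # the newest behavior gets weight 1.0
    weights = weights / weights.sum()
    return weights @ sid_embs

rng = np.random.default_rng(0)
recent_click_embs = rng.normal(size=(20, 64))              # SID embeddings of recent clicks (short-term)
historical_order_embs = rng.normal(size=(200, 64))         # SID embeddings of past orders (long-term)

# Behavior-driven user representation: blend the short- and long-term pooled vectors.
user_vec = 0.5 * decayed_pool(recent_click_embs) + 0.5 * decayed_pool(historical_order_embs)
print(user_vec.shape)  # (64,)
```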
4.3 Unified Encoder-Decoder Architecture
- The core of OneSearch is a Transformer-based encoder-decoder model (like BART or mT5).
- Input: The encoder receives a concatenated sequence containing the behavior-constructed user ID, the current query text, the query's SID, the explicit short behavior sequence (SIDs), and the implicit long behavior sequence embedding. Special tokens like [SEP] are used to separate the different parts.
- Output: The decoder autoregressively generates a list of item SIDs. These SIDs are then decoded back into actual items to be displayed to the user. The generation process uses beam search to explore multiple candidate item lists (a sketch of this prompt-and-generate flow is given below).
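A minimal sketch of the prompt-and-generate flow, using Hugging Face BART as a stand-in for the production encoder-decoder. The SID token names, the literal [SEP] markers, and the toy codebook sizes are illustrative assumptions; an untrained checkpoint will of course not emit meaningful SIDs.

```python
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# SIDs are treated as ordinary vocabulary entries by adding them as extra tokens.
sid_tokens = [f"<sid_{level}_{code}>" for level in range(5) for code in range(8)]  # toy-sized codebooks
tokenizer.add_tokens(sid_tokens)
model.resize_token_embeddings(len(tokenizer))

# Encoder input: user-ID token, query text, query SID, and the short behavior sequence, separated by [SEP]-style markers.
prompt = (
    "<sid_0_3><sid_1_1>"                  # behavior-constructed user ID (toy)
    " [SEP] summer floral dress"          # current query text
    " [SEP] <sid_0_2><sid_1_5><sid_2_0>"  # query SID
    " [SEP] <sid_0_2><sid_1_4><sid_2_7><sid_3_1><sid_4_6>"  # most recently clicked item's SID
)
inputs = tokenizer(prompt, return_tensors="pt")

# Decoder output: beam search over SID sequences; each returned sequence maps back to one item.
generated = model.generate(**inputs, num_beams=8, num_return_sequences=8, max_new_tokens=6)
print(tokenizer.batch_decode(generated, skip_special_tokens=False)[:2])
```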
4.4 Preference Aware Reward System (PARS)
This is the training and optimization framework to make OneSearch generate not just relevant items, but the best items according to user preferences and business goals.
- Stage 1: Multi-stage Supervised Fine-tuning (SFT): The model is trained in three progressive stages:
- Semantic Content Alignment: Teaches the model the basic mapping between text (queries/titles) and SIDs.
- Co-occurrence Synchronization: Teaches the model which queries and items frequently appear together, learning collaborative patterns from massive interaction logs.
- User Personalization Modeling: The full input including user ID and behavior sequences is used. The model learns to generate items that a specific user is likely to interact with, given their history and current query. Sliding window augmentation is used on user sequences to create more training data and improve robustness.
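A tiny sketch of the sliding-window idea mentioned in the last bullet, turning one long behavior sequence into several (history, next item) training examples; the window length and stride are illustrative choices.

```python
def sliding_window_examples(behavior_sids, window=4, stride=1):
    """Turn one long behavior sequence into many (history -> next item) training pairs."""
    examples = []
    for start in range(0, len(behavior_sids) - window, stride):
        history = behavior_sids[start:start + window]
        target = behavior_sids[start + window]          # the item the user interacted with next
        examples.append((history, target))
    return examples

clicks = ["sid_a", "sid_b", "sid_c", "sid_d", "sid_e", "sid_f"]
for history, target in sliding_window_examples(clicks):
    print(history, "->", target)
# ['sid_a', 'sid_b', 'sid_c', 'sid_d'] -> sid_e
# ['sid_b', 'sid_c', 'sid_d', 'sid_e'] -> sid_f
```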
- Stage 2: Adaptive Reward System: After SFT, the model is further refined to learn fine-grained preferences.
- Adaptive-weighted Reward Signal: A reward score r(q, i) is calculated for each query-item pair. It combines a base reward based on user feedback (e.g., purchase > click > view) with calibrated CTR and CVR metrics to favor items that are both popular and convert well. The preference difference between a positive and a negative item, $\Delta r = r(q, i^+) - r(q, i^-)$, is used to weight the training loss, forcing the model to focus on harder examples.
- Reward Model Training: A separate three-tower model is trained to predict CTR, CVR (Conversion Rate), and CTCVR (Click-Through Conversion Rate). This model acts as a "judge" of quality. Crucially, it also includes a term for a pre-computed relevance score to ensure the generated items remain highly relevant to the query.
- Hybrid Ranking Framework: The final training step uses a DPO-style (Direct Preference Optimization) list-wise loss. The SFT model generates a list of items, and the reward model re-ranks it. The differences in ordering are used to create training pairs (preferred vs. not-preferred items), and the model is trained to increase the probability of generating the preferred list. The final loss combines this preference-alignment objective with the standard next-token prediction loss (NLL) from the SFT stage, creating a stable and effective hybrid training paradigm (a minimal sketch of this reward-weighted pairwise objective follows below).
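A minimal sketch, assuming a simple pairwise form, of how a reward gap could weight a DPO-style preference loss that is then mixed with the standard NLL term. The exact list-wise formulation, the beta value, and the mixing weight in the paper may differ.

```python
import torch
import torch.nn.functional as F

def hybrid_preference_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg,
                           reward_pos, reward_neg, nll_loss, beta=0.1, alpha=1.0):
    """Reward-gap-weighted DPO-style pairwise loss mixed with the SFT next-token (NLL) loss.

    logp_*     : summed log-probabilities of the preferred / rejected SID sequences under the policy.
    ref_logp_* : the same quantities under the frozen SFT reference model.
    reward_*   : reward-model scores for the two sequences; their gap scales the pair's importance.
    """
    # Standard DPO logits: how much more the policy prefers the winner than the reference does.
    dpo_logits = beta * ((logp_pos - ref_logp_pos) - (logp_neg - ref_logp_neg))
    pair_loss = -F.logsigmoid(dpo_logits)

    # Adaptive weighting: pairs with a larger preference gap get a larger weight.
    weight = torch.sigmoid(reward_pos - reward_neg).detach()
    preference_loss = (weight * pair_loss).mean()

    # Hybrid objective: preference alignment plus the usual generation loss for stability.
    return preference_loss + alpha * nll_loss

# Toy usage with fabricated numbers.
loss = hybrid_preference_loss(
    logp_pos=torch.tensor([-4.2]), logp_neg=torch.tensor([-5.0]),
    ref_logp_pos=torch.tensor([-4.5]), ref_logp_neg=torch.tensor([-4.8]),
    reward_pos=torch.tensor([0.9]), reward_neg=torch.tensor([0.3]),
    nll_loss=torch.tensor(2.1),
)
print(loss)
```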
5. Experimental Setup
- Datasets: The experiments were conducted on large-scale, real-world industry datasets from the Kuaishou e-commerce platform. The data consists of user search logs, including queries, clicked items, purchased items, and shown items.
- Evaluation Metrics:
- Online Business Metrics:
- Item CTR (Click-Through Rate): The percentage of displayed items that are clicked. Measures immediate user interest.
- PV CTR (Page View CTR): The percentage of search result pages where at least one item is clicked. Measures overall page effectiveness.
- Buyer and Order Volume: The number of unique buyers and total orders originating from search. Measures direct business impact.
- PV CVR (Page View Conversion Rate): The percentage of search result pages that lead to a purchase. Measures end-to-end conversion efficiency.
- Offline Retrieval Metrics:
- Recall@10: Out of all actual items a user clicked for a query, what fraction were present in the top 10 items generated by the model? Measures the model's ability to find relevant items.
- MRR@10 (Mean Reciprocal Rank): Measures ranking quality; a higher score is given if the first correct item is ranked higher. Its formula is $\text{MRR@10} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\text{rank}_i}$, where $|Q|$ is the number of queries and $\text{rank}_i$ is the position of the first relevant item for the i-th query (the term is 0 if no relevant item appears in the top 10). A small sketch computing both metrics appears at the end of this section.
- Efficiency Metrics:
- OPEX (Operational Expenditure): The total cost of running the system in production (servers, energy, etc.).
- MFU (Model FLOPs Utilization): The percentage of a GPU's theoretical peak floating-point operations per second (FLOPs) that are actually used. Higher MFU means better hardware efficiency.
- Tokenization Quality Metrics:
- CUR (Codebook Utilization Rate): The percentage of available codes in the tokenizer's vocabulary that are actually used to represent items. Higher is generally better, indicating richer representation.
- ICR (Independent Coding Rate): The ratio of unique SIDs to the total number of items. A high ICR means different items get different SIDs, which is crucial for distinguishing them.
- Baselines: The primary baseline for comparison is the Online MCA, the highly optimized, production-grade multi-stage cascading architecture that was running on Kuaishou's platform before OneSearch.
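A small, self-contained sketch of how Recall@10 and MRR@10 can be computed from generated item lists and ground-truth clicks; the function name and toy data are illustrative.

```python
def recall_and_mrr_at_k(generated, clicked, k=10):
    """generated: list of ranked item-ID lists (one per query); clicked: list of sets of clicked item IDs."""
    recalls, rrs = [], []
    for ranked, positives in zip(generated, clicked):
        top_k = ranked[:k]
        hits = [item for item in top_k if item in positives]
        recalls.append(len(hits) / max(len(positives), 1))
        # Reciprocal rank of the first relevant item; 0 if none appears in the top k.
        rr = 0.0
        for pos, item in enumerate(top_k, start=1):
            if item in positives:
                rr = 1.0 / pos
                break
        rrs.append(rr)
    n = len(generated)
    return sum(recalls) / n, sum(rrs) / n

# Toy example: two queries.
generated = [["A", "B", "C", "D"], ["X", "Y", "Z"]]
clicked = [{"B", "D"}, {"Q"}]
print(recall_and_mrr_at_k(generated, clicked, k=10))  # (0.5, 0.25)
```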
6. Results & Analysis
Figure 7 (efficiency comparison): The chart compares the traditional online multi-stage architecture (OnlineMCA) with OneSearch on MFU and OPEX. OneSearch raises MFU from 3.26% to 27.32% (an absolute gain of 24.06 percentage points) while cutting OPEX from 100% to 24.60%, a 75.40% reduction.
- Core Results: OneSearch demonstrated substantial improvements over the traditional Online MCA in live A/B tests.
- Business Metrics: Statistically significant gains were observed across the board: +1.67% item CTR, +2.40% buyers, and +3.22% order volume.
- Efficiency: The results are transformative. As shown in Figure 7, OneSearch reduced OPEX by 75.40% by replacing multiple complex models with a single unified one. It also dramatically improved hardware efficiency, boosting MFU from 3.26% to 27.32%. This means GPUs were utilized over 8 times more effectively, a huge win for a large-scale industrial system.
- End-to-End Power: A telling experiment was an MCA variant with only the recall and pre-ranking stages. This system saw a catastrophic 9.97% drop in item CTR and a 39.14% drop in orders. This proves that the ranking stage is critical in MCA and that OneSearch, by performing comparably to or better than the full MCA, successfully integrates the power of all three stages into one model.
- Ablations / Parameter Sensitivity:
- Tokenization Matters (Table 3): The proposed RQ-OPQ tokenization scheme is a key reason for OneSearch's success. It achieved a Recall@10 of 0.3369, nearly matching the full OnlineMCA's recall (0.3440) by itself and far surpassing simpler methods like RQ-VAE (0.2171) and standard RQ-Kmeans (0.2844). Its ICR of 91.91% confirms it generates highly unique SIDs for different items.
- Encoding Design (Table 2): The experiments on the RQ-Kmeans configuration show a methodical approach to optimization. Adding keyword enhancement (+keywords) and carefully balancing the codebook layers (+l3 balanced) progressively increased both CUR and ICR, leading to a better tokenizer. This highlights the importance of domain-specific enhancements for SID creation.
- Component Contribution: The paper reports incremental gains. The base OneSearch model achieved performance comparable to the online MCA. Introducing RQ-OPQ and the long behavior sequence led to a +1.45% item CTR improvement. Finally, adding the preference-aware reward model for reranking pushed the final gains to +1.67% item CTR and over 3% in order volume. This demonstrates that each proposed component adds significant value.
7. Conclusion & Reflections
- Conclusion Summary: The paper successfully presents OneSearch, a pioneering end-to-end generative framework for e-commerce search that has been deployed at industrial scale. By unifying the fragmented MCA pipeline, OneSearch not only overcomes inherent limitations like error propagation and objective collision but also delivers substantial improvements in both search effectiveness (higher CTR, sales) and operational efficiency (lower cost, better hardware utilization). The core innovations (KHQE for superior item encoding, Mu-Seq for deep user understanding, and PARS for preference-aligned generation) collectively demonstrate a powerful and practical path forward for the next generation of search systems.
- Limitations & Future Work: The provided text is truncated and does not include the authors' discussion of limitations or future work. Based on the content, potential limitations could include:
- Cold-start for new items: While the paper mentions handling cold-start queries/users, new items without any interaction data would still be challenging to represent and generate.
- Scalability of SID Generation: The process of generating SIDs for billions of items needs to be highly efficient, and regenerating them as item content or the encoder model changes could be a significant operational task.
- Inference Latency: While overall efficiency is improved, the autoregressive generation of a list of SIDs might have higher latency than a traditional MCA's parallel scoring, requiring careful engineering for real-time constraints.
- Personal Insights & Critique:
- Significance: OneSearch represents a major architectural shift. Moving from a multi-stage, multi-model pipeline to a single, unified generative model is a significant engineering and research achievement. The impressive efficiency gains (75% cost reduction) alone could justify this shift for many large tech companies.
- Transferability: While tailored for e-commerce, the core principles are broadly applicable. Other domains with large item catalogs and user interaction data (e.g., academic paper search, job search, legal document retrieval) could adopt a similar end-to-end generative framework. The key would be adapting the "keyword enhancement" and "reward system" to the specific domain's definition of relevance and user success.
- Open Questions: The heavy reliance on a powerful Reward Model is both a strength and a potential weakness. It allows for fine-grained preference tuning but also introduces another complex model to maintain. The "hybrid ranking framework" seems like a pragmatic solution to balance the generative model's raw output with a more explicit ranking signal, but the interplay between the DPO-style loss and the base NLL loss is a delicate balance that likely requires extensive tuning.