
IconShop: Text-Guided Vector Icon Synthesis with Autoregressive Transformers

Published: 04/28/2023
This analysis is AI-generated and may not be fully accurate. Please refer to the original paper.

TL;DR Summary

IconShop uses autoregressive transformers to tokenize SVG paths and text, enabling diverse, high-quality text-guided vector icon generation with editing and interpolation capabilities, outperforming existing methods.

Abstract

Scalable Vector Graphics (SVG) is a popular vector image format that offers good support for interactivity and animation. Despite its appealing characteristics, creating custom SVG content can be challenging for users due to the steep learning curve required to understand SVG grammars or get familiar with professional editing software. Recent advancements in text-to-image generation have inspired researchers to explore vector graphics synthesis using either image-based methods (i.e., text -> raster image -> vector graphics) combining text-to-image generation models with image vectorization, or language-based methods (i.e., text -> vector graphics script) through pretrained large language models. However, these methods still suffer from limitations in terms of generation quality, diversity, and flexibility. In this paper, we introduce IconShop, a text-guided vector icon synthesis method using autoregressive transformers. The key to success of our approach is to sequentialize and tokenize SVG paths (and textual descriptions as guidance) into a uniquely decodable token sequence. With that, we are able to fully exploit the sequence learning power of autoregressive transformers, while enabling both unconditional and text-conditioned icon synthesis. Through standard training to predict the next token on a large-scale vector icon dataset accompanied by textual descriptions, the proposed IconShop consistently exhibits better icon synthesis capability than existing image-based and language-based methods both quantitatively and qualitatively. Meanwhile, we observe a dramatic improvement in generation diversity, which is validated by the objective Uniqueness and Novelty measures. More importantly, we demonstrate the flexibility of IconShop with multiple novel icon synthesis tasks, including icon editing, icon interpolation, icon semantic combination, and icon design auto-suggestion.

In-depth Reading

English Analysis

1. Bibliographic Information

  • Title: IconShop: Text-Guided Vector Icon Synthesis with Autoregressive Transformers
  • Authors: Ronghuan Wu, Wanchao Su, Kede Ma, Jing Liao (all affiliated with City University of Hong Kong).
  • Journal/Conference: This paper is an arXiv preprint. While not yet peer-reviewed in a formal conference or journal at the time of this version's publication, arXiv is a standard platform for disseminating cutting-edge research in fields like computer science and machine learning.
  • Publication Year: 2023
  • Abstract: The paper addresses the difficulty of creating custom Scalable Vector Graphics (SVG) content. Existing methods either convert raster images to vector graphics (often with low quality) or use Large Language Models to generate SVG code (with limited diversity). The authors propose IconShop, a method that uses an autoregressive transformer to generate vector icons. The key idea is to represent an SVG icon as a uniquely decodable sequence of tokens based on its drawing commands. This allows the transformer to learn the structure of icons and generate new ones, either unconditionally or guided by text. The paper shows that IconShop outperforms existing methods in generation quality, diversity, and flexibility. It also demonstrates novel applications like icon editing, interpolation, and design auto-suggestion.
  • Original Source Link: https://arxiv.org/abs/2304.14400
  • PDF Link: http://arxiv.org/pdf/2304.14400v4
  • Publication Status: Preprint on arXiv.

2. Executive Summary

  • Background & Motivation (Why):

    • Core Problem: Creating Scalable Vector Graphics (SVG) is challenging for non-experts. It requires understanding complex grammars or mastering professional software like Adobe Illustrator.
    • Existing Gaps: Recent text-to-image models have inspired two main approaches for text-to-SVG synthesis, but both have significant flaws:
      1. Image-based methods (Text → Raster Image → Vector): These use models like Stable Diffusion to generate a raster image and then vectorize it. The results often have poor quality, with jagged lines and a style that doesn't match clean vector icons.
      2. Language-based methods (Text → SVG Script): These use Large Language Models (LLMs) like GPT-4 to directly write SVG code. The results tend to be overly simplistic, lack diversity, and fail to capture complex shapes.
    • Fresh Angle: The paper proposes treating vector graphics generation as a sequence modeling problem. Instead of dealing with pixels or code as natural language, IconShop represents an SVG icon as a flattened sequence of fundamental drawing commands and their coordinates. This representation is perfectly suited for powerful sequence-learning models like autoregressive transformers.
  • Main Contributions / Findings (What):

    1. A Novel SVG Representation: The paper introduces a method to sequentialize and tokenize SVG paths into a single, uniquely decodable token sequence. This is the core innovation that enables the use of autoregressive models.
    2. The IconShop Model: A text-guided autoregressive transformer architecture trained to predict the next token in the sequence. It can perform both unconditional (random) and text-conditioned icon generation.
    3. Superior Performance: IconShop is shown to be superior to existing image-based and language-based methods in quantitative metrics (FID, CLIP Score) and qualitative user studies. It achieves higher generation quality, diversity, and text-icon alignment.
    4. Enhanced Flexibility: The model's design supports several novel and practical applications beyond simple generation, including icon editing (filling in missing parts), icon interpolation (smoothly blending two icons), icon semantic combination (merging concepts like "key" + "cloud"), and icon design auto-suggestion (predicting the next drawing path for a user).

3. Prerequisite Knowledge & Related Work

  • Foundational Concepts:

    • Scalable Vector Graphics (SVG): A type of image format where graphics are defined by mathematical equations (lines, curves, shapes) rather than a grid of pixels (like JPEG or PNG). This means SVGs can be scaled to any size without losing quality. They are defined in an XML-based text format.
    • Autoregressive Models: Generative models that create sequences one element at a time, where the prediction for the current element depends on all previously generated elements. A classic example is predicting the next word in a sentence based on the words that came before it. This is expressed mathematically by the chain rule of probability: $p(S) = \prod_n p(S_n \mid S_{<n})$ (see the minimal sampling sketch after this list).
    • Transformers: A powerful neural network architecture, introduced in "Attention Is All You Need," that excels at handling sequential data. Its core mechanism, self-attention, allows it to weigh the importance of different elements in the input sequence when producing an output, effectively capturing long-range dependencies. The "decoder" part of a transformer is naturally autoregressive.
    • Large Language Models (LLMs): Massive transformer-based models (like GPT-4) trained on vast amounts of text data. They have shown remarkable abilities in understanding and generating human language and code.
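
To illustrate the factorization above, a minimal autoregressive sampling loop might look like the following sketch. The `model` call returning per-position logits of shape (batch, length, vocabulary) is an assumption for illustration, not a specific library API.

```python
import torch

def sample_sequence(model, prefix_tokens, eos_id, max_len=512, temperature=1.0):
    """Draw one token at a time from p(S_n | S_<n) until <EOS> or max_len."""
    tokens = list(prefix_tokens)
    while len(tokens) < max_len:
        # Score every vocabulary entry given all previously generated tokens.
        logits = model(torch.tensor(tokens).unsqueeze(0))[0, -1]  # (vocab_size,)
        probs = torch.softmax(logits / temperature, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1).item()
        tokens.append(next_token)
        if next_token == eos_id:  # stop once the end token is produced
            break
    return tokens
```
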
  • Previous Works:

    • Text-to-Image Generation: The paper situates itself within the broader context of generative AI. It acknowledges the three main waves of development: Generative Adversarial Networks (GANs), Transformers (like DALL·E), and Diffusion Models (like Stable Diffusion). These models primarily operate on raster images.
    • Vector Graphics Generation:
      • SketchRNN: A pioneering model that used a Recurrent Neural Network (RNN) to generate vector sketches as sequences of pen strokes. It was not text-guided.
      • DeepSVG: A transformer-based model that represented SVGs by their layered structure (commands within paths). While it could reconstruct icons well, it struggled with precise geometry and was not text-guided. IconShop builds on the idea of command-based representation but simplifies it into a single sequence.
    • Text-Guided Vector Graphics Generation (The Competitors):
      • Vectorization-based (Image-based): This approach uses a text-to-image model (e.g., Stable Diffusion) to get a raster image, then applies a vectorization algorithm (e.g., Potrace, LIVE) to convert pixels into paths. The paper criticizes this for producing messy, jagged paths and failing to capture the clean, geometric style of icons.
      • Optimization-based: Methods like CLIPDraw and VectorFusion start with random SVG paths and iteratively adjust them to maximize the similarity to a text prompt, as measured by a vision-language model like CLIP. This process is extremely slow (per-image optimization) and suffers from similar quality issues.
      • Direct Generation (Language-based): This involves prompting an LLM like GPT-4 to write SVG code. The paper notes that while this works to some extent, the results are simple, lack diversity, and often have poor recognizability.
  • Differentiation: IconShop distinguishes itself by not operating in the pixel domain and not treating SVG code as unstructured natural language. Instead, it creates a structured, domain-specific sequence representation of the graphics themselves. This allows the autoregressive transformer to directly learn the "language" of vector drawing, leading to higher-quality, more consistent, and more complex results than previous approaches.

4. Methodology (Core Technology & Implementation)

The core of IconShop is its method for representing, modeling, and generating SVG icons as sequences.


Image 4: This diagram illustrates the core pipeline of IconShop. An SVG icon is broken down into its constituent paths and commands. These commands (e.g., Move To, Line To, Bézier Curve) and their coordinates are then converted into a flat sequence of discrete tokens. This sequence, prefixed with a tokenized text prompt, is fed into an autoregressive transformer, which learns to predict the next token in the sequence.

  • Principles: The fundamental idea is that an SVG icon, which is a collection of geometric paths, can be deconstructed into a sequential stream of drawing commands. This stream, along with a text prompt, can be modeled as a single sequence by a transformer.

  • Steps & Procedures:

    1. SVG Representation and Tokenization (Section 3.2): To make SVG data suitable for a sequence model, the authors perform a multi-step conversion:

    • Simplification: All SVG icons are simplified to use only three basic path commands: Move To (M), Line To (L), and Cubic Bézier (C). More complex shapes like circles or rectangles are approximated by these commands. This standardizes the input and reduces complexity.

      Table 1 from the paper explains these commands:

      | Name | Symbol | Arguments | Explanation |
      | --- | --- | --- | --- |
      | Move To | M | x, y | Move the cursor to the specified point (x, y). |
      | Line To | L | x, y | Draw a line segment from the current point to the specified point (x, y). |
      | Cubic Bézier | C | x1, y1, x2, y2, x, y | Draw a curved path from the current point to (x, y) using two control points. |
    • Tokenization Pipeline:

      1. Flattening: An icon with multiple paths is flattened into a single sequence of commands. To preserve the path structure, a special <BOP> (Begin-of-Path) token is inserted before the first command of each path.
      2. Command & Argument Discretization: The command types (M, L, C) are mapped to unique integer tokens. The coordinate arguments (e.g., (x, y)) are discretized (e.g., to integers from 0-99 for a 100x100 canvas).
      3. Coordinate Unification: To reduce sequence length, each 2D coordinate pair (x, y) is mapped to a single 1D integer using the formula $x \times w + y$, where $w$ is the canvas width (100 in this case).
      4. End Token: A special <EOS> (End-of-SVG) token is appended to signify the end of the icon sequence. (A minimal tokenization sketch follows this list.)
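
To make the pipeline above concrete, here is a minimal tokenization sketch. The specific token ids, helper names, and path data structure are illustrative assumptions rather than the authors' implementation; only the <BOP>/<EOS> markers, the M/L/C commands, and the x × w + y coordinate unification follow the paper.

```python
# Hypothetical vocabulary layout: special/command tokens first, followed by
# 100 * 100 coordinate tokens, one id per unified 2-D position.
SPECIAL = {"<BOP>": 0, "<EOS>": 1, "M": 2, "L": 3, "C": 4}
COORD_OFFSET = len(SPECIAL)
CANVAS_W = 100

def coord_token(x, y, w=CANVAS_W):
    """Unify a discretized (x, y) pair into a single 1-D token id: x * w + y."""
    return COORD_OFFSET + int(x) * w + int(y)

def tokenize_icon(paths):
    """paths: list of paths; each path is a list of (command, points) tuples,
    e.g. ("M", [(10, 20)]) or ("C", [(x1, y1), (x2, y2), (x, y)])."""
    seq = []
    for path in paths:
        seq.append(SPECIAL["<BOP>"])      # mark the beginning of each path
        for cmd, points in path:
            seq.append(SPECIAL[cmd])      # command token (M, L, or C)
            for x, y in points:
                seq.append(coord_token(x, y))
    seq.append(SPECIAL["<EOS>"])          # end of the whole icon sequence
    return seq

# A path that moves to (10, 20), then draws a line to (30, 40):
print(tokenize_icon([[("M", [(10, 20)]), ("L", [(30, 40)])]]))
```
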

    2. Masking Scheme for Flexibility (Section 3.3): Standard autoregressive models can only generate from left to right. To enable tasks like editing (filling in a missing part), the model needs to understand bidirectional context. IconShop achieves this with a "causal" masking strategy during training, without changing the model architecture.

    • A random contiguous part of the icon sequence (a Span) is chosen.
    • The sequence [Left : Span : Right] is rearranged into [Left : <Mask> : Right : <Mask> : Span : <EOM>].
    • The model is trained on this rearranged sequence. The first <Mask> marks the position to be filled, and the second <Mask> indicates the beginning of the content to be generated. <EOM> marks the end of the masked content.
    • This trains the model to generate a Span conditioned on both Left and Right contexts, effectively enabling "filling-in-the-middle" (a minimal rearrangement sketch follows this list).
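
A minimal sketch of this rearrangement, assuming the icon is already a flat token list; the helper name and the random span choice are illustrative, and only the [Left : <Mask> : Right : <Mask> : Span : <EOM>] layout follows the paper.

```python
import random

MASK, EOM = "<Mask>", "<EOM>"

def rearrange_for_fill_in_the_middle(icon_tokens, rng=random):
    """[Left, Span, Right] -> [Left, <Mask>, Right, <Mask>, Span, <EOM>]."""
    n = len(icon_tokens)
    i = rng.randrange(n)
    j = rng.randrange(i, n)  # the span covers positions i..j (inclusive)
    left, span, right = icon_tokens[:i], icon_tokens[i:j + 1], icon_tokens[j + 1:]
    return left + [MASK] + right + [MASK] + span + [EOM]
```

At inference time, editing then amounts to feeding Left + <Mask> + Right + <Mask> and letting the model autoregressively generate the missing Span until it emits <EOM>.
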

    3. Model Architecture (Section 3.4): The model has three main components:

    • Text Embedding Module: Uses a frozen, pretrained BERT model to convert the input text prompt into a sequence of embeddings. This leverages the rich semantic knowledge of a large language model.
    • SVG Embedding Module: Converts the discrete SVG tokens into dense vectors. It uses a learnable embedding matrix for all token types (commands, coordinates, special tokens). Crucially, it adds separate learnable embeddings for the x and y coordinates to provide explicit spatial information.
    • Transformer Module: A 12-layer decoder-only transformer. It takes the concatenated text and SVG embeddings as input. Using causal (masked) self-attention, it learns the joint probability distribution of the entire sequence.
  • Mathematical Formulas & Key Details:

    • SVG Token Embedding: The embedding for the $i$-th SVG token is computed as: $v_i \gets W e_i + W^x e_i^x + W^y e_i^y$

      • $v_i$: The final embedding vector for the $i$-th token.
      • $e_i$: A one-hot vector representing the token (e.g., M, L, or a 1D coordinate value).
      • $W$: The main learnable embedding matrix.
      • $e_i^x, e_i^y$: One-hot vectors for the 2D coordinates $x$ and $y$ corresponding to the token.
      • $W^x, W^y$: Additional learnable embedding matrices that provide explicit positional information.
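
In a deep learning framework, the one-hot products above reduce to embedding-table lookups. The sketch below assumes a hidden size of 512 and a reserved "no coordinate" slot for command and special tokens; these choices are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SVGTokenEmbedding(nn.Module):
    """v_i = W e_i + W^x e_i^x + W^y e_i^y, implemented as three lookups."""

    def __init__(self, vocab_size, canvas_w=100, canvas_h=100, dim=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)     # W
        # Index canvas_w / canvas_h is reserved for tokens without coordinates.
        self.x = nn.Embedding(canvas_w + 1, dim)     # W^x
        self.y = nn.Embedding(canvas_h + 1, dim)     # W^y

    def forward(self, token_ids, x_ids, y_ids):
        # All three arguments are LongTensors of shape (batch, seq_len).
        return self.tok(token_ids) + self.x(x_ids) + self.y(y_ids)
```
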
    • Training Objective (Section 3.5): The model is trained to minimize a weighted sum of two cross-entropy losses: one for predicting the text tokens (language modeling) and one for predicting the icon tokens (icon generation): $\ell^{\text{total}} = \ell^{\text{text}} + \lambda \ell^{\text{icon}}$

      • $\ell^{\text{text}}$ and $\ell^{\text{icon}}$ are the standard cross-entropy losses for the text and icon parts of the sequence, respectively.
      • $\lambda$: A hyperparameter that balances the importance of reconstructing the icon relative to the text. The paper sets $\lambda = 7.0$, placing a strong emphasis on getting the icon right.
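
A minimal sketch of such a weighted next-token objective is shown below. The per-part averaging and padding convention are assumptions for illustration; only the structure $\ell^{\text{total}} = \ell^{\text{text}} + \lambda \ell^{\text{icon}}$ and $\lambda = 7.0$ come from the paper.

```python
import torch.nn.functional as F

def total_loss(logits, targets, is_text, lam=7.0, pad_id=-100):
    """l_total = l_text + lam * l_icon (next-token cross-entropy on both parts).

    logits:  (batch, seq_len, vocab) scores for the next token at each position
    targets: (batch, seq_len) ground-truth next tokens, pad_id where ignored
    is_text: (batch, seq_len) bool, True where the target belongs to the text prefix
    """
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1),
        ignore_index=pad_id, reduction="none")
    valid = (targets.reshape(-1) != pad_id).float()
    text_mask = is_text.reshape(-1).float() * valid
    icon_mask = (1.0 - is_text.reshape(-1).float()) * valid
    l_text = (per_token * text_mask).sum() / text_mask.sum().clamp(min=1)
    l_icon = (per_token * icon_mask).sum() / icon_mask.sum().clamp(min=1)
    return l_text + lam * l_icon
```
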

5. Experimental Setup

  • Datasets:

    • FIGR-8-SVG: A large dataset of 1.5 million monochromatic (black-and-white) vector icons. The authors pre-process it by removing an outer bounding box (as seen in Figure 3) and filtering for icons with sequence lengths under 512, resulting in a training set of 300,000 samples.

    • Text Augmentation: The dataset's original annotations are simple keywords (e.g., "cat/face"). To train the model on more natural language, the authors used ChatGPT to expand these keywords into full sentences (e.g., "An icon of a cat face"). The model was trained on a mix of keywords, generated sentences, and blank text (for unconditional generation).

      Image 5: This image shows samples from the FIGR-8-SVG dataset before and after the authors' preprocessing step of removing the black bounding box.

  • Evaluation Metrics:

    1. Fréchet Inception Distance (FID):
      • Conceptual Definition: A metric to measure the quality and realism of generated images. It calculates the distance between the distribution of features from generated images and the distribution of features from real images. A lower FID score means the generated images are more similar to the real ones in terms of high-level features. The authors use features from the CLIP image encoder.
      • Mathematical Formula: $\mathrm{FID}(x, g) = \|\mu_x - \mu_g\|_2^2 + \mathrm{Tr}\big(\Sigma_x + \Sigma_g - 2(\Sigma_x \Sigma_g)^{1/2}\big)$
      • Symbol Explanation:
        • $\mu_x, \mu_g$: The means of the feature vectors for real ($x$) and generated ($g$) images.
        • $\Sigma_x, \Sigma_g$: The covariance matrices of the feature vectors for real and generated images.
        • $\mathrm{Tr}(\cdot)$: The trace of a matrix (sum of diagonal elements).
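
For reference, a standard FID computation over precomputed feature matrices might look like the following sketch (not the authors' evaluation code; it assumes features have already been extracted with the CLIP image encoder).

```python
import numpy as np
from scipy import linalg

def fid(real_feats, gen_feats):
    """Fréchet distance between Gaussians fitted to (N, D) feature arrays."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_g = np.cov(gen_feats, rowvar=False)
    covmean = linalg.sqrtm(sigma_r @ sigma_g)   # matrix square root of the product
    if np.iscomplexobj(covmean):                # drop tiny imaginary numerical noise
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```
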
    2. CLIP Score:
      • Conceptual Definition: Measures how well a generated image aligns with its text prompt. It computes the cosine similarity between the CLIP embedding of the text prompt and the CLIP embedding of the generated image. A higher score indicates better semantic alignment.
      • Mathematical Formula: $\text{CLIP Score} = 100 \times \cos(E_T, E_I)$
      • Symbol Explanation:
        • $E_T$: The feature vector (embedding) of the text prompt from CLIP's text encoder.
        • $E_I$: The feature vector (embedding) of the rendered image from CLIP's image encoder.
        • $\cos(\cdot, \cdot)$: The cosine similarity function.
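
Given precomputed CLIP embeddings, the score is just a scaled cosine similarity; a minimal sketch (the embedding extraction itself is assumed to happen elsewhere):

```python
import numpy as np

def clip_score(text_embedding, image_embedding):
    """CLIP Score = 100 * cosine similarity between text and image embeddings."""
    t = text_embedding / np.linalg.norm(text_embedding)
    v = image_embedding / np.linalg.norm(image_embedding)
    return 100.0 * float(t @ v)
```
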
    3. Uniqueness & Novelty:
      • Conceptual Definition: These metrics measure generation diversity.
        • Uniqueness: The percentage of generated icons that are unique within the generated batch. A high score means the model isn't just producing the same few icons over and over.
        • Novelty: The percentage of generated icons that are not found in the training set. A high score means the model is creating genuinely new designs, not just memorizing training data.
      • The paper determines if two icons are "identical" by checking if the cosine similarity of their CLIP features is above a high threshold of 0.98.
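
A plausible implementation of these two measures from CLIP features is sketched below; the exact matching protocol (e.g., how self-matches are excluded) is an assumption, while the 0.98 cosine-similarity threshold follows the paper.

```python
import numpy as np

def uniqueness_and_novelty(gen_feats, train_feats, thresh=0.98):
    """Both inputs are (N, D) CLIP feature arrays; returns two percentages."""
    g = gen_feats / np.linalg.norm(gen_feats, axis=1, keepdims=True)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    sim_gg = g @ g.T                       # generated vs. generated similarities
    np.fill_diagonal(sim_gg, 0.0)          # ignore each icon's match with itself
    unique = (sim_gg.max(axis=1) < thresh).mean()    # no near-duplicate in the batch
    novel = ((g @ t.T).max(axis=1) < thresh).mean()  # no near-duplicate in training data
    return 100.0 * unique, 100.0 * novel
```
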
  • Baselines:

    • DeepSVG+GAN: A re-implementation of the DeepSVG idea, where a GAN is trained on DeepSVG's latent space to enable text-guided generation.
    • BERT: A non-autoregressive transformer trained to generate the entire token sequence in parallel, representing an alternative design choice.
    • Stable Diffusion + LIVE: Represents the state-of-the-art image-based approach.
    • GPT-4: Represents the state-of-the-art language-based approach.

6. Results & Analysis

  • Core Results: IconShop consistently and significantly outperforms all baselines across all metrics and tasks.

  • Ablations / Parameter Sensitivity (Section 4.2): The paper conducted crucial ablation studies to justify its architectural choices.

    • Seq2seq vs. Layered Modeling (IconShop vs. DeepSVG+GAN):

      • Qualitative: As shown in Figure 5, IconShop generates clean, recognizable icons with clear geometric structures. DeepSVG+GAN produces distorted and messy results, likely because its architecture, which averages features across commands and paths, loses fine-grained geometric detail.

      • Quantitative: The following table, transcribed from Table 2 in the paper, shows IconShop's clear superiority. It achieves a much lower (better) FID score. While DeepSVG+GAN has high Uniqueness/Novelty, the authors argue this is "fake diversity" stemming from random visual distortions, not meaningful variations. IconShop achieves the best CLIP score, indicating better text alignment.

        Table 2a: Random Generation (Manual Transcription)

        | Method | FID ↓ | Uniqueness % ↑ | Novelty % ↑ |
        | --- | --- | --- | --- |
        | DeepSVG+GAN | 11.95 | 98.72 | 99.22 |
        | BERT | 43.61 | 2.06 | 19.90 |
        | IconShop | 6.08 | 78.77 | 85.10 |

        Table 2b: Text-Guided Generation (Manual Transcription)

        | Method | FID ↓ | Uniqueness % ↑ | Novelty % ↑ | CLIP Score ↑ |
        | --- | --- | --- | --- | --- |
        | DeepSVG+GAN | 12.01 | 97.59 | 99.01 | 21.78 |
        | BERT | 35.10 | 14.41 | 50.30 | 22.03 |
        | IconShop | 4.65 | 68.29 | 68.60 | 25.74 |
    • Autoregressive vs. Non-Autoregressive (IconShop vs. BERT):

      • Qualitative: Figure 5 shows that the BERT-based model fails spectacularly, producing only very simple, meaningless shapes.

      • Quantitative: Table 2 confirms this failure, with BERT having the worst FID, Uniqueness, and Novelty scores.

      • Analysis: The authors explain that non-autoregressive models predict all tokens simultaneously. This makes it difficult to determine the correct sequence length, as the model may predict multiple <EOS> tokens or place one too early, resulting in truncated and malformed outputs. This confirms that autoregressive modeling is better suited for generating variable-length sequences like SVG commands.

        Image 7: This figure visually demonstrates the failure modes of the baseline models. DeepSVG+GAN produces distorted icons, while BERT only generates primitive shapes. IconShop's results are clean and semantically meaningful.

  • Comparison to State-of-the-Art (Section 4.3 & 4.4):

    Image 8: This figure compares IconShop to other SOTA methods for text-guided generation. The results from Stable Diffusion + LIVE are often blurry and not icon-like. GPT-4 produces simplistic icons. IconShop generates icons that are both complex and high-quality, closely matching the prompts.

    • Qualitative (Figure 6): Stable Diffusion + LIVE fails to produce the clean "line art" style, and the vectorization step introduces artifacts. GPT-4 generates recognizable but overly simple icons made of basic primitives. IconShop produces results with superior quality, complexity, and text alignment.

    • Subjective User Study: A formal user study confirmed these observations.

      Table 3: Subjective User Study Results (% of times selected as high quality/best alignment) (Manual Transcription)

      | Task | DeepSVG+GAN | Stable Diffusion+LIVE | GPT-4 | IconShop | Dataset |
      | --- | --- | --- | --- | --- | --- |
      | Quality (random) | 5.09 | 15.95 | 2.95 | 82.11 | 83.71 |
      | Quality (text) | 1.90 | 49.49 | 2.15 | 96.33 | - |
      | Alignment (text) | 29.24 | 72.78 | 1.77 | 96.20 | - |

      The results are striking. For random generation, users found IconShop's outputs almost as high-quality as real icons from the dataset. For text-guided tasks, IconShop was overwhelmingly preferred for both visual quality and text alignment. ANOVA tests confirmed these results are statistically significant ($p < 0.001$).

7. Applications

IconShop's flexible design enables several novel applications that showcase its practical utility.

  1. Icon Editing (Figure 7): Thanks to the causal masking scheme, users can mask a region of an icon and have the model fill it in, either randomly or guided by text. This allows for powerful and intuitive editing.

    Image 9: Examples of icon editing. A portion of an icon is removed (masked), and IconShop generates plausible content to fill the gap, either randomly or based on a new text prompt.

  2. Icon Interpolation (Figure 8): The model can generate smooth transitions between two different icons. This is done by getting the latent representations (the output of the transformer before the final prediction layer) for two icons and linearly interpolating between them, then generating a new icon from the interpolated latent vector.

    Image 10: This shows smooth interpolation between different icons, like a snowflake and a spider, demonstrating a well-behaved and continuous latent space.
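
The latent-space interpolation described above could be sketched as follows. `encode_latent` and `decode_from_latent` are hypothetical helpers standing in for (a) running the transformer up to its final hidden layer and (b) generating a token sequence from a given latent state; the paper does not expose such an API, so this is purely conceptual.

```python
import torch

def interpolate_icons(model, tokens_a, tokens_b, alpha):
    """Blend the latent representations of two icons and decode the mixture."""
    with torch.no_grad():
        z_a = model.encode_latent(tokens_a)    # hypothetical: latent before the head
        z_b = model.encode_latent(tokens_b)
        z = (1.0 - alpha) * z_a + alpha * z_b  # linear interpolation in latent space
        return model.decode_from_latent(z)     # hypothetical: tokens from latent

# Sweeping alpha from 0 to 1 yields icons morphing from icon A to icon B.
```
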

  3. Icon Semantic Combination (Figure 9): IconShop can creatively merge concepts. For example, by combining the representation for "key" with the text prompt "cloud," it can generate an icon of a key made of clouds.

    Image 11: Examples of semantic combination. The model creatively merges the visual concept of a base icon with a new textual concept.

  4. Icon Design Auto-Suggestion (Figure 10): Because the model is autoregressive, it can be used interactively. After a user draws a few paths, the model can predict and suggest the most likely next paths to complete the icon, acting as an intelligent design assistant.

    Image 2: This illustrates the auto-suggestion feature. The user draws paths (blue), and IconShop suggests the next path (green) to complete the design, boosting productivity.

8. Conclusion & Reflections

  • Conclusion Summary: The paper successfully introduces IconShop, a novel autoregressive transformer-based method for text-guided vector icon synthesis. By treating SVG generation as a sequence modeling problem with a carefully designed tokenization scheme, it significantly surpasses prior image-based and language-based methods in quality, diversity, and text alignment. Furthermore, its architecture enables a range of powerful applications like editing and interpolation, making it a flexible and practical tool.

  • Limitations & Future Work: The authors acknowledge some limitations:

    • Generation Failures: The model can sometimes fail, producing results that don't match the text prompt or combine concepts in a suboptimal way (see Figure 11).

    • Dataset Bias: The training text was partly generated by ChatGPT, which may introduce biases or limit the diversity of language the model understands.

    • Simplicity: The current model is limited to monochromatic icons and a small subset of SVG commands. Future work could extend it to handle color, gradients, and a richer set of SVG features.

      Image 3: This figure from the paper illustrates limitations, such as a mismatch between the text "headphone" and the generated icon (left), and a less-than-ideal semantic combination of "key" and "cloud" (right).

  • Personal Insights & Critique:

    • The paper's central insight—that vector graphics can be effectively modeled as a 1D sequence of commands—is powerful and elegant. It sidesteps the difficulties of pixel-based generation and the unstructured nature of raw code. This approach seems highly transferable to other forms of vector graphics, such as generating fonts, technical diagrams, or even simple animations.
    • The use of a "causal" masking strategy to unify autoregressive and non-autoregressive capabilities within a single model is a clever engineering choice that unlocks significant flexibility (e.g., editing) without added architectural complexity.
    • The biggest weakness is its reliance on a simplified SVG format. While effective for icons, this approach would need significant extensions to handle the full complexity of general-purpose SVGs used in web design or data visualization, which include layers, groups, styles (CSS), filters, and interactivity.
    • The work represents a significant step forward in generative modeling for vector graphics, moving beyond simple reconstruction and towards a truly creative and controllable tool. It bridges the gap between the generative power of transformers and the precise, scalable nature of vector art.
