
Playground v2.5: Three Insights towards Enhancing Aesthetic Quality in Text-to-Image Generation

Published: 02/27/2024
This analysis is AI-generated and may not be fully accurate. Please refer to the original paper.

TL;DR Summary

Playground v2.5 enhances aesthetics in text-to-image generation via improved noise scheduling, balanced aspect ratio datasets, and human preference alignment, outperforming leading open-source and commercial models.

Abstract

In this work, we share three insights for achieving state-of-the-art aesthetic quality in text-to-image generative models. We focus on three critical aspects for model improvement: enhancing color and contrast, improving generation across multiple aspect ratios, and improving human-centric fine details. First, we delve into the significance of the noise schedule in training a diffusion model, demonstrating its profound impact on realism and visual fidelity. Second, we address the challenge of accommodating various aspect ratios in image generation, emphasizing the importance of preparing a balanced bucketed dataset. Lastly, we investigate the crucial role of aligning model outputs with human preferences, ensuring that generated images resonate with human perceptual expectations. Through extensive analysis and experiments, Playground v2.5 demonstrates state-of-the-art performance in terms of aesthetic quality under various conditions and aspect ratios, outperforming both widely-used open-source models like SDXL and Playground v2, and closed-source commercial systems such as DALL·E 3 and Midjourney v5.2. Our model is open-source, and we hope the development of Playground v2.5 provides valuable guidelines for researchers aiming to elevate the aesthetic quality of diffusion-based image generation models.


In-depth Reading


1. Bibliographic Information

  • Title: Playground v2.5: Three Insights towards Enhancing Aesthetic Quality in Text-to-Image Generation
  • Authors: Daiqing Li, Aleks Kamko, Ehsan Akhgari, Ali Sabet, Linmiao Xu, Suhail Doshi
  • Affiliations: All authors are affiliated with Playground Research.
  • Journal/Conference: The paper is available on arXiv, which is a preprint server. This means it has not yet undergone formal peer review for a conference or journal publication.
  • Publication Year: 2024
  • Abstract: The paper presents three key insights that led to the development of Playground v2.5, a text-to-image model with state-of-the-art aesthetic quality. These insights focus on: (1) improving color and contrast by refining the diffusion model's noise schedule; (2) enhancing image generation across various aspect ratios through a balanced dataset bucketing strategy; and (3) improving fine details, particularly in human subjects, by aligning the model with human preferences. The authors claim that Playground v2.5 surpasses both open-source models like SDXL and its predecessor Playground v2, as well as closed-source commercial systems like DALL-E 3 and Midjourney v5.2, in user preference studies. The model and a new evaluation benchmark are open-sourced to benefit the research community.
  • Original Source Link:
    • arXiv Page: https://arxiv.org/abs/2402.17245

    • PDF Link: http://arxiv.org/pdf/2402.17245v1


2. Executive Summary

  • Background & Motivation (Why):

    • Core Problem: Despite rapid progress, modern text-to-image diffusion models still exhibit noticeable flaws that detract from their aesthetic quality. Specifically, many models, including the widely-used Stable Diffusion XL (SDXL), suffer from muted colors, struggle to generate high-quality images in non-square aspect ratios, and often produce distorted or unnatural human features (e.g., hands, faces).
    • Importance & Gaps: As text-to-image models move from research novelties to production-grade tools, user expectations for aesthetic quality, realism, and reliability have risen. Prior models often prioritized prompt-following or architectural efficiency over pure visual appeal. The authors identified a gap in systematically addressing a trio of common user complaints: poor contrast, inconsistent quality across different image shapes, and uncanny human depictions.
    • Fresh Angle: Rather than proposing a new model architecture, this work focuses on refining the training recipe. It provides a practical, engineering-driven approach, demonstrating that significant improvements can be achieved by meticulously addressing specific flaws in data preparation, training dynamics (noise schedule), and fine-tuning (human alignment).
  • Main Contributions / Findings (What):

    1. Three Practical Insights for Aesthetic Improvement: The paper distills its methodology into three actionable insights:

      • Insight 1 (Color/Contrast): Adopting the EDM framework from Karras et al.'s "Elucidating the Design Space of Diffusion-Based Generative Models", with its principled noise schedule, fundamentally solves the problem of washed-out colors and poor contrast that plagued earlier models.
      • Insight 2 (Aspect Ratios): Creating a carefully balanced dataset with images grouped into various aspect ratio "buckets" allows the model to generate high-quality, well-composed images regardless of their shape, avoiding the bias towards square images.
      • Insight 3 (Human Details): An iterative, human-in-the-loop fine-tuning process, similar to Supervised Fine-Tuning (SFT), significantly improves the rendering of human faces, hands, and overall anatomy by aligning the model with rated high-quality examples.
    2. State-of-the-Art Performance: Through extensive user studies, Playground v2.5 is shown to be preferred over leading open-source models (SDXL, Playground v2) and, notably, even top-tier closed-source models (Midjourney v5.2, DALL-E 3) in terms of aesthetic quality.

    3. A New Public Benchmark: The authors introduce and release MJHQ-30K, a high-quality dataset of 30,000 images across 10 categories, designed for automatic evaluation of aesthetic quality using the Fréchet Inception Distance (FID) metric.


3. Prerequisite Knowledge & Related Work

  • Foundational Concepts:

    • Diffusion Models: These are a class of generative models that learn to create data (like images) by reversing a gradual noising process. The process starts with a real image, adds Gaussian noise over many steps until it becomes pure noise, and then trains a neural network to denoise it step-by-step. To generate a new image, the model starts from random noise and applies the learned denoising process.
    • Latent Diffusion Models (LDM): A highly efficient variant of diffusion models, popularized by Stable Diffusion. Instead of running the diffusion process on high-resolution pixel images, LDMs first compress the image into a smaller, lower-dimensional "latent space" using an autoencoder. The diffusion process happens in this latent space, which is computationally much cheaper. The final latent representation is then decoded back into a full-resolution image. Playground v2.5 is based on this architecture.
    • Noise Schedule: This defines how much noise is added at each step of the forward diffusion process. The paper argues that the choice of noise schedule critically impacts the final image quality. A flawed schedule, like that in early Stable Diffusion models, can prevent the model from learning to generate pure black or white, leading to muted colors.
    • Signal-to-Noise Ratio (SNR): At any step in the diffusion process, the SNR is the ratio of the power of the original image "signal" to the power of the added noise. A "Zero Terminal SNR" means that at the final noising step, the signal is completely gone, leaving only pure noise. This is a desirable property for better color reproduction (see the code sketch at the end of this section).
    • Aspect Ratio Bucketing: A training strategy for handling images of various shapes. Instead of resizing all images to be square (which can distort them), the dataset is grouped into "buckets" of similar aspect ratios (e.g., portrait, landscape, square). During training, a batch of images is sampled from a single bucket, allowing the model to learn composition for different shapes.
    • Supervised Fine-Tuning (SFT): A technique primarily used in Large Language Models (LLMs). After a model is pre-trained on a massive general dataset, it is further trained (fine-tuned) on a smaller, curated dataset of high-quality examples. This aligns the model's outputs with a specific desired behavior, such as following instructions or, in this paper's case, producing aesthetically pleasing images.
    • Human-in-the-loop: A process where human feedback is integrated into the model training cycle. In this paper, user ratings on generated images help curate the high-quality dataset used for alignment, creating a feedback loop that steers the model toward human preferences.
  • Previous Works:

    • SDXL (Stable Diffusion XL): A major open-source model that served as a baseline. It improved upon earlier Stable Diffusion versions by using a larger UNet architecture and introducing aspect ratio bucketing. However, the paper points out its limitations in color contrast and unbalanced bucketing.
    • Playground v2: The authors' previous model. It achieved better user preference than SDXL but still relied on a "hack" called offset noise to partially fix the color issue and had room for improvement.
    • EDM (Elucidating the Design Space of Diffusion-based Generative Models): A seminal paper by Karras et al. (2022) that provided a principled, first-principles-based framework for designing diffusion models. Playground v2.5 adopts this framework to fix the noise schedule and color issues.
    • Emu: A model by Dai et al. (2023) that introduced an SFT-like alignment strategy for image models, which inspired the human preference alignment method in this paper.
    • Midjourney & DALL-E 3: Leading closed-source, commercial text-to-image models known for their high aesthetic quality. Outperforming them is a significant achievement and a key claim of the paper.
  • Differentiation: Playground v2.5's innovation lies not in creating a new architecture but in the synthesis and meticulous execution of existing advanced techniques.

    • vs. SDXL/Playground v2: It moves away from the traditional DDPM noise schedule and offset noise in favor of the more robust EDM framework. It also refines SDXL's bucketing strategy with a more balanced data pipeline.

    • vs. Generic Fine-Tuning: Its alignment process is not just fine-tuning on a static dataset but an iterative, human-in-the-loop system where user feedback directly shapes the data used to improve the model. This creates a tight loop between model output and human perception.
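To ground the noise-schedule and SNR definitions above, here is a minimal numpy sketch of the standard DDPM-style forward process (our illustration, not code from the paper). It shows that a conventional linear-beta schedule leaves a small but nonzero terminal SNR, which is precisely the flaw Insight 1 targets:

```python
import numpy as np

# Illustrative DDPM-style linear-beta schedule; typical values, not the paper's.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal retention, \bar{alpha}_t

def forward_noise(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

# SNR_t = abar_t / (1 - abar_t). If SNR_T > 0, the "fully noised" training input
# still leaks signal, which biases the model away from extremes like pure black.
snr = alphas_bar / (1.0 - alphas_bar)
print(f"terminal SNR: {snr[-1]:.2e}")  # roughly 4e-5 here: small, but not zero
```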


4. Methodology (Core Technology & Implementation)

The core of Playground v2.5 is built on three targeted improvements to the training pipeline of a latent diffusion model.

4.1 Insight 1: Enhanced Color and Contrast

  • Problem: Previous models like SDXL struggle to generate images with deep blacks, pure whites, or vibrant colors. This is attributed to a flawed noise schedule where the signal-to-noise ratio (SNR) at the final timestep is too high, meaning the "noisiest" image still contains traces of the original signal. This biases the model away from extreme color values.

  • Solution: The authors abandoned the standard DDPM noise schedule and the offset noise workaround used in Playground v2. Instead, they adopted the EDM framework from Karras et al. [14].

  • Key Advantages of EDM:

    1. Principled Noise Schedule: The EDM framework provides a theoretically sound design for the noise schedule, which naturally results in a near-zero terminal SNR. This ensures the model learns from pure Gaussian noise at the end of the diffusion process, allowing it to generate the full spectrum of colors, including pure blacks and whites, without any ad-hoc fixes.
    2. Better Preconditioning: EDM also defines an optimal preconditioning for the U-Net denoiser, leading to faster convergence and better overall image quality.
  • Implementation Detail: Following Hoogeboom et al. [13], the noise schedule was further skewed toward higher noise levels when training on high-resolution images; at higher resolutions a given noise level destroys proportionally less of the image's global structure, so extra noise is needed for the model to learn coherent composition.

    Figure 2(a): paired botanical illustrations; the top panel shows richly detailed white floral patterns against a black background with strong contrast, while the bottom panel is a solid black block.

    Figure 3: two high-quality close-up frog photographs with detailed textures and vivid colors, illustrating the improved color and contrast.

As seen in Figure 2(a), SDXL fails to generate a solid black or white background, producing grayish outputs. In contrast, Playground v2.5 faithfully generates pure-colored backgrounds. Figure 3 further shows how v2.5 produces more vibrant colors and contrast compared to its predecessor, v2.
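For readers who want the mechanics, below is a minimal numpy sketch of the EDM noise-level grid and preconditioning from Karras et al. [14]. The constants are EDM's published defaults (sigma_min = 0.002, sigma_max = 80, rho = 7, sigma_data = 0.5); the paper does not disclose Playground v2.5's exact settings, so treat this as a reference sketch of the framework, not of the model:

```python
import numpy as np

# EDM defaults from Karras et al. [14]; Playground v2.5's settings are not public.
sigma_min, sigma_max, rho, sigma_data = 0.002, 80.0, 7.0, 0.5

def edm_sigma_grid(n_steps=50):
    """Sampling noise levels: densely spaced near sigma_min, sparse near sigma_max."""
    i = np.arange(n_steps)
    return (sigma_max ** (1 / rho)
            + i / (n_steps - 1) * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho

def edm_preconditioning(sigma):
    """Scalings around the raw U-Net F_theta, so that
    D(x; sigma) = c_skip * x + c_out * F_theta(c_in * x, c_noise)."""
    c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
    c_out = sigma * sigma_data / np.sqrt(sigma**2 + sigma_data**2)
    c_in = 1.0 / np.sqrt(sigma**2 + sigma_data**2)
    c_noise = 0.25 * np.log(sigma)
    return c_skip, c_out, c_in, c_noise

sigmas = edm_sigma_grid()
# At sigma_max the signal is negligible relative to the noise -- effectively zero
# terminal SNR -- so generation truly starts from pure Gaussian noise.
print(f"terminal SNR ~ (sigma_data / sigma_max)^2 = {(sigma_data / sigmas[0])**2:.1e}")
```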

4.2 Insight 2: Generation Across Multiple Aspect Ratios

  • Problem: Models trained predominantly on square images perform poorly when asked to generate images in other aspect ratios (e.g., portrait 9:16 or landscape 16:9). They may produce strange compositions, duplicate subjects, or simply generate low-quality images. While SDXL used aspect ratio bucketing, its training data was heavily skewed towards square images, creating a bias.

  • Solution: Playground v2.5 implements a similar bucketing strategy but with a crucial refinement: a carefully balanced data pipeline.

  • Implementation Detail: The authors ensured that during training, batches were sampled from different aspect ratio buckets in a more uniform manner. This prevents the model from becoming biased towards any single aspect ratio (like 1:1 square) and avoids "catastrophic forgetting" of how to handle less common ratios. This balanced exposure forces the model to learn generalizable composition rules that apply across various image dimensions.

    Figure 4: a six-panel grid mixing a photorealistic octopus in an interior scene, pixel-art and painterly forest-fire scenes, and two mechanical-style motorcycle illustrations, showing diverse styles and detail.

    Figure 5: a multi-panel composition including a realistic underwater scene, black-and-white and color portraits with intricate rose tattoos, and an ornate green-gemstone egg artwork.

Figures 4 and 5 compare Playground v2.5 with SDXL on portrait (3:4) and landscape (4:3) aspect ratios. SDXL's outputs sometimes contain compositional errors or fail to follow the prompt, whereas Playground v2.5 generates coherent, high-quality images that correctly utilize the given aspect ratio.
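The paper describes the balanced pipeline only at a high level, so the sketch below shows one plausible reading: each image is assigned to the nearest aspect-ratio bucket, and training batches draw buckets uniformly rather than in proportion to bucket size, preventing square images from dominating. The bucket resolutions here are hypothetical, not the paper's:

```python
import math
import random
from collections import defaultdict

# Hypothetical bucket resolutions; the paper does not publish its exact bucket set.
BUCKETS = [(1024, 1024), (832, 1216), (1216, 832), (768, 1344), (1344, 768)]

def nearest_bucket(width, height):
    """Assign an image to the bucket with the closest aspect ratio (in log space)."""
    return min(BUCKETS,
               key=lambda b: abs(math.log(width / height) - math.log(b[0] / b[1])))

def balanced_batches(images, batch_size=4):
    """Infinite training iterator: each batch comes from a single bucket, and the
    bucket is chosen uniformly at random -- one reading of 'balanced' sampling."""
    by_bucket = defaultdict(list)
    for img in images:
        by_bucket[nearest_bucket(img["width"], img["height"])].append(img)
    eligible = [b for b, imgs in by_bucket.items() if len(imgs) >= batch_size]
    while eligible:
        bucket = random.choice(eligible)  # uniform over buckets, not over images
        yield bucket, random.sample(by_bucket[bucket], batch_size)

# Tiny demo: two buckets with equal sampling odds despite unequal sizes.
imgs = [{"width": 1024, "height": 1024}] * 16 + [{"width": 768, "height": 1344}] * 4
print(next(balanced_batches(imgs, batch_size=4))[0])
```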

4.3 Insight 3: Human Preference Alignment

  • Problem: A standard training objective (maximizing data log-likelihood) does not necessarily align with what humans perceive as high-quality or aesthetically pleasing. This misalignment is especially obvious in generated images of people, which often feature malformed hands, distorted faces, or an "uncanny valley" effect.

  • Solution: The authors implemented a human-in-the-loop alignment process inspired by Supervised Fine-Tuning (SFT).

  • Steps & Procedures:

    1. Data Curation: A system was built to automatically collect a high-quality dataset. This dataset was curated from multiple sources using user ratings gathered from the Playground product. This ensures the alignment data reflects real-world human preferences.
    2. Iterative Fine-Tuning: The base pre-trained model was fine-tuned on this high-quality dataset. The process was iterative: the team would train, evaluate, select the best dataset candidates based on empirical checks, and repeat.
    3. Human-in-the-loop Evaluation: During this iterative process, model progress was monitored by generating image grids from a fixed set of prompts and having humans evaluate them. This qualitative feedback loop helped guide the selection of the best model checkpoints and data subsets.
  • Focus Areas: This alignment process specifically targeted improvements in human-centric features like facial clarity, realistic eye gaze, hair texture, and overall lighting and depth-of-field.

    Figure 6: a six-panel grid of high-quality portraits of elderly and young subjects in varied styles and lighting, illustrating the improved detail and aesthetics.

    Figure 7: Qualitative comparison between methods. Prompts for the top row: "a person with a feeling of dryness in the mouth.", the bottom row: "a jeweler's hands, each holding a tiny gemstone, aligni…". The panels compare facial expressions and hand details across models; Playground v2.5 renders expressions, teeth, eyes, and hand poses noticeably better than the other models.

Figures 6 and 7 demonstrate the success of this method. Compared to SDXL and other models, Playground v2.5 generates humans with significantly more realistic and detailed faces, eyes, and hands. The lighting and depth-of-field also appear more natural and pleasing.
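The alignment procedure is described only qualitatively, so the following is a schematic mock of the loop: curate by user rating, fine-tune, and keep a checkpoint only if human raters prefer it. Here `fine_tune` and `human_rate` are stand-ins for real training runs and human grading, not the paper's tooling:

```python
import random

def fine_tune(model, dataset):
    """Stand-in for an SFT run on the curated dataset."""
    return {"round": model["round"] + 1, "sft_examples": len(dataset)}

def human_rate(model, prompts):
    """Stand-in for humans grading image grids generated from fixed prompts."""
    return random.random()

def align(base_model, candidate_pools, fixed_prompts, rounds=3, threshold=4.5):
    model = base_model
    for _ in range(rounds):
        # 1. Curate: keep only highly user-rated examples from each candidate source.
        sft_set = [ex for pool in candidate_pools for ex in pool
                   if ex["user_rating"] >= threshold]
        # 2. SFT-style fine-tune on the curated, preference-aligned data.
        candidate = fine_tune(model, sft_set)
        # 3. Human-in-the-loop gate: keep whichever checkpoint raters prefer.
        if human_rate(candidate, fixed_prompts) >= human_rate(model, fixed_prompts):
            model = candidate
    return model

model = align({"round": 0, "sft_examples": 0},
              [[{"user_rating": 4.8}, {"user_rating": 3.9}]], ["a portrait photo"])
```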


5. Experimental Setup

  • Datasets:

    • Internal-1K: A collection of 1,000 prompts sourced from real users of the Playground.com platform. This dataset is representative of how a general audience uses text-to-image models. It was used for the main aesthetic preference user studies.
    • People-200: A curated set of 200 high-quality prompts focused on generating images of people, also sourced from real users. This was used for the targeted evaluation of human-centric image generation.
    • MJHQ-30K: A novel benchmark dataset created by the authors for automatic evaluation. It consists of 30,000 high-quality images generated by Midjourney v5.2, organized into 10 distinct categories (e.g., people, animals, landscapes, fashion). The dataset was filtered for high image quality and text-image alignment using aesthetic scores and CLIP scores (see the filtering sketch below).
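The paper states only that MJHQ-30K was filtered using aesthetic and CLIP scores. Below is a minimal sketch of that kind of two-stage filter; the scoring functions, thresholds, and per-category budget (3,000 images x 10 categories = 30,000) are illustrative placeholders, not the authors' actual pipeline:

```python
def curate_mjhq(candidates, aesthetic_score, clip_score,
                min_aesthetic=6.0, min_clip=0.28, per_category=3000):
    """Keep images passing both quality filters, up to a fixed per-category budget.
    `aesthetic_score` and `clip_score` are caller-supplied scoring models
    (placeholders here); the thresholds are illustrative, not the paper's."""
    kept = {}
    for item in candidates:
        if aesthetic_score(item["image"]) < min_aesthetic:
            continue  # drop images with low predicted aesthetic quality
        if clip_score(item["image"], item["prompt"]) < min_clip:
            continue  # drop images weakly aligned with their prompt
        bucket = kept.setdefault(item["category"], [])
        if len(bucket) < per_category:
            bucket.append(item)
    return kept

# Tiny demo with stub scorers (real ones would be learned models):
demo = [{"image": "img0", "prompt": "a cat", "category": "animals"}]
print(curate_mjhq(demo, aesthetic_score=lambda im: 7.0,
                  clip_score=lambda im, p: 0.3))
```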
  • Evaluation Metrics:

    1. Human Preference Win Rate:

      • Conceptual Definition: This metric directly measures which model's output users find more aesthetically pleasing. It is considered a gold-standard evaluation for generative models because it captures subjective qualities that automatic metrics miss.

      • Methodology: For a given prompt, users are shown two images generated by two different models side-by-side (Figure 8). They vote for the one they prefer. To ensure robustness, each image pair is rated by at least 7 unique users. A model "wins" if it gets a margin of 2 or more votes. A 1-vote margin is a "tie." Thousands of users participated in each study.

        Figure 8: An example of an image pair shown to a user in our product; two similar fashion portraits are displayed side-by-side, and the user picks the more aesthetically pleasing one.
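This win/tie rule is mechanical enough to state as code. In the sketch below, treating an exact zero margin (possible with an even rater count) as a tie is our assumption; the paper specifies only the two cases above:

```python
def pair_outcome(votes_a: int, votes_b: int) -> str:
    """Paper's rule: each pair gets at least 7 unique raters; a margin of two or
    more votes is a win, and a one-vote margin is a tie (zero margin assumed tie)."""
    assert votes_a + votes_b >= 7, "each image pair needs at least 7 ratings"
    margin = votes_a - votes_b
    if margin >= 2:
        return "model A wins"
    if margin <= -2:
        return "model B wins"
    return "tie"

print(pair_outcome(5, 2))  # margin 3 -> "model A wins"
print(pair_outcome(4, 3))  # margin 1 -> "tie"
```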

    2. Fréchet Inception Distance (FID):

      • Conceptual Definition: FID measures the similarity between the distribution of generated images and the distribution of real images. It evaluates both the quality (fidelity) and diversity of the generated images. A lower FID score indicates that the two distributions are closer, meaning the generated images are more similar to real images. It is a standard metric for benchmarking image generation models.
      • Mathematical Formula:
        $$\mathrm{FID}(x, g) = \left\| \mu_x - \mu_g \right\|_2^2 + \mathrm{Tr}\left( \Sigma_x + \Sigma_g - 2(\Sigma_x \Sigma_g)^{1/2} \right)$$
      • Symbol Explanation:
        • $x$ and $g$ denote the sets of real and generated images, respectively.
        • $\mu_x$ and $\mu_g$ are the mean vectors of the feature representations (activations from a specific layer of a pre-trained InceptionV3 network) for the real and generated images. The term $\|\mu_x - \mu_g\|_2^2$ measures the distance between the means, penalizing shifts in content and style.
        • $\Sigma_x$ and $\Sigma_g$ are the covariance matrices of the feature representations; they capture the diversity and correlations of features in each image set.
        • $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix (the sum of its diagonal elements). The trace term measures the distance between the covariance matrices, penalizing models that fail to capture the diversity of the real data.
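The formula above translates directly into a few lines of numpy/scipy. In practice, the means and covariances are fitted to InceptionV3 pool features of the real and generated image sets; the toy demo below substitutes random vectors simply to exercise the function:

```python
import numpy as np
from scipy import linalg

def fid(mu_x, sigma_x, mu_g, sigma_g):
    """Frechet distance between two Gaussians fitted to Inception features."""
    covmean = linalg.sqrtm(sigma_x @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    diff = mu_x - mu_g
    return float(diff @ diff + np.trace(sigma_x + sigma_g - 2.0 * covmean))

# Toy demo with random "features"; real use extracts InceptionV3 activations.
rng = np.random.default_rng(0)
feats_real = rng.normal(size=(500, 64))
feats_fake = rng.normal(loc=0.1, size=(500, 64))
mu_r, cov_r = feats_real.mean(0), np.cov(feats_real, rowvar=False)
mu_f, cov_f = feats_fake.mean(0), np.cov(feats_fake, rowvar=False)
print(f"toy FID: {fid(mu_r, cov_r, mu_f, cov_f):.3f}")
```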
  • Baselines:

    • Open-Source: SDXL 1.0, Playground v2, PIXART-α.

    • Closed-Source: DALL·E 3, Midjourney v5.2.

    • Specialized: RealStock v2, a community fine-tuned version of SDXL specifically trained on realistic photos of people.


6. Results & Analysis

Playground v2.5's performance was evaluated through extensive user studies and automatic metrics.

  • Core Results:

    • Overall Aesthetic Preference: In a head-to-head comparison using the Internal-1K prompt set, Playground v2.5 was strongly preferred over all other models (Figure 10).

      • It was preferred 4.8x more than SDXL (82.77% win rate).

      • It outperformed its predecessor, Playground v2, with a 73.18% win rate.

      • Crucially, it also outperformed the top commercial models: DALL·E 3 (60.05% win rate) and Midjourney v5.2 (53.33% win rate). This is a major result, as surpassing closed-source leaders is rare for an open-source model.

        Figure 10: User study against SoTA methods. We report human aesthetic preference metrics of Playground v2.5 against various publicly available text-to-image models on the Internal-1K prompt set. Playground v2.5 achieves the highest win rate against every baseline, peaking at 82.77% against SDXL 1.0.

    • Performance Across Aspect Ratios: The balanced bucketing strategy proved effective. In user studies across various aspect ratios from portrait (9:16) to landscape (16:9), Playground v2.5 consistently outperformed SDXL by a large margin, with win rates typically between 80% and 90% (Figure 11). This confirms the model's reliability in non-square formats.

      Figure 11: User study against SDXL on multiple aspect ratios. We conduct user studies for images generated in various commonly-used aspect ratios, with height-to-width ratios ranging from 9:16 to 16:9. Playground v2.5 outperforms SDXL-1-0-refiner by a wide margin at every ratio.

    • Performance on People-centric Prompts: The human preference alignment was highly successful. On the People-200 benchmark, Playground v2.5 was overwhelmingly preferred over both the general-purpose SDXL (91.46% win rate) and the specialized RealStock v2 model (75.38% win rate), as shown in Figure 12. This highlights the effectiveness of the SFT-like alignment process for improving human features.

      Figure 12: a bar chart of user preference win rates on people-centric prompts; Playground v2.5 wins 75.38% of comparisons against RealStock v2 and 91.46% against SDXL 1.0, clearly beating both baselines.

  • Automatic Evaluation on MJHQ-30K:

    • The automatic FID scores on the new MJHQ-30K benchmark corroborate the user study findings. A lower FID is better.

    • As transcribed from Table 1 in the paper, Playground v2.5 achieved a significantly lower overall FID score compared to its predecessors.

      | Method | Overall FID |
      | --- | --- |
      | SDXL 1.0 + refiner [28] | 9.55 |
      | Playground v2 [20] | 7.07 |
      | Playground v2.5 | 4.48 |
    • The per-category FID breakdown in Figure 13 shows that Playground v2.5 outperforms Playground v2 and SDXL in all 10 categories. The most dramatic improvements are seen in the people and logo categories, which aligns with the explicit goals of improving human details and color fidelity (which is critical for logos).

      Figure 13: MJHQ-30K benchmark. We report FID (lower is better) of Playground v2.5, Playground v2 [20], and SDXL [28] on 10 common categories. Playground v2.5 outperforms Playground v2 in all categories, most significantly in the logo and people categories.


7. Conclusion & Reflections

  • Conclusion Summary: The paper successfully demonstrates that state-of-the-art aesthetic quality in text-to-image models can be achieved by focusing on a few critical aspects of the training recipe. By adopting a principled noise schedule (EDM framework), ensuring a balanced data pipeline for multiple aspect ratios, and implementing an iterative human preference alignment process, Playground v2.5 sets a new standard for open-source models. The results are compelling, showing superiority not only over other open-source alternatives but also over leading commercial systems in extensive human evaluations. The release of the model and the MJHQ-30K benchmark provides valuable resources for the community.

    Figure 14: Playground v2.5 random samples with popular user prompts, spanning portraits, animals, landscapes, and fantasy scenes.

  • Limitations & Future Work:

    • The authors do not list specific limitations of Playground v2.5.
    • For future work, they plan to focus on:
      1. Improving text-to-image alignment: Ensuring the model follows complex prompts even more faithfully.
      2. Enhancing variation: Increasing the diversity of outputs for a single prompt.
      3. Exploring new architectures: Moving beyond the current LDM framework, possibly towards models like Diffusion Transformers (DiT).
  • Personal Insights & Critique:

    • Engineering over Novelty: This paper is an excellent example of a "technical report" style of research. The insights are not groundbreaking theoretical discoveries but are derived from the clever application and synthesis of existing advanced techniques. Its value lies in providing a clear, reproducible "recipe" for achieving top-tier aesthetic results, which is immensely useful for practitioners.
    • Evaluation Rigor: The use of large-scale human preference studies involving thousands of real-world users is a major strength. It provides a more meaningful measure of "quality" than automated metrics alone. However, using their own product's users could introduce a slight bias, as these users might be pre-disposed to the "Playground" style of images.
    • Community Contribution: The open-sourcing of the model weights is a significant contribution. Even more impactful in the long run might be the release of the MJHQ-30K benchmark, which provides a standardized way for researchers to measure aesthetic quality and compare models without having to run expensive user studies.
    • The Power of Data and Alignment: The paper powerfully underscores a key trend in modern AI: a well-trained base model, when meticulously fine-tuned on high-quality, preference-aligned data, can often outperform larger models or more complex architectures. The human-in-the-loop approach is shown to be a critical component for closing the gap between what a model can generate and what a human wants to see.
