
A Rapid Sand Gradation Detection Method Based on Dual-Camera Fusion

Published:07/09/2025
This analysis is AI-generated and may not be fully accurate. Please refer to the original paper.

TL;DR Summary

A dual-camera fusion method with temporal sampling enables fast, accurate sand gradation detection. Combining wide-angle and high-magnification views and lightweight segmentation achieves real-time, scalable quality control with minimal error.

Abstract

Academic Editor: Keun-Hyeok Yang. Received: 9 June 2025; Revised: 27 June 2025; Accepted: 30 June 2025; Published: 9 July 2025. Citation: Zhang, S.; Zhang, Y.; Sun, S.; Yuan, X.; Sun, H.; Wang, H.; Yuan, Y.; Luo, D.; Xu, C. A Rapid Sand Gradation Detection Method Based on Dual-Camera Fusion. Buildings 2025, 15, 2404. https://doi.org/10.3390/buildings15142404. © 2025 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Authors: Shihao Zhang †, Yang Zhang *,†, Song Sun, Xinghai Yuan, Haoxuan Sun, Heng Wang, Yi Yuan, Dan Luo and Chuanyun Xu, School of Computer and Information Science, Chongqing Normal University, No. 37, University City Middle Road, Chongqing 401331, China. (The extracted front matter cuts off mid-way through the author e-mail list; the abstract body is summarized under Bibliographic Information below.)

In-depth Reading

1. Bibliographic Information

  • Title: A Rapid Sand Gradation Detection Method Based on Dual-Camera Fusion
  • Authors: Shihao Zhang, Yang Zhang, Song Sun, Xinghai Yuan, Haoxuan Sun, Heng Wang, Yi Yuan, Dan Luo, and Chuanyun Xu. Yang Zhang is listed as the corresponding author.
  • Journal/Conference: The paper was published in Buildings, a peer-reviewed, open-access scientific journal by MDPI. Buildings is a reputable journal in the fields of building science, civil engineering, and architecture, known for its rapid publication timeline.
  • Publication Year: 2025 (Received: 9 June 2025; Accepted: 30 June 2025; Published: 9 July 2025).
  • Abstract: The paper addresses the need for rapid, online quality control of manufactured sand used in concrete. Traditional sieve tests are accurate but too slow. The authors propose an image-based method using a dual-camera system (a wide-angle global camera and a high-magnification local camera) to capture a full range of particle sizes. To improve efficiency, a Temporal Interval Sampling Strategy (TISS) is introduced to select only representative image frames, reducing data redundancy. The system uses a lightweight geometric algorithm for segmenting overlapping particles and a normal-distribution-based classifier for sizing. In tests on ten 500g sand batches, the method achieved an average processing time of 7.8 minutes per batch, with a total gradation error under 12% and a fineness-modulus deviation within ±0.06 compared to standard sieving. The results suggest the method is a scalable, real-time solution for industrial applications.
  • Original Source Link: https://doi.org/10.3390/buildings15142404. The paper is published as an open-access article under the Creative Commons Attribution (CC BY) license.

2. Executive Summary

  • Background & Motivation (Why): The particle size distribution, or gradation, of sand is a critical factor determining the quality and performance of concrete. The standard method for measuring gradation is sieve analysis, which is a slow, labor-intensive manual process, making it unsuitable for real-time quality control on construction sites or in production plants. Existing automated image-based methods struggle with a fundamental trade-off: high-resolution imaging required for fine particles (<0.3 mm) results in a very small field of view, necessitating thousands of images and long processing times to analyze a representative sample. Conversely, wide-angle imaging captures large areas quickly but fails to resolve the fine particles accurately. This paper aims to solve this efficiency-versus-accuracy bottleneck.

  • Main Contributions / Findings (What): The paper introduces a novel system that balances speed and precision for sand gradation analysis. Its primary contributions are:

    1. Dual-Camera Hardware Fusion: A hardware setup combining a wide-angle "global" camera for overall particle distribution and a high-magnification "local" camera to specifically resolve fine particles. This design captures comprehensive data in a single pass without needing to stitch thousands of high-resolution images.
    2. Temporal Interval Sampling Strategy (TISS): An intelligent sampling method that significantly reduces the number of images processed. By capturing images at spaced intervals as the sand sample is fed, it avoids redundant data while maintaining statistical representativeness, drastically cutting down detection time.
    3. Lightweight Image Processing Pipeline: The system employs efficient, rule-based algorithms instead of computationally expensive deep learning models. This includes a Recursive Concavity-Guided Segmentation (RCGS) algorithm to separate clumped particles and a statistical classifier based on normal distributions to assign particle sizes.
    4. Demonstrated Practicality: The method was validated on 500g sand samples, processing each in an average of 7.8 minutes. It achieved results close to the ground-truth sieve analysis, with gradation error below 12% and fineness-modulus deviation within ±0.06, proving its potential for real-world, on-site deployment.

3. Prerequisite Knowledge & Related Work

  • Foundational Concepts:

    • Sand Gradation: Refers to the distribution of particle sizes within a sample of sand (fine aggregate). A well-graded sand has a good mix of particle sizes, which allows for denser packing in concrete, improving its workability, strength, and durability. It is typically measured by the percentage of material passing through a series of standardized sieves.
    • Fineness Modulus (FM): A single numerical index that provides a general measure of the fineness or coarseness of sand. It is calculated by summing the cumulative percentages of sand retained on a specific series of standard sieves and dividing by 100. A higher FM indicates a coarser aggregate.
    • Sieve Analysis: The traditional, universally accepted method for determining sand gradation. It involves shaking a sand sample through a stack of sieves with progressively smaller mesh openings and weighing the material retained on each sieve.
    • Image Segmentation: A critical step in digital image processing that involves partitioning an image into multiple segments or objects. In this context, it is used to isolate individual sand particles from the background and from each other, even when they are touching or overlapping.
    • Feret Diameter: A measure of an object's size. It is defined as the distance between two parallel tangents on opposite sides of the object's profile. It is often used to characterize the size of irregular particles like sand grains.
  • Previous Works: The paper positions its work against existing research in image-based gradation detection, which has focused on three main areas:

    1. Imaging Systems: Researchers have developed various camera setups to improve image quality. Li et al. [12] used adjustable lighting, while Huang et al. [13] used a falling-particle platform but could not resolve very fine particles. The idea of using multiple cameras is not new; Lin et al. [14] used a dual-camera setup for fine fractions, and Zhao et al. [15] used three cameras to create a 2D image library. However, these dynamic systems face challenges with lighting synchronization and the dispersion of fine grains. Static approaches like that of Zhang et al. [16] are more reliable for fine particles but are too slow for high-throughput applications.
    2. Particle Segmentation: Classical methods like the watershed algorithm [17, 18] are used to separate touching particles but are sensitive to parameter tuning and can lead to incorrect segmentation. In recent years, deep learning models like Fully Convolutional Networks (FCN) [19], U-Net variants [20], and Mask R-CNN [21] have shown higher accuracy. However, they require large, labeled datasets, extensive training, and are often not robust to changes in lighting or material, making them difficult to deploy in industrial settings.
    3. Gradation Estimation: Converting 2D image measurements to 3D volume and mass is a known challenge. Some studies use 2D metrics like the Feret diameter and projected area as proxies [22-25], but these inherently introduce errors as they miss the particle's thickness. True 3D methods like laser scanning [26] and X-ray CT [27] are highly accurate but are too expensive and computationally intensive for on-site use. Deep learning has also been used for end-to-end gradation prediction [28, 31, 32], but again suffers from the high overhead of data and training.
  • Differentiation: This paper's primary innovation is its holistic focus on efficiency for practical deployment. It consciously avoids the complexities of deep learning and expensive 3D scanning. Instead, it integrates three key ideas into a single, optimized pipeline: a complementary dual-camera design to solve the resolution-coverage trade-off, a temporal sampling strategy (TISS) to minimize data acquisition, and a lightweight, rule-based algorithm (RCGS) for fast and reliable processing. This combination directly targets the balance of speed, accuracy, and cost required for real-time industrial monitoring.

4. Methodology (Core Technology & Implementation Details)

The proposed method consists of an optimized hardware platform for image acquisition and a multi-stage software pipeline for processing the images and calculating the final gradation.

4.1. Image Acquisition Optimization

The system is designed to acquire high-quality, representative images efficiently.

  • Hardware Platform: As shown in Figure 1, the platform is a closed-loop automated system with four key components:

    1. Quantitative Feeding: A high-precision load cell ensures a consistent mass of sand (approx. 1.5 g) is fed in each cycle, minimizing sample size fluctuations.

    2. Vibration-Assisted Dispersion: A multi-axis voice-coil motor array vibrates a flexible tray to break up particle clumps and spread the sand evenly, which is critical for accurate imaging.

    3. Synchronized Dual-Camera Imaging: A wide-angle global camera captures the overall particle field, while a high-magnification local camera focuses on resolving fine particles. The two are synchronized to capture complementary views simultaneously.

    4. Pneumatic Cleaning: Directed air blowing cleans the tray after each cycle, which is faster and more effective than mechanical methods.

      Figure 1. Schematic diagram of the sampling device components. The figure shows the main components and layout of the dual-camera rapid sand gradation detection device, including the computer, feed hopper, local sampling camera, and global sampling camera.

    The parameters for the dual-camera module are detailed in Table 1. The global camera provides broad coverage, while the local camera provides the necessary detail for fine grains.

    (The following is a manual transcription of Table 1 from the source text.) Table 1. Imaging module parameter settings.

    Parameter Global Camera (MV-CS060-10GC) Local Camera (MV-CS200-10GC)
    Focal Length (mm) 12 50
    Resolution (pixels) 3072 × 2048 5472 × 3648
    Field of View (cm) 10.9 × 7.5 3.8 × 2.6
    Frame Rate (fps) 15 8
    Pixel Size (μm) 35.2 6.8
    Detection Objective Overall Distribution Statistics Fine-grained Morphology Analysis
  • Temporal Interval Sampling Strategy (TISS): A 500 g sand sample requires approximately 334 feeding cycles (500 g ÷ 1.5 g per cycle). Imaging every cycle would be extremely time-consuming (over 1.3 hours). TISS instead samples intermittently: an interval parameter dictates that images are acquired only once every interval feeding cycles. With an interval of 10, for example, only 34 image groups are captured, reducing the total time to about 8.5 minutes. The strategy assumes that the sand fed over time is uniformly mixed, so sparse sampling remains statistically representative.
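The cycle and group counts behind these figures follow directly from the batch mass, per-cycle feed mass, and interval; a small sketch (function name is ours; timing constants are not reproduced, since the paper reports measured times):

```python
import math

def tiss_schedule(batch_g=500.0, feed_g=1.5, interval=10):
    """Count feeding cycles and captured image groups under TISS.

    Images are assumed to be taken on cycles 1, 1+interval, 2*interval+1, ...
    """
    cycles = math.ceil(batch_g / feed_g)     # ~334 feeding cycles per 500 g batch
    groups = math.ceil(cycles / interval)    # image groups actually captured
    sampled_mass = groups * feed_g           # mass represented by the images
    return cycles, groups, sampled_mass

cycles, groups, mass = tiss_schedule()
print(cycles, groups, mass)   # 334 cycles, 34 image groups, 51.0 g sampled
```

With interval = 10 this reproduces the paper's ~34 image groups, and the ~51 g of sampled mass is consistent with the later finding that results remain representative while the sampled mass stays above 50 g.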

4.2. Image Processing

Once images are acquired, they are processed to isolate and measure each particle.

  • Block-Wise Adaptive Threshold Segmentation: To robustly extract particles from the background under varying lighting conditions, a two-stage method is used:

    1. Multi-frame Background Modeling: The stable background is estimated by taking the median of a sequence of frames. This effectively removes sensor noise and static textures. The background is then subtracted from the frames.
    2. Local Adaptive Thresholding: The resulting foreground image is divided into smaller blocks (e.g., 32x32 pixels). A unique binarization threshold is calculated for each block based on its local pixel intensity statistics (e.g., 90th percentile). This adapts to local illumination gradients and preserves the edges of very fine particles. The full procedure is detailed in Algorithm 1.

    (The following is a manual transcription of Algorithm 1 from the source text.) Algorithm 1: Multi-frame Adaptive Sand Particle Segmentation

    Input:  I = {I1, I2, ..., IN}: sequence of grayscale frames from the same viewpoint;
            Bsize: local block size (32 × 32 pixels);
            Tpercent: threshold percentile (90%).
    Output: M: binary mask highlighting sand-particle regions.

    1   Multi-frame background modeling:
    2       B ← Median(I1, ..., IN)
    3   Foreground enhancement:
    4       for i ← 1 to N do
    5           Di ← |Ii − B|                  // suppress static background
    6       F ← Average(D1, ..., DN)           // enhance consistent foreground signals
    7   Local adaptive thresholding:
    8       initialize an empty binary mask M with the same shape as F
    9       foreach block R in F of size Bsize do
    10          TR ← Percentile(FR, Tpercent)
    11          foreach pixel (x, y) in R do
    12              if F(x, y) ≥ TR then M(x, y) ← 1    // foreground
    13              else M(x, y) ← 0                    // background
    14  return M
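A minimal NumPy sketch of this two-stage procedure (the guard that skips flat, foreground-free blocks is our addition, to avoid a degenerate zero threshold marking empty blocks as foreground):

```python
import numpy as np

def segment_particles(frames, block=32, percentile=90.0):
    """Sketch of Algorithm 1: multi-frame background modeling followed by
    block-wise adaptive thresholding. `frames` is a sequence of 2-D
    grayscale arrays taken from the same viewpoint."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    background = np.median(stack, axis=0)                     # static background model
    foreground = np.mean(np.abs(stack - background), axis=0)  # consistent signal

    mask = np.zeros(foreground.shape, dtype=np.uint8)
    h, w = foreground.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = foreground[y:y + block, x:x + block]
            t = np.percentile(blk, percentile)                # per-block threshold
            if t <= 1e-6:
                continue                                      # flat block: background only
            mask[y:y + block, x:x + block] = (blk >= t).astype(np.uint8)
    return mask
```

The per-block percentile threshold is what lets the method track local illumination gradients instead of a single global threshold.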

  • Recursive Concavity-Guided Segmentation (RCGS): After binarization, touching particles appear as a single clump. RCGS is a rule-based algorithm designed to separate them. The process is illustrated in Figures 2 and 3.

    Figure 2. Segmentation processing flowchart. The flowchart shows the particle-segmentation pipeline applied to the binary mask image, including contour extraction, shape-feature judgment, and convexity-defect analysis, used to segment and classify particles accurately.

    Figure 3. Example workflow for clumped-particle segmentation. The figure illustrates the clump-segmentation steps: minimum enclosing rectangle, concavity extraction, and recursive splitting at the nearest pair of concave points to isolate and separate single particles.

    1. Adhesion Determination: For each detected contour, geometric features like aspect ratio, solidity, and convexity are calculated. If these metrics fall outside the typical range for a single particle (e.g., low solidity indicates non-convexity), the contour is flagged as an adhered cluster.
    2. Iterative Concavity-Based Splitting: For a cluster, the algorithm identifies all significant concave points (defects) on its contour. It then finds the pair of concave points with the smallest Euclidean distance between them and splits the cluster along the line connecting this pair.
    3. Recursion and Termination: Each newly created sub-contour is re-evaluated using the adhesion criteria. This split-and-check cycle repeats recursively until all resulting contours are classified as single particles or no more valid splits can be made. This approach effectively decomposes complex clumps without needing any training data.
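The split-and-check cycle can be illustrated on polygonal contours with plain NumPy. This is a simplified sketch, not the paper's implementation: it uses only a concave-vertex test for adhesion (the paper also checks aspect ratio, solidity, and convexity) and splits at the closest concave pair as described.

```python
import numpy as np

def polygon_area(P):
    """Signed shoelace area of an (N x 2) vertex array."""
    x, y = P[:, 0], P[:, 1]
    return 0.5 * (np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def concave_indices(P):
    """Vertices whose turn direction opposes the polygon orientation."""
    prev, nxt = np.roll(P, 1, axis=0), np.roll(P, -1, axis=0)
    cross = ((P[:, 0] - prev[:, 0]) * (nxt[:, 1] - P[:, 1])
             - (P[:, 1] - prev[:, 1]) * (nxt[:, 0] - P[:, 0]))
    return np.where(np.sign(cross) == -np.sign(polygon_area(P)))[0]

def rcgs(P, min_vertices=4):
    """Recursively split contour P at the closest pair of concave points;
    return a list of contours judged to be single particles."""
    cc = concave_indices(P)
    if len(cc) < 2 or len(P) < 2 * min_vertices:
        return [P]                       # convex enough (or too small): one particle
    # closest pair of concave vertices defines the split chord
    d = np.linalg.norm(P[cc][:, None, :] - P[cc][None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    a, b = sorted((cc[i], cc[j]))
    if b - a < 2 or len(P) - (b - a) < 2:
        return [P]                       # degenerate split: stop
    left = P[a:b + 1]
    right = np.vstack([P[b:], P[:a + 1]])
    return rcgs(left) + rcgs(right)
```

On a "peanut" contour (two lobes joined by a notched neck), `rcgs` splits once along the neck chord and returns two convex sub-contours whose areas sum to the original.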

4.3. Gradation Calculation

The final stage involves classifying the segmented particles by size and fusing the data from the two cameras.

  • Particle Size Classification: A data-driven statistical method is used to define the boundaries between size classes, avoiding arbitrary thresholds.

    1. Single-Size Sample Imaging: Sand is first sieved into standard, narrow size intervals (e.g., 0.075-0.15 mm). Each group is imaged separately.

    2. Normal Distribution Fitting: The Feret diameters of the particles in each size group are measured, and a normal distribution (defined by mean $\mu$ and standard deviation $\sigma$) is fitted to each group's diameter histogram.

    3. Threshold Determination: The optimal classification threshold between two adjacent size classes is determined by finding the intersection point of their fitted probability density function curves. This point minimizes the probability of misclassifying a particle between the two neighboring classes. Figure 4 shows the fitted distributions and their overlaps. Algorithm 2 details the classification procedure.

      Figure 4. Fitted normal distribution curves for six particle-size intervals.

      (The following is a manual transcription of Algorithm 2 from the source text.) Algorithm 2: Feret Diameter-Based Particle Size Classification.

    Input:  G = {G1, G2, ..., Gn}: Feret-diameter samples in each predefined size group;
            X = {x1, x2, ..., xm}: Feret diameters of particles to classify.
    Output: C = {c1, c2, ..., cm}: group index for each classified particle.

    1   Estimate distribution parameters:
    2       for i ← 1 to n do
    3           compute mean $\mu_i$ and variance $\sigma_i^2$ over all $g \in G_i$
    4   Compute normal-distribution intersections:
    5       T ← []
    6       for i ← 1 to n − 1 do
    7           solve x such that $N(x; \mu_i, \sigma_i) = N(x; \mu_{i+1}, \sigma_{i+1})$
    8           append x to T
    9   Construct classification intervals:
    10      ranges D ← construct_ranges(T, min_d, max_d)
    11  Assign a class to each particle:
    12      for i ← 1 to m do
    13          for j ← 1 to n do
    14              if $x_i \in D_j$ then $c_i \leftarrow j$; break
    15  return C
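The key step in Algorithm 2, solving $N(x; \mu_i, \sigma_i) = N(x; \mu_{i+1}, \sigma_{i+1})$, reduces to a quadratic obtained by equating log-densities. A small sketch (function names are ours):

```python
import bisect
import math

def normal_intersection(mu1, s1, mu2, s2):
    """x at which two Gaussian pdfs have equal density, taking the
    root that lies between the two class means."""
    if abs(s1 - s2) < 1e-12:
        return 0.5 * (mu1 + mu2)                  # equal spread: midpoint
    # Equate log-densities and collect terms of a*x^2 + b*x + c = 0.
    a = 1.0 / (2 * s2 * s2) - 1.0 / (2 * s1 * s1)
    b = mu1 / (s1 * s1) - mu2 / (s2 * s2)
    c = (mu2 * mu2 / (2 * s2 * s2) - mu1 * mu1 / (2 * s1 * s1)
         + math.log(s2 / s1))
    disc = math.sqrt(b * b - 4 * a * c)
    lo, hi = sorted([mu1, mu2])
    return next(r for r in ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
                if lo <= r <= hi)

def classify(diameters, thresholds):
    """Assign each Feret diameter to a size class given sorted thresholds."""
    return [bisect.bisect_right(thresholds, d) for d in diameters]
```

When the two groups share a standard deviation, the intersection is simply the midpoint of the means; with unequal spreads the root between the means is the minimum-misclassification boundary described in the paper.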

  • Dual-View Fusion and Correction: A three-stage process combines the data from the global and local cameras into a single, accurate gradation curve.

    1. Scale Normalization: Measurements from the local camera are scaled to match the global camera's coordinate system using pre-calibrated scaling factors for length and area. The local camera's contribution is amplified by the ratio of the fields of view to ensure its data is properly weighted in the final calculation.
    2. Fine-Particle Replacement: The global camera has poor resolution for fine particles (<0.3 mm). Therefore, all particle data in this size range from the global image is discarded and replaced with the scaled data from the local camera. The combined distribution is then normalized so all volume fractions sum to 100%.
    3. Weighted Error Correction: To correct for any remaining systematic errors, a final calibration step is performed. An optimization algorithm (BFGS) is used to find a set of weights $W = (w_1, w_2, \ldots)$ that minimizes the squared error between the fused image-based gradation $X$ and a ground-truth gradation $Y$ from sieve analysis: $$\mathrm{SSE}(W) = \sum_{j} (w_j X_j - Y_j)^2$$ These optimal weights $W^*$ are then applied to the fused data to produce the final, corrected gradation. Algorithm 3 outlines this fusion process.

    (The following is a manual transcription of Algorithm 3 from the source text.) Algorithm 3: Weighted Correction of Fine-Scale Particle Volume Distribution

    Input:  $L_{local}$: Feret diameters from the local camera;
            $A_{local}$: projected areas from the local camera;
            $V_{global}$: initial global volume distribution;
            $V_{local\_fine}$: fine-particle volume from the local view;
            $W_{local}, H_{local}$ / $W_{global}, H_{global}$: physical width and height of the local / global image;
            $s_{local,x}, s_{local,y}$ / $s_{global,x}, s_{global,y}$: local / global pixel sizes;
            $D_{ranges}$: size-bin intervals;
            $Y_{reference}$: reference volume distribution.
    Output: $a_{corrected}$: final corrected volume proportions per bin.

    1   Compute area-scaling factors:
            $k_h \leftarrow s_{global,x} / s_{local,x}$; $k_v \leftarrow s_{global,y} / s_{local,y}$; $k_A \leftarrow k_h \times k_v$
    2   Scale Feret diameters and areas to the global image scale:
            foreach particle i in the local image do
    3           $d_{local}[i] \leftarrow \ldots$; $A_{local}[i] \leftarrow A_{local}[i] / k_A$
    4   Replace the global fine-scale volume:
            $S_{ratio} \leftarrow (W_{global} \times H_{global}) / (W_{local} \times H_{local})$;
            $V_{fine\_corrected} \leftarrow \sum A_{local} \times S_{ratio}$;
            $V'_{global} \leftarrow$ replace fine segment in $V_{global}$ with $V_{fine\_corrected}$
    5   Normalize the updated global distribution:
            foreach size bin $j$ do
    6           $a_{raw}[j] \leftarrow \frac{V'_{global}[j]}{\sum V'_{global}} \times 100\%$
    7   Construct the regression objective for weight correction:
            X: matrix of $a_{raw}$ across samples; Y: reference vector from $Y_{reference}$;
            $\mathrm{SSE}(\mathbf{w}) = \sum_j (w_j X_j - Y_j)^2$
    8   Solve optimal weights $\mathbf{w}^*$ with BFGS, initializing $\mathbf{w} \leftarrow [1, 1, \ldots, 1]$
    9   return $a_{corrected}$
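Because the correction objective is separable per size bin, the least-squares weights also have a closed form; the sketch below uses it in place of the paper's BFGS solver, which converges to the same minimum for this convex quadratic objective. Function names are ours:

```python
import numpy as np

def fit_correction_weights(X, Y):
    """Per-bin weights minimizing SSE(w) = sum_s sum_j (w_j X[s,j] - Y[s,j])^2.

    X: (samples x bins) fused image-based gradations;
    Y: (samples x bins) reference sieve gradations.
    The objective is separable per bin j, so w_j = sum_s X[s,j]*Y[s,j] / sum_s X[s,j]^2.
    """
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    num = (X * Y).sum(axis=0)
    den = (X * X).sum(axis=0)
    safe = np.where(den > 0, den, 1.0)
    return np.where(den > 0, num / safe, 1.0)   # bins never observed keep weight 1

def apply_correction(x, w):
    """Apply weights to a fused gradation and renormalize to 100 %."""
    corrected = np.asarray(x, dtype=float) * np.asarray(w, dtype=float)
    return 100.0 * corrected / corrected.sum()
```

As the conclusion of this analysis notes, fitting these weights requires ground-truth sieve data, so this step amounts to a one-time calibration per sand source.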

5. Experimental Setup

  • Datasets: The experiments used ten 500g samples of manufactured sand with varying gradations (fine, medium, and coarse), with particle sizes from 0.075 mm to 4.75 mm. Additionally, single-size sand samples were prepared using sieves for training the particle size classifier.

  • Evaluation Metrics: The performance of the proposed method was evaluated against the ground-truth sieve analysis using two key metrics:

    1. Grading Error (Mean Absolute Cumulative Gradation Error): This metric quantifies the overall accuracy of the particle size distribution. It is calculated by summing the absolute differences between the cumulative percentage of mass passing each standard sieve, as measured by the image method versus the reference sieve method. A lower value indicates a better fit. While the paper does not provide the formula, a standard definition is: $$\text{Grading Error} = \sum_{i=1}^{k} |P_{\text{image}}(i) - P_{\text{sieve}}(i)|$$ where $P(i)$ is the cumulative percentage passing the $i$-th sieve and $k$ is the total number of sieves.
    2. Fineness Modulus (FM) Error: This measures the accuracy of the overall fineness/coarseness trend. It is the absolute difference between the Fineness Modulus calculated from the image-based gradation and that from the reference sieve analysis: $$\text{FM Error} = |\text{FM}_{\text{image}} - \text{FM}_{\text{sieve}}|$$ A smaller error indicates that the image-based method correctly captures the general character of the sand.
  • Baselines: The study conducted a series of comparative experiments to validate each component of the proposed system:

    • Efficiency Optimization: Different sampling intervals (interval = 1 to 20) and numbers of frames per group (1, 2, or 3) were tested to find the optimal balance between speed and accuracy.
    • Dual-Camera Fusion: The proposed fusion method was compared against using only the global camera data or only the local camera data.
    • Segmentation Strategy: The proposed RCGS algorithm (referred to as "dynamic judgment") was compared against three simpler strategies: no segmentation, direct elimination of clumps, and fixed thresholding.
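The two evaluation metrics above can be computed in a few lines, following the definitions given in this analysis (grading error as the sum of absolute cumulative-passing differences; FM as the sum of cumulative percentages retained divided by 100). The example values in the usage note are illustrative only:

```python
def grading_error(passing_image, passing_sieve):
    """Sum of absolute differences between cumulative %-passing curves,
    one entry per standard sieve."""
    return sum(abs(a - b) for a, b in zip(passing_image, passing_sieve))

def fineness_modulus(cum_retained):
    """FM = sum of cumulative % retained on the standard sieve series / 100,
    per the definition in Section 3 above."""
    return sum(cum_retained) / 100.0
```

For example, cumulative retained percentages of [2, 20, 41, 62, 84, 96] on the six standard sieves give FM = 3.05, and two passing curves [10, 30] vs. [12, 28] give a grading error of 4 percentage points.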

6. Results & Analysis

6.1. Efficiency Optimization Comparison Experiment

This experiment evaluated the trade-off between detection time and accuracy when using the Temporal Interval Sampling Strategy (TISS). Figure 5 and Tables 2-5 show the results.

Figure 5. Effect of the sampling interval on total processing time and mean grading error for the single-frame strategy. The processing time drops almost exponentially as the interval increases, whereas the grading error stays below 12% up to an interval of about 11 and rises rapidly beyond it.

As shown in Figure 5, increasing the sampling interval drastically reduces the processing time. However, this comes at the cost of increased grading error, as fewer images are used. The results indicate an optimal interval of 11. At this setting:

  • The total processing time is reduced to 7.84 minutes.

  • The average grading error remains acceptable at approximately 12%.

  • Beyond this point, the error begins to rise sharply, indicating that the sampled data is no longer representative of the full 500g batch.

    (The following is a manual transcription of Table 2 from the source text.) Table 2. Raw metrics for the single-frame strategy at different sampling intervals.

| Interval | Max Error (%) | Min Error (%) | Average Error (%) | Std Dev | Time (min) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| 1 | 9.62 | 9.62 | 9.62 | 0 | 83.50 |
| 2 | 9.94 | 9.82 | 9.88 | 0.0008 | 41.75 |
| 3 | 10.85 | 9.56 | 10.09 | 0.0068 | 28.08 |
| 4 | 10.64 | 9.31 | 10.11 | 0.0062 | 21.13 |
| 5 | 10.67 | 10.32 | 10.48 | 0.0016 | 16.95 |
| 6 | 11.44 | 10.18 | 10.68 | 0.0043 | 14.17 |
| 7 | 11.85 | 9.83 | 11.04 | 0.0081 | 12.18 |
| 8 | 11.67 | 9.53 | 10.89 | 0.0077 | 10.69 |
| 9 | 12.41 | 10.34 | 11.29 | 0.0059 | 9.53 |
| 10 | 12.46 | 10.61 | 11.35 | 0.0052 | 8.60 |
| 11 | 13.29 | 10.11 | 12.02 | 0.0089 | 7.84 |
| 12 | 12.53 | 10.39 | 11.68 | 0.0076 | 7.21 |
| 13 | 13.27 | 10.99 | 12.01 | 0.0074 | 6.67 |
| 14 | 13.82 | 10.86 | 12.26 | 0.0091 | 6.21 |
| 15 | 14.64 | 11.16 | 12.60 | 0.0111 | 5.82 |
| 16 | 13.69 | 10.72 | 12.22 | 0.0077 | 5.47 |
| 17 | 14.24 | 11.56 | 12.83 | 0.0072 | 5.16 |
| 18 | 14.63 | 10.68 | 12.77 | 0.0131 | 4.89 |
| 19 | 14.54 | 11.33 | 13.02 | 0.0099 | 4.64 |
| 20 | 15.58 | 10.71 | 13.07 | 0.0130 | 4.43 |

Tables 3, 4, and 5 (transcribed below) compare strategies using 1, 2, or 3 images per sampled group. The analysis shows that for a given total sampled mass, the single-image-per-group strategy is most efficient. Capturing more images per group (e.g., 2 or 3) provides only marginal improvements in accuracy while significantly increasing the processing time. The experiments also confirmed that the results are stable and representative as long as the total sampled mass per batch remains above 50g (which corresponds to an interval around 10-11).

(The following are manual transcriptions of Tables 3, 4, and 5 from the source text.) Table 3. Detection time, mass, error, and standard deviation for sampling scheme with 1 image per group. This table is identical to the relevant columns in Table 2, but presented for clarity as in the original paper.

| Interval | Time (min) | Mass (g) | Error (%) | Std Dev |
| :--- | :--- | :--- | :--- | :--- |
| 1 | 83.50 | 500.00 | 9.62 | 0 |
| ... | ... | ... | ... | ... |
| 10 | 8.60 | 50.00 | 11.35 | 0.0052 |
| 11 | 7.84 | 45.45 | 12.02 | 0.0089 |
| ... | ... | ... | ... | ... |
| 20 | 4.43 | 25.00 | 13.07 | 0.0130 |

(Full data omitted for brevity, see Table 2)

Table 4. Detection time, mass, error, and standard deviation for sampling scheme with 2 images per group.

| Interval | Time (min) | Mass (g) | Error (%) | Std Dev |
| :--- | :--- | :--- | :--- | :--- |
| 1 | 52.88 | 250.00 | 9.54 | 0 |
| 2 | 26.60 | 125.00 | 9.93 | 0.0050 |
| 3 | 18.05 | 83.33 | 10.17 | 0.0033 |
| 4 | 13.62 | 62.50 | 10.72 | 0.0034 |
| 5 | 10.77 | 50.00 | 10.84 | 0.0081 |
| ... | ... | ... | ... | ... |
| 20 | 2.85 | 12.50 | 14.93 | 0.0158 |

Table 5. Detection time, mass, error, and standard deviation for sampling scheme with 3 images per group.

| Interval | Time (min) | Mass (g) | Error (%) | Std Dev |
| :--- | :--- | :--- | :--- | :--- |
| 1 | 40.13 | 166.67 | 9.83 | 0 |
| 2 | 20.07 | 83.33 | 10.60 | 0.0026 |
| 3 | 13.62 | 55.56 | 10.84 | 0.0081 |
| ... | ... | ... | ... | ... |
| 20 | 2.15 | 8.33 | 15.94 | 0.0165 |

6.2. Dual-Camera Comparison Experiment

This experiment demonstrates the superiority of the dual-camera fusion strategy. Table 6 shows the results for a coarse sand sample.

(The following is a manual transcription of Table 6 from the source text.) Table 6. Comparison of gradation results for a coarse manufactured sand sample (FM = 3.52) using three computation strategies. (Error denotes mean absolute cumulative gradation error, %)

| Particle Size (mm) | 0.075–0.15 | 0.15–0.3 | 0.3–0.6 | 0.6–1.18 | 1.18–2.36 | 2.36–4.75 | >4.75 | FM | Error |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Reference | 4% | 2% | 16% | 22% | 22% | 28% | 0% | 3.52 | — |
| Global only | — | — | 15.73% | 21.21% | 27.60% | 35.46% | 0% | — | 11.34% |
| Local only | 3.23% | 2.11% | 21.36% | 23.12% | 24.39% | 25.78% | 0% | 3.41 | 13.20% |
| Fusion | 3.22% | 2.34% | 18.56% | 21.72% | 25.17% | 28.99% | 0% | 3.50 | 7.77% |

(Cells marked — were not present in the source text; the global-only row reports no measurements for the fine fractions below 0.3 mm.)

Note: The provided text for Table 7 and beyond was incomplete. However, Figure 6 summarizes the results across all ten sand samples.

Figure 6. Comparison of fineness-modulus error and grading error across ten manufactured sand samples using different computation strategies (local-only, global-only, fusion). (a) Fineness-modulus error, presented as an absolute value; (b) grading error.

Analysis of Table 6 and Figure 6 reveals:

  • The Global only method completely fails to detect fine particles (<0.3 mm), confirming its primary weakness.
  • The Local only method detects fine particles but is less accurate for the overall distribution, leading to a high total error (13.20%) and a notable deviation in the Fineness Modulus (3.41 vs. 3.52).
  • The Fusion method successfully combines the strengths of both cameras. It accurately measures the fine fractions while maintaining a precise overall distribution. It achieves the lowest grading error (7.77%) and the most accurate Fineness Modulus (3.50).
  • Figure 6 confirms this trend across all ten samples. In both panels, the Fusion method (green bars) consistently shows lower grading error and fineness-modulus error compared to the local-only and global-only methods.

6.3. Segmentation Strategy Comparison Experiments

Note: The text describing this experiment was missing from the provided source. However, Figure 7, which was provided, allows for a full analysis of the results.

Figure 7. Fineness-modulus error and overall grading error of four image-processing strategies across ten coarse manufactured sand samples. (a) Absolute fineness-modulus error; (b) grading error; the RCGS method shows consistently lower errors than the alternatives.

Figure 7 compares the proposed RCGS segmentation algorithm (labeled "dynamic judgment" in the paper's design, and presumably represented by the best-performing bars) against simpler methods.

  • Panel (a) shows the absolute fineness-modulus error, while panel (b) shows the total grading error.
  • Across all ten samples, the proposed recursive segmentation method (RCGS, likely the green bars, which show the lowest error) consistently outperforms the other strategies.
  • Methods like no segmentation or direct elimination of clumps would lead to significant overestimation of particle sizes, resulting in large errors in both gradation and fineness modulus. A fixed threshold method would be less adaptable to varying degrees of particle adhesion.
  • The superior performance of RCGS demonstrates its robustness in handling complex, clumped particle scenarios, which is essential for achieving accurate gradation results.

7. Conclusion & Personal Thoughts

  • Conclusion Summary: The paper successfully presents and validates a rapid, image-based sand gradation detection method. By synergistically combining a dual-camera system with an efficient Temporal Interval Sampling Strategy (TISS), the system overcomes the classic speed-accuracy trade-off. The lightweight, rule-based image processing pipeline, particularly the RCGS algorithm for segmentation and the statistical size classifier, enables fast and accurate analysis without the heavy requirements of deep learning. The experimental results—an average detection time of 7.8 minutes with grading error under 12%—demonstrate that the method is a viable and practical solution for real-time quality control in industrial settings.

  • Limitations & Future Work:

    • Author-Acknowledged Limitations: The paper notes that the system's performance may degrade when processing sand with high moisture content or cohesion, as this hinders the effectiveness of the vibration-based dispersion.
    • Truncated Document: The provided research paper was incomplete, cutting off in the middle of the results section. The full discussion of results for medium and coarse sand, the segmentation comparison, and the authors' own conclusion and future work sections were missing.
    • Calibration Dependency: The final "Weighted Error Correction" step relies on a BFGS optimization to match the results to ground-truth sieve analysis. This implies that the system requires a one-time calibration for each new source or type of sand. While this improves accuracy, it means the system is not entirely independent of the slow, traditional sieving method it aims to replace.
  • Personal Insights & Critique:

    • Pragmatic Engineering: The core strength of this paper lies in its pragmatic approach. Instead of pursuing the highest possible accuracy with expensive or complex technologies (like 3D CT scans or deep learning), the authors built a system that is "good enough" for its intended application and excels in efficiency and practicality. The combination of simple but clever hardware and software solutions is a hallmark of good engineering design.
    • TISS is the Star: The Temporal Interval Sampling Strategy (TISS) is arguably the most impactful innovation. It directly addresses the primary bottleneck of image-based analysis—data volume—with a simple and effective statistical sampling concept.
    • Assumption of Uniformity: The method's success hinges on the assumption that the sand is well-mixed and that the small, intermittently sampled portions are representative of the whole batch. While the vibrating tray helps, any systematic segregation of particles during feeding could introduce bias.
    • Transferability: The overall framework—dual-view imaging, intelligent sampling, and lightweight processing—is highly transferable to other particle analysis problems in mining, pharmaceuticals, or food processing where real-time size distribution monitoring is needed.
