Towards robust fracture mapping: benchmarking automatic fracture mapping in 2D outcrop imagery
Abstract. Extracting consistent and accurate fracture traces from large volumes of high-resolution imagery remains a persistent challenge in structural analysis. We present FraXet, a harmonised benchmarking dataset for pixel-wise fracture segmentation in high-resolution RGB orthophotos and digital elevation models (DEMs). FraXet curates images from three publicly available datasets, totalling 8,953 256 × 256 RGB+DEM patches spanning diverse lithologies and imaging conditions. We use this dataset to systematically assess traditional image-processing filters (Canny, Sobel, Gabor, Sato, phase congruency) and two deep-learning (DL) models, U-Net and SegFormer, for per-pixel fracture detection. Quantitative comparison using image-quality metrics (e.g., MSE, PSNR), segmentation metrics (e.g., Precision, Recall, F1, IoU), and the proposed FracSim similarity metric suggests that the deep models substantially outperform the classical filters (F1 ≈ 0.3–0.5 vs. ≤ 0.29), yielding smoother, more continuous fracture traces with reduced noise. Training on the combined dataset (M_all) improves cross-site generalisation relative to models trained on the individual sub-datasets. Challenges remain in handling annotation misalignments, illumination artefacts, and thin traces. Moreover, probability maps derived from the DL models enable confidence-based triage and visualisation of model uncertainty. This work thus establishes a unified benchmark, curated dataset, and reproducible baselines to support further development of robust automated tools for fracture detection.
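To make the reported segmentation metrics concrete, the following is a minimal sketch of how the per-pixel Precision, Recall, F1, and IoU scores mentioned in the abstract are conventionally computed from binary fracture masks. The function name and mask encoding (1 = fracture pixel) are illustrative assumptions, not the paper's actual evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Per-pixel metrics for binary masks (1 = fracture, 0 = background).

    Illustrative assumption: standard TP/FP/FN definitions; the paper's
    own evaluation pipeline may differ (e.g., tolerance-based matching).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # correctly detected fracture pixels
    fp = np.logical_and(pred, ~gt).sum()       # false alarms
    fn = np.logical_and(~pred, gt).sum()       # missed fracture pixels
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return precision, recall, f1, iou

# Toy example on a 2 x 3 patch
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
p, r, f1, iou = segmentation_metrics(pred, gt)  # tp=2, fp=1, fn=1
```

In this toy case precision = recall = F1 = 2/3 and IoU = 0.5, illustrating why IoU is always at or below F1 for the same prediction.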