YOLOv11-Based Solar Panel Defect Detection and Location System

The rapid expansion of centralized photovoltaic (PV) power stations has underscored the critical need for efficient and reliable maintenance. Solar panel arrays are frequently subjected to defects such as foreign object shading, bird droppings, and hot spots, which can severely degrade power generation efficiency and even pose fire hazards. Traditional manual inspection methods are labor-intensive, time-consuming, and often inefficient for large-scale installations. While unmanned aerial vehicles (UAVs) equipped with cameras have been adopted for aerial surveys, a significant challenge remains in not only accurately identifying small defects on high-resolution imagery but also precisely locating these faulty solar panels within extensive arrays for targeted maintenance. To address these challenges, this paper proposes an intelligent solar panel defect detection and location system. The system leverages a deep learning approach, combining the YOLOv11n model with Slicing Aided Hyper Inference (SAHI) for robust defect detection in high-resolution UAV imagery. Furthermore, it integrates detection results with UAV metadata and geolocation information to calculate the precise relative position of defective solar panels within an array. This integrated solution provides a complete workflow from automated fault identification to actionable location data, significantly enhancing the intelligence and efficiency of PV farm operations and maintenance.

The core of the detection module is based on YOLOv11n, chosen for its optimal balance between accuracy and computational efficiency. The model is trained on a carefully curated dataset of solar panel images. To overcome the challenge of detecting small defects (e.g., 80×40 pixels) within very high-resolution UAV images (e.g., 4000×3000 pixels), the SAHI technique is employed during inference. SAHI works by slicing the large input image into smaller, overlapping patches, running detection on each patch, and then aggregating the results. This process prevents the loss of fine details that occurs when high-resolution images are simply resized to the model’s standard input dimensions, thereby dramatically improving the recall rate for small defects on the solar panel surface.
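The slicing step can be illustrated with a minimal, library-free sketch (function names and layout here are illustrative, not the actual SAHI implementation): compute overlapping patch windows over a 4000×3000 image with a 640×640 slice and 0.1 overlap, run the detector on each patch, and shift each patch-local box back into full-image coordinates before merging (SAHI additionally applies cross-patch non-maximum suppression, omitted here):

```python
def slice_windows(img_w, img_h, slice_size=640, overlap=0.1):
    """Return (x0, y0) top-left corners of overlapping slice windows
    tiling an img_w x img_h image (illustrative sketch of SAHI-style slicing)."""
    step = int(slice_size * (1 - overlap))  # 576-px stride for 10% overlap

    def starts(length):
        s = list(range(0, max(length - slice_size, 0) + 1, step))
        if s[-1] != length - slice_size:  # ensure the last window touches the edge
            s.append(length - slice_size)
        return s

    return [(x, y) for y in starts(img_h) for x in starts(img_w)]


def to_full_image(box, window):
    """Shift a patch-local box (x1, y1, x2, y2) into full-image coordinates."""
    x0, y0 = window
    x1, y1, x2, y2 = box
    return (x1 + x0, y1 + y0, x2 + x0, y2 + y0)


windows = slice_windows(4000, 3000)
print(len(windows))  # 7 columns x 6 rows = 42 patches per image
```

Each 4000×3000 frame thus yields 42 overlapping 640×640 patches, each of which the detector sees at full native resolution instead of a 6× downscaled version.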

The location module translates pixel-level detection results into real-world geographic coordinates. The algorithm designates a reference point, typically the first solar panel in the inspected array, and target points, which are the solar panels containing defects. Using the Exchangeable Image File Format (EXIF) data from the UAV images—such as focal length, flight altitude, and the geographic coordinates of the image center—the system calculates key geospatial parameters. The Ground Sample Distance (GSD), representing the ground distance covered by one pixel, is calculated first:

$$L_{GSDx} = \frac{W_s \cdot h}{f \cdot W_p}$$

$$L_{GSDy} = \frac{H_s \cdot h}{f \cdot H_p}$$

where \(W_s\) and \(H_s\) are the sensor width and height, \(h\) is the flight altitude, \(f\) is the focal length, and \(W_p\) and \(H_p\) are the image width and height in pixels. \(L_{GSDx}\) and \(L_{GSDy}\) are the GSD values (in metres per pixel) in the longitudinal and latitudinal directions, respectively.
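Plugging in the visible-camera parameters from the experimental setup (6.73 mm × 4.49 mm sensor, 4000 × 3000 pixels) gives a quick numerical sketch. The 30 m altitude and 4.5 mm focal length below are assumed values for illustration only, since the actual flight parameters vary per mission:

```python
def ground_sample_distance(sensor_mm, altitude_m, focal_mm, pixels):
    """GSD in metres per pixel: (sensor size * altitude) / (focal length * pixel count)."""
    return (sensor_mm * 1e-3 * altitude_m) / (focal_mm * 1e-3 * pixels)


# Assumed mission parameters (illustrative): 30 m altitude, 4.5 mm focal length.
gsd_x = ground_sample_distance(6.73, 30.0, 4.5, 4000)  # metres/pixel along image width
gsd_y = ground_sample_distance(4.49, 30.0, 4.5, 3000)  # metres/pixel along image height
print(f"{gsd_x * 100:.2f} cm/px, {gsd_y * 100:.2f} cm/px")
```

Under these assumed parameters the GSD is roughly 1 cm per pixel, so an 80×40-pixel defect corresponds to a ground footprint on the order of 0.8 m × 0.4 m.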

The pixel offset of a target point (the defective solar panel center, \((x_t, y_t)\)) from the image center \((x_c, y_c)\) is converted to a ground offset \((\Delta x_m, \Delta y_m)\):

$$\Delta x_m = (x_t - x_c) \cdot L_{GSDx}$$

$$\Delta y_m = (y_t - y_c) \cdot L_{GSDy}$$

This ground offset is then converted to a latitude and longitude offset \((\Delta \eta_t, \Delta \lambda_t)\) relative to the image center’s known coordinates \((\eta_c, \lambda_c)\). Finally, the absolute geographic coordinates \((\eta_t, \lambda_t)\) of the target solar panel are obtained:

$$\eta_t = \eta_c + \Delta \eta_t$$

$$\lambda_t = \lambda_c + \Delta \lambda_t$$
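This offset-to-coordinate step can be sketched as follows (a small-offset spherical approximation, assuming \(\Delta y_m\) points north and \(\Delta x_m\) east after any image-orientation correction; variable names are illustrative, not the paper's actual code): a north offset of \(\Delta y_m\) metres corresponds to \(\Delta \eta_t = \Delta y_m / R \cdot 180/\pi\) degrees of latitude, and an east offset to \(\Delta \lambda_t = \Delta x_m / (R \cos \eta_c) \cdot 180/\pi\) degrees of longitude:

```python
import math

R_EARTH = 6_371_000.0  # mean Earth radius in metres


def offset_to_latlon(lat_c, lon_c, dx_m, dy_m):
    """Convert a ground offset (dx_m east, dy_m north, in metres) from the known
    image-centre coordinate (lat_c, lon_c) into absolute latitude/longitude in
    degrees. Small-offset spherical approximation, for illustration."""
    d_lat = math.degrees(dy_m / R_EARTH)
    d_lon = math.degrees(dx_m / (R_EARTH * math.cos(math.radians(lat_c))))
    return lat_c + d_lat, lon_c + d_lon


# Example: image centre at 40.0 N, 116.0 E; target 5 m east, 3 m north of centre.
lat_t, lon_t = offset_to_latlon(40.0, 116.0, 5.0, 3.0)
```

At metre-scale offsets the approximation error is negligible compared with GPS and attitude error of the UAV itself.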

To provide maintenance crews with intuitive guidance, the system calculates the relative distance from the defective solar panel (target) to the array’s reference solar panel. The North-South (\(D_{NS}\)) and East-West (\(D_{EW}\)) distances are computed using the differences in their coordinates:

$$D_{NS} = R \cdot \Delta \eta \cdot \frac{\pi}{180}$$

$$D_{EW} = R \cdot \Delta \lambda \cdot \frac{\pi}{180} \cdot \cos(\eta_0)$$

where \(R\) is the Earth’s radius (approximately 6,371,000 m), \(\Delta \eta\) and \(\Delta \lambda\) are the differences in latitude and longitude between the target and reference points, and \(\eta_0\) is the latitude of the reference point.
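The two distance formulas translate directly into code; this is a minimal sketch of the equations as written (spherical approximation, reference latitude \(\eta_0\) in degrees):

```python
import math

R_EARTH = 6_371_000.0  # Earth radius in metres


def relative_distance(lat_ref, lon_ref, lat_t, lon_t):
    """North-South and East-West distance (metres) from reference to target,
    following D_NS = R * dlat * pi/180 and D_EW = R * dlon * pi/180 * cos(lat_ref)."""
    d_ns = R_EARTH * math.radians(lat_t - lat_ref)
    d_ew = R_EARTH * math.radians(lon_t - lon_ref) * math.cos(math.radians(lat_ref))
    return d_ns, d_ew


# Sanity check: one degree of latitude is ~111.19 km on this sphere.
d_ns, d_ew = relative_distance(40.0, 116.0, 41.0, 116.0)
```

Positive \(D_{NS}\) means the target lies north of the reference panel, and positive \(D_{EW}\) means east of it, which is the form maintenance crews receive.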

The performance of the proposed system was rigorously evaluated. A dataset of high-resolution (4000×3000) visible and thermal images of solar panel arrays was collected via UAV. For effective model training, these images were sliced into 800×600 patches and annotated with defect classes (bird droppings, shading, hot spots) and solar panel bounding boxes. The training parameters and system configuration are summarized below.

| Category | Parameter | Value/Specification |
|---|---|---|
| Platform | Operating System | Ubuntu 24.04 LTS |
| Platform | GPU | NVIDIA RTX 4090 (24 GB) ×2 |
| Platform | CPU | Intel Core i9-14900K, 64 GB RAM |
| Training | Epochs | 300 |
| Training | Batch Size | 16 |
| Training | Image Size | 640 |
| Training | Framework | PyTorch 2.5.1, CUDA 11.7 |
| SAHI Inference | Slice Size | 640×640 |
| SAHI Inference | Overlap Ratio | 0.1 |
| SAHI Inference | Confidence Threshold | 0.4 |
| Location Params | Visible Sensor Size (Ws × Hs) | 6.73 mm × 4.49 mm |
| Location Params | Thermal Sensor Size (Ws × Hs) | 9.98 mm × 7.98 mm |
| Location Params | Visible Image Resolution (Wp × Hp) | 4000 × 3000 pixels |
| Location Params | Thermal Image Resolution (Wp × Hp) | 640 × 512 pixels |

A comparative study of object detection models was conducted to validate the choice of YOLOv11n. The models were evaluated based on mean Average Precision (mAP) at different Intersection over Union (IoU) thresholds, model size, and computational complexity.

| Model | mAP@50 (%) | mAP@50:95 (%) | Weight (MB) | GFLOPs |
|---|---|---|---|---|
| Fast R-CNN | 65.5 | 47.5 | 97.2 | 139.0 |
| SSD | 65.6 | 47.8 | 60.8 | 61.3 |
| YOLOv8n | 65.6 | 48.7 | 6.2 | 8.2 |
| YOLOv10n | 64.9 | 48.4 | 5.8 | 8.4 |
| YOLOv11n | 67.1 | 49.5 | 5.5 | 6.4 |

The results demonstrate that YOLOv11n achieves the highest mAP scores while having the smallest model size and lowest computational cost (GFLOPs) of all models compared, making it highly suitable for integration into a practical system. The critical impact of SAHI on detecting small defects in high-resolution solar panel images is shown in the following performance comparison.

| Detection Model | Accuracy (%) | False Positive Rate (%) | False Negative Rate (%) |
|---|---|---|---|
| YOLOv11n (Baseline) | 3.76 | 88.73 | 7.51 |
| YOLOv11n + SAHI | 84.22 | 11.61 | 4.17 |

The integration of SAHI resulted in a remarkable improvement of over 80 percentage points in detection accuracy for the high-resolution solar panel imagery, effectively mitigating the small-target detection problem. The defect location algorithm was tested on multiple solar panel arrays. The calculated distances (Arithmetic Measurement, AM) between target and reference solar panels were compared against manual measurements (MM). The absolute error (\(\varepsilon\)) was found to be within an acceptable range for field maintenance, typically corresponding to 1-3 solar panels. A large-scale statistical evaluation of the location algorithm’s error distribution was performed on 1280 target solar panels across 10 different zones.

| Camera Type | Error 0-3 m (%) | Error 3-4 m (%) | Error >4 m (%) | Accuracy (Error ≤4 m) |
|---|---|---|---|---|
| Visible Light | 82.48 | 14.25 | 3.27 | 96.73% |
| Thermal Imaging | 87.76 | 9.88 | 2.36 | 97.64% |
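As a quick consistency check, the accuracy column is simply the sum of the 0-3 m and 3-4 m error buckets, which can be verified directly from the reported figures:

```python
# Error-bucket percentages from the evaluation table: (0-3 m, 3-4 m, >4 m).
buckets = {
    "visible": (82.48, 14.25, 3.27),
    "thermal": (87.76, 9.88, 2.36),
}
for camera, (e03, e34, e4plus) in buckets.items():
    accuracy = round(e03 + e34, 2)  # share of targets with error <= 4 m
    print(camera, accuracy)         # visible 96.73, thermal 97.64
```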

The statistical results confirm the high practical utility of the location algorithm, with over 96% of calculated positions falling within a 4-meter error margin—a precision sufficient to guide maintenance personnel directly to the faulty solar panel or its immediate neighbor. To operationalize this technology, a full-stack web-based management system was developed. The front-end interface, built with HTML, CSS, and JavaScript, provides modules for “Detection & Location,” “Data Review,” and “Defect Analysis.” The back-end, powered by a MySQL database, stores all detection records, location coordinates, and related metadata. The system interface allows users to upload UAV imagery, select detection models, run the inspection pipeline, and view results. The output includes annotated images, defect types and counts, the geographic coordinates of defective solar panels, and, crucially, their relative distance (North-South and East-West) from a defined array reference point. This intuitive presentation transforms raw algorithm outputs into actionable maintenance tickets.

In conclusion, this research presents a comprehensive and intelligent system for the automated inspection of centralized PV plants. By integrating the advanced YOLOv11n deep learning model with SAHI for high-accuracy defect detection on solar panels and a novel multi-source data fusion algorithm for precise geolocation, the system effectively addresses the dual challenges of “finding” and “locating” faults. The experimental results demonstrate superior detection accuracy for small defects and reliable positioning performance, enabling maintenance crews to pinpoint issues within large solar panel arrays rapidly. The implemented web-based system offers a user-friendly platform that bridges the gap between advanced algorithms and practical field operations, contributing significantly to the shift towards smarter, more efficient, and data-driven O&M practices for solar power generation assets.
