Enhancing Solar Panel Defect Detection Through Improved SSD and ResNet Architectures

With the widespread application of deep learning in image detection, solar panel defect identification has increasingly adopted convolutional neural networks (CNNs). This study introduces an optimized framework combining Single Shot MultiBox Detector (SSD) and Residual Networks (ResNet) to address challenges in localization accuracy and classification performance for photovoltaic module defects.

1. Methodology

The proposed dual-stage architecture first locates solar panels using enhanced SSD, then classifies defects through modified ResNet. Key innovations include:

1.1 SSD Network Optimization

We integrate the Convolutional Block Attention Module (CBAM) into the VGG16 backbone to strengthen multi-scale feature extraction. The default box dimensions are redesigned according to solar panel geometry:

$$S_k = S_{min} + \frac{S_{max} - S_{min}}{m - 1}(k - 1), \quad k \in [1, m]$$

$$w^a_k = S_k\sqrt{a_r}$$

$$h^a_k = \frac{S_k}{\sqrt{a_r}}$$

where \(a_r \in \{1,2,3,\frac{1}{2},\frac{1}{3}\}\) represents aspect ratios optimized for solar panel detection.
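The scale and box-dimension formulas above can be sketched directly. The snippet below assumes the standard SSD scale bounds \(S_{min}=0.2\) and \(S_{max}=0.9\) (the paper does not state its values), and enumerates the five aspect ratios listed above:

```python
import math

def default_box_dims(m, s_min=0.2, s_max=0.9,
                     aspect_ratios=(1, 2, 3, 1/2, 1/3)):
    """Per feature map k = 1..m, compute (width, height) of each default box:
    S_k = S_min + (S_max - S_min)/(m - 1) * (k - 1)
    w   = S_k * sqrt(a_r),  h = S_k / sqrt(a_r)
    """
    boxes = []
    for k in range(1, m + 1):
        s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1)
        boxes.append([(s_k * math.sqrt(a), s_k / math.sqrt(a))
                      for a in aspect_ratios])
    return boxes

dims = default_box_dims(m=6)
```

With six feature maps, the square box (\(a_r = 1\)) grows linearly from scale 0.2 on the first map to 0.9 on the last, while the other ratios stretch or flatten each box at constant area.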

1.2 ResNet Enhancement

Squeeze-and-Excitation (SE) blocks are embedded in residual units to amplify critical channel features:

$$X^* = X \cdot \sigma(W_2\delta(W_1z))$$

where \(z\) denotes global pooled features, \(W_1\) and \(W_2\) are FC layer weights, and \(\sigma\) represents sigmoid activation.
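The squeeze-excitation-scale pipeline can be illustrated in a few lines of numpy. The shapes and the reduction ratio of 2 below are illustrative, not taken from the paper:

```python
import numpy as np

def se_block(X, W1, W2):
    """Squeeze-and-Excitation on a feature map X of shape (C, H, W):
    z  = global average pool over spatial dims        (squeeze)
    s  = sigmoid(W2 @ relu(W1 @ z))                   (excitation)
    X* = X * s, broadcast per channel                 (scale)
    """
    z = X.mean(axis=(1, 2))                 # (C,), the pooled features z
    h = np.maximum(W1 @ z, 0.0)             # ReLU = delta in the equation
    s = 1.0 / (1.0 + np.exp(-(W2 @ h)))     # sigmoid = sigma, per-channel gates
    return X * s[:, None, None]

# Toy example: 4 channels, reduction ratio 2, random weights.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8, 8))
W1 = rng.standard_normal((2, 4))            # (C/r, C)
W2 = rng.standard_normal((4, 2))            # (C, C/r)
Y = se_block(X, W1, W2)
```

Because the gates \(s\) lie in \((0, 1)\), the block can only attenuate channels relative to one another; channels the excitation deems uninformative are suppressed rather than removed.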

2. Experimental Validation

We evaluate performance on a proprietary dataset containing 8,000 solar panel images with three defect types: gridline interruption, microcracks, and shadow occlusion.

| Model        | Accuracy (%) | FPS |
|--------------|--------------|-----|
| Faster R-CNN | 90.37        | 7   |
| RetinaNet    | 85.99        | 30  |
| YOLOv3       | 87.59        | 42  |
| Improved SSD | 97.23        | 21  |

The enhanced SSD achieves 9.86 percentage points higher accuracy than the baseline SSD while maintaining real-time processing capability (21 FPS). For defect classification:

| Defect Type | ResNet-50 (%) | SE-ResNet (%) |
|-------------|---------------|---------------|
| Gridline    | 90.36         | 96.11         |
| Microcracks | 90.11         | 97.08         |
| Shadow      | 87.45         | 95.99         |
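Averaging the SE-ResNet column reproduces the 96.39% mean classification accuracy reported in Section 3:

```python
# Per-class SE-ResNet accuracies from the table above.
se_resnet_acc = {"gridline": 96.11, "microcracks": 97.08, "shadow": 95.99}
average = sum(se_resnet_acc.values()) / len(se_resnet_acc)
print(round(average, 2))  # 96.39
```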

3. Performance Metrics

Classification accuracy is calculated as:

$$\text{acc} = \frac{TP + TN}{TP + TN + FP + FN}$$
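The metric is a direct ratio of confusion-matrix counts. The counts in the example below are illustrative only, not results from the paper:

```python
def accuracy(tp, tn, fp, fn):
    """Overall classification accuracy: correct predictions over all predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical evaluation: 50 TP, 45 TN, 3 FP, 2 FN out of 100 samples.
acc = accuracy(tp=50, tn=45, fp=3, fn=2)
print(acc)  # 0.95
```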

Our framework achieves 97.23% detection accuracy and 96.39% average classification accuracy across defect types, significantly outperforming conventional approaches for solar panel quality inspection.

4. Computational Efficiency

The modified SSD reduces training convergence time by 40% compared to the original implementation, and the attention mechanisms add minimal computational overhead:

| Component    | FLOPs Increase |
|--------------|----------------|
| CBAM in SSD  | 0.8%           |
| SE in ResNet | 1.2%           |

This demonstrates the practical viability of our method for industrial solar panel inspection systems requiring both high precision and throughput.

5. Conclusion

The integration of spatial-channel attention mechanisms with multi-scale detection frameworks significantly improves solar panel defect identification accuracy. Future work will focus on optimizing the architecture for embedded deployment in photovoltaic monitoring systems.
