The rapid depletion of fossil fuels and escalating environmental concerns have propelled the adoption of renewable energy sources, particularly solar energy. However, solar panels are prone to efficiency degradation due to surface occlusions such as dust, bird droppings, and shadows. This paper proposes innovative convolutional neural network (CNN)-based solutions to address three critical challenges in solar panel maintenance: dust source identification, dust accumulation level classification, and bird dropping/shadow detection.

1. Dust Source Identification via Improved ShuffleNetV2
1.1 Methodology
To identify dust origins on solar panels, we enhance the ShuffleNetV2 architecture with the following modifications:
- Mish Activation Function: Replaces ReLU to mitigate gradient vanishing and improve feature propagation: $\mathrm{Mish}(x) = x \cdot \tanh\big(\ln(1 + e^{x})\big)$.
- Coordinate Attention (CA) Mechanism: Captures spatial-channel dependencies by aggregating features along height and width dimensions.
- Hybrid Depthwise Convolution: Combines 3×3, 5×5, and 7×7 kernels to extract multi-scale features while minimizing computational overhead (see the sketch after this list).
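A minimal PyTorch sketch of the Mish activation and a hybrid depthwise block is given below. The even three-way channel split and the placement of batch normalization are illustrative assumptions, not the paper's exact ShuffleNetV2 unit.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x))."""
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

class HybridDepthwiseConv(nn.Module):
    """Splits channels into three groups and applies 3x3, 5x5, and 7x7
    depthwise convolutions in parallel, then concatenates the results.
    Assumption: channels are divisible by 3 for an even split."""
    def __init__(self, channels):
        super().__init__()
        assert channels % 3 == 0, "channels must be divisible by 3 in this sketch"
        c = channels // 3
        self.branches = nn.ModuleList([
            nn.Conv2d(c, c, k, padding=k // 2, groups=c, bias=False)
            for k in (3, 5, 7)
        ])
        self.bn = nn.BatchNorm2d(channels)
        self.act = Mish()

    def forward(self, x):
        chunks = torch.chunk(x, 3, dim=1)
        out = torch.cat([b(chunk) for b, chunk in zip(self.branches, chunks)], dim=1)
        return self.act(self.bn(out))

if __name__ == "__main__":
    block = HybridDepthwiseConv(48)
    print(block(torch.randn(1, 48, 56, 56)).shape)  # torch.Size([1, 48, 56, 56])
```

Because each branch is depthwise, its cost scales with K×K×C rather than K×K×C², so mixing the three kernel sizes adds multi-scale coverage without a large parameter increase.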
1.2 Experimental Results
Experiments on dust samples from four regions (Yijinhuoluoqi, Yulin, Xi’an, and Weinan) demonstrate the model’s superiority:
| Model | Accuracy (%) | Params (M) | FLOPs (G) |
|---|---|---|---|
| ShuffleNetV2 | 87.44 | 2.28 | 0.15 |
| MobileNetV2 | 91.14 | 3.51 | 0.32 |
| Improved ShuffleNetV2 | 92.25 | 2.03 | 0.12 |
Compared with the baseline ShuffleNetV2, the hybrid depthwise convolution and CA mechanism reduce parameters by 12.3% while improving accuracy by 4.81 percentage points.
2. Dust Accumulation Level Classification Using Enhanced DenseNet169
2.1 Methodology
For assessing dust severity, we redesign DenseNet169 with:
- Enhanced Style Pooling Module (ESM): Combines global average and standard deviation pooling to emphasize critical features.
- Asymmetric Convolution Group (ACGBlock): Replaces 3×3 convolutions with asymmetric kernels (3×1, 1×3) and grouped convolutions for lightweight feature extraction (see the sketch after this list).
- Transfer Learning: Pre-training on Mini-ImageNet improves generalization.
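The PyTorch sketch below illustrates the two modules under stated assumptions: this ESM fuses per-channel mean and standard-deviation statistics with a channel-wise 1D convolution (the paper's exact fusion layers may differ), and the ACGBlock uses an illustrative group count of 4.

```python
import torch
import torch.nn as nn

class EnhancedStylePooling(nn.Module):
    """Channel recalibration from per-channel mean and standard deviation
    (style pooling); the fusion layers here are an assumption for illustration."""
    def __init__(self, channels):
        super().__init__()
        # Fuse the two statistics per channel with a channel-wise 1D convolution.
        self.fuse = nn.Conv1d(channels, channels, kernel_size=2, groups=channels, bias=False)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, x):
        b, c, _, _ = x.shape
        mean = x.mean(dim=(2, 3))                # (B, C) global average pooling
        std = x.std(dim=(2, 3))                  # (B, C) standard deviation pooling
        stats = torch.stack([mean, std], dim=2)  # (B, C, 2)
        weights = torch.sigmoid(self.bn(self.fuse(stats)).squeeze(-1))  # (B, C)
        return x * weights.view(b, c, 1, 1)

class ACGBlock(nn.Module):
    """Asymmetric (3x1 then 1x3) grouped convolutions replacing a 3x3 convolution.
    Both channel counts must be divisible by the group count."""
    def __init__(self, in_ch, out_ch, groups=4):
        super().__init__()
        self.conv3x1 = nn.Conv2d(in_ch, out_ch, (3, 1), padding=(1, 0), groups=groups, bias=False)
        self.conv1x3 = nn.Conv2d(out_ch, out_ch, (1, 3), padding=(0, 1), groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv1x3(self.conv3x1(x))))

if __name__ == "__main__":
    x = torch.randn(2, 64, 28, 28)
    print(EnhancedStylePooling(64)(x).shape)  # torch.Size([2, 64, 28, 28])
    print(ACGBlock(64, 128)(x).shape)         # torch.Size([2, 128, 28, 28])
```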
2.2 Performance Evaluation
Testing on a dataset of 1,897 images (light, moderate, and heavy dust levels) yields:
| Model | Accuracy (%) | Params (M) | FLOPs (G) |
|---|---|---|---|
| DenseNet169 | 84.12 | 12.49 | 3.42 |
| ResNet50 | 80.29 | 25.56 | 4.12 |
| Improved DenseNet169 | 88.52 | 3.80 | 1.16 |
Relative to the baseline DenseNet169, the ACGBlock and ESM reduce parameters by 69.6% and FLOPs by 66.1% while achieving higher accuracy than both ResNet50 and MobileViT.
3. Bird Dropping and Shadow Detection via Optimized YOLOv8s
3.1 Methodology
To detect small occlusions, we modify YOLOv8s as follows:
- GhostConv and C2fGhost Modules: Replace standard convolutions to reduce computational costs.
- Small-Object Detection Head: Adds a 160×160 detection layer for improved sensitivity to tiny targets.
- Gather-Excite (GE) Attention: Enhances feature fusion in the neck network (see the sketch after this list).
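Below is a minimal PyTorch sketch of a GhostConv block and a parameter-free Gather-Excite gate (the global-extent variant). Kernel sizes, the SiLU activation, and the exact placement in the YOLOv8s neck are illustrative assumptions; the 160×160 head is simply a fourth detection layer attached to a higher-resolution (stride-4) feature map and is not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GhostConv(nn.Module):
    """Produces half of the output channels with a standard convolution and the
    other half with a cheap depthwise operation, then concatenates them.
    Assumption: out_ch is even."""
    def __init__(self, in_ch, out_ch, k=1, s=1):
        super().__init__()
        half = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(half), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(half, half, 5, 1, 2, groups=half, bias=False),
            nn.BatchNorm2d(half), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

class GatherExcite(nn.Module):
    """Parameter-free Gather-Excite: gather global context by average pooling,
    excite by broadcasting a sigmoid gate back over every spatial location."""
    def forward(self, x):
        context = F.adaptive_avg_pool2d(x, 1)   # gather: (B, C, 1, 1)
        return x * torch.sigmoid(context)       # excite: rescale the feature map

if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)
    print(GatherExcite()(GhostConv(64, 128)(x)).shape)  # torch.Size([1, 128, 80, 80])
```

Replacing a full convolution with a primary half plus a cheap depthwise half is where most of the parameter and FLOP savings in the table below come from.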
3.2 Detection Performance
Testing on 2,904 images shows significant improvements:
| Model | mAP@0.5 (%) | Params (M) | FLOPs (G) |
|---|---|---|---|
| YOLOv8s | 94.61 | 11.14 | 28.62 |
| YOLOv8s-Ghost | 93.81 | 8.28 | 20.82 |
| Improved YOLOv8s | 97.93 | 7.82 | 29.14 |
The small-object detection head boosts mAP@0.5 by 3.32 percentage points, while GE attention improves recall by 1.87 percentage points.
4. Key Contributions and Formulas
- Lightweight Architectures: All models prioritize efficiency without sacrificing accuracy. For example, ShuffleNetV2's optimization targets the standard convolution parameter count $\mathrm{Params}_{\mathrm{conv}} = K_h \times K_w \times C_{\mathrm{in}} \times C_{\mathrm{out}} + C_{\mathrm{out}}$, where $K_h, K_w$ are the kernel dimensions and $C_{\mathrm{in}}, C_{\mathrm{out}}$ are the input/output channel counts (a short worked example follows this list).
- Attention Mechanisms: The CA and GE modules refine feature maps by weighting critical regions. For CA, the pooled features along height $h$ and width $w$ are $z_c^h(h) = \frac{1}{W}\sum_{0 \le i < W} x_c(h, i)$ and $z_c^w(w) = \frac{1}{H}\sum_{0 \le j < H} x_c(j, w)$.
- Multi-Scale Feature Extraction: Hybrid depthwise convolutions and asymmetric kernels capture diverse dust particle sizes and occlusion patterns.
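A short numeric check of the parameter formula and the coordinate-attention pooling; the channel counts and feature-map size are illustrative and not taken from the paper.

```python
import torch

def conv_params(kh, kw, c_in, c_out, bias=True):
    """Kh*Kw*Cin*Cout weights plus Cout bias terms for a standard convolution."""
    return kh * kw * c_in * c_out + (c_out if bias else 0)

# Standard 3x3 convolution, 24 -> 48 channels:
print(conv_params(3, 3, 24, 48))   # 10416
# Depthwise 3x3 over 48 channels (each filter sees a single input channel):
print(conv_params(3, 3, 1, 48))    # 480

# Coordinate-attention pooling: average separately along width and height.
x = torch.randn(1, 48, 32, 32)     # (B, C, H, W)
z_h = x.mean(dim=3)                # z_c^h: pool over width  -> (1, 48, 32)
z_w = x.mean(dim=2)                # z_c^w: pool over height -> (1, 48, 32)
```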
5. Conclusion
This work advances solar panel maintenance by addressing occlusion challenges through tailored CNN architectures. The proposed models achieve state-of-the-art performance in accuracy and efficiency, enabling real-time monitoring and reducing energy losses. Future work will integrate these models into drone-based inspection systems for large-scale solar farms.
