Weed detection using convolutional neural network

M. S. Hema*, V. Abhilash, V. Tharun and D. Madukar Reddy

*Correspondence:
M. S. Hema,
hemait@anurag.edu.in

Received: 20 April 2022; Accepted: 14 May 2022; Published: 19 May 2022.

Precision agriculture relies heavily on information technology, which also aids agronomists in their work. Weeds usually grow alongside crops and reduce crop yield. They are controlled by herbicides. If the type of weed is not identified, the herbicide may harm the crop as well. To control weeds on farms, it is necessary to identify and classify them. A convolutional neural network (CNN), a deep learning-based computer vision technique, is used to evaluate images. A methodology is proposed to detect weeds using convolutional neural networks. The proposed methodology has two primary phases. The first phase is image collection and labeling, in which features are extracted from the base images for labeling. In the second phase, a convolutional neural network model with 20 layers is constructed to detect weeds. The CNN architecture has three types of layers, namely, the convolutional layer, the pooling layer, and the dense layer. The input image is given to a convolutional layer to extract features from the image. The features are given to the pooling layer, which compresses the image to reduce computational complexity. The dense layer is used for final classification. The performance of the proposed methodology is assessed using agricultural dataset images taken from the Kaggle database.

Keywords: CNN, weed, precision agriculture, segmentation

Introduction

Farmers’ main issue is detecting weeds in the crop during the irrigation process. Manual detection of crops and weeds takes a long time, and more human effort is required to complete this procedure. Weed identification in plants has become more challenging in recent years, and so far, not many efforts have been put into identifying weeds while growing crops. Traditional methods for identifying agricultural weeds focused on directly identifying the weed; however, there are substantial differences among weed species. In contrast to this approach, this study proposes a novel technique that combines deep learning with imaging technology. The dataset was first used to train the CNN model. Once training is completed, we can classify and predict whether a provided input image is a weed or a crop. The major goal of this project is to use the CNN deep learning algorithm to detect and classify weeds and crops.

To address this challenge, we developed our project to detect weeds using the CNN method. Our findings will aid in crop and weed classification, saving human time and effort. Manual labor may occasionally miss the proper classification, which is where our project comes in handy in finding the correct classification and prediction.

Literature survey

An object-oriented algorithm is proposed to detect weeds in the agricultural field (1). A comprehensive and critical survey on image-based plant segmentation techniques is presented; in this context, “segmentation” refers to the process of classifying an image into plant and non-plant pixels (2). Non-chemical weed control is important both for the organic production of vegetables and for achieving ecologically sustainable weed management. Estimates have shown that the yield of vegetables may be decreased by 45–95% in the case of weed-vegetable competition, and non-chemical weed control in vegetables is desired for several reasons (3). A mixer is used to spray pesticides to alleviate weeds from crops (4). Robot technology is used to find weeds, and pesticides are used to alleviate them (5). A comprehensive review of research dedicated to applications of machine learning in agricultural production systems is presented. The analyzed works were categorized as follows: (a) crop management, including applications on yield prediction, disease detection, weed detection, crop quality, and species recognition; (b) livestock management, including applications on animal welfare and livestock production; (c) water management; and (d) soil management (6). ML and DL techniques have been used for weed detection and recognition and thus for weed management. In 2018, Kamilaris and Prenafeta-Boldú (7) published a survey of 40 research papers that applied DL techniques to address various agricultural problems, including weed detection; the study reported that DL techniques outperformed traditional image processing methods (7). Ten components that are essential, and possible obstructions, to developing a fully autonomous mechanical weed management system are discussed (8). The authors focused on different machine vision and image processing techniques used for ground-based weed detection (9). Fernandez-Quintanilla et al. (10) reviewed technologies that can be used to monitor weeds in crops. They explored different remotely sensed and ground-based weed monitoring systems in agricultural fields, reported that weed monitoring is essential for weed management, and foresaw that the data collected using different sensors could be stored in cloud systems for timely use in relevant contexts (10). The VGG-16 model was used for classifying crop plants and weeds; the model was trained with one dataset containing sunflower crops and evaluated with two different datasets with carrot and sugar beet crops (11). The author describes in “Weed detection using image processing” how weed-affected areas can be detected and separated from the crop plants using image processing (12). A methodology is proposed to detect weeds using image processing techniques; the properties are extracted from the image, and the weed is detected from the extracted features (13). Machine vision uses unique image processing techniques; weeds in agricultural fields have been detected by properties such as size, shape, spectral reflectance, and texture features (14). Two methods are proposed for weed detection: crop row detection in images from agricultural fields with high weed density, and further differentiation between weed and crop (15). The author proposed in “Crop and weed detection based on texture and size features and automatic spraying of herbicides” an image processing algorithm for yield finding and management of weeds (16). A computer vision application to detect unwanted weeds in early-stage crops is proposed (17). A novel approach for weed classification using the curvelet transform and Tamura texture feature (CTTTF) with RVM classification methodology is proposed (18). Robots are working collaboratively with humans and learning from them how to realize basic agricultural tasks such as weed detection, watering, or seeding (19).
Weed detection using deep learning has been proposed (20). The implementation of image processing in drones, rather than robots, is proposed so that they not only detect weeds but also monitor the growth of crops. By combining image processing and CNN in drones, different accuracies are obtained depending on the processing, ranging from 98.8% with CNN to 85% using Histograms of Oriented Gradients (HOG) (21). The authors combined the Hough transform with simple linear iterative clustering (SLIC); this method focuses on the detection of crop lines (23). A threshold based on the classification values of the area for a crop or a weed is proposed (22).

Proposed methodology

The main objective of the proposed methodology is to detect weeds. The convolutional neural network is proposed for weed detection. The architecture of the proposed methodology is shown in Figure 1.


Figure 1. Weed detection architecture.

The convolution layer is used to extract features from the image. The rectified linear unit (ReLU) activation function is used in the convolutional layer. ReLU introduces non-linearity, compensating for the linearity imposed on the image by the convolution operation. It also helps to avoid exponential growth in the computation required to run the neural network: as the size of the CNN rises, the computational cost of adding more ReLUs grows only linearly. A key benefit of the ReLU function over other activation functions is that it does not activate all of the neurons at the same time; neurons whose inputs are negative simply output zero.
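As a minimal sketch of this feature-extraction step, the following NumPy code applies a valid 2D convolution followed by ReLU. The toy image and the edge-detecting kernel are hypothetical illustrations, not values from the proposed model:

```python
import numpy as np

def conv2d_relu(image, kernel):
    """Valid 2D convolution followed by ReLU, as in a CNN feature-extraction layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU: negative responses become 0, so not all units activate

# A toy 4x4 "image" with a vertical edge, and a vertical-edge kernel
img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9]], dtype=float)
k = np.array([[-1, 1],
              [-1, 1]], dtype=float)
fmap = conv2d_relu(img, k)  # responds strongly only where the edge lies
```

Only the feature-map column covering the edge produces a non-zero activation; flat regions of the image are zeroed by ReLU, illustrating the sparse activation described above.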

The dimensions of the feature maps are reduced by pooling layers, which decreases both the number of parameters to learn and the amount of computation in the network. The pooling layer summarizes the features found in each region of the feature map created by the convolution layer: it performs down-sampling on the feature maps from the previous layer, producing new feature maps with a reduced resolution and significantly decreasing the input’s spatial dimension. The output of the final pooling or convolutional layer is flattened and then fed into the fully connected layer. In neural network models that predict a multinomial probability distribution, the Softmax function is used as the activation function in the output layer; it is suited to multiclass classification problems requiring class membership over more than two labels. The biggest advantage of CNNs is that they are trained to recognize and extract the features from images that are most relevant to the problem at hand, while the fully connected layers at the end make them effective classifiers.
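The pooling, flattening, and Softmax steps above can be sketched in NumPy as follows. The feature-map values and the dense-layer weight matrix `W` are hypothetical stand-ins for learned parameters; the two output classes stand for crop vs. weed:

```python
import numpy as np

def max_pool2d(fmap, size=2):
    """2x2 max pooling: keeps the strongest response per region, halving each spatial dimension."""
    h, w = fmap.shape
    return fmap.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(logits):
    """Softmax turns dense-layer outputs into a probability distribution over classes."""
    e = np.exp(logits - logits.max())  # subtract the max for numerical stability
    return e / e.sum()

fmap = np.array([[1., 3., 0., 2.],
                 [4., 2., 1., 0.],
                 [0., 1., 5., 2.],
                 [2., 0., 1., 3.]])
pooled = max_pool2d(fmap)   # 4x4 -> 2x2: a 4x reduction in values to process
flat = pooled.flatten()     # flatten before the fully connected (dense) layer
# Hypothetical dense-layer weights mapping 4 features to 2 classes (crop, weed)
W = np.array([[0.5, -0.2],
              [0.1,  0.3],
              [-0.4, 0.6],
              [0.2,  0.1]])
probs = softmax(flat @ W)   # class probabilities summing to 1
```

The 2×2 pooling keeps only the maximum of each region, and the Softmax output sums to one, giving a class membership probability for each label.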

Results and discussion

The agriculture images are taken from the Kaggle database. The dataset consists of 1,000 images of weeds and crops, of which 80% are used for training and 20% for validation.
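An 80/20 split of this kind can be produced with a shuffled partition, as in the sketch below. The file names are hypothetical placeholders for the Kaggle images, and the fixed seed is only for reproducibility:

```python
import random

# Hypothetical file names standing in for the 1,000 crop/weed images
images = [f"img_{i:04d}.jpg" for i in range(1000)]
random.seed(42)           # fixed seed so the split is reproducible
random.shuffle(images)    # shuffle so both splits mix crops and weeds

split = int(0.8 * len(images))
train_set = images[:split]   # 800 images (80%) for training
val_set = images[split:]     # 200 images (20%) for validation
```

Shuffling before slicing avoids any ordering bias in the source listing, and the two slices are disjoint by construction.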

Figure 2 shows the performance of the proposed approach. The training set accuracy improves as the number of epochs increases. The validation set accuracy starts relatively high at low epoch counts and also increases as the number of epochs increases.


Figure 2. Accuracy of training and validation dataset.

Conclusion

Using the CNN model of deep learning, a system that can classify weeds and crops was implemented. Features are extracted from the input images using a convolutional layer, a pooling layer is used to downsize the feature maps, and finally, the dense layer is used for classification. In the future, the system can be extended to detect weeds among larger crops or plants and improved to work with more types of crops and weeds, providing accurate classification and reducing human effort.

References

1. Berge TW, Aastveit AH, Fykse H. Evaluation of an algorithm for automatic detection of broad-leaved weeds in spring cereals. Precis Agric. (2008) 9:391–405. doi: 10.1007/s11119-008-9083-z

2. Hamuda E, Glavin M, Jones E. A survey of image processing techniques for plant extraction and segmentation in the field. Comput Electron Agric. (2016) 125:184–99. doi: 10.1016/j.compag.2016.04.024

3. Mennan H, Jabran K, Zandstra BH, Pala F. Non-chemical weed management in vegetables by using cover crops: a review. Agronomy. (2020) 10:257. doi: 10.3390/agronomy10020257

4. Dai X, Xu Y, Zheng J, Song H. Analysis of the variability of pesticide concentration downstream of inline mixers for direct nozzle injection systems. Biosyst Eng. (2019) 180:59–69. doi: 10.1016/j.biosystemseng.2019.01.012

5. Slaughter DC, Giles DK, Downey D. Autonomous robotic weed control systems: a review. Comput Electron Agricult. (2008) 61:63–78. doi: 10.1016/j.compag.2007.05.008

6. Liakos K, Busato P, Moshou D, Pearson S, Bochtis D. Machine learning in agriculture: a review. Sensors (2018) 18:2674. doi: 10.3390/s18082674

7. Kamilaris A, Prenafeta-Boldu FX. Deep learning in agriculture: a survey. Comput Electron Agric. (2018) 147:70–90. doi: 10.1016/j.compag.2018.02.016

8. Merfield CN. Robotic weeding’s false dawn? Ten requirements for fully autonomous mechanical weed management. Weed Res. (2016) 56:340–4. doi: 10.1111/wre.12217

9. Wang A, Zhang W, Wei X. A review on weed detection using ground-based machine vision and image processing techniques. Comput Electron Agric. (2019) 158:226–40. doi: 10.1016/j.compag.2019.02.005

10. Fernandez-Quintanilla C, Penna J, Andujar D, Dorado J, Ribeiro A, Lopez-Granados F. Is the current state of the art of weed monitoring suitable for site-specific weed management in arable crops? Weed Res. (2018) 58:259–72. doi: 10.1111/wre.12307

11. Fawakherji M, Youssef A, Bloisi D, Pretto A, Nardi D. Crop and weeds classification for precision agriculture using context-independent pixel-wise segmentation. Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC). New York, NY (2019). doi: 10.1109/IRC.2019.00029

12. Paikekari A, Ghule V, Meshram R, Raskar VB. Weed detection using image processing. Int Res J Eng Technol. (2016) 12.

13. Desai R, Desai K, Desai S, Solanki Z, Pate D. Removal of weeds using image processing. Int J Adv Comput Technol. (2016) 4.

14. Shinde A, Shukla M. Crop detection by machine vision for weed management. Int J Adv Eng Technol. (2014) 7:818–26.

15. Sayeed A. Detection of weeds in a crop row using image processing. London: Imperial College London (2016).

16. Aware AA. Crop and weed detection based on texture and size features and automatic spraying of herbicides. Int J Adv Res. (2016) 6:1–7.

17. Nathlia B, Panqueba S, Medina C. A computer vision application to detect unwanted weed in early stage crops. WSEAS. (2016) 4:41–5.

18. Prema P. A novel approach for weed classification using curvelet transform and Tamura texture feature (CTTTF) with RVM classification. Int J Appl Eng Res. (2016) 11:1841–8.

19. Marinoudi V, Sorensen C, Pearson S, Bochtis D. Robotics and labour in agriculture: a context consideration. Biosyst Eng. (2019) 184:111–21. doi: 10.1016/j.biosystemseng.2019.06.013

20. Dankhara F, Patel K, Doshi N. Analysis of robust weed detection techniques based on the Internet of Things (IoT). Coimbra: Elsevier (2019). doi: 10.1016/j.procs.2019.11.025

21. Daman M, Aravind R, Kariyappa B. Design and development of automatic weed detection and smart herbicide sprayer robot. Proceedings of the IEEE recent advances in intelligent computational systems (RAICS). Manhattan, NY (2015). doi: 10.1109/RAICS.2015.7488424

22. Liang W-C, Yang Y-J, Chao C-M. Low-cost weed identification system using drones. CANDARW. (2019) 260–263. doi: 10.1109/CANDARW.2019.00052

23. Bah MD, Hafiane A, Canals R. Weeds detection in UAV imagery using SLIC and the hough transform. Proceedings of the 2017 seventh international conference on image processing theory, tools and applications (IPTA). Kolkata (2017). doi: 10.1109/IPTA.2017.8310102
