Title: The Digital Agricultural Revolution
Author: Group of authors
Publisher: John Wiley & Sons Limited
Genre: Software
ISBN: 9781119823445
It is also important to know that climate change risks are neither constant nor distributed equally in space or time. This requires regular crop monitoring and careful management of resources to obtain maximum yields. Crop monitoring at the regional level includes crop type mapping, cropping pattern recognition, crop condition estimation, crop yield estimation, estimation of evapotranspiration, irrigation scheduling, monitoring of water resources, uncertainty analysis, identification of pest attacks, soil mapping, and so on [2]. Although agriculture is a complex, interlinked phenomenon, clear success has been achieved through technological interventions in decision-making processes and in shaping adaptation strategies for changing scenarios. Technological innovations such as mechanization, artificial intelligence and robotics, UAVs, sensors, the Internet of Things (IoT), remote sensing, machine learning, deep learning, and their combinations have the potential to transform the crop food chain. The integration of local agricultural knowledge with remote sensing depends on an understanding of the complex phenomena in agriculture [3].
Crop yield estimation at the regional level plays a crucial role in planning for the food security of the population. It is an important task for a wide range of applications, including land and water management, crop planning, water use efficiency, assessment of crop losses, economic calculations, and so on. Traditional ground observation-based methods of yield estimation, such as visual examination and sampling surveys, require continuous monitoring and regular recording of crop parameters [4–6]. Spectral information from remote sensing images provides accurate crop attributes that can be used for crop mapping and estimation. Further integration of machine learning algorithms with remote sensing enables explicit estimation of yield [7]. The present study focused on the ability of machine learning algorithms, in integration with remote sensing, to predict the yield of paddy and sugarcane crops at the regional level.
2.2 Introduction to Artificial Neural Networks
2.2.1 Overview of Artificial Neural Networks
An artificial neural network (ANN) is a wide class of flexible yet simple mathematical models capable of identifying complex nonlinear relationships between observed input and output datasets. A neural network consists of a large number of “neurons,” nonlinear computational elements that are internally connected in a complex way and arranged into layers [7]. An artificial neural network simulates the natural neural network in the brain. In the brain, the fundamental neural units are connected to each other by synapses. Neurons, the basic components of the human brain, are its processing units and are responsible for learning and the retention of information. The sensory/observed data are the input to the network, which processes them and passes output on to other neurons. In an ANN, everything is designed to replicate this process. An ANN also consists of a bundle of neurons. Analogous to biological axon-dendrite connections, each node is connected to other nodes via links. Each input variable “X” enters the system with a weight “w” to generate a weighted value. Each link weight determines the strength of one node’s influence on another node; this corresponds to the strength of a signal in the brain. An activation function uses a basic mathematical equation to determine the input-output relation. Familiar activation functions in neural networks are the logistic function, binary step function, rectified linear units, and the hyperbolic tangent function. ANN models are efficient, particularly for solving problems involving complex processes that are difficult to describe using physical equations [8]. ANN models are capable of modeling complex nonlinear relationships, compared with a traditional linear regression modeling approach [9]. ANNs also have excellent fault tolerance and are fast and highly scalable with parallel processing. ANN models are similar to statistical models such as generalized linear regression models, polynomial models, nonparametric models and discriminant analysis, principal component analysis, and other models in which the prediction of complicated phenomena is more important than the explanation. On the other hand, NN models such as learning vector quantization, counter propagation, and self-organizing maps are useful for data analysis. Some published works that provide insight into the relation between statistics and NNs are discussed.
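To make the activation functions named above concrete, the short Python sketch below (not taken from the chapter; the sample inputs are illustrative) implements the logistic, binary step, rectified linear unit, and hyperbolic tangent functions with NumPy.

```python
# Minimal sketch of the four activation functions mentioned above.
# Function names and sample inputs are illustrative assumptions.
import numpy as np

def logistic(x):
    # Logistic (sigmoid): squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def binary_step(x):
    # Binary step: outputs 1 if the input is non-negative, else 0.
    return np.where(x >= 0, 1.0, 0.0)

def relu(x):
    # Rectified linear unit: passes positive inputs, zeroes out negatives.
    return np.maximum(0.0, x)

def tanh(x):
    # Hyperbolic tangent: squashes any real input into (-1, 1).
    return np.tanh(x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for f in (logistic, binary_step, relu, tanh):
    print(f.__name__, f(x))
```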
2.2.2 Components of Neural Networks
The human brain contains approximately 86 billion neurons on average [10]. A biological neuron consists of thin fibers known as dendrites, which receive incoming signals. The cell body, or “soma,” is responsible for processing the input signals and deciding whether the neuron fires an output signal. The processed output signal of a neuron is received by the “axon,” which passes it on to the relevant cells.
An artificial neuron, also called a “perceptron,” is the fundamental component of a neural network; it is a mathematical function of a real-world problem with binary outputs. Neurons are systematically organized into two or more layers. Each layer of neurons is connected to the immediately preceding layer and the immediately succeeding layer. The first (input) layer receives the external data, and the last (output) layer ultimately produces the result. Each artificial neuron receives input from the preceding layer, applies the weights, sums the results, and passes the sum through a nonlinear mathematical relation to produce an output. In between are one or more hidden layers (Figure 2.1). Weights are the multipliers of the respective input values, arranged in an array. To achieve a final prediction value, a bias is added to the weighted sum. The size of the correction values used to adjust for errors made by the model is known as the learning rate. The activation function decides whether or not a neuron fires [11]. The neural network uses the output values of the previous step for training and minimizes the error between observed and estimated values. This process readjusts the weights at each iteration. Training stops once the optimum is reached [12, 13]. A higher learning rate reduces the training time but lowers the ultimate accuracy; a lower learning rate takes longer but gives higher accuracy.
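The forward pass and weight readjustment described above can be sketched for a single artificial neuron as follows. This is a minimal illustration under assumed values (the input features, target, and learning rate are invented for the example), not the chapter’s implementation: the weighted sum of inputs plus a bias is passed through a logistic activation, and the weights are corrected once in proportion to the learning rate.

```python
# Minimal sketch of one artificial neuron (perceptron-style unit) with an
# assumed input, target, and learning rate; not the chapter's code.
import numpy as np

def sigmoid(z):
    # Logistic activation used for the output of the neuron.
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([0.6, 0.1, 0.9])   # example input features (illustrative)
w = rng.normal(size=3)          # link weights: strength of each input's influence
b = 0.0                         # bias added to the weighted sum
y_true = 1.0                    # observed target value (illustrative)
learning_rate = 0.1             # size of the correction applied per iteration

# Forward pass: weighted sum -> activation -> prediction.
z = np.dot(w, x) + b
y_pred = sigmoid(z)

# One weight readjustment: gradient of the squared error with respect to
# the weights and bias, scaled by the learning rate.
error = y_pred - y_true
grad_w = error * y_pred * (1.0 - y_pred) * x
grad_b = error * y_pred * (1.0 - y_pred)
w -= learning_rate * grad_w
b -= learning_rate * grad_b
print("prediction:", y_pred, "error:", error)
```

Raising `learning_rate` in this sketch makes each correction larger, which mirrors the trade-off noted above: faster training at the cost of accuracy, whereas a smaller value corrects more slowly but more precisely.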
Figure 2.1 Architecture of artificial neural network (original figure).
2.2.3 Types and Suitability of Neural Networks
Artificial neural networks are usually selected based on their mathematical functions and output parameters. Among the different types of artificial neural networks, some of the most important kinds are discussed in this section.
A feed-forward neural network (FFNN) is one of the simplest types of artificial neural network, in which the input data travel in only one direction, with no loop or cycle formation. In an FFNN, every neuron (perceptron) in one layer is connected to each node in the immediately following layer; as a result, every node is fully connected. This systematic arrangement allows the FFNN to generate output through its output layer. Any number of hidden layers may be arranged between the input and output layers; these have no connection with the outer environment, and such networks may or may not include a hidden layer at all. Common applications are pattern recognition, speech recognition, data compression, computer vision, and so on. If an FFNN uses more than one hidden layer, it is called a deep feed-forward network. Adding more hidden layers reduces overfitting and improves generalization. Based on the order of the synaptic operation in a hidden neuron, ANNs are classified as first-order, second-order, third-order, or higher-order networks [14]. Back loops are absent in the FFNN. To reduce the prediction error, the back propagation algorithm may be used: the weights between the input, hidden, and output layers are adjusted through back propagation using the learning rate and momentum. The error value is then back propagated from the output layer to the input layer, as illustrated in the sketch below.
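The one-directional flow of data through an FFNN and the weight adjustment by back propagation can be illustrated with the following minimal NumPy sketch. The architecture (one hidden layer of six neurons), the synthetic data, and the learning rate are assumptions for illustration only, not the network used in this study.

```python
# Minimal sketch of a feed-forward network with one hidden layer trained by
# back propagation; the data, layer sizes, and learning rate are assumptions.
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((20, 4))                            # 20 samples, 4 input features (synthetic)
y = (X.sum(axis=1, keepdims=True) > 2.0) * 1.0     # synthetic binary target

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases for the input->hidden and hidden->output connections.
W1, b1 = rng.normal(scale=0.5, size=(4, 6)), np.zeros((1, 6))
W2, b2 = rng.normal(scale=0.5, size=(6, 1)), np.zeros((1, 1))
learning_rate = 0.5

for epoch in range(2000):
    # Forward pass: data flow in one direction, input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # Back propagation: the error is propagated from the output layer back
    # toward the input layer, and the weights are readjusted accordingly.
    d_out = (y_hat - y) * y_hat * (1.0 - y_hat)
    d_hid = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= learning_rate * h.T @ d_out / len(X)
    b2 -= learning_rate * d_out.mean(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ d_hid / len(X)
    b1 -= learning_rate * d_hid.mean(axis=0, keepdims=True)

print("final mean squared error:", float(np.mean((y_hat - y) ** 2)))
```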