
To evaluate the performance of the system, test cases were collected from the Ambo Plant Protection Research Center and the Assosa Agricultural Research Center. Fifteen (15) cases were collected. To evaluate how well the system assigns cases to the correct category, the test cases first had to be classified as positive or negative; this classification was done by domain experts. A well-performing system decides non-infected cases as negative and infected mango disease cases as positive, in agreement with the plant pathologist's judgment. The evaluators were domain experts selected from the Ambo Plant Protection Research Center.
In this research, the confusion matrix technique was used for system performance testing, and the effectiveness of the system was measured using recall, precision and F-measure.
The confusion matrix has four categories: true positive, false positive, false negative and true negative. True positives (TP) are cases that the domain expert diagnosed as positive and that the prototype system also diagnosed as positive. False positives (FP) occur when a negative case is entered into the system but the system returns a positive diagnosis; that is, the system treats an irrelevant case as relevant. True negatives (TN) are negative cases that both the prototype system and the domain expert diagnose as negative. This is the case when a negative case, i.e. one not showing a mango disease, is entered and the proposed system produces a negative decision; for instance, when the evaluators enter a case that the domain expert decided is not a mango disease and the system also decides it is not.
False negatives (FN, also called false drops or errors of omission) occur when a positive case is entered into the system for testing but the prototype system produces a negative result. Put differently, a case that the domain expert identified as a mango disease is decided as not a mango disease by the prototype system.
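The four categories above can be sketched as a small classification rule. This is a minimal illustration, not part of the prototype itself; the boolean labels are hypothetical (True = positive/infected, False = negative/not a mango disease):

```python
# Map one test case into its confusion-matrix cell, given the domain
# expert's judgment (actual) and the prototype's output (predicted).

def confusion_cell(expert_positive: bool, system_positive: bool) -> str:
    """Return the confusion-matrix category for one diagnosed case."""
    if expert_positive and system_positive:
        return "TP"   # both agree the case is a mango disease
    if not expert_positive and system_positive:
        return "FP"   # system diagnoses a disease the expert rejects
    if expert_positive and not system_positive:
        return "FN"   # system misses a disease the expert confirmed
    return "TN"       # both agree the case is not a mango disease

print(confusion_cell(True, True))    # TP
print(confusion_cell(False, True))   # FP
```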
                        Actual Positive    Actual Negative
Predicted Positive      TP                 FP
Predicted Negative      FN                 TN

Table 1 Confusion matrix concept
To test the performance of the prototype system, the domain experts classified the mango disease cases as correctly or incorrectly diagnosed by comparing the judgments reached by the prototype system with the judgments the domain experts themselves reached on the same test cases. The result is presented as a confusion matrix in Table 2 below.
                                               Actually correct cases    Actually incorrect cases
Predicted correct by the prototype system      8                         1
Predicted incorrect by the prototype system    2                         4
Total                                          10                        5

Table 2 Confusion matrix of the prototype system
From the above table, the prototype system made 12 correct diagnoses and 3 incorrect diagnoses, which indicates a system accuracy of 80%. Recall, precision and F-measure were calculated from the data in the confusion matrix.
          TP Rate    FP Rate    Precision    Recall    F-Measure
Results   0.777      0.166      0.875        0.777     0.823

Table 3 Accuracy of the prototype system
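As a sketch of how these measures are derived from confusion-matrix counts, the formulas can be applied to the cell counts of Table 2 (TP = 8, FP = 1, FN = 2, TN = 4); with these inputs, accuracy works out to the 80% reported above:

```python
# Compute accuracy, precision, recall and F-measure from the four
# confusion-matrix cells of Table 2.

tp, fp, fn, tn = 8, 1, 2, 4

accuracy = (tp + tn) / (tp + fp + fn + tn)   # correct / all = 12 / 15 = 0.80
precision = tp / (tp + fp)                   # share of positive diagnoses that are correct
recall = tp / (tp + fn)                      # share of true disease cases the system finds
f_measure = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} F={f_measure:.3f}")
```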
Evaluation plays an important role in judging the efficiency and effectiveness of a knowledge-based system, and recall and precision are its most common performance measures. As shown in Table 3, the precision of the system is 0.875 and the recall is 0.777. The F-measure is a derived effectiveness measure, computed as the harmonic mean of precision and recall; its best value is 1 and its worst is 0. As shown in Table 3, the F-measure of the prototype system is 0.823, which indicates that the prototype performs very well.
Two challenges complicated the performance evaluation of the prototype system: some of the cases did not contain enough information on the signs and symptoms of mango diseases beyond the common ones, and the knowledge of mango disease diagnosis and treatment varied among the professionals.
User acceptance testing is a form of testing that verifies whether the system can support day-to-day business and user scenarios, validating rules, workflows, data correctness and overall fitness for use, to ensure the system is sufficient and correct for business usage (Vince, 2010). It is a process of evaluating a new or revised system, undertaken by domain experts, the knowledge engineer and end-users of the system, to make sure it meets the objectives of its development (John, 2001). User acceptance testing is independent of the system development process and is performed by end-users and stakeholders before the system is formally released. Performing acceptance testing depends on different user acceptance criteria, such as functionality, correctness, validation, verification, ease of use and the user interface. Solomon selected visual interaction techniques to check the acceptance of his system.
Eight (8) domain experts were selected from the Assosa Agricultural Research Center. Since the procedure requires visual interaction with every selected domain expert, it was difficult to include a large number of respondents. The prototype system, including its user interface, was demonstrated to the domain experts to show what it can do. The researcher then distributed a questionnaire to the domain experts and collected the data. Finally, the user acceptance of the prototype system was analyzed. Table 4 below shows the analyzed user-acceptance data collected from the respondents.

No.  Criteria of evaluation                                                                     Poor  Fair  Good  Very good  Excellent  Average
1    Is the prototype easy to use and interact with?                                            0     0     2     2          4          4.25
2    Is MKBS attractive?                                                                        0     0     2     2          4          3.875
3    Is the system more efficient in time?                                                      0     0     1     2          5          4.5
4    How accurately does the system reach a decision about diagnosis of mango diseases?         1     0     1     2          4          4.25
5    Does the system incorporate sufficient knowledge about how to diagnose and treat mango?    0     0     3     3          2          3.875
6    Does the system give the right conclusions and the right recommendations?                  0     0     1     3          4          4.375
7    How do you rate the significance of the system in the domain area?                         0     0     2     4          2          4
     Average                                                                                                                            4.16
Table 4 Performance evaluations by domain experts
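Each average in Table 4 is a weighted mean of the response counts on a five-point scale (Poor = 1 through Excellent = 5) over the 8 respondents. A minimal sketch, using the counts of criterion 3 from the table ("Is the system more efficient in time?"):

```python
# Weighted average of Likert-scale responses for one criterion.

counts = [0, 0, 1, 2, 5]    # Poor, Fair, Good, Very good, Excellent
weights = [1, 2, 3, 4, 5]   # numeric value of each response level

total_score = sum(c * w for c, w in zip(counts, weights))
average = total_score / sum(counts)   # (3 + 8 + 25) / 8 = 4.5

print(average)   # 4.5
```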
As shown in Table 4 above, the overall average acceptance score given by the domain experts is 4.16 out of 5, which indicates that the respondents rated the acceptance of the prototype system as very good.
