Improving Vision-Impaired Users' Access to Electronic Resources in an E-Learning Environment with a Modified Artificial Neural Network

ABSTRACT

Assistive Technologies (ATs) provide means through which persons with visual impairment are empowered with adaptive devices and methods for accessing multimedia information. However, the sensitivity and specificity achieved in accessing electronic resources vary across visually impaired persons. Existing ATs were designed as “one model fits all” (static calibration requirements), thereby limiting their usability by vision-impaired users in an e-learning environment. This study presents a Dynamic Thresholding Model (DTM) that adaptively adjusts vision parameters to meet the calibration requirements of vision-impaired users.

Data from the International Statistical Classification of Diseases and Related Health Problems of the World Health Organisation (WHO), containing 1001 instances of visual impairment measures, were obtained for the period 2008 to 2013. The WHO vision parameters for users' Visual Acuity Range (VAR) were adopted. These were: VAR ≥ 0.3 (299 instances); 0.1 < VAR < 0.3 (182); 0.07 ≤ VAR < 0.1 (364); 0.05 ≤ VAR < 0.07 (120); 0.02 ≤ VAR < 0.05 (24); and VAR < 0.02 (12). Data for the six VAR groups were partitioned into 70% (700 instances) for training and 30% (301) for testing. Data for the six groups were transformed into a 3-bit encoding to facilitate model derivation. The DTM was developed with calibrator parameters (Visual Acuity (Va), Print Size (Ps), and Reading Rate (Rr)) for low acuity, an adaptive vision calibrator, and dynamic thresholding. The VAR from the developed DTM was used to predict the optimal operating range and accuracy on the observed WHO dataset irrespective of grouping. Six epochs were conducted for each thresholding value to determine the sensitivity and specificity values relative to the False Negative Rate (FNR) and False Positive Rate (FPR), respectively, which are evidence of misclassification.
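To make the preprocessing concrete, the sketch below (Python) illustrates one way the six WHO VAR bands could be mapped to 3-bit codes and the 1001 instances partitioned 70/30. The particular bit assignment, the function names, and the shuffling seed are illustrative assumptions; the abstract does not specify them.

```python
# Illustrative sketch of the 3-bit VAR group encoding and the 70/30
# train/test partition described above. The bit code assigned to each
# band is an assumption; only the six WHO bands and the split ratio
# come from the text.
import random

# Six WHO VAR bands -> assumed 3-bit codes, ordered mildest to most severe.
VAR_GROUP_CODES = {
    "VAR >= 0.3":         (0, 0, 0),
    "0.1 < VAR < 0.3":    (0, 0, 1),
    "0.07 <= VAR < 0.1":  (0, 1, 0),
    "0.05 <= VAR < 0.07": (0, 1, 1),
    "0.02 <= VAR < 0.05": (1, 0, 0),
    "VAR < 0.02":         (1, 0, 1),
}

def encode_var(var: float) -> tuple[int, int, int]:
    """Map a Visual Acuity Range value to its 3-bit group code.

    Note: the WHO bands as stated leave VAR = 0.1 unassigned; here it
    falls into the adjacent lower band."""
    if var >= 0.3:
        return VAR_GROUP_CODES["VAR >= 0.3"]
    if var > 0.1:
        return VAR_GROUP_CODES["0.1 < VAR < 0.3"]
    if var >= 0.07:
        return VAR_GROUP_CODES["0.07 <= VAR < 0.1"]
    if var >= 0.05:
        return VAR_GROUP_CODES["0.05 <= VAR < 0.07"]
    if var >= 0.02:
        return VAR_GROUP_CODES["0.02 <= VAR < 0.05"]
    return VAR_GROUP_CODES["VAR < 0.02"]

def split_70_30(instances: list, seed: int = 42) -> tuple[list, list]:
    """Shuffle and partition instances into 70% training and 30% testing
    (700 and 301 of the 1001 WHO instances)."""
    rng = random.Random(seed)
    shuffled = instances[:]
    rng.shuffle(shuffled)
    cut = int(0.7 * len(shuffled))  # 700 of 1001 instances
    return shuffled[:cut], shuffled[cut:]
```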

The 3-bit encoding, coupled with the DTM, yielded optimised equations of the form:

                                    OP1 = 463.6073Ps - 597.0703Va + 573.8042Rr

                                    OP2 = 1.9383Ps - 1.7474Va + 0.4508Rr

                                    OP3 = 8.4985Va - 1.2436Ps - 17.1718Rr

where OP1, OP2, and OP3 represent the first, second, and third bits, respectively. Five local maximum and one global maximum threshold values were obtained from the DTM. The local maximum threshold values were 0.455, 0.470, 0.515, 0.530, and 0.580, with corresponding accuracies of 99.257%, 99.343%, 99.171%, 99.229%, and 99.429%. The global maximum accuracy was 99.6% at a threshold value of 0.5. The Va, Ps, and Rr produced equal numbers of observations (301), agreeing with the result in the WHO report. The model correctly classified 99.89% of user impairments, with an error rate of 0.11%, and predicted a sensitivity of 99.79% (FNR of 0.21%) and a specificity of 99.52% (FPR of 0.48%).
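As a hedged sketch of how these equations and the dynamic threshold might be applied at prediction time, the following Python mirrors the pipeline above. Only the coefficients and the threshold values come from the text; the sigmoid squashing of the raw outputs (and hence the assumption of suitably normalised Va, Ps, and Rr inputs), the candidate-threshold sweep, and all helper names are assumptions.

```python
# Sketch of applying the optimised OP equations with a dynamic decision
# threshold. Coefficients and thresholds are from the text; the sigmoid
# squashing and helper names are illustrative assumptions.
import math

def op_bits(va: float, ps: float, rr: float) -> tuple[float, float, float]:
    """Raw outputs of the three optimised equations (OP1, OP2, OP3)."""
    op1 = 463.6073 * ps - 597.0703 * va + 573.8042 * rr
    op2 = 1.9383 * ps - 1.7474 * va + 0.4508 * rr
    op3 = 8.4985 * va - 1.2436 * ps - 17.1718 * rr
    return op1, op2, op3

def sigmoid(x: float) -> float:
    # Clamp to avoid math.exp overflow for extreme raw outputs.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

def classify(va: float, ps: float, rr: float,
             threshold: float = 0.5) -> tuple[int, ...]:
    """Threshold each squashed output to recover a 3-bit VAR group code."""
    return tuple(int(sigmoid(op) >= threshold)
                 for op in op_bits(va, ps, rr))

def best_threshold(accuracy_at, candidates):
    """Pick the global-maximum threshold from a candidate sweep."""
    best = max(candidates, key=accuracy_at)
    return best, accuracy_at(best)

def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP/(TP+FN) = 1 - FNR; specificity = TN/(TN+FP) = 1 - FPR."""
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping candidate thresholds with a helper like best_threshold mirrors the search reported above, where accuracy peaked locally at five threshold values and globally at 0.5; the chosen threshold governs the trade-off between FNR and FPR.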

The developed dynamic thresholding model adaptively classified various degrees of visual impairment for vision-impaired users.