The objective of the modelling phase in this application was to develop classifiers that are able to

identify any input combination as belonging to one of two classes: normal or epileptic.

For developing the logistic regression and neural network classifiers, 300 examples were randomly

taken from the 500 examples and used for deriving the logistic regression models or for training the

neural networks. The remaining 200 examples were kept aside and used for testing the validity of the

developed models. The class distribution of the samples in the training, validation, and test data sets

is summarized in Table 2.
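As an illustration only, the random 300/200 split of the 500 examples could be carried out as in the following Python sketch; the array names features and labels are hypothetical placeholders, not taken from the study.

import numpy as np

rng = np.random.default_rng(seed=0)            # fixed seed so the split is reproducible
n_total, n_train = 500, 300

indices = rng.permutation(n_total)             # shuffle the indices of the 500 examples
train_idx, test_idx = indices[:n_train], indices[n_train:]

# features: (500, n_inputs) matrix of wavelet coefficients; labels: (500,) vector of 0/1 targets
# X_train, y_train = features[train_idx], labels[train_idx]
# X_test,  y_test  = features[test_idx],  labels[test_idx]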

We divided the four-channel EEG recordings into sub-band frequencies using the LBDWT. Since the four frequency bands alpha (D4), beta (D3), theta (D5), and delta (A5) are sufficient for EEG signal processing, these wavelet sub-band frequencies (delta (1–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz)) were applied to the LR and MLPNN inputs. We then averaged the four channels and used these wavelet coefficients (D3–D5 and A5) of the EEG signals as inputs to the ANN and LR models.
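A minimal sketch of this feature-extraction step is given below; it uses the standard discrete wavelet transform from PyWavelets as a stand-in for the lifting-based LBDWT, and it assumes the four channels are averaged before decomposition. The wavelet family db4 and the function name are illustrative assumptions.

import numpy as np
import pywt

def subband_features(eeg_channels, wavelet="db4", level=5):
    # eeg_channels: array of shape (4, n_samples) holding the four EEG channels
    averaged = np.mean(eeg_channels, axis=0)              # average the four channels
    coeffs = pywt.wavedec(averaged, wavelet, level=level) # [cA5, cD5, cD4, cD3, cD2, cD1]
    cA5, cD5, cD4, cD3 = coeffs[0], coeffs[1], coeffs[2], coeffs[3]
    # concatenate delta (A5), theta (D5), alpha (D4) and beta (D3) coefficients
    # into a single input vector for the LR and MLPNN classifiers
    return np.concatenate([cA5, cD5, cD4, cD3])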

The MLPNN was designed with the LBDWT coefficients (D3–D5 and A5) of the EEG signal in the input layer, and the output layer consisted of one node representing whether an epileptic seizure was detected or not. A value of "0" was used when the experimental investigation indicated a normal EEG pattern and "1" when it indicated an epileptic seizure. The preliminary architecture of the network was examined using one and two hidden layers with a variable number of hidden nodes in each. It was found that one hidden layer is adequate for the problem at hand; thus, the final network contains three layers of nodes. The training procedure started with one hidden node in the hidden layer, followed by training on the training data (300 examples) and then testing on the validation data (200 examples) to examine the network's prediction performance on cases never used in its development. The same procedure was then run repeatedly, each time expanding the network by adding one more node to the hidden layer, until the best architecture and set of connection weights were obtained. Using backpropagation with the Levenberg–Marquardt (L–M) algorithm for training, a learning rate of 0.01 (0.005) and a momentum coefficient of 0.95 (0.9) were found to be optimal for training the networks with various topologies. The selection of the optimal network was based on monitoring the variation of the error and the accuracy measures as the hidden layer size was increased, for each training cycle. The sum of squared errors, i.e., the sum of the squared deviations of the ANN output from the true (target) values, computed for both the training and test sets, was used for selecting the optimal network.
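The hidden-node search described above can be sketched roughly as follows. scikit-learn's MLPClassifier is used here purely as a stand-in, since it offers L-BFGS, SGD, and Adam solvers rather than the Levenberg–Marquardt training actually used in the study, and X_train, y_train, X_test, y_test refer to the hypothetical arrays from the split sketch above.

import numpy as np
from sklearn.neural_network import MLPClassifier

def sse(model, X, y):
    # sum of squared deviations of the network output from the 0/1 targets
    p = model.predict_proba(X)[:, 1]
    return float(np.sum((p - np.asarray(y)) ** 2))

results = []
for n_hidden in range(1, 21):                    # grow the single hidden layer one node at a time
    net = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                        solver="lbfgs", max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    results.append((n_hidden, sse(net, X_train, y_train), sse(net, X_test, y_test)))

# choose the topology with the smallest combined training + test error
best_n_hidden, best_train_sse, best_test_sse = min(results, key=lambda r: r[1] + r[2])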

Additionally, because the problem involves classification into two classes, accuracy, sensitivity, and specificity were used as performance measures. These parameters were obtained separately for both the training and validation sets each time a new network topology was examined. Computer programs that we wrote, implementing the training algorithms based on error backpropagation and L–M, were used to develop the MLPNNs.
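For reference, accuracy, sensitivity, and specificity can be computed from the confusion-matrix counts, with class "1" (epileptic seizure) treated as the positive class; the helper below is an illustrative sketch, not the original implementation.

import numpy as np

def classification_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))    # seizures correctly detected
    tn = np.sum((y_true == 0) & (y_pred == 0))    # normal patterns correctly identified
    fp = np.sum((y_true == 0) & (y_pred == 1))    # false alarms
    fn = np.sum((y_true == 1) & (y_pred == 0))    # missed seizures
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)                   # true positive rate
    specificity = tn / (tn + fp)                   # true negative rate
    return accuracy, sensitivity, specificity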