
Using Real-Valued Meta Classifiers to Integrate Binding Site Predictions


Presentation Transcript


  1. Using Real-Valued Meta Classifiers to Integrate Binding Site Predictions Yi Sun, Mark Robinson, Rod Adams, Paul Kaye, Alistair G. Rust, Neil Davey University of Hertfordshire, 2005

  2. Outline • Problem Domain • Description of the Datasets • Experimental Techniques • Experiments • Summary

  3. Problem Domain (1) • One of the most exciting and active areas of research in biology is currently understanding the regulation of gene expression. • It is known that many of the mechanisms of gene regulation take place directly at the transcriptional or sequence level.

  4. Problem Domain (2) • Transcription factors will bind to a number of different but related sequences, thereby effecting changes in the expression of genes. • The current state-of-the-art algorithms for transcription factor binding site prediction are, in spite of recent advances, still severely limited in accuracy.

  5. Description of the Datasets (1) • The original dataset has 68,910 possible binding sites. • A prediction result for each of 12 algorithms: • Single sequence algorithms (7); • Coregulatory algorithms (3); • Comparative algorithm (1); • Evolutionary algorithm (1). • It includes two classes, labelled as either binding sites or non-binding sites, with about 93% being non-binding sites.

  6. Description of the Datasets (2) Fig. 1. Organisation of dataset, showing alignment of algorithmic predictions, known information and original DNA sequence data.

  7. Description of the Datasets (3): Windowing Fig. 2. The window size is set to 7 in this study. The label of the middle position of the 7 consecutive prediction sites becomes the label of the new windowed input. The length of each windowed input is therefore 12 × 7 = 84.
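To make the windowing concrete, here is a minimal sketch (assuming NumPy and illustrative array names and shapes; not the authors' code) of how the 12 × 7 = 84-dimensional windowed inputs could be built from a (positions × algorithms) prediction matrix:

```python
import numpy as np

def make_windowed_inputs(predictions, labels, window=7):
    """Slide a window of `window` consecutive positions over the
    per-position prediction matrix (one column per algorithm).
    The label of the centre position labels the whole windowed input,
    which has length n_algorithms * window (12 * 7 = 84 here)."""
    n_positions, n_algorithms = predictions.shape   # e.g. (68910, 12)
    half = window // 2
    X, y = [], []
    for i in range(half, n_positions - half):
        # Concatenate the algorithm predictions for the 7 positions.
        X.append(predictions[i - half:i + half + 1].ravel())
        y.append(labels[i])                         # middle label
    return np.array(X), np.array(y)

# Hypothetical usage: `preds` is a (68910, 12) array of algorithm
# outputs, `labs` a 0/1 vector of binding-site annotations.
# X_win, y_win = make_windowed_inputs(preds, labs, window=7)
# X_win.shape[1] == 12 * 7 == 84
```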

  8. Imbalanced Data (93% Non-binding Sites): Sampling Techniques for Imbalanced Dataset Learning • For under-sampling, we randomly selected a subset of data points from the majority class. • For over-sampling, the synthetic minority over-sampling technique (SMOTE) proposed by N. V. Chawla et al. is applied. • For each pattern in the minority class, we search for its K nearest neighbours in the minority class using Euclidean distance. • For continuous features, the difference between each feature of the pattern and that of its nearest neighbour is taken, multiplied by a random number between 0 and 1, and added to the corresponding feature of the pattern. • For binary features, each element of the synthetic pattern is set by majority voting over the corresponding elements of the K nearest neighbours.
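The sampling steps described above could look roughly like the following sketch. It follows the slide's description of SMOTE-style over-sampling with majority voting for binary features; the function names, parameters, and brute-force neighbour search are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def undersample_majority(X_maj, n_keep, rng=None):
    """Random under-sampling: keep a random subset of majority-class points."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(X_maj), size=n_keep, replace=False)
    return X_maj[idx]

def smote_like_oversample(X_min, k=5, n_new=1000, binary_mask=None, rng=None):
    """SMOTE-style over-sampling of the minority class: continuous features
    are interpolated between a pattern and one of its k nearest minority-class
    neighbours; binary features are set by majority vote over those k neighbours."""
    rng = rng or np.random.default_rng()
    if binary_mask is None:
        binary_mask = np.zeros(X_min.shape[1], dtype=bool)
    # Brute-force Euclidean distances within the minority class
    # (fine for a sketch; use a KD-tree for larger sets).
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))          # pick a minority pattern
        nn = neighbours[i]                    # its k nearest neighbours
        j = rng.choice(nn)                    # one neighbour for interpolation
        new = X_min[i].copy()
        cont = ~binary_mask
        # Continuous: pattern + rand(0, 1) * (neighbour - pattern).
        new[cont] = X_min[i, cont] + rng.random() * (X_min[j, cont] - X_min[i, cont])
        # Binary: majority vote over the k nearest neighbours.
        new[binary_mask] = (X_min[nn][:, binary_mask].mean(axis=0) >= 0.5).astype(X_min.dtype)
        synthetic.append(new)
    return np.vstack(synthetic)
```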

  9. Experimental Techniques: The Classification Techniques • Majority Voting (MV); • Weighted Majority Voting (WMV); • Single Layer Networks (SLN); • Rule Sets (C4.5-Rules); • Support Vector Machines (SVM).
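As a rough illustration of how the trainable meta-classifiers could be fitted to the windowed inputs, here is a sketch using scikit-learn stand-ins (a Perceptron for the single layer network, a decision tree in place of C4.5-Rules, and an RBF SVM); the models, parameters, and data split are assumptions for illustration, not the configuration used in the paper. MV and WMV are not shown; they combine the 12 algorithm outputs directly by (weighted) voting.

```python
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def train_meta_classifiers(X, y):
    """Fit three of the meta-classifiers on windowed inputs X and
    binding/non-binding labels y; return held-out accuracy per model."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    models = {
        "SLN": Perceptron(),                         # single layer network stand-in
        "Rules": DecisionTreeClassifier(),           # stand-in for C4.5-Rules
        "SVM": SVC(kernel="rbf", probability=True),  # real-valued outputs via probabilities
    }
    return {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```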

  10. Performance Metrics: a confusion matrix.
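Because about 93% of sites are non-binding, accuracy alone is uninformative, so metrics derived from the confusion matrix are needed. A minimal sketch of the usual quantities (the exact set of metrics reported in Tables 1 and 2 may differ) could be:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Common metrics derived from a binary confusion matrix,
    treating binding sites as the positive class."""
    recall    = tp / (tp + fn)                    # true positive rate / sensitivity
    precision = tp / (tp + fp)
    fp_rate   = fp / (fp + tn)                    # false positive rate
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    f_score   = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision, "fp_rate": fp_rate,
            "accuracy": accuracy, "f_score": f_score}
```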

  11. Experiments (1): Consistent Dataset. Table 1: Common performance metrics (%) tested on the same consistent possible binding sites with single and windowed inputs separately.

  12. Experiments (2): Full Dataset. Table 2: Common performance metrics (%) tested on the full test dataset with single and windowed inputs separately.

  13. Experiments (3) Fig. 3. ROC graph: five classifiers applied to the consistent test set with single inputs.

  14. Experiments (4) Fig. 4. ROC graph: three classifiers applied to the full test set with windowed inputs.
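ROC graphs like Figs. 3 and 4 can be produced from the classifiers' real-valued outputs. The following sketch assumes scikit-learn and matplotlib and a dict of already fitted classifiers; it is an illustration of the evaluation, not the plotting code used for the figures:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import auc, roc_curve

def plot_roc(models, X_test, y_test):
    """Plot one ROC curve per fitted classifier, using its real-valued
    score (decision_function if available, else predicted probability)."""
    for name, model in models.items():
        if hasattr(model, "decision_function"):
            scores = model.decision_function(X_test)
        else:
            scores = model.predict_proba(X_test)[:, 1]
        fpr, tpr, _ = roc_curve(y_test, scores)
        plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")
    plt.plot([0, 1], [0, 1], "k--", label="chance")
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()
```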

  15. Summary • By integrating the 12 algorithms using the SVM, we considerably improve binding site prediction. • Employing a ‘window’ of consecutive results in the input vector contextualises each prediction with its neighbouring results, allowing the classifier to exploit the distribution of the data and further improve binding site prediction.
