Neural Network Ensemble Based on Feature Selection for Non-Invasive Recognition of Liver Fibrosis Stage Bartosz KRAWCZYK, Michał WOŹNIAK, Tomasz ORCZYK, Piotr PORWIK, Joanna MUSIALIK, Barbara BŁOŃSKA-FAJFROWSKA
Presentation agenda • Overview. • Current diagnostic methods. • Proposed method. • Analyzed data. • Data analysis methods. • Result comparison. • Conclusions.
Overview • Liver fibrosis: • Accumulation of tough, fibrous scar tissue in the liver. • ~1.75% of Poland’s population is infected with HCV. • Untreated, it may lead to liver cirrhosis and death. • Risk factors: • Chronic infection with hepatitis C or hepatitis B virus (HCV, HBV). • Compromised immune system (HIV or immunosuppressive drugs). • Heavy alcohol consumption. • Grading systems: • Knodell Histological Activity Index (HAI Score). • Ishak system. • METAVIR system.
Current diagnostic methods • Invasive • Liver biopsy • Risk of health complications or even death. • Up to 45% uncertainty, depending on specimen quality and size. • Still regarded as the “gold standard”. • Non-invasive • ELF Test • FibroTest & FibroScan • Expensive • Not very accurate
Proposed method • Non-invasive • Blood test based • Inexpensive • Only regular blood tests • Comparable with other non-invasive methods • Similar error level to FibroTest
Proposed method: Analyzed data and problems • Data characteristics: • 127 patients, mostly (70%) with HCV and liver fibrosis. • All patients otherwise healthy and not under therapy. • 34 parameters measured. • Problems: • Small number of data samples. • Uneven distribution of diagnosed fibrosis stages. • Incomplete records. • Many poor-quality biopsies.
Proposed method: Neural Network Ensemble The introduced method of classifier ensemble design consists of three main steps: • Building the pool of individual classifiers. • Pruning the acquired pool by discarding redundant predictors. • Using a sophisticated trained fuser to deliver the ensemble.
Proposed method: Building the pool of classifiers • Models should be complementary to each other, exhibiting at the same time high accuracy and high diversity. • There is no single optimal approach to the feature selection task, and results obtained with different methods may differ significantly. • Instead of selecting a single best feature selection method, we use several of them to reduce the dimensionality of the feature space.
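The pool-building idea above — run several feature selectors and keep one feature subset (and hence one candidate classifier) per method — can be sketched as follows. This is not the authors' code: `variance_ranking` and `correlation_ranking` are deliberately simple stand-ins for the filters actually used (ReliefF, FCBF, etc.), and the toy data is synthetic.

```python
import numpy as np

def variance_ranking(X, y, k):
    """Keep the k features with the highest variance (a simple filter)."""
    return np.argsort(X.var(axis=0))[::-1][:k]

def correlation_ranking(X, y, k):
    """Keep the k features with the highest |Pearson correlation| with y."""
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                     for j in range(X.shape[1])])
    return np.argsort(corr)[::-1][:k]

def build_pool(X, y, selectors, k=3):
    """One feature subset per selection method -> one pool member each."""
    return {name: sel(X, y, k) for name, sel in selectors.items()}

# Synthetic data: class label driven mainly by feature 0
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

pool = build_pool(X, y, {"variance": variance_ranking,
                         "correlation": correlation_ranking})
```

Each entry of `pool` would then be used to train one neural network on its reduced feature space, which is what makes the pool members complementary.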
Proposed method: Ensemble pruning • There are several approaches in the literature for selecting valuable members of the committee. • An ideal ensemble consists of classifiers with high individual accuracy and high diversity. • Diversity measures fall into two major types: • Pairwise (show how two classifiers differ from each other). • Non-pairwise (measure the diversity of the whole ensemble). • To measure the diversity of the whole ensemble we used the entropy measure.
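A minimal sketch of the non-pairwise entropy measure (in the standard form for classifier ensembles, with L classifiers and N objects): disagreement per object is highest when the correct votes split as evenly as possible. The interface below is our illustration, not the paper's code.

```python
import numpy as np

def entropy_diversity(correct):
    """Entropy diversity measure E for an ensemble.

    correct: (N, L) boolean array; correct[j, l] is True when the l-th
    classifier labels object j correctly. E is 0 when all classifiers
    agree on every object and 1 at maximal disagreement.
    """
    N, L = correct.shape
    votes = correct.sum(axis=1)               # correct votes per object
    denom = L - int(np.ceil(L / 2))           # normalizing constant
    return float(np.mean(np.minimum(votes, L - votes)) / denom)

# Three classifiers, four objects
votes = np.array([[1, 1, 1],    # full agreement -> contributes 0
                  [1, 0, 0],    # maximal split for L=3 -> contributes 1
                  [0, 1, 0],
                  [1, 1, 0]], dtype=bool)
```

During pruning, candidate sub-ensembles can then be compared by accuracy and by this diversity score, discarding members that add neither.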
Proposed method:Fusion of individual classifiers • Classifier fusion algorithms can make decisions on the basis of class labels given by individual classifiers or they can construct new discriminant functions on the basis of individual classifier support functions: • The first group includes voting algorithms. • The second group is based on discriminant analysis. • The design of improved fusion classification models, especially trained fusers, is the focus of current research.
Proposed method: Fusion of individual classifiers Assume that we have K classifiers in a pool after the pruning procedure. For a given object x, each individual classifier Ψ_l decides on a class based on the values of its discriminant functions. Let F^(l)(i, x) denote the discriminant function that is assigned to class i for a given value of x, and that is used by the l-th classifier Ψ_l. The combined classifier uses the decision rule Ψ̄(x) = arg max_i F̂(i, x), where F̂(i, x) = Σ_{l=1..K} w_{l,i} F^(l)(i, x). The weights can be set dependent on the classifier and class number: weight w_{l,i} is assigned to the l-th classifier and the i-th class, and a given classifier’s weights assigned to different classes may differ.
Proposed method: Feature selection algorithms Eight different feature selection algorithms were used, namely: • ReliefF, • Fast Correlation-Based Filter, • Genetic Wrapper, • Simulated Annealing Wrapper, • Forward Selection, • Backward Selection, • Quick Branch & Bound, • Las Vegas Incremental. The neural network architecture was as follows: the number of neurons in the input layer was equal to the number of selected features, the number of output neurons was equal to the number of classes, and the number of hidden neurons was equal to half the sum of the neurons in the former two layers.
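The layer-sizing rule stated above can be written down directly. Integer division is our assumption for the case where the sum is odd; the slide does not specify rounding.

```python
def layer_sizes(n_selected_features, n_classes):
    """Network layout from the sizing rule on the slide:
    input = selected features, output = classes,
    hidden = half the sum of the other two layers (floored here)."""
    hidden = (n_selected_features + n_classes) // 2
    return (n_selected_features, hidden, n_classes)

# e.g. a feature selector keeping 10 of the 34 parameters, 4 fibrosis classes
sizes = layer_sizes(10, 4)
```

Because each feature selection method keeps a different number of features, every pool member ends up with its own input and hidden layer width.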
Proposed method: Set-up • As reference methods we selected the most popular ensembles: Bagging, Boosting, Random Forest and Random Subspace. • Additionally, we compared our method with the single best classifier from the pool, with all classifiers from the pool, and with simple majority voting. • The combined 5x2 CV F test was carried out to assess the statistical significance of the obtained results.
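For reference, the combined 5x2 cv F test statistic (Alpaydin's variant of Dietterich's 5x2 cv test) can be computed from the ten per-fold accuracy differences; under the null hypothesis it follows an F distribution with (10, 5) degrees of freedom. The function below is a generic sketch, not the authors' implementation.

```python
import numpy as np

def combined_5x2cv_f(p):
    """Combined 5x2 cv F test statistic.

    p: (5, 2) array; p[i][j] is the accuracy difference between the two
    compared classifiers on fold j of replication i of 2-fold CV.
    Returns f = (sum of p_ij^2) / (2 * sum of per-replication variances);
    compare against an F(10, 5) distribution for the p-value.
    """
    p = np.asarray(p, dtype=float)
    p_bar = p.mean(axis=1, keepdims=True)          # mean per replication
    s2 = ((p - p_bar) ** 2).sum(axis=1)            # variance per replication
    return float((p ** 2).sum() / (2.0 * s2.sum()))
```

The resulting statistic is then compared with the F(10, 5) critical value at the chosen significance level to decide whether one classifier significantly outperforms the other.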
Proposed method: Results • The proposed neural network ensemble, based on feature selection methods, outperformed all MCSs previously used for this problem. • The weakest results were returned by the single-best-model approach, which highlights the usefulness of utilizing more than one classifier to fully exploit the outputs of the feature selection methods. • The second biggest accuracy boost comes from the trained fuser: fusing the individual classifiers allows an optimal linear combination of them to be derived. • The pruning step had the smallest, but still statistically significant, impact on the ensemble design.
Conclusions: • The presented paper shows that, despite some problems, it is possible to reach an error level similar to or even lower than that of commercial tests. • It is also worth mentioning that a liver biopsy result, according to other research, is itself only a prediction, with classification error varying from 35% up to 45%, depending on sample size and count. • We proved that each of the three steps embedded in the proposed committee design has an important impact on the quality of the final prediction and thus should not be omitted.
Thank you for your attention Contact: email@example.com firstname.lastname@example.org