
ORIGINAL ARTICLE
Year : 2018  |  Volume : 14  |  Issue : 3  |  Page : 625-633

Breast cancer tumor type recognition using graph feature selection technique and radial basis function neural network with optimal structure


1 Department of Electrical and Electronic Engineering, Eastern Mediterranean University, KKTC, Via Mersin-10, Gazimağusa, Turkey
2 Faculty of Electrical and Computer Engineering, Babol University of Technology, Babol, Iran

Date of Web Publication: 12-Jun-2018

Correspondence Address:
Dr. Abdoljalil Addeh
Faculty of Electrical and Computer Engineering, Babol University of Technology, Babol
Iran

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/0973-1482.183561

Abstract


Context: Breast cancer is a major cause of mortality in young women in the developing countries. Early diagnosis is the key to improve survival rate in cancer patients.
Aims: In this paper, an intelligent system is proposed for breast cancer tumor type recognition.
Settings and Design: The proposed system includes three main modules: a feature selection module, a classifier module, and an optimization module. Feature selection plays an important role in pattern recognition systems; a better selection of features usually results in a higher accuracy rate.
Methods and Material: In the proposed system, we used a new graph-based feature selection approach to select the best features. In the classifier module, the radial basis function neural network (RBFNN) is used as the classifier. In RBFNN training, the number of RBFs and their respective centers and widths (spreads) play a very important role in performance. Therefore, the artificial bee colony (ABC) algorithm is used to select appropriate parameters of the classifier.
Statistical Analysis Used: The RBFNN with the optimal structure and the selected features classified the tumors with 99.59% accuracy.
Results: The proposed system was tested on the Wisconsin breast cancer database (WBCD), and the simulation results show that the recommended system exhibits high accuracy.
Conclusions: The proposed system has high recognition accuracy, and we therefore recommend it for breast cancer tumor type recognition.

Keywords: Artificial bee colony, feature selection, graph, radial basis function neural network, Wisconsin breast cancer database


How to cite this article:
Zarbakhsh P, Addeh A. Breast cancer tumor type recognition using graph feature selection technique and radial basis function neural network with optimal structure. J Can Res Ther 2018;14:625-33

How to cite this URL:
Zarbakhsh P, Addeh A. Breast cancer tumor type recognition using graph feature selection technique and radial basis function neural network with optimal structure. J Can Res Ther [serial online] 2018 [cited 2020 Jul 8];14:625-33. Available from: http://www.cancerjournal.net/text.asp?2018/14/3/625/183561




Introduction


There has been a considerable increase in the number of breast cancer cases in recent years. It is reported in [1],[2] that breast cancer was the second most commonly diagnosed cancer, and that it was the most prevalent cancer in the world by the year 2015. Breast cancer outcomes have improved during the last decade with the development of more effective diagnostic techniques and improvements in treatment methodologies. A key factor in this trend is the early detection and accurate diagnosis of the disease. The long-term survival rate for women in whom breast cancer has not metastasized has increased, with the majority of women surviving many years after diagnosis and treatment.[3],[4],[5]

The use of machine learning tools in medical diagnosis is increasing gradually, mainly because the effectiveness of classification and recognition systems has improved a great deal, helping medical experts to diagnose diseases.[6],[7],[8] The correct diagnosis of breast cancer is one of the major problems in the medical field. From the literature, it has been found that different pattern recognition techniques can help to improve diagnosis in this domain.[9],[10],[11],[12],[13],[14]

Prevention and early detection through screening remain the best form of protection for patients. X-ray mammography is the most widely used and effective tool for screening campaigns. The goal of mammography is the early detection of breast cancer, typically through the detection of characteristic masses and/or microcalcifications; a mammogram can find breast cancer before it can be felt.[15],[16],[17] However, it is not perfect: it is still challenging for radiologists to differentiate between benign and malignant tumors. The existence of a breast tumor is usually reflected in the mammogram. The current procedure assumes that the image is recorded on an X-ray film and interpreted by a human expert. Obviously, this suffers from human error in visual inspection, which may be compounded by the poor quality of mammogram images.[18],[19],[20]

Many investigators believe that automating mammogram screening analysis increases the rate of early detection, and several approaches have been proposed for breast cancer detection. In [9],[10],[11],[12],[13],[14], artificial neural networks (ANNs) have been used to classify breast cancer. The advantage of a neural network is that it is capable of handling noisy measurements while requiring no assumption about the statistical distribution of the monitored data; it learns to recognize patterns directly from typical example patterns during a training phase. In [21],[22],[23],[24],[25],[26], fuzzy approaches were used to diagnose breast cancer. The error rates of fuzzy systems are high, as they suffer from the drawbacks of random initial cluster center selection and the requirement of a large training data set.[27]

Some researchers have used the support vector machine (SVM) for breast cancer recognition.[28],[29],[30],[31] SVMs have been receiving increasing attention, with remarkable results reported recently. However, the accuracy of an SVM depends on the choice of kernel function and parameters (e.g., cost parameter, slack variables, margin of the hyperplane). Failure to find the optimal parameters for an SVM model affects its recognition accuracy (RA).[32]

Most of the existing techniques use unprocessed data as the input of the breast cancer recognition system. The use of unprocessed breast cancer data has several problems: the amount of data to be processed is large, and unnecessary features increase the size of the search space and reduce the RA. In [33], an automatic diagnosis system for detecting breast cancer based on association rules (AR) and a multilayer perceptron (MLP) neural network is proposed, in which AR is used for reducing the dimension of the breast cancer database and the MLP neural network is used for intelligent classification. The obtained results showed that feature selection can improve the breast cancer RA significantly.

Based on the published papers, there are some important issues in the design of an automatic breast cancer recognition system which, if suitably addressed, lead to more efficient recognizers. One of these issues is the selection of features. In this paper, graph clustering with node centrality (GCNC) is proposed as the feature selection technique.

Another issue is the choice of the classification approach. The developed system uses a radial basis function neural network (RBFNN) for recognition. RBFNNs are good at tasks such as pattern matching and classification, function approximation, and optimization.[34],[35],[36],[37] In the RBFNN training process, the number of RBFs and their respective centers and spreads play a very important role in performance. Therefore, the artificial bee colony (ABC) algorithm is proposed for selecting appropriate parameters of the classifier, which improves RBFNN performance.

This paper is organized as follows. Section 2 describes the studied dataset. Section 3 describes the concepts needed, including the feature selection technique, optimization algorithm, and RBFNN concepts. Section 4 describes the proposed system. Section 5 shows some simulation results and finally Section 6 concludes the paper.


Wisconsin Breast Cancer Database


Breast cancer is the most common cancer among women, excluding nonmelanoma skin cancers. This cancer affects one in eight women during their lives. It occurs in both men and women, although male breast cancer is rare. Breast cancer is a malignant tumor that has developed from cells of the breast. Although scientists know some of the risk factors (e.g., ageing, genetic risk factors, family history, menstrual periods, not having children, obesity) that increase a woman's chance of developing breast cancer, they do not yet know what causes most breast cancers or exactly how some of these risk factors cause cells to become cancerous. Research is under way to learn more, and scientists are making great progress in understanding how certain changes in DNA can cause normal breast cells to become cancerous.[31],[38] In this study, the Wisconsin breast cancer database (WBCD) was used and analyzed. The data were collected by Dr. William H. Wolberg (1989–1991) at the University of Wisconsin–Madison Hospitals. There are 699 records in this database, and each record has nine attributes. The nine attributes detailed in [Table 1] are graded on an interval scale from 1 to 10, with 1 being closest to the normal state and 10 the most abnormal. In this database, 241 (34.5%) records are malignant and 458 (65.5%) are benign.
Table 1: Wisconsin breast cancer data description of attributes




Needed Concepts


Feature selection techniques

As mentioned, the feature selection module uses the GCNC technique. GCNC consists of three steps: (1) graph representation of the problem space, (2) feature clustering, and (3) search for the best representative features from each cluster by applying node centrality and term variance measures. In the first step, the feature set is represented as a weighted graph in which each node denotes a feature and each edge weight indicates the similarity between its corresponding features. In the second step, the features are divided into several clusters using a community detection method; the goal of clustering is to group the most correlated features into the same cluster. Finally, in the third step, node centrality concepts are used to select the most informative features from each cluster. More details regarding GCNC can be found in [39].
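The three steps above can be sketched in a few lines. This is an illustrative simplification, not the exact GCNC algorithm of [39]: it assumes absolute Pearson correlation as the similarity measure, thresholded connected components as a stand-in for a real community detection method, and weighted degree scaled by term variance as the centrality score.

```python
import numpy as np

def select_features_graph(X, n_keep_per_cluster=1, sim_threshold=0.6):
    """Simplified sketch of graph-based feature selection:
    (1) build a feature-similarity graph, (2) cluster the features,
    (3) keep the most central feature(s) of each cluster."""
    d = X.shape[1]
    # Step 1: weighted graph -- edge weight = |Pearson correlation|
    W = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(W, 0.0)

    # Step 2: crude "community detection": connected components of the
    # graph after dropping weak edges (stand-in for a real method)
    adj = W >= sim_threshold
    unvisited, clusters = set(range(d)), []
    while unvisited:
        stack = [unvisited.pop()]
        comp = set(stack)
        while stack:
            u = stack.pop()
            for v in np.flatnonzero(adj[u]):
                if v in unvisited:
                    unvisited.remove(v); comp.add(v); stack.append(v)
        clusters.append(sorted(comp))

    # Step 3: node centrality (weighted degree) scaled by term variance
    centrality = W.sum(axis=0) * X.var(axis=0)
    selected = []
    for comp in clusters:
        ranked = sorted(comp, key=lambda j: -centrality[j])
        selected.extend(ranked[:n_keep_per_cluster])
    return sorted(int(j) for j in selected)
```

Two strongly correlated features end up in the same cluster, so only the more central of the two is retained.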

Optimization algorithm

In a natural bee swarm, there are three kinds of honey bees that search for food: the employed bees, the onlookers, and the scouts (the onlookers and the scouts are together called unemployed bees). The employed bees search for food around the food sources in their memory and pass their food information to the onlookers. The onlookers tend to select good food sources from those found by the employed bees and then search further around the selected sources. The scouts are transformed from a few employed bees that abandon their food sources and search for new ones. In short, the food search is performed collectively by the employed bees, the onlookers, and the scouts. The framework of the ABC algorithm [40],[41],[42] is described in [Figure 1].
Figure 1: Framework of artificial bee colony algorithm



There are some important details that should be pointed out for the framework of the ABC algorithm described in [Figure 1]. First, the update process used in the onlooker stage is the same as that in the employed bee stage. Given a solution xi to be updated (xi denotes the ith solution in the population), let vi = xi. In the update process, a new candidate solution is first generated by the following solution search equation:

vij = xij + φij (xij − xkj)

where xij (or vij) denotes the jth element of xi (or vi), j is a random index, xk denotes another solution selected randomly from the population, and φij is a uniform random number in (−1, 1). Then, a greedy selection is done between xi and vi, which completes the update process.

Second, in the onlooker stage, the solutions are selected according to the probability pi = fiti / Σn fitn, where fiti denotes the fitness value of the ith solution in the population. Third, the main distinction between the employed bee stage and the onlooker stage is that every solution in the employed bee stage undergoes the update process, while only the selected solutions are updated in the onlooker stage. Fourth, an inactive solution in the scout stage refers to a solution that does not change over a certain number of generations.
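A minimal sketch of the full ABC loop described above (employed, onlooker, and scout stages with the greedy update) might look as follows. The function and parameter names are our own choices, and the fitness transformation used for onlooker selection is one common option, not necessarily the exact one in [40],[41],[42].

```python
import numpy as np

def abc_minimize(f, bounds, n_food=10, limit=20, max_iter=100, seed=0):
    """Minimal artificial-bee-colony sketch (illustrative, not the
    paper's exact implementation). Minimizes f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n_food, dim))     # food sources
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)

    def update(i):
        # solution search equation: v_ij = x_ij + phi * (x_ij - x_kj)
        j = rng.integers(dim)
        k = rng.choice([s for s in range(n_food) if s != i])
        v = X[i].copy()
        v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        v = np.clip(v, lo, hi)
        fv = f(v)
        if fv < fit[i]:                              # greedy selection
            X[i], fit[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_food):                      # employed bee stage
            update(i)
        # onlooker stage: probabilistic selection by (transformed) fitness
        qual = 1.0 / (1.0 + fit - fit.min())
        for i in rng.choice(n_food, size=n_food, p=qual / qual.sum()):
            update(i)
        # scout stage: re-initialize solutions inactive for too long
        for i in np.flatnonzero(trials > limit):
            X[i] = rng.uniform(lo, hi)
            fit[i], trials[i] = f(X[i]), 0
    best = fit.argmin()
    return X[best], fit[best]
```

For example, minimizing the 2-D sphere function over [−5, 5]² converges to near zero within the default budget.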

Radial basis function neural network

RBFNN is one of the most important ANN paradigms in the machine learning field. It is a feedforward network with a single layer of hidden units, called RBFs. An RBF output takes its maximum value at the center point and decreases as the input moves away from the center. Typically, the Gaussian function is used as the activation function. The RBF network is constructed with three layers: an input layer, a hidden layer, and an output layer [Figure 2].
Figure 2: Structure of radial basis function neural network



In the input layer, the number of neurons is the same as the input dimension. The input layer transmits the data to the hidden layer, which computes the values of the RBFs; these values are then transmitted to the output layer, which computes their linear sum. In this study, the Gaussian function is used as the RBF. Let φj(x) be the jth RBF:

φj(x) = exp(−‖x − cj‖² / (2σj²))

Here, x = (x1, x2, …, xd)T is the input vector, and cj = (c1j, c2j, …, cdj)T and σj² are the jth center vector and width parameter, respectively. The output y of the RBF network, the linear sum of the RBFs, is given by:

y = Σj=1…p wj φj(x)

where y is the output of the RBF network, p is the number of hidden layer neurons, and wj is the weight from the jth hidden neuron to the output layer. To construct an RBF network, the number of hidden neurons p must be set, and the centers cj, widths σj, and weights wj must be estimated. In typical RBF learning, the network structure is determined based on prior knowledge or the experience of experts, and the parameters are estimated using either clustering or the least mean squares method.[43]
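With fixed centers and widths, the two formulas above reduce prediction to a matrix product, and the output weights can then be estimated by linear least squares, one of the standard options mentioned. A minimal sketch (function names are our own):

```python
import numpy as np

def rbf_design_matrix(X, centers, widths):
    """Gaussian RBF activations: phi_j(x) = exp(-||x - c_j||^2 / (2*sigma_j^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def fit_rbf_weights(X, y, centers, widths):
    """With centers/widths fixed, the output weights w_j solve a
    linear least-squares problem."""
    Phi = rbf_design_matrix(X, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, widths, w):
    """Network output: y = sum_j w_j * phi_j(x)."""
    return rbf_design_matrix(X, centers, widths) @ w
```

For instance, 20 evenly spaced Gaussian centers of width 1 fit sin(x) on [−3, 3] almost exactly.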


Proposed System


In this paper, an intelligent system is proposed for breast cancer tumor type recognition. The system consists of three stages: a feature selection stage, a classifier stage, and an optimization stage. The main structure of the proposed system is shown in [Figure 3].
Figure 3: The main structure of proposed system



Feature selection techniques are used for three reasons:

  1. Simplification of models to make them easier to interpret by researchers/users [44]
  2. Shorter training times
  3. Enhanced generalization by reducing overfitting [45] (formally, reduction of variance [44]).


In the first stage, the input feature vector dimension is reduced and effective features are selected using the GCNC technique. This eliminates unnecessary data.

In the second stage, the classifier uses these inputs and classifies the breast cancer data. In the classifier stage, we use an RBFNN, a popular type of neural network that is very useful for pattern classification problems.[46] The training of an RBFNN involves the minimization of an error function, which defines the total difference between the actual output and the desired output of the network over a set of training patterns. Training proceeds by presenting to the network a pattern of known class taken from the training set. The error component associated with that pattern is the sum of the squared differences between the desired and actual outputs of the network for the presented pattern. The procedure is repeated for all the patterns in the training set, and the error components are summed to yield the value of the error function for an RBFNN with a given set of basis function centers, widths, and neuron connection weights. With the standard procedure for training RBF networks, after the number of hidden neurons (p) has been decided, the following steps are taken:

  1. Choose the RBF centers cj; center selection could be performed by trial and error, self-organized or supervised
  2. Choose widths σj; several heuristic methods are available. A popular method is to set σj equal to the distance from cj to its nearest center
  3. Calculate the neuron weights wj.


As mentioned, in RBFNN training, the number of RBFs and their respective centers and widths play a very important role in performance. Therefore, the ABC algorithm is used to select appropriate parameters of the classifier. A sample bee is illustrated in [Figure 4].
Figure 4: Sample of bee




Simulation Results


In this section, we evaluate the performance of the proposed recognizer using the WBCD, which contains 699 records. We used 30% of the data (210 samples) for training the classifier and the rest (489 samples) for testing. All the reported results are the average of fifty independent runs. The computational experiments were performed on an ASUS computer with an Intel Core 2 Duo processor and 2 GB of RAM. The programs were implemented in the MATLAB environment (version 7.8.0.347 [R2009a], MathWorks, Natick, MA, USA) using the Neural Networks Toolbox. Several experiments were carried out to evaluate the proposed method.
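The evaluation protocol just described (a random 30%/70% train/test split, with accuracies averaged over independent runs) can be sketched generically; `train_fn` is a placeholder of our own for any classifier trainer that returns a predictor.

```python
import numpy as np

def average_accuracy(X, y, train_fn, n_runs=50, train_frac=0.30, seed=0):
    """Repeat a random train/test split n_runs times and average
    the test accuracy (a sketch of the protocol described above)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    n_train = int(round(train_frac * n))
    accs = []
    for _ in range(n_runs):
        idx = rng.permutation(n)
        tr, te = idx[:n_train], idx[n_train:]
        predict = train_fn(X[tr], y[tr])          # returns a predictor
        accs.append(np.mean(predict(X[te]) == y[te]))
    return float(np.mean(accs))
```

With 699 samples and train_frac = 0.30, each run trains on 210 samples and tests on the remaining 489, matching the split used here.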

Experiment 1: The performance of the radial basis function neural network with raw data

In this experiment, the raw WBCD data are used as input to the RBFNN. The number of RBFs is set equal to the number of training samples, and various width values are tested. The obtained results are listed in [Table 2]. The network with 210 RBFs and a width of 1 gives the best performance (93.97%). It can also be seen that there is no linear relation between the width value and the performance of the RBFNN; therefore, the width must be found by trial and error, based on extensive simulations. This manner of network topology selection is very time consuming. [Figure 5] shows the effect of the width on RBFNN performance.
Table 2: The performance of the radial basis function neural network with raw data (number of radial basis functions equal to the number of training samples)

Figure 5: The effect of width (spread) on radial basis function neural network performance (210 radial basis functions)



In the next experiment, the number of RBFs is selected to be less than the number of training samples. For a broader investigation, various numbers of RBFs and various width values are considered. The obtained results are listed in [Table 3] and [Table 4]. In [Table 3], an RBFNN with fifty RBFs is built and the width is varied from 0.5 to 20; the variation of the RA with the width is plotted in [Figure 6]. A similar experiment is done for a network with 100 RBFs; the results are listed in [Table 4] and shown in [Figure 7]. The performance of the network with fifty RBFs is better than that of the network with 210 RBFs: in the latter, the best RA was 93.97%, whereas with 50 RBFs and a width of 1, the highest RA is 97.99%. These results show that changing the number of RBFs improves the RA significantly. The effect of width variation on network performance is shown in [Figure 6] and [Figure 7].
Table 3: The performance of radial basis function neural network with 50 radial basis functions

Table 4: The performance of radial basis function neural network with 100 radial basis functions

Figure 6: The effect of width (spread) on radial basis function neural network performance (50 radial basis functions)

Figure 7: The effect of width (spread) on radial basis function neural network performance (100 radial basis functions)



In the previous experiments, the effect of the width on RBFNN performance was investigated, and it was seen that there is no linear relation between the width value and network performance. In the next experiment, the effect of the number of RBFs is investigated: the number of RBFs is changed from 1 to 210 while the width is fixed. In [Figure 8], the width is fixed at 0.5 and the number of RBFs is changed from 1 to 210 (210 is the number of training samples). The performance of the RBFNN is highly dependent on the number of RBFs; the best RA (98.66%) is obtained by a network with 65 RBFs and a width of 0.5. In [Figure 9], the width is fixed at 1 and the number of RBFs is again changed from 1 to 210. Similarly, the performance is highly dependent on the number of RBFs; in this case, the highest RA (98.32%) is obtained by a network with six RBFs and a width of 1. These experiments show that RBFNN performance is highly dependent on both the number of RBFs and the width value.
Figure 8: The effect of radial basis function number on network performance (width = 0.5)

Figure 9: The effect of radial basis function number on network performance (width = 1)

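The trial-and-error sweeps in the experiments above amount to an exhaustive grid search over the (number of RBFs, width) pair, which is precisely the cost the ABC optimizer is introduced to avoid. A generic sketch, where `evaluate` stands in for any function returning the test accuracy of a configuration:

```python
import itertools
import numpy as np

def grid_search(evaluate, n_rbf_grid, width_grid):
    """Exhaustively try every (number of RBFs, width) pair and keep
    the pair with the best accuracy (the manual sweep done above)."""
    best_params, best_acc = None, -np.inf
    for n_rbf, width in itertools.product(n_rbf_grid, width_grid):
        acc = evaluate(n_rbf, width)
        if acc > best_acc:
            best_params, best_acc = (n_rbf, width), acc
    return best_params, best_acc
```

The sweep grows multiplicatively with each grid's size, which is why the paper replaces it with ABC.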


Experiment 2: The performance of the proposed method

In this section, the performance of the proposed system is investigated. The previous section showed the importance of the number of RBFs and the width value; in the proposed method, these parameters are selected by ABC, and the best features are selected by GCNC. Using GCNC, the second feature of the WBCD was found to be redundant and was removed, leaving the eight other features; the dimension of the input dataset is thus reduced from 9 to 8. With the elimination of redundant features, the input dimension is reduced, so the RA increases and the computation volume decreases.

The obtained results are listed in [Table 5]. In the first row of this table, the raw data are used, but the number of RBFs and the width value are selected by ABC; in this case, the RA increases to 98.99%. The second row of [Table 5] lists the results of the proposed method, in which the selected features are used as input to the RBFNN with the optimal structure and, as in the first row, the number of RBFs and the width value are selected by ABC. The proposed system recognizes the tumor types with 99.59% accuracy, a significant increase in RA that shows the importance of feature selection and optimization.
Table 5: The performance of proposed system (graph clustering with node centrality + artificial bee colony+radial basis function neural network)



To show the details of the recognition for each type of tumor, the confusion matrix of the recognizer is given in [Table 6]. The values on the diagonal of the confusion matrix show the correct performance of the recognizer for each pattern, i.e., the number of patterns recognized correctly by the system; the off-diagonal values show the mistakes of the system. In the test dataset, there are 359 benign cases and 130 malignant cases. For example, in the first row of the matrix, the value 358 is the number of benign tumors recognized correctly, and the value 1 shows that one benign tumor was wrongly recognized as malignant. The overall RA of the system is obtained by dividing the sum of the diagonal entries by the total number of test samples.
Table 6: The confusion matrix for best result (99.59%)

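The accuracy computation is simply the sum of the diagonal over the total count. Using the counts stated above (358 of 359 benign cases correct) and assuming 129 of 130 malignant cases correct, which is the split that reproduces the reported 99.59%, a quick check:

```python
import numpy as np

def accuracy_from_confusion(cm):
    """Overall recognition accuracy from a confusion matrix:
    correct decisions (the diagonal) divided by all decisions."""
    cm = np.asarray(cm)
    return cm.trace() / cm.sum()
```

With the assumed matrix [[358, 1], [1, 129]], 487 of 489 test samples are correct, i.e., 99.59%.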


Experiment 3: Performance evaluation with optimization in different runs

In this subsection, five different runs were performed to evaluate the performance of the ABC. [Figure 10] shows a typical increase in the fitness (classification accuracy) of the best individual of the population obtained from the proposed system for the different runs. As indicated in the figure, the fitness curves improved gradually from iteration 0 to 100 and exhibited no significant improvement after iteration 50 for any of the five runs. The optimal stopping iteration, giving the highest validation accuracy, was around iteration 40–50 for all five runs.
Figure 10: Evolution of fitness function for different runs



To compare the performance of ABC with other nature-inspired algorithms, we used the genetic algorithm,[47] particle swarm optimization,[48] and the imperialist competitive algorithm [49] to evolve the proposed method. According to the results in [Table 7], the best accuracy obtained for the test set by ABC-RBFNN is 99.59%; the success rate of ABC is higher than that of the other nature-inspired algorithms.
Table 7: Comparison among the performance of different optimization algorithms



Comparison and discussion

For comparison purposes, [Table 8] gives the classification accuracies of our method and of previous methods applied to the same database. As can be seen from the results, the proposed method obtains excellent classification accuracy.
Table 8: Classification accuracies obtained with proposed method and other classifiers from literature




Conclusion


In this paper, an intelligent system is proposed for early breast cancer detection based on mammography information. In the proposed system, GCNC is applied to remove the redundant features and improve the RA, and the ABC optimization algorithm is used to find the optimal structure of the RBFNN. To demonstrate the advantages of the proposed system, several experiments were performed.

In the first experiment, the number of RBFs was fixed and the width value was changed from 0.5 to 20. The simulation results showed that the performance of the RBFNN is highly dependent on the width value. An important observation is that there is no linear relation between RBFNN performance and the width value; therefore, in RBFNN applications, the value of this parameter must be selected by trial and error.

In the next experiment, the width value was fixed and the number of RBFs was changed from 1 to 210. The simulation results show that the performance of the RBFNN is highly dependent on the number of RBFs; as in the previous case, there is no linear relation between the number of RBFs and network performance. This experiment also reveals that a network with a large number of RBFs has reduced generalization capability and therefore a reduced RA.

Having established the importance of the network structure, we used the ABC algorithm to find the best structure of the network, and GCNC to find the best features and remove the redundant ones from the original dataset. Even the best classifier will perform poorly if the features are not chosen well; a feature selection algorithm should reduce the feature vector to a lower dimension that retains most of the useful information of the original vector. The RBFNN with the optimal structure and the selected features classified the tumors with 99.59% accuracy. The proposed system has a high RA, and we therefore recommend it for breast cancer tumor type recognition.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1.
Available from: http://www.cancer.gov. [Last accessed on 2015 Mar 01].  Back to cited text no. 1
    
2.
Available from: http://www.breastcancer.org. [Last accessed on 2015 Mar 01].  Back to cited text no. 2
    
3.
Endogenous Hormones and Breast Cancer Collaborative Group. Steroid hormone measurements from different types of assays in relation to body mass index and breast cancer risk in postmenopausal women: Reanalysis of eighteen prospective studies. Steroids 2015;99:49-55.  Back to cited text no. 3
[PUBMED]    
4.
Stieber P, Nagel D, Blankenburg I, Heinemann V, Untch M, Bauerfeind I, et al. Diagnostic efficacy of CA 15-3 and CEA in the early detection of metastatic breast cancer – A retrospective analysis of kinetics on 743 breast cancer patients. Clin Chim Acta 2015;448:228-31.  Back to cited text no. 4
[PUBMED]    
5.
Fortune ML. The impact of the national breast and cervical cancer early detection program on breast cancer outcomes for women in Mississippi. J Cancer Policy 2015;6:25-32.  Back to cited text no. 5
    
6.
Pasolli E, Melgani F. Genetic algorithm-based method for mitigating label noise issue in ECG signal classification. Biomed Signal Process Control 2015;19:130-6.  Back to cited text no. 6
    
7.
Ramírez J, Monasterio V, Mincholé A, Llamedo M, Lenis G, Cygankiewicz I, et al. Automatic SVM classification of sudden cardiac death and pump failure death from autonomic and repolarization ECG markers. J Electrocardiol 2015;48:551-7.  Back to cited text no. 7
    
8.
Shin Y, Lee S, Ahn M, Cho H, Jun SC, Lee HN. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications. Comput Biol Med 2015;66:29-38.  Back to cited text no. 8
[PUBMED]    
9.
Bhardwaj A, Tiwari A. Breast cancer diagnosis using Genetically Optimized Neural Network model. Expert Syst Appl 2015;42:4611-20.  Back to cited text no. 9
    
10.
Dheeba J, Albert Singh N, Tamil Selvi S. Computer-aided detection of breast cancer on mammograms: A swarm intelligence optimized wavelet neural network approach. J Biomed Inform 2014;49:45-52.  Back to cited text no. 10
[PUBMED]    
11.
Kalteh AA, Zarbakhsh P, Jirabadi M, Addeh J. A research about breast cancer detection using different neural networks and K-MICA algorithm. J Cancer Res Ther 2013;9:456-66.  Back to cited text no. 11
[PUBMED]    
12.
Hassanien AE, Kim TH. Breast cancer MRI diagnosis approach using support vector machine and pulse coupled neural networks. J Appl Log 2012;10:277-84.  Back to cited text no. 12
    
13.
Marcano-Cedeno A, Quintanilla-Dominguez J, Andina D. WBCD breast cancer database classification applying artificial metaplasticity neural network. Expert Syst Appl 2011;38:9573-9.  Back to cited text no. 13
    
14. Chou S, Lee T, Shao Y, Chen I. Mining the breast cancer pattern using artificial neural networks and multivariate adaptive regression splines. Expert Syst Appl 2004;27:133-42.
15. Knox M, O'Brien A, Szabó E, Smith CS, Fenlon HM, McNicholas MM, et al. Impact of full field digital mammography on the classification and mammographic characteristics of interval breast cancers. Eur J Radiol 2015;84:1056-61.
16. Madadi M, Zhang S, Henderson LM. Evaluation of breast cancer mammography screening policies considering adherence behavior. Eur J Oper Res 2015;247:630-40.
17. Molloi S, Ding H, Feig S. Breast density evaluation using spectral mammography, radiologist reader assessment, and segmentation techniques: A retrospective study based on left and right breast comparison. Acad Radiol 2015;22:1052-9.
18. Bagui S, Bagui S, Pal K, Pal N. Breast cancer detection using rank nearest neighbor classification rules. Pattern Recognit 2003;36:25-34.
19. Furundzic D, Djordjevic M, Bekic A. Neural networks approach to early breast cancer detection. J Syst Archit 1998;44:17-33.
20. Dengler J, Behrens S, Desaga J. Segmentation of microcalcifications in mammograms. IEEE Trans Med Imaging 1993;12:634-42.
21. Onan A. A fuzzy-rough nearest neighbor classifier combined with consistency-based subset evaluation and instance selection for automated diagnosis of breast cancer. Expert Syst Appl 2015;42:6844-52.
22. Polat K, Şahan S, Kodaz H, Güneş S. Breast cancer and liver disorders classification using artificial immune recognition system (AIRS) with performance evaluation by fuzzy resource allocation mechanism. Expert Syst Appl 2007;32:172-83.
23. Palma G, Bloch I, Muller S. Detection of masses and architectural distortions in digital breast tomosynthesis images using fuzzy and a contrario approaches. Pattern Recognit 2014;47:2467-80.
24. Mitra S, Hayashi Y. Neuro-fuzzy rule generation: Survey in soft computing framework. IEEE Trans Neural Netw 2000;11:748-57.
25. Nieto J, Torres A. Midpoint for fuzzy sets and their application in medicine. Artif Intell Med 2003;27:321-55.
26. Keles A, Keles A, Yavuz U. Expert system based on neuro-fuzzy rules for diagnosis breast cancer. Expert Syst Appl 2011;38:5719-26.
27. Hemanth DJ, Vijila CK, Anitha J. Application of neuro-fuzzy model for MR brain tumor image classification. Int J Biomed Imaging 2009;16:95-102.
28. de Sampaio WB, Silva AC, de Paiva AC, Gattass M. Detection of masses in mammograms with adaption to breast density using genetic algorithm, phylogenetic trees, LBP and SVM. Expert Syst Appl 2015;42:8911-28.
29. de Oliveira FS, de Carvalho Filho AO, Silva AC, de Paiva AC, Gattass M. Classification of breast regions as mass and non-mass based on digital mammograms using taxonomic indexes and SVM. Comput Biol Med 2015;57:42-53.
30. Kerhet A, Raffetto M, Boni A, Massa A. A SVM-based approach to microwave breast cancer detection. Eng Appl Artif Intell 2006;19:807-18.
31. Ubeyli E. Implementing automated diagnostic systems for breast cancer detection. Expert Syst Appl 2007;33:1054-62.
32. Frie T, Cristianini N, Campbell C. The kernel-adatron algorithm: A fast and simple learning procedure for support vector machines. In: Machine Learning: Proceedings of the Fifteenth International Conference 1998;98:188-96.
33. Karabatak M, Ince M. An expert system for detection of breast cancer based on association rules and neural network. Expert Syst Appl 2009;36:3465-9.
34. Yunmei F, Juntao F, Kaiqi M. Model reference adaptive sliding mode control using RBF neural network for active power filter. Int J Electr Power Energy Syst 2015;73:249-58.
35. Wu ZQ, Jia WJ, Zhao LR, Wu CH. Maximum wind power tracking based on cloud RBF neural network. Renewable Energy 2016;86:466-72.
36. Assareh E, Biglari M. A novel approach to capture the maximum power from variable speed wind turbines using PI controller, RBF neural network and GSA evolutionary algorithm. Renewable Sustain Energy Rev 2015;51:1023-37.
37. Shi Y, Yu DL, Tian Y, Yaowu S. Air-fuel ratio prediction and NMPC for SI engines with modified Volterra model and RBF network. Eng Appl Artif Intell 2015;45:313-24.
38. Jerez-Aragones JM, Gomez-Ruiz JA, Ramos-Jimenez G, Munoz-Perez J, Alba-Conejo E. A combined neural network and decision trees model for prognosis of breast cancer relapse. Artif Intell Med 2003;27:45-63.
39. Moradi P, Rostami M. A graph theoretic approach for unsupervised feature selection. Eng Appl Artif Intell 2015;44:33-45.
40. Karaboga D. An Idea Based on Honey Bee Swarm for Numerical Optimization. Erciyes University, Kayseri, Turkey, Technical Report-TR06; 2005.
41. Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J Glob Optim 2007;39:459-71.
42. Karaboga D, Basturk B. On the performance of artificial bee colony (ABC) algorithm. Appl Soft Comput 2008;8:687-97.
43. Leonardis A, Bischof H. An efficient MDL-based construction of RBF networks. Neural Netw 1998;11:963-73.
44. James G, Witten D, Hastie T, Tibshirani R. An Introduction to Statistical Learning. New York: Springer; 2013. p. 204.
45. Bermingham ML, Pong-Wong R, Spiliopoulou A, Hayward C, Rudan I, Campbell H, et al. Application of high-dimensional feature selection: Evaluation for genomic prediction in man. Sci Rep 2015;5:1-12.
46. Bishop CM. Neural Networks for Pattern Recognition. Oxford: Clarendon Press; 1995.
47. Tang KS, Man KF, Kwong S, He Q. Genetic algorithms and their applications. IEEE Signal Process Mag 1996;13:22-37.
48. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of IEEE International Conference on Neural Networks. Vol. 4. 1995. p. 1942-8.
49. Atashpaz-Gargari E, Lucas C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In: Proceedings of the IEEE Congress on Evolutionary Computation, Singapore. 2007. p. 4661-7.
50. Quinlan J. Improved use of continuous attributes in C4.5. J Artif Intell Res 1996;4:77-90.
51. Hamilton H, Shan N, Cercone N. RIAC: A rule induction algorithm based on approximate classification. In: International Conference on Engineering Applications of Neural Networks, University of Regina; 1996.
52. Ster B, Dobnikar A. Neural networks in medical diagnosis: Comparison with other methods. In: Proceedings of the International Conference on Engineering Applications of Neural Networks. 1996. p. 427-30.
53. Nauck D, Kruse R. Obtaining interpretable fuzzy classification rules from medical data. Artif Intell Med 1999;16:149-69.
54. Pena-Reyes C, Sipper M. A fuzzy-genetic approach to breast cancer diagnosis. Artif Intell Med 1999;17:131-55.
55. Setiono R. Generating concise and accurate classification rules for breast cancer diagnosis. Artif Intell Med 2000;18:205-17.
56. Abonyi J, Szeifert F. Supervised fuzzy clustering for the identification of fuzzy classifiers. Pattern Recognit Lett 2003;24:2195-207.
57. Guijarro B, Fontenla O, Perez B, Fraguela P. A linear learning method for multilayer perceptrons using least-squares. Lect Notes Comput Sci 2007;11:365-74.
58. Akay M. Support vector machines combined with feature selection for breast cancer diagnosis. Expert Syst Appl 2009;36:3240-7.
59. Peng L, Yang B, Jiang J. A novel feature selection approach for biomedical data classification. J Biomed Inform 2009;179:809-19.
    


Figures

[Figure 1], [Figure 2], [Figure 3], [Figure 4], [Figure 5], [Figure 6], [Figure 7], [Figure 8], [Figure 9], [Figure 10]

Tables

[Table 1], [Table 2], [Table 3], [Table 4], [Table 5], [Table 6], [Table 7], [Table 8]



 
