Clin Infect Immun
Clinical Infection and Immunity, ISSN 2371-4972 print, 2371-4980 online, Open Access
Article copyright, the authors; Journal compilation copyright, Clin Infect Immun and Elmer Press Inc
Journal website https://www.ciijournal.org

Original Article

Volume 7, Number 2, September 2022, pages 37-48


Transfer Learning-Based Model for Automated COVID-19 Detection Using Computerized Tomography Scan Graph

Chao Zhanga, Guang Dong Huanga, b

aSchool of Science, China University of Geosciences, Haidian District, Beijing, China
bCorresponding Author: Guang Dong Huang, School of Science, China University of Geosciences, Haidian District, Beijing, China

Manuscript submitted May 31, 2022, accepted June 9, 2022, published online September 29, 2022
Short title: Automated COVID-19 Detection Using CT
doi: https://doi.org/10.14740/cii154

Abstract

Background: Coronavirus disease 2019 (COVID-19) has had a huge impact on healthcare systems worldwide since 2019. In this study, we examined how combining transfer learning with traditional classifiers performs for image classification of healthy people, COVID-19 patients, pneumonia patients, and lung cancer patients.

Methods: Different combinations of image preprocessing methods and transfer learning architectures were tested and evaluated, and the best-performing combination was chosen as the feature extractor. Features were then classified by a support vector machine (SVM) whose parameters were optimized by the particle swarm optimization (PSO) algorithm.

Results: When the VGG16 architecture was combined with the PSO-SVM approach, the model achieved a recognition accuracy of 93.5%.

Conclusions: The experimental results suggest that VGG16 can reach high accuracy within a small number of epochs, and that using VGG16 as a feature extractor combined with an SVM and an appropriate optimization algorithm improves classification performance. The newly developed classification algorithm may help clinicians lighten their workload when facing COVID-19 diagnostic problems.

Keywords: Transfer learning; COVID-19; PSO; SVM

Introduction

In 2019, coronavirus disease 2019 (COVID-19) emerged in Wuhan, China. Due to its high infection and fatality rates, COVID-19 has wreaked havoc on global society. As of 5:37 pm Central European Summer Time (CEST) on June 3, 2022, more than 500 million confirmed cases of COVID-19, including about 6 million deaths, had been reported to the World Health Organization (WHO). Accurate and fast diagnosis of COVID-19 patients is therefore of great importance for public health.

It is obvious that swift and accurate diagnosis of COVID-19 is essential for giving patients appropriate treatment and effectively controlling transmission. Reverse transcription-polymerase chain reaction (RT-PCR) on oral-nasopharyngeal swabs is the most widely used method for diagnosing COVID-19; its low cost and convenience have made it the standard for large-scale screening. Computed tomography (CT) findings are now also used as diagnostic criteria for COVID-19 [1]. Compared with RT-PCR, chest CT has been shown to have high sensitivity and to deliver faster results for diagnosing COVID-19 [2]. Because of its higher cost, however, chest CT is usually reserved for patients with suspected infection. Given the large number of patients, an artificial intelligence (AI) system that automatically classifies medical images could help reduce doctors' workload and improve the patient admission rate.

The use of computers to assist in detecting disease in medical images dates back to the 1960s. For COVID-19 detection from medical images, many reports have shown steady progress toward highly accurate detection. In a recent study, Hemdan et al compared the classification results of seven deep learning models applied to X-rays of COVID-19 patients and of healthy individuals, and showed that VGG19 worked best, achieving 90% accuracy [3]. Islam et al proposed an ensemble convolutional neural network (CNN)-recurrent neural network (RNN) architecture, in which image features were extracted by transfer learning models such as VGG19, DenseNet121, InceptionV3, and InceptionResNetV2 and then classified by an RNN [4]. Many researchers have thus built models that efficiently distinguish COVID-19 images from images of healthy people. During the COVID-19 pandemic, however, binary classifiers no longer meet clinical needs: in some cases, an AI method must not only indicate COVID-19 versus non-COVID-19 but also diagnose the exact lung disease with a high accuracy rate.

In this paper, a COVID-19 classification model is implemented. It has three main novelties: 1) by testing different combinations of image preprocessing methods and CNNs, the gamma transform combined with the VGG architecture is found to give the best feature extraction performance; 2) images are classified by a support vector machine (SVM) optimized with particle swarm optimization (PSO) to improve the accuracy rate; 3) the proposed model classifies CT images into four classes: COVID-19, pneumonia, lung cancer, and healthy individuals.

Materials and Methods

Methodology

Workflow

The proposed methodology follows a standard image processing pipeline of data acquisition, preprocessing, feature extraction, and classification. The flowchart in Figure 1 gives an overview of the proposed model. Institutional Review Board approval is not applicable because the study is not a clinical trial. The study was conducted in compliance with the ethical standards of the responsible institution on human subjects as well as with the Helsinki Declaration.


Figure 1. Workflow of the process. CLAHE: contrast-limited adaptive histogram equalization; SVM: support vector machine; PSO: particle swarm optimization.

Data acquisition

The dataset used in this paper was collected from several sources. Available COVID-19, pneumonia, and lung cancer CT images were collected from GitHub and Kaggle; medical images of healthy individuals were collected from ImageNet. In total, more than 6,000 CT images were gathered from these websites. However, their quality varies considerably, and not all of them are suitable for training and testing: some images had too low a resolution, and some CT images provided no information about the lung. After screening, 4,650 images were included in the study: 1,600 CT images of COVID-19, 850 of healthy people, 1,100 of lung cancer, and 1,100 of pneumonia. Ninety percent of the images were used as training data, and the remaining 10% were used to test the final accuracy of the model. Figure 2 illustrates the data collection details.


Figure 2. Data set collection and partition. CT: computed tomography; COVID-19: coronavirus disease 2019.
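As a rough sketch of the 90/10 partition step (the helper name, fixed seed, and use of a shuffled index split are our own conventions, not taken from the paper), the split can be implemented as:

```python
import numpy as np

def split_indices(n_images: int, test_frac: float = 0.1, seed: int = 0):
    """Shuffle image indices and hold out test_frac of them for final testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_images)
    n_test = int(n_images * test_frac)
    return idx[n_test:], idx[:n_test]  # train indices, test indices

# 4,650 screened images -> 4,185 for training, 465 for the final test.
train_idx, test_idx = split_indices(4650)
```

Shuffling before splitting avoids ordering effects from the way the images were downloaded (all COVID-19 images first, for example).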

Preprocessing

CT scans contain artifacts such as beam hardening, noise, and scatter, which reduce the accuracy of the model. To overcome this, image enhancement techniques were used.

1) Region of interest (ROI)

The edge region of the image provides no valid information. Therefore, an inscribed ellipse region is extracted, and the region outside it is masked to reduce the influence of invalid information. Figure 3 illustrates the ROI process.


Figure 3. Region of interest (red ellipse).
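A minimal sketch of this masking step in pure NumPy (the function name and the choice of zero as the fill value outside the ellipse are our assumptions):

```python
import numpy as np

def mask_outside_inscribed_ellipse(image: np.ndarray) -> np.ndarray:
    """Keep only the inscribed ellipse of a 2D grayscale image; zero the rest."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0  # ellipse center
    ys, xs = np.ogrid[:h, :w]
    # Pixels satisfying the ellipse inequality are inside the ROI.
    inside = ((ys - cy) / (h / 2.0)) ** 2 + ((xs - cx) / (w / 2.0)) ** 2 <= 1.0
    return np.where(inside, image, 0)
```

The corners of the scan, which carry no lung information, are zeroed while the central elliptical region is passed through unchanged.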

2) Image enhancement

Image enhancement is the process of making images more useful. It is an important preprocessing technique that highlights key information in an image and suppresses unimportant information to strengthen image features. After enhancement, the images are more suitable for a specific application than the originals. Two mainstream enhancement techniques were tested in this paper and are briefly introduced in the following sections.

a) Contrast-limited adaptive histogram equalization (CLAHE)

CLAHE is an adaptive contrast enhancement method based on adaptive histogram equalization (AHE), in which a histogram is calculated for the contextual region of each pixel. The pixel's intensity is then transformed to a value within the display range proportional to the rank of that intensity in the local intensity histogram. CLAHE modifies AHE by applying a user-specified maximum to the height of the local histogram, thereby limiting the maximum contrast enhancement factor. As a result, the enhancement effect is reduced in very homogeneous areas of the image, which prevents over-enhancement of noise [5, 6].

In recent studies, CLAHE has usually been applied to improve the distinction between the background and tinted dark areas [7]. The effect of the CLAHE transform can be observed in Figure 4.


Figure 4. Image enhancement with CLAHE. (a1-d1) show CT images of a healthy, COVID-19, pneumonia, and lung cancer individual, respectively, without enhancement; (a2-d2) show the same cases after CLAHE preprocessing. CLAHE: contrast-limited adaptive histogram equalization; CT: computed tomography; COVID-19: coronavirus disease 2019.
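The contrast-limiting idea can be sketched in pure NumPy. Note that this simplified version applies the clipped-histogram equalization globally; real CLAHE (for example OpenCV's `cv2.createCLAHE`) computes it per tile and interpolates between tiles, and the function name and clip limit below are our assumptions:

```python
import numpy as np

def clipped_equalize(img: np.ndarray, clip_limit: int = 40) -> np.ndarray:
    """Histogram equalization with a clipped histogram (the 'CL' in CLAHE)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    excess = np.maximum(hist - clip_limit, 0).sum()      # counts above the clip limit
    hist = np.minimum(hist, clip_limit) + excess // 256  # redistribute excess evenly
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf.astype(np.uint8)[img]                     # apply CDF as a lookup table
```

Clipping the histogram before building the cumulative distribution is what keeps homogeneous regions from being over-enhanced, as described above.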

b) Gamma transform

Gamma transform, also known as power-law transform, is a popular image enhancement method in image recognition. Grayscale values are transformed by the equation:

s = C · r^γ

where r is the grayscale value of the input image, s is the grayscale value of the output image, C is a constant, and γ is the transform coefficient. A CT image can be adjusted with an appropriate power exponent γ. For example, when the image looks dark, the grayscale values need to be enlarged: setting the exponent between 0 and 1 brightens the image, while setting it above 1 darkens the image [8]. Differences between original and transformed images can be observed in Figure 5.


Figure 5. Image enhancement with gamma transform. (a1-d1) show CT images of a healthy, COVID-19, pneumonia, and lung cancer individual, respectively, without enhancement; (a3-d3) show the same cases after gamma transform. CT: computed tomography; COVID-19: coronavirus disease 2019.
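A minimal sketch of the transform above for an 8-bit grayscale image (normalizing intensities to [0, 1] first and taking C = 1 are our conventions):

```python
import numpy as np

def gamma_transform(img: np.ndarray, gamma: float, c: float = 1.0) -> np.ndarray:
    """s = C * r**gamma, applied on intensities normalized to [0, 1]."""
    r = img.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0.0, 255.0).astype(np.uint8)

dark = np.array([[64]], dtype=np.uint8)
brighter = gamma_transform(dark, 0.5)  # exponent in (0, 1) brightens
darker = gamma_transform(dark, 2.0)    # exponent > 1 darkens
```

Because r lies in [0, 1], raising it to an exponent below 1 pushes values upward and an exponent above 1 pushes them downward, exactly the brighten/darken behavior described in the text.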

Transfer learning model selection

CNNs have excellent performance in the field of feature extraction. The nature of convolution and pooling computation makes the extracted features invariant to translation: shifting part of the image does not affect the final features, so there is no need to map shifted content back to its original position. Thanks to this property, the extracted features are less likely to overfit. The overall fitting capacity of the model can be controlled through the choice of convolution and pooling layers and the size of the final output feature vector. The convolutional layer is the first layer of a convolutional network; convolutional layers can be followed by further convolutional or pooling layers, and the fully connected layer comes last. With each layer, the CNN grows in complexity, identifying greater portions of the image: earlier layers focus on simple features such as colors and contours, and as the image data progresses through the layers, the network recognizes larger elements or shapes of the object until it finally identifies the object.

Training a neural network from scratch requires a large amount of data. Since the existing COVID-19 dataset is quite small, we use transfer learning to extract an accurate and concise feature set from the training data. Transfer learning, as used in machine learning, is the reuse of a pre-trained model on a new problem: the machine exploits knowledge gained from a previous task to improve generalization on another. This approach is popular and has achieved excellent results in recent research [9]. In this paper, models pre-trained on ImageNet were downloaded. Although these models and their parameters are not specific to lung CT image classification, they can perform well on this task after adjustment and training, and can be trained in a short time to obtain satisfactory results. In transfer learning, no single algorithm suits every problem, so five different CNN architectures were evaluated in this paper.

1) VGG16

VGG16 is a convolutional neural network model proposed by K. Simonyan and A. Zisserman of the University of Oxford in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition". The model achieved high top-5 test accuracy on ImageNet, a dataset of over 14 million images belonging to 1,000 classes, and was one of the significant innovations that paved the way for much subsequent work in computer vision [10]. As the name suggests, VGG16 consists of 16 weight layers and is very appealing because of its uniform architecture, which can be examined in Figure 6.


Figure 6. VGG16 structure.

2) VGG19

The VGG19 model follows the same concept as VGG16 but contains 19 weight layers.

3) Other structures

In this paper we also tested the following models. Residual networks (ResNets), such as the popular 50-layer ResNet50, are another type of convolutional neural network architecture. A residual network inserts shortcut connections to turn a plain network into its residual counterpart. Compared to VGG, ResNets are less complex since they use fewer filters. DenseNet121 and MobileNet were also included because of their outstanding performance in image classification tasks.

Feature extraction

By combining the different CNN frameworks with the two image enhancement methods, we obtain results for each combination. In general, the combination with the best classification performance extracts the features that carry the most useful information.

Prediction with PSO-SVM

SVM is one of the most popular supervised learning algorithms and can be used for both classification and regression. The essence of the SVM algorithm is to find the decision boundary that best segregates the n-dimensional feature space into classes, so that new data points can easily be assigned to the correct category. This best decision boundary is called a hyperplane [11, 12].

In our model, the radial basis function (RBF) is chosen as the kernel. Two parameters therefore determine the classifier's performance: C and gamma. The larger C is, the less tolerant the model is of misclassification and the easier it is to overfit; the smaller C is, the easier it is to underfit. Parameter C thus affects classification ability. Gamma is a parameter of the RBF kernel: it implicitly determines the distribution of the data after mapping to the new feature space, and it is usually set to 1/K by default, where K is the number of categories [13].

To find the best C and gamma for the model, we use the PSO algorithm. PSO belongs to the family of evolutionary algorithms and was designed by simulating the foraging behavior of a flock of birds. Starting from random solutions, it approaches the optimal solution iteratively, evaluating solution quality with a fitness function, which is defined so that the parameters yielding the best accuracy are found [14].

Experimentation

Testing environment

The proposed methodology is implemented in Python and run on a central processing unit (CPU). The system used is an Intel Core i5 processor at 1.80 GHz with a 4 GB graphics card, a 64-bit operating system, and 20 GB of random access memory (RAM).

Training transfer learning model

Figures 7-11 show the convergence of training and validation accuracy for the transfer learning-based CNN models. These graphs, plotted from the accuracy and loss on the validation set, reflect aspects of the trained models' performance. DenseNet121 and MobileNet are more likely to encounter overfitting after a few training epochs, so the number of epochs is set to 10. VGG16, VGG19, and ResNet50 converge faster than the other two models.


Figure 7. Training accuracy and loss of VGG16. CLAHE: contrast-limited adaptive histogram equalization.


Figure 8. Training accuracy and loss of VGG19. CLAHE: contrast-limited adaptive histogram equalization.


Figure 9. Training accuracy and loss of ResNet50. CLAHE: contrast-limited adaptive histogram equalization.


Figure 10. Training accuracy and loss of DenseNet121. CLAHE: contrast-limited adaptive histogram equalization.


Figure 11. Training accuracy and loss of MobileNet. CLAHE: contrast-limited adaptive histogram equalization.

To measure the performance of the CNN models, the classic evaluation metrics below are introduced:

Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100%

where a true positive (TP) is a correctly detected positive result, a false positive (FP) an incorrectly detected positive result, a true negative (TN) a correctly detected negative result, and a false negative (FN) an incorrectly detected negative result.

Accuracy measures the ratio of correctly recognized cases to the overall number of cases. As the classes in the test dataset are of comparable size, accuracy directly reflects the classifier's performance without correction. It is the most important parameter we consider for model selection.

Precision = TP / (TP + FP) × 100%

Precision is the percentage of correct predictions among all positive predictions. Since there are four classes in this scenario, precision is calculated for each class and then averaged.

Recall = TP / (TP + FN) × 100%

Recall is the proportion of correctly predicted positives among all positive cases. It is likewise calculated for each class and then averaged.

Specificity = TN / (TN + FP) × 100%

Specificity is the proportion of negative instances correctly classified by the model among all actual negative instances. It is averaged in the same way.

F1 score = 2 × (precision × recall) / (precision + recall) × 100%

The F1 score is a statistical measure of a classification model's accuracy; it is the harmonic mean of the model's precision and recall.
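The metrics above can all be computed from a multi-class confusion matrix (rows = true class, columns = predicted class); the macro-averaging described in the text can be sketched as follows (the helper name is ours, and the sketch assumes every class is predicted at least once so no division by zero occurs):

```python
import numpy as np

def macro_metrics(cm) -> tuple:
    """Accuracy plus macro-averaged precision/recall/specificity/F1 from a
    confusion matrix with rows = true labels, columns = predictions."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # predicted as class k but actually another class
    fn = cm.sum(axis=1) - tp          # class k predicted as something else
    tn = cm.sum() - tp - fp - fn
    accuracy = tp.sum() / cm.sum()
    precision = np.mean(tp / (tp + fp))
    recall = np.mean(tp / (tp + fn))
    specificity = np.mean(tn / (tn + fp))
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1
```

In the paper's setting the matrix would be 4 × 4 (healthy, COVID-19, pneumonia, lung cancer); the sketch works for any number of classes.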

As shown in Table 1 below, the gamma transform outperforms the CLAHE transform in most cases. When combined with the gamma transform, the VGG models achieve the best results: VGG16 reaches an accuracy of 91.5%, precision of 92.6%, specificity of 97.2%, and F1 score of 91.3%, while VGG19 reaches an accuracy of 90%, precision of 91.2%, specificity of 96.7%, and F1 score of 89.8% (Table 1).

Table 1. Performance Metrics

Figures 12-16 show the receiver operating characteristic (ROC) curves of the different architectures. A ROC curve shows the performance of a classification model at all classification thresholds.


Figure 12. ROC curve of VGG16. CLAHE: contrast-limited adaptive histogram equalization; AUC: area under curve; ROC: receiver operating characteristic; COVID-19: coronavirus disease 2019.


Figure 13. ROC curve of VGG19. CLAHE: contrast-limited adaptive histogram equalization; AUC: area under curve; ROC: receiver operating characteristic; COVID-19: coronavirus disease 2019.


Figure 14. ROC curve of ResNet50. CLAHE: contrast-limited adaptive histogram equalization; AUC: area under curve; ROC: receiver operating characteristic; COVID-19: coronavirus disease 2019.


Figure 15. ROC curve of DenseNet121. CLAHE: contrast-limited adaptive histogram equalization; AUC: area under curve; ROC: receiver operating characteristic; COVID-19: coronavirus disease 2019.


Figure 16. ROC curve of MobileNet. CLAHE: contrast-limited adaptive histogram equalization; AUC: area under curve; ROC: receiver operating characteristic; COVID-19: coronavirus disease 2019.

As shown in the figures, VGG16 and VGG19 have the best classification results, which is consistent with Table 1. For the predictions of VGG16 combined with the gamma transform, the area under the curve (AUC) is 0.992, 0.996, 0.905, and 0.980 for the healthy, COVID-19, cancer, and pneumonia classes, respectively. The AUCs of the classes are close to one another, meaning VGG16 gives the most balanced results across classes.

For VGG16, the AUC is higher for the healthy and COVID-19 groups. ResNet50 shows the opposite pattern, achieving AUCs of 0.993 and 0.952 for pneumonia and cancer, respectively (Table 2).

Table 2. AUC of ROC Curve

PSO-SVM classification

In this section, the features obtained from transfer learning are fed into the SVM for training. Two main SVM parameters are involved: the penalty parameter C and gamma for the RBF kernel. To obtain the best prediction results, these parameters are optimized with the PSO algorithm.

The following settings are used for the PSO algorithm. The training data are divided into a training set and a validation set in a 4:1 ratio; the training set is used for model fitting and the validation set for computing model accuracy. The population size is set to 10 and the inertia weight to 0.8. Local search capability is ensured by setting c1 = 0.1 and c2 = 0.1.
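Under the settings above (swarm size 10, inertia w = 0.8, c1 = c2 = 0.1), a minimal PSO loop looks like the sketch below. The fitness here is a toy quadratic standing in for validation accuracy as a function of (C, gamma); the function names, search bounds, and iteration count are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(fitness, low, high, n_particles=10, iters=300, w=0.8, c1=0.1, c2=0.1):
    """Minimal particle swarm optimizer over the box [low, high]^dim."""
    low, high = np.asarray(low, float), np.asarray(high, float)
    pos = rng.uniform(low, high, (n_particles, low.size))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, low.size))
        # Inertia pulls along the old velocity; c1/c2 pull toward personal/global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, low, high)
        vals = np.array([fitness(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Toy stand-in for the (C, gamma) search: a bowl with its minimum at (3, 3).
best, best_val = pso_minimize(lambda p: float(((p - 3.0) ** 2).sum()), [0, 0], [10, 10])
```

In the actual model, `fitness` would train an SVM with the candidate (C, gamma) and return the negative validation accuracy, so that minimization maximizes accuracy.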

Figures 17 and 18 show the confusion matrices of the proposed architecture with and without the SVM classifier.


Figure 17. Confusion matrix of VGG19. COVID-19: coronavirus disease 2019.


Figure 18. Confusion matrix of VGG19 combined with SVM. COVID-19: coronavirus disease 2019.

Comparing the results of each model, VGG16 combined with the gamma transform and the PSO-SVM classifier reached the highest testing accuracy among all models, and VGG19 was significantly improved by adding PSO-SVM. As shown in Figure 18, recall for the healthy, lung cancer, and COVID-19 classes is improved.

Results and Discussion

As shown in Table 3, VGG16 combined with gamma transform and PSO-SVM achieved an accuracy of 93.5%.

Table 3. Performance Metrics of the Optimized Model

From the above analysis, VGG16 consistently achieved the best performance among all tested models during training. VGG16 alone achieves an accuracy of 91.5% on the classification task, and the accuracy reaches 93.5% when it is combined with the PSO-SVM algorithm. From the training accuracy and loss figures, we note that VGG16, VGG19, and ResNet50 converged faster and reached considerable accuracy within a few epochs. Considering the characteristics of the four classes, lung cancer images are the most difficult to distinguish from the other classes, which may explain why the accuracy for lung cancer is markedly lower than for the other classes, as observed in Figures 17 and 18. Comparing these two figures, we also found that the SVM does not solve the low accuracy in the lung cancer class, which suggests that the feature extraction process plays a decisive role in the final result.

Overall, our experiments suggest that VGG16 can reach high accuracy within a small number of epochs, and that using VGG16 as a feature extractor combined with a PSO-SVM algorithm can improve performance to some extent.

Acknowledgments

Throughout the writing of this paper, we received a great deal of support and assistance. We thank our research partners, who were instrumental in refining the methodology of our research; we are extremely grateful to them. Finally, we also thank the researchers who shared the publicly available images used in this study.

Financial Disclosure

None to declare.

Conflict of Interest

The authors declare that they have no competing interests.

Informed Consent

Not applicable.

Author Contributions

Chao Zhang: methodology, software, investigation, formal analysis, writing - original draft. Guang Dong Huang: conceptualization, supervision, writing - review and editing.

Data Availability

The data supporting the findings of this study are available from the corresponding author upon reasonable request.


References
  1. Mohseni Afshar Z, Ebrahimpour S, Javanian M, Vasigala VR, Masrour-Roudsari J, Babazadeh A. Vital role of chest CT in diagnosis of coronavirus disease 2019 (COVID-19). Caspian J Intern Med. 2020;11(3):244-249.
  2. Fang Y, Zhang H, Xie J, Lin M, Ying L, Pang P, Ji W. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. 2020;296(2):E115-E117.
  3. Hemdan EE. COVIDX-Net: a framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv:2003.11055.
  4. Islam MM, Islam MZ, Asraf A, et al. Diagnosis of COVID-19 from X-rays using combined CNN-RNN architecture with transfer learning. medRxiv (preprint). 2020.
  5. Zuiderveld K. Contrast limited adaptive histogram equalization. In: Graphics Gems IV. Academic Press Professional, Inc.; 1994.
  6. Suharyanto, Hasibuan ZA, Andono PN, et al. Contrast limited adaptive histogram equalization for underwater image matching optimization use SURF. Journal of Physics: Conference Series. 2021;1803(1):012008.
  7. Umri BK, Akhyari MW, Kusrini K. Detection of COVID-19 in chest X-ray image using CLAHE and convolutional neural network. 2020 2nd International Conference on Cybernetics and Intelligent System (ICORIS). 2020.
  8. Giorgio M, Guida M, Pulcini G. The transformed gamma process for degradation phenomena in presence of unexplained forms of unit-to-unit variability. Quality and Reliability Engineering International. 2018;34(4):543-562.
  9. Singh M, Bansal S, Ahuja S, et al. Transfer learning-based ensemble support vector machine model for automated COVID-19 detection using lung computerized tomography scan data. Medical & Biological Engineering & Computing. 2021.
  10. Chowdary E. Impact of machine learning models in pneumonia diagnosis with features extracted from chest X-rays using VGG16. Turkish Journal of Computer and Mathematics Education (TURCOMAT). 2021;12(5):1521-1530.
  11. Abe S. Multiclass support vector machines. In: Support Vector Machines for Pattern Classification. Springer London; 2010.
  12. Osuna E, Freund R, Girosi F. Training support vector machines: an application to face detection. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 1997.
  13. Chang CC, Lin CJ. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology. 2011;2(3):27.
  14. Sudheer C, Maheswaran R, Panigrahi BK, et al. A hybrid SVM-PSO model for forecasting monthly streamflow. Neural Computing & Applications. 2014;24(6):1381-1389.


This article is distributed under the terms of the Creative Commons Attribution Non-Commercial 4.0 International License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.


Clinical Infection and Immunity is published by Elmer Press Inc.

 
