Data Science: Pima Indians Diabetes Database

Predict the onset of diabetes based on diagnostic measures

Context

This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.

The objective of this study is to build a machine learning model that accurately predicts whether the patients in the dataset have diabetes.

This notebook is a guide to an end-to-end machine learning study, covering concepts such as:

  • Completing missing values (most important part)
  • Exploratory data analysis
  • Creating new features (to increase accuracy)
  • Encoding features
  • Using LightGBM and optimizing its hyperparameters
  • Adding a KNN to the LightGBM model in a voting classifier to beat 90% accuracy (see the sketch after this list)
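
As a preview of where the study is heading, here is a minimal sketch of such a pipeline. It assumes the standard column names of the Kaggle diabetes.csv file; the imputation strategy and the hyperparameters shown are illustrative placeholders, not the values tuned later in the study.

# Sketch: impute hidden missing values, then combine LightGBM and KNN
# in a soft voting classifier (illustrative settings only)
import numpy as np
import pandas as pd
import lightgbm as lgbm
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

data = pd.read_csv('../input/diabetes.csv')

# A value of 0 is physiologically impossible for these columns,
# so it is treated as a missing value
zero_as_missing = ['Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI']
data[zero_as_missing] = data[zero_as_missing].replace(0, np.nan)

X = data.drop('Outcome', axis=1)
y = data['Outcome']

knn = Pipeline([
    ('impute', SimpleImputer(strategy='median')),
    ('scale', StandardScaler()),   # KNN is distance based, so scaling matters
    ('clf', KNeighborsClassifier(n_neighbors=15)),
])
lgbm_model = Pipeline([
    ('impute', SimpleImputer(strategy='median')),
    ('clf', lgbm.LGBMClassifier(n_estimators=300, learning_rate=0.05, random_state=0)),
])

voting = VotingClassifier(estimators=[('lgbm', lgbm_model), ('knn', knn)], voting='soft')
print(cross_val_score(voting, X, y, cv=5, scoring='accuracy').mean())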

Who are the Pima Indians?

“The Pima (or Akimel O’odham, also spelled Akimel O’otham, “River People”, formerly known as Pima) are a group of Native Americans living in an area consisting of what is now central and southern Arizona. The majority population of the surviving two bands of the Akimel O’odham are based in two reservations: the Keli Akimel O’otham on the Gila River Indian Community (GRIC) and the On’k Akimel O’odham on the Salt River Pima-Maricopa Indian Community (SRPMIC).” Wikipedia

1. Load libraries and read the data

1.1. Load libraries

# Python libraries
# Classic data manipulation and linear algebra libraries
import pandas as pd
import numpy as np

# Plots
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import plotly.offline as py
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.tools as tls
import plotly.figure_factory as ff
py.init_notebook_mode(connected=True)
import squarify

# Data processing, metrics and modeling
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.model_selection import (GridSearchCV, RandomizedSearchCV, KFold,
                                     cross_val_score, cross_val_predict, train_test_split)
from sklearn.metrics import (precision_score, recall_score, confusion_matrix, roc_curve,
                             precision_recall_curve, accuracy_score, roc_auc_score, auc)
import lightgbm as lgbm
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from yellowbrick.classifier import DiscriminationThreshold

# Stats
import scipy.stats as ss
from numpy import interp  # scipy.interp is deprecated in recent SciPy; numpy.interp is the equivalent
from scipy.stats import randint as sp_randint
from scipy.stats import uniform as sp_uniform

# Time
import time
from contextlib import contextmanager
@contextmanager
def timer(title):
    t0 = time.time()
    yield
    print("{} - done in {:.0f}s".format(title, time.time() - t0))

#ignore warning messages 
import warnings
warnings.filterwarnings('ignore') 
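
As a quick usage example (not part of the original notebook), the timer defined above can wrap any step of the workflow:

# Example: time the data loading step
with timer("Read data"):
    data = pd.read_csv('../input/diabetes.csv')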

1.2. Read data

Loading the dataset with pandas (pd)

data = pd.read_csv('../input/diabetes.csv')

2. Overview

2.1. Head

Checking data head

data.head(12)
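
Beyond the head, a quick structural check (shape, column types, and the number of zeros in the clinical columns) helps to spot the hidden missing values mentioned earlier. This is a minimal sketch, not the full exploration of the original study:

# Shape and column types
print(data.shape)
data.info()

# Zeros act as hidden missing values in these clinical measurements
clinical_cols = ['Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI']
print((data[clinical_cols] == 0).sum())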

The rest of this study is available on the following link: https://bit.ly/2SZG7MY

About Vincent Lugat
Vincent Lugat is a Data Science consultant specializing in Machine Learning. With a background in econometrics, he helps companies optimize the use of their data and model future behavior. His favorite fields are classification and data visualization.
