Commit 26159253 authored by Raakesh
Replace preprocessing-csv-rsna-ih-2019-19023.ipynb

parent 16f398a6
%% Cell type:markdown id: tags:
## Preprocessing CSVs for training
%% Cell type:markdown id: tags:
![](https://www.rsna.org/-/media/Images/RSNA/Menu/logo_sml.ashx?w=100&la=en&hash=9619A8238B66C7BA9692C1FC3A5C9E97C24A06E1)
%% Cell type:markdown id: tags:
Are you working a lot with data generators (for example, Keras' `flow_from_dataframe`) and competing in the [RSNA Intracranial Hemorrhage 2019 competition](https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection)?

I've created a function that builds a simple preprocessed DataFrame with a column for ImageID and a column for each label in the competition ('epidural', 'intraparenchymal', 'intraventricular', 'subarachnoid', 'subdural', 'any').

I also made a function that translates your predictions back into the correct submission format.

If you are also interested in getting the DICOM metadata as CSV files, you can check out [this Kaggle kernel](https://www.kaggle.com/carlolepelaars/converting-dicom-metadata-to-csv-rsna-ihd-2019).
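%% Cell type:markdown id: tags:
Each row of stage_2_train.csv encodes both the image and the hemorrhage sub-type in a single ID of the form `ID_<image>_<sub-type>`. As a minimal illustration (the variable names below are mine, not part of the competition files), splitting such an ID from the right separates the two parts:
%% Cell type:code id: tags:
``` python
# Example ID taken from the training CSV; rsplit from the right keeps the
# image identifier intact even though it also contains an underscore.
row_id = 'ID_0002081b6_epidural'
image_id, sub_type = row_id.rsplit('_', 1)
print(image_id)  # ID_0002081b6
print(sub_type)  # epidural
```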
%% Cell type:markdown id: tags:
## Preparation
%% Cell type:code id: tags:
``` python
# We will only need OS and Pandas for this one
import os
import pandas as pd

# Path names
BASE_PATH = "../input/rsna-intracranial-hemorrhage-detection/rsna-intracranial-hemorrhage-detection/"
TRAIN_PATH = BASE_PATH + 'stage_2_train.csv'
TEST_PATH = BASE_PATH + 'stage_2_sample_submission.csv'

# All labels that we have to predict in this competition
targets = ['epidural', 'intraparenchymal',
           'intraventricular', 'subarachnoid',
           'subdural', 'any']
```
%% Cell type:code id: tags:
``` python
# File sizes and specifications
print('\n# Files and file sizes')
for file in os.listdir(BASE_PATH)[2:]:
    print('{}| {} MB'.format(file.ljust(30),
                             str(round(os.path.getsize(BASE_PATH + file) / 1000000, 2))))
```
%% Output
# Files and file sizes
stage_2_train                 | 26.59 MB
stage_2_train.csv             | 119.7 MB
%% Cell type:markdown id: tags:
## Preprocessing CSVs
%% Cell type:code id: tags:
``` python
train_df = pd.read_csv(TRAIN_PATH)
train_df['ImageID'] = train_df['ID'].str.rsplit('_', 1).map(lambda x: x[0]) + '.png'
label_lists = train_df.groupby('ImageID')['Label'].apply(list)
```
%% Cell type:code id: tags:
``` python
train_df[train_df['ImageID'] == 'ID_0002081b6.png']
```
%% Output
                                   ID  Label           ImageID
770232          ID_0002081b6_epidural      0  ID_0002081b6.png
770233  ID_0002081b6_intraparenchymal      1  ID_0002081b6.png
770234  ID_0002081b6_intraventricular      0  ID_0002081b6.png
770235       ID_0002081b6_subarachnoid      0  ID_0002081b6.png
770236           ID_0002081b6_subdural      0  ID_0002081b6.png
770237               ID_0002081b6_any      1  ID_0002081b6.png
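%% Cell type:markdown id: tags:
The `label_lists` series built above collapses those six rows into a single list per image. A quick check for the same image (assuming the per-image rows keep the sub-type order shown in the output above):
%% Cell type:code id: tags:
``` python
# Should print [0, 1, 0, 0, 0, 1], i.e. the Label column in the order
# epidural, intraparenchymal, intraventricular, subarachnoid, subdural, any
print(label_lists['ID_0002081b6.png'])
```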
%% Cell type:code id: tags:
``` python
def prepare_df(path, train=False, nrows=None):
    """
    Prepare a Pandas DataFrame for fitting neural network models.

    Returns a DataFrame with an ImageID column and one column per label
    ('epidural', 'intraparenchymal', 'intraventricular', 'subarachnoid',
    'subdural', 'any').
    """
    df = pd.read_csv(path, nrows=nrows)
    # Get ImageID and type for pivoting
    df['ImageID'] = df['ID'].str.rsplit('_', 1).map(lambda x: x[0]) + '.png'
    df['type'] = df['ID'].str.split('_', n=3, expand=True)[2]
    # Create a new DataFrame by pivoting
    new_df = df[['Label', 'ImageID', 'type']].drop_duplicates().pivot(index='ImageID',
                                                                      columns='type',
                                                                      values='Label').reset_index()
    return new_df
```
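%% Cell type:markdown id: tags:
To make the pivot inside `prepare_df` concrete, here is a tiny hand-made example (illustration only; the toy IDs are made up): two rows in the long `ID_<image>_<sub-type>` format collapse into one wide row per image with a column per sub-type.
%% Cell type:code id: tags:
``` python
# Toy long-format frame with two label rows for a single (made-up) image
toy = pd.DataFrame({'ID': ['ID_abc_epidural', 'ID_abc_any'],
                    'Label': [0, 1]})
toy['ImageID'] = toy['ID'].str.rsplit('_', n=1).map(lambda x: x[0]) + '.png'
toy['type'] = toy['ID'].str.split('_', n=3, expand=True)[2]

# One row per image, one column per sub-type
wide = toy.pivot(index='ImageID', columns='type', values='Label').reset_index()
print(wide)  # ImageID == 'ID_abc.png', any == 1, epidural == 0
```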
%% Cell type:code id: tags:
``` python
# Convert the DataFrames to the preprocessed format
train_df = prepare_df(TRAIN_PATH, train=True)
test_df = prepare_df(TEST_PATH)
```
%% Cell type:code id: tags:
``` python
print('Training data: ')
display(train_df.head())

print('Test data: ')
test_df.head()
```
%% Output
Training data:

Test data:

type           ImageID  any  epidural  intraparenchymal  intraventricular  \
0     ID_000000e27.png  0.5       0.5               0.5               0.5
1     ID_000009146.png  0.5       0.5               0.5               0.5
2     ID_00007b8cb.png  0.5       0.5               0.5               0.5
3     ID_000134952.png  0.5       0.5               0.5               0.5
4     ID_000176f2a.png  0.5       0.5               0.5               0.5

type  subarachnoid  subdural
0              0.5       0.5
1              0.5       0.5
2              0.5       0.5
3              0.5       0.5
4              0.5       0.5
%% Cell type:code id: tags:
``` python
# Save to CSV
train_df.to_csv('clean_train_df.csv', index=False)
test_df.to_csv('clean_test_df.csv', index=False)
```
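%% Cell type:markdown id: tags:
The cleaned DataFrame is shaped so that it can be handed straight to a data generator. Below is a minimal sketch of how that might look with Keras' `flow_from_dataframe`, assuming the DICOM slices have already been exported as PNGs named after the ImageID column into a (hypothetical) `train_png/` folder:
%% Cell type:code id: tags:
``` python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1 / 255.)
train_gen = datagen.flow_from_dataframe(
    dataframe=train_df,
    directory='train_png/',  # hypothetical folder of decoded PNG slices
    x_col='ImageID',         # file names produced by prepare_df
    y_col=targets,           # the six label columns
    class_mode='raw',        # yield the label columns as a float array per batch
    target_size=(224, 224),
    batch_size=32)
```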
%% Cell type:markdown id: tags:
## Creating submission file
%% Cell type:code id: tags:
``` python
def create_submission_file(IDs, preds):
    """
    Create a submission file for Kaggle from image IDs and predictions.

    IDs: A list of all image IDs (extensions will be cut off)
    preds: A list of lists containing all predictions for each image

    Returns a DataFrame in the correct format for this competition.
    """
    sub_dict = {'ID': [], 'Label': []}
    # Create a row for each ID / label combination
    for i, ID in enumerate(IDs):
        ID = ID.split('.')[0]  # Remove extension such as .png
        sub_dict['ID'].extend([f"{ID}_{target}" for target in targets])
        sub_dict['Label'].extend(preds[i])
    return pd.DataFrame(sub_dict)
```
%% Cell type:code id: tags:
``` python
# Finalize submission files
train_sub_df = create_submission_file(train_df['ImageID'], train_df[targets].values)
test_sub_df = create_submission_file(test_df['ImageID'], test_df[targets].values)
```
%% Cell type:code id: tags:
``` python
print('Back to the original submission format:')
train_sub_df.head(6)
```
%% Output
Back to the original submission format:

                              ID  Label
0          ID_000012eaf_epidural      0
1  ID_000012eaf_intraparenchymal      0
2  ID_000012eaf_intraventricular      0
3      ID_000012eaf_subarachnoid      0
4          ID_000012eaf_subdural      0
5               ID_000012eaf_any      0
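%% Cell type:markdown id: tags:
For an actual submission you would pass model predictions rather than the known labels and write the result to disk. A minimal sketch (the prediction array below is only a placeholder):
%% Cell type:code id: tags:
``` python
# Placeholder predictions: in practice these would come from your model,
# e.g. preds = model.predict(...), one row of six probabilities per test image
preds = test_df[targets].values
submission = create_submission_file(test_df['ImageID'], preds)
submission.to_csv('submission.csv', index=False)
```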