Car Part Damage Segregation using Computer Vision

Khushboo .
4 min read · Dec 28, 2020

The car insurance inspection process involves various steps, such as identifying the car type, identifying the damaged parts of the car from images of the damage, providing the necessary documents to the inspector, etc., which makes it a tedious process. This is where Deep Learning can play a vital role in automating the car insurance inspection process.

Through this blog, I am going to explain only an intermediate step of the car insurance inspection process, i.e. determining which part of the car is damaged, such as the bumper, door, headlight, etc., which I automated using Deep Learning. Here, I used the image classification method of Computer Vision.

The image classification method is used to create a model with the ability to segregate images into given categories. For instance, suppose you want to classify whether an image is of a car or not. In this case, you can take two categories, i.e. “car” and “unknown”. If the image contains a car, the image classification model will return a message saying “The image is of a car”; otherwise, it will return a message saying “The image is not of a car”. Below are the steps that I followed to create the model using the PyTorch Deep Learning framework.

Collecting the dataset.
I created the dataset manually. I downloaded 1,510 close-up images of damaged cars and then split (and labeled) those images into car part damage categories.

The number of images with respect to the car part damage type after categorizing.

Train, Val and Test Dataset.
I split the data into three sets, i.e. training, validation and test, in the ratio 85:5:10, using the following command:
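The original command is not reproduced here. Below is a minimal sketch of such a split using the split-folders package, which is my assumption rather than the exact command from the post; it expects the labeled images to sit in one folder per damage category.

```python
# Minimal sketch of an 85:5:10 split (split-folders is an assumption,
# not necessarily the command used in the original post).
import splitfolders

splitfolders.ratio(
    "damaged_car_parts",        # hypothetical input folder: one sub-folder per category
    output="dataset",           # creates dataset/train, dataset/val and dataset/test
    seed=42,
    ratio=(0.85, 0.05, 0.10),   # train : val : test
)
```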

Augmenting the Dataset.
Augmentation is a technique to increase the size of the dataset and add variance to it. It is mainly used to reduce the overfitting problem.
In this use case, I am using two types of augmentation: Vertical Flip and Random Rotation.
I created a custom dataset class that takes the original images and labels as input, randomly applies the augmentations (vertical flip, random rotation, or both) to each image, and then passes the image to the dataloader with its corresponding label for training.

Code for Vertical Flip
Code for random rotation between the angles -25 and 25
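The original snippets are not included above; the sketch below shows one possible version of such a custom dataset built with torchvision transforms (the class name, resize size and flip/rotation probabilities are my assumptions).

```python
import random
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class DamagedPartDataset(Dataset):
    """Hypothetical custom dataset: randomly flips and/or rotates each image."""

    def __init__(self, image_paths, labels):
        self.image_paths = image_paths                         # list of file paths
        self.labels = labels                                   # integer class labels (0..10)
        self.flip = transforms.RandomVerticalFlip(p=1.0)       # vertical flip
        self.rotate = transforms.RandomRotation(degrees=25)    # rotation in [-25, 25]
        self.to_tensor = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img = Image.open(self.image_paths[idx]).convert("RGB")
        # Randomly apply vertical flip, random rotation, both, or neither.
        if random.random() < 0.5:
            img = self.flip(img)
        if random.random() < 0.5:
            img = self.rotate(img)
        return self.to_tensor(img), self.labels[idx]
```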

Transfer Learning.
Transfer learning is a technique in which the weights of a pre-trained model (already trained on millions of images) are reused and the model is retrained on your own, smaller dataset. For this use case, I chose the ResNet50 CNN pre-trained model, in which I froze all the layers of the model except the last layer, which I made trainable.
Below you can find the code to perform this step.
To add regularization, I used a dropout layer (to prevent overfitting and over-specialization) and changed the output size of the Linear layer (the last layer), as in this case I am using 11 classes (mentioned above). After that, I made the last layer trainable by setting its requires_grad attribute to True.

Code for adding layer and making last layer trainable
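A minimal sketch of this step, assuming torchvision's ImageNet-pretrained ResNet50 and a dropout probability of 0.5 (the exact dropout rate is my assumption):

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 11  # car part damage categories

# Load ResNet50 pre-trained on ImageNet and freeze all of its layers.
model = models.resnet50(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final Linear layer with dropout + a new Linear layer
# whose output size matches the 11 classes.
in_features = model.fc.in_features
model.fc = nn.Sequential(
    nn.Dropout(p=0.5),                     # dropout probability is an assumption
    nn.Linear(in_features, NUM_CLASSES),
)

# Make the last layer trainable (newly created layers already have
# requires_grad=True; this just makes the intent explicit).
for param in model.fc.parameters():
    param.requires_grad = True
```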

Training and Validating the Model.
For training the model, I am using the training dataset images and labels, and for validating the model after each training epoch, I am using the validation dataset and labels. As the optimizer, I used SGD (Stochastic Gradient Descent), and for calculating the loss, I used CrossEntropyLoss.
To stop the training automatically, I used a callback with the condition that if the validation loss does not decrease for 20 epochs, training stops and the model checkpoint with the minimum validation loss is saved.

Code for Training the Model
Code for Validating the Model
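The original training and validation code is not reproduced above; below is a condensed sketch of such a loop under the assumptions stated in the text (SGD, CrossEntropyLoss, early stopping with a patience of 20 epochs). The learning rate, momentum, maximum number of epochs, checkpoint file name, and the train_loader/val_loader DataLoaders are my assumptions.

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# train_loader and val_loader are assumed DataLoaders built over the
# train/val splits using the custom dataset described above.
criterion = nn.CrossEntropyLoss()
# Only the unfrozen (last-layer) parameters are optimized.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.001, momentum=0.9
)

best_val_loss = float("inf")
patience, epochs_without_improvement = 20, 0

for epoch in range(200):                            # maximum epochs (assumption)
    # ---- training ----
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # ---- validation ----
    model.eval()
    val_loss = 0.0
    with torch.no_grad():
        for images, labels in val_loader:
            images, labels = images.to(device), labels.to(device)
            val_loss += criterion(model(images), labels).item()
    val_loss /= len(val_loader)

    # ---- early stopping: stop if validation loss has not improved for 20 epochs ----
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        epochs_without_improvement = 0
        torch.save(model.state_dict(), "best_model.pth")   # keep the best checkpoint
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(f"Stopping early after epoch {epoch + 1}")
            break
```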

After training, the model achieved an accuracy of 82.54%, which is a good result for this use case.
