# app.py
import streamlit as st
from ultralytics import YOLO
import random
import PIL
from PIL import Image, ImageOps
import numpy as np
import torchvision
import torch
from sidebar import Sidebar
import rcnnres, vgg
# Hide deprecation warnings, which do not affect the behaviour of the application
import warnings
warnings.filterwarnings("ignore")
# Sidebar
sb = Sidebar()
title_image = sb.title_img
model = sb.model_name
conf_threshold = sb.confidence_threshold

# Main page
st.title("Bone Fracture Detection")
st.write(
    "This application detects bone fractures using multiple state-of-the-art "
    "computer vision models: YOLOv8, Faster R-CNN with ResNet, and SSD with "
    "VGG16. To learn more about the app, try it now!"
)
st.markdown("""
    <style>
    .stTabs [data-baseweb="tab-list"] {
        gap: 10px;
    }
    .stTabs [data-baseweb="tab"] {
        height: 50px;
        white-space: pre-wrap;
        border-radius: 2px;
        gap: 8px;
        padding: 8px 10px;
    }
    .stTabs [aria-selected="true"] {
        background-color: #7f91ad;
    }
    </style>""", unsafe_allow_html=True)
tab1, tab2 = st.tabs(["Overview", "Test"])

with tab1:
    st.markdown("### Overview")
    st.text_area(
        "TEAM MEMBERS",
        "Ashita Shetty, Raj Motwani, Ramya Sri Gautham, Rishita Chebrolu, Akshita Agrawal, Revanth Chowdhary",
    )
    st.markdown("#### Network Architecture")
    network_img = "images/NN_Architecture_Updated.jpg"  # forward slashes work on every platform
    st.image(network_img, caption="Network Architecture", use_column_width=True)
    st.markdown("#### Models Used")
    st.markdown("##### YOLOv8")
    st.text_area(
        "Description",
        "In this bone fracture detection project, we use YOLOv8 Nano (yolov8n), a lightweight and "
        "efficient version of the YOLOv8 algorithm tailored for systems with limited computational "
        "resources. We train the model on a dataset of X-ray images labelled with fracture "
        "locations, with the training settings specified in a YAML file. The model is trained for "
        "50 epochs, and checkpoints are saved every 25 epochs to keep the best-performing versions. "
        "YOLOv8 Nano spots fractures quickly and accurately, even on devices with lower computing "
        "power. After training, we test the model on a separate set of images to confirm that it "
        "reliably detects fractures. In practical use, the trained model automatically identifies "
        "and marks fractures on new X-ray images by drawing boxes around them, helping doctors "
        "diagnose fractures quickly and accurately. We assess the model's effectiveness with "
        "metrics such as the confusion matrix and Intersection over Union (IoU) scores to "
        "understand how well it performs across different types of fractures.",
    )
    st.markdown("##### Faster R-CNN with ResNet")
    st.text_area(
        "Description",
        "This model uses the Faster R-CNN (Region-based Convolutional Neural Network) "
        "architecture, which performs efficient and accurate object detection by introducing a "
        "Region Proposal Network (RPN) to generate candidate bounding boxes that are then refined "
        "and classified by subsequent network components. By combining region proposal generation "
        "and object detection in a single unified framework, Faster R-CNN significantly improves "
        "detection accuracy while remaining computationally efficient, making it suitable for "
        "applications such as autonomous driving, surveillance, and medical imaging. The dataset "
        "of bone X-ray images is prepared by loading images with their labels and augmenting them, "
        "resizing images and transforming the bounding boxes accordingly. The Faster R-CNN model "
        "is instantiated with a pre-trained ResNet-50 backbone and a custom classification layer "
        "for bone fracture detection. The training loop runs over multiple epochs, optimizing the "
        "model's parameters with the Adam optimizer and minimizing the combined loss. The "
        "best-performing model is saved, evaluated on the validation set, and then tested on a "
        "separate test set. After training, the model's predictions are compared with the actual "
        "fractures to measure accuracy, and a confusion matrix shows how well the model performs "
        "for each type of fracture. By combining ResNet's feature extraction with Faster R-CNN's "
        "precise object localization and classification, the system detects bone fractures in "
        "medical images with improved accuracy and reliability.",
        height=45,
    )
    st.markdown("##### SSD with VGG16")
    st.text_area(
        "Description",
        "This model uses a Single Shot MultiBox Detector (SSD) with a VGG16 backbone. The training "
        "and validation datasets are preprocessed with augmentations such as image scaling and "
        "bounding box coordinate conversion. The SSD300_VGG16 model is initialized with "
        "pre-trained weights, and custom layers are added to fine-tune it for fracture "
        "identification. The training loop runs over multiple epochs, computing losses, "
        "backpropagating, and updating weights with the Adam optimizer to minimize the combined "
        "loss, while the evaluation loop measures performance on the validation dataset. Overall, "
        "SSD with VGG16 enables real-time fracture detection, predicting one of the 7 class labels "
        "directly from input images.",
    )
# Weights
yolo_path = "weights/yolov8.pt"  # forward slashes avoid backslash-escape issues on Windows paths
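The YOLOv8 description above mentions evaluating detections with Intersection over Union (IoU) scores. For reference, a minimal standalone sketch of IoU between two axis-aligned boxes in (x1, y1, x2, y2) corner format; this helper is illustrative and is not called elsewhere in the app:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) tuples."""
    # Intersection rectangle (may be empty)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the overlap
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```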
with tab2:
    st.markdown("### Upload & Test")

    # Image upload button
    if 'clicked' not in st.session_state:
        st.session_state.clicked = False

    def set_clicked():
        st.session_state.clicked = True

    st.button('Upload Image', on_click=set_clicked)
    if st.session_state.clicked:
        image = st.file_uploader("", type=["jpg", "png"])
        if image is not None:
            st.write("You selected the file:", image.name)
            if model == 'YoloV8':
                try:
                    # YOLO(path) already loads the weights; a second no-argument
                    # .load() call would overwrite them with default weights.
                    yolo_detection_model = YOLO(yolo_path)
                except Exception as ex:
                    st.error(f"Unable to load model. Check the specified path: {yolo_path}")
                    st.error(ex)
                col1, col2 = st.columns(2)
                with col1:
                    uploaded_image = PIL.Image.open(image)
                    st.image(
                        image=image,
                        caption="Uploaded Image",
                        use_column_width=True
                    )
                if uploaded_image:
                    if st.button("Execution"):
                        with st.spinner("Running..."):
                            res = yolo_detection_model.predict(
                                uploaded_image, conf=conf_threshold, augment=True, max_det=1
                            )
                            boxes = res[0].boxes
                            res_plotted = res[0].plot()[:, :, ::-1]  # BGR -> RGB for display
                            if len(boxes) == 1:
                                names = yolo_detection_model.names
                                probs = boxes.conf[0].item()
                                for r in res:
                                    for c in r.boxes.cls:
                                        pred_class_label = names[int(c)]
                                with col2:
                                    st.image(res_plotted,
                                             caption="Detected Image",
                                             use_column_width=True)
                                    with st.expander("Detection Results"):
                                        for box in boxes:
                                            st.write(pred_class_label)
                                            st.write(probs)
                                            st.write(box.xywh)
                            else:
                                with col2:
                                    st.image(res_plotted,
                                             caption="Detected Image",
                                             use_column_width=True)
                                    with st.expander("Detection Results"):
                                        st.write("No Detection")
            elif model == 'FastRCNN with ResNet':
                resnet_model = rcnnres.get_model()
                device = torch.device('cpu')
                resnet_model.to(device)
                col1, col2 = st.columns(2)
                with col1:
                    uploaded_image = PIL.Image.open(image)
                    st.image(
                        image=image,
                        caption="Uploaded Image",
                        use_column_width=True
                    )
                content = Image.open(image).convert("RGB")
                to_tensor = torchvision.transforms.ToTensor()
                content = to_tensor(content).unsqueeze(0)  # float32, as the CPU model expects
                if uploaded_image:
                    if st.button("Execution"):
                        with st.spinner("Running..."):
                            output = rcnnres.make_prediction(resnet_model, content, conf_threshold)
                            fig, _ax, class_name = rcnnres.plot_image_from_output(content[0].detach(), output[0])
                            with col2:
                                st.image(rcnnres.figure_to_array(fig),
                                         caption="Detected Image",
                                         use_column_width=True)
                                with st.expander("Detection Results"):
                                    st.write(class_name)
                                    st.write(output)
            elif model == 'VGG16':
                vgg_model = vgg.get_vgg_model()
                device = torch.device('cpu')
                vgg_model.to(device)
                col1, col2 = st.columns(2)
                with col1:
                    uploaded_image = PIL.Image.open(image)
                    st.image(
                        image=image,
                        caption="Uploaded Image",
                        use_column_width=True
                    )
                content = Image.open(image).convert("RGB")
                to_tensor = torchvision.transforms.ToTensor()
                content = to_tensor(content).unsqueeze(0)  # float32, as the CPU model expects
                if uploaded_image:
                    if st.button("Execution"):
                        with st.spinner("Running..."):
                            output = rcnnres.make_prediction(vgg_model, content, conf_threshold)
                            fig, _ax, class_name = rcnnres.plot_image_from_output(content[0].detach(), output[0])
                            with col2:
                                st.image(rcnnres.figure_to_array(fig),
                                         caption="Detected Image",
                                         use_column_width=True)
                                with st.expander("Detection Results"):
                                    st.write(class_name)
                                    st.write(output)
        else:
            st.write("Please upload an image to test")
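`rcnnres.make_prediction` is defined elsewhere in this repository; judging by how it is called with `conf_threshold`, it keeps only detections above a confidence score. A minimal sketch of that kind of threshold filtering over a list of detection dicts (the field names here are illustrative, not the repo's actual API):

```python
def filter_by_confidence(detections, conf_threshold):
    """Keep only detections whose confidence score meets the threshold.

    Each detection is a dict such as
    {"box": (x1, y1, x2, y2), "label": "fracture", "score": 0.91}.
    """
    return [d for d in detections if d["score"] >= conf_threshold]
```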