About the Project
This system detects and tracks each fish in the collected webcam data and establishes relationships among trends, situations, and controllable parameters.
It also compares and visualizes the movement patterns of the fish. Users can change a detected fish's id on the server and assign the correct UUID; these corrections will later help to build a high-accuracy model for assigning fish ids.
The dashboard monitors how many videos have been collected from the webcam, the total length of video, how many extracted frames have had bounding boxes detected, and how many extracted frames have had UUIDs assigned.
About Us
Dr. Maiga Chang
Supervisor
Dr. Maiga Chang is a Full Professor in the School of Computing and Information Systems at Athabasca University, Canada.
Our Mission
The goal is to work with an ecological farming facility in Australia. The facility includes all kinds of Internet of Things (IoT) sensors and webcams to collect data for analysis purposes. This research includes the tasks of finding trends via data analysis, and identifying, recognizing, and tracking objects from the collected data.
Our Team
Ishika Tailor
Current Member
Ishika Tailor
is an undergraduate student in Information Technology at L.D. College of Engineering, India (2018-2022).
Videos
Stage 1 of Fish Recognition and Tracking (Bounding Box identification and UUID assignment)
Stage 1's major features include (but are not limited to):
- Find fish in a video frame and extract their bounding boxes.
- Assign a UUID to each fish's bounding box using a heuristic approach.
- A web-based user interface for users to check and correct a fish's UUID assignment.
- A dashboard that shows the process and the progress made by services running periodically behind the scenes.
Stage 1 of Fish Recognition and Tracking (the Dashboard)
Stage 1's major features include (but are not limited to):
- Find fish in a video frame and extract their bounding boxes.
- Assign a UUID to each fish's bounding box using a heuristic approach.
- A web-based user interface for users to check and correct a fish's UUID assignment.
- A dashboard that shows the process and the progress made by services running periodically behind the scenes.
Presentation Video
Live demonstrations of 12 weeks of work (June 2021~August 2021). This research uses a heuristic method and neural networks to train models for identifying fish from a video feed. The research outcome involves Python, PHP, and JavaScript (AJAX and JSON).
Publications
Frequently Asked Questions
- Is it a live object tracking process?
No. We capture a short video from a webcam, then upload it to the server through the video-upload web UI.
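For illustration only, a clip could also be sent to the server with a small script like the sketch below; the URL and form field name are hypothetical placeholders, not the project's actual upload API (the normal route is the video-upload web UI).

```python
# Hypothetical sketch only: the endpoint and field name are placeholders,
# not the project's real video-upload API.
import requests

with open("tank_recording.mp4", "rb") as video:
    response = requests.post(
        "http://example-server/upload-video",  # placeholder URL
        files={"video": ("tank_recording.mp4", video, "video/mp4")},
    )
print(response.status_code)
```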
- How many models are we using? Why?
Currently, two models are in use: a YOLOv3 model for bounding-box detection, and a TensorFlow model for fish movement path detection.
- What does the dashboard show?
Clicking the Dashboard button redirects you to the dashboard page, where you can analyse how many videos are being processed and how many videos have finished frame extraction, bounding-box detection, and UUID assignment. The page also shows the total number of videos, the total number of frames, the fps rate, and a menu option on the left side. This page updates every 30/60 seconds.
- How can we generate an accuracy and loss graph from a log file?
The log file is generated by our path detection model: when the model finishes training, it writes a log file into the logs folder. We can analyse it with TensorBoard by running this command in a terminal: 'tensorboard --logdir=logs'. Make sure you give the exact path to the logs directory. TensorBoard prints a localhost link (http://localhost:6006/); opening it shows the accuracy and loss graphs for all epochs.
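As a rough illustration of where that log file comes from, assuming the path model is trained with the Keras API, a TensorBoard callback writes the per-epoch accuracy and loss summaries into the logs directory; the model and dummy data below are placeholders, not the project's actual network.

```python
# Sketch only: placeholder model and data; the real path model differs.
import numpy as np
import tensorflow as tf

x_train = np.random.rand(100, 8).astype("float32")
y_train = np.random.randint(0, 4, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Writes per-epoch summaries under ./logs, which
# `tensorboard --logdir=logs` then visualises.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs")
model.fit(x_train, y_train, epochs=10, callbacks=[tensorboard_cb])
```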
- How are bounding boxes generated from a video file?
After a video is uploaded to the server, it goes through a four-stage process. The bounding-box script uses an already trained YOLOv3 model. The script checks whether any new video is available; if so, it takes that video and converts it into frames according to the fps. Each frame is passed to the model to detect fish, and the model's output is written as (x, y, w, h) values in a .csv file. A preprocessing and UUID-assignment script then assigns a unique UUID to each corresponding fish and generates the final .csv file. In this way, all box values are generated and stored in the database so the bounding boxes can be analysed on our web UI page.
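A simplified sketch of that frame-extraction and detection step (not the project's exact scripts) might look like the following, assuming the trained YOLOv3 network is loaded through OpenCV's DNN module; the config, weights, video, and output file names are placeholders.

```python
# Sketch only: illustrates frame extraction plus YOLOv3 detection.
# File names, thresholds, and the CSV layout are placeholder assumptions.
import csv
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3_fish.cfg", "yolov3_fish.weights")
layer_names = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture("new_video.mp4")
rows, frame_idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(layer_names):
        for det in output:
            confidence = float(det[5:].max())
            if confidence > 0.5:  # keep confident fish detections only
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                x, y = int(cx - bw / 2), int(cy - bh / 2)
                rows.append([frame_idx, x, y, int(bw), int(bh), confidence])
    frame_idx += 1
cap.release()

# One (x, y, w, h) row per detection, later consumed by the UUID-assignment step.
with open("bounding_boxes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "x", "y", "w", "h", "confidence"])
    writer.writerows(rows)
```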
- How can we get a path image dataset?
After the correct UUID has been assigned to each fish, we have ground-truth values in our all_images database table. We can select all entries for each UUID one by one and draw its path on a canvas to observe the movement pattern of each UUID's fish.
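A minimal sketch of the drawing step, assuming each all_images entry provides a (frame, x, y, w, h) box for the selected UUID; the sample rows and output file name below are made up.

```python
# Sketch only: draws one fish's movement path from its bounding boxes.
# The rows below stand in for entries selected from all_images for a
# single UUID; column meanings are assumptions.
import numpy as np
import cv2

# (frame_index, x, y, w, h) for one UUID, ordered by frame.
boxes = [(0, 40, 60, 30, 20), (1, 55, 70, 30, 20), (2, 80, 95, 32, 22)]

canvas = np.full((480, 640, 3), 255, dtype=np.uint8)  # white canvas
centres = [(x + w // 2, y + h // 2) for _, x, y, w, h in boxes]

# Connect consecutive box centres to trace the movement pattern.
for p1, p2 in zip(centres, centres[1:]):
    cv2.line(canvas, p1, p2, (0, 0, 255), 2)
for p in centres:
    cv2.circle(canvas, p, 3, (255, 0, 0), -1)

cv2.imwrite("path_example_uuid.png", canvas)
```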
- How many web UIs were created? What are their purposes?
In total, five web UIs were created for different purposes. 1) Dashboard: to analyse how much video processing has been done. 2) Upload Video: to upload a new video to the server. 3) Analyse Bounding-Box: displays all video frames and UUID boxes; we can change a UUID if we find a fault. 4) Track UUID Path: we can enter a selected UUID in the right-hand box and observe the corresponding UUID's fish-path images. 5) Analyse Accuracy: two buttons are provided, one for the path model and one for the YOLOv3 model; we can analyse accuracy, loss, and other parameters on this web UI.
- Is the dashboard running daily?
Yes. Using crontab, every Python script associated with the dashboard runs on its scheduled crontab timing and stops for the day once its time limit is exceeded.