Adaptive Ecological Farming: Fish Recognition and Tracking



About the Project

This system is designed to detect and track individual fish in collected webcam data and to establish the relationships among trends, situations, and controllable parameters.

It also compares and visualizes the movement patterns of the fish. Users can change a detected fish's id on the server and assign the correct uuid; later, this corrected data will help build a model that assigns fish ids with high accuracy.

The dashboard monitors how many videos have been extracted from the webcam, the total length of video, how many extracted frames have had bounding boxes found, and uuid assignment on the extracted frames.

About Us

Dr. Maiga Chang
Supervisor

Dr. Maiga Chang is a Full Professor in the School of Computing and Information Systems at Athabasca University, Canada.

Dr. Nian-Shing Chen
Advisor

National Yunlin University of Science and Technology, Taiwan

Our Mission

The goal is to work with an ecological farming facility in Australia. The facility includes various Internet of Things (IoT) sensors and webcams that collect data for analysis. This research includes the tasks of finding trends via data analysis and identifying, recognizing, and tracking objects in the collected data.

Our Team

Ishika Tailor
Current Member

Ishika Tailor
is an undergraduate student of Information Technology at L.D. College of Engineering, India (2018-2022)

Videos


Stage 1 of Fish Recognition and Tracking (Bounding Box identification and UUID assignment)

Stage 1's major features include (but are not limited to):

  1. Find each fish in a video frame and extract its bounding box.
  2. Assign a uuid to each fish's bounding box with a heuristic approach.
  3. A web-based user interface for users to check and correct a fish's uuid assignments.
  4. A dashboard that shows the process and the progress made by services running periodically behind the scenes.

This video mainly covers the details of the first three features. For the dashboard feature, please check out the video at https://youtu.be/6p7u-SGV4pY.
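The heuristic uuid assignment in feature 2 can be sketched as nearest-centroid matching between consecutive frames. This is an illustrative simplification, not the project's actual code; the function names and the distance threshold are assumptions:

```python
import math
import uuid


def assign_uuids(frames, max_dist=50.0):
    """Assign a stable uuid to each bounding box across frames.

    frames: list of frames; each frame is a list of (x, y, w, h) boxes.
    Returns a parallel list where each box becomes (uuid_str, x, y, w, h).
    """
    tracked = {}  # uuid -> last known centroid (cx, cy)
    result = []
    for boxes in frames:
        labeled = []
        unmatched = dict(tracked)  # fish seen earlier, not yet matched here
        for (x, y, w, h) in boxes:
            cx, cy = x + w / 2, y + h / 2
            # Match to the nearest previously seen fish within max_dist.
            best_id, best_d = None, max_dist
            for fid, (px, py) in unmatched.items():
                d = math.hypot(cx - px, cy - py)
                if d < best_d:
                    best_id, best_d = fid, d
            if best_id is None:
                best_id = str(uuid.uuid4())  # a new fish enters the scene
            else:
                del unmatched[best_id]  # each fish is matched at most once
            tracked[best_id] = (cx, cy)
            labeled.append((best_id, x, y, w, h))
        result.append(labeled)
    return result
```

A heuristic like this is cheap but imperfect when fish cross paths, which is exactly why the web UI in feature 3 lets users correct the assignments afterwards.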


Stage 1 of Fish Recognition and Tracking (the Dashboard)

Stage 1's major features include (but are not limited to):

  1. Find each fish in a video frame and extract its bounding box.
  2. Assign a uuid to each fish's bounding box with a heuristic approach.
  3. A web-based user interface for users to check and correct a fish's uuid assignments.
  4. A dashboard that shows the process and the progress made by services running periodically behind the scenes.

This video mainly covers the details of the last feature. For the other three features, please check out the video at https://youtu.be/Zg-vLUjZcaA.


Presentation Video

Live demonstrations of a 12-week work outcome (June 2021~August 2021). This research uses a heuristic method and neural networks to train models for identifying fish from a video feed. The research outcome involves Python, PHP, and JavaScript (AJAX and JSON).

Publications

To Be Announced

Frequently Asked Questions

  • No. We first have to capture a short video from a webcam; then we can upload it to the server through the video-upload web UI.

  • Currently, two models are in use: one for bounding-box detection (a YOLOv3 model) and a second for fish movement-path detection (a TensorFlow model).
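As an illustration of the bounding-box side, raw YOLOv3 output rows are typically turned into pixel boxes by filtering on confidence and rescaling the normalised centre coordinates. The row layout and threshold below describe a standard YOLOv3 pipeline and are assumptions, not this project's exact code:

```python
def yolo_to_boxes(detections, frame_w, frame_h, conf_threshold=0.5):
    """Convert raw YOLO output rows to (x, y, w, h, confidence) pixel boxes.

    detections: iterable of rows [cx, cy, w, h, objectness, class_scores...],
    with all coordinates normalised to [0, 1], as in a standard YOLOv3 head.
    """
    boxes = []
    for row in detections:
        # Overall confidence = objectness * best class score.
        confidence = row[4] * max(row[5:])
        if confidence < conf_threshold:
            continue
        # Rescale normalised centre/size to pixels, then shift to top-left.
        cx, cy = row[0] * frame_w, row[1] * frame_h
        w, h = row[2] * frame_w, row[3] * frame_h
        boxes.append((int(cx - w / 2), int(cy - h / 2), int(w), int(h),
                      confidence))
    return boxes
```

In a full pipeline, overlapping boxes would additionally be pruned with non-maximum suppression before uuid assignment.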

  • Clicking the Dashboard button redirects you to the dashboard page, where you can analyse how many videos are still being processed and how many have completed frame extraction, bounding-box assignment, and the uuid-assignment process. The page also shows the total videos, total frames, and fps rate, with a menu option on the left side, and it refreshes every 30 to 60 seconds.

  • The log file is generated by our path-detection model. When the model completes its training, it writes a log file into the log folder. We can analyse it with TensorBoard by running this command in a terminal: 'tensorboard --logdir=logs'. Make sure you give the exact path to the logs directory. TensorBoard then prints a localhost link (http://localhost:6006/); opening it shows the accuracy and loss graphs for all epochs.

  • After a video is uploaded to the server, it goes through a four-stage process. The bounding-box script uses the already-trained YOLOv3 model: it checks whether any new video has arrived and, if so, converts the video into frames according to the fps. Each frame is passed into the model to detect fish, and the output is written in (x, y, w, h) form to a .csv file. The preprocessing and uuid-assignment script then assigns a unique uuid to each corresponding fish and generates the final .csv file. In this way, all box values are generated and stored in the database so the bounding boxes can be analysed on our web-UI page.
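The per-video loop described above can be sketched as follows. The directory layout, file naming, and the detect_fn hook are illustrative assumptions; in the real pipeline the hook would wrap frame extraction plus YOLOv3 inference:

```python
import csv
import os


def process_new_videos(video_dir, out_dir, detect_fn):
    """Sketch of the batch loop: find unprocessed videos, run detection,
    and write one (frame, x, y, w, h) row per detected fish to a .csv.

    detect_fn(video_path) should yield (frame_no, x, y, w, h) tuples;
    here it stands in for per-frame YOLOv3 inference.
    """
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(video_dir)):
        out_csv = os.path.join(out_dir, name + ".csv")
        if os.path.exists(out_csv):
            continue  # this video was already processed on an earlier run
        with open(out_csv, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["frame", "x", "y", "w", "h"])
            for row in detect_fn(os.path.join(video_dir, name)):
                writer.writerow(row)
```

Skipping videos that already have an output file is what makes a loop like this safe to run repeatedly from a scheduler.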

  • After assigning the correct uuid to each fish, we have ground-truth values in our all_images database table. We can select all entries for a given uuid, one by one, and draw a path on a canvas to observe each fish's movement pattern.
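Grouping the corrected detections into one ordered path per fish can be sketched as below. The row layout mirrors the (x, y, w, h) plus uuid data described above, but the exact column order is an assumption:

```python
from collections import defaultdict


def build_paths(rows):
    """Group detection rows into one ordered centroid path per fish.

    rows: iterable of (frame_no, fish_uuid, x, y, w, h) tuples, e.g. as
    selected from a table like all_images. Returns {uuid: [(cx, cy), ...]}
    ordered by frame number, ready to be drawn as a movement path.
    """
    paths = defaultdict(list)
    for frame_no, fid, x, y, w, h in sorted(rows):
        # Use the box centre as the fish's position in this frame.
        paths[fid].append((x + w / 2, y + h / 2))
    return dict(paths)
```

Each per-uuid point list can then be handed to the canvas-drawing UI as a polyline.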

  • In total, five web UIs were created for different purposes: 1) Dashboard: to analyse how much video processing has been done. 2) Upload Video: to upload a new video to the server. 3) Analyse Bounding-Box: displays all video frames and uuid boxes; we can change a uuid if we find a fault. 4) Track UUID Path: enter a selected uuid in the box on the right to observe that fish's path images. 5) Analyse Accuracy: two buttons are provided, one for the path model and a second for the YOLOv3 model; we can analyse accuracy, loss, and other parameters on this web UI.

  • Yes. Using the crontab command in the terminal, every Python script associated with the dashboard runs on its crontab schedule and stops for the day once its time limit is exceeded.
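For illustration, a crontab entry of the kind described might look like the following; the script path, log path, and schedule are assumptions, not the project's actual configuration:

```shell
# Run the bounding-box script every 10 minutes between 06:00 and 18:00,
# appending output to a log; cron stops launching it outside that window.
*/10 6-18 * * * /usr/bin/python3 /opt/fishtrack/bounding_box.py >> /var/log/fishtrack.log 2>&1
```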