Live OpenCV webcam hand detection and motion tracking application.
Screenshot Gallery...
External code for preparing the program was taken from https://techvidvan.com/tutorials/hand-gesture-recognition-tensorflow-opencv/
The tutorial provided the pretrained hand-recognition TensorFlow network, used through Google MediaPipe.
No author or git information was provided.
Hand detection is performed with Google MediaPipe, using its pretrained hand-skeleton detection network.
From this skeleton, a box is drawn around the user’s hand and displayed on the screen.
As the hands move, the position of the palm is measured and recorded. The last 50 points are displayed on the screen, and a post-processing step keeps only connected points: if two consecutive points are too far apart (which happens when the hands are swapped or another hand appears abruptly), no line is drawn between them. This produces a cleaner motion-tracking output for up to two hands.
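A minimal sketch of how this step could look with the MediaPipe Hands API. The 80-pixel jump threshold, the use of landmark 9 (base of the middle finger) as the palm centre, and the drawing colours are assumptions for illustration, not values taken from the original code:

```python
import math
import cv2
import mediapipe as mp

MAX_POINTS = 50   # keep the last 50 palm positions per hand, as described above
MAX_JUMP = 80     # assumed pixel threshold for "too far apart"; the real value is not given

mp_hands = mp.solutions.hands
hands = mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.7)

def detect_and_draw(frame, trails):
    """Detect up to two hands, box each one, and record/draw the palm trails."""
    h, w, _ = frame.shape
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for idx, hand_lms in enumerate(results.multi_hand_landmarks):
            xs = [int(lm.x * w) for lm in hand_lms.landmark]
            ys = [int(lm.y * h) for lm in hand_lms.landmark]
            # Bounding box derived from the skeleton landmarks
            cv2.rectangle(frame, (min(xs), min(ys)), (max(xs), max(ys)), (0, 255, 0), 2)
            # Landmark 9 (base of the middle finger) is used as an approximate palm centre
            trails[idx].append((xs[9], ys[9]))
    for trail in trails:
        del trail[:-MAX_POINTS]          # keep only the last MAX_POINTS positions
        for p, q in zip(trail, trail[1:]):
            # Skip the segment if two consecutive points are too far apart
            if math.dist(p, q) <= MAX_JUMP:
                cv2.line(frame, p, q, (0, 0, 255), 2)
    return frame
```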
Set up the input and output video handlers
Initialize two empty lists to store previous hand positions
For each frame grabbed:
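A minimal sketch of this surrounding structure, assuming the default webcam as input and an XVID-encoded output.avi written at 30 fps (the file name, codec, and frame rate are assumptions; the per-frame detection and drawing steps would go inside the loop):

```python
import cv2
import mediapipe as mp

# Input (webcam) and output (file) video handlers
cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"XVID"), 30.0, (width, height))

# Two empty lists, one per tracked hand, holding previous palm positions
trails = [[], []]

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.7)

while cap.isOpened():
    grabbed, frame = cap.read()
    if not grabbed:
        break
    # ... per-frame detection, bounding boxes, and trail drawing go here ...
    writer.write(frame)
    cv2.imshow("Hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
writer.release()
cv2.destroyAllWindows()
```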