This document describes an electronic voting machine system created by a team of students. It consists of four machines and a camera. Machine 1 checks whether a voter is registered at that voting center; if so, it activates Machine 2, which verifies the voter's fingerprint. If the fingerprint matches, Machine 3 is activated so the voter can cast their ballot, and a separate counting machine tallies the votes. The system uses computer vision algorithms, Viola-Jones and SURF, for face detection and recognition to identify voters: it captures training images of each voter and extracts features to build classifiers that incoming images are matched against for verification.
6. This machine checks whether the voter belongs to
the voting center or not.
If not, the machine rejects the voter.
If yes, Machine-2 is activated
automatically and Machine-1 is
deactivated. The camera starts recording
at this time.
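The hand-off between the machines described above behaves like a small state machine. The sketch below is a minimal Python stand-in for the real hardware interfaces: the two boolean checks are placeholders for the registration lookup and the fingerprint scanner.

```python
# Minimal state-machine sketch of the Machine-1 -> Machine-2 -> Machine-3 hand-off.
# The boolean inputs are placeholders for the real registration and fingerprint checks.

def run_voting_flow(is_registered, fingerprint_matches):
    """Return the list of machines activated for one voter, and the outcome."""
    active = ["Machine-1"]
    if not is_registered:
        return active, "rejected: not registered at this center"
    # Machine-1 deactivates, Machine-2 activates, camera starts recording
    active.append("Machine-2")
    if not fingerprint_matches:
        return active, "rejected: fingerprint mismatch"
    # Machine-2 deactivates, Machine-3 activates for ballot casting
    active.append("Machine-3")
    return active, "ballot cast"

machines, outcome = run_voting_flow(True, True)
print(machines, outcome)  # ['Machine-1', 'Machine-2', 'Machine-3'] ballot cast
```

Each machine only activates after the previous one succeeds, matching the deactivation order on the slides.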
7. Machine-2 checks the fingerprint of the
voter.
This machine is introduced to check and
verify the voter.
If the fingerprint matches,
Machine-3 is activated and Machine-2
is deactivated.
14. Implementation
A training set is captured (8 images
for each person)
Store it in a folder
Detect faces and extract features
Match by minimum error between the detected
features and the training set
Labels are inserted to indicate the name
of each person
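The "minimum error" matching above amounts to a nearest-neighbour classifier: the probe's features are compared against each person's stored features, and the label with the smallest error wins. A minimal numpy sketch, where short synthetic vectors stand in for the real SURF features:

```python
import numpy as np

def best_match(probe, training_set):
    """Return the label whose stored feature vector has minimum error
    (sum of squared differences) against the probe vector."""
    best_label, best_err = None, float("inf")
    for label, feats in training_set.items():
        err = float(np.sum((probe - feats) ** 2))
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# Synthetic feature vectors standing in for real SURF descriptors
training = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}
print(best_match(np.array([0.9, 0.1]), training))  # alice
```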
15. More Detail
Step-1
a). Install the MATLAB support package for the USB camera
b). Introduce a face detector
Step-2
a). Capture a snapshot
b). Convert it to grayscale
c). Preprocess it (resize to a 100×100 matrix)
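Step-2's preprocessing — grayscale conversion and resizing to a 100×100 matrix — can be sketched with numpy. The random RGB array below is a stand-in for the webcam snapshot, and the nearest-neighbour sampling is a simplified stand-in for a proper image resize:

```python
import numpy as np

def preprocess(frame_rgb, size=100):
    """Convert an RGB frame to grayscale, then downsample to size x size
    by nearest-neighbour index sampling (stand-in for a real resize)."""
    gray = frame_rgb.mean(axis=2)            # simple channel-average grayscale
    h, w = gray.shape
    rows = np.arange(size) * h // size       # source row for each output row
    cols = np.arange(size) * w // size       # source column for each output column
    return gray[np.ix_(rows, cols)]

frame = np.random.rand(480, 640, 3)          # stand-in for a webcam snapshot
out = preprocess(frame)
print(out.shape)  # (100, 100)
```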
16. Step-3
a). Detect the face in the captured image
b). Verify that there is exactly one
detection,
c). and that it is above the size threshold
(bbox size = 30)
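Step-3's checks reduce to a small predicate over the detector's output: accept only when exactly one face is found and its bounding box clears the size threshold. In the sketch below, boxes are (x, y, w, h) tuples and the threshold of 30 comes from the slide:

```python
MIN_BBOX_SIZE = 30  # size threshold from the slide

def accept_detection(bboxes):
    """Accept only when exactly one face is detected and its bounding
    box is at least MIN_BBOX_SIZE pixels in both dimensions."""
    if len(bboxes) != 1:
        return False
    x, y, w, h = bboxes[0]
    return w >= MIN_BBOX_SIZE and h >= MIN_BBOX_SIZE

print(accept_detection([(10, 10, 64, 64)]))                 # True
print(accept_detection([(10, 10, 20, 20)]))                 # False: too small
print(accept_detection([(0, 0, 64, 64), (80, 0, 64, 64)]))  # False: two faces
```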
Step-4 (Quality Control Checks)
a). Make sure that the face contains “one
mouth and one nose”
b). Auto-crop the detected face
c). Preprocess it as desired and resize it to
a standard size
d). Store it in an auto-created directory
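Step-4's quality control can be sketched as a gate on facial-part counts plus a crop of the detected region. The mouth/nose counts below stand in for the output of part detectors, and the image is a plain list-of-lists stand-in for a grayscale matrix:

```python
def quality_check(num_mouths, num_noses):
    """A face passes quality control only with exactly one mouth and one nose."""
    return num_mouths == 1 and num_noses == 1

def crop(image_rows, bbox):
    """Crop a face region from a 2-D image given a (x, y, w, h) bounding box."""
    x, y, w, h = bbox
    return [row[x:x + w] for row in image_rows[y:y + h]]

img = [[c for c in range(10)] for _ in range(10)]  # stand-in 10x10 image
face = crop(img, (2, 2, 4, 4))
print(len(face), len(face[0]))  # 4 4
print(quality_check(1, 1))      # True
```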
17. Step-5
Repeat Steps 3 to 5 until 8 images are captured for each person
Step-6
Load in all captured face images and train the detector
using SURF
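The capture loop in Step-5 keeps taking frames until 8 acceptable images have been collected per person. In this sketch, the frame stream and the `passes_checks` predicate are hypothetical stand-ins for the snapshot capture and the Step-3/Step-4 checks:

```python
IMAGES_PER_PERSON = 8  # from the slides

def collect_training_images(frames, passes_checks, needed=IMAGES_PER_PERSON):
    """Walk through a stream of frames, keeping only those that pass the
    quality checks, until `needed` images have been collected."""
    kept = []
    for frame in frames:
        if passes_checks(frame):
            kept.append(frame)
            if len(kept) == needed:
                break
    return kept

# Stand-in stream: even-numbered "frames" pass the checks
kept = collect_training_images(range(100), lambda f: f % 2 == 0)
print(len(kept), kept[:3])  # 8 [0, 2, 4]
```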
18. Training Algorithm
Aggregate features from stacked images
Part-1
a). Create a stacked image montage of
5 randomly selected training images
The montage image looks like this: (figure)
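Part-1's montage simply stacks randomly chosen training images into one larger image, so that features can later be extracted in a single pass per person. A numpy sketch; the horizontal side-by-side layout is an assumption about how the montage is arranged:

```python
import numpy as np

def make_montage(images, k=5, seed=0):
    """Stack k randomly selected equal-sized grayscale images side by side."""
    rng = np.random.default_rng(seed)          # seeded for reproducibility
    idx = rng.choice(len(images), size=k, replace=False)
    return np.hstack([images[i] for i in idx])

# Eight stand-in 100x100 training images for one person
images = [np.full((100, 100), i, dtype=float) for i in range(8)]
montage = make_montage(images)
print(montage.shape)  # (100, 500)
```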
19. Part-2
a). Run a feature detector to detect and
extract features
b). Store the features from each montage,
trained using SURF
SURF feature extraction (figure)
Part-3
a). Now we have a set of features
b). calculated for each face, which we can use
as a classifier (this should take
only a few seconds)
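Parts 2 and 3 boil down to building, for each person, the feature set extracted from that person's montage; the collection of per-person feature sets is the classifier. In this sketch, `extract_features` is a toy stand-in for the real SURF extraction:

```python
def build_classifier(montages, extract_features):
    """Map each person's name to the features extracted from their montage."""
    return {name: extract_features(m) for name, m in montages.items()}

# Stand-in "montages" and a toy extractor (the real system runs SURF on images)
montages = {"alice": [1, 2, 3], "bob": [4, 5, 6]}
clf = build_classifier(montages, extract_features=lambda m: set(m))
print(clf["alice"])  # {1, 2, 3}
```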
20. Best Prediction
Part-4
a). Restart the streaming image capture
b). Detect faces
c). If multiple faces are detected, live streaming is turned OFF
instantly
d). Crop each face
e). Apply detect-and-extract features to each
f). The training-set entry that matches most closely is called the “Best
Prediction”
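Part-4 can be sketched end to end: abort if there is not exactly one detected face, otherwise extract features and pick the training identity with the greatest overlap. Features here are toy sets; real SURF matching compares descriptor vectors:

```python
def predict(detected_faces, classifier, extract):
    """If there is not exactly one detected face, turn streaming off;
    otherwise return the identity whose training features overlap most
    with the features extracted from that face (the "Best Prediction")."""
    if len(detected_faces) != 1:
        return "STREAM_OFF"
    feats = extract(detected_faces[0])
    return max(classifier, key=lambda name: len(classifier[name] & feats))

# Toy classifier: per-person feature sets (real system: SURF descriptors)
classifier = {"alice": {1, 2, 3, 4}, "bob": {5, 6, 7, 8}}
print(predict([{2, 3, 9}], classifier, extract=lambda face: face))  # alice
```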
21. Best Prediction
SURF feature matching (figure)