HAR – Comparison of Sensors

RGB Camera
+ (… shape, motion)
+ Widely available
- Illumination affects performance
- Occlusions affect performance
- Subject needs to be in the field of view
- No depth information

MoCap System
+ Very accurate
- Expensive
- Needs a lot of space

Depth Camera
+ (… of skeletal joint positions (Microsoft Kinect))
- Unrealistic skeletal joint positions
- Subject should be in the field of view
- Privacy concerns

Wearable (MEMS) Inertial Sensors
+ Easy to wear
+ Provide little or no hindrance
+ Very accurate with a high sampling rate
- Limit to the number of sensors that can be worn
- Unwillingness to wear a sensor
Proposed Solution
- Each row is normalized by its norm to reduce subject and joint dependency
- Partition into windows (window size = 3) and calculate µ and σ for each direction
- Stack features column-wise
- Bicubic interpolation down to the least number of frames found in the training set
- Stack features column-wise and apply a Savitzky-Golay filter [2] to reduce noise (spikes)
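A minimal sketch of these feature-extraction steps, assuming a frames × channels array layout and NumPy/SciPy; the function names, the filter settings beyond those stated on the slide, and the use of a cubic spline resampler as a stand-in for bicubic interpolation are illustrative assumptions, not taken from the slides.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.ndimage import zoom  # spline resampling used here as a stand-in for bicubic interpolation

def normalize_rows(signal):
    """Divide each row by its norm to reduce subject and joint dependency."""
    norms = np.linalg.norm(signal, axis=1, keepdims=True)
    return signal / np.clip(norms, 1e-8, None)

def windowed_stats(signal, win=3):
    """Partition each channel into windows of `win` samples, compute the
    per-window mean and std for each direction, and stack column-wise."""
    n = (signal.shape[0] // win) * win
    windows = signal[:n].reshape(-1, win, signal.shape[1])
    mu = windows.mean(axis=1)
    sigma = windows.std(axis=1)
    return np.hstack([mu, sigma]).ravel()

def resample_and_smooth(signal, target_frames):
    """Interpolate down to the least number of frames in the training set,
    then apply a Savitzky-Golay filter to suppress spikes."""
    factor = target_frames / signal.shape[0]
    resampled = zoom(signal, (factor, 1), order=3)  # cubic spline along time
    return savgol_filter(resampled, window_length=5, polyorder=2, axis=0)
```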
Feature Classification
- Individual neural network classifiers are used as the classifiers
- 1 hidden layer with 90 neurons
- Softmax output layer
- Trained using conjugate gradient with Polak-Ribière updates [3]
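A minimal sketch of such a classifier with scikit-learn. Caveat: scikit-learn does not ship a Polak-Ribière conjugate-gradient solver (scipy.optimize's "CG" method implements that update, but for brevity "lbfgs" is used as a stand-in here); the toy data and all hyperparameters other than the 90-neuron hidden layer and softmax output are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy stand-in for the stacked feature vectors (rows = action sequences).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 60))
y_train = rng.integers(0, 27, size=100)

# One hidden layer with 90 neurons; MLPClassifier applies a softmax output
# layer for multi-class problems.  The 'lbfgs' solver is a stand-in for the
# Polak-Ribiere conjugate gradient training used in the original work.
clf = MLPClassifier(hidden_layer_sizes=(90,), activation="tanh",
                    solver="lbfgs", max_iter=500, random_state=0)
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_train[:5])  # per-class softmax probabilities
```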
Results

• 1 inertial and 1 depth sensor
  - IMU captures 3-axis linear acceleration, 3-axis angular velocity, and 3-axis magnetic field strength
  - IMU placed on the right wrist for 21 actions and on the right thigh for 6 actions
  - Microsoft Kinect used to track the movement of 20 joints
• Total dataset size: 861 entries
  - 27 registered actions
  - 8 subjects (4 males, 4 females)
  - Each action performed 4 times by each subject
  - 3 corrupt sequences were removed
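The sensor setup and counts above match the publicly available UTD-MHAD dataset of Chen et al. [5]; a minimal loading sketch under that assumption follows. The directory layout, file naming, and the d_iner / d_skel field names are assumptions about that public release, not taken from the slides.

```python
from scipy.io import loadmat

# Hedged sketch: assumes the public UTD-MHAD layout, where each trial is
# stored as aA_sS_tT_inertial.mat / aA_sS_tT_skeleton.mat with the data in
# the 'd_iner' and 'd_skel' fields.  Adjust paths and keys to the actual data.
def load_trial(action, subject, trial, root="UTD-MHAD"):
    iner = loadmat(f"{root}/Inertial/a{action}_s{subject}_t{trial}_inertial.mat")["d_iner"]
    skel = loadmat(f"{root}/Skeleton/a{action}_s{subject}_t{trial}_skeleton.mat")["d_skel"]
    return iner, skel  # inertial samples over time, joint positions over time

# Example: first trial of action 1 performed by subject 1.
iner, skel = load_trial(1, 1, 1)
print(iner.shape, skel.shape)
```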
Comparison with a state-of-the-art implementation:

                          Accuracy (%)
  Chen et al. [5]         74.7   76.4   91.5
  Implemented Algorithm   74.8   81.2   95.0

Table 1. Recognition accuracies for the subject-generic experiment.
- 8-fold cross-validation performed (for each subject)
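A sketch of the subject-generic protocol, reading "8-fold cross-validation (for each subject)" as one held-out subject per fold over the 8 subjects; the data arrays, the use of scikit-learn's LeaveOneGroupOut, and all classifier settings other than the 90-neuron hidden layer are assumptions.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier

# Placeholders for the stacked feature vectors, action labels, and subject ids.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))
y = rng.integers(0, 27, size=200)
subjects = rng.integers(1, 9, size=200)

# One fold per held-out subject: with 8 subjects this yields 8 folds.
scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = MLPClassifier(hidden_layer_sizes=(90,), solver="lbfgs",
                        max_iter=300, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(f"mean accuracy over {len(scores)} folds: {np.mean(scores):.3f}")
```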
using fusion of depth and inertial sensors." International Conference Image Analysis and Recognition. Springer, Cham, 2018.