From a cybernetic perspective, the human body can be described as a network of structural and functional sub-systems whose motor behaviour is governed by the principles of equilibrium, energy economy, and comfort: the optimal posture is therefore one that allows full motor performance, free of stress, with maximal energy economy. The present work focuses on the need to define a truly objective method for evaluating postural variables that is affordable and easy to use.5
Vision-based human-computer interaction (HCI) requires no direct physical contact between users and applications. Conventional approaches in this area generally relied on ordinary sensors such as RGB cameras, which are not only computationally expensive but also easily affected by variations in brightness and background clutter.1 Low-cost sensors such as Microsoft's Kinect II or Azure Kinect, used in 3D motion-capture systems, have attracted growing interest in vision-based HCI as an alternative to more expensive devices.1,2 (Fig. 1)
Microsoft Kinect®, a gaming device associated with the Xbox console, was the tool used throughout this study. Developed by Microsoft for the field of sport through its motion-capture program, the Kinect® sensor has fascinated millions of customers for years, capturing movements through its cameras for instantaneous or delayed replay. The Kinect® is essentially an RGB camera fitted with an infrared component that captures 3D images.5
Obtaining three-dimensional body-joint information is important for understanding the position of the human body, and motion-capture (MOCAP) systems can be applied effectively in several settings. Two general approaches exist: marker-based methods, which use active (light-emitting) or passive (reflective) markers, and marker-less methods. Professional camera-based motion-capture systems, such as Vicon or Qualisys, track passive markers visible to infrared (IR) cameras, whereas marker-less approaches require no equipment beyond the cameras themselves. For example, CMU's OpenPose algorithm analyses two-dimensional video for joint estimation, while the Azure Kinect uses RGB and IR cameras to construct a three-dimensional representation of the scene.
Recent work on human-activity recognition has documented systems with strong overall recognition success. Azure Machine Learning can be used to build machine-learning models. Recognition performance depends on the activity set, the quality of data collection, the feature-extraction process, and the learning algorithm.4 The sensor measures the distance of objects in the environment, and its data can be consumed by software applications through a software development kit (SDK). The sensor reports the locations of the recognized user's joints in the frame, in addition to depth and color information.2
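To make the feature-extraction stage of this pipeline concrete, the following minimal sketch (our illustration, not taken from the cited works) converts raw 3D joint coordinates reported by the sensor into a translation- and scale-invariant feature vector of pairwise joint distances; the function name and joint layout are assumptions.

```python
import numpy as np

def pose_features(joints: np.ndarray) -> np.ndarray:
    """Turn an (N, 3) array of joint positions (metres) into a
    translation- and scale-invariant feature vector.

    `joints` is assumed to be ordered by the sensor's joint index.
    """
    # Translation invariance: centre the skeleton on its mean joint.
    centred = joints - joints.mean(axis=0)
    # Scale invariance: normalise by the mean joint distance from the centre.
    scale = np.linalg.norm(centred, axis=1).mean()
    centred = centred / scale
    # Feature vector: all pairwise inter-joint distances.
    n = len(centred)
    idx_a, idx_b = np.triu_indices(n, k=1)
    return np.linalg.norm(centred[idx_a] - centred[idx_b], axis=1)
```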
Considerable progress has been made in frameworks for understanding human posture with Kinect. Suma et al. proposed the Flexible Action and Articulated Skeleton Toolkit (FAAST), which comprises 27 pre-defined human actions such as LEAN LEFT, LEFT ARM UP, and LEFT FOOT UP. Using the device's skeleton data, FAAST can trigger virtual events such as mouse-cursor control and keyboard input, and can therefore be used to control virtual-reality applications. Kang and colleagues proposed a way of controlling 3D applications by collecting user commands from distance information together with the user's joint positions. Furthermore, Thanh et al. designed a system in which a robot learns human postures based on the semaphore method; in their approach, command words were transmitted letter by letter using sign language. Modern algorithms built on Kinect have reshaped the field of body-posture recognition. Although Kinect's depth images contain complete three-dimensional posture information, body recognition requires considerable effort to remove extraneous objects that are not part of the human body. With NITE and FAAST, which operate on articulated skeleton data, simple pre-specified human actions such as swipe, loop, jump, and hop can be recognized. These works lay a solid foundation for posture-recognition research using Kinect skeleton data, and such methods have greatly improved human-posture recognition. However, certain drawbacks limit their effectiveness and applicability in general settings. First, these approaches have quite restricted posture vocabularies and handle only basic postures that are far from sufficient for advanced tasks. Second, they rely mainly on programmer-defined postures, which undoubtedly reduces an HCI system's versatility and applicability. Moreover, they require user-defined parameters, which can be subjective and cumbersome in real-world applications. To tackle these issues, we propose a new method of recognizing human posture by adopting a machine-learning technique. Without empirical criteria, this technique can directly recognize a user-defined pose, often with superior efficiency.5
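As a hedged illustration of this idea (a sketch, not the exact implementation), a standard classifier can be trained on a handful of frames recorded while the user holds each self-defined posture, so no programmer-defined rules or empirical thresholds are required. The scikit-learn SVM below stands in for whatever learner is actually used, and it reuses the hypothetical `pose_features` helper from the previous sketch.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_posture_model(X_train: np.ndarray, y_train: np.ndarray):
    """X_train: rows are pose_features() vectors from user-recorded frames;
    y_train: the user-chosen posture label for each frame."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    model.fit(X_train, y_train)
    return model

def recognize(model, joints: np.ndarray):
    # Classify the current skeleton frame; no hand-tuned thresholds involved.
    return model.predict(pose_features(joints).reshape(1, -1))[0]
```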
Human-posture recognition can also be seen as a sub-field of motion recognition, since a pose is a "static motion". The Kinect features an infrared projector coupled with a monochrome CMOS sensor that collects image information; it also has an RGB camera and a multi-array microphone. The Kinect therefore allows us to record the color image and the depth description of the observed scene simultaneously. A skeleton-tracking feature was recently implemented in the latest version of the Kinect SDK. It stores the joints as points relative to the device itself; the joint information is contained in the frames, and the locations of the individual points are determined and extracted for each frame.3 (Fig. 2)
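The per-frame extraction just described might look like the following sketch. The `skeleton_frames` iterator and the joint fields are hypothetical stand-ins, since the exact API differs between Kinect SDK versions and wrappers.

```python
import numpy as np

def extract_skeletons(sensor) -> list:
    """Collect one (num_joints, 3) array per captured frame.

    `sensor` is assumed to be a wrapper around the Kinect SDK's skeleton
    stream; the attribute names here are placeholders, not the real API.
    """
    frames = []
    for frame in sensor.skeleton_frames():
        # Each joint is reported as a 3D point relative to the device itself.
        joints = np.array([(j.x, j.y, j.z) for j in frame.joints],
                          dtype=np.float64)  # metres
        frames.append(joints)
    return frames
```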
We have three key pieces of information for each joint. The first is the joint's index, a discrete value. The second is the joint's location in x, y, and z coordinates, all three measured in metres. The x, y, and z axes are the body axes of the depth sensor: a right-handed coordinate system with the sensor array at the origin, in which the positive z-axis extends in the direction the sensor array points, the positive y-axis points upward, and the positive x-axis points to the left (from the sensor array's point of view).3 (Fig. 3)
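A small data structure capturing the joint record described here (index plus metric x, y, z in the sensor's right-handed frame) might look like the following; the class itself is our illustration, not part of the SDK.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    index: int  # discrete joint index assigned by the SDK
    x: float    # metres; positive x points to the left of the sensor array
    y: float    # metres; positive y points upward
    z: float    # metres; positive z extends in the sensor's view direction

# Example (illustrative index): a joint 2 m in front of the sensor,
# 0.5 m above its optical centre.
head = Joint(index=3, x=0.0, y=0.5, z=2.0)
```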