We begin with a simple step: using a smartphone camera, we capture photos of the targeted human body. The captured photos are then transferred for processing to the algorithm's 'brain', the neural networks, so the next step can be carried out.
The neural networks then detect key points on the body and produce a probability map for each keypoint. At every stage of the network, these maps are combined and refined with predetermined filters.
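As a rough sketch of how keypoints can be read out of per-keypoint probability maps, the snippet below takes the peak of each map as the keypoint location, with the peak value as a confidence score. The array shapes and the use of a plain argmax are assumptions for illustration; the article does not specify the network's output format.

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """Extract one (x, y, confidence) triple per keypoint probability map.

    heatmaps: array of shape (num_keypoints, H, W), where each map holds
    the per-pixel probability that the keypoint lies at that pixel.
    (Hypothetical shapes; chosen here only for illustration.)
    """
    keypoints = []
    for hm in heatmaps:
        # Take the most probable pixel as the keypoint location.
        idx = np.argmax(hm)
        y, x = np.unravel_index(idx, hm.shape)
        keypoints.append((int(x), int(y), float(hm[y, x])))
    return keypoints

# Toy example: two 8x8 maps, each with a single peak.
maps = np.zeros((2, 8, 8))
maps[0, 3, 5] = 0.9   # keypoint 0 near (x=5, y=3)
maps[1, 6, 2] = 0.8   # keypoint 1 near (x=2, y=6)
print(keypoints_from_heatmaps(maps))  # [(5, 3, 0.9), (2, 6, 0.8)]
```

In practice the peak would usually be refined to sub-pixel accuracy and the maps smoothed first, which is where the predetermined filters mentioned above would come in.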
The key points located on the images then serve as the initialization for the 2D contour matching. A virtual camera projects the parameterized 3D human body model onto the image plane, producing the 2D contour models. The frontal and profile contour models use a set of parameters to control the pose and shape of these parameterized projections.
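The virtual-camera step can be sketched with a simple pinhole projection: 3D points on the body model are mapped onto the image plane by dividing by depth and applying camera intrinsics. The focal length and principal point below are illustrative assumptions, not values from the article.

```python
import numpy as np

def project_points(points_3d, focal=1000.0, cx=320.0, cy=240.0):
    """Project 3D body-model points onto the image plane with a
    pinhole 'virtual camera'. Intrinsics (focal, cx, cy) are
    illustrative assumptions in pixels."""
    pts = np.asarray(points_3d, dtype=float)
    # Perspective division by depth z, then shift to pixel coordinates.
    x = focal * pts[:, 0] / pts[:, 2] + cx
    y = focal * pts[:, 1] / pts[:, 2] + cy
    return np.stack([x, y], axis=1)

# Three points of a hypothetical body model, two metres from the camera.
body_pts = [(0.0, 0.0, 2.0), (0.2, -0.5, 2.0), (-0.2, 0.5, 2.0)]
print(project_points(body_pts))
```

Projecting the full model this way yields a 2D silhouette whose outline can be compared against the frontal and profile contours extracted from the photos.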
By matching the 2D contours to the 3D model, the measurements of the human body can then be determined.
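As one illustration of turning two matched views into a measurement, a body girth can be approximated from the frontal width and profile depth by modelling the cross-section as an ellipse. This is a stand-in example using Ramanujan's circumference approximation, not the article's actual measurement method.

```python
import math

def ellipse_girth(width, depth):
    """Approximate a body girth from frontal width and profile depth
    by treating the cross-section as an ellipse with semi-axes
    width/2 and depth/2 (Ramanujan's approximation).
    Illustrative only; not the method described in the article."""
    a, b = width / 2.0, depth / 2.0
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

# Hypothetical waist: 34 cm wide from the front, 24 cm deep in profile.
print(round(ellipse_girth(34.0, 24.0), 1))
```

Real systems instead measure directly on the fitted 3D model, where each girth is the length of a closed curve around the mesh, but the ellipse captures why both the frontal and profile views are needed.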