We use computer vision to analyze the photo and detect the human body in it. Our computer vision algorithms allow us to detect the human body in photos taken with any smartphone, against any background.
We use statistical modeling and 3D geometry algorithms to build a 3D model of the human body based on the detected key points. This allows us to accurately derive human body measurements.
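To illustrate the general idea of turning detected key points into measurements, here is a minimal sketch: it converts the pixel distance between two key points into centimeters using a scale calibrated from a known reference (for example, the user's stated height). This is a simplified illustration, not the actual statistical model described above; the function name and parameters are assumptions for the example.

```python
import math

def pixel_to_cm(p1, p2, pixels_per_cm):
    """Convert the pixel distance between two detected key points
    into centimeters, given a scale factor calibrated from a known
    reference measurement (simplified illustration only)."""
    dist_px = math.dist(p1, p2)
    return dist_px / pixels_per_cm

# Assume a known 170 cm height spans 850 px in the image => 5 px/cm.
print(pixel_to_cm((100, 200), (100, 500), pixels_per_cm=5.0))  # 60.0
```

In practice the scale would come from the full 3D reconstruction rather than a single reference distance, but the principle of grounding pixel coordinates in real-world units is the same.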
How it works step by step
We begin by using images from a smartphone to detect and scan the human body with computer vision. The detected body outline is then passed directly to our neural networks for processing.
The neural networks then locate the key points by producing a probability map for each one. These maps are combined and refined with predetermined filters at every stage.
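A common way to decode such probability maps is to take the location of the highest probability in each map as the key point's coordinates. The sketch below shows this simplified decoding step; the exact filtering pipeline used in production is not specified in the text, so this is an assumption for illustration.

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """Extract (x, y) pixel coordinates by taking the argmax of each
    key point's probability map (a common, simplified decoding step)."""
    points = []
    for hm in heatmaps:  # one H x W probability map per key point
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        points.append((int(x), int(y)))
    return points

# Toy example: two 4x4 maps with known peaks.
maps = np.zeros((2, 4, 4))
maps[0, 1, 2] = 1.0   # peak at (x=2, y=1)
maps[1, 3, 0] = 0.9   # peak at (x=0, y=3)
print(keypoints_from_heatmaps(maps))  # [(2, 1), (0, 3)]
```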
The key points located on the images then serve as the initialization for 2D contour matching. A virtual camera is used to project a parameterized 3D human body model onto the image plane, producing the 2D contour models used in the matching.
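The projection step above can be sketched with a standard pinhole camera model: each 3D model point is divided by its depth and scaled by the focal length to land on the image plane. The camera parameters and function name here are assumptions; the vendor's actual virtual camera model may differ.

```python
import numpy as np

def project_points(points_3d, f, cx, cy):
    """Project 3D body-model points onto the image plane using a
    simple pinhole 'virtual camera': focal length f, principal
    point (cx, cy). A standard projection sketch for illustration."""
    pts = np.asarray(points_3d, dtype=float)
    x = f * pts[:, 0] / pts[:, 2] + cx
    y = f * pts[:, 1] / pts[:, 2] + cy
    return np.stack([x, y], axis=1)

# Two model points two meters in front of the camera.
pts = [(0.0, 0.0, 2.0), (0.5, -0.5, 2.0)]
print(project_points(pts, f=800, cx=320, cy=240))
# [[320. 240.]
#  [520.  40.]]
```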
By matching the virtual 2D models against the real human contours, we ensure that there are few or no errors in scanning the human body and in processing the measurements. The same matching is also used to build the 3D model of the human body.
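Such matching is typically posed as an optimization: adjust the model's parameters until its projected contour best aligns with the detected one. The sketch below fits just a single scale parameter in closed form as a minimal stand-in for the full parameterized-model matching; the real system optimizes many shape and pose parameters.

```python
import numpy as np

def fit_scale(model_2d, detected_2d):
    """Least-squares estimate of one scale factor s minimizing
    ||s * model - detected||^2, i.e. the best alignment of the
    projected model contour to the detected image contour
    (simplified one-parameter stand-in for full model fitting)."""
    m = np.asarray(model_2d, dtype=float).ravel()
    d = np.asarray(detected_2d, dtype=float).ravel()
    return float(np.dot(m, d) / np.dot(m, m))

# Detected contour points are exactly 1.5x the model's projection.
model = [(1.0, 2.0), (2.0, 4.0)]
detected = [(1.5, 3.0), (3.0, 6.0)]
print(fit_scale(model, detected))  # 1.5
```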