Because smartphones lack depth cameras, motion capture on a phone has to rely on a single monocular camera, so a workable monocular motion capture scheme is a hurdle that must be cleared. However, problems such as occlusion, depth ambiguity, and the difficulty of estimating skeleton pose make monocular motion capture considerably more complicated.

Recently, researchers at EPFL in Lausanne, Switzerland, and the Max Planck Institute for Informatics published a paper demonstrating MonoPerfCap, a monocular-camera-based markerless performance capture technique that enables motion capture of a single target and can also achieve 3D reconstruction of clothing.

In simple terms, MonoPerfCap works in two steps. The first is to move the camera in a circle around the target person. During this scan, the system recognizes the target and collects information about them, generating a skeleton model of the person from their silhouette. This skeleton facilitates the subsequent motion capture, while the rest of the collected information is used to render the outer surface model.
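The paper does not release this preprocessing code, but the silhouette collection during the circular scan can be pictured roughly as in the sketch below. This is a minimal illustration, assuming OpenCV is available and that frames of the scan are read from a video file; the file name, thresholds, and use of a generic background subtractor are all assumptions, not the paper's actual segmentation method.

```python
import cv2

# Hypothetical input: the circular scan around the actor (placeholder file name).
scan = cv2.VideoCapture("circular_scan.mp4")

# A standard background subtractor stands in for the paper's own segmentation;
# it yields a rough binary silhouette for every frame of the scan.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

silhouettes = []
while True:
    ok, frame = scan.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Clean the mask a little so the outline is usable for fitting
    # a skeleton / surface template later on.
    mask = cv2.medianBlur(mask, 5)
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    silhouettes.append(mask)

scan.release()
print(f"collected {len(silhouettes)} silhouettes from the scan")
```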

The paper states that MonoPerfCap's silhouette-based pose refinement improves both pose estimation and silhouette segmentation. In the paper's example, errors in the pose estimate lead to inaccurate background subtraction around the left arm; MonoPerfCap's pose refinement moves the arm skeleton to the correct position, and a second round of silhouette extraction then produces a noticeably cleaner segmentation based on the refined pose (e).
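The paper formulates this refinement as an energy minimization over the whole skeleton; the toy sketch below only illustrates the core idea with a single 2D "arm" bone whose angle is adjusted until its rendered silhouette best overlaps the observed one. All names, shapes, and numbers here are invented for illustration and are not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

H, W = 64, 64

def render_arm(angle, length=25, thickness=3):
    """Rasterize a toy 'arm' bone as a binary mask, rotated by `angle`
    radians around a fixed shoulder point. Stand-in for projecting the
    actor's skeleton/surface model into the image."""
    shoulder = np.array([32.0, 32.0])
    ys, xs = np.mgrid[0:H, 0:W]
    pts = np.stack([xs - shoulder[0], ys - shoulder[1]], axis=-1)
    direction = np.array([np.cos(angle), np.sin(angle)])
    along = pts @ direction                                   # distance along the bone
    perp = np.abs(pts @ np.array([-direction[1], direction[0]]))  # distance from its axis
    return (along >= 0) & (along <= length) & (perp <= thickness)

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Pretend the segmented image silhouette shows the arm at 40 degrees,
# while the initial pose estimate says 10 degrees (the "left arm" error).
observed = render_arm(np.deg2rad(40))
initial_angle = np.deg2rad(10)

# Refinement: search for the arm angle whose rendered silhouette best
# overlaps the observed one (maximize IoU = minimize its negative).
result = minimize_scalar(lambda a: -iou(render_arm(a), observed),
                         bounds=(0.0, np.pi / 2), method="bounded")

print(f"initial: {np.degrees(initial_angle):.1f} deg, "
      f"refined: {np.degrees(result.x):.1f} deg")
```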

The second step is to compute the joints from the skeleton model built in the first step, use a convolutional neural network to estimate how the skeleton's pose changes as the person moves, and then render the full body surface onto the posed skeleton.
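The CNN component is closest in spirit to standard heatmap-based joint detectors. The sketch below is a minimal, untrained PyTorch stand-in for that idea; the network size, joint count, and function names are assumptions for illustration, not the architecture from the paper.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 15  # an assumed joint count; the paper's skeleton may differ

class JointHeatmapNet(nn.Module):
    """Tiny stand-in for a pose CNN: it maps an RGB frame to one heatmap
    per joint, and the heatmap peaks give 2D joint locations that a later
    stage lifts onto the 3D skeleton."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, NUM_JOINTS, 1),   # one heatmap per joint
        )

    def forward(self, x):
        return self.features(x)

def heatmaps_to_joints(heatmaps):
    """Take the argmax of each heatmap as that joint's 2D position."""
    b, j, h, w = heatmaps.shape
    flat = heatmaps.view(b, j, -1).argmax(dim=-1)
    return torch.stack([flat % w, flat // w], dim=-1)  # (x, y) per joint

# Usage on a dummy frame; a real system would train the network on
# annotated pose data before the predictions mean anything.
net = JointHeatmapNet()
frame = torch.rand(1, 3, 128, 128)
joints_2d = heatmaps_to_joints(net(frame))
print(joints_2d.shape)  # torch.Size([1, 15, 2])
```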

With the proposed method, not only can the body pose be reconstructed, but the movement of clothing can also be rebuilt, enabling free-viewpoint rendering. However, tracking becomes slightly unstable during very fast motion or under strong occlusion.

The MonoPerfCap researchers compared their motion capture results against methods published in 2015, 2016, and 2017. The MonoPerfCap results outperformed the other methods and came very close to those of a multi-camera motion capture setup using eight cameras.

That said, the method has its limitations. First, capture relies on a circular scan of the target person that takes roughly half a minute. Although the subsequent processing requires no human intervention, this step is still a minor inconvenience.

In addition, capture may still fail in some scenes. The feet are difficult to capture from a typical viewpoint, and tracking may fail under heavy occlusion or very rapid movement. Loose garments such as a windbreaker or a long coat are also likely to interfere with recognition and prevent capture; in the paper's examples, all of the subjects wear relatively form-fitting clothing.

Above is an example from the paper of capturing several different subjects. The recognition results hold up well, and the details of the clothing and body rendering are also good.

A couple of days ago we covered another monocular motion capture system, VNect. In principle the two are not very different: both are based on a skeleton model, and both rely on convolutional neural network learning. The difference is that VNect only estimates the skeleton's pose, whereas MonoPerfCap also estimates the outer body surface and clothing details, and is comparatively more mature.

In fact, MonoPerfCap's approach is not limited to live motion capture; as implemented in the paper, it can also be used to capture a person's performance from suitable pre-recorded video.

In short, MonoPerfCap can be applied in a variety of scenarios, complex backgrounds do not affect its results, and it surpasses previous schemes in performance, with particularly remarkable detail rendering; it can fairly be called one of the more impressive monocular motion capture systems to date. Interested readers can click through to download the original paper.
