Project address: github.com/xiaoyou-bil…
Nothing in this project is original work of mine (existing implementations already cover every part); this post just briefly explains how to use the project and how it works.
Demo video: www.bilibili.com/video/BV1cP…
Running the project
Original project address: github.com/huahuahuage…
Some of the original project's dependencies may fail to install; you can use the requirements.txt from this repository instead.
Also, it is best to run the project on Python 3.7; on 3.9 the required TensorFlow version will not install.
# Download the models and the openpose files as described in the original project, then install the dependencies and run. Usage is very simple:
pip install -r requirements.txt
python launch.py
A brief introduction to how it works
The project breaks down into the following steps.
Video pose recognition
The first part extracts the motion information from the video. This is done with the openpose project, which currently has around 25k stars on GitHub.
The core code is the snippet below; it simply invokes openpose to generate the motion data.
# Paths for the OpenPose components
openpose_path = os.path.join(work_dir, "utils/openpose/bin/OpenPoseDemo.exe")
openpose_write_json_path = os.path.join(video_dir, "_json")
openpose_write_video_path = os.path.join(video_dir, "_openpose.avi")
# Run the component: track at most one person with the COCO pose model,
# writing per-frame keypoint JSON plus a rendered preview video
os.chdir(os.path.join(work_dir, "utils/openpose"))
cmd_part_1 = '{} --model_pose COCO --video {} --write_json {} --write_video {} --number_people_max 1 --net_resolution "-1x240"'.format(
    openpose_path, video_path, openpose_write_json_path, openpose_write_video_path)
os.system(cmd_part_1)
os.chdir(work_dir)
Let's look at the JSON content: it is really just a pile of coordinate points (there is one JSON file per frame).
{"version":1.3,"people":[{"person_id":[-1],"pose_keypoints_2d":[516.183,157.359,0.965886,495.35,226.672,0.894758,455.993,232.744,0.814519,419.86,302.161,0.661659,389.612,323.369,0.366841,543.449,211.66,0.842323,630.887,202.454,0.793637,643.068,139.28,0.830431,498.183,371.487,0.727788,522.463,495.224,0.748898,546.481,621.947,0.70364,546.511,362.499,0.68047,522.464,495.231,0.70542,498.347,591.737,0.705498,498.184,151.241,0.929691,519.331,151.271,0.936586,471.082,154.305,0.999959,0,0,0],"face_keypoints_2d":[],"hand_left_keypoints_2d":[],"hand_right_keypoints_2d":[],"pose_keypoints_3d":[],"face_keypoints_3d":[],"hand_left_keypoints_3d":[],"hand_right_keypoints_3d":[]}]}
Generating the 3D pose data
Original project address: github.com/una-dinosau…
In short, this step renders a plot of the estimated pose and extracts the pose data.
The extracted pose data looks like this:
516.178 157.36149999999998 495.3515 226.665 455.998 232.72750000000002 419.86350000000004 302.1445 380.58000000000004 323.281 543.4515 211.661 630.8705 202.44850000000002 643.069 139.2785 498.20050000000003 371.48400000000004 522.4615 495.226 546.4755 621.9404999999999 546.5125 362.4925 522.4545 495.2335 498.346 591.741 498.17949999999996 151.24450000000002 519.328 151.2745 471.0795 154.3055 0.0 0.0
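These are just the 18 COCO joints from the previous step flattened into (x, y) pairs, with the confidence column dropped. A minimal sketch for reading one such line back (my own, assuming this layout):

import numpy as np

def parse_pose_line(line):
    # one frame: whitespace-separated floats, two values per joint
    values = np.array(line.split(), dtype=float)
    return values.reshape(-1, 2)  # -> (18, 2): one (x, y) row per COCO joint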
Depth estimation for the pose data
Original project address: github.com/iro-cp/FCRN…
This step generates the depth information: a predicted depth map for each frame of the video.
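To give a feel for how a depth map and the 2D keypoints combine (a rough sketch of my own, not the FCRN code; the function name and argument layout are assumptions):

import numpy as np

def sample_joint_depth(depth_map, keypoints_xy):
    # depth_map: (H, W) array predicted by the depth network
    # keypoints_xy: (18, 2) array of 2D joint pixel coordinates
    h, w = depth_map.shape
    depths = []
    for x, y in keypoints_xy:
        # clamp to the image bounds; joints at (0, 0) were not detected
        xi = int(np.clip(x, 0, w - 1))
        yi = int(np.clip(y, 0, h - 1))
        depths.append(depth_map[yi, xi])
    return np.array(depths)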
Generating the VMD motion file
This is the last step: it takes the motion information and depth information from the previous steps and generates the final MMD motion file.
The first lines of the code read in Hatsune Miku's bone data:
def run(target, verbose, centerxy, centerz, xangle, ddecimation, mdecimation, idecimation, alignment, legik, heelpos):
    # load the reference skeleton (Animasa Miku bone definitions)
    bone = os.path.join(os.getcwd(), "data/born/animasa_miku_born.csv")
    upright = 0
The bone data is mainly a set of coordinates for each bone. Since this skeleton is generated from the Miku model, it may not suit other models.
There is also a good article online explaining the contents of the VMD motion file format; if you are interested, see: www.jianshu.com/p/ae312fb53…
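Based on the layout that kind of article describes, here is a minimal sketch of writing the bone-keyframe portion of a VMD file (my own illustration, not this project's writer; a complete file also carries morph, camera, and light sections):

import struct

def write_vmd(path, model_name, bone_frames):
    # bone_frames: list of (bone_name, frame_number, (x, y, z), (qx, qy, qz, qw))
    with open(path, "wb") as f:
        f.write(struct.pack("30s", b"Vocaloid Motion Data 0002"))  # magic header
        f.write(struct.pack("20s", model_name.encode("shift-jis")))
        f.write(struct.pack("<I", len(bone_frames)))
        for name, frame_no, pos, quat in bone_frames:
            f.write(struct.pack("15s", name.encode("shift-jis")))  # bone name
            f.write(struct.pack("<I", frame_no))
            f.write(struct.pack("<3f", *pos))   # position
            f.write(struct.pack("<4f", *quat))  # rotation quaternion (x, y, z, w)
            f.write(bytes(64))                  # interpolation parameters
        f.write(struct.pack("<I", 0))           # morph keyframe count (empty here)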
Common issues
If TensorFlow fails to install, you can install it with the following command:
pip install tensorflow==1.13.1 -i https://pypi.tuna.tsinghua.edu.cn/simple
CMake must be installed to build dlib
This happens because CMake is not installed on your machine; install it yourself from: cmake.org/download/
protobuf package issue
The error message is as follows:
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
This is a protobuf version problem; it can be fixed by installing this version:
pip install protobuf==3.19.0