Gesture Recognition with TensorFlow

A TensorFlow gesture detector (waving, fist pumping, running, random motion) for the Atltvhead project, and an exploration into data science.

While completing this project I made some tools for anyone who wants to replicate it with their own gestures. All my files can be found in this GitHub repo. Getting up and running, TL;DR:

  • If you use Docker, you can use the Jupyter Notebook TensorFlow container or build from my Makefile.

  • Get started by uploading the capture data .ino in the Arduino_Sketch folder onto a SparkFun ESP32 Thing Plus, with a push button wired between pin 33 and GND and an Adafruit LSM6DSOX 9-DoF IMU connected over a Qwiic connector.

  • Use the capture data Python script in the Training_Data folder. Run this script on any PC, type in the gesture name, and start recording motion data by pressing the button on the Arduino (a sketch of this serial-capture loop follows the list).

  • After several recordings of one gesture, switch to a different gesture and repeat. I tried to get at least 50 motion recordings of each gesture; you can try fewer if you like.

  • Once all the data is collected, navigate to the Python Scripts folder and run the data pipeline Python script and the model pipeline script, in that order. Models are trained here, which can take some time (a training sketch follows the list).

  • Run the predict gesture script and press the button on the Arduino to take a motion recording and see the predicted gesture printed out. Run the TFLite gesture prediction script to use the smaller, converted model (a conversion-and-inference sketch also follows the list).

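As a rough illustration of the capture step, here is a minimal sketch of what the serial-capture loop can look like. It assumes the Arduino sketch streams comma-separated accelerometer/gyroscope readings over USB serial; the port name, baud rate, and column layout are assumptions, so check the capture script in the repo for the exact protocol.

```python
import csv
import serial  # pip install pyserial

PORT = "/dev/ttyUSB0"  # or e.g. "COM3" on Windows; adjust for your machine
BAUD = 115200          # assumed baud rate; must match the Arduino sketch

gesture = input("Gesture name: ").strip()
ser = serial.Serial(PORT, BAUD, timeout=1)

print("Press the button on the Arduino to record; Ctrl+C to stop.")
with open(f"{gesture}.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["aX", "aY", "aZ", "gX", "gY", "gZ"])  # assumed layout
    try:
        while True:
            line = ser.readline().decode("utf-8", errors="ignore").strip()
            if line:  # skip empty reads between recordings
                writer.writerow(line.split(","))
    except KeyboardInterrupt:
        pass
ser.close()
```
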
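The model pipeline step can be sketched along these lines. This is not the repo's exact architecture: the window length, layer sizes, and label order are placeholder assumptions, and the random arrays stand in for the data pipeline's real output.

```python
import numpy as np
import tensorflow as tf

GESTURES = ["wave", "fist_pump", "run", "random"]  # assumed label order
SAMPLES_PER_RECORDING = 100                        # assumed window length

# Stand-in data so the sketch runs on its own; swap in the real windows
# produced by the data pipeline (shape: [recordings, samples, 6 axes]).
X = np.random.rand(200, SAMPLES_PER_RECORDING, 6).astype("float32")
y = np.random.randint(0, len(GESTURES), size=200)

# A small fully connected classifier over the flattened IMU window.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(SAMPLES_PER_RECORDING, 6)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(GESTURES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=30, validation_split=0.2)
model.save("gesture_model.h5")  # assumed output file name
```
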
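For the TFLite step, converting the trained model and running a single prediction with the interpreter looks roughly like this; the file names and input shape follow the training sketch above and are assumptions.

```python
import numpy as np
import tensorflow as tf

# Convert the saved Keras model to the smaller TFLite format.
model = tf.keras.models.load_model("gesture_model.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("gesture_model.tflite", "wb") as f:
    f.write(converter.convert())

# Run one inference; the input shape must match what the model trained on.
interpreter = tf.lite.Interpreter(model_path="gesture_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

recording = np.random.rand(1, 100, 6).astype("float32")  # stand-in recording
interpreter.set_tensor(inp["index"], recording)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]

GESTURES = ["wave", "fist_pump", "run", "random"]  # assumed label order
print("Predicted gesture:", GESTURES[int(np.argmax(scores))])
```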

Problem:

I run an interactive live stream. I wear an old TV (with a working LED display) as a helmet, plus a backpack with an integrated display. Twitch chat controls what's displayed on the television screen and the backpack screen through chat commands. Together, Twitch chat and I go through the city of Atlanta, GA, spreading cheer.

As time has gone on, I have accumulated over 20 channel commands for the TV display. Remembering these commands has become complicated and tedious, so it's time to simplify my interface to the tvhead.

What are my resources? During the live stream, I am on rollerblades: my right hand holds the camera, my left hand wears a high-five-detecting glove I built from a lidar sensor and an ESP32, my backpack holds a Raspberry Pi 4, and there is a website with buttons that post commands in Twitch chat.

What to simplify? I’d like to simplify the channel commands and gamify it a bit more.
