Event-Driven Visual-Tactile Sensing and Learning for Robots

Toward accurate, fast, and low-power multi-sensory perception through neuromorphic sensing and learning, this work contributes an event-driven visual-tactile perception system, comprising a novel biologically inspired tactile sensor and multi-modal spike-based learning.

Event-Driven Visual-Tactile Sensing and Learning for Robots, Tasbolat Taunyazov, Weicong Sng, Hian Hian See, Brian Lim, Jethro Kuan, Abdul Fatir Ansari, Benjamin Tee, and Harold Soh, Robotics: Science and Systems Conference (RSS), 2020

GitHub: https://github.com/clear-nus/VT_SNN

Many everyday tasks require multiple sensory modalities to be performed successfully. Consider, for example, fetching a carton of soy milk from the fridge: humans use vision to locate the carton and can infer from a simple grasp how much liquid it contains. This inference is carried out on power-efficient neural substrates; the human brain requires far less energy than current artificial systems.

In this work, we take inspiration from biological systems, which are asynchronous and event-driven. We contribute an event-driven visual-tactile perception system, comprising NeuTouch (a biologically inspired tactile sensor) and VT-SNN (for multi-modal spike-based perception).

We evaluate our visual-tactile system (using NeuTouch and a Prophesee event camera) on two robot tasks: container classification and rotational slip detection. We show that relatively small differences in weight (approximately 30 g across 20 object-weight classes) can be distinguished by our prototype sensor and spiking models. The second experiment shows that rotational slip can be detected accurately within 0.08 s. When tested on the Intel Loihi, the SNN achieved inference speeds similar to a GPU while requiring an order of magnitude less power.


Getting Started

Cloning the Repository

This project requires a fork of the SLAYER framework to learn a Spiking Neural Network (SNN), which we have included here as a git submodule. To obtain the full set of dependencies, clone this repository recursively:

git clone https://github.com/clear-nus/VT_SNN/ --recursive
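
If you have already cloned the repository without --recursive, you can fetch the submodule afterwards with the standard git command:

git submodule update --init --recursive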

Installing Requirements

The PyPI-installable requirements for this project are listed in requirements.txt. To install them, run:

pip install -r requirements.txt

This project also depends on the SLAYER fork included as the slayerPytorch git submodule (see above). To install it, run:

   cd slayerPytorch
   python setup.py install
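
If the installation succeeded, the package should now be importable; a quick smoke test (assuming the fork keeps the upstream package name, slayerSNN):

   python -c "import slayerSNN"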

This repository has been tested with the declared set of dependencies, on Python 3.6.10.
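
Since the dependency versions are pinned, you may want to install them inside a virtual environment tied to that interpreter. A minimal sketch, assuming a python3.6 interpreter is on your PATH (the environment name .venv is arbitrary):

python3.6 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt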

Datasets

The datasets are hosted on Google Drive.

  1. Slip Dataset (LiteFull)

We also provide helper scripts for headless fetching of the required data. For slip:

./fetch_slip.sh

The data, preprocessed with the parameters specified in the paper, can also be downloaded:

./fetch_slip.sh preprocess

Basic Usage

We provide the scripts for preprocessing the raw event data and for training the models in the vtsnn folder. We provide code for the three models presented in our paper (a minimal SLAYER-style sketch follows the list):

  1. VT-SNN (Using SLAYER)
  2. ANN (MLP-GRU)
  3. CNN3D
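
As a rough illustration of how a spiking model is assembled with the SLAYER fork (the pattern VT-SNN builds on), the sketch below defines a small spiking classifier. The parameter file network.yaml, the layer sizes, and the class count are hypothetical placeholders, not the architecture from the paper; see the training scripts in vtsnn/ for the actual models.

import torch
import slayerSNN as snn

# Neuron and simulation parameters are read from a YAML file
# (hypothetical path; SLAYER ships example configs in its repository).
netParams = snn.params("network.yaml")

class SpikingClassifier(torch.nn.Module):
    """Two dense spiking layers in the standard SLAYER pattern."""

    def __init__(self, netParams, in_features=156, hidden=64, n_classes=20):
        super().__init__()
        self.slayer = snn.layer(netParams["neuron"], netParams["simulation"])
        self.fc1 = self.slayer.dense(in_features, hidden)
        self.fc2 = self.slayer.dense(hidden, n_classes)

    def forward(self, spike_input):
        # Spike tensors are shaped (batch, channels, height, width, time);
        # psp() applies the post-synaptic potential filter, spike() fires.
        x = self.slayer.spike(self.slayer.psp(self.fc1(spike_input)))
        return self.slayer.spike(self.slayer.psp(self.fc2(x)))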

The repository has been carefully crafted to use guild.ai to track experiment runs, and its use is encouraged. Instructions for running each script (using both guild and vanilla Python) can also be found within each script.

To see all possible operations, run:

guild operations

For example, to run our VT-SNN tactile-only model on the Container-Weight classification task, run:

guild run vtsnn:train-cw mode=tact data_dir=/path/to/data

Visit the vtsnn/train_*.py files for instructions to run with vanilla Python.

BibTeX

To cite this work, please use:

@inproceedings{taunyazov20event,
    title     = {Event-Driven Visual-Tactile Sensing and Learning for Robots},
    author    = {Tasbolat Taunyazov and Weicong Sng and Hian Hian See and Brian Lim and Jethro Kuan and Abdul Fatir Ansari and Benjamin Tee and Harold Soh},
    booktitle = {Proceedings of Robotics: Science and Systems},
    year      = {2020},
    month     = {July}}

Troubleshooting

If your scripts cannot find the vtsnn module, run the following from the root directory:

export PYTHONPATH=.
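
The variable can also be set for a single invocation, for example with the training command shown earlier:

PYTHONPATH=. guild run vtsnn:train-cw mode=tact data_dir=/path/to/data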