
Repost: Ubuntu cannot be reached over SSH

Ubuntu installs the SSH client by default but not the server, so external machines cannot reach an Ubuntu VM over SSH; sshd must be installed. The reposted installation steps follow. Enabling the SSH service on Ubuntu: SSH is split into the client, openssh-client, and the server, openssh-server. If you only want to log in to other machines over SSH, you only need openssh-client (Ubuntu installs it by default; if not, sud…

2015-01-13 11:00:16

CVPR2019-ocr.zip

Online handwritten Chinese text recognition (OHCTR) is a challenging problem as it involves a large-scale character set, ambiguous segmentation, and variable-length input sequences. In this paper, we exploit the outstanding capability of path signature to translate online pen-tip trajectories into informative signature feature maps using a sliding window-based method, successfully capturing the analytic and geometric properties of pen strokes with strong local invariance and robustness. A multi-spatial-context fully convolutional recurrent network (MC-FCRN) is proposed to exploit the multiple spatial contexts from the signature feature maps and generate a prediction sequence while completely avoiding the difficult segmentation problem. Furthermore, an implicit language model is developed to make predictions based on semantic context within a predicting feature sequence, providing a new perspective for incorporating lexicon constraints and prior knowledge about a certain language in the recognition procedure. Experiments on two standard benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with correct rates of 97.10% and 97.15%, respectively, which are significantly better than the best result reported thus far in the literature.
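
A toy computation may make the signature features above concrete: for a piecewise-linear pen trajectory, the path signature truncated at level 2 can be built segment by segment with Chen's identity. The C++ sketch below is not the paper's code; the window length, the level-2 truncation, and all names are illustrative assumptions.

```cpp
#include <array>
#include <cstdio>
#include <vector>

// A point on a pen-tip trajectory.
struct Pt { double x, y; };

// Path signature of a 2-D path truncated at level 2:
// s1[i]    = integral of dx_i              (the total displacement)
// s2[i][j] = iterated integral of dx_i dx_j over t1 < t2
struct Sig2 {
    std::array<double, 2> s1{};
    std::array<std::array<double, 2>, 2> s2{};
};

// Signature of the piecewise-linear path pts[b..e], built incrementally via
// Chen's identity: appending a straight segment with increment d updates
//   s2[i][j] += s1[i]*d[j] + d[i]*d[j]/2,   then   s1[i] += d[i].
Sig2 signature(const std::vector<Pt>& pts, size_t b, size_t e) {
    Sig2 s;
    for (size_t k = b; k < e; ++k) {
        const double d[2] = {pts[k + 1].x - pts[k].x, pts[k + 1].y - pts[k].y};
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                s.s2[i][j] += s.s1[i] * d[j] + 0.5 * d[i] * d[j];
        for (int i = 0; i < 2; ++i) s.s1[i] += d[i];
    }
    return s;
}

int main() {
    // Toy trajectory; a real OHCTR system would use resampled pen-tip points.
    const std::vector<Pt> traj = {{0, 0}, {1, 0}, {1, 1}, {2, 1}, {3, 2}};
    const size_t win = 3;  // sliding-window length in points (assumed)
    for (size_t b = 0; b + win <= traj.size(); ++b) {
        Sig2 s = signature(traj, b, b + win - 1);
        std::printf("window %zu: s1=(%.2f, %.2f), s2[0][1]=%.2f\n",
                    b, s.s1[0], s.s1[1], s.s2[0][1]);
    }
    return 0;
}
```

Stacking each window's signature entries along the stroke yields the kind of signature feature map the abstract feeds to the recurrent network.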

2020-05-28

基于深度学习的文字识别技术现状及发展趋势.pdf (Current state and development trends of deep-learning-based text recognition)

Presentation slides by Prof. 金莲文 found online, covering the current state of deep-learning-based text recognition, its application scenarios, and its development trends.

2020-05-28

Face recognition, pedestrian ReID, and image segmentation

Human parsing has been extensively studied recently (Yamaguchi et al. 2012; Xia et al. 2017) due to its wide applications in many important scenarios. Mainstream fashion parsing models (i.e., parsers) focus on parsing high-resolution, clean images. However, directly applying parsers trained on benchmarks of high-quality samples to a particular application scenario in the wild, e.g., a canteen, airport, or workplace, often gives unsatisfactory performance due to domain shift. In this paper, we explore a new and challenging cross-domain human parsing problem: taking a benchmark dataset with extensive pixel-wise labeling as the source domain, how can a satisfactory parser be obtained on a new target domain without any additional manual labeling? To this end, we propose a novel and efficient cross-domain human parsing model to bridge the cross-domain differences in visual appearance and environmental conditions and to fully exploit commonalities across domains. Our model explicitly learns a feature compensation network specialized for mitigating cross-domain differences. A discriminative feature adversarial network is introduced to supervise the feature compensation and effectively reduce the discrepancy between the feature distributions of the two domains. Our model also introduces a structured label adversarial network to guide the parsing results of the target domain to follow the high-order relationships of the structured labels shared across domains. The proposed framework is end-to-end trainable, practical, and scalable in real applications. Extensive experiments are conducted with the LIP dataset as the source domain and four different datasets without any annotations, including surveillance videos, movies, and runway shows, as target domains. The results consistently confirm the data efficiency and performance advantages of the proposed method for the challenging cross-domain human parsing problem.

This paper presents a robust joint discriminative appearance-model-based tracking method using online random forests and a mid-level feature (superpixels). To achieve superpixel-wise discriminative ability, we propose a joint appearance model that consists of two random-forest-based models: the Background-Target discriminative Model (BTM) and the Distractor-Target discriminative Model (DTM). More specifically, the BTM effectively learns discriminative information between the target object and the background, while the DTM suppresses distracting superpixels, which significantly improves the tracker's robustness and alleviates the drifting problem. A novel online random forest regression algorithm is proposed to build the two models. The BTM and DTM are linearly combined into a joint model to compute a confidence map. Tracking results are estimated from the confidence map, where the position and scale of the target are estimated in turn. Furthermore, we design a model-updating strategy to adapt to appearance changes over time by discarding degraded trees of the BTM and DTM and initializing new trees as replacements. We test the proposed tracking method on two large tracking benchmarks, the CVPR2013 tracking benchmark and the VOT2014 tracking challenge. Experimental results show that the tracker runs at real-time speed and achieves favorable tracking performance compared with state-of-the-art methods. The results also suggest that the DTM improves tracking performance significantly and plays an important role in robust tracking.
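
As a rough sketch of the linear BTM/DTM fusion into a confidence map described above (the mixing weight, the score semantics, and all names are assumptions for illustration, not the paper's implementation):

```cpp
#include <cstdio>
#include <vector>

// Per-superpixel scores from the two discriminative models:
// btm: likelihood of being target rather than background, in [0, 1]
// dtm: likelihood of being a distractor, in [0, 1]; high values are suppressed
struct SuperpixelScore { double btm, dtm; };

// Joint confidence: a linear combination of the BTM score and a
// DTM-based suppression term. alpha is an assumed mixing weight.
std::vector<double> jointConfidence(const std::vector<SuperpixelScore>& s,
                                    double alpha = 0.6) {
    std::vector<double> conf(s.size());
    for (size_t i = 0; i < s.size(); ++i)
        conf[i] = alpha * s[i].btm + (1.0 - alpha) * (1.0 - s[i].dtm);
    return conf;
}

int main() {
    // The second superpixel looks like the target but also like a distractor,
    // so the DTM term pulls its confidence down.
    const std::vector<SuperpixelScore> sp = {{0.9, 0.1}, {0.8, 0.7}, {0.2, 0.1}};
    const std::vector<double> conf = jointConfidence(sp);
    for (size_t i = 0; i < conf.size(); ++i)
        std::printf("superpixel %zu: confidence %.2f\n", i, conf[i]);
    return 0;
}
```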

2020-05-27

对抗学习-图像生成Gan.zip (Adversarial learning: GAN image generation)

Several GAN papers. We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
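
For reference, the objective this abstract describes (the pix2pix formulation, which also adds an L1 reconstruction term) is:

```latex
% Conditional adversarial objective:
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)]
                         + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))]
% L1 reconstruction term:
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\left[\lVert y - G(x, z) \rVert_1\right]
% The generator solves the minimax problem:
G^{*} = \arg\min_{G}\max_{D}\ \mathcal{L}_{cGAN}(G, D) + \lambda\,\mathcal{L}_{L1}(G)
```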

2020-05-25

人流统计及视频人流属性分析相关监控专利.zip (Surveillance patents on crowd counting and video crowd-attribute analysis)

Patents related to security surveillance, covering crowd counting and face-attribute analysis: a method and device for crowd tracking and people counting; a method and device for people-flow statistics; and a method and device for crowd analysis based on face attributes.

2020-05-21

单向准连通的表格线检测算法_彭绍湖.pdf (A table frame-line detection algorithm based on single-direction quasi-connectivity, by 彭绍湖)

Addressing table frame lines that are skewed, fragmented, or broken, and characters that touch the lines, this paper studies table frame-line detection in depth and proposes a method that combines line detection with line processing to extract table lines. In the detection stage, a method based on single-direction quasi-connectivity is proposed, which effectively copes with skewed, fragmented, and character-touching lines; in the processing stage, detected lines are linked and filtered, which effectively resolves broken frame lines. Extensive experiments show that the method achieves good detection results.
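
The paper's exact quasi-connectivity definition is not reproduced here, but the general idea of gap-tolerant run tracking for table lines can be sketched as follows; the gap and length thresholds are illustrative assumptions. Linking and filtering the resulting segments (the processing stage the abstract mentions) would then repair breaks that exceed the gap tolerance.

```cpp
#include <vector>

// A binary image: img[r][c] != 0 marks a black (foreground) pixel.
using Image = std::vector<std::vector<int>>;

// A detected horizontal segment on row `row`, spanning columns [c0, c1].
struct Segment { int row, c0, c1; };

// Scan each row for long runs of black pixels, tolerating small gaps inside
// a run so that slightly broken lines still register as one segment.
std::vector<Segment> detectHorizontalLines(const Image& img,
                                           int maxGap = 3, int minLen = 40) {
    std::vector<Segment> out;
    for (int r = 0; r < (int)img.size(); ++r) {
        int start = -1, gap = 0;
        const int cols = (int)img[r].size();
        auto flush = [&](int lastBlack) {
            if (start >= 0 && lastBlack - start + 1 >= minLen)
                out.push_back({r, start, lastBlack});
            start = -1;
            gap = 0;
        };
        for (int c = 0; c < cols; ++c) {
            if (img[r][c] != 0) {          // black pixel extends the run
                if (start < 0) start = c;
                gap = 0;
            } else if (start >= 0 && ++gap > maxGap) {
                flush(c - gap);            // gap too large: close the run
            }
        }
        if (start >= 0) flush(cols - 1 - gap);  // run reaching the row end
    }
    return out;
}
```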

2020-05-20

handwriting.zip

A collection of OCR-related papers. At present, text orientation is not diverse enough in existing scene text datasets. Specifically, curve-oriented text is largely outnumbered by horizontal and multi-oriented text; hence, it has received minimal attention from the community so far. Motivated by this, we collected a new scene text dataset, Total-Text, which emphasizes text orientation diversity. It is the first relatively large-scale scene text dataset that features three different text orientations: horizontal, multi-oriented, and curve-oriented. In addition, we also study several other important elements such as the practicality and quality of the ground truth, the evaluation protocol, and the annotation process. We believe these elements are as important as the images and ground truth in facilitating a new research direction. Second, we propose a new scene text detection model as the baseline for Total-Text, namely Polygon-Faster-RCNN, and demonstrate its ability to detect text of all orientations.

2020-05-20

tiplog.odt

Accurate crowd density estimation faces the following difficulties:
1. Low resolution: in the UCF Crowd Counting 50 dataset, a head in dense scenes may occupy only 5×5 pixels or fewer, which rules out most detection-based methods.
2. Severe occlusion: in a crowd, even head-shoulder models are hard to apply, let alone full-body models, since heads heavily occlude one another.
3. Perspective distortion: in short, near objects appear large and far ones small, so heads can appear at any scale.
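
These difficulties are why counting is usually cast as density-map regression rather than detection. A common preprocessing step splats a normalized Gaussian at each annotated head, widening the kernel with a perspective estimate; in the sketch below, the linear perspective model and all parameters are illustrative assumptions, not taken from this note.

```cpp
#include <cmath>
#include <vector>

struct Head { int x, y; };  // annotated head position in pixels

// Build a density map whose integral equals the head count by splatting a
// normalized Gaussian per head. sigma grows with the row index to mimic
// perspective (heads near the bottom appear larger); the linear model is
// an assumption for illustration.
std::vector<std::vector<double>> densityMap(int w, int h,
                                            const std::vector<Head>& heads) {
    std::vector<std::vector<double>> d(h, std::vector<double>(w, 0.0));
    for (const Head& p : heads) {
        const double sigma = 1.0 + 4.0 * p.y / h;
        const int rad = (int)std::ceil(3.0 * sigma);
        // First pass: total in-bounds kernel mass, so each head sums to 1.
        double mass = 0.0;
        for (int dy = -rad; dy <= rad; ++dy)
            for (int dx = -rad; dx <= rad; ++dx) {
                const int x = p.x + dx, y = p.y + dy;
                if (x < 0 || y < 0 || x >= w || y >= h) continue;
                mass += std::exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma));
            }
        if (mass <= 0.0) continue;  // head annotated entirely out of bounds
        // Second pass: accumulate the normalized kernel into the map.
        for (int dy = -rad; dy <= rad; ++dy)
            for (int dx = -rad; dx <= rad; ++dx) {
                const int x = p.x + dx, y = p.y + dy;
                if (x < 0 || y < 0 || x >= w || y >= h) continue;
                d[y][x] += std::exp(-(dx * dx + dy * dy) /
                                    (2.0 * sigma * sigma)) / mass;
            }
    }
    return d;
}
```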

2020-05-19

Papers: CRF, attention

A set of deep-learning papers: SENet, EAST, Pixel-Anchor, face detection, and others.

2020-05-18

Big-number arithmetic in C++

Big-number arithmetic code implementing basic addition, subtraction, multiplication, and division, plus matrix inversion and matrix addition, subtraction, multiplication, and division. Implemented with character arrays; the required precision, including the number of digits after the decimal point, can be set via a macro.
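
The upload itself is not reproduced here, but the digit-array idea it describes can be sketched: a minimal big-integer addition over ASCII digit arrays, with a PRECISION macro standing in for the description's macro-configured precision (the macro name and the interface are assumptions).

```cpp
#include <cstdio>
#include <cstring>

#define PRECISION 256  // maximum number of digits, configurable via macro

// Adds two non-negative decimal integers stored as ASCII strings of at most
// PRECISION digits; out must have room for PRECISION + 2 chars.
void bigAdd(const char* a, const char* b, char* out) {
    const int la = (int)std::strlen(a), lb = (int)std::strlen(b);
    char buf[PRECISION + 2];
    int n = 0, carry = 0;
    // Walk both numbers from the least significant digit, propagating carry.
    for (int i = la - 1, j = lb - 1; i >= 0 || j >= 0 || carry; --i, --j) {
        int s = carry;
        if (i >= 0) s += a[i] - '0';
        if (j >= 0) s += b[j] - '0';
        buf[n++] = (char)('0' + s % 10);
        carry = s / 10;
    }
    // Digits were produced in reverse order; emit most significant first.
    for (int k = 0; k < n; ++k) out[k] = buf[n - 1 - k];
    out[n] = '\0';
}

int main() {
    char out[PRECISION + 2];
    bigAdd("98765432109876543210", "12345678901234567890", out);
    std::printf("%s\n", out);  // prints 111111111011111111100
    return 0;
}
```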

2016-07-31

VS 10.0 assistant

Assistant plugin for Visual Studio 10.0.

2016-07-31

VS 6.0 assistant

Assistant plugin for VS 6.0.

2016-07-31

Qt OpenCV camera

Calls OpenCV from Qt to operate the camera, and uploads captured images to a server to obtain the corresponding results.
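
A minimal sketch of the capture half using standard OpenCV and Qt APIs (the server-upload step is omitted; this is an illustration, not the uploaded code):

```cpp
#include <opencv2/opencv.hpp>
#include <QImage>

// Grab one frame from the default camera and convert it to a QImage that a
// Qt widget can display. Returns a null QImage on failure.
QImage grabFrame() {
    static cv::VideoCapture cap(0);  // open the default camera once
    cv::Mat frame;
    if (!cap.isOpened() || !cap.read(frame)) return QImage();

    cv::Mat rgb;
    cv::cvtColor(frame, rgb, cv::COLOR_BGR2RGB);  // OpenCV delivers BGR
    // QImage wraps the Mat's buffer without owning it, so deep-copy
    // before the local Mat goes out of scope.
    return QImage(rgb.data, rgb.cols, rgb.rows, (int)rgb.step,
                  QImage::Format_RGB888).copy();
}
```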

2016-02-05

C programming guide and interview logic puzzles with answers

An in-depth C language tutorial, C and C++ programming-guide tutorials, and computer-interview logic puzzles with answers (Word format).

2014-04-17

Summary of mixed programming with VC2010, OpenCV, and MATLAB 2012

Describes how to call MATLAB from VC and the related configuration, with examples. Also gives methods for converting between OpenCV's CvMat, Mat, and IplImage and MATLAB's mxArray; after conversion, functions can call one another and data can be passed between the two environments.
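
As one example of such a conversion (a sketch against the standard MEX C API, not the upload's code): copying a double-precision cv::Mat into an mxArray requires a transpose, because OpenCV stores rows contiguously while MATLAB stores columns contiguously.

```cpp
#include <opencv2/core.hpp>
#include "mex.h"  // MATLAB MEX C API

// Convert a CV_64FC1 cv::Mat (row-major) into a MATLAB mxArray
// (column-major). The caller owns the returned array.
mxArray* matToMxArray(const cv::Mat& m) {
    CV_Assert(m.type() == CV_64FC1);
    mxArray* out = mxCreateDoubleMatrix(m.rows, m.cols, mxREAL);
    double* dst = mxGetPr(out);
    // MATLAB stores column c contiguously, so iterate columns outermost.
    for (int c = 0; c < m.cols; ++c)
        for (int r = 0; r < m.rows; ++r)
            dst[c * m.rows + r] = m.at<double>(r, c);
    return out;
}
```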

2013-01-11

Source code from a computer drawing book

Source code from the book 《VC++绘图程序实例及典型习题》, covering image loading, drawing, 3-D graphics, and animation.

2012-11-30
