[OpenCV] Moving-Object Detection Based on Background Subtraction
Platform: VS2010 + OpenCV 2.3.1. Pipeline:
1. Read the video data and initialize the relevant variables, such as the red bounding box's initial value;
2. Grayscale conversion (cvCvtColor);
3. Binarization (cvThreshold);
4. Mean filtering (cvSmooth);
5. Sobel operator (cvSobel);
6. Erosion (cvErode);
7. Dilation (cvDilate);
8. Compare successive frames to find the moving object's boundary;
9. Mark the moving object's boundary with a red box in the original video.
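The steps above map directly onto simple array operations. Since the post's OpenCV C API calls (cvCvtColor, cvSmooth, cvThreshold, cvErode, cvDilate) belong to a long-deprecated interface, here is a minimal NumPy-only sketch of the same frame-differencing pipeline; the function names and the synthetic test frames are illustrative, not from the original post:

```python
import numpy as np

def to_gray(frame):
    # Step 2: grayscale conversion (cvCvtColor equivalent), BGR luminance weights
    return frame @ np.array([0.114, 0.587, 0.299])

def smooth(img, k=3):
    # Step 4: mean filter (cvSmooth equivalent) implemented as a k x k box blur
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def erode(mask):
    # Step 6: erosion (cvErode) -- a pixel survives only if its whole 3x3 neighborhood is on
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask):
    # Step 7: dilation (cvDilate) -- a pixel turns on if any 3x3 neighbor is on
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def detect_motion(prev_frame, frame, thresh=30):
    # Steps 3 and 8-9: difference the smoothed grayscale frames, threshold
    # (cvThreshold equivalent), clean the mask, and return the red box to draw
    diff = np.abs(smooth(to_gray(frame)) - smooth(to_gray(prev_frame)))
    mask = (diff > thresh).astype(np.uint8)
    mask = dilate(erode(mask))
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # no motion between the two frames
    return xs.min(), ys.min(), xs.max(), ys.max()  # (x0, y0, x1, y1)

# Illustrative usage with synthetic frames: a bright block "appears"
prev = np.zeros((40, 40, 3))
cur = prev.copy()
cur[10:20, 15:25] = 255.0
print(detect_motion(prev, cur))  # bounding box around the new object
```

The Sobel step (cvSobel) is omitted here for brevity; in the original pipeline it sharpens the object edges before the morphological clean-up, but plain frame differencing is enough to locate the moving region.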
2015-04-23 10:30:48
How NLP Cracked Transfer Learning
The year 2018 has been an inflection point for machine learning models handling text (or more accurately, Natural
Language Processing or NLP for short). Our conceptual understanding of how best to represent words and sentences
in a way that best captures underlying meanings and relationships is rapidly evolving. Moreover, the NLP community
has been putting forward incredibly powerful components that you can freely download and use in your own models
and pipelines.
One of the latest milestones in this development is the release (https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html) of BERT (https://github.com/google-research/bert), an event described (https://twitter.com/lmthang/status/1050543868041555969) as marking the beginning of a new era in NLP. BERT is a
model that broke several records for how well models can handle language-based tasks. Soon after the release of the
paper describing the model, the team also open-sourced the code of the model, and made available for download
versions of the model that were already pre-trained on massive datasets. This is a momentous development since it
enables anyone building a machine learning model involving language processing to use this powerhouse as a
readily-available component – saving the time, energy, knowledge, and resources that would have gone to training a
language-processing model from scratch.
(It’s been referred to as NLP’s ImageNet moment (http://ruder.io/nlp-imagenet/), referencing how similar developments years ago accelerated the development of machine learning in Computer Vision tasks.)
2018-12-04