Blog (427)
[Original] Must-haves for leaving campus over the break: connecting to a Win10 desktop remotely from a Mac (setting auto power-on after a power outage, enabling Remote Desktop on Win10, connecting with Microsoft Remote Desktop Beta, and using the Windows machine as a server to run code with conda)
2022-11-01 21:13:49 4382
[Original] (108): GRIT: Faster and Better Image captioning Transformer Using Dual Visual Features
2023-03-01 21:21:24 719
[Reposted] Hung-yi Lee (NTU) talk: How ChatGPT Was (Probably) Made - the process of GPT's socialization
2023-02-19 21:01:35 781
[Original] (103): CLIPascene: Scene Sketching with Different Types and Levels of Abstraction
2022-12-09 10:04:16 177
[Original] (101): ClipCap: CLIP Prefix for Image Captioning
2022-11-25 14:38:18 685
[Original] (100): A Reference-free Evaluation Metric for Image Captioning
2022-11-23 23:06:38 314
[Original] An overview of time-series forecasting algorithms (Arima, Prophet, Nbeats, NbeatsX, Informer)
2022-11-02 11:14:00 6825
[Original] (97): Gumbel-Attention for Multi-modal Machine Translation
2022-07-17 21:37:05 138
[Original] Installing Redis on a Mac with Homebrew
Covers installing Redis with Homebrew (brew install redis), the location of the Redis configuration file, and common Redis commands. If Homebrew is not installed yet, see https://blog.csdn.net/qq_37486501/article/details/80632201; if Homebrew stops working after upgrading to Big Sur, run brew update-reset first.
2022-05-15 13:05:32 1555
[Original] (94): GLU Variants Improve Transformer
Paper notes covering Gated Linear Units (GLU) and their variants, plus experiments on the Text-to-Text Transfer Transformer (T5): model architecture, pre-training and perplexity results, and fine-tuning. Source: CoRR.
2022-05-02 19:58:35 514
[Original] (93): Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers
Paper notes covering related work (explainability in computer vision, explainability for Transformers, Transformers in computer vision) and the method.
2022-05-02 19:57:02 205
[Original] (92): Re-evaluating Automatic Metrics for Image Captioning
Paper notes; source: EACL (1) 2017: 199-209. Abstract excerpt: generating natural-language descriptions from images has received wide attention in recent years, so evaluating these descriptions in an automatic way...
2022-04-30 10:20:03 188
[Original] (91): Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering
2022-04-25 13:57:17 360
[Original] Fixing a MacBook that has no IP address (or whose IP address switches to IPv6 format) and cannot get online
2022-04-25 07:12:34 8926
[Original] (90): Multimodal Transformer for Multimodal Machine Translation
Paper notes covering the methodology (the incorporating method and multimodal self-attention) and the experiments (baselines and metrics, datasets, settings, results, visualization analysis).
2022-04-17 17:25:39 438
[Original] (89): Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization
Paper notes; source: EMNLP (1) 2021: 3995-4007, with a GitHub code link.
2022-04-07 15:35:17 104
[Reposted] Building a custom dataset with Dataset and DataLoader
Defining a custom Dataset and using DataLoader in PyTorch. References: the PyTorch docs (https://pytorch.org/docs/1.7.1/data.html) and a source-code walkthrough (https://chenllliang.github.io/2020/02/04/dataloader/). A map-style dataset must override the two built-in methods __getitem__(self, index) and __len__(self), which define the mapping from an index to a sample; a minimal sketch follows this entry.
2022-04-04 08:44:22 1125
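A minimal map-style Dataset sketch in PyTorch (illustrative only, not the reposted article's code; the ToyDataset class and its random tensors are made up for the example):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """A map-style dataset: it must implement __getitem__ and __len__."""

    def __init__(self, n: int = 100):
        # Random features and binary labels, purely for illustration.
        self.x = torch.randn(n, 3)
        self.y = torch.randint(0, 2, (n,))

    def __len__(self):
        # Number of samples in the dataset.
        return len(self.x)

    def __getitem__(self, index):
        # Maps an index to one (sample, label) pair.
        return self.x[index], self.y[index]

loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
for xb, yb in loader:
    print(xb.shape, yb.shape)   # torch.Size([16, 3]) torch.Size([16])
    break
```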
[Original] (88): Pay Attention to MLPs
Paper notes covering the model and its Spatial Gating Unit, plus image classification experiments. Source: CoRR abs/2105.08050 (2021); code: https://paperswithcode.com/paper/pay-attention-to-mlps#code. Main idea: gating-based MLPs as an alternative to attention in Transformers.
2022-04-01 10:58:30 278
[Original] (87): Visual Attention Network
Paper notes covering related work (convolutional neural networks, visual attention methods, vision MLPs), the method (Large Kernel Attention and the Visual Attention Network, VAN), and experiments on image classification and other vision tasks.
2022-04-01 10:20:51 114
[Original] (86): When Shift Operation Meets Vision Transformer: An Extremely Simple Alternative to Attention Mechanism
Paper notes covering related work (attention and vision Transformers, MLP variants, the shift operation) and the proposed architecture.
2022-03-31 21:27:02 75
[Original] (85): Is Space-Time Attention All You Need for Video Understanding?
Paper notes on the TimeSformer model and its experiments. Source: ICML 2021: 813-824; code link given as https://github.com/microsoft/HMNet.
2022-03-31 20:18:38 240
[Original] (84): A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining
Paper notes covering the problem formulation and the method: an encoder with role vectors and a hierarchical Transformer, the decoder, and pretraining, followed by experiments and evaluation.
2022-03-31 14:35:38 1013
[Original] (79): Dynamic Fusion with Intra- and Inter-modality Attention Flow for Visual Question Answering
Paper notes covering the dynamic intra- and inter-modality attention flow fusion for VQA: overview and base visual/language feature extraction.
2022-03-27 09:11:30 70
[Original] (82): Multimodal Transformer with Multi-View Visual Representation for Image Captioning
Paper notes covering related work (image captioning, attention mechanisms) and the multimodal Transformer: the Transformer model and the multimodal Transformer for image captioning.
2022-03-27 09:11:17 266
[Original] (83): Vision Transformer with Deformable Attention
2022-03-27 09:10:54 347
[Original] Viewing TensorBoard on a remote server with MobaXterm
Steps: 1. install TensorBoard (pip install tensorboard); 2. run it on the server (tensorboard --logdir=log_dir ...; see https://blog.csdn.net/qq_37486501/article/details/118598891); 3. set up an SSH tunnel in MobaXterm to forward the remote port to a local port.
2022-03-21 20:02:11 3513 1
[Original] (81): Image Change Captioning by Learning from an Auxiliary Task
Paper notes covering related work (image captioning, change-detection methods), background (image change captioning, composed-query image retrieval), the approach (joint primary and auxiliary networks, with primary-to-auxiliary and auxiliary-to-primary interactions), model training, and experimental results.
2022-03-06 14:29:14 479
[Original] Java DP implementations of the Fibonacci sequence (brute force, top-down, bottom-up)
Java code for three versions of Fibonacci: a brute-force recursive fib_recursive(N) that returns 1 for N == 1 or N == 2 and otherwise recurses (and therefore repeats a large amount of computation), plus top-down and bottom-up dynamic-programming versions; a Python sketch of the same three strategies follows this entry.
2022-02-21 22:43:42 305
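The post's code is in Java; as a quick illustration of the same three strategies, here is a small Python sketch (function names are mine, not the post's):

```python
from functools import lru_cache

def fib_recursive(n: int) -> int:
    # Brute force: recomputes the same subproblems many times.
    if n <= 2:
        return 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)

@lru_cache(maxsize=None)
def fib_top_down(n: int) -> int:
    # Top-down: same recursion, but memoized so each n is computed once.
    if n <= 2:
        return 1
    return fib_top_down(n - 1) + fib_top_down(n - 2)

def fib_bottom_up(n: int) -> int:
    # Bottom-up: iterate from the base cases, keeping only the last two values.
    a, b = 1, 1
    for _ in range(n - 2):
        a, b = b, a + b
    return b

assert fib_recursive(10) == fib_top_down(10) == fib_bottom_up(10) == 55
```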
[Original] (78): Co-attending Free-form Regions and Detections with Multi-modal Multiplicative Feature Embedding for Visual Question Answering
2022-02-16 11:54:02 393
[Original] Configuring the JDK on a Linux server
Steps: 1. download the matching JDK from the Oracle site (https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html); 2. unpack it with tar -zxvf jdk-8u181-linux-x64.tar.gz; 3. configure the environment variables (references: https://www.jianshu.com/p/add543fb9167 and https://blog.csdn.net/kdongyi/article/details/10700206).
2022-01-26 20:24:18 3253
[Original] Loading a pre-trained BERT model with the transformers package
Loading a pre-trained BERT model with the transformers package and turning sentences into embeddings: the encode() method returns only input_ids, while encode_plus() returns all of the encoded fields; the post ends with a complete, runnable example. A sketch along the same lines follows this entry.
2021-12-28 21:20:29 2918
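A minimal sketch of the same workflow (not the post's exact code; the bert-base-uncased checkpoint, the example sentence, and a recent transformers version with dict-style model outputs are assumptions):

```python
import torch
from transformers import BertTokenizer, BertModel

# Checkpoint chosen for illustration; the post does not say which one it uses.
name = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertModel.from_pretrained(name)
model.eval()

text = "A girl in blue is jumping on the shore."

# encode(): returns only the token ids (a plain Python list).
input_ids = tokenizer.encode(text, add_special_tokens=True)

# encode_plus(): returns input_ids, token_type_ids and attention_mask.
encoded = tokenizer.encode_plus(text, add_special_tokens=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoded)

# A common sentence embedding: the hidden state of the [CLS] token.
sentence_embedding = outputs.last_hidden_state[:, 0, :]   # shape (1, 768)
print(len(input_ids), sentence_embedding.shape)
```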
[Original] Reading JSON files in a project
How to read a JSON file and pull content out of it, using the Flickr8k caption annotations as the example: the file holds an 'images' list whose entries carry 'sentids', an 'imgid', and 'sentences' given as lists of tokens (e.g. ['a', 'girl', 'in', 'blue', 'is', 'jumping', 'on', 'the', 'shore', ...]). A reading sketch follows this entry.
2021-12-28 14:48:58 1975 2
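A minimal reading sketch under the structure shown above (the filename dataset_flickr8k.json is an assumption for illustration, not from the post):

```python
import json

# Filename assumed for illustration; substitute the actual annotation file.
with open("dataset_flickr8k.json", "r", encoding="utf-8") as f:
    data = json.load(f)                 # top level: a dict with an "images" list

for img in data["images"][:3]:          # first few images only
    print("imgid:", img["imgid"], "sentids:", img["sentids"])
    for sent in img["sentences"]:
        # Each sentence stores its caption as a list of tokens.
        print("   ", " ".join(sent["tokens"]))
```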
[Original] (76): Masked Autoencoders Are Scalable Vision Learners
Paper notes covering the method, ImageNet experiments (main properties, comparisons with previous results, partial fine-tuning), transfer-learning experiments, and the discussion and conclusions.
2021-12-25 21:19:44 4340
[Original] (75): Bangla Image Caption Generation through CNN-Transformer based Encoder-Decoder Network
Paper notes on the encoder: feature extraction, positional encoding, and multi-head attention (dot product of Q and K, scaling down the attention scores, multiplying the softmax output by the value vectors), followed by residual connections, layer normalization, and the feed-forward network. A generic sketch of that attention computation follows this entry.
2021-12-20 19:51:45 494
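A textbook sketch of the scaled dot-product attention those notes enumerate (not code from the post; the tensor shapes are made up for the example):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5   # dot product of Q and K, scaled down
    weights = F.softmax(scores, dim=-1)             # attention distribution over the keys
    return weights @ V                              # weighted sum of the value vectors

Q = torch.randn(1, 5, 64)   # (batch, query_len, d_k) - example shapes
K = torch.randn(1, 7, 64)
V = torch.randn(1, 7, 64)
print(scaled_dot_product_attention(Q, K, V).shape)  # torch.Size([1, 5, 64])
```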
Resources (10)
方面级情感分析.pptx (slides on aspect-level sentiment analysis)
2021-05-26
Paddle.zip
2021-02-03
Demo of deploying a machine-learning model on a Flask web app: SpamPredictionWeb.zip
2020-02-29
Usage of NumPy in Python
2020-02-23
Python 环境配置(Anaconda和Pycharm).pdf (Python environment setup with Anaconda and PyCharm)
2020-02-20
demo+test.zip
2019-06-24
MySql_5.5安装图解说明.docx (illustrated guide to installing MySQL 5.5)
2019-06-24
Web course project: a student management system
2018-07-18