Paper Notes: Personalizing Dialogue Agents: I have a dog, do you have pets too?

Abstract

Problems with chit-chat bots:

1. They lack specificity, do not display a consistent personality, and are often not very captivating.

What the paper does

2. The models condition on given profile information, and also on information about the person they are talking to, to perform next-utterance prediction.

Introduction

1. Communication between a human and a machine is still in its infancy.
2. It is only recently that neural models have had sufficient capacity and access to large enough datasets, but chatting with these models for even a short time quickly exposes their weaknesses (Serban et al., 2016; Vinyals and Le, 2015).

Problems with chit-chat models

(1) the lack of a consistent personality (Li et al., 2016a)
(2) the lack of an explicit long-term memory (Vinyals and Le, 2015)
(3) a tendency to produce non-specific answers like "I don't know" (Li et al., 2015)

Causes

3. These problems stem from there being no good publicly available dataset for general chit-chat.
4. Because of the low quality of conversational models and the difficulty of evaluating them, chit-chat is often an ignored end application.

task-oriented communication

5. The research community has instead focused on task-oriented communication, such as airline or restaurant booking (Bordes and Weston, 2016), or on single-turn information seeking, i.e., question answering (Rajpurkar et al., 2016).
6. Despite the success of task-oriented dialogue systems, it is well known that a large quantity of human dialogue centers on socialization, personal interests and chit-chat (Dunbar et al., 1997).

How the persona is produced

7. The persona is encoded by multiple sentences termed a profile; this profile can be stored in a memory-augmented neural network and then used to produce better, more consistent responses (see the sketch below).
8. Via the same mechanism, any existing information about the persona of the dialogue partner can be used in the same way.
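To make this concrete, here is a minimal sketch (my own illustration, not the paper's exact architecture) of a single memory hop: the encoded dialogue history attends over encoded profile sentences, and the read-out is combined back into the query. The function names, dimensions, and random "sentence encodings" are assumptions for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_over_profile(history_vec, profile_vecs):
    """One memory hop: use the encoded dialogue history as a query over
    encoded profile sentences (the memory) and return a persona-aware
    context vector that a response ranker/generator could condition on.

    history_vec:  (d,)   encoding of the dialogue history
    profile_vecs: (m, d) one row per profile sentence
    """
    scores = profile_vecs @ history_vec      # dot-product match scores
    weights = softmax(scores)                # attention over profile lines
    readout = weights @ profile_vecs         # weighted sum of profile encodings
    return history_vec + readout             # residual combine (single hop)

# Toy usage with random "encodings" (assumed d=8 and 4 profile sentences).
rng = np.random.default_rng(0)
history = rng.normal(size=8)
profile = rng.normal(size=(4, 8))
print(attend_over_profile(history, profile).shape)  # (8,)
```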

About the PERSONA-CHAT dataset

9. The authors present PERSONA-CHAT, a new dialogue dataset consisting of 164,356 utterances between crowdworkers who were randomly paired and each asked to act the part of a given provided persona (randomly assigned, and created by another set of crowdworkers). The PERSONA-CHAT dataset is designed to facilitate research into alleviating some of the issues that traditional chit-chat models face, with the aim of making such models more consistent and engaging by endowing them with a persona.

Comparison with other models

10. The authors compare a range of models, both generative and ranking, including Seq2Seq models and Memory Networks (Sukhbaatar et al., 2015), as well as other standard retrieval baselines.

Related Work

traditional dialogue systems

1. Traditional dialogue systems consist of building blocks, such as dialogue state tracking components and response generators, and have often been applied to tasks with a labeled internal dialogue state and precisely defined user intent (i.e., goal-oriented dialogue) (Young, 2000).

the most successful goal-oriented dialogue systems

2. The most successful goal-oriented dialogue systems model the conversation as a partially observable Markov decision process (POMDP) (Young et al., 2013).

Shortcomings of the above methods

3. All these methods do not consider the chit-chat setting and are more concerned with achieving functional goals than with displaying a personality.
4. Many of the available tasks and datasets are constrained to narrow domains (Serban et al., 2015).

hand-coded systems

5. Hand-coded systems are still used in applications today; for example, modern solutions to the Alexa challenge build open dialogue systems that combine hand-coded and machine-learned elements (Serban et al., 2017a).
6. The simplest statistical systems that can be used in this setting are retrieval models (based on data rather than hand-coding) (Sordoni et al., 2015), which rank candidate responses by scoring them against the recent dialogue history. IR systems are used as baselines in this work; a toy sketch follows below.
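As an illustration, here is a minimal sketch of such an IR baseline, assuming scikit-learn is available (the paper's actual baseline may be implemented differently): candidate responses are ranked by TF-IDF cosine similarity against the recent dialogue history.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_candidates(history, candidates):
    """Rank candidate responses by TF-IDF cosine similarity with the
    recent dialogue history (simple retrieval, no hand-coded rules)."""
    vectorizer = TfidfVectorizer()
    # Fit on the history plus the candidates so they share one vocabulary.
    vectors = vectorizer.fit_transform([history] + candidates)
    scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
    return sorted(zip(candidates, scores), key=lambda pair: -pair[1])

# Hypothetical usage.
history = "i have a dog . do you have pets too ?"
candidates = ["yes , i have two cats .",
              "the weather is nice today .",
              "i do not know ."]
print(rank_candidates(history, candidates))
```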

End-to-end neural approaches

7. End-to-end neural approaches have recently seen growing interest.

Seq2Seq model

8. Seq2Seq models have been applied to dialogue (Sutskever et al., 2014; Vinyals and Le, 2015; Sordoni et al., 2015; Li et al., 2016b; Serban et al., 2017b).

Shortcomings of Seq2Seq models

9. Seq2Seq models are able to produce reasonable responses, but they lack long-term coherence and a persistent personality.
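For reference, a minimal Seq2Seq sketch in PyTorch (an illustration under my own assumptions, not the paper's implementation): the encoder summarizes the input utterance, and the decoder generates the response conditioned only on that summary, with no persona and no long-term memory, which is exactly where the weaknesses above originate.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder for dialogue: encode the input utterance,
    then decode the response token by token (no attention, no persona)."""
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))            # summarize the input
        dec_out, _ = self.decoder(self.embed(tgt), state)   # condition decoding on it
        return self.out(dec_out)                            # logits over the vocabulary

# Toy forward pass with made-up token ids (assumed vocab of 1000).
model = Seq2Seq(vocab_size=1000)
src = torch.randint(0, 1000, (2, 6))   # batch of 2 input utterances, length 6
tgt = torch.randint(0, 1000, (2, 5))   # teacher-forced response prefixes
print(model(src, tgt).shape)           # torch.Size([2, 5, 1000])
```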

How this is addressed

10. One way to fix these issues is to use a memory-augmented network, by providing or learning appropriate memories (Sukhbaatar et al., 2015; Dodge et al., 2015).
11.Serban et al. (2015) list available corpora for training dialogue systems.

Corpora that have worked well

12. The corpora most relevant to learning chit-chat models are based on movie scripts, such as OpenSubtitles and the Cornell Movie-Dialogue Corpus, and on dialogue from web platforms such as Reddit and Twitter; all have been used for training neural approaches (Vinyals and Le, 2015; Dodge et al., 2015; Li et al., 2016b; Serban et al., 2017b).
13. Naively training on these datasets leads to models that lack a consistent personality.
14.Moreover, the data does little to encourage the model to engage in understanding and maintaining knowledge of the dialogue partner’s personality and topic interests.
15.According to Serban et al. (2015)’s survey, personalization of dialogue systems is “an important task, which so far has not received much attention”.

Shortcomings of other goal-oriented dialogue work

16. Other works have focused on the agent being aware of the human's profile and adjusting the dialogue accordingly, but without giving the agent itself a personality (Lucas et al., 2009; Joshi et al., 2017).
17. In the chit-chat setting, the most relevant work is Li et al. (2016a).

About the most relevant work

For each user in the Twitter corpus, personas were captured via distributed embeddings (one per speaker) to encapsulate individual characteristics (background information and speaking style), and using those vectors improved the output of their Seq2Seq model.
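A rough sketch of that idea (my reading of the speaker-embedding approach, with simplified details; the class name and all dimensions are assumptions): a learned per-speaker vector is concatenated to every decoder input, so generation is conditioned on who is speaking.

```python
import torch
import torch.nn as nn

class SpeakerDecoder(nn.Module):
    """Decoder that concatenates a per-speaker embedding to each input
    token embedding, so the generated response reflects that speaker's
    latent persona (background and speaking style in one vector)."""
    def __init__(self, vocab_size, n_speakers, hidden=128):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, hidden)
        self.speaker_embed = nn.Embedding(n_speakers, hidden)
        self.rnn = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tgt, speaker_id, enc_state):
        tok = self.token_embed(tgt)                          # (B, T, H)
        spk = self.speaker_embed(speaker_id)                 # (B, H)
        spk = spk.unsqueeze(1).expand(-1, tgt.size(1), -1)   # repeat at every step
        dec_out, _ = self.rnn(torch.cat([tok, spk], dim=-1), enc_state)
        return self.out(dec_out)                             # logits over the vocabulary

# Toy usage: two response prefixes, two different speakers, zero encoder state.
B, T, H = 2, 5, 128
dec = SpeakerDecoder(vocab_size=1000, n_speakers=10, hidden=H)
tgt = torch.randint(0, 1000, (B, T))
state = (torch.zeros(1, B, H), torch.zeros(1, B, H))
print(dec(tgt, torch.tensor([3, 7]), state).shape)  # torch.Size([2, 5, 1000])
```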

Shortcoming of that Seq2Seq persona approach

Their work does not focus on attempting to engage the other speaker by getting to know them.

The problem this paper addresses

18.For that reason, our focus is on explicit profile information, not hard-to-interpret latent variables.
