2019.6.13 GAN note section one

The generator plays against the discriminator.

The generator takes a random vector as input.

Step 1: fix the generator G and update the discriminator D (one batch is generated by G, one comes from the dataset). Train the discriminator to assign high scores to real objects and low scores to generated ones.

Step 2: fix D and update G: the generator learns to fool D. Use the fixed D to train the generator. (Think of G and D as one two-part network: freeze the later part, then update the front part.) ----use gradient ascent

The algorithm is as follows.

In each training iteration:

Learning D:

Sample m examples {x1, ..., xm} from the database.

Sample m noise samples {z1, ..., zm} from a prior distribution.

For generated (fake) examples, a small D score is good.

Update D:

V = 1/m sum(log D(xi)) + 1/m sum(log(1 - D(G(zi))))

To maximize this function: θd <-- θd + η * grad(V)

(gradient ascent)

Then train G:

Sample another m noise samples {z1, ..., zm}; feed these random vectors to G to generate.

Update G:

V = 1/m sum(log D(G(zi)))

θg <-- θg + η * grad(V)

(gradient ascent again)
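The iteration above can be sketched end to end on a toy 1-D problem. Everything here is an illustrative assumption, not part of the note: real data drawn from N(3, 1), a two-parameter generator G(z) = a*z + c, a logistic discriminator D(x) = sigmoid(w*x + b), and gradients of V written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def V(x_real, x_fake, w, b):
    # V = 1/m sum(log D(xi)) + 1/m sum(log(1 - D(G(zi))))
    return (np.mean(np.log(sigmoid(w * x_real + b)))
            + np.mean(np.log(1.0 - sigmoid(w * x_fake + b))))

# Assumed toy setup: real data ~ N(3, 1), G(z) = a*z + c, D(x) = sigmoid(w*x + b).
theta_g = {"a": 1.0, "c": 0.0}   # generator parameters
theta_d = {"w": 0.1, "b": 0.0}   # discriminator parameters
m, lr = 64, 0.01                 # minibatch size, learning rate

def train_iteration(theta_g, theta_d):
    a, c = theta_g["a"], theta_g["c"]
    w, b = theta_d["w"], theta_d["b"]

    # Step 1: fix G, update D by gradient ascent on V.
    x = rng.normal(3.0, 1.0, m)            # m examples from the "database"
    z = rng.normal(0.0, 1.0, m)            # m noise samples
    xf = a * z + c                         # generated examples G(zi)
    dr, df = sigmoid(w * x + b), sigmoid(w * xf + b)
    w += lr * (np.mean((1 - dr) * x) - np.mean(df * xf))   # hand-derived dV/dw
    b += lr * (np.mean(1 - dr) - np.mean(df))              # hand-derived dV/db

    # Step 2: fix D, update G by gradient ascent on 1/m sum(log D(G(zi))).
    z = rng.normal(0.0, 1.0, m)
    df = sigmoid(w * (a * z + c) + b)
    a += lr * np.mean((1 - df) * w * z)
    c += lr * np.mean((1 - df) * w)
    return {"a": a, "c": c}, {"w": w, "b": b}

for _ in range(500):
    theta_g, theta_d = train_iteration(theta_g, theta_d)
```

The key structure is the alternation: one gradient-ascent step on θd with G frozen, then one on θg with D frozen. With these assumed settings the generator's offset c tends to drift toward the real mean.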

STRUCTURED LEARNING

Machine learning is about finding a function f.

Regression: the output is a scalar.

Classification: the output is a class.

Structured learning: the output is anything else with structure, e.g. machine translation, speech recognition, chat-bots.

Structured learning is like one-shot/zero-shot learning ----- most "classes" (possible outputs) have no training data at all; the machine has to create new stuff.

The machine has to learn to do planning, generating the object component by component.

Start from a random G, then train D; alternate between the two.

The solution for structured learning ------> GAN

Bottom-up approach: the generator G builds the object component by component.

Top-down approach: the discriminator D evaluates the object as a whole.

 

Why can't G do this alone?

We hope each input vector has a clear connection with its output vector.

Auto-encoder  

image (e.g. coffee) ----> encoder NN ----> code ----> decoder NN ----> reconstructed image

The NN decoder is the generator.
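A minimal structural sketch of that pipeline. The shapes, the single-linear-layer encoder/decoder, and the random (untrained) weights are all assumptions for illustration; the point is only that the decoder maps a code back to data space, so the decoder alone, fed a random code, acts as a generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: 784-dim input (e.g. a flattened 28x28 image), 2-dim code.
W_enc = rng.normal(0, 0.1, (2, 784))    # encoder NN (one linear layer here)
W_dec = rng.normal(0, 0.1, (784, 2))    # decoder NN (one linear layer here)

def encode(x):
    return np.tanh(W_enc @ x)           # image -> code

def decode(code):
    return W_dec @ code                 # code -> image space

x = rng.normal(size=784)                # an "image"
x_hat = decode(encode(x))               # autoencoder round trip

# The decoder alone is the generator: feed it a random code.
z = rng.normal(size=2)
generated = decode(z)
print(generated.shape)                  # (784,)
```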

Variational auto-encoder (VAE): add some noise to the code.

Shortcoming: the difference between two pictures is hard to define (pixel-wise error does not capture it).

The relations between the components are critical.
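A tiny numeric sketch of that shortcoming (the 1-D "images" and all the numbers are invented for illustration): pixel-wise error can rank a structurally intact image as worse than one with a single glaring glitch, because it ignores the relations between components.

```python
import numpy as np

# Invented 1-D "image": a clean ramp is the target.
target = np.linspace(0.0, 1.0, 8)

glitched = target.copy()
glitched[3] = 1.0                  # one pixel badly wrong: the structure is broken

shifted = target + 0.25            # every pixel nudged: still a valid ramp

pixel_err = lambda x: float(np.sum((x - target) ** 2))
print(pixel_err(glitched) < pixel_err(shifted))  # True: the broken image "wins"
```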

Why can't D work alone?

D is a function:

D: X ---> R

Input: an image. Output: a score.

D can be, for example, a convolutional NN.

To generate, we would need x* = arg max D(x), i.e. enumerate all possible x!!!

This is not feasible.
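To see why, just count the candidates: even for tiny binary images, enumerating every x to solve arg max D(x) explodes exponentially. The discriminator below is a made-up stand-in (it merely rewards neighboring pixels that agree), not a trained network.

```python
import itertools

# Made-up stand-in for a trained discriminator over 3x3 binary images
# (flattened to 9 bits): it rewards neighboring pixels that agree.
def D(x):
    return -sum(abs(x[i] - x[i + 1]) for i in range(len(x) - 1))

n = 9
candidates = list(itertools.product([0, 1], repeat=n))
best = max(candidates, key=D)          # brute-force arg max over all images
print(len(candidates))                 # 512 -- already 2**9 for 3x3 binary
# For a 28x28 binary image the count is 2**784, far beyond any enumeration.
```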

We only have positive examples; with no negative examples, D would learn to score everything high.

Iterating solves the problem: D itself can generate (negative) examples by solving the arg max.

But the examples generated via D still score high compared with real data.

 

Let us compare G with D.

G: easy to generate, but hard to learn the correlations between components.

D: has to solve the arg max, or make simplifying assumptions; it is hard to do that.

The remedy:

Introduce G, which learns to solve D's arg max problem.

 

 

thoughts:

Reverse thinking: run a classifier backwards to do generation.

For a task one network cannot finish alone, introduce an auxiliary network G.

The two assist each other, like police and thief, or student and teacher; it is even more like biological evolution. I personally see it as a kind of bio-inspired idea.

Use random numbers to generate human behavior; applying the results to a robot might then produce unknown emotions.

The production of emotion then reduces to the generation of random numbers, so controlling those random numbers could produce human-like thought.

How to control those random numbers is itself a research direction.

If every dimension of the input is guaranteed a close connection with the output, the black box can be opened.

Otherwise, this is exactly the idea in I, Robot.

 

 

 

 

 

 

 

 

 

 
