Contemporary Artificial Intelligence, Lab 1: Using a Neural Network to Solve the Merit Student (三好学生) Problem

  1. Run and test the merit-student code, and use the experiment to analyze how the learning rate affects the network (a learning-rate comparison sketch follows the TensorFlow 1.x listing below).
  2. Suppose a school's criteria for evaluating merit students consist of scores for moral education, intellectual education, physical education, and social practice, and three students have the scores listed below. Use a neural network to solve for the weight of each score.

     Moral   Intellectual   Physical   Social practice   Total
      90          80            80            70           84
      80          90           100            90           86
      90         100            70           100           92

    Merit-student code (TensorFlow 1.x):
     

    import tensorflow as tf

    # Placeholders for the three input scores
    x1 = tf.placeholder(dtype=tf.float32)
    x2 = tf.placeholder(dtype=tf.float32)
    x3 = tf.placeholder(dtype=tf.float32)

    # Trainable weights, all initialized to 0.1
    w1 = tf.Variable(0.1, dtype=tf.float32)
    w2 = tf.Variable(0.1, dtype=tf.float32)
    w3 = tf.Variable(0.1, dtype=tf.float32)

    # A single linear neuron: the weighted sum of the three scores
    n1 = x1 * w1
    n2 = x2 * w2
    n3 = x3 * w3

    y = n1 + n2 + n3

    sess = tf.Session()

    init = tf.global_variables_initializer()
    sess.run(init)

    # Forward pass with the initial weights
    result = sess.run([x1, x2, x3, w1, w2, w3, y], feed_dict={x1: 90, x2: 80, x3: 70})
    print(result)

    # Placeholder for the target (true) total score
    yTrain = tf.placeholder(dtype=tf.float32)

    # L1 loss and RMSProp optimizer with learning rate 0.001
    loss = tf.abs(y - yTrain)
    optimizer = tf.train.RMSPropOptimizer(0.001)

    train = optimizer.minimize(loss)

    # One training step on each of the two samples
    result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 90, x2: 80, x3: 70, yTrain: 85})
    print(result)

    result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 98, x2: 95, x3: 87, yTrain: 96})
    print(result)

    # Train repeatedly on the first sample, then check the second sample once more
    for i in range(5000):
        result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 90, x2: 80, x3: 70, yTrain: 85})
        print(result)

    result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 98, x2: 95, x3: 87, yTrain: 96})
    print(result)
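
    Task 1 asks how the learning rate affects the network. A minimal comparison sketch (written in TensorFlow 2.x eager style, assuming the same single-neuron model and the first training sample; the learning-rate values tried here are example choices, not part of the original assignment):

    import tensorflow as tf

    def fit_with_learning_rate(lr, steps=5000):
        # Train the three-weight model on one sample with RMSProp at the given learning rate
        w = [tf.Variable(0.1, dtype=tf.float32) for _ in range(3)]
        x = tf.constant([90.0, 80.0, 70.0])
        y_true = tf.constant(85.0)
        optimizer = tf.optimizers.RMSprop(lr)
        for _ in range(steps):
            with tf.GradientTape() as tape:
                y_pred = w[0] * x[0] + w[1] * x[1] + w[2] * x[2]
                loss = tf.abs(y_pred - y_true)
            grads = tape.gradient(loss, w)
            optimizer.apply_gradients(zip(grads, w))
        return loss.numpy(), [v.numpy() for v in w]

    # Compare several learning rates on the same problem
    for lr in [0.0001, 0.001, 0.01, 0.1]:
        final_loss, weights = fit_with_learning_rate(lr)
        print(f"lr={lr}: final loss={final_loss:.4f}, weights={weights}")

    Typically, a smaller rate converges more slowly, while a rate that is too large makes the weights jump around the solution instead of settling; running this sketch makes both effects easy to observe side by side.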

    For TensorFlow 2.x, the code must be rewritten; run the following instead:

    import tensorflow as tf

    # Trainable weights, all initialized to 0.1
    w1 = tf.Variable(0.1, dtype=tf.float32)
    w2 = tf.Variable(0.1, dtype=tf.float32)
    w3 = tf.Variable(0.1, dtype=tf.float32)

    # Model: a single linear neuron, the weighted sum of the three scores
    def model(x1, x2, x3):
        n1 = x1 * w1
        n2 = x2 * w2
        n3 = x3 * w3
        y = n1 + n2 + n3
        return y

    # Loss: absolute error between prediction and target
    def loss_function(y_pred, y_true):
        return tf.abs(y_pred - y_true)

    # Optimizer: RMSProp with learning rate 0.001
    optimizer = tf.optimizers.RMSprop(0.001)

    # One training step: forward pass, per-sample L1 loss, gradient update
    def train_step(x1, x2, x3, y_true):
        with tf.GradientTape() as tape:
            y_pred = model(x1, x2, x3)
            loss = loss_function(y_pred, y_true)
        gradients = tape.gradient(loss, [w1, w2, w3])
        optimizer.apply_gradients(zip(gradients, [w1, w2, w3]))
        return loss

    # Training data: two students' sub-scores and their total scores
    x1_data = tf.constant([90, 98], dtype=tf.float32)
    x2_data = tf.constant([80, 95], dtype=tf.float32)
    x3_data = tf.constant([70, 87], dtype=tf.float32)
    y_train_data = tf.constant([85, 96], dtype=tf.float32)

    # Training loop
    for i in range(5000):
        loss = train_step(x1_data, x2_data, x3_data, y_train_data)
        if i % 100 == 0:
            print(f"Iteration {i}, Loss: {loss.numpy()}")

    # Inspect the final weights and loss
    y_pred = model(x1_data, x2_data, x3_data)
    final_loss = loss_function(y_pred, y_train_data)
    print(f"Final weights: {w1.numpy()}, {w2.numpy()}, {w3.numpy()}")
    print(f"Final loss: {final_loss.numpy()}")

    Weight-analysis code for task 2 (TensorFlow 2.x):

    import tensorflow as tf

    # Trainable weights for the four criteria, all initialized to 0.1
    w1 = tf.Variable(0.1, dtype=tf.float32)
    w2 = tf.Variable(0.1, dtype=tf.float32)
    w3 = tf.Variable(0.1, dtype=tf.float32)
    w4 = tf.Variable(0.1, dtype=tf.float32)

    # Model: the weighted sum of the four scores
    def model(x1, x2, x3, x4):
        n1 = x1 * w1
        n2 = x2 * w2
        n3 = x3 * w3
        n4 = x4 * w4
        y = n1 + n2 + n3 + n4
        return y

    # Loss: absolute error between prediction and target
    def loss_function(y_pred, y_true):
        return tf.abs(y_pred - y_true)

    # Optimizer: RMSProp with learning rate 0.001
    optimizer = tf.optimizers.RMSprop(0.001)

    # One training step: forward pass, per-sample L1 loss, gradient update
    def train_step(x1, x2, x3, x4, y_true):
        with tf.GradientTape() as tape:
            y_pred = model(x1, x2, x3, x4)
            loss = loss_function(y_pred, y_true)
        gradients = tape.gradient(loss, [w1, w2, w3, w4])
        optimizer.apply_gradients(zip(gradients, [w1, w2, w3, w4]))
        return loss

    # Training data: the three students' scores and their total scores
    x1_data = tf.constant([90, 80, 90], dtype=tf.float32)   # moral education
    x2_data = tf.constant([80, 90, 100], dtype=tf.float32)  # intellectual education
    x3_data = tf.constant([80, 100, 70], dtype=tf.float32)  # physical education
    x4_data = tf.constant([70, 90, 100], dtype=tf.float32)  # social practice
    y_train_data = tf.constant([84, 86, 92], dtype=tf.float32)  # total scores

    # Training loop
    for i in range(5000):
        loss = train_step(x1_data, x2_data, x3_data, x4_data, y_train_data)
        if i % 100 == 0:
            print(f"Iteration {i}, Loss: {loss.numpy()}")

    # Inspect the final weights and loss
    y_pred = model(x1_data, x2_data, x3_data, x4_data)
    final_loss = loss_function(y_pred, y_train_data)
    print(f"Final weights: {w1.numpy()}, {w2.numpy()}, {w3.numpy()}, {w4.numpy()}")
    print(f"Final loss: {final_loss.numpy()}")

    The final weights are:

    ω1 = 0.5046337842941284
    ω2 = 0.25738802552223206
    ω3 = 0.10077429562807083
    ω4 = 0.13746078312397003
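
    As a cross-check, these weights can be plugged back into the score table. Note that three equations in four unknowns do not determine a unique solution, so the network settles on one of many weight combinations that approximately reproduce the totals. A small NumPy sketch of that check (the alternative weight set at the end is only an illustration of this non-uniqueness):

    import numpy as np

    # Score table from task 2: moral, intellectual, physical, social practice
    X = np.array([[90, 80, 80, 70],
                  [80, 90, 100, 90],
                  [90, 100, 70, 100]], dtype=float)
    totals = np.array([84, 86, 92], dtype=float)

    # Weights reported by the training run above
    w = np.array([0.5046337842941284, 0.25738802552223206,
                  0.10077429562807083, 0.13746078312397003])

    print("Predicted totals:", X @ w)
    print("Actual totals:  ", totals)
    print("Sum of weights: ", w.sum())

    # For comparison, (0.5, 0.3, 0.1, 0.1) reproduces 84, 86, 92 exactly,
    # which illustrates that the underdetermined system has many solutions.
    print("Alternative weights:", X @ np.array([0.5, 0.3, 0.1, 0.1]))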

    Author: 空午, Class of 2021, School of Computer and Information Engineering, henu
