Mouse Gesture Recognition with a BP Neural Network (ANN): A C#.NET Experiment

This C#.NET program is based on an example from Mat Buckland's book AI Techniques for Game Programming (《游戏编程中的人工智能》). The core source code is listed at the end of the article.

 

●Problem Statement
  This program uses a BP (back-propagation) neural network to recognize specific gestures drawn by the user with the mouse, and it can also learn new gestures that the user creates. The gestures preloaded in the program are the digits 0–9 of the Graffiti alphabet used for handwriting recognition on Palm handhelds. Each Graffiti character is written as a single continuous stroke, which makes it well suited to being represented by one continuous mouse gesture:

Figure 1. The digits of the Graffiti alphabet

●Program Overview
  To store, recognize, and learn mouse gestures, the program needs the following elements:
1. Representing a mouse gesture
  Since each mouse gesture consists of a single continuous stroke, the stroke can be decomposed into a sequence of consecutive vectors; different gestures correspond to different vector sequences. To simplify the computation, every vector is normalized to unit length, as in the example in Figure 2:

Figure 2. Example of the vectors of a mouse gesture

  Taking right as the positive x direction and down as the positive y direction, the stroke shown in Figure 2 corresponds to the vector sequence (0,1)-(0,1)-(0,1)-(0,1)-(1,0)-(1,0)-(1,0). A small sketch of this conversion is given below.
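  The following is a minimal sketch (not taken from the program; the class and method names are illustrative) of how consecutive sampled points can be turned into unit-length stroke vectors:

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    static class StrokeVectors
    {
        //Turn N+1 representative points into N unit-length stroke vectors
        public static List<double[]> ToUnitVectors(IList<PointF> points)
        {
            List<double[]> vectors = new List<double[]>();
            for (int i = 1; i < points.Count; i++)
            {
                double dx = points[i].X - points[i - 1].X;
                double dy = points[i].Y - points[i - 1].Y;
                double len = Math.Sqrt(dx * dx + dy * dy);

                //Normalize to unit length so that only the direction matters
                if (len > 0)
                    vectors.Add(new double[] { dx / len, dy / len });
                else
                    vectors.Add(new double[] { 0.0, 0.0 });
            }
            return vectors;
        }
    }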
2. Defining the network inputs and outputs
  A BP neural network works on one-dimensional input and output vectors. Concatenating the stroke vectors of a gesture, with the x and y coordinates appearing in turn, gives the input vector for that gesture. For a network that can recognize N kinds of gestures, the standard output vector of the M-th gesture is a vector of length N whose M-th element is 1 and whose other elements are all 0.
  For the example above, the input vector is (0,1,0,1,0,1,0,1,1,0,1,0,1,0) and the output vectors have the form (0,…,0,1,0,…,0). A sketch of how these vectors are built is given below.
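  A minimal sketch of building the flattened input vector and the one-hot output vector (the method names are illustrative; GestureData.CreateTrainingSet in the listing below does the same for the whole training set):

    //Flatten the stroke vectors into one input vector: x, y, x, y, ...
    static List<double> BuildInput(List<double[]> strokeVectors)
    {
        List<double> input = new List<double>();
        foreach (double[] v in strokeVectors)
        {
            input.Add(v[0]);
            input.Add(v[1]);
        }
        return input;   //length = 2 * number of stroke vectors
    }

    //Standard output vector for gesture m out of n gestures: all 0s and a single 1
    static List<double> BuildTarget(int m, int n)
    {
        List<double> target = new List<double>();
        for (int i = 0; i < n; i++)
        {
            target.Add(i == m ? 1.0 : 0.0);
        }
        return target;
    }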
3. Training the neural network
  For the gestures already stored in the program, the network is trained until, for every input vector, the output it produces is within the allowed error. BP training uses the back-propagation procedure, i.e., the following steps are repeated for every input vector (a compact sketch of the resulting update rule follows the list):
  (1) Feed the vector into the network and compute the network output o.
  (2) Compute the error between the output o and the target output t.
  (3) Adjust the output-layer weights; repeat steps (4) and (5) for each hidden layer.
  (4) Compute the hidden-layer error.
  (5) Adjust the hidden-layer weights.
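  With sigmoid units, steps (2) and (3) reduce to the delta rule sketched below; this mirrors the update used in NetworkTrainingEpochNonMomentum in the listing at the end of the article (the arrays and parameters here are illustrative stand-ins for the program's fields):

    //One output-layer update for a single training pattern (sigmoid activations)
    static void UpdateOutputWeights(double[,] w, double[] hidden, double[] output,
                                    double[] target, double learningRate, double bias)
    {
        for (int op = 0; op < output.Length; op++)
        {
            //delta = (t - o) * o * (1 - o): the error scaled by the sigmoid derivative
            double delta = (target[op] - output[op]) * output[op] * (1.0 - output[op]);

            //Move each weight in proportion to the delta and the activation feeding it
            for (int h = 0; h < hidden.Length; h++)
            {
                w[op, h] += learningRate * delta * hidden[h];
            }
            w[op, hidden.Length] += learningRate * delta * bias;   //bias weight
        }
    }

  The hidden-layer errors of steps (4) and (5) are the weighted sums of these output deltas, scaled by the same sigmoid derivative, and the hidden weights are then updated against the raw inputs in the same way.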
4. Recording a mouse gesture
  The user holds down the left mouse button in the designated area of the program window and draws a gesture. The program samples the mouse coordinates at fixed intervals, which yields a set of points (the blue dots in Figure 3). This point set is then reduced to a set of representative points, one more than the number of stroke vectors (the red circles in Figure 3); taking the difference between adjacent representative points and normalizing gives the gesture's vector sequence.
  The representative points are chosen with a smoothing algorithm: among the unevenly spaced raw points, find the two adjacent points with the smallest span and replace them with their midpoint. This is repeated until the number of points meets the requirement (one more than the number of vectors). A sketch of this reduction follows Figure 3.

Figure 3. Example of a recorded mouse gesture
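  A minimal sketch of the smoothing step, assuming the raw samples are stored as System.Drawing.PointF values (the class and method names are illustrative, not taken from the program):

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    static class GestureSmoothing
    {
        //Reduce the raw samples to exactly targetCount representative points by
        //repeatedly merging the closest adjacent pair into its midpoint
        public static List<PointF> Smooth(List<PointF> raw, int targetCount)
        {
            List<PointF> points = new List<PointF>(raw);
            while (points.Count > targetCount)
            {
                int best = 0;
                double bestDist = double.MaxValue;
                for (int i = 0; i < points.Count - 1; i++)
                {
                    double dx = points[i + 1].X - points[i].X;
                    double dy = points[i + 1].Y - points[i].Y;
                    double dist = dx * dx + dy * dy;   //squared distance suffices for comparison
                    if (dist < bestDist)
                    {
                        bestDist = dist;
                        best = i;
                    }
                }

                //Replace the closest pair with its midpoint
                PointF mid = new PointF((points[best].X + points[best + 1].X) / 2f,
                                        (points[best].Y + points[best + 1].Y) / 2f);
                points.RemoveAt(best + 1);
                points[best] = mid;
            }
            return points;
        }
    }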

5. Recognizing a mouse gesture
  The user's input will never match a defined standard input exactly. When the input vector of the gesture to be tested is fed into the trained network, the closer the network output is to the standard output vector of the M-th gesture, the closer the input gesture is to the M-th standard gesture. A simple criterion for "closest" is to find the largest element of the output vector (the one closest to 1) and take the input to be the gesture whose standard output has a 1 in that position, as in the fragment below.
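  A minimal usage fragment of this decision, assuming net is a trained NeuralNet and gestures is a GestureData instance from the listing below (the local variable names are illustrative):

    //outputs[i] is the network's score for gesture i
    List<double> outputs = net.Update(inputVector);

    //Pick the index of the largest (closest to 1) output
    int bestIndex = 0;
    for (int i = 1; i < outputs.Count; i++)
    {
        if (outputs[i] > outputs[bestIndex])
        {
            bestIndex = i;
        }
    }

    string bestName = gestures.PatternName(bestIndex);   //name of the best match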
6. Learning a mouse gesture
  Learning works as follows: the input vector of the new gesture drawn by the user is built and added to the existing set of input vectors; the length of the output vectors grows by one, the standard output vector of the new gesture is assigned, and the network is then retrained from scratch, as in the fragment below.
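  A minimal usage fragment of the learning step, using the classes from the listing below. Because the number of outputs grows by one, a new network with one extra output neuron is created before retraining; the local names and parameter values are illustrative, not taken from the program:

    //newPattern is the flattened input vector of the freshly drawn gesture
    gestures.AddPattern(newPattern, "my new gesture");   //also rebuilds the training set

    //One more output neuron is needed, so rebuild the network and retrain it
    net = new NeuralNet(numInputs, gestures.SetOut.Count, hiddenNeurons, learningRate);
    net.SendMessage += delegate(int epoch, double error)
    {
        Console.WriteLine("epoch {0}: error {1:F4}", epoch, error);
    };
    net.Train(gestures);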

●Program Architecture
  The program was developed with Visual Studio 2005 (C#) and runs on the .NET Framework 2.0.
1. Key data structures
  The program contains the following classes (a short example of how they fit together follows the list):

Figure 4. Class diagram of the program

  (1) Neuron: stores the data of a single neuron, such as the number of inputs, the weight of each input, and its error value.
  (2) NeuronLayer: stores the number of neurons in one network layer and the neuron objects themselves.
  (3) NeuralNet: stores the layers of a BP neural network together with the number of hidden layers, the learning rate, whether the network has been trained, and so on; it provides methods such as training and updating the network.
  (4) GestureData: stores the names, stroke-vector sequences, and corresponding outputs of all mouse gestures; it provides methods such as appending a new gesture.
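  A minimal sketch of how the classes are wired together (the parameter values here are illustrative; Useful is the helper class with constants and RandomClamped referenced throughout the listing and sketched after it):

    //10 predefined gestures (the digits 0-9); the other sizes are illustrative
    GestureData gestures = new GestureData(10, 7);    //7 stroke vectors per gesture
    NeuralNet net = new NeuralNet(14, 10, 6, 0.5);    //14 = 7 vectors * 2 coordinates

    //Train() reports progress through the SendMessage event, so attach a handler first
    net.SendMessage += delegate(int epoch, double error)
    {
        Console.WriteLine("epoch {0}: error {1:F4}", epoch, error);
    };
    net.Train(gestures);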
2. Using the program interface

Figure 5. The program interface

  (1) After the program starts, the neural network must be trained first. The user can set several network parameters, including the learning rate, the allowed error threshold, and the number of hidden-layer neurons, and can optionally enable training with momentum or with added input noise, setting the momentum fraction and the maximum noise respectively. Clicking the "Train Network" button starts training; the current epoch count and error are shown at the lower left of the window.
  (2) After training, press the left mouse button in the drawing area and draw a continuous gesture. The program marks the representative points on the gesture and shows, at the lower left, the best-matching output value computed by the network together with the name of the corresponding gesture; a cartoon icon visualizes the size of the best-matching output value.
  (3) Clicking the "Learn New Gesture" button puts the program into learning mode. The user can then draw a new gesture and name it in the dialog that pops up, after which the program retrains the neural network.
  (4) Clicking the "Clear Pad" button clears the drawing area.

●References
[1] Mat Buckland. AI Techniques for Game Programming (游戏编程中的人工智能). Beijing: Tsinghua University Press, 2006.
[2] Zhu Daqi, Shi Hui. Principles and Applications of Artificial Neural Networks (人工神经网络原理及应用). Beijing: Science Press, 2006.

--------------------------------------------------------------------------------------------------------------

 //The GestureData class stores the data of all mouse gestures
    public class GestureData
    {
        #region "属性"

        //手势名称
        private List<string> Names;
        
        //手势向量
        private List<List<double>> Patterns;

        //加载的手势数目
        private int PatternNumber;

        //手势向量长度
        private int PatternSize;

        //训练集
        public List<List<double>> SetIn;
        public List<List<double>> SetOut;

        #endregion

        #region "方法"

        //Load the predefined gestures
        private void Init()
        {
            //For each predefined gesture
            for (int j = 0; j < PatternNumber; j++)
            {
                List<double> temp = new List<double>();
                for (int v = 0; v < PatternSize * 2; v++)
                {
                    temp.Add(Useful.InitPatterns[j][v]);
                }
                Patterns.Add(temp);
                Names.Add(Useful.InitNames[j]);
            }
        }


        //Constructor
        public GestureData(int _PatternNumber, int _PatternSize)
        {
            Names = new List<string>();
            Patterns = new List<List<double>>();
            SetIn = new List<List<double>>();
            SetOut = new List<List<double>>();
            PatternNumber = _PatternNumber;
            PatternSize = _PatternSize;

            Init();
            CreateTrainingSet();
        }

        //Get the name of the gesture at the given index
        public string PatternName(int index)
        {
            if (index >= 0 && index < Names.Count && Names[index] != null)
            {
                return Names[index];
            }
            else
            {
                return "";
            }
        }

        //Add a new gesture
        public bool AddPattern(List<double> _Pattern, string _Name)
        {
            //Check the length of the gesture vector
            if (_Pattern.Count != PatternSize * 2)
            {
                throw new Exception("手势向量长度错误!");
            }

            Names.Add(_Name);
            Patterns.Add(new List<double>(_Pattern));
            PatternNumber++;

            CreateTrainingSet();
            return true;
        }

        //Build the training set from the stored gestures
        public void CreateTrainingSet()
        {
            //Clear the training set
            SetIn.Clear();
            SetOut.Clear();

            //For each gesture
            for (int j = 0; j < PatternNumber; j++)
            {
                SetIn.Add(Patterns[j]);

                //The matching output is 1, all other outputs are 0
                List<double> outputs = new List<double>();
                for (int i = 0; i < PatternNumber; i++)
                {
                    outputs.Add(0);
                }
                outputs[j] = 1;

                SetOut.Add(outputs);
            }
        }

        #endregion
    }
-------------------------------------------------------------------------------------------------------------
 //A single neuron
    public class Neuron
    {
        #region "属性"

        //神经元输入数
        public int NumInputs;

        //权值向量
        public List<double> Weights;

        //前一步的权值更新向量
        public List<double> PrevUpdate;

        //活跃值
        public double Activation;

        //错误值
        public double Error;

        #endregion

        #region "方法"
       
        //Constructor
        public Neuron(int _NumInputs)
        {
            NumInputs = _NumInputs + 1;
            Activation = 0;
            Error = 0;
            Weights = new List<double>();
            PrevUpdate = new List<double>();

            //Initialize the weights with random values
            for (int i = 0; i < NumInputs; i++)
            {
                Weights.Add(Useful.RandomClamped());
                PrevUpdate.Add(0.0);
            }
        }

        #endregion
    }

    //A layer of the neural network
    public class NeuronLayer
    {
        #region "属性"

        //本层神经元数
        public int NumNeurons;

        //神经元
        public List<Neuron> Neurons;

        #endregion

        #region "方法"

        //Constructor
        public NeuronLayer(int _NumNeurons, int _NumInputsPerNeuron)
        {
            NumNeurons = _NumNeurons;
            Neurons = new List<Neuron>();

            for (int i = 0; i < NumNeurons; i++)
            {
                Neurons.Add(new Neuron(_NumInputsPerNeuron));
            }
        }

        #endregion
    }

    //The NeuralNet class defines the neural network
    public class NeuralNet
    {
        #region "属性"

        //输入数
        private int NumInputs;

        //输出数
        private int NumOutputs;

        //隐含层数
        private int NumHiddenLayers;

        //每个隐含层的神经元数
        private int NeuronsPerHiddenLyr;

        //学习率
        private double LearningRate;

        //积累错误
        public double ErrorSum;

        //是否经过了训练
        public bool Trained;

        //迭代数
        public int NumEpochs;

        //神经网络的各个层
        private List<NeuronLayer> Layers;

        //向窗体发送消息的委托
        public delegate void DelegateOfSendMessage(int Epochs, double Error);
        public event DelegateOfSendMessage SendMessage;

        #endregion

        #region "方法"

        //One training epoch (dispatches to the momentum or non-momentum version)
        private bool NetworkTrainingEpoch(List<List<double>> SetIn, List<List<double>> SetOut)
        {
            if (Useful.WITH_MOMENTUM)
            {
                return NetworkTrainingEpochWithMomentum(SetIn, SetOut);
            }
            else
            {
                return NetworkTrainingEpochNonMomentum(SetIn, SetOut);
            }
        }

        //One training epoch (without momentum)
        private bool NetworkTrainingEpochNonMomentum(List<List<double>> SetIn, List<List<double>> SetOut)
        {

            int curWeight;
            int curNrnOut, curNrnHid;

            ErrorSum = 0;

            //Accumulate the error and adjust the weights for every training pattern
            for (int vec = 0; vec < SetIn.Count; vec++)
            {
                List<double> outputs = Update(SetIn[vec]);

                if (outputs.Count == 0)
                {
                    return false;
                }

                //Adjust the output-layer weights
                for (int op = 0; op < NumOutputs; op++)
                {
                    //Compute the error (delta) of this output neuron
                    double err = (SetOut[vec][op] - outputs[op]) * outputs[op] * (1.0 - outputs[op]);
                    Layers[1].Neurons[op].Error = err;

                    ErrorSum += (SetOut[vec][op] - outputs[op]) * (SetOut[vec][op] - outputs[op]);

                    curWeight = 0;
                    curNrnHid = 0;

                    //All weights except the bias weight
                    while (curWeight < Layers[1].Neurons[op].Weights.Count - 1)
                    {
                        //New weight
                        Layers[1].Neurons[op].Weights[curWeight] += err * LearningRate * Layers[0].Neurons[curNrnHid].Activation;
                        ++curWeight; ++curNrnHid;
                    }

                    //bias
                    Layers[1].Neurons[op].Weights[curWeight] += err * LearningRate * Useful.BIAS;
                }

                curNrnHid = 0;

                int n = 0;

                //For each neuron in the hidden layer
                while (curNrnHid < Layers[0].Neurons.Count)
                {
                    double err = 0;

                    curNrnOut = 0;

                    //For each neuron in the output layer
                    while (curNrnOut < Layers[1].Neurons.Count)
                    {
                        err += Layers[1].Neurons[curNrnOut].Error * Layers[1].Neurons[curNrnOut].Weights[n];
                        ++curNrnOut;
                    }

                    //Compute the error (delta) of this hidden neuron
                    err *= Layers[0].Neurons[curNrnHid].Activation * (1.0 - Layers[0].Neurons[curNrnHid].Activation);

                    //Compute the new weights
                    for (int w = 0; w < NumInputs; w++)
                    {
                        //BP
                        Layers[0].Neurons[curNrnHid].Weights[w] += err * LearningRate * SetIn[vec][w];
                    }

                    //bias
                    Layers[0].Neurons[curNrnHid].Weights[NumInputs] += err * LearningRate * Useful.BIAS;

                    ++curNrnHid;
                    ++n;
                }
            }

            return true;
        }

        //One training epoch (with momentum)
        private bool NetworkTrainingEpochWithMomentum(List<List<double>> SetIn, List<List<double>> SetOut)
        {

            int curWeight;
            int curNrnOut, curNrnHid;

            double WeightUpdate = 0;

            ErrorSum = 0;

            //Accumulate the error and adjust the weights for every training pattern
            for (int vec = 0; vec < SetIn.Count; vec++)
            {
                List<double> outputs = Update(SetIn[vec]);

                if (outputs.Count == 0)
                {
                    return false;
                }

                //Adjust the output-layer weights
                for (int op = 0; op < NumOutputs; op++)
                {
                    //Compute the error (delta) of this output neuron
                    double err = (SetOut[vec][op] - outputs[op]) * outputs[op] * (1.0 - outputs[op]);
                    Layers[1].Neurons[op].Error = err;

                    ErrorSum += (SetOut[vec][op] - outputs[op]) * (SetOut[vec][op] - outputs[op]);

                    curWeight = 0;
                    curNrnHid = 0;

                    int w = 0;

                    //All weights except the bias weight
                    while (curWeight < Layers[1].Neurons[op].Weights.Count - 1)
                    {
                        //Compute the weight update
                        WeightUpdate = err * LearningRate * Layers[0].Neurons[curNrnHid].Activation;
                        //New weight including the momentum term
                        Layers[1].Neurons[op].Weights[curWeight] += WeightUpdate + Layers[1].Neurons[op].PrevUpdate[w] * Useful.MOMENTUM;
                        //Remember this update for the next step's momentum term
                        Layers[1].Neurons[op].PrevUpdate[w] = WeightUpdate;

                        ++curWeight; ++curNrnHid; ++w;
                    }

                    //bias
                    WeightUpdate = err * LearningRate * Useful.BIAS;
                    Layers[1].Neurons[op].Weights[curWeight] += WeightUpdate + Layers[1].Neurons[op].PrevUpdate[w] * Useful.MOMENTUM;
                    Layers[1].Neurons[op].PrevUpdate[w] = WeightUpdate;
     
                }

                curNrnHid = 0;

                int n = 0;

                //For each neuron in the hidden layer
                while (curNrnHid < Layers[0].Neurons.Count)
                {
                    double err = 0;

                    curNrnOut = 0;

                    //For each neuron in the output layer
                    while (curNrnOut < Layers[1].Neurons.Count)
                    {
                        err += Layers[1].Neurons[curNrnOut].Error * Layers[1].Neurons[curNrnOut].Weights[n];
                        ++curNrnOut;
                    }

                    //Compute the error (delta) of this hidden neuron
                    err *= Layers[0].Neurons[curNrnHid].Activation * (1.0 - Layers[0].Neurons[curNrnHid].Activation);

                    //Compute the new weights
                    int w;
                    for (w = 0; w < NumInputs; w++)
                    {
                        //BP
                        WeightUpdate = err * LearningRate * SetIn[vec][w];
                        Layers[0].Neurons[curNrnHid].Weights[w] += WeightUpdate + Layers[0].Neurons[curNrnHid].PrevUpdate[w] * Useful.MOMENTUM;
                        Layers[0].Neurons[curNrnHid].PrevUpdate[w] = WeightUpdate;
                    }

                    //bias
                    WeightUpdate = err * LearningRate * Useful.BIAS;
                    Layers[0].Neurons[curNrnHid].Weights[NumInputs] += WeightUpdate + Layers[0].Neurons[curNrnHid].PrevUpdate[w] * Useful.MOMENTUM;
                    Layers[0].Neurons[curNrnHid].PrevUpdate[w] = WeightUpdate;

                    ++curNrnHid;
                    ++n;
                }
            }

            return true;
        }

        //Create the network layers
        private void CreateNet()
        {
            if (NumHiddenLayers > 0)
            {
                //Hidden layers
                Layers.Add(new NeuronLayer(NeuronsPerHiddenLyr, NumInputs));
                for (int i = 0; i < NumHiddenLayers - 1; i++)
                {
                    Layers.Add(new NeuronLayer(NeuronsPerHiddenLyr, NeuronsPerHiddenLyr));
                }

                //Output layer
                Layers.Add(new NeuronLayer(NumOutputs, NeuronsPerHiddenLyr));
            }
            else
            {
                //Output layer
                Layers.Add(new NeuronLayer(NumOutputs, NumInputs));
            }
        }

        //Reset all weights to small random values
        private void InitializeNetwork()
        {
            //For each layer
            for (int i = 0; i < NumHiddenLayers + 1; i++)
            {
                //For each neuron
                for (int n = 0; n < Layers[i].NumNeurons; n++)
                {
                    //For each weight
                    for (int k = 0; k < Layers[i].Neurons[n].NumInputs; k++)
                    {
                        Layers[i].Neurons[n].Weights[k] = Useful.RandomClamped();
                    }
                }
            }

            ErrorSum = 9999;
            NumEpochs = 0;
        }
 
        //Sigmoid function
        private double Sigmoid(double activation, double response)
        {
            return (1.0 / (1.0 + Math.Exp(- activation / response)));
        }

        //Constructor
        public NeuralNet(int _NumInputs, int _NumOutputs, int _HiddenNeurons, double _LearningRate)
        {
            NumInputs = _NumInputs;
            NumOutputs = _NumOutputs;
            NumHiddenLayers = 1;
            NeuronsPerHiddenLyr = _HiddenNeurons;
            LearningRate = _LearningRate;
            ErrorSum = 9999;
            Trained = false;
            NumEpochs = 0;
            Layers = new List<NeuronLayer>();
            CreateNet();
        }

        //Compute the network outputs for the given input vector
        public List<double> Update(List<double> _inputs)
        {
            List<double> inputs = new List<double>(_inputs);
            List<double> outputs = new List<double>();
            int cWeight = 0;

            //Optionally add noise to the inputs
            if (Useful.WITH_NOISE)
            {
                for (int k = 0; k < inputs.Count; k++)
                {
                    inputs[k] += Useful.RandomClamped() * Useful.MAX_NOISE_TO_ADD;
                }
            }

            //Validate the input length
            if (inputs.Count != NumInputs)
            {
                return outputs;
            }

            //For each layer
            for (int i = 0; i < NumHiddenLayers + 1; i++)
            {
                if (i > 0)
                {
                    inputs = new List<double>(outputs);
                }
                outputs.Clear();

                cWeight = 0;

                //For each neuron
                for (int n = 0; n < Layers[i].NumNeurons; n++)
                {
                    double netinput = 0;

                    int num = Layers[i].Neurons[n].NumInputs;

                    //For each weight (except the bias weight)
                    for (int k = 0; k < num - 1; k++)
                    {
                        netinput += Layers[i].Neurons[n].Weights[k] * inputs[cWeight++];
                    }

                    netinput += Layers[i].Neurons[n].Weights[num - 1] * Useful.BIAS;

                    Layers[i].Neurons[n].Activation = Sigmoid(netinput, Useful.ACTIVATION_RESPONSE);

                    outputs.Add(Layers[i].Neurons[n].Activation);

                    cWeight = 0;
                }
            }

            return outputs;
        }

        //Train the network on the given gesture data
        public bool Train(GestureData data)
        {
            List<List<double>> SetIn = new List<List<double>>(data.SetIn);
            List<List<double>> SetOut = new List<List<double>>(data.SetOut);

            //Validate the training set
            if ((SetIn.Count != SetOut.Count) || (SetIn[0].Count != NumInputs) || (SetOut[0].Count != NumOutputs))
            {
                throw new Exception("训练集输入输出不符!");
            }

            InitializeNetwork();

            //Train until the error falls below the threshold
            while (ErrorSum > Useful.ERROR_THRESHOLD)
            {
                //Run one training epoch
                if (!NetworkTrainingEpoch(SetIn, SetOut))
                {
                    return false;
                }

                NumEpochs++;

                //Report progress to the form (guard against a missing subscriber)
                if (SendMessage != null)
                {
                    SendMessage(NumEpochs, ErrorSum);
                }
            }

            Trained = true;
            return true;
        }

        #endregion
    }
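  The listing references a helper class Useful that is not shown above; a minimal sketch of what it needs to provide is given below. The constant values here are illustrative defaults, not the values used in the original program, and the predefined gesture data would be filled in elsewhere.

    //Sketch of the helper class referenced throughout the listing (illustrative values)
    public static class Useful
    {
        public const double BIAS = -1;                  //bias input fed to every neuron
        public const double ACTIVATION_RESPONSE = 1;    //slope parameter of the sigmoid
        public const double ERROR_THRESHOLD = 0.003;    //stop training below this error
        public const double MOMENTUM = 0.9;             //momentum fraction
        public const double MAX_NOISE_TO_ADD = 0.1;     //maximum noise added to the inputs
        public static bool WITH_MOMENTUM = false;       //train with momentum?
        public static bool WITH_NOISE = false;          //add noise to the inputs?

        //Predefined gesture names and their flattened input vectors
        public static string[] InitNames;
        public static double[][] InitPatterns;

        private static readonly Random Rand = new Random();

        //Random value in the range [-1, 1]
        public static double RandomClamped()
        {
            return Rand.NextDouble() * 2 - 1;
        }
    }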