An Android authentication system based on facial-expression recognition and knock detection; the expression recognition supports automatic, continuous, hidden photo capture

A demo app for authentication via facial-expression recognition and knock detection

This is a demo app. Expression recognition is built on the Microsoft Face API: after an expression prompt is tapped, the app automatically takes several photos, the capture view is hidden from the user, and authentication succeeds once the detected expressions match a predefined expression sequence.
Knock recognition works along similar lines: it is implemented with the accelerometer and the audio recorder, combining the two keeps the accuracy relatively high, and the number of knocks to recognize is configurable.

Download:

https://download.csdn.net/download/fanyishi/10446661


Expression recognition

The basic framework is built on the Microsoft Face API (Cognitive Services).

Note: a free one-month trial of the API can be requested at https://azure.microsoft.com/zh-cn/try/cognitive-services/?unauthorized=1

The key I applied for earlier has already expired; you can request a new one at the URL above and put it in Cognitive-Face-Android-master\Sample\app\src\main\res\values\strings.xml:

<!-- Please refer to https://azure.microsoft.com/en-us/services/cognitive-services/face/ to get your subscription key -->
    <!-- If you have subscription key with its corresponding endpoint, you can add them here to use the service -->
    <string name="subscription_key">26e66675269944f4a27656d35afd8026</string>
    <string name="endpoint">https://westcentralus.api.cognitive.microsoft.com/face/v1.0</string>

Just add your own key here.

The idea is to take a picture locally with the front camera and upload it to the Microsoft API, which returns all kinds of information about the face; the emotion information is extracted from that, and the character corresponding to the expression is shown in the UI.

The camera is a hidden camera that takes photos automatically and continuously, built on Google's official custom-camera API.
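As a rough sketch of how the captured frames could be handed to the detection task (wiring onPictureTaken to DetectionTask this way is an illustrative assumption, not the project's exact code; it uses android.hardware.Camera, java.io.ByteArrayInputStream and java.io.InputStream):

    // Hypothetical glue: hand each automatically captured JPEG frame to the detection task.
    private final Camera.PictureCallback jpegCallback = new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera camera) {
            // The Face API client takes an InputStream, so wrap the JPEG bytes directly.
            InputStream stream = new ByteArrayInputStream(data);
            new DetectionTask().execute(stream);
            camera.startPreview(); // restart the preview so the next automatic capture can run
        }
    };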

Core code:

// Background task of face detection.
    private class DetectionTask extends AsyncTask<InputStream, String, Face[]> {
        private boolean mSucceed = true;

        @Override
        protected Face[] doInBackground(InputStream... params) {
            // Get an instance of face service client to detect faces in image.
            FaceServiceClient faceServiceClient = SampleApp.getFaceServiceClient();
            try {
                publishProgress("Detecting...");

                // Start detection.
                return faceServiceClient.detect(
                        params[0],  /* Input stream of image to detect */
                        true,       /* Whether to return face ID */
                        true,       /* Whether to return face landmarks */
                        /* Which face attributes to analyze, currently we support:
                           age,gender,headPose,smile,facialHair */
                        new FaceServiceClient.FaceAttributeType[] {
                                FaceServiceClient.FaceAttributeType.Emotion});
            } catch (Exception e) {
                mSucceed = false;
                publishProgress(e.getMessage());
                addLog(e.getMessage());
                return null;
            }
        }

        @Override
        protected void onPreExecute() {
            mProgressDialog.show();
            addLog("Request: Detecting in image " + mImageUri);
        }

        @Override
        protected void onProgressUpdate(String... progress) {
            mProgressDialog.setMessage(progress[0]);
            setInfo(progress[0]);
        }

        @Override
        protected void onPostExecute(Face[] result) {
            if (mSucceed) {
                addLog("Response: Success. Detected " + (result == null ? 0 : result.length)
                        + " face(s) in " + mImageUri);
            }

            // Show the result on screen when detection is done.
            setUiAfterDetection(result, mSucceed);
        }
    }

Extract the returned data and map each expression to its corresponding symbol:

public View getView(final int position, View convertView, ViewGroup parent) {
            if (convertView == null) {
                LayoutInflater layoutInflater =
                        (LayoutInflater) getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                convertView = layoutInflater.inflate(R.layout.item_face_with_description, parent, false);
            }
            convertView.setId(position);

            // Show the face thumbnail.
            ((ImageView) convertView.findViewById(R.id.face_thumbnail)).setImageBitmap(
                    faceThumbnails.get(position));

            // Show the face details: keep only the two-letter emotion code (e.g. "Ha", "An").
            emotion = getEmotion(faces.get(position).faceAttributes.emotion).substring(0, 2);
            // Pass the extracted code on to the authentication input (input() and view are defined elsewhere in the activity).
            input(view);
            return convertView;
        }
        // Pick the emotion with the highest confidence and return its label and score.
        private String getEmotion(Emotion emotion)
        {
            String emotionType = "";
            double emotionValue = 0.0;
            if (emotion.anger > emotionValue)
            {
                emotionValue = emotion.anger;
                emotionType = "An: Anger";
            }
            if (emotion.contempt > emotionValue)
            {
                emotionValue = emotion.contempt;
                emotionType = "Co: Contempt";
            }
            if (emotion.disgust > emotionValue)
            {
                emotionValue = emotion.disgust;
                emotionType = "Di: Disgust";
            }
            if (emotion.fear > emotionValue)
            {
                emotionValue = emotion.fear;
                emotionType = "Fe: Fear";
            }
            if (emotion.happiness > emotionValue)
            {
                emotionValue = emotion.happiness;
                emotionType = "Ha: Happiness";
            }
            if (emotion.neutral > emotionValue)
            {
                emotionValue = emotion.neutral;
                emotionType = "Ne: Neutral";
            }
            if (emotion.sadness > emotionValue)
            {
                emotionValue = emotion.sadness;
                emotionType = "Sa: Sadness";
            }
            if (emotion.surprise > emotionValue)
            {
                emotionValue = emotion.surprise;
                emotionType = "Su: Surprise";
            }
            return String.format("%s: %f", emotionType, emotionValue);
        }
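The matching against the predefined expression sequence mentioned at the top is handled elsewhere in the app; a minimal sketch of what it could look like, assuming the two-letter codes from getEmotion (e.g. "Ha", "Su") are collected in order as the photos come in (EXPECTED_SEQUENCE and matchesSequence are hypothetical names):

    // Hypothetical sequence check: authentication succeeds only if the captured
    // expression codes match the predefined sequence in order.
    private static final String[] EXPECTED_SEQUENCE = {"Ha", "Su", "An"}; // example sequence

    private boolean matchesSequence(java.util.List<String> collected) {
        if (collected.size() < EXPECTED_SEQUENCE.length) return false;
        for (int i = 0; i < EXPECTED_SEQUENCE.length; i++) {
            if (!EXPECTED_SEQUENCE[i].equals(collected.get(i))) return false;
        }
        return true; // every expression matched -> authentication passes
    }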
Camera part:
        cameraView = new CameraView(this, this); // take pictures through a SurfaceView-based view
        cameraView.setTag(1);
        setContentView(R.layout.activity_camera);
        relative = (RelativeLayout) this.findViewById(R.id.ly);
        // Size the SurfaceView at 1x1 px so it stays hidden while still being able to take pictures
        RelativeLayout.LayoutParams layoutParams = new RelativeLayout.LayoutParams(1, 1);
        layoutParams.addRule(RelativeLayout.ALIGN_PARENT_BOTTOM, RelativeLayout.TRUE);
        layoutParams.addRule(RelativeLayout.ALIGN_PARENT_RIGHT, RelativeLayout.TRUE);

        ActivityManager.getInstance().addActivity(this);
        relative.addView(cameraView, layoutParams);

The full camera code can be found in the source.
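The automatic, repeated capture lives inside CameraView; a rough sketch of the usual approach with the old android.hardware.Camera API (the Handler loop and the 3 s interval here are illustrative assumptions, not the project's exact values):

    // Hypothetical auto-capture loop: take a picture every few seconds with no user interaction.
    private final Handler handler = new Handler(Looper.getMainLooper());
    private final Runnable captureLoop = new Runnable() {
        @Override
        public void run() {
            if (camera != null) {
                camera.takePicture(null, null, jpegCallback); // jpegCallback uploads the frame and restarts the preview
            }
            handler.postDelayed(this, 3000); // capture again after ~3 s
        }
    };

    private void startAutoCapture() {
        handler.post(captureLoop);
    }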

Knock detection

The core of the detection algorithm: a Timer fires every 20 ms to check whether the recorder has picked up a knock sound and to watch for spikes on the accelerometer's Z axis; a second Timer fires every 25 ms to read the results of the sound and acceleration detection. Once a single knock is recognized, a 2-3 s timeout window is started and the number of knocks within that window is counted.

Core code:

Event detection part (fuses the sound and accelerometer detectors):

private void startEventDetectTimer(){

		mTimer = new Timer();

		eventGen = new TimerTask(){

			int nrTicks = 0;

			EventGenState_t state = EventGenState_t.NoneSet;

			@Override
			public void run() {

				switch(state){
				//None of the bools set
				case NoneSet:
					if		( mSoundKnockDetector.spikeDetected && !mAccelSpikeDetector.spikeDetected) state = EventGenState_t.VolumSet;
					else if	(!mSoundKnockDetector.spikeDetected &&  mAccelSpikeDetector.spikeDetected) state = EventGenState_t.AccelSet; 
					else if	( mSoundKnockDetector.spikeDetected &&  mAccelSpikeDetector.spikeDetected){

						mSoundKnockDetector.spikeDetected = false;
						mAccelSpikeDetector.spikeDetected = false;
						state =  EventGenState_t.NoneSet;
						//generate knock event
						mPatt.knockEvent();
					}
					nrTicks = 0;
					break;
					//volume flag set
				case VolumSet:
					if(mAccelSpikeDetector.spikeDetected){
						mSoundKnockDetector.spikeDetected = false;
						mAccelSpikeDetector.spikeDetected = false;
						state =  EventGenState_t.NoneSet;
						//generate knock event
						mPatt.knockEvent();
						break;
					}else{
						nrTicks+=1;
						if(nrTicks > period){
							nrTicks = 0;
							mSoundKnockDetector.spikeDetected = false;
							state = EventGenState_t.NoneSet;
						}
					}
					break;

					//accel flag set
				case AccelSet:
					if(mSoundKnockDetector.spikeDetected){
						mSoundKnockDetector.spikeDetected = false;
						mAccelSpikeDetector.spikeDetected = false;
						state =  EventGenState_t.NoneSet;
						//generate knock event
						mPatt.knockEvent();
						break;
					}else{
						nrTicks+=1;
						if(nrTicks > period){
							nrTicks = 0;
							mAccelSpikeDetector.spikeDetected = false;
							state = EventGenState_t.NoneSet;
						}
					}						
					break;
				}
			}
		};
		mTimer.scheduleAtFixedRate(eventGen, 0, period); //start after 0 ms
	}
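Each fused knock ends up in mPatt.knockEvent(); the 2-3 s counting window described above could look roughly like this (KnockPattern and its fields are a sketch under that assumption, not the project's exact class):

    // Hypothetical knock counter: the first knock opens a timeout window, knocks inside
    // the window are counted, and the count is compared against the configured number.
    class KnockPattern {
        private final int expectedKnocks;    // configurable number of knocks
        private final long windowMs = 2500;  // 2-3 s recognition window
        private int count = 0;
        private Timer windowTimer;

        KnockPattern(int expectedKnocks) { this.expectedKnocks = expectedKnocks; }

        synchronized void knockEvent() {
            if (count == 0) {
                // first knock: start the timeout that closes the window
                windowTimer = new Timer();
                windowTimer.schedule(new TimerTask() {
                    @Override
                    public void run() { finishWindow(); }
                }, windowMs);
            }
            count++;
        }

        private synchronized void finishWindow() {
            boolean success = (count == expectedKnocks);
            count = 0;
            windowTimer.cancel();
            // success == true -> knock authentication passed; notify the UI here
        }
    }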

Accelerometer part:

public void onSensorChanged(SensorEvent event) {
		prevXVal = currentXVal;
		currentXVal = abs(event.values[0]); // X-axis
		diffX = currentXVal - prevXVal;
		
		prevYVal = currentYVal;
		currentYVal = abs(event.values[1]); // Y-axis
		diffY = currentYVal - prevYVal;		
		
		prevZVal = currentZVal;
		currentZVal = abs(event.values[2]); // Z-axis
		diffZ = currentZVal - prevZVal;

		//Z force must be above some limit, the other forces below some limit to filter out shaking motions
		if (currentZVal > prevZVal && diffZ > thresholdZ && diffX < threshholdX && diffY < threshholdY){
			accTapEvent();
		}

	}
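The sound detector (mSoundKnockDetector) that the 20 ms timer polls is not shown above; one simple way to implement it, sketched here as an assumption rather than the project's actual code, is to poll MediaRecorder.getMaxAmplitude() and raise spikeDetected whenever the amplitude jumps past a threshold:

    // Hypothetical sound spike detector polled by the event-detection timer.
    class SoundKnockDetector {
        volatile boolean spikeDetected = false;
        private static final int AMPLITUDE_THRESHOLD = 15000; // tune experimentally
        private MediaRecorder recorder;

        void start() throws IOException {
            recorder = new MediaRecorder();
            recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            recorder.setOutputFile("/dev/null"); // only the amplitude is needed, not the recording
            recorder.prepare();
            recorder.start();
        }

        // Called every 20 ms from the detection timer.
        void poll() {
            if (recorder != null && recorder.getMaxAmplitude() > AMPLITUDE_THRESHOLD) {
                spikeDetected = true;
            }
        }

        void stop() {
            if (recorder != null) {
                recorder.stop();
                recorder.release();
                recorder = null;
            }
        }
    }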

The full implementation is too long to paste piece by piece; see the source code.

Referenced projects:

Emotion API:

https://docs.microsoft.com/zh-cn/azure/cognitive-services/emotion/home

Microsoft Cognitive Services:

https://github.com/Microsoft/Cognitive-Face-Android.git

Blog post: "Android 检测手机的敲击事件" (detecting knock events on Android): https://blog.csdn.net/dahaohan/article/details/52883743

GitHub: https://github.com/lishushu/KnockDetection.git



