Android Studio environment setup and workflow:
- First, add the dependencies to the project's module-level build.gradle, including the Socket.IO client and the WebRTC library libjingle:
compile 'com.github.nkzawa:socket.io-client:0.4.2'
compile 'io.pristine:libjingle:8871@aar'
Usage steps:
①: Create the GLSurfaceView control that displays the video stream. In my project I create it directly in code, but it can also be added in an XML layout file:
surFaceView = new GLSurfaceView(context, null);
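Since the view is created in code, it still has to be attached to the view hierarchy; a minimal sketch, assuming a FrameLayout container with a hypothetical id:
// Attach the programmatically created view to an existing container
// (R.id.video_container is an illustrative name, not from the demo):
FrameLayout container = (FrameLayout) findViewById(R.id.video_container);
container.addView(surFaceView);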
②: After the GLSurfaceView is created it must be pre-rendered first, otherwise an exception is thrown; I won't go into the details here. In my code, as soon as the GLSurfaceView is created I request the remote video stream and set up the rendering surface:
surfaceRunnable = new Runnable() {
@Override
public void run() {
initStreamView();
}
};
VideoRendererGui.setView(surFaceView, surfaceRunnable);
remoteRender = VideoRendererGui.create(
        REMOTE_X, REMOTE_Y,
        REMOTE_WIDTH, REMOTE_HEIGHT, scalingType, false);
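Note that VideoRendererGui.create() interprets the position and size arguments as percentages of the GLSurfaceView, not pixels. A minimal sketch of the constants used above, assuming the remote view should fill the whole surface:
// Position and size in percent of the GLSurfaceView (assumed values):
private static final int REMOTE_X = 0;        // left edge
private static final int REMOTE_Y = 0;        // top edge
private static final int REMOTE_WIDTH = 100;  // full width
private static final int REMOTE_HEIGHT = 100; // full height
// Scale the video to fill the view, cropping if the aspect ratios differ.
private final VideoRendererGui.ScalingType scalingType = VideoRendererGui.ScalingType.SCALE_ASPECT_FILL;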
③: Next comes initializing the video stream. This runs as an asynchronous task inside VideoRendererGui.setView(surFaceView, surfaceRunnable):
synchronized (PatientCenterActivity.class){
try {
Point displaySize = new Point();
getWindowManager().getDefaultDisplay().getSize(displaySize);
if (eglContext == null) {
eglContext = VideoRendererGui.getEGLContext();
}
PeerConnectionParameters params = new PeerConnectionParameters(
true, false, displaySize.x, displaySize.y, 30, 1, VIDEO_CODEC_VP9, true, 1, AUDIO_CODEC_OPUS, true);
webRtcClient = new WebRtcClient(this, AppConstant.RTCADDRESS, params, eglContext);
} catch (Exception e) {
e.printStackTrace();
}
}
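For readability, here is the same constructor call with each positional argument annotated. The field names are taken from the PeerConnectionParameters class this demo uses; treat them as an assumption if your copy differs:
PeerConnectionParameters params = new PeerConnectionParameters(
        true,             // videoCallEnabled
        false,            // loopback
        displaySize.x,    // videoWidth
        displaySize.y,    // videoHeight
        30,               // videoFps
        1,                // videoStartBitrate
        VIDEO_CODEC_VP9,  // videoCodec
        true,             // videoCodecHwAcceleration
        1,                // audioStartBitrate
        AUDIO_CODEC_OPUS, // audioCodec
        true);            // noAudioProcessing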
④: The first step in using WebRTC is to initialize PeerConnectionFactory's Android globals. In this libjingle build that is the static initializeAndroidGlobals() call, taking the context, the audio/video flags, the hardware-acceleration flag, and the EGL context obtained in step ③:
PeerConnectionFactory.initializeAndroidGlobals(context, true, true, params.videoCodecHwAcceleration, eglContext);
⑤: After the globals are initialized, create the PeerConnectionFactory object:
factory = new PeerConnectionFactory();
⑥: Next, connect to the signaling server over the socket and configure the ICE servers, add handlers for messages sent by the server and for peers joining a room, and set up the listeners needed after joining (including whether someone has joined the session, call-state monitoring, and the media-constraint configuration that wraps the video stream):
MessageHandler messageHandler = new MessageHandler();
try {
client = IO.socket(host);
} catch (URISyntaxException e) {
e.printStackTrace();
}
client.on("id", messageHandler.onId);
client.on("id2", messageHandler.onId2);
client.on("message", messageHandler.onMessage);
client.connect();
iceServers.add(new PeerConnection.IceServer("stun:23.21.150.121"));
iceServers.add(new PeerConnection.IceServer("stun:stun.l.google.com:19302"));
pcConstraints.mandatory.add(new MediaConstraints.KeyValuePair("OfferToReceiveAudio", "true"));
pcConstraints.mandatory.add(new MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"));
pcConstraints.optional.add(new MediaConstraints.KeyValuePair("DtlsSrtpKeyAgreement", "true"));
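For context, here is a minimal sketch of what the MessageHandler registered above might look like, based on the socket.io-client Emitter.Listener API. The mListener.onCallReady() call is assumed from the demo's listener interface, and the dispatch logic is illustrative; onId2 would follow the same pattern as onId:
import com.github.nkzawa.emitter.Emitter;
import org.json.JSONObject;

private class MessageHandler {
    // Fired once the server assigns this client an id.
    Emitter.Listener onId = new Emitter.Listener() {
        @Override
        public void call(Object... args) {
            String id = (String) args[0];
            mListener.onCallReady(id); // hand the id to the UI layer
        }
    };
    // Fired for every signaling payload relayed by the server.
    Emitter.Listener onMessage = new Emitter.Listener() {
        @Override
        public void call(Object... args) {
            JSONObject data = (JSONObject) args[0];
            // Inspect the payload type ("offer", "answer", "candidate", ...)
            // and route it to the matching peer-connection command.
        }
    };
}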
⑦: The final step is to call, through an interface callback, the native method createLocalMediaStream() exposed by libjingle's .so library. It opens the camera, captures the video and audio streams, wraps them into a MediaStream object, and sends the stream to the peer through the server:
private void setCamera() {
    // Wrap the local audio/video tracks into a single MediaStream labeled "ARDAMS".
    localMS = factory.createLocalMediaStream("ARDAMS");
    if (pcParams.videoCallEnabled) {
        MediaConstraints videoConstraints = new MediaConstraints();
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxHeight", Integer.toString(pcParams.videoHeight)));
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxWidth", Integer.toString(pcParams.videoWidth)));
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxFrameRate", Integer.toString(pcParams.videoFps)));
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("minFrameRate", Integer.toString(pcParams.videoFps)));
        videoSource = factory.createVideoSource(getVideoCapturer(), videoConstraints);
        localMS.addTrack(factory.createVideoTrack("ARDAMSv0", videoSource));
    }
    AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
    localMS.addTrack(factory.createAudioTrack("ARDAMSa0", audioSource));
    mListener.onLocalStream(localMS);
}
private VideoCapturer getVideoCapturer() {
    // Capture from the front-facing camera; dispose of any previous capturer first.
    String frontCameraDeviceName = VideoCapturerAndroid.getNameOfFrontFacingDevice();
    if (videoCapturer != null) {
        videoCapturer.dispose();
        videoCapturer = null;
    }
    videoCapturer = VideoCapturerAndroid.create(frontCameraDeviceName);
    return videoCapturer;
}
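Once setCamera() runs, the stream comes back through the listener's onLocalStream() callback. A minimal sketch of rendering it on screen, assuming localRender is a VideoRenderer.Callbacks created with VideoRendererGui.create() just like remoteRender in step ②:
@Override
public void onLocalStream(MediaStream localStream) {
    // Attach the captured video track to the on-screen renderer.
    localStream.videoTracks.get(0).addRenderer(new VideoRenderer(localRender));
}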
Below is the server-side demo, which I adapted from someone else's work:
Server-side WebRTC demo address
*Run node app.js in the root of the ProjectRTC-master project; the default port is 3000.*
Once the server is set up and running, change the host IP address in the Android demo to the server's IP:
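The address in question is the AppConstant.RTCADDRESS constant passed to WebRtcClient in step ③; a minimal sketch, with an illustrative LAN IP:
public class AppConstant {
    // Points at the ProjectRTC signaling server started above (port 3000 by default).
    public static final String RTCADDRESS = "http://192.168.1.100:3000/";
}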
Android WebRTC demo address