UDP is a datagram protocol that sits at the transport layer. It is connectionless: IP datagrams can be sent without first establishing a connection. In our chat room, the data to send is packed into a datagram and sent to the peer; the peer unpacks it and, following the protocol the two sides agreed on, turns the bytes back into the original data. (My understanding of UDP is not especially deep yet.) In Java, a UDP chat room can be implemented as follows:
- Receiver
DatagramSocket socket = new DatagramSocket(portNumber); // create the receiver on a given port
while (true) {
    byte[] bytes = new byte[socket.getReceiveBufferSize()]; // receive buffer, sized to the socket's receive buffer
    DatagramPacket p = new DatagramPacket(bytes, bytes.length); // wrap the buffer in a packet, since what arrives is a packet
    socket.receive(p); // block until a packet arrives
    String a = new String(p.getData(), 0, p.getLength()); // only p.getLength() bytes are valid; print to verify reception
    System.out.println(a);
}
- Sender
DatagramSocket socket = new DatagramSocket();
String s = "text to send"; // the message to send
InetAddress address = InetAddress.getByName("peer IP address");
int port = portNumber; // the receiver's port
byte[] bytes = s.getBytes(); // whatever we send must first become a byte array
DatagramPacket p = new DatagramPacket(bytes, bytes.length, address, port); // packet: the bytes, their length, the receiver's IP address and port
socket.send(p); // send it
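The two snippets above can be checked end to end with a small loopback sketch. The class name `UdpLoopback` and the use of an ephemeral port are my own additions for illustration, not part of the original chat room:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpLoopback {
    // Send one datagram to ourselves and read it back,
    // mirroring the sender/receiver snippets above.
    public static String roundTrip(String message) throws Exception {
        DatagramSocket receiver = new DatagramSocket(0); // 0 = pick any free port
        DatagramSocket sender = new DatagramSocket();
        try {
            byte[] out = message.getBytes("UTF-8");
            sender.send(new DatagramPacket(out, out.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            byte[] buf = new byte[65507]; // largest possible UDP payload
            DatagramPacket recv = new DatagramPacket(buf, buf.length);
            receiver.receive(recv);
            // only getLength() bytes of the buffer are valid
            return new String(recv.getData(), 0, recv.getLength(), "UTF-8");
        } finally {
            sender.close();
            receiver.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello")); // prints "hello"
    }
}
```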
- Building the chat room
First build a UI. That part is up to you; mine is fairly crude, so I will not include its code here.
Since we later want a Draw-and-Guess game plus video, both sides need to agree on a protocol for the bytes in each packet: the first byte is 1 for drawing, 2 for text, and 7 for video. Because each side has to keep receiving in a loop, the receiving code runs on an extra thread that waits for packets forever.
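A minimal sketch of this one-byte protocol, using the tag values above (the class and method names are my own, for illustration only):

```java
public class Protocol {
    // Tag values agreed on by both sides (from the text above).
    public static final byte DRAW = 1, TEXT = 2, VIDEO = 7;

    // Prepend the one-byte tag to a payload before sending.
    public static byte[] frame(byte tag, byte[] payload) {
        byte[] out = new byte[payload.length + 1];
        out[0] = tag;
        System.arraycopy(payload, 0, out, 1, payload.length);
        return out;
    }

    // Dispatch on the first byte of a received packet.
    public static String kind(byte[] packet) {
        switch (packet[0]) {
            case DRAW:  return "draw";
            case TEXT:  return "text";
            case VIDEO: return "video";
            default:    return "unknown";
        }
    }

    public static void main(String[] args) {
        byte[] p = frame(TEXT, "hi".getBytes());
        System.out.println(kind(p)); // prints "text"
    }
}
```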
1. Simple chat
As the receiver:
DatagramSocket socket = new DatagramSocket(portNumber);
while (true) {
    byte[] bytes = new byte[socket.getReceiveBufferSize()];
    DatagramPacket p = new DatagramPacket(bytes, bytes.length);
    socket.receive(p);
    if (bytes[0] == 2) { // first byte 2 means text
        String a = new String(p.getData(), 1, p.getLength() - 1); // skip the protocol byte
        if (ui.text.getText().equals(" ")) {
            ui.text.setText(a);
        } else {
            ui.text.setText(ui.text.getText() + '\n' + a); // append the peer's text to the chat area
        }
    }
}
As the sender:
DatagramSocket socket = new DatagramSocket();
String s = ui.text2.getText(); // the UI has an input area; take what was typed there
ui.text.setText(ui.text.getText() + '\n' + s); // show your own message in the chat area
ui.text2.setText(" "); // clear the input box after sending
InetAddress address = InetAddress.getByName("peer IP address");
int port = portNumber;
byte[] bytes = new byte[s.getBytes().length + 1]; // one extra byte to tell the peer this is text
bytes[0] = 2; // protocol tag, chosen by ourselves
System.arraycopy(s.getBytes(), 0, bytes, 1, s.getBytes().length); // copy the text bytes in after the tag
DatagramPacket p = new DatagramPacket(bytes, bytes.length, address, port);
socket.send(p);
2. Simple Draw-and-Guess game
Here we need point coordinates: each int must be converted to bytes before sending, and the received bytes converted back to an int so the matching stroke can be drawn. Putting the two conversions into methods makes them easy to call whenever a conversion is needed.
int to bytes:
public byte[] intobyte(int x) {
    byte[] bytes = new byte[4];
    bytes[0] = (byte) (x & 0xFF); // little-endian: lowest byte first
    bytes[1] = (byte) ((x >> 8) & 0xFF);
    bytes[2] = (byte) ((x >> 16) & 0xFF);
    bytes[3] = (byte) ((x >> 24) & 0xFF);
    return bytes;
}
bytes to int:
public int bytetoint(byte[] bytes) {
    // reassemble the little-endian bytes into an int
    return (bytes[0] & 0xff) | ((bytes[1] & 0xff) << 8) | ((bytes[2] & 0xff) << 16) | ((bytes[3] & 0xff) << 24);
}
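As an aside, `java.nio.ByteBuffer` from the standard library can replace both hand-rolled converters. This sketch (class and method names are mine) reproduces the same little-endian byte order used above:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class IntBytes {
    // Same little-endian layout as the hand-rolled intobyte above.
    public static byte[] intToBytes(int x) {
        return ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(x).array();
    }

    // Inverse of intToBytes, matching bytetoint above.
    public static int bytesToInt(byte[] b) {
        return ByteBuffer.wrap(b).order(ByteOrder.LITTLE_ENDIAN).getInt();
    }

    public static void main(String[] args) {
        System.out.println(bytesToInt(intToBytes(123456789))); // prints 123456789
    }
}
```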
As the receiver:
Field: Graphics g;
Constructor(Graphics g) {
    this.g = g; // the canvas's Graphics object is passed in from the UI
}
DatagramSocket socket = new DatagramSocket(portNumber);
while (true) {
    byte[] bytes = new byte[socket.getReceiveBufferSize()];
    DatagramPacket p = new DatagramPacket(bytes, bytes.length);
    socket.receive(p);
    if (bytes[0] == 1) { // first byte 1 means drawing
        byte[] bytes2 = new byte[4];
        System.arraycopy(bytes, 1, bytes2, 0, 4);
        int x1 = ui.bytetoint(bytes2);
        System.arraycopy(bytes, 5, bytes2, 0, 4);
        int y1 = ui.bytetoint(bytes2);
        System.out.println("received line coordinates " + x1 + " and " + y1);
        g.drawLine(x1, y1, x1, y1); // draw the peer's stroke; for simplicity we send only one point per packet, but sending two points and drawing a line between them keeps fast strokes from looking like a trail of dots
    }
}
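Putting the tag and the two converted coordinates together, one drawing packet looks like this. `DrawPacket` is a name I made up for illustration; the layout (one tag byte followed by two little-endian ints) follows the code above:

```java
public class DrawPacket {
    // Tag 1 = drawing; two little-endian ints follow (x, then y).
    public static byte[] pack(int x, int y) {
        byte[] out = new byte[9];
        out[0] = 1;
        for (int i = 0; i < 4; i++) {
            out[1 + i] = (byte) (x >> (8 * i));
            out[5 + i] = (byte) (y >> (8 * i));
        }
        return out;
    }

    // Recover {x, y} from a received drawing packet.
    public static int[] unpack(byte[] p) {
        int x = 0, y = 0;
        for (int i = 3; i >= 0; i--) {
            x = (x << 8) | (p[1 + i] & 0xFF);
            y = (y << 8) | (p[5 + i] & 0xFF);
        }
        return new int[]{x, y};
    }

    public static void main(String[] args) {
        int[] xy = unpack(pack(100, 200));
        System.out.println(xy[0] + "," + xy[1]); // prints "100,200"
    }
}
```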
As the sender:
package UDP;
import java.awt.BasicStroke;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import java.awt.event.MouseMotionListener;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class tu implements MouseListener, MouseMotionListener {
    int x1, x2, y1, y2;
    Graphics g;
    UI ui;

    tu(Graphics g, UI ui) {
        this.g = g;
        this.ui = ui;
        Graphics2D g2d = (Graphics2D) this.g; // make the stroke a bit thicker
        g2d.setStroke(new BasicStroke(5));
    }

    @Override
    public void mouseDragged(MouseEvent e) { // called while the mouse is dragged
        x2 = e.getX();
        y2 = e.getY();
        g.drawLine(x1, y1, x2, y2);
        try {
            DatagramSocket socket = new DatagramSocket();
            InetAddress address = InetAddress.getByName("peer IP address");
            int port = portNumber;
            byte[] bytes1 = new byte[9];
            bytes1[0] = 1; // protocol tag agreed on by both sides
            System.arraycopy(ui.intobyte(x1), 0, bytes1, 1, 4); // pack the coordinates into the array to send
            System.arraycopy(ui.intobyte(y1), 0, bytes1, 5, 4);
            // System.arraycopy(ui.intobyte(ui.t.x2), 0, bytes1, 9, 4); // earlier code that also sent a second coordinate
            // System.arraycopy(ui.intobyte(ui.t.y2), 0, bytes1, 13, 4);
            DatagramPacket p = new DatagramPacket(bytes1, bytes1.length, address, port);
            socket.send(p);
        } catch (Exception a) {
            a.printStackTrace();
        }
        x1 = x2;
        y1 = y2;
    }

    @Override
    public void mouseMoved(MouseEvent e) {
    }

    @Override
    public void mouseClicked(MouseEvent e) {
    }

    @Override
    public void mousePressed(MouseEvent e) { // called when the mouse is pressed
        x1 = e.getX();
        y1 = e.getY();
    }

    @Override
    public void mouseReleased(MouseEvent e) {
    }

    @Override
    public void mouseEntered(MouseEvent e) {
    }

    @Override
    public void mouseExited(MouseEvent e) {
    }
}
3. Video communication and face detection
Both features require importing two libraries: webcam-capture and OpenCV 4.
Right-click your project → Build Path → Configure Build Path → Add JARs (or Add External JARs for jars outside the workspace). For face detection, import the x64 jar on a 64-bit system and the x86 jar on 32-bit. After importing, expand opencv under Build Path, select Native library location, click Edit, and point it at the x64 folder inside the opencv directory.
(1) Video communication:
As the receiver:
DatagramSocket socket = new DatagramSocket(portNumber);
while (true) {
    byte[] bytes = new byte[socket.getReceiveBufferSize()];
    DatagramPacket p = new DatagramPacket(bytes, bytes.length);
    socket.receive(p);
    if (bytes[0] == 7) { // first byte 7 means video
        byte[] bytes1 = new byte[p.getLength() - 1];
        System.arraycopy(bytes, 1, bytes1, 0, bytes1.length); // strip the protocol byte
        ByteArrayInputStream pos = new ByteArrayInputStream(bytes1); // wrap the bytes in a stream
        BufferedImage img1 = ImageIO.read(pos); // decode the stream back into an image
        g.drawImage(img1, 500, 500, null); // draw the peer's frame (drawing frame after frame produces the moving video)
    }
}
As the sender:
The sending loop runs on its own thread, triggered here through a listener:
class sendvideo implements ActionListener {
    InetAddress address;
    DatagramSocket socket;

    public void actionPerformed(ActionEvent e) {
        Webcam webcam = Webcam.getDefault(); // grab the computer's default camera
        webcam.setViewSize(new Dimension(320, 240)); // set the frame size
        webcam.open();
        try {
            socket = new DatagramSocket();
            address = InetAddress.getByName("peer IP address");
        } catch (Exception e1) {
            e1.printStackTrace();
        }
        while (true) {
            BufferedImage img = webcam.getImage(); // grab a frame from the camera
            if (img != null) {
                ByteArrayOutputStream bos = new ByteArrayOutputStream(); // byte output stream
                try {
                    ImageIO.write(img, "jpg", bos); // compress the frame into the stream as JPEG
                    byte[] bytes = bos.toByteArray(); // as a byte array it can be packed and sent
                    byte[] bytes1 = new byte[bytes.length + 1];
                    bytes1[0] = 7; // protocol tag for video
                    System.arraycopy(bytes, 0, bytes1, 1, bytes.length);
                    int port = portNumber;
                    DatagramPacket p = new DatagramPacket(bytes1, bytes1.length, address, port);
                    socket.send(p); // send our own frame
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }
    }
}
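One caveat with sending JPEG frames this way: a single UDP datagram carries at most 65507 payload bytes, so each compressed frame has to fit in one packet. A sketch of the encode/decode pair with that check added (the class name `FrameCodec` and the size guard are my own additions):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class FrameCodec {
    static final int MAX_UDP_PAYLOAD = 65507; // limit for one IPv4 UDP datagram

    // Compress a frame to JPEG and prepend the video tag (7).
    public static byte[] encode(BufferedImage img) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ImageIO.write(img, "jpg", bos);
        byte[] jpeg = bos.toByteArray();
        if (jpeg.length + 1 > MAX_UDP_PAYLOAD)
            throw new IOException("frame too large for one datagram: " + jpeg.length);
        byte[] out = new byte[jpeg.length + 1];
        out[0] = 7;
        System.arraycopy(jpeg, 0, out, 1, jpeg.length);
        return out;
    }

    // Decode a tagged packet back into an image; `length` is
    // DatagramPacket.getLength() on the receiving side.
    public static BufferedImage decode(byte[] packet, int length) throws IOException {
        return ImageIO.read(new ByteArrayInputStream(packet, 1, length - 1));
    }

    public static void main(String[] args) throws IOException {
        BufferedImage img = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
        byte[] pkt = encode(img);
        BufferedImage back = decode(pkt, pkt.length);
        System.out.println(back.getWidth() + "x" + back.getHeight()); // prints "320x240"
    }
}
```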
(2) Face detection
public BufferedImage face(BufferedImage img) {
    Graphics g2 = img.getGraphics();
    CascadeClassifier faceDetector = new CascadeClassifier();
    // load the LBP face cascade; substring(1) strips the leading '/' from the resource path on Windows
    faceDetector.load(Capture.class.getResource("opencv/lbpcascades/lbpcascade_frontalface.xml").getPath().substring(1));
    Mat mat = ImageFilters.bufImg2Mat(img); // convert the BufferedImage to an OpenCV Mat
    MatOfRect faceDetections = new MatOfRect();
    faceDetector.detectMultiScale(mat, faceDetections); // detect faces
    for (Rect rect : faceDetections.toArray()) {
        try {
            BufferedImage img2 = ImageIO.read(new File("path where the overlay image is stored"));
            g2.drawImage(img2, rect.x, rect.y, rect.width, rect.height, null); // cover each detected face with the overlay image
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return img;
}