Differences between Protocol and Profile in Bluetooth

The protocol stack (Protocol) and the Profile concept in Bluetooth are easy to confuse. The differences are summarized below:

Protocol:

The purpose of the Bluetooth specification is to allow applications that conform to it to interoperate. To interoperate, remote devices must run the same protocol stack, and different applications require different protocol stacks. However, all applications use the data link layer and physical layer defined in the Bluetooth specification.

The Bluetooth core protocols consist of four parts: the Baseband, the Link Manager Protocol, the Logical Link Control and Adaptation Protocol, and the Service Discovery Protocol.
(1) Baseband (the baseband protocol)

The baseband and link control layer provides the RF physical connection between Bluetooth units within a piconet. The Bluetooth radio is a frequency-hopping system in which each packet is transmitted in a defined time slot on a defined frequency. The baseband uses the inquiry and paging procedures to synchronize the hopping frequency and clock of different devices, and it provides two kinds of physical links for baseband packets: Synchronous Connection-Oriented (SCO) and Asynchronous Connectionless (ACL); multiple data streams can be carried over the same RF link. ACL suits data packets, while SCO suits voice and combined voice/data. All voice and data packets can carry different levels of forward error correction (FEC) or cyclic redundancy check (CRC) protection and can be encrypted. In addition, each data type, including link management and control messages, is assigned a dedicated channel. Voice can be transferred between Bluetooth devices in various usage modes; connection-oriented voice packets are carried over the baseband only and never reach L2CAP. Voice handling in a Bluetooth system is therefore relatively simple: opening a voice link is enough to transfer voice.
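As an application-level illustration of the baseband inquiry procedure, here is a minimal sketch using the PyBluez library on Linux/BlueZ; the 8-second duration is an arbitrary choice:

```python
import bluetooth  # PyBluez; assumes Linux with BlueZ

# Run an HCI inquiry: the controller hops over the 32 dedicated inquiry
# frequencies and collects responses from discoverable devices in range.
nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
for addr, name in nearby:
    print(addr, name)  # BD_ADDR and user-friendly device name
```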


(2) LMP (Link Manager Protocol)
The Link Manager Protocol (LMP) is responsible for establishing and configuring links between Bluetooth devices. LMP performs authentication and encryption by initiating, exchanging, and verifying the link setup, and negotiates the baseband packet size; it also controls the power-saving modes and duty cycle of the radio, and the connection state of the units within a piconet.
(3) L2CAP (Logical Link Control and Adaptation Protocol)
The Logical Link Control and Adaptation Protocol (L2CAP) sits above the baseband and can be regarded as working in parallel with LMP; the difference is that L2CAP serves the upper layers whenever the payload is not carried as LMP traffic. L2CAP provides connection-oriented and connectionless data services to the upper layers, and it employs protocol multiplexing, segmentation and reassembly, and group abstraction. L2CAP allows higher-layer protocols to send and receive data packets of up to 64 KB. Although the baseband provides both SCO and ACL link types, L2CAP is supported only over ACL.
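To make the L2CAP layer concrete, here is a minimal client sketch using the Bluetooth socket support in Python's standard library (Linux/BlueZ, Python 3.3+); the address and PSM below are placeholder values for illustration:

```python
import socket

# L2CAP client sketch; "00:11:22:33:44:55" and PSM 0x1001 are placeholders.
sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET,
                     socket.BTPROTO_L2CAP)
sock.connect(("00:11:22:33:44:55", 0x1001))  # (remote BD_ADDR, PSM)
sock.send(b"hello over L2CAP")  # L2CAP segments/reassembles for the baseband
reply = sock.recv(672)          # 672 bytes is the default L2CAP MTU
sock.close()
```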

(4) SDP (Service Discovery Protocol)

Service discovery plays a crucial role in the Bluetooth framework; it is the basis of all usage models. Using SDP, a device can query another device's information and service types, and then establish the appropriate connection between them.
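A sketch of an SDP query using PyBluez follows; the remote address is a placeholder, and find_service is the PyBluez call that interrogates the remote SDP server:

```python
import bluetooth  # PyBluez

# Ask the remote device's SDP server for all advertised service records.
# "00:11:22:33:44:55" is a placeholder BD_ADDR.
for svc in bluetooth.find_service(address="00:11:22:33:44:55"):
    # Each record tells us which protocol and port/channel serve it.
    print(svc["name"], svc["protocol"], svc["port"])
```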

Beyond these, there are also RFCOMM (the cable replacement protocol), TCS-Binary (the binary telephony control protocol), OBEX (the Object Exchange protocol), and others.
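RFCOMM emulates a serial cable over an L2CAP channel. A minimal server sketch using Python's standard library (Linux/BlueZ) is shown below; channel 3 is an arbitrary free RFCOMM channel chosen for illustration:

```python
import socket

# RFCOMM "virtual serial cable" server sketch.
srv = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                    socket.BTPROTO_RFCOMM)
srv.bind((socket.BDADDR_ANY, 3))  # accept from any adapter, channel 3
srv.listen(1)
conn, addr = srv.accept()         # a remote device "plugs in the cable"
data = conn.recv(1024)            # bytes arrive as if over a serial line
conn.close()
srv.close()
```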

Profile:

An important feature of Bluetooth is that a Bluetooth product does not need to implement the entire Bluetooth specification. To make it easier to keep Bluetooth devices compatible with one another, the specification defines Profiles. A Profile defines how devices implement a particular kind of connection or application; it can be thought of as a connection-layer or application-layer protocol. A Profile implementation necessarily relies on the Protocol (core protocol) stack. Bluetooth defines the Profiles listed below; a Bluetooth-enabled product need not implement all of them and may implement only those its features require.
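The Serial Port Profile (SPP) from the table below illustrates this layering: a profile prescribes how the core protocols are combined. In the PyBluez sketch that follows, the address is a placeholder and "1101" is the standard SPP service class UUID:

```python
import bluetooth  # PyBluez

# SPP = SDP + RFCOMM used in a prescribed way: look up the Serial Port
# service record via SDP, then connect RFCOMM on the advertised channel.
# "00:11:22:33:44:55" is a placeholder BD_ADDR.
matches = bluetooth.find_service(uuid="1101", address="00:11:22:33:44:55")
if matches:
    host, port = matches[0]["host"], matches[0]["port"]
    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    sock.connect((host, port))
    sock.send(b"hello")  # the profile defines usage; RFCOMM just moves bytes
    sock.close()
```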

Traditional Profiles (Qualifiable)

Abbr.   Profile Name                                 Adopted Versions
3DS     3D Synchronization Profile                   1.0
A2DP    Advanced Audio Distribution Profile          1.0 / 1.2 / 1.3
AVRCP   A/V Remote Control Profile                   1.0 / 1.3 / 1.4 / 1.5
BIP     Basic Imaging Profile                        1.0 / 1.1 / 1.2
BPP     Basic Printing Profile                       1.0 / 1.2
DI      Device ID Profile                            1.2 / 1.3
DUN     Dial-Up Networking Profile                   1.1 / 1.2
FTP     File Transfer Profile                        1.1 / 1.2 / 1.3
GAVDP   Generic A/V Distribution Profile             1.0 / 1.2 / 1.3
GOEP    Generic Object Exchange Profile              1.1 / 2.0 / 2.1
GNSS    Global Navigation Satellite System Profile   1.0
HCRP    Hardcopy Cable Replacement Profile           1.0 / 1.2
HDP     Health Device Profile                        1.0 / 1.1
HFP     Hands-Free Profile                           1.5 / 1.6
HSP     Headset Profile                              1.1 / 1.2
HID     Human Interface Device Profile               1.0 / 1.1
MAP     Message Access Profile                       1.0
OPP     Object Push Profile                          1.1 / 1.2
PAN     Personal Area Networking Profile             1.0
PBAP    Phone Book Access Profile                    1.0 / 1.1
SAP     SIM Access Profile                           1.0 / 1.1
SDAP    Service Discovery Application Profile        1.1
SPP     Serial Port Profile                          1.1 / 1.2
SYNCH   Synchronization Profile                      1.1 / 1.2
VDP     Video Distribution Profile
