VolcEngine RTC (3): Creating the RTC Engine

The functionality provided by this Qt demo is fairly limited, so it was of little reference value for my own integration.

With NetEase's SDK, which I used previously, I fed in video through custom capture. This demo has no demonstration of that; it only shows the simple built-in rendering approach.

That said, if you only need basic RTC usage, the demo is still worth a look.

General workflow

I. The audio/video engine

The UI is built from Qt resource files, so you have to open the corresponding .ui file in Qt Creator to locate the button handler functions.

1. Creating the engine

1) btn_createVideo

// Connect signals and slots
void QuickStartWidget::initConnections()
{
    connect(ui->btn_createVideo, &QPushButton::clicked, this, &QuickStartWidget::onBtnCreateVideoClicked);
    connect(ui->btn_startCapture, &QPushButton::clicked, this, &QuickStartWidget::onBtnStartCaptureClicked);
    connect(ui->btn_joinroom, &QPushButton::clicked, this, &QuickStartWidget::onBtnJoinRoomClicked);
    connect(ui->btn_destroyroom, &QPushButton::clicked, this, &QuickStartWidget::onBtnLeaveRoomClicked);
    connect(ui->btn_destroyvideo, &QPushButton::clicked, this, &QuickStartWidget::onBtnDestroyVideoClicked);
    connect(ui->btn_setLocalView, &QPushButton::clicked, this, &QuickStartWidget::onBtnSetLocalViewClicked);

}


// Create the engine
void QuickStartWidget::onBtnCreateVideoClicked() 
{
    if (m_video == nullptr) {
        m_handler.reset(new ByteRTCEventHandler());
        m_video = bytertc::createRTCVideo(g_appid.c_str(), m_handler.get(), nullptr);

        QString str_log = "createRTCVideo";
        appendAPI(str_log);
        if (m_video == nullptr) {
            QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("CreateRTCVideo error"), QMessageBox::Ok);
            box.exec();
            return;
        }
    }
    else {
        QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("RTCVideo has been created."), QMessageBox::Ok);
        box.exec();
        return;
    }
}

2) createRTCVideo

The interface used here is createRTCVideo:

/**
* @locale en
* @type api
* @region Engine Management
* @brief Creates an engine instance.   <br>
*        This is the very first API that you must call if you want to use all the RTC capabilities.  <br>
*        If there is no engine instance in current process, calling this API will create one. If an engine instance has been created, calling this API again will have the created engine instance returned.
* @param app_id A unique identifier for each App, randomly generated by the RTC console. Only instances created with the same app_id are able to communicate with each other.
* @param event_handler Handler sent from SDK to App. See IRTCVideoEventHandler{@link #IRTCVideoEventHandler}.
* @param parameters Reserved parameters. Please contact technical support fellow if needed.
* @return  
*        + IRTCVideo: A successfully created engine instance.  <br>
*        + Null:  app_id is null or empty. event_handler is null.
* @list Engine Management
* @order 1
*/
BYTERTC_API bytertc::IRTCVideo* createRTCVideo(const char* app_id,
     bytertc::IRTCVideoEventHandler *event_handler, const char* parameters);

createRTCVideo takes two main parameters: the App ID, obtained from your backend, and the audio/video engine event callback interface IRTCVideoEventHandler.

3) IRTCVideoEventHandler

The audio/video engine event callback interface:

/**
 * @locale en
 * @type callback
 * @brief Audio & video engine event callback interface<br>
 * Note: Callback functions are thrown synchronously in a non-UI thread within the SDK. Therefore, you must not perform any time-consuming operations or direct UI operations within the callback function, as this may cause the app to crash.
 */
class IRTCVideoEventHandler {
public:

    /**
     * @hidden constructor/destructor
     */
    virtual ~IRTCVideoEventHandler() {
    }

The demo derives from this interface:


class ByteRTCEventHandler : public QObject,
                            public bytertc::IRTCVideoEventHandler,
                            public bytertc::IAudioEffectPlayerEventHandler,
                            public bytertc::IMixedStreamObserver,
                            public bytertc::IMediaPlayerEventHandler
{
    Q_OBJECT
public:

    struct Stru_RemoteStreamKey {
        std::string room_id;
        std::string user_id;
        bytertc::StreamIndex stream_index;

    };

    explicit ByteRTCEventHandler(QObject *parent = nullptr);
    ~ByteRTCEventHandler();
    void setIsSupportClientPushStream(bool support);


private:
    //from IRTCVideoEventHandler
    void onWarning(int warn) override;
    void onError(int err) override;
    void onConnectionStateChanged(bytertc::ConnectionState state) override;
    void onNetworkTypeChanged(bytertc::NetworkType type) override;
    void onAudioDeviceStateChanged(const char* device_id, bytertc::RTCAudioDeviceType device_type,
                bytertc::MediaDeviceState device_state, bytertc::MediaDeviceError device_error) override;
    void onVideoDeviceStateChanged(const char* device_id, bytertc::RTCVideoDeviceType device_type,
                bytertc::MediaDeviceState device_state, bytertc::MediaDeviceError device_error) override;
    void onAudioDeviceWarning(const char* device_id, bytertc::RTCAudioDeviceType device_type,
                bytertc::MediaDeviceWarning device_warning) override;
    void onVideoDeviceWarning(const char* device_id, bytertc::RTCVideoDeviceType device_type,
                bytertc::MediaDeviceWarning device_warning) override;
    void onSysStats(const bytertc::SysStats& stats) override;
    void onLocalVideoSizeChanged(bytertc::StreamIndex index,
                                 const bytertc::VideoFrameInfo& info) override;
    void onFirstLocalVideoFrameCaptured(bytertc::StreamIndex index, bytertc::VideoFrameInfo info) override;
    void onFirstLocalAudioFrame(bytertc::StreamIndex index) override;

    void onFirstRemoteVideoFrameRendered(const bytertc::RemoteStreamKey key, const bytertc::VideoFrameInfo& info) override;
    void onFirstRemoteVideoFrameDecoded(const bytertc::RemoteStreamKey key, const bytertc::VideoFrameInfo& info) override;

    //from IAudioEffectPlayerEventHandler
    void onAudioEffectPlayerStateChanged(int effect_id, bytertc::PlayerState state, bytertc::PlayerError error) override;

    //from IMixedStreamObserver
    virtual bool isSupportClientPushStream() override;

    virtual void onMixingEvent(
        bytertc::StreamMixingEvent event, const char* task_id, bytertc::StreamMixingErrorCode error, bytertc::MixedStreamType mix_type) override;

    virtual void onMixingVideoFrame(const char* task_id, bytertc::IVideoFrame* video_frame) override;

    virtual void onMixingAudioFrame(const char* task_id, bytertc::IAudioFrame* audio_frame) override;

    virtual void onMixingDataFrame(const char* task_id, bytertc::IDataFrame* data_frame) override;

    virtual void onCacheSyncVideoFrames(const char* task_id, const char* uids[], bytertc::IVideoFrame* video_frames[],
        bytertc::IDataFrame* data_frame[], int count) override;

    // from IMediaPlayerEventHandler
    virtual void onMediaPlayerStateChanged(int playerId, bytertc::PlayerState state, bytertc::PlayerError error) override;

    virtual void onMediaPlayerPlayingProgress(int playerId, int64_t progress) override;

    void onSEIMessageReceived(bytertc::RemoteStreamKey stream_key, const uint8_t* message, int length) override;

    void onStreamSyncInfoReceived(bytertc::RemoteStreamKey stream_key, bytertc::SyncInfoStreamType stream_type,
                                          const uint8_t* data, int32_t length) override;


signals:
    void sigWarning(int);
    void sigError(int);
    void sigConnectionStateChanged(bytertc::ConnectionState state);
    void sigNetworkTypeChanged(bytertc::NetworkType type);
    void sigAudioDeviceStateChanged(std::string device_id, bytertc::RTCAudioDeviceType device_type,
                                  bytertc::MediaDeviceState device_state,
                                  bytertc::MediaDeviceError device_error);
    void sigVideoDeviceStateChanged(std::string device_id, bytertc::RTCVideoDeviceType device_type,
                                    bytertc::MediaDeviceState device_state, bytertc::MediaDeviceError device_error);

    void sigAudioDeviceWarning(std::string device_id, bytertc::RTCAudioDeviceType device_type,
                    bytertc::MediaDeviceWarning device_warning);
    void sigVideoDeviceWarning(std::string device_id, bytertc::RTCVideoDeviceType device_type,
                    bytertc::MediaDeviceWarning device_warning);
    void sigSysStats(const bytertc::SysStats stats);
    void sigLocalVideoSizeChanged(bytertc::StreamIndex index,
                               const bytertc::VideoFrameInfo info);
    void sigFirstLocalVideoFrameCaptured(bytertc::StreamIndex index, bytertc::VideoFrameInfo info);
    void sigFirstLocalAudioFrame(bytertc::StreamIndex index);

    //IAudioEffectPlayerEventHandler
    void sigAudioEffectPlayerStateChanged(int effect_id, bytertc::PlayerState state, bytertc::PlayerError error);

    //IMixedStreamObserver
    void sigMixingEvent(bytertc::StreamMixingEvent event, std::string task_id, bytertc::StreamMixingErrorCode error, bytertc::MixedStreamType mix_type);

    void sigFirstRemoteVideoFrameRendered(const ByteRTCEventHandler::Stru_RemoteStreamKey key, const bytertc::VideoFrameInfo& info);

    //IMediaPlayerEventHandler
    void sigMediaPlayerStateChanged(int playerId, bytertc::PlayerState state, bytertc::PlayerError error) ;

    void sigMediaPlayerPlayingProgress(int playerId, int64_t progress);

    void sigSEIMessageReceived(ByteRTCEventHandler::Stru_RemoteStreamKey stream_key, std::string str);

    void sigStreamSyncInfoReceived(ByteRTCEventHandler::Stru_RemoteStreamKey stream_key, bytertc::SyncInfoStreamType stream_type, std::string str);


private:
    bool m_supportClientPushStream = false;

};
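The SDK header above warns that callbacks arrive synchronously on an SDK-internal thread, which is exactly why ByteRTCEventHandler re-emits each callback as a Qt signal: a cross-thread signal/slot connection is queued by Qt and delivered on the receiver's (UI) thread. The same hand-off pattern, sketched here without Qt as a minimal standalone illustration (the class name and `int` payload are made up for the example):

```cpp
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal illustration of the hand-off: the "SDK" thread posts events,
// the "UI" thread drains them later on its own schedule. Qt's queued
// signal/slot connections implement this same pattern internally.
class EventQueue {
public:
    void post(int code) {                     // called from the SDK thread
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(code);
    }
    std::vector<int> drain() {                // called from the UI thread
        std::lock_guard<std::mutex> lock(mutex_);
        std::vector<int> out;
        while (!queue_.empty()) {
            out.push_back(queue_.front());
            queue_.pop();
        }
        return out;
    }
private:
    std::mutex mutex_;
    std::queue<int> queue_;
};
```

In the demo itself Qt gives you this for free: the `sig*` signals are emitted from the SDK thread while the receiving widget lives in the UI thread, so the default Qt::AutoConnection resolves to a queued connection and the connected slot runs safely on the UI thread.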

4) After the engine is created

Once createRTCVideo succeeds, it returns the IRTCVideo* video engine instance:

    if (m_video == nullptr) {
        m_handler.reset(new ByteRTCEventHandler());
        m_video = bytertc::createRTCVideo(g_appid.c_str(), m_handler.get(), nullptr);

        QString str_log = "createRTCVideo";
        appendAPI(str_log);
        if (m_video == nullptr) {
            QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("CreateRTCVideo error"), QMessageBox::Ok);
            box.exec();
            return;
        }
    }

2. The IRTCVideo engine

    bytertc::IRTCVideo* m_video = nullptr;

1) Provided functionality

The engine provides the following functionality:

class IRTCVideo {
public:
    virtual int muteAudioCapture(StreamIndex index, bool mute) = 0;
    virtual int setCaptureVolume(StreamIndex index, int volume) = 0;
    virtual int setPlaybackVolume(const int volume) = 0;
    virtual int setEarMonitorMode(EarMonitorMode mode) = 0;
    virtual int setEarMonitorVolume(const int volume) = 0;
    virtual int setBluetoothMode(BluetoothMode mode) = 0;
    virtual int startAudioCapture() = 0;
    virtual int stopAudioCapture() = 0;
    virtual int setAudioScenario(AudioScenarioType scenario) = 0;
    virtual int setAudioScene(AudioSceneType scene) = 0;
    virtual int setVoiceChangerType(VoiceChangerType voice_changer) = 0;
    virtual int setVoiceReverbType(VoiceReverbType voice_reverb) = 0;
    virtual int setLocalVoiceEqualization(VoiceEqualizationConfig config) = 0;
    virtual int setLocalVoiceReverbParam(VoiceReverbConfig param) = 0;
    virtual int enableLocalVoiceReverb(bool enable) = 0;
    virtual int setAudioProfile(AudioProfileType audio_profile) = 0;
    virtual int setAnsMode(AnsMode ans_mode) = 0;
    virtual int enableAGC(bool enable) = 0;
    virtual int setAudioSourceType(AudioSourceType type) = 0;
    virtual int setAudioRenderType(AudioRenderType type) = 0;
    virtual int pushExternalAudioFrame(IAudioFrame* audio_frame) = 0;
    virtual int pullExternalAudioFrame(IAudioFrame* audio_frame) = 0;
    virtual int startVideoCapture() = 0;
    virtual int stopVideoCapture() = 0;
    virtual int setVideoCaptureConfig(const VideoCaptureConfig& video_capture_config) = 0;
    virtual int setVideoCaptureRotation(VideoRotation rotation) = 0;
    virtual int enableSimulcastMode(bool enabled) = 0;
    virtual int setVideoEncoderConfig(const VideoEncoderConfig& max_solution) = 0;
    virtual int setVideoEncoderConfig(const VideoEncoderConfig& encoder_config, const char* parameters) = 0;
    virtual int setVideoEncoderConfig(const VideoEncoderConfig* channel_solutions, int solution_num) = 0;
    virtual int setScreenVideoEncoderConfig(const ScreenVideoEncoderConfig& screen_solution) = 0;
    BYTERTC_DEPRECATED virtual int setVideoEncoderConfig(StreamIndex index, const VideoSolution* solutions, int solution_num) = 0;
    virtual int enableAlphaChannelVideoEncode(StreamIndex streamIndex, AlphaLayout alphaLayout) = 0;
    virtual int disableAlphaChannelVideoEncode(StreamIndex streamIndex) = 0;
    virtual int setLocalVideoCanvas(StreamIndex index, const VideoCanvas& canvas) = 0;
    virtual int updateLocalVideoCanvas(StreamIndex index, const enum RenderMode render_mode, const uint32_t background_color) = 0;
    virtual int setRemoteVideoCanvas(RemoteStreamKey stream_key, const VideoCanvas& canvas) = 0;
    virtual int updateRemoteStreamVideoCanvas(RemoteStreamKey stream_key, const enum RenderMode render_mode, const uint32_t background_color) = 0;
    virtual int updateRemoteStreamVideoCanvas(RemoteStreamKey stream_key, const RemoteVideoRenderConfig& remote_video_render_config) = 0;
    virtual int setLocalVideoSink(StreamIndex index, IVideoSink* video_sink, IVideoSink::PixelFormat required_format) = 0;
    virtual int setLocalVideoRender(StreamIndex index, IVideoSink* video_sink, LocalVideoSinkConfig& render_config) = 0;
    virtual int setRemoteVideoSink(RemoteStreamKey stream_key, IVideoSink* video_sink, IVideoSink::PixelFormat required_format) = 0;
    virtual int setRemoteVideoRender(RemoteStreamKey stream_key, IVideoSink* video_sink, RemoteVideoSinkConfig& config) = 0;
    virtual int switchCamera(CameraID camera_id) = 0;
    virtual int pushScreenVideoFrame(IVideoFrame* frame) = 0;
    virtual int setOriginalScreenVideoInfo(int original_capture_width, int original_capture_height) = 0;
    virtual int updateScreenCaptureRegion(const Rectangle& region_rect) = 0;
    virtual int startScreenVideoCapture(const ScreenCaptureSourceInfo& source_info, const ScreenCaptureParameters& capture_params) = 0;
    virtual int stopScreenVideoCapture() = 0;
    virtual int updateScreenCaptureHighlightConfig(const HighlightConfig& highlight_config) = 0;
    virtual int updateScreenCaptureMouseCursor(MouseCursorCaptureState capture_mouse_cursor) = 0;
    virtual int updateScreenCaptureFilterConfig(const ScreenFilterConfig& filter_config) = 0;
    virtual IScreenCaptureSourceList* getScreenCaptureSourceList() = 0;
    virtual IVideoFrame* getThumbnail(ScreenCaptureSourceType type, view_t source_id, int max_width, int max_height) = 0;
    virtual IVideoFrame* getWindowAppIcon(view_t source_id, int max_width = 100, int max_height = 100) = 0;
    virtual int setVideoSourceType(StreamIndex stream_index, VideoSourceType type) = 0;
    virtual int pushExternalVideoFrame(IVideoFrame* frame) = 0;
    BYTERTC_DEPRECATED virtual int setAudioPlaybackDevice(AudioPlaybackDevice device) = 0;
    virtual int setAudioRoute(AudioRoute route) = 0;
    virtual int setDefaultAudioRoute(AudioRoute route) = 0;
    virtual AudioRoute getAudioRoute() = 0;
    virtual IRTCRoom* createRTCRoom(const char* room_id) = 0;
    virtual int setPublishFallbackOption(PublishFallbackOption option) = 0;
    virtual int setSubscribeFallbackOption(SubscribeFallbackOption option) = 0;
    virtual int setRemoteUserPriority(const char* room_id, const char* user_id, RemoteUserPriority priority) = 0;
    virtual int setBusinessId(const char* business_id) = 0;
    virtual int setVideoRotationMode(VideoRotationMode rotation_mode) = 0;
    virtual int setLocalVideoMirrorType(MirrorType mirror_type) = 0;
    virtual int setRemoteVideoMirrorType(RemoteStreamKey stream_key, RemoteMirrorType mirror_type) = 0;
    virtual IVideoEffect* getVideoEffectInterface() = 0;
    virtual int enableEffectBeauty(bool enable) = 0;
    virtual int setBeautyIntensity(EffectBeautyMode beauty_mode, float intensity) = 0;
    virtual int setRemoteVideoSuperResolution(RemoteStreamKey stream_key, VideoSuperResolutionMode mode) = 0;
    virtual int setVideoDenoiser(VideoDenoiseMode mode) = 0;
    virtual int setVideoOrientation(VideoOrientation orientation) = 0;
    virtual ICameraControl* getCameraControl() = 0;
    virtual int setEncryptInfo(EncryptType encrypt_type, const char* key, int key_size) = 0;
    virtual int setCustomizeEncryptHandler(IEncryptHandler* handler) = 0;
    virtual int enableAudioFrameCallback(AudioFrameCallbackMethod method, AudioFormat format) = 0;
    virtual int disableAudioFrameCallback(AudioFrameCallbackMethod method) = 0;
    virtual int registerAudioFrameObserver(IAudioFrameObserver* observer) = 0;
    virtual int registerAudioProcessor(IAudioFrameProcessor* processor) = 0;
    virtual int enableAudioProcessor(AudioProcessorMethod method, AudioFormat format) = 0;
    virtual int disableAudioProcessor(AudioProcessorMethod method) = 0;
    BYTERTC_DEPRECATED virtual int registerVideoFrameObserver(IVideoFrameObserver* observer) = 0;
    BYTERTC_DEPRECATED virtual int sendSEIMessage(StreamIndex stream_index, const uint8_t* message, int length, int repeat_count) = 0;
    virtual int sendSEIMessage(StreamIndex stream_index, const uint8_t* message, int length, int repeat_count, SEICountPerFrame mode) = 0;
    virtual int sendPublicStreamSEIMessage(StreamIndex stream_index, int channel_id, const uint8_t* message, int length, int repeat_count, SEICountPerFrame mode) = 0;
    virtual IVideoDeviceManager* getVideoDeviceManager() = 0;
    virtual IAudioDeviceManager* getAudioDeviceManager() = 0;
    virtual int startFileRecording(StreamIndex type, RecordingConfig config, RecordingType recording_type) = 0;
    virtual int stopFileRecording(StreamIndex type) = 0;
    virtual int startAudioRecording(AudioRecordingConfig& config) = 0;
    virtual int stopAudioRecording() = 0;
    virtual int enableExternalSoundCard(bool enable) = 0;
    virtual int setRuntimeParameters(const char* json_string) = 0;
    virtual int startASR(const RTCASRConfig& asr_config, IRTCASREngineEventHandler* handler) = 0;
    virtual int stopASR() = 0;
    virtual int feedback(uint64_t type, const ProblemFeedbackInfo* info) = 0;
    virtual IAudioMixingManager* getAudioMixingManager() = 0;
    virtual IAudioEffectPlayer* getAudioEffectPlayer() = 0;
    virtual IMediaPlayer* getMediaPlayer(int player_id) = 0;
    virtual int login(const char* token, const char* uid) = 0;
    virtual int logout() = 0;
    virtual int updateLoginToken(const char* token) = 0;
    virtual int setServerParams(const char* signature, const char* url) = 0;
    virtual int getPeerOnlineStatus(const char* peer_user_id) = 0;
    virtual int64_t sendUserMessageOutsideRoom(const char* uid, const char* message, MessageConfig config = kMessageConfigReliableOrdered) = 0;
    virtual int64_t sendUserBinaryMessageOutsideRoom(const char* uid, int length, const uint8_t* message, MessageConfig config = kMessageConfigReliableOrdered) = 0;
    virtual int64_t sendServerMessage(const char* message) = 0;
    virtual int64_t sendServerBinaryMessage(int length, const uint8_t* message) = 0;
    virtual int startNetworkDetection(bool is_test_uplink, int expected_uplink_bitrate, bool is_test_downlink, int expected_downlink_biterate) = 0;
    virtual int stopNetworkDetection() = 0;
    virtual int setScreenAudioSourceType(AudioSourceType source_type) = 0;
    virtual int setScreenAudioStreamIndex(StreamIndex index) = 0;
    virtual int setScreenAudioChannel(AudioChannel channel) = 0;
    virtual int startScreenAudioCapture() = 0;
    virtual int startScreenAudioCapture(const char device_id[MAX_DEVICE_ID_LENGTH]) = 0;
    virtual int stopScreenAudioCapture() = 0;
    virtual int pushScreenAudioFrame(IAudioFrame* frame) = 0;
    virtual int setAudioAlignmentProperty(const RemoteStreamKey& stream_key, AudioAlignmentMode mode) = 0;
    virtual int setExtensionConfig(const char* group_id) = 0;
    virtual int sendScreenCaptureExtensionMessage(const char* message, size_t size) = 0;
    virtual int startScreenCapture(ScreenMediaType type, const char* bundle_id) = 0;
    virtual int startScreenCapture(ScreenMediaType type, void* context) = 0;
    virtual int stopScreenCapture() = 0;
    BYTERTC_DEPRECATED virtual int startLiveTranscoding(const char* task_id, ITranscoderParam* param, ITranscoderObserver* observer) = 0;
    BYTERTC_DEPRECATED virtual int stopLiveTranscoding(const char* task_id) = 0;
    BYTERTC_DEPRECATED virtual int updateLiveTranscoding(const char* task_id, ITranscoderParam* param) = 0;
    virtual int startPushMixedStreamToCDN(const char* task_id, IMixedStreamConfig* config, IMixedStreamObserver* observer) = 0;
    virtual int updatePushMixedStreamToCDN(const char* task_id, IMixedStreamConfig* config) = 0;
    virtual int startPushSingleStreamToCDN(const char* task_id, PushSingleStreamParam& param, IPushSingleStreamToCDNObserver* observer) = 0;
    virtual int stopPushStreamToCDN(const char* task_id) = 0;
    virtual int startChorusCacheSync(ChorusCacheSyncConfig* config, IChorusCacheSyncObserver* observer) = 0;
    virtual int stopChorusCacheSync() = 0;
    virtual int startPushPublicStream(const char* public_stream_id, IPublicStreamParam* param) = 0;
    virtual int stopPushPublicStream(const char* public_stream_id) = 0;
    virtual int updatePublicStreamParam(const char* public_stream_id, IPublicStreamParam* param) = 0;
    virtual int updateScreenCapture(ScreenMediaType type) = 0;
    virtual int enableAudioPropertiesReport(const AudioPropertiesConfig& config) = 0;
    virtual int setRemoteAudioPlaybackVolume(const char* room_id, const char* user_id, int volume) = 0;
    virtual int enableVocalInstrumentBalance(bool enable) = 0;
    virtual int enablePlaybackDucking(bool enable) = 0;
    virtual int registerLocalEncodedVideoFrameObserver(ILocalEncodedVideoFrameObserver* observer) = 0;
    virtual int registerRemoteEncodedVideoFrameObserver(IRemoteEncodedVideoFrameObserver* observer) = 0;
    virtual int setExternalVideoEncoderEventHandler(IExternalVideoEncoderEventHandler* encoder_handler) = 0;
    virtual int pushExternalEncodedVideoFrame(StreamIndex index, int video_index, IEncodedVideoFrame* video_stream) = 0;
    virtual int setVideoDecoderConfig(RemoteStreamKey key, VideoDecoderConfig config) = 0;
    virtual int requestRemoteVideoKeyFrame(const RemoteStreamKey& stream_info) = 0;
    virtual int sendStreamSyncInfo(const uint8_t* data, int32_t length, const StreamSycnInfoConfig& config) = 0;
    virtual int setLocalVoicePitch(int pitch) = 0;
    BYTERTC_DEPRECATED virtual int muteAudioPlayback(MuteState mute_state) = 0;
    virtual int startPlayPublicStream(const char* public_stream_id) = 0;
    virtual int stopPlayPublicStream(const char* public_stream_id) = 0;
    virtual int setPublicStreamVideoCanvas(const char* public_stream_id, const VideoCanvas& canvas) = 0;
    virtual int setPublicStreamVideoSink(const char* public_stream_id, IVideoSink* video_sink, IVideoSink::PixelFormat format) = 0;
    virtual int setPublicStreamAudioPlaybackVolume(const char* public_stream_id, int volume) = 0;
    virtual int setVideoWatermark(StreamIndex stream_index, const char* image_path, RTCWatermarkConfig config) = 0;
    virtual int clearVideoWatermark(StreamIndex stream_index) = 0;
    virtual long takeLocalSnapshot(const StreamIndex stream_index, ISnapshotResultCallback* callback) = 0;
    virtual long takeRemoteSnapshot(const RemoteStreamKey stream_key, ISnapshotResultCallback* callback) = 0;
    virtual int setDummyCaptureImagePath(const char* file_path) = 0;
    virtual int startCloudProxy(const CloudProxyConfiguration& configuration) = 0;
    virtual int stopCloudProxy() = 0;
    virtual int startEchoTest(EchoTestConfig echo_test_config, unsigned int play_delay_time) = 0;
    virtual int stopEchoTest() = 0;
    virtual ISingScoringManager* getSingScoringManager() = 0;
    virtual NetworkTimeInfo getNetworkTimeInfo() = 0;
    virtual int invokeExperimentalAPI(const char* param) = 0;
    virtual IKTVManager* getKTVManager() = 0;
    virtual int startHardwareEchoDetection(const char* test_audio_file_path) = 0;
    virtual int stopHardwareEchoDetection() = 0;
    virtual int setCellularEnhancement(const MediaTypeEnhancementConfig& config) = 0;
    virtual int setLocalProxy(const LocalProxyConfiguration* configurations, int configuration_num) = 0;
    virtual int setLowLightAdjusted(VideoEnhancementMode mode) = 0;
};
BYTERTC_API bytertc::IRTCVideo* createRTCVideo(const char* app_id, bytertc::IRTCVideoEventHandler *event_handler, const char* parameters);
BYTERTC_API bytertc::IRTCVideo* createRTCVideoMulti(const char* app_id, bytertc::IRTCVideoEventHandler* event_handler, const char* parameters);
BYTERTC_API void destroyRTCVideo();
BYTERTC_API void destroyRTCVideoMulti(IRTCVideo* instance_multi);
BYTERTC_API const char* getErrorDescription(int code);
BYTERTC_API const char* getSDKVersion();
BYTERTC_API int setLogConfig(const LogConfig& log_config);

2) Examples

Examples from the demo:

2.1) Creating the engine

// Create the engine
void QuickStartWidget::onBtnCreateVideoClicked()
{
    if (m_video == nullptr) {
        m_handler.reset(new ByteRTCEventHandler());
        m_video = bytertc::createRTCVideo(g_appid.c_str(), m_handler.get(), nullptr);

        QString str_log = "createRTCVideo";
        appendAPI(str_log);
        if (m_video == nullptr) {
            QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("CreateRTCVideo error"), QMessageBox::Ok);
            box.exec();
            return;
        }
    }
    else {
        QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("RTCVideo has been created."), QMessageBox::Ok);
        box.exec();
        return;
    }
}


 

2.2) Start audio/video capture

// Start audio and video capture
void QuickStartWidget::onBtnStartCaptureClicked()
{
    if (m_video) {
        m_video->startAudioCapture();
        m_video->startVideoCapture();
        QStringList str_log = { "startAudioCapture", "startVideoCapture" };
        appendAPI(str_log);
    }
    else {
        QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("RTCVideo hasn't been created."), QMessageBox::Ok);
        box.exec();
        return;
    }
}

2.3) Set up local rendering

// Set the local video render window
void QuickStartWidget::onBtnSetLocalViewClicked()
{
    int ret = 0;
    if (m_video == nullptr) {
        QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("RTCVideo hasn't been created."), QMessageBox::Ok);
        box.exec();
        return;
    }
    else {
        bytertc::VideoCanvas cas;
        cas.background_color = 0x00000000;
        cas.render_mode = bytertc::kRenderModeHidden;
        cas.view = (void*)ui->widget_local->getWinId();
        ret = m_video->setLocalVideoCanvas(bytertc::kStreamIndexMain, cas);
        appendAPI("setLocalVideoCanvas");
        if (ret < 0) {
            QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("setLocalVideoCanvas error, ret=") + QString::number(ret), QMessageBox::Ok);
            box.exec();
        }
    }
}

2.4) Create and join a room

// Join the room
void QuickStartWidget::onBtnJoinRoomClicked()
{
    if (m_video == nullptr) {
        QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("RTCVideo hasn't been created."), QMessageBox::Ok);
        box.exec();
        return;
    }

    if (m_room == nullptr) {
        std::string roomid, uid, token;
        int ret = 0;
        QString str_error;
        roomid = ui->lineEdit_room->text().toStdString();
        uid = ui->lineEdit_uid->text().toStdString();
        if (!Utils::checkIDValid(QString::fromStdString(roomid), QStringLiteral("房间id"), str_error)) {
            QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("roomid error ") + str_error, QMessageBox::Ok);
            box.exec();
            return;
        }
        if (!Utils::checkIDValid(QString::fromStdString(uid), QStringLiteral("uid"), str_error)) {
            QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("uid error ") + str_error, QMessageBox::Ok);
            box.exec();
            return;
        }
        m_roomHandler = createRoomHandler(roomid, uid);
        token = Utils::generateToken(roomid, uid);
        m_room = m_video->createRTCRoom(roomid.c_str());
        appendAPI("createRTCRoom");
        if (m_room == nullptr) {
            QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("createRTCRoom error"), QMessageBox::Ok);
            box.exec();
            return;
        }

        m_room->setRTCRoomEventHandler(m_roomHandler.get());
        appendAPI("setRTCRoomEventHandler");

        bytertc::UserInfo uinfo;
        uinfo.uid = uid.c_str();
        uinfo.extra_info = nullptr;
        bytertc::RTCRoomConfig config;
        config.is_auto_publish = true;
        config.is_auto_subscribe_audio = true;
        config.is_auto_subscribe_video = true;
        config.room_profile_type = bytertc::kRoomProfileTypeCommunication;
        ret = m_room->joinRoom(token.c_str(), uinfo, config);
        appendAPI("joinRoom");
        ui->widget_local->setUserInfo(ui->lineEdit_room->text().toStdString(), ui->lineEdit_uid->text().toStdString());


        if (ret < 0) {
            QMessageBox box(QMessageBox::Warning, QStringLiteral("提示"), QString("joinRoom error, ret=") + QString::number(ret), QMessageBox::Ok);
            box.exec();
            return;
        }
    }
}
2.4.1) Creating the room instance

Note the following:

    m_room = m_video->createRTCRoom(roomid.c_str());

createRTCRoom returns a room instance. Even if the room you want to join already exists, you still need to call this method first to obtain the IRTCRoom instance, and then call joinRoom to actually enter the room.

2.4.2) Configure auto-publish and auto-subscribe when joining
        bytertc::UserInfo uinfo;
        uinfo.uid = uid.c_str();
        uinfo.extra_info = nullptr;
        bytertc::RTCRoomConfig config;
        config.is_auto_publish = true;
        config.is_auto_subscribe_audio = true;
        config.is_auto_subscribe_video = true;
        config.room_profile_type = bytertc::kRoomProfileTypeCommunication;
        ret = m_room->joinRoom(token.c_str(), uinfo, config);

2.5) Leave the room and destroy the room instance

// Leave the room
void QuickStartWidget::onBtnLeaveRoomClicked()
{
    if (m_video == nullptr) {
        return;
    }
    if (m_room) {
        m_room->leaveRoom();
        m_room->destroy();
        m_room = nullptr;
    }
    m_remote_rendered = false;

    appendAPI("leaveRoom");
    appendAPI("destroy");
}

Note that destroying the room here only destroys the local room instance (the local user leaves the room); it does not destroy the RTC room on the server, and the other users in the room are not forced out.

The server-side RTC room cannot be destroyed through the client SDK. It only ends when every user in the room leaves (each calling the leave/destroy APIs), or you can dissolve the room directly through the server-side OpenAPI:

https://www.volcengine.com/docs/6348/357815

2.6) Destroy the engine

// Destroy the engine
void QuickStartWidget::onBtnDestroyVideoClicked()
{
    QStringList list = {"stopAudioCapture", "stopVideoCapture", "destroyRTCVideo"};
    if (m_video) {
        if (m_room) {
            m_room->leaveRoom();
            m_room->destroy();
            m_room = nullptr;
            list = QStringList{ "leaveRoom", "destroy" } + list;
        }
        
        m_video->stopAudioCapture();
        m_video->stopVideoCapture();
        bytertc::destroyRTCVideo();
        m_video = nullptr;
        appendAPI(list);
    }
    m_remote_rendered = false;
    ui->widget_local->setUserInfo("", "");
    ui->widget_remote->setUserInfo("", "");
}
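The teardown order in the handler above (leave and destroy the room first, then stop capture and destroy the engine) is easy to break once more early returns creep in. A generic scope guard that runs registered cleanup actions in reverse order is one way to make that ordering automatic. This is a sketch of the idea only, not part of the demo or the SDK:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Runs registered cleanup actions in reverse order of registration when the
// guard goes out of scope (last resource acquired, first released).
class TeardownGuard {
public:
    TeardownGuard() = default;
    TeardownGuard(const TeardownGuard&) = delete;
    TeardownGuard& operator=(const TeardownGuard&) = delete;

    void add(std::function<void()> action) {
        actions_.push_back(std::move(action));
    }

    ~TeardownGuard() {
        for (auto it = actions_.rbegin(); it != actions_.rend(); ++it) {
            (*it)();
        }
    }

private:
    std::vector<std::function<void()>> actions_;
};
```

Applied to the demo, a guard owned by the widget could register `[&]{ bytertc::destroyRTCVideo(); m_video = nullptr; }` right after createRTCVideo succeeds, and `[&]{ m_room->leaveRoom(); m_room->destroy(); m_room = nullptr; }` right after joinRoom, so the room is always torn down before the engine.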
