Telegram Network Layer Source Code Analysis

I recently read through the source code of Telegram's network layer. I had hoped to find existing write-ups online to lower the learning cost, but I couldn't find any relevant material, so I worked through how Telegram sends network requests and handles responses myself. This post is a summary of that.

Establishing the Connection
Let's start with a diagram:
[Figure: how the Telegram client establishes its connection to the server]

ConnectionsManager
This diagram shows how the Telegram client establishes its connection to the server. The Java-layer ConnectionsManager is a thread-safe singleton, but it is really just a wrapper: all of the actual logic is delegated to the C++ ConnectionsManager class. The C++ ConnectionsManager object is initialized in TgNetWrapper.cpp and is likewise a singleton. Here is the init method of the C++ ConnectionsManager class:


```cpp
void ConnectionsManager::init(uint32_t version, int32_t layer, int32_t apiId, std::string deviceModel, 
                              std::string systemVersion, std::string appVersion, std::string langCode, 
                              std::string systemLangCode, std::string configPath, std::string logPath, 
                              int32_t userId, bool isPaused, bool enablePushConnection, bool hasNetwork, 
                              int32_t networkType) {
    currentVersion = version;
    currentLayer = layer;
    currentApiId = apiId;
    currentConfigPath = configPath;
    currentDeviceModel = deviceModel;
    currentSystemVersion = systemVersion;
    currentAppVersion = appVersion;
    currentLangCode = langCode;
    currentSystemLangCode = systemLangCode;
    currentUserId = userId;
    currentLogPath = logPath;
    pushConnectionEnabled = enablePushConnection;
    currentNetworkType = networkType;
    networkAvailable = hasNetwork;
    if (isPaused) {
        lastPauseTime = getCurrentTimeMonotonicMillis();
    }

    if (!currentConfigPath.empty() && currentConfigPath.find_last_of('/') != currentConfigPath.size() - 1) {
        currentConfigPath += "/";
    }
    
    if (!logPath.empty()) {
        FileLog::init(logPath);
    }

    loadConfig();

    pthread_create(&networkThread, NULL, (ConnectionsManager::ThreadProc), this);
}
```

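Both layers being singletons is worth a quick aside. On the native side, a thread-safe, lazily constructed singleton accessor can be as simple as the following sketch (an illustration of the pattern, with a made-up class name, not the verbatim tgnet code):

```cpp
// Minimal sketch of a thread-safe singleton accessor (illustrative only).
class NetworkController {
public:
    // Since C++11, initialization of a function-local static is guaranteed
    // to be thread-safe, so concurrent callers all receive the same object.
    static NetworkController &getInstance() {
        static NetworkController instance;
        return instance;
    }

    NetworkController(const NetworkController &) = delete;
    NetworkController &operator=(const NetworkController &) = delete;

private:
    NetworkController() = default;  // construct only through getInstance()
};
```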
There are two key calls in init(): loadConfig() and pthread_create(). Following the timeline, loadConfig() calls initDatacenters(), which is implemented as follows:

```cpp
void ConnectionsManager::initDatacenters() {
    Datacenter *datacenter;
    if (!testBackend) {
        if (datacenters.find(1) == datacenters.end()) {
            datacenter = new Datacenter(1);
            datacenter->addAddressAndPort("149.154.175.50", 443, 0);
            datacenter->addAddressAndPort("2001:b28:f23d:f001:0000:0000:0000:000a", 443, 1);
            datacenters[1] = datacenter;
        }

        if (datacenters.find(2) == datacenters.end()) {
            datacenter = new Datacenter(2);
            datacenter->addAddressAndPort("149.154.167.51", 443, 0);
            datacenter->addAddressAndPort("2001:67c:4e8:f002:0000:0000:0000:000a", 443, 1);
            datacenters[2] = datacenter;
        }

        if (datacenters.find(3) == datacenters.end()) {
            datacenter = new Datacenter(3);
            datacenter->addAddressAndPort("149.154.175.100", 443, 0);
            datacenter->addAddressAndPort("2001:b28:f23d:f003:0000:0000:0000:000a", 443, 1);
            datacenters[3] = datacenter;
        }

        if (datacenters.find(4) == datacenters.end()) {
            datacenter = new Datacenter(4);
            datacenter->addAddressAndPort("149.154.167.91", 443, 0);
            datacenter->addAddressAndPort("2001:67c:4e8:f004:0000:0000:0000:000a", 443, 1);
            datacenters[4] = datacenter;
        }

        if (datacenters.find(5) == datacenters.end()) {
            datacenter = new Datacenter(5);
            datacenter->addAddressAndPort("149.154.171.5", 443, 0);
            datacenter->addAddressAndPort("2001:b28:f23f:f005:0000:0000:0000:000a", 443, 1);
            datacenters[5] = datacenter;
        }
    } else {
        if (datacenters.find(1) == datacenters.end()) {
            datacenter = new Datacenter(1);
            datacenter->addAddressAndPort("149.154.175.40", 443, 0);
            datacenter->addAddressAndPort("2001:b28:f23d:f001:0000:0000:0000:000e", 443, 1);
            datacenters[1] = datacenter;
        }

        if (datacenters.find(2) == datacenters.end()) {
            datacenter = new Datacenter(2);
            datacenter->addAddressAndPort("149.154.167.40", 443, 0);
            datacenter->addAddressAndPort("2001:67c:4e8:f002:0000:0000:0000:000e", 443, 1);
            datacenters[2] = datacenter;
        }

        if (datacenters.find(3) == datacenters.end()) {
            datacenter = new Datacenter(3);
            datacenter->addAddressAndPort("149.154.175.117", 443, 0);
            datacenter->addAddressAndPort("2001:b28:f23d:f003:0000:0000:0000:000e", 443, 1);
            datacenters[3] = datacenter;
        }
    }
}
```
As you can see, this is mostly hardcoded IP addresses and ports. In practice, once the client reaches the first data center it is redirected to the optimal data center, so all of these entries can be seen as a fallback.
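The other key call in init() is pthread_create(), which starts the dedicated network thread. Condensed from the behaviour described in the rest of this post, the thread body amounts to roughly the following (a simplified sketch, not the verbatim source):

```cpp
// Simplified sketch of the network thread started by pthread_create().
void *ConnectionsManager::ThreadProc(void *data) {
    ConnectionsManager *manager = (ConnectionsManager *) data;

    // If a user is logged in and the push connection is enabled,
    // kick things off with a ping on the push connection.
    Datacenter *datacenter = manager->getDatacenterWithId(manager->currentDatacenterId);
    if (manager->currentUserId != 0 && manager->pushConnectionEnabled && datacenter != nullptr) {
        manager->sendPing(datacenter, true);
    }

    // Then loop forever: every iteration multiplexes sockets, timers
    // and scheduled tasks inside select().
    while (true) {
        manager->select();
    }
    return nullptr;
}
```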
Inside the thread created by pthread_create(), ThreadProc() first calls sendPing(), which is implemented as follows:
```cpp
void ConnectionsManager::sendPing(Datacenter *datacenter, bool usePushConnection) {
    if (usePushConnection && (currentUserId == 0 || !usePushConnection)) {
        return;
    }
    Connection *connection = nullptr;
    if (usePushConnection) {
        connection = datacenter->getPushConnection(true);
    } else {
        connection = datacenter->getGenericConnection(true);
    }
    if (connection == nullptr || (!usePushConnection && connection->getConnectionToken() == 0)) {
        return;
    }
    TL_ping_delay_disconnect *request = new TL_ping_delay_disconnect();
    request->ping_id = ++lastPingId;
    if (usePushConnection) {
        request->disconnect_delay = 60 * 7;
    } else {
        request->disconnect_delay = 35;
        pingTime = (int32_t) (getCurrentTimeMonotonicMillis() / 1000);
    }

    NetworkMessage *networkMessage = new NetworkMessage();
    networkMessage->message = std::unique_ptr<TL_message>(new TL_message());
    networkMessage->message->msg_id = generateMessageId();
    networkMessage->message->bytes = request->getObjectSize();
    networkMessage->message->body = std::unique_ptr<TLObject>(request);
    networkMessage->message->seqno = connection->generateMessageSeqNo(false);

    std::vector<std::unique_ptr<NetworkMessage>> array;
    array.push_back(std::unique_ptr<NetworkMessage>(networkMessage));
    NativeByteBuffer *transportData = datacenter->createRequestsData(array, nullptr, connection);
    if (usePushConnection) {
        DEBUG_D("dc%d send ping to push connection", datacenter->getDatacenterId());
        sendingPushPing = true;
    }
    connection->sendData(transportData, false);
}
```
Here the Datacenter's getPushConnection() and getGenericConnection() methods are used. The point of separating connection types is that downloads, push, and so on each get their own connection, so long-running tasks don't hog a single connection; within one data center, different connection types can also be given different IPs and ports to improve network efficiency.
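One detail worth calling out: the connection types are bit flags, which is why select() below tests them with a bitwise AND. The sketch here uses illustrative values; the real constants live in tgnet's Defines.h:

```cpp
#include <cstdint>

// Illustrative bit-flag connection types (values for demonstration only;
// see tgnet's Defines.h for the actual definitions).
const uint32_t ConnectionTypeGeneric  = 1;
const uint32_t ConnectionTypeDownload = 2;
const uint32_t ConnectionTypeUpload   = 4;
const uint32_t ConnectionTypePush     = 8;

// A request's connectionType mask can then be tested the same way
// select() does when deciding whether the network may go to sleep.
bool isFileTransfer(uint32_t connectionType) {
    return (connectionType & ConnectionTypeDownload) != 0
        || (connectionType & ConnectionTypeUpload) != 0;
}
```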
After that comes a loop that keeps calling the select() method to process requests; select() is implemented as follows:
```cpp
void ConnectionsManager::select() {
    checkPendingTasks();
    int eventsCount = epoll_wait(epolFd, epollEvents, 128, callEvents(getCurrentTimeMonotonicMillis()));
    checkPendingTasks();
    int64_t now = getCurrentTimeMonotonicMillis();
    callEvents(now);
    for (int32_t a = 0; a < eventsCount; a++) {
        EventObject *eventObject = (EventObject *) epollEvents[a].data.ptr;
        eventObject->onEvent(epollEvents[a].events);
    }
    size_t count = activeConnections.size();
    for (uint32_t a = 0; a < count; a++) {
        activeConnections[a]->checkTimeout(now);
    }

    Datacenter *datacenter = getDatacenterWithId(currentDatacenterId);
    if (pushConnectionEnabled) {
        if ((sendingPushPing && llabs(now - lastPushPingTime) >= 30000) 
            || llabs(now - lastPushPingTime) >= 60000 * 3 + 10000) {
            lastPushPingTime = 0;
            sendingPushPing = false;
            if (datacenter != nullptr) {
                Connection *connection = datacenter->getPushConnection(false);
                if (connection != nullptr) {
                    connection->suspendConnection();
                }
            }
            DEBUG_D("push ping timeout");
        }
        if (llabs(now - lastPushPingTime) >= 60000 * 3) {
            DEBUG_D("time for push ping");
            lastPushPingTime = now;
            if (datacenter != nullptr) {
                sendPing(datacenter, true);
            }
        }
    }

    if (lastPauseTime != 0 && llabs(now - lastPauseTime) >= nextSleepTimeout) {
        bool dontSleep = !requestingSaltsForDc.empty();
        if (!dontSleep) {
            for (requestsIter iter = runningRequests.begin(); iter != runningRequests.end(); iter++) {
                Request *request = iter->get();
                if (request->connectionType & ConnectionTypeDownload 
                    || request->connectionType & ConnectionTypeUpload) {
                    dontSleep = true;
                    break;
                }
            }
        }
        if (!dontSleep) {
            for (requestsIter iter = requestsQueue.begin(); iter != requestsQueue.end(); iter++) {
                Request *request = iter->get();
                if (request->connectionType & ConnectionTypeDownload 
                    || request->connectionType & ConnectionTypeUpload) {
                    dontSleep = true;
                    break;
                }
            }
        }
        if (!dontSleep) {
            if (!networkPaused) {
                DEBUG_D("pausing network and timers by sleep time = %d", nextSleepTimeout);
                for (std::map<uint32_t, Datacenter *>::iterator iter = datacenters.begin(); 
                     iter != datacenters.end(); iter++) {
                    iter->second->suspendConnections();
                }
            }
            networkPaused = true;
            return;
        } else {
            lastPauseTime = now;
            DEBUG_D("don't sleep because of salt, upload or download request");
        }
    }
    if (networkPaused) {
        networkPaused = false;
        DEBUG_D("resume network and timers");
    }

    if (delegate != nullptr) {
        delegate->onUpdate();
    }
    if (datacenter != nullptr) {
        if (datacenter->hasAuthKey()) {
            if (llabs(now - lastPingTime) >= 19000) {
                lastPingTime = now;
                sendPing(datacenter, false);
            }
            if (abs((int32_t) (now / 1000) - lastDcUpdateTime) >= DC_UPDATE_TIME) {
                updateDcSettings(0, false);
            }
            processRequestQueue(0, 0);
        } else if (!datacenter->isHandshaking()) {
            datacenter->beginHandshake(true);
        }
    }
}
```
Note the epoll_wait() call here. Intuitively this is the core of request handling, so let's start the analysis from there and first look at which fds are registered on this epolFd:
```cpp
void ConnectionSocket::adjustWriteOp() {
    eventMask.events = EPOLLIN | EPOLLRDHUP | EPOLLERR | EPOLLET;
    //...
    if (epoll_ctl(ConnectionsManager::getInstance().epolFd, EPOLL_CTL_MOD, socketFd, &eventMask) != 0) {
        //...
    }
}

void ConnectionSocket::closeSocket(int reason) {
    //...
    if (socketFd >= 0) {
        epoll_ctl(ConnectionsManager::getInstance().epolFd, EPOLL_CTL_DEL, socketFd, NULL);
        //...
    }
    //...
}

void ConnectionSocket::openConnection(std::string address, uint16_t port, bool ipv6, int32_t networkType) {
    //...
    int epolFd = ConnectionsManager::getInstance().epolFd;
    //...
    
    if (connect(socketFd, (ipv6 ? (sockaddr *) &socketAddress6 : (sockaddr *) &socketAddress), 
                (socklen_t) (ipv6 ? sizeof(sockaddr_in6) : sizeof(sockaddr_in))) == -1 && errno != EINPROGRESS) {
        closeSocket(1);
    } else {
        eventMask.events = EPOLLOUT | EPOLLIN | EPOLLRDHUP | EPOLLERR | EPOLLET;
        eventMask.data.ptr = eventObject;
        if (epoll_ctl(epolFd, EPOLL_CTL_ADD, socketFd, &eventMask) != 0) {
            //...
        }
    }
}

ConnectionsManager::ConnectionsManager() {
    if ((epolFd = epoll_create(128)) == -1) {
        DEBUG_E("unable to create epoll instance");
        exit(1);
    }
    //...
    eventFd = eventfd(0, EFD_NONBLOCK);
    if (eventFd != -1) {
        //...
        event.events = EPOLLIN | EPOLLET;
        if (epoll_ctl(epolFd, EPOLL_CTL_ADD, eventFd, &event) == -1) {
            //...
        }
    }

    if (eventFd == -1) {
        pipeFd = new int[2];
        if (pipe(pipeFd) != 0) {
            DEBUG_E("unable to create pipe");
            exit(1);
        }
        //...
        eventMask.events = EPOLLIN;
        if (epoll_ctl(epolFd, EPOLL_CTL_ADD, pipeFd[0], &eventMask) != 0) {
            //...
        }
    }

    //...
}
```
There are two kinds of fds registered on epolFd:

1. Sockets. As the code shows, a socket connection registers itself with epoll when it is opened and removes itself from the epoll watch list when it is closed. adjustWriteOp() is called when the socket sends data: send() only copies data into the kernel send buffer, and when the buffer is full it fails with EAGAIN; at that point we need to listen for EPOLLOUT, which signals that the buffer is writable again. That switch is exactly what adjustWriteOp() performs (see the sketch right after this list).
2. Event-notification fds: eventFd and pipeFd. The two are never used at the same time; eventFd is initialized first, and pipeFd is only a fallback when eventFd cannot be created. Where these two fds are actually used is shown in the wakeup() code below.
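To make the EPOLLOUT switch from the first item concrete, here is a minimal sketch of the usual non-blocking send pattern: write until the kernel buffer fills up, then ask epoll to report writability so the remaining bytes can be flushed later (illustrative code, not tgnet's):

```cpp
#include <cerrno>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>

// Try to flush 'len' bytes on a non-blocking socket. If the kernel send
// buffer is full, start listening for EPOLLOUT so epoll_wait() wakes us
// up once it drains. Returns how many bytes were actually queued.
ssize_t trySend(int epollFd, int socketFd, const char *data, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(socketFd, data + sent, len - sent, 0);
        if (n > 0) {
            sent += (size_t) n;
            continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            // Buffer full: additionally watch for writability. This is the
            // kind of switch adjustWriteOp() performs in tgnet.
            epoll_event eventMask{};
            eventMask.events = EPOLLIN | EPOLLOUT | EPOLLRDHUP | EPOLLERR | EPOLLET;
            eventMask.data.fd = socketFd;
            epoll_ctl(epollFd, EPOLL_CTL_MOD, socketFd, &eventMask);
        }
        break;  // stop on EAGAIN or on a real error
    }
    return (ssize_t) sent;
}
```

Back to the second kind of fd: the wakeup() method below is where eventFd and pipeFd get written to.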
```cpp
void ConnectionsManager::wakeup() {
    if (pipeFd == nullptr) {
        eventfd_write(eventFd, 1);
    } else {
        char ch = 'x';
        write(pipeFd[1], &ch, 1);
    }
}
```
wakeup() is called from here:
```cpp
void ConnectionsManager::scheduleTask(std::function<void()> task) {
    pthread_mutex_lock(&mutex);
    pendingTasks.push(task);
    pthread_mutex_unlock(&mutex);
    wakeup();
}
```
This scheduleTask() method is the most intriguing part. I initially assumed it was just a task queue for network requests, but after tracing its callers I found that the queue holds all sorts of things: loading files, loading config, setting DNS, setting the language, and more. It effectively acts as a global scheduler for the JNI layer. Network requests themselves are put into requestsQueue, and a look at sendRequest() makes it clear that they are actually processed in processRequestQueue().
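Pairing scheduleTask() with the select() loop: right after epoll_wait() returns, select() calls checkPendingTasks(), which presumably drains this queue under the same mutex and runs each task on the network thread. A sketch of that consumer side (the pattern, not the verbatim source):

```cpp
#include <functional>
#include <queue>
#include <pthread.h>

// Sketch of the consumer side of scheduleTask(): called from select()
// right after epoll_wait() returns (tgnet's checkPendingTasks() plays this role).
void drainPendingTasks(std::queue<std::function<void()>> &pendingTasks,
                       pthread_mutex_t &mutex) {
    while (true) {
        std::function<void()> task;
        pthread_mutex_lock(&mutex);
        if (pendingTasks.empty()) {
            pthread_mutex_unlock(&mutex);
            return;
        }
        task = pendingTasks.front();
        pendingTasks.pop();
        pthread_mutex_unlock(&mutex);
        // Run the task outside the lock, so a task may schedule more tasks.
        task();
    }
}
```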
With that in place, the flow of select() is much clearer. epoll_wait() wakes up in the following cases:

1. A socket connection has data to receive
2. A socket's send buffer becomes writable again, i.e. the previous send did not complete
3. There are tasks in the pendingTasks queue
Once the thread wakes up and is no longer blocked, it first processes the entries in the pendingTasks queue, and then handles each EventObject returned by epoll by calling its onEvent() method. EventObject is really just a wrapper that dispatches to the underlying object according to its EventObjectType:
```cpp
/*
 * This is the source code of tgnet library v. 1.0
 * It is licensed under GNU GPL v. 2 or later.
 * You should have received a copy of the license in this archive (see LICENSE).
 *
 * Copyright Nikolai Kudashov, 2015.
 */

#include <unistd.h>
#include <sys/eventfd.h>
#include "EventObject.h"
#include "Connection.h"
#include "Timer.h"

EventObject::EventObject(void *object, EventObjectType type) {
    eventObject = object;
    eventType = type;
}

void EventObject::onEvent(uint32_t events) {
    switch (eventType) {
        case EventObjectTypeConnection: {
            Connection *connection = (Connection *) eventObject;
            connection->onEvent(events);
            break;
        }
        case EventObjectTypeTimer: {
            Timer *timer = (Timer *) eventObject;
            timer->onEvent();
            break;
        }
        case EventObjectTypePipe: {
            int *pipe = (int *) eventObject;
            char ch;
            ssize_t size = 1;
            while (size > 0) {
                size = read(pipe[0], &ch, 1);
            }
            break;
        }
        case EventObjectTypeEvent: {
            int *eventFd = (int *) eventObject;
            uint64_t count;
            eventfd_read(eventFd[0], &count);
            break;
        }
        default:
            break;
    }
}
```
For epoll, the types involved here are EventObjectTypeConnection, EventObjectTypePipe, and EventObjectTypeEvent.
After the EventObjects have been handled, connections that have timed out are closed (the timeout is 12 seconds), and then the heartbeat of the push (long-lived) connection is handled as follows:

1. Close the push connection if it has timed out. There are two timeout conditions: a ping was sent and no reply has arrived within 30s, or more than 190s have passed since the last ping was sent.
2. Check whether more than 180s have passed since the last ping; if so, send a new push-connection ping. From this we can conclude that Telegram's push-connection heartbeat interval is 3 minutes (condensed into code right after this list).
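Expressed in code, the two rules above boil down to the following checks, with the constants taken from the select() code shown earlier (a condensed sketch):

```cpp
#include <cstdint>
#include <cstdlib>

// Condensed reading of the push-connection heartbeat timing in select().
// Both timestamps are monotonic milliseconds.
bool pushPingTimedOut(bool sendingPushPing, int64_t now, int64_t lastPushPingTime) {
    // Either a ping is in flight with no reply for 30s, or nothing has been
    // heard on the push connection for 190s.
    return (sendingPushPing && llabs(now - lastPushPingTime) >= 30 * 1000)
        || llabs(now - lastPushPingTime) >= (3 * 60 + 10) * 1000;
}

bool timeForPushPing(int64_t now, int64_t lastPushPingTime) {
    // A fresh push ping goes out every 3 minutes.
    return llabs(now - lastPushPingTime) >= 3 * 60 * 1000;
}
```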
The next part is rather considerate: when Telegram detects that the queue holds no upload or download tasks and no requests for server salts, it closes the socket connections to the data centers. The threshold here is 10s, so roughly speaking, once the app is switched to the background, Telegram closes the long-lived connection and stops sending heartbeats about 10s after the pending uploads and downloads finish. This is very battery-friendly, in stark contrast to plenty of domestic apps that resort to all kinds of keep-alive tricks.
This raises a question: how does Telegram still receive messages after the network layer goes to sleep? The answer is GCM. The service class that handles GCM is easy to find in AndroidManifest.xml:
```java
public class GcmPushListenerService extends GcmListenerService {

    public static final int NOTIFICATION_ID = 1;

    @Override
    public void onMessageReceived(String from, final Bundle bundle) {
        FileLog.d("GCM received bundle: " + bundle + " from: " + from);
        AndroidUtilities.runOnUIThread(new Runnable() {
            @Override
            public void run() {
                //...
                ConnectionsManager.onInternalPushReceived();
                ConnectionsManager.getInstance().resumeNetworkMaybe();
            }
        });
    }
}
```
Here ConnectionsManager.getInstance().resumeNetworkMaybe() is the call that tries to bring the long-lived connection back up. Its logic is simple, so I won't go through it line by line.
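For completeness, the gist of it (a paraphrased sketch of the idea, with a hypothetical method name, not the verbatim source): schedule a task onto the network thread that refreshes the pause timestamp, so the sleep check in select() stops firing and the connections can be brought back up for the incoming message.

```cpp
// Paraphrased sketch of the idea behind resumeNetworkMaybe().
// The hypothetical name resumeNetworkSketch marks this as an illustration.
void ConnectionsManager::resumeNetworkSketch() {
    scheduleTask([&] {
        if (lastPauseTime != 0) {
            // Pretend the pause started just now, so the sleep timeout in
            // select() counts from the present moment instead of expiring.
            lastPauseTime = getCurrentTimeMonotonicMillis();
        }
        // On the next select() pass the network is no longer considered
        // paused, connections are re-established and queued requests run.
    });
}
```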
Finally, the generic connections to the data centers are also kept alive with heartbeats; the ping interval there is 19s, and the data center settings are refreshed once an hour. After that, processRequestQueue() is called to process the tasks in the request queue.
To wrap up, here is a more intuitive flow chart that shows how a request is issued.
[Figure: the request flow driven by select()]

That's it for the analysis of Telegram's network module.
