The server-side code is taken from the official Python documentation; the only change is that the logging configuration is now read from a file.

The system consists of three files:

Server: LogServer.py
Client: LogClient.py, which creates a logger from whatever logger name is passed in
LogFunc.py, a further wrapper around the Python client that makes logging more convenient to use

In this logging system, the server-side script loops continuously, receiving and handling requests from clients. A client talks to the server through a SocketHandler. Server and client can be deployed on different machines, and the log records are kept on the server side (the path of the log file can be specified in the server's configuration file). LogFunc.py is a further wrapper around the client that makes the whole system easier to use: any Python file that imports LogFunc can write log entries simply by calling LogInfo and the other related functions.
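On the wire, each log record travels as a 4-byte big-endian length prefix followed by the pickled LogRecord attribute dict; this is the format SocketHandler emits and the server's handle() loop parses. A minimal sketch of framing and unframing one record (using Python 3's pickle in place of the cPickle used below; the record contents are made up for illustration):

```python
import logging
import pickle
import struct

# Client side: build a LogRecord and serialize its attribute dict,
# roughly what SocketHandler.makePickle does.
record = logging.LogRecord("pgt", logging.INFO, "client.py", 1,
                           "disk usage at %d%%", (91,), None)
data = pickle.dumps(record.__dict__)
frame = struct.pack(">L", len(data)) + data  # 4-byte length prefix

# Server side: read the length, then the payload, then rebuild the record.
slen = struct.unpack(">L", frame[:4])[0]
obj = pickle.loads(frame[4:4 + slen])
rebuilt = logging.makeLogRecord(obj)
print(rebuilt.getMessage())  # disk usage at 91%
```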
Server code: LogServer.py
#coding:utf-8
import cPickle
import logging
import logging.config
import logging.handlers
import SocketServer
import struct


class LogRecordStreamHandler(SocketServer.StreamRequestHandler):
    """Handler for a streaming logging request.

    This basically logs the record using whatever logging policy is
    configured locally.
    """

    def handle(self):
        """
        Handle multiple requests - each expected to be a 4-byte length,
        followed by the LogRecord in pickle format. Logs the record
        according to whatever policy is configured locally.
        """
        while 1:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack(">L", chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)

    def unPickle(self, data):
        return cPickle.loads(data)

    def handleLogRecord(self, record):
        # if a name is specified, we use the named logger rather than the one
        # implied by the record.
        if self.server.logname is not None:
            name = self.server.logname
        else:
            name = record.name
        logger = logging.getLogger(name)
        # N.B. EVERY record gets logged. This is because Logger.handle
        # is normally called AFTER logger-level filtering. If you want
        # to do filtering, do it at the client end to save wasting
        # cycles and network bandwidth!
        logger.handle(record)


class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):
    """Simple TCP socket-based logging receiver suitable for testing."""

    allow_reuse_address = 1

    def __init__(self, host='localhost',
                 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                 handler=LogRecordStreamHandler):
        SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None

    def serve_until_stopped(self):
        import select
        abort = 0
        while not abort:
            rd, wr, ex = select.select([self.socket.fileno()],
                                       [], [],
                                       self.timeout)
            if rd:
                self.handle_request()
            abort = self.abort


def main():
    logging.config.fileConfig('log.conf')  # read the logging configuration from a file
    tcpserver = LogRecordSocketReceiver()
    print "About to start TCP server..."
    tcpserver.serve_until_stopped()


if __name__ == "__main__":
    main()
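main() reads its logging configuration with logging.config.fileConfig('log.conf'). The contents of log.conf are not shown in the original; the sketch below is one plausible configuration (the handler class, log file path, and format string are all assumptions) that writes every received record to a file on the server:

```ini
[loggers]
keys=root

[handlers]
keys=fileHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=fileHandler

[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=simpleFormatter
# hypothetical path; this is where the server stores the log records
args=('logserver.log', 'a')

[formatter_simpleFormatter]
format=%(asctime)s %(name)s %(levelname)s %(message)s
```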
Client code: LogClient.py

#coding:utf-8
import logging
import logging.handlers


def getLogger(loggerName):
    logger = logging.getLogger(loggerName)
    logger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost', \
        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    logger.addHandler(socketHandler)
    return logger


def setLevel(loggerName, level):
    # look the logger up by the name that was passed in
    logger = logging.getLogger(loggerName)
    logger.setLevel(level)
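One caveat: getLogger attaches a new SocketHandler on every call, so calling it twice with the same name would send every record twice. A sketch of a variant that only attaches the handler once (Python 3; the guard is an addition for illustration, not part of the original client):

```python
import logging
import logging.handlers


def getLogger(loggerName, host='localhost',
              port=logging.handlers.DEFAULT_TCP_LOGGING_PORT):
    logger = logging.getLogger(loggerName)
    logger.setLevel(logging.DEBUG)
    # Only attach a SocketHandler if this logger does not have one yet,
    # so repeated calls do not produce duplicate records.
    if not any(isinstance(h, logging.handlers.SocketHandler)
               for h in logger.handlers):
        logger.addHandler(logging.handlers.SocketHandler(host, port))
    return logger


a = getLogger("pgt.demo")
b = getLogger("pgt.demo")  # same logger object, no second handler added
print(a is b, len(a.handlers))
```

SocketHandler does not open the TCP connection until the first record is emitted, so constructing it here is safe even when no server is running yet.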
LogFunc.py
#coding:utf-8
import LogClient

logger = LogClient.getLogger("pgt")


def LogDebug(msg):
    logger.debug(msg)


def LogInfo(msg):
    logger.info(msg)


def LogWarn(msg):
    logger.warn(msg)


def LogError(msg):
    logger.error(msg)


def LogFatal(msg):
    logger.fatal(msg)
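The whole pipeline can be exercised in a single process. The sketch below is a Python 3 port (socketserver/pickle replace the SocketServer/cPickle used above) that starts a minimal receiver thread, attaches a SocketHandler exactly as LogClient.getLogger does, and checks that a logged message arrives; port 0 lets the OS pick a free port, where a real deployment would use DEFAULT_TCP_LOGGING_PORT:

```python
import logging
import logging.handlers
import pickle
import socketserver
import struct
import threading
import time

received = []  # records collected on the "server" side


class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # Same framing as LogRecordStreamHandler.handle() above:
        # 4-byte big-endian length, then a pickled LogRecord dict.
        while True:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack(">L", chunk)[0]
            data = self.connection.recv(slen)
            while len(data) < slen:
                data += self.connection.recv(slen - len(data))
            received.append(logging.makeLogRecord(pickle.loads(data)))


server = socketserver.ThreadingTCPServer(("localhost", 0), Handler)
server.daemon_threads = True
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the same setup LogClient.getLogger performs.
logger = logging.getLogger("pgt")
logger.setLevel(logging.DEBUG)
handler = logging.handlers.SocketHandler("localhost", server.server_address[1])
logger.addHandler(handler)

logger.info("hello from the client")

# Wait briefly for the server thread to pick the record up.
for _ in range(100):
    if received:
        break
    time.sleep(0.05)

handler.close()
server.shutdown()
server.server_close()
print(received[0].getMessage())
```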