LAVA Source Code Reading Notes

The LAVA Framework

LAVA is an open-source automated hardware testing tool. Its architecture is shown below:
(figure: LAVA framework diagram)

LAVA is split into two parts: master and worker.
The web frontend is built with the Django framework; through it users can view device types, add devices, submit jobs and so on, and data submitted through the web UI is recorded in the backend PostgreSQL server.
The scheduler periodically scans the data in the database, checking for queued test jobs and idle available devices, and starts jobs when resources become available.
lava-master-daemon communicates with the workers via the ZMQ (ZeroMQ) mechanism.
lava-slave-daemon receives the control messages sent by the master and returns logs and job results to the master over ZMQ.
The Dispatcher is a particularly important component of the LAVA system: it performs all the operations on a device according to the submitted job definition and the device parameters.
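The master/slave link can be pictured with a minimal pyzmq sketch (an illustration only, not LAVA's code: it assumes the master binds a ZMQ ROUTER socket and each slave connects with a DEALER socket identified by its hostname, which is how the ROUTER can address individual workers; the address and messages are illustrative):

import zmq

context = zmq.Context()

# master side: one ROUTER socket serves many slaves
master = context.socket(zmq.ROUTER)
master.bind("tcp://*:5556")

# slave side: a DEALER socket; the identity lets the ROUTER address it
slave = context.socket(zmq.DEALER)
slave.setsockopt(zmq.IDENTITY, b"worker01")
slave.connect("tcp://localhost:5556")

# the slave announces itself; the master replies to that identity only
slave.send_multipart([b"HELLO"])
identity, action = master.recv_multipart()
master.send_multipart([identity, b"HELLO_OK"])
print(slave.recv_multipart())  # [b'HELLO_OK']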

On my machine, after LAVA starts, the system runs the following related processes:

root@f4acd930a59a:/usr/bin# ps -efH | grep lava
UID        PID  PPID  C STIME TTY          TIME CMD
root      3641   727  0 03:05 pts/13   00:00:00   grep lava
root       220     1  0 Oct15 ?        00:00:03   /usr/bin/python3 /usr/bin/lava-slave --level DEBUG --master tcp://localhost:5556 --socket-addr tcp://localhost:5555
lavaser+   232     1  0 Oct15 ?        00:00:00   /usr/bin/python3 /usr/bin/lava-server manage lava-publisher --level DEBUG
root       233     1  0 Oct15 ?        00:00:05   gunicorn: master [lava_server.wsgi]
lavaser+   346   233  0 Oct15 ?        00:00:00     gunicorn: worker [lava_server.wsgi]
lavaser+   348   233  0 Oct15 ?        00:00:00     gunicorn: worker [lava_server.wsgi]
lavaser+   354   233  0 Oct15 ?        00:00:00     gunicorn: worker [lava_server.wsgi]
lavaser+   358   233  0 Oct15 ?        00:00:00     gunicorn: worker [lava_server.wsgi]
lavaser+   236     1  0 Oct15 ?        00:00:02   /usr/bin/python3 /usr/bin/lava-server manage lava-logs --level DEBUG
postgres   389   277  0 Oct15 ?        00:00:21     postgres: 10/main: lavaserver lavaserver 127.0.0.1(57668) idle
root       338     1  0 Oct15 ?        00:00:01   /usr/bin/python /usr/bin/lava-coordinator --loglevel=DEBUG
lavaser+   379     1  0 Oct15 ?        00:00:59   /usr/bin/python3 /usr/bin/lava-server manage lava-master --level DEBUG

The lava-server workflow

As seen above, the lava-server program's manage argument is followed by three different subcommands: lava-publisher, lava-logs and lava-master, all with --level DEBUG.
The lava-server program itself is very short. It calls a main function that parses the command-line arguments, chooses the appropriate settings file, and then calls Django's execute_from_command_line(argv=None) to execute the corresponding command, as follows:

def main():
    # Is the script called from an installed packages or from a source install?
    installed = not sys.argv[0].endswith('manage.py')

    # Create the command line parser
    parser = argparse.ArgumentParser()
    if installed:
        subparser = parser.add_subparsers(title='subcommand', help='Manage LAVA')
        manage = subparser.add_parser("manage")
    else:
        manage = parser

    group = manage.add_argument_group("Server configuration")

    group.add_argument("-I", "--instance-template",
                       action="store",
                       default="/etc/lava-server/{filename}.conf",
                       help="Template used for constructing instance pathname."
                            " The default value is: %(default)s")

    manage.add_argument("command", nargs="...",
                        help="Invoke this Django management command")

    # Parse the command line
    options = parser.parse_args()

    # Choose the right Django settings
    if installed:
        settings = "lava_server.settings.distro"
    else:
        # Add the root dir to the python path
        find_sources()
        settings = "lava_server.settings.development"
    os.environ["DJANGO_SETTINGS_MODULE"] = settings
    os.environ["DJANGO_DEBIAN_SETTINGS_TEMPLATE"] = options.instance_template

    # Create and run the Django command line
    django_options = [sys.argv[0]]
    django_options.extend(options.command)
    execute_from_command_line(django_options)

Take /usr/bin/python3 /usr/bin/lava-server manage lava-master --level DEBUG as an example.
The program first checks whether the first argument is manage.py. If it is manage.py, LAVA is being run from a source tree; otherwise it is an installed package. Here the first argument is /usr/bin/lava-server, so installed is True.
The program therefore adds a subparser to parse the manage argument.
After argument parsing, it ends up calling:
execute_from_command_line(["/usr/bin/lava-server", "lava-master", "--level", "DEBUG"])
This call eventually resolves to the file lava_server/management/commands/lava-master.py and executes the code there.
lava-master.py defines a Command class with many methods, for example:

class Command(LAVADaemonCommand):
    def send_status(self, hostname):
        """
        The master crashed, send a STATUS message to get the current state of jobs
        """
        ...
    def start_job(self, job, options):
        # Load job definition to get the variables for template
        # rendering
        ...
    def start_jobs(self, options, jobs=None):
        """
        Loop on all scheduled jobs and send the START message to the slave.
        """        
        ...

The lava_server/management/commands/ directory also contains lava-logs.py and lava-publisher.py, corresponding to the other two processes.
lava-publisher mainly uses the ZMQ mechanism to pass LAVA event messages, and lava-logs mainly passes log messages to and from the slaves over ZMQ.
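For instance, the events from lava-publisher can be observed with a plain ZMQ SUB socket (a minimal sketch; port 5500 is only an assumption and depends on the instance's event socket configuration):

import zmq

context = zmq.Context()
sub = context.socket(zmq.SUB)
sub.connect("tcp://localhost:5500")        # assumed event socket address
sub.setsockopt_string(zmq.SUBSCRIBE, "")   # subscribe to every topic

while True:
    # each event arrives as a multipart message (topic plus payload frames)
    frames = sub.recv_multipart()
    print([f.decode("utf-8", errors="replace") for f in frames])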

Note:
the general command format for launching a Django management command is:
python manage.py <subcommand>
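This works because of standard Django behaviour: every file under an app's management/commands/ directory that defines a Command class becomes a subcommand named after the file, which is exactly how manage lava-master reaches lava-master.py. A minimal sketch, with a hypothetical app and command name:

# myapp/management/commands/hello.py
# "python manage.py hello --level DEBUG" dispatches to this class
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Example command, resolved purely by its file name"

    def add_arguments(self, parser):
        parser.add_argument("--level", default="INFO")

    def handle(self, *args, **options):
        self.stdout.write("log level: %s" % options["level"])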

Configuring lava-server

The lava-server configuration lives in lava_server/settings/, which contains the following files:
common.py config_file.py development.py distro.py production.py secret_key.py
The main configuration is in common.py, for example:

ROOT_URLCONF = 'lava_server.urls'
WSGI_APPLICATION = 'lava_server.wsgi.application'
STATIC_ROOT = "/usr/share/lava-server/static"

Exactly how the master and slave communicate requires a deeper look at the code.
The Django framework side is not expanded on here.

The lava-slave workflow

Next, the workflow of lava-slave, which is where a job actually starts running.
Using the process listed above as an example:
/usr/bin/lava-slave --level DEBUG --master tcp://localhost:5556 --socket-addr tcp://localhost:5555

The main workflow of lava-slave-daemon is as follows (lava-slave main()):

  1. Parse the command-line arguments
  2. Set the log level
  3. Create the ZMQ context for the connection to the master and return the necessary tuple of handles
  4. Connect to the database and create the jobs table
  5. Configure ZMQ
  6. Connect to the master
  7. Enter the following loop:
    7.1 Receive a message from the master
    7.2 Handle the message (the important function: handle())
    7.3 Ping the master if necessary
    7.4 Check the job statuses
    7.5 Remove stale resources

The code is shown below:

def main():
    # Parse command line
    options = setup_parser().parse_args()

    # Setup logger
    setup_logger(options.log_file, options.level)

    try:
        # Create the ZMQ context for the connection to the master and return the necessary handles
        ctx, sock, poller, pipe_r, pipe_w = create_context(options)
    except Exception as exc:
        return 1

    # slave states
    master = Master()
    mkdir(SLAVE_DIR)
    # Connect to the database and create the jobs table (/var/lib/lava/dispatcher/slave/db.sqlite3)
    jobs = JobsDB(os.path.join(SLAVE_DIR, "db.sqlite3"))
    
    if options.encrypt:
        zmq_config = ZMQConfig(options.socket_addr, options.master_cert,
                               options.slave_cert, options.ipv6)
    else:
        zmq_config = ZMQConfig(options.socket_addr, None, None, options.ipv6)

    # Timestamps for the periodic checks in the main loop
    last_jobs_check = last_stale_check = time.time()

    ### main loop
    try:
        # Connect to the master
        if not connect_to_master(poller, pipe_r, sock, options.master, options.ipv6):
            return 1
        master.received_msg()
        # Receive a message from the master
        (leaving, msg) = recv_from_master("", poller, pipe_r, sock)
        # Handle messages from the master
        while not leaving:
            # If the message is not empty, handle it
            if msg is not None:
                handle(msg, master, jobs, zmq_config, sock)
            # Ping the master if needed
            master.ping(sock)
            # Regular checks
            last_jobs_check = check_job_status(jobs, sock, last_jobs_check)
            last_stale_check = remove_stale_resources(jobs, last_stale_check)
            # Listen to the master
            (leaving, msg) = recv_from_master("", poller, pipe_r, sock)
    except Exception as exc:
        return 1
    finally:
        destroy_context(ctx, sock, pipe_r, pipe_w)

    return 0

The handle function is defined as follows:

def handle(msg, master, jobs, zmq_config, sock):
    """
    Handle the master message

    :param msg: the master message (the header was removed)
    """
    # 1: identity and action
    try:
        action = u(msg[0])
    except (IndexError, TypeError):
        LOG.error("Invalid message from master: %s", msg)
        return

    # 2: handle the action
    if action == "CANCEL":
        handle_cancel(msg, jobs, sock, master)
    elif action == "END_OK":
        handle_end_ok(msg, jobs)
    elif action == "HELLO_OK":
        handle_hello_ok()
    elif action == "PONG":
        handle_pong(msg, master)
    elif action == "START":
        handle_start(msg, jobs, sock, master, zmq_config)
    elif action == "STATUS":
        handle_status(msg, jobs, sock, master)
    else:
        # Do not tag the master as alive as the message does not mean
        # anything.
        LOG.error("Unknown action: '%s', args=(%s)",
                  action, msg[1:])

The main job of handle_start(msg, jobs, sock, master, zmq_config) is to start the job; if the job has already been started, it returns the job's status information to the master. The main code:

def handle_start(msg, jobs, sock, master, zmq_config):
    ...
        # Start the job, grab the pid and create it in the database
        pid = start_job(job_id, job_definition, device_definition, zmq_config,
                        dispatcher_config, env, env_dut)
        # Create the job entry in the jobs database
        job = jobs.create(job_id, 0 if pid is None else pid,
                          Job.FINISHED if pid is None else Job.RUNNING)
        # Reply with a "START_OK" message
        job.send_start_ok(sock)

    # Mark the master as alive
    master.received_msg()

When a job is started, a child process is forked to run the lava-run program. start_job is defined as follows:

def start_job(job_id, definition, device_definition, zmq_config,
              dispatcher_config, env_str, env_dut_str):
    ...
    # TMP_DIR is /var/lib/lava/dispatcher/slave/tmp/
    # base_dir is TMP_DIR/<job_id>
    # Write back the job, device and dispatcher configuration
    with open(os.path.join(base_dir, "job.yaml"), "w") as f_job:
        f_job.write(definition)
    with open(os.path.join(base_dir, "device.yaml"), "w") as f_device:
        f_device.write(device_definition)
    with open(os.path.join(base_dir, "dispatcher.yaml"), "w") as f_job:
        f_job.write(dispatcher_config)
    # Dump the environment variables in the tmp file.
    if env_dut_str:
        with open(os.path.join(base_dir, "env.dut.yaml"), 'w') as f_env:
            f_env.write(env_dut_str)
    try:
        out_file = os.path.join(base_dir, "stdout")
        err_file = os.path.join(base_dir, "stderr")
        env = create_environ(env_str)
        # Key step: build the arguments and launch the lava-run program
        args = [
            "lava-run",
            "--device=%s" % os.path.join(base_dir, "device.yaml"),
            "--dispatcher=%s" % os.path.join(base_dir, "dispatcher.yaml"),
            "--output-dir=%s" % base_dir,
            "--job-id=%s" % job_id,
            "--logging-url=%s" % zmq_config.logging_url,
            os.path.join(base_dir, "job.yaml"),
        ]

        if zmq_config.ipv6:
            args.append("--ipv6")

        # Use certificates if defined
        if zmq_config.master_cert is not None and \
           zmq_config.slave_cert is not None:
            args.extend(["--master-cert", zmq_config.master_cert,
                         "--slave-cert", zmq_config.slave_cert])
        if env_dut_str:
            args.append("--env-dut=%s" % os.path.join(base_dir, "env.dut.yaml"))

        proc = subprocess.Popen(
            args,
            stdout=open(out_file, "w"),
            stderr=open(err_file, "w"),
            env=env,
            preexec_fn=os.setpgrp)
        return proc.pid
...              

Notes:
job_definition, device_definition, dispatcher_config, env and env_dut are all parsed from the message sent by the master. The master has defaults for some of them, defined in the add_arguments method of class Command(LAVADaemonCommand) in lava_server/management/commands/lava-master.
For example:
--env defaults to /etc/lava-server/env.yaml,
--env-dut defaults to /etc/lava-server/env.dut.yaml,
--dispatchers-config defaults to /etc/lava-server/dispatcher.d

The workflow of lava-run is as follows:

  1. Parse the command-line arguments
  2. Set the log level
  3. Parse the job file, via JobParser.parse()
  4. Validate the job, via job.validate()
  5. Run the job, via job.run()
    The job in steps 4 and 5 is an instance of class Job(object), defined in lava_dispatcher/job.py.

From step 3 onwards the work is essentially done by the dispatcher.
The main code for parsing the job file:

def parse_job_file(logger, options):
    ...
    # Create the device object from the device definition;
    # the scheduler and the dispatcher must stay consistent here
    device = NewDevice(options.device)
    ...
    # Create the JobParser object and run its parse() method
    parser = JobParser()
    # Build the pipeline
    return parser.parse(options.definition.read(),
                        device, options.job_id,
                        logger=logger,
                        dispatcher_config=dispatcher_config,
                        env_dut=env_dut)

The JobParser class is defined in lava_dispatcher/parse.py.
JobParser.parse() mainly adds the individual actions to the job's pipeline;
Pipeline.add_action() calls each action's populate() method and sets the timeout parameters;
populate() typically adds further sub-actions to the pipeline;
class Job(object) is defined in lava_dispatcher/job.py;
job.validate() calls the validate() method of every action in the job's pipeline;
job.run() calls the run() method of every action in the job's pipeline;
the action base classes are defined in lava_dispatcher/action.py; the main action types are deploy, boot and test.
Jobs submitted through the web UI also support "repeat" and "command", but they are rarely used and not covered here.

Note: JobParser.parse() involves an important concept, the namespace: parameters that the actions of different stages (deploy, boot and test) need to share are stored in a namespace; a sketch follows.
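A simplified sketch of the idea (the nested-dict storage here is illustrative; LAVA's real implementation lives on the job context and is accessed through the actions' set_namespace_data()/get_namespace_data() methods):

namespace_data = {}

def set_namespace_data(namespace, action, label, key, value):
    namespace_data.setdefault(namespace, {}).setdefault(
        action, {}).setdefault(label, {})[key] = value

def get_namespace_data(namespace, action, label, key):
    return namespace_data.get(namespace, {}).get(
        action, {}).get(label, {}).get(key)

# a deploy-stage action records where the overlay tarball ended up ...
set_namespace_data("common", "compress-overlay", "output", "file",
                   "/var/lib/lava/dispatcher/tmp/128/overlay-1.4.tar.gz")
# ... and a later boot/test-stage action in the same namespace reads it
print(get_namespace_data("common", "compress-overlay", "output", "file"))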

The rest of these notes focus on the behaviour of LAVA's dispatcher component, since it is the core.

Analysis of the dispatcher component

The dispatcher contains quite a few components. Following one job run, the files below are used, roughly in this order:
lava_dispatcher/device.py      parses the YAML and creates the NewDevice object
lava_dispatcher/parse.py       parses the YAML and creates the JobParser and Job objects
lava_dispatcher/deploy/...     handlers for the different deploy strategies
lava_dispatcher/boot/...       handlers for the different boot strategies
lava_dispatcher/test/...       handlers for the different LavaTestShell strategies
lava_dispatcher/action.py      Pipeline.run_actions() drives the whole pipeline

The main definition of JobParser.parse():

class JobParser(object):
    def parse(self, content, device, job_id, logger, dispatcher_config,
              env_dut=None):
        job = Job(job_id, data, logger)
        pipeline = Pipeline(job=job)

        # Build the test_info map
        test_info = {}
        test_actions = ([action for action in data['actions'] if 'test' in action])
        for test_action in test_actions:
            test_parameters = test_action['test']
            test_type = LavaTest.select(device, test_parameters)
            namespace = test_parameters.get('namespace', 'common')
            connection_namespace = test_parameters.get('connection-namespace', namespace)
            if namespace in test_info:
                test_info[namespace].append({'class': test_type, 'parameters': test_parameters})
            else:
                test_info.update({namespace: [{'class': test_type, 'parameters': test_parameters}]})
            if namespace != connection_namespace:
                test_info.update({connection_namespace: [{'class': test_type, 'parameters': test_parameters}]})
        
        # Handle each action in turn
        for action_data in data['actions']:
            action_data.pop('yaml_line', None)
            for name in action_data:
                # Set a default namespace if needed
                namespace = action_data[name].setdefault('namespace', 'common')
                test_counts.setdefault(namespace, 1)

                if name == 'deploy' or name == 'boot' or name == 'test':
                    parse_action(action_data, name, device, pipeline,
                                 test_info, test_counts[namespace])
                    if name == 'test':
                        test_counts[namespace] += 1
                elif name == 'repeat':  # rarely used, omitted
                    ...
                elif name == 'command':  # rarely used, omitted
                    ...

                else:
                    raise JobError("Unknown action name '%s'" % name)

        # there's always going to need to be a finalize_process action
        finalize = FinalizeAction()
        pipeline.add_action(finalize)
        finalize.populate(None)
        job.pipeline = pipeline
        if 'compatibility' in data:  # rarely used, omitted
           ...
        return job
        

parse_action parses each action's parameters and selects the corresponding action class. It is defined as follows:

def parse_action(job_data, name, device, pipeline, test_info, test_count):
    """
    If protocols are defined, each Action may need to be aware of the protocol parameters.
    """
    parameters = job_data[name]
    parameters['test_info'] = test_info
    if 'protocols' in pipeline.job.parameters:
        parameters.update(pipeline.job.parameters['protocols'])

    if name == 'boot':
        Boot.select(device, parameters)(pipeline, parameters)
    elif name == 'test':
        # stage starts at 0
        parameters['stage'] = test_count - 1
        LavaTest.select(device, parameters)(pipeline, parameters)
    elif name == 'deploy':
        candidate = Deployment.select(device, parameters)
        if parameters['namespace'] in test_info and candidate.uses_deployment_data():
            if any([testclass for testclass in test_info[parameters['namespace']] if testclass['class'].needs_deployment_data()]):
                parameters.update({'deployment_data': get_deployment_data(parameters.get('os', ''))})
        if 'preseed' in parameters:
            parameters.update({'deployment_data': get_deployment_data(parameters.get('os', ''))})
        Deployment.select(device, parameters)(pipeline, parameters)

The three action types

As mentioned above, once a job starts, each action's methods are called in this order:
populate() --> validate() --> run()
There are three main action types: deploy, boot and test,
and actions are generally added in that order as well.

Note: a FinalizeAction() is automatically appended at the end of every job, to finish the job and clean up.
Its sub-actions are PowerOff() and ReadFeedback().
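The lifecycle can be pictured with a minimal sketch (simplified: real actions inherit from lava_dispatcher's Action class and the Pipeline drives these calls):

class ExampleAction:
    name = "example-action"

    def populate(self, parameters):
        # called while the pipeline is being built: create an internal
        # pipeline and add any sub-actions here
        self.internal_pipeline = []

    def validate(self):
        # called once for the whole job before anything runs:
        # check the parameters and record errors
        pass

    def run(self, connection, max_end_time, args=None):
        # called in pipeline order: do the actual work and hand the
        # (possibly new) connection back to the pipeline
        return connection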

deploy

The base class for deploy is Deployment; it selects the matching subclass based on device['actions']['deploy']['methods'] and parameters['to'] (see the sketch after the list below).
The subclasses are the following (the value after the colon is the corresponding entry in device['actions']['deploy']['methods']):
VExpressMsd
UBootUMS
Tftp: tftp
Ssh: ssh
Removable
RecoveryMode
Overlay: overlay
Nfs
Nbd
Mps
Lxc
DeployIso
DeployQemuNfs
DeployImages
Flasher
Fastboot
Download
Docker
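Conceptually, select() walks the registered subclasses and asks each one whether it accepts the device/parameters combination (a simplified sketch; the real Deployment.select also collects the rejection reasons and raises JobError when nothing matches):

class Deployment:
    @classmethod
    def select(cls, device, parameters):
        willing = [c for c in cls.__subclasses__()
                   if c.accepts(device, parameters)[0]]
        if not willing:
            raise RuntimeError("no deployment strategy accepts this job")
        return willing[0]

class Overlay(Deployment):
    @classmethod
    def accepts(cls, device, parameters):
        if 'overlay' not in device['actions']['deploy']['methods']:
            return False, "'overlay' not in the device deploy methods"
        if parameters['to'] != 'overlay':
            return False, '"to" parameter is not "overlay"'
        return True, 'accepted'

device = {'actions': {'deploy': {'methods': {'overlay': {}}}}}
print(Deployment.select(device, {'to': 'overlay'}))  # <class 'Overlay'>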

Example: overlay

Deployment.select(device, parameters)(pipeline, parameters) ultimately resolves to Overlay(pipeline, parameters).
During the initialization of the Overlay instance, an OverlayAction() is added to the pipeline and populate() is executed.
The main job of OverlayAction() is to create a temporary directory into which the lava test shell scripts will be installed;
afterwards CompressOverlay creates a tarball of the job in the output directory and removes the temporary directory;
ApplyOverlay then unpacks that tarball into the image;
for jobs that contain a "test" action, this action also adds a TestDefinitionAction to the pipeline.
The Overlay class is defined as follows:

class Overlay(Deployment):
    compatibility = 4
    name = "overlay"

    def __init__(self, parent, parameters):
        super().__init__(parent)
        self.action = OverlayAction()
        self.action.section = self.action_type
        self.action.job = self.job
        parent.add_action(self.action, parameters)

    @classmethod
    def accepts(cls, device, parameters):
        if 'overlay' not in device['actions']['deploy']['methods']:
            return False, "'overlay' not in the device configuration deploy methods"
        if parameters['to'] != 'overlay':
            return False, '"to" parameter is not "overlay"'
        return True, 'accepted'

class OverlayAction(DeployAction)

validate()
At this stage the action mainly collects parameters and adds them to the job, including:
self.scripts_to_copy: the lava_dispatcher/lava_test_shell/lava-* and lava_dispatcher/lava_test_shell/distro/xx/lava-* files;
the variables lava_test_results_dir and lava_test_sh_cmd are stored via set_namespace_data(),
where self.lava_test_dir is lava_dispatcher/lava_test_shell/;
if self.job.device['parameters']['interfaces']['target'].get('mac', '') exists, mac is set;
if self.job.device['parameters']['interfaces']['target'].get('ip', '') exists, ip is set;

populate()
At this stage the system adds further actions to the pipeline, as follows:

class OverlayAction(DeployAction):
    def populate(self, parameters):
        self.internal_pipeline = Pipeline(parent=self, job=self.job, parameters=parameters)
        if self.test_needs_overlay(parameters):
            if any('ssh' in data for data in self.job.device['actions']['deploy']['methods']):
                # only devices supporting ssh deployments add this action.
                self.internal_pipeline.add_action(SshAuthorize())
            self.internal_pipeline.add_action(VlandOverlayAction())
            self.internal_pipeline.add_action(MultinodeOverlayAction())
            self.internal_pipeline.add_action(TestDefinitionAction())
            self.internal_pipeline.add_action(CompressOverlay())
            self.internal_pipeline.add_action(PersistentNFSOverlay())  # idempotent
            
    def validate(self):
        super().validate()
        self.scripts_to_copy = sorted(glob.glob(os.path.join(self.lava_test_dir, 'lava-*')))
        # Distro-specific scripts override the generic ones
        if not self.test_needs_overlay(self.parameters):
            return
        lava_test_results_dir = self.parameters['deployment_data']['lava_test_results_dir']
        lava_test_results_dir = lava_test_results_dir % self.job.job_id
        self.set_namespace_data(action='test', label='results', key='lava_test_results_dir',
                                value=lava_test_results_dir)
        lava_test_sh_cmd = self.parameters['deployment_data']['lava_test_sh_cmd']
        self.set_namespace_data(action='test', label='shared', key='lava_test_sh_cmd',
                                value=lava_test_sh_cmd)
        ...

run()
The run stage mainly does the following:

  • create a temporary directory;
  • under <tmp_dir>/<lava_test_results_dir>, create the bin, test and results directories if they do not exist;
  • for each file in self.scripts_to_copy:
    if it lives in a distro subdirectory, update the copy in the <tmp_dir>/<lava_test_results_dir>/bin/ directory; otherwise create a new file in <tmp_dir>/<lava_test_results_dir>/bin/.
    The following header is added to each file:
#!/bin/sh
# may also be #!/bin/bash, depending on the lava_test_sh_cmd variable

# depending on the lava_test_shell/lava-xx file name, different variables
# are written; at most one of the following appears
TARGET_DEVICE_MAC=<self.target_mac>
TARGET_DEVICE_IP=<self.target_ip>
PROBE_DEVICE_IP=<self.probe_ip>
PROBE_DEVICE_CHANNEL=<self.probe_channel>
LAVA_STORAGE=
	(key, value)

# then the content of lava_test_shell/lava-xx itself is copied in,
# usually echo or printf of variable values
# if job.parameters contains a "secrets" parameter, those variables are
# written in as well
  • finally, the run method of each sub-action is executed in turn.

The individual sub-actions:

  • SshAuthorize(): if self.job.device['actions']['deploy']['methods'] has "ssh", this runs; it includes the public key in the overlay and authorizes the root user.
  • VlandOverlayAction(): Adds data for vland interface locations, MAC addresses and vlan names
  • MultinodeOverlayAction(): add lava scripts during deployment for multinode test shell use
  • TestDefinitionAction(TestAction): load test definitions into image. installs each test definition into the overlay. It does not execute the scripts in the test definition, that is the job of the TestAction class. One TestDefinitionAction handles all test definitions for the current job.
  • CompressOverlay(Action): Makes a tarball of the finished overlay and declares filename of the tarball. Create a lava overlay tarball and store alongside the job.
  • PersistentNFSOverlay(Action): Instead of extracting, just populate the location of the persistent NFS so that it can be mounted later when the overlay is applied.
class TestDefinitionAction(TestAction)

TestDefinitionAction is defined in lava_dispatcher/actions/deploy/testdef.py.

validate()

The validate() stage ensures the following:
the job definition contains the "name" and "from" keys;
if the job definition contains a "parameters" key, it must be a dict.

populate()

The populate() stage adds the following actions:

  • handler = RepoAction.select(testdef['from'])()
  • TestOverlayAction(): overlay test support files onto image
  • TestInstallAction(): overlay dependency installation support files onto image
  • TestRunnerAction(): overlay run script onto image

handler is a RepoAction subclass, chosen according to the from parameter of the test job definition.
The variants, all defined in lava_dispatcher/actions/deploy/testdef.py, are:

  • GitRepoAction (from: git)
  • BzrRepoAction (from: bzr)
  • InlineRepoAction (from: inline)
  • TarRepoAction (from: tar)
  • UrlRepoAction (from: url)
run()

The run() stage first sets namespace data:

self.set_namespace_data(action='test', label='test-definition', key='overlay_dir', value=overlay_base)

where overlay_base is <tmp_dir>/<lava_test_results_dir>.
It then runs the run() methods of the sub-actions.
Finally, under each stage directory of overlay_base, it opens the lava-test-runner.conf file and appends the runner value of each RepoAction subclass (such as GitRepoAction).
Note: runner_conf.write(handler.runner)
runner: runner_path is the path to read and execute from to run the tests after boot
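For instance, with job id 128, stage 0 and a single test definition, the line appended to lava-test-runner.conf is a runner path of roughly this shape (the definition name here is illustrative):

/lava-128/0/tests/0_smoke-tests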

The sub-actions in detail.

class GitRepoAction(RepoAction):

The validate() stage ensures that:
the job definition contains the "repository", "path" and "test_name" parameters;
the /usr/bin/git program is installed on the dispatcher.

The run() stage first executes the run method of the RepoAction base class, which mainly uses set_namespace_data to set some namespace variables, for example:

        self.set_namespace_data(action='uuid', label='overlay_path', key=args['test_name'], value=overlay_path)
        self.set_namespace_data(
            action='test', label=self.uuid, key='repository', value=self.parameters['repository'])
        self.set_namespace_data(
            action='test', label=self.uuid, key='path', value=self.parameters['path'])

It then git clones the repository specified in the job definition;
next it reads the file given by the job definition's path parameter under runner_path and copies it into the testdef dict.
Finally it stores some parameters related to testdef_metadata and testdef_pattern.

class InlineRepoAction(RepoAction)

class InlineRepoAction(RepoAction) exports the repository parameter of the test definition and writes it into the YAML file specified by path. The data is UTF-8 encoded and then hashed with hashlib.sha1().
Note, a way to find all the variants:
python3-pkgs# grep -nr "(RepoAction)" | sed -e 's#.*class ##g' | sed -e 's#(RepoAction):##g' | awk '{print $1}'

TestOverlayAction(TestAction)

TestOverlayAction(TestAction) rewrites three files: testdef.yaml, uuid and testdef_metadata. These files live under runner_path, where
runner_path = self.get_namespace_data(action='uuid', label='overlay_path', key=self.parameters['test_name'])
For example: runner path /var/lib/lava/dispatcher/tmp/129/lava-overlay-9fo5y2__/lava-129/0/tests/0_rts3903_test1 test_uuid 129_1.3.1

TestInstallAction(TestOverlayAction)

TestInstallAction(TestOverlayAction) checks whether the files related to the test definition (e.g. install.sh) exist, and creates them if they do not.

TestRunnerAction(TestOverlayAction)

TestRunnerAction(TestOverlayAction) reads the test definition content and creates the run.sh file.

class CompressOverlay(Action)

Creates a tarball overlay-<level_number>.tar.gz in the temporary directory, containing the contents of the lava_test_results_dir directory. If a ./root/ directory exists, its contents are added to the tarball as well, where
lava_test_results_dir = self.get_namespace_data(action='test', label='results', key='lava_test_results_dir')

class PersistentNFSOverlay(Action)

If the deploy parameters contain a "persistent_nfs" entry, the following happens; otherwise the action returns immediately:

  • fetch self.parameters['persistent_nfs']['address'],
  • parse out nfs_server and dirname
  • check that the rpcinfo command is available
  • spawn a subprocess: subprocess.Popen(['/usr/sbin/rpcinfo', '-u', server, 'nfs', "%s" % version], stdout=devnull, stderr=subprocess.PIPE)
    i.e. run the command: rpcinfo -u <server> nfs <version>
  • set namespace data:
    self.set_namespace_data(action=self.name, label='nfs_address', key='nfsroot', value=dirname)
    self.set_namespace_data(action=self.name, label='nfs_address', key='serverip', value=nfs_server)
Miscellaneous notes on overlay and deploy

needs_overlay: not needed by TestMonitor; needed by MultinodeTestShell and TestShell
needs_deployment_data: not needed by TestMonitor; needed by MultinodeTestShell and TestShell
deployment_data is defined in lava_dispatcher/deployment_data.py, for example:

ubuntu = deployment_data_dict({  # pylint: disable=invalid-name
    'TESTER_PS1': r"linaro-test [rc=$(echo \$?)]# ",
    'TESTER_PS1_PATTERN': r"linaro-test \[rc=(\d+)\]# ",
    'TESTER_PS1_INCLUDES_RC': True,
    'boot_cmds': 'boot_cmds',
    'line_separator': '\n',

    # for lava-test-shell
    'distro': 'ubuntu',
    'tar_flags': '--warning no-timestamp',
    'lava_test_sh_cmd': '/bin/sh',
    'lava_test_dir': '/lava-%s',
    'lava_test_results_part_attr': 'root_part',
    'lava_test_results_dir': '/lava-%s',
    'lava_test_shell_file': '~/.bashrc',
})

Example: tftp

See lava_dispatcher/actions/deploy/tftp.py for the details.

  1. First, when class Tftp(Deployment) is instantiated, it adds a TftpAction();
  2. Next, the populate() of TftpAction() adds the following actions:
    DownloaderAction()
    PrepareOverlayTftp()
    LxcCreateUdevRuleAction()
    DeployDeviceEnvironment()
  3. Then the validate() of TftpAction() ensures that the job definition contains the "kernel" key, does not contain the "nfs_url" key, and that only one of "nfsrootfs" and "persistent_nfs" appears.
  4. The run() of TftpAction() runs the actions added to the pipeline above.

Note:
the validate stage of TftpAction() ensures that the deploy parameters:
  • contain 'kernel';
  • contain only one of nfsrootfs and persistent_nfs;
  • do not contain 'nfs_url'; to get that behaviour, use persistent_nfs instead.

DownloaderAction(RetryAction) chooses the download method according to the deploy URL type:

  • ScpDownloadAction
  • HttpDownloadAction
  • FileDownloadAction
  • LxcDownloadAction
    For example, HttpDownloadAction fetches the URL resource via the requests module's get method.

PrepareOverlayTftp(Action) unpacks the ramdisk or nfsrootfs for the lava overlay. It adds the following actions in order:
ExtractNfsRootfs(): does nothing if the deploy parameters do not contain 'nfsrootfs'
OverlayAction():
ExtractRamdisk()
ExtractModules()
ApplyOverlayTftp()
PrepareKernelAction(): if 'kernel' in parameters and 'type' in parameters['kernel']
ConfigurePreseedFile()
CompressRamdisk()
PrepareKernelAction(): if 'depthcharge' in self.job.device['actions']['boot']['methods']

Namespace data set during the tftp deploy stage:

| action | label | key | value |
| --- | --- | --- | --- |
| "tftp-deploy" | "tftp" | "tftp_dir" | read from /etc/default/tftpd-hpa; default: "/srv/tftp/{job_id}/tftp-deploy-" |
| "tftp-deploy" | "tftp" | "suffix" | read from /etc/default/tftpd-hpa; default: "{job_id}/tftp-deploy-" |
| "tftp-deploy" | "tftp" | "ramdisk" | True (set when the deploy parameters contain 'ramdisk') |
| "test-definition" | "test-definition" | 'test_list' | test['parameters']['definitions'] |
| 'test-definition' | 'test-definition' | 'testdef_index' | download action's name |
| 'test' | 'results' | 'lava_test_results_dir' | {self.parameters['deployment_data']['lava_test_results_dir']}/{job_id} |
| 'test' | 'shared' | 'lava_test_sh_cmd' | self.parameters['deployment_data']['lava_test_sh_cmd'] |
| "deploy-device-env" | 'environment' | 'shell_file' | self.parameters['deployment_data']['lava_test_shell_file'] |
| "deploy-device-env" | 'environment' | 'env_dict' | self.parameters['deployment_data']['lava_test_shell_file'] |
| "deploy-device-env" | 'environment' | 'line_separator' | self.parameters['deployment_data'].get('line_separator', LINE_SEPARATOR) |

Example: ssh

When class ScpOverlay(DeployAction) is populated, it adds the following actions:

  • OverlayAction()
  • DownloaderAction()
    (one for each of 'firmware', 'kernel', 'dtb', 'rootfs', 'modules' present in the parameters)
  • PrepareOverlayScp()
  • DeployDeviceEnvironment()
class DownloaderAction(RetryAction)

Chooses the download method according to url.scheme, one of:

  • ScpDownloadAction
  • HttpDownloadAction
  • FileDownloadAction
  • LxcDownloadAction
class PrepareOverlayScp(Action)

class PrepareOverlayScp(Action) copies the overlay to the device via scp and unpacks the tarball remotely; it requires the device to already be reachable over SSH.

  1. The populate stage adds two more actions:
  • ExtractRootfs()
  • ExtractModules()
  2. The validate stage adds namespace data related to prepare-scp-overlay;
  3. The run stage first runs the two internal pipeline actions above, then sets the namespace data:
self.set_namespace_data(action=self.name, label='scp-deploy', key='overlay', value=overlay_file)

where overlay_file is /xxx/overlay_.tar.gz

class ExtractRootfs(Action) unpacks the whole rootfs.
class ExtractModules(Action) first checks whether the deploy parameters contain the modules parameter; if not, it does nothing. If they do, it checks that at least one of ramdisk or nfsrootfs is present and raises an error if neither is. As long as one of the two is specified, the modules file is unpacked into the nfsrootfs or the ramdisk.

class DeployDeviceEnvironment(Action)

Creates the environment variables specified in the job parameters 'env_dut' and sets them in common_data.

以nfs为例进行说明

如果要使用nfs的deploy方式,则job需要满足下面几个条件:

  1. deploy 参数中有"to"
  2. deploy 的"to" 参数应该是"nfs"
  3. device[‘actions’][‘deploy’][‘methods’] 参数中不能有"image"
  4. device[‘actions’][‘deploy’][‘methods’] 参数中必须有"nfs"
    此外, 还应该保证dispatcher上成功安装了NFS server, 存在/usr/sbin/exportfs文件.

When class Nfs(Deployment) is instantiated, it adds an NfsAction() to the pipeline.

In the validate stage, class NfsAction(DeployAction) ensures that only one of 'nfsrootfs' and 'persistent_nfs' appears in the deploy parameters.

In the populate stage, NfsAction() then adds the following actions in order:
DownloaderAction()
ExtractNfsRootfs()
OverlayAction()
ExtractModules()
ApplyOverlayTftp()
DeployDeviceEnvironment()

class NfsAction(DeployAction):
    def populate(self, parameters):
        download_dir = self.mkdtemp()
        self.internal_pipeline = Pipeline(parent=self, job=self.job, parameters=parameters)
        if 'nfsrootfs' in parameters:
            self.internal_pipeline.add_action(DownloaderAction('nfsrootfs', path=download_dir))
        if 'modules' in parameters:
            self.internal_pipeline.add_action(DownloaderAction('modules', path=download_dir))
        # NfsAction is a deployment, so once the nfsrootfs has been deployed, just do the overlay
        self.internal_pipeline.add_action(ExtractNfsRootfs())
        self.internal_pipeline.add_action(OverlayAction())
        self.internal_pipeline.add_action(ExtractModules())
        self.internal_pipeline.add_action(ApplyOverlayTftp())
        if self.test_needs_deployment(parameters):
            self.internal_pipeline.add_action(DeployDeviceEnvironment())

class DownloaderAction(RetryAction):

Chooses the download method according to the URL type:
'scp': ScpDownloadAction()
'https': HttpDownloadAction()
'file': FileDownloadAction()
'lxc': LxcDownloadAction()

class ExtractNfsRootfs(ExtractRootfs):

Unpacks the nfsrootfs and applies the overlay file to it.
It mainly runs the run method of class ExtractRootfs(Action),
then sets self.set_namespace_data(action='extract-rootfs', label='file', key=self.file_key, value=root_dir); root_dir may include the prefix from the deploy parameters.

class ExtractRootfs(Action)

The action that actually performs the unpacking; it is the base class of ExtractNfsRootfs.
It unpacks the file given by self.get_namespace_data(action='download-action', label=self.param_key, key='file') into the temporary directory under {DISPATCHER_DOWNLOAD_DIR}/{job_id}.

class OverlayAction(DeployAction)

Already covered above.

class ExtractModules(Action)

This action runs only if the "modules" parameter is specified in the deploy parameters; otherwise it returns immediately.
If modules is specified, at least one of ramdisk or nfsrootfs must be specified as well,
and the corresponding install_modules should be specified, otherwise the action skips out.
The action unpacks the file corresponding to modules and writes it to the matching location.
For nfsrootfs the path is
root = self.get_namespace_data(action='extract-rootfs', label='file', key='nfsroot')
For ramdisk the path is:
root = self.get_namespace_data(action='extract-overlay-ramdisk', label='extracted_ramdisk', key='directory')

class ApplyOverlayTftp(Action)

Unpacks the overlay file on top of the ramdisk or the NFS root. The implicit default order: the overlay goes into the NFS first, and only into the ramdisk when no NFS is specified.

In the validate stage, the following is checked:
if persistent_nfs is specified in the deploy parameters, it must be a dict and must include the address key.

In the run stage, the deploy parameters are inspected to resolve the directory the nfsrootfs or ramdisk overlay applies to, overlay_file is unpacked into that directory, and if a persistent mount point was used it is removed at the end.
The individual cases are:

  • parameters['nfsrootfs']['install_overlay'] is True:
    overlay_file = self.get_namespace_data(action='compress-overlay', label='output', key='file')
    directory = self.get_namespace_data(action='extract-rootfs', label='file', key='nfsroot')
  • parameters['images']['nfsrootfs']['install_overlay'] is True:
    overlay_file = self.get_namespace_data(action='compress-overlay', label='output', key='file')
    directory = self.get_namespace_data(action='extract-rootfs', label='file', key='nfsroot')
  • parameters['persistent_nfs']['install_overlay'] is True:
    overlay_file = self.get_namespace_data(action='compress-overlay', label='output', key='file')
    nfs_address = self.parameters['persistent_nfs'].get('address')
    directory = mkdtemp(autoremove=False)
    subprocess.check_output(['mount', '-t', 'nfs', nfs_address, directory])
  • parameters['ramdisk']['install_overlay'] is True:
    overlay_file = self.get_namespace_data(action='compress-overlay', label='output', key='file')
    directory = self.get_namespace_data(action='extract-overlay-ramdisk', label='extracted_ramdisk', key='directory')
  • parameters['rootfs'] is not empty:
    overlay_file = self.get_namespace_data(action='compress-overlay', label='output', key='file')
    directory = self.get_namespace_data(action='apply-overlay', label='file', key='root')

Finally untar_file(overlay_file, directory) is executed, unpacking overlay_file into directory.
If nfs_address exists, subprocess.check_output(['umount', directory]) is executed afterwards.
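Condensed, the persistent_nfs case therefore behaves like this sketch (error handling omitted; LAVA's untar_file helper is emulated with tarfile here):

import subprocess
import tarfile
import tempfile

def apply_overlay_persistent_nfs(nfs_address, overlay_file):
    directory = tempfile.mkdtemp()
    # mount the persistent NFS export on a temporary mount point
    subprocess.check_output(['mount', '-t', 'nfs', nfs_address, directory])
    try:
        # untar_file(overlay_file, directory)
        with tarfile.open(overlay_file) as tar:
            tar.extractall(directory)
    finally:
        # the mount point is removed again afterwards
        subprocess.check_output(['umount', directory])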

class DeployDeviceEnvironment(Action)

Creates the environment found in the job parameters 'env_dut' and sets it in common_data.

boot

The subclasses of Boot are the following:
UefiMenu
UefiShell
UBoot
SshLogin
Schroot
RecoveryBoot
BootQEMU
PyOCD
Minimal
BootLxc
BootKExec
BootIsoInstaller
IPXE
GrubSequence
Grub
BootFastboot
BootDocker
DFU
Depthcharge
CMSIS
SecondaryShell

Example: SshLogin

Ssh boot is a login process: no kernel actually needs to be booted, but AutoLoginAction is still required.
This Boot method is selected when both device['actions']['boot']['methods'] and the job's boot parameters['method'] are 'ssh'; class SshAction(RetryAction) is then added to the pipeline.
The class SshAction(RetryAction) flow:
the populate stage adds the following actions in order:
class SshAction(RetryAction):
    def populate(self, parameters):
        self.internal_pipeline = Pipeline(parent=self, job=self.job, parameters=parameters)
        scp = Scp('overlay')
        self.internal_pipeline.add_action(scp)
        self.internal_pipeline.add_action(PrepareSsh())
        self.internal_pipeline.add_action(ConnectSsh())
        self.internal_pipeline.add_action(AutoLoginAction())
        self.internal_pipeline.add_action(ExpectShellSession())
        self.internal_pipeline.add_action(ExportDeviceEnvironment())
        self.internal_pipeline.add_action(ScpOverlayUnpack())
  • class Scp(ConnectSsh): Use the SSH connection options to copy files over SSH.
  • class PrepareSsh(Action): Sets the host for the ConnectSsh.
  • class ConnectSsh(Action): Initiate an SSH connection from the dispatcher to a device.
    Connects to the board via ssh and exports the $PS1 parameter.
  • class AutoLoginAction(Action): Automatically login on the device.
    If 'auto_login' is not present in the parameters, this action does nothing.
    It ultimately sends the command export PS1="${default-shell-prompt}" to the device.
  • class ExpectShellSession(Action): Waits for a shell connection to the device for the current job. The shell connection can be over any particular connection, all that is needed is a prompt.
  • class ExportDeviceEnvironment(Action): Exports environment variables found in common data on to the device.
    Sends the env_dict from the deploy-device-env action -> environment data over the device shell connection, and executes the shell_file.
  • class ScpOverlayUnpack(Action): unpack the overlay over an existing ssh connection.
    Calls tar to unpack the overlay tarball, with the command:
    cmd = "tar %s -C / -xzf /%s" % (tar_flags, filename)
    NOTE: if the device has no tar tool, or it does not support the flags above, the ScpOverlayUnpack action may fail, aborting the pipeline.

That completes the ssh boot stage.

Example: UBoot

During instantiation, class UBoot(Boot) first adds a UBootAction().
During its instantiation, class UBootAction(BootAction) in turn adds the following actions in order:
UBootSecondaryMedia()
BootloaderCommandOverlay()
ConnectDevice()
UBootRetry()

class UBootSecondaryMedia(BootloaderSecondaryMedia):

It only does something when the boot parameters contain "media"; otherwise it is a no-op.

class BootloaderCommandOverlay(Action):
Replace KERNEL_ADDR and DTB placeholders with the actual values for this
particular pipeline.
addresses are read from the device configuration parameters
bootloader_type is determined from the boot action method strategy
bootz or bootm is determined by boot action method type. (i.e. it is up to
the test writer to select the correct download file for the correct boot command.)
server_ip is calculated at runtime
filenames are determined from the download Action.
class ConnectDevice(Action):
General purpose class to use the device commands to
make a serial connection to the device. e.g. using ser2net
Inherit from this class and change the session_class and/or shell_class for different behaviour.
class UBootRetry(BootAction):

The populate stage adds the following actions in order:

class UBootRetry(BootAction):

    name = "uboot-retry"
    description = "interactive uboot retry action"
    summary = "uboot commands with retry"

    def populate(self, parameters):
        self.internal_pipeline = Pipeline(parent=self, job=self.job, parameters=parameters)
        self.method_params = self.job.device['actions']['boot']['methods']['u-boot']['parameters']
        self.usb_mass_device = self.method_params.get('uboot_mass_storage_device', None)
        # establish a new connection before trying the reset
        self.internal_pipeline.add_action(ResetDevice())
        self.internal_pipeline.add_action(BootloaderInterruptAction())
        if self.method_params.get('uboot_ums_flash', False):
            self.internal_pipeline.add_action(BootloaderCommandsAction(expect_final=False))
            self.internal_pipeline.add_action(WaitDevicePathAction(self.usb_mass_device))
            self.internal_pipeline.add_action(FlashUBootUMSAction(self.usb_mass_device))
            self.internal_pipeline.add_action(ResetDevice())
        else:
            self.internal_pipeline.add_action(BootloaderCommandsAction())
        if self.has_prompts(parameters):
            self.internal_pipeline.add_action(AutoLoginAction())
            if self.test_has_shell(parameters):
                self.internal_pipeline.add_action(ExpectShellSession())
                if 'transfer_overlay' in parameters:
                    self.internal_pipeline.add_action(OverlayUnpack())
                self.internal_pipeline.add_action(ExportDeviceEnvironment())
class BootloaderCommandsAction(Action):

Resolves the boot commands, where
commands = self.get_namespace_data(action='bootloader-overlay', label=self.method, key='commands')
and then sends those commands to the bootloader via connection.sendline().
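In essence, the action's core is a loop of the following shape (a simplified sketch: the real run() also expects the bootloader prompt between lines; DummyConnection stands in for LAVA's shell connection object):

class DummyConnection:
    # stands in for the pexpect-based shell connection
    def sendline(self, line, delay=0):
        print("sent: %s" % line)

def send_bootloader_commands(connection, commands, character_delay=0):
    for command in commands:
        connection.sendline(command, delay=character_delay)

# illustrative u-boot command sequence
send_bootloader_commands(DummyConnection(),
                         ["setenv autoload no", "dhcp", "boot"])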

test

The LavaTest subclasses are:
TestShell: requires the test parameters to contain "definition" or "definitions"
MultinodeTestShell: requires the test parameters to contain 'role' and "lava-multinode", the latter with a "target_group" parameter
TestMonitor: requires the test parameters to contain 'monitors'

Example: TestShell

class TestShell(LavaTest) adds a TestShellRetry() action at initialization.
class TestShellRetry(RetryAction) adds TestShellAction() in its populate stage.
class TestShellAction(TestAction) installs and runs the LAVA Test Shell Definition scripts; it also supports a predefined list of commands for doing any necessary setup during the boot stage, before the test scripts execute.

The TestShellAction flow

The run process of TestShellAction() in detail:

  1. Fetch some basic parameters;
  2. Send connection.check_char to the device connection;
  3. If there is a pre_command_list, send its commands to the connection;
  4. Send the command ls -l <lava_test_results_dir>/
  5. If lava_test_sh_cmd is set, send the command export SHELL=<lava_test_sh_cmd>
  6. Send the following command to run the lava-test-runner program (the most important part!):
        running = self.parameters['stage']
        with connection.test_connection() as test_connection:
            test_connection.sendline(
                "%s/bin/lava-test-runner %s/%s" % (
                    lava_test_results_dir,
                    lava_test_results_dir,
                    running),
                delay=self.character_delay)
  7. Call the keep_running() function, a method of class TestShellAction(TestAction).
    _keep_running() is defined as follows:
class TestShellAction(TestAction):
    def _keep_running(self, test_connection, timeout, check_char):
        if 'test_case_results' in self.patterns:
            self.logger.info("Test case result pattern: %r" % self.patterns['test_case_results'])
        retval = test_connection.expect(list(self.patterns.values()), timeout=timeout)
        return self.check_patterns(list(self.patterns.keys())[retval], test_connection, check_char)
  8. Depending on the matched pattern, execute the corresponding branch of check_patterns(self, event, test_connection, check_char), also a method of class TestShellAction(TestAction).
    The event types are:
    exit, error, eof, timeout, signal, test_case, test_case_result.
    The signal names in turn are "STARTRUN", "ENDRUN", "TESTCASE", "TESTFEEDBACK", "TESTREFERENCE", "TESTSET" and "TESTRAISE".
    Details:
class TestShellAction(TestAction):
    def check_patterns(self, event, test_connection, check_char):  # pylint: disable=unused-argument
        """
        Defines the base set of pattern responses.
        Stores the results of testcases inside the TestAction
        Call from subclasses before checking subclass-specific events.
        """
        ret_val = False
        if event == "exit":
            self.logger.info("ok: lava_test_shell seems to have completed")
            self.testset_name = None

        elif event == "error":
            # Parsing is not finished
            ret_val = self.pattern_error(test_connection)

        elif event == "eof":
            self.testset_name = None
            raise InfrastructureError("lava_test_shell connection dropped.")

        elif event == "timeout":
            # allow feedback in long runs
            ret_val = True

        elif event == "signal":
            name, params = test_connection.match.groups()
            self.logger.debug("Received signal: <%s> %s" % (name, params))
            params = params.split()
            if name == "STARTRUN":
                self.signal_start_run(params)
            elif name == "ENDRUN":
                self.signal_end_run(params)
            elif name == "TESTCASE":
                self.signal_test_case(params)
            elif name == "TESTFEEDBACK":
                self.signal_test_feedback(params)
            elif name == "TESTREFERENCE":
                self.signal_test_reference(params)
            elif name == "TESTSET":
                ret = self.signal_test_set(params)
                if ret:
                    name = ret
            elif name == "TESTRAISE":
                raise TestError(' '.join(params))

            self.signal_director.signal(name, params)
            ret_val = True

        elif event == "test_case":
            ret_val = self.pattern_test_case(test_connection)
        elif event == 'test_case_result':
            ret_val = self.pattern_test_case_result(test_connection)
        return ret_val

The content of self.patterns:

class TestShellAction(TestAction):
    def _reset_patterns(self):
        # Extend the list of patterns when creating subclasses.
        self.patterns = {
            "exit": "<LAVA_TEST_RUNNER>: exiting",
            "error": "<LAVA_TEST_RUNNER>: ([^ ]+) installer failed, skipping",
            "eof": pexpect.EOF,
            "timeout": pexpect.TIMEOUT,
            "signal": r"<LAVA_SIGNAL_(\S+) ([^>]+)>",
        }
        # noinspection PyTypeChecker
        self.pattern = PatternFixup(testdef=None, count=0)
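As an example, the "signal" pattern matches the lines echoed by the generated run.sh and helpers such as lava-test-case (the field values below are illustrative):

import re

signal = r"<LAVA_SIGNAL_(\S+) ([^>]+)>"
line = "<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=example-test RESULT=pass>"
name, params = re.search(signal, line).groups()
print(name)    # TESTCASE
print(params)  # TEST_CASE_ID=example-test RESULT=pass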

The lava-test-runner program

As mentioned above, the lava-test-runner program is copied onto the device during the ssh boot stage, and once everything is ready the following command is executed (assuming job id 128 and running stage 0):
/lava-128/bin/lava-test-runner /lava-128/0
lava-test-runner is a shell script.
Its main flow is:

  1. Read the /lava-128/0/lava-test-runner.conf configuration file. This file was created during the deploy stage and mainly records the paths from which the lava test scripts run.
    It is created by the run method of class TestDefinitionAction(TestAction) in lava_dispatcher/actions/deploy/testdef.py:
class TestDefinitionAction(TestAction):
    def run(self, connection, max_end_time, args=None):
        ...
        self.logger.info("Creating lava-test-runner.conf files")
        for stage in range(self.stages):
            path = '%s/%s' % (overlay_base, stage)
            self.logger.debug("Using lava-test-runner path: %s for stage %d", path, stage)
            with open('%s/%s/lava-test-runner.conf' % (overlay_base, stage), 'a') as runner_conf:
                for handler in self.internal_pipeline.actions:
                    if isinstance(handler, RepoAction) and handler.stage == stage:
                        self.logger.debug("- %s", handler.parameters['test_name'])
                        runner_conf.write(handler.runner)

        return connection

Here handler.runner is the action's runner attribute, which holds a location (a path, with a trailing newline).
For example, class RepoAction(Action) defines:

class RepoAction(Action):
    def run(self, connection, max_end_time, args=None):
    	...
        # runner_path is the path to read and execute from to run the tests after boot
        runner_path = os.path.join(
            args['deployment_data']['lava_test_results_dir'] % self.job.job_id,
            str(self.stage),
            'tests',
            args['test_name']
        )
        self.set_namespace_data(action='uuid', label='runner_path', key=args['test_name'], value=runner_path)
        # the location written into the lava-test-runner.conf (needs a line ending)
        self.runner = "%s\n" % runner_path    
  2. For each path in lava-test-runner.conf, do the following:
  • copy the testdef.yaml and testdef_metadata files into the timestamped results directory;
  • if an install.sh file exists under that path, run it;
  • run the following command:
lava-test-shell --output_dir ${odir} ${SHELL}  -e "${line}/run.sh"
  • the lava-test-shell program above parses out the odir path and the argument after the shell parameter, and executes the run.sh program

run.sh is created during the deploy stage; it is generated in lava_dispatcher/actions/deploy/testdef.py, as follows:

class TestRunnerAction(TestOverlayAction):
    def run(self, connection, max_end_time, args=None):
        connection = super().run(connection, max_end_time, args)
        runner_path = self.get_namespace_data(action='uuid', label='overlay_path', key=self.parameters['test_name'])

        # now read the YAML to create a testdef dict to retrieve metadata
        yaml_file = os.path.join(runner_path, self.parameters['path'])
        try:
            with open(yaml_file, 'r') as test_file:
                testdef = yaml.safe_load(test_file)
        except IOError as exc:
            raise JobError("Unable to open test definition '%s': %s" % (self.parameters['path'],
                                                                        str(exc)))

        self.logger.debug("runner path: %s test_uuid %s", runner_path, self.test_uuid)
        filename = '%s/run.sh' % runner_path
        content = self.handle_parameters(testdef)

        # the 'lava' testdef name is reserved
        if self.parameters['name'] == 'lava':
            raise TestError('The "lava" test definition name is reserved.')

        testdef_levels = self.get_namespace_data(action=self.name, label=self.name, key='testdef_levels')
        with open(filename, 'a') as runsh:
            for line in content:
                runsh.write(line)
            runsh.write('set -e\n')
            runsh.write('set -x\n')
            # use the testdef_index value for the testrun name to handle repeats at source
            runsh.write('export TESTRUN_ID=%s\n' % testdef_levels[self.level])
            runsh.write('cd %s\n' % self.get_namespace_data(
                action='uuid', label='runner_path', key=self.parameters['test_name']))
            runsh.write('UUID=`cat uuid`\n')
            runsh.write('set +x\n')
            runsh.write('echo "<LAVA_SIGNAL_STARTRUN $TESTRUN_ID $UUID>"\n')
            runsh.write('set -x\n')
            steps = testdef.get('run', {}).get('steps', [])
            for cmd in steps:
                if '--cmd' in cmd or '--shell' in cmd:
                    cmd = re.sub(r'\$(\d+)\b', r'\\$\1', cmd)
                runsh.write('%s\n' % cmd)
            runsh.write('set +x\n')
            runsh.write('echo "<LAVA_SIGNAL_ENDRUN $TESTRUN_ID $UUID>"\n')

        self.results = {
            'uuid': self.test_uuid,
            "filename": filename,
            'name': self.parameters['name'],
            'path': self.parameters['path'],
            'from': self.parameters['from'],
        }
        if self.parameters['from'] != 'inline':
            self.results['repository'] = self.parameters['repository']
        return connection

Extracting the script created above, its content looks roughly like this:

# run.sh reads the YAML from the path given in the job definition and
# writes the variables into the file, as follows:
###default parameters from test definition###
# all the params/parameters key-value pairs defined in the definition
<def_param_name>=<def_param_value>
######
###test parameters from job submission###
# the params/parameters key-value pairs defined in the job submission
<def_param_name>=<def_param_value>
######
set -e
set -x
export TESTRUN_ID=<testdef_levels[self.level]>
cd <runner_path>
UUID=`cat uuid`
set +x
echo "<LAVA_SIGNAL_STARTRUN $TESTRUN_ID $UUID>"
set -x
# the commands from the test definition's run->steps
<cmd in steps>
set +x
echo "<LAVA_SIGNAL_ENDRUN $TESTRUN_ID $UUID>"

So in the end, the commands specified in the steps of the LAVA test job definition are what get executed.

Important classes

class Job(object):  # pylint: disable=too-many-instance-attributes
    """
    Populated by the parser, the Job contains all of the
    Actions and their pipelines.
    parameters provides the immutable data about this job:
        action_timeout
        job_name
        priority
        device_type (mapped to target by scheduler)
        yaml_line
        logging_level
        job_timeout
    Job also provides the primary access to the Device.
    The NewDevice class only loads the specific configuration of the
    device for this job - one job, one device.
    """

    def __init__(self, job_id, parameters, logger):  # pylint: disable=too-many-arguments
        self.job_id = job_id
        self.logger = logger
        self.device = None
        self.parameters = parameters
        self.__context__ = PipelineContext()
        self.pipeline = None
        self.connection = None
        self.triggers = []  # actions can add trigger strings to the run a diagnostic
        self.diagnostics = [
            DiagnoseNetwork,
        ]
        self.timeout = None
        self.protocols = []
        self.compatibility = 2
        # Was the job cleaned
        self.cleaned = False
        # Root directory for the job tempfiles
        self.tmp_dir = None
        # override in use
        self.base_overrides = {}
        self.started = False
        
    def mkdtemp(self, action_name, override=None):
        """
        Create a tmp directory in DISPATCHER_DOWNLOAD_DIR/{job_id}/ because
        this directory will be removed when the job finished, making cleanup
        easier.
        """
        ...

The default value of DISPATCHER_DOWNLOAD_DIR:
lava_common/constants.py:47:
DISPATCHER_DOWNLOAD_DIR = "/var/lib/lava/dispatcher/tmp"

class Pipeline(object):  # pylint: disable=too-many-instance-attributes
    """
    Pipelines ensure that actions are run in the correct sequence whilst
    allowing for retries and other requirements.
    When an action is added to a pipeline, the level of that action within
    the overall job is set along with the formatter and output filename
    of the per-action log handler.
    """
    def __init__(self, parent=None, job=None, parameters=None):
        self.actions = []
        self.parent = None
        self.parameters = {} if parameters is None else parameters
        self.job = job
        if parent is not None:
            # parent must be an Action
            if not isinstance(parent, Action):
                raise LAVABug("Internal pipelines need an Action as a parent")
            if not parent.level:
                raise LAVABug("Tried to create a pipeline using a parent action with no level set.")
            self.parent = parent

class Action(object):  # pylint: disable=too-many-instance-attributes,too-many-public-methods

    def __init__(self):
        """
        Actions get added to pipelines by calling the
        Pipeline.add_action function. Other Action
        data comes from the parameters. Actions with
        internal pipelines push parameters to actions
        within those pipelines. Parameters are to be
        treated as inmutable.
        """
        # The level of this action within the pipeline. Levels start at one and
        # each pipeline within an command uses a level within the level of the
        # parent pipeline.
        # First command in Outer pipeline: 1
        # First command in pipeline within outer pipeline: 1.1
        # Level is set during pipeline creation and must not be changed
        # subsequently except by RetryCommand.
        self.level = None
        self.pipeline = None
        self.internal_pipeline = None
        self.__parameters__ = {}
        self.__errors__ = []
        self.job = None
        self.logger = logging.getLogger('dispatcher')
        self.__results__ = OrderedDict()
        self.timeout = Timeout(self.name, exception=self.timeout_exception)
        self.max_retries = 1  # unless the strategy or the job parameters change this, do not retry
        self.diagnostics = []
        self.protocols = []  # list of protocol objects supported by this action, full list in job.protocols
        self.lxc_cmd_prefix = []
        self.connection_timeout = Timeout(self.name, exception=self.timeout_exception)
        self.character_delay = 0
        self.force_prompt = False

    # Section
    section = None
    # public actions (i.e. those who can be referenced from a job file) must
    # declare a 'class-type' name so they can be looked up.
    # summary and description are used to identify instances.
    name = None
    # Used in the pipeline to explain what the commands will attempt to do.
    description = None
    # A short summary of this instance of a class inheriting from Action.  May
    # be None.
    summary = None
    # Exception to raise when this action is timing out
    timeout_exception = JobError
        
    def mkdtemp(self, override=None):
        return self.job.mkdtemp(self.name, override=override)

class Connection(object):
    """
    A raw_connection is an arbitrary instance of a standard Python (or added LAVA) class
    designed to implement an interactive connection onto the device. The raw_connection
    needs to be able to send commands, use a timeout, handle errors, log the output,
    match on regular expressions for the output, report the pid of the spawned process
    and cause the spawned process to close/terminate.
    The current implementation uses a pexpect.spawn wrapper. For a standard Shell
    connection, that is the ShellCommand class.
    Each different wrapper of pexpect.spawn (and any other wrappers later designed)
    needs to be a separate class supported by another class inheriting from Connection.

    A TestJob can have multiple connections but only one device and all Connection objects
    must reference that one device.

    Connecting between devices is handled inside the YAML test definition, whether by
    multinode or by configured services inside the test image.
    """
    def __init__(self, job, raw_connection):
        self.device = job.device
        self.job = job
        # provide access to the context data of the running job
        self.data = self.job.context
        self.raw_connection = raw_connection
        self.results = {}
        self.match = None
        self.connected = True
        self.check_char = '#'
        self.tags = []
