Automation for the people: Deployment-automation patterns, Part 2

More patterns for one-click deployments

 

Deployment is yet another aspect of software creation that lends itself well to automation. Automated deployments reap the benefits of a reliable, repeatable process: improved accuracy, speed, and control. Part 1 of this two-part article describes eight deployment-automation patterns. In this installment, I expand the discussion to cover seven more equally beneficial approaches to deployment:

  • Binary Integrity, which ensures the same artifact is promoted throughout target environments
  • Disposable Container, which puts a target environment into a known state to reduce deployment errors
  • Remote Deployment, which ensures that deployments can interface with multiple machines from a centralized machine or cluster
  • Database Upgrade, which provides a centrally managed, scripted process for applying incremental changes to the database
  • Deployment Test, which uses pre- and post-deployment checks to verify that the application is working as expected after a deployment
  • Environment Rollback, which rolls back application and database changes if a deployment fails
  • Protected Files, which controls access to certain files used by a build system

Figure 1 illustrates the relationships among the deployment patterns covered in this article (the unshaded patterns were covered in Part 1 ):


Figure 1. Deployment-automation patterns
Deployment-automation patterns

These seven additional deployment-automation patterns build upon the first eight to help you create one-click deployments.

Compile once, deploy to many environments

Name : Binary Integrity

Pattern : For each tagged deployment, the same archive (WAR or EAR) is used in each target environment.

Antipatterns : Separate compilation for each target environment on the same tag.

After numerous debates with colleagues on this topic, I've firmly come down on the side of compile once, deploy to many target environments rather than compile and package in every target environment . For instance, the deployment artifact produced from a Java Web deployment is the Web archive (WAR) or enterprise archive (EAR) file. This archive should be checked into the version-control repository and tagged one time — such as in the DEV environment.

Figure 2 illustrates the compile once, deploy to many philosophy as the same brewery.war generated on the build machine is deployed to each of the target environments:


Figure 2. The same Web archive deployed to different target environments
The same web archive is deployed to different target environments

Ant provides a checksum task, which uses the MD5 (Message-Digest algorithm 5) hash, to ensure that the file compiled and packaged on the build machine is the same one deployed to each of the target environments.
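
A minimal sketch of this verification follows; the ${dist.dir} and ${deploy.dir} properties and the brewery names are hypothetical stand-ins, but the checksum task itself is core Ant:

<!-- Record the MD5 of the archive produced on the build machine -->
<checksum file="${dist.dir}/brewery.war" property="brewery.war.md5" />
<!-- In the target environment, verify the deployed archive matches -->
<checksum file="${deploy.dir}/brewery.war"
          property="${brewery.war.md5}"
          verifyProperty="checksum.matches" />
<fail message="Binary integrity violated: the deployed WAR differs from the built WAR.">
  <condition>
    <isfalse value="${checksum.matches}" />
  </condition>
</fail>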

Some will argue that although the artifact may be the same, the deployment configuration is different for each target environment. That is, when you use a Single-Command, Scripted Deployment, many automated processes can alter the application's output, regardless of whether it's the same archive. This is true; however, you could spend needless hours trying to troubleshoot a problem because the software was compiled and packaged using a different JDK version on the STAGE environment than was run in the QA environment. And opportunities for failure arise when the JARs from a centralized dependency-management repository (such as Ivy or Maven) that were used in DEV are different from those in the staging environment. These risks convince me that in order to ensure binary integrity, I must compile and package once so that I can deploy to many environments.


Make deployments cheap with disposable containers

Name : Disposable Container

Pattern : Automate the installation and configuration of Web and database containers by decoupling the installation from the configuration.

Antipatterns : Manually install and configure containers into each target environment.

In an earlier Automation for the people installment, "Continuous Integration anti-patterns, Part 2," you learned why cleaning up a "polluted" environment helps prevent false-positive and false-negative builds. The Disposable Container pattern reduces many of the problems that can occur when you rely on persistent containers. It is based on two principles: completely remove all container components and separate the container installation from its configuration. This seems like a radical concept to some, particularly systems engineers, because it no longer assumes that containers should be managed, and obfuscated, by a separate team, never to be touched by developers or others. However, considering the common and costly problems that occur during deployments, the container can be an area where all team members realize the most benefit.

One-click deployments

Many times I've met with teams that have told me "Yep, we've got automated deployments." When I ask a few simple questions — such as "Are you able to type a single command (such as ant ) to generate a working software application?" — the response usually goes something like: "Yes, once you install and configure the Web container..." or "Yes, once you set up the database." My definition of a truly automated deployment is that you should be able to start with a clean machine, install the Java platform and Ant (there are ways to eliminate this step as well), and then type a single command to get a working software application. If you can't, it's not "one-click," and costly human bottlenecks will occur in the deployment process. A sketch of such a single entry point appears below.
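
As a minimal sketch, the single command can simply be one top-level build target that chains together the steps described in this series; the dependent target names here are hypothetical stand-ins for the patterns covered in both parts of this article:

<!-- "ant deploy" takes a clean machine to a working application -->
<target name="deploy"
        description="One-click deployment to a target environment"
        depends="install-container, configure-container, upgrade-database, deploy-web-archive, deployment-test" />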

The Disposable Container pattern, shown in Figure 3, is grounded in a philosophy that everything should be in the system (using the Repository pattern covered in Part 1), not in someone's head:


Figure 3. Removing and installing container(s) during deployment
Removing and installing containers during deployment

The Ant script in Listing 1 shuts down any running Tomcat instance, removes container remnants from previous deployments, downloads the Tomcat ZIP from the Internet, and then extracts, configures, and starts Tomcat:


Listing 1. Scripted Deployment, in Ant, that removes, reinstalls, starts, and configures a container

<!-- Check to see if Tomcat is running prior to this -->
...
<!-- Shut down the container, then remove all remnants of it -->
<exec executable="sh" osfamily="unix" dir="${tomcat.home}/bin" spawn="true">
  <env key="NOPAUSE" value="true" />
  <arg line="shutdown.sh" />
</exec>
<delete dir="${tomcat.home}" />
<!-- Download, extract, and make the container scripts executable -->
<get src="${tomcat.binary.uri}/${tomcat.binary.file}"
     dest="${download.dir}/${tomcat.binary.file}" usetimestamp="true" />
<unzip dest="${target.dir}" src="${download.dir}/${tomcat.binary.file}" />
<exec osfamily="unix" executable="chmod" spawn="true">
  <arg value="+x" />
  <arg file="${tomcat.home}/bin/startup.sh" />
  <arg file="${tomcat.home}/bin/shutdown.sh" />
</exec>
<!-- Apply environment-specific configuration to server.xml -->
<xmltask source="${appserver.server-xml.file}"
         dest="${appserver.server-xml.file}">
  <attr path="/Server/Service[@name='${s.name}']/Connector[@port='${c.port}']"
        attr="proxyPort"
        value="${appserver.external.port}" />
  <attr path="/Server/Service[@name='${s.name}']/Connector[@port='${c.port}']"
        attr="proxyName"
        value="${appserver.external.host}" />
</xmltask>
<!-- Perform other container configuration -->
...
<echo message="Starting tomcat instance at ${tomcat.home} with startup.sh" />
<exec executable="sh" osfamily="unix" dir="${tomcat.home}/bin" spawn="true">
  <env key="NOPAUSE" value="true" />
  <arg line="startup.sh" />
</exec>

 

By putting an environment into a known state and deploying containers in a controlled manner, you reduce many common deployment errors that are the cause of most deployment pain.


Running commands in multiple external environments

Name : Remote Deployment

Pattern : Use a centralized machine or cluster to deploy software to multiple target environments.

Antipatterns : Manually applying deployments locally in each target environment.

Once database and Web containers have been installed, getting a deployment to run on a developer's workstation is usually rather trivial. However, the difference between development and production is vast. If an organization has multiple projects and different target environments (for instance, testing or staging environments), there's often a need to manage deployments centrally from a single environment: a machine or a cluster. Quite often, teams use a build server to manage deployments between each of these target environments. In Part 1 , I covered the Headless Execution pattern, in which you use public and private keys so that you don't need to log in manually to each machine. Remote Deployment, illustrated in Figure 4, relies on the Headless Execution, Single Command, and Scripted Deployment patterns to make it easy to deploy to remote machines:


Figure 4. Build-management server to multiple environments
Build management server to multiple environments

To deploy software remotely from a centralized build server, you need mechanisms for securely copying files to, and running commands on, the target machines. The two mechanisms I'll illustrate use Secure Copy (SCP) and Secure Shell (SSH). From a Scripted Deployment, as shown in Listing 2, a Web archive that was generated on a centralized build machine is remotely copied to a target environment:


Listing 2. Securely copying a war file from one machine to another

<target name="copy-tomcat-dist">
<scp file="${basedir}/target/brewery.war"
trust="true"
keyfile="${basedir}/config/id_dsa"
username="bobama"
passphrase=""
todir="pduvall:G0theD!stance@myhostname:/usr/local/jakarta-tomcat-5.5.20/webapps" />
</target>

 

After the WAR file is securely copied to the remote target environment, I can use something like the SSHExec task (also built on Java Secure Channel) to run any SSH commands remotely from the central build machine, as sketched below. An alternative approach is to ssh into the remote environment and run the commands locally there; this lessens the back-and-forth remote traffic and can reduce deployment times.
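
A minimal sketch of such a remote command, reusing the hypothetical host, user, and key locations from Listing 2:

<!-- Restart the remote container over SSH after the new WAR is copied -->
<sshexec host="myhostname"
         username="bobama"
         keyfile="${basedir}/config/id_dsa"
         passphrase=""
         trust="true"
         command="/usr/local/jakarta-tomcat-5.5.20/bin/shutdown.sh; /usr/local/jakarta-tomcat-5.5.20/bin/startup.sh" />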


Putting the database and data into a known state

Name : Database Upgrade

Pattern : Use automated scripts to apply incremental database changes in each target environment.

Antipatterns : Manually applying database and data changes in each target environment.

In Figure 5, you see an example of using automated scripts to update the database as part of a Scripted Deployment:


Figure 5. Automatically applying incremental database updates
Automatically applying incremental database updates.

In an earlier Automation for the people installment, "Hands-free database migration ," I covered the need to apply incremental database changes in an automated fashion. Like any other part of a Scripted Deployment, the database upgrade scripts are checked into the repository.

LiquiBase (see Resources ) is a tool for applying incremental changes to a database so that the same change is applied in each target environment as part of the Scripted Deployment. In Listing 3, an SQL script is called as part of the LiquiBase changelog. This changelog (defined in XML) is then called by the Scripted Deployment (which is implemented in a build-scripting tool such as Ant).


Listing 3. Running a custom SQL file from a LiquiBase change set

<changeSet id="1" author="jbiden">
<sqlFile path="insert-distributor-data.sql"/>
</changeSet>

 

There's quite a bit more to learning and applying automated database upgrades, but the idea is to perform the upgrades as part of the Scripted Deployment so that all database changes are in the system , not in a written procedure or in someone's head. A sketch of wiring LiquiBase into an Ant-based Scripted Deployment follows.
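
As a minimal sketch, an Ant-based Scripted Deployment might invoke the changelog as shown here. The taskdef class name and task attributes follow LiquiBase's Ant integration but have changed across LiquiBase versions, and the connection properties are hypothetical:

<taskdef name="updateDatabase"
         classname="liquibase.integration.ant.DatabaseUpdateTask"
         classpathref="liquibase.classpath" />

<!-- Apply all pending change sets to the target environment's database -->
<updateDatabase changeLogFile="changelog.xml"
                driver="${db.driver}"
                url="${db.url}"
                username="${db.username}"
                password="${db.password}"
                classpathref="liquibase.classpath" />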


Smoke-testing deployments

Name : Deployment Test

Pattern : Script self-testing capabilities into Scripted Deployments.

Antipatterns : Deployments are verified by running through manual functional tests that don't focus on deployment-specific aspects.

Figure 6 illustrates an example of running deployment tests before and after a deployment:


Figure 6. Running functional deployment tests against application
Running functional deployment tests against application

In Listing 4, I'm using Ant to perform predeployment tests to verify I'm using the correct tool versions. In the Scripted Deployment, the script can check for existing ports in use (which would cause the Web container deployment to fail), verify a connection to the database, and check if containers have been started, along with a host of other internal deployment tests.


Listing 4. Running predeployment checks to ensure deployment efficacy

<condition property="ant.version.success">
<antversion atleast="${ant.check.version}" />
</condition>
<antunit:assertPropertyEquals name="ant.version.success" value="true" />
<echo message="Ant version is correct." />
<echo message="Validating Java version..."/>
<condition property="java.major.version.correct">
<equals arg1="${ant.java.version}" arg2="${java.check.version.major}" />
</condition>
<antunit:assertTrue message="Your Java SDK version must be 1.5+. /
You must install correct version.">
<isset property="java.major.version.correct"/>
</antunit:assertTrue>

 

A more extensive deployment test can ensure that the application's functionality is correct. By writing deployment-specific automated functional tests using a tool such as Selenium for Web applications or Abbot for client applications, you can verify that the deployment changes have been properly applied. You can think of these tests as smoke tests: you need to test only the functionality that is affected by the deployment. For example, Table 1 shows ways you can use Selenium and other tools for a Web application:


Table 1. Deployment tests

  • Database: Write an automated functional test that inserts data into a database. Verify the data was entered in the database.
  • Simple Mail Transfer Protocol (SMTP): Write an automated functional test to send an e-mail message from the application.
  • Web service: Use a tool like soapUI to submit a Web service request and verify the output.
  • Web container(s): Verify that all container services are operating correctly.
  • Lightweight Directory Access Protocol (LDAP): Using the application, authenticate via LDAP.
  • Logging: Write a test that writes a log entry using the application's logging mechanism.
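
As a minimal sketch of one such check in Ant, the script can fail fast when the deployed application isn't reachable; the URL properties are hypothetical and would come from the environment's externalized configuration:

<!-- Post-deployment smoke test: fail if the application URL doesn't respond -->
<condition property="app.reachable">
  <http url="http://${appserver.external.host}:${appserver.external.port}/brewery/" />
</condition>
<fail message="Deployment test failed: the application URL is not reachable."
      unless="app.reachable" />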

 

Automated tests aren't just for testing user functionality. By creating a suite that focuses on deployment tests, you can verify the efficacy of the deployment, reducing downstream errors and development costs.


Rolling back all deployment changes

Name : Environment Rollback

Pattern : Provide an automated Single Command rollback of changes after an unsuccessful deployment.

Antipatterns : Manually rolling back application and database changes.

Figure 7 illustrates rolling back database changes — using Database Upgrade — along with the automation processes for rolling back a Web deployment:


Figure 7. Rolling back deployment changes
Rolling back deployment changes

Whether or not you're automating deployments, it's nice to have a way to roll back changes when a deployment goes wrong. In some cases, erroneous changes can result in a system outage costing an organization millions of dollars. To perform an Environment Rollback, you need to get the target environment back into the state it was in before the deployment. To do this, you essentially need a rollback script for every change. A Web deployment typically involves more than the archive itself, so there's more to roll back: one approach is to copy the archive (for example, a WAR file) prior to the deployment, provide a rollback database script for each change, and restore the container configuration that was in place before the deployment. A sketch of the Web-archive portion of such a rollback appears below.
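
As a minimal sketch using only core Ant tasks, assuming the deployment script saved the previous archive and container configuration to a hypothetical ${backup.dir} before deploying:

<target name="rollback-web">
  <!-- Restore the archive that was backed up before the deployment -->
  <copy file="${backup.dir}/brewery.war" todir="${tomcat.home}/webapps" overwrite="true" />
  <!-- Restore the container configuration saved before the deployment -->
  <copy file="${backup.dir}/server.xml" todir="${tomcat.home}/conf" overwrite="true" />
  <!-- Restart the container; this hypothetical target wraps shutdown.sh/startup.sh -->
  <antcall target="restart-container" />
</target>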

Listing 5 demonstrates an example of providing a rollback statement for each roll-forward statement using LiquiBase. I'm adding a new table called brewery while providing a corresponding dropTable rollback statement.


Listing 5. Providing a rollback process when applying incremental data upgrades

<changeSet id="rollback-database-changes" author="bobama">
<createTable tableName="brewery">
<column name="id" type="int"/>
</createTable>
<rollback>
<dropTable tableName="brewery"/>
</rollback>
</changeSet>

 

This simple example is meant to be illustrative, not to trivialize rollback. Reverting to a previous deployment is often a complex and time-consuming process to perform (and to automate). The time you invest in writing rollback scripts should be proportionate to the cost of a deployment failure. A sketch of executing the database rollback follows.
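
As a minimal sketch, LiquiBase's Ant integration can execute the rollback from the Scripted Deployment: tag the database before applying changes, then roll back to that tag if the deployment fails. The task and attribute names follow LiquiBase's Ant documentation and vary by version; the taskdefs and connection properties are assumed to be declared as in the earlier updateDatabase sketch:

<!-- Before deploying, mark the current database state -->
<tagDatabase tag="pre-deployment"
             driver="${db.driver}" url="${db.url}"
             username="${db.username}" password="${db.password}"
             classpathref="liquibase.classpath" />

<!-- If the deployment fails, return the database to the tagged state -->
<rollbackDatabase changeLogFile="changelog.xml"
                  rollbackTag="pre-deployment"
                  driver="${db.driver}" url="${db.url}"
                  username="${db.username}" password="${db.password}"
                  classpathref="liquibase.classpath" />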


Protect information from prying eyes

Name : Protected Files

Pattern : Using the repository, files are shared by authorized team members only.

Antipattern : Files are managed on team members' machines or stored on shared drives accessible by unauthorized team members.

Figure 8 shows a protected version-control repository used to host files that only authorized people or systems should access:


Figure 8. Using a protected version-control repository for sensitive files
Using a protected version control repository for sensitive files

In some cases, not all team members should have access to environment-specific data. However, keeping this information separate from the deployment scripts may prevent the scripts from executing. When discussing the Headless Execution pattern, I described using SSH keys with the Java Secure Channel tool to copy files and run remote commands securely without a human needing to enter commands. The properties you've used Externalized Configuration for are likely to contain data that not all team members should see. A technique I've used to ensure Headless Execution while protecting the data in the .properties files from prying eyes is to check these files into a protected repository.

In Listing 6, I'm configuring an Apache-hosted Subversion repository to deny access to everyone for a certain directory and then explicitly granting access to certain users:


Listing 6. Protecting a Subversion repository directory using Apache

<DirectoryMatch "^/.*/(\.svn)/">
  # Assumes Basic authentication (AuthType, AuthUserFile) is configured
  # for this location
  Order deny,allow
  Deny from all
  Satisfy Any
  Require user bobama jbiden hclinton
</DirectoryMatch>

 

By protecting access to the Subversion repository, a Scripted Deployment can access the properties as one of the allowed users without being prompted for a password, preserving the Headless Execution that the SSH keys provide, as in the sketch below.
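
As a minimal sketch (the repository URL, directory, and property-file names are hypothetical), the Scripted Deployment might fetch and load the protected properties like this:

<!-- Fetch environment-specific properties from the protected repository path.
     Assumes the build user's credentials are cached (or svn+ssh with the
     SSH keys is used), so no interactive prompt occurs. -->
<exec executable="svn" failonerror="true">
  <arg value="export" />
  <arg value="--force" />
  <arg value="https://svnserver/repos/brewery/protected/qa.properties" />
  <arg value="${config.dir}/qa.properties" />
</exec>
<property file="${config.dir}/qa.properties" />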


The one-click deployment

I've cataloged several more deployment-automation patterns than the 15 I've documented in this two-part article, but these 15 probably address 80 percent of the deployment situations I've encountered. Each of these patterns is intended to help get you to a literal one-click/single-command deployment for each and every target environment. I wish you many painless deployments!

That's all I wrote

This is my last article in the Automation for the people series. It's been a tremendous adventure sharing my experiences with you for more than two years. My goal with this series has always been to show how and why to automate myriad software development processes so that you're spending more time on interesting problems rather than futzing with repetitive, error-prone activities. In the series, I've demonstrated how to automate code reviews in order to refactor appropriately, upgrade an application database incrementally, apply Continuous Integration practices and tools, run automated tests with every change, generate GUI installers, create one-click deployments, automate the generation of developer documentation, perform dependency management, utilize version-control repositories, and use various build scripts and tools effectively. I hope you've enjoyed reading the series as much as I've enjoyed writing it.

 

Resources

Get products and technologies

  • Ant : Download Ant and start building software in a predictable and repeatable manner.
  • JSch : Download Java Secure Channel for secure communication.
  • Selenium : Download Selenium to perform deployment-specific functional testing.
  • LiquiBase : Download LiquiBase to begin performing automated database migrations.
  • Abbot : Download Abbot to begin performing deployment-centric functional testing.